Only to see this edge-case behavior :) Indeed, it is unlikely in real life.
@yogasu's data can, for example, generate massive metadata changes while yours does not. There is no simple formula linking the number of files to cache size.
Here is an example of backup size driven by metadata changes (which of course also results in a massive cache size):
The major factor is how many files change, as every change generates new metadata. E.g. I back up 1 million files and they never change: 10 snapshots later I have 10M file entries in my repo, but only 1M unique ones. In a very different situation, with every file changing before each new snapshot (even if it is only a metadata change), I will also end up with 10M file entries in the repo, but the metadata part will be many times larger.
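To make the arithmetic concrete, here is a minimal Python sketch, purely illustrative and not any backup tool's real metadata format: a set of content hashes stands in for a deduplicating metadata store, `snapshot` and the scenario names are made up, and the file counts are scaled down from the 1M example for speed.

```python
import hashlib

def snapshot(files: dict, store: set) -> None:
    """Record one snapshot's metadata in a content-addressed store.

    Each (name, mtime) entry is keyed by its hash, so identical
    metadata across snapshots is stored only once.
    """
    for name, mtime in files.items():
        entry = f"{name}:{mtime}".encode()
        store.add(hashlib.sha256(entry).hexdigest())

N_FILES, N_SNAPSHOTS = 1_000, 10  # scaled down from the 1M-file example

# Scenario A: files never change between snapshots.
store_a = set()
unchanging = {f"file{i}": 0 for i in range(N_FILES)}
for _ in range(N_SNAPSHOTS):
    snapshot(unchanging, store_a)

# Scenario B: every file's mtime changes before each snapshot.
store_b = set()
for snap in range(N_SNAPSHOTS):
    changed = {f"file{i}": snap for i in range(N_FILES)}
    snapshot(changed, store_b)

print(len(store_a))  # 1000: all 10 snapshots dedupe to the same entries
print(len(store_b))  # 10000: 10x the unique metadata for the same files
```

Same number of files and snapshots in both runs, but ten times the unique metadata when every file changes - which is exactly why file count alone tells you nothing about cache size.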