Odd repo size with max compression?

Interesting. I did see somewhere that both modes compressed every blob, and was going to ask later how auto truly worked, after I’d pored through the GitHub threads and made sure I wasn’t asking a redundant question haha. Very strange result, then…

So am I correct in saying that auto is something like “zstd -3” and max something like “zstd -19”, and that in either case, if Restic sees the compressed blob is bigger than the original blob, it discards the compressed blob and just stores the uncompressed one?
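
In other words, something like this per blob (just a shell sketch of what I mean, not Restic’s actual code, and the -19 level is only my guess at what max maps to):

```
# Sketch of what I'm imagining, not restic's actual logic:
# compress the blob, then keep whichever form is smaller.
blob=/tmp/test_blob
head -c 1000000 /dev/urandom > "$blob"   # stand-in blob (random data, so it won't compress)

zstd -19 -c "$blob" > "$blob.zst"        # "max" = something like level 19 (my guess)

if [ "$(wc -c < "$blob.zst")" -ge "$(wc -c < "$blob")" ]; then
    echo "compressed copy is not smaller -> store the blob uncompressed"
else
    echo "store the compressed blob"
fi
```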

I do not, but I’m happy to try it again. I, too, was pretty surprised, and actually recreated the repo from scratch, making sure --compression max was used for the init command, and then, just to be 100% sure the second time around, used --compression max for the backup command as well (EDIT: or so I thought - I’m doubting this after subsequent testing with Michael below). I’m assuming that whichever compression switch you use at init time becomes the “default” and can be “overridden” for each backup. That’s at least how it appeared to work when I tested: no switch on one blank repo, then auto and max on two other empty repos, comparing the final repo sizes (on a smaller file set, just to get a “feel” for how it worked).
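
By “comparing the final repo sizes” I just mean the on-disk size of each test repo, i.e. something like this (the repo paths here are placeholders, not my actual ones):

```
# Rough on-disk size comparison of the three test repos
du -sh /Volumes/Backup/repo_default /Volumes/Backup/repo_auto /Volumes/Backup/repo_max
```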

I’m currently using auto on about 17TB of data, just to see what happens :joy:, but also to see if I can break anything or flush out any bugs before compression ships via self-update or brew upgrade.

So I’ll also start up:

```
restic_v2 init --repository-version latest --compression max -r /Volumes/Backup/repo_v2
```

Then do a:

```
restic_v2 backup --compression max -r /Volumes/Backup/repo_v2 ~ -v 2>~/Desktop/restic_v2_error.log 1>~/Desktop/restic_v2_output.log
```

And report back. :slight_smile:
