Upgrading the repository format version for a 1 TB repo

Folks,

I have a version 1 repository with nearly 1 TB of data that I’ve been using for 6-7 years. I had been using restic 0.13 and recently upgraded to 0.16.

My questions are:

  • Is it a good idea to run restic migrate on the repo for future compatibility and to save space? More specifically, I’m thinking of restic migrate upgrade_repo_v2 --compression max
  • Is this risky? I’m hoping that there is now plenty of experience out there on this.
  • If it is a good idea, any idea about the approx. run time for this?

Thanks!


In my experience it works very well: I migrated multiple old repos without any issues. BUT: nothing comes with a 100% guarantee. If you cannot afford to lose your data, do not migrate without first backing up the old repo.

Run time depends on your repo location: is it a local NVMe disk on a Thunderbolt interface, or an rclone-powered OneDrive backend over a slow internet connection? You can estimate it yourself: all data has to be read and written again, plus some computation time (which shouldn’t be substantial unless you have a relatively slow system, e.g. a Raspberry Pi). For example, at an effective 100 MB/s, reading and rewriting 1 TB works out to a few hours.

You can also migrate to a v2 repo without recompressing the existing data blobs. You still get much smaller metadata, and data added by future backups will be compressed.
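
For reference, the whole sequence is only a few commands (a sketch; the repository path is a placeholder, and running restic migrate with no arguments first shows which migrations apply to your repo):

restic -r /path/to/repo migrate                     # list applicable migrations
restic -r /path/to/repo migrate upgrade_repo_v2     # convert the repo format (quick, rewrites no data)
restic -r /path/to/repo prune --repack-uncompressed --compression max   # optional: recompress existing data

The migrate step itself is fast; the optional prune pass is where all the read/write time goes.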

Good info, thanks. Yes, I am well backed up; the risk for me would be some unknown failure requiring a restore from backups. But it sounds like the migration is fairly straightforward. TBH, I’ve found that restic is thoughtfully designed, so I’m not surprised that it ‘just works’.

Yes, I totally get that the rewrite will have both CPU and back-end communication overhead; I was mainly wondering about the computational load. My repo is on a local RAID array that is mirrored to B2. That’s probably overkill, but it has advantages.

Anyway, I will give the migration a shot and time a small volume of data using time restic prune --repack-uncompressed --compression max --max-repack-size=100G to get a sense of the runtime and space savings.

It would be interesting to hear about your findings and experience with migrating the whole 1 TB repo. It could be handy for others attempting a similar move.

Thanks @alexweiss, that’s a good point. It’s a nice feature that newly added data will be compressed and that I can incrementally compress the existing data over time with --max-repack-size. Very nice.
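
One handy thing: prune can also report how much uncompressed data is still waiting without rewriting anything (a sketch; --dry-run makes prune print its plan, including what it would repack, without changing the repo):

restic prune --dry-run --repack-uncompressed

That should make it easy to see how far along the incremental recompression is.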

And, @kapitainsky yes: I’ll report back.

A quick status report. Everything is working well so far!

The restic migrate itself ran in 4 minutes without issue. I then did one pass to compress and rewrite a chunk of packs using

time restic prune --repack-uncompressed --compression max --max-repack-size=100G

on a relatively low-end Intel Core i5-7400T 2.4 GHz server writing to a RAID unit on a USB 3 bus.

I was able to compress and rewrite 100 GB in 104 minutes of wall-clock time, and the data volume was reduced by 35%. All very smooth. So I’ll hammer away at this in chunks by appending the prune command to my restic forget script until the job is done, as sketched below.
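
For the curious, the nightly job now looks roughly like this (a sketch; the forget policy flags are placeholders, not my actual retention settings):

restic forget --keep-daily 7 --keep-weekly 5 --keep-monthly 12
restic prune --repack-uncompressed --compression max --max-repack-size=100G

At the observed rate of roughly 1 GB per minute, the remaining ~800 GB should take around 14 hours of repacking in total, spread across the nightly runs.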


Thanks for the info!

Most likely I would’ve opted to sync the repository to a local copy and do the migration there :smiley:

However, I’d have no clue how to move the modified repository back over in the end …

Another update: I began with 915 GB, and after conversion and compression the resulting repo is 330 GB. So it’s nearly 3x smaller! That is very impressive; I was expecting a factor of 2 at best.


@thiscantbeserious You are saying that I could have done the migration and repacking on a copy of the repo? I didn’t consider that. But sure, that would work. Then simply move the whole thing to the desired end location afterward.
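
Something like the following, I suppose (a sketch; the rclone remote and local path are made up, and backups would have to be paused while the repo is in transit):

rclone sync b2:my-bucket/repo /srv/restic-local     # pull a full local copy
restic -r /srv/restic-local migrate upgrade_repo_v2
restic -r /srv/restic-local prune --repack-uncompressed --compression max
rclone sync /srv/restic-local b2:my-bucket/repo     # push the migrated repo back

The final rclone sync would also remove the old uncompressed packs on the remote, since sync makes the destination match the source.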