Copying between repos

Hi all,

I want to test my understanding of what would happen when I use restic copy.

Suppose I set up an initial backup job ‘local’ and thereafter copy the entire repo contents to some cheap-but-possibly-unreliable cloud storage ‘A’.

Thereafter, I run the backups to ‘local’ and ‘A’ separately on the same schedule (so there is a 1-1 correspondence between the snapshots/contents of both repos).

Would the data files develop in the same way?

I think yes, as all the chunker params would be the same. The only difference is the snapshot IDs, which are pseudo-randomly generated.

Now, sometime later, some pack files are damaged on storage ‘A’. To fix this, I can delete the problematic snapshots from ‘A’ and then use the ‘copy’ command to copy from ‘local’ to ‘A’ without any extra wasted space.
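Roughly the sequence I have in mind (repo paths and IDs below are just placeholders):

restic -r <repo-A> forget <damaged-snapshot-ids>
restic -r <repo-A> copy --from-repo <repo-local> <snapshot-ids>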

Does it really work like this, or are there other hidden gotchas I should know about?

I would not overcomplicate the setup: if your source storage supports snapshots, I would just run the backup job twice with different targets. That would also reduce the risk that the local backup gets corrupted and then copied to the target repo, which might destroy all your backups at once (not fully sure; restic might check consistency during the copy step?).
If you really need the same restic snapshots, the copy command can be used. If your cloud storage supports rsync, you could also sync the filesystem, which would also be helpful if you want to forget and prune snapshots locally and expect the same actions on the target.
But I personally take the first approach: two dedicated jobs that run independently. As I use a btrfs snapshot on my NAS as the source, it does not really matter when I run the backups, as the backup input does not change even if the source data changes in the meantime.
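To illustrate, the two-jobs approach is nothing more than running the same backup against two repositories on the same schedule; the paths and the rclone remote name here are just examples:

restic -r /srv/restic-local backup /data
restic -r rclone:cloud:restic backup /data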
My “unreliable” storage is a union mount of onedrive remotes that I target over rclone. Even though I have not really faced issues yet, the setup is less straightforward, and I consider it just a last safety net in case my other backups do not work any more.
Ideally, you would even use a second backup tool, so that a bug in one tool does not affect all your backups (not saying that this happened to me; restic is great and reliable, but nobody is perfect).
If you use Synology, Hyperbackup would be the obvious choice, but plain rsync would also work, and zfs snapshot replication is also an option.

As the plan here is to run individual backups to the repositories, it’s not a good idea to simply copy the repository. Both repositories would have the same ID, which effectively makes restic’s cache useless. Instead, create the second repository by running restic init --copy-chunker-params --from-repo ....
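For example (the repo paths are placeholders; restic will ask for the passwords of both repositories):

restic -r <repo-A> init --copy-chunker-params --from-repo <repo-local>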

Separate backup runs mean similar, but still slightly different, snapshot contents. Each repository will have its own pack files (the encryption includes random data, so two pack files containing identical blobs would still be completely different after encryption).

You’re missing a call to repair index after deleting the damaged pack files. If the missing blobs exist in both repositories, then yes. However, directory metadata is quite likely to differ when the snapshots were created individually.
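So, once the damaged pack files have been removed from ‘A’, the flow would look roughly like this (repo paths are placeholders):

restic -r <repo-A> repair index
restic -r <repo-A> copy --from-repo <repo-local>

The copy step then re-uploads blobs that are no longer listed in the repaired index, provided they still exist in the local repository.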

Does that necessarily cause problems? Suppose the two repos were created by copying, have the same ID, and are sharing the cache. If there is a cache miss, restic loads from the repo (which is what would have happened anyway). If it is a cache hit, well, the data was common at creation time, so that isn’t a problem either?

Actually, this is what caused me to go down this route: my cloud storage is kind of flaky and I have been encountering problems every couple of weeks or so (lots of ‘expected size xxx - got yyy’). Whenever it happens, I run rebuild_index repeatedly, but it doesn’t seem to resolve the underlying issue, and running backup again doesn’t fix the missing data either.

So these days I have a local repo on the same hard disk with the same set of snapshots, and when there is a problem I can copy it over to the cloud storage to fix things up.
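By ‘copy over’ I mean a plain file-level sync of the repository directory to the cloud, i.e. something like this if done with rclone (remote name made up):

rclone sync /srv/restic-local cloud:restic-repo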

It was only later that I learnt about restic copy with the chunker params piece, and that is a kind of start-from-scratch resolution that would take another month or so to copy the data over to the cloud repo… That’s why I am curious whether there would be corruption/serious issues from the simple ‘copy files over’ approach I adopted…

Restic automatically cleans up the cache by checking which pack files etc. are still used by the current repository. The further the two repositories diverge, the more cache thrashing will happen. So things won’t break from a functional perspective, but over time there will be a significant performance hit and lots of unnecessary downloads.

‘expected size xxx - got yyy’ means that the pack files are broken, not the index. repair index only extracts the index section from each pack file but does not verify the pack file content. repair packs <pack file IDs> should work though. However, with such a high amount of data corruption by the cloud storage, I wouldn’t be surprised if there’s additional, less obvious damage. To be sure that the repository is healthy you would have to run restic check --read-data, which downloads the whole repository.
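Roughly (repo path and pack IDs are placeholders):

restic -r <repo-A> repair packs <pack-id-1> <pack-id-2> ...
restic -r <repo-A> check --read-data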

I’m not exactly sure what you mean here. Are those repositories exact copies of each other? That is, do you back up to the local repository and then use rclone or similar to sync the changes to the cloud? Or do you use a different setup here?

Both variants are affected by the cloud storage problems (which storage are you using?). The only way you can be sure that the repository is intact is by running check --read-data from time to time.