Errors in repositories happen for various reasons. One way to make a repository repairable would be for the user to back up to two repositories; restic could then provide an option to specify a second repository and look there for correct data blobs.
Does this make any sense?
If so, is there any hope that such a feature (or similar ones for recovering from damaged repositories) might be added?
Hmm, this is probably creeping into “feature bloat” territory. You could probably do this manually easily enough, anyway.
You have two options already for a makeshift version of this…
One, you could rclone the repo to another target after running the backup script. This does mean you might accidentally copy the corruption, or propagate a deleted blob to the mirror, though.
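A minimal sketch of that, assuming a local repo at /srv/restic-repo and an rclone remote named offsite (both names are just illustrative):

```
# mirror the repository to a second target after the backup run; note that
# `rclone sync` makes the target match the source exactly, so corruption or
# deletions in the source repo get mirrored too
rclone sync /srv/restic-repo offsite:restic-repo
```

So…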
Two, if you create your second repository using init --copy-chunker-params, you can use a script to back up to the first repo, then immediately back up to the second repo. Should the first repo ever complain about a missing pack, you could either copy the pack manually from the file structure of the second repo to the first, or you could probably use restic copy to copy over a snapshot that contains the missing blob / file.
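A rough sketch of that workflow, with illustrative paths and assuming a recent restic version (older releases spelled --from-repo as --repo2; password handling omitted for brevity):

```
# create the second repo with the same chunker parameters as the first,
# so both repos produce identical blob IDs for the same data
restic -r /srv/repo2 init --from-repo /srv/repo1 --copy-chunker-params

# back up to both repos, one right after the other
restic -r /srv/repo1 backup /home/user
restic -r /srv/repo2 backup /home/user

# if repo1 ever complains about a missing pack, copy a snapshot that
# contains the affected blobs back from repo2
restic -r /srv/repo1 copy --from-repo /srv/repo2 <snapshot-id>
```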
I believe they’re looking into adding parity information, sometime down the road. That would probably be a more elegant solution to your original question anyway. That way you could just have 10% parity, or even 100% parity (which would double your repo size). I’m hoping it allows for a --parity-path flag where you could point to an external path containing the parity, but I have no idea how they intend to implement it.
I’ve said it before: everyday data corruption is one of the things long-term restic use made me aware of. I guess we have five options: 1) use cloud storage, 2) hardware parity, 3) file system parity (ZFS, probably? Not sure.), 4) double backups, 5) rsync the repos and run regular restic check --read-data (or --read-data-subset) passes. I’m mostly down with 5) right now. That also gives me a better feeling.
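For what it’s worth, a minimal sketch of option 5, with illustrative paths; --read-data-subset verifies a rotating fraction of the pack files, so a full read is spread across several runs:

```
# verify roughly 1/7 of the repository's pack contents, then mirror the repo
# only if the check passed; note a subset check can still miss corruption in
# the packs that were not read this run
restic -r /srv/restic-repo check --read-data-subset=1/7 && \
  rsync -a --delete /srv/restic-repo/ /mnt/mirror/restic-repo/
```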
Yeah, there WILL always be errors in a repository at some point. Consumer HDDs are typically specced at an unrecoverable read error rate of about 1 in 10^14 bits, i.e. roughly one bad read per ~12 TB read, so at some point you are going to see errors even if everything else is perfect.
You can use cloud storage, but errors happen on the client side too.
The copy command will already transfer missing blobs between repositories (it may just be necessary to force it to copy the affected snapshots again), but that only works if the chunker parameters are identical. Besides that, there is always the option to restore data from one repository and back it up to the other one; that will also add the missing blobs again. So the repair steps would be to remove the damaged blobs first and then find a replacement somehow.
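A hedged sketch of both approaches, with illustrative paths and snapshot IDs (password handling omitted):

```
# 1) re-copy a snapshot so its missing blobs are transferred again; if the
#    snapshot is already listed in the damaged repo, `restic forget <id>`
#    there first so copy actually re-transfers it
restic -r /srv/damaged-repo copy --from-repo /srv/healthy-repo <snapshot-id>

# 2) or restore from the healthy repo and back the data up into the
#    damaged one, which re-adds any missing blobs
restic -r /srv/healthy-repo restore latest --target /tmp/restore
restic -r /srv/damaged-repo backup /tmp/restore
```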