I am seeing integrity errors in pack files and blobs. The usual procedure to fix them is to remove the damaged packs (those reporting "pack ID does not match", "blob ID does not match", or "ciphertext verification failed"), rebuild the index, and back up again.
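In command form, that manual procedure looks roughly like this (a sketch for a local repository; the paths are placeholders, and `rebuild-index` is spelled `repair index` in newer restic versions):

```
restic check --read-data            # identify the damaged pack files
# manually delete the affected pack files under the repository's data/ directory
restic rebuild-index                # drop references to the removed packs
restic backup /path/to/source       # re-uploads blobs that are now missing
restic check --read-data            # verify the repository is clean again
```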
How can I automatically remove damaged pack files, instead of deleting them one by one?
You can filter the output of `check --read-data` for lines matching the format `pack %v contains %v errors`, and then look for pack files whose names start with the reported pack ID prefix.
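As a sketch of that filtering step (the sample output below is made up to stand in for a real `restic check --read-data` run; adjust the `awk` pattern to the exact message your restic version prints):

```shell
#!/bin/sh
# Extract damaged pack IDs from `restic check --read-data` output.
# Assumes error lines of the form "pack <id> contains <n> errors",
# the format quoted above.

# Fabricated sample output standing in for a real check run:
check_output='load snapshots
check all packs
pack 73d04e61 contains 2 errors
pack a9f1c002 contains 1 errors
read all data'

damaged=$(printf '%s\n' "$check_output" |
  awk '/^pack .* contains .* errors/ { print $2 }')

printf '%s\n' "$damaged"
```

Each printed ID is a prefix of a pack file name under the repository's `data/` directory, so for a local repository the matching file can be located with something like `find "$REPO/data" -name "${id}*"`.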
Restic would check for integrity errors and, where the corresponding data still exists in a provided source, remove the damaged packs, rebuild the index, back up from that source, and check again.
For the remaining damaged packs, it would ask the user to supply the missing files or data. It would also report exactly what data is lost and how much, and offer options such as removing the affected snapshots.
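Something close to the requested automation can already be scripted today. A hypothetical sketch for a *local* repository (all paths and the `SOURCE` variable are placeholders; the error-line format is the one quoted earlier; try this on a copy of the repository first):

```
#!/bin/sh
set -eu

SOURCE=/path/to/original/data   # the still-intact source files

# 1. Find damaged packs and delete their pack files
#    (a pack file's name begins with the reported ID).
restic check --read-data 2>&1 |
  awk '/^pack .* contains .* errors/ { print $2 }' |
while read -r id; do
  find "$RESTIC_REPOSITORY/data" -name "${id}*" -delete
done

# 2. Rebuild the index, re-upload missing data, verify.
restic rebuild-index            # `restic repair index` in newer versions
restic backup "$SOURCE"
restic check --read-data
```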
For some reason I get tens of errors per 100 GB. I don’t know whether other users experience the same thing.
I noticed it seems to depend on whether the laptop is connected to power, whether the files are large (for movies, expect a lot!), whether it’s an initial backup or an incremental one, etc.
At that rate of errors there’s no point in repairing the repository unless you fix the hardware problems first. To add a data point: I have never seen a corrupted pack file in a backup repository in real life. And that includes dozens of repositories that store thousands of snapshots, millions of files, and several TB of data.
It does. I guess I’ve always either used pro systems that auto-detect and correct hardware problems (like SAN systems), or had hard disks that just died outright. But it’s only since using restic at home that I’ve had multiple occasions of corrupt files that I probably would never have noticed otherwise.
I think we’re livin’ on the edge and simply never notice.