I’ve been using AWS as one of the destinations for my restic backup. When I started using borg in parallel, I switched to rsync.net.
I like the simplicity of rsync.net and I’d like to move my restic repository there. However, unlike with borg, their servers don’t have the restic binary.
This raises the question: how can I check the repository efficiently? Without restic on the server I cannot run tasks like `check --read-data` directly on their server and save bandwidth.
How do you handle this situation? Run check on a subset of the data?
Maybe they are willing to make sha256sum available as an ssh command? Then checking the sha256sum against the file name for each file would be a good alternative to `check --read-data` (of course combined with a local run of `check`).
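If they did expose `sha256sum` over ssh, the comparison could be sketched roughly like this. This is a hypothetical sketch: `user@rsync.net` and `restic-repo` are made-up placeholders, and it assumes the provider lets you run `find` and `sha256sum` remotely. It relies on restic naming pack files after the SHA-256 of their content.

```shell
# Compare "sha256  path" lines (sha256sum output) against the file names;
# restic pack files are named after the SHA-256 hash of their content.
check_sums() {
  awk '{
    n = split($2, parts, "/")        # the file name is the last path component
    print ($1 == parts[n] ? "OK  " : "BAD ") parts[n]
  }'
}

# Remote usage sketch (placeholders, untested):
#   ssh user@rsync.net "cd restic-repo/data && find . -type f -exec sha256sum {} +" | check_sums
```

This way only the hash lines travel over the wire, not the pack data itself.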
Well, I cannot really run `check --read-data` locally. I do have local repositories for the same backup, but they are obviously different repositories: the local one could be fine while the remote one isn’t. I do not sync repositories; I run a separate backup for each destination I have (sftp, smb, ssd and s3).
I did try to contact support about making restic available on their servers, so that `check --read-data` could run directly there, but I never got any feedback. I can try to ask for `sha256sum` instead.
If you can check the sha256 checksum of each file (`config` excluded, of course) in your repository, IMHO a local run of `check` without `--read-data` suffices. This `check` run should be pretty fast and does not need to download much data from the repository.
I would even go further and claim that a local run of `check --with-cache` would be enough if you already checked the checksums of the files in `/data/`.
Now, having told you that besides local `restic check` runs (which verify repository consistency) it is enough to check that the repository files did not accidentally change, you have of course another option:
- Check whether the process used to transfer/store the data guarantees data consistency (this should usually be the case).
- Make sure you read and understand how your storage provider handles your data and what guarantees they give (if you are unsure or not satisfied, you should change your storage provider anyway).
- Then it is basically your provider’s job to regularly check your contents for data corruption and to initiate counter-measures.
- That is, instead of testing all files, testing random samples should give you enough confidence.
If you do not trust your storage provider, choose another one. If you do trust your storage provider and want to implement control mechanisms, I would suggest:
- Run `restic snapshots` regularly and make sure the wanted snapshots are in your repo!
- Run `restic check` regularly (this already checks many files).
- Test the sha256 of some random samples in `/data/`.
- Regularly run restore tests (these also test the data in your repository, but also catch a lot of other gotchas).
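The random-sample step could look roughly like this. A minimal sketch, assuming you can mount or download a copy of the repository’s `data/` directory and that GNU `shuf` is available; it again uses the fact that restic names pack files after the SHA-256 of their content:

```shell
# Pick n pack files at random and verify that each file's name matches the
# SHA-256 of its content. DIR is a locally mounted or downloaded copy of the
# repository's data/ directory.
sample_check() {
  dir=$1; n=$2
  find "$dir" -type f | shuf -n "$n" | while read -r f; do
    name=$(basename "$f")
    sum=$(sha256sum "$f" | awk '{print $1}')
    [ "$sum" = "$name" ] && echo "OK  $name" || echo "BAD $name"
  done
}

# usage sketch: sample_check /path/to/repo/data 20
```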
Actually, I think testing only random samples could be a feature to integrate into `restic check` - I’ll open an issue.
I made a PR to add this feature:
What’s the benefit of this over the existing `--read-data-subset`?
This is why I was asking. I always get some kind of timeout when I back up to S3, so I want to switch to a provider that offers standard access methods, like SSH with rsync.net. I’m really happy with rsync.net for my borg repo. But unlike with restic, I can run borg on the server to check the repository, which speeds up the process.
`--read-data-subset` will always read the same files for a given value `n/m` (to be more precise, it selects pack files by the first byte of their sha256 hash, which equals the first two characters of the filename).
To get a probabilistic answer to “how probable is it that I have a corrupted pack even though I checked n packs?” you should test a random subset of your packs. There it also makes much more sense to define the sample size instead of testing a fixed percentage of all packs.
The statistics are basically the same as predicting election results by polling just a thousand or so of all voters.
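As a back-of-the-envelope illustration of why a modest random sample gives good confidence (all numbers below are made up): if k of N packs are corrupted and you sample n at random, the chance of missing every bad pack is roughly (1 - n/N)^k, so the detection probability is 1 minus that:

```shell
# Rough sampling math with hypothetical numbers: N total packs, k corrupted,
# n randomly sampled. P(detect at least one bad pack) ~ 1 - (1 - n/N)^k.
awk 'BEGIN {
  N = 10000; k = 10; n = 1000
  printf "P(detect) = %.3f\n", 1 - (1 - n/N)^k
}'
# prints "P(detect) = 0.651"
```

So sampling just 10% of a 10000-pack repository would already catch at least one of 10 corrupted packs about two times out of three per run, and repeated runs compound quickly.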