Clarification of section in threat model

Hi, I’m looking to start using restic for backups, most likely using S3 or similar as the backend. I had a question about this section in the design doc:

However, the restic backup program is not designed to protect against attackers deleting files at the storage location. There is nothing that can be done about this. If this needs to be guaranteed, get a secure location without any access from third parties. If you assume that attackers have write access to your files at the storage location, attackers are able to figure out (e.g. based on the timestamps of the stored files) which files belong to what snapshot. When only these files are deleted, the particular snapshot vanishes and all snapshots depending on data that has been added in the snapshot cannot be restored completely. Restic is not designed to detect this attack.

(I’m assuming that storage location refers to the repo located remotely, eg. S3, and not to the data files being backed up. If this is wrong, please disregard the rest of my questions :slight_smile: )

I’m interested in better understanding the meaning of “Restic is not designed to detect this attack”. Hypothetically, if someone did alter the repo as described, does it mean that subsequent backups would appear to work successfully, but then not be usable for restores because required data is missing? Or would the next backup attempt complete successfully and be valid (assuming no further files are deleted)?

I saw some more discussion of the threat model on GitHub, but it still didn’t quite clarify this scenario (to me anyway). I’m less concerned about the risk of random snapshots in the repo being manipulated, but I wanted to confirm that this wouldn’t affect subsequent backups.

Thanks for reading and I look forward to trying out restic!


This is one possible outcome. For example, if a pack file is modified, invalidating at least one stored object, subsequent backups that contain that same object will not repair the damage, because restic doesn’t validate all data in the repository during a backup. The repository index says it contains blob A, so restic believes it and doesn’t re-upload it. (This is the deduplication mechanism at work.) In that case, the subsequent backup would also be partially corrupt.

If the data that was modified/deleted does not actually appear in the next backup, then the next backup would be fully intact.

You can use restic check --read-data to have restic perform several tests against your snapshots (directory connectivity, etc.) and then verify all data objects, which is time-consuming, compute-intensive, and bandwidth-expensive.
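As a rough sketch of what those invocations might look like (the repository URL is a placeholder; adjust it to your backend):

```
# Full verification: check repository structure, then download and verify every pack file.
restic -r s3:s3.amazonaws.com/my-backup-bucket check --read-data

# Cheaper spot check: verify one fifth of the pack files per run; rotating
# the subset (1/5, 2/5, ...) across runs covers the whole repository over time.
restic -r s3:s3.amazonaws.com/my-backup-bucket check --read-data-subset=1/5
```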

This may not even detect all forms of malicious modification. For example, deleting a snapshot file would likely go undetected, and then the next prune would discard all of the data unique to that snapshot.

The best defense against such an attack is multi-tier: have one copy of your data on-site and two off-site. As much as possible, have the off-site systems pull from the on-site copy, and have them reject modifications and deletions (rclone copy --immutable implements exactly this operation).
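A rough sketch of that pull setup (the remote names, bucket, and path are placeholders): run something like this from each off-site host on a schedule.

```
# Run on the off-site host: pull the on-site repository into the off-site copy.
# --immutable makes rclone fail the transfer for any existing file whose content
# has changed, instead of silently overwriting it; rclone copy never deletes
# files at the destination.
rclone copy --immutable onsite:/srv/restic-repo offsite-s3:my-offsite-bucket/restic-repo
```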

This does mean you have to prune on three different hosts, but it also means sabotage of the on-site backup system is ineffective as long as the data has already been copied off-site by the time the sabotage occurs.

For S3, you could also enable bucket versioning, and don’t give your backup clients the ability to delete specific object versions. Then add a lifecycle policy that expires old versions after some amount of time (7 days, 30 days, whatever window you think is reasonable for you to notice corruption and recover an older version of the damaged repo file).
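A rough sketch with the AWS CLI (the bucket name, the 30-day window, and the rule ID are placeholders; the backup client’s IAM policy would additionally omit s3:DeleteObjectVersion):

```
# Turn on versioning so overwrites and deletes keep the previous object version.
aws s3api put-bucket-versioning \
  --bucket my-backup-bucket \
  --versioning-configuration Status=Enabled

# Expire noncurrent versions after 30 days, leaving a month to notice a problem
# and restore the older version of a damaged repository file.
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-backup-bucket \
  --lifecycle-configuration '{
    "Rules": [
      {
        "ID": "expire-old-versions",
        "Status": "Enabled",
        "Filter": {},
        "NoncurrentVersionExpiration": { "NoncurrentDays": 30 }
      }
    ]
  }'
```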


Great, thanks for the quick and comprehensive reply.

That all makes sense - I can see how having multiple copies off-site would enable additional chances to mitigate the risks. I think I’ll see how I go with bucket versioning, as I’d rather avoid the need for additional equipment.

Thanks again!
