So I guess that if I use S3 as the repository with a policy that automatically moves files to Glacier after a few days, restic won’t be able to make follow-up backups?
This is pretty similar to my approach. I do the following:
- I run `restic backup` against a local repository (on another RAID volume)
- After the backup is complete, I perform `aws s3 sync [repo path] [s3 bucket] --delete` to synchronize any changes to S3
- I have S3 lifecycle rules that move `data/` files in my repo to Glacier Deep Archive after 7 days. Only the data directory is transitioned (it's roughly 98% of the total repo).
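The first two steps can be sketched as a short script. Everything concrete here is a placeholder (repository path, bucket name, and `/home` as the backup source), and the commands are collected into a variable rather than executed, so the sketch can be read without `restic` or the AWS CLI installed; in real use you would invoke them directly:

```shell
#!/bin/sh
# Hypothetical locations -- adjust to your environment.
REPO=/mnt/raid/restic-repo          # local repo on the second RAID volume
BUCKET=s3://example-backup-bucket   # remote mirror of the repo

# Collect each command into $PLANNED instead of running it,
# purely so this sketch is inspectable as-is.
plan() { PLANNED="${PLANNED}$*
"; }

# 1. Back up into the local repository.
plan restic -r "$REPO" backup /home

# 2. Mirror the repository to S3. --delete also removes remote
#    objects that a later local prune deleted, keeping the copies
#    in sync.
plan aws s3 sync "$REPO" "$BUCKET" --delete

# Show the planned commands.
printf '%s' "$PLANNED"
```

Because the S3 copy is a byte-for-byte mirror of a normal restic repository, the restic client can also be pointed at it directly for read-only operations.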
This ticks off so many of my goals:
- Implements US-CERT’s 3-2-1 backup recommendation (2 local copies and 1 remote copy of data)
- Very fast performance for operations that hit the local copy
- I can directly access the cloud copy using the restic client for many tasks (listing snapshots, diffing snapshots, etc.)
- Pruning is easy. I perform prunes against my local copy weekly.
- Pruning is low-cost (given how my data changes). My GDA early-deletion fees are negligible (< $0.02 per month). For people with more dynamic data, this could be higher, but could perhaps be mitigated with a longer delay before moving to GDA.
- Very low cloud-storage costs. (My Amazon costs are about 30% of what I used to pay at Wasabi.com)
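For reference, the lifecycle rule from step 3 looks roughly like this (bucket-side configuration; the rule ID is made up, and `Days` is the knob you'd raise to reduce early-deletion exposure, as mentioned above). A `Prefix` filter of `data/` ensures that only the pack files transition, while the small index/snapshot/config objects stay in standard storage where restic can still read them cheaply:

```json
{
  "Rules": [
    {
      "ID": "restic-data-to-deep-archive",
      "Status": "Enabled",
      "Filter": { "Prefix": "data/" },
      "Transitions": [
        { "Days": 7, "StorageClass": "DEEP_ARCHIVE" }
      ]
    }
  ]
}
```

This can be applied with `aws s3api put-bucket-lifecycle-configuration --bucket [bucket] --lifecycle-configuration file://lifecycle.json`.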