More or less new to restic here. I've been using restic with Linode object storage and it works great, but after some thought I started wondering if I'm doing this in the most efficient way.
I'm currently using a single bucket that contains a single repository, but I mainly have two backup sources (e.g. DATA-X and DATA-Y). So to recap: all my snapshots from these two sources live in a single repository in a single bucket.
Q: My gut tells me that separate buckets or repos are better in terms of avoiding mistakes, but I'm curious whether any of this matters to restic itself.
I don't mind having everything in one big "bucket/repo" as long as restic still works efficiently for operations like restore/prune/etc.
So is there any gain in going with multiple buckets and/or multiple repositories when you have two distinct backup sources?
What combination seems to work best for you?
Thanks in advance!
What mistakes would those be?
Generally speaking, restic doesn't care, and there's no technical reason to separate them. On the contrary, with a single repository you get deduplication to whatever extent is relevant to your data.
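Even in a single repository, the two sources stay easy to tell apart if you tag each backup. A minimal sketch (the repository URL, paths, and tag names below are placeholders, not the poster's actual setup):

```shell
# Back up both sources into the same repository, tagged per source
restic -r s3:us-east-1.linodeobjects.com/my-bucket backup /srv/DATA-X --tag data-x
restic -r s3:us-east-1.linodeobjects.com/my-bucket backup /srv/DATA-Y --tag data-y

# Later, list (or forget/restore) snapshots for just one source
restic -r s3:us-east-1.linodeobjects.com/my-bucket snapshots --tag data-x
```

Restic also records the host and paths in each snapshot, so `snapshots --path /srv/DATA-X` works too, even without tags.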
The benefit of having one bucket for all sources is obviously deduplication, but there are downsides as well:
- Every client can read all data in the repository.
- With big repositories, maintenance tasks (especially `check --read-data`) can take a long time and use a lot of resources (memory and cache). Splitting repositories can help.
Personally, I separate per project, which might not be acceptable for everyone, but it turned out to be a good balance between security and deduplication gains.
How about separating things at the bucket level, e.g. one repo per bucket? This probably matters less, with perhaps zero gain from a restic perspective, and it may complicate management a bit since each repo requires a different key etc., but I'm curious to hear opinions anyway.
That's exactly what I do (one bucket per repository). I have this fully automated with B2 and restic.
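A one-repo-per-bucket setup like this might look roughly as follows; the bucket names, password files, and paths here are illustrative placeholders, not the poster's actual automation:

```shell
# Each source gets its own B2 bucket, repository, and key
restic -r b2:backups-data-x:/ --password-file /etc/restic/data-x.pass backup /srv/DATA-X
restic -r b2:backups-data-y:/ --password-file /etc/restic/data-y.pass backup /srv/DATA-Y
```

Credentials go in `B2_ACCOUNT_ID`/`B2_ACCOUNT_KEY` environment variables, so the per-job differences reduce to the repository URL and password file.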
For me it works quite well, as I don't have that much data (in GB or TB terms) that needs backing up.
Hope this isn't too far off topic, but I've spent the last few days thinking about restic together with providers that have a minimum retention policy.
Wasabi, for example, bills each object for at least 90 days, even if it's deleted after 10 minutes.
At the moment I do a backup, forget, and prune every day. But prune does a lot of deleting and re-creating of files, so when thinking about saving money, might it be better not to prune that often?
Anything else one could do to optimize costs in that context?
Personally, I back up to Wasabi daily and forget/prune once a week, keeping at least 90 daily snapshots. There is no point deleting snapshots early, since you are paying for them anyway. This keeps my timed deleted storage costs minimal ($0.02 on an $8.00 bill; 131 GB-day deleted vs 49137 GB-day active). I could save more by pruning less often (monthly perhaps), but the 2 cents is bearable.
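That schedule can be sketched as two cron jobs along these lines (the repository URL and path are placeholders, and the exact retention flags are this poster's policy, not a recommendation):

```shell
# Daily: create a new snapshot
restic -r s3:s3.wasabisys.com/my-bucket backup /srv/data

# Weekly: keep the last 90 daily snapshots, drop the rest, and prune
# in one pass; since kept snapshots are >90 days old when deleted,
# Wasabi's minimum-retention charge is mostly avoided
restic -r s3:s3.wasabisys.com/my-bucket forget --keep-daily 90 --prune
</imports>
```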
The Wasabi billing console lets you keep an eye on things, so you can experiment and see the effects.