Backups of backups with Restic — how?

Hey.
I’m backing up some huge and extremely important data so I want to have “backups of backups”.
How can I replicate the whole restic state to another machine? Should I just rclone everything, or is there some native way to do that?

Thanks.

It’s always a great idea to have multiple separate backups. Ideally one would use separate software for it as well, but that’s another story.

A restic repository consists of files that are the same regardless of where you store the repository. So yes, you can rclone/rsync the repository folder between different places/storages and access/use the repository wherever it is, as long as you specify the proper repository URL, of course.
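
For example (assuming an rclone remote named remote2; the paths are just placeholders):

```
# Sync the whole repository folder to a second location.
rclone sync /srv/restic-repo remote2:restic-repo-copy

# The copy is a fully functional repository; point restic at it to use it.
restic -r rclone:remote2:restic-repo-copy snapshots
```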

There’s also the restic copy command, which you can use if you initialized the target repository properly (e.g. using --copy-chunker-params, see restic help init and the documentation for more information). You can copy snapshots from one repo to the other with this command.
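
A rough sketch (the exact flag names depend on your restic version; recent versions use --from-repo, older ones used --repo2, and the repository locations here are just placeholders):

```
# Initialize the target repo with the same chunker parameters as the source,
# so data deduplicates well across the copy.
restic init --repo /mnt/backup2 --from-repo sftp:user@host:/srv/restic-repo --copy-chunker-params

# Copy all (or selected) snapshots from the source repo into the target repo.
# restic asks for both passwords unless you pass --password-file / --from-password-file.
restic copy --from-repo sftp:user@host:/srv/restic-repo --repo /mnt/backup2
```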

A third option, which is what I’d personally recommend, is to simply do multiple backup runs. E.g. I back up my computer to multiple repositories by running restic multiple times: client → server A, client → server B, etc. However, since restic has such good means to verify the integrity of your repository, this isn’t necessarily what you want to do if you are short on resources.


I have some files and folders listed in my include file.

I back up using a bash script scheduled by launchd. It backs up to one particular remote repository, B2.

Now let’s say I also want to back up the same files/folders listed in that include file. Should I just create another script that runs at different times? Or is there a way to back up to two repositories?

Just to clarify: my case is slightly different. I do not want to back up the repository (i.e. a backup of backups); I want to have multiple destinations/repos for one backup set.

Simply run restic twice, once for repository 1 and once for repository 2. How you execute/schedule those two runs is entirely up to you. If you have a single script already, perhaps just add a second call to restic in it?
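
A rough sketch of such a script (repository URLs, password files and the include file path are just placeholders; the B2 backend also needs B2_ACCOUNT_ID/B2_ACCOUNT_KEY in the environment):

```
#!/usr/bin/env bash
set -euo pipefail

INCLUDE=/Users/me/backup-include.txt   # your existing include file

# First repository (e.g. B2)
restic -r b2:my-bucket:restic --password-file ~/.restic-pass-b2 \
    backup --files-from "$INCLUDE"

# Second repository (e.g. an sftp server or another provider)
restic -r sftp:user@host:/srv/restic-repo --password-file ~/.restic-pass-sftp \
    backup --files-from "$INCLUDE"
```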

This is not an option. A backup of a backup should be as secure as possible, to avoid revealing even its existence to possible criminal activity.

What/which suggestion is not an option? What are you trying to protect against, and why is it not an option?


There are three types of people:

  • Not doing backups yet
  • Already doing backups <<== you’re here
  • Already doing backups of backups in a cold vault.

Can’t say whether I’m glad for you to be at the second step or not. But at some point you’ll get to line three.

@ay-b You’re a funny person, aren’t you, but also misinformed. There are actually only two types of people in this world:

  1. Those who can extrapolate from incomplete data

Let me know when you reach the fourth step on your “ladder” there, where you are doing multiple backups, with different software, to different places, with measures in place to resist complete erasure of the backups, and where potential corruption in your backups doesn’t propagate to your copies of the backups (like it does in your step three).

If you have nothing better to do than to arrogantly not answer questions that are meant to constructively clarify what you mean, please refrain from posting in the first place. It makes for such inefficient communication when you don’t.

:heart:


okay, okay.
So, say your server gets compromised. Then your backup servers, where you’re pushing data, get compromised too. But there is also a routine that pulls data from your backup servers. These pulling servers are completely independent, not sharing a single thing with the main infra.
By the time these separate servers could get corrupted, you would already know that something went wrong and could take measures.
Yep, paranoia may lead down very deep holes, but obvious and simple ways to keep backups safe aren’t so bad.
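
The pull side can be as simple as a cron job on the isolated machine, roughly like this (bucket and paths are placeholders; ideally that machine only holds read credentials for the backup storage):

```
# Runs on the isolated machine; the main infra has no credentials for it.
rclone sync b2:my-backup-bucket/restic /srv/backup-of-backups/restic
```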

@rawtaz I am sorry I am pinging you in this topic again; if this gets out of hand I will create a new post. But I wanted to add a follow-up question to the proposed solution in Backups of backups with Restic — how? - #2 by rawtaz.

We are currently also looking at backups of backups for the following use cases:

  1. What if our Restic & storage (B2) info is confiscated and the backups are deleted?
  2. What if Restic (for example with a new version) corrupts the backup indices? Purely theoretical of course :slight_smile:

Especially for the first point, I feel like the only way to prevent the single point of failure is to use a separate service. For example, snapshots taken on the storage side (which B2 sadly does not support automatically) that do not go through the same push communication flow.

Just wanted to know what your take on this is.


Update: I think we can actually just do this by running a separate rclone job that copies the Restic repo files to a separate location.

Backblaze allows you to set ACLs for users. You can use this to prevent users from deleting files.
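
Roughly like this with the b2 command-line tool (a sketch only; the exact subcommand syntax differs between b2 CLI versions, and note that restic normally deletes its own lock files, so test such a restricted key carefully):

```
# Create an application key for one bucket without the deleteFiles capability.
# Key name and bucket name are placeholders.
b2 create-key --bucket my-backup-bucket restic-no-delete \
    listBuckets,listFiles,readFiles,writeFiles
```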

In an ideal world you would have multiple backups in different locations (local and remote), taken by more than one program. Backblaze even wrote a blog post about the famous 3-2-1 strategy.

A copy of your repository won’t hurt. But you need to make sure that the repository is not corrupted, or else you will copy the corruption.
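
For example, something like this before each sync (the subset fraction is just an example; --read-data checks everything but takes much longer):

```
# Verify the repository structure and read back a fifth of the data blobs.
restic -r b2:my-bucket:restic check --read-data-subset=1/5
```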


We have our BoB (backup-of-backups) server on a completely separate account at a separate provider. Consistency of backups is tested daily by restoring them in sandboxes.
It would be nice to also have tapes at a third location, but that would look like clinical paranoia.

@Nkmol The things you described are basically about not depending on a single storage provider for your backups. It’s a good idea to have multiple copies of your backups at completely different providers, like others here mention as well. So your thinking is right IMO. I also like what @ay-b says, that they’re restoring (and I presume also verifying the integrity of the restored files, e.g. by restic restore --verify in those sandboxes, or by other means), that’s really nice!
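
For reference, such a sandbox check can be as simple as something like this (the target path is a placeholder):

```
# Restore the latest snapshot into a scratch directory and verify the
# restored files against the repository.
restic -r /srv/backup-of-backups/restic restore latest --target /tmp/restore-test --verify
```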

Regarding corruption, one has to assume it can happen anywhere and anytime, and that’s why I prefer using two separate backup runs instead of just one and syncing/copying that one backup to a second place, since any corruption will be synced along with it. So if you do this, you have to be aware of it and make a decision as to how you deal with it. If you test your backup’s integrity often enough that you reduce the chances of corruption being a problem in both copies, that’s fine. If you make multiple separate backups that should be fine too, and arguably has even less chance of corruption in all your backups, but it really boils down to understanding the issue and making a decision that works for your needs.
