Best strategy for backup storage and redundancy?

Apologies if this is in the documentation, I couldn’t see it.

I have 2 different hard drives which I want to use for backup and redundancy. Both HDDs should contain the backup in case one of them fails.

What is the best way to achieve this using restic? Should I just use restic to create the backup on the first drive and then use rsync (or similar) to copy it across, or can restic write the backup to two different locations?

Or should I treat each drive as a separate repository, i.e. keep two completely independent backups?

My thought is that the latter is better, because with rsync a corrupted file on one disk would simply be copied to the other disk. The downside is that restic has to do all of its processing twice.
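
To make the two options concrete, this is roughly what I mean (paths are just examples):

```bash
# Option A: back up once, then mirror the repository to the second disk
restic -r /mnt/disk1/restic-repo backup /home/me
rsync -a --delete /mnt/disk1/restic-repo/ /mnt/disk2/restic-repo/

# Option B: two completely independent repositories
restic -r /mnt/disk1/restic-repo backup /home/me
restic -r /mnt/disk2/restic-repo backup /home/me
```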

Any advice is welcome!

In fact, my setup is different, but to stick with your two independent backup disks: I would use a filesystem like btrfs or ZFS, back up to one of the two disks, rsync the repository to the second drive, and take a snapshot there. If the repository becomes corrupted, you can always fall back to the latest good snapshot.
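
A minimal sketch of that flow, assuming the second disk carries a btrfs filesystem and the repository directory on it is a subvolume (paths and snapshot naming are just examples):

```bash
# Back up to the primary disk
restic -r /mnt/backup1/restic-repo backup /home/me

# Mirror the repository to the second (btrfs) disk
rsync -a --delete /mnt/backup1/restic-repo/ /mnt/backup2/restic-repo/

# Take a read-only snapshot of the mirrored copy
btrfs subvolume snapshot -r /mnt/backup2/restic-repo \
    /mnt/backup2/snapshots/restic-repo-$(date +%F)
```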

I have similar goals: multiple copies of data with some resiliency to repository corruption.
The solution I use is similar to what sniner suggests:
o) All clients back up to one restic server, which stores the repository on a single HDD (ext4 filesystem: trusted and stable, but no snapshots).
o) Once backups and maintenance (forget + prune) are complete, I clone the repository to B2 storage. This gives me an off-site copy of the data.
o) Once the upload completes, I rsync the repository to a 2nd PC, onto a dedicated btrfs filesystem, and take a snapshot of the data there. btrfs gives me data checksumming and snapshots. (The whole cycle is sketched after this list.)
o) Over the week I verify the correctness of the Monday repository snapshot using restic check with the --read-data-subset=x/5 option. So once I have checked the 5th subset of the data on Friday, I know that the Monday repository is correct.
o) Once the weekly repository snapshot is confirmed to be good, I delete the old daily snapshots, leaving 4 known-good weekly copies.
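
In script form, one cycle looks roughly like this (hostnames, paths, the rclone remote name, and the retention policy are placeholders, and scheduling is left out):

```bash
#!/bin/sh
REPO=/srv/restic-repo   # primary repository on the ext4 HDD
N=1                     # which fifth to verify today; rotate 1..5 over the week

# 1. Maintenance once all clients have finished their backups
restic -r "$REPO" forget --keep-daily 7 --keep-weekly 4 --prune

# 2. Off-site clone to B2 ("b2" is a placeholder rclone remote)
rclone sync "$REPO" b2:my-restic-bucket

# 3. Clone to the 2nd PC's btrfs filesystem and snapshot it there
rsync -a --delete "$REPO/" secondpc:/mnt/btrfs/restic-repo/
ssh secondpc "btrfs subvolume snapshot -r /mnt/btrfs/restic-repo /mnt/btrfs/snapshots/restic-$(date +%F)"

# 4. On the 2nd PC, spread full data verification over the week
#    (in practice I point restic at the Monday snapshot directory)
ssh secondpc "restic -r /mnt/btrfs/restic-repo check --read-data-subset=$N/5"
```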

That way I have 3 copies of the current repository state: 2 local (which is why I don't use RAID for the restic repository disks) and one off-site. If the repository becomes corrupted, I'll detect that within a week, and I can find out when it happened by examining the daily repository snapshots. I can also restore the repository from up to 4 weeks ago.

My rationale for this approach is:
o) restic backup is much slower than rsync. I want to avoid doing the backup twice.
o) I choose to trust the backup software, i.e. I do not protect myself against incorrect data being written by restic into the repository, but against repository corruption by outside factors.
o) I want to do full data verification, just in case, but I want to minimise time spent doing it.

Therefore, I run the setup described above. I test the clone of the data, because if the secondary repository passes verification, then it is statistically improbable that the primary was corrupted at the time of the rsync.

In paranoid mode, I could repeat the repository snapshot approach with the off-site data. If I were both paranoid and well-off, I could beef up my hosts and do a full repository data verification every day.

I’m answering my own question here, but I went for ZFS mirroring in the end.

It’s basically RAID1: the data on the two drives is mirrored, but ZFS also checksums all of the data, so if an error is detected on one disk it can be repaired from the mirrored copy.
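
For reference, the setup and the periodic integrity check are roughly this (pool and device names are examples):

```bash
# Create a mirrored pool from two whole disks
zpool create backup mirror /dev/sdb /dev/sdc

# Periodically read everything, verify checksums and repair from the good copy
zpool scrub backup
zpool status backup
```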

I went with this approach because, although restic check would identify this kind of error, fixing it may be a bit of a pain.

Personally I use RAID1 locally and then have a remote system that pulls from the backup daily using rclone copy --immutable. This way the remote system is immune to local corruption that happens after the daily copy.
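
On the remote side the daily pull is essentially this ("local-backup" is a placeholder rclone remote pointing at the primary repository, and the paths are examples):

```bash
# --immutable: existing files are never updated; if a previously copied file
# changes at the source, rclone errors out instead of propagating the change,
# so corruption appearing locally after a copy can't reach this mirror.
rclone copy --immutable local-backup:/srv/restic-repo /srv/restic-mirror
```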

Thanks for the tip. I added --immutable to my rclone scripts.