I have similar goals: multiple copies of data with some resiliency to repository corruption.
The solution I use is similar to what sniner suggests:
o) All clients back up to one restic server, which stores the repository on a single HDD (ext4: a trusted and stable file system, but no snapshots).
o) Once backups and maintenance (forget+prune) are completed, I clone the repository to B2 storage. This gives me an off-site copy of the data.
o) Once the upload completes, I rsync the repository to a 2nd PC, onto a dedicated btrfs filesystem, and take a snapshot of the data. btrfs gives me data checksumming and snapshots.
o) Over the week I verify the correctness of the Monday repository snapshot using restic check with the --read-data-subset=x/5 option. So once Friday's run has checked the 5th fifth of the data, I believe the Monday repository is correct.
o) Once the weekly repository snapshot is confirmed to be good, I delete the old daily snapshots, leaving 4 known-good weekly copies.
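For the curious, the daily part of this could be sketched roughly like the script below. All paths, the rclone remote name, and the snapshot layout are made up for illustration, and for simplicity I pretend the btrfs filesystem is mounted locally (in my actual setup the mirror lives on the 2nd PC, so the rsync goes over the network). With DRYRUN=1 (the default) it only prints the commands instead of running them:

```shell
#!/bin/sh
# Sketch of the daily replicate-and-verify cycle. Hypothetical paths/remotes;
# assumes GNU date, rclone, rsync, btrfs-progs and restic are available.
set -eu

REPO=/srv/restic/repo               # primary repository on the ext4 HDD
B2_REMOTE=b2:my-restic-bucket       # rclone remote for the off-site copy
MIRROR=/mnt/btrfs/restic/repo       # btrfs subvolume holding the mirror
SNAPDIR=/mnt/btrfs/restic/snapshots # read-only daily snapshots live here

DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# 1) Off-site clone, run after backup + forget/prune have finished.
run rclone sync "$REPO" "$B2_REMOTE"

# 2) Local mirror, then a read-only daily snapshot on btrfs.
run rsync -a --delete "$REPO/" "$MIRROR/"
run btrfs subvolume snapshot -r "$MIRROR" "$SNAPDIR/$(date +%F)"

# 3) Spread full data verification of the Monday snapshot over the
#    work week: Monday reads subset 1/5, Tuesday 2/5, ..., Friday 5/5.
dow=$(date +%u)                               # 1 = Monday ... 7 = Sunday
monday=$(date -d "-$(( dow - 1 )) days" +%F)  # date of this week's Monday
if [ "$dow" -le 5 ]; then
    run restic -r "$SNAPDIR/$monday" check --read-data-subset="$dow/5"
fi
```

The weekly cleanup (keeping 4 known-good snapshots) would be a separate cron job deleting the older daily subvolumes once Friday's check passes.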
That way I have 3 copies of the current repository state: 2 locally (which is why I don’t use RAID for the restic repository disks) and one off-site. If the repository becomes corrupted, I’ll detect that within a week, and I can pinpoint when it happened by examining the daily repository snapshots. I can also restore the repository from up to 4 weeks ago.
My rationale for this approach is:
o) restic backup is much slower than rsync, so I want to avoid doing the backup twice.
o) I choose to trust the backup software, i.e. I do not protect myself from restic writing incorrect data into the repository, only from repository corruption by outside factors.
o) I want to do full data verification, just in case, but I want to minimise the time spent doing it.
Therefore, I run the setup described above. I verify the clone of the data because, if the secondary repository passes verification, it is statistically improbable that the primary was corrupted at the time of the rsync.
In paranoid mode, I could repeat the repository snapshot approach with the off-site data. In well-off paranoid mode, I could beef up my hosts and do a full repository data verification every day.