Best way for internal and external backup? Backblaze B2/S3 vs rclone?

Hi,
I want to have an external backup of my data. At the moment I do an internal backup to another VM every hour and a full backup every day. The backup is done via restic.

I now want to add an external backup on top of that, using cloud storage like Backblaze, Wasabi or similar. What would be the best way?

Option 1: set up a second backup with restic, using e.g. Backblaze B2. I am a little unsure about the egress costs and API calls. Wasabi would offer those free of charge, but has a deletion retention policy, so deleted files would still be billed for 90 days.
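
Roughly what I mean by option 1 (bucket name, paths and credentials are placeholders):

```sh
export B2_ACCOUNT_ID="..."
export B2_ACCOUNT_KEY="..."

# one-time setup of a second, fully independent repository on B2
# (restic prompts for the repository password, or use --password-file)
restic -r b2:my-backup-bucket:host1 init

# hourly/daily job, same as the internal one but pointed at B2
restic -r b2:my-backup-bucket:host1 backup /srv/data
```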

Option 2: do an internal backup and then rclone the repository directory to Backblaze. Would that be better than two different/independent backups? What about egress costs with rclone? Would rclone also have to download files to check what's new or updated?
I mean rclone as a standalone tool here, not restic's rclone backend.
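
And roughly what I mean by option 2, assuming an rclone remote named b2remote has already been set up with rclone config (all names are placeholders):

```sh
# mirror the existing internal repository directory to B2;
# rclone decides what to upload from directory listings (size/modtime,
# optionally hashes), so the comparison by itself should not download file contents
rclone sync /mnt/backup/restic-repo b2remote:my-backup-bucket/restic-repo
```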

markus

It’s really down to personal preference. Mine is to use two separate backup “jobs” instead of just one that I then sync to the second repository. Both approaches work, and there are users of each of them.

Personally I make them separate to lessen the risk that some corruption in the primary repository (caused not by restic, but by e.g. the storage itself) is then synced along to my second destination. I prefer having two separate backup jobs that share nothing.
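
A minimal sketch of what that looks like, with made-up paths and repository names:

```sh
# two fully independent jobs that share nothing, e.g. both run from cron
restic -r /mnt/backup-vm/restic-repo --password-file /etc/restic/local.pass backup /srv/data
restic -r b2:my-backup-bucket:host1 --password-file /etc/restic/b2.pass backup /srv/data
```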

@rawtaz: That’s a good point! Having two separate jobs lowers the risk of corruption. The only thing left is maybe the “costs” like CPU, IO and traffic. But with a large repository, an rsync or rclone will also cost a lot of CPU to check which files are “newer” or “have changed”, so I might as well just have restic run a second time and do the backup again.
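
E.g. a dry run like this (remote name is a placeholder) should show what rclone would actually transfer, without moving any data:

```sh
rclone sync --dry-run /mnt/backup/restic-repo b2remote:my-backup-bucket/restic-repo
```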

I am no expert on the subject, but what I do is sync to a remote, then copy that to a local repo. This local data is then verified. The idea is that if I can read it back and verify the data locally, then the remote data was, at least at some point, good.
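
Roughly, the read-back-and-verify step looks like this (remote and paths are placeholders):

```sh
# pull the remote repository down into the local copy
rclone sync b2remote:my-backup-bucket/restic-repo /mnt/backup/restic-repo

# verify the local copy; --read-data reads and checks every pack file
restic -r /mnt/backup/restic-repo check --read-data
```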

I also cycle through multiple remotes (currently B2, e2, and Storj). At shutdown, the script runs a backup, syncs (copies) it to local, and then syncs from local to the other backups. When the script starts, it syncs all the repos, checks them, prunes them, and at the end verifies the data integrity of the local copy.
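
The shutdown half of that is roughly this shape (remote names are placeholders for my configured remotes):

```sh
#!/bin/sh
# back up to the primary remote, mirror it to local,
# then fan the local copy out to the other remotes
restic -r b2:my-backup-bucket:repo backup "$HOME"
rclone sync b2remote:my-backup-bucket/repo /mnt/backup/repo
for r in e2remote:my-backup-bucket/repo storjremote:my-backup-bucket/repo; do
    rclone sync /mnt/backup/repo "$r"
done
```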

That’s an example, not an endorsement. I really hate GitHub, so I’ve put zero effort into making it look like a proper repo, but I know it’s what people want to see. Also, it’s out of date. I don’t push to it, but rather use the web interface every now and then.