I’m sure you’ve all seen the 3-2-1 backup “strategy” where you want to have multiple backups on different “media”. I’m OK with making sure all my backups aren’t on the same platform/disk/whatever, but wondered what people’s opinions were of using the same backup program (e.g. restic) for all of them.
I would think there’s an argument for using two different programs, saving in two different formats, for ultimate redundancy (say restic lets some data-corrupting bug slip through or something).
The flip side is the added complexity of setup, etc.
Currently I’m running restic, and then rcloning the repository to B2 for my offsite. I’ve tested restores from both, and run checks occasionally, but with this setup I’m pretty much counting on restic.
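For reference, the workflow described above might look something like this as a cron-able script. The repository path, password file, and the `b2remote` rclone remote name are all assumptions, not taken from the post:

```shell
#!/bin/sh
# Sketch of "restic locally, then rclone the repo to B2 offsite".
# Paths and remote names below are placeholders.
export RESTIC_REPOSITORY=/mnt/backup/restic-repo
export RESTIC_PASSWORD_FILE=/root/.restic-pass

# Local backup, then spot-check a sample of the repository data.
restic backup /home /etc
restic check --read-data-subset=5%

# Mirror the whole repository to Backblaze B2 for the offsite copy.
rclone sync "$RESTIC_REPOSITORY" b2remote:my-restic-bucket
```

Note that `rclone sync` makes the B2 copy byte-identical to the local repo, so a corrupted local repo would be mirrored offsite on the next sync; that is exactly the single-point-of-failure being discussed.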
Of course if I did lose all my backups somehow, hopefully I wouldn’t lose my primary disk copy at the same moment, but it wouldn’t be ideal to start over.
Just wondered what other people were doing and/or opinions.
Personally I agree completely. I would recommend using two different backup programs for at least two different backups. And rather than cloning a single backup made with one program, I’d run two independent backups.
I agree. I back up to an off-site disk using Restic, Cloudberry, and a straight copy. Online I use Restic, Cloudberry, and S3 Glacier Deep Archive of zipped collections of files, versioned and encrypted. My server uses Restic and S3 as well.
I too recommend having two different backup programs (at least).
First I ensure that Previous Versions is turned on on all Windows servers. It’s a quick first-line backup. I use a similar technique on Samba servers.
Then I use Veeam Free Agent on local stations/servers and clone whole machines to a local server with several TB free. I run btrfs snapshots on that server to protect the Veeam backups from cryptolockers, and I also create honeypot folders at the top of the Samba shares to shut down the server if files change in there, again for crypto protection. That backup gives me around 14 days of history and is very handy when a whole station/server crashes. I even built a procedure so I can restore a backup locally in VirtualBox and get a server back from the last backup in a few minutes. Of course, when budget permits, I go with Veeam Backup & Replication for a less DIY job.
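The btrfs-snapshot protection mentioned above could be sketched roughly as below. The subvolume paths, the 14-day window, and the date-stamped naming are my assumptions, not details from the post:

```shell
#!/bin/sh
# Sketch: read-only btrfs snapshots of the Veeam backup target.
# A read-only snapshot cannot be modified, so ransomware that
# encrypts the live backup share cannot touch snapshotted data.
# /srv/veeam and /srv/snapshots are placeholder paths.
btrfs subvolume snapshot -r /srv/veeam "/srv/snapshots/veeam-$(date +%F)"

# Prune snapshots whose date-stamped name is older than 14 days
# (ISO dates compare correctly as strings).
cutoff=$(date -d '14 days ago' +%F)
for snap in /srv/snapshots/veeam-*; do
  [ "${snap##*veeam-}" \< "$cutoff" ] && btrfs subvolume delete "$snap"
done
```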
On top of that, I run restic + VSS snapshots at the file level, with SFTP to a remote location under our full control. Restic is soooo efficient that some locations have backups running every hour. This location has full redundancy of the restic repos (via rsync). Then the whole location is rcloned daily to Backblaze B2 for major-crisis emergencies. There is a substantial delay between those two steps to avoid possible contamination/corruption.
Bottom line is always always always have a plan B.
Can you elaborate more on this? I’ve never heard of such a thing…
Simply put, by having a substantial folder (thousands of files) with a name that sorts alphabetically before the other folders in the share, I trick the cryptolocker into encrypting those files first. I monitor this folder for any changes with a tool such as
entr (http://eradman.com/entrproject/), using it to issue a command that forces the Samba service to shut down (or the whole host if you want) before the cryptolocker gets to your real shared files. Similar behavior can be achieved on Windows Server.
|- Warning - Never modify any of these files.txt
|- Folder A - (Few thousand fake PDFs)
|- Folder B - (Few thousand fake PDFs)
|- Folder C - (Few thousand fake PDFs)
|- My data folder A
|- My data folder B
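A minimal sketch of the watcher described above, assuming the share lives at `/srv/share` and samba runs as the `smbd` systemd unit (both placeholders). `entr -p` postpones the command until a watched file actually changes, so the first write by a cryptolocker triggers the shutdown:

```shell
#!/bin/sh
# Watch every fake file in the honeypot folders; on any change,
# stop samba immediately and notify logged-in users.
find "/srv/share/Folder A" "/srv/share/Folder B" "/srv/share/Folder C" -type f |
  entr -p sh -c 'systemctl stop smbd; wall "Honeypot modified: samba stopped"'
```

You would run this from a service or screen session so it survives logout; swap `systemctl stop smbd` for a host shutdown if you prefer the more drastic option mentioned above.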
Your mileage may vary; make sure you test it thoroughly in a lab. Hope this helps.