Bare metal restore from restic repo: worked fine!

Today I wanted to try a full “bare metal restore” from my restic backup. I wanted to see how it works and to have done it at least once before there is a real disaster.

I chose a small webserver of mine with about 20 GB of data, which is backed up with restic each night to an sftp repo. The backup covers everything ("/"), excluding only proc, sys, tmp and var/tmp.
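For reference, such a nightly run boils down to roughly this (the repository URL below is a placeholder, not my real one):

```sh
# Nightly full-system backup to an sftp repo (URL is a placeholder).
restic -r sftp:user@backuphost:/restic-repo backup / \
    --exclude /proc --exclude /sys --exclude /tmp --exclude /var/tmp
```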

What I did (a rough command sketch follows the list):

  • created a sufficiently large VM at some cloud provider
  • booted it with the provided rescue system
  • downloaded restic to /usr/local/bin, checked the repo with restic … snapshots -> worked
  • mounted the hard disk at /mnt/restore and ran restic restore with that target
  • chrooted into /mnt/restore, ran grub2-install, grub2-mkconfig, dracut -f
  • updated fstab with the correct UUID
  • umount, reboot
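Put as commands, the sequence looks roughly like this; the device names and repository URL are placeholders, and the exact grub/dracut invocations depend on the distribution:

```sh
# In the provider's rescue system; device names and the repo URL are placeholders.
export RESTIC_REPOSITORY=sftp:user@backuphost:/restic-repo
restic snapshots                       # sanity check: repo reachable (restic asks for the password)

mount /dev/sda1 /mnt/restore           # the freshly prepared target disk
restic restore latest --target /mnt/restore

# Make the restored system bootable from inside a chroot.
for fs in dev proc sys; do mount --bind /$fs /mnt/restore/$fs; done
chroot /mnt/restore /bin/bash -c "
  grub2-install /dev/sda &&
  grub2-mkconfig -o /boot/grub2/grub.cfg &&
  dracut -f
"

blkid /dev/sda1                        # note the new UUID for /etc/fstab
# ...edit /mnt/restore/etc/fstab accordingly, then:
umount -R /mnt/restore
reboot
```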

What can I say? Apart from some small problems that are not worth mentioning and have nothing to do with restic, everything was up and running. The whole procedure took about 50 minutes.

Great! Once again thank you for this wonderful software.

The only small pity was that although I used -v with restic restore, I could not see any progress during the restore. It would be nice to see how far along the process is.

6 Likes

Awesome, thank you very much for the report!

I had a similar experience; I was able to do a test restore of a full system using almost exactly the process you describe.

1 Like

Nice one. I don’t do full restores just because AWS bandwidth costs money, but I’ll do a local full restore at some point. I did do a restore of three random folders from AWS S3 and had no problems.
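In case it is useful to anyone: a partial restore like that can be done with restic's --include option; the bucket and paths below are just examples, not the ones I actually restored:

```sh
# Restore only selected folders from the latest snapshot (bucket and paths are examples).
restic -r s3:s3.amazonaws.com/my-restic-bucket restore latest \
    --target /tmp/partial-restore \
    --include /home/alice/documents \
    --include /etc/nginx \
    --include /var/www
```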

I know that backing up the whole / has a lot of overhead in storage and transfer. But it makes disaster recovery so much easier for me, since I just have to do the grub work after the restore and everything behaves as before.

Regarding costs: AWS would be way too expensive for me. When I started playing around with restic I booked some “Storage Boxes” at Hetzner using sftp. Cheap and reliable. Now I am moving more and more over to Wasabi, which is fully S3 compatible and does not charge for bandwidth.
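Using restic against Wasabi goes through the normal S3 backend; the bucket name and endpoint below are assumptions, so check the endpoint for your region:

```sh
# Credentials and repo for the S3 backend; bucket name and endpoint are examples.
export AWS_ACCESS_KEY_ID=<your-wasabi-access-key>
export AWS_SECRET_ACCESS_KEY=<your-wasabi-secret-key>
export RESTIC_REPOSITORY=s3:https://s3.wasabisys.com/my-restic-bucket
restic init            # once, to create the repository
restic backup /        # afterwards, the usual backup runs
```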

1 Like

Same here. We moved some repositories from Hetzner to Wasabi and are really happy with the speed, reliability and pricing. The only thing I’m missing is Hetzner’s ‘auto snapshot’ feature, which is a safety net in case an intruder gets access to the client and deletes the whole repository.

I agree, the snapshots are nice. I wonder if something similar could be done with Wasabi’s “versioning” feature.
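If Wasabi’s versioning behaves like standard S3 bucket versioning (an assumption on my part), it should be possible to switch it on with the AWS CLI pointed at the Wasabi endpoint:

```sh
# Enable object versioning on the repository bucket (bucket name and endpoint are examples).
aws s3api put-bucket-versioning \
    --bucket my-restic-bucket \
    --versioning-configuration Status=Enabled \
    --endpoint-url https://s3.wasabisys.com
```

Whether deleted or overwritten repository files can then be recovered in a useful way is something I have not tested.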

Hi @betatester77,

Thanks for the report. The process you used is almost exactly the same as the one I use here with tar in place of restic, so I’m very glad to hear it worked – it means very little alteration to my current procedures; basically just using restic restore in place of tar should do it.
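For comparison, the swap amounts to roughly this (the archive path and repo URL are made up):

```sh
# Old tar-based restore (archive path is an example):
tar -xpf /mnt/backup/webserver.tar -C /mnt/restore

# restic-based restore (repo URL is an example):
restic -r sftp:user@backuphost:/restic-repo restore latest --target /mnt/restore
```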

Cheers,
– Durval.

2 Likes

Thanks for this post.

1 Like


@betatester77

Hiya, I wonder if the following steps are sound (a rough command sketch follows the list).

Backup laptop and perform BMR
source: Debian 12 laptop (LUKS LVM, / and /home and /var, etc. all reside on the same partition)
destination: sftp (ssh) backend

  1. back up the laptop to the sftp (ssh) backend (without running restic as root, see: Examples — restic 0.16.4 documentation)
  2. partition the new external hard drive (partition table MBR or GPT?), create LUKS + LVM and format ext4
  3. boot into, e.g., Debian Live and install restic
  4. unlock LUKS and mount the new external hard drive at /mnt/external
  5. restore the backup from sftp (ssh) to the new external hard drive
  6. chroot /mnt/external
  7. grub2-install, grub2-mkconfig, dracut -f (I believe the commands are identical for Debian)
  8. update fstab on /mnt/external with the new UUIDs of the new external hard drive
  9. exit the chroot, umount /mnt/external, reboot from the new external hard drive
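Roughly what I have in mind for steps 2 and 4–8, with /dev/sdb, the volume group name and the repo URL as placeholders (I think the Debian equivalents of those bootloader commands are grub-install, update-grub and update-initramfs -u rather than the grub2-*/dracut names, but please correct me):

```sh
# Step 2: GPT partition table, LUKS + LVM, ext4 -- /dev/sdb and vg0 are placeholders;
# the /boot layout would need to match whatever the source system expects.
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart luks 1MiB 100%
cryptsetup luksFormat /dev/sdb1
cryptsetup open /dev/sdb1 cryptrestore
pvcreate /dev/mapper/cryptrestore
vgcreate vg0 /dev/mapper/cryptrestore
lvcreate -l 100%FREE -n root vg0
mkfs.ext4 /dev/vg0/root

# Steps 4-5: mount the target and restore from the sftp repo (URL is a placeholder).
mount /dev/vg0/root /mnt/external
restic -r sftp:user@backuphost:/restic-repo restore latest --target /mnt/external

# Steps 6-7: chroot and make it bootable (Debian command names assumed).
for fs in dev proc sys; do mount --bind /$fs /mnt/external/$fs; done
chroot /mnt/external /bin/bash -c "
  grub-install /dev/sdb &&
  update-grub &&
  update-initramfs -u
"

# Step 8: collect the new UUIDs and update fstab (and crypttab for LUKS).
blkid
# ...edit /mnt/external/etc/fstab and /mnt/external/etc/crypttab, then:

# Step 9:
umount -R /mnt/external
reboot
```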

Thank you very much.

I am not an expert with LUKS, but to me all the steps sound reasonable.

Thanks for your feedback. I will test it out within the next few weeks and report back. Again, much appreciated.

1 Like