Timeshift vs git vs Restic (Linux OS and user data backup)

hello,
opening this up again, maybe … some time has passed and things have matured a bit…

Still trying to figure out the best way to back up a Linux server … mostly Ubuntu. What I had in mind was to divide it into multiple steps:

  1. backup user data
  2. backup OS configurations

Regarding 1), it might sound simple … install / set up restic and use some cron job to execute a backup of /root and /home.
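Something like this is what I have in mind (just a minimal sketch; the repo path, password file and schedule are placeholders):

    #!/bin/sh
    # /usr/local/bin/backup-userdata.sh -- placeholder paths, adjust as needed
    export RESTIC_REPOSITORY=/mnt/nas/restic-repo
    export RESTIC_PASSWORD_FILE=/root/.restic-pass

    restic backup /root /home --exclude '/home/*/.cache' --tag userdata
    restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune

    # crontab entry (root), nightly at 02:30:
    # 30 2 * * * /usr/local/bin/backup-userdata.sh >> /var/log/restic-userdata.log 2>&1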

Point 2)
The more critical part is the OS config … and how to be able to restore it / spin up a new machine and restore onto it if the old one fails…

I believe this has to be split into two steps
A) get an up-to-date list of all OS packages → packages.dump
B) back up all config in /etc and maybe other important OS dirs (/var, /opt?)

Back up both A) and B) using restic (rough sketch below).
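As a rough sketch of A) and B) together (Debian/Ubuntu commands; the repo path and the exact set of dirs are placeholders):

    #!/bin/sh
    # A) refresh the package lists before every backup run
    dpkg --get-selections > /root/packages.dump
    apt-mark showmanual > /root/packages.manual   # only the manually installed packages

    # B) back up the dumps together with the OS config dirs
    export RESTIC_REPOSITORY=/mnt/nas/restic-repo
    export RESTIC_PASSWORD_FILE=/root/.restic-pass
    restic backup /root/packages.dump /root/packages.manual /etc /opt --tag os-config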

Maybe I don’t see the issues / problems… any views / ideas / references to articles where this was analyzed / discussed / documented in detail?

Timeshift or git is also mentioned for scenario 2) multiple times on the internet; is there any point in using these tools?

  • Timeshift can be fully replaced by restic
  • backing up config files to git → all metadata (uid, gid, permissions) will be lost. Is there any point in backing up config files that way?

Appreciate it!

Just back up the entire system as documented.

Personally I just back up my data (/home, /root, /etc, etc.) and would restore the OS via an ISO should I have to do so, then redeploy the backed-up data.
I dump a list of my installed packages every night (I use Debian) so I know what’s installed.
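For what it’s worth, the restore side of such a dump is straightforward on Debian/Ubuntu, assuming the list was made with dpkg --get-selections:

    # recreate the package selection on a freshly installed system
    dpkg --set-selections < packages.dump
    apt-get dselect-upgrade -y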

There are 50 ways to skin this cat; the key thing is not to get hung up on a perfect method, but rather to pick a method and test, test, test that it works for you.


I think trying to back up your OS config is folly. If you are thinking of going in that direction, you might be interested in looking into NixOS, which can have your whole system defined as a configuration file that you can use to reinstall on a fresh system.

I think the most important thing to back up is the data that is irreplaceable (family photos) or time-consuming to get again (your mp3 or ebook collection).

I’ve been using restic for the past few years to back up my data locally, and I also do offsite backups with rest-server/Tailscale.

Whatever program you use (restic/rclone/borgbackup), just make sure you test your backups, automate them, and have an offsite backup, too.
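With restic, for example, a periodic sanity check plus a throwaway restore already goes a long way (the paths here are just examples):

    restic check                           # verify the repository structure
    restic check --read-data-subset=5%    # re-read a random 5% of the pack files
    restic restore latest --target /tmp/restore-test --include /etc/fstab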

I recently did a full install of my Yunohost server again. I didn’t have OS backups, but I had all my data. It was easy to get going again: I reinstalled Yunohost, installed the apps, then copied back the data I had backed up. It’s nice to reinstall fresh and get rid of all the cruft and junk that builds up over time.


As @arkadi already hinted, I highly recommend installing any server system in a way that is reproducible. In other words: when installing a system, I first write the documentation and then follow it to see if it’s right.

Then, via a pre-backup script, I collect all config files that deviate from the standard and might change from time to time into a folder that is included in the backup.
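As a sketch of what such a pre-backup script can look like on Debian (debsums needs to be installed; the staging path is a placeholder):

    #!/bin/sh
    # collect config files that differ from the packaged defaults into a staging dir
    STAGING=/var/backups/etc-changed
    rm -rf "$STAGING" && mkdir -p "$STAGING"

    # debsums -ce lists changed configuration files
    debsums -ce 2>/dev/null | while read -r f; do
        cp --parents -a "$f" "$STAGING/"   # keeps the path, owner and permissions
    done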

Being able to reproduce the install at any time gives you more control if something goes wrong, and it also saves you a little from the problem of support ending for a certain version of your distro. For instance, when a new Debian version is released, you can “just” install the whole machine again on the new version. That way you also notice if something needs to be updated in your documentation. This is obviously only feasible with virtual machines, where you can install the new machine while the old one still serves its purpose.

That’s my two cents on the story.


Hi all,
thanks for all the valuable insights.

Yes, my plan is to have backup of

  1. system confs
  2. user data

Documentation is a great way to do it, but most of the time it gets out of date on my end and that’s a pain.

So I’m trying to sort out how to manage / automate / execute backups.

edit2: maybe one more dumb question … I want to implement a 3-2-1 backup strategy,
i.e. back up endpoint devices to restic on my LAN, and afterwards back up offsite.

What’s better practice … to back up local endpoints to a local restic repo and then the local restic repo to offsite?
Or, alternatively, to back up data from the local endpoints via 2 routes:

  1. one route to local restic REPO (NAS)
  2. second one to Offsite backend

i.e. 2 schedules have to be executed on the endpoint devices?

Ref:

Thanks

Okay, to be more specific… the backup strategy / architecture from my point of view could look as follows:

The reason why I run NAS_B is that the primary NAS_A contains a lot of media / music / movies (~50 TB of data) … which is too costly to back up to a public cloud (hence my own backup, at a different location within the same city, or a city close by…)

NAS_A (located in my flat): TrueNAS, ZFS Raid6
    -->> replicated via ZFS snapshotting / mirror (!!! exclude backups, i.e. restic / borg repos !!!) -->>
NAS_B (different local geo. location): TrueNAS, ZFS Raid5

Backups (endpoints such as servers, client computers, mobile phones, docker instances)

Linux machines (see the sketch after this list):

  1. via Restic to NAS_A (schedule daily? TBD)
    → backup conf / user data
  2. via Borg to NAS_B (schedule daily? TBD)
    → backup conf / user data
  3. the most important data (few GB / documents / etc) directly to Public cloud via restic
    (schedule daily? TBD)
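Roughly, the three jobs per Linux machine could look like this (hostnames, repo URLs and key files are placeholders):

    #!/bin/sh
    # 1) restic to NAS_A (daily)
    RESTIC_REPOSITORY=sftp:backup@nas-a:/backups/restic/$(hostname) \
    RESTIC_PASSWORD_FILE=/root/.restic-pass \
        restic backup /etc /root /home --tag daily

    # 2) borg to NAS_B (daily)
    export BORG_REPO=ssh://backup@nas-b/backups/borg/$(hostname)
    export BORG_PASSCOMMAND='cat /root/.borg-pass'
    borg create --compression zstd ::'{hostname}-{now}' /etc /root /home

    # 3) the most important few GB straight to the public cloud
    RESTIC_REPOSITORY=s3:s3.amazonaws.com/my-bucket/$(hostname) \
    RESTIC_PASSWORD_FILE=/root/.restic-pass \
        restic backup /home/me/Documents --tag critical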

Windows machines:

  1. via Restic to NAS_A (schedule daily? TBD)
    → backup conf / user data
  2. full / incremental backup via Acronis True Image to NAS_A (replicated via zfs to NAS_B)
    (schedule daily? TBD)
  3. the most important data (few GB / documents / etc) directly to Public cloud via restic
    (schedule daily? TBD)
  4. real-time backup → to NAS_A (can’t recall the tool name)

ESXi

  1. back up Linux or Win machines as per the details above (file level - restic/borg)
  2. snapshotting of VMs to NAS_A (partially mirrored via zfs snapshotting to NAS_B)

Backup to the Public Cloud: Backblaze / AWS
→ back up data (i.e. the encrypted borg / restic repo) from the abovementioned backups (linux/win machines) stored on NAS_A or NAS_B, depending on which repo is smaller (restic vs borg), directly to the public cloud provider with rclone / aws cli?
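For the rclone variant, pushing an already-encrypted repo is just a directory sync (the remote and bucket names are placeholders set up beforehand with rclone config):

    rclone sync /mnt/nas_a/backups/restic-repo b2remote:my-backup-bucket/restic-repo --transfers 8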

I am not sure if that’s too paranoid, or if it will wear out my drives?

I don’t know which is better, but you can create a local restic repo and rsync it to other places afterwards. There are quite a few discussions here about that.
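For example, something along these lines, run after the local backups have finished (host and paths are placeholders):

    # restic/borg repos are already encrypted, so plain rsync over ssh is fine;
    # just don't run it while a backup is still writing to the repo
    rsync -a --delete /srv/restic-repo/ offsite-host:/backups/restic-repo/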

When backing up a whole server, always make sure you know what to do when it’s time to restore. From my experience that will be hard without up-to-date documentation, even with a perfect backup. After all, the point of the whole operation is being able to restore.