Backing up whole linux server?


I have a Linux (Ubuntu 18.04) server on DigitalOcean (as a droplet). I would like to back up the whole server.

I had a look at this thread: though I’m still a little confused.

As it’s a live server, when backing up, how does it deal with files changing, and databases?

Basically, what’s the failsafe restic method of backing up a whole server?

Thanks for any advice

Shut the server down so that the file system is not modified any more, and then make a backup. That’s the only truly fail-safe method :slight_smile:

Apart from that, restic may back up files while they are being modified, and depending on the application the result may be invalid when restored.

Users have reported that taking a snapshot of the file system (e.g. built into ZFS, or using LVM or similar) also works. Sometimes it’s enough to save the file system and then make a separate backup of the databases by dumping them, like mysqldump | restic backup --stdin --stdin-filename foo.sql, which gives you a consistent snapshot.
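To make the dump-and-pipe idea concrete, here is a minimal sketch. The repository location and password file are assumptions, not anything specific to your droplet; adjust them to your setup.

```shell
# Repository and credentials are placeholders -- change to match your setup.
export RESTIC_REPOSITORY=sftp:backup@backup-host:/srv/restic-repo
export RESTIC_PASSWORD_FILE=/root/.restic-password

# --single-transaction gives a consistent dump of InnoDB tables
# without locking the database for the duration of the dump.
# The dump is streamed straight into restic, so it never touches local disk.
mysqldump --single-transaction --all-databases \
  | restic backup --stdin --stdin-filename all-databases.sql
```

Because the dump goes through stdin, restic stores it as a single file (`all-databases.sql`) in the snapshot, which you can restore and feed back into `mysql` later.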

Thanks for the reply. I think I’d go for backing up dirs (with exclusions) and separate dump of databases etc.
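That approach might look roughly like this; the exclude list and paths here are only guesses for a typical droplet, so tailor them to what actually lives on your server:

```shell
# Back up the root filesystem, skipping pseudo-filesystems and volatile data.
# /var/lib/mysql is excluded because the database is backed up separately
# via a consistent dump (see the mysqldump example above).
restic backup / \
  --exclude /proc --exclude /sys --exclude /dev --exclude /run \
  --exclude /tmp --exclude /var/cache \
  --exclude /var/lib/mysql
```

You can also collect the patterns in a file and pass `--exclude-file` instead, which is easier to maintain as the list grows.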

Err…dumb newbie question, but how would restic run if the server is shut-down? (Did I miss the joke??)

You would need to access the drive from another system that’s online. Typically this is what imaging software does. Taking an image (or a backup) while files are being modified is like taking a blurry picture: the long exposure causes the image to be inconsistent depending on the time each pixel is captured during the exposure. But if the image isn’t moving, then it’s perfectly consistent.


That’s a beautiful analogy!


I use dattobd and a snapshot script from urbackup.
Then, I use this kind of script:
I’ve been able to backup and restore entire servers without any trouble

Thanks. That looks really interesting, though I’m not clear why urbackup is required. Can’t dattobd create the image and snapshot, and then restic copy it across to the backend storage?

Absolutely. Urbackup is not required at all, I just used its snapshot scripts (I used urbackup before):

I’ve looked into this. My linux/bash knowledge is still pretty basic so I’m not clear how the code works and I’d rather not rely on a script I can’t properly understand or modify.

Also, I realized that dattobd needs another volume to create the image. Since I’m on a DigitalOcean droplet (which is more or less a VM), that means paying for extra storage. I suppose I could temporarily attach storage on DigitalOcean and then delete it (as it’s charged by the hour), though that wouldn’t be very efficient, as I’d have to create a new image every time, if I understand the process correctly.

But thanks @draga79 for highlighting this solution, even though I don’t think I’m in the position to use it yet.

Actually, it doesn’t need another volume. It creates a (virtual) volume that holds the snapshot, mounts it, backs it up, then deletes it. No further volumes are needed.

Hi, I thought dattobd worked by (1) creating an image, and then (2) creating a file with only the incremental changes. On further changes, the changes in (2) are incorporated back into an updated (1). So there’s always an image and an incremental file saved on the system, and these need storage space? Or have I misunderstood?

Yes, that’s correct. But they’re created on the local (snapshotted) file system, so you need some additional free storage, not an external volume. Just some space on the local, snapshotted device.
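For what it’s worth, the basic flow can be sketched like this. The device name, copy-on-write file path, and minor number are assumptions; check the dattobd README for the exact `dbdctl` usage on your system.

```shell
# Create a snapshot of the root device. The copy-on-write file lives on
# the snapshotted filesystem itself, so only free space there is needed.
dbdctl setup-snapshot /dev/vda1 /.datto_cow 0

# Mount the resulting snapshot device read-only and back it up with restic.
mkdir -p /mnt/snapshot
mount -o ro /dev/datto0 /mnt/snapshot
restic backup /mnt/snapshot

# Clean up: unmount and destroy the snapshot.
umount /mnt/snapshot
dbdctl destroy 0
```

Since the snapshot is read-only and frozen at creation time, restic sees a consistent view of the filesystem even while the live system keeps writing.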

Thanks for the further info. How would it work if, say, I have a server with 50 GB of disk space and 35 GB of files stored on it? Would there be enough space for the image/local snapshot file?

I think it should. The image is quite small and will contain only deltas. I’m using it with less free space than that. You can try and see what happens.