How to back up a server without running restic directly on it?


#1

I have searched and read a lot of threads, but I still cannot figure out how to use restic from a central backup server that pulls data in from multiple target servers. Previously, I had a backup server that pulled in files via rsync and used hard-link copies for versioning. That setup only required rsync on the source server and very little CPU time. Restic provides so much more, but running it on the source machine requires additional disk space (for the cache), CPU time, and sometimes more memory than the VM has to offer.

I’m now trying to back up VMs running in the cloud to B2 (or any cloud file storage). I don’t have an rsync target anymore, and I really don’t want restic credentials stored on those VMs anyway. Basically, I’m trying to offer backups for customer VMs without needing a large intermediate file store for restic to run on, without forcing customers to upgrade their disk or RAM, and so on.

Is there any good way to run restic like this?

  1. A central backup server runs restic.
  2. The backup server accesses the customer VM; the customer VM cannot access the backup server.
  3. The backup server holds the restic credentials and runs the restic commands. Data flows only between the cloud storage service and the backup server. The restic cache is stored on the backup server (no penalty to the VM).
  4. The customer VM doesn’t know the backup credentials: it never sees the restic repository credentials and never runs restic at all.

I feel like I’m hunting for a feature where restic can speak the rsync protocol, but all the examples I find have restic running on the source machine itself. Is there no way around that?


#2

One potential solution is to use something like sshfs or rclone mount to mount a view of the remote machine on the local system, then back that view up with restic.
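
A rough sketch of what that could look like, assuming the backup server already has key-based SSH access to the VM (the hostname, bucket name, and paths below are made up):

    # Mount the customer VM read-only on the backup server via sshfs.
    sshfs -o ro backup@customer-vm:/ /mnt/customer-vm

    # Credentials and cache live only on the backup server.
    export B2_ACCOUNT_ID=...
    export B2_ACCOUNT_KEY=...
    export RESTIC_PASSWORD_FILE=/root/.restic-password

    # Back up the mounted view straight into the B2 repository.
    restic -r b2:example-bucket:customer-vm backup /mnt/customer-vm

    fusermount -u /mnt/customer-vm

rclone mount with a configured sftp remote should work much the same way.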


#3

Another possible solution would be “reverse ssh” as described here. I don’t know how effective that would be. I would go with @cdhowie’s answer, because right now there is no direct way in restic to do this.


#4

I haven’t tried that, but I have considered it. I ran across this bug report and thought there might be a more supported way:

Very slow backup of SSHFS filesystems – need ability to skip inode-based comparison


#5

With reverse ssh, it looks like they are still running restic on the client side, though?


#6

It looks like a FUSE mount with the noforget option might work, but it also sounds like that option’s memory use grows linearly with the number of files restic accesses over the mount.
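
If I understand it correctly, the idea would be something along these lines (hostname, bucket, and paths are placeholders); noforget stops sshfs from recycling inode numbers, at the cost of caching every inode for the lifetime of the mount:

    # Keep cached inodes for the life of the mount so that restic's
    # inode-based change detection sees stable inode numbers.
    sshfs -o ro,noforget backup@customer-vm:/ /mnt/customer-vm
    restic -r b2:example-bucket:customer-vm backup /mnt/customer-vm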

I’ll give it a try later and report back. I expect the backup server will always have enough memory to complete the backup.

Has anyone ever tried using restic on an NFS mount? Those behave more like regular file systems. Maybe that would work better?


#7

I’m running restic to back up content over an NFS mount (into a repository on another NFS mount on the same NAS). It isn’t as fast as local disk, but it works well enough.
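
In case it helps, the setup is essentially this (the hostname and export names below are made up):

    # Source data and restic repository are both NFS exports on the NAS.
    mount -t nfs nas.local:/volume1/data   /mnt/nas-data
    mount -t nfs nas.local:/volume1/backup /mnt/nas-backup

    # Back up one NFS mount into a repository stored on the other.
    restic -r /mnt/nas-backup/restic-repo backup /mnt/nas-data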


#8

I also use restic to back up files from a NAS over an NFS mount, and it works well. In my case, I’m backing up to an S3 repo, and my network connection to S3 is the bottleneck, so I don’t even notice a decrease in speed.
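
Roughly, the configuration looks like this (bucket name, mount point, and paths are examples, not my real values):

    # S3 credentials and repository settings live only on the machine running restic.
    export AWS_ACCESS_KEY_ID=...
    export AWS_SECRET_ACCESS_KEY=...
    export RESTIC_REPOSITORY=s3:s3.amazonaws.com/example-backup-bucket
    export RESTIC_PASSWORD_FILE=/root/.restic-password

    # The source files are an NFS mount from the NAS.
    restic backup /mnt/nas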


#9

I haven’t tried it myself. I just read about it a while ago and thought it could be useful in your case.

As other users said, restic works over an NFS mount too. Personally, I use Gigolo and mount via SFTP, and it is pretty fast: a 5 GiB backup takes about 17 minutes. The server has 8 GiB of RAM and restic uses about 103 MiB. Of course, it will take more time and RAM with more data, but you won’t need any extra steps on the VMs: just set up the SSH keys and that’s it. No restic or any additional processes on the VMs.
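
The per-VM preparation is just the usual key setup, something along these lines (the key path, username, and hostname are placeholders):

    # One-time setup on the backup server: create a key and install it on the VM.
    ssh-keygen -t ed25519 -f ~/.ssh/backup_key -N ''
    ssh-copy-id -i ~/.ssh/backup_key.pub backup@customer-vm

    # After that, the VM can be mounted over sftp/sshfs from the backup
    # server without installing anything on the VM itself.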

In this particular test, the backup directory had a lot of small files and a couple of big files. The big files are .iso images, and they were what “slowed down” restic a little bit. This is the output:

open repository
repository b9349ed1 opened successfully, password is correct

Files:        2043 new,     0 changed,     0 unmodified
Dirs:            0 new,     0 changed,     0 unmodified
Added to the repo: 5.004 GiB

processed 2043 files, 5.307 GiB in 16:59
snapshot e4bada97 saved