Questions about using Restic in enterprise

Hey!

I just discovered Restic and I am slowly starting to use it to understand how it works.

I work in a big company (~500 end users) with macOS, Linux, and Windows clients, and I need to find a backup solution that encrypts before sending, is cross-platform, open source, and actively supported.

So far, Restic does all these things perfectly! But I’d love to get some feedback from any of you who are using it professionally.

Since I also have to back up clients that can’t use a VPN, I opted for SFTP, because SMB/AFP over the Internet didn’t seem very secure…
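For context, this is roughly what I’m trying on a test client with the SFTP backend (the host, path, and password-file location are just placeholders, not my real setup):

```
# Hypothetical host, path and user, just to illustrate the sftp backend syntax.
export RESTIC_REPOSITORY="sftp:backup@synology.example.com:/volume1/restic/client01"
export RESTIC_PASSWORD_FILE="/etc/restic/password"   # repository encryption password

# One-time repository initialisation, then regular backups:
restic init
restic backup /home /etc --exclude-caches
```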

Everything is backed up to a Synology server in the cloud. Do you think this setup holds up? The hardest part is on the client side: I will have to automate the creation of SSH key pairs, push them to the server, and so on…
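The kind of thing I’d have to automate per client looks roughly like this (hostnames, usernames, and key names are placeholders; Windows clients would need a different approach, since ssh-copy-id isn’t available there):

```
# Run once per client (names and paths are made up):
ssh-keygen -t ed25519 -N "" -f ~/.ssh/restic_backup -C "restic-$(hostname)"

# Push the public key to the backup account on the Synology box.
# ssh-copy-id needs one interactive login; after that, backups are passwordless.
ssh-copy-id -i ~/.ssh/restic_backup.pub backup@synology.example.com

# restic's sftp backend goes through the system ssh, so ssh_config decides which key it uses:
cat >> ~/.ssh/config <<'EOF'
Host synology.example.com
    User backup
    IdentityFile ~/.ssh/restic_backup
EOF
```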

Another question is about restoration: is it really that easy? Following the docs, I get the impression that it is.

Many thanks in advance, and sorry if this is not the right category/if my questions are too vague :’)

FWIW, I use restic to back up some tens of clients, mostly macOS ones but also a few Windows ones. I back them up to a rest-server instance that uses Let’s Encrypt for its certificates. I don’t see much point in using SFTP when you can use rest-server as the backend.
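For the original poster, here is a rough sketch of what such a setup can look like (the paths, port, hostname, and credentials are made up for illustration, not my actual config):

```
# Server side:
rest-server \
  --path /srv/restic \
  --listen :8000 \
  --tls \
  --tls-cert /etc/letsencrypt/live/backup.example.com/fullchain.pem \
  --tls-key /etc/letsencrypt/live/backup.example.com/privkey.pem \
  --private-repos

# Client side; the credentials come from a .htpasswd file in the --path directory:
restic -r rest:https://client01:secret@backup.example.com:8000/client01 backup ~/Documents
```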

Generally speaking, it’s all been working very well. The only issue I’ve had was a while back, when some backups were corrupted in certain circumstances, but that was eventually tracked down to a bug relating to atomic writes or similar, and fixed by our hero Michael Eischer. After that fix, I haven’t had a single corrupt repository or any similar problems, and this is with clients who come and go and where backup runs are interrupted by hibernation, network loss, crappy connections, etc. In short, it just works, and I feel confident that all these backups are intact and ready to be used when I need them.

I’ve had to restore a few times. Sometimes I do it as part of migrating from an old computer to a new one. Other times I have to retrieve some specific file(s) that were accidentally deleted or whatnot. It’s always worked fine (I use the restore command to do it).

When you do use the restore command, I recommend using the --verify option, unless you’re short on time or something. That way you know for sure that your data was restored properly.
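For example, something along these lines (the snapshot ID and paths are just illustrative):

```
# Restore the latest snapshot of a given path and re-read everything afterwards:
restic restore latest --target /tmp/restored --path /home/alice/Documents --verify

# Or pull back just the files you need from a specific snapshot:
restic restore 4bba301e --target /tmp/restored --include /home/alice/Documents/report.odt
```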

I also regularly massage the backup repositories by running forget, prune, and check --read-data. Again, they’re always fine nowadays; it’s rock solid.
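A typical maintenance pass might look like this (the retention policy is just an example, not a recommendation):

```
# Apply a retention policy and remove the data that falls out of it:
restic forget --keep-hourly 24 --keep-daily 7 --keep-weekly 5 --keep-monthly 12 --prune

# Periodically re-read all pack files to catch corruption on the storage side:
restic check --read-data

# On large repositories you can spread the full read over several runs:
restic check --read-data-subset=1/5
```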

The backup runs are scheduled once every hour between, e.g., 7 AM and 2 AM, so quite a number of times during the day. There are thousands of snapshots in each repository, but that’s not a problem.
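On a Linux client that schedule could be expressed with a cron entry like the one below (macOS and Windows need launchd and the Task Scheduler respectively; the wrapper script name is made up):

```
# /etc/cron.d/restic-backup -- assumes a hypothetical wrapper script that exports
# RESTIC_REPOSITORY and RESTIC_PASSWORD_FILE before calling restic backup.
15 7-23,0-2 * * *  root  /usr/local/bin/restic-backup.sh
```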

For monitoring I keep it simple and just check the timestamp of the most recent file in the snapshots folder on the REST-server.
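A minimal version of that check, assuming the repositories live under rest-server’s data path (the repo path below is made up):

```
#!/bin/sh
# Warn if no new snapshot file has appeared in the last 24 hours.
repo=/srv/restic/client01
if [ -z "$(find "$repo/snapshots" -type f -mtime -1)" ]; then
    echo "WARNING: no new snapshot for $repo in the last 24 hours"
fi
```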


To add to what rawtaz said about restores: I’ve also found mount very useful indeed. Once mounted, one can explore a repo by snapshot ID, snapshot timestamp, and hostname. Very handy for comparing backups to find when a particular file was changed.
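For example (the mount point is just an example, and the repository/password are assumed to be set in the environment; mount needs FUSE on Linux or macFUSE on macOS, and isn’t available on Windows):

```
# Keeps running until you unmount or hit Ctrl-C, so browse from another terminal:
restic mount /mnt/restic

ls /mnt/restic/snapshots/      # by timestamp (plus a "latest" entry)
ls /mnt/restic/ids/            # by snapshot ID
ls /mnt/restic/hosts/          # by hostname, then by timestamp
```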

Thanks for your feedback!

For the mounting: in my case I use SFTP, so on Windows (and maybe on macOS, not sure) I won’t be able to “just mount” things, but it’s good to know.

Thanks again, folks!