Rest-server with letsencrypt?

So far we have used restic with our own Netapp S3 storage.
We want to switch to rest-server because it offers an append-only mode and lets us use cheaper storage.

S3 already comes with HTTPS, but for rest-server we need our own TLS certificate.
Is there any documentation on how to use letsencrypt with rest-server?
I have already searched for it but found nothing.

How to use HTTPS is explained on the rest-server GitHub page:

By default the server uses HTTP protocol. This is not very secure since with Basic Authentication, user name and passwords will be sent in clear text in every request. In order to enable TLS support just add the --tls argument and add a private and public key at the root of your persistence directory. You may also specify private and public keys by --tls-cert and --tls-key.

So use letsencrypt to generate your certs and point rest-server at the files. You will find all the details on how to use letsencrypt here:
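For example, a minimal sketch pointing rest-server at certbot's output (the domain is a placeholder; /etc/letsencrypt/live/… is where certbot puts its files by default, and that directory is readable by root only, so you may need to copy the files or adjust permissions):

rest-server --path /srv/restic --tls \
  --tls-cert /etc/letsencrypt/live/backup.example.com/fullchain.pem \
  --tls-key /etc/letsencrypt/live/backup.example.com/privkey.pem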

I was already on certbot.eff.org but found no documentation on how to use certbot together with rest-server.
“My HTTP website is running …” (rest-server is missing from the list)

Well, you can't expect them to list every piece of software that uses HTTPS :)

Choose:

“My HTTP website is running …Other”

This will explain how to generate the cert files; then you can use them with rest-server. You will probably need some web server anyway, for letsencrypt to validate your domain, so another option is to let your web server obtain the certs and reuse them.
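If there is no web server on the host, certbot's standalone mode can answer the HTTP challenge by itself - a sketch with a placeholder domain (certbot temporarily binds port 80, so that port must be free and reachable from the internet):

certbot certonly --standalone -d backup.example.com

The resulting files land in /etc/letsencrypt/live/backup.example.com/.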

So, rest-server alone is not able to validate my domain with/for letsencrypt?
I should run (for example) nginx on port 80 together with certbot and then link the cert files into the rest-server root directory?
As far as I have understood the documentation, restic uses port 8000 to communicate with the rest-server, while certbot uses port 80 for letsencrypt.

Pretty much, yes. How you obtain the certs is up to you; rest-server does not have any mechanism to facilitate it. Given how trivial it is with any out-of-the-box Linux distro, IMO it would not be wise to incorporate such functionality into rest-server - and then have to maintain it and worry about its security and interoperability.
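A hedged sketch of the nginx-plus-certbot flow discussed above (domain, webroot and service name are placeholders), assuming nginx already serves /var/www/html on port 80:

certbot certonly --webroot -w /var/www/html -d backup.example.com \
  --deploy-hook "systemctl restart rest-server"

The --deploy-hook runs after every successful issuance or renewal, so rest-server picks up the renewed files without manual intervention.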


Alternatively, you might put your backup server behind a firewall, have it initiate the backup via ssh, and use a reverse ssh tunnel (created only for the duration of the backup) to encrypt the whole thing, like so:

ssh -R 1337:127.0.0.1:8000 user@host-to-be-backed-up "/usr/local/bin/restic -r rest:http://user:pass@localhost:1337/repo-name backup /path-to-be-backed-up --no-scan --password-command='echo $RESTIC_PASSWORD'"

All you need in this scenario is the backup server's ssh pubkey in the authorized_keys file on the machine to be backed up, and the restic repo password also stays on the backup server (e.g. in the backup script). rest-server runs in append-only mode, and there is no way for the client machine to even reach the backup server.
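For completeness, a sketch of the matching server side (the path is a placeholder): bind rest-server to loopback so it is reachable only through the tunnel, and enable append-only mode:

rest-server --path /srv/restic --listen 127.0.0.1:8000 --append-only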

Great idea, but our client hosts are behind a firewall, too, or use NAT.
The backup server cannot connect to them; the clients have to connect to the backup server.
They have to initiate the backup.
Besides this, I (the backup server admin) do not have an account on the clients and will never get one.
The administration situation is complex - like Germany in the Middle Ages, with hundreds of principalities.


Okay, then maybe use a forward tunnel to the server? This way all you have to do is give them a command they should use, no other setup (other than installing restic, of course).
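A hedged example of such a forward tunnel, run on the client (user, host, ports and repo name are placeholders): the client forwards a local port to the rest-server port on the backup server and points restic at it:

ssh -N -L 1337:127.0.0.1:8000 tunnel-user@backup-server &
restic -r rest:http://user:pass@localhost:1337/repo-name backup /path-to-be-backed-up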

Some of the clients run Windows and I do not know how to use ssh there (together with restic).
I doubt this is possible at all.
The world would be so much easier without Windows …


Depending on the version of Windows, ssh does work (recent Windows 10 and 11 ship an OpenSSH client out of the box). I was surprised to find that out when I was forced to use a Windows machine the other day for some remote work in a client’s network.

But anyway, it was just an idea, as I have been using this for a while now and find it very reliable. I like that the backup server “pulls” the backups via ssh, which is something restic can’t do (yet?) without a hack like that.

I have been using WireGuard for such on-demand tunnelling. Windows or not, it is easily controllable by scripting. A split tunnel, routing only the traffic to rest-server (or similar), is my solution.
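A minimal client-side sketch of such a split-tunnel config (all keys, addresses and the endpoint are placeholders): AllowedIPs is restricted to the rest-server's tunnel address, so only backup traffic goes through WireGuard:

[Interface]
PrivateKey = <client-private-key>
Address = 10.10.0.2/32

[Peer]
PublicKey = <server-public-key>
Endpoint = backup-server.example.com:51820
AllowedIPs = 10.10.0.1/32

On Linux you can script it with wg-quick up ./restic-tunnel.conf before the backup and wg-quick down ./restic-tunnel.conf afterwards.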

You could also use DNS challenges with Let's Encrypt. Those do not require specific ports and even allow you to provision TLS for services that are not exposed to the internet.
My rest-server runs behind my firewall and still gets valid certs, as I manage a public domain at Hetzner and use acme.sh to get certificates from Let's Encrypt.

The DNS records for the rest-server are only available on my local Pi-hole DNS server; the challenge records are published to the Hetzner DNS zone so that Let's Encrypt can validate them.

I still use the reverse proxy of my Synology NAS (nginx) to provide TLS for the rest-server, but more out of convenience than necessity (only one place to manage certs for multiple containers). You could totally get a cert via the DNS challenge using acme.sh and provide it to the rest-server via command flags.
If you don't have a supported DNS provider, you could even create an account at Hetzner and point the NS records of your domain at them. That is what I did: the domain was ordered at Strato and I first used the manual DNS challenge approach, but having API access is way nicer.
Strato does not offer an API, so I needed to delegate to another provider.
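Roughly like this (token, domain, file paths and service name are placeholders; acme.sh ships a dns_hetzner hook which, as far as I recall, reads the API token from HETZNER_Token):

export HETZNER_Token="<api-token>"
acme.sh --issue --dns dns_hetzner -d rest.example.com
acme.sh --install-cert -d rest.example.com \
  --key-file /etc/rest-server/tls.key \
  --fullchain-file /etc/rest-server/tls.crt \
  --reloadcmd "systemctl restart rest-server"

rest-server --path /srv/restic --tls \
  --tls-cert /etc/rest-server/tls.crt --tls-key /etc/rest-server/tls.key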

DNS challenges are what I use for all my internal services (rest-server among others).
You could have a look at lego to automate that.
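A hedged lego equivalent (email, provider and domain are placeholders; the provider's API token goes into whatever environment variable the respective lego DNS plugin expects):

lego --email admin@example.com --dns hetzner --domains rest.example.com run

By default lego writes the key and certificate under ./.lego/certificates/, from where you can point rest-server's --tls-key and --tls-cert at them.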