How to prevent two restic tasks from running concurrently

Is there a proven way to prevent scheduling overlap with a long-running backup, where a previous backup hasn’t finished yet and a new one starts on the same repository?

restic’s locking system doesn’t seem to prevent this. I want to schedule restic to run every few hours, but if it is already running I don’t want to cause problems, so the new run should simply abort. Once the backup is caught up this shouldn’t be an issue, but when there is a lot to upload offsite, a run can take much longer.

What OS are you using? There is no universal way.

Sorry, Linux (Ubuntu flavor).

Then it is easy.

The basic approach is to run your cron job with locking: linux - using flock with cron - Server Fault
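For example, a crontab entry roughly like this (repository, password file, lock path and schedule are placeholders to adapt, not anything prescribed by restic) makes an overlapping run abort instead of queueing up:

    # Placeholders throughout: repository, password file, lock file and schedule
    RESTIC_REPOSITORY=/srv/restic-repo
    RESTIC_PASSWORD_FILE=/etc/restic/password

    # -n makes flock give up immediately instead of waiting, so a run that
    # would overlap with a still-running backup simply aborts
    0 */3 * * * /usr/bin/flock -n /var/lock/restic-backup.lock /usr/bin/restic backup /home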

But the best way would be to schedule your restic backup task/script using systemd.

You will find tons of examples on the Internet, e.g.: Using Restic with systemd on Linux - Timur Demin’s Blog

Do some reading about systemd - systemd - ArchWiki

It is extremely flexible and can easily meet your requirements.
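As a minimal sketch of the timer approach (unit names, paths and the schedule are made up for illustration, and the environment file is assumed to provide RESTIC_REPOSITORY, RESTIC_PASSWORD_FILE and so on):

    # /etc/systemd/system/restic-backup.service
    [Unit]
    Description=restic backup

    [Service]
    Type=oneshot
    # assumed file providing RESTIC_REPOSITORY, RESTIC_PASSWORD_FILE, ...
    EnvironmentFile=/etc/restic/restic.env
    ExecStart=/usr/bin/restic backup /home

    # /etc/systemd/system/restic-backup.timer
    [Unit]
    Description=Run restic backup every few hours

    [Timer]
    OnCalendar=*-*-* 00/3:00:00
    Persistent=true

    [Install]
    WantedBy=timers.target

Enable it with systemctl enable --now restic-backup.timer. Because the service is Type=oneshot, a timer event that fires while the previous backup is still running will not start a second instance, so the runs never overlap.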


I just want to share our approach to a similar situation.

We once had the same requirement, though not for parallel backups on the same host; instead we had parallel restic forget runs on various hosts that require an exclusive lock on the same repository. We could not use flock or similar tools because they are bound to a single host and do not work across hosts.

We then decided to use Consul locking for this. We already had a Consul instance running that all backed-up hosts could reach via its HTTPS endpoint. The idea was a two-step approach for the restic forget:

  • create a lock on the central Consul instance
  • do the restic forget and release the Consul lock afterwards.

We automated it by setting the following environment variables:

export CONSUL_HTTP_ADDR
export CONSUL_HTTP_TOKEN

The restic forget call then looked like this:

consul lock restic/${service} \
   "restic forget --retry-lock 30m --host ${host} --prune --keep-daily $RETENTION_DAYS"

That’s just one idea of how one could solve similar issues.
