I ran a backup while a prune was running on another machine, so the fact that there is a lock is expected. However, the script that runs my backup didn’t actually fail, even though I use set -e.
I manually ran the restic backup command and did echo $? afterwards, and got 0. I would expect it to be non-zero in this case.
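For context, here is a minimal sketch of the kind of wrapper script I mean (the exact script differs; paths are just placeholders):

#!/bin/sh
set -e   # abort the script if any command exits non-zero

restic --cache-dir=/storage/.cache/restic backup /storage/data

# With set -e, this line is only reached if the backup exited with 0,
# so a locked repository should have stopped the script before this point.
echo "backup finished"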
The reason I’m posting this here instead of creating an issue on GitHub is that I’m not entirely sure I’m right. This is the command I ran and its output:
$ restic --cache-dir=/storage/.cache/restic backup /storage/data
List(lock) returned error, retrying after 641.63144ms: repository is already locked by PID 27361 on server by user (UID 1000, GID 1000)
lock was created at 2018-02-22 16:09:04 (9h51m6.693228358s ago)
storage ID 233223ca
List(lock) returned error, retrying after 412.551319ms: repository is already locked by PID 27361 on server by user (UID 1000, GID 1000)
lock was created at 2018-02-22 16:09:04 (9h51m11.396380504s ago)
storage ID 233223ca
using parent snapshot bd16c963
scan [/storage/data]
[0:01] 1463 directories, 10561 files, 291.598 GiB
scanned 1463 directories, 10561 files in 0:01
[8:29] 100.00% 291.598 GiB / 291.598 GiB 12024 / 12024 items 0 errors ETA 0:00
duration: 8:29
snapshot 00f0b061 saved
Is the snapshot listed when you issue restic snapshots? I think this is only a warning that restic tried to create a lock but was unable to do so. The snapshot should have completed nonetheless.
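For example, something like this should show it if a snapshot was indeed written (the repository path is just a placeholder):

$ restic -r /srv/restic-repo snapshots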
Uh, wow, congratulations: you’ve found a bug! I can reproduce it. In your case, restic will add a new snapshot despite the exclusive lock on the repo; this needs to be fixed ASAP.
When an operation like prune or forget is run, the repo is locked exclusively. This means that other restic processes are not allowed to modify or add data in the repo while that operation runs. Other operations like backup also create a lock, but a non-exclusive one, so several backup operations can run in parallel; while a backup is running, however, prune or forget is not allowed to run.
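As a rough illustration of what that locking is meant to guarantee (repository and data paths are placeholders, and the exact messages will differ):

# shell 1: prune takes an exclusive lock on the repository
$ restic -r /srv/restic-repo prune

# shell 2: while prune still holds its lock, a backup must not add data;
# it should keep waiting on the lock or give up with a non-zero exit code
$ restic -r /srv/restic-repo backup /storage/data
$ echo $?   # expected: non-zero if the lock could not be acquired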
It’s a bug which may have severe consequences in the (hopefully rare) case that, during a running restic prune operation, a new backup is written which references data that prune is going to remove. The bug will be addressed in PR #1638, and I’ll probably prepare a new release which fixes it shortly.