Greetings!
Restic currently fails for me with 82 stale locks. Since I need to run a backup but still would like to investigate how these locks appear, I tried the following:
root@mysql-backup:~/locks# restic --no-lock cat lock fc3da18450f46bad9e3bc45c5ca262196d09d1f91a62972224c86fdf800f6555
repository f2e5bf7e opened successfully, password is correct
unable to create lock in backend: repository is already locked exclusively by PID 32298 on mysql-backup by root (UID 0, GID 0)
lock was created at 2021-02-08 19:18:53 (16h57m49.201173616s ago)
storage ID 0025539a
the `unlock` command can be used to remove stale locks
Aside from that, any ideas why there are so many stale locks from yesterday that restic cannot remove by itself? The docs state that restic can automatically remove stale locks when it is running on the same host and the PID of the locking process no longer exists. Apparently that’s not entirely true?
Is it safe to run restic {backup,prune,forget} with --no-lock (the documentation states that locks are used mainly for performance), and if so, what are the implications?
Also, how can I find the offending exclusive lock and delete only that?
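For that last question, I was considering something along these lines to enumerate all locks and spot the exclusive one. This is just a sketch: it assumes `restic cat lock <id>` prints the lock’s JSON (which, as I understand the repository format, carries an `"exclusive":true` field only for exclusive locks), and that `--no-lock` is actually honored here, which it wasn’t in my `cat lock` attempt above:

```shell
# Hypothetical sketch: list every lock ID, then print the IDs whose
# metadata marks them as exclusive. Assumes `cat lock` emits the lock
# JSON and that --no-lock lets these read-only commands run despite
# the existing locks.
for id in $(restic --no-lock list locks); do
    if restic --no-lock cat lock "$id" | grep -q '"exclusive": *true'; then
        echo "exclusive lock: $id"
    fi
done
```

Since I don’t see a per-lock delete command in restic itself, I’m guessing the only way to remove just that one lock would be to delete the corresponding object under `locks/` directly in the backend (i.e. via b2) — is that safe?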
In case it matters, the backups are stored on Backblaze via the b2 API.