Removing stale locks is successful but they're still there

`restic forget` is failing because of stale locks. Running `restic unlock` before running `restic forget` gives this output:

```
repository <repo> opened successfully, password is correct
successfully removed locks
repository <repo> opened successfully, password is correct
unable to create lock in backend: repository is already locked by PID <pid> on <machine> by root (UID 0, GID 0)
lock was created at 2021-07-05 01:39:15 (758h5m55.133873493s ago)
storage ID <storageid>
the `unlock` command can be used to remove stale locks
```

The repo is an S3 instance at <host>.

This suggestion might be way off-target, but do you have file versioning enabled in your S3 instance, and if so can you run a test with it disabled?

I’ve seen Wasabi S3 instances where files would magically reappear after deletion. In this case it was a bug associated with versioning in the client software (Mountain Duck - the devs rapidly fixed it once found). Maybe you’re experiencing something similar.

edit: yes probably a red herring. At least on Wasabi, locks work fine for me with versioning.

After running `restic unlock`, what does `restic list locks` report? It should list only one lock (itself) unless run with `--no-lock`.
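For reference, the diagnostic sequence looks something like this (the repo URL and credentials below are placeholders, not the poster's actual values):

```shell
# Placeholders: point these at your own repo and credentials.
export RESTIC_REPOSITORY="s3:s3.example.com/my-bucket"
export AWS_ACCESS_KEY_ID="<access-key>"
export AWS_SECRET_ACCESS_KEY="<secret-key>"
export RESTIC_PASSWORD="<repo-password>"

restic unlock       # removes locks restic considers stale
restic list locks   # should now show only the lock held by this command itself
```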

Thanks for the response and sorry for my late reply.

So after running `restic unlock`, `restic list locks` returns a long list of hashes (or one really long hash broken over multiple lines). It is 104 lines long and grows by one line every time I run the command, so it looks like locks are being created but never removed.

I’ve confirmed that it does not use any kind of file versioning.

So what happens if you manually remove those lock files (making sure restic isn’t running at the time, obviously)? Are you just missing delete permission, or is some other S3 policy blocking the action?

Anyway, assuming you can delete the files manually, presumably you’ll at least be able to run `forget` once?
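Restic stores its lock files as objects under the `locks/` prefix in the repository, so they can be inspected and removed directly from the bucket. A sketch with the AWS CLI (bucket name and endpoint are placeholders; make absolutely sure no restic process is running first):

```shell
# List the lock objects restic has left behind.
aws --endpoint-url https://s3.example.com s3 ls s3://my-bucket/locks/

# Delete them all. Only safe while no backup, check, or prune is running.
aws --endpoint-url https://s3.example.com s3 rm s3://my-bucket/locks/ --recursive
```

If the delete fails here too, that points at the bucket credentials or policy rather than at restic.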

Okay, it looks like the S3 credentials I had generated with Storj did not have delete permission. Creating new credentials with full permissions fixed the problem: I was able to remove the hundreds of stale locks and run `forget` on the repo.
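For anyone hitting the same thing: a quick way to check whether a set of S3 credentials can actually delete objects is to write a throwaway object and then remove it. The endpoint, bucket, and key below are placeholders (the endpoint shown is Storj's S3-compatible gateway):

```shell
# Create a throwaway object with the credentials under test.
aws --endpoint-url https://gateway.storjshare.io s3api put-object \
    --bucket my-bucket --key permission-test

# Try to remove it. An AccessDenied error here means the credentials
# lack delete permission, which is exactly what breaks restic's lock cleanup.
aws --endpoint-url https://gateway.storjshare.io s3api delete-object \
    --bucket my-bucket --key permission-test
```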