I have a large repository (several hundred GB). I can mount and use it just fine, but when I unmount it, or run other restic commands (e.g., unlock or check), I get a page of these errors:
Remove(<lock/a7a08e1260>) returned error, retrying after 2.441779286s: file does not exist
It prints this error around 10 times for each lock ID, and after a few minutes it aborts.
There is a huge number of locks in the locks directory. Can I delete these lock files manually?
To unmount, I press Ctrl-C, but since the repository is large this takes a while, so I press Ctrl-C again to abort it.
That could be how the locks accumulated.
Removing all the locks manually probably won't help, because the error seems to be that restic can't find the right lock. If I don't abort the process, the last message is:
If nothing else is going on in the repo, you can manually delete the locks there.
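For reference, restic can also drop locks itself via the unlock command. A minimal sketch, assuming a repository like the one in the original post (the host and path here are placeholders, not the poster's actual repo):

```shell
# Remove locks that restic itself considers stale:
restic -r sftp:user@host:/srv/restic-repo unlock

# Only if you are certain no other restic process is using the repo,
# remove ALL locks, including ones that look current:
restic -r sftp:user@host:/srv/restic-repo unlock --remove-all
```

Either way, make sure no backup, prune, or mount is running against the repo first.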
Just for good measure, can you run restic cache --cleanup? It's probably not the cause, but it won't harm.
Also, what is your restic version? There were some lock-related improvements somewhere around 0.11 if I remember correctly, so you might want to upgrade.
I removed the (large number of) locks manually and ran
restic cache --cleanup
Mounting is now faster (probably because restic no longer has to try all those locks). However, the issue remains, with these messages:
Remove(<lock/1050db6d1c>) returned error, retrying after 7.649837917s: file does not exist
…
error in cleanup handler: file does not exist
It seems that after unmounting (with Ctrl-C or umount), the locks cannot be read or removed.
The repository is on a server and mounted on a client machine. The problem occurs if I mount the remote repository with:
restic mount -r sftp:XXX
But if I mount the server with:
sshfs -o allow_other XXX
the problem does not happen.
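For anyone reproducing this, the sshfs workaround described above looks roughly like the following; the host, repo path, and mountpoints are placeholders for the redacted XXX:

```shell
# Mount the remote directory containing the repository as a local path:
sshfs -o allow_other user@host:/srv/restic-repo /mnt/repo

# Point restic at the now-local path instead of the sftp backend:
restic -r /mnt/repo mount /mnt/restic-view

# After pressing Ctrl-C in restic, cleanly detach both FUSE mounts:
fusermount -u /mnt/restic-view
fusermount -u /mnt/repo
```

With this setup, all repository I/O (including lock removal) goes through sshfs rather than restic's own sftp backend.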
Could it be a permission issue (restic somehow does not have permission to remove, or even read, the locks)? Is there a flag for restic similar to sshfs's allow_other?
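One way to test the permission theory, assuming shell access to the server (the host and repo path are placeholders for the redacted XXX):

```shell
# Check who owns the locks directory and the lock files,
# and whether the sftp user has write access to them:
ssh user@host 'ls -ld /srv/restic-repo/locks && ls -l /srv/restic-repo/locks'

# Compare the owner/mode shown above with the user restic
# connects as in the sftp: repository URL.
```

If the lock files are owned by a different user (or were created with a restrictive umask), the sftp backend would be unable to delete them, while an sshfs mount with different options might mask the problem.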