Restic with wasabi repository - Issues after init and first backup

Greetings,

I am fairly new to restic, and while I succeeded with the installation and a first test backup, I am running into issues after that.

Use case:
Backing up local and mounted folders to Wasabi S3.
Using a Linux machine running Ubuntu 18.1

For the successful part:

~$ restic version
restic 0.9.4 compiled with go1.11.4 on linux/amd64 
$ chmod 400 ~/.restic.env
$ echo $RESTIC_REPOSITORY
s3:https://s3.wasabisys.com/xxxxx
~$ echo "Test 2" > test2.txt
$ restic init
created restic repository 813decaa8e at s3:https://s3.wasabisys.com/xxxxx
$ restic backup test2.txt
open repository
repository 813xxx opened successfully, password is correct
created new cache in /home/xxxx/.cache/restic
Files:           1 new,     0 changed,     0 unmodified
Dirs:            0 new,     0 changed,     0 unmodified
Added to the repo: 393 B
processed 1 files, 7 B in 0:01
snapshot 3c58420a saved

Now for the issues, which are about “lock/…” keys not being found:

~$ restic snapshots
repository 813xxx opened successfully, password is correct
Load(<lock/c18250de40>, 0, 0) returned error, retrying after 508.094172ms: The specified key does not exist.
Load(<lock/c18250de40>, 0, 0) returned error, retrying after 1.380968351s: The specified key does not exist.
Load(<lock/c18250de40>, 0, 0) returned error, retrying after 1.743768923s: The specified key does not exist.
Load(<lock/c18250de40>, 0, 0) returned error, retrying after 3.199427394s: The specified key does not exist.
Load(<lock/c18250de40>, 0, 0) returned error, retrying after 4.455233397s: The specified key does not exist.
signal interrupt received, cleaning up
:~/.cache/restic$ sudo -E -u xxxx -- /usr/local/bin/restic list locks
repository 813xxx opened successfully, password is correct
Load(<lock/c18250de40>, 0, 0) returned error, retrying after 502.997153ms: The specified key does not exist.
Load(<lock/c18250de40>, 0, 0) returned error, retrying after 793.726177ms: The specified key does not exist.
signal interrupt received, cleaning up
:~/.cache/restic$ sudo -E -u xxxx -- /usr/local/bin/restic unlock
repository 813xxx opened successfully, password is correct
Load(<lock/c18250de40>, 0, 0) returned error, retrying after 486.5972ms: The specified key does not exist.
signal interrupt received, cleaning up

Now this works

~/.cache/restic$ sudo -E -u xxxx -- /usr/local/bin/restic list locks --no-lock
repository 813xxx opened successfully, password is correct
c18250de40ee5047ada6352d835ce14a142d2f9ce442427757c30d1352d59d07

But problems continue

$ restic snapshots
repository 813decaa opened successfully, password is correct
Load(<lock/c18250de40>, 0, 0) returned error, retrying after 383.637736ms: The specified key does not exist.
signal interrupt received, cleaning up

I kept trying other things that did not work

$ restic cache --cleanup
no old cache dirs found
~$ restic backup test2.txt
open repository
repository 813xxx opened successfully, password is correct
Load(<lock/c18250de40>, 0, 0) returned error, retrying after 488.764295ms: The specified key does not exist.
signal interrupt received, cleaning up

I noticed that the repository contains a directory named locks which holds the very file that cannot be located. The error message mentions lock/…, without the final “s”. I do not know whether this could be the issue, but in any case I do not know how to continue from here.

Thank you for any guidance.

As an update, I tried to modify the IAM policy following Append-only mode with S3 (Wasabi)
I ended up with:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::xxx"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::xxx/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:DeleteObject",
      "Resource": "arn:aws:s3:::xxx/locks/*"
    }
  ]
}

The problem remains.
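For reference, the intent of the policy above can be sketched with a toy check. This is a hypothetical simulation with a made-up `allowed()` helper, not real IAM evaluation, and `xxx` stands in for the bucket name:

```python
import fnmatch

# Toy mirror of the three Allow statements above; "xxx" stands in for the
# bucket name. NOT a real IAM evaluator -- just enough to show which
# action/key pairs the append-only policy permits.
POLICY = [
    ("s3:ListBucket",   "arn:aws:s3:::xxx"),
    ("s3:PutObject",    "arn:aws:s3:::xxx/*"),
    ("s3:GetObject",    "arn:aws:s3:::xxx/*"),
    ("s3:DeleteObject", "arn:aws:s3:::xxx/locks/*"),
]

def allowed(action, resource):
    # An action is allowed if any statement matches it (fnmatch handles the
    # trailing "*" wildcard; there are no Deny statements to consider here).
    return any(action == a and fnmatch.fnmatch(resource, pattern)
               for a, pattern in POLICY)

# restic may read and write everywhere, but delete only under locks/:
assert allowed("s3:GetObject",    "arn:aws:s3:::xxx/locks/c18250de40")
assert allowed("s3:DeleteObject", "arn:aws:s3:::xxx/locks/c18250de40")
assert not allowed("s3:DeleteObject", "arn:aws:s3:::xxx/data/aa/bb")
```

The last check is the append-only point: deletes outside locks/ (which a prune would need) are not permitted.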

I’ve reformatted your messages; there’s an icon labelled </> in the editor that you can click after selecting several lines, and it will mark the text as verbatim.

First: which version of restic is that (run restic version)?

What’s happening here is that the service apparently lists a file that could not be accessed with the account (and the policy) you’re using. Restic was able to create a lock file, but it is unable to read it afterwards. That’s what the errors say. For safety reasons, restic unlock won’t remove locks it cannot read and check. There’s restic unlock --force, but that’s not a solution. You need to find out what’s wrong.

I’m guessing (I don’t have any experience with Wasabi) that the policy isn’t quite right yet. Can you try reading the file with an s3 client, e.g. mc?

Note that I have experienced this same problem with S3, due to a bug on S3’s side. Sometimes, in a bucket with versioning enabled, deleting a file will not remove it from the object listing even though the most recent version is a delete marker. This means that the listing returns an object that can’t be fetched, and you get the same behavior you’re seeing here.

For me, the fix was to go into the S3 console, enable showing versions, and delete all versions of that lock. This only happened to me once, and it was shortly after enabling versioning. I suspect that S3 has some listing synchronization bug that can show up when deleting a file shortly after enabling versioning.

I don’t know if that same bug might be present in Wasabi’s service.
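The listing behavior described above can be modeled with a toy versioned bucket. This is a hypothetical in-memory simulation, not real S3/Wasabi semantics:

```python
class VersionedBucket:
    def __init__(self):
        # key -> version history, oldest first; None marks a delete marker
        self.versions = {}

    def put(self, key, data):
        self.versions.setdefault(key, []).append(data)

    def delete(self, key):
        # With versioning on, a delete appends a marker instead of
        # removing the key's data.
        self.versions.setdefault(key, []).append(None)

    def get(self, key):
        history = self.versions.get(key)
        if not history or history[-1] is None:
            raise KeyError("The specified key does not exist.")
        return history[-1]

    def list_versions(self, prefix=""):
        # A version listing shows every key that has any history at all,
        # delete markers included.
        return sorted(k for k in self.versions if k.startswith(prefix))

    def list_current(self, prefix=""):
        # A correct "current objects" listing must skip keys whose newest
        # version is a delete marker.
        return sorted(k for k, h in self.versions.items()
                      if k.startswith(prefix) and h[-1] is not None)


bucket = VersionedBucket()
bucket.put("locks/c18250de40", b"lock data")
bucket.delete("locks/c18250de40")
print(bucket.list_versions("locks/"))  # ['locks/c18250de40'] -- still listed
print(bucket.list_current("locks/"))   # [] -- correctly gone
```

If the service (or the client's listing call) behaves like `list_versions` where restic expects `list_current`, restic sees a lock it can never fetch, which matches the retry loop in the logs.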

Thank you for the reformatting and the ideas.

It is:

I will check what you mention using the mc client. I was able to mount the S3 bucket with s3fs and my credentials, and to list a file that I uploaded separately from the backup.

Thank you, this may well be the case. I have versioning enabled. I do not know if it is related, but I noticed that when I try to download the file in question, it is not possible and I get a message about a missing file.
I will check later with “showing versions” enabled and see how it goes.

@cdhowie

Regarding this:

I see that there is only one version of the lock and it was created at the time of initializing my last repo.

But there are several versions of the “config” file, one for every repo I tried to use and later deleted. I will check later if deleting these config files helps.

Deleting the config files should not matter. That’s not what restic is complaining about.

Thank you.

Additionally, to this point

I was now able to download that lock file.

@cdhowie

Given that

I wonder if you could expand on your advice given that there is only one version of the lock:

Should I try to delete it and see if that helps, or should I only do that if there were old versions?
And if I delete it, should I delete just the file, or the file plus the directory?

I just found something interesting about the locks. It is the only file in the repository whose presentation changes when I toggle “show versions” on and off.

When showing versions is not enabled: [screenshot]

When showing versions is enabled: [screenshot]

It is the only file with a single version that shows this behavior. All other files/directories with only one version look like the first screenshot even when I toggle “show versions”. I do not know if this is completely relevant, but I wanted to add the information.

Thank you.

I am using wasabi without issue, both backing up and restoring. My policy is slightly different to yours:

"Action": [
  "s3:PutObject",
  "s3:GetObject",
  "s3:DeleteObject"
],

You don’t have the DeleteObject.

I stand a bit corrected. I see you have an additional policy for the lock. Does this mean you will never do a prune?

Thank you.

I started with a policy like yours and then following:

And also following:

I made the changes.

I could return to:

And see what happens.

Do you think that I should keep the specific policy?

Regarding

I am still not familiar with that operation, but it suggests some cleanup and that I should have a broader DeleteObject policy.

I am not a restic expert so I’ll let them give you specific advice. In my case I back up just under 2TB of personal data and don’t worry about other users. Therefore I don’t worry about append only operation and I don’t use versioning. Once a week I do a forget keeping 90 days and then prune.
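The weekly forget-then-prune routine described above boils down to a retention rule. Here is a minimal sketch of the "keep the last 90 days" idea; `keep_within` is a made-up illustration, and restic's real policy engine supports many more options:

```python
from datetime import datetime, timedelta

def keep_within(snapshots, days, now=None):
    """Split snapshot timestamps into (keep, forget), mimicking the idea
    behind a "keep the last 90 days" forget policy: keep everything newer
    than the cutoff, mark the rest for removal by a later prune.
    Simplified sketch; restic's actual policy engine is richer."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=days)
    keep = [s for s in snapshots if s >= cutoff]
    forget = [s for s in snapshots if s < cutoff]
    return keep, forget

now = datetime(2019, 1, 21)
snaps = [now - timedelta(days=d) for d in (1, 30, 89, 91, 200)]
keep, forget = keep_within(snaps, 90, now=now)
print(len(keep), len(forget))  # 3 2
```

forget only marks snapshots for removal; prune is the step that afterwards deletes the unreferenced data, which is why it needs broad delete permissions.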

Thank you, it is good to know restic + wasabi is working for you.
Also good to know about forget and prune.

Hello,

I was able to read a file in the S3 bucket.

 #s3fs xxxxx /s3mnt -o passwd_file=~/.pwd-s3fs -o url=https://s3.wasabisys.com
 #cd /s3mnt
 #ls
 config  data  index  keys  locks  snapshots  test2.txt 
#more test2.txt
Test 2

I am now trying to see whether deleting the only version of the lock, or maybe a small change to the IAM policy, will work.

@cdhowie @fd0
Well, I made some progress and it seems to be a very peculiar situation.
Lock file still there, usual situation:

~$ restic backup test2.txt
open repository
repository bc0455dc opened successfully, password is correct
Load(<lock/b3e52e2f6d>, 0, 0) returned error, retrying after 647.799643ms: The specified key does not exist.
 signal interrupt received, cleaning up

I deleted the lock file. The only way was to enable “show versions” and delete it there; if I disable showing versions, it is the same file but it will not be deleted. The same file changes status between current and past version, as I showed in post #11 of this topic.

After deleting the lock file, success again

~$ restic backup test2.txt
open repository
repository bc0455dc opened successfully, password is correct

Files:           0 new,     0 changed,     1 unmodified
Dirs:            0 new,     0 changed,     0 unmodified
Added to the repo: 0 B  

processed 1 files, 7 B in 0:01
snapshot da1688bd saved

Now trying again with the lock file there, I go back to the previous situation about the lock file.

Now I delete the lock file again and new success

~$ restic snapshots
repository bc0455dc opened successfully, password is correct
ID        Time                 Host        Tags        Paths
------------------------------------------------------------------------------
7dd44441  2019-01-20 23:22:15  yyyy-yy              /home/xxxx/test2.txt
da1688bd  2019-01-21 19:02:56  yyyy-yy              /home/xxxx/test2.txt
------------------------------------------------------------------------------
2 snapshots

Now that the lock file is there again, any other operation brings me back to the problem.

Not sure what else I can try. Any other suggestion will be most welcomed.

@cdhowie @fd0

Ok now it seems to work.

  1. Deleted the locks
  2. Disabled bucket versioning on my bucket

Now repeated restic commands work without the need to manually delete the lock:

~$ restic snapshots
repository bc0455dc opened successfully, password is correct
ID        Time                 Host        Tags        Paths
------------------------------------------------------------------------------
7dd44441  2019-01-20 23:22:15  yyyy-yy              /home/xxx/test2.txt
da1688bd  2019-01-21 19:02:56  yyyy-yy              /home/xxx/test2.txt
------------------------------------------------------------------------------
2 snapshots
~$ restic snapshots
repository bc0455dc opened successfully, password is correct
ID        Time                 Host        Tags        Paths
------------------------------------------------------------------------------
7dd44441  2019-01-20 23:22:15  yyyy-yy              /home/xxxx/test2.txt
da1688bd  2019-01-21 19:02:56  yyyy-yy              /home/xxxx/test2.txt
------------------------------------------------------------------------------
2 snapshots
~$ restic backup test2.txt
open repository
repository bc0455dc opened successfully, password is correct

Files:           0 new,     0 changed,     1 unmodified
Dirs:            0 new,     0 changed,     0 unmodified
Added to the repo: 0 B  

processed 1 files, 7 B in 0:01
snapshot 1925f42d saved
~$ restic snapshots
repository bc0455dc opened successfully, password is correct
ID        Time                 Host        Tags        Paths
------------------------------------------------------------------------------
7dd44441  2019-01-20 23:22:15  yyyy-yy              /home/xxxx/test2.txt
da1688bd  2019-01-21 19:02:56  yyyy-yy              /home/xxxx/test2.txt
1925f42d  2019-01-21 19:49:10  yyyy-yy              /home/xxxx/test2.txt
------------------------------------------------------------------------------
3 snapshots

Is this the way the bucket has to be set up, i.e. with bucket versioning disabled? Does that have any consequence for the performance of restic?

Thank you for all the guidance :smile:.

Versioning should not break restic. It sounds like Wasabi is including deleted objects when retrieving a listing of objects. This would either be a bug on Wasabi’s side, or restic is not using the Wasabi API correctly (e.g. when listing objects, it’s asking it to include versioned objects whose most recent version is a delete marker).

Versioning should have no impact on performance.

The point of versioning is to enforce that an API key cannot irreversibly delete/corrupt data in the repository. If some ransomware gets on the machine performing backups and the API key can overwrite or permanently delete objects, you might as well not have backups in the first place. If you turn on versioning, and use an API key that is not allowed to delete prior versions of objects, then you can always roll the bucket back to a point in time when the repository contents were valid.

tl;dr: Versioning is about security, not performance.
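The rollback property described above can be sketched as follows. This is a toy in-memory model with a made-up `rollback` helper; recovering a real S3 bucket means restoring or promoting object versions through the console or API:

```python
from datetime import datetime

def rollback(versions, point_in_time):
    """versions: key -> list of (timestamp, data or None for a delete
    marker), oldest first. Returns the bucket contents as they stood at
    point_in_time -- the state an admin could restore to after an attack."""
    state = {}
    for key, history in versions.items():
        latest = None
        for ts, data in history:
            if ts <= point_in_time:
                latest = data  # newest version at or before the cutoff wins
        if latest is not None:  # skip keys that were absent or deleted
            state[key] = latest
    return state

# Hypothetical scenario: an attacker with the backup key corrupts a pack
# file and deletes a snapshot on Jan 22; versioning preserves the history.
versions = {
    "data/aa11": [(datetime(2019, 1, 20), b"good pack"),
                  (datetime(2019, 1, 22), b"ransomware garbage")],
    "snapshots/7dd44441": [(datetime(2019, 1, 20), b"snapshot"),
                           (datetime(2019, 1, 22), None)],
}
good = rollback(versions, datetime(2019, 1, 21))
assert good == {"data/aa11": b"good pack",
                "snapshots/7dd44441": b"snapshot"}
```

Without versioning, the Jan 20 versions would be gone and there would be nothing to roll back to.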