Any way to get Restic to work with S3 Object Locking or a Read Only key?

I am trying to set up restic so that the S3 keys have no delete access, and any purge/cleanup is done manually by me providing separate keys. This is to protect against ransomware, but everything I try fails.

Everything I have read says restic has issues with object locking. I tried read-only keys, but that fails when restic tries to remove its lock file.

Is there any viable way to ensure that the keys restic uses cannot delete anything?


It’s unclear exactly what errors you are encountering.

You mention both keys and locks. Not sure what you mean there, but regarding locks, have you tried the --no-lock option?

$ restic help
...
  --no-lock      do not lock the repository, this allows some operations on read-only repositories
...

Generally speaking, rest-server has an --append-only mode, which is made for protecting backups in the repository against deletion. It’s not the same thing as immutable storage, but it addresses your use case. I understand if you want to use S3; just letting you know.


I have not tried --no-lock; I wasn’t sure what impact it would have. Would --append-only be the best option for backing up to S3 storage using keys that specifically deny delete operations?

I just posted another question that may be related. I am running an offsite backup, and I tried running the backup a second time, simultaneously, to the same repo; restic doesn’t seem to care. That is a situation I am trying to guard against: if my offsite backup has a lot to transfer, the automated task could kick off before the first run is finished. Shouldn’t locking cover this? What happens if I use --no-lock?

I would like to set up a backup job that cannot delete, and then once a month manually run a delete/prune job using keys that are only available from an encrypted vault I unlock by hand. That way I can still prune, but no automated task/software can destroy the repo (i.e. ransomware).

--no-lock has no effect for the backup command. Backup runs are allowed to run in parallel; the lock is only used to prevent prune/check and backup from running at the same time.

rest-server only supports local storage.

restic does not support object locking so far, and setting it up manually is extremely tricky to get right. Normal clients must be able to remove lock files, but for all other files it’s sufficient to only be able to write them once. It might be possible to create credentials with such permissions on the S3 side.

Other than that, make sure to read Removing backup snapshots — restic 0.16.3 documentation.


You mention --no-lock is to prevent backups from running at the same time, but you also say backup runs are allowed to run in parallel.

Will I get corruption or other problems if I back up offsite once a day, and one day the backup takes longer so a second copy starts running at the same time?

Is there any problem running --no-lock just for backups, using S3 keys that have NO delete permission? I would then use delete-capable keys whenever I need to run a prune/check manually.

Being able to use an S3 key that denies deletion is my ideal solution. If there is no harm in running backup twice on the same repository at the same time, that’s the only scenario I’d be in with --no-lock, as I am happy to do manual prunes; it would be rare that I ever need to.

No, that’s not what he said. Quote:

Simultaneous backup is not the problem.

Simultaneous prune/check and backup is the problem.


Ahh, that makes more sense, I read it wrong.

So running automated backups with --no-lock should let me use no-delete keys, as long as I make sure no backup is running when I do a prune/check. Since the backups run with --no-lock, I can’t trust a prune/check’s lock to be safe without first making sure no restic processes are running. That would be an acceptable scenario for me.
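To make the “no restic running” check concrete, here is a sketch of a small guard I could run before the manual prune session. The repository URL and retention policy in the comment are placeholders, not a recommendation:

```shell
#!/bin/sh
# Guard for the manual prune session: refuse to continue while any
# restic process is still running (pgrep is available on most Linux systems).
if pgrep -x restic >/dev/null 2>&1; then
    echo "a restic process is still running; aborting prune" >&2
    exit 1
fi
echo "no restic processes found; safe to run prune/check"
# Now run, with the delete-capable keys loaded into the environment, e.g.:
# restic -r s3:s3.amazonaws.com/_BUCKET_NAME_ forget --keep-daily 30 --prune
```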

Update: I tried backing up remotely to S3 using --no-lock and no-delete S3 keys. The data backs up, but then restic tries to remove its lock and fails over and over again. I have to go in manually with delete-enabled keys and unlock, or wait for the lock to expire; meanwhile the backup process just keeps failing until I kill it. It seems --no-lock is ignored by “backup”.


Yes, as I’ve already mentioned before, the backup command ignores the --no-lock option and always uses locking.


Yes, backup ignores the --no-lock option and always writes and removes locks in the locks/ directory of your repository.

Can you create a key which excludes the object lock for this directory?
If not, you can either use a self-patched restic version where backup respects --no-lock (should be a small change, but be careful about pruning then!) or have a look at A restic client written in rust - #53 by alexweiss, which is lock-free by default.

While repository locking is useful for maintenance operations like prune and check, I’ve chosen to use a bucket policy to restrict the allowed actions, which is much more flexible and still safe:

{
    "Statement": [
      {
        "Action": [
          "s3:ListBucket",
          "s3:GetBucketLocation"
        ],
        "Effect": "Allow",
        "Principal": {
          "CanonicalUser": "_IAM_USER_ID_"
        },
        "Resource": "arn:aws:s3:::_BUCKET_NAME_",
        "Sid": "AllowBucketListing"
      },
      {
        "Action": [
          "s3:GetObject",
          "s3:PutObject"
        ],
        "Effect": "Allow",
        "Principal": {
          "CanonicalUser": "_IAM_USER_ID_"
        },
        "Resource": "arn:aws:s3:::_BUCKET_NAME_/*",
        "Sid": "AllowBackupOperations"
      },
      {
        "Action": [
          "s3:GetObject",
          "s3:PutObject",
          "s3:DeleteObject"
        ],
        "Effect": "Allow",
        "Principal": {
          "CanonicalUser": [
            "_IAM_USER_ID_"
          ]
        },
        "Resource": "arn:aws:s3:::_BUCKET_NAME_/locks/*",
        "Sid": "AllowLocksOperations"
      }
    ],
    "Version": "2012-10-17"
}

Note that the policy uses canonical users: _IAM_USER_ID_ should be replaced with the user’s canonical ID, and _BUCKET_NAME_ with the bucket name you use for backups. Please also note that the policy grants bucket access only to _IAM_USER_ID_, so if you want to access the bucket via a UI or another access key, you have to add those principals to the policy with sufficient permissions; otherwise they won’t be able to access the bucket (the S3 vendor’s own UI could also lose access).
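For reference, the canonical user ID can be looked up with the aws CLI, and the policy applied the same way. These are standard s3api subcommands; the bucket and file names are placeholders:

```shell
# Print your canonical user ID (this is what _IAM_USER_ID_ refers to):
aws s3api list-buckets --query 'Owner.ID' --output text

# Apply the policy above, saved locally as restic-policy.json:
aws s3api put-bucket-policy --bucket _BUCKET_NAME_ --policy file://restic-policy.json
```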


@teran I was thinking about something like this.

But be aware that s3:PutObject normally also allows overwriting objects with whatever you like (garbage, empty objects), so this policy alone won’t be effective ransomware protection: _IAM_USER_ID_ won’t be able to remove important files from the repository, but can still completely destroy it…
This is why something like Object Lock exists to prevent this.


Sure, but AFAIK S3 bucket policies have no semantics for restricting overwrites; however, if you enable bucket versioning it will work totally fine 🙂

PS: There’s also another mechanism called Object Lock. It won’t stop the user from creating new versions, but it also won’t allow removing a particular version, keeping it immutable; so this could be an additional step on top of versioning.
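Both mechanisms can be switched on with the aws CLI. These are real s3api subcommands, but the bucket name, retention mode, and period are placeholders; also note that on AWS, Object Lock has historically had to be enabled when the bucket is created, so check what your provider supports:

```shell
# Enable versioning on the bucket:
aws s3api put-bucket-versioning --bucket _BUCKET_NAME_ \
    --versioning-configuration Status=Enabled

# Set a default Object Lock retention rule (requires Object Lock
# to be enabled on the bucket, and versioning is a prerequisite):
aws s3api put-object-lock-configuration --bucket _BUCKET_NAME_ \
    --object-lock-configuration '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"GOVERNANCE","Days":30}}}'
```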


I’m doing it the same way as @teran, with the only difference that my policy is attached to the user, so I don’t need the Principal / CanonicalUser part. It should have the same effect AFAIK.

I also enabled Versioning and I am checking for versions once a day. Should there be any non-current versions (except for deleted files), I will get an error message.

I also have a lifecycle rule which deletes all non-current versions after 30 days and removes expired delete markers.
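A rule to that effect can be expressed roughly like this (field names follow the S3 lifecycle API; the 30-day window is just my choice, and the rule applies with aws s3api put-bucket-lifecycle-configuration --bucket _BUCKET_NAME_ --lifecycle-configuration file://lifecycle.json):

```json
{
  "Rules": [
    {
      "ID": "expire-noncurrent-versions",
      "Status": "Enabled",
      "Filter": {},
      "NoncurrentVersionExpiration": { "NoncurrentDays": 30 },
      "Expiration": { "ExpiredObjectDeleteMarker": true }
    }
  ]
}
```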

I also have an S3 user with full access which I use for removing outdated snapshots manually every 2-3 months (i.e. forget and prune). However, those credentials live on a dedicated machine.


I use an S3-compatible provider (iDrive), so I am unable to set bucket policies (as far as I know).

I don’t know iDrive, but I did it with Wasabi and IONOS. You might need to do it via the command line.

Hello! Sorry for reviving a post that’s probably already old.

I’m looking to improve the security/protection of my restic buckets in AWS S3, in the event of a server being hacked and the attacker getting access to the IAM credentials (and destroying the backups with them).

As mentioned above,

be aware that s3:PutObject normally also allows to overwrite objects with whatever you like (garbage, empty objects) so this policy alone won’t be an effective ransomware protection - _IAM_USER_ID_ won’t be able to remove important files from the repository but still can completely destroy it…

And as an alternative, this was mentioned:

if you enable bucket versioning it will work totally fine 🙂

I was now considering enabling versioning in my buckets, but I’m a bit confused about how practical it would be to restore a bucket that has been tampered with.

My understanding of versioning is that each object gets its own versions, and it’s not possible to restore a whole bucket to a point-in-time state, right?
So, if I want to restore a bucket in which several files were destroyed, I’d have to look at each object’s versions, know which objects were affected, and restore the correct version of each. Am I thinking about this correctly?

There was also this suggestion, above:

There’s also another mechanism called object lock - it wont stop the user from creating new version but it wont also allow to remove the particular version keeping it immutable

@teran - could you please clarify this one? Will that work well with restic?
If I have an IAM user that only has “PutObject” access, I suppose it will be able to create new snapshots, and then I need to run some “aws” command to lock all those files, is that it? And then, before purging snapshots, we would unlock the objects?

Do new snapshots not need to update existing files in the bucket, ever?
What about the meta/state files? (like config and index)

Anyway - I was looking into this, and it seems that Object Lock only works if versioning is enabled.
I also need to be careful about extra costs due to versioning. Restic itself already provides “versioning” (the snapshots themselves), so I was hoping to add extra protection without significant extra cost.

I’m still looking to see what’s my best option! :slight_smile:

Thank you!

It is possible - no problem. You just need software which supports it. Restic does not, but you can use workarounds, e.g. mounting the bucket with rclone mount --s3-version-at and restoring from there (the mount will be read-only, so you have to inject a writable locks directory - again, you can use rclone’s combine and union remotes). Also, restic does not support lock extensions: you need to periodically extend the files’ object-lock retention, as otherwise you lose protection for some older files, making the whole lock setup useless. Again, you can DIY it yourself with the aws CLI and a simple bash script.
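A sketch of the rclone side of that workaround (the remote name, bucket, timestamp, and mount point are all placeholders; --s3-version-at is a real rclone flag):

```shell
# Mount the bucket as it looked at a given point in time, read-only:
rclone mount --read-only --s3-version-at "2024-05-01T00:00:00Z" \
    s3remote:_BUCKET_NAME_ /mnt/old-repo
# restic will still want a writable locks/ directory; see the note above
# about combining this with rclone's combine/union remotes.
```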

All together this unfortunately means that unless you really understand what you are doing and are prepared to do some scripting, you should forget about this solution with restic.
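The lock-extension script could be as simple as looping over the repository’s objects and pushing the retention date forward. The s3api subcommands are real; the bucket name, retention mode, and period are placeholders, and I would test with GOVERNANCE mode before ever touching COMPLIANCE:

```shell
#!/bin/sh
BUCKET=_BUCKET_NAME_   # placeholder
# Extend retention 90 days from now (GNU date syntax):
UNTIL=$(date -u -d '+90 days' +%Y-%m-%dT%H:%M:%SZ)

# Re-apply retention to the current version of every object:
aws s3api list-objects-v2 --bucket "$BUCKET" --query 'Contents[].Key' --output text \
  | tr '\t' '\n' \
  | while read -r key; do
      aws s3api put-object-retention --bucket "$BUCKET" --key "$key" \
          --retention "Mode=GOVERNANCE,RetainUntilDate=$UNTIL"
    done
```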

There is more details and chatter about this subject in this thread:

only true ransomware protection is provided by using compliance locking mode, and a backup solution able to use it. In such a mode nobody and nothing can delete data for as long as the lock protects it; the only way out is to terminate the S3 account. So in case you try to play with it, start with a short lock and limited data until you iron out your setup :)