I am trying to set up restic so that the S3 keys have no permission to delete anything; any purge/clean-up I would do manually, by providing separate keys. This is to protect against ransomware, but everything I try fails.
Everything I have read says restic has issues with object locking. I tried read-only keys, but that fails when restic tries to remove its lock.
Is there any viable way to prevent the keys used by restic from being able to delete anything?
It’s unclear what errors you are encountering.
You mention both keys and locks. I’m not sure what you mean there, but regarding locks, have you tried the --no-lock option?

$ restic help
      --no-lock   do not lock the repository, this allows some operations on read-only repositories
Generally speaking, rest-server has an --append-only mode which is made for protecting against deletion of backups in the repository. It’s not the same thing as immutable storage, but it addresses your use case. I get it if you want to use S3, though; just letting you know.
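For reference, a rough sketch of what that could look like (the host, paths, and repo name here are my placeholders, not from this thread):

```shell
# Serve repositories from /srv/restic in append-only mode:
# clients can create backups but cannot delete anything.
rest-server --path /srv/restic --append-only --listen :8000

# On a client, back up against it:
restic -r rest:http://backup-host:8000/myrepo backup /home
```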
I have not tried --no-lock; I wasn’t sure what impact it would have. Would --append-only be the best option for backing up to S3 storage using keys that specifically deny delete operations?
I just posted another question that may be related. I am running a backup offsite; I tried running the backup a second time, at the same time, to the same repo, and restic doesn’t seem to care. That is a situation I am trying to guard against: if my offsite backup has a lot to back up, the automated task may kick off before the first run has finished. Shouldn’t locking cover this? What happens if I use --no-lock?
I would like to set up a backup job that cannot delete, then once a month manually run a delete/prune job using keys I provide by hand, available only from an encrypted vault I unlock manually. That way I can still prune, but no automated task or software can destroy the repo (i.e. ransomware).
--no-lock has no effect for the backup command. backup runs are allowed to run in parallel; the lock is only used to prevent operations like prune or check from running at the same time as a backup.
rest-server only supports local storage.
restic does not support object locking so far, and setting it up manually is extremely tricky to get right. Normal clients must be able to remove lock files, but for all other files it’s sufficient to be able to write them only once. It might be possible to create credentials with such permissions on the S3 side.
Other than that, make sure to read Removing backup snapshots — restic 0.16.0 documentation.
You mention the lock is there to prevent backups from running at the same time, but you also say backup runs are allowed to run in parallel.
Will I get corruption or other problems if I back up offsite once a day, and one day takes longer so that a second run starts while the first is still going?
Is there any problem running --no-lock just for backups, using S3 keys that have NO DELETE permission? Then use delete-capable keys if I need to do a prune/check manually.
Being able to use an S3 key that denies delete ability is my ideal solution. If there is no harm in running backup twice on the same repository at the same time, that is the only scenario I’d be in with --no-lock, as I am happy to do manual prunes; it would be very rare that I’d ever need to.
No, that’s not what he said. Quote:

Running another backup at the same time is not the problem.
Running prune or check at the same time as a backup is the problem.
Ahh, that makes more sense, I read it wrong.
So, using --no-lock for automated backups should allow me to use NO DELETE keys, as long as I make sure no backup is running when I do a prune/check. Since backups will be running with --no-lock, I can’t trust a prune/check’s lock to be safe without first making sure no restic processes are running. That would be an acceptable scenario for me, if it is the case.
Update: I tried backing up remotely to S3 using --no-lock and no-delete S3 keys. It backs up, but then tries to remove a lock and fails over and over again. I have to go in manually with delete-enabled keys and unlock, or wait for the lock to expire; the backup process just keeps failing until I kill it. It seems --no-lock is ignored by backup.
Yes, as I’ve already mentioned before, the backup command ignores the --no-lock option and always uses locking.
backup does ignore the --no-lock option and always writes and removes locks in the locks/ directory of your repository.
Can you create a key which excludes that directory from the delete restriction? If not, you can either use a self-patched restic version with --no-lock (should be a small change, but take care about pruning then!) or have a look at A restic client written in rust - #53 by alexweiss, which is lock-free by default.
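To illustrate the idea (my sketch only, not tested against restic; bucket and repo names are placeholders): an IAM policy that grants reads and writes everywhere but allows s3:DeleteObject only under the repository’s locks/ prefix could look like this. Everything not explicitly allowed is denied by default, so deletes outside locks/ are impossible for this key.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::BUCKET_NAME"
    },
    {
      "Sid": "ReadWriteNoDelete",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::BUCKET_NAME/myrepo/*"
    },
    {
      "Sid": "DeleteLocksOnly",
      "Effect": "Allow",
      "Action": "s3:DeleteObject",
      "Resource": "arn:aws:s3:::BUCKET_NAME/myrepo/locks/*"
    }
  ]
}
```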
While repository locking is useful for maintenance operations like prune and check, I’ve chosen to use a bucket policy to restrict the allowed actions, which is much more flexible and still safe:
Note it uses canonical users: IAM_USER_ID should be replaced with your user ID, and BUCKET_NAME with the bucket name you use for backups. Please also note that the policy grants access to the bucket only for IAM_USER_ID, so if you want to access the bucket via the UI or another access key, you have to add them to the policy with sufficient permissions; otherwise they won’t be able to access the bucket (even the S3 vendor’s own UI could lose access).
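The policy itself isn’t reproduced above, so purely as an illustration (my sketch, not the exact policy described): a bucket policy restricting a canonical user to non-destructive actions could be shaped like this. Delete actions are simply never granted.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowBackupUserBucketOps",
      "Effect": "Allow",
      "Principal": { "CanonicalUser": "IAM_USER_ID" },
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::BUCKET_NAME"
    },
    {
      "Sid": "AllowBackupUserObjectOps",
      "Effect": "Allow",
      "Principal": { "CanonicalUser": "IAM_USER_ID" },
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::BUCKET_NAME/*"
    }
  ]
}
```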
@teran I was thinking about something like this.
But be aware that s3:PutObject normally also allows overwriting objects with whatever you like (garbage, empty objects), so this policy alone won’t be an effective ransomware protection - IAM_USER_ID won’t be able to remove important files from the repository, but it can still completely destroy it…
This is why there is something like Object Lock to prevent this…
Sure, but AFAIK S3 bucket policies have no way to express “no overwrites”; however, if you enable bucket versioning it will work totally fine.
PS. There’s also another mechanism called Object Lock - it won’t stop the user from creating new versions, but it won’t allow removing a particular version, keeping it immutable - so this could be an additional step on top of versioning.
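As a sketch using the AWS CLI (the bucket name is a placeholder; other S3-compatible providers have equivalent calls), versioning and Object Lock can be enabled along these lines:

```shell
# Turn on versioning for an existing bucket
aws s3api put-bucket-versioning \
    --bucket BUCKET_NAME \
    --versioning-configuration Status=Enabled

# Object Lock is normally enabled when the bucket is created
aws s3api create-bucket \
    --bucket BUCKET_NAME \
    --object-lock-enabled-for-bucket
```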
I’m doing it the same way as @teran, with the only difference that my policy is attached to the user, so I don’t need the Principal / CanonicalUser part. It should have the same effect AFAIK.
I also enabled Versioning and I am checking for versions once a day. Should there be any non-current versions (except for deleted files), I will get an error message.
I also have a lifecycle rule which deletes all non-current versions after 30 days and removes expired delete markers.
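For illustration, a lifecycle configuration along those lines (my sketch, not the exact rule in use) could look like:

```json
{
  "Rules": [
    {
      "ID": "expire-noncurrent-versions",
      "Status": "Enabled",
      "Filter": {},
      "NoncurrentVersionExpiration": { "NoncurrentDays": 30 },
      "Expiration": { "ExpiredObjectDeleteMarker": true }
    }
  ]
}
```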
I also have an S3 user with full access which I use for removing outdated snapshots manually every 2-3 months (i.e. forget and prune). However, those credentials are kept on a dedicated machine.
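A sketch of what that manual run could look like (the repository URL, retention values, and credential placeholders are mine, not from this thread):

```shell
# Load the full-access credentials only for this manual session
export AWS_ACCESS_KEY_ID="<full-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<full-access-secret>"

# Make sure no backup is running, then remove outdated snapshots
restic -r s3:s3.amazonaws.com/BUCKET_NAME/myrepo \
    forget --keep-daily 7 --keep-weekly 5 --keep-monthly 12 --prune
restic -r s3:s3.amazonaws.com/BUCKET_NAME/myrepo check
```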
I use an S3-compatible provider (iDrive), so I am unable to set bucket policies (as far as I know).
I don’t know iDrive, but I did it with Wasabi and IONOS. You might need to do it via the command line.