I’m currently using restic to back up some data from a public-facing server to a GCP bucket. I’ve been thinking about a scenario in which the server in question is compromised by attackers. I don’t want an attacker to be able to wipe or corrupt any existing backups. So I’m wondering: is there any way to lock down the GCP permissions so that the GCP role attached to the credentials restic uses can only create new objects and read existing ones?
I can see there are a number of different object permissions in GCP that I can assign. Would assigning `storage.objects.create` and `storage.objects.get` be sufficient if this server is only performing backups, and pruning/forgetting is handled elsewhere? Does restic ever need to overwrite data in an existing file?
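To make the question concrete, here’s a sketch of the kind of setup I have in mind, assuming a custom role is the right mechanism. All of the names (project, bucket, role, service account) are made up, and I’ve included `storage.objects.list` on the assumption that restic needs to list objects; whether this permission set is actually sufficient is exactly what I’m asking:

```sh
# Hypothetical names throughout -- adjust to your project/bucket.
# Custom role that can create, read, and list objects, but never
# delete them.
gcloud iam roles create resticAppendOnly \
  --project=my-project \
  --title="Restic append-only" \
  --permissions=storage.objects.create,storage.objects.get,storage.objects.list

# Grant the role to restic's service account on the backup bucket only.
gcloud storage buckets add-iam-policy-binding gs://my-restic-bucket \
  --member=serviceAccount:restic-backup@my-project.iam.gserviceaccount.com \
  --role=projects/my-project/roles/resticAppendOnly
```

As far as I can tell from the GCS docs, overwriting an existing object requires `storage.objects.delete` in addition to `storage.objects.create`, so a role like this should be effectively append-only.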
My understanding is that data packs are only ever created; I don’t know whether the same applies to files under the `index` directory. My assumption from the reading I’ve done is that `config` would only ever be created, and I’m guessing that `keys` are created when the repository is initialized. I’m also assuming that instances running `restic backup` or `restic restore` will need delete permissions on
`locks`. I can’t tell with `snapshots`.
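I suppose one way to settle this is to initialize a throwaway repository, run a backup against it, and watch which prefixes restic actually writes to and deletes from. Something like this (the bucket, key path, and backup source are placeholders):

```sh
export GOOGLE_PROJECT_ID=my-project
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
export RESTIC_PASSWORD=test-only-password

# Create a scratch repository and run a single backup.
restic -r gs:my-restic-test-bucket:/ init
restic -r gs:my-restic-test-bucket:/ backup /etc

# List everything in the bucket: the repository layout (config, data/,
# index/, keys/, locks/, snapshots/) should be visible, and any lock
# objects left behind after the backup would show up under locks/.
gcloud storage ls --recursive gs://my-restic-test-bucket/
```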
I’ve found a number of other posts that suggest the recommended way of protecting existing backups is to have restic back up to a repository on a local-network host, and then have that machine `rclone` the repository (with some extra flags so that remote data is not overwritten) to the remote backend. I’m curious whether this can be done without rclone.
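For reference, the rclone variant I’ve seen described looks roughly like this (the remote name and local path are made up). `rclone copy` never deletes anything on the destination, and my understanding is that `--immutable` makes it fail rather than modify an existing remote object:

```sh
# Mirror the local repository to GCS; existing remote objects are
# never deleted, and --immutable refuses to overwrite changed ones.
rclone copy --immutable /srv/restic-repo gcs-remote:my-restic-bucket
```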