Yes, and it's pretty simple. Restic keys don't carry permissions, so every key works essentially the same way; there is no distinction between a master key and a limited-access key. However, you could serve the repository to clients through rest-server in append-only mode, so that the server is the only one who can actually delete data.
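For illustration, an append-only setup might look like this (paths, hostname, and repo name are examples, not anything from your setup):

```shell
# On the backup server: serve repositories under /srv/restic and reject
# delete operations from clients.
rest-server --path /srv/restic --append-only

# On a client: back up as usual. A client-side 'restic forget --prune'
# would now fail, since the server refuses deletes.
restic -r rest:http://backup.example.com:8000/client1 backup /home
```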
Yes, this is possible. Note that the server would have to add a key to each repository; the passphrase can be the same, but the key itself would be different.
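Adding the server's key would look something like this (the repository URL is hypothetical):

```shell
# 'restic key add' prompts for an existing key's passphrase to unlock the
# repo, then for the new key's passphrase (which may be the same).
restic -r rest:http://backup.example.com:8000/client1 key add

# 'restic key list' shows all keys known to the repository.
restic -r rest:http://backup.example.com:8000/client1 key list
```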
The server cannot enforce this or otherwise command clients to back up.1 The clients have to be configured to do this themselves, for example with cron on *nixes or the Task Scheduler on Windows.
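On a *nix client, that could be as simple as a crontab entry (the script path and schedule are just examples):

```shell
# Client-side crontab entry: run the backup script every night at 02:30.
30 2 * * * /usr/local/bin/restic-backup.sh
```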
Yes, the server can do this since it has a key in each repository. Note that the prune/check schedule needs to be coordinated with the clients’ backup schedule, since prune and check both require an exclusive lock on the repository. If the server has the repository locked when a backup starts, the backup will fail. (Hmm, @fd0, would it be possible for restic backup to retry locking with backoff so that backups are delayed instead of aborted?)
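A server-side schedule, coordinated to avoid the clients' backup window, might look like this (times, paths, and retention policy are illustrative; the server accesses the repositories by local path, so append-only mode on rest-server doesn't restrict it):

```shell
# Server-side crontab: prune and check on Sunday mornings, well clear of
# the clients' nightly backups.
0 6 * * 0 restic -r /srv/restic/client1 forget --keep-daily 7 --keep-weekly 5 --prune
0 8 * * 0 restic -r /srv/restic/client1 check
```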
1 Technically, this can be done, but it’s not supported as a native restic feature. For example, you could have the backup server command a client to run a backup script over SSH.
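For example, triggered from the server's cron (hostname and script path are hypothetical; key-based SSH authentication assumed):

```shell
# Run the client's backup script remotely over SSH.
ssh backup@client1.example.com /usr/local/bin/restic-backup.sh
```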
To deal with this, I’ve been using loops like the following:
```shell
LOCK_WAIT=1800  # total time to allow for the operation to start
LOCK_DELAY=60   # delay between attempts
EXPIRE=$(( SECONDS + LOCK_WAIT ))
until [[ $EXPIRE -le $SECONDS ]] || eval restic forget $RESTIC_ARGS $RESTIC_FORGET_ARGS; do
    echo "Waiting for lock."
    sleep "$LOCK_DELAY"
done
```
Granted, this retries on any failure that causes restic to return a non-zero exit code, not just lock contention, and it does not back off between attempts, but it has been working well for me.
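If you did want a backoff, the same idea can be wrapped in a small function. This is just a sketch; `retry_backoff` and its argument convention are my own invention, not a restic feature:

```shell
# Hypothetical helper: retry a command, doubling the delay after each
# failure. Usage: retry_backoff MAX_TRIES INITIAL_DELAY COMMAND [ARGS...]
retry_backoff() {
    local max_tries=$1 delay=$2
    shift 2
    local try
    for (( try = 1; try <= max_tries; try++ )); do
        "$@" && return 0
        echo "Attempt $try failed; retrying in ${delay}s." >&2
        sleep "$delay"
        delay=$(( delay * 2 ))
    done
    return 1
}
```

It would be used in place of the bare restic call, e.g. `retry_backoff 5 60 restic forget $RESTIC_ARGS $RESTIC_FORGET_ARGS`, though like the loop above it retries on any non-zero exit, not only lock contention.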
@Den I wasn’t aware of any lock timeout, nor have I noticed that behavior on my systems. As for my timeout, 1800 seconds just seemed like a reasonable starting point; the right value will vary by system and repository.