Lock/unlock repository for non-restic needs?

I did a search of the forum and documentation, but didn’t see anything to answer my question.

I know restic locks the repo when it does repo-ish things, like prune.
I have a home network with a restic repository on a NAS. I want to back up (copy) that repository to the cloud, but don’t want anyone doing a restic backup while I am doing the cloud copy.

It seems like I would need a sequence something like this:

  • restic -r repo lock
  • (cloud copy of repo files)
  • restic -r repo unlock

I suppose read-only access via restic would be OK, but definitely not writing the repo during my copy.

Is there such a thing?
Or is what I am asking about not normal, and there is another way to do this?

Thanks!

– Steven

Hey Steven, welcome.

Hmm, technically this is safe to do, and I don’t know if it’s restic’s job to implement a file lock that takes place outside the repository.

You could check whether there are any files in the repository's locks/ folder, and if so, skip the file copy. Then, before doing a backup, have a script check for the presence of a marker file next to the repo; if that file exists, a copy is in progress, so skip the backup. Create the marker file when the copy starts and remove it when the copy finishes.
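A minimal sketch of the copy-side half of that idea, assuming the repo lives at /volume1/restic-repo on the NAS and the marker file is /volume1/restic-copy.lock (both paths are just placeholders):

    #!/bin/sh
    # Copy script on the NAS: skip the copy if restic holds any locks,
    # and leave a marker file next to the repo while the copy runs.
    REPO=/volume1/restic-repo
    MARKER=/volume1/restic-copy.lock

    # Any file under locks/ means a restic process is (or was) active.
    if [ -n "$(ls -A "$REPO/locks" 2>/dev/null)" ]; then
        echo "restic lock present, skipping cloud copy" >&2
        exit 0
    fi

    touch "$MARKER"
    # ... do the actual cloud copy of "$REPO" here ...
    rm -f "$MARKER"

One caveat: restic leaves stale lock files behind if a process crashes, so this check errs on the side of skipping the copy until someone runs restic unlock.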

But this seems like a lot of work… Restic repos are safe to copy while backups are happening. I do it all the time. What is your concern with copying during a backup?


Is that true 100% of the time? I guess I just assumed that you don’t want to be doing a copy of the underlying restic repository files while the restic backup is happening. If it is truly safe to copy the restic files while a backup occurs, then I don’t have a problem.

My concern is getting a half-baked restic repository at the destination after my copy was done. That’s why I was wondering if there was a restic command to just lock the repo and unlock the repo.

Thanks - Steven


I see. My understanding is that backup operations are purely additive. “Half-baked” restic repositories are normal between the start and end of a backup, and a copy of the repo in that state is not harmful or broken, either.

In the “half-baked” state, restic check will report that there is extra data in the repository. If that extra data is from a backup that hasn’t finished yet, don’t run prune until the backup completes; otherwise you will have to start the backup over. (Most scripts I’ve seen that automate restic don’t account for this, potentially causing data loss :exclamation:)
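For example, here is one safer sequencing, assuming the repo path, backup source, and retention flags are placeholders and the password comes from the usual RESTIC_PASSWORD environment variable: run prune from the same script, only after the backup has exited successfully, rather than on an independent timer.

    #!/bin/sh
    # Run backup and prune in sequence, not on independent timers,
    # so prune never starts while a backup is still in progress.
    restic -r /mnt/nas/restic-repo backup /home || exit 1
    restic -r /mnt/nas/restic-repo forget --keep-daily 7 --keep-weekly 4 --prune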

Having a repo in a “half-baked” state is not an error, and certainly not a fatal one; the next time you do the file copy, if the backup has finished by then, the copy will naturally complete the destination repo. If the backup never finishes and you need the partially-finished snapshot for whatever data it does contain, there’s a new restic recover command coming out soon which can make a new snapshot based on the “extra” data in the repo.

So, yes – read operations during a backup are safe. Even certain writes (“backup writes”) are safe to do concurrently in a restic repo.

@fd0 can correct me if I’m wrong – I might be, as always. :slight_smile:

Incidentally, have you checked out Relica? It’s a backup service I’ve developed that provides a nice GUI for restic: https://relicabackup.com – we’ll soon be working on a feature that replicates a repository to other destinations so you won’t have to worry about scripting it yourself. I’d love to know what kinds of functionality/flexibility you require here so we can consider it when we implement it!


Yes, it is, @matt is right. The repository format is designed in such a way that files in the repo are only ever added or removed as a whole, never modified in place. Besides lock files (in the locks/ dir), the only operations that remove files are prune and forget. As long as you don’t run these while the sync to the cloud is still in progress, you’re fine. A run of backup, for example, will only add completely new files to the data/, index/ and snapshots/ directories.
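If you want to see that for yourself, something like this works (paths are placeholders, assuming shell access to the machine holding the repo):

    # List the repo's files, run a backup, and diff the lists.
    find /mnt/nas/restic-repo -type f | sort > before.txt
    restic -r /mnt/nas/restic-repo backup /home
    find /mnt/nas/restic-repo -type f | sort > after.txt
    comm -13 before.txt after.txt   # files the backup added
    comm -23 before.txt after.txt   # files removed: should be empty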

You got everything right as far as I can see :+1:


I think the question is already answered, but if you want to do something like this anyway, you can do it without locking the repository. Create a script on the NAS that creates an external “lock” file (outside the repo) before the copy and removes it when the copy finishes. Then have the backup script on each host check whether that “lock” file is present and do nothing if it is. It may sound complicated, but it’s actually pretty simple and works well in my experience. I use it for a different reason, but it could easily be adapted here.
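A minimal sketch of the host-side half, assuming the NAS share is mounted at /mnt/nas and the marker is the same file the copy script on the NAS creates, just seen through the host’s mount point (all paths are placeholders):

    #!/bin/sh
    # Backup script on each host: do nothing while the NAS copy is running.
    MARKER=/mnt/nas/restic-copy.lock

    if [ -e "$MARKER" ]; then
        echo "cloud copy in progress, skipping backup" >&2
        exit 0
    fi

    restic -r /mnt/nas/restic-repo backup /home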

Thanks for all the comments!

Sounds like my fears were unwarranted.

I can easily avoid copying to the cloud during critical operations like prune, since the machine that prunes the repository is the same one that pushes the copy of the restic repo up to the cloud.

I love communities like this, where we can ask for and get answers!


That sounds like a solid plan.

Please remember that when you sync to the cloud after a prune, you need to remove the files on the cloud storage that have been removed locally, otherwise the prune operation won’t have any effect on the remote repo.
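For example, if the upload happens to be done with rclone (just one common choice; the remote name and paths are placeholders), sync propagates deletions while copy does not:

    # 'rclone sync' makes the destination match the source, deleting
    # files that prune removed locally; 'rclone copy' would leave them behind.
    rclone sync /mnt/nas/restic-repo remote:restic-repo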