Cloning a repository to another location?

Is there a best-practice way to clone a restic repository to another location?

My scenario is that I currently back up all machines to a local server via SSH and then clone that backup to a different location with rsync as disaster insurance.

I now want to do the backups using restic, and I’m guessing that rsyncing the repo every night is not the cleverest way, as rsync doesn’t understand what’s going on inside the repo. As far as I can see, there is no function in restic to do this other than having the clients do two backup runs, right?

Any tips? Thanks!


restic will not update any existing files in the repo, so it should work nicely together with rsync and not copy any data twice. But it might also propagate errors to the second copy, so it might be a good idea to just run a second backup instead nonetheless.

Hello @nicnab.

When it comes to a restic repo, rsync can actually do a pretty good job of syncing it to a different location.

restic backup will only add files to the repo, and rsync will detect that and transfer only what is new. When you run prune, some packs will be rewritten and some deleted. Again, rsync can detect all these changes, transfer what has changed, and delete what is gone (with the --delete flag, of course).

So, I don’t see any inefficiencies there.
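A minimal sketch of such a sync, assuming a local repo at /srv/restic-repo and an SSH-reachable destination (both paths and the host name are placeholders):

```shell
#!/bin/sh
# Mirror a local restic repo to another machine over SSH.
# SRC and DEST are assumptions -- substitute your own paths/host.
SRC=/srv/restic-repo/
DEST=backup@repobackupserver:/srv/restic-repo/

# -a       recurse and preserve permissions/timestamps
# --delete remove files on the destination that prune removed locally
rsync -a --delete "$SRC" "$DEST"
```

Note the trailing slash on SRC: it makes rsync copy the repo’s contents rather than creating a nested directory on the destination.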

Personally, I’m using rclone sync to replicate a restic repo to cloud storage, and it has also been working very well since I implemented it.


Sounds great! I will use rsync then. Errors in the repository must be avoided in the first place, and the cloning only takes place weekly anyway.

Thank you for your help!

I am doing this and plan to add a ‘restic check’, or maybe even ‘restic check --read-data’, test to my rsync script. If my local backup has any issues, I don’t want to overwrite my potentially clean remote backup!
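Sketched out, with placeholder paths and the repo password read from a file, that gating could look like:

```shell
#!/bin/sh
# Only mirror the repo offsite if it passes `restic check` first.
# Repo path, password file, and remote host are assumptions.
export RESTIC_REPOSITORY=/srv/restic-repo
export RESTIC_PASSWORD_FILE=/root/.restic-password

if restic check; then
    rsync -a --delete /srv/restic-repo/ backup@remote:/srv/restic-repo/
else
    echo "restic check failed -- leaving the remote copy untouched" >&2
    exit 1
fi
```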

In my case, my remote server is a small FreeBSD box, and I’m thinking about running periodic ‘restic check’ runs on that box as well, just to ensure all is well with my data. This would seem to be a real advantage of backing up to a physical machine compared to cloud storage.


Good idea, yes. I’m doing this manually from time to time and so far haven’t come across any problems, but it surely sounds better to automate it and get problems mailed to me.

I back up everything into one repo daily, prune on Sunday at noon, and rsync it all away every night. I get confused checking the pruning results, but I guess once that has been running for a couple of months, I will see the pattern :sweat_smile:
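For reference, that schedule as a crontab might look like the following (paths, times, and retention flags are illustrative, and restic would also need its password, e.g. via RESTIC_PASSWORD_FILE):

```shell
# daily backup at 01:00
0 1 * * *  restic -r /srv/restic-repo backup /data
# prune on Sunday at noon
0 12 * * 0 restic -r /srv/restic-repo forget --keep-daily 7 --keep-weekly 4 --prune
# nightly offsite copy at 04:00
0 4 * * *  rsync -a --delete /srv/restic-repo/ backup@remote:/srv/restic-repo/
```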

At some point you should also try to restore your files and verify that it works. It would be horrible to think you’ve been backing up for months and then the data you actually need is not there, or there is some other issue. Testing backups is critical.
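A quick way to exercise a restore without touching the original data; repo and target paths are placeholders:

```shell
# Restore the latest snapshot into a scratch directory...
restic -r /srv/restic-repo restore latest --target /tmp/restore-test

# ...and spot-check it against the live tree (restic recreates the
# absolute path under the target directory).
diff -r /data /tmp/restore-test/data
```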


Absolutely. Actually I recently used the backup to switch laptops and it worked very well!


So what will happen if the repo is in use, with a client adding new files, and while that is going on you start an rsync of the repo to repobackupserver?
As I understand it, rsync will at the start create a list of the files present in the repo and copy them, but not the files added to the repo after rsync started. So at the end you will have, on repobackupserver, a copy of the repo in an inconsistent state: some files from the last backup were added, some weren’t.
Is that OK for a restic repo?
I expect it to be OK, because you have the same situation when a normal backup is interrupted in the middle, but I may be missing something.

It should be OK for restic. What will happen is effectively the same as when you run a backup job with restic and it gets cancelled halfway. There will be uploaded files, but since there won’t yet be a snapshot that references that data, they will just be lingering. The next time you run a backup, the already uploaded data will be reused (not uploaded again), and on your next rsync after that backup completes, you will have the new snapshot in your rsynced repo as well.

Restic is designed such that backing up only adds files to the repository, so it will not have fiddled with your existing data and/or snapshots. For this reason, it is fine to have a repository that contains all your previous snapshots plus only some of the uploaded data from the last backup run.


Thank you for clarification.


You might want to sync the snapshots directory first, and once that is finished, the data/index directories. That way you can be sure that all copied snapshots are in fact accessible.
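A sketch of that ordering, using the standard repo layout (config, keys/, snapshots/, index/, data/) and placeholder paths:

```shell
#!/bin/sh
# Copy snapshots first: every snapshot that makes it into the copy
# existed before the later index/data sync started, so the data it
# references will be transferred too.
SRC=/srv/restic-repo
DEST=backup@remote:/srv/restic-repo

rsync -a --delete "$SRC/snapshots/" "$DEST/snapshots/"
rsync -a --delete "$SRC/keys/"      "$DEST/keys/"
rsync -a --delete "$SRC/index/"     "$DEST/index/"
rsync -a --delete "$SRC/data/"      "$DEST/data/"
rsync -a          "$SRC/config"     "$DEST/"
```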

When rsync and the backup run in parallel, you might see rsync copying packs that are currently being uploaded, which end up incomplete in the copy on the first try. A later rsync run will/should then finish copying the remainder of these files.

On the other hand, it’s never really nice to have a machine do two (even non-conflicting) things at a time, as you then have to split CPU, I/O, and networking capacity, usually slowing everything down.

What I usually do is have the backup server run a cron script that SSHes into all machines to be backed up, one after the other, performs pre-backup things (like DB dumps), performs the backup, and then goes on to the next machine. When all is done, I check whether it’s a Friday night, in which case I run forgets and a prune, and after all of that is finished, I rsync to a remote location. It’s simple and works well for me.
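A trimmed-down sketch of that kind of driver script; host names, paths, and the pre-backup step are all placeholders, and each client is assumed to have RESTIC_PASSWORD_FILE set up:

```shell
#!/bin/sh
# Runs on the backup server: back up each client in turn over SSH,
# prune and check on Fridays, then mirror the repo offsite.
REPO=/srv/restic-repo
HOSTS="alpha beta gamma"   # placeholder client names

for host in $HOSTS; do
    # example pre-backup step, e.g. a database dump
    ssh "root@$host" "pg_dumpall > /var/backups/db.sql"
    ssh "root@$host" "restic -r sftp:backupserver:$REPO backup /etc /home /var/backups"
done

if [ "$(date +%u)" -eq 5 ]; then   # 5 = Friday
    restic -r "$REPO" forget --keep-daily 7 --keep-weekly 5 --prune
    restic -r "$REPO" check
fi

rsync -a --delete "$REPO/" backup@offsite:/srv/restic-repo/
```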


Hi guys,

very interesting discussion, and very relevant for what I’d like to do.

My plan (simplified):

  1. scheduled backup using restic backup to a given repo (local to local)
  2. scheduled copy of the repo using rsync (local to remote)
  3. scheduled check of both repos via restic check --read-data (local and remote)
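Expressed as three independent cron entries (times, paths, and the remote host are just examples):

```shell
# 1. nightly local backup
0 1 * * * restic -r /srv/restic-repo backup /data
# 2. copy to the remote, deliberately not chained to the backup
0 3 * * * rsync -a --delete /srv/restic-repo/ backup@remote:/srv/restic-repo/
# 3. weekly deep verification of both repos
0 6 * * 6 restic -r /srv/restic-repo check --read-data
0 6 * * 0 ssh backup@remote restic -r /srv/restic-repo check --read-data
```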

I’d like to keep those tasks independent, i.e. no batch where one comes after the other, but independent tasks that might interleave. Therefore my interest is in the points where restic ‘interacts’ with itself or with rsync.

1,2: restic writes a backup to the repo while rsync reads the repo (local)
1,3: restic writes a backup to the repo while restic checks the repo (local)
2,3: rsync writes the copy of the repo while restic checks the repo (remote)

Core question: are these operations safe, in the sense that neither repo (local or remote) will be corrupted and none of the tasks will fail with errors?
Note that it is OK if the repos are temporarily incomplete, as long as what’s there can be processed.

From what I understood answers are:
1,2: rsync does not fail; the remote repo might end up incomplete (e.g. missing snapshots) but will be fixed on the next run of restic backup

1,3 (restic backup writes while restic check runs): ?

2,3 (rsync write while restic check): ?

Any thoughts welcome. Even if you indicate that I should better open another topic.

Thx buddies.


In general your plan sounds fine. I do the same (local backups, then rsync to a remote location). Answers to your questions:

Backup while rsync reads: not 100% sure, but 99% rsync won’t care and will just sync whatever is there. restic won’t even notice.

Backup while check runs: restic check creates an exclusive lock, so restic backup will fail (without breaking anything).

rsync writing the remote while check runs there: should be fine, since check doesn’t change the repo.
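If the occasional failed backup run is a nuisance, one workaround (not a restic feature) is to let the two local jobs queue behind a shared lock; a sketch using util-linux flock, with an arbitrary lock path:

```shell
# In the check job: wait up to an hour for the lock, then check.
flock -w 3600 /var/lock/restic.lock restic -r /srv/restic-repo check

# In the backup job: same lock, so the two never run concurrently.
flock -w 3600 /var/lock/restic.lock restic -r /srv/restic-repo backup /data
```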

Is there any particular reason why you don’t want to orchestrate the whole thing? I just wrote a bash script that runs on the backup server, executes some pre-backup stuff, opens SSH sessions to all machines to be backed up, and executes restic. On Friday nights it does some pruning and checking, because I have all weekend. Works pretty well so far.


Hi Nico,

Making sure the steps are sequential requires at least the following, which I try to avoid:

  1. Long-running jobs: imagine a new gigabyte of data, for which backup, then check, then rsync, then check run one after the other. I prefer independent bits that succeed on their own.

  2. Different machines over the internet need orchestration. Connection problems would cause the entire thing to fail. I prefer the smallest possible units to fail.

I asked that particular question separately. Here are my thoughts on why this might fail (added after my original post in this thread):



Okay, yes, rsyncing into a running check will probably make restic ring alarm bells. I don’t see a way to get where you want to be, but that doesn’t mean there isn’t one. Maybe try a new thread and see what the experts come up with!


Thanks for your time, very much appreciated!