Interrupting huge backup

I’m doing a mega-backup of 1.8 TB, the first one of this dataset, using the backblaze/b2 backend. It looks like it will take a total of 5 weeks due to the upload limit on my home connection, and I’m currently about 2 weeks in. It looks like I will have to interrupt this backup, as the system doing it will need to be turned off and moved.

Looking at other similar questions on here, it looks like restic can pick up from where it left off, but I’m wondering how this works behind the scenes. Would the interrupted run show up as another snapshot? Should I just interrupt (i.e. Ctrl-C) the current backup process? Given the time investment, it would be good to know what to expect.

Is there anything else I should do in preparation…?

Thanks in advance!

I can’t think of anything else that needs doing - just interrupting it with Ctrl-C should be fine (I’ve done it plenty of times :).

Restic has already uploaded a bunch of data, but it hasn’t saved any snapshot in your repository yet. Next time you run it, it will scan through all the files you asked it to back up again, but it will only upload the data that isn’t already in the repository. So in short, it will just pick up where it left off.

It will only save a snapshot when it’s done with the complete backup run, so whenever you let it complete you will see the first snapshot.

Note that restic will have to re-hash everything locally to determine what has already been uploaded, so you will see pretty significant disk I/O and CPU usage from restic as it works through everything it has already uploaded. It might take a while before you see it start uploading new content.
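To make the interrupt/resume behaviour above concrete, here’s a minimal sketch. The repository location, password file, and source path are made up for illustration, and the small `restic_cmd` helper just prints the command when restic isn’t installed, so the sketch stays readable and runnable anywhere:

```shell
#!/bin/sh
# Hypothetical repository and paths - substitute your own.
RESTIC_REPOSITORY="b2:my-bucket:backups"
RESTIC_PASSWORD_FILE="$HOME/.restic-password"
export RESTIC_REPOSITORY RESTIC_PASSWORD_FILE

restic_cmd() {
    # Run restic if it's installed; otherwise just echo the command
    # so this sketch works on machines without restic.
    if command -v restic >/dev/null 2>&1; then
        restic "$@"
    else
        echo "restic $*"
    fi
}

# First (interrupted) run: uploads pack files to the repository,
# but writes no snapshot.
restic_cmd backup /data

# ...Ctrl-C here, move the machine, etc...

# Resuming is literally re-running the same command: restic re-reads
# and re-hashes the source files, skips chunks whose hashes are
# already in the repository index, and uploads only the missing ones.
restic_cmd backup /data

# No snapshot is listed until a backup run completes end to end.
restic_cmd snapshots
```

The key point is that there is no special “resume” flag - the deduplication that restic does on every run is what makes re-running the same command cheap.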

I was doing a backup of 1.5 TB (lots of files of various sizes) and it was interrupted several times while being uploaded over a slow connection. Some of the interruptions were me stopping it for various reasons, others were connectivity issues.

With my backup on an HDD it took nearly 4 hours to re-hash. The same data on a LUKS-encrypted drive in a server with a weak CPU took much longer (CPU-bound). The same data on SSD/NVMe took about 50 minutes (again limited by the CPU).

My upload was slow because I was on the other side of the world from my backup target. I ended up working around it by uploading different parts of my backup through different connections (some via VPN, some via a VPS and some directly via my ISP - they all take different routes to my backup target). In the end I ran a backup of everything, which was just one big re-hashing session because all the parts were already uploaded. Now I need to prune a bit :slight_smile:

If you have a large enough external HDD and a physical location you can go to with a faster connection, you could back up to the HDD then rsync/rclone the HDD to the cloud destination from the faster connection. This is a good way to “seed” the backup location with an initial backup. Then follow-up backups should be substantially faster since only changed files need to be hashed next run.
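The seeding workflow above might look something like this sketch. The mount point, bucket names, and source path are placeholders, and the rclone remote (`b2remote:`) is assumed to already be configured; the `run` helper echoes the command when the tool isn’t installed:

```shell
#!/bin/sh
# Placeholders - adjust to your setup.
LOCAL_REPO="/mnt/external-hdd/restic-repo"
REMOTE_RESTIC="b2:my-bucket:restic-repo"          # restic's B2 backend syntax
REMOTE_RCLONE="b2remote:my-bucket/restic-repo"    # a pre-configured rclone remote

run() {
    # Execute the tool if installed, otherwise print the command so
    # the sketch can be read (or dry-run) anywhere.
    if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "$*"; fi
}

# 1. Back up to the external HDD at local-disk speed.
run restic -r "$LOCAL_REPO" backup /data

# 2. From somewhere with a fast uplink, mirror the repository to the
#    cloud. A restic repository is just a directory tree of ordinary
#    files, so rclone can copy it verbatim.
run rclone sync "$LOCAL_REPO" "$REMOTE_RESTIC" 2>/dev/null || \
run rclone sync "$LOCAL_REPO" "$REMOTE_RCLONE"

# 3. Point subsequent backups at the remote repository; unchanged
#    files are skipped by metadata, so follow-up runs are fast.
run restic -r "$REMOTE_RESTIC" backup /data
```

Note that restic and rclone use different syntax for the same B2 location, which is why the two remote variables differ.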

Thanks so much for all the answers, guys :+1:

Unfortunately I’m locked down and though I’ve just started a new job with an employer who I bet will have a nice fast connection, I doubt I’m going to be able to make it into the office for at least the next few months…

Loving restic - it’s helped me out loads recently :smiley: