Using Restic on Ubuntu 21.04 on a month-old Ryzen 7 laptop with 40GB RAM. I’m backing up to a USB-connected 2TB spinning hard drive. Restic tells me a 1TB backup job will take 37 hours! Rsync only took about 14 hours under the same conditions. Is it normal for Restic to be this slow?
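For context, those times imply quite different average throughputs. A back-of-the-envelope calculation (assuming 1 TB = 10^12 bytes and that both jobs copied the same 1 TB):

```python
# Average throughput implied by the reported job durations.
# Assumption (mine, not from the post): 1 TB = 10**12 bytes.
TB = 10**12  # bytes

def mb_per_s(total_bytes: int, hours: float) -> float:
    """Average throughput in MB/s for a job of the given duration."""
    return total_bytes / (hours * 3600) / 1e6

restic = mb_per_s(TB, 37)  # ~7.5 MB/s
rsync = mb_per_s(TB, 14)   # ~19.8 MB/s

print(f"restic: {restic:.1f} MB/s, rsync: {rsync:.1f} MB/s")
```

So the restic job is averaging under 10 MB/s, well below what even a USB spinning disk can sustain for large sequential writes.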
Here’s a screenshot of the Restic job details:

[screenshot]
Which restic version are you using? Please start by re-running the test with the latest restic version, which is 0.12.0, and let us know the output from that.
I’m using 0.11. I’m halfway through the initial (slow) backup; after that’s done I’ll update Restic to 0.12. Only 21 hours left! I’m praying fervently that Ubuntu doesn’t crash before the backup completes. I’d hate to have to start all over again.
Is there a significant speed difference between 0.11 and 0.12?
Someone else told me that Restic does a lot more stuff in the background compared to rsync, hence the slow speed for the initial Restic backup.
That’s the case. At a very high level, rsync reads each source file and writes it to the destination file. Restic reads the file, splits it into blocks, deduplicates the blocks (so each unique block is only saved once), encrypts them, bundles several into pack files, and only then saves those to the destination directory.
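That pipeline can be sketched in a few lines. This is purely illustrative: the fixed-size chunking and XOR "encryption" below are placeholders for restic's actual content-defined chunking and AES encryption, though restic really does use SHA-256 hashes as chunk IDs:

```python
# Toy sketch of a dedup backup pipeline: split, dedupe by hash,
# "encrypt", and store. Fixed-size chunks and XOR stand in for
# restic's real content-defined chunking and AES encryption.
import hashlib

CHUNK_SIZE = 4  # bytes; absurdly small, just for the demo

def backup(data: bytes, store: dict) -> list[str]:
    """Store each unique chunk once; return the list of chunk IDs."""
    ids = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        cid = hashlib.sha256(chunk).hexdigest()
        if cid not in store:                             # deduplication
            store[cid] = bytes(b ^ 0x5A for b in chunk)  # toy "encryption"
        ids.append(cid)
    return ids

store: dict[str, bytes] = {}
ids = backup(b"aaaaaaaabbbbaaaa", store)  # "aaaa" repeats three times
print(len(ids), "chunks referenced,", len(store), "stored")  # 4 referenced, 2 stored
```

The extra hashing and encryption cost CPU, but the bigger point is that repeated data is only written once, which is also why subsequent restic backups are much faster than the first.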
Hopefully you’ve successfully finished your first backup by now?!
One other bottleneck you might have experienced is the SMR (shingled magnetic recording) drives used in many USB enclosures today. Given that you’ve got a nice beefy machine, this may be slowing you down far more than anything addressed by @fd0’s suggestions above.
Restic’s small default pack-file size means that a repo often comprises many more files than the source data, and all of those small files are a speed-killer for SMR drives once their internal cache is full.
The above problem isn’t limited to Restic’s backup operation: try rsyncing a restic repo to an SMR drive and you’ll see it slow to a crawl.
Other than avoiding SMR USB drives (which is increasingly impractical), or switching to a backup tool such as Borg that uses bigger data files (but then you miss out on other Restic goodness!), the solution is to increase the repo’s data-file (“pack”) size. That’s still only possible by applying a PR, for example this. Fingers crossed for the next release: after the epic pruning speed improvements of 0.12, I can’t think of another in-the-works change that would be as big an improvement for as many of Restic’s (potential) users, particularly as small pack files also hurt performance on some cloud-backup targets.
Judging by the slow pace of implementation of this feature, I’m guessing none of Restic’s primary devs suffer from this bottleneck (not a criticism; that’s their prerogative, and their work on Restic is greatly appreciated!).