I've got a new problem with another restic job and another backup HDD.
It runs very slowly, at only about 9 MB/s. After almost 10 hours, only 320 GB have been saved (see screenshot).
My old data disk (mainly my own photos and videos) is connected as a backup HDD via USB 3.1 (in an external enclosure). The port is fine: I backed up about 1 TB in 1.5 hours with Clonezilla, though onto a different external 2 TB HDD.
Is it the hard disk's enclosure, or do I need different settings?
My batch file restic-backup-ralf-daten2.sh looks like this:
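(The script itself didn't survive into this thread. For orientation, a minimal restic backup script of this shape might look like the following; the repository path, password file, and source directory are placeholders, not the actual values from the post:)

```shell
#!/bin/sh
# Illustrative sketch only -- paths below are hypothetical placeholders.
export RESTIC_PASSWORD_FILE=/root/.restic-password

restic -r /media/backup-hdd/restic-repo \
       backup /home/ralf/Daten \
       --verbose
```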
Does anyone have an idea why this is so slow?
The other backups, to the NAS (only 4 TB) and to an internal HDD, are much faster. The PC is also relatively fast (32 GB RAM).
I went ahead and installed "gnome-disk-utility" and benchmarked with it.
The results actually look fine: both hard drives (sda and the USB one) are almost equally fast.
I think I've found the problem, but haven't solved it yet.
It's because my backup hard drive is an SMR HDD, as is the new internal 18 TB drive. The 8 TB drive, which I now want to use as an external HDD, apparently can't be driven properly via USB, which is why it is so terribly slow.
So both the new and the old HDD are SMR hard drives. Internally this isn't noticeable; when backing up with restic, it's extreme (see above).
Any ideas so I don't have to throw away the hard drive?
Maybe special parameters for restic or something like that?
That might depend on the HDD enclosure and the actual workload. I wouldn't be surprised if the HDD works pretty fast as long as you only write a few GB at a time. But so far I haven't found a good description of the performance characteristics of an SMR HDD that isn't just based on anecdotes.
Thanks for your answer, but I don't know how this works…
I read that this is not a standard option for restic, but I don't know how to change it in the source code.
I'm not a programmer.
Maybe your photo files are bigger than 4 MiB (the default size of restic pack files), hence you don't see the same small-file-related slowdown when copying the files across directly.
Yes, all the workarounds offered above do require code changes and recompiling. However, from the PR discussion @MichaelEischer linked you to, it sounds like it hopefully won't be too long before the new option is included.
I may be missing something here, but it appears that all of your tests use straightforward reading or writing. There doesn't appear to be any testing that includes processing (no deduplication, no encryption, …). If you want the best performance for your backups, you should use a less processor-intensive method, improve your processor capabilities, or just bear with the slow first backup.
Thanks for the feedback. The alternative test with FreeFileSync is in fact not comparable to restic because it only copies directly.
However, this tool's throughput also collapses later on, dropping below 40 MB/s with a falling tendency.
It is definitely the hard disk's SMR technology that makes all backup systems collapse so badly.
The question remains: will other parameters help me improve this so it doesn't take so long? (200-300 hours for 6.5 TB of photos is already tough.)
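To put numbers on that, a quick back-of-the-envelope sketch in awk (decimal units; 40 MB/s is roughly what FreeFileSync managed above):

```shell
# Rough duration estimates for 6.5 TB of photos
awk 'BEGIN {
    size_mb = 6.5e6                        # 6.5 TB in MB (decimal)
    printf "%.0f h at  9 MB/s\n", size_mb / 9  / 3600
    printf "%.0f h at 40 MB/s\n", size_mb / 40 / 3600
}'
# prints: 201 h at  9 MB/s, 45 h at 40 MB/s
```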
Where can I find recompilation instructions and where can I find the parameters in the source code?
I took the plunge and recompiled restic. The constant minPackSize in packer_manager.go now has the value 128 * 1024 * 1024 (default: 4 * 1024 * 1024). I hope this is the value you meant?
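For anyone following along, the rebuild amounts to roughly these steps. The file path, the exact constant spelling, and the build command are my assumptions about restic's source layout at the time and may differ between versions:

```shell
# Sketch: rebuild restic with a larger minimum pack size (assumed layout).
git clone https://github.com/restic/restic.git
cd restic
# Raise the constant from 4 MiB to 128 MiB (file and name may vary by version):
sed -i 's/minPackSize = 4 \* 1024 \* 1024/minPackSize = 128 * 1024 * 1024/' \
    internal/repository/packer_manager.go
go build ./cmd/restic    # produces a ./restic binary in the current directory
```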
At least it's running (right now).
The question is whether there will be problems if I continue the backup I started (still with the old value 4 MB) with the new restic version?
In any case, restic check says "everything is okay".
I'm curious if it's going a little better with the SMR hard drive.
Regards,
Ralf
PS:
After around 30 minutes, the average works out to 80 MB/s.
That's a value that's OK over USB, I think.
I'm curious when it will drop again, and by how much.
Congratulations on your adventure - this was also what motivated me to learn how to compile this stuff.
Yes, what you describe sounds right as a hard-wired way to produce a 128 MiB target pack size (the alternative options listed above make larger code changes to expose it as a command-line option).
As I understand it:
While there is such a thing as "too big", your chosen pack size shouldn't lead to any incompatibility issues (both your version and the standard version can read either pack size)
Anything you add to a repo with your executable will be packed into the larger files
However, any existing data will remain in small files, and if/when you use the release binary, this'll also add any new data as smaller files.
I'll be interested to hear your final conclusions on performance, especially since my experiences were when rsyncing an existing repo to USB, not when using restic to directly write a new repo to one.
Thanks so much. Your tip was the game-changer for my problem.
[…]
The primary problem is still the hard driveâs SMR technology.
The larger pack size now compensates for that, and I'm very happy about it, because now the PC will probably be done with the full backup in a little over a day.
The job is still going and hardly ever collapses. I think about 60 MB/s is an acceptable value via USB.
The CPU load has increased noticeably, but I can still do normal work that doesnât require 100% CPU power.
So in the end, the full backup completed at about 45 MB/s on average.
The CPU load is noticeably higher than with the 4 MiB pack size, but that's okay.
Another job is currently running with a "normal" old 2 TB Seagate disk, i.e. one with CMR technology.
Here the throughput is about twice as high, around 90 MB/s. I'm only backing up JPG files here, about 1.4 TB.
I use a SATA docking station with USB 3.0 interface and save my photo and video data on old hard drives. And with another job I back up my home on a NAS with restic via cron every day.
The routines are now running as I would like them to. Okay, it can always be faster, but thatâs enough to get the data volumes under control.
That's great, and thanks for the update! Yes, that matches my experience. It's more about getting the SMR drive times to be tolerable, or "good enough", to the point where it's not worth the extra expense of switching to only CMR. Glad you made it!
I'm in the same situation, struggling with the slowness of my external SMR HDD.
What I don't understand about the SMR technology is: how does a bigger pack file size improve write performance? Can someone explain that (in simple terms)?