Restic first backup very slow

Hello,

I have a new problem with another restic job and another backup HDD.
It runs very slowly, at only about 9 MB/s. After almost 10 hours, only 320 GB have been saved (see screenshot).
My old data disk (mainly my own photos and videos) is connected as a backup HDD via USB 3.1 (in an external enclosure). The port is fine; I backed up about 1 TB in 1.5 hours with Clonezilla, although that was to a different external 2 TB HDD.

Is it the enclosure of the hard disk, or do I have to use different settings?

My script restic-backup-ralf-daten2.sh looks like this:

#!/bin/bash
# Script for the weekly restic backup

# Store the repository password
export RESTIC_PASSWORD=******
BACKUPDIR="/media/backup8T/backup/restic/ralf/daten2"
Scriptlog=/home/ralf/cron.log

DATUM="$(date +%d-%m-%Y)"          ## set the date format
ZEIT="$(date +%H:%M)"              ## set the time format

restic -r "$BACKUPDIR" backup --exclude-file=/home/ralf/exclude-files-ralf-daten2.txt /home/ralf/Daten2

echo "$DATUM $ZEIT -- restic backup for Ralf ran locally!" >> "$Scriptlog" 2>&1

Does anyone have an idea why this is so slow?
The other backups, to a NAS (only 4 TB) and to an internal HDD, are much faster. The PC is also reasonably fast (32 GB RAM).

Thank you for your feedback.

Regards, Ralf

Update:
I ran disk tests on the backup disk with dd. Write test:

dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc 

Result: 144 MB/s

Read test without cache (dropping the page cache first, so the read actually hits the disk):

sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
dd if=tempfile of=/dev/null bs=1M count=1024

Result: 174 MB/s

Okay, not very fast, but actually fast enough for my purposes.

Regards, Ralf

Update2:

I installed ‘gnome-disk-utility’ and measured with that as well.
It actually looks good here too: both hard drives (sda and the USB drive) are almost equally fast.

Any ideas how I can make this faster?

Thank you for your answers.

Greetings, Ralf

[Screenshot: data on the 18 TB HDD (sda)]
[Screenshot: data on the 8 TB HDD (USB)]
[Screenshot: benchmark of the 8 TB HDD]
[Screenshot: benchmark of the 18 TB HDD]

Update3:

Read test with hdparm.
The results seem fine.
sda

hdparm -tT --direct /dev/sda 

/dev/sda:
 Timing O_DIRECT cached reads:   898 MB in  2.00 seconds = 448.50 MB/sec
 Timing O_DIRECT disk reads: 822 MB in  3.01 seconds = 273.44 MB/sec

sdd

hdparm -tT --direct /dev/sdd 

/dev/sdd:
 Timing O_DIRECT cached reads:   476 MB in  2.00 seconds = 237.80 MB/sec
 Timing O_DIRECT disk reads: 566 MB in  3.00 seconds = 188.66 MB/sec

I still haven’t identified the problem.

Greetings, Ralf

Update4:

I think I found the problem, but haven't solved it yet.
My backup hard drive is an SMR HDD, as is the new internal 18 TB drive. The 8 TB drive, which I now want to use as an external HDD, apparently cannot be driven properly over USB, which is why it is so terribly slow.

So both the new and the old HDD are SMR drives. Used internally this is not noticeable, but it is extreme when backing up with restic (see above).

Any ideas so that I don't have to throw the hard drive away?
Maybe special restic parameters or something like that?

Thank you for your feedback.

Greetings, Ralf

You can increase restic’s minimum (pack) file size. It should make a big difference to your particular bottleneck. For example:
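
A minimal sketch of the source-level workaround (there is no command-line option for this yet; the constant name and file path below are taken from later posts in this thread and may differ between restic versions):

git clone https://github.com/restic/restic.git
cd restic
# edit internal/repository/packer_manager.go and change
#   const minPackSize = 4 * 1024 * 1024
# to, for example:
#   const minPackSize = 128 * 1024 * 1024
go build ./cmd/restic   # needs a Go toolchain; produces a ./restic binary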

The PR to increase the minimum pack size is Feature: min packsize flag by metalsp0rk · Pull Request #3731 · restic/restic · GitHub.

That might depend on the HDD enclosure and the actual workload. I wouldn't be surprised if the HDD works pretty fast as long as you only write a few GB at a time. But so far I haven't found a good description of the performance characteristics of SMR HDDs that isn't just based on anecdotes.

Thanks for your answer, but I don't know how this works.

I've read that this is not a standard option for restic, but I don't know how to change it in the source code.
I'm not a programmer.

Regards,
Ralf

Thanks also for your answer, but I don't know how this works either.

The parameter --pack-size returns “unknown flag: --pack-size”.

Do I have to change the source code here too?
How does that work?

Regards,
Ralf

For further tests I tried another program, ‘FreeFileSync’: 3.15 TB of photos (including raw files), with the same source HDD and the same target HDD (on USB).

The program is much faster than restic.

What could be the reason?

Regards,
Ralf

Maybe your photo files are bigger than 4 MB (the default size of restic pack files), so you don't see the same small-file-related slowdown when copying the files across directly.

Yes, all the workarounds offered above do require code changes and recompiling. However, from the PR discussion @MichaelEischer linked you to, it sounds like it hopefully won't be too long before the new option is included :slightly_smiling_face:.

I may be missing something here, but it appears that all of your tests use straightforward reading or writing. There does not appear to be any testing that includes processing (no deduplication, no encryption, …). If you want the best performance for your backups, you should use a less processor-intensive method, improve your processor capabilities, or just bear with the slow first backup.

Thanks for the feedback. The alternative test with FreeFileSync is indeed not comparable to restic, because it only copies files directly.
However, this tool also slows down later on and then manages less than 40 MB/s, with a downward trend.

It is definitely the SMR technology of the hard disk that makes all backup systems slow down so much.
The question remains: will other parameters help me improve this so that it doesn't take so long (200-300 hours for 6.5 TB of photos is already tough)?

Where can I find instructions for recompiling, and where in the source code can I find the parameters?

Thank you very much.

Regards,
Ralf

I went ahead and recompiled restic. The constant minPackSize in packer_manager.go now has the value 128 * 1024 * 1024 (default: 4 * 1024 * 1024). I hope this is the value you meant?

At least it's running (right now). :slight_smile:
The question is whether there will be problems if I continue the backup I started (still with the old 4 MB value) with the new restic version.

In any case, restic check says ‘everything is okay’.

I’m curious if it’s going a little better with the SMR hard drive.

Regards,
Ralf

PS:
After about 30 minutes, the average works out to 80 MB/s.
That's an acceptable value over USB, I think.
I'm curious whether it will drop again, and by how much.

Congratulations on your adventure - this was also what motivated me to learn how to compile this stuff :slight_smile:.

Yes, what you describe sounds right as a hardwired way to produce a 128 MB target pack size (the alternative options listed above make larger code changes to expose it as a command-line option).

As I understand it:

  • While there is such a thing as “too big”, your chosen pack size shouldn't lead to any incompatibility issues (both your version and the standard version can read either pack size)
  • Anything you add to the repo with your executable will be packed into the larger files
  • However, any existing data will remain in small files, and if/when you use the release binary again, it will also add any new data as smaller files.

I'll be interested to hear your final conclusions on performance, especially since my experience was with rsyncing an existing repo to a USB drive, not with using restic to write a new repo directly to one.

Thanks so much. Your tip was the game changer for my problem. :slight_smile:

The primary problem is still the hard drive's SMR technology.
The larger pack size now compensates for that, and I'm very happy about it, because the PC will probably finish the full backup in a little over a day. :slight_smile:

The job is still running and hardly slows down at all. I think about 60 MB/s is an acceptable value via USB.
The CPU load has increased noticeably, but I can still do normal work that doesn't require 100% of the CPU. :wink:

Thank you and everyone else for the tips.

Greetings,
Ralf

128 MB is still within the value range allowed by Feature: min packsize flag by metalsp0rk · Pull Request #3731 · restic/restic · GitHub. There is a theoretical corner case involving hundreds of thousands of very small files, but it is rather unlikely to become a problem.

Hello Nev,

so in the end the full backup completed at about 45 MB/s on average.
The CPU load is noticeably higher than with the 4 MB pack size, but it's okay.

Another job is currently running with a ‘normal’ old 2 TB Seagate disk, i.e. one with CMR technology.
Here the throughput is about twice as high, around 90 MB/s. I'm only saving JPG files here, about 1.4 TB.

I use a SATA docking station with a USB 3.0 interface and save my photo and video data to old hard drives. With another job I back up my home directory to a NAS with restic via cron every day.
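
A minimal sketch of what such a crontab entry can look like, using the weekly script from earlier in this thread (the schedule itself is an assumption):

# hypothetical schedule: Sundays at 03:00
0 3 * * 0 /home/ralf/restic-backup-ralf-daten2.sh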

The routines are now running the way I want them to. Okay, it could always be faster, but it's enough to keep the data volumes under control. :wink:

Greetings,
Ralf

[Screenshot: CPU load during the restic backup]

That’s great, and thanks for the update! Yes, that matches my experience. It’s more about getting the SMR drive times to be tolerable, or “good enough” to the point where it’s not worth the extra expense of switching to only CMR. Glad you made it!

I’m in the same situation, struggling with the slowness of my external SMR HDD.

What I don't understand about the SMR technology is: how does a bigger (pack) file size improve write performance? Can someone explain that in simple terms?