Slow initial backup speed with SFTP target


I currently have a backup running (755625 files, 367 GiB). It has been running for 28 hours and the ETA is still at 40 h.

Most of the time I see an upload speed of 8 Mbit/s; sometimes it goes up to 30 Mbit/s (the maximum of the current LTE uplink), and sometimes it's somewhere in between.
As far as I can tell, the speed goes up when restic handles bigger files.

The LTE uplink is only temporary, for the first backup, so as not to disturb the daily business; the regular uplink only has 15 Mbit/s. That should not be a problem for the incremental backups, as only a few files change per day.

I'm okay with waiting for the backup to finish, but I wonder if there is something I might have done wrong. Is using VSS the culprit? Or is it just the sheer number of small files?

Would it be faster if I just stopped the backup, downloaded the repository to a local drive, resumed the backup against the local repository, and then pushed it back to the remote server (with multiple connections, to use as much bandwidth as possible)?

Restic 0.13.1 on Windows (running on a Xeon E-2286G, 32 GB RAM, SAS 12 Gb/s drives attached to a HW RAID controller). The restic process is excluded from the antivirus scanner.


I did a rough calculation: 8 Mbit/s is 1 MB/s, which is 3600 MB/h, i.e. 3.6 GB/h.
So your 367 GiB (≈394 GB) will take roughly 110 h.
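The same back-of-the-envelope estimate as a quick throwaway script (the only assumptions beyond the thread are the GiB→bytes conversion and treating 8 Mbit/s as a steady decimal-unit rate):

```python
# Rough ETA for the initial backup at a sustained 8 Mbit/s uplink.
size_gib = 367
size_bytes = size_gib * 1024**3            # GiB -> bytes

uplink_bits_per_s = 8_000_000              # 8 Mbit/s (decimal units)
throughput_bytes_per_s = uplink_bits_per_s / 8   # = 1 MB/s

hours = size_bytes / throughput_bytes_per_s / 3600
print(f"~{hours:.0f} h")                   # prints ~109 h
```

At the LTE peak of 30 Mbit/s the same math gives roughly 29 h, which is why the ETA swings so much depending on the achieved rate.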
To my mind, that's in the expected range :grinning:
The 30 Mbit/s is not guaranteed; you share the bandwidth with others.

hehe, okay :smiley: well… I'll wait.

Yeah, of course, but the cell I'm connected to has enough bandwidth left: if I start an additional upload to the same remote server, the total bandwidth usage reaches ~30 Mbit/s.