[finding the bottleneck] restic v0.16.0 - restoring 1.4M files totaling 1.147 TiB

It seems like restore is not sufficiently optimized for small files, or I’m witnessing something odd here with restic v0.16.0.

The NAS box I’m backing up to has 50 Mbit/s upload, just like the receiving end, so both sender and receiver have around 7 MB/s of upload capacity.

I’m using rest-server … when I try to restore a snapshot of a single disk, even onto NVMe the speed is stuck at around ~2 MB/s on average.
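For reference, the restore invocation looked roughly like this (repo URL and target path are placeholders, not my real ones):

    # hypothetical repo URL and restore target
    restic -r rest:http://nas.example:8000/myrepo restore latest --target /mnt/nvme/restore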

Looking at the restore output, it might be that the number of files transferred in parallel is too low … there is hardly any load on either end, not even on I/O (the sending end has fairly fast Toshiba MG10 HDDs).

Things I tested:

  • restic mount + rclone copy with 12 threads → ~4 MB/s (see the sketch after this list)
  • scp -r of the whole restic repo folder → dead slow, ~512 KB/s
  • rustic-server → no difference
  • rustic restore → errors out with timeouts, even against rustic-server
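For the record, the mount + rclone test looked roughly like this (repo URL and paths are placeholders):

    # hypothetical repo URL and paths; restic mount blocks, so it runs in the background here
    restic -r rest:http://nas.example:8000/myrepo mount /mnt/restic &
    # once the mount is up, copy with 12 parallel transfers
    rclone copy --transfers 12 /mnt/restic/snapshots/latest/ /mnt/nvme/restore/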

Things that might be the cause:

  • The sending end (= NAS box) only has an old dual-core Atom with 1 GB RAM, but then again → stats tell me CPU usage is at ~28% and RAM at ~50% … maybe the NIC? It previously maxed out at 40 MB/s, so that doesn’t make sense either.

The connection itself goes through Tailscale … backing up was not an issue, like I said, and I was able to saturate 7 MB/s almost 100% of the time … I’m a bit clueless right now …
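In case anyone wants to rule out the tunnel itself, the path can be checked with something like this (hostname and Tailscale IP are placeholders):

    tailscale ping nas-box        # shows whether traffic goes direct or via a DERP relay
    ping -c 20 100.64.0.2         # watch for RTT jitter
    iperf3 -c 100.64.0.2 -R       # raw TCP throughput NAS -> client; needs `iperf3 -s` on the NAS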


Can you please move the discussion on rustic to rustic-rs/rustic · Discussions · GitHub (or the rustic Discord channel)?

Removed - that’s just OT because I was looking for a solution as to why the speed is so slow.

Well … turns out the issue was indeed the upstream channel of the shitty provider that hosts the repository.

While it can theoretically reach 7 MB/s, the latency easily jitters by 200 ms, which roughly cuts the effective throughput in half due to TCP overhead … so the numbers weren’t off by too much!
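Back-of-envelope, assuming a ~1 MiB effective TCP window (an assumption, I didn’t measure it): a single TCP stream tops out at roughly window / RTT, so 200 ms of latency alone pulls the ceiling down toward the speeds I saw:

    # rough cap: throughput <= window / RTT (1 MiB window is assumed)
    echo "scale=2; (1024*1024) / 0.2 / 1000000" | bc   # ~5.24 MB/s at a steady 200 ms RTT, less with jitter and loss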

Thank god we’re living in a third-world internet country here … the only thing that helped was an in-place restore onto my MergerFS share, with a lot of deduplicated files cutting a full two-week restore down to a few hours.

Otherwise I’ll take that as a lesson:

Always do an additional weekly backup to an external drive, which makes restoring much less painful …

and only use remote backups as the absolute last resort.
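Something along these lines, with hypothetical mount point and source paths:

    # weekly backup to a locally attached external drive
    restic -r /mnt/external/restic-repo backup /data --tag weekly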