Slow "restic dump" performance with no apparent bottleneck

Hi! Firstly, thank you for such an awesome backup tool. After many weeks of research we decided to use restic as our primary backup solution because of its good backup/restore speed and the absence of the threading limits that borg has.

But restore performance raises questions. Yesterday I tried to restore a single large (235 GB) file (a tar archive compressed with zstd) via restic dump ... stdin | zstd -d -T 4 | pv > /dev/null, and it took 2h20m (132 MB/s on average). OK, maybe we could live with that, but there were no bottlenecks at all in our monitoring dashboard or in tools like atop/htop:

  • None of the processes exceeded 40% of one of the 12 cores;
  • Network usage was stable at 221-267 Mbps (iperf3 to the backup host showed 941 Mbps, so there was enough bandwidth to fill the link);
  • The server had plenty (~56 GiB) of free memory.

Has anyone else encountered this situation? I just don’t understand where the bottleneck is.

Restic version: restic 0.9.6 compiled with go1.13.4 on linux/amd64
Backend: rest-server 0.9.7 compiled with go1.10 on linux/amd64

I’ve manually compiled restic with debug options and am currently rerunning restic dump with the --cpu-profile option, but I’m not sure that will help. The debug log already has 400 thousand lines…

restic dump is not a good way to measure restore performance. It makes far more remote requests than necessary… one request at a time. Try restic restore, and make sure you use the latest master, as a major performance fix was merged a couple of days ago.
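For comparison, a restore of the same file could be sketched like this (repository URL, snapshot ID, and paths below are placeholders, not the poster's actual setup):

```shell
# Restore the file to a local directory instead of streaming it with
# `restic dump`. `restic restore` fetches pack files concurrently,
# while `restic dump` issues one request at a time.
restic -r rest:http://backup-host:8000/repo \
    restore latest \
    --include /path/to/archive.tar.zst \
    --target /mnt/restore
```

Restoring to a target directory and then decompressing locally avoids the single-request-at-a-time behavior of the dump-to-stdout path.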


Yes! Restore from master is way faster and fills up the whole network bandwidth. I will use that in the future and keep your words about dump in mind.

Thank you.
