Hi! First of all, thank you for such an awesome backup tool. After many weeks of research we decided to use restic as our primary backup solution because of its good backup/restore speed and the absence of the threading limits that borg has.
But restore performance raises questions. Yesterday I tried to restore a large (235 GB) single file (a tar archive compressed with zstd) via

`restic dump ... stdin | zstd -d -T4 | pv > /dev/null`

and it took 2h20m (132 MB/s on average). OK, maybe we could live with that, but… there were no visible bottlenecks in our monitoring dashboards or in tools like atop/htop:
- None of the processes exceeded 40% of a single core (out of 12);
- Network usage was stable at 221–267 Mbps (iperf3 to the backup host showed 941 Mbps, so there was enough bandwidth to fill it up);
- The server had plenty (~56 GiB) of free memory.
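In case it helps anyone reproduce this, here is a rough sketch of how the pipeline stages could be timed in isolation to find where the ceiling is (the repository URL, snapshot ID, and file path below are placeholders, not our real setup). One caveat worth checking: as far as I know, zstd's `-T` flag only parallelizes compression, and decompression runs effectively single-threaded, so the decompressor itself could cap the whole pipeline.

```shell
# Placeholders -- substitute your own repository, snapshot, and path.
REPO=rest:http://backup-host:8000/repo
SNAP=latest

# Stage 1: restic dump alone (no decompression) -- measures
# restic + rest-server read throughput by itself.
restic -r "$REPO" dump "$SNAP" /path/to/backup.tar.zst | pv > /dev/null

# Stage 2: zstd decompression alone, fed from a local copy of the
# archive -- shows whether a single decompression stream can even
# exceed the ~132 MB/s we observed end-to-end.
pv backup.tar.zst | zstd -d -T4 > /dev/null
```

If stage 1 alone already tops out near 132 MB/s, the bottleneck is on the restic/rest-server side; if stage 2 does, it's the decompressor.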
Has anyone else run into this situation? I just don't understand where the bottleneck is.
Restic version: restic 0.9.6 compiled with go1.13.4 on linux/amd64
Backend: rest-server 0.9.7 compiled with go1.10 on linux/amd64