Restic maximum throughput

I am wondering what other people’s experience is with restic restore throughput.
I have test-restored some large VM files under what I consider an “optimal” restic environment
(64 GB RAM, 12-core CPU, repository on a local 2 TB NVMe SSD, restoring to SSD), and I can’t get more than around 80 MB/s throughput. This is with the restic restore command, NOT mount…

Anyone getting better than that, and if so, how?

I think the major bottleneck is the default pack size - 16 MB. It means reading a lot of small files…

In a configuration like the one you described I would definitely use a much bigger one, e.g. 128 MB - see Tuning Backup Parameters — restic 0.16.2 documentation. What makes sense for cloud storage over an often slow network does not make sense for a local SSD repo.

IMO it will translate into clearly higher restore speed.
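Untested on my side, but if I remember the flags correctly (restic 0.14 or newer, where --pack-size is available), setting that up would look roughly like this (the source path is just a placeholder):

```
# write new packs of ~128 MiB during backup (value is given in MiB)
restic backup --pack-size 128 /path/to/vm-images

# repack the existing undersized packs in the repository to the new target size
restic prune --pack-size 128 --repack-small
```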

You might also try increasing the file read concurrency - Tuning Backup Parameters — restic 0.16.2 documentation
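If I have the flag name right, that one is set on the backup side (it defaults to 2 files read in parallel), e.g.:

```
# read up to 8 source files in parallel while backing up
restic backup --read-concurrency 8 /path/to/vm-images
```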

Since we’re talking about local SSD to SSD, I’m pretty sure that won’t make any difference when restoring.

On my machine, for a test with five 1 GB files of random data, I see about 400 MB/s restore speed (although that number likely benefits quite a bit from the SHA-256 hardware acceleration available in the AMD Ryzen CPU). What might help is increasing the number of backend connections from the default of 2 to e.g. 5 (or even higher, up to 12) using restic restore -o local.connections=5 latest -t restore-target.

Read concurrency only applies to the restic backup command, not restore, so that doesn’t help.
I did try increasing the number of connections to 12 and that sped things up considerably. I am getting around 300 MB/s now.
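For anyone finding this later, the command I ended up with was along these lines (the restore target path is just an example):

```
# restore the latest snapshot with 12 parallel backend connections
restic restore -o local.connections=12 latest --target /mnt/restore-test
```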

Thanks