I’ve had the “opportunity” to restore a couple of large snapshots (1TB and .75TB) several times over the last few weeks (don’t ask).
All works fine, but I have noticed that restore performance is very non-linear. I’ll get 400-500GB of data back very quickly (oddly fast, really), then it gets slower and slower.
Any idea why this happens?
To me this sounds more like a performance bottleneck unrelated to restic itself.
Depending on where you restore your snapshot from (NFS share, S3 or equivalent, external HDD via USB, …) there are several places where a restore could slow down.
It also depends on what kind of data you restore: a very large set of small files will take longer, since writing many small files incurs more per-file overhead than writing the same amount of data in a few large ones.
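If you want to check whether small files are the dominant factor, you could sample the size distribution of what you've restored so far. A rough sketch (the target path is just an example, adjust it to your restore destination; `-printf` assumes GNU find):

```shell
# Bucket restored files by size. Lots of entries in the small bucket suggests
# per-file overhead (open/close, metadata, parity updates), not raw throughput,
# is what's limiting the later, slower phase of the restore.
find /mnt/restore -type f -printf '%s\n' 2>/dev/null |
awk '{ if ($1 < 65536) small++; else if ($1 < 1048576) med++; else large++ }
     END { printf "small(<64K): %d  medium(<1M): %d  large: %d\n",
            small+0, med+0, large+0 }'
```

If the counts skew heavily toward the small bucket once the big files are done, a tapering restore speed is expected behavior rather than a restic problem.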
I agree, this doesn’t sound like a restic thing at all. It’s probably due to other causes such as disk I/O, network congestion, or something similar. The times I’ve restored I’ve seen pretty consistent speeds; it just keeps restoring as fast as the network and disks allow.
OK, thanks all… mostly thought I would see if there was a known reason it might act that way. Not sure what’s happening, but I can believe it’s something else in the path.
It’s a disk-to-disk restore, all spinning drives. On unRaid - I wonder if the parity process is getting behind… anyhow, thanks for the advice and tool!