I have a relatively small repo, and even retrieving stats for a single snapshot takes a very long time:
$ time restic stats -r rest:https://restic.mimas.internal.nobbz.dev/nobbz -H mimas latest --json
enter password for repository: found 2 old cache directories in /home/nmelzer/.cache/restic, run `restic cache --cleanup` to remove them
{"total_size":233036305352,"total_file_count":1373911}
restic stats -r rest:https://restic.mimas.internal.nobbz.dev/nobbz -H mimas 41.59s user 2.08s system 110% cpu 39.415 total
Is there anything I can do to speed up the operation?
Without knowing your restic and rest-server versions, the hardware and its rough specifications, the network and I/O conditions, how many snapshots you have, and so on, it's impossible to say.
$ /nix/store/q3f1npb45q0xrxkq3fsi312l74iaiah5-restic-rest-server-0.11.0/bin/rest-server --version
rest-server version rest-server 0.11.0 compiled with go1.17.11 on linux/amd64
$ restic version
restic 0.13.1 compiled with go1.17.11 on linux/amd64
restic.mimas.internal.nobbz.dev resolves to a locally bound IP, which I can normally reach with effectively zero latency and no other issues.
The host in question is self-hosting its own backups, which get copied to B2 about once a week. Backups of other hosts are also aggregated in that repository.
The repository currently holds 110 snapshots and looks like this on disk (ext4):
$ df -h /var/lib/restic
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/usbpool-restic 492G 389G 80G 84% /var/lib/restic
The drive holding the volume is, as the name suggests, a USB HDD connected via USB 3.0.
The host is a repurposed laptop from ~2015 running some internal services, with an “Intel(R) Core™ i5-4210M CPU @ 2.60GHz”.
During the scan, the CPU goes from idle to a 100% spike, then settles at roughly 35% on average until the scan finishes.
For those stats, especially when running from a hard disk, 40-ish seconds seems fairly reasonable. On my Raspberry Pi 4 I have a 350 GB repo with 300 snapshots on a USB 3 hard drive, and `stats latest` takes 20 s.
A backup without changes takes less than 15 seconds; why does stats take nearly three times as long?
I just want to emit some extra metrics after each backup (snapshot size, duration, etc.) and had hoped I could get some of that information from stats, but since it takes this long I am no longer sure that makes sense. Especially when there are no real changes, increasing the runtime by roughly a factor of four feels wrong.
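For the size metric at least, the `stats --json` output shown above is a flat one-line JSON object, so a minimal sketch (assuming only POSIX tools, no jq) could pull the fields out like this; the `stats_json` value here is just a copy of the output above, in a real script you would capture it from the restic invocation itself:

```shell
# Sample output of `restic stats ... --json` (copied from the post above);
# in practice: stats_json=$(restic stats -r ... latest --json)
stats_json='{"total_size":233036305352,"total_file_count":1373911}'

# Extract a numeric field from flat, single-line JSON with sed.
json_num() {
  printf '%s' "$2" | sed -n "s/.*\"$1\":\\([0-9]*\\).*/\\1/p"
}

total_size=$(json_num total_size "$stats_json")
total_files=$(json_num total_file_count "$stats_json")
echo "size=$total_size files=$total_files"
```

This is fragile by design (it assumes the JSON stays flat and on one line); a proper JSON parser such as jq would be more robust if it is available.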
Related to this: is there a way to get the summary of the last run written to a file in JSON format, so that logging can still capture the regular output?
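One possible approach, sketched below under the assumption that you are willing to log JSON instead of the human-readable output: with `--json`, `restic backup` prints one JSON object per line, and the final line carries `"message_type":"summary"`. You can `tee` the full stream into your log and peel the summary line off into a separate file. The function here only simulates that stream (with made-up field values) so the example is self-contained; in practice you would pipe the real command instead, e.g. `restic backup /data --json | tee -a backup.log | grep ...`:

```shell
# SIMULATED stand-in for `restic backup ... --json` (field values are
# illustrative only, not real restic output).
simulated_backup() {
  printf '%s\n' \
    '{"message_type":"status","percent_done":1}' \
    '{"message_type":"summary","data_added":0,"total_duration":12.3,"snapshot_id":"abc123"}'
}

log=$(mktemp)      # stand-in for your regular log destination
summary=$(mktemp)  # file that will hold only the summary object

# tee keeps the full stream for logging while grep extracts the summary line
simulated_backup | tee "$log" | grep '"message_type":"summary"' > "$summary"

cat "$summary"
```

Note the caveat: `--json` replaces restic's normal human-readable progress, so the log receives JSON lines rather than the usual text. If you need both formats, you would have to reconstruct the readable view from the JSON log afterwards.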