Hi.
We’ve observed an issue with prune run times since version 0.13. When we upgraded from 0.12.1 to 0.13.2, one system with a somewhat special setup saw its prune time increase from 20 minutes to about 1 hour 15 minutes.
Back then, we didn’t analyze or report the issue. We’re now upgrading to 0.16, and I had hoped 0.16 would solve it. However, we’re still seeing this 3x+ increase in prune runtime on that system, while we’re seeing a decrease on a lot of other systems. I also confirmed the behavior with 0.15.2, which makes me think we have a bit of a snowflake case here.
The repository is shared between two hosts. Each host saves snapshots with a unique tag and only operates on its own tag for pruning (a scenario we have in place a couple dozen times).
There are ~630 snapshots in total; we create one every two hours and have a 30-day retention. The runs are spaced out over even / odd hours between the two systems, and the runtime (backup / forget --prune) is generally around 25 - 30 minutes with restic 0.12.1.
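For reference, this is roughly how each host tags its snapshots (the paths and output file here are placeholders, not our exact invocation):

# BACKUP_PATHS / BACKUP_OUTPUT are placeholders for this example
restic backup \
--tag "${HOSTNAME}" \
"${BACKUP_PATHS}" &>>"${BACKUP_OUTPUT}"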
Full forget command:
restic forget \
--group-by host \
--tag "${HOSTNAME}" \
--keep-daily "${KEEP_DAILY}" \
--keep-weekly "${KEEP_WEEKLY}" \
--keep-within "${KEEP_WITHIN}" \
--keep-monthly "${KEEP_MONTHLY}" \
--keep-yearly "${KEEP_YEARLY}" \
--prune &>>"${PRUNE_OUTPUT}"
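If it helps with narrowing this down, I could also split the run so the prune step is timed on its own, i.e. drop --prune from the forget call and run prune separately. A sketch of what I have in mind (not what the script currently does):

# same forget options as above, just without --prune
restic forget --group-by host --tag "${HOSTNAME}" \
--keep-daily "${KEEP_DAILY}" --keep-weekly "${KEEP_WEEKLY}" \
--keep-monthly "${KEEP_MONTHLY}" --keep-yearly "${KEEP_YEARLY}" \
--keep-within "${KEEP_WITHIN}"
# then time the prune step on its own to see where the ~1h15m goes
/usr/bin/time -v restic prune --verbose &>>"${PRUNE_OUTPUT}"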
Because the snapshots are so frequent, the data added in each run is pretty minuscule. A typical run looks like this:
2023-08-24T11:30:03+02:00 Starting backup
using parent snapshot cf4aea42
Files: 72 new, 33 changed, 2248357 unmodified
Dirs: 12 new, 63 changed, 368115 unmodified
Added to the repo: 620.980 MiB
processed 2248462 files, 136.119 GiB in 5:15
snapshot 149db3bc saved
...
collecting packs for deletion and repacking
[0:02] 100.00% 66565 / 66565 packs processed
to repack: 34373 blobs / 618.080 MiB
this removes 49 blobs / 567.438 MiB
to delete: 13 blobs / 11.961 MiB
total prune: 62 blobs / 579.399 MiB
remaining: 2644562 blobs / 421.186 GiB
unused size after prune: 20.644 GiB (4.90% of remaining size)
Besides the number of snapshots, the system isn’t all that unusual.
I’m happy to provide more information if needed.
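For example, I could post repository-level stats before and after the prune, along these lines (just what I’d run, no output collected yet):

# logical size of the latest data vs. deduplicated on-disk size of the repo
restic stats --mode restore-size
restic stats --mode raw-data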