I have a (large?) repository with about 2.1 TB of data; I add a dozen snapshots each work day, then wanted to forget/prune on the weekend. Unfortunately, the prune operation is so slow that at the current rate it would take over 300 hours, so clearly I cannot perform it without stopping the hourly backups for a long time.
Your current prune operation is slow because it wants to repack many packs. This might be due to duplicates (perhaps from an aborted backup or prune), which are not handled perfectly in 0.12.0. There is a PR for this, but it still needs review and should not yet be used on a production repository.
EDIT: If it is possible for you to make a copy of your repo, I would be very interested in you testing this PR.
You can experiment with the --max-unused parameter. If it is set to unlimited, only duplicate and tree packs will be repacked.
Moreover, you can use --max-repack-size to limit the amount of data that is actually repacked. This also works with duplicates.
Always try with --dry-run first to see what prune intends to do, and run it without that flag once you are satisfied with your choice of parameters.
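Putting those suggestions together, an invocation could look something like this (the repository path and the 100G cap are placeholder values, not recommendations for your setup):

```shell
# Preview what prune would do, without modifying the repository.
# --max-unused unlimited : only duplicate and tree packs are repacked
# --max-repack-size 100G : cap how much data a single run repacks (placeholder value)
restic -r /srv/restic-repo prune \
    --max-unused unlimited \
    --max-repack-size 100G \
    --dry-run
```

If the dry-run output looks reasonable, repeat the same command without --dry-run. With a repack cap like this, a single prune run finishes sooner, and you can clear the backlog over several weekends instead of one very long pause of the hourly backups.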