I am confused about `prune --max-repack-size=0` (restic v0.16.4). (Apologies if similar questions have been asked; I was unable to find any!)
The section "Recovering from 'no free space' errors" says:
"In most cases it is sufficient to instruct prune to use as little scratch space as possible by running it as `prune --max-repack-size=0`. […] Obviously, this can only work if several snapshots have been removed using `forget` before. This then allows the `prune` command to actually remove data from the repository. […]"
Eh? No, it is NOT "obvious", if I am understanding the intention correctly. I currently suspect (for reasons explained below) that the above means, or would perhaps be better phrased as:
"In most cases it is sufficient to instruct prune to use as little scratch space as possible by running it as `prune --max-repack-size=0`. […] This can only work if there is sufficient scratch space available. You may need to remove one or more snapshots, using `forget`, to create adequate scratch space. This then allows the `prune` command to actually remove data from the repository. […]"
(The redacted parts ("[…]") may also need editing if my guess is broadly correct. Also, although not stated, I assume this "scratch space" must be within the REPO itself, and hence within the containing partition/filesystem/cloud storage.)
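For concreteness, here is a minimal sketch of the sequence I understand the documentation to be recommending. The repository path and the `--keep-last` policy below are placeholders of my own, not from the docs:

```shell
# Hypothetical repository path and retention policy -- adjust to taste.
export RESTIC_REPOSITORY=/srv/restic-repo

# 1. Mark some snapshots for removal, so there is data eligible for deletion...
restic forget --keep-last 30

# 2. ...then prune with zero repack budget, so prune deletes entirely-unused
#    packs outright instead of rewriting partially-used ones (which would
#    need scratch space for the rewritten copies).
restic prune --max-repack-size=0
```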
The situation I ran into: the REPOv1 in question had largely filled the available space; the repository is about 1.4 TB on a 2 TB device/filesystem/partition, leaving about 0.5 TB of free space. Using the current "restic 0.16.4 compiled with go1.21.6 on linux/amd64", after migrating in place to REPOv2 (`migrate upgrade_repo_v2`), the subsequent `prune --compression=max --repack-uncompressed` ultimately drove the filesystem/partition containing the REPOv2 out of space.
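In command form, the sequence that led to the out-of-space condition was roughly the following (the repository path is a placeholder of mine):

```shell
export RESTIC_REPOSITORY=/srv/restic-repo      # placeholder path

restic migrate upgrade_repo_v2                 # in-place REPOv1 -> REPOv2 migration
restic prune --compression=max --repack-uncompressed   # <- filled the partition here
```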
Thank you, restic did handle the situation gracefully, and I was able to recover an intact and still-functional REPOv2 (largely, if my memory is correct, by a simple `prune --compression=max` on that "overfull" newly-created REPOv2).
But… there was clearly loads of "scratch space" available, albeit insufficient for the presumed duplicated compression-max version. At present I do not find the documentation clear on this point, but the impression I currently have is that `--max-repack-size=0` "encourages" restic v0.16.4 to compress and then "commit" (index?) each snapshot individually, rather than queuing them up for committal at about the end of the `prune --repack-uncompressed`. (I am aware I am very probably not using restic terminology here! If it helps, think git(1) terminology.)
That is, my current guess is that without `--max-repack-size=0`, `prune --repack-uncompressed` writes the (fully) compressed packs, etc., into the REPOv2, but does not remove the now-obsolete REPOv1 packs (and indexes, etc.) until the Very End. This is Very Understandable, but I currently find the documentation lacking, or at least confusing. (In addition, I have not yet run any further tests/experiments to try to confirm my guess, in part due to my insistence on always having one proven-correct (and up-to-date) backup at all times.)
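To illustrate why deleting old packs only at the Very End could matter, here is some back-of-envelope arithmetic. The 60% compression ratio is purely an assumption of mine, not a measured figure for this repository:

```shell
# Rough scratch-space arithmetic (all figures in GB).
repo_gb=1400    # existing REPOv1 data (~1.4 TB)
free_gb=500     # free space on the 2 TB partition (~0.5 TB)
ratio=60        # ASSUMED: compressed data is ~60% of original size

# If old packs are only deleted at the very end, the compressed copies
# must coexist with the originals, so the peak extra usage is roughly:
need_gb=$(( repo_gb * ratio / 100 ))
echo "peak extra space: ${need_gb} GB; available: ${free_gb} GB"
```

On these assumed numbers the repack would need roughly 840 GB of coexisting new data against only ~500 GB free, which would match the out-of-space failure I saw.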
Perhaps the current documentation needs revisiting…?
Clarifications and corrections are most welcome!
Edit: improved typography (as per @rawtaz's useful, correct, and helpful suggestion).