Restic not obeying --max-unused parameter on prune

Hi all,

Not really sure why, but restic doesn’t seem to be pruning my repo according to the --max-unused parameter I am using.

Background

The repo has about 809GB of used data and about 200GB of unused data. Since a full --max-unused 0 prune takes too long to complete (and causes me issues with temporary scratch space), I instead prune down slowly, 5GB at a time. This was going fine until I had about 105GB of unused data left; now restic doesn’t seem to prune anything anymore. Despite the unused data still being in the repo, when I give it the stepdown prune command it just doesn’t do anything, and I end up in the same position as before running it.

What is going on?

• The output of restic version

restic 0.17.1 compiled with go1.23.1 on linux/amd64

• The complete commands that you ran (leading up to the problem or to reproduce the problem).

for i in $(seq 105 -5 5); do
	echo Remaining $i GB
	prune -r $TARGET --max-unused $i\G
done

• The complete output of those commands (except any repeated output when obvious it’s not needed for debugging).

Remaining 105GB
repository 241d8bc5 opened (version 2, compression level auto)
loading indexes...
[0:12] 100.00%  18 / 18 index files loaded
loading all snapshots...
finding data that is still in use for 20 snapshots
[0:16] 100.00%  20 / 20 snapshots
searching used packs...
collecting packs for deletion and repacking
[0:04] 100.00%  46722 / 46722 packs processed

to repack:             0 blobs / 0 B
this removes:          0 blobs / 0 B
to delete:             0 blobs / 0 B
total prune:           0 blobs / 0 B
remaining:        889547 blobs / 916.972 GiB
unused size after prune: 105.905 GiB (11.55% of remaining size)

done

(same output for --max-unused 100G, 95G, etc.)

It has always worked for me, so here are just some hints for further trial and error:

  • You mean restic prune instead of prune in your for loop, don’t you?
  • --max-unused $i\G with a backslash may be correct, but how about --max-unused ${i}G? (See the sketch after this list.)
  • What does --max-unused 9% (in % rather than G) do?
  • You can also use --max-unused 1G --max-repack-size 5G to only prune 5 GiB towards the goal of having only 1 GiB of unused space.
  • Just speculating: could it be that you have very large files, so restic can’t simply delete some packs and would need to repack more than 5 GiB to reach a new consistent state?
  • Can you try with --dry-run? Like --max-unused 100M --dry-run.
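Putting those points together, a corrected version of your loop might look roughly like this (just a sketch; $TARGET and the 105-down-to-5 range are taken from your post, and --dry-run is added so you can inspect the plan before committing to it):

for i in $(seq 105 -5 5); do
	echo "Remaining ${i} GB"
	# preview what restic would repack/delete for this target
	restic prune -r "$TARGET" --max-unused "${i}G" --dry-run
	# when the plan looks right, rerun the same command without --dry-run
done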

The repo is my home photo/movie collection, so a mix of small and big files, but nothing close to 1GB in size.

The percentage approach doesn’t work either (I had actually started with that, and when it stopped working I switched to the fixed-GB approach).

Before either of these, I was actually using --max-repack-size 2G in a loop to slowly reduce the unused space; it was only when that stopped working that I went back to --max-unused to force the issue, but that isn’t working either.
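That earlier loop was roughly along these lines (a rough sketch from memory; the pass count and the --max-unused goal here are illustrative):

# run many small prune passes, each repacking at most 2 GiB
for pass in $(seq 1 40); do
	restic prune -r "$TARGET" --max-unused 1G --max-repack-size 2G
done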

I experimented with --dry-run a bit and found:

  • At 0 it would repack 367GB to remove the entire 105GB of unused data (but I am reluctant to do this because it is almost half my entire repo and means a really long, potentially multi-day, process)
  • Dropping even further below 1G: when I feed --max-unused 100M and successively lower inputs, I still get nothing to do until 6M, and then it gets interesting (the sweep I ran is sketched after the table):

Input (M) // Repack
6 // 364.1 GB
5 // 3.8 MB (a tiny bit!)
4 // 367.5 GB
3 // 364.9 GB
2 // 366.6 GB
1 // 364.6 GB
0 // 367.3 GB
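For completeness, I collected those numbers with a sweep roughly like this (a sketch; the exact values I stepped through are from memory, and grep just trims the dry-run output to the repack line):

for m in 100 50 10 6 5 4 3 2 1 0; do
	echo "--max-unused ${m}M"
	restic prune -r "$TARGET" --max-unused "${m}M" --dry-run | grep "to repack"
done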

It’s weird that nothing happens until the input is 6M and below, then there is a drop-then-bounce at 5M, with all the other inputs producing slightly different amounts to repack (but ending at the same end state: no unused space).

Anyway, I am going to try 5M and see if it mixes things up enough to make restic prune work normally again.

Hmm, it looks like the 5M dry-run output was an illusion: when I ran it for real, it suddenly wanted to repack the ~365GB, which is not workable for me, so I terminated the process.

At a loss what to do now; I might just start deleting some less important snapshots to see if I can force restic to reevaluate the prune.
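If I do go down that road, the rough plan would be something like this (the snapshot IDs below are placeholders):

# list snapshots and pick the least important ones
restic snapshots -r "$TARGET"

# forget specific snapshots (placeholder IDs), then prune again
restic forget -r "$TARGET" 1a2b3c4d 9f8e7d6c
restic prune -r "$TARGET" --max-unused 5%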

Are you using --repack-cacheable-only?

Try a restic check?
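For example (just a suggestion; the --read-data-subset fraction is only there to keep the runtime manageable):

# verify the repository structure
restic check -r "$TARGET"

# optionally also read and verify a fraction of the pack data
restic check -r "$TARGET" --read-data-subset=1/10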