Prune fails on missing tree

With a missing tree, prune fails hard (see the trace below); new backups still work. rebuild-index did not complain, but it also did not fix the problem.

If the corruption is persistent and confirmed, how can the repository be “repaired”? I expect some data loss, but can the structure be repaired to a state where prune will succeed (or does prune need a fix so the error is non-fatal)?

I reviewed the suggestions from Persistent repository corruption and data loss, and one extreme solution would be to forget the affected snapshot.

Cheers!

$ restic prune
repository a7073830 opened successfully, password is correct
loading indexes…
loading all snapshots…
finding data that is still in use for 70 snapshots
[0:00] 1.43% 1 / 70 snapshots
id 004d3d7d3c7619f9aa7c02e53d62bbc007d7f3ee47e2676714c32360fbce121f not found in repository
github.com/restic/restic/internal/repository.(*Repository).LoadBlob
github.com/restic/restic/internal/repository/repository.go:162
github.com/restic/restic/internal/repository.(*Repository).LoadTree
github.com/restic/restic/internal/repository/repository.go:728
github.com/restic/restic/internal/restic.loadTreeWorker
github.com/restic/restic/internal/restic/tree_stream.go:37
github.com/restic/restic/internal/restic.StreamTrees.func1
github.com/restic/restic/internal/restic/tree_stream.go:164
golang.org/x/sync/errgroup.(*Group).Go.func1
golang.org/x/sync@v0.0.0-20200625203802-6e8e738ad208/errgroup/errgroup.go:57
runtime.goexit
runtime/asm_amd64.s:1373

This is not a nice way to fail, you are right. But the prune command does in fact always refuse to prune anything if there are any missing blobs.

The best way to resolve missing blobs is to run rebuild-index and then re-backup the data that contains the missing blobs. For missing trees this means backing up the completely unchanged directory (including its completely unchanged subdirectories), which is often not possible.
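As a sketch, that recovery sequence might look like the following. The repository location and the affected directory are placeholders; the commands are echoed rather than executed here, so adapt them to your setup:

```shell
# Illustrative only: REPO and DIR are placeholders, and the restic
# commands are printed instead of run.
REPO=/srv/restic-repo           # placeholder repository location
DIR=/path/to/affected/dir       # dir whose unchanged content held the missing tree

# Rebuild the index so restic notices which blobs are actually gone...
REBUILD_CMD="restic -r $REPO rebuild-index"
# ...then back up the (unchanged) directory so the blobs are re-uploaded.
BACKUP_CMD="restic -r $REPO backup $DIR"

echo "$REBUILD_CMD"
echo "$BACKUP_CMD"
```

The order matters: without the rebuilt index, restic still believes the blobs exist and the re-run backup would skip uploading them.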

The second way restic officially supports is simply to forget the affected snapshots.
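A sketch of that approach, using a placeholder snapshot ID (the commands are echoed, not executed here):

```shell
# Illustrative only: SNAP is a placeholder for the ID of the snapshot
# that references the missing tree; commands are printed, not run.
SNAP=0123abcd

# Drop the damaged snapshot's reference...
FORGET_CMD="restic forget $SNAP"
# ...so that, once no snapshot references the missing tree, prune succeeds.
PRUNE_CMD="restic prune"

echo "$FORGET_CMD"
echo "$PRUNE_CMD"
```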

If you are willing to use experimental code, there is

which resolves it with a “minimal” modification of your snapshots.


I will try the repair command, since it seems to match my desired outcome: keep the snapshot, but accept minimal data loss. I will report back with the results. Thanks for the hint.
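For readers arriving later: in more recent restic releases (0.16 and up, if I'm not mistaken) this experimental code landed as a regular subcommand, so the equivalent invocation would be something like the following (echoed here rather than run):

```shell
# Illustrative only: in restic >= 0.16 the experimental repair code
# became `restic repair snapshots`, which rewrites snapshots so they
# no longer reference unreadable trees or files. Command printed, not run.
REPAIR_CMD="restic repair snapshots"
echo "$REPAIR_CMD"
```

Check `restic help repair` on your version before relying on this; the older experimental branch discussed in this thread may have used different syntax.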

Thanks again for the pointer. I was able to repair all my repos (three of six had errors).

Unfortunately, I had some instances of “the root tree is damaged → delete snapshot”. A cursory investigation showed that these snapshots were not recoverable. I didn’t do a deep dive (I have alternative backups, and these were older snapshots), and simply got rid of the errant snapshots.