I’m running an unattended backup script using cron.
Sometimes it fails because the SSH connection drops unexpectedly (due to external factors).
When that happens, the next day the script usually handles the repository lock just fine.
However, occasionally we run into one of the following cases:
A
[2025-04-18 03:11:43] 🔧 [COMMAND] Running: restic prune -r /x/x/x/x -p /x/x/x/x/x.txt
[2025-04-18 03:11:56] ❌ [ERROR] prune failed for local repository: counting files in repo
building new index for repo
[0:10] 89.74% 19399 / 21616 packs
[0:11] 100.00% 21616 / 21616 packs
repository contains 21616 packs (940023 blobs) with 89.997 GiB bytes
processed 1047316 blobs: 107293 duplicate blobs, 24.194 GiB duplicate
load all snapshots
find data that is still in use for 2808 snapshots
id fdad5ba3e128951c5cc0d96eecb6ae9a64061ac4800540c1e0c479af224d20e7 not found in any index
restic/repository.(*MasterIndex).LookupSize
/tmp/brian/tmp2p0u2v9e/build/amd64/source/src/restic/repository/master_index.go:55
restic/repository.(*Repository).LoadTree
/tmp/brian/tmp2p0u2v9e/build/amd64/source/src/restic/repository/repository.go:568
restic.FindUsedBlobs
/tmp/brian/tmp2p0u2v9e/build/amd64/source/src/restic/find.go:9
main.runPrune
/tmp/brian/tmp2p0u2v9e/build/amd64/source/src/cmds/restic/cmd_prune.go:152
main.glob..func11
/tmp/brian/tmp2p0u2v9e/build/amd64/source/src/cmds/restic/cmd_prune.go:26
github.com/spf13/cobra.(*Command).execute
/usr/share/gocode/src/github.com/spf13/cobra/command.go:632
github.com/spf13/cobra.(*Command).ExecuteC
/usr/share/gocode/src/github.com/spf13/cobra/command.go:722
github.com/spf13/cobra.(*Command).Execute
/usr/share/gocode/src/github.com/spf13/cobra/command.go:681
main.main
/tmp/brian/tmp2p0u2v9e/build/amd64/source/src/cmds/restic/main.go:40
runtime.main
/usr/lib/go-1.7/src/runtime/proc.go:183
runtime.goexit
/usr/lib/go-1.7/src/runtime/asm_amd64.s:2086
Question
In this case, am I right that an error is the expected outcome, and that the error itself means the repository was left in an inconsistent (corrupted) state by the interrupted run?
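For what it's worth, one way a script could react to the "id … not found in any index" error is to rebuild the index before retrying prune. This is only a sketch: it assumes your restic version still ships the `rebuild-index` command (newer releases call it `repair index` instead; check `restic help` first), and `RESTIC`, `recover_repo`, and the repo/password-file arguments are placeholder names, not anything from the original script.

```shell
#!/bin/sh
# Hedged recovery sketch for case A ("id ... not found in any index").
# RESTIC can be overridden to point at a specific binary.
RESTIC="${RESTIC:-restic}"

recover_repo() {
  repo="$1"; passfile="$2"
  # 1. Rebuild the index from the pack files actually present in the repo.
  "$RESTIC" rebuild-index -r "$repo" -p "$passfile" || return 1
  # 2. Retry prune now that the index matches the packs.
  "$RESTIC" prune -r "$repo" -p "$passfile" || return 1
  # 3. Verify repository integrity afterwards.
  "$RESTIC" check -r "$repo" -p "$passfile"
}
```

The function stops at the first failing step, so the cron log shows exactly where recovery gave up.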
B
[2025-04-21 18:10:30] ❌ [ERROR] prune failed for remote repository: counting files in repo
building new index for repo
[0:10] 1.39% 225 / 16173 packs
[0:20] 2.44% 394 / 16173 packs
[0:30] 2.44% 394 / 16173 packs
[0:40] 3.44% 556 / 16173 packs
[0:50] 4.86% 786 / 16173 packs
[1:00] 6.29% 1018 / 16173 packs
....
Question
In this case, the prune command seems to run successfully, but I’m wondering:
Could it be that, if the index has to be rebuilt, the command returns a non-zero exit code even though the prune itself completes correctly?
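Since the SSH drops are external and transient, one defensive option in the cron script is a small retry wrapper around the restic call, so a single dropped connection doesn't fail the whole night's run. A minimal sketch, with `retry` being a made-up helper name, not part of restic:

```shell
#!/bin/sh
# Retry a command up to N times, sleeping briefly between attempts.
# Usage: retry <attempts> <command> [args...]
retry() {
  attempts="$1"; shift
  i=1
  while :; do
    "$@" && return 0               # success: propagate exit code 0
    [ "$i" -ge "$attempts" ] && return 1   # out of attempts: give up
    i=$((i + 1))
    sleep 1                        # back off a little before retrying
  done
}
```

In the script this might be used as `retry 3 restic prune -r "$REPO" -p "$PASSFILE"`, checking `$?` afterwards as usual. A fixed one-second backoff is deliberately simplistic; a real script might want a longer, growing delay.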
¡Ay, caramba! That’s a version more than 8 years old.
Even Debian stretch went EOL in 2020…
Anyway, if you can, use a newer version. There have been a lot of improvements (especially regarding exit codes) since then. I’m not sure how far the backwards compatibility goes, but trying shouldn’t harm the data.
Memory management has also gotten way better since your current version; I remember the struggle.
Just a suggestion: I’d compile the newest restic version that runs in your environment and switch to that if I were you.
restic is a standalone binary. Debian stretch should be “new” enough to run the latest restic version; just grab the official binary from GitHub. Given the massive number of fixes and improvements over the years, I wouldn’t rely on such an old version for a backup that matters.
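As a rough sketch of fetching that official binary: the release assets on GitHub follow a naming pattern like `restic_<version>_<os>_<arch>.bz2` (that pattern, and the version number below, are assumptions worth double-checking against the releases page). The `restic_url` helper here is made up for illustration:

```shell
#!/bin/sh
# Build the download URL for an official restic release asset.
# Assumed asset naming: restic_<version>_<os>_<arch>.bz2
restic_url() {
  # $1 = version, $2 = os, $3 = arch
  echo "https://github.com/restic/restic/releases/download/v$1/restic_$1_$2_$3.bz2"
}

# Usage (not run here): download, decompress, make executable:
#   curl -fsSL "$(restic_url 0.18.0 linux amd64)" -o restic.bz2
#   bunzip2 restic.bz2 && chmod +x restic && ./restic version
```

Verifying the download against the SHA256SUMS file published with each release would be a sensible extra step.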
Be careful when mixing restic 0.3.3 and more recent versions. In theory the repository format should still be compatible, but I doubt that has ever been tested. (Well, restic 0.18.0 will definitely be able to read and work with the repository; I’m only 90% sure about the opposite direction.)