Exit status codes

Environment

restic 0.3.3
Debian GNU/Linux 9 (stretch)

Situation

Hi everyone!

I’m running an unattended backup script using cron.

Sometimes it fails because the SSH connection drops unexpectedly (due to external factors).

When that happens, the next day the script usually handles the repository lock just fine.
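
For context, the wrapper does roughly this (a simplified sketch; the real paths are redacted and the log() helper is made up for illustration):

#!/bin/bash
# Simplified sketch of the cron wrapper; paths and the log() helper are placeholders.
REPO=/x/x/x/x
PASSFILE=/x/x/x/x/x.txt

log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*"; }

log "🔧 [COMMAND] Running: restic prune -r $REPO -p $PASSFILE"
if ! output=$(restic prune -r "$REPO" -p "$PASSFILE" 2>&1); then
    log "❌ [ERROR] Prune error for local repository: $output"
    exit 1
fi
log "[INFO] Output of the prune command (local): $output"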

However, occasionally we run into one of the following cases:

A

[2025-04-18 03:11:43] 🔧 [COMMAND] Running: restic prune -r /x/x/x/x -p /x/x/x/x/x.txt
[2025-04-18 03:11:56] ❌ [ERROR] Prune error for local repository: counting files in repo
building new index for repo
[0:10] 89.74%  19399 / 21616 packs
[0:11] 100.00%  21616 / 21616 packs

repository contains 21616 packs (940023 blobs) with 89.997 GiB bytes
processed 1047316 blobs: 107293 duplicate blobs, 24.194 GiB duplicate
load all snapshots
find data that is still in use for 2808 snapshots
id fdad5ba3e128951c5cc0d96eecb6ae9a64061ac4800540c1e0c479af224d20e7 not found in any index
restic/repository.(*MasterIndex).LookupSize
	/tmp/brian/tmp2p0u2v9e/build/amd64/source/src/restic/repository/master_index.go:55
restic/repository.(*Repository).LoadTree
	/tmp/brian/tmp2p0u2v9e/build/amd64/source/src/restic/repository/repository.go:568
restic.FindUsedBlobs
	/tmp/brian/tmp2p0u2v9e/build/amd64/source/src/restic/find.go:9
main.runPrune
	/tmp/brian/tmp2p0u2v9e/build/amd64/source/src/cmds/restic/cmd_prune.go:152
main.glob..func11
	/tmp/brian/tmp2p0u2v9e/build/amd64/source/src/cmds/restic/cmd_prune.go:26
github.com/spf13/cobra.(*Command).execute
	/usr/share/gocode/src/github.com/spf13/cobra/command.go:632
github.com/spf13/cobra.(*Command).ExecuteC
	/usr/share/gocode/src/github.com/spf13/cobra/command.go:722
github.com/spf13/cobra.(*Command).Execute
	/usr/share/gocode/src/github.com/spf13/cobra/command.go:681
main.main
	/tmp/brian/tmp2p0u2v9e/build/amd64/source/src/cmds/restic/main.go:40
runtime.main
	/usr/lib/go-1.7/src/runtime/proc.go:183
runtime.goexit
	/usr/lib/go-1.7/src/runtime/asm_amd64.s:2086

Question

In this case, I assume it’s normal for the command to return an error, and that the error itself is due to the repository having been corrupted by the interruption?
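
For what it’s worth, this is how I plan to verify whether the repository is really damaged (a sketch; I’m assuming check is available on a version this old, while rebuild-index only appeared in later releases and is called repair index from 0.17 on):

# Verify repository integrity first:
restic check -r /x/x/x/x -p /x/x/x/x/x.txt
# On newer restic versions, a blob missing from the index can usually be repaired with:
#   restic rebuild-index ...   (renamed "restic repair index" in 0.17)
# I'm not sure either subcommand behaves the same on 0.3.3.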

B

[2025-04-21 18:10:30] ❌ [ERROR] Prune error for remote repository: counting files in repo
building new index for repo
[0:10] 1.39%  225 / 16173 packs
[0:20] 2.44%  394 / 16173 packs
[0:30] 2.44%  394 / 16173 packs
[0:40] 3.44%  556 / 16173 packs
[0:50] 4.86%  786 / 16173 packs
[1:00] 6.29%  1018 / 16173 packs
....

Question

In this case, the prune command seems to run successfully, but I’m wondering:

Could it be that if the index has to be rebuilt, the command returns a non-zero exit code even if the prune itself works correctly?
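
For reference, the script captures the code roughly like this (a sketch; the log path is made up). One pitfall I had to watch for: when piping through tee, $? is tee’s exit code, so I read PIPESTATUS[0] instead:

restic prune -r /x/x/x/x -p /x/x/x/x/x.txt 2>&1 | tee -a /tmp/prune.log
rc=${PIPESTATUS[0]}   # restic's own exit code, not tee's (bash-specific)
echo "prune exited with code $rc"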

Thanks in advance!

Are you real? This is ancient.

¡Ay, caramba!, that version is more than 8 years old :slight_smile:
Even stretch went EOL in 2020…

Anyway, if you can, use a newer version. There have been a lot of improvements since then, especially regarding exit codes. I’m not sure how far the backwards compatibility goes, though; still, trying shouldn’t harm the data.
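
For example, 0.17 and later document dedicated exit codes, so a wrapper can tell the cases apart. A sketch based on the current docs ($REPO and $PASSFILE stand for your repository path and password file; older versions mostly just return 1 for any failure):

restic prune -r "$REPO" -p "$PASSFILE"
case $? in
    0)   echo "prune succeeded" ;;
    10)  echo "repository does not exist (0.17+)" ;;
    11)  echo "failed to lock repository (0.17+)" ;;
    12)  echo "wrong password (0.17.1+)" ;;
    130) echo "interrupted (SIGINT)" ;;
    *)   echo "other error, inspect the output" ;;
esac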

Hi! Yes, unfortunately we can’t upgrade the version yet :sweat_smile:

Hello, yes, same here

We still can’t update the OS for now.

In the end, I set up a cron job, and this was the exit status code:

[2025-04-23 07:15:21] 🔧 [COMMAND] Running: restic prune -r /x/x/x/x -p /x/x/x/x/x.txt
[2025-04-23 07:29:44] [INFO] Output of the prune command (local):
counting files in repo
building new index for repo
[0:10] 87.78%  19012 / 21659 packs
[0:11] 100.00%  21659 / 21659 packs

repository contains 21659 packs (949999 blobs) with 90.090 GiB bytes
processed 1057292 blobs: 107293 duplicate blobs, 24.194 GiB duplicate
load all snapshots
find data that is still in use for 2823 snapshots
[0:10] 0.28%  8 / 2823 snapshots
[0:20] 0.35%  10 / 2823 snapshots
[0:30] 0.35%  10 / 2823 snapshots
[0:40] 0.35%  10 / 2823 snapshots
[0:50] 0.35%  10 / 2823 snapshots
[1:00] 0.35%  10 / 2823 snapshots
[1:10] 0.35%  10 / 2823 snapshots
[1:20] 0.35%  10 / 2823 snapshots
[1:30] 0.35%  10 / 2823 snapshots
[1:40] 0.35%  10 / 2823 snapshots
[1:50] 0.35%  10 / 2823 snapshots
[2:00] 0.35%  10 / 2823 snapshots
[2:11] 0.35%  10 / 2823 snapshots
[2:23] 0.35%  10 / 2823 snapshots
[2:30] 0.35%  10 / 2823 snapshots
[2:40] 0.35%  10 / 2823 snapshots
[2:50] 0.35%  10 / 2823 snapshots
[2025-04-23 07:29:45] ❌ [ERROR] Prune error for local repository (exit code: 137): counting files in repo
building new index for repo
[0:10] 87.78%  19012 / 21659 packs
[0:11] 100.00%  21659 / 21659 packs

So it was the OOM killer :upside_down_face:
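
(That matches: 137 = 128 + 9, i.e. the shell reporting that the process was killed with SIGKILL, which is what the OOM killer sends.) A generic way to decode such codes in the wrapper:

rc=137                                      # the code from the log above
if [ "$rc" -gt 128 ]; then
    echo "killed by signal $((rc - 128))"   # 137 -> 9 = SIGKILL
fi
# The kernel log usually confirms an OOM kill:
#   dmesg | grep -i 'out of memory'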

Memory management also got waaay better since your current version; I remember the struggle.
Just a suggestion: if I were you, I’d build the newest restic version that runs in your environment and switch to that.

Restic is a stand-alone binary you can run from any location. You don’t have to uninstall your vintage one if you can’t.

So just download the latest one and try it.
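
Something like this (the version number is just an example, grab whatever is current; release binaries are bzip2-compressed and, as far as I know, statically linked, so stretch itself shouldn’t be a blocker):

# Version is only an example; check the releases page for the latest.
wget https://github.com/restic/restic/releases/download/v0.17.3/restic_0.17.3_linux_amd64.bz2
bunzip2 restic_0.17.3_linux_amd64.bz2
chmod +x restic_0.17.3_linux_amd64
./restic_0.17.3_linux_amd64 version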