Fatal error using prune command

I have a restic repository that is about 350 GB, and I tried the restic forget / restic prune commands for the first time. The restic forget command succeeded (it removed 22 snapshots), but when I ran “restic prune --max-unused 10%”, restic exited with the error output pasted at the bottom of this post.

I then ran “restic unlock”, repeated the “restic prune --max-unused 10%”, and the command succeeded on the second attempt. But I thought I would report the error I saw with the first prune attempt.
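For reference, this is roughly the sequence of commands I ran (I’ve left out the exact retention flags I passed to restic forget, and the comments just summarize what happened):

restic forget [retention flags omitted]     # succeeded, removed 22 snapshots
restic prune --max-unused 10%               # first attempt: crashed with the error below
restic unlock
restic prune --max-unused 10%               # second attempt: completed without error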

I am on Windows 11 and the restic version command shows:
restic 0.17.3 compiled with go1.23.3 on windows/amd64

The relevant environment variables are RESTIC_REPOSITORY, which is set to an rclone backend pointing at a NAS device (rclone:my-cloud-nas://jakek/restic-repo), and RESTIC_PASSWORD_FILE, which is also set.
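In case the setup matters, the environment looks roughly like this (PowerShell sketch; the password file path is just a placeholder, not the real one):

$env:RESTIC_REPOSITORY = "rclone:my-cloud-nas://jakek/restic-repo"    # rclone backend on the NAS
$env:RESTIC_PASSWORD_FILE = "C:\path\to\restic-password.txt"          # placeholder path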

Thanks, overall restic has been a great tool!

Here is the error. I truncated it because the full dump was large, but I can include the full dump if it would help.

repository 239f949d opened (version 2, compression level auto)
loading indexes...
fatal error: unexpected signal during runtime execution
[signal 0xc0000005 code=0x0 addr=0x1f2f663bfcb pc=0x6c14ef]

runtime stack:
runtime.throw({0x159b726?, 0x61de3ff600?})
        /usr/local/go/src/runtime/panic.go:1067 +0x4d fp=0x61de3ff590 sp=0x61de3ff560 pc=0x70ee0d
runtime.sigpanic()
        /usr/local/go/src/runtime/signal_windows.go:395 +0x265 fp=0x61de3ff5d8 sp=0x61de3ff590 pc=0x6f0c65
runtime.(*mspan).isFree(...)
        /usr/local/go/src/runtime/mbitmap.go:1124
runtime.scanConservative(0xc00045dac0, 0x20, 0x0, 0xc000060150, 0x61de3ff728)
        /usr/local/go/src/runtime/mgcmark.go:1558 +0x12f fp=0x61de3ff628 sp=0x61de3ff5d8 pc=0x6c14ef
runtime.scanframeworker(0x61de3ff6c8, 0x61de3ff728, 0xc000060150)
        /usr/local/go/src/runtime/mgcmark.go:1034 +0x185 fp=0x61de3ff688 sp=0x61de3ff628 pc=0x6c0825
runtime.scanstack(0xc00030fdc0, 0xc000060150)
        /usr/local/go/src/runtime/mgcmark.go:888 +0x2c7 fp=0x61de3ff7b8 sp=0x61de3ff688 pc=0x6c0187
runtime.markroot.func1()
        /usr/local/go/src/runtime/mgcmark.go:238 +0xb1 fp=0x61de3ff808 sp=0x61de3ff7b8 pc=0x6bee71
runtime.markroot(0xc000060150, 0x48, 0x0)
        /usr/local/go/src/runtime/mgcmark.go:212 +0x1a5 fp=0x61de3ff8b0 sp=0x61de3ff808 pc=0x6beb05
runtime.gcDrainN(0xc000060150, 0x20d693)
        /usr/local/go/src/runtime/mgcmark.go:1311 +0x15d fp=0x61de3ff8e0 sp=0x61de3ff8b0 pc=0x6c0e7d
runtime.gcAssistAlloc1(0xc00030f500, 0x20d693)
        /usr/local/go/src/runtime/mgcmark.go:653 +0x10f fp=0x61de3ff940 sp=0x61de3ff8e0 pc=0x6bf94f
runtime.gcAssistAlloc.func1()
        /usr/local/go/src/runtime/mgcmark.go:544 +0x1b fp=0x61de3ff960 sp=0x61de3ff940 pc=0x6bf81b
runtime.systemstack(0xc000074e00)
        /usr/local/go/src/runtime/asm_amd64.s:514 +0x49 fp=0x61de3ff970 sp=0x61de3ff960 pc=0x715409

That’s a crash in the Go runtime itself. I couldn’t find anything particularly useful on a quick search. Please post the full stacktrace; maybe that sheds some light on what’s going on.

Thanks, yes, I can certainly post the full trace. It looks like Discourse has a limit of 32,000 characters per post, and the full stacktrace is about 66,000 characters. Sorry for the newbie question, but what do others do in this situation? Should I break it into three separate posts, or attach a public link to a zip file hosted on my Google Drive?

Thanks,
Jake

I would try pastebin.com

Oh perfect, thanks. Here is the stack trace:

Thanks again,
Jake

Thanks for the stacktrace. I took another look at the error code. Apparently 0xc0000005 == _EXCEPTION_ACCESS_VIOLATION, so it looks like the garbage collector tried to read from the wrong place. Other than that, the stacktrace shows that restic was loading the repository index at that point, which involves a lot of decoding. I don’t see anything particularly problematic, although I’m wondering whether the sha256 calculations at the bottom might have something to do with the crash (just a wild guess). The next restic version will switch back to using the sha256 implementation from the standard library, so that might already be enough to fix the issue.

Other than that, please keep an eye on whether the crash shows up again or whether this was a one-time event.


Thanks for taking a look at the stacktrace. I ran a second prune command on the original repository and also ran prune on a second repository of similar size, without seeing the error either time, so I’m not sure right now how to reproduce it.

But yes, I will definitely keep an eye out for this crash in the future.
