What are the implications of this error for my repo?

I’m getting the “unexpected end of JSON input” error on a B2 repo again - this time while running prune (which, by the way, is agonisingly slow - why does it take several hours to count files and rebuild the index?)

Here’s the error message:

Stat(<data/42f188c656>) returned error, retrying after 425.539723ms: Stat: unexpected end of JSON input
pack file cannot be listed 4381e249: ReadAt: ReadFull(<data/4381e2493e>): unexpected end of JSON input
Stat(<data/44089715ed>) returned error, retrying after 455.821327ms: Stat: unexpected end of JSON input
Stat(<data/4408a2871a>) returned error, retrying after 611.509061ms: Stat: unexpected end of JSON input
Stat(<data/445c1ee2be>) returned error, retrying after 503.327776ms: Stat: unexpected end of JSON input
pack file cannot be listed 4492e776: ReadAt: ReadFull(<data/4492e77622>): unexpected end of JSON input
Stat(<data/4496caa56f>) returned error, retrying after 566.744142ms: Stat: unexpected end of JSON input
Stat(<data/4496df1cd1>) returned error, retrying after 295.133106ms: Stat: unexpected end of JSON input
unable to refresh lock: Delete: unexpected end of JSON input
Stat(<data/44d6f03ae7>) returned error, retrying after 478.377086ms: Stat: unexpected end of JSON input
pack file cannot be listed 44d3a61a: ReadAt: ReadFull(<data/44d3a61a6e>): unexpected end of JSON input
Stat(<data/44d723d66a>) returned error, retrying after 273.386153ms: Stat: unexpected end of JSON input

I’ll try to re-run it with a debug log but, for the moment, I’m wondering if there are any implications for my repo. Will I need to run prune again? Will the repo be corrupted?

Which version of restic is that?

We’ve resolved these errors recently, but the code for that is not contained in any release yet. There are two different issues at play here.

Neither fix is contained in a release yet; would you mind retrying with the latest master branch? If you don’t want to compile restic yourself, you can get a beta release from here: https://beta.restic.net/v0.7.3-212-gf3bff129/
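For anyone who does want to build from source, a minimal sketch of building restic from master (this assumes a working Go toolchain and uses the official repository URL; the exact build steps may differ between restic versions):

```shell
# Sketch: build restic from the master branch.
# Assumes Go is installed and on the PATH.
git clone https://github.com/restic/restic
cd restic
go run build.go        # produces a ./restic binary in the current directory
./restic version       # confirm you are running the freshly built binary
```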

Ah, and when was that? I see several failed integration tests today with the B2 backend, so maybe Backblaze has a problem?

This was run last night (although it’s still running…) and the version is master from a couple of weeks ago. I’ll update, rebuild, and see where that gets us…

Running again now with the latest build from master and with debug logging enabled. I’ve already encountered a strange message:

body not drained, 0 bytes not read

Also, I’m curious as to why prune always runs a rebuild of the index, which not only seems to take several hours but also (according to an answer in another recent thread of mine) doesn’t do much unless the repo is corrupt…

You’re right: Rebuilding the index isn’t strictly necessary for a repo without any corruption. At the moment, we’re rebuilding the index (and not trusting the index files from the repo) for safety reasons, because prune is the one critical operation which really removes data and has the potential to destroy what’s in the repo. I’d really like to avoid that.

Eventually, we’ll improve the prune process (I have many ideas besides not rebuilding the index), but that’ll take some time.
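For context, a hedged sketch of the two operations being discussed (the repository URL below is a placeholder; this assumes a restic build from around that era, where index rebuilding was exposed as its own command):

```shell
# Sketch only; "b2:mybucket:myrepo" is a placeholder repository.

# Rebuilding the index as a standalone step - normally only needed
# when the index is suspected to be damaged:
restic -r b2:mybucket:myrepo rebuild-index

# prune currently performs an index rebuild internally as a safety
# measure before it removes any data:
restic -r b2:mybucket:myrepo prune
```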

Just completed all the checks and forgets/prunes on my B2 repos. It took around 11-12 hours - a little longer than I’d like!

Any tips on improving that (if it’s possible at this stage) gratefully received!

Just a thought: to what value (if any) have you set “-o b2.connections=n”?

In my case I found that setting n = 6 was adequate (i.e. sufficient to saturate my connection bandwidth) when doing a full backup, but a higher value (n >= 16) was needed to avoid painfully slow check and prune operations with B2. I confess that I never saw anything quite as slow as you’re seeing, however.
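To make the suggestion concrete, a sketch of passing the option to the slow operations (the repository URL is a placeholder, and 16 is just the example value mentioned above; tune it for your own bandwidth):

```shell
# Sketch only; "b2:mybucket:myrepo" is a placeholder repository.
# A higher connection count mainly helps the many small requests
# that check and prune issue against B2:
restic -r b2:mybucket:myrepo -o b2.connections=16 check
restic -r b2:mybucket:myrepo -o b2.connections=16 prune
```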

Ah - that’s interesting! I must confess that, while I have played around with that option, it was only for backups, and I largely stopped using it because the default seemed close enough to ideal. It hadn’t occurred to me, however, that a different value might be worth trying for a different operation. Thanks - I shall give that a go!

@phunni did -o b2.connections=n work for you? I have a ~300GB backup in B2 for which I haven’t yet been able to finish a prune (we’re talking days of running the prune command).

It helped a bit, but it’s still slow. Not as slow as yours, though. I have more than twice the amount of data as you and it usually finishes a prune in less than 24 hours. Still much longer than I’d like, but not as bad as “days”.