Really slow first backup after upgrading to repo v2

Hi guys, I followed the instructions in the 0.14 release notes for upgrading an existing repo. Specifically, I used migrate upgrade_repo_v2 and then forget --prune. (Hopefully that accomplishes the same thing?)
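For reference, the sequence I ran looked roughly like this (repository URL and password file are the same ones I use below; the retention flags stand in for my usual forget policy):

# upgrade the repository format from v1 to v2
restic -r rest:http://backups.example.com/ -p password.txt migrate upgrade_repo_v2
# then apply my retention policy and prune old data
restic -r rest:http://backups.example.com/ -p password.txt forget --keep-last 5 --prune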

Anyway, after doing that I ran a regular backup from one of my devices (177GB), and it’s taking forever, as if it’s shipping everything over to the remote repo again. Can anyone explain why this is happening? Is it something about the cache not being valid anymore?

Strange: I aborted it, retried later, and it ran as quickly as normal.

Which backend do you use? Any special options passed to restic?

It’s the rest-server backend:

restic -v -r rest:http://backups.example.com/ \
    -p password.txt \
    --exclude-file=exclude.txt \
    backup /home/kylrth

Hmm, I don’t see anything that’s obviously problematic here. So I guess we’ll have to wait until the problem reappears to debug this any further.


I have been having this same issue using B2. Aborting and restarting hasn’t helped. Here’s what the progress status is saying:

[4:15:27] 2.21% 6587 files 48.189 GiB, total 151228 files 2.127 TiB, 0 errors ETA 188:11:14

Here is how I am running it:
restic backup $FILES -o b2.connections=10 --exclude "#recycle" --exclude-caches --compression off

I turned compression off because I am running this from my NAS, and I think compression was causing my NAS to run out of memory and kill the backup process.

Could you send a SIGQUIT signal to restic (pressing Ctrl+\ in its terminal has the same effect)? This will abort the current backup run and make restic print a stack trace showing what it is doing. This won’t damage the repository, but you’ll probably want to run restic unlock to remove the stale lock afterwards.
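If you can’t easily reach the terminal restic is running in, sending the signal from another shell works too. A rough sketch (the PID and repository are placeholders):

# send SIGQUIT to the running restic process (same effect as Ctrl+\ in its terminal)
kill -QUIT <pid-of-restic>
# afterwards, remove the stale lock left behind by the aborted run
restic -r <your-repository> unlock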

You could also try using the S3 API for B2; see Preparing a new repository — restic 0.14.0 documentation.
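If you want to experiment with that, the repository URL switches to the s3 backend pointing at B2’s S3-compatible endpoint. A rough sketch with made-up endpoint, bucket, and key names (use the values from your own B2 account):

# credentials for the S3-compatible API go into the standard AWS environment variables
export AWS_ACCESS_KEY_ID=<b2-application-key-id>
export AWS_SECRET_ACCESS_KEY=<b2-application-key>
# note the connections option is s3.connections instead of b2.connections
restic -r s3:https://s3.us-west-004.backblazeb2.com/<bucket-name> backup $FILES \
    -o s3.connections=10 --exclude "#recycle" --exclude-caches --compression off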

I put the stacktrace from SIGQUIT at https://pastebin.com/JcE2y0UB

That stack trace looks like something went wrong inside the Go runtime: both FileSavers are waiting for the BlobSavers. However, both BlobSavers are stuck inside malloc(!).

Another strange thing is that one goroutine is marked as locked to a thread: goroutine 5 [select, 382 minutes, locked to thread]:. But as far as I can tell, that should never be the case for a parked goroutine.