I’m assessing the possibility of backing up directly to gdrive with rclone, with a large dataset of ~800 GB. All of this is with restic 0.9.5 from the GitHub releases.
I did an initial upload that failed at times because of data rate limits (several 403 and 500 errors). Still, I estimate ~95% of the data was uploaded properly.
`restic check` failed on this initial snapshot.
I later managed to fix these API errors, and restic now works consistently (thanks to `--tpslimit=10`, which I didn’t know about in the beginning). So the objective now is to get a snapshot that checks OK, without re-uploading everything. I read a bit about similar cases and tried this:
1. `restic forget <id>`, since the snapshot is bad.
2. `restic rebuild-index`, which as I understand it finds all packs and re-indexes any that need it. This succeeded and saved some more index files.
3. `restic recover`, to create a base snapshot from whatever is there. This completed, but complained about some missing trees.
4. `restic check`, which also failed because of missing trees.
5. `restic prune`, to drop any missing pieces. This one raises the following exception:
```
$ restic prune
repository 671e0295 opened successfully, password is correct
counting files in repo
building new index for repo
[15:10:21] 100.00%  146581 / 146581 packs
repository contains 146581 packs (2148425 blobs) with 695.380 GiB
processed 2148425 blobs: 23136 duplicate blobs, 4.205 GiB duplicate
load all snapshots
find data that is still in use for 1 snapshots
tree 1149c7a284f140dd66d1573a11719a30dfd5c7accff0cd21c6b966b0da38d31f not found in repository
github.com/restic/restic/internal/repository.(*Repository).LoadTree
	/restic/internal/repository/repository.go:709
github.com/restic/restic/internal/restic.FindUsedBlobs
	/restic/internal/restic/find.go:11
github.com/restic/restic/internal/restic.FindUsedBlobs
	/restic/internal/restic/find.go:31
github.com/restic/restic/internal/restic.FindUsedBlobs
	/restic/internal/restic/find.go:31
main.pruneRepository
	/restic/cmd/restic/cmd_prune.go:191
main.runPrune
	/restic/cmd/restic/cmd_prune.go:85
main.glob..func18
	/restic/cmd/restic/cmd_prune.go:25
github.com/spf13/cobra.(*Command).execute
	/restic/vendor/github.com/spf13/cobra/command.go:762
github.com/spf13/cobra.(*Command).ExecuteC
	/restic/vendor/github.com/spf13/cobra/command.go:852
github.com/spf13/cobra.(*Command).Execute
	/restic/vendor/github.com/spf13/cobra/command.go:800
main.main
	/restic/cmd/restic/main.go:86
runtime.main
	/usr/local/go/src/runtime/proc.go:200
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1337
```
`restic check` fails afterwards because of several missing trees.
Is this a bug? My understanding is that if I ran `prune` with no snapshots at all, everything would be removed, so `prune` must be run while at least one snapshot exists if I want to salvage anything. Or am I wrong here?
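To spell out my mental model in commands (a hypothetical sequence I have *not* run, with a placeholder snapshot ID):

```shell
# Hypothetical: if the recovered snapshot were also forgotten,
# prune would find no snapshots, mark no blobs as "in use",
# and remove every pack in the repository.
restic forget <recovered-snapshot-id>   # zero snapshots remain
restic prune                            # nothing referenced -> all data dropped
```

That’s why I kept the recovered snapshot around before pruning.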
Right now I’m not sure how to proceed. At worst I can redo the full backup, but it would give me more confidence in restic if I managed to heal this repository. I’m learning a lot about restic in the process, too.
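For reference, this is roughly how I’m applying the rate limit now via restic’s rclone backend (the remote name and paths are placeholders, and I believe `-o rclone.args` replaces restic’s default rclone argument list, so `serve restic --stdio` has to be repeated):

```shell
# "gdrive:backup" and /path/to/data are placeholders for my setup.
# --tpslimit 10 caps rclone at ~10 API transactions per second,
# which is what stopped the 403/500 rate-limit errors for me.
restic -r rclone:gdrive:backup \
  -o rclone.args="serve restic --stdio --tpslimit 10" \
  backup /path/to/data
```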