Pruned the copy destination, now copy fails

My usage model is:

restic backup --repo /local/repo
restic copy --repo /local/repo --repo2 remote

Not thinking it through, I decided to save some space on the cloud storage provider:

restic prune --repo remote

Hooray, saved a few GB! But now the copy throws errors…

  1. this was probably a bad idea, right?
  2. will pruning the local repo change it in the same way so that the repos are again consistent?
  3. is there another way to fix this issue?
  4. is there a way to verify that two repos are the same or contain the same data? would be useful to check that a copy succeeded
  5. is there a best practice for pruning in this case? Will pruning the local repo and then running a copy effectively prune the remote?

Thanks!

A problem report with related questions is pretty pointless if you don't provide the actual symptoms, the error messages, and preferably also the complete commands you ran and their output. The answers you are looking for depend on a lot of factors, most of which are hidden in exactly that information.

  1. Normally prune does not damage a repository in any way. But unless you provide more details, it's not possible to say more than that.
  2. No.
  3. Please provide the error reported by copy and the output of check for both repositories. We also need to know which restic version you are using.
  4. Not directly. copy without parameters will copy all snapshots to the destination repository. If you want to ensure that the destination repository is intact, just run check for that repository (see the sketch after this list).
  5. The copy command only adds data to a repository. It does not track or propagate snapshot deletions, so any pruning has to be applied to each repository separately (also covered in the sketch below).
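For questions 4 and 5, here is a minimal sketch using the repository names from your usage model above (the --read-data flag and the keep policy are just illustrative assumptions, not your actual setup): verify the destination with check, and apply retention to each repository separately, because copy never removes anything.

# verify the structure of the destination repository
restic check --repo remote

# optionally also download and verify the actual pack data (much slower)
restic check --repo remote --read-data

# retention/pruning has to be run against each repository on its own, e.g.:
restic forget --repo /local/repo --keep-monthly 12 --prune
restic forget --repo remote --keep-monthly 12 --prune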

I was hoping for high-level opinions about the sanity of that cmd sequence, without getting too far into the weeds! But I forget a lot of people here want to get into the weeds.

Here are the specifics:

restic version: 0.12.1
arch: 4.9.0-12-amd64 #1 SMP Debian 4.9.210-1+deb9u1 (2020-06-07) x86_64 GNU/Linux

cmds:

  1. backup --repo /mnt/backup /home/user files
  2. copy --repo /mnt/backup --repo2 b2:resource
  3. prune --repo b2:resource -v
  4. copy --repo /mnt/backup --repo2 b2:resource -v

I’ve been running #1 and #2 for a while now, but then I ran #3 once, and now #4 (which is really the same command as #2) fails:

snapshot 3f748e5e of [xxx] at 2021-01-07 10:06:10.995480026 -0800 PST)
skipping source snapshot 3f748e5e, was already copied to snapshot b1118fc3

snapshot a73b8662 of [xxx] at 2021-09-19 01:19:27.82599004 -0700 PDT)
skipping source snapshot a73b8662, was already copied to snapshot cc59fa82

[more snapshot + skip messages]

Remove(<lock/0f1ed5d1e6>) returned error, retrying after 720.254544ms: Delete: b2_delete_file_version: 400: File not present: locks/0f1ed5d1e6a9ccb11665211a995063bd6aa2a7106e312e21ad222e9fbc50219a 4_z50b3c1b1964246f472330915_f1071810b0da199e7_d20211009_m055925_c000_v0001401_t0023
Remove(<lock/0f1ed5d1e6>) returned error, retrying after 873.42004ms: Delete: b2_download_file_by_name: 404: File with such name does not exist.
Remove(<lock/0f1ed5d1e6>) returned error, retrying after 1.054928461s: Delete: b2_download_file_by_name: 404: File with such name does not exist.
Remove(<lock/0f1ed5d1e6>) returned error, retrying after 1.560325776s: Delete: b2_download_file_by_name: 404: File with such name does not exist.
Remove(<lock/0f1ed5d1e6>) returned error, retrying after 3.004145903s: Delete: b2_download_file_by_name: 404: File with such name does not exist.
Remove(<lock/0f1ed5d1e6>) returned error, retrying after 2.147653057s: Delete: b2_download_file_by_name: 404: File with such name does not exist.
Remove(<lock/0f1ed5d1e6>) returned error, retrying after 3.739082318s: Delete: b2_download_file_by_name: 404: File with such name does not exist.
Remove(<lock/0f1ed5d1e6>) returned error, retrying after 5.099891944s: Delete: b2_download_file_by_name: 404: File with such name does not exist.
Remove(<lock/0f1ed5d1e6>) returned error, retrying after 10.263247495s: Delete: b2_download_file_by_name: 404: File with such name does not exist.
Remove(<lock/0f1ed5d1e6>) returned error, retrying after 19.514091959s: Delete: b2_download_file_by_name: 404: File with such name does not exist.
error while unlocking: Delete: b2_download_file_by_name: 404: File with such name does not exist.
Remove(<lock/0f1ed5d1e6>) returned error, retrying after 656.819981ms: Delete: b2_download_file_by_name: 404: File with such name does not exist.

But even with these errors, the exit status was still set to 0.

The command sequence itself is sane and shouldn’t cause any problems. Most (if not all) restic commands are built such that one can’t accidentally break a backup repository. (Deleting lots of snapshots is still possible, though; see the sketch below.)
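A minimal illustration of that caveat (the repository path and keep policy are assumptions): forget supports a dry-run mode, so you can preview which snapshots a retention policy would delete before actually removing anything.

# preview which snapshots would be removed, without touching the repository
restic forget --repo /mnt/backup --keep-last 5 --dry-run

# only once the preview looks right, run it for real and reclaim the space
restic forget --repo /mnt/backup --keep-last 5 --prune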

That error message shows that the problem is totally different from what I expected based on the high-level description: it isn’t specific to the commands you’ve used. Instead, you’re seeing the B2 issues already reported in Restic hangs with B2 since 2021-10-01 and crash on unlock · Issue #3541 · restic/restic · GitHub.

Although it may sound strange, that exit status of 0 is actually correct. The problem here is that restic doesn’t realize that the old lock file has already been deleted on B2 and keeps complaining. The lock mechanism still works nevertheless. However, it can leave lots of stale locks behind, which you then have to remove using unlock (see the sketch below).
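If stale locks do accumulate, a rough sketch of cleaning them up (using the B2 repository name from the commands above):

# list the lock files currently present in the repository
restic list locks --repo b2:resource

# remove stale locks, i.e. locks left behind by processes that no longer exist
restic unlock --repo b2:resource

# or, if you are certain no other restic process is accessing the repository,
# remove all locks regardless of whether they are stale
restic unlock --repo b2:resource --remove-all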