Receiving error "error for tree [null]"

Hello there.

I am using restic 0.11 and just noticed this error. The repo still mounts and backups still work.
Since I use a script to back up, check, and prune, this has probably been happening for a while, or at least since my update to 0.11 on 15 Nov.

repository 6d9f087d opened successfully, password is correct
counting files in repo
building new index for repo
[2:46] 100.00%  11822 / 11822 packs
repository contains 11822 packs (404282 blobs) with 57.861 GiB
processed 404282 blobs: 0 duplicate blobs, 0 B duplicate
load all snapshots
find data that is still in use for 49 snapshots
[0:21] 63.27%  31 / 49 snapshots
id 0000000000000000000000000000000000000000000000000000000000000000 not found in repository
(*Repository).LoadBlob

Oh, interesting, this sounds like a bug. Can you please confirm it’s restic 0.11.0?

Yes, it’s restic 0.11.0 on Fedora 33 (kernel 5.9.11). BTRFS on both machine and external USB 3 HDD.

❯ doas restic self-update
writing restic to /usr/bin/restic
find latest release of restic at GitHub
restic is up to date

❯ uname -a
Linux tars 5.9.11-200.fc33.x86_64 #1 SMP Tue Nov 24 18:18:01 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

❯ doas btrfs filesystem usage /
    Device size:		 214.16GiB
    Device allocated:		 143.05GiB
    Device unallocated:		  71.12GiB
    Device missing:		     0.00B
    Used:			 103.92GiB
    Free (estimated):		 108.12GiB	(min: 108.12GiB)
    Data ratio:			      1.00
    Metadata ratio:		      1.00
    Global reserve:		 228.86MiB	(used: 0.00B)
    Multiple profiles:		        no

Data,single: Size:139.01GiB, Used:102.01GiB (73.38%)
   /dev/mapper/luks-7b63c76d-cf2c-4661-9431-4a3c79d4cf3f	 139.01GiB

Metadata,single: Size:4.01GiB, Used:1.91GiB (47.72%)
   /dev/mapper/luks-7b63c76d-cf2c-4661-9431-4a3c79d4cf3f	   4.01GiB

System,single: Size:32.00MiB, Used:48.00KiB (0.15%)
   /dev/mapper/luks-7b63c76d-cf2c-4661-9431-4a3c79d4cf3f	  32.00MiB

   /dev/mapper/luks-7b63c76d-cf2c-4661-9431-4a3c79d4cf3f	  71.12GiB

❯ doas btrfs filesystem usage /run/media/marcelo/THEGREY/
    Device size:		 931.51GiB
    Device allocated:		 798.51GiB
    Device unallocated:		 133.00GiB
    Device missing:		     0.00B
    Used:			 788.65GiB
    Free (estimated):		 140.62GiB	(min: 74.12GiB)
    Data ratio:			      1.00
    Metadata ratio:		      2.00
    Global reserve:		 512.00MiB	(used: 64.00KiB)
    Multiple profiles:		        no

Data,single: Size:794.49GiB, Used:786.87GiB (99.04%)
   /dev/mapper/luks-366d593c-cd8d-48c5-8714-b26b1ff8595f	 794.49GiB

Metadata,DUP: Size:2.00GiB, Used:908.12MiB (44.34%)
   /dev/mapper/luks-366d593c-cd8d-48c5-8714-b26b1ff8595f	   4.00GiB

System,DUP: Size:8.00MiB, Used:144.00KiB (1.76%)
   /dev/mapper/luks-366d593c-cd8d-48c5-8714-b26b1ff8595f	  16.00MiB

   /dev/mapper/luks-366d593c-cd8d-48c5-8714-b26b1ff8595f	 133.00GiB

❯ ~/Projects/rest-o/
====== Starting backup of /home/marcelo/ to /run/media/marcelo/THEGREY/BACKUPS/tars_linux/ =====

open repository
repository 6d9f087d opened successfully, password is correct
lock repository
load index files
start scan on [/home/marcelo/ /home/marcelo/Games/steamapps/compatdata/275850/pfx/drive_c/users/steamuser/Application Data/HelloGames/NMS]
start backup on [/home/marcelo/ /home/marcelo/Games/steamapps/compatdata/275850/pfx/drive_c/users/steamuser/Application Data/HelloGames/NMS]
scan finished in 25.447s: 286078 files, 30.049 GiB

Files:       286078 new,     0 changed,     0 unmodified
Dirs:        60450 new,     0 changed,     0 unmodified
Data Blobs:   2049 new
Tree Blobs:   2440 new
Added to the repo: 247.739 MiB

processed 286078 files, 30.049 GiB in 3:17
snapshot 5f420137 saved
====== Backup process finished =====

Ok, that’s odd. Somehow one of the subdirectories in a snapshot references an invalid tree ID (all zeroes). You could try to find out which snapshot it is by running:

$ restic find --blob 0000000000000000000000000000000000000000000000000000000000000000

How old is the snapshot? Can you guess which version of restic was used to create it?

Would you mind adding an issue on GitHub so we can track the aborted run of prune? It should not abort, but print a warning instead.

The all-zeroes tree is actually the root tree of a snapshot. FindUsedBlobs still works recursively in restic 0.11.

Sorry for the delay.

❯ restic find --blob 0000000000000000000000000000000000000000000000000000000000000000
repository 6d9f087d opened successfully, password is correct
Unable to load tree 0000000000000000000000000000000000000000000000000000000000000000
 ... which belongs to snapshot 9f16b85875496e2f9618871af9775e5f41133043b3549f6d8bc4a0580df61542

I can’t recall, I think May 2019 with restic 0.9.3 (really guessing here).

Ok. I’ll open one.


Thanks! Here’s the issue (for others reading along):

Running this command that MichaelEischer suggested on GitHub helped me find the snapshot causing the error. I mounted my repository, checked the snapshot with that date and time, and it was empty. Removing it solved the error message:

restic list snapshots -q | while read idxid; do \
        restic cat -q snapshot $idxid | jq 'select(.tree | contains("0000000000000000000000000000000000000000000000000000000000000000"))'; \
done
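For clarity, here is the same check sketched in Python over made-up snapshot metadata (the IDs below are hypothetical examples, not taken from a real repository): each `restic cat snapshot <id>` emits JSON with a `tree` field, and we keep only the snapshots whose root tree is the all-zero ID.

```python
# Sketch (not restic code) of what the shell loop above does.
# The snapshot records are made-up examples, not real repository data.
import json

ZERO_TREE = "0" * 64  # 64 hex zeroes, like the ID in the error message

snapshot_json = [
    '{"id": "5f420137", "tree": "a3b1c2d4"}',        # hypothetical healthy snapshot
    '{"id": "9f16b858", "tree": "%s"}' % ZERO_TREE,  # hypothetical broken snapshot
]

def broken_snapshots(records):
    """Return the IDs of snapshots whose root tree is all zeroes."""
    return [
        snap["id"]
        for snap in (json.loads(r) for r in records)
        if snap["tree"] == ZERO_TREE
    ]

print(broken_snapshots(snapshot_json))  # prints ['9f16b858']
```

Once the offending snapshot ID is known, removing it (as described above) would be done with restic's own `forget` command followed by a `prune`.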

As @cristian-spiescu mentioned that CTRL + C could have caused the problem, I assume that was it, or at least part of the cause. Since I use two displays and keep various windows/programs open, sometimes I issue a command in the wrong one, and sometimes it takes a while for me to notice :frowning:. It may have happened more than once, because I also move terminal windows to a separate desktop when something is going to take a while to finish.