I have been backing up from an x86 box without issues, but today I ran into a problem. When I check the repo integrity from a Docker container running on my M1 Mac, I get a bunch of errors:
# restic version; restic check
restic 0.18.0 compiled with go1.24.4 on linux/arm64
using temporary cache in /tmp/restic-check-cache-3796576143
create exclusive lock for repository
repository 9fdb5441 opened (version 2, compression level auto)
created new cache in /tmp/restic-check-cache-3796576143
load indexes
[0:00] 100.00% 3 / 3 index files loaded
check all packs
check snapshots, trees and blobs
Load(<snapshot/0786694185>, 0, 0) returned error, retrying after 1.232655534s: <snapshot/0786694185> does not exist
Load(<snapshot/0953bcecdd>, 0, 0) returned error, retrying after 861.58655ms: <snapshot/0953bcecdd> does not exist
I think this is due to the underlying CPU. If I do the same test on my x86 NAS, the repo is healthy.
If I have identified the issue correctly, would it be worthwhile, if at all possible, to warn about the difference in CPU architecture rather than going ahead and spewing worrying errors?
I’m happy to give it another go if that helps. It could of course be something else, but both backup and restore are done from Docker containers with virtually the same Dockerfile. The only difference would be the host.
I got it working (and documented) on my NAS, so if the problem is only with me, I’m happy to lay it to rest.
Ah I didn’t even see the “docker” part. Are you sure your container has the necessary rights to read the whole repository? Can you try with a local binary? Docker sometimes complicates things.
The same Dockerfile used on my x86 NAS works fine, so I think I’m good. I also made sure to run the container as root to rule out permission issues.
Running restic on my Mac (M1) gives me this:
$ restic version
restic 0.18.1 compiled with go1.25.1 on darwin/arm64
$ restic -r rclone:xxxxx snapshots
enter password for repository:
repository 9fdb5441 opened (version 2, compression level auto)
Load(<snapshot/526db8fd8c>, 0, 0) returned error, retrying after 920.866841ms: <snapshot/526db8fd8c> does not exist
Load(<snapshot/526db8fd8c>, 0, 0) returned error, retrying after 1.671092315s: <snapshot/526db8fd8c> does not exist
Load(<snapshot/526db8fd8c>, 0, 0) returned error, retrying after 5.811415563s: <snapshot/526db8fd8c> does not exist
It’s odd, but I don’t know how I could debug this further. I thought it was a problem with restic not warning about a CPU arch change and then failing, but if it works for you, then the problem is on my side and we need look no further.
No worries, thanks for your help. For me, the problem was resolved once I was able to test a restore. If someone has the same problem as me and wants me to debug it in some way, I’ll be happy to help.
You probably did so already, but I would double-check that the process is actually running with root rights; Docker can change the user. If you copy the output of this command from the failing machine here while it’s running, we can all see whether it’s running as root:
ps -ef | grep -E "restic|rclone"
Double-check that the lines don’t contain passwords…
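Another quick way to see which user a process runs as is to ask `ps` for the user field of a specific PID. A minimal sketch (here `$$`, the current shell, stands in for restic’s PID; on the failing machine you would substitute the actual restic or rclone PID):

```shell
# Print the effective user of a given PID.
# $$ is used as a placeholder PID; replace it with restic's real PID.
ps -o user= -p $$
```

If this prints anything other than `root`, the container has remapped the user despite the `--user` setting.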
Have you checked whether the missing snapshot files exist? I think that would point to either a permission problem or a wrongly computed snapshot hash:
Load(<snapshot/526db8fd8c>, 0, 0) returned error, retrying after 920.866841ms: <snapshot/526db8fd8c> does not exist
Load(<snapshot/526db8fd8c>, 0, 0) returned error, retrying after 1.671092315s: <snapshot/526db8fd8c> does not exist
Load(<snapshot/526db8fd8c>, 0, 0) returned error, retrying after 5.811415563s: <snapshot/526db8fd8c> does not exist
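For a repository on a local filesystem, that check amounts to looking for a file named after the snapshot ID under the repo’s `snapshots/` directory. A rough sketch with a throwaway directory standing in for the real repo (the long file name below is made up; restic names these files by the full snapshot ID, of which the short ID in the log is a prefix):

```shell
repo=$(mktemp -d)                 # throwaway stand-in for the repo root
mkdir -p "$repo/snapshots"
# Pretend a snapshot file exists (this full ID is invented for the sketch).
touch "$repo/snapshots/526db8fd8c0000000000000000000000"
# The actual check: is there a file whose name starts with the short ID?
ls "$repo/snapshots" | grep -c '^526db8fd8c'
```

For an rclone backend, the equivalent would be listing the `snapshots/` directory through rclone. If the file is there but restic still reports “does not exist”, permissions or hashing become the more likely suspects.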
I can’t say much about platform dependencies here, but to me the errors look like problems accessing the backend. These could be caused by specific (maybe platform-dependent) settings you are using, or (platform-dependent) permissions, etc. But IMO you should check whether the backend works identically in both cases rather than searching for differences in the CPU.
Disclaimer: I am not a programmer, so this is a wild guess here…
First I thought that this could be caused by a different endianness. But it seems that x86 and Apple silicon (M1) are both little-endian. Nevertheless, there are still gotchas when software is ported between processor architectures. For example, here is an article from Apple about porting software from x86 to Apple silicon.
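To illustrate the endianness point: both architectures store the least significant byte of a multi-byte value first, which can be checked from a shell. A rough sketch, relying on `od` interpreting two input bytes as one 16-bit word in host byte order:

```shell
# Write bytes 0x01 0x00 and read them back as one 16-bit word.
# Little-endian hosts (x86, Apple silicon) report 0001; big-endian, 0100.
word=$(printf '\001\000' | od -An -tx2 | tr -d ' \n')
if [ "$word" = "0001" ]; then echo little-endian; else echo big-endian; fi
```

Since both hosts in this thread would print `little-endian`, byte order is indeed an unlikely culprit.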