Dreaded ciphertext verification failure

That’s not great :confused: Restic is telling you that the config file (which you should have one of in the root of your repository) or the key file (which you should have one or more of in your keys/ directory) is corrupt.

The config file will appear to be a binary blob, as it is encrypted. The key file(s) however should contain JSON data.

Please first establish which of these files you have, and what they contain. Please also tell us what exact command (obfuscating hostnames and other sensitive data) you used to transfer the repository.
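
To help establish that, note that a restic key file should parse as plain JSON; per the repository format documentation it carries the scrypt KDF parameters plus the encrypted master key. A quick sanity check could look like this (field names are taken from the documented format; the sample values are made up):

```python
import json

# Fields a restic key file is documented to contain:
# scrypt KDF parameters plus the encrypted master key material.
EXPECTED_FIELDS = {"created", "username", "hostname", "kdf", "N", "r", "p", "salt", "data"}

def check_key_file(raw: bytes) -> set:
    """Parse a key file and return the documented fields it is missing.

    Raises ValueError if the file is not valid JSON (i.e. likely corrupt).
    """
    key = json.loads(raw)
    return EXPECTED_FIELDS - key.keys()

# Made-up sample with the documented structure:
sample = json.dumps({
    "created": "2023-01-01T00:00:00Z",
    "username": "user", "hostname": "host",
    "kdf": "scrypt", "N": 32768, "r": 8, "p": 1,
    "salt": "c2FsdA==", "data": "Y2lwaGVydGV4dA==",
}).encode()

print(check_key_file(sample))  # set() -> no documented fields missing
```

A key file that fails to parse, or is missing these fields, is almost certainly damaged.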

Thanks for the reply!

OK, to the best of my knowledge I have only one config file, with length 155 bytes in the root folder of the S3 backup bucket.

Should there be another one, somewhere? If so, I never noticed one.

My trusty backup script, which I’ve been running via cronitor for years:

#!/usr/bin/env bash

export RESTIC_PASSWORD="MyPassword"
export RESTIC_REPOSITORY="s3:https://s3.ap-southeast-2.amazonaws.com/<mybucket>"
export AWS_ACCESS_KEY_ID="accesskey"
export AWS_SECRET_ACCESS_KEY="secretaccesskey"

if ! restic snapshots; then
        echo "No bucket found, initialising a new one."
        restic init
fi
restic key list

restic backup -o s3.storage-class=${S3_STORAGE_CLASS} /myfolder

I also just noticed that there are 2 key files in /keys. It’s complaining about one of them, but not the other. Just for kicks, I tested by copying both into another directory and then leaving only one and then the other in the /keys folder.

With keyA in the /keys folder:

$ restic check
using temporary cache in /tmp/restic-check-cache-2118777768
Fatal: config or key <keyA> is damaged: ciphertext verification failed

With keyB in the /keys folder

$ restic check
using temporary cache in /tmp/restic-check-cache-510746577
repository 6b4ae7e9 opened (version 1)
created new cache in /tmp/restic-check-cache-510746577
create exclusive lock for repository
Fatal: unable to create lock in backend: ciphertext verification failed

There should be only one single config file. My (test) config file is exactly 155 bytes in size too.

If you run restic --key-hint <theOtherKeyFileHashHere> check, do you get a different result? And if you still get an error, does the error message now mention the hash you specified instead?

Also, I was curious about the aws command you used to sync the data, not the restic command you use to back up your data.

The sync command was a simple one-liner (after setting all the appropriate IAM and bucket policies, of course):

aws s3 sync s3://oldbucket s3://newbucket

That ran for a long time with no errors. I ran it twice just to make sure; the second round produced no output at all, so I assumed everything was in order.

As for running with --key-hint, see my attempt at being thorough and leaving only one key file in the /keys directory at a time.

Which version of restic are you using? Can you copy the key files to your computer and run shasum -a 256 or similar on each of them, and see if their name matches the hash you get from this command?
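
For reference, restic names the files in keys/ (like most repository files) after the SHA-256 hash of their contents, so a filename/hash mismatch points directly at corruption. A minimal stdlib stand-in for that shasum comparison (the helper name and demo files are hypothetical):

```python
import hashlib
import tempfile
from pathlib import Path

def name_matches_content(path: Path) -> bool:
    """restic names repository files after the SHA-256 hash of their
    contents; return True if this file's name matches its own hash."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return path.name == digest

# Demo: one file correctly named after its own hash, one misnamed.
data = b"example key file contents"
with tempfile.TemporaryDirectory() as d:
    good = Path(d) / hashlib.sha256(data).hexdigest()
    good.write_bytes(data)
    bad = Path(d) / ("0" * 64)  # wrong name -> simulated corruption
    bad.write_bytes(data)
    ok, broken = name_matches_content(good), name_matches_content(bad)

print(ok, broken)  # True False
```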

$ restic version
restic 0.15.1 compiled with go1.19.5 on linux/amd64

output of sha sum check (obfuscated):

$ sha256sum keyA
<keyA> <keyA>
$ sha256sum keyB
<keyB> <keyB>

It seems to me that keyB works, considering that restic reports it could open the repository using that key and the key file name also matches its own hash.

The fact that the repository could be opened means restic was able to decrypt the config file. I can’t say if it would be possible to decrypt the encrypted config file and still have a partial corruption (even if just one byte, for example in a part that specifies the encryption parameters) in the config file’s data, but I would be surprised.
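
For what it's worth, "ciphertext verification failed" is a MAC mismatch: restic authenticates every encrypted blob (with Poly1305-AES, per the design docs), so decryption is all-or-nothing and a single flipped bit anywhere in the blob makes verification fail. The same property can be illustrated with stdlib HMAC (an analogy only, not restic's actual cipher):

```python
import hmac
import hashlib

key = b"0" * 32
blob = b"pretend this is an encrypted config file"
tag = hmac.new(key, blob, hashlib.sha256).digest()

def verify(data: bytes) -> bool:
    """Accept the blob only if its MAC checks out, mirroring how an
    authenticated cipher rejects any tampered ciphertext outright."""
    candidate = hmac.new(key, data, hashlib.sha256).digest()
    return hmac.compare_digest(candidate, tag)

tampered = bytes([blob[0] ^ 0x01]) + blob[1:]  # flip one bit in the first byte
print(verify(blob))      # True
print(verify(tampered))  # False -> "verification failed", no partial decrypt
```

So a partially corrupted blob that still decrypts "mostly fine" shouldn't be possible: verification either passes for the whole blob or fails outright.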

Assuming that is not the case, i.e. that your config file is fine too, I’m starting to wonder if there are hardware issues or bad memory involved. But that’s mostly a long shot, as I don’t know the restic internals well enough to debug this the way @MichaelEischer can :slight_smile: Regardless, can you run some memory tests on your system?

I don’t suppose you can copy the repository from S3 to a local disk and try running restic on it there?

Would it be possible to run restic on an entirely different system, to see if you still get the very same symptoms?

Also, can you try other commands, such as listing the snapshots with restic snapshots, and let us know what that yields?

While it’s not impossible that I suddenly have a memory issue out of nowhere, I literally ran one last backup manually to the original bucket without issues, and then ran the sync from within an AWS CloudShell prompt (if those machines have memory issues, that would be a frightening thought!). I just think it’s highly unlikely.

Copying the repo locally runs into the same problem that stops me from simply starting over from scratch (the snapshot history isn’t super-crucial): it’s too slow in my corner of the world. It’d take forever!

Any restic command gives the same results, e.g. restic snapshots.

Running it from a different system might be possible though. Will give it a try.

Just ran on a physically different Linux machine.

Output/results are the same.

$ restic --key-hint keyB check
using temporary cache in /tmp/restic-check-cache-2670965372
repository 6b4ae7e9 opened (version 1)
created new cache in /tmp/restic-check-cache-2670965372
create exclusive lock for repository
Fatal: unable to create lock in backend: ciphertext verification failed

$ restic --key-hint keyB snapshots
repository 6b4ae7e9 opened (version 1)
created new cache in /home/user/.cache/restic
Fatal: unable to create lock in backend: ciphertext verification failed

That usually means that the password for the key is the correct one, but the key doesn’t match the config file.

The password also works for that key, and restic got one step further: it could also successfully decrypt the config file. So this is probably the correct key. However, the lock files are either damaged or from a different repository. You can use restic unlock --remove-all to remove the broken lock file. Afterwards, the repository is hopefully accessible again.


OK, I first ran unlock --remove-all without --key-hint and got the error, so it was using keyA by default.


$ restic --key-hint keyB unlock --remove-all
repository 6b4ae7e9 opened (version 1)
successfully removed 2 locks


$ restic --key-hint keyB check
using temporary cache in /tmp/restic-check-cache-2791600578
repository 6b4ae7e9 opened (version 1)
created new cache in /tmp/restic-check-cache-2791600578
create exclusive lock for repository
load indexes
error: error loading index 0043b30790336acb38b9790f0ef8a7cfe51e366fcc9dc6c7d591fa5e59a99cc3: ciphertext verification failed
error: error loading index 00ba5e2a867fab3cfd069f54ccb1a9f8466c17952541a294266d48922f6606e6: ciphertext verification failed
error: error loading index 004bc7b12f6d0f678bf80c2ec72823d20d73871f39bac096ea61213c5eff6735: ciphertext verification failed
error: error loading index 00384ea2b320bdb75522f3be5bf58505df50aebd1ca02d77dfc671e085806a1d: ciphertext verification failed
error: error loading index 014871ef7d383f613e00e2c29b2994cc68adfcb12d3d7d762812dfa468dfb76e: ciphertext verification failed
error: error loading index 00555664d6586357ab79a8d1c75552137c271ac12836c8fa42d5b7d7ab5796da: ciphertext verification failed
error: error loading index ff1c857ec7742ffdb799a1a0bafa491266360527a8d21249fbaf70bb52dd521a: ciphertext verification failed
error: error loading index ffe3c9dc2b0e201f6953826bd5e8c344fd507bf161ef6d1c4d08c254756b8826: ciphertext verification failed
error: error loading index fffebd8e4dd5e5c170c592a2e1095ed45b136298464648bcff38dcde7fe19a00: ciphertext verification failed
Fatal: LoadIndex returned errors

snapshots has the same issue. Some progress, some issues remain. Should I simply remove the file for keyA? Is it somehow still being used leading to these errors?

It seems like you synced your repository over/on top of another one. Is that possible?

No, I used an empty, newly created bucket.

Out of curiosity, I’m trying to run restic inside the GoLand debugger, but having a hard time, due to it using a separate builder tool, which confuses the hell out of GoLand. I wanted to do some tracing with breakpoints and see what values it thinks it should have.

Is it somehow possible that restic got confused and that the config is encrypted with keyB - but the actual data and snapshot files are still encrypted with keyA? I have no idea how that would have happened as part of an S3 sync, but just throwing that out there.

The only possible way to achieve that is to mix files from different repositories which were using the same password. (The master key of a repository never changes, and restic does not modify the config file, except at the moment a repository is upgraded to version 2.) There might also be some way to achieve that when a bit flip occurs in memory while creating a new key (although newer restic versions should detect that too). But that definitely cannot happen when calling s3 sync.

Judging from the error message it seems likely that the data and snapshots might use keyA. You could create a fake config file by following the steps in Changing a repository id - #6 by MichaelEischer . But make a backup copy of the current config file and the keys/ folder first. Additionally, the fake config file will allow you to access existing data (assuming that keyA works), but it will likely break deduplication for newer data.
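
For context on why that breaks deduplication: the decrypted config is tiny. Per the repository format documentation it is just a JSON document holding the repo version, a random ID, and the chunker polynomial, and a fake config necessarily gets a new random polynomial, so new backups chunk the data differently. A made-up example of the plaintext (the real file is stored encrypted with the master key):

```python
import json
import secrets

# Plaintext layout of a restic config file, per the repository format
# documentation. All values below are made up for illustration.
fake_config = {
    "version": 1,                             # matches "opened (version 1)" above
    "id": secrets.token_hex(32),              # random 32-byte repository ID, hex
    "chunker_polynomial": "3abc9bff07d9e5",   # made-up value; drives chunk boundaries
}

print(json.dumps(fake_config, indent=2))
```

Since the chunker polynomial determines where restic cuts files into blobs, a different polynomial means new backups produce entirely different blobs than the existing ones, hence no dedup against old data.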


Apologies for silence, my day job was keeping me occupied.

Been glancing at the instructions, but the loss of deduplication is a bit worrisome.

I’m slowly starting to lean towards the “oh, just get it over with and start over” scenario. I’m doing some speed tests to gauge the required time.

Would some “official” method of “oh sh*t” recovery for such worst case scenarios be considered? E.g. with a separate and external recovery file, like other platforms use? --recovery-key <supersecretfile>?

There’s so far no recovery key, but you can just create a copy of the config file and the keys folder. Together with the password, these are sufficient to decrypt the data in a repository.

Except when, as apparently happened here, the config gets corrupted.

The config or key files becoming corrupted is exactly what @MichaelEischer’s suggestion protects against. So what you wrote just now makes very little sense :slight_smile: Also, I don’t think it’s entirely established that it is the config file that is corrupt, unless I missed something along the way.