Lock on S3 - unable to create lock in backend: ciphertext verification failed

Hi!
I am using a bucket on AWS S3 to create backups of some files and folders that are on my Mac.
I have the following profiles.yaml (I changed some values to avoid disclosing them):

version: "1"

global:
  # run 'snapshots' when no command is specified when invoking resticprofile
  default-command: snapshots
  # initialize a repository if none exist at location
  initialize: false
  # priority is using priority class on windows, and "nice" on unixes
  priority: low
  # resticprofile won't start a profile if there's less than 100MB of RAM available
  min-memory: 100
  # Retry a restic command that failed on acquiring a lock every minute 
  # (at least), for up to the time specified in "--lock-wait duration". 
  restic-lock-retry-after: 0
  # Ask restic to unlock a stale lock when its age is more than 2 hours
  # and the option "force-inactive-lock" is enabled in the profile.
  restic-stale-lock-age: 0

# a group is a profile that will call all profiles one by one
groups:
  # when starting a backup on profile "full-backup",
  # it will run the "root" and "src" backup profiles
  full-backup:
    - abc
    - efg
    - jki

# Default profile when not specified on the command line (-n or --name)
# There's no default inheritance from the 'default' profile,
# you can use the 'inherit' flag if needed
default:
  # add environment variables
  env:
    TMPDIR: /tmp
  password-file: restickey.txt
  repository: "s3:s3.amazonaws.com/bucketname"
  
abc:
  inherit: default
  initialize: true
  # LOCAL lockfile so you cannot run the same profile more than once at a time
  # (it's totally independent of the restic locks on the repository)
  lock: "/tmp/resticprofile-profile-abc.lock"
  force-inactive-lock: true
  
  backup:
    verbose: true
    exclude-caches: true
    # files with no path are relative to the configuration file
    source:
      - /abc
    tag:
      - abc 
    
    # ignore restic warnings when files cannot be read
    no-error-on-warning: true
    schedule: "*-*-* 00:10"
    schedule-permission: user
    schedule-priority: background
    schedule-log: "abc-backup.log"
    schedule-lock-wait: 0

  prune:
    schedule: "*-*-02 04:00"
    schedule-permission: user
    schedule-lock-wait: 0

  check:
    read-data: true
    schedule: "*-*-01 01:00"

efg:
  inherit: default
  initialize: true
  lock: "/tmp/resticprofile-profile-efg.lock"
  force-inactive-lock: true

  backup:
    verbose: true
    exclude-caches: true
    source:
      - /efg
    tag:
      - efg
      
    schedule: "*-*-* 00:30"
    schedule-permission: user
    schedule-priority: background
    schedule-log: "efg-backup.log"
    schedule-lock-wait: 0
    
  check:
    read-data: true
    schedule: "*-*-01 01:30"

jki:
  inherit: default
  initialize: true
  
  backup:
    verbose: true
    exclude-caches: true
    source:
      - /jki
    tag:
      - jki
      
    schedule: "*-*-* 01:00"
    schedule-permission: user
    schedule-priority: background
    schedule-log: "jki-backup.log"
    schedule-lock-wait: 0

  check:
    read-data: true
    schedule: "*-*-01 03:00"

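With this configuration, each profile can be run individually, or the whole full-backup group can be started in one go (a minimal sketch, using the group name defined above; resticprofile runs the group's profiles one by one):

resticprofile -n full-backup backup
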
After running the init command, if I run the first backup (abc) with the command resticprofile -v -n abc backup, it works. But when it finishes and I run another backup (efg or jki), I receive the following error:

using temporary cache in /tmp/restic-check-cache-4052454400
repository bb817cb9 opened (version 2, compression level auto)
created new cache in /tmp/restic-check-cache-4052454400
create exclusive lock for repository
unable to create lock in backend: ciphertext verification failed
2024/05/09 14:19:50 check on profile 'default': exit status 1

I noticed that restic had created a locks folder in the S3 bucket, and the lock there prevented me from running the other backups I set up in the group.

I read this resource: References — restic 0.16.4 documentation

I worked around the problem by deleting the locks folder in the S3 bucket so I could run the backups again, but that is not a real solution.
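
For reference, restic has its own unlock command for removing stale locks, which would have been a safer way to clean up than deleting the folder by hand (a sketch, assuming the repository and password file from the default profile above; --remove-all also deletes locks that are not stale):

resticprofile -n abc unlock
# or directly with restic:
restic -r s3:s3.amazonaws.com/bucketname --password-file restickey.txt unlock --remove-all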

Is there something I am doing wrong?
Is there any way to avoid this lock problem?

I think it was an issue on the AWS side and (probably) with the bucket's name.
I created another bucket, and it seems to be working fine now.


That error only happens when the lock file is either corrupted (which shouldn't be possible with S3) or belongs to a different repository. (Or your system has massive memory corruption issues, but for that to affect lock files you would already have noticed the problem somewhere else.)
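
If it happens again, you can inspect the locks restic sees in the repository; a lock written with a different repository key would fail exactly at this decryption step (a sketch, assuming the repository and password file from the profile above):

restic -r s3:s3.amazonaws.com/bucketname --password-file restickey.txt list locks
# show the contents of one lock; this only succeeds if the lock belongs to this repository/key
restic -r s3:s3.amazonaws.com/bucketname --password-file restickey.txt cat lock <lock-ID>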