Hi. Loving restic so far. I’m using it to back up a webserver directory to a business Google Drive. The backup script runs without issue every day. I have a prune script which runs once a week, which occasionally throws a locking error and fails to prune. The lock seems to be from the previous prune process (i.e. it’s dated approx 168 hours ago). I have a couple of questions about this.
The script logs the output of the prune process to a text file, and then emails it to me. The relevant line is:

/usr/bin/restic --repo rclone:mydrive:directory forget --prune --verbose --keep-daily 6 --keep-weekly 10 >> $LOGFILE
This works great when restic is behaving itself. However, when it hits a lock error (or, I’m guessing, other errors), nothing is logged to the text file. There is NO output. It would be useful to have this information so I can see at a glance that something has gone wrong, and, optionally, use it to make decisions in the script. E.g. if a lock is detected with ‘check’, then run ‘unlock’ and send me an alert. How do I make the output of restic errors appear in my logfile?
I understand the correct way to fix the lock error is to run the restic ‘unlock’ command. First of all, is there any downside to running it before each weekly prune? If there is no issue, then I might as well append it to the front of the script before pruning.
Or would it be better to run it at the end of the script, to ensure the locks are released at the end of the run?
Thank you cdhowie. I’ve been struggling with stdout and stderr for years now: some kind of mental block. I can never quite figure out where to put them in the command line! Thanks for the pointer, and now I’m looking forward to the next error!
The best way to think about it is as a matrix of fds. By default:
0 is stdin and is connected to terminal input.
1 is stdout and is connected to terminal output.
2 is stderr and is connected to terminal output.
But note that 1 and 2 are still separate fds which means they can be independently flushed, etc.
Redirections are processed left-to-right and all they do is manipulate one of these fds.
>> $LOGFILE opens the log file for append as fd 1. Note that fd 2 (stderr) is still connected to the terminal.
2>&1 literally says "call dup2(1, 2)", which makes fd 2 refer to the same file description as fd 1 (they share an offset, for example). Effectively this redirects stderr into whatever stdout currently is.
If you put these redirection directives in the opposite order (2>&1 >> $LOGFILE), then 2>&1 would not appear to do anything – it would effectively merge the stdout and stderr fds, but at that point both already refer to the terminal, and right after you’d be replacing the stdout fd with redirection into the log file. Stderr would, however, not be affected by this and would continue to send output to the terminal.
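Here is a runnable sketch of the two orderings, using a throwaway log file (/tmp/redir-demo.log is just for the demo):

```shell
LOGFILE=/tmp/redir-demo.log

# Correct order: fd 1 is pointed at the log first, then 2>&1 duplicates
# fd 1 into fd 2, so stderr follows stdout into the log.
: > "$LOGFILE"
{ echo "out line"; echo "err line" >&2; } >> "$LOGFILE" 2>&1
grep -c "line" "$LOGFILE"   # both lines are in the log

# Wrong order: 2>&1 duplicates fd 2 from wherever fd 1 points *right now*
# (the terminal), and only afterwards is fd 1 redirected to the log,
# so stderr never reaches the log.
: > "$LOGFILE"
{ echo "out line"; echo "err line" >&2; } 2>&1 >> "$LOGFILE"
grep -c "line" "$LOGFILE"   # only the stdout line made it
```

The same pattern applied to your prune line would be `... --keep-weekly 10 >> $LOGFILE 2>&1`.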
If you’re the only one backing up to this repo, and you’re absolutely sure you’re not backing up when the script runs, then there shouldn’t be any problem.
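If you do decide to prepend it, a minimal sketch (the repo and forget flags are copied from your command; the log path and failure message are made up):

```shell
# Run unlock before the weekly prune so a stale lock from a crashed
# previous run can't block it. restic unlock only removes stale locks,
# so it won't break a backup that's genuinely running.
REPO="rclone:mydrive:directory"
LOGFILE="/tmp/restic-prune.log"   # hypothetical log path

/usr/bin/restic --repo "$REPO" unlock >> "$LOGFILE" 2>&1

# Capture stderr too, and record a non-zero exit code so failures
# show up in the emailed log.
/usr/bin/restic --repo "$REPO" forget --prune --verbose \
    --keep-daily 6 --keep-weekly 10 >> "$LOGFILE" 2>&1
status=$?
if [ "$status" -ne 0 ]; then
    echo "restic prune failed (exit $status)" >> "$LOGFILE"
fi
```

You could also branch on `$status` to send yourself an alert, which covers the "make decisions in the script" part of your question.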
I do know that restic won’t clear an active lock - see this thread. One thing to note is that they definitely don’t recommend manually removing anything from the locks folder.
I have restic running in a Windows environment with about 80 clients. What I did was make each user backup script check for an “unlocked” file: if the file exists, run the backup; if it doesn’t, wait X amount of time and try again. The maintenance script (see this thread) then does the following:

Remove the “unlocked” file, and wait about 24 hours for all the backup jobs to finish and notice the “unlocked” file is gone (I typically run it on a Friday afternoon before the weekend).
Call restic unlock, then check for the existence of any files under /path/to/repo/locks. If any exist, wait ~15m, run restic unlock again, wait, run again, etc. until it clears.
Run my pruning jobs.
Recreate the “unlocked” file.

Just a thought, if it would be useful to you.
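The client-side gate from that scheme could be sketched like this (every name here is made up for illustration; in practice the marker file would live on a share all clients can see):

```shell
# Wait for a shared "unlocked" marker file before backing up, so the
# maintenance window can drain the clients.
wait_for_marker() {
    # usage: wait_for_marker MARKER RETRIES WAIT_SECS
    # returns 0 as soon as MARKER exists, 1 if it never appears
    marker=$1 retries=$2 wait_secs=$3
    i=0
    while [ "$i" -lt "$retries" ]; do
        [ -e "$marker" ] && return 0
        sleep "$wait_secs"
        i=$((i + 1))
    done
    return 1
}

# Demo with a temporary marker so the sketch runs as-is.
touch /tmp/unlocked
if wait_for_marker /tmp/unlocked 12 1; then
    echo "marker present - safe to back up"
    # /usr/bin/restic --repo rclone:mydrive:directory backup /your/data
else
    echo "marker missing; skipping backup this cycle" >&2
fi
```

In a real deployment the retries/wait would be much longer (hours, not seconds), matching the ~24h drain window described above.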