I apologize if this has been discussed before - I did my best to research existing questions here before posting.
I’ve been backing up several systems with Restic with great success.
Recently, a task has come up that requires me to periodically extract files with a specific extension from those (Windows) systems. I would like to extract them from the Restic snapshots, as they are up to date and I can find the files I need easily from a Linux machine instead of the Windows clients.
However, when I try to mount in the background (both from my terminal and from a script), one of the following happens:
- the terminal hangs until I kill the process from another terminal
- the mount exits and nothing happens
- the system logs me out of my ssh session
I’ve been trying the following commands:
restic -r sftp://[path-to-mount] mount [path-to-mountpoint] &
Following another thread here, I also tried the following:
nohup restic -r sftp://[path-to-mount] mount [path-to-mountpoint] > restic-mount.log 2>&1 &
And various variations of those two commands.
I’m not sure what I’m doing wrong and would greatly appreciate help - is it possible to mount a restic snapshot, extract files, and unmount it from a bash script?
Can you just create a new folder using mkdir and try again? Is the mountpoint located on some special filesystem?
I don’t have the slightest clue why that should happen. restic is definitely not messing with the terminal. Do you have any special ssh/shell configuration that could cause this?
Hi,
The issue repeats itself with a new folder - I’ve also been able to verify this is happening on a brand new Linux install (tried Ubuntu 20.04 and CentOS 7).
As far as I’m aware there’s no unique shell configuration on my server, and definitely nothing on the new installs I’ve tried. I’ve tried across several filesystems as well (XFS and Btrfs).
I do not get logged out if I run the same command without the closing &.
The only common factor I can think of is the system on the other end (a TrueNAS server).
While I would be glad to solve this strange issue, the intent of my post was to figure out what would be the ‘best practice’ when attempting to mount Restic snapshots via script - the command I posted could be flawed for this.
Are you running that command directly on the terminal or in a script? The logout behavior sounds like a bash script with set -e that exits after the first non-zero exit code.
What is the reason to prefix the command with nohup? And what does restic print if you run the command without a trailing &?
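To see that set -e behavior in isolation, here is a small demo that runs a throwaway child shell - nothing restic-specific:

```shell
# Demonstration of the set -e behavior described above: the shell
# stops at the first command that returns a non-zero exit status.
bash -c '
set -e
echo "before"
false          # returns 1: with set -e, the shell exits right here
echo "after"   # never reached
'
echo "child shell exited with status $?"
```

If a script like this is sourced into (or is itself) your login shell, that early exit is what ends the ssh session.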
To use restic mount in a script you probably want something like the following:
restic mount [...] &
restic_pid=$!
# do stuff
kill -SIGINT $restic_pid
wait # wait until restic mount terminates
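Fleshing that sketch out, a complete cycle could look roughly like the following. The repository URL, mountpoint, destination directory, and the *.xyz extension are placeholders to substitute with your own values; the snapshots/latest path is the directory layout restic's FUSE mount exposes:

```shell
#!/usr/bin/env bash
# Sketch of a mount / extract / unmount cycle based on the snippet above.
# All paths and the *.xyz extension are placeholders.
set -euo pipefail

# Bail out cleanly on machines without restic installed.
command -v restic >/dev/null 2>&1 || { echo "restic not found" >&2; exit 0; }

repo="sftp://[path-to-mount]"
mnt="[path-to-mountpoint]"
dest="/tmp/extracted"

restic -r "$repo" mount "$mnt" &
restic_pid=$!

# Wait until the FUSE mount is ready (the snapshots/latest dir appears).
for _ in $(seq 1 30); do
    [ -d "$mnt/snapshots/latest" ] && break
    sleep 1
done

# Copy the files we need out of the latest snapshot.
mkdir -p "$dest"
find "$mnt/snapshots/latest" -name '*.xyz' -exec cp {} "$dest" \;

# Ask restic to unmount cleanly, then wait for it to exit.
kill -INT "$restic_pid"
wait "$restic_pid" || true
```

The `wait ... || true` at the end matters under set -e: a process that exits because of a signal reports a non-zero status, which would otherwise abort the script.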
Most likely, although I don’t see what could be the problem. How do you call that script exactly via ssh? Any usage of sudo or similar? The sftp access probably requires some ssh public key to be present and accessible. Is restic able to use that key without blocking on a password prompt?
What happens if you call false in your ssh session? Does that also terminate the ssh session?
I’ve tried both calling a script from my shell (./[script-for-restic].sh) - no sudo or anything similar - and executing the restic mount command directly.
Calling false does nothing. It does not log me out.
I also have a repository password file as well as an SSH key set up for my target.
However, as it turns out, executing the command in cron is successful, both directly and via script - and that’s good enough for my needs, so that is what I will do.
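For anyone landing here later, the crontab entry was along these lines (the schedule and script path are just examples):

```shell
# Run the extraction script every day at 02:30, logging output.
30 2 * * * /home/user/restic-extract.sh >> /home/user/restic-extract.log 2>&1
```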