Bash scripting with Restic - mount, extract specific files and umount

Hi all,

I apologize if this has been discussed before - I did my best to search existing questions here before posting.

I’ve been backing up several systems with Restic with great success.
Recently, a task has come up that requires me to periodically extract files with a specific extension from those (Windows) systems. I would like to pull them from the Restic snapshots instead, since the snapshots are up to date and I can find the files I need more easily from a Linux machine than from the Windows clients.

However, when I try to mount in the background (both from my terminal and a script), one of the following happens:

  1. the terminal hangs until I kill the process from another terminal
  2. the mount command exits and nothing happens
  3. the system logs me out of my SSH session

I’ve been trying the following commands:

restic -r sftp://[path-to-mount] mount [path-to-mountpoint] &

Following another thread here, I also tried the following:

nohup restic -r sftp://[path-to-mount] mount [path-to-mountpoint] > restic-mount.log 2>&1 &

I’ve also tried several variations of those two commands.

I’m not sure what I’m doing wrong and would greatly appreciate help - is it possible to mount, extract files, and umount a restic snapshot from a bash script?

Thanks!

restic shouldn’t be able to cause issue 1 or 3. Could the system maybe be running out of memory?

What output does restic print? And which version are you using?

Hi Michael - thanks for answering!
When running the command:

 nohup restic -r sftp://[path-to-repository] mount [path-to-mountpoint] > restic-mount.log &

The following is the contents of restic-mount.log:

mount helper error: fusermount: mountpoint is not empty
mount helper error: fusermount: if you are sure this is safe, use the 'nonempty' mount option

Just to clarify - the mountpoint is a newly created, empty folder.
This command also logs me out of my SSH session if I press Enter.

I’m using Restic version 0.12.1 on openSUSE.

The system is a test server which does absolutely nothing - I monitored the memory usage when running the command and everything looks OK.

Thanks!

Can you just create a new folder using mkdir and try again? Is the mountpoint located on some special filesystem?
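
If the error keeps showing up, it might also be worth checking whether the directory really is empty and whether a leftover mount from an earlier attempt is still sitting on it - something along these lines (the mountpoint path is just an example):

mountpoint /tmp/restic      # reports whether a filesystem is already mounted there
fusermount -u /tmp/restic   # unmounts a leftover FUSE mount, if there is one
ls -A /tmp/restic           # an empty directory should print nothing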

I don’t have the slightest clue why that should happen. restic is definitely not messing with the terminal. Do you have any special ssh/shell configuration that could cause this?

Hi,
The issue repeats itself with a new folder - I’ve also been able to verify it happens on brand-new Linux installs (I tried Ubuntu 20.04 and CentOS 7).
As far as I’m aware there’s no unusual shell configuration on my server, and definitely nothing on the fresh installs I’ve tried. I’ve also tried several filesystems (XFS and Btrfs).

I do not get logged out if I run the same command without the closing &.

The only common factor I can think of is the system on the other end (a TrueNAS server).

While I would be glad to solve this strange issue, the intent of my post was to figure out the ‘best practice’ for mounting Restic snapshots from a script - the command I posted may well be flawed for that purpose.

Thank you for your help!

Are you running that command directly on the terminal or in a script? The logout behavior sounds like a bash script with set -e that exits after the first non-zero exit code.
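
For illustration, a contrived script like this would abort at the first failing command when set -e is active:

#!/bin/bash
set -e
false                  # non-zero exit code, so the script stops here
echo "never reached"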

What is the reason to prefix the command with nohup? And what does restic print if you run the command without a trailing &?

To use restic mount in a script you probably want something like the following:

restic mount [...] &
restic_pid=$!
# do stuff
kill -SIGINT $restic_pid
wait # wait until restic mount terminates
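
Fleshed out for your use case (copying out files with one specific extension), a rough sketch might look like the following; the repository URL, the .docx extension, and the destination directory are just placeholders, and the loop is a crude way of waiting for the FUSE mount to become available:

#!/bin/bash
REPO="sftp://[path-to-repository]"
MOUNTPOINT=/tmp/restic
DEST=/tmp/extracted

mkdir -p "$MOUNTPOINT" "$DEST"

restic -r "$REPO" mount "$MOUNTPOINT" &
restic_pid=$!

# wait up to ~30 seconds until the FUSE mount shows up
for _ in $(seq 30); do
    mountpoint -q "$MOUNTPOINT" && break
    sleep 1
done

# copy the wanted files out of the most recent snapshot (flattens the directory structure)
find "$MOUNTPOINT/snapshots/latest" -name '*.docx' -exec cp {} "$DEST" \;

kill -SIGINT $restic_pid
wait $restic_pid   # wait until restic mount terminates and releases the mountpoint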

Hi, and thank you for your help,

I’ve tried following your suggestion - below is the script I’m using:

restic -r sftp://[user]@[remote_host]:[port]//[path_to_share] mount /tmp/restic/ &
restic_pid=$!
echo "Retic is $restic_pid"

This works, but it does not actually send the mount process to the background - it locks the terminal when I run it (displaying logout).

If I happen to cause some error (missing mount directory, wrong password, or similar), it also logs me out of the SSH session.

I’ve managed to replicate this on several servers - is there anything I’m missing?

Thanks!

Most likely, although I don’t see what the problem could be. How exactly do you call that script via ssh? Any usage of sudo or similar? The sftp access probably requires an ssh public key to be present and accessible. Is restic able to use that key without blocking on a password prompt?

What happens if you call false in your ssh session? Does that also terminate the ssh session?
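
One quick way to check that the key works non-interactively is something like the following (host and port placeholders as in your repository URL):

ssh -o BatchMode=yes -p [port] [user]@[remote_host] true   # fails instead of prompting if the key cannot be used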

Thanks again for the help, Michael.

I’ve tried both calling a script from my shell (./[script-for-restic].sh) - no sudo or anything similar - and just executing the restic mount command directly.

Calling false does nothing. It does not log me out.
I also have a repository password file as well as an SSH key set up for my target.

However, as it turns out, executing the command from cron is successful, both directly and via the script - and that’s good enough for my needs, so that is what I will do.
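
For reference, the crontab entry is nothing special - roughly along these lines (the schedule and paths are just examples):

0 6 * * * /home/[user]/[script-for-restic].sh >> /tmp/restic-extract.log 2>&1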

Thank you so much for your help thus far.