Backup with error: transport endpoint is not connected

Hi, I am running Restic to back up files from a remote file system mounted with SSHFS. The Restic version is 0.13.1 and the operating system is CentOS 7.
I created a shell script for the restic backup and set up a cron job to run the script on a schedule.
Sometimes the backup fails, and the error messages look like the following:

open repository
reading repository password from stdin
lock repository
load index files
using parent snapshot 4a4482f3
start scan on [/mnt/xx/xx/xx/xx]
start backup on [/mnt/xx/xx/xx/xx]
scan finished in 2.466s: 775 files, 382.645 MiB
error: open /mnt/xx/xx/xx/xx/xx: software caused connection abort
error: open /mnt/xx/xx/xx/xx/xx: transport endpoint is not connected
error: open /mnt/xx/xx/xx/xx/xx: transport endpoint is not connected
error: open /mnt/xx/xx/xx/xx/xx: transport endpoint is not connected
....... (many more lines with "error: open /mnt/xx/xx/xx/xx/xx: transport endpoint is not connected")
Fatal: unable to save snapshot: Lstat: stat /mnt/xx: transport endpoint is not connected
The status code is : 1
reading repository password from stdin
scanning...
Stats in restore-size mode:
Snapshots processed:   157
   Total File Count:   10763
         Total Size:   4.759 GiB
using temporary cache in /tmp/restic-check-cache-2725503532
reading repository password from stdin
create exclusive lock for repository
load indexes
check all packs
check snapshots, trees and blobs
[0:00] 100.00%  157 / 157 snapshots
read 100.0% of data packs
[0:02] 100.00%  252 / 252 packs
no errors were found


When I mount the file system again with SSHFS, the next scheduled backup works. However, after several days the same error happens again. Is this normal for the backup? May I know why it happens, or whether there is any way to make it more stable?
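For context, the manual remount is nothing special; it is roughly the following (the host and paths here are placeholders, not my real ones):

# unmount the stale SSHFS mount, then mount it again
fusermount -u /mnt/backup_item_path
sshfs backupuser@fileserver:/data /mnt/backup_item_path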

The shell script for the restic backup is as follows:

backup_item_path=$1
backup_repo_path=$2
output_txt_filename=$3
# Update restic's progress output only about once a minute so the log file stays small
export RESTIC_PROGRESS_FPS=0.016666
echo 'repo_password' | sudo -u restic /home/restic/bin/restic -r "$backup_repo_path" --verbose backup "$backup_item_path" >> "$output_txt_filename" 2>&1
echo "The status code is : $?" >> "$output_txt_filename"
echo 'repo_password' | sudo -u restic /home/restic/bin/restic -r "$backup_repo_path" --verbose stats >> "$output_txt_filename" 2>&1
echo 'repo_password' | sudo -u restic /home/restic/bin/restic -r "$backup_repo_path" check --read-data-subset=100% >> "$output_txt_filename" 2>&1

There are multiple folders to back up, and I set up multiple cron jobs like the following in the root user's crontab:

/usr/bin/bash /path_to_shell_script "/mnt/backup_item_path/" "/backup_repo_path" "/output_txt_path"
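For reference, each crontab entry looks roughly like this (the schedule below is only an example, not my actual timing):

# hypothetical schedule: run one backup job every night at 02:00
0 2 * * * /usr/bin/bash /path_to_shell_script "/mnt/backup_item_path/" "/backup_repo_path" "/output_txt_path"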

AFAIK SSHFS has some quirks with long-lived connections. Maybe you could find something in the SSH logs on the source and destination (e.g. a connection issue / broken pipe, or other apps that use this mount logging access warnings). In any case, I'd suggest using an SFTP connection, which restic supports natively, or setting up the connection over rclone if possible, so that whenever restic is triggered it creates a fresh connection for the backup.
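For example, on CentOS 7 the sshd messages usually end up in /var/log/secure on the machine that exports the files, so something like the following might reveal a disconnect around the failure time (the patterns and time range are just examples):

# look for disconnects / broken pipes on the SSH server side
grep -Ei 'disconnect|broken pipe|connection (closed|reset)' /var/log/secure
journalctl -u sshd --since "2 days ago"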

Restic only supports storing a repository via SFTP; it cannot read the files to back up over SFTP. Using rclone to mount the remote folder via SFTP could work, though.
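A rough sketch of what that could look like (the remote name, host, and paths here are placeholders, not from your setup):

# one-time: create an SFTP remote in rclone
rclone config create fileserver sftp host server.example.com user backupuser key_file /home/restic/.ssh/id_rsa
# mount it read-only before the backup; restic then reads from the local mount point
rclone mount fileserver:/data /mnt/backup_item_path --read-only --daemon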