Question about Restic backup suddenly running slow

Hi, I have been running restic backups for several months and they have worked fine.
One of my backups normally completes very quickly, but today it suddenly ran very slowly.

The backup information is as follows:

open repository
reading repository password from stdin
lock repository
load index files
using parent snapshot 3dbd27a7
start scan on [/mnt/xxxxxx/]
start backup on [/mnt/xxxxxx/]
scan finished in 712.801s: 1691 files, 705.143 GiB

Files:           1 new,  1689 changed,     1 unmodified
Dirs:            0 new,   541 changed,     0 unmodified
Data Blobs:      1 new
Tree Blobs:    524 new
Added to the repo: 21.398 MiB

processed 1691 files, 705.143 GiB in 4:02:49
snapshot 3f317d3d saved
The status code is : 0

The information from the previous backup is as follows:

open repository
reading repository password from stdin
lock repository
load index files
using parent snapshot 30ac8864
start scan on [/mnt/xxxxxx/]
start backup on [/mnt/xxxxxx/]
scan finished in 20.216s: 1690 files, 705.140 GiB

Files:           0 new,     1 changed,  1689 unmodified
Dirs:            0 new,     3 changed,   538 unmodified
Data Blobs:      0 new
Tree Blobs:      4 new
Added to the repo: 1.742 MiB

processed 1690 files, 705.140 GiB in 0:20
snapshot 3dbd27a7 saved
The status code is : 0

I did not add many files between the two backups, and when I compare the snapshots, the result is as follows:

comparing snapshot 3dbd27a7 to 3f317d3d:

  +/mnt/xxxx/yyyy.xlsx

Files:           1 new,     0 removed,     0 changed
Dirs:            0 new,     0 removed
Others:          0 new,     0 removed
Data Blobs:      1 new,     0 removed
Tree Blobs:    524 new,   524 removed
  Added:   21.398 MiB
  Removed: 21.165 MiB

The size of the newly added Excel file is around 3.8 MB.
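
For reference, the comparison above came from restic's diff command, invoked along these lines (repository path redacted, as elsewhere in this post):

# produces the "comparing snapshot ... to ..." output shown above
restic -r "/backup_repo_path" diff 3dbd27a7 3f317d3d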

I am backing up files from a remote file system mounted with SSHFS. The restic version is 0.13.1, the operating system is CentOS 7, and multiple restic backups run at the same time to back up different folders.

May I know why this backup suddenly took much longer than normal? Thanks.

Please start by including the complete restic commands you used to run these two backups (including any environment variables, etc).

I have created a shell script for the restic backup and set up a cron job to run it. Both of the backups above were scheduled runs. After every backup, the script runs the check command to verify integrity and consistency. I followed the guide (Examples — restic 0.13.1 documentation) to create a restic user.

I have multiple folders to back up, so I set up multiple cron jobs like the following as the root user:

/usr/bin/bash /path_to_shell_script "/mnt/backup_item_path/" "/backup_repo_path" "/output_txt_path"
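
As a concrete illustration, one full crontab entry would look roughly like this (the schedule is a placeholder; each job gets a different source folder, repository, and log file):

# hypothetical entry in root's crontab; 02:00 daily is a placeholder schedule
0 2 * * * /usr/bin/bash /path_to_shell_script "/mnt/backup_item_path/" "/backup_repo_path" "/output_txt_path"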

The shell script is as follows:

# Arguments: source folder, repository path, log file
backup_item_path=$1
backup_repo_path=$2
output_txt_filename=$3
# Throttle progress output to roughly one update per minute
export RESTIC_PROGRESS_FPS=0.016666
echo 'password' | sudo -u restic /home/restic/bin/restic -r "$backup_repo_path" --verbose backup "$backup_item_path" >> "$output_txt_filename" 2>&1
echo "The status code is : $?" >> "$output_txt_filename"
echo 'password' | sudo -u restic /home/restic/bin/restic -r "$backup_repo_path" --verbose stats >> "$output_txt_filename" 2>&1
echo 'password' | sudo -u restic /home/restic/bin/restic -r "$backup_repo_path" check --read-data-subset=100% >> "$output_txt_filename" 2>&1

I checked the environment variables of the root user with the printenv command; the result is as follows:

LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.m4a=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.oga=01;36:*.opus=01;36:*.spx=01;36:*.xspf=01;36:
LANG=en_HK.UTF-8
HISTCONTROL=ignoredups
HOSTNAME=my_host_name
AWS_SECRET_ACCESS_KEY=MY_AWS_SECRET_ACCESS_KEY
which_declare=declare -f
USER=root
PWD=/root
HOME=/root
MAIL=/var/spool/mail/root
SHELL=/bin/bash
TERM=xterm
RESTIC_PASSWORD=password
AWS_ACCESS_KEY_ID=MY_AWS_ACCESS_KEY
SHLVL=1
LOGNAME=root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
HISTSIZE=1000
RESTIC_REPOSITORY=s3:https://s3.amazonaws.com/xxxxxx
LESSOPEN=||/usr/bin/lesspipe.sh %s
BASH_FUNC_which%%=() {  ( alias;
 eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@
}
_=/usr/bin/printenv

Is there still a performance problem? The most likely explanation is that either reading from the sshfs mount took much longer than usual, or there was a performance problem during the upload.
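
One way to tell the two apart is to time a raw read from the mount, independent of restic. A minimal sketch, assuming some large file exists under the mount (the file name is a placeholder):

# stream ~1 GiB from the sshfs mount and time it (file path is hypothetical)
time dd if=/mnt/xxxxxx/some_large_file of=/dev/null bs=1M count=1024

If that is also slow, the bottleneck is the sshfs link rather than restic or the S3 upload.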

In the slow backup run, all files are reported as changed, which means that restic had to read all ~700 GiB via sshfs. So either the metadata of all files changed on the remote side, or restic saw different metadata while reading from the sshfs mount. One possible reason would be that the inodes in the sshfs mount have changed; AFAIR these are not stable and can change from time to time. If the problem shows up again, you could give restic backup --ignore-inode [...] a try.
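
Applied to the script above, that is a one-line change; a sketch (untested):

# --ignore-inode tells restic to leave the inode out of its change-detection
# check, so a file whose inode (but not size or mtime) changed is not re-read
echo 'password' | sudo -u restic /home/restic/bin/restic -r "$backup_repo_path" --verbose backup --ignore-inode "$backup_item_path" >> "$output_txt_filename" 2>&1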