Hi restic users,
I’m running into an interesting issue when backing up my machine to an SFTP location on my local network. I’m using restic 0.16.4 compiled with go1.22.2 on linux/amd64 (Ubuntu).
The command I run and the resulting error look as follows:
sudo restic -r sftp:nas:/restic --verbose --exclude={/dev,/media,/mnt,/proc,/run,/sys,/tmp,/var/tmp} backup /
open repository
restic@<redacted>'s password:
enter password for repository:
repository 42e7b8f4 opened (version 2, compression level auto)
lock repository
no parent snapshot found, will read all files
load index files
[0:02] 100.00% 22 / 22 index files loaded
start scan on [/]
start backup on [/]
error: no result
error: no result
error: no result
scan finished in 27.601s: 1436676 files, 309.476 GiB
Fatal: unable to save snapshot: sftp: no space left on device
The .ssh config entry I am using for the machine looks like this:
Host nas
HostName <redacted>
User restic
Port 2222
ServerAliveInterval 60
ServerAliveCountMax 240
The connection itself definitely works. Interestingly, the very first run backed up and uploaded all files, and only finishing the backup failed. If I rerun the command now, it reproducibly gives the output noted above.
Manually logging in to the machine via SFTP with the same user (restic), it looks like this:
# sftp nas
sftp> df -h
Size Used Avail (root) %Capacity
7.0TB 3.5TB 3.5TB 3.5TB 50%
sftp> cd /restic/
sftp> pwd
Remote working directory: /restic
sftp> ls
config data index keys locks snapshots
sftp> df -h
Size Used Avail (root) %Capacity
7.0TB 3.5TB 3.5TB 3.5TB 50%
The permissions for the user are also fine: I can see the uploaded files as well as the lock file being created when I re-run the backup command shown at the very top, and space is available. (I had already played around with different “sftp roots” for the user, since at / the NAS would report a different availability than in the restic folder, but that is definitely not the case anymore.)
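For completeness, a direct way to double-check write access would be a simple put/remove round trip over SFTP, along these lines (the file names are just placeholders):
sftp> put /etc/hostname /restic/write-test
sftp> rm /restic/write-test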
I have also checked that there is enough space (and enough inodes) on my machine itself, including the .cache folder:
root@lynx2:~/.cache# df -h /root/.cache
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p6 98G 61G 32G 66% /
root@lynx2:~/.cache# df -i /root/.cache
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/nvme0n1p6 6467216 1271625 5195591 20% /
root@lynx2:~/.cache#
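The restic cache for the repository also lives on that filesystem; its size can be checked with something like the following (the path assumes the default cache location for root):
root@lynx2:~/.cache# du -sh /root/.cache/restic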
Another observation I made is that the amount of GiB scanned increases ever so slightly with each subsequent run. Is there a way to increase verbosity further when running restic?
Any further hints would be appreciated, thank you.
-f