Hi Folks,
I have a speed issue too.
My setup is as follows:
- Backup-Source: Ubuntu with ZFS-Filesystem
- restic 0.15.1 compiled with go1.19.5 on linux/amd64
- all Source files are part of ZFS-Dataset myDataset
- Backup-Destination: Western Digital NAS
- restic 0.15.1 compiled with go1.19.5 on linux/arm
- rest-server 0.11.0 compiled with go1.17.6 on linux/arm
- Backup Job:
- Create ZFS-Snapshot:
zfs snapshot myDataset@resticbackup
- Create Mount Point:
mkdir -p "/mnt/snapshot/myDataset"
- Mount just created ZFS-Snapshot:
mount -t zfs myDataset@resticbackup /mnt/snapshot/myDataset
- Start Restic Backup:
restic --password-file /root/restic/.resticPass \
-r rest:http://docker:[PASSWORD]@restic.MYDOMAIN.de:8000/docker \
backup \
--tag fullbackup \
--files-from-verbatim /root/restic/backup-DirList \
--ignore-inode \
-v
- Unmount Snapshot:
umount /mnt/snapshot/myDataset
- Delete Mount Point:
rmdir /mnt/snapshot/myDataset
- Delete ZFS Snapshot:
zfs destroy myDataset@resticbackup
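For reference, the steps above can be combined into one script with a cleanup trap, so the snapshot and mount are released even if restic fails part-way. This is a sketch using the same names from the post (myDataset, /mnt/snapshot/myDataset, the password file and repository URL); the `cleanup` trap and the `zfs` availability check are my additions, not part of the original job.

```shell
#!/usr/bin/env bash
# Sketch of the backup job: snapshot -> mount -> restic -> unmount -> destroy.
set -euo pipefail

DATASET="myDataset"
SNAP="${DATASET}@resticbackup"
MNT="/mnt/snapshot/myDataset"

cleanup() {
    # Undo mount and snapshot no matter how the script exits.
    if mountpoint -q "$MNT" 2>/dev/null; then umount "$MNT"; fi
    if [ -d "$MNT" ]; then rmdir "$MNT"; fi
    if zfs list -t snapshot "$SNAP" >/dev/null 2>&1; then zfs destroy "$SNAP"; fi
}

run_backup() {
    zfs snapshot "$SNAP"
    mkdir -p "$MNT"
    mount -t zfs "$SNAP" "$MNT"
    restic --password-file /root/restic/.resticPass \
        -r "rest:http://docker:[PASSWORD]@restic.MYDOMAIN.de:8000/docker" \
        backup \
        --tag fullbackup \
        --files-from-verbatim /root/restic/backup-DirList \
        --ignore-inode \
        -v
}

if command -v zfs >/dev/null 2>&1; then
    trap cleanup EXIT
    run_backup
    STATUS="done"
else
    # Allows a dry run on a machine without ZFS installed.
    echo "zfs not found; nothing to do"
    STATUS="skipped"
fi
```

The `trap cleanup EXIT` is the main design point: without it, a failed restic run leaves the snapshot mounted and the next `zfs snapshot` fails because `myDataset@resticbackup` still exists.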
Everything works as intended, but even on incremental runs it seems that restic reads all files from disk and recalculates the checksums: the backup takes more than 9 hours for 1.4 TiB, and most of that time (25,995 s, about 7.2 hours) is spent scanning the files, as you can see in the log.
Here is the Restic Output:
open repository
lock repository
using parent snapshot cc5c2ad6
load index files
start scan on [/mnt/snapshot/myDataset/Dir1 /mnt/snapshot/myDataset/DirN]
start backup on [/mnt/snapshot/myDataset/Dir1 /mnt/snapshot/myDataset/DirN]
scan finished in 25995.155s: 1590443 files, 1.404 TiB
Files: 1583 new, 386 changed, 1588474 unmodified
Dirs: 6 new, 779 changed, 976992 unmodified
Data Blobs: 5615 new
Tree Blobs: 779 new
Added to the repository: 3.224 GiB (3.224 GiB stored)
processed 1590443 files, 1.404 TiB in 9:16:21
snapshot 3d010c06 saved
What am I doing wrong?
PS: There is almost no network traffic during the first 7 hours of the backup.