First of all, thanks a lot in advance for your help, and for the great Restic tool.
I've already reviewed some topics that didn't help in my case, so I've decided to detail my situation in the hope of improving it.
This is my setup:
I have 2.8 TB in around 268k files to back up (the disks are formatted as encrypted APFS). This is on an up-to-date macOS iMac, with a 4 TB Apple RAID 0 external disk (2×2 TB SSD) over Thunderbolt 2. The iMac's OS is also on an SSD, so all caches are on SSD as well.
I'm happy with the speed of the drives… I don't think that should be the problem, even when accessing small files or doing parallel reads. The SSDs are pretty fast.
Restic backs up over the LAN to an SFTP server on Windows (yes… it has to be Windows… don't ask me why… it's a long story). I'm using rclone for that, which also encrypts the repository at rest.
I'm getting a decent, constant 100-150 Mbps throughput on the backup (the first one), which is taking days, by the way.
This is my Restic backup line:
restic --password-file ~/.file_1 -r rclone:SFTP_enc: backup /Volumes/DISK
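In case it helps with diagnosis, here is a sketch of the same command with increased verbosity (I'm assuming `--verbose=2` here, which as far as I know makes restic print, per file, whether it considers it new, changed, or unmodified, plus a summary at the end):

```shell
# Same backup line as above, but with per-file change-detection output.
restic --password-file ~/.file_1 -r rclone:SFTP_enc: \
    backup --verbose=2 /Volumes/DISK
```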
Problem: even when I modify a very small percentage of those 268k files, it takes hours to determine which files are identical and which need to be backed up.
I understand Restic keeps a local cache that stores metadata for all the files of the last snapshots (and previous ones). That cache is on the SSD, and so is the data. Why does verifying 268k files take hours?
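For what it's worth, the local cache does seem to exist (I'm assuming restic's default cache location on macOS is under ~/Library/Caches/restic); it can be listed with:

```shell
# Show restic's local cache directories, with size and age per repository.
restic cache
```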
In "Backing up — restic 0.12.1 documentation", I saw which attributes are used to determine whether a file has changed and needs backing up. Based on the info I posted, should I use either of the `--ignore-ctime` or `--ignore-inode` options?
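In other words, should my command line become something like the following sketch (I haven't verified that these flags help in my case)?

```shell
# Skip ctime and inode comparison during change detection, relying on
# size + mtime only. Both flags exist in restic 0.12.x; they are meant
# for cases where ctime/inode churn makes unchanged files look modified.
restic --password-file ~/.file_1 -r rclone:SFTP_enc: backup \
    --ignore-ctime --ignore-inode /Volumes/DISK
```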
I used to do this backup with rsync, and it determines in seconds which files it needs to copy (I'm referring only to the time spent deciding what to back up).
IMHO, something is not configured correctly; 2.8 TB is not that big.