Performance - SFTP + Rclone

Hi all:

First of all, thanks a lot in advance for your help, and for the great Restic tool.

I have already reviewed some topics that did not help in my case, so I decided to detail my situation and, if possible, improve it.

This is my setup:

I have 2.8 TB in around 268k files to back up (the disks are formatted as encrypted APFS). This is on an up-to-date macOS iMac, with a 4 TB Apple RAID 0 external disk (2x2 TB SSD) over Thunderbolt 2. The iMac also has its OS on an SSD, so all cache is on SSD as well.

I’m happy with the speed of the drives… I don’t think this should be the problem, even when accessing small files or doing multiple reads at once. The SSDs are pretty fast.

Restic is backing up over the LAN to an SFTP server on Windows (yes… it has to be Windows… don’t ask me why… it’s a long story). I’m using rclone for that, which also encrypts the repository at rest.

I’m getting a decent, constant 100–150 Mbps throughput on the (first) backup, which is taking days, by the way.

This is my restic backup command:

restic --password-file ~/.file_1 -r rclone:SFTP_enc: backup /Volumes/DISK

Problem: even when I modify a very small percentage of those 268k files, it takes hours to determine which files are identical and which need backing up.


  1. I guess restic has an internal cache that stores metadata for all files from the last snapshot (and previous ones). It’s stored on SSD, as is the data. Why does it take hours to verify 268k files?

  2. In “Backing up — restic 0.12.1 documentation”, I saw the information restic uses to determine whether a file is unchanged or needs to be backed up. Based on the info I posted, should I use either the `--ignore-ctime` or `--ignore-inode` option?
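For reference, these change-detection flags are passed directly on the backup command line. A sketch, reusing the repository and password file from the command above:

```shell
# Skip the ctime check — useful if something on the system updates
# ctime without changing file contents, forcing needless re-reads:
restic --password-file ~/.file_1 -r rclone:SFTP_enc: backup /Volumes/DISK --ignore-ctime

# Skip the inode check — useful on filesystems or mounts where inode
# numbers are not stable between runs:
restic --password-file ~/.file_1 -r rclone:SFTP_enc: backup /Volumes/DISK --ignore-inode
```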

I used to do this backup with rsync, and it determines in seconds which files it needs to back up (I’m referring only to the time spent determining what to back up).

IMHO… something is not configured correctly; 2.8 TB is not that big.


You write that it needs to be Windows, but does it need to be SFTP? SFTP is a really bad protocol when it comes to performance. rest-server was written specifically as a backend for restic, and it runs on Windows.

Thanks. I thought about it, but I wasn’t sure it runs on Windows. Just to be sure: do I need to go via Docker (I read that running Docker on Windows is a very bad idea), or do I need to compile it from source with the Go compiler?

You can do that if you want, but there is no need to compile it yourself or use Docker. Binaries for multiple operating systems and architectures are automatically built for every new release (just like with restic). Take a look at the assets on the release page.
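For reference, a minimal sketch of running rest-server on the Windows machine and pointing restic at it; the repository path, hostname, and port are placeholders, so adjust them to your setup:

```shell
# On the Windows machine: start rest-server (the .exe downloaded from
# the release assets) with a repository directory and a listen port.
rest-server --path C:\restic-data --listen :8000

# On the Mac: back up to the REST backend instead of SFTP/rclone.
restic --password-file ~/.file_1 -r rest:http://windows-host:8000/ backup /Volumes/DISK
```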

Ohh!!! My bad. Thanks… it was right under my nose.

In parallel (I’m installing rest-server now)… this will not affect the time for preparing snapshots, right? I mean, the backend is not involved in the snapshot preparation (which is taking over an hour each time).


AFAIK restic needs to communicate with its backend while scanning to know which files are new or have changed. But I can’t tell if, or how much, the speed varies between backends at this step, because I’ve never really compared them.

Uhm… maybe someone can confirm. Does restic store the local metadata cache BEFORE the first snapshot finishes? I mean, my first backup did not finish.

I saw that the process is faster than when it’s deduplicating and backing up, and I guess this is because it already has metadata cached from the unfinished snapshot.

Can anyone confirm this please?

Also… is there any reason why it takes an hour just to prepare (i.e., to verify which files it needs to back up)?

Extra comment: I stopped the first snapshot and started it again (same command). I saw many SFTP connections and 200 Mbps of upload with many files “uploaded”. It’s clearly faster than the first run, since in 13 minutes it reached 6.55%… but it’s still doing something…?? If I already backed this up, it should compare against the local cache and skip those files…


restic only stores encrypted data in the repository, so it’s not really necessary to encrypt everything a second time. restic is also able to speak SFTP itself, which might be faster than using rclone as an intermediary.
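For comparison, a sketch of the same backup using restic’s built-in SFTP backend directly; the user, hostname, and remote repository path are placeholders:

```shell
# No rclone in between: restic speaks SFTP itself over SSH.
restic --password-file ~/.file_1 -r sftp:user@windows-host:/restic-repo backup /Volumes/DISK
```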

restic prints statistics about modified files when it finishes. Do the numbers of new, changed, and unmodified files roughly match what you expect? Which restic version are you using?

All necessary metadata for a snapshot should be stored in the cache after running a backup once. Later backup runs will then just access that local cache.
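If it helps to verify, the local cache can be inspected directly; a sketch, assuming the default macOS cache location (restic uses the user cache directory unless `RESTIC_CACHE_DIR` overrides it):

```shell
# Default cache directory restic uses on macOS, unless overridden:
echo "${RESTIC_CACHE_DIR:-$HOME/Library/Caches/restic}"

# restic can also list its cache directories itself:
restic cache
```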

How is the CPU / memory load on the system? For me, restic only needs about 8–10 minutes to scan 1.5 million files on macOS (on an SSD) while occupying a single CPU core.

restic is able to reuse (most) already uploaded data even if a backup was canceled without creating a snapshot.


I’m using the latest restic version available. I have not finished my first backup yet.

This is an iMac 27’’ late 2012 with 24 GB of memory, if I remember correctly. CPU usage is around 15% and there are about 15 GB of free RAM.

The main problem is that restic is not reusing cached metadata, at least not when I interrupt the backup and start it again. It takes hours to continue from the last file.

Last update:

I’ve finished the first snapshot. I added one file and created another snapshot. It took 38 seconds to create the new snapshot… so all good here, no issues. It seems restic just cannot make full use of the cached metadata while finishing an interrupted first snapshot.

restic is only able to check for changes since the last snapshot once the first snapshot has finished. That is, when restarting a backup before it has ever completed, restic has to scan every file again. It won’t have to upload everything again, but it still has to open and read every file.

There’s an experimental PR that can probably help here: backup: Add resuming from aborted backups by aawsome · Pull Request #3230 · restic/restic · GitHub