Restic on Windows taking long time on unchanged folder

First time using restic; I'm just trying it out, so please bear with me. I'm using the latest stable version, 0.9.5, 64-bit, on Windows 10.

I have a hierarchy of ~200k small files.
The first backup took 35 mins:
“processed 265958 files, 17.595 GiB in 35:49”

When I repeated the same command, without any file changed, it took about 11 minutes, see below.
I would have expected that once the hierarchy is scanned and the index is loaded, restic would simply compare one against the other, see that they are identical, and stop right there, after the ~20 seconds it took to scan the directory and load the index. Why does it then start going through and reading the files from the hierarchy ("start backup")?
Thanks!

J:\restic_backups>restic -r repo1 --verbose --password-file J:\restic_backups\p1.txt backup e:/me
open repository
repository c5d5168c opened successfully, password is correct
lock repository
load index files
using parent snapshot 41e8514a
start scan on [e:/me]
start backup on [e:/me]
scan finished in 21.507s: 265960 files, 17.595 GiB
error: open \\?\e:\me\1\TFS_Workspace.Me2\FirstTeamProject\Backup\Online backup\.vs\ConsoleBackupWithS3\v15\Server\sqlite3\db.lock: The process cannot access the file because it is being used by another process.
error: open \\?\e:\me\1\TFS_Workspace.Me2\FirstTeamProject\TextGeneral\.vs\TextGeneral\v15\Browse.VC.opendb: The process cannot access the file because it is being used by another process.

Files: 0 new, 0 changed, 265958 unmodified
Dirs: 0 new, 0 changed, 1 unmodified
Data Blobs: 0 new
Tree Blobs: 0 new
Added to the repo: 0 B

processed 265958 files, 17.595 GiB in 10:59
snapshot 81195af7 saved

That’s what it’s doing, but it has to check the metadata of every file against the trees in the repository.
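Conceptually, for each file it's doing something like this (a simplified Go sketch, not restic's actual code; the type and field names here are made up for illustration):

package main

import (
	"fmt"
	"os"
	"time"
)

// nodeMeta stands in for the metadata restic records per file in a
// snapshot's tree (the real node stores more: mode, inode, etc.).
type nodeMeta struct {
	size    int64
	modTime time.Time
}

// unchanged reports whether the file on disk still matches the node
// from the parent snapshot. If it does, restic reuses the existing
// data blobs and never reads the file's content.
func unchanged(parent nodeMeta, fi os.FileInfo) bool {
	return fi.Size() == parent.size && fi.ModTime().Equal(parent.modTime)
}

func main() {
	fi, err := os.Stat("example.txt")
	if err != nil {
		fmt.Println(err)
		return
	}
	parent := nodeMeta{size: fi.Size(), modTime: fi.ModTime()}
	fmt.Println("unchanged:", unchanged(parent, fi))
}

So even for an unchanged hierarchy, every file's metadata still has to be stat'ed on disk and looked up against the parent snapshot's trees.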

Having said that, 11 minutes to process 266k unchanged files does seem high. There may be other factors at play: is this an external drive? HDD, SSD, or network drive? Does the underlying disk show any problems in its SMART report?

Thank you for the answer.
So my understanding of the process is:

  1. The “scanning” step reads the metadata (file dates, sizes, and so on) of all files in the hierarchy to be backed up into RAM.

  2. The “load index files” step reads the metadata of the files that already exist in the repo into RAM as well.

  3. With both of the above in RAM, it’s a matter of comparing the two sets against each other, as you described: each element of the first set is looked up and matched against the second set. That would all happen in RAM, so only CPU speed and RAM access time should affect performance. Since the “scanning” step is already done, there should be no reason to touch the disk again, unless it turns out a file has changed and actually needs to be backed up.

From restic’s output, the scanning step finishes in about 22 seconds, so everything after that should just be the matching algorithm running.
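In other words, I'd expect the matching to be a plain hash-map lookup per file, something like this rough Go sketch of mine (purely illustrative, not restic's code; paths and values are invented):

package main

import (
	"fmt"
	"time"
)

// meta is the per-file metadata I imagine both sets holding.
type meta struct {
	size    int64
	modTime time.Time
}

func main() {
	// Set 1: what the 22-second scan of e:/me would produce.
	scanned := map[string]meta{
		"e:/me/a.txt": {size: 1024, modTime: time.Unix(1560000000, 0)},
	}
	// Set 2: what the parent snapshot's trees describe.
	parent := map[string]meta{
		"e:/me/a.txt": {size: 1024, modTime: time.Unix(1560000000, 0)},
	}

	changed := 0
	for path, m := range scanned {
		p, ok := parent[path]
		if !ok || p.size != m.size || !p.modTime.Equal(m.modTime) {
			changed++ // only these files would need to be read from disk
		}
	}
	fmt.Printf("%d of %d files changed\n", changed, len(scanned))
}

A pure in-memory comparison like that should take seconds, not minutes, for 266k entries.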

For me, the folder e:/Me to be backed up is on an internal SATA 3 SSD that shows as healthy in SMART. The destination is a local spinning hard drive that sustains about 150 MB/s (though the destination drive should be mostly irrelevant, since almost nothing is being written).
I have an Acronis installation from 2014 that does an incremental backup, under the same conditions, in about 7 minutes, and so does borg, which I just tried: both figure out that the hierarchy hasn't changed in roughly 7 minutes.

So I guess this must be an inefficiency in how the matching of the two sets in memory is implemented in restic? I'm not sure whether it has to do with the Go language it's written in (borg says its performance-sensitive routines are implemented in C/Cython rather than the Python used for the rest).
It's a bit of a downer that restic is so slow for no apparent reason.
If there's anything I could do differently, I would appreciate some pointers.
Thanks for any input!

Could those errors be the cause of the delay? Have you tried excluding those files to see if it helps?
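Something like this should skip them (the exclude patterns here are my guess at matching those two files; adjust to your paths):

restic -r repo1 --verbose --password-file J:\restic_backups\p1.txt backup e:/me --exclude "**/db.lock" --exclude "**/*.opendb"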

Hi - thanks for asking. The errors are not the cause; restic doesn't get stuck on them. I can tell because it keeps chugging along, updating the display with the file it's currently working on, and the time spent is evenly distributed across the whole hierarchy. It's not a delay in the sense of being stuck; it just seems to be inefficient as it works its way through the file metadata.
I also tried without those lock files present (they were there because I had Visual Studio open), and I get the same time results, without any access errors.

This sounds like the underlying storage has poor seek performance.

Note that if you have the backup repository on the same physical storage as the data being backed up, there’s going to be significant thrashing as the read head moves between the repository and the source data.