From what I understand, restic detects modifications in a way that is similar to what rsync does - it looks at file metadata like the mtime.
I now wonder whether it would help restic if any file modification always caused all parent directories (well, all directories that can be touched with the current user permissions) to have their mtime updated. Restic would then only have to scan a few top-level directories, which would be very quick. I have drives with 4.5 million files on them, so avoiding redundant scans is important.
Not at the moment, at least. Restic always visits every directory, independent of the timestamps attached to it. Such an optimization would only make sense if the filesystem guaranteed that the mtime of parent directories is updated recursively. However, I'm not aware of any classical filesystem that implements that feature.
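The non-recursive behavior is easy to observe: on POSIX filesystems, creating a file updates the mtime of its immediate parent directory only, and the change does not propagate further up the tree. A small sketch (paths are temporary, created just for the demonstration):

```python
import os
import tempfile
import time

# Create a small tree: root/sub
root = tempfile.mkdtemp()
child = os.path.join(root, "sub")
os.mkdir(child)

# Pin both directories to a known old mtime so any change is unambiguous.
old = time.time() - 3600
os.utime(root, (old, old))
os.utime(child, (old, old))

# Creating a file touches the immediate parent's mtime...
with open(os.path.join(child, "newfile"), "w") as f:
    f.write("data")

print(os.path.getmtime(child) > old)          # immediate parent: mtime updated
print(abs(os.path.getmtime(root) - old) < 1)  # grandparent: mtime unchanged
```

This is exactly why restic cannot prune whole subtrees based on a directory's mtime alone: an unchanged mtime on /home says nothing about files three levels below it.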
You can take a look at restic backup --no-scan, which disables the backup size estimation and thereby eliminates one filesystem traversal.
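For example (repository path and backup target are placeholders; adjust to your setup):

```shell
# Skip the initial size-estimation pass; the backup itself still
# traverses the tree once to detect changes via metadata.
restic -r /srv/restic-repo backup --no-scan /data
```

The only visible difference is that the progress display can no longer show a total/ETA, since the overall size is not known up front.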
if any file modifications always caused all parent directories (well, all directories that can be touched with the current user permissions) to have their mtime updated
how would this be achieved? Regardless of how it is implemented (external tool or filesystem feature), it would cause a massive number of writes to the disk, and the higher-level directories (especially /home/$USER) would be written to constantly. I am positive you would be creating more load on the system, and on a continuous basis, than whatever you save by sparing restic the scan of your files.
in any case, if you somehow had such a feature and it told you which top-level directories should be scanned, you could create an exclude file with negative matching (it uses the ! character; the docs have an example), and that should do what you want as far as restic is concerned
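A hypothetical exclude file along those lines (the paths are made up; patterns prefixed with ! re-include entries that an earlier pattern excluded):

```
# skip everything under /data ...
/data/*
# ... except the directories known to have changed
!/data/projects
!/data/photos/2024
```

Passed via restic backup --exclude-file=excludes.txt, this would limit the backup to the re-included directories, though restic itself still walks the tree to apply the patterns.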