Most efficient way to scan large directory

I have a directory on an HDD that I am backing up. It has lots of subdirectories (around 4000), but each directory is relatively small.
What is the most efficient way to back it up? (I already run it with --no-scan.)
I read about the file change detection, but it only applies to files.


Is there an actual problem you need to solve with this 4000-directory backup? It does not sound like anything restic cannot handle without any special steps.

I am asking because one of my backups has 200k directories and runs smoothly with pretty much all default options.


Yes, I forgot to mention it is an HDD, so it takes hours to complete.

I am afraid you have to bite the bullet then. Hopefully not much data changes and only the first backup will be slow.

You can experiment with the --read-concurrency flag - lower can be better for a spinning-rust disk. But I would not expect any miracles.

Yeah, the most you can do is add --read-concurrency 1 (cuts down on disk head thrashing) and --no-scan (which skips the initial tree scan and maybe saves you a few minutes). Other than that, you have to read the data to back up the data. And a slow source is, well, slow.
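For illustration, a run combining both flags could look something like this (the repo and source paths here are just placeholders):

restic -r /srv/restic-repo backup /mnt/hdd/data --read-concurrency 1 --no-scan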

Also make sure your cache and tmp directory are on a nice fast SSD, and not on that HDD.
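# example paths; put both on a fast SSD, not the source HDD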
export TMPDIR=/path/to/tmp
export RESTIC_CACHE_DIR=/path/to/tmp/restic

Or try using a ramdisk for the temp dir.
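On Linux, a minimal sketch of that would be a tmpfs mount (the mount point and size here are just examples):

# create and mount a 2G RAM-backed filesystem, then point restic's temp dir at it
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=2G tmpfs /mnt/ramdisk
export TMPDIR=/mnt/ramdisk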


But yeah, in the end, none of these tips are going to make it any faster than the 80-120 MB/s you typically see from a single HDD. Can’t make it faster than the source.