Speed issues with local server to local repo

So here’s a weird one. I have two file servers. One remote, the other local. The remote has ~1TB of data. The local one has ~1.8TB. The repo is also local.

Both the local and remote file servers have a complete snapshot in the local repo. Here’s where things get funky. The remote creates new snapshots in about 5-10 minutes. The local server? To the local repo? Takes about 6-8 hours. Every. Single. Time.

The cache is definitely being used; I can see the folder size changing. The type of data is mostly the same, and it's the same repo. The local server is definitely the better-equipped, faster machine (not just because it's local, either). Memory usage sits at 4GB out of 8GB, and CPU usage is minuscule. I have no idea what's going on. Thoughts?
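
For reference, the checks I'm describing look roughly like this (the repo and source paths are placeholders for my actual setup):

```
# Time a run with verbose output to see where it spends its time
time restic -r /mnt/backup/repo backup --verbose /srv/data

# Confirm the cache is in use and watch its size change
restic -r /mnt/backup/repo cache   # lists known cache directories
du -sh ~/.cache/restic             # default cache location on Linux
```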

I had a similar issue with a 4TB repository.
Below a certain size things were snappy, but as soon as I crossed (IIRC) 2TB, things slowed to a crawl.

I think it was caused by a large number of data files in each of the blob directories, making file access slow. A single “ls” command would take 15-30 seconds to complete.
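
If you want to check whether you're in the same boat, something like this will show it (assuming a locally mounted repo path; restic spreads its pack files across 256 subdirectories of data/):

```
# Count pack files per data/ subdirectory (00..ff).
# Tens of thousands of entries per directory is a red flag
# on filesystems that handle huge directories poorly.
for d in /mnt/backup/repo/data/*/; do
  printf '%s %s\n' "$d" "$(ls "$d" | wc -l)"
done

# Or just the grand total:
find /mnt/backup/repo/data -type f | wc -l
```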

There was a discussion in a GitHub issue about increasing the blob size, but I ended up switching to Borgbackup instead.
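
For what it's worth, newer restic releases (0.14+) did grow a knob for this: raising the minimum pack size leaves the repo with fewer, larger files. A sketch, with the repo and source paths as placeholders:

```
# Larger packs mean fewer files under data/ (value is in MiB)
restic -r /mnt/backup/repo backup --pack-size 64 /srv/data

# Existing packs only get rewritten during prune; this nudges
# an old repo toward the larger size
restic -r /mnt/backup/repo prune --repack-small
```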

I’m now thinking it might actually be because the local server’s drive is iSCSI, not a physically attached drive like the remote’s. For whatever reason, restic is much slower when the drive is not physically attached. I’d understand a little slowdown, but this is the difference between 8 minutes and 8 hours, even though the iSCSI drive is just as fast as a local drive.
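
One way to test that theory: sequential throughput over iSCSI can match a local disk while per-operation latency is an order of magnitude worse, and a backup scan is mostly lots of small metadata and read operations. A rough check, assuming ioping and fio are installed and with placeholder mount points:

```
# Per-request latency, which sequential benchmarks hide
ioping -c 20 /mnt/iscsi-volume
ioping -c 20 /mnt/local-volume

# Small random reads, closer to what a backup scan actually does
fio --name=latency-test --filename=/mnt/iscsi-volume/fio.tmp \
    --rw=randread --bs=4k --size=256M --runtime=30 --direct=1
```

If the latency numbers diverge wildly while throughput looks the same, that would explain an 8-minutes-vs-8-hours gap.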