So here’s a weird one. I have two file servers. One remote, the other local. The remote has ~1TB of data. The local one has ~1.8TB. The repo is also local.
Both the local and remote file servers have a complete snapshot in the local repo. Here’s where things get funky. The remote creates new snapshots in about 5-10 minutes. The local server? To the local repo? Takes about 6-8 hours. Every. Single. Time.
The cache is definitely being used; I can see the cache folder's size changing during the run. The type of data is mostly the same on both servers, and it's the same repo. The local server is the better-equipped, faster machine of the two (not just because it's local, either). Memory usage sits at 4GB out of 8GB, and CPU usage is minuscule. I have no idea what's going on. Thoughts?
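One thing I've been meaning to try is figuring out whether the hours are going into the filesystem walk itself (stat-ing ~1.8TB worth of files) or into actually reading and chunking file contents. A rough sketch of what I mean, using only standard tools; `DATA_ROOT` is a placeholder for the local server's data path, not anything specific to my setup:

```shell
#!/bin/sh
# Time a pure metadata walk of the backup source, with no file contents read.
# If this alone takes hours, the bottleneck is filesystem traversal
# (e.g. millions of small files, slow stat calls), not the backup tool.
DATA_ROOT=/tmp   # placeholder: replace with the actual data path

start=$(date +%s)
file_count=$(find "$DATA_ROOT" -type f 2>/dev/null | wc -l)
end=$(date +%s)

echo "scanned $file_count files in $((end - start))s"
```

If the walk is fast but the backup is still slow, that would point at content reading or chunking instead, which I could then check with something like `iostat` during a run.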