Hello everyone. I am restoring a large dataset (35 TB, mostly small files) and wondering if it's possible to speed up the process. At the moment the data appears to already be in place, but the comparison and metadata-sync phase looks like it will take a very long time, even though there is very little CPU activity.
I am using an S3-compatible backend on OCI with the --pack-size 64 and --overwrite if-newer options at the moment. I am running the restore on some very beefy systems with 80 cores and 3 TB of memory, and a file system that can handle 100K+ IOPS, so I could turn up the thread count if that were an option.
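For reference, this is roughly the command I am running; the endpoint, bucket, snapshot ID, and target path below are placeholders for my actual values:

```
# Sketch of my restore invocation; endpoint, bucket, snapshot, and target are placeholders.
restic -r s3:https://<namespace>.compat.objectstorage.<region>.oraclecloud.com/<bucket> \
    --pack-size 64 \
    restore latest \
    --target /mnt/restore \
    --overwrite if-newer
```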
Any suggestions?