No worries. I’d come across a post that talked about it, and it seemed like it might have been a hidden flag or something, but it was probably just forked code.
The main reason is that I have some large repos made up predominantly of very large ProRes video files. These files “never” get deleted (maybe in some distant future), and only more get added. They do, however, get reorganized and renamed.
Having them split into 1-8MB pieces makes using cloud storage quite slow and restrictive… I’m currently using Wasabi, but was hoping to migrate to Google Drive. Google’s Team Drives have a limit of 400,000 files per drive, and all my repos already have over 1 million files. I’m not sure whether Google My Drive has the same limitation, but I figured it would be worth getting ahead of the growth issue in general.
Forcing the chunks to 100-250MB a piece seemed like it would help a lot in my situation, although I don’t know what that would mean for the handful of smaller project/meta files that do change occasionally.
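Just as a rough back-of-the-envelope sketch of the file-count side of it (the repo size and chunk sizes here are made-up numbers for illustration, not measurements from my actual repos):

```python
# Rough sketch: how average chunk size affects the number of files a
# chunk-based repo puts on cloud storage. All sizes below are assumptions
# for illustration only.

GDRIVE_ITEM_LIMIT = 400_000              # Google shared drive item limit

repo_size_tb = 5                          # hypothetical repo size
repo_size_mb = repo_size_tb * 1024 * 1024

for avg_chunk_mb in (4, 100, 250):        # ~4MB as a typical small default
    chunks = repo_size_mb / avg_chunk_mb
    verdict = "under" if chunks < GDRIVE_ITEM_LIMIT else "over"
    print(f"{avg_chunk_mb:>4} MB chunks -> ~{chunks:,.0f} files ({verdict} the 400k limit)")
```

With numbers in that ballpark, small chunks land well over a million files, while 100-250MB chunks bring the same data down to tens of thousands.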
For me at least, it’s only the repo holding backups from my laptops where smaller chunks make sense, because the files there change often. Most of my other content is static.