Good to know!
Do you know whether, from an AWS S3 data-transfer perspective, those improvements will make “prune” transfer less data?
As per the post “Huge amount of data read from S3 backend”, prune:
- downloads every pack header to create a temporary index,
- crawls all snapshots (which means downloading every tree object reachable from any snapshot),
- downloads any blobs that are still used and sit in the same pack as an object to be deleted,
- re-uploads those blobs,
- deletes the old packs,
- and then reindexes again (downloading every pack header a second time).

If you do this frequently, the traffic adds up pretty quickly.
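To get a feel for how that adds up, here is a rough back-of-the-envelope sketch in Python. This is not restic code, and every size in it is a made-up assumption; you would substitute your own repository's numbers (e.g. from `restic stats`) to get a meaningful estimate:

```python
# Hypothetical estimate of per-prune S3 traffic based on the steps above.
# All sizes are assumed placeholders -- replace them with your repo's stats.

num_packs = 20_000            # packs in the repository (assumed)
pack_header_bytes = 4 * 1024  # per-pack header read to rebuild the index (assumed)
tree_data_bytes = 2 * 2**30   # tree objects reachable from all snapshots (assumed 2 GiB)
repacked_bytes = 5 * 2**30    # still-used blobs sharing packs with deleted data (assumed 5 GiB)

download = (
    num_packs * pack_header_bytes    # first index rebuild: every pack header
    + tree_data_bytes                # crawling all snapshot trees
    + repacked_bytes                 # reading blobs that must be kept
    + num_packs * pack_header_bytes  # reindexing: every pack header a second time
)
upload = repacked_bytes              # re-uploading the kept blobs into new packs

gib = 2**30
print(f"download per prune: {download / gib:.2f} GiB")
print(f"upload per prune:   {upload / gib:.2f} GiB")
```

With these placeholder numbers a single prune would read roughly 7 GiB and write 5 GiB, which is why running it frequently gets expensive on S3.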
Thank you very much!