When backing up multiple client machines to the same repo, duplicate detection works for the common case of identical files (for example the Windows directory on multiple client computers).
However, when I want to back up each client to its own repository (because it uses a unique, non-shared key), these cross-machine savings don't work.
What I imagine would be useful is the ability to specify a second, fallback (read-only) repo (with a different shared key, or no key) which hosts such shared files. I could then back up a template computer once and deduplicate against it. This could also be used to host extra-large files on a different storage QoS tier.
Whether this can work with a low footprint depends, I guess, on the rolling checksums: do they operate on the unencrypted files? If not, would it be possible to check each file against two keys/repos? What do you think?
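To make the question concrete, here is a minimal, hypothetical sketch (not any real backup tool's actual algorithm or API; all names and parameters are made up): content-defined chunking with a Rabin-Karp-style rolling hash over the *plaintext*, and a chunk store that deduplicates against a writable local index plus a read-only shared (fallback) index. The point is that the rolling checksum only needs plaintext access on the client side; only the chunk digests need to be comparable across the two repos.

```python
import hashlib

def chunk(data: bytes, window: int = 16, mask: int = 0x3FF,
          min_size: int = 64) -> list[bytes]:
    """Split data into content-defined chunks.

    A Rabin-Karp-style rolling hash is kept over the last `window` bytes;
    a chunk boundary is declared whenever the low bits of the hash
    (selected by `mask`) are all zero and the chunk has reached `min_size`.
    """
    BASE, MOD = 257, (1 << 31) - 1
    pow_w = pow(BASE, window, MOD)   # used to drop the byte leaving the window
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = (h * BASE + b) % MOD
        if i >= window:
            h = (h - data[i - window] * pow_w) % MOD
        if (h & mask) == 0 and i + 1 - start >= min_size:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])   # final partial chunk
    return chunks

def store(chunks: list[bytes], local_index: dict, shared_index: dict) -> int:
    """Store chunks, deduplicating against both repos.

    `local_index` is the client's own writable repo; `shared_index` is the
    read-only fallback repo (e.g. built from a template machine).
    Returns the number of bytes actually written to the local repo.
    """
    written = 0
    for c in chunks:
        digest = hashlib.sha256(c).hexdigest()
        if digest in local_index or digest in shared_index:
            continue                  # chunk already exists somewhere
        local_index[digest] = c       # new chunks only go to the writable repo
        written += len(c)
    return written
```

Under this model, a second client backing up the same data against a shared index built from the first client writes nothing new, which is the cross-machine saving described above.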
Bonus feature: a consolidator which can find those shared segments and move them to the shared repo.
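The consolidator idea could look roughly like this: scan the per-client chunk indexes, and any chunk referenced by two or more clients gets promoted to the shared repo and dropped from the client repos. This is a toy sketch over plain dicts (hypothetical names, no real tool's API):

```python
from collections import Counter

def consolidate(client_indexes: list[dict], shared_index: dict) -> int:
    """Move chunks that appear in two or more client repos into the shared repo.

    Each index maps chunk digest -> chunk bytes. Returns the number of
    chunks promoted to the shared repo.
    """
    counts = Counter(d for idx in client_indexes for d in idx)
    moved = 0
    for digest, n in counts.items():
        if n >= 2 and digest not in shared_index:
            # copy one instance into the shared repo...
            for idx in client_indexes:
                if digest in idx:
                    shared_index[digest] = idx[digest]
                    break
            # ...then drop the now-redundant client copies
            for idx in client_indexes:
                idx.pop(digest, None)
            moved += 1
    return moved
```

In a real tool this would of course have to re-encrypt the promoted chunks under the shared repo's key, which is the part my question about two keys/repos is really about.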