I’m trying to figure out the perf implications of this statement in the “restic copy” help:
> Note that this will have to read (download) and write (upload) the
> entire snapshot(s) due to the different encryption keys on the source and
> destination, and that transferred files are not re-chunked, which may break
> deduplication.
So if my source snapshot is 1TB worth of the same 1KB file repeated over and over, a `copy` command would:
- download/decrypt/expand the entire 1TB source snapshot (hopefully incrementally) to localhost
- reapply the dedup algorithm to the src data
- upload the deduplicated data to the dst repo
Is this about right?
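For concreteness, the invocation I have in mind looks roughly like this (paths are hypothetical; on restic ≥ 0.14 the source repo is given with `--from-repo`, while older versions spelled these flags `--repo2` / `--password-file2`):

```sh
# Copy all snapshots from a local source repo into a destination repo.
# -r names the destination; --from-repo names the source.
restic -r /mnt/cloud/dst-repo copy \
    --from-repo /srv/restic/src-repo \
    --from-password-file /etc/restic/src.pass
```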
So the main inefficiency is that, even though the entire src snapshot deduplicates to <<1TB, the operation still has to download the expanded version and re-dedup it. The bandwidth usage between src and localhost is 1TB, while between localhost and dst it’s only the dedup’ed size.
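For reference, the two sizes being compared here can be read straight off the source repo with `restic stats` (repo path hypothetical):

```sh
# Expanded (restore) size of the latest snapshot -- the 1TB figure:
restic -r /srv/restic/src-repo stats --mode restore-size latest

# Deduplicated size of the blobs actually stored -- the <<1TB figure:
restic -r /srv/restic/src-repo stats --mode raw-data
```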
And because the two repos don’t share any dedup information, this entire process will happen each time a copy like this takes place, right?
I’m considering using copy to replicate a local repo to the cloud, but because of these (possible) issues, it doesn’t seem like a good fit.
What if both repos share the same encryption key? Does something a bit saner happen? If the keys are the same, is it possible to avoid the expand/dedup step?
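For what it’s worth, my understanding is that cross-repo dedup for `copy` hinges on the chunker parameters rather than the encryption key: restic’s `init` command can clone those from an existing repo so both repos split files into identical chunks. A sketch (paths hypothetical):

```sh
# Initialize the destination with the same chunker parameters as the
# source; identical chunking is what lets copied data dedup against
# blobs already present in the destination.
restic -r /mnt/cloud/dst-repo init \
    --from-repo /srv/restic/src-repo \
    --copy-chunker-params
```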