I have a script that uploads nightly snapshots to B2. All told there are several terabytes of data in play, spread across a number of ‘projects’.
The nightly upload itself is much smaller, since only a small subset of that data changes on any given night.
I’ve recently moved this backup script to a new machine, and noticed that it treated every file as new and re-uploaded all of the data (several TBs), despite the cache directory living on a shared NFS mount that was available on the new machine. This surprised me.
Is this the expected behaviour? If so, is there a way to disable it?
Our datastore is a 100 TB NFS mount, and ideally the snapshot job could be run identically from any machine.