I know we have had this topic here and there:
What happens if a blob of a newly added backup produces the same checksum as a blob that already exists in the repository? Restic would incorrectly de-duplicate the new data, and on restore, incorrect data would be written (unnoticed, in the middle of some file).
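To make the failure mode concrete, here is a minimal sketch of content-addressed deduplication (this is illustrative Python, not restic's actual Go code): blobs are stored keyed by their hash, so a colliding blob would be silently dropped and the wrong data returned on restore.

```python
import hashlib

# Hypothetical in-memory blob store, keyed by SHA-256 of the content.
store = {}

def save_blob(data: bytes) -> str:
    key = hashlib.sha256(data).hexdigest()
    # If two *different* blobs ever produced the same key, the second
    # one would be silently discarded here, and every later restore
    # would return the first blob's bytes instead.
    if key not in store:
        store[key] = data
    return key

def restore_blob(key: str) -> bytes:
    return store[key]
```

Nothing in this scheme detects a collision at backup time, which is why the only certain check is comparing the restored bytes with the original.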
It all boils down to extensive discussions about probability. But to really validate a backup, it is currently necessary to restore it after completion and compare it byte-by-byte with the original data.
I would like to take this up again. Would it make sense to add a validation parameter that automates this, performing a dry-run restore-and-compare after a backup completes?
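The restore-and-compare step I have in mind could look roughly like this (a hypothetical sketch in Python; `verify_restore` and both directory parameters are my own names, not anything restic provides): walk the original source tree and compare each file byte-by-byte against its counterpart in the restored tree.

```python
import filecmp
import os

def verify_restore(source_dir: str, restored_dir: str) -> list[str]:
    """Return relative paths of files that are missing from, or differ
    byte-by-byte in, the restored directory tree."""
    mismatches = []
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, source_dir)
            dst = os.path.join(restored_dir, rel)
            # shallow=False forces an actual content comparison
            # instead of trusting size/mtime metadata.
            if not os.path.exists(dst) or not filecmp.cmp(src, dst, shallow=False):
                mismatches.append(rel)
    return mismatches
```

Any path returned by this check would pinpoint exactly which file was corrupted by a wrongly de-duplicated blob.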
In a further development step, maybe the affected file could be read again and its blobs modified in some way (perhaps simply split differently) to fix the problem.
What do you think?