Apart from a couple of posts that touch on the subject, I could not really determine whether --read-data is useful or recommended when the backend is cloud storage.
What is considered best practice here? I understand that this command could be useful for keeping bits fresh on disk and exercising the drive so bad sectors don’t become a problem (the drive would reallocate sectors via its SMART functionality).
However, cloud storage tends to be redundant on its own, so I’m not sure what to think.
I’m also confused about --read-data-subset and its recommended usage.
Let’s say I have a huge archive, and to avoid download costs (and time) I run the following command daily:

restic check --read-data-subset $(date +%j)/365 && \
restic backup
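One wrinkle I noticed while putting this together: date +%j zero-pads the day of year (e.g. 042), and in bash arithmetic a leading zero means octal, so "089" would even be an error. I’m not sure whether restic itself minds the padding, but stripping it defensively is cheap (the 10# prefix below forces base-10):

```shell
# date +%j produces a zero-padded day of year (001..366); the 10# prefix
# forces base-10 so e.g. "089" is not parsed as (invalid) octal by bash.
doy=$(date +%j)
subset="$((10#$doy))/365"
echo "$subset"
```

The resulting value (e.g. 42/365) is what I would hand to --read-data-subset.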
As long as restic snapshots latest shows my backups are up to date, I know the prerequisite restic check command has lowered the chance of my restic archive being corrupted.
But how useful is this strategy really? Since I keep adding content to the archive, I won’t have checked 100% of its content even after a full year has passed. In other words, I could still have a corrupted archive without realizing it.
What is the recommended approach here?
From my experience, restic checks are too problematic to run outside the main backup script. As the repo grows, it stays locked for longer and longer: check prevents backups from actually running, or vice versa, the backup script prevents restic check from running because of the lock, and so on.
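One idea I’ve been toying with for the lock contention (just a sketch, not tested against a real repo): funnel every restic invocation through a shared flock(1) on the host, so a second job queues until the first finishes instead of failing on restic’s repo lock. In this illustration echo/sleep stand in for the actual restic commands, and the lock/output files are throwaway temp files:

```shell
#!/usr/bin/env bash
# Sketch: serialize maintenance jobs via flock(1) so check and backup
# never contend for restic's repo lock. echo/sleep are placeholders
# for the real restic invocations.
LOCK=$(mktemp)
OUT=$(mktemp)

run_serialized() {
  (
    flock 9                        # block until the host-side lock is free
    echo "running: $1" >>"$OUT"    # real script: restic check / restic backup
    sleep 1                        # stand-in for a long restic run
  ) 9>>"$LOCK"
}

run_serialized check &             # the nightly check kicks off...
sleep 0.2
run_serialized backup              # ...backup queues instead of erroring
wait
cat "$OUT"                         # check line first, then backup
```

This doesn’t shorten the lock window, of course; it only turns "fails because locked" into "waits its turn", which may or may not be acceptable for long-running checks.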
Thanks for any feedback