StrikeLines’ post about his restore case made me think about a few things…
What might be best practice for disaster restore with restic?
Is it possible to forecast the memory usage of certain operations (particularly mount, restore, and stats)?
Are per-folder restore operations (one by one, starting with the most essential folders) significantly slower overall than a single full restore from the start? (I guess yes.)
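To make the question concrete, here is a minimal sketch of the two approaches using restic's `restore` command with its `--include` filter (the repository path and folder names are just placeholders, not from the original post):

```shell
# Variant A: restore the most essential folder first, then the rest,
# using restic's path filters (each run walks the snapshot separately)
restic -r /srv/backup/repo restore latest --target /restore --include /etc
restic -r /srv/backup/repo restore latest --target /restore --include /home

# Variant B: one full restore of the whole snapshot in a single pass
restic -r /srv/backup/repo restore latest --target /restore
```

The trade-off being asked about: Variant A gets critical data back sooner, but each invocation repeats snapshot traversal and repository index loading, so the total wall-clock time is presumably longer than Variant B.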
What is the best practice for estimating the time remaining until a restore finishes?
Backup strategy: Would it be beneficial (for restore speed) to split the data to be backed up into groups of similar importance? (I guess yes, at least when using a dedicated restic repository for each importance class, at the cost of deduplication benefits. I'm unsure whether splitting into separate snapshots would also help a bit.)
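The snapshot-splitting variant could look like the following sketch, using restic's `--tag` option at backup time and its snapshot filters at restore time (tags and paths here are hypothetical examples, not from the post):

```shell
# Back up importance classes as separate, tagged snapshots
# in the same repository (deduplication is preserved)
restic -r /srv/backup/repo backup --tag critical /etc /var/lib/app
restic -r /srv/backup/repo backup --tag bulk     /home/media

# In a disaster, restore the critical class first by filtering
# "latest" on the tag
restic -r /srv/backup/repo restore latest --tag critical --target /restore
restic -r /srv/backup/repo restore latest --tag bulk     --target /restore
```

Compared with fully separate repositories per class, this keeps deduplication across classes; whether it actually speeds up the critical restore (e.g. via smaller snapshot trees) is exactly the open question above.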
Of course, the best way to find out would be to test disaster-restore scenarios, which is always advisable but often quite time- and resource-consuming. Has anyone tried different approaches and would like to share their findings?
From a theoretical perspective: could anyone with deeper knowledge of restic's internals offer some advice?