764287 · April 4, 2019, 3:48pm · #2
Some threads I found on the forum about large datasets. Please keep in mind that restic’s performance depends on several factors: hardware, the backend, and the dataset (number of files and snapshots).
Hi!
I’m researching various backup tools and want to see how well restic scales in practice. I’m looking at a 40–60 TB project (total size, multiple clients, the largest around 10–20 TB) and want to find out what kind of hardware people have needed to handle such a load.
How big is your original data set?
How much data churn is there (i.e. how much data changes between snapshots)?
How big is the resulting Restic archive?
How long does the backup take to …
I’m thinking of using restic to back up some servers which deal with fairly large datasets.
The total amount to be backed up would be about half a petabyte, with us adding a few hundred gigabytes more each day.
The underlying storage would be an Oracle ZS4-4 appliance, serving via SFTP. Alternatively, if SFTP turned out to be a major bottleneck, I could run the REST server if that would be faster, but only if I can somehow get a Solaris 11 restic binary. Or I could mount over NFSv4, but I’d greatly pref…
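For reference, switching from SFTP to the REST backend is mostly a matter of the repository URL scheme: restic picks the backend from the `rest:` prefix. A minimal sketch, with hostname, port, and paths as placeholders:

```shell
# Sketch only: point restic at a rest-server backend instead of SFTP.
# "backup-host", port 8000, and the paths are placeholders, not real values
# from this thread.
export RESTIC_REPOSITORY="rest:http://backup-host:8000/myrepo"

# Initialize the repository once, then run backups against it.
# Guarded so the sketch is a no-op on machines without restic installed.
if command -v restic >/dev/null 2>&1; then
    restic init
    restic backup /data
fi
```

The same commands work unchanged against an SFTP repository (`sftp:user@host:/path`), which makes it easy to benchmark both backends before committing to one.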