Anyone use restic with large data sets?

I’m currently testing restic 0.9.4 with an 11TB, ~5M file data set. The backend is sftp. The backup part works fine: after the initial hassle of pushing 11TB to the remote end, subsequent sweeps only take about 1h. I noticed, though, that it can consume up to 5-6GB of RAM, but I can afford that on a 128GB RAM server.
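For reference, the setup is roughly the following (the hostname, repo path, and password file are placeholders for my actual values):

```
# one-time: initialize the repository on the sftp backend
restic -r sftp:backup@storage.example.com:/srv/restic-repo init

# nightly sweep: restic only re-reads and uploads changed files,
# which is why runs after the initial one finish in about an hour
restic -r sftp:backup@storage.example.com:/srv/restic-repo \
    --password-file /root/.restic-pass \
    backup /data
```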

The restore part looked problematic, but it seems a solution is coming soon (see: Restic 0.9.4 is still slow on restore (sftp backend)).
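In case anyone wants to reproduce, a restore from this setup looks something like this (same placeholder repo and paths as above):

```
# restore the most recent snapshot into a local target directory
restic -r sftp:backup@storage.example.com:/srv/restic-repo \
    --password-file /root/.restic-pass \
    restore latest --target /restore/data
```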

The real pain is the ‘prune’ operation: it took 6 days to complete. Since I have ssh access to the remote storage server, I’m now trying to run ‘prune’ directly on it, in the hope that direct access to the repository will be faster than going through sftp. It has been running for 28h so far and has not finished yet.
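Concretely, on the storage server itself I point restic at the repository directory instead of the sftp URL, which should cut out the network round-trips (again, the repo path and password file are placeholders):

```
# run on the storage server: a plain filesystem path as the repo,
# so prune reads/writes pack files locally instead of over sftp
restic -r /srv/restic-repo --password-file /root/.restic-pass prune
```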
