How much free space is required to restore a file?


I am interested in using restic for VM backups. Since the VM image is about 1 TB, how much free space is needed to restore the VM image from a snapshot? More precisely: in addition to the 1 TB for the restored image, how much more space is needed for temporary files during the restore?

I am asking because when restoring a 100 GB VM image file with duplicity, it turned out that I needed about 300 GB of total free space before beginning: 100 GB for the resulting file and ~180 GB for temporary files.


I think you can use `restic --no-cache dump <snapshot> <file-path-in-repo> > myfile.out` and restic will only write the file contents to stdout, with no overhead, so you can redirect it to a local file. That'd work for Linux and macOS; no idea about other OSes.

BTW, restic can easily be patched to measure this under Linux:

```diff
diff --git a/cmd/restic/cmd_dump.go b/cmd/restic/cmd_dump.go
index 4a9e57a3..c4bdea63 100644
--- a/cmd/restic/cmd_dump.go
+++ b/cmd/restic/cmd_dump.go
@@ -3,6 +3,7 @@ package main
 import (
+       "io/ioutil"
@@ -181,6 +182,13 @@ func runDump(opts DumpOptions, gopts GlobalOptions, args []string) error {
                Exitf(2, "cannot dump file: %v", err)
        }
+       pid := os.Getpid()
+       iostats, err := ioutil.ReadFile(fmt.Sprintf("/proc/%d/io", pid))
+       if err != nil {
+               panic(err)
+       }
+       fmt.Fprint(os.Stderr, string(iostats))
        return nil
```

You don't need a patch; just use the tool `pv`:

```
$ restic --no-cache dump <snapshot> <file-path-in-repo> | pv > myfile.out
```

It'll print throughput and other stats while the process is running.


Great question; it's not easy to answer precisely. If you use a metadata cache (which you should, as it speeds up the operation), you'll probably need between 2% and 15% of the whole repository size; it's hard to say for individual repos. If you don't use a cache, the operation will be slower, but it needs much less space.
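If you want a rough idea of how large the cache actually gets for your repository, you can check the cache directory's size after an operation. This sketch assumes the default Linux cache location, `~/.cache/restic`; `restic cache` lists the actual cache directories on your system:

```shell
# Rough check of how much disk space restic's metadata cache uses
# (default location on Linux; may differ on macOS/Windows).
du -sh "${HOME}/.cache/restic"
```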

I'd expect that you'll need less than 200 MB overall for temporary files.

Please let us know the result if you try it!

Oh wow, that's not great. Please be aware that restic and duplicity work in completely different ways: restic has random access to all the bits of data it needs to restore a particular version of a file, whereas duplicity needs to go through the last full backup and then all the incremental backups to restore a file (if I remember correctly).


It's a bit different though, right? From `/proc` you can get not only what restic writes to the pipe but also what the process writes elsewhere, so you can measure (with some caveats) the effect `--no-cache` has. For example, dumping a 7 MiB file:

```
% rm -rf ~/.cache/restic

% restic-dev --no-cache -v dump latest "/Game Over.mp3" > out
rchar: 7021123
wchar: 6939801
syscr: 507
syscw: 41
read_bytes: 0
write_bytes: 6938624
cancelled_write_bytes: 0

% restic-dev -v dump latest "/Game Over.mp3" > out
rchar: 7091784
wchar: 7010547
syscr: 488
syscw: 57
read_bytes: 0
write_bytes: 7032832
cancelled_write_bytes: 0
```

Perhaps not very practical from an end-user PoV :stuck_out_tongue_winking_eye:, but an interesting experiment nevertheless.
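The counters above can also be read programmatically instead of patching restic. A minimal Go sketch (Linux only; the field names come from the `/proc/<pid>/io` format, while `parseIOStats` is just an illustrative helper):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// parseIOStats turns the contents of /proc/<pid>/io
// ("rchar: 123\nwchar: 456\n...") into a map of counters.
func parseIOStats(data string) map[string]int64 {
	stats := make(map[string]int64)
	for _, line := range strings.Split(strings.TrimSpace(data), "\n") {
		parts := strings.SplitN(line, ":", 2)
		if len(parts) != 2 {
			continue
		}
		n, err := strconv.ParseInt(strings.TrimSpace(parts[1]), 10, 64)
		if err != nil {
			continue
		}
		stats[strings.TrimSpace(parts[0])] = n
	}
	return stats
}

func main() {
	// Read this process's own I/O counters (Linux only).
	data, err := os.ReadFile("/proc/self/io")
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read /proc/self/io:", err)
		os.Exit(1)
	}
	stats := parseIOStats(string(data))
	fmt.Printf("write_bytes: %d\n", stats["write_bytes"])
}
```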
