Progress bar for restore

There does not seem to be a progress bar for restore. I searched and found pull requests about this on GitHub.

Has a progress bar been coded since?

It’s a useful feature. Right now no information is printed other than one line.

It doesn’t have to be precise. Some rough values would be good.

Right now I have to calculate the sizes manually.


It’s a valid request, but in the meantime what I do is “restic stats (snapshot ID)”, then “restic restore (snapshot ID) -t /some/path”, then in another tmux pane/window run “watch -n 300 'du -d 0 -h /some/path'”.

If you’re on a Mac and want the units to match, you can install GNU coreutils (brew install coreutils) and run “watch -n 300 'gdu -d 0 -h /some/path'”. Restic reports binary (IEC) units such as GiB; that’s what gdu’s -h prints, whereas gdu’s --si would print decimal units like GB.

Don’t have it refresh too often (the -n 300 part means every 300 seconds) as it may cause excessive disk thrashing and slow down the process. :slight_smile:


Nice use of watch!

I could grep the size and divide by total expected size returned by restic stats. This prints size and percentage.
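That calculation can be scripted; here is a rough sketch (the byte total and path are placeholders you’d fill in from restic stats and your restore target; it assumes GNU du for the -b flag):

```shell
# TOTAL: expected restore size in bytes, read manually from `restic stats (snapshot ID)`.
# /some/path: the restore target. Both values are placeholders.
TOTAL=47433144320
DONE=$(du -sb /some/path | cut -f1)   # -b = apparent size in bytes (GNU du)
echo "restored $DONE of $TOTAL bytes ($((100 * DONE / TOTAL))%)"
```

Wrapped in watch -n 300, this gives a crude size-and-percentage readout without a real progress bar.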


I’ve been checking the restore target on the Linux server with the du -sh command.
But it would be helpful to get a progress bar just like backup has with --verbose.

I was looking to add a progress bar to my commands and came across this post. Is one included by default when uploading to a repository?

“restic stats (snapshot ID)”, then “restic restore (snapshot ID) -t /some/path” then in another tmux pane / window, run “watch -n 300 'du -d 0 -h /some/path'”

If I run “watch -n 300 'du -d 0 -h /some/path'” in a tmux pane, with /some/path being the folder currently being restored to, will it then show a progress bar for the files being downloaded there?

What does the -t do? I’m on an Ubuntu MATE machine.
Is the 300 in milliseconds? I see you mention excessive thrashing; what’s a fair value?

Oh, no, there’s no actual progress bar. I’m just using the watch and du commands to estimate how much has been restored. The -t is restic’s --target, the directory to restore into. The -n switch is the refresh interval in seconds (so -n 300 refreshes every five minutes), and -d 0 tells du to print only the top-level total. You could do -d 1 to show one level of subdirectories.

All this is in the man pages for both watch and du. :wink:

That said, I think rustic has progress bars? I’ve found restic to be more resilient on poor connections and am sticking with it, but it might be just what you’re looking for. Both can use restic repos. :man_shrugging:t2:


Being the rustic author: Yes, it does provide progress bars while restoring file contents:

$ rustic -r /tmp/repo restore 3dd7:src /restore/src
using no config file (/home/alex-dev/.config/rustic/rustic.toml doesn't exist)
enter repository password: 
[INFO] repository /tmp/repo: password is correct.
[INFO] using cache at /home/alex-dev/.cache/rustic/10065d9fca3cec25c005da74ca98be2d7b943be443807afe846fe5cb04b6c35e
[INFO] getting snapshot...
[00:00:00] reading index...               ████████████████████████████████████████          1/1                                                   
[00:00:00] collecting file information...
Files:  55 to restore, 0 unchanged, 0 verified, 0 to modify, 0 additional
Dirs:   8 to restore, 0 to modify, 0 additional
[INFO] total restore size: 348.6 kiB
[00:00:00] restoring file contents...     ████████████████████████████████████████ 348.64 KiB/348.64 KiB 60.27 MiB/s  (ETA 0s)                    
[00:00:00] setting metadata...
[INFO] restore done.

Just out of curiosity: Which kind of resiliency are you missing? Feel free to report a bug or continue the discussion on rustic discussions as this is a bit OT here…


Hi. Consider me a Restic rookie. Right now, I’m restoring a 44 GB file (a VM image) using:

$ restic restore latest --password-file=password --include /path/to/vm_image.vdi --target "$HOME/restore"
repository 3a4bd577 opened (version 1)
restoring <Snapshot 3f6cc21e of [/home/hb] at 2023-02-13 18:00:05.34288589 -0800 -0800 by hb@laptop> to /home/hb/restore

This has been going on for a couple of hours (and I expect it to take many more hours to download). However, it looks like restic restore allocates the 44 GB target file up front, because right after launching the restore, I saw:

ls -l /home/hb/restore/vm_image.vdi 
-rw------- 1 hb hb 44220547072 Feb 17 21:19 /home/hb/restore/vm_image.vdi

and this is still the case.
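Whether those 44 GB contain real data yet depends on how the file was allocated (restic may preallocate real blocks, or the filesystem may create a sparse file; I’m not certain which applies here). Comparing apparent size with actual block usage shows the difference; a sketch with a freshly created sparse file (GNU du):

```shell
# A sparse file shows its full apparent size in ls, while plain du only
# counts blocks actually written to disk.
truncate -s 1G /tmp/demo.img
ls -l /tmp/demo.img                  # full 1 GiB apparent size
du -h /tmp/demo.img                  # near zero: no blocks written yet
du -h --apparent-size /tmp/demo.img  # full apparent size again
```

If du reports the full size for the restore target right away, the blocks were really preallocated and this check won’t reveal progress.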

So, to add to the wishlist here: it would be nice to get per-file restore progress updates.

Related question: What happens if I interrupt the current restic restore process; will it resume where the file was interrupted, or will it start over from scratch?

No, resume is not implemented in restic; it will start over from scratch. But there is this PR: Make restore resumable by aawsome · Pull Request #3425 · restic/restic · GitHub

BTW: you see the 44 GB file immediately because restic first allocates the file before filling in its contents.

About rustic: The progress bar shows the restore progress of the contents to restore. If you only restore a single file, you see the progress of restoring that file. And the resume functionality for restore is implemented in rustic, i.e. there you wouldn’t restart from scratch but reuse all of the already existing (correct) contents.

Thanks for clarification and explanation.

So, say my computer is rebooted, for one reason or another, during a long-running restic restore; how can I tell afterward whether the restore completed before the reboot? All I have is a file with the same name and file size as the target file. It’s okay to point me to a FAQ or the docs if this is documented somewhere.
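One way to check (a sketch using the paths from the post above; it rereads the whole file on both ends, so it is slow for 44 GB) is to compare checksums, since restic dump streams a file’s content directly from the snapshot:

```shell
# If the two digests match, the restored file is complete and intact.
restic dump latest /path/to/vm_image.vdi --password-file=password | sha256sum
sha256sum "$HOME/restore/vm_image.vdi"
```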

Thanks for letting me know about rustic. Looks promising. In this case, I’m restoring from an SFTP server, which it sounds like rustic does not support.

AFAIK there is no way supported by restic to check this. Again, advertising rustic :wink:, you have two possibilities there:

  • rustic diff latest:/path/to/vm_image.vdi /home/hb/restore/vm_image.vdi would read the file on disk and tell you whether it differs from the one in the snapshot.
  • rustic restore --dry-run would also tell you whether the file is correct. Note that by default the restore command trusts the mtime of existing files and only reads local files if the mtime or size differs from the one in the snapshot. To ensure the content is actually read, there is the --verify-existing option.

Feel free to open issues if you need similar functionalities within restic.

BTW: rustic does support SFTP via the rclone backend.
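For reference, a rough setup sketch (the remote name, host, user, and repo path are all placeholders, and it assumes rustic accepts restic-style rclone: repository strings, which you should verify against your version):

```shell
# One-time rclone remote for the SFTP server (placeholders throughout):
rclone config create myserver sftp host sftp.example.com user hb
# Then point rustic at the repository through that remote:
rustic -r rclone:myserver:/srv/restic-repo snapshots
```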