Ahh, good point about the deduplication. I still think this feels too much like a hack to really use in practice, but the idea is nice.
Not sure why my backups are taking so long. I’m using a MacBook Pro from earlier this year with a 500 GB SSD. I guess I just need to work some more on my exclude files.
For now I’ve changed my schedule to back up my working dir once per hour (takes ~6 min) and the whole home folder twice per day (~1 hour). I see similar times on the other computers I back up (all macOS).
I like the idea of continuous backup, but not sure if it really fits the philosophy of restic at the moment (but I might be wrong).
I can’t help but think that if it takes 50 minutes for a full scan, there’s something very slow with your filesystem/disks?
Per-file speed will vary, from your local 400,000 IOPS SSD to your 200 IOPS hard disk to your 200 IOPS high-latency network file system. But the problem is that scan time is linear in the number of files, so there will always be a number of files that makes restic slow. If you have many millions of files, restic being ‘slow’ doesn’t mean there is something wrong with your filesystem.
But yes, it’s a bit of an issue that you don’t see your entire filesystem/tree in your snapshots; it’s much messier to restore stuff with this approach.
Very good point @askielboe @rawtaz! That’s a huge fly in my continuous-wrapper ointment.
So here is one where it scanned about a million files in 54 minutes (the scan time varies a bit with the load):
```
scan finished in 3243.814s: 1055943 files, 139.734 GiB
```
I’ve since then added a bunch more to my exclude file, which cuts the scan time down quite a bit. I think the main culprit is a lot of protobuf files, which I ignore(d) using `*.pb`. This yields scan times of ~30 min (including the other excludes I’ve added):
```
scan finished in 1789.684s: 756149 files, 93.377 GiB
```
If I instead exclude the path that contains the `.pb` files, I get a scan time of around 5 minutes:
```
scan finished in 286.896s: 757815 files, 93.381 GiB
```
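Dividing files by seconds for the three scans above makes the difference obvious (just a back-of-the-envelope calculation on the numbers quoted in this thread):

```python
# Scan throughput (files/second) for the three scans quoted above.
scans = [
    ("no extra excludes",  1_055_943, 3243.814),
    ("*.pb wildcard",        756_149, 1789.684),
    ("path-based exclude",   757_815,  286.896),
]
for label, files, seconds in scans:
    print(f"{label}: {files / seconds:,.0f} files/s")
```

The path-based exclude run scans roughly 2,600 files/s, about eight times the throughput of the wildcard run on essentially the same file count.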
So avoiding filename wildcard excludes (and extending my exclude file in general) seems to have fixed the issue for now.
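For reference, the two exclude styles look like this in a restic exclude file (passed via `--exclude-file`; the path below is a made-up example):

```
# Filename wildcard: the scanner still walks every directory and tests
# every filename against the pattern.
*.pb

# Path exclude: the whole subtree is pruned and never walked at all.
/Users/example/code/generated-protobufs
```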
Could it be an SSD stuck on the other side of a SATA controller? That would massively impact the speed. 50 minutes for a local NVMe SSD sounds slow, as normally you could read every byte of a whole 1 TB NVMe SSD in a fraction of that time.
I also find Restic very slow for incremental backups. It does a lot of disk reading given my file system is close to static. Reading every file to see if a hash has changed is really inefficient - thorough but inefficient. Looking at the file modified date and only checking files changed since the last backup could reduce backup time by orders of magnitude for rarely changing sets of files - which is probably most backups.
On EC2 servers you have a burst balance for disk use, and running a backup that reads every file in your backup set could easily use up most or all of your disk credit. That would leave your production workloads running slowly. AWS calls this the EBS burst balance.
Even on a dedicated server or home PC this is inefficient and slows down regular computer use.
I’d like to change from Borg to Restic because of a few problems with Borg, but I don’t really want to have my server or PC having to read GB or TB of data daily when I’ve usually changed about 10 files totaling about 20MB.
I really like Restic, and hope that one day I can use it as my primary backup program. For now I think I’ll continue to use it for weekly or monthly backups, but I don’t think it’s suitable for daily or more frequent backups.
Hi @tomwaldnz, restic already only checks the modified date (and not even the size) for repeat backups of the same file. But it does all the file checks linearly, one file at a time, so most of the backup time is just wasted/idle time waiting for file stat calls to return. Hence it is also much slower on high-latency filesystems.
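That check can be sketched like this (a simplified illustration in Python, not restic’s actual Go code; restic compares metadata from the parent snapshot, with mtime as the deciding signal):

```python
import os

def unchanged_since_snapshot(path: str, parent_mtime_ns: int) -> bool:
    """Decide whether a file can be skipped without reading its content.

    One stat() call per file, comparing the modify time recorded in the
    parent snapshot. No file data is read unless the mtime differs.
    """
    return os.stat(path).st_mtime_ns == parent_mtime_ns
```

With one stat per file and no content read, the cost per unchanged file is dominated by stat latency, which is why high-latency filesystems hurt so much.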
“When you backup the same directory again (maybe with new or changed files) restic will find the old snapshot in the repo and by default only reads those files that are new or have been modified since the last snapshot. This is decided based on the modify date of the file in the file system.”
The scan is done as one linear task, and the linear file stat process starts in parallel with it. But each task is based on a linear algorithm right now, so it scales linearly: it gets twice as slow if your latency is twice as high, and twice as slow if you have twice as many files.
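As a toy model (not restic’s code, just the scaling argument above made concrete):

```python
def scan_time_s(n_files: int, stat_latency_s: float) -> float:
    """Toy model: a sequential scan issues one stat per file,
    so the total time is simply files x per-stat latency."""
    return n_files * stat_latency_s

# ~3 ms per stat over a million files lands in the ballpark of the
# 54-minute (3244 s) scan quoted earlier in the thread.
base = scan_time_s(1_000_000, 0.003)
twice_files = scan_time_s(2_000_000, 0.003)    # twice the files
twice_latency = scan_time_s(1_000_000, 0.006)  # twice the latency
```

Both doubling the file count and doubling the latency double the scan time, which matches the behavior people report on large trees and on network filesystems.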
The slowness is because restic usually can’t utilize all the available filesystem bandwidth and/or network bandwidth. The new restore implementation will make that no longer true for restores, which will be better than linear in the next release.
Thanks @whereisaaron, based on your information I’ve done some research and worked out why I was getting excessive disk access. It turns out my virus scanner, Avira, was scanning every file that restic wanted to back up, even if it was unchanged. I guess that’s either a bug or a feature. Once I disabled that, it was quite fast.