We’ve just merged PR #1040 which adds a local metadata cache! I’d really like to get your feedback on it.
To use the cache, you just need to build restic from the master branch (install Go >= 1.8, check out the repo, and run go run build.go) and run that version of restic on a repo. The index and snapshot files will be cached automatically, so you should see a much better startup time the second time you run it.
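For reference, the whole thing boils down to something like this (the clone location is just what I use, and at least for me the freshly built binary ends up next to the sources):

git clone https://github.com/restic/restic
cd restic
go run build.go        # builds a ./restic binary in the source directory
./restic version       # make sure you’re running the freshly built binary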
Historically, the files below data/ in the repo could contain data, metadata, or both. We’ve since changed the behavior so that new files contain either data or metadata, but never both at the same time. For the cache to be most effective, the repo needs to be cleaned of those mixed files; the next run of restic prune will do that for you.
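In other words, once you’re on the new code, a run like this will repack those old mixed files as a side effect (the repository path is just a placeholder):

restic -r /srv/restic-repo prune   # /srv/restic-repo is a placeholder, use your own repo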
I’d like to point out that the bug is completely benign, so don’t be afraid to run go run build.go and check how fast restic can really be.
The cache support is the next big thing in restic development, easily bringing a 100x-200x speedup to the slowest commands like check and prune. And all that for a measly 2-5% of extra disk usage; who can complain?
I’ve been running the PR’s WIP “DO NOT USE” code for the last two months, and I’m really happy with the results. What used to take hours now takes minutes. Both prune and check now run every night after the backup, and it’s all over long before I even wake up. Just like it’s supposed to be.
I haven’t even gotten to the “second time you run it” part yet; I’m still doing my first backup with the new code, and it’s so much nicer on my poor Internet connection. This backup looks like it’ll complete in about 30 minutes, rather than the 6-7 hours it usually takes.
Am I imagining things, or has the code been streamlined to make fewer uploads even for a cold/empty cache? I have an asymmetric Internet connection that usually performs very poorly during remote restic backups.
Load indexes
Check all packs
pack 253079238db54b1d056ceb4603681ca213045bf80bf8843730addefa77341279: does not exist
pack 8c11c637452591773e4d5e928bcf3a180c14412f58a16aada5868a29632c1f1e: does not exist
followed by lots and lots of “pack not referenced in any index”, and finally:
pack ff4973f804cbee7ac55cd16eafaec8eea04d826972c79bdc6de1bfb56e7081d7: not referenced in any index
pack ff95e710c578bd84f60ba64f28a44fd7f60e243dbedc6ab8d7e3209271dfc65e: not referenced in any index
Check snapshots, trees and blobs
Fatal: repository contains errors
Is prune or rebuild-index the proper thing to run here?
Update: I suspect this probably has nothing to do with the local-cache code. I had to ^C out of a backup the other night when it was completely saturating my network. Pretty sure I caused this a few days ago, and am just detecting it now.
Will this also speed up prune itself?
I have a repository with 30 snapshots where “restic forget + restic prune” takes a really long time.
Most of the time is spent during:
find data that is still in use for 30 snapshots
[27:46] 60.00% 18 / 30 snapshots
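For context, the invocation I’m timing is roughly the following (repository path and retention policy are placeholders; mine differ slightly):

restic -r /srv/restic-repo forget --keep-last 30   # placeholder repo path and policy
restic -r /srv/restic-repo prune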
That sounds very tempting.
Before I go ahead and compile restic from git: is there a way to go back to an older version once I’ve run “restic prune” with the new version?
I.e., if I run into problems, can I still use 0.7.3 with the “updated” repo?
So, I’ve successfully completed the prune run for my repository. I back up about 80G of data, and the repository currently holds 30 snapshots, which amount to 257G. ~/.cache/restic is 5.6G. I find that rather a lot.
Is it to be expected that the cache takes up this much space?
It’s not unexpected. 5.6GiB of 257GiB means about 2% is metadata, which is what other users have observed too. The (internal) cause is that we use JSON for metadata (introduced mainly for ease of debugging) and don’t compress the JSON documents. Once we add compression (or change the serialization format to something else), this will get better.
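If you want a rough feel for how much compression would buy, one quick (and admittedly crude) check is to compare the raw and gzip-compressed size of a single index file; the repository path and index ID below are placeholders:

restic -r /srv/restic-repo list index                              # pick any ID from this list
restic -r /srv/restic-repo cat index <index-id> | wc -c            # raw JSON size in bytes
restic -r /srv/restic-repo cat index <index-id> | gzip -9 | wc -c  # compressed size in bytes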
Something else to check: does the filesystem you are backing up have atime or relatime turned on? If so, you’re saving more metadata, because when a file’s atime has changed, restic will re-write the metadata for the containing directory.
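If you’re not sure what your mount uses, something like this will tell you (the mount point is a placeholder), and remounting with noatime is one way to avoid the churn if you don’t need access times:

findmnt -no OPTIONS /home                # /home is a placeholder mount point
sudo mount -o remount,noatime /home      # optional: disable atime updates entirely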
Wow, atime changes rewrite metadata? That’s new and a bit… unexpected. Do we then restore the original atime when restoring files? If yes, then very cool. If not, I’d vote to ditch atime from the metadata, ’cause it’s more trouble than it’s worth.
I have relatime on all my filesystems, and I like it that way. And of course, by design, atime will change constantly as files are accessed. Unless it can really be restored, I’d rather the backup be more efficient. On the other hand, if restore can also bring back atime, that’s a really, really cool feature.
Will it clean up cache dirs from stale backups after a while?
It seems like $HOME/.cache/restic is going to collect a cache dir for each experiment I run with restic. Ideally, the code would look at all the caches in that directory and delete any that haven’t been updated in, say, a month.
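Until restic grows that logic itself, something as simple as this would cover my use case (the cache location is the default, and the 30-day cutoff is my own choice; note that a directory’s mtime may not track every use, so treat it as a rough sketch):

# list cache directories untouched for 30+ days; swap -print for a delete once you trust the output
find "$HOME/.cache/restic" -mindepth 1 -maxdepth 1 -type d -mtime +30 -print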