Rustic: equivalent of `copy`


As restic prune (and only that) needs 5 to 10 minutes just to open the repository, during which it sits on a single core, I started playing with rustic.

I have already re-implemented my backup script using it, and it works well in principle; the only thing I have not yet found a solution for is moving my local snapshots to the off-site storage.

With restic I just used restic copy to get data from one repository to the other. But how can I do that with rustic?

Thanks @NobbZ for considering rustic!
I think your question is better suited in the rustic discussions.

That said, you can use restic and rustic on the same repository, e.g. restic copy and rustic prune. But I think we can also discuss this better in the rustic discussions - here it’s a bit off-topic.

I haven’t seen the discussions, perhaps link them more prominently in the readme?

Anyway, I will open a thread there.

If you tell us a bit more about your repository (e.g. the statistics printed by prune and the size of the index folder) then we can probably speed up the prune command in restic.

  1. I have to correct myself: it is not “opening the repo” that is slow, it is “loading indexes”
  2. After I was able to actually complete a full run using rustic (which repacked ~150 GiB to remove a couple of MiB), the problem seems to be gone
  3. (Output realigned for readability)
    # du -s *
            4 config
    432394992 data
       141280 index
            8 keys
            8 locks
          376 snapshots
  4. The deletions are probably from the last run using rustic
    loading indexes...
    loading all snapshots...
    finding data that is still in use for 63 snapshots
    [4:37] 100.00%  63 / 63 snapshots
    searching used packs...
    collecting packs for deletion and repacking
    [0:00] 100.00%  15303 / 15303 packs processed
    to repack:             0 blobs / 0 B
    this removes:          0 blobs / 0 B
    to delete:             0 blobs / 1.517 GiB
    total prune:           0 blobs / 1.517 GiB
    remaining:       2798618 blobs / 410.663 GiB
    unused size after prune: 0 B (0.00% of remaining size)
    deleting unreferenced packs
    [0:02] 100.00%  50 / 50 files deleted

Thanks in advance!

I will probably stick with rustic anyway, due to the lockless prune. But restic served me well for a good while, and I am still happy to help figure out issues and solve them.

Just a few comments (most of them are, however, rustic-specific):

  • rustic allows you to fine-tune the (targeted) pack size much more than restic does (e.g. depending on the repo size), and prune repacks packs which are too small or (optionally) too large. See rustic config --help for more info. For a ~400 GiB repo, the default would be a targeted data pack size of ~52 MiB, and prune would repack packs smaller than ~15 MiB. This might explain your large number of packs to repack.
  • A pruned repository can lead to a significant index-load speedup, especially as small index files are combined into larger ones (this holds for restic prune as well as rustic prune). Larger pack files also lead to slightly smaller index files and may further improve index-load speed.
    To see information about the number and size of repository files, du is not enough. rustic provides the rustic repoinfo command; for restic see Add new command `repoinfo` by aawsome · Pull Request #2543 · restic/restic · GitHub
  • It seems you ran rustic prune and afterwards restic prune on the repository. This is not much of a problem (beware of parallel running backup jobs, though!), but restic doesn’t recognize the packs marked for removal by rustic’s two-phase pruning. restic just sees them as “unreferenced packs” and removes them immediately (hence the warning about parallel running backup jobs) - maybe this is what you are seeing here.
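A minimal sketch of the repack-size logic described above, in Python. The thresholds (~52 MiB target, ~15 MiB lower bound) are the values quoted in this thread for a ~400 GiB repo; the upper bound is a made-up placeholder, not rustic’s actual formula:

```python
# Illustrative thresholds (values quoted in this thread for a ~400 GiB repo).
TARGET_PACK_SIZE = 52 * 1024 * 1024   # targeted data pack size, ~52 MiB
MIN_PACK_SIZE = 15 * 1024 * 1024      # packs smaller than this get repacked
MAX_PACK_SIZE = 4 * TARGET_PACK_SIZE  # hypothetical upper bound (assumption)

def should_repack(pack_size: int, repack_oversized: bool = False) -> bool:
    """Return True if a pack of `pack_size` bytes would be repacked."""
    if pack_size < MIN_PACK_SIZE:
        return True  # too small: combine into larger packs
    if repack_oversized and pack_size > MAX_PACK_SIZE:
        return True  # optionally repack packs that are too large
    return False

packs = [5 * 1024 * 1024, 20 * 1024 * 1024, 300 * 1024 * 1024]
print([should_repack(p) for p in packs])  # [True, False, False]
```

A repo written before pack-size tuning existed contains many small packs, so the first prune with such settings repacks a large fraction of the data, which matches the ~150 GiB repack observed above.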

Doesn’t that mean that the index still contains references (only used by rustic) to these pack files? It’s probably a good idea to repair the index…

The slow index loading might be due to Piping mysqldump output to restic does not work anymore in 0.14.0 · Issue #3916 · restic/restic · GitHub which will be fixed in the next release.

The index only needs to be repaired if you think that restic prune didn’t leave a valid index. (It rewrites the complete index and afterwards removes all previously existing index files.)

rustic does indeed save the pack files marked for deletion in an additional packs_to_delete section in the index files. Moreover, it saves extra information for each pack file, such as a timestamp (creation time, or the time when it was marked for deletion) in an additional time field. As restic ignores additional JSON fields, it only sees the kept pack files in the index, plus extra pack files (the marked ones) which appear to be unreferenced. If a backup job had been running in parallel to the rustic prune run, these pack files could potentially contain data needed by that just-generated snapshot. But as the subsequent restic prune run did not report any missing pack files, that was obviously not the case. So deleting these pack files is no problem and is what a future rustic prune run would have done anyway.
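To illustrate why restic sees the marked packs as unreferenced, here is a small Python sketch using a simplified, hypothetical index layout (not the exact restic/rustic schema; the pack IDs are placeholders):

```python
import json

# Simplified index file: rustic keeps packs marked for deletion in an extra
# "packs_to_delete" section and adds a per-pack "time" field.
index_json = json.dumps({
    "packs": [
        {"id": "pack-a", "time": "2022-11-01T10:00:00Z"},
    ],
    "packs_to_delete": [
        {"id": "pack-b", "time": "2022-11-02T12:00:00Z"},
    ],
})

index = json.loads(index_json)

# A restic-style reader only looks at "packs"; unknown JSON fields such as
# "packs_to_delete" and "time" are silently ignored ...
referenced = {p["id"] for p in index["packs"]}

# ... so a pack listed only under "packs_to_delete" looks unreferenced and
# gets removed immediately as an "unreferenced pack" by restic prune.
on_disk = {"pack-a", "pack-b"}
unreferenced = on_disk - referenced
print(sorted(unreferenced))  # ['pack-b']
```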

Problematic combinations are running rustic backup (which does not create lock files) in parallel with restic prune (which immediately removes packs), or any backup job in parallel with rustic prune --instant-delete (which immediately removes packs and doesn’t read lock files). All other combinations are safe, IMO.
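My reading of those rules as a small sketch (a hypothetical helper summarizing this post, not an official compatibility matrix):

```python
def is_safe(backup_tool: str, prune_cmd: str) -> bool:
    """Whether a backup job may run in parallel with a prune run,
    per the rules stated above."""
    # rustic backup creates no lock files, and restic prune removes packs
    # immediately, so this pairing can delete data a backup still needs.
    if backup_tool == "rustic backup" and prune_cmd == "restic prune":
        return False
    # rustic prune --instant-delete removes packs immediately and reads no
    # lock files, so no backup job at all may run in parallel with it.
    if prune_cmd == "rustic prune --instant-delete":
        return False
    return True  # all other combinations are safe (per the post above)

for b in ("restic backup", "rustic backup"):
    for p in ("restic prune", "rustic prune", "rustic prune --instant-delete"):
        print(f"{b} || {p}: {'safe' if is_safe(b, p) else 'UNSAFE'}")
```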

Side remark: rustic prune has a --keep-pack feature which allows keeping packs for at least a given period (useful e.g. if you pay for a minimum hold time). This feature also uses the time field, so mixed rustic/restic runs prevent it from working, as restic doesn’t save this information.

“rebuilding index” was not printed and therefore the index was not rebuilt. Thus the information about marked pack files is still in the index, but the pack files no longer exist. But yes, that is probably not an actual problem, as long as these marked pack files are not still in use somewhere.
