Status of the compression feature

Hello restic contributors/community,

I would like to know what the current status of the compression feature is. I'm creating this post so as not to add more comments to GitHub issue #21, since @fd0 asked us not to do so.

Unfortunately the Status-Roadmap wiki has not been updated since 18 October 2018, and the comments on the issue are quite old (except for the last four). In the near future I would like to use restic as a replacement for an old Obnam setup, and it would be beneficial to use a version with a compression feature, or at least to know that one is under development. So it would be nice to get a little update on the status of this feature.

I’m not asking for an explicit deadline or anything of that sort, just for a status such as, for example, “currently being worked on” or “coming soon”.

Thank you for your answers.


AFAIK, development of a compression feature hasn’t started yet (at least not in the official GitHub repository).

You should try restic out before you make compression a requirement. Not all data benefits from compression.

Some time ago I extensively tested and compared Borg Backup and restic. It turned out that the repository sizes of restic and Borg Backup were about the same for most of my data sets, even though compression was enabled for Borg Backup.


Thanks for your answer.
I already compared restic backups with the backups created via Obnam and BorgBackup. The restic repository was about 400 MB, while the Obnam and BorgBackup repositories (with compression) were about 115 MB. So in my use case compression would be beneficial.
But I will think about it anyway.

Interesting. Why would that be?

Many common file types (e.g. most image/audio/video files, MS Office documents, some PDFs…) are already compressed, and can hardly be compressed further.
Typically compression only makes a big difference if you have tons of pure text files.
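This difference is easy to demonstrate. The sketch below (using Python's standard `zlib`, with random bytes standing in for already-compressed data like JPEGs or MP4s) shows how well redundant text compresses compared to high-entropy data:

```python
import os
import zlib

# Highly repetitive text compresses very well...
text = ("The quick brown fox jumps over the lazy dog. " * 1000).encode()

# ...while random bytes stand in for already-compressed data
# (images, video, zipped archives), which has almost no
# redundancy left for a compressor to exploit.
random_data = os.urandom(len(text))

text_ratio = len(zlib.compress(text)) / len(text)
random_ratio = len(zlib.compress(random_data)) / len(random_data)

print(f"text:   {text_ratio:.3f}")   # far below 1.0
print(f"random: {random_ratio:.3f}")  # roughly 1.0 or slightly above
```

For incompressible input, the compressed output can even end up marginally larger than the original because of format overhead, which is why compressing media files rarely pays off.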


Having tons of text (files) is part of my use case.


I also use borg and gain a lot with compression (mostly code projects).
I’d like to know what makes it so hard to add compression?

From what I understand, changing the repository format is more problematic than implementing the compression itself.
With that in mind, I fully understand why this change takes some time. Nobody wants their repositories and backups broken or incompatible.


For now dedup works pretty well.

For the record, I migrated my old BackupPC archives to restic (see for details). BackupPC uses both compression and deduplication. I have about 47 TB of raw data stored in 1.5 TB of effective data with BackupPC. It takes something like 1.7 TB with restic.

This is good enough for me.


Use case I care about/need to implement:
Having a dev environment with GitLab / Jenkins and so on. While GitLab has its own backup feature, which uses compression, Jenkins and other dev tools/sites do not, so making some kind of tar.gz greatly reduces the size.
This is important for offsite backup, both for transfer speed (i.e. finish backup faster) and space used.

Has anyone tried creating a .tar.gz or similar of the entire directory being backed up, and then backing up only the compressed file instead of the original directory? It seems that gzip can be deterministic in terms of hashes, and from my testing it doesn’t change the mtime of individual files either. From my understanding of how restic works, this might mean that restic is able to recognize the deterministic similarities between two compressed files with an identical name and avoid duplicating identical data?
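One caveat worth checking before relying on this: the gzip file format stores a 4-byte modification time in its header, so gzip output is only byte-identical across runs when that field is pinned (e.g. with `gzip -n` on the command line, or an explicit `mtime` in Python's `gzip` module). A minimal sketch of the effect:

```python
import gzip

data = b"some source file contents\n" * 100

# With a fixed mtime in the gzip header, output is deterministic:
a = gzip.compress(data, mtime=0)
b = gzip.compress(data, mtime=0)
assert a == b

# But the same input compressed with different mtime values yields
# different archives, even though the payload is identical -- which
# would defeat content-based deduplication of the resulting file.
c = gzip.compress(data, mtime=1)
d = gzip.compress(data, mtime=2)
assert c != d
```

Also worth keeping in mind: because DEFLATE output depends on all preceding input, a change to one file early in a .tar.gz alters every compressed byte after it, so deduplication between successive versions of the archive can suffer even when most of the underlying files are unchanged.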