Time for v1.0? With the CERN article and my own experience, I would say restic is production ready. Plus it makes it easier to sell to those-who-must-be-obeyed.
I’ll be a downer here and say that it’s absolutely not ready for v1.0 unless we want to concede that big backups just aren’t part of scope. There are a number of really bad performance problems that come up that make big backups kind of impossible or at least miserable.
Everything from huge memory consumption (tens of GB) to heavy CPU usage. We just back up our DB with restic (one big dump file a few times a week), and prune is so slow we don’t even bother with it. It’s a real bummer.
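For reference, the workflow looks roughly like this (a sketch only; the paths, tag, and retention values are made up, and it assumes the repository and password are configured via the usual environment variables):

```shell
#!/bin/sh
# Assumes RESTIC_REPOSITORY and RESTIC_PASSWORD_FILE are set.

# Back up the dump file a few times a week (e.g. from cron).
restic backup /var/backups/db/dump.sql.gz --tag db-dump

# Expire old snapshots; --prune makes restic repack and delete
# unreferenced data in the same run. This is the step that is
# painfully slow for us on a big repository.
restic forget --tag db-dump --keep-weekly 8 --prune
```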
If we want to say these are out of scope and document accordingly (I’m not opposed to this if it’s realistic), seems like v1 isn’t crazy.
I agree, it’s not 1.0 for a while. We’re still slowly working on various things that are needed before you can start thinking about 1.0.
When you say big backups, I’m curious: how big is big?
I’m also curious about the size of the DB dump, and whether the big backup includes anything else or just the database.
Thanks!
I’d define big as tens of TB. This is a realistic size for engineering, video editing, or data processing firms. Many of these are small businesses that likely don’t have in-house experts on data backup, and will likely come looking to Restic as a simple solution.
Restic is absolutely not ready for prime time when it comes to restoring large repositories. Even with PR2195, restic will slow down to a crawl as it nears the end of a large restore process. For us, the result was hundreds of corrupted files, still incompletely restored after weeks of running restic restore on a 30 TB backup.
Hi,
I am wondering whether that issue is still present, or whether it was perhaps fixed in v0.14.0.
That release adds compression, which should at least mean you can back up more data before the repository in the destination reaches the same size as before.
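As I understand it, enabling compression looks roughly like this (a sketch; please check the v0.14.0 release notes and docs for the authoritative steps):

```shell
# New repositories: initialize with repository format version 2,
# which is the format that supports compression.
restic init --repository-version 2

# Existing repositories: upgrade the format in place first.
restic migrate upgrade_repo_v2

# Then back up with compression (auto is the default; max trades CPU
# for smaller uploads). Only newly written data is compressed;
# "restic prune --repack-uncompressed" can compress existing data.
restic backup --compression max /data
```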
Also, I think having a version number greater than or equal to v1.0.0 would help with semantic versioning: right now, every change could be a breaking change as far as the version number is concerned, and from what I have learned so far, that should not be the case for restic.
Semantic Versioning 2.0.0:
Major version zero (0.y.z) is for initial development. Anything MAY change at any time. The public API SHOULD NOT be considered stable.
That SHOULD NOT doesn’t mean the API is actually unstable here. If you look at restic’s design philosophy, the main point is to not break compatibility. This is not a problem, and whether something is 1.x is not what should be the deciding factor in whether you run it. Either you trust restic or you don’t; it’s up to you. Personally, I trust it very much, and I run it in production for several clients. Even if there were a breaking change, it would not in any way interrupt your usage of restic or your ability to restore backups.
Versioning has been discussed before, not sure what else to add. Please let it happen when it happens.
Okay, I see, thanks for your reply @rawtaz.
I did not want to create any pressure, I just found that thread and thought that the main blocking concerns have been addressed.
I understand that versioning is a tricky topic and often means different things to different people. I favor restic’s way of versioning over some commercial versioning styles, or year-based schemes (e.g. Ubuntu’s 22.04 and so on).
I am currently adopting restic for my personal cloud backups and have been really happy with its stability and design concepts. Thanks for all your work on that.