I’ve been using restic successfully for a while. I have been using the Backblaze B2 API/backend and have a lot of data in my current repo in a B2 bucket. I saw this warning and I wanted to ask for a little more detail to decide if I need to switch to using the S3 API.
The way forward with B2 is to use S3. It’s well supported by B2, so there’s really no reason not to; just go for it. The contents of the repository are still the same, so you should be able to just change the repository URI, unless I’m terribly mistaken (I don’t use either of them myself). No need to convert anything.
I’ve been using Backblaze as a storage backend for years now, and when I started using it there were definitely no warnings. As @DigitalMan already asked, are there any resources on what could go wrong when using the Backblaze backend without the S3-compatibility layer?
I’ve tried to start using the S3 API on Backblaze, but couldn’t get it to work.
Previously I had restic repo specified like this: b2:BUCKET_NAME:DIRECTORY
Reading various documentation, I’ve tried to use this instead (in addition to changing the environment variable names from the Backblaze-specific ones to the AWS-specific ones): s3:s3.eu-central-003.backblazeb2.com/BUCKET_NAME/DIRECTORY
Using the change above, restic spits out an error:
Fatal: unable to open config file: Stat: Access Denied.
Is there a repository at the following location?
Now, I was unable to find the region eu-central-003 anywhere except in some old Reddit post. Is there a better way to find my bucket’s region/endpoint URL in Backblaze? I definitely have buckets in the EU region, but maybe it’s not correctly specified.
Any other ideas on how to make restic work with the S3 API on Backblaze, so as not to be affected by the aforementioned warning in the restic documentation?
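For reference, here is roughly the full change I made, as a sketch (bucket name, directory, and keys are placeholders; the endpoint is the one I guessed, which may be the wrong region):

```shell
# Old B2-backend configuration
export B2_ACCOUNT_ID="<application key ID>"
export B2_ACCOUNT_KEY="<application key>"
export RESTIC_REPOSITORY="b2:BUCKET_NAME:DIRECTORY"

# New S3-backend configuration for the same bucket.
# The endpoint hostname should match the bucket's region; the bucket
# details page in the Backblaze web UI shows the exact endpoint.
export AWS_ACCESS_KEY_ID="<application key ID>"
export AWS_SECRET_ACCESS_KEY="<application key>"
export RESTIC_REPOSITORY="s3:s3.eu-central-003.backblazeb2.com/BUCKET_NAME/DIRECTORY"

# Quick check that the repository opens with the new backend
restic snapshots
```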
I also wasn’t aware of this warning until now; I have been using restic with B2 for years and never had any problems. I have also already done restores without issues. I am sure other people here can confirm the same.
Last year I also tried using Backblaze over S3. But I didn’t like that unused data then doesn’t get deleted, only “hidden”. I found that confusing and didn’t want to add any bucket setting to delete hidden files. So I stayed with b2.
When you log into the B2 web UI, click on the lifecycle rules of the bucket you’re using. Then choose “Keep only the last version of the file” and click “Update Bucket”. You’re done; same behavior as before.
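If you prefer the command line, the same rule can also be set with the b2 CLI; a sketch, assuming the bucket name and type below (I’d double-check the exact flag syntax against the version of the b2 tool you have installed):

```shell
# "Keep only the last version" corresponds to deleting a file
# one day after it has been hidden. fileNamePrefix "" applies
# the rule to the whole bucket.
b2 update-bucket \
  --lifecycleRules '[{"daysFromHidingToDeleting": 1, "daysFromUploadingToHiding": null, "fileNamePrefix": ""}]' \
  BUCKET_NAME allPrivate
```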
This is a default safety feature of object storage of many vendors.
Thanks for sharing the article. Reading it, I’m a little confused; you write there that:
It will do the server-side copying where possible.
What does that mean in practice for the additional cost from Backblaze for the migration? Do you remember what size of bucket you migrated and how much extra cost it added to your Backblaze bill in the end?
It probably falls mostly under the most expensive class, Transactions Class C, right? I have multiple buckets for restic totalling ~7 TB and would like to understand approximately how much this migration is going to cost me before taking this road.
As I follow this conversation, it is Transactions Class C, and each file is one API call. So in total this would be approximately: cost(number_of_files) = $0.004 × (number_of_files − 2500) / 1000 (plus some other calls, which I cannot estimate, and assuming you have more than 2,500 files).
Great question. The storage cost while there are temporarily two copies of the data will likely be the bigger expense.
In my case I had a 1.5 TB repository, 276,015 files with mostly 4 MB pack size. The equation MelcomX gave yields $1.09. For the month that I did the transfer and a bunch of testing, my bill had $1.32 in class C transaction costs.
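To make that estimate reproducible, here is the thread’s formula as a one-liner (the 2,500 free calls and the $0.004 per 1,000 price are taken from the posts above, not verified against current Backblaze pricing):

```shell
# Estimated class C cost for migrating a repository of N files:
# each file is one API call, the first 2,500 calls are free,
# additional calls cost $0.004 per 1,000.
awk -v files=276015 'BEGIN { printf "%.2f\n", 0.004 * (files - 2500) / 1000 }'
# prints 1.09
```

Plugging in my 276,015 files gives the $1.09 mentioned above; substitute your own file count (e.g. from `restic stats --mode raw-data` or a bucket file count) to estimate your migration.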
I also use the b2 protocol at the moment. Backups are working, and I can do restic mount.
I can change the B2 variables to S3, and switching to the s3 backend seems simple. I have tested it with listing snapshots and mounts.
I am a bit uncertain whether I should change from b2 to s3.