Backup - help - minio

hello
is there any specific thinking/idea/performance reason etc. behind using MinIO (as in Jan-Piet Mens's post "My restic backend of choice: minio") instead of backing up directly to the filesystem of a local drive?

thanks!

Hi :wave:
I also use minio. I do it because having s3-compatibility gives you some good options like syncing between different providers, being able to use IAM roles etc.
I don’t think you’d gain much if you’re just going to back up your laptop; S3 will probably even be slower than direct access, but it at least makes sense in bigger environments.
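For reference, pointing restic at a minio instance just uses restic's standard s3 backend. A minimal sketch, assuming minio is already running and the endpoint, bucket name, and keys below are placeholders:

```shell
# Credentials for the minio server (placeholder values).
export AWS_ACCESS_KEY_ID=minio-access-key
export AWS_SECRET_ACCESS_KEY=minio-secret-key

# Initialize a repository in a minio bucket via restic's s3 backend.
restic -r s3:http://minio.local:9000/restic-backups init

# Back up as usual.
restic -r s3:http://minio.local:9000/restic-backups backup ~/Documents
```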

Well, I have multiple client devices etc., but I don’t want to use any commercial store (i.e. cloud) for off-site backup. I want to do off-site backups on my own devices, so I’m not sure. I thought that minio adds some performance benefits etc.

hm anyone? thx

I don’t have any experience with Minio but here I’m using rest-server and although I didn’t measure it, I am under the impression that it is faster than using sftp directly. In addition you can set rest-server to append-only mode in order to have some basic protection for the backups.
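To illustrate the append-only setup mentioned above, a sketch with example paths and ports: rest-server's `--append-only` flag refuses delete operations, so a compromised client can't purge existing snapshots.

```shell
# Serve /srv/restic over HTTP; --append-only blocks deletion of
# existing data, giving basic ransomware/compromise protection.
rest-server --path /srv/restic --append-only --listen :8000

# Clients then use restic's rest: backend:
restic -r rest:http://backupserver:8000/laptop1 init
restic -r rest:http://backupserver:8000/laptop1 backup ~/Documents
```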

Like you, I don’t want to use a commercial cloud provider so I’m running a cheap little server in my basement as a backup target. In my dad’s basement there is a Raspberry Pi 3 with a large USB HDD and I mirror everything at night using rsync. This setup is cheap and pretty reliable so far. The Pi uses the day for running restic checks with read-data-subset to make sure everything arrives intact.
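The mirror-and-verify routine described above could look roughly like this (hostnames and paths are examples); `--read-data-subset=1/7` checks a different seventh of the pack files each run, covering the whole repository over a week:

```shell
# Nightly: mirror the repository to the off-site Pi.
rsync -a --delete /srv/restic/ pi@offsite:/mnt/usb-hdd/restic/

# During the day, on the Pi: verify one seventh of the pack data.
restic -r /mnt/usb-hdd/restic check --read-data-subset=1/7
```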

@gurkan sorry for a late reply.
Well, I will have more machines: 3 laptops, then maybe 2 more computers or so.

I thought that S3 compatibility could somehow help with syncing to S3/Google Drive, but as you say, it might even be slower?

I want/need to sync all my local backups (restic repository data) off-site, and was thinking about AWS S3 or Google Drive.

If I do not use the minio backend, then I will have to sync data using some AWS-native client etc.

I am confused, or not sure which way is the best to go. I thought that S3 bucket replication might be faster/more secure etc., but that’s just a guess; I haven’t done any research.

Anyone else any experience on this?
Thanks

hello,
anyone any ideas here? :slight_smile: thanks!

I thought that minio adds some performance benefits

Compared to local storage, in my experience, this is the opposite of what happens. If you have the option to just use local storage and your primary concern is to get the best performance, then just use local storage. Adding additional layers (minio, restic’s REST API, etc.) is always going to reduce performance to some extent.

Other than that, I think it’s really down to personal preference which remote backend you use. The REST API backend has the lowest overhead/best performance in my experience though. When I was picking a backend a few years back, I narrowed it down to minio/REST API as the best performing options (sftp wasn’t even close). The REST API performed slightly better than minio, so it won out. I’d encourage you to run your own tests though; restic has improved a lot since I did my testing, and your local environment is probably going to be very different from mine.
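Running your own comparison doesn't need to be elaborate; something like the sketch below (repository URLs are examples) times the same initial backup against each candidate backend. Use a fresh repository per backend so deduplication from a previous run doesn't skew the numbers.

```shell
# Time an identical first backup of a test directory against
# each candidate backend (all repo locations are placeholders).
for repo in /mnt/local-repo \
            rest:http://backupserver:8000/bench \
            s3:http://minio.local:9000/bench; do
    restic -r "$repo" init
    time restic -r "$repo" backup ~/testdata
done
```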

If i do not use minio backend then i will have to sync data using some aws native client etc…

I personally use restic copy for this, so I’m not constrained by which backend hosts the repository. Performance isn’t a primary concern for me when copying backups off-site, though. If it were, I’d probably use something like rclone to just copy the repository files between the backends.

I prefer running a restic copy to just copying the files as I think it’s more likely to detect issues (because it has to read + write each snapshot). If I just copied the files from one backend to another, I could theoretically just copy a corrupted file from repo A to B, and I wouldn’t pick up on that until I next ran restic check --read-data.
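For the curious, a copy between repositories looks something like this (repository locations and password files are placeholders; the `--from-repo` syntax needs a reasonably recent restic, 0.14 or later):

```shell
# Copy all snapshots from a local repository to an off-site one.
# Both repos must already be initialized; restic reads and
# re-writes each snapshot, which is what catches corruption.
restic -r s3:https://s3.example.com/offsite-bucket \
       copy \
       --from-repo /srv/restic \
       --from-password-file /etc/restic/local-repo-pass
```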