The dream: mysql backups

Is there any tiny fiber in your body that thinks about making restic for mysql?

So that one could browse database backups like you can browse file backups?

I know you already have your hands full with restic file backups, but it keeps being on my mind. I’m so not in love with the current mysql / mariadb backup solutions; restoring alone is already such a pain, and they don’t offer any easy snapshot restores whatsoever.

If only the world would see a backup solution like restic but then for databases… :smile:

While I currently don’t plan to build something specifically for databases, you can run mysqldump | restic backup --stdin --stdin-filename db.sql and afterwards mount the repository; then you can browse the files and inspect them with an editor… I don’t know, maybe that helps you a bit :slight_smile:
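Spelled out, that pipeline might look like the sketch below. It assumes the repository is already configured through the environment (e.g. RESTIC_REPOSITORY and RESTIC_PASSWORD_FILE), and the mount point is made up:

```shell
#!/bin/sh
# Sketch: stream a dump straight into restic with no intermediate file.
# Repository location/credentials are assumed to come from the environment;
# the mount point is an example.
DUMP_NAME="db.sql"

if command -v mysqldump >/dev/null && command -v restic >/dev/null; then
    mysqldump --all-databases --single-transaction \
        | restic backup --stdin --stdin-filename "$DUMP_NAME"

    # Browse snapshots as a filesystem (requires FUSE); the dump then
    # appears under /mnt/restic/snapshots/<id>/ and can be opened in an editor.
    restic mount /mnt/restic
fi
```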


Yeah not really, it’s the restic magic I’m hoping for, you see, restoring single records from a snapshot, that kind of stuff :wink:

It’s just a wish I hope comes true in one way or another.

Keep up the great work, as usual still loving the magic of restic.


Isn’t a database dump just a file with a bunch of CREATEs and INSERTs? And isn’t a database just a bunch of files on disk anyway? Why couldn’t restic back those up?

It would be cool if it could be part of a larger single snapshot: “back up the current content of DB x and of directory y” (e.g. Wordpress mysql and uploads folder). Unfortunately, --stdin is mutually exclusive with specifying files.
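One workaround is to write the dump to a regular file first and then pass that file to restic together with the directory, so both end up in a single snapshot. A sketch, with made-up paths and database name:

```shell
#!/bin/sh
# Sketch: since --stdin can't be combined with path arguments, dump to a
# file first, then back up the file and the site directory together.
# Paths and the database name are illustrative.
SITE_DIR="/var/www/wordpress"
DUMP_FILE="/var/backups/wordpress.sql"

if command -v mysqldump >/dev/null && command -v restic >/dev/null; then
    mysqldump --single-transaction wordpress > "$DUMP_FILE"
    restic backup "$SITE_DIR" "$DUMP_FILE"   # one snapshot covering both
fi
```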

Well it’s about browsing those creates and inserts in a user-friendly way, with an easy restore procedure, for example: restore this table as it was a week ago, or single rows. The dumps are exactly that, dumps. Not easy to go through and indeed not connected to a snapshot on disk.

I just dump the databases at the same frequency as restic backups, staggered by a few minutes so the dump completes before restic starts.
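As a crontab sketch (times, paths, and the dump script name are made up):

```crontab
# dump at 02:00, give it 15 minutes to finish, then run restic
0 2 * * *   /usr/local/bin/dump-databases.sh
15 2 * * *  restic backup /var/backups /etc /srv
```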

Dump files are a pain, but the most reasonable way to deal with this is either to set up a quick VM and restore the dump there, or, if you’re creating separate dumps for each database, to restore the dump you’re interested in to a new database on your existing server.

This way, you can browse and use all of the tools that already exist to work with mysql to create an export file of the information you’re looking for, and then import it into your live database.

Trying to add this kind of functionality to restic seems like a lot of work for something that’s easy to work around.


The current optimal approach seems to be writing out a tree to disk like /database/${database}/${table}.sql and then using restic to back that up. It doesn’t take too much scripting, so I’m not sure it would make sense for restic to get into this vs just sharing some wrapper scripts. Just make sure you pass options to your database dumper to maximize de-dupability, e.g. by removing timestamp comments and forcing consistent row ordering:

# --single-transaction: consistent InnoDB snapshot without locking
# --quick: stream rows instead of buffering whole tables in memory
# --compact: strip header/timestamp comments, so unchanged tables
#            produce byte-identical dumps that dedup well
# --order-by-primary: deterministic row order, also for dedup
mysqldump ${mysql_args} \
    --force \
    --single-transaction \
    --quick \
    --compact \
    --extended-insert \
    --order-by-primary \
    --ignore-table="${1}.sessions"
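A loop producing that per-table tree might look like the sketch below. The output root and the use of SHOW TABLES to enumerate tables are assumptions:

```shell
#!/bin/sh
# Sketch: one .sql file per table under /database/<db>/ so restic can
# dedup tables that didn't change between runs. The database name and
# output root are illustrative.
DB="${1:-mydb}"
OUT_DIR="/database/$DB"

if command -v mysql >/dev/null && command -v mysqldump >/dev/null; then
    mkdir -p "$OUT_DIR"
    for table in $(mysql -N -B -e "SHOW TABLES" "$DB"); do
        mysqldump --single-transaction --quick --compact \
            --extended-insert --order-by-primary \
            "$DB" "$table" > "$OUT_DIR/$table.sql"
    done
fi
```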

I mean, for consistency you want to do that anyway. You don’t want to back up a running database (edit: without snapshotting its data, obviously) and expect it to be consistent.

We actually do back up running database datadirs, but use btrfs snapshots to get a point-in-time view of the data. This process is substantially faster than doing SQL dumps, both for backup and restore.
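That process might be sketched as follows; the subvolume paths are made up, and it assumes the datadir is itself a btrfs subvolume:

```shell
#!/bin/sh
# Sketch: take a read-only snapshot of the running datadir for a
# point-in-time view, back up the snapshot, then drop it.
# Paths are illustrative.
DATADIR="/srv/mysql"
SNAP="/srv/.snapshots/mysql-backup"

if command -v btrfs >/dev/null && command -v restic >/dev/null; then
    btrfs subvolume snapshot -r "$DATADIR" "$SNAP"
    restic backup "$SNAP"
    btrfs subvolume delete "$SNAP"
fi
```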

The ancient ritual of “dump & pickup” is still in common use: the DBA dumps the database, and the backup product/admin picks it up.
Good

  1. Online recovery is quicker than recovery from the backup app.
  2. The DBA owns the restore and is available for immediate system recovery (assuming local storage is always available).
  3. The backup app/admin is only responsible for secondary copies and is not involved in the initial recovery process.

Not so good

  1. Additional local storage is required: at least twice the space needed for a single dump. With storage for only a single dump, you destroy the online recovery point when another dump commences, and if there is a fatal database problem during the backup there is no online copy to restore from.
  2. MySQL doesn’t have native snapshot capabilities, so you have to use third-party products for filesystem snapshots (a previous post mentioned btrfs).
  3. Dumps are susceptible to ransomware; snapshots usually aren’t, as they are read-only, and they are faster for recovery.

If you keep a few hours’ or days’ worth of btrfs snapshots around, that mostly mitigates the “not so good” points. In particular, COW means it doesn’t require very much extra storage, and recovery can be very fast: stop the database server, create a new subvolume from the desired snapshot over the “production” subvolume, start the database server.
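The recovery steps above, as a sketch; the subvolume paths, snapshot name, and service name are assumptions:

```shell
#!/bin/sh
# Sketch of the rollback: stop the server, set aside the current datadir,
# make a writable snapshot of the desired read-only snapshot in its place,
# and start the server again. Paths and service name are illustrative.
DATADIR="/srv/mysql"
SNAP="/srv/.snapshots/mysql-good"

if command -v btrfs >/dev/null && command -v systemctl >/dev/null; then
    systemctl stop mysql
    mv "$DATADIR" "$DATADIR.old"
    btrfs subvolume snapshot "$SNAP" "$DATADIR"   # writable copy of snapshot
    systemctl start mysql
fi
```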

This only works if the DB workload performs well on COW, of course. (Though you can use nodatacow with snapshots; the first write to a shared extent will unshare it with COW, but subsequent writes to the unshared extent won’t be COW.)