Restic Disk Usage Has Me Tearing My Hair Out

The uncompressed source data on the DB server is 4.8 TB.

They are not SQL dumps. They are simple directory copies taken with the MySQL instances all in a down state.

That’s a fair question. How would I tell?

Then this is probably what’s going on. Restic sees the data as uncompressed, which means it sees 4.8 TB. It backs this up, and on the remote side it stores a bunch of encrypted binary data, which ZFS can hardly compress. So you end up with 4.4 TB.

That said, restic’s compression should be able to deal with this to some extent, I suppose - let’s see what Michael says.

The source data is 4.8 TB, but when it gets rsync’d to storage server 1, ZFS compresses it to 2.2 TB. If restic sends 4.8 TB of uncompressed data over the wire to storage server 2, I would expect ZFS on storage server 2 to do the same thing as storage server 1 did when it received the raw data from the DB server. The end result should still be 2.2 TB of used space on storage server 2.

It’s a further mystery to me that the Day 2 backup is twice the size of Day 1. It’s like there’s no incremental backup being done, just another full copy.

No, because the data is not the same when stored on server 2 as it is/was when stored on server 1.

  • On server 1, ZFS sees compressible data and thereby successfully compresses it from 4.8 TB to 2.2 TB on disk.

  • On server 2, ZFS sees much less compressible data (restic’s encrypted binary blobs), and is therefore not as successful in compressing it (it only gets from 4.8 TB down to 4.4 TB on disk) - or it’s restic itself that brings the data from 4.8 TB down to 4.4 TB.

At least that’s my theory, but whether it’s correct or not depends on the answers to Michael’s questions :slight_smile:
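
One way to test that theory directly would be to compare the logical size of a few of restic’s pack files on server 2 with the space ZFS actually allocates for them. The repository path below is just a guess - adjust it to wherever your repo really lives:

du -sh --apparent-size /zpool0/restic_repo/data/00    # logical size of the pack files in one data subfolder
du -sh /zpool0/restic_repo/data/00                    # space actually allocated by ZFS for the same files

If those two numbers come out roughly equal, ZFS isn’t getting any compression out of the encrypted pack files.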


Recent restic dev builds print the repository version when running a command, e.g. restic snapshots prints:

repository c881945a opened (repository version 1) successfully, password is correct

Repository version 2 supports compression, but that feature is for now only available in dev builds, and since it is opt-in, your repository is probably still using version 1 - especially as the ZFS compression shows that the data compresses reasonably well.
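
If you want to check the repository version without relying on that startup message, restic cat config should print the repository config as JSON, and the version field in it is the repository format version:

restic -r /restic_remote_repo.nfs cat config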

That is indeed strange. That would essentially require the db to change data in pretty much every MB of the database, which sounds a bit much. How much uploaded data does restic report when running the incremental backup?

Since there are 144 separate runs of restic each day, I will pick one that is reasonably large as an example. Here’s the log from Wednesday…

14:08:02: — site719 —
14:08:02: exec: restic --tag site719 -r /restic_remote_repo.nfs --verbose backup /zpool0/db_rsyncs/site719
14:26:44: open repository
14:26:44: lock repository
14:26:44: no parent snapshot found, will read all files
14:26:44: load index files
14:26:44: start scan on [/zpool0/db_rsyncs/site719]
14:26:44: start backup on [/zpool0/db_rsyncs/site719]
14:26:44: scan finished in 39.814s: 13124 files, 399.499 GiB
14:26:44: Files: 13124 new, 0 changed, 0 unmodified
14:26:44: Dirs: 12 new, 0 changed, 0 unmodified
14:26:44: Data Blobs: 754686 new
14:26:44: Tree Blobs: 13 new
14:26:44: Added to the repo: 383.849 GiB
14:26:44: processed 13124 files, 399.499 GiB in 18:41
14:26:44: snapshot ccb16896 saved

Note that it added about 400GB to the repo because it was the first run on a fresh/empty repo.

And here’s the log from Thursday for the same backup…

09:03:33: exec: restic --tag site719 -r /restic_remote_repo.nfs --verbose backup /zpool0/db_rsyncs/site719
09:09:08: open repository
09:09:08: lock repository
09:09:08: using parent snapshot ccb16896
09:09:08: load index files
09:09:08: start scan on [/zpool0/db_rsyncs/site719]
09:09:08: start backup on [/zpool0/db_rsyncs/site719]
09:09:08: scan finished in 66.770s: 13138 files, 402.077 GiB
09:09:08: Files: 15 new, 514 changed, 12609 unmodified
09:09:08: Dirs: 0 new, 8 changed, 4 unmodified
09:09:08: Data Blobs: 43682 new
09:09:08: Tree Blobs: 9 new
09:09:08: Added to the repo: 22.987 GiB
09:09:08: processed 13138 files, 402.077 GiB in 5:34
09:09:08: snapshot f52a2f5b saved

That one added only about 23GB.

And yet the restic_repo folder was 4.4 TB after the initial backup, and 9 TB after the second day.

What commands are you using when you inspect a) the compressed and uncompressed size of the source data on server 1, b) the compressed and uncompressed size of the restic_repo folder on server 2?

Also, how often do you run these 144 different backups?

I use df -h on the source database server. It has 144 separate filesystems, so I add up the “Used” amounts. Then I use du -hs on the db_rsyncs folder on storage server 1, and du -hs on the restic_repo folder on storage server 2. I’m aware that there are differences in the way usage is calculated between df and du.

Every day.

So just once per day, or what?

Do you get any other results when you use zfs list instead?

zfs list has a lot of options. The basic command does not show much of use.

On storage server 1…

[root@store50b db_backup_stage2]# zfs list
NAME USED AVAIL REFER MOUNTPOINT
zpool0 7.72T 6.64T 7.72T /zpool0

On storage server 2…

[root@store50a zpool0]# zfs list
NAME USED AVAIL REFER MOUNTPOINT
zpool0 10.1T 4.26T 10.1T /zpool0

Are there any special options you’d like me to try?

Not in particular, I was mostly curious whether you see different numbers when you use the ZFS commands instead. The examples you gave don’t seem to match the sizes you’ve written earlier, so I’m not sure what to make of it.

You could run zfs get all | grep compress to see what you get out of that, though. But please do so for the relevant datasets - I presume you have more than just the root/pool filesystem?
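
For example, something along these lines would show whether compression is enabled, the ratio it achieves, and the logical vs. physical space used (assuming the dataset really is just zpool0, as in your zfs list output - run it on both servers):

zfs get compression,compressratio,used,logicalused zpool0

Comparing used with logicalused on each server should make it clear how much ZFS is actually saving in each case.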


Anyway, I think that the entire discussion here is with only pieces of information revealed. In order to fully understand what you see, we need to see the same thing, which means e.g. the full logs of all runs involving the problem and the repository. As well as some more specific output from disk usage checks, I suppose.

But a more important thing is that instead of dealing with this somewhat big source and repository, IMO the sane way to debug this further is to do the following:

  1. Create a new folder the same way as the others on server 1, but with just some test data as the source to back up. This can be a copy of one of your current databases if you want, but it should of course be somewhat big in order to make it easy to follow up on (e.g. if its disk usage on the repository server were to double, that’s easier to see with a large size). It also makes sense to keep it to just one folder and one single backup job, so you don’t have to fiddle with multiple logs from different jobs and can focus on one minimal test case.

  2. Create a new repository on server 2, the same way you created the other ones, but for this test source data explicitly.

  3. Back up the source data to the new repository the same way you back up the others, and collect logs as usual. Try to reproduce the problem.

  4. If you can reproduce the problem with this test data and test repository, do the very same thing but with the source data and destination repository being on “regular” filesystems such as ext4, to see if that makes any difference to the outcome on the repository side.

If there is a difference when you don’t use ZFS, that’s a good starting point to dig further into.

Generally speaking, restic backs up the data it sees, deduplicates it (and compresses it, if your restic version supports that), and then stores binary encrypted files on the repository side. It’s quite unlikely that it backs up more data than it detects as changed and reports in the output of the backup runs, and it doesn’t store more data on the repository side than it sent. So this is why it’d be good to isolate the matter by removing the ZFS component (assuming you can reproduce the problem in a controlled manner). Especially since you had a similar issue in this forum a while back, which was never fully understood or figured out.
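
As an additional data point, restic can report sizes from its own point of view. Something like the following against your existing repository (the path is taken from your logs) shows the total size of the deduplicated blobs restic stores versus the logical size the snapshots would have if restored:

restic -r /restic_remote_repo.nfs stats --mode raw-data
restic -r /restic_remote_repo.nfs stats --mode restore-size

If the raw-data figure is far below what du reports for the repository folder, then whatever is eating the space isn’t coming from restic’s own data.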

It’s all about systematic isolation and testing to verify and draw conclusions, until you find the root cause.


On the source server, I created a single subfolder containing a MySQL directory tree. As you can see, the folder occupies 130 GB on disk…

[root@store50b zpool0]# du -hs db_rsyncs_2/site092
130G db_rsyncs_2/site092

On the destination server, I created a new restic repo…

[root@store50a zpool0]# mkdir restic_repo_2
[root@store50a zpool0]# restic -r restic_repo_2 init
created restic repository 09f32ebfc1 at restic_repo_2

I added the new repo to the NFS exports and reloaded the NFS server service.

From the source server, I mounted the new repo as an NFS volume…

[root@store50b /]# mount -t nfs store50a:/zpool0/restic_repo_2 /restic_remote_repo_2.nfs

I then ran restic against the source subfolder…

[root@store50b zpool0]# restic --tag site092 -r /restic_remote_repo_2.nfs --verbose backup /zpool0/db_rsyncs_2/site092
open repository
repository 09f32ebf opened (repo version 1) successfully, password is correct
created new cache in /root/.cache/restic
found 1 old cache directories in /root/.cache/restic, run restic cache --cleanup to remove them
lock repository
no parent snapshot found, will read all files
load index files
start scan on [/zpool0/db_rsyncs_2/site092]
start backup on [/zpool0/db_rsyncs_2/site092]
scan finished in 1.502s: 12679 files, 293.054 GiB

Files: 12679 new, 0 changed, 0 unmodified
Dirs: 11 new, 0 changed, 0 unmodified
Data Blobs: 569363 new
Tree Blobs: 12 new
Added to the repo: 283.796 GiB

processed 12679 files, 293.054 GiB in 17:17
snapshot f4d3c9a7 saved

I note that restic is using repo version 1, and it saw 293.054 GiB of data.

On the destination server, the restic_repo_2 folder is now 285 GB in size…

[root@store50a zpool0]# du -hs restic_repo_2
285G restic_repo_2

For comparison purposes, I did a simple rsync of the source folder from the source to the destination server…

[root@store50b zpool0]# rsync -avh /zpool0/db_rsyncs_2/site092 store50a:/zpool0

On the destination server, that folder occupies 130 GB…

[root@store50a zpool0]# du -hs site092
130G site092

So… we find that the source folder is 130 GB, and when using rsync the destination folder is also 130 GB; however, when using restic, the destination folder is 285 GB.

No, there’s just the zpool0 root filesystem on both servers.


Thanks for that test and the clear elaborate explanation of what you did, that’s super useful :slight_smile:

This is (IMO, unless I’m totally missing something here) fully expected, because of what I wrote here - namely:

  • When you use restic, the data/files whose size you are checking on the destination server are no longer the same as the data you are checking on the source server - that is, assuming the 130 GB you are seeing is the compressed size (which I believe to be the case, and which also matches the output from restic).

  • When you instead use rsync, the data on the destination is the same as on the source, so you get the same sizes (assuming the same ZFS configuration).


Can you please do the exact same thing as you did here, but using a source and destination filesystem that is not compressing or deduplicating, e.g. plain old EXT4 or XFS? That would provide very useful information I think. You don’t have to use the same servers, you can use another Linux system, just make sure it’s the same set of data on the source and that you perform the very same steps as you did above. The only difference should be the source and destination filesystems.
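
If you don’t have a spare ext4 or XFS filesystem handy, a loopback-mounted image file is an easy way to get one. The following is only a rough sketch with hypothetical paths and a scratch repository name - adjust the size and locations to whatever fits your systems:

truncate -s 400G /var/tmp/ext4test.img        # sparse image; the underlying fs still needs enough real free space
mkfs.ext4 -F /var/tmp/ext4test.img
mkdir -p /mnt/ext4test
mount -o loop /var/tmp/ext4test.img /mnt/ext4test
mkdir -p /mnt/ext4test/src
rsync -a /zpool0/db_rsyncs_2/site092 /mnt/ext4test/src/
restic -r /mnt/ext4test/restic_repo_3 init
restic -r /mnt/ext4test/restic_repo_3 --verbose backup /mnt/ext4test/src/site092
du -hs /mnt/ext4test/src/site092 /mnt/ext4test/restic_repo_3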

Yes, indeed. 130 GB is the compressed size on the source and destination. 285 GB is also the compressed size on the destination, but I gather that either ZFS cannot really compress restic’s blobs, or restic is adding a ton of overhead to the raw data.

I’m setting up that test now, but I suspect we can foresee the results. The raw source folder will be about 290 GB, the raw destination folder (copied with rsync) will be the same size, and the restic repo will also be roughly the same size after the first backup.

Assuming that’s what happens, it would leave me with a conundrum. If restic cancels the benefit of ZFS compression, that would be a huge bummer. Since I’m not going to give up compression, I would have to give up restic and go with something less elegant (possibly rdiff-backup) but which retains the benefit of compression.

Hello. I am a former Database Administrator of Oracle databases, not an employee of Oracle. A couple of comments:

  1. The database is shut down while the backup is run. This is a good method for simple backups; there is much more complexity if the database has to be kept running.
  2. While I have never administered MySQL backup / restore, I think the lack of deduplication is because there is no duplication. In an Oracle db every block (whatever minimum size the db uses to write to disk) contains a number related to the last time something was changed in that block. That means every time an update runs, every block that is changed gets a new number. While the database block size likely does not match the restic block size, it is very likely that in any production database many, perhaps most, of the blocks are being changed. This means that from restic’s point of view it cannot deduplicate the block, so it will create a new block.
  3. I do not know if there is a block header at the start of the tablespace (disk file) which gets set to the maximum of the new number. If this were to happen, I’m guessing that only the restic blocks which were changed would be backed up. But what would happen if the first row is changed to add one character? Would that cause all subsequent blocks to look different because they contain a different group of characters? That would stop deduplication.
    Further reading: mysql log sequence numbers

Please, please make sure to try a recovery and starting the database. I have had many people claim they have recovered databases by recovering files but when the database is actually started there were serious error messages.
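
For reference, a restore test could look roughly like this - the tag and the target path are only placeholders, and after the restore you would point a throwaway MySQL instance at the restored data directory:

restic -r /restic_remote_repo.nfs restore latest --tag site719 --target /restore_test
# the files end up under /restore_test/<original path>; start a scratch mysqld against that
# directory (adjusting ownership and --datadir) and run a few sanity queries against it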
Good luck.

Yeah, that’s correct - both are compressed, it’s just that ZFS isn’t able to compress encrypted data very well. This is perfectly expected and is not specific to encrypted files produced by restic - the same would happen with any other encrypted data/files.

Yeah, I agree. But it’s still an important baseline - if we can establish this then there’s less confusion about the compression’s effect on things, and we can then start determining if the growth of the repository better matches the growth of the data you back up with restic.

It’s not so much that restic cancels anything - you are literally talking about two different types of files here. You simply cannot expect your destination ZFS to be able to compress encrypted files, regardless of what produced those files. In other words, if you want an encrypted backup, you’ll have less compression.

It would be a shame if you have to trade a really good backup software for something else though, but I totally get that if you rely on the compression you might have to ditch the encryption aspect of your backups. It’s a tradeoff you can make once you know the reasons for and the effects of such a decision.

@forbin Have you considered backing up dumps of the databases instead of the database directories themselves? E.g. if you could run a dump that produces what’s generally called “rsyncable” and pipe that to restic, this might perhaps yield a better overall result than having restic back up raw database files (which I think is what @punchcard was saying too).

For example, something like mysqldump ... | gzip --rsyncable | restic ... --stdin --stdin-filename siteFoo.sql might be worth trying. Perhaps you need some additional option to mysqldump to optimize how it lines up the data in the file, to make sure it’s as compressible as it can be, but try it like this first if you don’t know.
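
Spelled out a bit more, a per-site job could look roughly like this - the site name, the dump options and the credentials handling are placeholders, and --single-transaction only gives a consistent dump for InnoDB tables:

mysqldump --single-transaction --databases site719 \
  | gzip --rsyncable \
  | restic -r /restic_remote_repo.nfs backup --tag site719 --stdin --stdin-filename site719.sql.gz

Whether this actually deduplicates better than the raw datadir copies is something you’d have to measure over a few days of runs.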