Efficiency of backing up (large) encrypted container files (TrueCrypt / VeraCrypt)

I only found threads discussing the mtime change behaviour of TrueCrypt / VeraCrypt, which I understand, so I hope this new topic is welcome.

I’ve been wondering whether restic is able to handle a large encrypted container file efficiently without mounting it and backing up its contents directly. Unfortunately I don’t have much low-level understanding of either what exactly happens inside these containers or how exactly restic would handle them.

Say I have a 100GB container file. Obviously, filled with actual data or not, 100GB will have to be transferred to the repository initially. Now say I delete one 1GB file from and add another 1GB file to the container file. What’s the worst-case scenario in terms of data transfer volume and computational resources required in relation to the same process outside of a container file?

Thanks for your efforts in this great project!

Greetings, Christian

restic will have to check the complete container for modifications on every backup run. That is, restic will have to read the full 100 GB every time anything within the container has changed.

Afaik the container behaves such that changing one sector within the container results in roughly one changed sector of the container file. To put it differently: small changes within the container should translate to small changes of the container file, and the same goes for large changes.
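
To illustrate that sector-level behaviour, here is a small Python sketch using AES-XTS (the disk-encryption mode VeraCrypt uses) via the `cryptography` package. It is a simplified model, not the actual container format (the key handling, sector numbering and missing header are assumptions for illustration), but it shows that changing one plaintext sector changes exactly one ciphertext sector:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SECTOR = 512
key = os.urandom(64)  # AES-256-XTS uses a double-length (512-bit) key

def encrypt_sector(index: int, plaintext: bytes) -> bytes:
    # In XTS the tweak is derived from the sector number, so each sector
    # is encrypted independently of all the others.
    tweak = index.to_bytes(16, "little")
    enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return enc.update(plaintext) + enc.finalize()

sectors = [os.urandom(SECTOR) for _ in range(4)]
before = [encrypt_sector(i, s) for i, s in enumerate(sectors)]

sectors[2] = os.urandom(SECTOR)  # overwrite only sector 2
after = [encrypt_sector(i, s) for i, s in enumerate(sectors)]

print([a == b for a, b in zip(before, after)])  # [True, True, False, True]
```

Because of this, a small write inside the mounted volume only dirties the corresponding region of the container file, which is exactly what restic’s deduplication can take advantage of.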

Deleting a 1GB file usually only causes a few metadata changes in the filesystem, so restic should only have to upload a few MB afterwards. The amount of data restic has to upload after adding a 1GB file depends on whether the file is written with or without fragmentation. If it is written without fragmentation, restic’s upload should be roughly the size of the new file.
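
The reason this works is restic’s content-defined chunking: every file is cut into chunks at positions determined by the content itself (restic’s chunker targets roughly 512 KiB to 8 MiB per chunk), and only chunks whose hash is not yet in the repository get uploaded. Here is a toy sketch of the idea; it is not restic’s actual chunker, just a simplified illustration:

```python
import hashlib
import os

# Toy content-defined chunker: cut whenever the low bits of a rolling value
# hit a fixed pattern. Boundaries depend on content, not on fixed offsets,
# so they resynchronize shortly after a localized change.
def chunk(data, mask=0xFFF):
    chunks, start, rolling = [], 0, 0
    for i, b in enumerate(data):
        rolling = ((rolling << 1) ^ b) & 0xFFFFFFFF
        if (rolling & mask) == mask and i - start >= 64:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def backup(data, known_hashes):
    uploaded = 0
    for c in chunk(data):              # the WHOLE file is read and chunked ...
        h = hashlib.sha256(c).hexdigest()
        if h not in known_hashes:      # ... but only unknown chunks are "uploaded"
            known_hashes.add(h)
            uploaded += len(c)
    return uploaded

container = bytearray(os.urandom(4_000_000))   # stand-in for the container file
repo = set()
print("first backup uploads:", backup(bytes(container), repo))

container[2_000_000:2_001_000] = os.urandom(1000)  # small localized change
print("second backup uploads:", backup(bytes(container), repo))
```

The second run still reads and hashes the whole file, but only uploads the handful of chunks that actually changed, which is exactly the behaviour described above.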

You can certainly use restic to back up large files such as filesystem images or VM containers. Bandwidth and remote storage use will only reflect the differences on each backup, but you will not get around reading the whole file on every backup. Depending on the size of your container and the speed of your storage this may or may not be practical.

The laptop that I’m currently typing on has virtual machine images in the ~10 GB size range. With a fast NVMe SSD that reads several hundred MB per second, it is still perfectly reasonable to simply include these files in the daily restic backup.

But if your containers are in the hundreds of GB and live on rotating disks, you can easily end up stressing your disks and degrading performance for hours.
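
As a back-of-the-envelope estimate (the sizes and read speeds below are just illustrative assumptions), the lower bound for one backup run is simply container size divided by sequential read throughput, since the whole file has to be read each time:

```python
# Rough lower bound on backup duration: size / sequential read speed.
GiB, MiB = 1024**3, 1024**2
cases = [(10, 500, "NVMe SSD"), (100, 150, "HDD"), (500, 150, "HDD")]
for size_gib, speed_mib_s, medium in cases:
    seconds = (size_gib * GiB) / (speed_mib_s * MiB)
    print(f"{size_gib:>4} GiB on {medium:>8} at {speed_mib_s} MiB/s: ~{seconds / 60:.1f} min")
```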

And please note that backing up a container with restic is not an atomic operation, so backing up a mounted container may well leave you with inconsistencies and the danger of data loss. Always unmount containers (or remount them read-only) before backing them up.
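
If you want to automate that, something along these lines could work. This is only a rough sketch: the paths are placeholders, and you should double-check the VeraCrypt command-line options on your system; only `restic -r <repo> backup <path>` is standard restic usage.

```python
import subprocess

CONTAINER = "/data/secret.hc"   # placeholder paths, adjust to your setup
REPO = "/srv/restic-repo"

# Dismount the VeraCrypt volume first so the container file is in a
# consistent state (flag names taken from the VeraCrypt text-mode CLI,
# verify them on your system), then back up the raw container file.
subprocess.run(["veracrypt", "--text", "--dismount", CONTAINER], check=True)
subprocess.run(["restic", "-r", REPO, "backup", CONTAINER], check=True)

# Remount afterwards (interactively, since VeraCrypt will ask for the password):
# subprocess.run(["veracrypt", "--text", CONTAINER, "/mnt/secret"], check=True)
```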