Using restic to back up encrypted containers?

There shouldn’t be much duplicated data in the container anyway except for unallocated space. I’m more interested in deduplication between snapshots.

That should work. I’ve tested it on a local repository already. I can loopback-mount the container from the FUSE filesystem. I’ve done the same thing with my current setup over sshfs; it’s very slow but useful in a pinch.
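For reference, the loopback mount goes roughly like this (paths are examples; restic mount serves a read-only FUSE view, and the loop device and cryptsetup steps need root):

```shell
# Serve the repository as a read-only FUSE filesystem
# (this blocks, so run it in its own terminal or background it).
restic -r /srv/restic-repo mount /mnt/restic &

# The container file shows up under the snapshot tree.
# Attach it to a loop device, read-only:
sudo losetup --find --show --read-only \
    /mnt/restic/snapshots/latest/home/user/container.img
# prints the allocated device, e.g. /dev/loop0

# For a LUKS container, open it before mounting:
sudo cryptsetup open --readonly /dev/loop0 restored
sudo mount -o ro /dev/mapper/restored /mnt/container
```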

I thought the chunker_polynomial was required, but now that I think about it, it’s only needed to chunk new data during backup, not to read old data.
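That matches how content-defined chunking works: cut points depend only on a rolling hash over a small sliding window, so the chunker parameters matter when cutting new data, while reading back only needs blob IDs and lengths. A toy illustration of why this gives deduplication between snapshots even when data shifts (a simple byte-sum window, not restic’s actual Rabin-fingerprint chunker):

```python
import hashlib
import random

WINDOW = 16
MASK = (1 << 10) - 1  # cut on average every ~1 KiB

def chunks(data: bytes):
    """Split data where the low bits of a sliding-window byte sum
    match MASK. Cut points depend only on local content, so they
    re-synchronize after an insertion earlier in the stream."""
    out, start, rolling = [], 0, 0
    for i, b in enumerate(data):
        rolling += b
        if i >= WINDOW:
            rolling -= data[i - WINDOW]
        if (i >= WINDOW and (rolling & MASK) == MASK) or i == len(data) - 1:
            out.append(data[start:i + 1])
            start = i + 1
    return out

def blob_ids(data: bytes):
    return {hashlib.sha256(c).hexdigest() for c in chunks(data)}

a = random.Random(0).randbytes(1 << 16)  # "snapshot 1" file content
b = b"new header" + a                    # "snapshot 2": same data, shifted
shared = blob_ids(a) & blob_ids(b)
print(f"{len(shared)} of {len(blob_ids(a))} blobs reused")
```

With a fixed-size chunker the 10-byte shift would change every chunk; here only the first chunk differs, so all the others deduplicate against the previous snapshot.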

I did some testing on a local repo with a single 1 GB file backed up. Looking at the content of the snapshot, I got the corresponding tree blob ID. That tree blob contains a list of all the data blobs for the file, in order. Using the index file, I figured out which pack held the first data blob of the backed-up file. I deleted that pack from data/ and tried a restore. Restic tried to re-read the missing file several times, which isn’t that useful on a local repo. Eventually it gave up without restoring anything. The same thing happened when I corrupted that pack file instead of deleting it.
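To make the lookup chain concrete, here’s a toy model of what I did (the names and layout are mine, not restic’s actual on-disk format): the tree blob is just an ordered list of data blob IDs, and the index maps each blob to a (pack, offset, length) triple.

```python
import hashlib

def blob_id(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Four data chunks for one file; the "tree blob" is the ordered ID list.
chunks = [(b"chunk-%d " % i) * 40 for i in range(4)]
tree = [blob_id(c) for c in chunks]

# Two "pack" files holding the chunks back to back, plus an index
# recording (pack, offset, length) per blob -- the same information
# I pulled out of the index file by hand.
packs = {"pack-A": b"".join(chunks[:2]), "pack-B": b"".join(chunks[2:])}
index = {}
for pack, members in (("pack-A", chunks[:2]), ("pack-B", chunks[2:])):
    off = 0
    for c in members:
        index[blob_id(c)] = (pack, off, len(c))
        off += len(c)

def restore(tree, packs, index) -> bytes:
    out = b""
    for bid in tree:
        pack, off, length = index[bid]
        if pack not in packs:
            raise IOError(f"pack {pack} missing, blob {bid[:8]} lost")
        out += packs[pack][off:off + length]
    return out

assert restore(tree, packs, index) == b"".join(chunks)

del packs["pack-A"]              # simulate the deleted/corrupted pack
try:
    restore(tree, packs, index)
except IOError as e:
    print("restore failed:", e)  # all-or-nothing, like what I observed
```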

Restic had access to the tree blob, so it knew which data blobs were needed. It also had the length of every data blob from the index file, so it could have figured out the right offsets for the data it still had, but it gave up after the first error.
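Since the index stores every blob’s length, a best-effort restore is possible in principle: zero-fill the unreadable blobs so the surviving data keeps its correct offsets. A sketch of the idea (toy structures, not anything restic currently offers):

```python
# Minimal stand-in structures: blob id -> (pack, offset, length), plus
# the ordered blob list for one file (names are mine, not restic's).
index = {
    "b0": ("pack-A", 0, 4),
    "b1": ("pack-A", 4, 4),
    "b2": ("pack-B", 0, 4),
}
tree = ["b0", "b1", "b2"]
packs = {"pack-B": b"DATA"}        # pack-A is already lost

def restore_best_effort(tree, packs, index):
    """Zero-fill unreadable blobs; their lengths are in the index, so
    every surviving blob still lands at the right output offset."""
    out, missing = b"", []
    for bid in tree:
        pack, off, length = index[bid]
        if pack in packs:
            out += packs[pack][off:off + length]
        else:
            out += b"\x00" * length
            missing.append(bid)
    return out, missing

data, missing = restore_best_effort(tree, packs, index)
print(data, missing)   # eight zero bytes then b"DATA"; b0 and b1 lost
```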

Loss of the tree blob means loss of the chunk order, so the entire file is lost.

I’m mostly worried about mistakes in the implementation, like that security bug in tarsnap: it used AES in counter mode, but after some refactoring of the code the counter nonce was no longer incremented. The developer of tarsnap is a genius, but he can still make mistakes, and with few contributors those mistakes might not be noticed right away.
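For anyone who hasn’t seen why that bug was so bad: in counter mode the cipher just XORs the plaintext with a keystream derived from the key, nonce and counter, so reusing a nonce hands an attacker the XOR of two plaintexts. A toy demonstration (a hash-based keystream standing in for AES-CTR, not tarsnap’s actual code):

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy CTR-style keystream: block i = SHA-256(key | nonce | i).
    Stands in for AES-CTR purely to show the failure mode."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key = b"k" * 32
m1 = b"attack at dawn, bring the keys!"
m2 = b"retreat at dusk, burn the keys!"

# The bug: the nonce is never advanced, so both messages
# get encrypted under the same keystream...
c1 = encrypt(key, b"nonce-0", m1)
c2 = encrypt(key, b"nonce-0", m2)

# ...and an attacker who XORs the two ciphertexts gets the XOR
# of the two plaintexts, no key required.
leak = bytes(x ^ y for x, y in zip(c1, c2))
assert leak == bytes(x ^ y for x, y in zip(m1, m2))
print("ciphertext XOR equals plaintext XOR:", leak)
```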

Is that a problem only with prune, or with all operations? Adding new snapshots should only add new files. Those files might end up truncated, but on the next run restic could check that the old files are fine, so the new snapshot wouldn’t reference the old corrupt data.
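To be clear, by checking I mean something like this (repository path is an example; --read-data re-reads and verifies every pack, so it’s slow on remote repos, but it’s what would actually catch a truncated pack file):

```shell
# Verify repository structure: indexes, snapshots, pack headers.
restic -r /srv/restic-repo check

# Also read every pack and verify the blob hashes.
restic -r /srv/restic-repo check --read-data
```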

Perhaps it’s better to ask which one you use (if any); the setup used by the developer is usually the best-tested one.

ZFS itself might be good, but I’m using Linux, so that means ZFS on Linux, which wasn’t so great last time I checked. It’s an out-of-tree module, so you constantly have to rebuild it when the kernel changes. Last time I checked it also didn’t have TRIM support, which would be useful if I want to use it inside the container.