Well, it’s right about being atomic, though not every use case cares about that. But if you have the option (for some apps), it’s safer to back up from a snapshot.
I believe with databases, even a filesystem snapshot will not lead to consistent backups in all cases: How could the filesystem know, after which writes two files are in a consistent state? It can’t. So the 7 Ps apply (Proper Preparation and Planning Prevents …).
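To illustrate that kind of preparation (a minimal sketch, not something from this thread): many databases ship their own consistent-backup mechanism that should be used instead of, or alongside, a filesystem snapshot. SQLite, for example, exposes an online backup API through Python’s standard `sqlite3` module:

```python
import sqlite3

# Create a toy database (a stand-in for a real application database).
src = sqlite3.connect("app.db")
src.execute("CREATE TABLE IF NOT EXISTS t (k TEXT PRIMARY KEY, v TEXT)")
src.execute("INSERT OR REPLACE INTO t VALUES ('key', 'value')")
src.commit()

# Connection.backup() copies the database in a transactionally
# consistent state, even while other connections are writing.
dst = sqlite3.connect("app-backup.db")
with dst:
    src.backup(dst)

# The copy is a valid, consistent database file in its own right.
print(dst.execute("SELECT v FROM t WHERE k = 'key'").fetchone()[0])
dst.close()
src.close()
```

The same idea applies to bigger engines (`pg_dump`, `mysqldump`, etc.): let the database produce a consistent artifact, then hand that to the backup tool.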
Yes, the 7 Ps are always a must, but this is one of the restic problems that bites people trying to use snapshots; on the bright side, it’s a good sign that ZFS/BTRFS are slowly gaining traction.
The problem is that it might be subtle with little data and few files, but it becomes a real PITA when you have massive datasets.
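To make the failure mode concrete, here is a small hypothetical simulation (not restic itself, and the file names are invented): an application keeps two files that must agree, a naive backup walks them one at a time while writes are happening, and the result is a backup that mixes versions. A snapshot would have captured both files at the same instant.

```python
import os
import shutil

os.makedirs("live", exist_ok=True)
os.makedirs("naive-backup", exist_ok=True)

def write_state(dirname, version):
    """Write a data file and an index file that must match each other."""
    with open(f"{dirname}/data", "w") as f:
        f.write(f"data-v{version}")
    with open(f"{dirname}/index", "w") as f:
        f.write(f"index-v{version}")

write_state("live", 1)

# A naive backup copies files one by one...
shutil.copy("live/data", "naive-backup/data")
# ...and the application writes version 2 in between...
write_state("live", 2)
# ...so the second file comes from a newer state than the first.
shutil.copy("live/index", "naive-backup/index")

data = open("naive-backup/data").read()
index = open("naive-backup/index").read()
print(data, index)  # data-v1 index-v2: an inconsistent pair
```

With a handful of files the window is tiny and you may never notice; across millions of files the walk takes hours and a mismatch like this becomes routine.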
I believe with databases, even a filesystem snapshot will not lead to consistent backups in all cases: How could the filesystem know, after which writes two files are in a consistent state? It can’t.
You are correct that the filesystem does not know when two files will be consistent. Snapshots are necessary but not sufficient. I wrote:
Ideally, our backup would look like the system had lost power in an instant. It’s still possible for there to be half-complete writes, but at that point it is up to the applications to handle such edge cases correctly. Whereas before the applications couldn’t detect or fix the corruption, now they have all the information to do so.
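One common way applications give themselves that information is the write-temp-then-rename pattern (a generic sketch, not code from the quoted article): write the new contents to a temporary file, fsync it, then atomically rename it over the old file. A power-loss-style snapshot then sees either the complete old version or the complete new one, never a torn half-write.

```python
import os
import tempfile

def atomic_write(path, data: bytes):
    """Replace `path` so a crash (or snapshot) never observes a partial file."""
    dirname = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # contents are durable before the rename
        os.replace(tmp, path)     # atomic rename on POSIX filesystems
    except BaseException:
        os.unlink(tmp)            # leave no half-written temp file behind
        raise

# Hypothetical usage: the file is either fully old or fully new.
atomic_write("config.json", b'{"version": 2}')
print(open("config.json", "rb").read())
```

The temp file lives in the same directory as the target so the rename stays on one filesystem, which is what makes `os.replace` atomic.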