High Memory Usage on Backup, Prune, and Check

Drop these excludes and use --one-file-system instead. It's more robust against your distro introducing additional virtual filesystems later. The trade-off is that you then have to explicitly list each filesystem you want to back up, if / is not the only one.

Do you have any sparse files? IIRC restic does not handle sparse files specially, so the scanner counts their full apparent size, while df does not count the holes as used space. If you have sparse files with very large holes, however, restic can deduplicate the chunks of zeros. Just because restic has found 240GB of data doesn't mean it will be adding 240GB to the repository, because this statistic is taken before deduplication happens.
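To see the gap between the two numbers yourself, here's a quick sketch (not restic itself) that creates a sparse file and compares its apparent size, which a scanner summing `st_size` would report, with the blocks actually allocated on disk, which is what df counts. It assumes a filesystem that supports sparse files (e.g. ext4 or tmpfs):

```python
import os
import tempfile

# Create a ~1 GiB sparse file: seek past the end and write a single byte.
path = os.path.join(tempfile.mkdtemp(), "sparse.bin")
with open(path, "wb") as f:
    f.seek(1024**3 - 1)  # leaves a ~1 GiB hole
    f.write(b"\0")       # one real byte at the very end

st = os.stat(path)
print("apparent size:", st.st_size)          # 1073741824 bytes
print("on disk:      ", st.st_blocks * 512)  # typically far smaller
```

The "apparent size" is what ends up in the scanner's total, while df's "used" figure only grows by the on-disk amount, so the two can diverge by the full size of the holes.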

Another possibility is that you haven’t excluded some virtual filesystem (another reason to use --one-file-system) and that virtual filesystem contains about 180GB worth of data.
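One way to check for that is to look through the mount table for pseudo filesystem types. This is a hypothetical helper, not anything restic does internally, and the type list is illustrative rather than exhaustive; on Linux you would feed it the contents of /proc/self/mounts:

```python
# Common virtual/pseudo filesystem types you'd usually exclude from a backup.
VIRTUAL_FS = {"proc", "sysfs", "tmpfs", "devtmpfs", "devpts", "cgroup2", "debugfs"}

def pseudo_mounts(mounts_text):
    """Return (mountpoint, fstype) pairs whose type looks virtual,
    given text in the /proc/self/mounts format."""
    hits = []
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 3 and (fields[2] in VIRTUAL_FS or fields[2].startswith("fuse.")):
            hits.append((fields[1], fields[2]))
    return hits

sample = """\
/dev/sda1 / ext4 rw,relatime 0 0
proc /proc proc rw,nosuid 0 0
tmpfs /run tmpfs rw,nosuid 0 0
"""
print(pseudo_mounts(sample))  # [('/proc', '/proc' type 'proc'), ...]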

Edit: I’m not 100% sure how the scanner treats hard links (they are backed up and restored correctly, though), so it’s possible that the reported size of the scanned data is inflated by hard links; again, they will be processed correctly during the backup itself.
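To illustrate how hard links can inflate a naive scan total, here is a small sketch (again, not restic's actual scanner logic): summing `st_size` per path counts hard-linked data once per link, while deduplicating by (device, inode) counts it once per file:

```python
import os
import tempfile

# Two directory entries, one inode: 4 KiB of real data behind two names.
d = tempfile.mkdtemp()
a = os.path.join(d, "a")
b = os.path.join(d, "b")
with open(a, "wb") as f:
    f.write(b"x" * 4096)
os.link(a, b)  # b is a hard link to the same inode as a

# A naive scanner sums st_size per path and counts the data twice.
naive = sum(os.stat(p).st_size for p in (a, b))

# Deduplicating by (st_dev, st_ino) counts the shared data only once.
seen, dedup = set(), 0
for p in (a, b):
    st = os.stat(p)
    if (st.st_dev, st.st_ino) not in seen:
        seen.add((st.st_dev, st.st_ino))
        dedup += st.st_size

print(naive, dedup)  # 8192 4096
```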