So I backed up an old hard drive that had pictures on it that I’m fairly certain I have elsewhere. Out of curiosity, is it possible to see whether this snapshot contains any unique files? I suppose one way would be to clone the repo, run prune --max-unused 0, then delete the snapshot, run prune --dry-run, and see what the savings would have been?
It would be cool if you could do something like restic stats --find-unique and have it print a list of unique blobs and/or filenames. Just a thought!
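To make the idea concrete, here is a toy model of what such a hypothetical --find-unique would have to compute. This is not restic’s actual data model or API, just an illustration: each snapshot references a set of content blobs, and a blob is “unique” to a snapshot if no other snapshot references it.

```python
def unique_blobs(snapshots):
    """snapshots: dict mapping snapshot id -> set of blob ids.
    Returns a dict mapping each snapshot id to the blobs that
    only this snapshot references (i.e. what pruning it would free)."""
    result = {}
    for snap_id, blobs in snapshots.items():
        # union of all blobs referenced by *other* snapshots
        others = set()
        for other_id, other_blobs in snapshots.items():
            if other_id != snap_id:
                others |= other_blobs
        result[snap_id] = blobs - others
    return result

# Hypothetical example: three snapshots sharing some blobs.
snapshots = {
    "old-drive": {"b1", "b2", "b3"},
    "laptop":    {"b2", "b4"},
    "desktop":   {"b3", "b4", "b5"},
}
print(unique_blobs(snapshots)["old-drive"])  # only b1 is unique to the old drive
```

The clone-and-prune trick you describe effectively computes the same set, just by letting prune do the reference counting for you.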
The best approach I can think of would be to diff the snapshot against a recent one to get the filenames, then loop over all removed files and check their sizes. This approach has a few downsides:
It could take a while.
Due to how deduplication works, the content blobs might still be in use by other (unrelated) files. Since pictures are usually compressed in some way, I find that unlikely, but I might be totally off here.
I think moving a file would not count as a metadata change but as “removed” plus “added”, since the “node” moves while the metadata (mostly) stays the same.
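The diff step above can be scripted. Here is a sketch that collects the removed paths from `restic diff --json old new` output; it assumes the JSON-lines format where change messages carry "message_type", "path" and "modifier" fields, so please verify against what your restic version actually emits.

```python
import json

def removed_paths(diff_json_lines):
    """Collect paths reported as removed ("-") from the JSON-lines
    output of `restic diff --json <old> <recent>`.
    Assumes one JSON object per line; field names are an assumption
    based on restic's documented JSON output and may vary by version."""
    removed = []
    for line in diff_json_lines:
        line = line.strip()
        if not line:
            continue
        msg = json.loads(line)
        if msg.get("message_type") == "change" and msg.get("modifier") == "-":
            removed.append(msg["path"])
    return removed

# Example with output you might capture via:
#   restic diff --json <old-snapshot> <recent-snapshot> > diff.json
sample = [
    '{"message_type": "change", "path": "/pics/a.jpg", "modifier": "-"}',
    '{"message_type": "change", "path": "/pics/b.jpg", "modifier": "+"}',
    '{"message_type": "statistics", "source_snapshot": "abc123"}',
]
print(removed_paths(sample))  # ['/pics/a.jpg']
```

From there you could look up each removed path in the old snapshot (e.g. via restic ls) to sum up the sizes, with the caveat from point 2 that shared blobs mean this is an upper bound on what pruning would actually free.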
I stumbled upon this topic while searching for a way to find big files. For example, imagine accidentally downloading a large blob of data (e.g. backup data) into a directory that isn’t excluded, so it gets backed up. You catch it later and delete it, but now a few snapshots store useless data. I could probably live without those exact snapshots and prune that data, if only I knew it was there…
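For the big-file case, a filter over snapshot listings might already get you most of the way. The sketch below assumes the JSON-lines output of `restic ls --json <snapshot>`, where file entries carry "type": "file", "path" and "size" fields; the exact field names differ between restic versions, so check your own output first.

```python
import json

def big_files(ls_json_lines, min_size):
    """Return (path, size) pairs for files of at least min_size bytes,
    largest first, from the JSON-lines output of `restic ls --json <snap>`.
    Field names ("type", "path", "size") are assumptions about restic's
    output format; verify against your version."""
    hits = []
    for line in ls_json_lines:
        line = line.strip()
        if not line:
            continue
        entry = json.loads(line)
        if entry.get("type") == "file" and entry.get("size", 0) >= min_size:
            hits.append((entry["path"], entry["size"]))
    return sorted(hits, key=lambda item: item[1], reverse=True)

# Example with output you might capture via:
#   restic ls --json <snapshot-id> > ls.json
sample = [
    '{"time": "2024-01-01T00:00:00Z", "paths": ["/home"]}',
    '{"type": "file", "path": "/home/me/huge-backup.tar", "size": 5368709120}',
    '{"type": "file", "path": "/home/me/notes.txt", "size": 2048}',
    '{"type": "dir", "path": "/home/me"}',
]
print(big_files(sample, 1 << 30))  # files of 1 GiB or more
```

Running this per snapshot would at least tell you which snapshots contain the accidental download, so you know which ones to forget and prune.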
So if you find a way, let me know. It might help me too^^