Hi,
I’m using restic 0.7.3 and just created a new local repository. First I backed up only a single subfolder of the disk I wanted to back up:
restic -r /mnt/disk2/backup/ backup /mnt/disk1/folder1
This was about 160 GB. Afterwards I backed up the full disk with some excludes:
restic -r /mnt/disk2/backup/ backup /mnt/disk1/ --exclude /mnt/disk1/folder2/ --exclude /mnt/disk1/folder3/
The repository is now about 360GB.
Then I deleted the older snapshot (the one containing only the single folder) to clean things up a bit.
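For reference, I deleted it roughly like this (the snapshot ID below is just a placeholder, not the real one):

```shell
# list snapshots to find the ID of the folder-only snapshot
restic -r /mnt/disk2/backup/ snapshots

# remove that snapshot by ID (placeholder ID shown here)
restic -r /mnt/disk2/backup/ forget 1a2b3c4d
```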
Then I ran restic check and restic prune, and both reported problems:
$ restic -r /mnt/disk2/backup/ check
enter password for repository:
Create exclusive lock for repository
Load indexes
pack 3813e130 contained in several indexes: {e2535d14 f0f82fc6}
pack b4aeda7c contained in several indexes: {46d6d771 f0f82fc6}
pack 85372216 contained in several indexes: {2b9e53ae f0f82fc6}
pack 718e2bbe contained in several indexes: {b3438080 cf4b4b8e}
pack 0fa6fd5b contained in several indexes: {43b2dd73 cf4b4b8e}
pack 8ea6fbd0 contained in several indexes: {b337c26b bf1ac38e}
pack 76f66a59 contained in several indexes: {be54ffe0 f0f82fc6}
run restic 'rebuild-index' to correct this
Check all packs
Check snapshots, trees and blobs
$ restic -r /mnt/disk2/backup/ prune
enter password for repository:
counting files in repo
building new index for repo
[10:19] 100.00% 74065 / 74065 packs
repository contains 74065 packs (456931 blobs) with 357.935 GiB bytes
processed 456931 blobs: 0 duplicate blobs, 0B duplicate
load all snapshots
find data that is still in use for 1 snapshots
[0:15] 100.00% 1 / 1 snapshots
found 442915 of 456931 data blobs still in use, removing 14016 blobs
will remove 0 invalid files
will delete 0 packs and rewrite 3120 packs, this frees 116.879 MiB
[18:13] 100.00% 3120 / 3120 packs rewritten
counting files in repo
[9:18] 100.00% 74029 / 74029 packs
finding old index files
saved new index as 6a61a895
remove 80 old index files
[0:02] 100.00% 3120 / 3120 packs deleted
done
Where do these errors come from after so few operations? In particular, what are the 116 MiB of data that prune is deleting? I didn’t delete any files between the runs, and prune itself reports 0 duplicate blobs.