I’ve been using restic with my current setup very successfully for about 11 months, but I’ve noticed that in the last few days the total reported size has jumped from
Stats in restore-size mode:
Snapshots processed: 31
Total File Count: 104821947
Total Size: 1.861 TiB
to
Stats in restore-size mode:
Snapshots processed: 30
Total File Count: 101311613
Total Size: 27.496 TiB
This happened when my /tmp/ directory ran out of space on 23rd November.
How then can I reduce the total size back to its old amount please?
Since then I’ve been running /usr/local/bin/restic -r $RESTIC_REPOSITORY -p ~/cron/pass.txt forget $EXEMPTIONS --prune --repack-uncompressed --cache-dir $CACHE_DIR and its accompanying ‘rebuild-index’, ‘check’ and ‘stats’ commands, but I’ve been unable to reduce the size.
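For concreteness, that maintenance sequence can be sketched as a small script. This is my sketch, not the actual cron job: the DRY_RUN guard and the fallback values for RESTIC_REPOSITORY, EXEMPTIONS and CACHE_DIR are placeholders I’ve added so the commands can be previewed without touching a repository.

```shell
#!/bin/sh
# Sketch of the maintenance pass above. DRY_RUN and the fallback values
# are illustrative additions; the real cron environment is assumed to
# export RESTIC_REPOSITORY, EXEMPTIONS and CACHE_DIR.
set -eu

RESTIC=/usr/local/bin/restic
RESTIC_REPOSITORY=${RESTIC_REPOSITORY:-/srv/restic-repo}   # placeholder path
EXEMPTIONS=${EXEMPTIONS:---keep-daily 7 --keep-weekly 4}   # placeholder policy
CACHE_DIR=${CACHE_DIR:-$HOME/.cache/restic}
DRY_RUN=${DRY_RUN:-1}   # leave at 1 to print the commands instead of running them

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# EXEMPTIONS is deliberately unquoted so its flags split into words.
run "$RESTIC" -r "$RESTIC_REPOSITORY" -p ~/cron/pass.txt \
    forget $EXEMPTIONS --prune --repack-uncompressed --cache-dir "$CACHE_DIR"
run "$RESTIC" -r "$RESTIC_REPOSITORY" -p ~/cron/pass.txt rebuild-index
run "$RESTIC" -r "$RESTIC_REPOSITORY" -p ~/cron/pass.txt check
run "$RESTIC" -r "$RESTIC_REPOSITORY" -p ~/cron/pass.txt stats --mode restore-size
```

Setting DRY_RUN=0 would execute the four steps in order; with the default it only prints each command line.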
Load(<data/9b060c4615>, 0, 0) returned error, retrying after 2.502263465s: Copy: write /tmp/restic-check-cache-420226399/218c8db944c4f519754705ab1fec5ebd7a95e007ebc90213d749328fee73623e/data/9b/tmp-411571184: copy_file_range: no space left on device
None of my snapshots contain my tmp directory, as that is on a separate hard drive from both my source files and the restic backups. I’ve looked at the system tmp directory and there is nothing there relating to restic, and I’ve also rebooted since the 21st November, when the problems developed.
But seeing as the problems arose on the 21st November, are you suggesting that I remove all snapshots since that date up to now?
I’ve removed 2 snapshots and restic also removed 2 snapshots before going into its ‘rebuild-index’, ‘check’ and ‘stats’, and the total size has changed from 27.496 TiB at 0400 today to 25.610 TiB at 1535, so it’s going in the right direction, but I’m unsure of the next step.
Sorry, I seem to have totally misunderstood you before. I thought some content of your /tmp folder had ended up in your repo! I have no idea where the extra data could have come from.
Can you mount the repo and explore what’s in it? ncdu is a very good tool for that. After mounting as I described above, you’d do something like this to get a detailed view of its contents:
ncdu mnt/snapshots/latest (might take a while to go through the 25 TiB)
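Spelled out, that exploration workflow looks roughly like this. It is a sketch under assumptions: mnt is an arbitrary empty mountpoint, the repo and password flags mirror the earlier commands, and the steps are printed rather than executed here so nothing touches the repository.

```shell
#!/bin/sh
# Dry-run listing of the mount-and-browse steps; run the printed
# commands directly (restic mount blocks, so background it) to do it for real.
STEPS='mkdir -p mnt
/usr/local/bin/restic -r "$RESTIC_REPOSITORY" -p ~/cron/pass.txt mount mnt &
ncdu mnt/snapshots/latest
fusermount -u mnt'
printf '%s\n' "$STEPS"
```

The FUSE mount stays alive until it is unmounted, so ncdu can walk mnt/snapshots/latest like any ordinary directory tree; fusermount -u detaches it when you’re done.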
Maybe someone else here with more in-depth knowledge of restic has an idea what could have gone wrong with restic itself here. But I doubt that this is the case.
To be honest, I’m not sure. It’s very definitely the product of version 1 and version 2 backups, as I’d migrated to the new backup version 2. But not to worry: I’ve started a new restic backup, which took just under 4 hours to complete, as against the 39 hours version 1 took!
If you run stats in restore-size mode, it displays the size it would take to restore ALL snapshots. The repository space actually used is typically much smaller due to deduplication.
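A toy illustration of the difference (nothing restic-specific here; single letters stand in for content blocks shared across snapshots): restore-size counts every snapshot’s blocks in full, while the deduplicated store keeps each unique block once.

```shell
#!/bin/sh
# Three "snapshots" that share blocks a, b and c; only their last
# block differs. Restore size sums all blocks across all snapshots,
# while deduplicated storage counts each unique block once.
snap1="a b c d"
snap2="a b c e"
snap3="a b c f"

restore_size=$(printf '%s\n' $snap1 $snap2 $snap3 | wc -l)         # 12 blocks
raw_data=$(printf '%s\n' $snap1 $snap2 $snap3 | sort -u | wc -l)   # 6 unique blocks

echo "restore-size blocks: $restore_size"
echo "raw-data blocks:     $raw_data"
```

With 31 snapshots sharing most of their content, the same effect explains how a repo can report tens of TiB of restore size while occupying far less on disk.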
The restore size reported by certain older restic versions excluded files almost at random when computing the restore size for multiple snapshots (more precisely, everything with the same inode ID was only counted once). That number had some correlation with the real restore size, but that’s about it.
Other replies have explained how to get rid of unwanted snapshots, but if you do not fix the source data, you will back it up again in the next run. So verify the data source that your script backs up.
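One way to check that before the next scheduled run is a dry-run backup. Hedged: backup --dry-run needs restic 0.13 or later, and /path/to/source is a placeholder for the real source directory; the snippet below only assembles and prints the command.

```shell
#!/bin/sh
# Build the preview command. --dry-run reports what a backup would
# upload without writing to the repository; SOURCE is a placeholder.
SOURCE=${SOURCE:-/path/to/source}
CMD="/usr/local/bin/restic -r \$RESTIC_REPOSITORY -p ~/cron/pass.txt backup --dry-run -vv $SOURCE"
echo "$CMD"
```

Run the printed command against the real source; with -vv it lists which files would be added, so unexpected new data shows up before it lands in the repo.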