Hi, I have a storage box service on Hetzner (10 TB).
Currently I have just one restic snapshot of my data, and its reported size is about 4.1 TiB.
ID        Time                 Host  Tags  Paths                        Size
----------------------------------------------------------------------------
4718ae95  2025-05-21 16:31:42  nas         /share/CACHEDEV3_DATA/Veeam  4.169 TiB
Yesterday I tried to upload a second snapshot, but it failed partway through:
repository dcb067c6 opened (version 2, compression level auto)
using parent snapshot 4718ae95
[0:33] 100.00% 413 / 413 index files loaded
Save(<data/808519e6c7>) returned error, retrying after 561.435156ms: OpenFile: sftp: "Quota exceeded" (SSH_FX_FAILURE)
Save(<data/95e97eb630>) returned error, retrying after 750.808699ms: Rename: sftp: "Quota exceeded" (SSH_FX_FAILURE)
Save(<data/085d134f9c>) failed: sftp: no space left on device
Fatal: unable to save snapshot: sftp: no space left on device
If I check Hetzner, it shows that the disk space is full, but I can’t explain that: a single 4.1 TiB snapshot shouldn’t come close to filling a 10 TB box.
Could this be a Hetzner issue, or am I missing something about restic’s behavior? How can I verify it?
Thank you!
restic 0.17.3 compiled with go1.23.3 on linux/arm64
Start by poking around your Hetzner account to see how much free space you actually have left. If it is less than you expect, identify what is using it: it could be some other data, or ZFS snapshots (if you have them enabled).
If the free space is what you expect, then maybe your next snapshot is simply too large?
Also, it is possible that your repo holds some unused data (in case you have already deleted some snapshots). Make sure you prune it.
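One quick way to check for that (a sketch; the sftp URL below is a placeholder for your own Storage Box and repo path):

# Size of the data actually referenced by your snapshots, in its compressed on-disk form
restic -r sftp:uXXXXXX@uXXXXXX.your-storagebox.de:restic-repo stats --mode raw-data

If this number is much smaller than the usage Hetzner reports for the box, the difference is data sitting in the repo that no snapshot references any more.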
That is the correct command, but you should read the documentation more carefully:
For all remove operations, two commands need to be called in sequence: forget to remove snapshots, and prune to remove the remaining data that was referenced only by the removed snapshots.
You removed the snapshot but not the data from the repo, so it still occupies storage space.
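For example, with the snapshot ID from earlier in this thread (assuming the repository is set via RESTIC_REPOSITORY or -r):

# Step 1: remove the snapshot record
restic forget 4718ae95

# Step 2: actually delete the data that is no longer referenced by any snapshot
restic prune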
Try running prune… but if there is no free space left at all, that can be problematic now… In such a case, refer to:
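One option worth knowing about (based on my reading of restic’s prune flags, not the link above; treat it as a sketch) is a delete-only prune, which removes completely unused packs without repacking anything, so it needs no extra free space:

# Only delete packs that contain no referenced data; skip repacking entirely
restic prune --max-repack-size 0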
Hi, I just ran restic check and I have several packs “not referenced in any index”.
This is the last part of my output:
221 additional files were found in the repo, which likely contain duplicate data.
This is non-critical, you can run `restic prune` to correct this.
check snapshots, trees and blobs
[0:22] 100.00% 1 / 1 snapshots
no errors were found
Anyway, at this point I’m not feeling very confident: are these packs related to my last (partial) backup attempt, or are they leftovers from my previous snapshot?
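One non-destructive way I could check (assuming default options; nothing is modified with --dry-run):

# Report what prune would delete, without touching the repository
restic prune --dry-run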
Hi, I have no snapshot feature on Hetzner (already checked).
Now I am running prune, and it looks like it will remove about 6 TiB of data!
collecting packs for deletion and repacking
[26:22] 100.00% 630287 / 630287 packs processed
to repack: 0 blobs / 0 B
this removes: 0 blobs / 0 B
to delete: 6415911 blobs / 5.990 TiB
total prune: 6415911 blobs / 5.990 TiB
remaining: 4536414 blobs / 4.006 TiB
unused size after prune: 131.014 GiB (3.19% of remaining size)
So, definitely not an issue with Hetzner.
I wasn’t aware of these “unused packs”. In the future I should run restic check to be sure of my repository’s actual contents.
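Something like this as a periodic routine, maybe (the retention policy here is just an example, not a recommendation):

# Verify the repository structure and index consistency
restic check

# Apply a retention policy and delete unreferenced data in one go
restic forget --keep-last 4 --prune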
Just for clarity, for anyone else reading this thread and getting confused: yes, this is 100% correct. I’m unsure why kapitainsky suggested above that you’d need to run prune separately, since you had passed the --prune flag to the forget command. You only need to run prune on its own when you’ve called forget without the --prune flag; your output shows you did include --prune, so a separate prune run should not have been necessary.
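In other words, with the snapshot ID from earlier in the thread, this single invocation should leave the repository in the same state as running forget and then prune back to back:

# Forget the snapshot and prune unreferenced data in one run
restic forget 4718ae95 --prune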
At this point, my only explanation for all these packs “not referenced in any index” is that they belong to some previous partial backup job. Does that sound reasonable?