Full filesystem prevents any forget/prune actions

I have a lot of remote backups, and my disk space got saturated. So the forget and prune commands fail because they cannot even create the lock on the remote storage server.
How can I make room so that I can start doing some cleanup? Can I simply manually ‘rm’ some of the oldest directories under the ‘/data’ directory in the repositories?
If I do this, how could I ‘resync’ restic so that subsequent snapshots, forget, prune and backup commands run fine? Should I simply run a ‘forget’ command to remove the deleted snapshots? Or is there no need to resync at all?

Using restic 0.9.6.

You should not delete files inside the restic repository.

First of all, please update to the latest release, which is version 0.11.0 - it contains numerous significant optimizations. Then run forget, yes (otherwise there won’t be anything to prune).

Second, free up as much space as you can on that device (if needed, temporarily move some files other than those in your restic repository to some temporary storage).

Third, please read https://restic.readthedocs.io/en/latest/060_forget.html#customize-pruning and then use the --max-repack-size option to the prune command, to limit how much disk space restic uses when repacking files during the prune (e.g. if you have 2 GB free, set --max-repack-size 1g).

The more space you can temporarily free up, the better :slight_smile:
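
Putting those steps together, the sequence could look roughly like this - the repository URL and the keep policy below are just placeholders, adjust them to your own setup:

    # Placeholders: adjust the repository URL and the --keep-* policy to your setup.
    export RESTIC_REPOSITORY="sftp:backup@storage-server:/srv/restic-repo"

    # 1. Forget snapshots according to your retention policy:
    restic forget --keep-last 5

    # 2. Prune, but limit repacking so it fits into the space that is free
    #    (with ~2 GB free, start with roughly half of that):
    restic prune --max-repack-size 1g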

Thanks a lot for the quick answer.
I’m a bit wary of upgrading right now. Is that really mandatory? I do not want to mess too much with versions. Are those versions absolutely compatible?
Regarding forget, I cannot run it; it fails with the error ‘can not create lock’.

Second, I cannot free up space; that device is purely dedicated to storing the backups, and I have nothing else on it.
The only option would be to move other restic repositories (which are stuck too). Could I simply temporarily move some repositories (through a tar of data, locks, keys, snapshots, index and config) to another device, so that I can forget and prune whatever repos are left?

Yes, there’s nothing to worry about in that regard. You really want to use the latest version; it has a ton of improvements in the prune and check commands, and many other things as well. Just do it :wink:

You can run it with --no-lock as long as you can guarantee that nothing else is touching the repository at the same time. But if you can do the below to temporarily free up space, start with that instead.
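
For example, something along these lines (the repository URL is a placeholder, and again: only if you are certain nothing else is accessing the repository at the same time):

    # Only safe if no other restic process touches this repository right now!
    restic -r sftp:backup@storage-server:/srv/restic-repo forget --keep-last 5 --no-lock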

Yep, or even just parts of them. As long as e.g. one of those repositories isn’t touched, you can move just a part of it out of there, fix your other repo(s), and then move exactly the same parts back again - the meanwhile-unused repository won’t notice or care.

I’d suggest using rsync or rclone instead of tar, but I guess tar should work too.

If you can just get e.g. 10 GB free, then you should be able to free up more and more space using the option to prune I mentioned earlier - run it once, then adjust the value for it as you get more and more space free.
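
A rough sketch of that shuffle, assuming two local repositories under /backups and a temporary disk mounted at /mnt/tmp-storage (all the paths here are made up; with repositories on a remote server you would do the same over ssh/rsync):

    # Move part of the currently unused repository out of the way.
    # Nothing may access /backups/repoA while its files are elsewhere!
    rsync -a --remove-source-files /backups/repoA/data/00 /mnt/tmp-storage/repoA-data/

    # Use the freed space to clean up the other repository:
    restic -r /backups/repoB forget --keep-last 5
    restic -r /backups/repoB prune --max-repack-size 10g

    # Move exactly the same files back again:
    rsync -a /mnt/tmp-storage/repoA-data/00 /backups/repoA/data/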

FWIW, I had a storage for two repos the other day where I found that there was just 2 GB of disk space free (out of 300 GB). I used forget, then prune --max-repack-size 1g for the first run, got some space freed up, then prune --max-repack-size 10g, and after that I was home free.

Thanks. I’m realising that the backups seem to have been stuck for quite a while; the most recent ones on some repositories are nearly two months old, so they are actually useless. Is there a way to aggressively and completely remove them and start fresh?

Personally I would just delete the entire repository and init a new one. I guess you could forget all snapshots and then prune, possibly with that --no-lock option, but why complicate it.

If you do delete it and then create a new one, the client will generate a new cache directory (you can delete the old one).
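
For example, roughly like this (local paths used for brevity - with a remote repository you would delete the directory on the server and pass the usual sftp/rest URL to init):

    # WARNING: this permanently deletes the old repository and all its snapshots!
    rm -rf /srv/restic/old-repo

    # Initialise a fresh repository and take a first backup into it:
    restic -r /srv/restic/new-repo init
    restic -r /srv/restic/new-repo backup /home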

Now that you mention it, indeed, I should have found that option on my own! :slight_smile:
Thanks again for your precious help.

You’re welcome! You’re welcome! (a second time to make Discourse happy)

Note, however, that this is not yet included in 0.11.0; you need to get the latest beta…

@vbartro As @alexweiss wrote above, you need the latest beta, which you can download from https://beta.restic.net/?sort=time&order=desc . Sorry about the confusion.

I tried removing all the data. This allowed my backups to resume, because I now have some free disk space.
But a “restic snapshots” command still shows me the list of all previously known snapshots.
I guess this is coming from the cache, which, if I’m not mistaken, is located under $HOME/.cache/restic.
I wanted to clean the cache too, but I’m not sure how to do it. I have a lot of different repositories managed from this restic installation, and I’m seeing around 20 subdirectories under the .cache/restic directory.
Is there one directory per repository? If so, how can I identify which directory relates to which repository?
Or should I run a restic cache --cleanup?
Or should I simply remove everything under .cache/restic, and restic will then rebuild it from the various remote repositories?
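
(For reference, the cache-related commands I’m referring to, as far as I understand them:)

    # List the local cache directories restic knows about (they are named
    # after the repository ID, not the repository path or URL):
    restic cache

    # Remove cache directories that restic considers old / no longer used:
    restic cache --cleanup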

Any chance there’s documentation on how to use the beta files? I can’t seem to find even a short note on what to do with the downloaded beta for the version I would use.

It’s just one single binary executable file. All you have to do is download it and run it (perhaps doing chmod +x thefilename first, if you’re on Linux et al.). Just make sure you don’t get the source download, but one of the prebuilt binaries for your platform.
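
On Linux that boils down to something like this (the filename is just a placeholder for whatever build you downloaded for your platform):

    # "restic-beta" stands in for the actual filename you downloaded
    # from https://beta.restic.net/ for your platform:
    chmod +x ./restic-beta
    ./restic-beta version    # should print the beta version number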

All, just wanted to thank everybody for their help. I finally removed all old backups (removed the whole directory where backups were being stored) and started building backups again. I added some monitoring of my own to make sure I’m getting proper warning when disk space gets dangerously low.

If I may, one nice addition would be proper stats at various steps of a restic sequence:

  • when calling the ‘snapshots’ and ‘check’ commands: it would be nice to know each snapshot’s size

  • when calling the ‘backup’ command: it would be nice to have the total disk space used by the backup we just ran

  • when calling forget: it would be nice to know how much disk storage will be freed

  • when calling prune: it would be nice to know how much disk space was actually freed.

Some of this information is already available, but it is kind of ‘hidden’ in the logs, and so could potentially change between versions. And I’m not sure whether it refers to the data volume before or after compression/encryption. I could for example grep for ‘Added to the repo’ to get the backup size, or grep for ‘this frees’ to know how much was freed after a prune. And I then have to convert various units (MiB, GiB, KiB) to a single unit before I can compute stats and warnings.
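
To give an idea, the kind of conversion I mean looks roughly like this (a bash sketch; the values here are only illustrative, not restic’s exact output):

    # Illustrative only: normalise a "<value> <unit>" pair, as found in
    # restic's human-readable output, to a single unit (bytes).
    to_bytes() {
      local value=$1 unit=$2 factor
      case "$unit" in
        B)   factor=1 ;;
        KiB) factor=1024 ;;
        MiB) factor=$((1024 * 1024)) ;;
        GiB) factor=$((1024 * 1024 * 1024)) ;;
        *)   echo "unknown unit: $unit" >&2; return 1 ;;
      esac
      awk -v v="$value" -v f="$factor" 'BEGIN { printf "%.0f\n", v * f }'
    }

    to_bytes 1.234 GiB   # prints 1324997411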

See this PR for JSON output: