Interrupted Backup - Repository Cleanup?

That is very interesting. I just searched my 2.2 TB of backups here at home and didn’t find a single tmp or temp file. I’m using rest-server as a backend fwiw.
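
For anyone who wants to run a similar search on the machine that holds the repository, something along these lines does the job (the name patterns are only a guess; adjust them to whatever your backend actually leaves behind):

find /path/to/repo -type f \( -iname '*tmp*' -o -iname '*temp*' \)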

The only way that I ended up with any temp files was when my backup process was prematurely interrupted/cancelled. During normal operation, I didn’t see any temp files lingering around.

I’m currently testing with an sftp repository. I can’t speak to whether temp files get created by other backends (like rest-server, as you mentioned).

When you are lost in the matrix, the best thing to do is return to the source…

When you terminated the backup, you didn’t use SIGKILL by chance did you (if on UNIX-like system)?

i.e.

kill -9 pid
# or
kill -SIGKILL pid

That would likely prevent the deferred function call that cleans up the temporary file from running before the process terminates.
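
As a rough shell analogy (not restic’s actual code, just the same idea with made-up names): a cleanup handler fires on a normal exit, on Ctrl-C and on a plain kill, but SIGKILL can never be caught, so nothing gets a chance to run.

tmp="$(mktemp /tmp/upload-demo.XXXXXX)"
cleanup() { rm -f "$tmp"; }
trap 'cleanup; exit 1' INT TERM   # Ctrl-C (SIGINT) or kill <pid> (SIGTERM) clean up
trap cleanup EXIT                 # so does a normal exit
sleep 600                         # stand-in for a long-running upload
# kill -9 <pid> (SIGKILL) bypasses all of the above and leaves $tmp behind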

Otherwise, not sure why the tmp files remain.

Damo.

When you terminated the backup, you didn’t use SIGKILL by chance did you (if on UNIX-like system)?

No. I terminated it while it was running from the CLI by pressing Control-C, which should be the equivalent of SIGINT?
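
For what it’s worth, you can check which key your terminal treats as the interrupt character (the one that makes it send SIGINT to the foreground process):

stty -a | grep intr
# the "intr = ^C" entry confirms that Ctrl-C sends SIGINT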

I found the code that writes my particular temp files in sftp.go, so it looks like the creation of temporary files is specific to the backend.

I won’t pretend to understand the code well enough to know why these temporary files linger around if the backup is interrupted. But I’m glad to know that they can be removed after the fact without any ill effects.

My Backblaze bucket already had “Keep only the last version of the file” set. I waited well over a day (probably closer to 48 hours) to see if those files would be automatically deleted, but they were not. I’m going to let them sit for a whole week and try to remember to check daily to see if they do end up clearing up eventually.

For those still following along, Backblaze did eventually delete those files from my bucket that restic had purged. It took closer to 48 hours or so, but it did eventually happen.

I don’t like how they start billing you immediately when you use space, yet when you go to delete space you end up paying for it for a few extra days until they truly delete it. But on the flip side they’re pretty affordable and you only pay for what you use.

I don’t trust those cloudy folks if I don’t have to. One disk in my basement, one in my drawer and one in my brother’s basement; rsync for syncing, then regular restic checks with --read-data at all ends. It’s cheap and has been working for years.
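
As a rough sketch of that kind of setup, with placeholder paths for the two local repository copies:

# sync the primary repository to the second disk
rsync -a --delete /mnt/disk1/restic-repo/ /mnt/disk2/restic-repo/

# verify each copy, reading back all pack data
restic -r /mnt/disk1/restic-repo check --read-data
restic -r /mnt/disk2/restic-repo check --read-data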

I just run this at the end of my scripts:

rclone --include ".davfs.tmp*" --include "*-restic-temp-*" --include "*-tmp-*" --include "~tmp*" delete hetzner:restic-db

Use it with -n first, to make sure it only deletes what you want it to.
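
i.e. the dry run would look like this; check the files it lists before running the real delete:

rclone -n --include ".davfs.tmp*" --include "*-restic-temp-*" --include "*-tmp-*" --include "~tmp*" delete hetzner:restic-db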

I’m the same. Individuals on their bargain-bin plans mean little to cloud providers, and they will indemnify themselves as part of their ToS against any losses you might incur. They won’t accept any accountability for your losses.

If you have the skills and time, the 3-2-1 backup strategy at least gives you full control and accountability for the safety of your data.

Those temp files are only used by the local and the SFTP backend to ensure that pack files are written atomically. Restic first creates such a temp file, writes the whole file content to it and finally renames it to its final name. The rename happens atomically, so if you delete one of those files while restic is actively writing to it, this will just result in an upload error that will automatically be retried.
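
In shell terms the pattern is roughly this (the file names are made up for illustration):

# write the complete pack file under a temporary name first
cp pack.bin /repo/data/ab/tmp-123456
# then rename it into place; a rename on the same filesystem is atomic,
# so the final name only ever points to a complete file
mv /repo/data/ab/tmp-123456 /repo/data/ab/abcdef0123456789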

tl;dr: Feel free to just delete those temp files.

If someone wants to implement the cleanup for those files, feel free to open a PR.