Prune jobs killed

Hey there,

I am using restic (over SFTP) to back up my Nextcloud instance hosted on a small NanoPi NEO2, and everything seems to be working perfectly.

But I just noticed a problem in the script that runs my backups and regular prunes:

When pruning my repository, the process gets killed early on, apparently by the OOM killer.
Pruning works fine when I add --max-unused unlimited (so no repacking occurs).
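In case it helps anyone reading along, there is also a middle ground between full repacking and none: prune accepts a --max-repack-size flag that caps how much data a single run repacks. A hedged sketch (the repository location and the 200M cap are just example values, not from my setup):

```shell
# Sketch, not a drop-in script: cap the amount of data repacked per
# prune run so less has to be held in flight at once.
# Repository URL and size value are illustrative assumptions.
export RESTIC_REPOSITORY="sftp:backup@example.com:/srv/restic-repo"
restic prune --max-unused 5% --max-repack-size 200M
```

Leftover repack work simply carries over to the next prune run, so the repository still converges over repeated runs.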

I suspect the minimal specs of the SBC are to blame… it’s an Allwinner H5 with a quad-core 64-bit Cortex-A53, but more importantly, it has only 512 MB of DDR3 RAM, and with Nextcloud running in the background, Apache and MariaDB already need quite a lot of that…

I already added export GOGC=5 to my script at some point to reduce memory usage, though I can’t really remember the details… but that should help, shouldn’t it?
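For what it’s worth, GOGC controls the Go runtime’s garbage collector: a collection is triggered once the heap has grown by that percentage over the live data (the default is 100). A low value like 5 trades extra CPU time for a smaller peak heap, which is exactly the trade-off wanted here. A minimal sketch (the actual prune invocation is left as a commented placeholder):

```shell
#!/bin/sh
# GOGC=5: run the Go GC once the heap grows 5% past the live data,
# instead of the default 100%. Lower peak memory, more CPU time.
export GOGC=5

# restic prune ...   # actual backup/prune commands go here

echo "GOGC is set to $GOGC"
```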

Is there any other way I can restrict restic to use less memory?

Can somebody point me in the right direction?

Any help is appreciated! Thank you!

Edit: The stages “finding data that is still in use for XX snapshots”, “searching used packs”, “collecting packs for deletion and repacking” and “deleting unreferenced packs” all complete fine. But once “repacking packs” starts (at around 3 / 317 packs, after about 1 minute), the process is killed…

Edit2: I have tested running only the prune on the machine, with nothing else running, and it went through. So apparently, if nothing else is using up a lot of memory, 512 MB is indeed enough… but the whole prune took 30 minutes and I would rather not shut down my Nextcloud for that long. Still, it’s a workaround for now.
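Since stopping Nextcloud frees enough memory, one way to automate that workaround is a small wrapper that stops the services, prunes, and restarts them even if the prune fails. A sketch only — the service names apache2 and mariadb are assumptions and may differ on your distro:

```shell
#!/bin/sh
# Sketch: free RAM for the prune by stopping Nextcloud's services,
# then restart them no matter how the prune exits.
# Service names (apache2, mariadb) are assumptions.
set -eu
systemctl stop apache2 mariadb
trap 'systemctl start mariadb apache2' EXIT   # runs even on failure
restic prune
```

The trap ensures the services come back up even if restic is OOM-killed partway through.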

You can’t really avoid the fact that your memory is tight. You could try using zram to extend things a bit. I did that on a 1 GB RPi 3, and for a while restic would work more comfortably while other things were still running. But in the end, with the repo (and therefore the memory requirements) growing, I bit the bullet and got a 4 GB RPi 4.
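For reference, a rough sketch of setting up a zram swap device by hand (needs root; the compression algorithm and size are example values, and many distros ship a zram-tools package or systemd unit that does this more robustly):

```shell
#!/bin/sh
# Rough zram-swap sketch; run as root. Algorithm and size are examples.
modprobe zram num_devices=1
echo lz4  > /sys/block/zram0/comp_algorithm   # compression algorithm
echo 256M > /sys/block/zram0/disksize         # uncompressed capacity
mkswap /dev/zram0
swapon -p 100 /dev/zram0   # higher priority than any disk-backed swap
```

Because the swapped-out pages are compressed in RAM rather than written to the SD card, this is both faster and gentler on flash wear than ordinary swap.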