Memory consumption


For quite a while now, I have been experiencing memory problems with restic (I use 0.8.0). I use restic to back up LVM volumes on the hypervisor, and I have already allocated an additional 4 GB of memory in total just to make backups work. But restic is very resource hungry.

What can I do (e.g. disabling the cache) to reduce memory consumption?


– Jaap

Hey, we’re already aware of the issue. I need to find some time to try and improve the memory usage.

In the meantime you can try setting the environment variable GOGC to a value between 1 and 100 (e.g. start with 20). This makes the Go runtime's garbage collector more aggressive and may reduce memory usage.
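For reference, this is a minimal sketch of how the suggestion could be applied; the repository path and backup source are placeholders:

```shell
# Make the Go garbage collector more aggressive: trigger a collection
# once the heap grows 20% past the live data (the default is 100).
export GOGC=20

# Then run the backup as usual; repository and source paths are examples.
restic -r /srv/restic-repo backup /data
```

Lower GOGC values trade extra CPU time (more frequent collections) for a smaller peak heap, so it is worth experimenting to find a value that fits the available memory.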

Ok, great. I will try that.

@jgrfrm Would you mind reporting back with how it went for you with this? Did you try setting the GOGC environment variable for restic, what did you set it to and did it change anything?

Personally I’m using GOGC=20 on a somewhat memory-constrained OpenSUSE system to make restic run (without it, restic fails due to lack of memory), so you could try that.

I tried that.

I use restic on a Xen hypervisor to back up my logical volumes by taking snapshots.

Because it is a hypervisor, I do not want to assign it too much memory (1 GB max).

Even with the GOGC setting, restic does not work in such a memory-constrained environment.

In fact, it consumes all available memory, to the point where the hypervisor crashes.

A use case of restic would be to back up a hypervisor by taking a snapshot of the logical volumes containing the guests.

But the high memory consumption of restic does not allow this.
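For context, the snapshot workflow described here would look roughly like the following; the volume group, volume names, mount point, and copy-on-write size are all placeholders:

```shell
# Create a temporary copy-on-write snapshot of a guest's logical volume.
# Volume group, volume name, and the 1G snapshot size are examples.
lvcreate --size 1G --snapshot --name guest1-snap /dev/vg0/guest1

# Mount the snapshot read-only and back it up with restic.
mkdir -p /mnt/guest1-snap
mount -o ro /dev/vg0/guest1-snap /mnt/guest1-snap
restic -r /srv/restic-repo backup /mnt/guest1-snap

# Clean up the snapshot when done.
umount /mnt/guest1-snap
lvremove --force /dev/vg0/guest1-snap
```

The snapshot gives restic a consistent view of the guest's disk while the guest keeps running, which is exactly why the memory ceiling on the hypervisor matters here.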


– Jaap

Is there any way you can share the snapshot with another host, and run restic there to back it up?

Yes, that is now how I want to proceed.

I make a daily backup on a separate server with enough memory.

I plan to use that server to make off-site restic backups.
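A sketch of what the off-site step could look like using restic's SFTP backend; the user, host, and paths are placeholders:

```shell
# Back up the staged daily copies from the intermediate server to an
# off-site repository over SFTP; user, host, and paths are examples.
restic -r sftp:backup@offsite.example.com:/srv/restic-repo backup /backups/daily
```

This way the memory-hungry restic process runs on the well-provisioned staging server rather than on the hypervisor itself.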

– Jaap

At the risk of promoting the impression of an AA group gathering:

“When I realized I was stuck with memory consumption, it was already too late.”

I have restic running in a cron worker, as linked.
And it appears from my cluster monitor that after a backup the memory does not get freed; the container sticks around idle, using about 2 GB of memory…

Maybe it’s a new aspect of the issue, but in any case it would be a great time of year to see the memory footprint go down during backups…

If it’s any comfort, memory consumption will be easier to address and optimize thanks to improved profiling possibilities once the new archiver is in place.

9 posts were split to a new topic: “restic prune” returns stacktrace