I’m experiencing out-of-memory errors after upgrading from 0.8.3 to 0.9.3/0.9.4. The repository continues to (appear to) work properly using 0.8.3. Both the backup and check commands fail in 0.9.4. I’m using the dockerized rest-server backend (https://hub.docker.com/r/restic/rest-server/) running under Docker on a Synology NAS, with the restic front end running under Win7. The repository is approximately 2.6TB.
I didn’t see how to attach a log file and the log is too big to post. The complete log can be found here: https://www.dropbox.com/s/62edvtv2t8ehfgo/resticoom.txt?dl=0. It contains the output from:
restic -r rest:http://synology:8001 check >resticoom.txt 2>&1
Any help appreciated!
Did you solve that in the meantime?
Nope. No one ever replied, and for now I’m just using 0.8.3. I’ve actually been chasing another problem where the restic repository written by the backend running in Docker on a Synology NAS isn’t being backed up to Backblaze B2 unless the cloud sync process is paused and resumed.
If you have any suggestions they’d be welcome!
I am still in the evaluation phase, so I don’t have enough experience to troubleshoot your problem. Reports like that make me uneasy though.
Maybe @fd0 can shed some light on the issue?
Sorry for not responding earlier, I don’t have much time at the moment. restic keeps an index of what data is stored where in the repo in memory. It seems to scale with the number of files (more files -> more memory usage) rather than the size of the repo. I’ll tackle this issue next.
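To illustrate that scaling, here is a toy sketch in Go (not restic’s actual index structures; the types below are invented for illustration): each stored blob gets a fixed-size in-memory entry, so memory grows with the number of blobs (roughly, the number of files), not with how big the blobs are.

```go
package main

import "fmt"

// Toy sketch only -- not restic's real data structures. restic's
// repository is content-addressed, so assume a 32-byte SHA-256 ID.
type blobID [32]byte

// One fixed-size entry per blob: which pack file holds it and where.
type indexEntry struct {
	pack   blobID // pack file containing the blob
	offset uint32
	length uint32
}

func main() {
	index := make(map[blobID]indexEntry)
	for i := 0; i < 100000; i++ {
		var id blobID
		id[0], id[1], id[2] = byte(i>>16), byte(i>>8), byte(i)
		index[id] = indexEntry{offset: uint32(i)}
	}
	// Each entry costs a fixed ~72 bytes plus map overhead, regardless
	// of blob size: memory tracks the blob/file count, not the 2.6TB.
	fmt.Println("entries:", len(index))
}
```

At tens of millions of blobs, fixed per-entry costs like this already add up to gigabytes, which would match the behaviour described above.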
Did you ever have a chance to look at this?
@fd0 Any chance of this issue getting some attention? When this happened I reverted to 0.8.3 but now I find I need “find --tree” and that’s blowing up with the current version so I’m looking for a solution. Is it possible (for me) to build the current version with a larger heap (or whatever memory restic is using)?
Please try running restic with the environment variable GOGC=20; this might help reduce the amount of memory needed.
Thanks very much for the suggestion. Currently the machine is down while the office gets reorganized so as soon as I have it back up I’ll try this.
Same failure, unfortunately. Also unfortunately I’ve had a memory failure and so am currently running with only 8G. Crucial seems unable to find a replacement for this DDR3 CL8 DIMM so it may be a week or two before I’m back up to 16G.
This is really annoying. But looking at the output you have, it appears that restic is/was using 1.5GB of memory and wanted to allocate another 800KB. If you have 8GB on that machine, something else must be hogging a lot of memory?
Everything is relative, so I’m not going to say that 1.5GB of memory is a low requirement for a backup program, but it is what it is currently. I don’t feel it’s insanely high either, for a repo of that size. Surely we’d like to make it less, but people with actual coding skills need to attend to that.
Can you show us the output of “free” right before you run restic, and also an updated output from restic, so we can see the amounts of memory at the same point in time?
So I don’t have a problem with memory consumption as such; as far as I’m concerned it’s there to be used. I was having the same failure before the 8G DIMM crapped out, so I was seeing this on a 16G machine. In any event, it’s Win7, not Linux, so I don’t have the “free” command available. However, the numbers from the task manager before starting are:
I did shut down the “memory pigs” but that’s relative as nothing was really consuming much beyond a couple of hundred meg.
As I ran restic check I watched the total memory in use, and it didn’t rise above 6.5G before restic threw the out-of-memory exception. GOGC=20 didn’t seem to have any effect.
I’m finally back to a “reasonable” memory configuration: 24G. I’m getting the same failure at about the same point: the task manager indicates about 6.5G of memory in use and then restic throws the out-of-memory exception.
So is there a heap size parameter or something similar I can tweak, either at runtime or by recompiling?
I do have the log of this failure with 24G, but it’s a couple of thousand lines so I’m loath to include it here in the post.