Restic copy - memory issues

Hi

Running restic 0.18 on Debian 13, in an LXC running on Proxmox.

2 CPUs + 1G RAM

I am just filling my new cloud storage location, so the initial copy is taking time - 4 days and counting.

What I have seen is that restic's memory usage has grown - I had to add an extra 4G to the LXC because restic was using more and more.

I noticed that even after I killed my initial restic copy and restarted it, the memory usage was still high.

Normal usage suggests I could stick with 1G of RAM for the LXC - but there seems to be something that makes it grow.

Is this a bug? Should restic limit itself to the amount of memory that's available? Is there a config setting for that?

The new repo is Wasabi - my older cloud repo is Google Drive via rclone.

restic stats
repository 814edff7 opened (version 2, compression level auto)
[0:02] 100.00% 26 / 26 index files loaded
scanning…
Stats in restore-size mode:
Snapshots processed: 25
Total File Count: 1820511
Total Size: 4.290 TiB

restic stats --mode raw-data
repository 814edff7 opened (version 2, compression level auto)
[0:02] 100.00% 26 / 26 index files loaded
scanning…
Stats in raw-data mode:
Snapshots processed: 25
Total Blob Count: 777420
Total Uncompressed Size: 589.184 GiB
Total Size: 585.663 GiB
Compression Progress: 100.00%
Compression Ratio: 1.01x
Compression Space Saving: 0.60%

When you initially went to post, there was a wall of text asking for a number of things to be included in your post: which version of restic, and what environment variables (if any) you have set that are relevant to restic (obviously nothing secret).

Later versions of restic have made a lot of memory improvements; that's why I ask.

Hi

Sorry, I thought I had added the relevant stuff.

restic 0.18.0 compiled with go1.24.4 on linux/amd64

The environment variables are basically the AWS keys and the from- and to-repo passwords.
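Roughly this shape (the values are placeholders; RESTIC_FROM_PASSWORD is the standard variable restic reads for the --from-repo password):

export AWS_ACCESS_KEY_ID=xxxx
export AWS_SECRET_ACCESS_KEY=xxxx
export RESTIC_PASSWORD=xxxx        # password for the destination (Wasabi) repo
export RESTIC_FROM_PASSWORD=xxxx   # password for the source repo used with --from-repo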

Interestingly, I just ran out of space on Wasabi - as you can see above it's about 500G on-prem - but somehow it has now taken up 1T in Wasabi. I did have a broken copy, so I'm not sure if it left lots of artifacts behind.

/usr/bin/restic -r "s3:https://x.wasabisys.com/xxxxx" copy --from-repo /restic/xxxx

stats gives me:

repository a0c1b29b opened (version 2, compression level auto)
[0:04] 100.00% 993 / 993 index files loaded
scanning…
Stats in raw-data mode:
Snapshots processed: 6
Total Blob Count: 381842
Total Uncompressed Size: 283.660 GiB
Total Size: 283.128 GiB
Compression Progress: 100.00%
Compression Ratio: 1.00x
Compression Space Saving: 0.19%

but Wasabi says I am using over 1T…

How do I know what I can clean up in the repo, or is it better to delete it all and re-upload?
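One non-destructive way to find out turns out to be a check followed by a dry-run prune - prune --dry-run reports what would be removed without deleting anything (same repo URL as above, elided):

/usr/bin/restic -r "s3:https://x.wasabisys.com/xxxxx" check
/usr/bin/restic -r "s3:https://x.wasabisys.com/xxxxx" prune --dry-run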

EDIT

Ran a check and then a prune:

to repack: 0 blobs / 0 B
this removes: 0 blobs / 0 B
to delete: 855514 blobs / 817.872 GiB
total prune: 855514 blobs / 817.872 GiB
remaining: 381842 blobs / 283.128 GiB
unused size after prune: 0 B (0.00% of remaining size)

restic does tend to use a fair bit of memory; there's a formula elsewhere on the forums for how much.

One thing I've found that helps keep it in check is setting

GOGC=5

in my environment variables. That makes the Go garbage collector run much more frequently. Give that a go and see if it helps any? You are already running the latest version, so there are no wins to be had by upgrading!
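For example, set it inline on the copy command (a sketch; the GOMEMLIMIT line is an extra Go runtime knob I'm adding for completeness, not part of the original tip, and 750MiB is only an illustrative value):

GOGC=5 /usr/bin/restic -r "s3:https://x.wasabisys.com/xxxxx" copy --from-repo /restic/xxxx

# optionally also cap the heap with Go's soft memory limit (Go 1.19+):
GOGC=5 GOMEMLIMIT=750MiB /usr/bin/restic -r "s3:https://x.wasabisys.com/xxxxx" copy --from-repo /restic/xxxx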

You most likely do not want to use copy.

       The "copy" command copies one or more snapshots from one repository to another.

       NOTE: This process will have to both download (read) and upload (write) the entire snapshot(s) due to the
       different encryption keys used in the source and destination repositories. This may incur higher bandwidth
       usage and costs than expected during normal backup runs.

       NOTE: The copying process does not re-chunk files, which may break deduplication between the files copied and
       files already stored in the destination repository. This means that copied files, which existed in both the
       source and destination repository, may occupy up to twice their space in the destination repository. This can
       be mitigated by the "--copy-chunker-params" option when initializing a new destination repository using the
       "init" command.
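For completeness, matching the chunker parameters means initializing the destination fresh from the source - a sketch reusing the repo paths from this thread (only applicable when creating a brand-new destination repo):

restic -r "s3:https://x.wasabisys.com/xxxxx" init --from-repo /restic/xxxx --copy-chunker-params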

A use case was not mentioned. I would create the new repo, treat it as separate, and add to it. Leave the old repo for retention, and remove it when no longer needed.

When you cancelled the copy operation, it had to start over again, as restic has to rescan all the chunks - meaning it has to redownload everything from Google Drive again. That also leads to the orphaned files you saw cleaned up during prune.

Not mentioned is how your Google Drive repo is being read. Maybe it is mounted to /restic with rclone mount, perhaps with --cache-mode full. That will use a lot of RAM and will be inefficient.
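If that's the case, restic can also talk to rclone directly through its rclone backend rather than going through a mount - a sketch, where the remote name gdrive: and the path are assumptions about your rclone config:

restic -r rclone:gdrive:restic-repo snapshots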

Hi

thanks for that.

I will expand on my current setup and explain the thought process and use case a bit more.

I back up local stuff to 2 restic repos, which are kept on-prem.

Then I back up those to GDrive and Wasabi.

They are separate; nothing is shared. I don't want my source boxes to run restic off-site directly. I think it's better to have a local copy and then restic copy the local repo to the remote one - roughly the shape sketched below.
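(Paths and repo names here are placeholders, not my real ones:)

# stage 1: source boxes back up to the on-prem repo
/usr/bin/restic -r /restic/local backup /data

# stage 2: mirror the on-prem repo off-site
/usr/bin/restic -r "s3:https://x.wasabisys.com/xxxxx" copy --from-repo /restic/local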

My issue was with the initial copy - for 500G on disk it had taken 1T on remote storage, and I ran into the memory issue as well.

Just added more memory to the LXC, so ..