When you initially went to post, there was a wall of text asking for a number of things to be included in your post: which version of restic you are running, and which environment variables (if any) you have set that are relevant to restic (obviously not anything secret).
Later versions of restic have made a lot of memory improvements, which is why I ask.
restic 0.18.0 compiled with go1.24.4 on linux/amd64
The environment variables are basically the AWS keys and the passwords for the from and to repositories.
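For completeness, the set looks roughly like this (variable names are from the restic S3 documentation; all values are placeholders):

```shell
# Credentials for the S3-compatible backend (placeholder values):
export AWS_ACCESS_KEY_ID='<access-key>'
export AWS_SECRET_ACCESS_KEY='<secret-key>'
# Password for the destination ("to") repository:
export RESTIC_PASSWORD='<to-repo-password>'
# Password for the source repository when using "copy" (restic >= 0.14):
export RESTIC_FROM_PASSWORD='<from-repo-password>'
```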
Interestingly, I just ran out of space on Wasabi. As you can see above, it's 500 G on prem, but somehow it has now taken up 1 T on Wasabi. I did have a broken copy, so I'm not sure whether that left lots of artifacts around.
repository a0c1b29b opened (version 2, compression level auto)
[0:04] 100.00% 993 / 993 index files loaded
scanning…
Stats in raw-data mode:
Snapshots processed: 6
Total Blob Count: 381842
Total Uncompressed Size: 283.660 GiB
Total Size: 283.128 GiB
Compression Progress: 100.00%
Compression Ratio: 1.00x
Compression Space Saving: 0.19%
but Wasabi says I am using over 1 T…
How do I know what I can clean up in the repo, or is it better to delete it all and re-upload?
EDIT
I ran a check and then a prune:
to repack: 0 blobs / 0 B
this removes: 0 blobs / 0 B
to delete: 855514 blobs / 817.872 GiB
total prune: 855514 blobs / 817.872 GiB
remaining: 381842 blobs / 283.128 GiB
unused size after prune: 0 B (0.00% of remaining size)
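For anyone following along, the sequence that produced the numbers above is roughly this (the bucket name is a placeholder):

```shell
# Placeholder bucket; verify repository integrity, then remove unreferenced data:
restic -r s3:s3.wasabisys.com/my-bucket check
restic -r s3:s3.wasabisys.com/my-bucket prune
```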
restic does tend to use a fair bit of memory; there's a formula elsewhere on the forums for how much.
One thing I've found that helps keep it in check is setting
GOGC=5
in my environment variables. That makes the Go garbage collector run much more frequently. Give that a go and see if it helps? You are already running the latest version, so there are no wins to be had by upgrading.
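As a concrete example (the repository path is a placeholder), the variable can be set just for the restic invocation:

```shell
# GOGC=5 makes Go's garbage collector trigger after only 5% heap growth
# (the default is 100), trading extra CPU time for a smaller peak RSS:
GOGC=5 restic -r /srv/restic-local prune
```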
The "copy" command copies one or more snapshots from one repository to another.
NOTE: This process will have to both download (read) and upload (write) the entire snapshot(s) due to the different
encryption keys used in the source and destination repositories. This may incur higher bandwidth usage and costs
than expected during normal backup runs.
NOTE: The copying process does not re-chunk files, which may break deduplication between the files copied and files
already stored in the destination repository. This means that copied files, which existed in both the source and
destination repository, may occupy up to twice their space in the destination repository. This can be mitigated
by the "--copy-chunker-params" option when initializing a new destination repository using the "init" command.
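To spell out that mitigation (paths and bucket names are placeholders; syntax as used since restic 0.14):

```shell
# Create the destination repo with the same chunker parameters as the
# source, so that "copy" deduplicates against data already stored there:
restic -r s3:s3.wasabisys.com/my-bucket init \
    --from-repo /srv/restic-local \
    --copy-chunker-params
```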
A use case was not mentioned. I would create the new repo, treat it as separate, and add to it. Leave the old repo for retention, and remove it when no longer needed.
When I cancelled the copy operation, it had to start over again, as restic has to rescan all the chunks, meaning it has to redownload everything from Google Drive. That also leads to the orphaned files you saw cleaned up during prune.
Not mentioned is how your Google Drive is being accessed. Maybe it is mounted to /restic with rclone mount, perhaps with --cache-mode full. That will use a lot of RAM and will be inefficient.
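If that is the setup, it would look something like this (the remote name and mount point are guesses; in current rclone the caching flag for mounts is --vfs-cache-mode):

```shell
# "full" caches entire files on local disk, which can be heavy;
# "writes" is usually lighter for a restic backend:
rclone mount gdrive:restic /restic --vfs-cache-mode full
```

Note that restic can also talk to rclone directly via a repository URL like `rclone:gdrive:restic`, which avoids the mount entirely.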
I will expand on my current setup and maybe explain the thought process and use case a bit more.
I back up local stuff to two restic repos that are kept on prem.
Then I back up those to GDrive and Wasabi.
They are separate; nothing is shared. I don't want my source boxes to run restic against off-site storage directly. I think it's better to have a local copy, and then restic copy the local repo to the remote one.
My issue was with the initial copy: for 500 G on disk, it had taken 1 T on remote storage, and I also ran into the memory issue.
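A sketch of that two-stage flow (paths and bucket names are placeholders):

```shell
# Stage 1: source machines back up to an on-prem repository:
restic -r /srv/restic-local backup /data

# Stage 2: the on-prem repository's snapshots are copied off site:
restic -r s3:s3.wasabisys.com/my-bucket copy --from-repo /srv/restic-local
```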