High number of S3 GET requests


I’m trying to find out why my backup process produces such an unbalanced GET/PUT ratio on my AWS S3 repository: almost 300,000 GET requests on the index/ and snapshots/ prefixes versus fewer than 200 PUT requests in the same time period.

I’m running restic in Kubernetes pods, without any local storage for caching or anything; maybe that’s already the issue. The pods are supposed to be pretty much write-only: there are no restores or checks running.

I found the thread "How to estimate PUTS GETS for S3 pricing?" on a similar topic, where @cdhowie kindly explained what gets written during a backup. I could not find any explanation of what is read during the creation of a backup (and then the thread somehow goes off-topic, IMHO). restic outputs load index files during startup, so I guess it reads something.

I’d appreciate it if somebody could explain to me what is read during a backup and how I could reduce the number of GET requests.

Kind regards,

This is the issue. During a normal backup run, restic has to read all index and snapshot files. If they are not cached, they are read from the backend.
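One way to keep that cache is simply to point restic at a directory that survives between runs. A minimal sketch, assuming an S3 repository (the bucket name and cache path are examples, not from this thread):

```shell
# Point restic's local metadata cache at a persistent directory,
# so index/snapshot files are not re-fetched from S3 on every run.
export RESTIC_CACHE_DIR=/var/cache/restic   # equivalent to --cache-dir

restic -r s3:s3.amazonaws.com/my-bucket backup /data
```

In Kubernetes, mounting a volume (even an `emptyDir` shared by containers in the same pod, or better a PVC shared across jobs) at that cache path would let consecutive backup runs reuse the cache.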

If you are not willing to use a cache, you should try to minimize the number of snapshot and index files - you can do so by running forget and prune frequently - but that also involves reading those files, of course :wink:
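For reference, a single forget-and-prune invocation can look like this; the retention policy flags here are placeholders to adapt to your own needs:

```shell
# Drop old snapshots per a retention policy, then prune the
# now-unreferenced data and rewrite/compact the index files.
restic -r s3:s3.amazonaws.com/my-bucket forget \
    --keep-daily 7 --keep-weekly 4 --prune
```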

Thanks a lot for your reply. Yes, I guess that makes sense. I wish there were a way to use other storage options for that cache than a local file system.

Can you clarify this? I get that you haven’t mounted any PVCs or such, but are you saying that the container in the pod has a read-only filesystem? A container AFAIK always has a filesystem, so if it’s not read-only I imagine you could let restic write its cache as usual.

There is a local r/w file system within the Docker container, but it’s not persisted anywhere and is empty when the next pod is created by the job controller.

This has been a known issue for a long time and I assume it’s just how restic works unfortunately.

The best way to get by, IMO, is to run a local backup repo and use rclone to sync that local repo to S3 (or other similar services). If snapshots pile up and you run forget and prune infrequently, a lot of GET requests will be made when you do forget --prune. rclone solves this problem.
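The setup above can be sketched in two steps; the paths and the `s3remote` remote name are assumptions (the remote would be set up beforehand with `rclone config`):

```shell
# 1. Back up to a local repository; all index/snapshot reads
#    happen against the local disk, not S3.
restic -r /backups/restic-repo backup /data

# 2. Mirror the whole local repository to S3. rclone only
#    transfers files that changed since the last sync.
rclone sync /backups/restic-repo s3remote:my-bucket/restic-repo
```

The same pattern works for forget/prune: run it against the local repo, then sync, so the metadata-heavy reads never hit S3.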