Google Workspace as a Repository: 403 Error, File Limit Reached

Hoping somebody could help me. I have a repository set up on a Google Workspace Shared Drive.
The admin console shows that the item cap for the Shared Drive has been reached.
The only way I can see what is stored in the repository is by using the command snapshots --no-lock.
If I don’t use the --no-lock flag, I get a 500 Internal Server Error and many errors like:
Post request put error: googleapi: Error 403: The file limit for this shared drive has been exceeded., teamDriveFileLimitExceeded

Is there any way I can salvage the repository by purging some snapshots? Or do I have to purge the whole repository because there’s no working room?
Any help would be much appreciated.
Thank you in advance.

Probably not possible to do this in place: even if you manage to forget some snapshots, you most likely won’t be able to prune anything, as you cannot write a single byte.

The easiest way would be to copy the whole repo to another location (it can be local), then forget/prune there and copy it back (after making sure you have enough free space).
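In rough terms it could look like this (assuming the repository lives at something like gdrive:restic-repo on the Shared Drive and /srv/restic-copy is a local directory with enough free space; the remote name, paths and retention policy are only placeholders):

    # pull the whole repository down; this only reads from the Shared Drive
    rclone copy gdrive:restic-repo /srv/restic-copy --progress

    # forget and prune against the local copy, where writing is possible
    restic -r /srv/restic-copy forget --keep-last 5 --prune

    # push the slimmed-down repository back, removing files that prune deleted
    rclone sync /srv/restic-copy gdrive:restic-repo --progress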

Any idea how many bytes I would need to free up?
I have another folder within the same Shared Drive where I used rclone to sync a backup configuration from a Linux system; it is 39.638 KiB, or 40589 bytes.
Any way to avoid this in the future?
I might just make a new backup rather than downloading the repo, pruning it and uploading it back.
Thanks so much.

As many as you need to make your storage writable again? No idea what your limit is.

If it is 10 TB and you are at 11 TB, then you have to free up >1 TB.

The main concern isn’t the file size, it’s the file limit. I think there is a 400,000-file quota. I just need some space to prune.
My storage availability is 10 TB.
I don’t think I was even able to copy just one snapshot because it’s still full…
Thanks again.

I haven’t done this myself, but you can try the union feature of rclone:
initialize another empty rclone remote (maybe just a local one) with enough free space,
create an rclone union remote (with the create policy set to lno) containing your Google Workspace remote and this new remote,
use restic prune with rclone as the backend and this new union remote as the repository.
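Roughly, that could look like the following (remote and directory names are only examples; gdrive:restic-repo is the existing repository and /srv/restic-spill is an empty local directory):

    # union remote whose create policy (lno = least number of objects)
    # sends new files to the upstream with the fewest objects, i.e. the
    # empty local spill directory rather than the full Shared Drive
    rclone config create restic-union union \
        upstreams "gdrive:restic-repo /srv/restic-spill" \
        create_policy lno

    # run prune through restic's rclone backend against the union remote
    restic -r rclone:restic-union: prune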

Very neat idea:)

I created the union between the full remote repository and a local one.
Took some figuring out. Looks like something is happening.
I created the union with my local rclone remote first in the upstreams, followed by the Google remote. When I do it that way, it doesn’t show errors.
If I place the Google remote first as the upstream, followed by the local rclone remote, I get similar 500 errors and the 403 file limit errors.
At least now with a prune command I am getting 18.18% so far, even if it’s just a --dry-run at first.
Thank you very much for the suggestions. I’ll report back.

Make sure to set --pack-size 128 for all future restic operations. That will only require about 80k files for a 10 TB repository.
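For reference (assuming restic’s default target pack size of around 16 MiB), 10 TiB means on the order of 650k pack files, while at 128 MiB it is roughly 10 TiB / 128 MiB ≈ 82,000. The pack size can be passed per command or, in recent restic versions, via the environment; restic-union is just the example remote name from above:

    # value is in MiB
    restic -r rclone:restic-union: backup /data --pack-size 128

    # or set it once for the whole session
    export RESTIC_PACK_SIZE=128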


Thanks @MichaelEischer. I’m wondering if there is a way to repack the existing repository to the larger pack size.
I found a post where somebody repeatedly ran something similar to
prune --repack-small --repack-uncompressed --pack-size 128 --compression max --max-repack-size 750G to pare things down.
Would that repack it to the larger size?
I ran some stats just now and it shows something like 9 million files, which is what is locking up my repository.

From what I understand, the problem in your case is that you can’t create any further files. However, being able to create files is an absolute requirement for running prune.

What might work are the following steps. WARNING: there is a chance that this will completely break the repository.

run restic snapshots --no-lock and look up the snapshot IDs of two snapshots that can be deleted. Then use a file browser to navigate to the repository storage, enter the snapshots/ folder and remove the two files from that folder whose names start with those snapshot IDs. (Or better, move the files to some other place outside the Shared Drive.)
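Done with rclone instead of a file browser, that step might look roughly like this (gdrive:restic-repo, the parking directory and the file names are placeholders; snapshot files are named after the full snapshot ID):

    # list the files in the snapshots/ folder and find the ones whose
    # names start with the IDs reported by restic snapshots --no-lock
    rclone lsf gdrive:restic-repo/snapshots/

    # park two of them outside the Shared Drive instead of deleting them
    rclone move gdrive:restic-repo/snapshots/<full-snapshot-id-1> /srv/restic-parked/
    rclone move gdrive:restic-repo/snapshots/<full-snapshot-id-2> /srv/restic-parked/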

This should allow you to run restic snapshots (without the --no-lock option!). If that is the case, then you can follow the next steps; otherwise don’t.

Now create a full copy of the index/ folder. Then you should be able to run prune once with the --unsafe-recover-no-free-space option (Removing backup snapshots — restic 0.16.0 documentation). That hopefully gives you enough wiggle room to actually repack the existing data.
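A sketch of those two steps (remote names and the local backup path are assumptions; note that --unsafe-recover-no-free-space expects an additional confirmation argument, which the linked documentation describes):

    # keep a safety copy of the index before doing anything risky
    rclone copy gdrive:restic-repo/index /srv/restic-index-backup --progress

    # then run the recovery prune once against the repository
    restic -r rclone:gdrive:restic-repo prune --unsafe-recover-no-free-space <confirmation>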

That would work using prune --repack-small --pack-size 128 --max-repack-size 10G. It’s better to keep the repack size low at first, as it determines how much additional storage (and consequently how many files!) prune might use.
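In practice that could mean running the repack in rounds and raising the cap once each round goes through, for example (sizes and the repository path are only illustrative):

    restic -r rclone:gdrive:restic-repo prune --repack-small --pack-size 128 --max-repack-size 10G
    restic -r rclone:gdrive:restic-repo prune --repack-small --pack-size 128 --max-repack-size 100G
    restic -r rclone:gdrive:restic-repo prune --repack-small --pack-size 128 --max-repack-size 750G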


Thanks @MichaelEischer
I’m doing what @DRON recommended earlier by creating a union rclone repository with a local system and running prune --repack-small --pack-size 128 --max-repack-size 750G.
It’s taking a while to run, but if it doesn’t work I can follow your method. At this point I only have two snapshots. With another repository, when I made a backup of something similar in size, I started off with --pack-size 64 and then --pack-size 128, and it has significantly fewer files. I’m in the middle of creating the same backup in a different repository, so I might just scrap the original, but it would be good to know how to resurrect it in a disaster recovery scenario.


Restoring is possible without writing to the repository: restore --no-lock ...
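For example (repository and target paths are placeholders):

    restic -r rclone:gdrive:restic-repo restore latest --target /srv/restore --no-lock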
