Slow speeds of Restic with Google Drive rclone backend

I consistently get slow speeds of ~4 MB/s (about half of my line speed) when backing up to Google Drive through restic and rclone. I suspect it’s because restic uploads a lot of tiny files, which is a worst-case scenario for Drive. I’ve tried increasing the number of connections with -o rclone.connections=64, but it doesn’t seem to help: restic doesn’t appear to make use of the extra concurrency.

uploaded intermediate index 22eec63b
rclone: 2018/07/19 08:25:45 INFO  : total 954025 files 74.757 GiB, 0 errors ETA 5:03:09
rclone: Transferred:   4.748 GBytes (3.855 MBytes/s)
rclone: Errors:                63
rclone: Checks:                 0
rclone: Transferred:         1046
rclone: Elapsed time:     21m1.3s
rclone: Transferring:
rclone:  *   ...5dffca30caa684cda850fd33dd39f063da539a78: 100% /4.003M, 4.003M/s, 0s
rclone:  *   ...afdc8c1e78c90208c09d3b0cc67454b50a30b1d0: 100% /4.089M, 4.089M/s, 0s
rclone:

(only two files being transferred at once)

I’ve also tried creating my own Google Drive API key, as recommended here, but it didn’t seem to help.

Does anyone know of a way to potentially speed up restic here?

Hi @Pneumaticat, it might be worth testing some plain Drive uploads as a baseline to compare against. I upload a lot to Drive, and it has some pretty heavy rate limiting of both data and API calls. My upload line speed is 970 Mb/s, but I can only upload even large files to Drive at a tiny fraction of that speed.

@whereisaaron, when using rclone by itself, with large files (~5-10 GB each), I can get speeds of 9.5 MB/s consistently. Only when uploading smaller files does the upload speed drop.

You could try copying your restic repo back and forth from Drive with rclone (no restic involved). That would test with the same file sizes. If you get the same slow speed then I would guess you are correct, it is the file sizes that limit restic speed.

Maybe there is (or could be) some option to increase the block sizes restic uses? If you did that you might get less de-duplication but maybe faster overall speed.

That’s not a good idea: it would increase the overhead, since restic needs to manage all the blocks. A better idea is to increase the pack file size, which is currently hard-coded.


Testing as you suggested, I get the same half-line-speed:

2018/07/19 16:20:54 INFO  : data/3a/3a0760b85f7460add9d76a4b2166d66569521f49e37f4a246eaf649df39198fd: Deleted                                              
2018/07/19 16:20:54 INFO  :
Transferred:   849.887 MBytes (4.686 MBytes/s)
Errors:                 0
Checks:               152
Transferred:          152
Elapsed time:      3m1.3s
Transferring:
 *   ...63fa36d002cb4bbdbc862d05e910fdbc6eb303e1:  0% /5.162M, 0/s, -
 *   ...7a7be24d2e99fb997f23c1c9ff077574425296c1: 100% /5.545M, 4.248M/s, 0s
 *   ...8b4d23ca86fb05a69c696d01befd80523e97a266:  0% /4.825M, 0/s, -
 *   ...adca888e72b8e494e211d5f41f783359da7867b3: 100% /6.216M, 4.454M/s, 0s

So I think it is the small files after all.

I also get this error frequently, probably because of the small files creating a large number of requests to Google Drive:

2018/07/19 16:20:45 DEBUG : pacer: Rate limited, sleeping for 1.399764369s (1 consecutive low level retries)                                               
2018/07/19 16:20:45 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)                               
2018/07/19 16:20:45 DEBUG : pacer: Rate limited, sleeping for 2.281560672s (2 consecutive low level retries)                                               
2018/07/19 16:20:45 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)                               
2018/07/19 16:20:46 DEBUG : pacer: Rate limited, sleeping for 4.745774738s (3 consecutive low level retries)                                               
2018/07/19 16:20:46 DEBUG : pacer: low level retry 1/1 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)

That is probably part of why it goes so slowly.

@fd0, would increasing the pack file size be easy to do/make configurable? I think in this case it would definitely help.

No, not easily. And I’m very reluctant to expose this as a user-configurable option. If you want to experiment with it, though, it’s easy to change in the source code here:

You can try setting minPackSize to something much larger than the default, like 64MiB (or even 256MiB). I think restic will cope just fine with larger packs, but I haven’t tried it in a long time, so please report back! :slight_smile:
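
As of the current master, the constant is in internal/repository/packer_manager.go (the exact location may differ in your checkout):

// internal/repository/packer_manager.go
// minPackSize is the size a pack file must reach before it is
// considered "full enough" to be uploaded to the repo.
const minPackSize = 64 * 1024 * 1024 // default: 4 * 1024 * 1024

Then rebuild (e.g. with go run build.go); the resulting binary will write larger pack files.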

I think there’s no one-size-fits-all for packs: there are several trade-offs to weigh, and for now we go with a small pack size by default. In the long run I’d like to use a dynamic value which increases the pack size when restic detects that the backend connection has high bandwidth.


Changing the pack size to 256 MB does appear to have solved the speed problems! I get 6.9 MB/s:

rclone: Transferred:   7.327 GBytes (6.935 MBytes/s)
rclone: Errors:                 3
rclone: Checks:                 0
rclone: Transferred:           34
rclone: Elapsed time:     18m1.7s
rclone: Transferring:
rclone:  *   ...13a2312da250bbcfc6e0ef65dde3ba11be17de59: 90% /256.067M, 1.467M/s, 17s
rclone:  *   ...33045875308bf9f2959929dc7ae3e921aa677b5d: 83% /257.002M, 1.056M/s, 40s
rclone:  *   ...a59d07e9c1a59a85a73ae69fd5e0e1aecc76ed0e: 80% /256.629M, 1.095M/s, 46s
rclone:  *   ...a6c7af6eb83384e68751762de7c1b45dba9db3c8: 90% /256.702M, 1.256M/s, 19s
rclone:  *   ...de5261a00afa7a8612da9e45e0fa25dac5fe4c48: 73% /260.012M, 2.475M/s, 28s
rclone:

I’m pretty sure that’s maxing out my current connection to Google Drive at this time (it tends to vary).

I haven’t noticed any other adverse effects of changing the pack size from my few minutes of usage; restic init and backup appear to work fine. I’ll report back again after using it for a little while longer.

Thank you for all your help, and your awesome work on restic!


Looks like you are partly running up against Google Drive’s pretty aggressive rate limiting. Although it is not really a solution to this issue, you could consider using a B2 or S3 account for your backups. Those services simply charge you a few cents for high API rates, rather than slowing you down. You may be trying to leverage a Drive ‘unlimited’ storage plan; even so, B2 storage is pretty cheap, and for less drama and fewer problems like this, it may be worth a couple of dollars a month.


This is usually my recommendation too: it’s totally worth a few cents or dollars, even if you have a free Google Drive. (“It’s a trap!”)

FYI, I’ve got a beta account for B2 “application keys” (so you can create a second set of keys restricted to a particular bucket); I’m working on integrating this with restic.

Sounds pretty tempting, but I’ve got a lot of data to back up (~8 TB) with not that high of a budget to go with it :confused: I do agree that B2 would definitely be a better option if possible, though.

In any case, it seems that Google Drive is chugging along pretty well now – around 7 MB/s, which is honestly fine by me. Thank you to all!


Howdy everyone,

As mentioned here, I’m preparing to test this – it’s kind of “life or death”, as I really need to speed up my restic backup: it’s taking almost 24 hours to update just 24 hours of changed data, and I need to reduce that to 12 hours or less.

EDIT: not so much a “life or death” situation anymore, as I managed to work around it by moving a large part of the backup to being updated only once per week, on Friday nights, when it has the whole weekend to work. But I would still very much like to speed this up.

@fd0 (or anyone with enough knowledge), can you please tell me:

  1. Would a restic binary with a larger minPackSize be able to ‘interoperate’ with an already existing repository, i.e. read/write it alongside a ‘standard’ (i.e. 4MiB minPackSize) restic?

  2. The way I understand it, packs are generated when blobs get written, so to realize the full benefit of upping minPackSize, the repository would need to be re-initialized and regenerated from scratch, correct? Or would it be possible to somehow “repack” a current repository into a new one with larger packs?

Also, @Pneumaticat, could you please give us an update? Are you still running restic with 256MB (or MiB, I presume) minPackSize? How is it working out for you? Have you tested restic restore? What about memory usage during backup/restore?

Thanks in advance,
– Durval.

I believe so. The pack size is not fixed; rather, there is a maximum. When building packs, restic will keep adding data until the size gets too big, then it will upload that pack and start building a new one.

At the end of a backup, there’s usually not enough data left for a full pack and so a very small pack can be written with whatever is left.

All that to say: restic already needs to work with small and variable pack sizes, so changing this variable should not affect restic’s ability to work with existing packs.
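
In pseudocode, the packing loop is roughly this (a simplified sketch, not restic’s actual implementation):

const minPackSize = 4 * 1024 * 1024 // the current hard-coded default

// savePacked sketches the idea: append blobs to the current pack until
// it is "full enough", upload it, and start a new one.
func savePacked(blobs [][]byte, upload func(pack []byte)) {
	var pack []byte
	for _, blob := range blobs {
		pack = append(pack, blob...)
		if len(pack) >= minPackSize {
			upload(pack)
			pack = nil
		}
	}
	// At the end of a backup there is usually not enough data left for
	// a full pack, so a final, possibly much smaller, pack is written.
	if len(pack) > 0 {
		upload(pack)
	}
}

A larger minPackSize only changes when a pack is considered full; reading existing, smaller packs is unaffected.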

My understanding is that uploads are throttled more heavily than downloads, so the existing smaller packs should not pose that much of a problem.

There is no way that I know of to rewrite everything. However, when you prune, any pack that contains an object that is no longer used will be repacked; if multiple packs contain an object that needs to be deleted, they will be combined. After several prune operations, you should see the average pack size in the repository get larger.
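
Schematically, the decision prune makes per pack is something like this (simplified, not the actual code):

// partitionPacks sketches prune's per-pack decision: packs whose blobs
// are all still used are kept, packs whose blobs are all unused are
// deleted, and packs containing a mix are repacked; the surviving blobs
// from several such packs get combined into new, larger packs.
func partitionPacks(packs map[string][]string, used func(blobID string) bool) (keep, repack, remove []string) {
	for packID, blobIDs := range packs {
		usedCount := 0
		for _, id := range blobIDs {
			if used(id) {
				usedCount++
			}
		}
		switch {
		case usedCount == len(blobIDs):
			keep = append(keep, packID)
		case usedCount == 0:
			remove = append(remove, packID)
		default:
			repack = append(repack, packID)
		}
	}
	return keep, repack, remove
}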

Yes. There’s no other place in the source which requires a small pack size; restic will just take the files as they are stored in the backend. The small default is a trade-off so that people with tiny upstream bandwidth can use restic, and I have plans to adjust the pack size based on the backend upload speed. But that’s not something I can implement short term, sorry about that.

The reason why you also see larger files is that restic serialises metadata (file names, list of IDs of the content, modes, timestamps) as JSON and saves that as a tree blob to a file. If the directory is very large (number of files or size of files), the JSON document may grow much larger than 4MiB. Restic will still upload and process such a file.
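
Schematically, what gets serialised per directory looks something like this (a simplified sketch; the real node type in the restic source has more fields):

import "time"

// Tree is the JSON document restic stores per directory (simplified).
type Tree struct {
	Nodes []Node `json:"nodes"`
}

// Node describes one directory entry. For a directory with very many
// entries, or files made of very many content blobs, the serialised
// JSON can grow well beyond 4MiB.
type Node struct {
	Name    string    `json:"name"`
	Type    string    `json:"type"` // "file", "dir", ...
	Mode    uint32    `json:"mode"`
	ModTime time.Time `json:"mtime"`
	Content []string  `json:"content,omitempty"` // IDs of the content blobs
}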

Btw, the constant is called minPackSize because that’s the size a file has to reach before it can be considered “full enough” to be uploaded to the repo.

Some of the constraints I had in mind (off the top of my head):

  • The transfer of a single file should finish in reasonable time, even for users with low upstream bandwidth. Truncated files are of no use; only successfully uploaded files will be considered by e.g. rebuild-index. This takes low-end embedded systems with e.g. 512MiB of RAM into account (not considering the ongoing problems with the index being loaded into memory completely).
  • A single file should fit into memory easily, so we can load a file during check and decrypt and verify all blobs stored in it (see the sketch after this list). So a file size of e.g. 1GiB is probably too much.
  • The file size should not be too small; after all, one of the ideas of bundling blobs together into files is reducing the number of files stored in a backend.
  • When designing the repository format, I only had the local and sftp backends in mind and did not think about latency at all. It was simple back in the day :stuck_out_tongue:
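
To illustrate the second point: per pack file, check has to do roughly this (simplified, not the actual code):

import (
	"crypto/sha256"
	"errors"
)

type blobInfo struct {
	Offset, Length int
	ID             [sha256.Size]byte
}

// checkPack sketches the work per pack file: with the whole file in
// memory, decrypt every blob and verify that its hash matches its ID.
// This is why a single pack file has to fit into memory comfortably.
func checkPack(pack []byte, blobs []blobInfo, decrypt func([]byte) ([]byte, error)) error {
	for _, b := range blobs {
		plaintext, err := decrypt(pack[b.Offset : b.Offset+b.Length])
		if err != nil {
			return err
		}
		if sha256.Sum256(plaintext) != b.ID {
			return errors.New("blob does not match its ID")
		}
	}
	return nil
}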

I hope this helps you understand the constraints a bit better :slight_smile:

It’s a minimum: the minimal size a file must reach before it is uploaded to the backend. And you’re right about the last pack file being smaller (in some cases, at least).

Correct.


Thanks for the detailed, thoughtful responses @cdhowie and @fd0.

I plan on doing some experimenting with changing minPackSize in the future. If/when I do, I will report back here.

Cheers,
– Durval.

Just wondering, is there any update on this?

Also, are there plans to expose these variables so you don’t need to recompile?


Hey, just curious about the current state of building from source. I’ve been trying to build, and the Docker container has an issue with the build flags it uses. I’ve been using build.go to build the executable, but changing the const doesn’t seem to have an effect at all. I’m working with a symmetric gigabit connection while restic maxes out at 100Mb upload, btw.

Thanks!


Before users came along :grin:

Sorry to re-open an old topic, but I’m also getting veeery slow speeds on restore, so I’m increasing the pack size from now on. I just stumbled on restic telling me that the limit is 128MB (I wanted to use 256). Is there a reason for such a limit? I thought we were free to pick any size; the help (restic --help) and the documentation don’t mention any limit.

I’m currently getting 5-8 MiB/s download speeds, so you can imagine the suffering.