Backing up for the first time kills my network

I am a former Crashplan for Linux user, trying to migrate to Restic with B2.

I have followed the steps I’ve found online and am trying to use the following command to back up about 200GB.

restic -r b2:myb2backup backup /home/user1/Pictures --limit-upload 1000 -o b2.connections=5 --cache-dir ~/.resticcache

The upload starts off with promising speeds. Watching it over 2-5 minutes, the upload speed continually drops until it hovers near 0. When that happens, I am unable to access the internet from anywhere on my network until I unplug my cable modem and plug it back in.

I’ve also tried the restic backup command without the --limit-upload, without the -o b2.connections, and without the cache-dir - all with the same result.

What am I missing? Is there a tip to creating an initial backup?

Here’s relevant system and restic info:
restic 0.8.0
compiled with go1.9 on linux/amd64

Operating System:
Ubuntu 17.04

Hi, and thanks for trying restic!

First, the bandwidth indication that restic prints is misleading (and we’ll correct that eventually): it shows how fast restic (locally) processes data. In reality, the data to be uploaded is bundled into several temp files (very fast), which are then uploaded. That’s when you notice the bandwidth starts declining.

What I think happens here is that your cable modem has a large buffer (so-called bufferbloat) and the five concurrent connections to B2, firing all-ahead full, block the modem for a longer time.

What you can try is limiting the number of concurrent connections to a small number, start with e.g. only two connections, and see if that works better.
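For example, reusing the repository name and path from the original command, the same backup with only two connections would look like this (a sketch, not a tested recommendation for your exact setup):

```shell
# Same backup as before, but with only two concurrent connections to B2,
# which keeps the modem's upload buffer from being saturated as quickly.
restic -r b2:myb2backup backup /home/user1/Pictures -o b2.connections=2
```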

Another option is to save the backup into a local directory, and then sync that with e.g. rclone to B2.
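A sketch of that two-step approach (the local path and the rclone remote name `B2` are hypothetical; the remote must already be configured via `rclone config`):

```shell
# Step 1: back up to a restic repository on a local disk
# (fast, and puts no pressure on the uplink).
restic -r /mnt/backup/restic-repo backup /home/user1/Pictures

# Step 2: mirror the local repository to B2 using rclone,
# which has its own bandwidth limiting via --bwlimit.
rclone sync --bwlimit 500k /mnt/backup/restic-repo B2:myb2backup
```

The advantage is that the slow upload is decoupled from the backup itself and can be resumed or throttled independently.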

I’m not entirely sure, but there may also be a bug with bandwidth limiting and the B2 backend.

Please report back how it goes with just a few connections!

Thanks for the advice. Lowering the concurrent connections to 1 or 2 keeps
the upload going, as well as my home network.

If you’d like to try something else, give the code in the master branch a chance (you can find pre-compiled binaries here). I merged a change yesterday which should correct the bandwidth limiting for the b2 backend.

Please report back if you try that! :slight_smile:

Thanks. Just downloaded that binary, and am running it right now with

--limit-upload 9000 -o b2.connections=1 --cache-dir ~/.resticcache

What’s the upstream bandwidth that you have? Please be aware that the number you give to --limit-upload is in Kilobyte per second.
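To make the unit mismatch concrete, here is the arithmetic for a 5 Mbps uplink (the 5 Mbps figure is just an illustration):

```shell
# Convert an uplink speed in megabits/s to the kilobytes/s unit
# that --limit-upload expects (1 Mbps = 1000 kilobits/s, 8 bits per byte).
mbps=5
kbytes_per_sec=$(( mbps * 1000 / 8 ))
echo "$kbytes_per_sec"   # prints 625
```

So on a 5 Mbps uplink, `--limit-upload 9000` is far above the line rate; a value around 500 would actually throttle the upload and leave some headroom.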

Ahh, that’s a fat-finger typing mistake on my part. My upload bandwidth is 5 Mbps, give or take. I’ll kill the job and re-start with something less than 5000.

@fd0: What’s the logic that decides how many simultaneous connections to use between the backend and the client? (edit: I found it in the docs. I was under the impression that it was dynamic, but that’s evidently not the case :wink:)

@cooper: be aware that a lot of home gateway access points (or integrated cable modems, or NAT routers, or whatever term you want to use) have a near-crippling limitation on the number of simultaneous connections they can maintain. I obviously have no idea what sort of hardware/software you’re running, but that behavior sounds VERY much like what happens when the NAT table fills up on a lot of basic home routers …

@ljwobker good info. At this point, I think the culprit on my network is my wireless router. After doing a speed test on dslreports and seeing I had a BufferBloat score of F, I dug around the admin console of the router and changed some configurations that really seem to be helping. I changed the “bandwidth control” setting to limit it to 4000, and that seems to be helping (after that change, I re-ran the dslreports test and got a BufferBloat score of A).

It’s a simple config option that can sometimes help. After it was added, we introduced --limit-upload and --limit-download, which should work better for controlling the bandwidth.

That’s very possible - your basic consumer-grade wifi router often has default settings that are terrible for a lot of applications. The bandwidth control is usually a setting that shapes the traffic into a much more reasonably sized output queue, and can often dramatically improve performance for most applications. :wink: You may want to mess around with the settings to some extent: on at least some of the systems I’ve worked on, there is a very non-obvious relationship between what you configure and the actual bandwidth achieved. In lots of cases I’ve seen 50-100% variance between what the config “tells the user they’re doing” and what the hardware ACTUALLY does with that config.


@cooper Has the cause of the problem been identified and/or the problem resolved?

It has. I was able to completely upload my ~250GB archive.

Ultimately, my issue was buffer bloat, as @fd0 described.

I diagnosed this by running a speed test on dslreports, clicking the results button, and seeing my BufferBloat score of F.

Prior to making any changes to my router, I made the changes @fd0 described, limiting my connections with -o b2.connections=2. That helped.

After I did that, I realized I could change some settings on my router to limit my upload speed to something slightly slower than my maximum upload speed. After doing that, I no longer needed to worry about the b2.connections limit, and finished my upload that way.


Thanks a lot for the update, much appreciated!