Restic hangs with S3-compatible target

Hi,
I’m using restic to back up my files to an S3-compatible target.

restic version
restic 0.17.3 compiled with go1.23.3 on linux/amd64
lsb_release -a
Distributor ID:	Linuxmint
Description:	Linux Mint 22
Release:	22
Codename:	wilma

The problem: after uploading a few files, restic hangs. After a few minutes it uploads a few more files and hangs again.

Here is the script I use:

#!/bin/bash

export RESTIC_PASSWORD=[PASSWORD]
export RESTIC_PACK_SIZE=64

export AWS_ACCESS_KEY_ID=[KEY_ID]
export AWS_SECRET_ACCESS_KEY=[ACCESS_KEY]

export RESTIC_READ_CONCURRENCY=10

BACKUP_TARGET="s3:[TARGET]"

BACKUP_DIRS="/home"

/usr/bin/restic -r "$BACKUP_TARGET" backup "$BACKUP_DIRS" --compression max

I found this thread: New issue with backblaze B2: `RoundTrip()` error in debug log - #5 by cgeoga and tried running restic over a hotspot from my smartphone, with no difference.

I experimented with different pack sizes: no difference.
I tried omitting compression: no difference.
I tried different read concurrencies: no difference.

I tried backing up to a local target. That worked as expected.

I’m actually using Hetzner object storage, but to make sure the problem isn’t on their side, I tried Backblaze and saw the same behavior.

I ran restic with DEBUG_LOG. Here is the output I get when it hangs:

2025/02/23 18:56:33 debug/round_tripper.go:110  debug.loggingRoundTripper.RoundTrip     237     ------------  HTTP RESPONSE ----------
HTTP/2.0 204 No Content
Date: Sun, 23 Feb 2025 17:56:33 GMT
Strict-Transport-Security: max-age=63072000
X-Amz-Request-Id: ...
X-Debug-Bucket: ...

Any ideas how to debug this?

Depending on your upload speed, that may actually be expected behavior. The S3 backend uploads 5 files in parallel. At a 64 MB pack size, that means 320 MB in flight at once. Since reading files is usually much faster than uploading them, the result is a cycle of reading files for a few seconds, then waiting a few minutes for their upload, then repeating.

That explanation also perfectly matches your observation that backing up to a local target doesn’t show such “hangs”. With a local target, read and write speeds are much more balanced, so the pauses are only in the range of a few seconds.
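As a back-of-the-envelope check, you can estimate how long each apparent “hang” lasts. The 20 Mbit/s figure below is just an assumed example; substitute your own upload speed:

```shell
# Rough estimate of how long each upload pause lasts.
# Assumption: 20 Mbit/s upload bandwidth -- replace with your real value.
connections=5                  # restic's parallel S3 uploads
pack_mb=64                     # RESTIC_PACK_SIZE from the script above
upload_mbit=20                 # hypothetical upload speed in Mbit/s

inflight_mb=$((connections * pack_mb))
seconds=$((inflight_mb * 8 / upload_mbit))
echo "${inflight_mb} MB in flight -> roughly ${seconds} s per batch"
```

At 20 Mbit/s that works out to roughly two minutes per batch, which would look exactly like the hang you describe.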

How fast is your upload? Does the network monitor or your OS (e.g. gnome-system-monitor) show a continuous upload?

Unless you have > 1 Gbit/s upload speed, there’s no point in setting RESTIC_READ_CONCURRENCY.


Oh boy…
You’re absolutely right, and in hindsight I feel a bit stupid.
I let it run overnight and realized that it does indeed run as expected, given my upload speed.
For some reason I never thought about the consequences of increasing the concurrency.

It’s always good advice to use a tool like iftop or nmon to watch your network activity ^^
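If iftop or nmon aren’t installed, a minimal upload-rate sampler over the kernel’s interface counters works too. The interface name is an assumption: it defaults to the loopback placeholder `lo` here, so set IFACE to your real interface (see `ip link`):

```shell
# Sample the transmit byte counter twice, one second apart, and print the rate.
# IFACE defaults to the loopback placeholder; override with your real NIC.
IFACE="${IFACE:-lo}"
t1=$(cat "/sys/class/net/${IFACE}/statistics/tx_bytes")
sleep 1
t2=$(cat "/sys/class/net/${IFACE}/statistics/tx_bytes")
echo "upload on ${IFACE}: $(( (t2 - t1) * 8 / 1000000 )) Mbit/s"
```

Run it while restic is backing up; a steady non-zero rate confirms the upload is progressing even when restic looks idle.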