pCloud issues via Restic+Rclone

So I’ve been trying to get about 800GB of data backed up to pCloud using Restic + Rclone. I get the data up, but there are always a few errors along the way. So I tried a prune; that took almost a week and then failed. Then I tried a rebuild-index and prune, which also hit random errors and has now been stuck at “0.00% 0 / 5 snapshots” for a good 6-7 hours. I ran the last attempt at work instead of at home (I have very slow DSL; I can typically upload to pCloud at 90MB/s from work). Here’s what I got:

https://f002.backblazeb2.com/file/akrabu-090/restic.txt

I’m not sure whether it’s pCloud itself or the Rclone backend, but things just don’t seem stable. I’ve uploaded a good 300GB of other data, all at once, using JUST Rclone with no issues whatsoever. But for some reason it just isn’t working with Restic. Any ideas?
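For reference, the setup is roughly this (a minimal sketch; the remote name pcloud and the repo path restic-backup stand in for my actual config):

rclone config                                      # pCloud remote, here named "pcloud"
restic -r rclone:pcloud:restic-backup init         # one-time repository setup
restic -r rclone:pcloud:restic-backup backup /data # the backup that keeps hitting errors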

I have used Backblaze in the past, but saw a good deal on a pCloud lifetime account on Black Friday. Thought it’d be nice to not have a monthly fee. Looks like I might be switching back to a native backend. Restic+Rclone just doesn’t seem as robust.

From the logs I can see different errors: DNS problems and some networking issues causing HTTP 500s (?!).
DNS:

rclone: 2019/12/27 14:23:52 ERROR : locks/1b7d2a22ded605732ce9423a022479bb1d1b915f7266aab23d7a74cbf07a2f34: Post request put error: Put https://api.pcloud.com/uploadfile?filename=1b7d2a22ded605732ce9423a022479bb1d1b915f7266aab23d7a74cbf07a2f34&folderid=4723314641&mtime=1577485432&nopartial=1: dial tcp: lookup api.pcloud.com: no such host
rclone: 2019/12/27 14:23:52 ERROR : locks/1b7d2a22ded605732ce9423a022479bb1d1b915f7266aab23d7a74cbf07a2f34: Post request rcat error: Put https://api.pcloud.com/uploadfile?filename=1b7d2a22ded605732ce9423a022479bb1d1b915f7266aab23d7a74cbf07a2f34&folderid=4723314641&mtime=1577485432&nopartial=1: dial tcp: lookup api.pcloud.com: no such host

Networking problems:

rclone: 2019/12/29 07:46:23 ERROR : locks/9021336aed0ff9f9a9517fc4383b1e7e55e989db33b5fba93c3dc5cff5d96e9c: Post request rcat error: Put https://api.pcloud.com/uploadfile?filename=9021336aed0ff9f9a9517fc4383b1e7e55e989db33b5fba93c3dc5cff5d96e9c&folderid=4723314641&mtime=1577634383&nopartial=1: dial tcp 74.120.8.13:443: connect: network is down
Save(<lock/9021336aed>) returned error, retrying after 985.229971ms: server response unexpected: 500 Internal Server Error (500)
rclone: 2019/12/29 07:46:24 ERROR : locks/9021336aed0ff9f9a9517fc4383b1e7e55e989db33b5fba93c3dc5cff5d96e9c: Post request put error: Put https://api.pcloud.com/uploadfile?filename=9021336aed0ff9f9a9517fc4383b1e7e55e989db33b5fba93c3dc5cff5d96e9c&folderid=4723314641&mtime=

Is there some rate limiting in place? Since you ran this at work, could there be some DoS protection in your company network preventing you from making that many requests?!
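If it is rate limiting, you could try telling restic to start rclone with fewer connections plus some throttling and retries; something along these lines (an untested sketch, the flag values are guesses):

restic -r rclone:pcloud:restic-backup prune \
  -o rclone.connections=2 \
  -o rclone.args="serve restic --stdio --tpslimit 5 --low-level-retries 20"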

Does a normal rclone sync work to that backend?

Weird thing is, the same thing happens at home. Also, a normal Rclone sync works fine with ~300GB of data (larger media files, not lots of small files like Restic produces).
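For reference, that was a plain sync along these lines (paths are placeholders):

rclone sync /mnt/media pcloud:media --progress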

And what happens if you rclone sync a local restic repo on your machine instead of big single files?

That seems to work fine.

I started over at home instead, and this is what I got. Note that Backblaze works perfectly fine from both locations. I don’t have another backend to test through Rclone, but since Rclone by itself works, I’m thinking the problem is Restic’s use of Rclone.

It’s not done yet, but this is the output from rebuild-index, prune, then check.
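The sequence was essentially this (same placeholder names as before):

restic -r rclone:pcloud:restic-backup rebuild-index
restic -r rclone:pcloud:restic-backup prune
restic -r rclone:pcloud:restic-backup check   # plain check; --read-data-subset=1/10 could also spot-check pack contents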

You mean rclone-ing the data to pCloud?

I rclone’d about 300GB of data and there wasn’t a single error.

The second attempt with Restic + Rclone finally finished. This is what I got.

I rclone’d about 300GB of data and there wasn’t a single error.

But have you tried to rclone the local restic repo to pCloud?

I don’t have a local repo, so no, I can’t try that, unfortunately.

I guess I’m just going to go back to B2. Can’t get pCloud to work at all. Even if I did get it to work at this point, I can’t say I’d trust it.

Well, you could create a one-off troubleshooting repo with some random, non-important data (stock photos, YouTube videos, etc., with a mix of big and small files) and then give it a shot :wink:
But yeah, B2 works reliably; possibly it’s a backend issue?! Idk.
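Something like this would do for a throwaway test (a sketch only; names and sizes are arbitrary):

mkdir -p /tmp/testdata
for i in $(seq 1 500); do   # lots of small files, like restic pack/index/lock traffic
  dd if=/dev/urandom of=/tmp/testdata/small_$i bs=4K count=1 2>/dev/null
done
dd if=/dev/urandom of=/tmp/testdata/big bs=1M count=1024   # and one big file

restic -r rclone:pcloud:restic-test init
restic -r rclone:pcloud:restic-test backup /tmp/testdata
restic -r rclone:pcloud:restic-test check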