Atomic uploads... with pCloud / rclone + webdav?

So I just got the email about Restic v0.13.0 - woohoo! :partying_face: - and was going through the changelog when I saw #3003 and it got me thinking…

One of the reasons I had to abandon pCloud as a backend was that it would occasionally accept incomplete / empty files when interrupted at just the wrong point in time. Somehow, someway, these partials DID end up getting used for snapshots and caused me quite the headache - but that's a whole other story. I think it would upload a partial pack, and then on the next run restic would see that pack and think, ohh, it's already there, let's just use that - and then I'd end up with corrupted snapshots? Never did QUITE get to the bottom of it; I just switched to B2 because I was tired of messing with it.

My question is, can atomic uploads be implemented for rclone + webdav (assuming it isn’t already), or is this more on rclone’s end? I don’t quite understand the technicalities of how restic works with rclone, so pardon me if this is a dumb question haha :slight_smile:

It would have to be implemented in rclone. To be precise, rclone already sets the proper flags in the pCloud API, which should guarantee that uploads either complete fully or not at all.
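For context on how the two fit together: as far as I understand, restic doesn't talk to the pCloud API itself. It starts rclone as a subprocess (`rclone serve restic --stdio`) and speaks its REST-style backend protocol over stdin/stdout, and rclone then translates that into pCloud API calls, so whether an upload is atomic is decided on the rclone/pCloud side. A typical setup looks roughly like this (the remote name `pcloud` is just a placeholder for whatever you configured):

```
# one-time: configure the pCloud remote in rclone (interactive)
rclone config

# initialize and use the repository through that remote; under the hood
# restic runs "rclone serve restic --stdio" and talks to it over stdin/stdout
restic -r rclone:pcloud:restic-repo init
restic -r rclone:pcloud:restic-repo backup ~/Documents
```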

I am also using restic/rclone/pCloud. How did you detect this behavior?

It's been a while since I switched, and I don't quite remember the details. Here are a few threads about my issues with pCloud:

https://forum.restic.net/t/restic-rclone-pcloud-connection-issues

https://forum.restic.net/t/pcloud-issues-via-restic-rclone

But TL;DR, I just kept running into issues, over and over. I couldn't rely on it, so I switched to B2. I haven't had any problems since. :man_shrugging:t2:

Thanks for sharing.
For me it has been working quite well for about six months now. I have around 750 GB in the repository.
However, prompted by your comment, I am now running a check --read-data-subset to see if there are any hidden issues.
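In case anyone wants to do the same, this is roughly the command (the repository path is just a placeholder, adjust it to your own rclone remote):

```
# verify the repository structure, then read and verify one tenth of the pack data
restic -r rclone:pcloud:restic-repo check --read-data-subset=1/10
```

Running it again with 2/10, 3/10, and so on eventually covers the whole repository.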

I think one of the issues is that I had one client on a spotty connection, plus he’d randomly put his computer to sleep in the middle of a backup (my grandpa). Now, at the same time, I’ve had zero issues on B2, even with his snapshots. I have him backing up every Sunday, and sometimes I won’t get the snapshot to fully come in until like Tuesday. But it does come in, eventually, and everything checks out.

My theory was that using rclone as a backend wasn't as stable. I think at one point I even tested that by backing up to B2 directly with restic, then with B2 as an rclone backend. The rclone path just wasn't as resilient to dropped connections, long pauses (say, putting your computer to sleep in the middle of a backup), etc.
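For reference, these are the two ways of pointing restic at the same B2 bucket (bucket and remote names are just placeholders, and the native backend expects B2_ACCOUNT_ID / B2_ACCOUNT_KEY in the environment):

```
# B2 via restic's native backend
restic -r b2:my-bucket:restic-repo backup ~/Documents

# the same bucket through an rclone remote
restic -r rclone:b2remote:my-bucket/restic-repo backup ~/Documents
```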

Now, all that said, that was many versions ago, both for restic and rclone, so perhaps something has been fixed since. The way I tested was to create a random 1G file and disconnect the wifi in the middle of a backup while that file was uploading, then turn it back on. Then generate another file, disconnect the wifi, but CANCEL the backup. Then turn the wifi back on and try to back up again. Finally, run a check with --read-data and see if everything verifies. Something roughly like the sketch below.
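If anyone wants to repeat that experiment, a rough sketch (the repository URL and file sizes are just placeholders):

```
# generate a 1 GiB file of random data to force a long upload
dd if=/dev/urandom of=testfile-1 bs=1M count=1024

# round one: start a backup, drop the wifi partway through, then reconnect
restic -r rclone:pcloud:restic-repo backup testfile-1

# round two: new file, drop the wifi AND cancel the backup (Ctrl-C),
# then reconnect and back up again
dd if=/dev/urandom of=testfile-2 bs=1M count=1024
restic -r rclone:pcloud:restic-repo backup testfile-2

# finally, verify the repository including all pack contents
restic -r rclone:pcloud:restic-repo check --read-data
```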

If it goes well I may rclone everything back over to pCloud but… I’m hesitant, because B2 has been bulletproof and I kind of just don’t want to mess with it all over again haha

@T-6 Something else you might want to read…

Makes me nervous. I have a 4TB lifetime account (you can pay the 2TB price twice through a support email). Admittedly I mostly use it as a movie server. Shocked this hasn’t happened to me, honestly. But from reading multiple accounts, if they do arbitrarily yank away your account, you lose everything, and support won’t bother with you.

That said if you only use it for restic, you at least know your data is encrypted and safe from prying eyes. But considering some of the comments on there, plus my own experience - it seems to be rather arbitrary as to why they would shut an account down. I have about 2TB of movies on mine… :grimacing:

Well, on the other hand, it is not surprising that someone whose account has been shut down claims he hasn't done anything against the ToS. pCloud is located in Switzerland, so he was free to get this clarified with legal help.

That's fair. Also, later on he realized that even though he purchased the books and stuff he stored on it, the DRM'd copies would still match any hashes that pCloud's bots were looking for. I'm uncertain if he was simply storing them, or if there were shared links and it could be construed that he was distributing them.

I mean on one hand, if I bought some epubs, and stored them on pCloud for safe keeping, I’d be pretty upset if they closed my account too. That said, we’re just taking his word for it. Others chimed in too though.

Also although pCloud the company is located in Switzerland, that doesn’t necessarily mean your data is in Switzerland. I’m not sure where you physically reside, but I’m in the US myself. When I signed up, I wasn’t given a choice over where my data resides. However, recently, they’ve given users the option. You might want to check for yourself under your account settings.

EDIT: And according to this announcement, pCloud ONLY had servers in Texas until fairly recently. So yep, might want to check that. :slight_smile:

Sorry in advance for potential smartassing, but here's what I do: I have a Raspberry Pi with a 4TB HDD in both my own and my dad's basement. I restic backup to the local Pi and then rsync the whole thing to my dad's place every night (rough sketch below). You only need one fixed IP address at your place (or dynamic DNS) and autossh for an ssh tunnel from the other side back to your place. Works great and costs like 200 bucks for a lifetime plan.

Edited to add: ok I admit that’s how it used to be until I dug a trench and put a 2-wire cable all the way to my dad’s house with these modems. Also now I use a Lenovo mini AIO instead of the Pi on my side because it’s a little faster. But still the concept above works great even over DSL!
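In case it helps anyone copy the setup, here's a rough sketch of the tunnel and sync part. Hostnames, ports, users and paths are all placeholders, and it assumes the reverse tunnel terminates on the local backup box:

```
# on the off-site Pi: keep a reverse SSH tunnel open back home,
# so it stays reachable even without a fixed IP on that end
autossh -M 0 -N -R 2222:localhost:22 tunnel@home.example.org

# on a client: back up to the local box over sftp
restic -r sftp:backup@local-box:/mnt/backup/restic-repo backup /home

# on the local box (nightly cron): mirror the whole repo off-site through the tunnel
rsync -a --delete -e "ssh -p 2222" /mnt/backup/restic-repo/ pi@localhost:/mnt/backup/restic-repo/
```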
