Restic + Rclone (pCloud) connection issues

Ever since switching from Backblaze to pCloud, I've been getting errors like these from time to time:

rclone: 2020/08/01 02:25:25 ERROR : data/1d/1d8dcc07a905a39833a5417d338f9924762913e71dfa72ca3880240a32d2395a: Post request rcat error: Put “https://api.pcloud.com/uploadfile?filename=1d8dcc07a905a39833a5417d338f9924762913e71dfa72ca3880240a32d2395a&folderid=6346972073&mtime=1596273346&nopartial=1”: write tcp 192.168.0.13:54196->74.120.8.12:443: write: broken pipe

rclone: 2020/08/01 02:25:27 ERROR : data/f0/f0b68671a8f1be8dbf7582f2dd03b3e0f186e5eff579181471ac13a31899b334: Post request put error: Put “https://api.pcloud.com/uploadfile?filename=f0b68671a8f1be8dbf7582f2dd03b3e0f186e5eff579181471ac13a31899b334&folderid=6406985690&mtime=1596273115&nopartial=1”: read tcp 192.168.0.13:54116->74.120.9.234:443: read: connection reset by peer

Are there any switches I could pass to rclone that would make it more robust? More retries? Low-level retries? This never happened with Backblaze, on the same repository (I rclone'd it over) and on the same connection.
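Something like this is what I have in mind, maybe (just a sketch; the remote name pcloud:, the paths, and the flag values are made up, and since -o rclone.args replaces restic's default of serve restic --stdio --b2-hard-delete, that part has to be kept):

# hypothetical example: more retries and a longer timeout for the rclone child process
restic -r rclone:pcloud:restic-repo backup ~/data \
  -o rclone.args="serve restic --stdio --b2-hard-delete --retries 10 --low-level-retries 20 --timeout 5m"

# possibly also fewer parallel connections than restic's default of 5
restic -r rclone:pcloud:restic-repo backup ~/data -o rclone.connections=2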

I've since switched to fiber, and I'm still getting these. Does this mean there's corruption, or will restic retry until it gets it right? I'm just worried about ending up with an inconsistent repository.

rclone: 2020/08/14 14:55:48 ERROR : data/c9/c9b997d4033857207ae5c44a2f4a433fa5383feca438dc8fb1ec26715924867d: Post request put error: Put “https://api.pcloud.com/uploadfile?filename=c9b997d4033857207ae5c44a2f4a433fa5383feca438dc8fb1ec26715924867d&folderid=6397410328&mtime=1597442068&nopartial=1”: write tcp 192.168.0.4:56690->74.120.8.7:443: use of closed network connection

rclone: 2020/08/14 14:55:48 ERROR : data/c9/c9b997d4033857207ae5c44a2f4a433fa5383feca438dc8fb1ec26715924867d: Post request rcat error: Put “https://api.pcloud.com/uploadfile?filename=c9b997d4033857207ae5c44a2f4a433fa5383feca438dc8fb1ec26715924867d&folderid=6397410328&mtime=1597442068&nopartial=1”: write tcp 192.168.0.4:56690->74.120.8.7:443: use of closed network connection

Save(<data/c9b997d403>) returned error, retrying after 720.254544ms: server response unexpected: 500 Internal Server Error (500)

rclone: 2020/08/14 14:56:54 ERROR : data/c9/c9b997d4033857207ae5c44a2f4a433fa5383feca438dc8fb1ec26715924867d: Post request put error: Put “https://api.pcloud.com/uploadfile?filename=c9b997d4033857207ae5c44a2f4a433fa5383feca438dc8fb1ec26715924867d&folderid=6397410328&mtime=1597442149&nopartial=1”: EOF

rclone: 2020/08/14 14:56:54 ERROR : data/c9/c9b997d4033857207ae5c44a2f4a433fa5383feca438dc8fb1ec26715924867d: Post request rcat error: Put “https://api.pcloud.com/uploadfile?filename=c9b997d4033857207ae5c44a2f4a433fa5383feca438dc8fb1ec26715924867d&folderid=6397410328&mtime=1597442149&nopartial=1”: EOF

Save(<data/c9b997d403>) returned error, retrying after 873.42004ms: server response unexpected: 500 Internal Server Error (500)

rclone: 2020/08/14 15:16:45 ERROR : data/59/593c86d3b45635bf8c10063b35e0484a2033451b4bc95bd8d652ac98bba5d6b9: Post request put error: Put “https://api.pcloud.com/uploadfile?filename=593c86d3b45635bf8c10063b35e0484a2033451b4bc95bd8d652ac98bba5d6b9&folderid=6358293913&mtime=1597443335&nopartial=1”: write tcp 192.168.0.4:57207->74.120.8.7:443: write: broken pipe

rclone: 2020/08/14 15:16:45 ERROR : data/59/593c86d3b45635bf8c10063b35e0484a2033451b4bc95bd8d652ac98bba5d6b9: Post request rcat error: Put “https://api.pcloud.com/uploadfile?filename=593c86d3b45635bf8c10063b35e0484a2033451b4bc95bd8d652ac98bba5d6b9&folderid=6358293913&mtime=1597443335&nopartial=1”: write tcp 192.168.0.4:57207->74.120.8.7:443: write: broken pipe

Save(<data/593c86d3b4>) returned error, retrying after 468.857094ms: server response unexpected: 500 Internal Server Error (500)

My experience with the same setup, except with Wasabi instead of pCloud, is that rclone will eventually get it right (I have never seen a failure where the backup terminated). I always follow a backup with a check, and the check has always passed after these error messages. I don't think the issues reported here are due to problems on your end; they're on the pCloud end.
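For context, my routine is basically this (the remote name and repository path are just examples):

restic -r rclone:wasabi:my-bucket/restic-repo backup /data \
  && restic -r rclone:wasabi:my-bucket/restic-repo check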


I did a rebuild-index because of the errors I had seen, and have gotten this so far:

counting files in repo
Load(<data/c1043d87e1>, 591, 5107375) returned error, retrying after 468.857094ms: <data/c1043d87e1> does not exist
Load(<data/c103f2b94e>, 591, 5421305) returned error, retrying after 462.318748ms: <data/c103f2b94e> does not exist
Load(<data/c1035d3e72>, 591, 8388122) returned error, retrying after 720.254544ms: <data/c1035d3e72> does not exist
Load(<data/c102f9a082>, 591, 4778933) returned error, retrying after 582.280027ms: <data/c102f9a082> does not exist
Load(<data/c1033f7875>, 591, 5121506) returned error, retrying after 593.411537ms: <data/c1033f7875> does not exist
Load(<data/c104d82727>, 591, 4233213) returned error, retrying after 282.818509ms: <data/c104d82727> does not exist
Load(<data/c104ed0db1>, 591, 4345475) returned error, retrying after 328.259627ms: <data/c104ed0db1> does not exist
Load(<data/c1058258c4>, 591, 4542625) returned error, retrying after 298.484759ms: <data/c1058258c4> does not exist
Load(<data/c1044bac33>, 591, 5390093) returned error, retrying after 400.45593ms: <data/c1044bac33> does not exist
Load(<data/c104a6f2b9>, 591, 4652446) returned error, retrying after 507.606314ms: <data/c104a6f2b9> does not exist
Load(<data/c104d82727>, 591, 4233213) returned error, retrying after 985.229971ms: <data/c104d82727> does not exist
Load(<data/c104ed0db1>, 591, 4345475) returned error, retrying after 535.697904ms: <data/c104ed0db1> does not exist
Load(<data/c1058258c4>, 591, 4542625) returned error, retrying after 660.492892ms: <data/c1058258c4> does not exist
Load(<data/c103f2b94e>, 591, 5421305) returned error, retrying after 613.543631ms: <data/c103f2b94e> does not exist
Load(<data/c1044bac33>, 591, 5390093) returned error, retrying after 726.667384ms: <data/c1044bac33> does not exist
Load(<data/c1043d87e1>, 591, 5107375) returned error, retrying after 587.275613ms: <data/c1043d87e1> does not exist
Load(<data/c104a6f2b9>, 591, 4652446) returned error, retrying after 594.826393ms: <data/c104a6f2b9> does not exist
Load(<data/c102f9a082>, 591, 4778933) returned error, retrying after 884.313507ms: <data/c102f9a082> does not exist
Load(<data/c1033f7875>, 591, 5121506) returned error, retrying after 538.914789ms: <data/c1033f7875> does not exist
Load(<data/c1035d3e72>, 591, 8388122) returned error, retrying after 527.390157ms: <data/c1035d3e72> does not exist
Load(<data/c10a7fd1e6>, 591, 4473412) returned error, retrying after 430.435708ms: <data/c10a7fd1e6> does not exist
Load(<data/c10af0de6a>, 591, 4311280) returned error, retrying after 535.336638ms: <data/c10af0de6a> does not exist
Load(<data/c10887717c>, 702, 4761108) returned error, retrying after 681.245719ms: <data/c10887717c> does not exist
Load(<data/c109c3bc17>, 8583, 4252822) returned error, retrying after 398.541282ms: <data/c109c3bc17> does not exist
Load(<data/c10ba26f87>, 591, 4564821) returned error, retrying after 396.557122ms: <data/c10ba26f87> does not exist
Load(<data/c10c4fc379>, 591, 5870680) returned error, retrying after 626.286518ms: <data/c10c4fc379> does not exist
Load(<data/c10d2eddb7>, 591, 5423634) returned error, retrying after 353.291331ms: <data/c10d2eddb7> does not exist
Load(<data/c10cf70919>, 591, 4252684) returned error, retrying after 682.667507ms: <data/c10cf70919> does not exist
Load(<data/c10a7fd1e6>, 591, 4473412) returned error, retrying after 897.539375ms: <data/c10a7fd1e6> does not exist
Load(<data/c10af0de6a>, 591, 4311280) returned error, retrying after 767.86523ms: <data/c10af0de6a> does not exist
Load(<data/c10d2eddb7>, 591, 5423634) returned error, retrying after 396.227312ms: <data/c10d2eddb7> does not exist
Load(<data/c10887717c>, 702, 4761108) returned error, retrying after 493.746208ms: <data/c10887717c> does not exist
Load(<data/c10ba26f87>, 591, 4564821) returned error, retrying after 830.44008ms: <data/c10ba26f87> does not exist
Load(<data/c109c3bc17>, 8583, 4252822) returned error, retrying after 1.106431215s: <data/c109c3bc17> does not exist
Load(<data/c10c4fc379>, 591, 5870680) returned error, retrying after 434.590217ms: <data/c10c4fc379> does not exist
Load(<data/c10cf70919>, 591, 4252684) returned error, retrying after 821.106448ms: <data/c10cf70919> does not exist
Load(<data/c10d2eddb7>, 591, 5423634) returned error, retrying after 629.010732ms: <data/c10d2eddb7> does not exist
Load(<data/c10887717c>, 702, 4761108) returned error, retrying after 1.341027661s: <data/c10887717c> does not exist
Load(<data/c10c4fc379>, 591, 5870680) returned error, retrying after 901.713016ms: <data/c10c4fc379> does not exist
Load(<data/c10af0de6a>, 591, 4311280) returned error, retrying after 757.424518ms: <data/c10af0de6a> does not exist
Load(<data/c10cf70919>, 591, 4252684) returned error, retrying after 1.171237337s: <data/c10cf70919> does not exist
Load(<data/c10ba26f87>, 591, 4564821) returned error, retrying after 1.17467502s: <data/c10ba26f87> does not exist
Load(<data/c10a7fd1e6>, 591, 4473412) returned error, retrying after 875.821074ms: <data/c10a7fd1e6> does not exist
Load(<data/c10d2eddb7>, 591, 5423634) returned error, retrying after 1.55781934s: <data/c10d2eddb7> does not exist
Load(<data/c109c3bc17>, 8583, 4252822) returned error, retrying after 1.15940893s: <data/c109c3bc17> does not exist
Load(<data/c10c4fc379>, 591, 5870680) returned error, retrying after 1.271599594s: <data/c10c4fc379> does not exist
Load(<data/c10af0de6a>, 591, 4311280) returned error, retrying after 1.319761679s: <data/c10af0de6a> does not exist
Load(<data/c10887717c>, 702, 4761108) returned error, retrying after 2.174520794s: <data/c10887717c> does not exist
Load(<data/c10a7fd1e6>, 591, 4473412) returned error, retrying after 1.454296748s: <data/c10a7fd1e6> does not exist
Load(<data/c10ba26f87>, 591, 4564821) returned error, retrying after 2.32966652s: <data/c10ba26f87> does not exist
Load(<data/c10b6d1f35>, 591, 4819051) returned error, retrying after 398.55613ms: <data/c10b6d1f35> does not exist
Load(<data/c109c3bc17>, 8583, 4252822) returned error, retrying after 2.352985419s: <data/c109c3bc17> does not exist
Load(<data/c10cf70919>, 591, 4252684) returned error, retrying after 1.008204668s: <data/c10cf70919> does not exist
Load(<data/c10d2eddb7>, 591, 5423634) returned error, retrying after 3.738445824s: <data/c10d2eddb7> does not exist
Load(<data/c10c4fc379>, 591, 5870680) returned error, retrying after 1.453674091s: <data/c10c4fc379> does not exist
Load(<data/c10af0de6a>, 591, 4311280) returned error, retrying after 1.828295087s: <data/c10af0de6a> does not exist
Load(<data/c10b6d1f35>, 591, 4819051) returned error, retrying after 885.808734ms: <data/c10b6d1f35> does not exist
Load(<data/c10cf70919>, 591, 4252684) returned error, retrying after 1.876960068s: <data/c10cf70919> does not exist
Load(<data/c10887717c>, 702, 4761108) returned error, retrying after 2.054166187s: <data/c10887717c> does not exist
Load(<data/c10a7fd1e6>, 591, 4473412) returned error, retrying after 3.626892523s: <data/c10a7fd1e6> does not exist
Load(<data/c10c4fc379>, 591, 5870680) returned error, retrying after 4.71514527s: <data/c10c4fc379> does not exist
Load(<data/c10af0de6a>, 591, 4311280) returned error, retrying after 4.939943365s: <data/c10af0de6a> does not exist
Load(<data/c10ba26f87>, 591, 4564821) returned error, retrying after 3.114023427s: <data/c10ba26f87> does not exist
Load(<data/c109c3bc17>, 8583, 4252822) returned error, retrying after 1.728653694s: <data/c109c3bc17> does not exist
Load(<data/c10b6d1f35>, 591, 4819051) returned error, retrying after 1.044401717s: <data/c10b6d1f35> does not exist
Load(<data/c10d2eddb7>, 591, 5423634) returned error, retrying after 5.304203839s: <data/c10d2eddb7> does not exist
Load(<data/c10cf70919>, 591, 4252684) returned error, retrying after 4.490387462s: <data/c10cf70919> does not exist
Load(<data/c109c3bc17>, 8583, 4252822) returned error, retrying after 5.615309897s: <data/c109c3bc17> does not exist
Load(<data/c10b6d1f35>, 591, 4819051) returned error, retrying after 2.399983187s: <data/c10b6d1f35> does not exist
Load(<data/c10887717c>, 702, 4761108) returned error, retrying after 2.243335279s: <data/c10887717c> does not exist
Load(<data/c10ba26f87>, 591, 4564821) returned error, retrying after 3.770836023s: <data/c10ba26f87> does not exist
Load(<data/c10a7fd1e6>, 591, 4473412) returned error, retrying after 5.41809052s: <data/c10a7fd1e6> does not exist
Load(<data/c10887717c>, 702, 4761108) returned error, retrying after 8.286368954s: <data/c10887717c> does not exist
Load(<data/c10b6d1f35>, 591, 4819051) returned error, retrying after 2.14638347s: <data/c10b6d1f35> does not exist
Load(<data/c10c4fc379>, 591, 5870680) returned error, retrying after 6.782199283s: <data/c10c4fc379> does not exist
Load(<data/c10af0de6a>, 591, 4311280) returned error, retrying after 6.896494886s: <data/c10af0de6a> does not exist
Load(<data/c10cf70919>, 591, 4252684) returned error, retrying after 6.058557229s: <data/c10cf70919> does not exist
Load(<data/c10b6d1f35>, 591, 4819051) returned error, retrying after 4.364467796s: <data/c10b6d1f35> does not exist
Load(<data/c10d2eddb7>, 591, 5423634) returned error, retrying after 5.990130631s: <data/c10d2eddb7> does not exist
Load(<data/c10ba26f87>, 591, 4564821) returned error, retrying after 7.15230732s: <data/c10ba26f87> does not exist
Load(<data/c109c3bc17>, 8583, 4252822) returned error, retrying after 5.147442151s: <data/c109c3bc17> does not exist
Load(<data/c10a7fd1e6>, 591, 4473412) returned error, retrying after 3.59175519s: <data/c10a7fd1e6> does not exist
[10:53:35] 75.63% 168956 / 223406 packs

So I'd say there are definitely issues. Sigh. I might not be able to use pCloud as my backend; it just doesn't seem stable enough.

Sigh. I tried a rebuild-index, prune, and check… and it did not go well :frowning:

rebuild-index

[17:38:05] 100.00% 223406 / 223406 packs

finding old index files

saved new indexes as [ecfd7c2c 0f45dc2c fd2e9fbe 0fda4f70 52787270 961a2c93 47996082 b1c15209 e4434ac4 2f989e5d 40fc126c f912094c e893aad2 f629a917 b17dc379 84a39ae4 8ae11804 bf67f298 d4759b82 80a21829 ea006a0b 59be890b 864d84fe 40bcf8c6 77edd713 87d95ba7 9e146087 86f577be 0089829a 9660c047 1fba5dc2 9a7e2c46 880975fa 1c735fbd 45137d09 7e19f108 b2b3f21c 6df25364 d2842484 8a441c0c 319fd919 b5946c18 534ee209 d1be57f9 66b7c039 7077eed5 a7911c5f 97209147 4c8e48fe 4e441cdc cfe68cc4 255f4e0a 62910d44 6c2da19d d083df5f 5da1ae06 9784f575 793af281 c81a3973 2d3a11dd b75176b1 2624b82b 3f88fe8d 77f45577 0cd923b0 6e392d3e 261bc069 8e74c9ca e53a2322 5989ceee f31f4e67 afb911fb 4ae7cad8 f0b3e1e0 0c4d680d]

remove 698 old index files

prune

repository 24344e18 opened successfully, password is correct

counting files in repo

building new index for repo

[20:55:52] 100.00% 223406 / 223406 packs

incomplete pack file (will be removed): 416036aa0ca7eacae9d45c1dc73f8f3e78c0b36b13e88010d77edd590547b67c

incomplete pack file (will be removed): a661505028d0772716b3fefc602b5ef63bdc692a5c155c0ab57846dde8c8f094

incomplete pack file (will be removed): af6474b3856499e64b7e3db2a19159ad482b1a0aaf7481416a5b543a45534595

incomplete pack file (will be removed): b87e1edaca3363be11908cf60f1bc9dffdde363c0e402802f55c254b58ea39b6

incomplete pack file (will be removed): b707cc9fca1b0dc2b13fe4418fa16c3e5eac7b7f0b49f27ec8b9482548bd0f27

incomplete pack file (will be removed): d596975c62d722ac6859a017839a5ec9679fd4dc5f8b5b4eb7e4d2bc502a87cf

incomplete pack file (will be removed): e7551bb95e1557fe46e96dd79b8cd5831c9488d938c84786c1874cca1931f38a

incomplete pack file (will be removed): fa169b6b79a62d6fc13e6e7ba1d59ad8d789b7d7ebdaf8d520819a2030433448

repository contains 223398 packs (3420678 blobs) with 1.022 TiB

processed 3420678 blobs: 48 duplicate blobs, 30.582 MiB duplicate

load all snapshots

find data that is still in use for 223 snapshots

[9:49] 100.00% 223 / 223 snapshots

found 3405537 of 3420678 data blobs still in use, removing 15141 blobs

will remove 8 invalid files

will delete 284 packs and rewrite 613 packs, this frees 1.717 GiB

[43:49] 100.00% 613 / 613 packs rewritten

counting files in repo

[21:14:47] 100.00% 222821 / 222821 packs

finding old index files

rclone: 2020/08/18 11:02:22 ERROR : index/10e1753ad92d14d68198ae5d7388cb1475de13e88dda23a1a8f91961db9360b0: Post request put error: Put “https://api.pcloud.com/uploadfile?filename=10e1753ad92d14d68198ae5d7388cb1475de13e88dda23a1a8f91961db9360b0&folderid=6345873219&mtime=1597773680&nopartial=1”: EOF

rclone: 2020/08/18 11:02:22 ERROR : index/10e1753ad92d14d68198ae5d7388cb1475de13e88dda23a1a8f91961db9360b0: Post request rcat error: Put “https://api.pcloud.com/uploadfile?filename=10e1753ad92d14d68198ae5d7388cb1475de13e88dda23a1a8f91961db9360b0&folderid=6345873219&mtime=1597773680&nopartial=1”: EOF

Save(<index/10e1753ad9>) returned error, retrying after 720.254544ms: server response unexpected: 500 Internal Server Error (500)

rclone: 2020/08/18 11:03:51 ERROR : index/b61ef064645779a4a81e6de7276562ec9d4e63e92fa97bd851735b9c3a990c9a: Post request put error: Put “https://api.pcloud.com/uploadfile?filename=b61ef064645779a4a81e6de7276562ec9d4e63e92fa97bd851735b9c3a990c9a&folderid=6345873219&mtime=1597773770&nopartial=1”: EOF

rclone: 2020/08/18 11:03:51 ERROR : index/b61ef064645779a4a81e6de7276562ec9d4e63e92fa97bd851735b9c3a990c9a: Post request rcat error: Put “https://api.pcloud.com/uploadfile?filename=b61ef064645779a4a81e6de7276562ec9d4e63e92fa97bd851735b9c3a990c9a&folderid=6345873219&mtime=1597773770&nopartial=1”: EOF

Save(<index/b61ef06464>) returned error, retrying after 582.280027ms: server response unexpected: 500 Internal Server Error (500)

saved new indexes as [1449a156 06c99030 cd06e689 5ab18fe1 5c367ead 0bef0230 8ccf29bf 3fd4f938 8678f6c5 c346d060 f81afec6 4c71030f 0d773d4f 298ac8c2 806ef617 6a68ecb2 b21c25af bb533376 c7d1960a d74257a3 03f1cdac 6e07af57 bf7b4158 e649cc30 c507ca2f 42462c24 ae78806b 10e1753a 26208947 03415d1e 1e134241 377d1196 bff7aae6 b6387f46 34935494 a38bbe39 aadab85b 69393cf0 671a2146 730daa8a c8ea3c2c b61ef064 fbcc4d76 3141bfff f28dd3a9 4bd8b0fb b236dc34 8c21cb6f 5f3c1e1b 369763d6 7aa8a60c c4c7951c 3ff52ad2 ddd7643f 631cec9c 7120328c 3813dc4f 35fbf290 f05e3e9e 4f153d16 5073f19c eadf17ad 242511cf 954b9f5d b33156c6 9e9a9f54 d538fcd6 9dd41386 19e664d3 50d67e92 48ebf791 e1dfe406 279b7bc0 22e65d1f a8d42605]

remove 75 old index files

[15:29] 100.00% 897 / 897 packs deleted

done

check

using temporary cache in /var/folders/78/z94fqn6944l3mcz4mxrwf6ycx3x4jt/T/restic-check-cache-724315274

repository 24344e18 opened successfully, password is correct

created new cache in /var/folders/78/z94fqn6944l3mcz4mxrwf6ycx3x4jt/T/restic-check-cache-724315274

create exclusive lock for repository

load indexes

error: error loading index 10e1753a: load <index/10e1753ad9>: invalid data returned

error: error loading index b61ef064: load <index/b61ef06464>: invalid data returned

Fatal: LoadIndex returned errors

I guess I should go back to Backblaze? I don’t feel confident in being able to restore my data at all at this point. I’m also not certain I can get my repository back into a consistent state now.

Yeah, a prune netted me this:

repository 24344e18 opened successfully, password is correct

counting files in repo

building new index for repo

[5:50] 0.42% 939 / 222821 packs

Load(<data/8b41b0bc85>, 591, 6193956) returned error, retrying after 720.254544ms: <data/8b41b0bc85> does not exist

Load(<data/8b42fd3a77>, 591, 4414508) returned error, retrying after 582.280027ms: <data/8b42fd3a77> does not exist

[21:28:44] 100.00% 222821 / 222821 packs

repository contains 222821 packs (3405533 blobs) with 1.020 TiB

processed 3405533 blobs: 0 duplicate blobs, 0 B duplicate

load all snapshots

find data that is still in use for 223 snapshots

[13:27] 100.00% 223 / 223 snapshots

Fatal: number of used blobs is larger than number of available blobs!

Please report this error (along with the output of the ‘prune’ run) at

https://github.com/restic/restic/issues/new/choose

What should I run on my repository now to get it to a usable state??

I encountered this error three times. The first time, a rebuild-index fixed it. The second time, a backup with the --force flag fixed it. The third time nothing fixed it; I gave up and just rclone synced my local copy to the remote. After a month I went back to normal backup operations, but when I run forget I don't use --prune, and it has been flawless since.
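The third-time fix was essentially just re-seeding the remote from a known-good local copy, roughly like this (remote name and paths are examples only):

# copy the local repository over the remote one; -P shows progress
rclone sync /backups/restic-repo pcloud:restic-repo --transfers 8 -P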


What can be done for errors like this? Rebuild-index and prune don’t seem to be cutting it… :frowning_face:

tree 786a54f4: file “2020-07-13 10.23.06.jpg” blob 2 size could not be found
tree 786a54f4: file “2020-07-13 10.23.06.jpg” blob 3 size could not be found
tree 786a54f4: file “2020-07-13 19.08.24.jpg” blob 0 size could not be found
tree 786a54f4: file “2020-07-13 19.08.24.jpg” blob 1 size could not be found
tree 786a54f4, blob dce4b0d9: not found in index
tree 786a54f4, blob f870a0d3: not found in index
tree 786a54f4, blob b8591f64: not found in index
tree 786a54f4, blob a5a9c2a3: not found in index

EDIT: Ran a rebuild-index. Running a fresh snapshot of my Mac and Drobo now. Unfortunately my grandpa's computer, which had been backing up to my repo, has bitten the dust. Hoping the damage isn't in his part of the repository. Going to try the advice here after that; I'll report back. From now on I think I'll be cloning the repo on pCloud, running prune on the clone, and checking it before replacing the original repo…
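The clone-then-prune idea would look roughly like this (remote and folder names are made up; whether the copy happens server-side depends on the backend):

# 1. clone the repository within pCloud
rclone copy pcloud:restic-repo pcloud:restic-repo-clone -P
# 2. prune and check the clone
restic -r rclone:pcloud:restic-repo-clone prune
restic -r rclone:pcloud:restic-repo-clone check
# 3. only if the check passes, swap the clone in for the original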

Exactly, follow those instructions. Basically, to get the useful parts of the repository back into shape, you run a check to find what is missing, find the snapshots affected by whatever is missing, forget them, rebuild the index, and check again.
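In command form that sequence is roughly the following (assuming the repository is set via RESTIC_REPOSITORY; the pack and snapshot IDs are placeholders):

restic check                    # reports missing or damaged packs/blobs
restic find --pack 5c0f447c     # shows which snapshots reference a given missing pack
restic forget 1a2b3c4d          # forget the affected snapshot(s)
restic rebuild-index
restic check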

When you check, you can check just the integrity of the repository metadata, and I'd start with that. Once you've fixed all those problems, I'd do a check --read-data to verify that all the data stored in the repository is still intact (i.e. it hasn't gone bad in storage). If in the end you get a successful run of that, then your repository should be fine IMO.
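For the verification steps, that is roughly (again assuming the repository is set in the environment; --read-data-subset just spreads the full read over several runs):

restic check                         # metadata/structure only
restic check --read-data             # downloads and verifies every pack
restic check --read-data-subset 1/5  # or verify one fifth of the packs per run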

I'd highly recommend using the latest master build when you do this, because it contains serious improvements and optimizations for e.g. the check command. It will save you a lot of time, I think.

Best case not too many snapshots will be affected, worst case most of them will. And yeah, given what you’ve shown here I sure wouldn’t use pCloud :slight_smile:

Cool. Doing a backup --force right now. It still has about 700 GB to sort through; nearly all of it should already be in the repo, unless it got corrupted. After this backup is done I'll do the check, and I am indeed using the latest master build as of today (before this I'd been using one from about a month ago). Then I'll do a little manual pruning if anything is missing, and run a check --read-data to see what that does, as you suggested.

Unfortunately I’ve already bought a lifetime account. I’m going to try continuing to use pCloud, but only prune once or twice a year - and on a cloned repository, at that. We’ll see. Hopefully over time restic + rclone webdav stability will improve. I’ve been messing with duplicacy CLI, which natively supports webdav, and it’s been working just fine with pCloud and a ~70GB repository - but I vastly prefer restic.

I’m still wondering if there isn’t a retry or low-level retry switch I should try passing to rclone via restic. Might make it more robust with the occasional network hiccup?

PS: I should mention I've used pCloud for about a year now with no issues at all, but I've been (justifiably) paranoid about running prune. I've done check --read-data about three times, just to check on it occasionally, with no errors at all. All this happened because I got the idea in my head that now would be a good time to run prune, since I moved and now have 100 Mbps fiber instead of DSL. I was, apparently, wrong. I will say it's the same ISP, though. I don't have this issue if I run prune from work; I just don't like using my work connection for that. Plus we have forced updates with automatic reboots, which sometimes screw things up too.

Oh! Ha, I’m dumb. pCloud has a “rewind” feature. I’ll just restore to the day before the prune operation, and I should be set!

I am going to keep the corrupted repo up and running, as an exercise in repair, and still see what I can manage with it - but now I for sure have a viable backup :+1:

Very good. Might want to run a check --read-data after rewinding just to be sure!


Hmm, I'm using one of the beta versions (v0.9.6-353-gfa135f72). I wonder if this isn't a bug? A normal restic check finishes just fine, with no errors apart from the notes about duplicate files.

Going to try it again with v0.9.6-364-gb1b3f1ec and then stable if that doesn’t work.

This is highly interesting. It's the same error message that @fd0 and @MichaelEischer have been debugging extensively in the "Restic slice bounds out of range?" topic.

The v0.9.6-353-gfa135f72 version you tried it with is just one commit behind the latest master so that’s perfectly fine. I doubt the other versions will make a difference, and the error you are seeing now is something I am pretty sure the other guys would like to get your help investigating.

@MichaelEischer Can you sum up what patching we’d like to see (“manually compiled” text as well as the debugging output) for @akrabu to build and run a debug version for this issue?

EDIT: My bad, I didn't look closely enough. @MichaelEischer noted that this is just the progress bar crashing; it has nothing to do with the other report.


Sure. Point me at a debug version and I’ll be happy to help troubleshoot it. :+1:

That said, it did have to run overnight before it did it, so it will be a slow process lol

Hmm, I’m at 13% with the latest master, and just got this:

Load(<data/239b887aba>, 0, 0) returned error, retrying after 720.254544ms: <data/239b887aba> does not exist

However, it does exist, both in the corrupted repo and the restored repo that I’m checking right now. MD5 checks out the same on both, as well.

Okay, so I'm going to re-run check --read-data and see if it times out at the exact same spot again. It could be that pCloud randomly doesn't serve up data, which would be unfortunate. I'm also going to rclone the entire repository to an external disk now that I've made room for it. It will probably take a while; it's roughly 800 GB to 1 TB. But that way I can rule out the provider and know whether this is a bug or just pCloud not being a good backend.
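The copy to the external disk is just a plain rclone run, something like this (the paths are examples):

rclone copy pcloud:restic-repo /Volumes/External/restic-repo --transfers 8 -P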

The top two panes are my work machine, which has a faster internet connection. The bottom one is home, where I'm cloning the repo. :+1:

So the latest master AND the current stable build both give me:

pack 5c0f447c: not referenced in any index
pack c0e32d88: not referenced in any index
pack af6474b3: not referenced in any index
62 additional files were found in the repo, which likely contain duplicate data.
You can run restic prune to correct this.
check snapshots, trees and blobs
read all data
panic: runtime error: slice bounds out of range [:-5]

goroutine 3534 [running]:
main.newReadProgress.func1(0x0, 0x0, 0x0, 0x0, 0x64, 0x0, 0x125e90e801, 0x5255e00)
	cmd/restic/cmd_check.go:114 +0x2f8
github.com/restic/restic/internal/restic.(*Progress).updateProgress(0xc032a791e0, 0x0, 0x0, 0x0, 0x0, 0x64, 0x0, 0x387a00)
	internal/restic/progress.go:147 +0xb4
github.com/restic/restic/internal/restic.(*Progress).Report(0xc032a791e0, 0x0, 0x0, 0x0, 0x0, 0x1, 0x0)
	internal/restic/progress.go:136 +0x15c
github.com/restic/restic/internal/checker.(*Checker).ReadPacks.func1(0xc02c680f68, 0x14a2cd5)
	internal/checker/checker.go:803 +0x200
golang.org/x/sync/errgroup.(*Group).Go.func1(0xc032a6cf00, 0xc032a6cf90)
	vendor/golang.org/x/sync/errgroup/errgroup.go:57 +0x64
created by golang.org/x/sync/errgroup.(*Group).Go
	vendor/golang.org/x/sync/errgroup/errgroup.go:54 +0x66

So you were right. Not sure what to do at this point. Also not sure if it’s my repo, or a bug?

I wonder whether the progress bar will stop crashing if I pass --quiet?
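Something like this, I mean; --quiet is a global restic flag, so it should skip the progress display entirely (the repo path is just an example):

restic -r rclone:pcloud:restic-repo check --read-data --quiet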