Clean up for better speeds?

Hi,
I was wondering if someone could shed some light on an issue I'm having. Back in November, restic on the same server was getting these speeds:

Files:        1416 new,   158 changed, 1264728 unmodified
Dirs:            0 new,     3 changed,     0 unmodified
Data Blobs:   3902 new
Tree Blobs:      4 new
Added to the repo: 3.122 GiB

processed 1266302 files, 649.958 GiB in 1:41:22
snapshot 2dbbedea saved
There are 6 exclusion rules...

But now I'm getting these speeds:

Files:         365 new,   185 changed, 1277113 unmodified
Dirs:            0 new,     3 changed,     0 unmodified
Data Blobs:   1721 new
Tree Blobs:      4 new
Added to the repo: 1.443 GiB

processed 1277663 files, 677.480 GiB in 10:26:39
snapshot 692a5191 saved
There are 6 exclusion rules...

I'm not sure if it's a cache issue, or whether it's because the repo data is pretty large now, but I don't see why there is an extra 9 hours.

Thank you

Hi there,

Which backend are you using? Also, when was the last prune?

Hi there, the backend is currently saving the restic files to a NAS, and the prune runs every 24 hours.

Which version of restic are you using, and on which operating system? The performance change around November could be related to the changed detection of modified files in restic 0.9.6. On which file system is the data you are backing up stored?
What is the exact command line you use to run restic?
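
In case it helps, something like this would collect that information (the share path is the one that shows up later in this thread; substitute your own):

restic version
cat /etc/os-release                # operating system / distribution
df -T /media/servers/ad/shares     # file system type of the data being backed up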

In order to verify whether the cause is the change from mtime to ctime for change detection, you can run the backup command with the --ignore-inode flag, just once for testing. If this runs with your “normal” duration, that might very well be it.
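
For example, a single test run could look roughly like this (using the repository and share paths that appear elsewhere in this thread; adjust to your setup, and note it skips whatever exclude options rescript normally adds, so it is only meant as a timing test):

restic -r /media/backupnas backup --ignore-inode /media/servers/ad/shares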

Then again, you could start by just telling us which versions you were running in November and now. If you haven't upgraded, the mtime to ctime change isn't the cause.

Thanks for the reply. What I ended up doing was creating another repo from zero, and it started to work fast again, 2-3 hours tops. Same version, 0.9.5.

So you never used a version higher than 0.9.5?

Nope, I had to redo the backup from zero to make it go faster again.

I’m wondering what could have caused the slowdown. During a backup restic just loads the index and the tree nodes of the latest snapshot from the repository.

The latter should be mostly identical to your new backup. Which command line do you use exactly to create the backup? Did you disable caching?

As you run prune daily, there shouldn’t be any performance problems due to a large number of small index files. So my other guess would be that the index grew large enough to cause your machine to start swapping?
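
If you want to rule out swapping, watching memory while a backup runs would already tell you a lot, e.g.:

free -h      # overall memory and swap usage
vmstat 1     # the si/so (swap in/out) columns should stay at 0 if nothing is swapping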

Do you still have the old repository? If yes, could you please take a look at how many files are in the index folder and also check how large that folder is?

Thanks for the reply. I'm currently using rescript to run the backup; correct, I prune every 24 hours. Yes, I still have the old repo, these are the snapshots:

created new cache in /root/.cache/restic
ID        Time                 Host         Tags        Paths
--------------------------------------------------------------------------------
f949edc0  2019-12-14 21:05:17  prometheus2              /media/servers/ad/shares
454bb60f  2019-12-15 21:05:04  prometheus2              /media/servers/ad/shares
85396458  2019-12-16 21:05:04  prometheus2              /media/servers/ad/shares
0a9104e6  2019-12-17 21:05:06  prometheus2              /media/servers/ad/shares
451b908d  2019-12-18 21:05:05  prometheus2              /media/servers/ad/shares
a622694d  2019-12-19 21:05:05  prometheus2              /media/servers/ad/shares
08ca1c6b  2019-12-20 21:05:06  prometheus2              /media/servers/ad/shares
8ba6a26a  2019-12-21 21:05:04  prometheus2              /media/servers/ad/shares
50989ced  2019-12-22 21:05:03  prometheus2              /media/servers/ad/shares
95ff48ed  2019-12-23 21:05:05  prometheus2              /media/servers/ad/shares
4de2289e  2019-12-24 21:05:04  prometheus2              /media/servers/ad/shares
d25bb6e7  2019-12-25 21:05:05  prometheus2              /media/servers/ad/shares
35d23e40  2019-12-26 21:05:05  prometheus2              /media/servers/ad/shares
ea9732b0  2019-12-27 21:05:05  prometheus2              /media/servers/ad/shares
c9fbcf7c  2019-12-28 21:05:04  prometheus2              /media/servers/ad/shares
89b1dad2  2019-12-29 21:05:03  prometheus2              /media/servers/ad/shares
8e347271  2019-12-30 21:05:04  prometheus2              /media/servers/ad/shares
7cb232bd  2019-12-31 21:05:04  prometheus2              /media/servers/ad/shares
4ba887d6  2020-01-01 21:05:04  prometheus2              /media/servers/ad/shares
42e941b1  2020-01-02 21:05:05  prometheus2              /media/servers/ad/shares
ef18461c  2020-01-03 21:05:04  prometheus2              /media/servers/ad/shares
aefd4745  2020-01-04 21:05:04  prometheus2              /media/servers/ad/shares
dbcecd80  2020-01-05 21:05:04  prometheus2              /media/servers/ad/shares
b7748b19  2020-01-06 21:05:04  prometheus2              /media/servers/ad/shares
400b0920  2020-01-07 21:05:05  prometheus2              /media/servers/ad/shares
3d512b48  2020-01-08 21:05:05  prometheus2              /media/servers/ad/shares
5e7be122  2020-01-09 21:05:05  prometheus2              /media/servers/ad/shares
f50e2136  2020-01-10 21:05:04  prometheus2              /media/servers/ad/shares
60229155  2020-01-12 21:05:04  prometheus2              /media/servers/ad/shares
7a23aa0c  2020-01-13 21:05:06  prometheus2              /media/servers/ad/shares
fb25c2b4  2020-01-14 21:05:05  prometheus2              /media/servers/ad/shares
eb942488  2020-01-15 21:05:04  prometheus2              /media/servers/ad/shares
b4c7afef  2020-01-16 21:05:05  prometheus2              /media/servers/ad/shares
f207fc23  2020-01-17 21:05:06  prometheus2              /media/servers/ad/shares
e12bd7a5  2020-01-18 21:05:05  prometheus2              /media/servers/ad/shares
8a621c57  2020-01-19 21:05:03  prometheus2              /media/servers/ad/shares
692a5191  2020-01-20 21:05:05  prometheus2              /media/servers/ad/shares
3b9026be  2020-01-21 21:05:04  prometheus2              /media/servers/ad/shares
a6fb2827  2020-01-22 21:05:06  prometheus2              /media/servers/ad/shares
daf3c13d  2020-01-23 21:05:04  prometheus2              /media/servers/ad/shares
--------------------------------------------------------------------------------
40 snapshots

and this info

 restic -r /media/backupnas/ stats
enter password for repository:
repository 6fac3f45 opened successfully, password is correct
scanning...
Stats for all snapshots in restore-size mode:
  Total File Count:   53206011
        Total Size:   26.443 TiB

Can you take a look at how many files are in the index folder of your backup repository, along with its size? Or even better: what is the output of ls -la /media/backupnas/index?

It would also be useful to get the exact command line used by rescript for the backup; I'm mostly interested in whether caching is disabled for the backup runs or not.
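
Something along these lines would answer the index question and also show whether the local cache is being populated (paths taken from the output in this thread):

ls /media/backupnas/index | wc -l    # number of index files
du -sh /media/backupnas/index        # total size of the index folder
du -sh /root/.cache/restic           # size of restic's local cache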

Thanks for the reply, this is what I got; I'm currently looking at rescript to see what command it runs.
this is the script

root@prometheus2:/media# ls -la /media/backupnas/index
total 169264
drwxr-xr-x 2 root root       0 Feb  1 12:21 .
drwxr-xr-x 2 root root       0 Jan 30 22:34 ..
-rwxr-xr-x 1 root root 4838049 Feb  1 12:21 0c074c145865a01f321336e4d6975311d6a4c9af8267a06fda29ed4e21945a09
-rwxr-xr-x 1 root root 4742989 Feb  1 12:21 0ceea0d5c594708390c3aaa7fa45dba62257c33eb33d95efb411fa861400fc8f
-rwxr-xr-x 1 root root 4629336 Feb  1 12:21 15b16917fdbedf90a31e757e0fac9f8f42614c0c21cd3c78c48bc317e8f4b238
-rwxr-xr-x 1 root root 4226826 Feb  1 12:21 3141c392fe346d54c734c992b412f93e39cbc92ee4f1ac92a606582dc1174696
-rwxr-xr-x 1 root root 4763687 Feb  1 12:21 35bfb944b3aea48ae38943eaad198fb41c33983db824c43cd40b8f902bea3505
-rwxr-xr-x 1 root root 4459882 Feb  1 12:21 37686ec65f416ee1255e5e9eff3707b6a58d8ab958957c32b5dcd25cc42321d3
-rwxr-xr-x 1 root root 4597181 Feb  1 12:21 394ac827f76ee64451593d41c918153444fcb0c0b8da290542151b646235d1c9
-rwxr-xr-x 1 root root 4479635 Feb  1 12:21 3a2ffbad8261bb51b84244ee6540191cbabb53cac79036081bd041754d8a861a
-rwxr-xr-x 1 root root 4465703 Feb  1 12:21 440c1d98503f2ef5a89d454b06ea8b5548d0d64caac93d93ee96b6113b547199
-rwxr-xr-x 1 root root 4309078 Feb  1 12:21 47313bab3fc3de01beb53563937bbcfa5756fd171da7b20a523c5a14328e6cf1
-rwxr-xr-x 1 root root 4697284 Feb  1 12:21 4d39da49ad0c923f4c9b1ef3d5e92913271590d5df30bd9479c627a0bc190f88
-rwxr-xr-x 1 root root 4566664 Feb  1 12:21 5f72629c812a9ba4c755d3439816ad5aef7a0e35dd42a2f27379e8e7adebd801
-rwxr-xr-x 1 root root 4452133 Feb  1 12:21 7cc8fb8ddb39ec4c92eeb079385e1e2201d328d20ca00945c8016a9fa4e1be0f
-rwxr-xr-x 1 root root 4567187 Feb  1 12:21 81472e271b3d03d1ac610647ac3c7ffe3f5541db79f736afdd347eabbebe3bbc
-rwxr-xr-x 1 root root 4529883 Feb  1 12:21 81bfae3e01929efc73401bbfb52db30ddb42352cf338eac92e86d41e90f4c9c2
-rwxr-xr-x 1 root root 4223343 Feb  1 12:21 834e59e639a7f300b262cfc3a6ffe09c591f1f8e605851b32998d01c44428820
-rwxr-xr-x 1 root root 5037142 Feb  1 12:21 8efddc404f4f43514610a2f5ddf2ddbb1d42b42add5f7a54c6d8580ec72d560f
-rwxr-xr-x 1 root root 4768293 Feb  1 12:21 9600343d6c0af28608e6cd9d59ba1cd8b52951858b294846505615a4900f7759
-rwxr-xr-x 1 root root 4651423 Feb  1 12:21 ace2e0592f5aa319f6d0ced6067d286a716dbbb5714c7be1522d57b4323bb708
-rwxr-xr-x 1 root root 4484949 Feb  1 12:21 b0094dee38813303996352be55085794f8a697a62ed9d8d83db86d60d1c796af
-rwxr-xr-x 1 root root 4474060 Feb  1 12:21 b74c376156828f264ab56887b8cb82d20ca34309d3c356a86632e4273439aadc
-rwxr-xr-x 1 root root 4705469 Feb  1 12:21 bbbc73ff67b0071cc04b35439af39328b185e74c9810f6156e4440db195dc258
-rwxr-xr-x 1 root root 4673044 Feb  1 12:21 bf4fc5967f2d2f748482fbf0839c5b1ed9e2c7e8f33d8d45f41899813725d7fd
-rwxr-xr-x 1 root root 4353979 Feb  1 12:21 c4ed295c8bc12146b8317c30ea923555dbd5980d2e2226cbd67112a7479f4f9f
-rwxr-xr-x 1 root root 2763560 Feb  1 12:21 c71091c5000cbfa08c00106838467f0eb8a80d82c65b9b069e7953de21f7a764
-rwxr-xr-x 1 root root 4950717 Feb  1 12:21 c73e1e1c7bcfb1dad5ca31e93939644f35bd628f44f51db85ab8538efba8dcfa
-rwxr-xr-x 1 root root 4768710 Feb  1 12:21 db7b50958f24af330a96e9dfa6398989dc3ed48855129262be1f74d5c2de9254
-rwxr-xr-x 1 root root 4599037 Feb  1 12:21 dd5291f8c16e795dfa0311079d108896b0cc90af72a4257dd6a7f213f0cd3b4a
-rwxr-xr-x 1 root root 4499141 Feb  1 12:21 ddff69fb0ec71b2314a00593f5bf2024e82dac5ceccd59fd56477cce08c83a4b
-rwxr-xr-x 1 root root 4334742 Feb  1 12:21 deae9f4d078eaa4d7557be627253be9e85e5874b1b2d5e2089dcfee5851fb9fe
-rwxr-xr-x 1 root root 5109038 Feb  1 12:21 e0c110cac577b6e2eecf729d018c22ab382d0500352d637888cf816cbd1ea030
-rwxr-xr-x 1 root root 4582430 Feb  1 12:21 e3acfdfa4602f0db6d42f4029d67ebb65112228d2b145e7f5a3c36b8daf48532
-rwxr-xr-x 1 root root 4797731 Feb  1 12:21 e63a20766c2a34d3ead54b7e5b1a99b5a8564e2797bfa58c0b3f09dd5b85d177
-rwxr-xr-x 1 root root 4599882 Feb  1 12:21 e776dd4455a5d48775a91be354b92cee7b9d4fcd95043cd87c5f80912649fd75
-rwxr-xr-x 1 root root 4668372 Feb  1 12:21 edda2b5897cee35007ceb4959a39f1b4cfec0453153ab3ba7aa90289c328d717
-rwxr-xr-x 1 root root 4892817 Feb  1 12:21 edf8cabcea7456db5cc3019aaf5473ce447a13ad860a5354bde5d61e3a97bc9d
-rwxr-xr-x 1 root root 4526259 Feb  1 12:21 f6703c4399a6c1bad745c853a4da066526fd2fd8d5a7e610c7b6b9ee16cd5481
-rwxr-xr-x 1 root root 4461631 Feb  1 12:21 fb2cf4edca8af1a9527e075f0a78b5acb36c5ea93de64c3e04497d53c75edf2f

I didn’t notice anything in rescript that would disable the cache used by restic.

An index of around 180 MB shouldn't cause performance issues for the backup command, especially because the prune command seems to have properly cleaned up the index. Does the host which runs the backup have more than 1 GB of free memory?

Thanks for the reply. The host has 48 GB of RAM, so I know it was odd that it was taking this long. What I did first was cut the backup, thinking it would speed things up, but it didn't; so I created another repo and redid the backup from scratch. Very odd, but it's currently working and does the backup in 2 hours.

Hmm, I'm mostly out of ideas on what could be the reason for the slowdown. It looks like the only way to find out would be to build a restic version with profiling support and then take a look at the CPU usage.
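
For reference, a rough sketch of what that would involve, assuming the debug build tag still enables the extra profiling options (the exact steps should be checked against restic's developer documentation):

git clone https://github.com/restic/restic
cd restic
go build -tags debug ./cmd/restic    # build with the debug tag (assumed to add the profiling/debug options)
./restic --help                      # the debug build lists the additional options it provides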

Thanks for the reply. Same here, I had to create another repo for now; I'll post back if it happens again.