Hi,
I was wondering if someone could shed some light on the issue I'm having. Back in November, restic on this same server was getting these speeds:
Files: 1416 new, 158 changed, 1264728 unmodified
Dirs: 0 new, 3 changed, 0 unmodified
Data Blobs: 3902 new
Tree Blobs: 4 new
Added to the repo: 3.122 GiB
processed 1266302 files, 649.958 GiB in 1:41:22
snapshot 2dbbedea saved
There are 6 exclusion rules...
But now I'm getting these speeds:
Files: 365 new, 185 changed, 1277113 unmodified
Dirs: 0 new, 3 changed, 0 unmodified
Data Blobs: 1721 new
Tree Blobs: 4 new
Added to the repo: 1.443 GiB
processed 1277663 files, 677.480 GiB in 10:26:39
snapshot 692a5191 saved
There are 6 exclusion rules...
I'm not sure if it's a cache issue, or whether it's because the repo data is pretty large now, but I can't see where the extra 9 hours come from.
Which version of restic are you using, and on which operating system? The performance change around November could be related to the changed detection of modified files in restic 0.9.6. On which file system is the data stored that you are backing up?
What is the exact command line you use to run restic?
In order to verify whether the cause is the change from mtime to ctime for change detection, you can run the backup command with the --ignore-inode flag, just once for testing. If this run finishes in your “normal” duration, that is very likely the cause.
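As a sketch, that one-off test run could look like the following (the repository path and data path are placeholders, not the actual paths from this thread):

```shell
# One-off test: --ignore-inode makes restic skip the stricter
# inode/ctime-based change detection introduced in 0.9.6, so it
# falls back to mtime for deciding which files to re-read.
# /path/to/repo and /path/to/data are placeholders.
restic -r /path/to/repo backup --ignore-inode /path/to/data
```

If this run completes in roughly the old 1:41 instead of 10+ hours, the change-detection switch is the likely culprit.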
Then again, you could start by just telling us which versions you were running in November and now. If you haven’t upgraded, the mtime-to-ctime change isn’t the cause.
I’m wondering what could have caused the slowdown. During a backup restic just loads the index and the tree nodes of the latest snapshot from the repository.
The latter should be mostly identical to your new backup. Which command line do you use exactly to create the backup? Did you disable caching?
As you run prune daily, there shouldn’t be any performance problems due to a large number of small index files. So my other guess would be: did the index grow large enough to cause your machine to start swapping?
Do you still have the old repository? If yes, could you please take a look at how many files are in the index folder and also check how large that folder is?
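A check like that could be sketched as follows. A throwaway directory with one fake index file stands in for the real repository path here, since the repo location hasn't been mentioned yet; for the actual check, point REPO at the old repository instead:

```shell
# Sketch: inspect the index folder of a restic repository.
REPO=$(mktemp -d)                               # stand-in for the real repo path
mkdir -p "$REPO/index"
head -c 1024 /dev/zero > "$REPO/index/0001"     # fake 1 KiB index file

# How many index files are there? (prune should keep this number small)
find "$REPO/index" -type f | wc -l

# How large is the index folder in total?
du -sh "$REPO/index"
```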
restic -r /media/backupnas/ stats
enter password for repository:
repository 6fac3f45 opened successfully, password is correct
scanning...
Stats for all snapshots in restore-size mode:
Total File Count: 53206011
Total Size: 26.443 TiB
Can you take a look at how many files are in the index folder of your backup repository, along with its size? Or even better: what is the output of ls -la /media/backupnas/index?
It would also be useful to see the exact command line used by rescript for the backup; I’m mostly interested in whether caching is disabled for the backup runs or not.
I didn’t notice anything in rescript that would disable the cache used by restic.
An index of around 180 MB shouldn’t cause performance issues for the backup command, especially because the prune command seems to have properly cleaned up the index. Does the host which runs the backup have more than 1 GB of free memory?
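On a Linux host, a quick way to check for memory pressure is to look at /proc/meminfo; a minimal sketch (MemAvailable requires a reasonably recent kernel):

```shell
# MemAvailable: memory the kernel estimates can be used without swapping.
# If this is far above the index size, swapping is unlikely to be the issue.
awk '/MemAvailable/ {printf "available: %d kB\n", $2}' /proc/meminfo

# SwapTotal vs. SwapFree: a large amount of swap in use while the
# backup runs would point toward memory pressure.
awk '/^SwapTotal|^SwapFree/ {print $1, $2, $3}' /proc/meminfo
```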
Thanks for the reply. The host has 48 GB of RAM, so I know it was odd that it was taking so long. First I cut the backup down, thinking that would speed it up, but it didn’t. So I then created another repo and redid the backup. Very odd, but it's currently working; it does the backup in 2 hours.
Hmm, I’m mostly out of ideas as to what could be the reason for the slowdown. It looks like the only way to find out would be to build a restic version with profiling support and then take a look at the CPU usage.
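As a rough sketch of that approach, one option that avoids restic-specific profiling flags is to build restic from source and sample a backup run with the generic Linux perf tool (the paths below are placeholders):

```shell
# Build restic from source so symbols are available for profiling
git clone https://github.com/restic/restic
cd restic
go build ./cmd/restic

# Sample CPU usage of a backup run with call graphs, then inspect
# where the time is spent (repo and data paths are placeholders)
perf record -g ./restic -r /path/to/repo backup /path/to/data
perf report
```

Restic's own debug/profiling build may offer more detail; the perf approach above is just a generic way to get a first look at the CPU hot spots.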