Howdy @David,
You bet. They have been getting steadily worse, along with memory consumption during backup, as the repo grows with more snapshots, but this is not a big surprise as @cdhowie had already more or less predicted it.
You are correct. I meant to say, ‘in terms of magnitude’: among the ones we’ve discussed, both #2 and #5 are in the “several hundreds of MBs added” order-of-magnitude ballpark, ditto the ‘few thousands of changed/new files/blobs’, but they took considerably less time to complete (especially #2, which, I think not coincidentally, is the earlier one).
My point is that things seem to be going steadily downhill independently of how much the source data has changed. The only thing I can see that could account for that is repo growth, and this seems to be implied by my previous exchange with @cdhowie (referenced above).
The only solution I can see for this is as many forgets as possible, followed by a prune. Since I will do the prune on a copy of the repo and keep the old, pre-prune repo around, my idea is to forget everything but the latest snapshot for each of my datasets; if I ever need to restore anything from an old snapshot, I can still get it from the previous repo I will be keeping. See the rough sketch below.
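For the record, this is more or less the sequence I have in mind (the repo paths are just placeholders for my setup, and I'm relying on the fact that `forget` groups snapshots by host and paths by default, which matches my "one latest snapshot per dataset" intent):

```bash
# 1) Work on a copy of the repository, keeping the original untouched
cp -a /backup/restic-repo /backup/restic-repo-pruned

# 2) Keep only the most recent snapshot per dataset
#    (forget's default grouping is by host and paths)
restic -r /backup/restic-repo-pruned forget --keep-last 1

# 3) Actually reclaim the space
restic -r /backup/restic-repo-pruned prune

# 4) Sanity-check the pruned repository before switching backups over to it
restic -r /backup/restic-repo-pruned check
```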
Let’s hope I have enough time for that before things blow up in my face. Despite the top-notch hardware I’ve dedicated to restic (its own exclusive Xeon server), it seems to be bursting at the seams in too many places: not only the backup time window but, as previously stated, also memory usage and cache size. I’m starting to get the impression that restic is simply not ready for the kind of load I’m putting on it. Well, if it breaks, at least I get to keep both pieces.
– Durval.