Thanks all. I'm getting closer; not a solution yet, but the scope of the issue is smaller.
And I think tjh may have been on to something with mount options, as it only seems to happen after a reboot. Creating a smaller testing folder made it much easier to iterate, and I was missing the reboot condition previously because each run took 12 hours.
This is an update, not a solution, and I'm still digging into what is happening.
In short, I have the following scenario, with the reduced testing folder:
- Initial backup; let's call it snapshot 1
- This backup takes 30s (expected)
- Reboot
- Rerun the same backup; let's call it snapshot 2
- This backup takes 30s (unexpected)
- No reboot
- Rerun the same backup; let's call it snapshot 3
- This backup takes 1s (expected, and awesome)
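To make the cycle quick to rerun, it can be scripted; a minimal sketch, where the repo path and password file are placeholders rather than my actual setup:

```shell
# Sketch of the reduced test cycle; REPO and the password file are
# placeholders. 'time' surfaces the 30s-vs-1s difference between runs.
REPO=/path/to/repo
export RESTIC_PASSWORD_FILE=/path/to/passfile

backup() {
    # Skip gracefully on machines without restic installed.
    command -v restic >/dev/null 2>&1 || { echo "restic not found"; return 0; }
    time restic -r "$REPO" backup /mnt/FF67-F77E/files
}

backup   # snapshot 1: ~30s, full scan (expected)
# reboot here, then:
backup   # snapshot 2: ~30s, full rescan (unexpected)
backup   # snapshot 3: ~1s, matches parent snapshot (expected)
```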
Looking at the 3 snapshots in more detail, as well as their immediate trees:
Snapshot 1
cat snapshot
repository 523f75c6 opened (version 2, compression level auto)
{
"time": "2024-11-03T12:22:16.129025383+11:00",
"parent": "28b1fcf83a7416f5dd9dfd56c7f9b0e47be4710f1232ca45324f5b69b64cf8a1",
"tree": "b2b90e2c63626ac41c5ece049356193ab0ab54a8ca12a0e9d322c7d5aea5f790",
"paths": [
"/mnt/FF67-F77E/files"
],
"hostname": "gearbox",
"username": "chris",
"uid": 1000,
"gid": 1000,
"program_version": "restic 0.16.4"
}
cat blob
{
"nodes": [
{
"name": "mnt",
"type": "dir",
"mode": 2147484141,
"mtime": "2024-10-27T15:54:07.429999922+11:00",
"atime": "2024-10-27T15:54:07.429999922+11:00",
"ctime": "2024-10-27T15:54:07.429999922+11:00",
"uid": 0,
"gid": 0,
"user": "root",
"group": "root",
"inode": 3670017,
"device_id": 2115,
"content": null,
"subtree": "ad4c930daaef0652aa01ec9f25c5134bc205cdb62b54fe0efeea216532c2970a"
}
]
}
Snapshot 2
After reboot, unexpectedly reruns full backup
cat snapshot
{
"time": "2024-11-03T12:23:57.702699622+11:00",
"parent": "e4209073606a6040b789fd85f6a9f2a236cb243050a99fff4fee17f1aafd5f30",
"tree": "690ef13c98b1aede49a0e39fc2881d210bfc99e9637e1dcc1425714e2c357ec5",
"paths": [
"/mnt/FF67-F77E/files"
],
"hostname": "gearbox",
"username": "chris",
"uid": 1000,
"gid": 1000,
"program_version": "restic 0.16.4"
}
cat blob
{
"nodes": [
{
"name": "mnt",
"type": "dir",
"mode": 2147484141,
"mtime": "2024-10-27T15:54:07.429999922+11:00",
"atime": "2024-10-27T15:54:07.429999922+11:00",
"ctime": "2024-10-27T15:54:07.429999922+11:00",
"uid": 0,
"gid": 0,
"user": "root",
"group": "root",
"inode": 3670017,
"device_id": 2115,
"content": null,
"subtree": "a124c19768d84ead187a50f622e47b04d07c3346f105096ef9638976ad2bf5f1"
}
]
}
Snapshot 3
cat snapshot
{
"time": "2024-11-03T12:27:10.472468147+11:00",
"parent": "77187405a9a575aa8dc054d85fff871c24ca8b9e257886fd676b887fdf9c5b18",
"tree": "690ef13c98b1aede49a0e39fc2881d210bfc99e9637e1dcc1425714e2c357ec5",
"paths": [
"/mnt/FF67-F77E/files"
],
"hostname": "gearbox",
"username": "chris",
"uid": 1000,
"gid": 1000,
"program_version": "restic 0.16.4"
}
cat blob
(same as snapshot 2; it's the same tree)
When the unexpected full backup runs for snapshot 2, the tree changes (b2b90e2c63626ac41c5ece049356193ab0ab54a8ca12a0e9d322c7d5aea5f790
to 690ef13c98b1aede49a0e39fc2881d210bfc99e9637e1dcc1425714e2c357ec5).
Whereas for the expected backup, which completes quickly, the tree does not change (it remains 690ef13c98b1aede49a0e39fc2881d210bfc99e9637e1dcc1425714e2c357ec5
between snapshots 2 and 3).
So something in the reboot is causing the root tree in the snapshot to change, and I'm assuming this has a trickle-down effect, triggering a change in every subtree. Comparing the two blobs, though, they appear identical (same device_id, user, group, times, etc.), so I'm still figuring out what exactly is changing.
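A next step might be to dump the per-file metadata that (as I understand it) restic's change detection compares against the parent snapshot, before and after a reboot, and diff the two dumps. A sketch using plain coreutils; the source path is from my setup above, and the stat format string is GNU-specific:

```shell
# Dump the per-file metadata restic's change detection looks at
# (inode, device, ctime, mtime, size); GNU stat format string.
SRC="${SRC:-/mnt/FF67-F77E/files}"   # source path from above

dump_meta() {
    find "$1" -exec stat --format '%n inode=%i dev=%d ctime=%Z mtime=%Y size=%s' {} + | sort
}

dump_meta "$SRC" > meta-before.txt
# reboot, then rerun into meta-after.txt and:
# diff meta-before.txt meta-after.txt
```

If the diff comes back empty while restic still rescans, that would point at something outside the file metadata itself (e.g. how the mount presents it) rather than the files changing.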