For the check, yes. The “server” itself (or a host which has fast connection to the backend where restic keeps the data) would be better.
For the backup, I am not sure how you would even initiate it anywhere other than the client itself, since restic needs to read the data. In theory, if you had a third host with a very fast connection to both the backend and the client, you could mount the client's data on that host and back up from there, but that might be stretching it a bit.
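For what it's worth, a minimal sketch of the server-side check idea: run `restic check` from the host that has the fast connection to the backend, optionally with `--read-data-subset` so each run only verifies a fraction of the pack data. The repository URL here is a placeholder, and how you trigger this (cron, systemd timer, …) is up to you.

```python
import subprocess

# Hypothetical repository URL; adjust to your backend.
REPO = "s3:https://backend.example.com/restic-repo"

def build_check_cmd(repo: str, subset: str = "") -> list[str]:
    """Build a `restic check` command. --read-data-subset verifies only
    a fraction of the pack files, which keeps each run cheap when this
    host has a fast connection to the backend."""
    cmd = ["restic", "-r", repo, "check"]
    if subset:
        cmd += ["--read-data-subset", subset]
    return cmd

def run_check(repo: str, subset: str = "1/10") -> int:
    """Run the check from this host; returns restic's exit code.
    Credentials (RESTIC_PASSWORD, S3 keys, ...) are expected in the
    environment."""
    return subprocess.run(build_check_cmd(repo, subset)).returncode
```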
Backups should be made regularly, often from many clients to a shared server.
If you have the choice to either
a) do the backups centrally one after the other from the server e.g., via CIFS/SMB or
b) set up individual cron jobs (Linux) or tasks in the task scheduler (Windows) on each client
then I can't think of any advantage, except perhaps speed, that would speak for b), hence the question.
If the clients are connected to the server via VPN (WireGuard, Tailscale, …), I don't think there is anything wrong with sharing via CIFS/SMB. On the contrary: ransomware, for example, is more likely to end up on a client than on a server due to carelessness.
Back to the initial question:
Is my assumption correct that backups are much faster if restic runs on the client, where files and cache are on one and the same machine, because then less data has to pass through the slow network connection?
Ultimately, of course, I could try both and compare, but I’m sure I’m not the first to face this question.
I think I was a victim of the terminology (e.g. server vs. host vs. backend). But yes, client-side backups will be more efficient and faster, as long as caching is not disabled.
I run restic on ~4k clients directly. Some points I'd like to note:

- If they all use the same repo, and that repo gets too big, the total cache size across your clients can become noticeable.
- The wrapper / cron on the client side should be somewhat clever, so the clients don't complain too loudly when they can't perform a backup while you're doing maintenance on the repo (e.g. check/forget/prune). At least until something like non-locking check/prune lands (if it ever does).
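A minimal sketch of that "don't cry too bad" behaviour: detect lock contention and back off instead of failing loudly. It assumes restic's lock error still contains the phrase "repository is already locked" on stderr (the exact wording may differ between versions), and the delays are made-up numbers.

```python
import random
import subprocess
import time

def repo_is_locked(stderr: str) -> bool:
    """Restic reports lock contention with a message like
    'repository is already locked by PID ... on host ...' on stderr."""
    return "repository is already locked" in stderr

def backup_with_backoff(cmd: list[str], attempts: int = 3,
                        base_delay: float = 60) -> bool:
    """Try the backup; if the repo is locked (e.g. a server-side
    check/forget/prune is running), wait and retry a few times
    instead of erroring out. Other failures return immediately."""
    for attempt in range(attempts):
        proc = subprocess.run(cmd, capture_output=True, text=True)
        if proc.returncode == 0:
            return True
        if repo_is_locked(proc.stderr) and attempt < attempts - 1:
            # Jittered backoff so thousands of clients don't retry in lockstep.
            time.sleep(base_delay * (attempt + 1) + random.uniform(0, 30))
            continue
        return False
    return False
```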
If I understood your question correctly: I have a few assorted safeguards in place, e.g.:
- A cron-like check (on the "server" side) which has read access to all repositories (using minio/s3), loops through the server list, and queries each one's latest snapshot timestamp (e.g. a "last backup too old" warning)
- Restic is triggered via a Python wrapper, which does some magic like logging all problems and handling the client-side locks (no two backups / database dumps should run at the same time on a client, not retrying 200 times when a repeating error occurs, etc.)
- An external service, which also has read access to all repositories, loops through them and actually restores the latest snapshot to test the backups
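The "last backup too old" check could be sketched like this, parsing the JSON that `restic snapshots --json --latest 1` prints (a list of snapshot objects with an RFC 3339 `time` field). The 36-hour threshold is a made-up example value.

```python
import datetime as dt
import json
import subprocess

MAX_AGE = dt.timedelta(hours=36)  # warn if the newest snapshot is older

def latest_snapshot_time(snapshots_json: str) -> dt.datetime:
    """Extract the newest snapshot timestamp from `restic snapshots
    --json --latest 1` output."""
    snaps = json.loads(snapshots_json)
    return max(dt.datetime.fromisoformat(s["time"]) for s in snaps)

def backup_too_old(snapshots_json: str, now: dt.datetime = None,
                   max_age: dt.timedelta = MAX_AGE) -> bool:
    latest = latest_snapshot_time(snapshots_json)
    now = now or dt.datetime.now(latest.tzinfo)
    return now - latest > max_age

def check_repo(repo: str) -> bool:
    """Query one repository; credentials are expected in the environment.
    Loop this over your repo list from cron."""
    out = subprocess.run(
        ["restic", "-r", repo, "snapshots", "--json", "--latest", "1"],
        capture_output=True, text=True, check=True).stdout
    return backup_too_old(out)
```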
Client-side retry & logging is especially important, since you can randomly hit network-related issues or even memory exhaustion (the OOM killer).
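A sketch of the logging half, with a hypothetical log file name. One detail worth capturing: a restic process killed by the OOM killer shows up as a negative return code (the signal number, e.g. -9 for SIGKILL), usually with empty stderr, and that pattern in the log is itself a useful diagnostic.

```python
import logging
import subprocess
from logging.handlers import RotatingFileHandler

# Hypothetical log location; adjust per client.
handler = RotatingFileHandler("restic-wrapper.log",
                              maxBytes=1_000_000, backupCount=5)
logging.basicConfig(level=logging.INFO, handlers=[handler],
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("restic-wrapper")

def run_logged(cmd: list[str]) -> bool:
    """Run restic and keep its stderr around. A negative returncode
    means the process died from a signal (e.g. -9 = SIGKILL from the
    OOM killer), in which case stderr is typically empty."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode == 0:
        log.info("ok: %s", " ".join(cmd))
        return True
    log.error("failed (rc=%s): %s\n%s", proc.returncode,
              " ".join(cmd), proc.stderr.strip())
    return False
```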
But if you want to check any pre-made solution/helper, I’d suggest checking github for it: link