Really slow incremental backups of a Windows share

So I’m in a situation where I can’t put Restic on the actual server, but I can “pull” from the server onto an external drive. It’s about 2TB of data. The problem is, it took 24 hours just to rescan 93GB of data. I already have a full backup, and the data hasn’t changed by a single byte yet, but it’s going as slowly as the initial backup for some reason. Is there a switch that would make it scan quicker? I tried --ignore-inode, not knowing whether it would actually help - but unfortunately it didn’t. Any ideas? Thanks!
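
For reference, the change-detection switches live on the backup command itself; the repository path and share name below are just placeholders:

restic -r E:\restic-repo backup \\server\share --ignore-inode

Newer restic versions also have --ignore-ctime, which additionally skips the ctime comparison:

restic -r E:\restic-repo backup \\server\share --ignore-inode --ignore-ctime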

Hello @akrabu,

My experience (with a much larger, ~30TB backup) is entirely different; the first backup took over 2 months, but subsequent ones take less than 36h (still a lot, but a whole lot less than the initial one).

Presuming you are backing up to local storage (an external hard disk, maybe?), I would look at memory usage – I’ve seen ‘restic backup’ using in excess of 36GB RAM here. If your machine doesn’t have enough memory, it could be thrashing the hell out of your system. Look at “Performance” under the Windows Task Manager to see whether that’s the case.
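
If you prefer a command prompt over the GUI, something like this shows restic’s memory usage directly (assuming the binary is named restic.exe):

tasklist /FI "IMAGENAME eq restic.exe"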

Cheers,
– Durval.

What does the final backup report say about new/unmodified files?
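
If it’s still running, restarting the backup with higher verbosity makes restic print per-file status as it goes, which helps spot what it thinks has changed (exact output varies by version; the repository path is a placeholder):

restic -r E:\restic-repo backup \\server\share --verbose=2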

@durval At the moment I’m up to 107.268GiB/2.101TiB. It’s using 276,100KiB of RAM. I have 12GB of RAM free. So I don’t think that’s it haha. The local drive is a 7,200rpm USB3.0 drive. It’s barely blinking. I’ve benchmarked it at 120MB/s. So that’s not the holdup either…

@cdhowie I’ll let you know in about a week or two when it finishes LOL

All I know is, the local repo folder size still hasn’t changed, except by a few bytes. It’s not actively copying anything at all yet - and I wouldn’t expect it to, because it’s currently rescanning an Archive folder that’s hardly ever used. It’s acting like it’s scanning it for the first time, matching checksums so nothing actually gets copied. But it’s moving very, very slowly…

Hi.
I have the same bug. When scanning a remote share, all data is re-read every time, regardless of the last modification time.
However, if I mirror the remote share to a local directory with robocopy first, it’s OK.
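
For anyone wanting to try the same workaround, the mirror step is a one-liner (paths are examples). /MIR makes the destination an exact mirror of the source; /FFT tolerates coarse 2-second timestamp granularity, which avoids needless re-copies:

robocopy \\server\share D:\mirror /MIR /FFT

restic then scans D:\mirror as a fast local path.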

So this incremental backup was going to take 28 days. I came up with a better way of doing it. This topic can now be closed. Thanks!

See: Remotely starting backups on Windows boxes

I checked your new topic (nice guide BTW) and it seems the problem was that, when accessing the files to back up via a “remote path” (by which I think you mean a Windows aka SMB aka CIFS share, correct?), restic backup acted as if all files had changed.

This IMHO would be a bug. I would recommend you file a bug report at Issues · restic/restic · GitHub, including how to reproduce it, so it can be addressed.

Cheers,
– Durval.

I’m thinking that restic needs to read the file to tell if it’s changed, and while that’s fast locally, it would be rather slow for a remote UNC path?

If not, yeah, could be a bug.

restic should not have to read a file when its ctime/mtime and size have not changed. On *ix, restic also checks the inode – not sure how this works out in Windows. But perhaps something is making restic think the file has changed so it is re-reading it?
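
In other words, the cheap path is a pure metadata comparison - very roughly sketched in shell below (this is an illustration, not restic’s actual code; the path and values are made up):

# mtime + size recorded in the parent snapshot (hypothetical values)
prev_meta="1560000000 1048576"
# current mtime + size, fetched without reading the file's contents
cur_meta=$(stat -c '%Y %s' /mnt/share1/file.bin)
[ "$cur_meta" = "$prev_meta" ] && echo "metadata unchanged - contents not re-read"

Only when that comparison fails (or is distrusted) should the file actually be read and re-chunked.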

I have seen instances where the times don’t match because the source drive’s filesystem capabilities don’t match the target drive’s. I found out HFS+ has a higher time resolution than exFAT (my target), so it would re-read each time. I reformatted my target to HFS+ and it worked fine. Could it be something like that?
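
A quick way to see whether that kind of truncation is happening is to compare the same file’s mtime on both sides (GNU stat shown; paths and timestamps are made up for illustration):

stat -c '%y' /mnt/source/file.dat   # 2019-06-01 12:00:00.123456789 +0000
stat -c '%y' /mnt/target/file.dat   # 2019-06-01 12:00:00.000000000 +0000

If the target stores coarser timestamps, the sub-second part is lost on copy and a strict equality comparison flags every file as changed.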

@akrabu, good point re: time resolution. This brings to mind the fact that rclone has a “tolerance” of 1s when comparing times – so as to account for systems where the resolution is in ms (or ns) and others where the resolution is just 1s. @fd0, how does restic handle that?
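
For reference, rclone exposes that tolerance as a flag; the paths here are placeholders:

rclone sync /mnt/source remote:backup --modify-window 1s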

Maybe, but if the parent snapshot was created against the same remote then wouldn’t the time resolution be the same? This would only apply if the time resolution changed between backups.

Hi, reviving this old topic because it looks like something I experienced, and this might be of help to someone.

My scenario is the following:
On the LAN, a Linux box (Raspbian 10 on a Raspberry Pi) with CIFS-mounted Windows shares (all from the same server).
These SMB shares are backed up remotely, through a VPN, to an HTTP rest-server.
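
In restic terms, the backup command looks roughly like this (host, port and repository name are placeholders):

restic -r rest:http://backup-host:8000/myrepo backup /mnt/share1 /mnt/share2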

I have 2 shares.
One of them was quite quick to back up (~20 minutes after the initial backup).
The second one was painfully slow, say anywhere from a few hours up to 10 hours or more.

After a very long search, the only thing that I found different was the way the shares were mounted.
In my /etc/fstab I had this:

//server/share1 /mnt/share1           cifs    ro,uid=localuser,credentials=/etc/samba/.smbcredentials,iocharset=utf8    0       0
//server/share2 /mnt/share2           cifs    uid=localuser,ro,credentials=/etc/samba/.smbcredentials,iocharset=utf8    0       0

Notice the order of the options ro and uid.
The effective mount options of the two shares were the following:

share1: (ro,relatime,vers=2.1,cache=strict,username=localuser,uid=1001,forceuid,gid=0,noforcegid,addr=192.168.1.199,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)

share2: (ro,relatime,vers=2.1,cache=strict,username=localuser,uid=1001,forceuid,gid=0,noforcegid,addr=192.168.1.199,file_mode=0755,dir_mode=0755,soft,nounix,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)

There is no “serverino” option on share2.

The only thing that I changed was the order of the options (ro,uid=localuser instead of uid=localuser,ro) and everything went back to normal, like 30-40 minutes for an incremental backup.
I have no idea why.
Looks to me like a Debian or upstream bug; not sure what to do at this point.
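
If someone else hits this, the effective options can be read straight from the kernel. My guess is that without serverino the client makes up inode numbers that aren’t stable, so restic’s inode check sees every file as changed; if so, restic’s --ignore-inode flag should also work around it (untested in my setup):

grep share2 /proc/mounts
restic backup /mnt/share2 --ignore-inode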

Hope that helps.