I’ve been using Restic for a couple of years to back up my NAS (around 10TB of data). The RAM requirements were getting out of hand, so I decided to shard my backups into separate repositories: device backups in one (all the Windows devices back up to the NAS without using Restic), documents in another, etc.
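The split looks roughly like this (repository paths and shard names here are illustrative, not my exact layout):

```bash
#!/bin/bash
# Illustrative shard layout -- one repository per category, so each
# backup run only has to hold one shard's index in RAM.
# Assumes RESTIC_PASSWORD (or --password-file) is configured elsewhere,
# and that each repository has already been initialized.
for shard in devices documents media; do
    RESTIC_REPOSITORY="/mnt/backup/restic-$shard" \
        restic backup "/mnt/cephfs/$shard"
done
```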
With that, I’m freshly backing up my NAS to my colo (going to take 60 days, woot! I’d rather not risk a trip to the colo). I’m running into a `no space left on device` error I’ve never seen before.
When I first started using Restic, I moved the Restic tmp folder to tmpfs because my monitoring saw a huge number of writes when creating the `restic-temp-pack` files. I run SSDs on the primary drives of all my VMs, so I try to reduce the number of writes there. I haven’t changed anything, but I’m apparently running out of space now (about 1GB allocated to tmpfs).
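For context, the setup is something along these lines (the mount point is illustrative; Restic honors `TMPDIR` like any Go program):

```bash
# /etc/fstab -- a dedicated tmpfs for Restic's temp files (mount point illustrative):
#   tmpfs  /var/tmp/restic  tmpfs  size=1G,mode=0700  0  0

# Point Restic at it via TMPDIR, which Go's os.TempDir() respects on Unix.
TMPDIR=/var/tmp/restic restic backup /mnt/cephfs
```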
Normally, the `restic-temp-pack` files seem really small. My monitoring hasn’t seen any out-of-memory conditions, which leads me to think that Restic is writing more than 1GB to the tmpfs mount.
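A quick way to confirm that would be to watch the mount while a backup runs (path matching the tmpfs sketch above):

```bash
# Refresh usage and pack-file sizes every 5 seconds during a backup run
watch -n 5 'df -h /var/tmp/restic && ls -lh /var/tmp/restic'
```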
How much tmpfs space should we expect Restic to use?
(I also updated to 0.10.0 during this migration; I’m waiting on 0.11.0 to get promoted into Debian testing. I was previously using 0.9.4, so I’m not sure if something changed with the `restic-temp-pack` files.)
My setup is rather trivial. I have my archive server in the colo handle pruning (for memory/latency reasons).
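On the archive server, the prune side is just a forget-and-prune pass (the retention values here are placeholders, not my actual policy):

```bash
#!/bin/bash
# Runs on the archive server, so the NAS never pays the prune memory cost.
restic forget \
    --keep-daily 7 \
    --keep-weekly 4 \
    --keep-monthly 12 \
    --prune
```

And the backup script itself: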
```bash
#!/bin/bash
restic version

# Initialize the repository on the first run only.
restic snapshots > /dev/null 2>&1
repoExists=$?
if [ $repoExists -ne 0 ]; then
    echo "Repository does not exist, it will be created."
    restic init --verbose
    echo "Repository created."
fi

echo "Starting backup run..."
# Other --exclude flags omitted here for brevity.
restic \
    backup \
    --verbose \
    --exclude-caches \
    --one-file-system \
    --tag nas \
    --cleanup-cache \
    --exclude /mnt/cephfs/backups \
    /mnt/cephfs
status=$?
echo "Backup run completed."
[ $status -eq 0 ] && echo "Success." || exit $status
```
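In the meantime, if the answer turns out to be “more than 1GB,” tmpfs can at least be grown in place without a reboot (again assuming the mount point from the sketch above):

```bash
# Resize the live tmpfs; existing files on the mount are preserved
mount -o remount,size=2G /var/tmp/restic
```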