Suggestion for pack-size

Hi all,

I am new to restic and taking my first steps toward getting my (and, more importantly, my wife's) data stored in a safe place.

So far everything is working. I did some performance testing against Kopia and Borg, and restic was the clear winner.

I do not have terabytes of data, but I do have a Gigabit network connection to my NAS, which serves as the backup target.
Do I need to tune the pack-size parameter to a higher value, or should I leave it at the default of 16 MB?

Thanks in advance!

Hi @magnum and welcome to the restic community! :slight_smile:

You probably have found the documentation for this already, but in case you haven't, here is a link to the section where this very option is explained:

Is the documentation sufficient to answer your question, or are some things still unclear?

Can you tell us more about your estimated repository size? And the NAS: is there some special kind of storage involved, or is it simply hard disk(s)?

Many thanks from another German user for your warm welcome. I love restic already :slight_smile:

Yes, that is the documentation I am referring to.
It says that a larger pack size can also improve speed on a local repository.

In my current setup the data is stored on a regular hard disk, which is a little slower than the Gigabit network connection to my NAS. That's why I am asking.
However, I am fine with the overall performance. The initial backup took 2:10 hours for 315 GB of data.

So maybe I am worrying too much about the wrong things and should simply use it. :slight_smile:

The files are stored on a hard drive connected directly to the server (Proxmox, but that doesn't matter). The NAS is accessed via NFS.
Security is not a big concern, as it's only my and my wife's data, and the data does not leave my LAN.
Both devices are connected via a Gigabit switch, so the hard disks (not SSDs) might be the limiting factor.

If you have already done your initial backup, packs of the default size have already been created, and so far there is no dedicated process to change them. But yes, maybe in your setting (and most settings) pack sizes do not matter too much.

Anyway, I would choose the pack size depending on the repository size, with some special cases that benefit from special treatment. I have a similarly sized repo and use ~50 MB packs with good results.
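For reference, on current restic versions (0.14 and later) the target pack size can be set per invocation. A minimal sketch; the repository path, backup path, and the value of 50 are placeholders, not taken from this thread:

```shell
# The target pack size is given in MiB; RESTIC_PACK_SIZE is the environment
# equivalent of the global --pack-size flag (restic >= 0.14).
export RESTIC_PACK_SIZE=50

# The actual backup call would look like this (commented out here,
# since it needs a live repository and password):
# restic -r /mnt/nas/restic-repo backup --pack-size 50 /home/data

echo "target pack size: ${RESTIC_PACK_SIZE} MiB"
```

Note that this only affects newly written packs; existing packs keep their size.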


Hehe :slight_smile: Yeah, I would say you should start using restic, and if you see a bottleneck that can be tweaked by a restic option, go and set up a test repo (crucial: never test on your live backup repository) and play around with it.

But increasing the pack size is something to be considered - as the docs suggest.


Ah, and larger pack sizes are not only about performance; they of course also decrease the total number of pack files, which implies:

  • decreased memory usage for the index
  • less storage overhead if you have a backend with a fixed overhead per file stored
  • fewer problems with per-storage limits on the number of files

but also

  • a higher probability that packs need a repack during prune, or that they are not repacked but still occupy unnecessary space
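On newer restic versions the last point can be mitigated after raising the pack size: `prune` accepts a `--repack-small` flag that rewrites pack files well below the target size. A hedged sketch; the repository path and the value of 50 are placeholders:

```shell
# prune --repack-small (restic >= 0.14) repacks pack files that are
# significantly smaller than the configured target pack size.
REPO=/mnt/nas/restic-repo

# Commented out because it needs a live repository:
# restic -r "$REPO" prune --repack-small --pack-size 50

echo "would repack small packs in $REPO"
```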

Thanks to all for your quick responses.
It's not very common to get answers this fast :slight_smile:

Restic works perfectly for me so far, and in the meantime I was able to create a systemd timer that takes automatic snapshots every night.

Learning by doing :laughing:
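For anyone finding this later, such a nightly timer could look roughly like this. This is a sketch; the unit names, repository path, password file, and backup path are assumptions, not the actual setup from this thread:

```ini
# /etc/systemd/system/restic-backup.service  (hypothetical name)
[Unit]
Description=Nightly restic backup

[Service]
Type=oneshot
Environment=RESTIC_REPOSITORY=/mnt/nas/restic-repo
Environment=RESTIC_PASSWORD_FILE=/etc/restic/password
ExecStart=/usr/bin/restic backup /home/data

# /etc/systemd/system/restic-backup.timer  (hypothetical name)
[Unit]
Description=Run restic backup every night

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

The timer would then be enabled with `systemctl enable --now restic-backup.timer`.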

I’ve tested the script a couple of times, and the subsequent snapshots were created in seconds since nothing had changed. So you’re both right: the pack size doesn’t matter to me at the moment :slight_smile:
I could imagine it becoming a problem for remote storage if only a limited number of files/inodes is available.
Right now the stats report around 24,000 packs processed. That shouldn’t be a problem at all.
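That pack count is consistent with a back-of-envelope estimate (assuming the reported 315 GB and the 16 MiB default target pack size; the real count is somewhat higher because of tree packs and partially filled packs):

```shell
# Rough estimate of the number of data pack files in the repository:
# total data divided by the target pack size.
data_gib=315   # reported repository data, treated as GiB for simplicity
pack_mib=16    # restic's default target pack size
packs=$(( data_gib * 1024 / pack_mib ))
echo "expected data packs: ~${packs}"   # ~20160, same ballpark as the ~24,000 reported
```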


Welcome to the forum :slight_smile:

One thought: do you have an offsite backup? Backing up to a NAS is good, but if your house burns down with the NAS in it, you’ll lose all your data if there is no offsite copy. I back up to

  • An internal disk
  • A disk in my detached garage
  • A disk at a friend’s place
  • AWS S3 (B2 is likely easier and cheaper for most people)

Thanks for your thoughts
I am not so paranoid as to keep my data in four different places :slight_smile:

However, besides the local backup, I run the same job against a Hetzner Storage Box offsite.