Fragmented storage?

Hi there!

Why I ask.

I’m new to restic; maybe there is already a way to solve my “problem”, or if not, there is probably a “best way” to implement one. Please accept my apologies if I failed to read the docs thoroughly.

Idea.

I have lots of old media. To be exact, piles. These are usually hard disks with less than 4 TB of storage space. I once intended to use MHDDFS to pool them all together as cheap local storage, but MHDDFS never became stable, and it doesn’t make much sense to keep ~20 HDDs running on standby all the time.

Is it possible to store restic backups with a given granularity, let’s say portions of 1 TB, so that each disk gets filled with little space left unused? Most of the media should be considered offline, to be brought online on special demand only.

Hi @Yanestra and welcome to the restic community! :slight_smile:

I am not sure I understand your question, so let me try to figure it out.
Are you thinking of striping your restic repository across multiple hard drives?

This is not a use case natively supported by restic. My understanding is that restic might need to access any file in the repository at any time. prune and check definitely need to look at everything.

As a side note, anything involving multiple drives needs redundancy across those drives. As you add more and more drives, the probability that at least one drive will fail in a given timeframe approaches 1.
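To make that concrete with a quick back-of-the-envelope calculation (the 3% annual failure rate here is purely illustrative, not a measured figure for any real drive):

```python
# Probability that at least one of n drives fails, assuming each drive
# fails independently with probability p in the period of interest.
# p = 0.03 (3% per year) is an illustrative number, not a datasheet value.
p = 0.03

for n in (1, 5, 10, 20):
    p_any_failure = 1 - (1 - p) ** n
    print(f"{n:2d} drives: P(at least one failure) = {p_any_failure:.1%}")
```

With 20 independent drives at 3% each, the chance of at least one failure per year is already close to one in two, which is why redundancy matters so much at that scale.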

btrfs raid1 could be an option. It will allow you to start off with only 2 or 3 drives and add more drives as you need space. However, all drives that are part of the btrfs filesystem must be available to the system in order to mount the filesystem.
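As a rough sketch of what that could look like (device names and the mount point are placeholders; double-check the btrfs documentation before running anything like this on real disks):

```shell
# Create a btrfs filesystem with raid1 for both data (-d) and
# metadata (-m) across two drives.
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/restic-repo

# Later, when you need more space: add a third drive and rebalance
# so existing data is redistributed across all devices.
btrfs device add /dev/sdd /mnt/restic-repo
btrfs balance start /mnt/restic-repo
```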

Yes,…
BUT that does not necessarily mean I’d expect restic to handle the distribution across the media itself. My question is whether it can handle some degree of granularity, like portions of 1 TB, which might be absent (offline) or present (online)?

No, it can’t. One of restic’s core assumptions is that all data in the repository is accessible on demand. Sorry about that :wink:

Does restic actually make assumptions about the backend filesystem (which, I assume, might limit individual file containers to a certain length, usually 2 GB or 4 GB)? When restic accesses those files, does it only read headers, or is it random access by principle?

Restic assumes that it can request parts of files stored in the backend near-instantaneously; it does not optimize at all for slow backends (e.g. multi-second response latency). It makes no assumptions about file size, and it tries to split files so that they stay rather small (below 20 MiB), so backends with such limits should work. I haven’t tried them, though :slight_smile:


Check out https://www.snapraid.it/, maybe it will help.