Backend for the InterPlanetary File System (IPFS)

Hello everybody,

According to Wikipedia:

IPFS is a protocol and network designed to create a content-addressable, peer-to-peer method of storing and sharing hypermedia in a distributed file system.

It’s basically an immutable filesystem based on a Merkle-tree-like DAG.
After thinking about it for a while, IPFS seems like a perfect candidate for a Restic backup target. Both IPFS and the Restic repository format use content addressing and hashes to build a DAG.

Both Restic and the IPFS reference implementation are written in Go.
IPFS provides a Go library, so implementing a Restic backend could be pretty straightforward.
Or am I missing any major obstacles?
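To make this more concrete, here is a minimal, hypothetical sketch of how blob reads and writes could be mapped onto IPFS content addressing using the go-ipfs-api client. The ipfsBackend type and its Save/Load methods are just my own simplified illustration, not restic’s actual backend interface:

```go
package main

import (
	"bytes"
	"fmt"
	"io"

	shell "github.com/ipfs/go-ipfs-api"
)

// ipfsBackend is a hypothetical, heavily simplified stand-in for a restic
// storage backend. restic's real backend interface has more methods
// (List, Remove, Stat, ...) and different signatures.
type ipfsBackend struct {
	sh *shell.Shell // client for the local IPFS daemon's HTTP API
}

// Save adds a blob to IPFS and returns the resulting content hash (CID).
func (b *ipfsBackend) Save(data []byte) (string, error) {
	return b.sh.Add(bytes.NewReader(data))
}

// Load streams a blob back from IPFS by its content hash.
func (b *ipfsBackend) Load(cid string) (io.ReadCloser, error) {
	return b.sh.Cat("/ipfs/" + cid)
}

func main() {
	b := &ipfsBackend{sh: shell.NewShell("localhost:5001")}

	cid, err := b.Save([]byte("example pack file contents"))
	if err != nil {
		panic(err)
	}
	fmt.Println("stored as", cid)

	r, err := b.Load(cid)
	if err != nil {
		panic(err)
	}
	defer r.Close()
	io.Copy(io.Discard, r) // read the blob back
}
```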

Have there been any efforts to back up to IPFS via Restic?

If rclone got support for IPFS, this would be possible (since restic supports storage via rclone).

Interesting. But I would prefer a dedicated restic backend:

There is one problem with the “immutability” of IPFS:
Every time restic makes a change to an IPFS directory (such as adding a new snapshot), the content of the repo and therefore the hash of this IPFS directory will change.
As a consequence, the restic repo URL would need to change as well.
To avoid adjusting the restic repo URL after each modification, IPFS provides a naming system called IPNS.

This PKI-based system can reference changing hashes by a fixed hash. This fixed IPNS hash can be used as the restic repository URL.

However, updating this IPNS name has to be done by restic when closing the backend.
I don’t think that this can be properly handled via the rclone backend.
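For illustration, here is a rough, hypothetical sketch of resolving the IPNS name when opening the repository and publishing the new root when closing it, assuming the go-ipfs-api client (openRepo, closeRepo and the example CIDs are my own names; the exact Publish arguments may differ between library versions):

```go
package main

import (
	"fmt"

	shell "github.com/ipfs/go-ipfs-api"
)

// openRepo resolves the fixed IPNS name to the current repository root,
// e.g. "/ipns/<peer-id>" -> "/ipfs/<current-root-cid>".
func openRepo(sh *shell.Shell, ipnsName string) (string, error) {
	return sh.Resolve(ipnsName)
}

// closeRepo publishes the new root hash under the node's IPNS key once
// restic has finished writing, so the repository URL can stay fixed.
// (Exact Publish semantics may vary between go-ipfs-api versions.)
func closeRepo(sh *shell.Shell, newRootCID string) error {
	return sh.Publish("", "/ipfs/"+newRootCID)
}

func main() {
	sh := shell.NewShell("localhost:5001")

	// Hypothetical IPNS name; in practice this would be the repository URL.
	root, err := openRepo(sh, "/ipns/QmExamplePeerID")
	if err != nil {
		panic(err)
	}
	fmt.Println("current repo root:", root)

	// ... restic writes new snapshot/pack files, producing a new root ...

	// Hypothetical new root CID produced by the writes above.
	if err := closeRepo(sh, "QmExampleNewRootCID"); err != nil {
		panic(err)
	}
}
```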

Can’t think of any right now, but maybe ask on IRC?!
I think before adding more backends, the plan is to move forward with the general core of restic - like the new archiver code that was added fairly recently.
But in the end it’s up to the contributors if they want to work on a new backend.

While it’s a nice idea to add a backend for IPFS, I’m skeptical that it’s the right choice for restic. Besides, if it is added to rclone, I don’t see much value in duplicating the work in restic.

Correct, we have plenty of other stuff to do: improving restore, optimizing prune, adding compression, …

While that’s true, it is a good idea to discuss an idea for a feature before spending serious time on it. It would be a shame to work on code which won’t be merged in the end :slight_smile:

Sorry for warming up this older topic, but I think this is still a very interesting combination.

@fd0

IPFS itself isn’t THAT interesting to support, since the data would stay on the current machine, and you would have to manually move the data to other machines by pinning it on them.

Much more interesting would be support for the IPFS-Cluster system, where you can specify how many copies of each pinned file or directory should be stored in the cluster.

This way you can spread backups on N machines without any manual work.

Additionally, in the long run Filecoin could be an interesting solution if you don’t want to set up N machines to hold your backups, but would rather just pay someone to host your data.

Since both work with IPFS in the background, you get the same content-addressed storage in both cases.
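Just to sketch what that could look like from a backend’s point of view (hypothetical code, assuming the go-ipfs-api client; the replication settings themselves live in the IPFS-Cluster configuration, not in this code): every stored object gets pinned, and if the pin request goes through an IPFS-Cluster node’s IPFS proxy, the cluster replicates it to the configured number of peers.

```go
package main

import (
	"bytes"
	"fmt"

	shell "github.com/ipfs/go-ipfs-api"
)

func main() {
	// Talk to the IPFS HTTP API. If this endpoint is the IPFS proxy of an
	// IPFS-Cluster node (which listens on its own port), pins become cluster
	// pins and are replicated according to the cluster's replication-factor
	// settings; against a plain IPFS daemon, the pin stays on the local
	// machine.
	sh := shell.NewShell("localhost:5001")

	cid, err := sh.Add(bytes.NewReader([]byte("backup pack file")))
	if err != nil {
		panic(err)
	}

	// Pin the object so it isn't garbage collected; in a cluster setup this
	// is where the data gets spread over N machines without manual work.
	if err := sh.Pin(cid); err != nil {
		panic(err)
	}
	fmt.Println("pinned", cid)
}
```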

There is preliminary work done in rclone to support IPFS: https://github.com/rclone/rclone/issues/128

As @fd0 said, it probably won’t happen in the foreseeable future that we implement it directly in restic, especially since it might come to rclone.

The work on rclone support for IPFS has been abandoned. Maybe someone will pick it up.