I’m thinking about automating some application snapshotting, and it would be really convenient if I could create snapshots structured as a directory of files without writing copies of the content to disk first.
How complex would it be to make something like this work:
tar cvf - * | restic backup --stdin --tar
What I have in mind specifically is building a script that would run mysqldump on one table at a time and generate a tar stream whose entries look like `${DATABASE}/${TABLE}.sql`. This would make it really quick for me to restore individual tables from any snapshot, and with `--stdin` I wouldn't have to worry about having enough disk space and IO available outside the restic repo to dump all databases at the same time first.
This would also offer a really nice path for migrating existing tar backups to restic snapshots.
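For illustration, here's a minimal sketch of the producer side of that idea in Python, using the standard `tarfile` module to build exactly that `${DATABASE}/${TABLE}.sql` stream in memory. The fake dump bytes stand in for real `mysqldump` output, and `restic backup --stdin --tar` itself remains a hypothetical flag, so this only demonstrates that the tar stream can be generated without writing copies to disk first:

```python
import io
import tarfile

def tar_stream_of_dumps(dumps, out):
    """Write a tar stream with one member per (database, table, sql_bytes) tuple.

    In a real script, sql_bytes would come from running `mysqldump` per table.
    Mode "w|" writes the archive sequentially, so `out` could be a pipe to a
    consumer such as the proposed `restic backup --stdin --tar`.
    """
    with tarfile.open(fileobj=out, mode="w|") as tar:
        for database, table, sql in dumps:
            info = tarfile.TarInfo(name=f"{database}/{table}.sql")
            info.size = len(sql)
            tar.addfile(info, io.BytesIO(sql))

# Demo with two fake dumps standing in for mysqldump output.
buf = io.BytesIO()
tar_stream_of_dumps(
    [("shop", "orders", b"-- dump of orders\n"),
     ("shop", "users", b"-- dump of users\n")],
    buf,
)
buf.seek(0)
names = tarfile.open(fileobj=buf).getnames()
print(names)  # ['shop/orders.sql', 'shop/users.sql']
```

Because each member carries its own size header, the stream never needs the full dump of all databases on disk at once, which is the disk-space and IO saving described above.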
Edit: Finally found a few past references to this idea:
Instead of implementing a compression directly into restic, why not simply support a tar output stream, so it can be piped to other compressors on the shell?
(opened 30 Jul 2018 08:40 PM UTC · closed 31 Jul 2018 08:49 PM UTC · type: feature suggestion)
# Output of `restic version`
`restic 0.9.1 compiled with go1.10.2 on linux/amd64`
# What should restic do differently? Which functionality do you think we should add?
I'm using different storage locations for backups: an internal repository for daily snapshots and an external disk for less frequent snapshots. It would be helpful to transfer a snapshot from the internal repo to the external one. So I could run `restic -r /mnt/backup export | restic -r /mnt/archive import` from time to time and save snapshots in different storages. I think it would be useful to allow options to filter the snapshot, i.e. `--exclude …`
I'm not sure if both commands should be combined like `restic transfer /mnt/backup /mnt/archive`, but split commands would allow a transfer over ssh, and maybe you could add a format option to select the *tar* format. This would ease the first import of backups from foreign software, like `restic import --fmt tar /mnt/old/backup.tar`. The same goes for an export as tar.
(opened 09 Feb 2017 03:33 AM UTC · closed 28 Aug 2018 06:36 PM UTC)
As discussed in #781, please add remote server mode with an option for add-only access.
This mode of operation assumes the inverse of the usual [threat model](https://github.com/restic/restic/blob/master/doc/Design.md#threat-model):
- The host system where a backup is created is **not** trusted.
- The remote location where backups are stored is trusted.
Therefore, the host system (the source of backups) shall only be allowed to add files to the repository, never to delete or change them. The host system may not choose its own encryption keys.
For better host separation while retaining the best de-duplication, it is desirable that the host system is only allowed to read the portion of the repository essential to the job at hand. (In no case shall the host system be allowed to read files belonging to other hosts.)
A server is supposed to be configured in `.ssh/authorized_keys` with a forced `command` not unlike [it is done with Borg](https://borgbackup.readthedocs.io/en/stable/usage.html#append-only-mode):
command="restic serve ..." ssh-rsa <untrusted client key>
Contrary to the common threat model, encryption is not required in this mode of operation.
Other than re-implementing our own protocol, it may also be sufficient to:
- Read source files for backups from tar archives. [Go has means to read them.](https://golang.org/pkg/archive/tar/) In this case backups could proceed as follows:
tar c /path | restic backup --from-tar
Or, in case of network backup:
sudo tar c /home | ssh backup.example.com restic backup --from-tar --hostname=$HOSTNAME
In this case we're trading bandwidth for security, which is an obvious drawback.
- Implement the rsync protocol and read source files from an rsync client. There is librsync and there could be bindings for it, or even a number of reimplementations. [One that I found.](https://github.com/smtc/rsync) Yet the whole rsync protocol isn't just librsync, so it may take more effort than just implementing our own protocol.
A backup proceeds as follows:
rsync -a -e "ssh backup.example.com restic --hostname=$HOSTNAME" /path/to/backup
(Note that rsync will append its own command, calling `restic rsync ...`)
This option has the major, if not greatest, benefit of not requiring restic to be installed on each and every target server. One doesn't have to hack on some vintage CentOS servers to get restic version updates.
Either way, we must add an explicit option for a hostname, because restic cannot otherwise know it.
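The consumer side of the `--from-tar` idea from the issue above can be sketched as well. This is a rough illustration in Python rather than the Go `archive/tar` package the issue mentions, and `restic backup --from-tar` is only a proposal; the point is that a tar stream can be consumed strictly sequentially, so it works on a pipe or an ssh channel:

```python
import io
import tarfile

def iter_tar_members(stream):
    """Consume a tar archive strictly sequentially (mode 'r|'), the way the
    proposed `restic backup --from-tar` would have to read stdin: no seeking,
    one member at a time, yielding (name, content) for each regular file."""
    with tarfile.open(fileobj=stream, mode="r|") as tar:
        for member in tar:
            if member.isfile():
                yield member.name, tar.extractfile(member).read()

# Demo: build a tiny archive in memory, then consume it like a pipe.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w|") as tar:
    data = b"hello\n"
    info = tarfile.TarInfo(name="home/user/note.txt")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
buf.seek(0)

members = list(iter_tar_members(buf))
print(members)  # [('home/user/note.txt', b'hello\n')]
```

Since nothing in the loop seeks backwards, the same logic would apply unchanged to `tar c /path | ... --from-tar` over a network connection, which is where the bandwidth-for-security trade-off mentioned above comes from.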
fd0 (September 30, 2018, 9:43am):
This is a good idea and has indeed been proposed several times already, here’s another one:
Is there any discussion about a way to restore files to a single .zip or .tar.gz file?
Am willing to write code.
I could envision a couple ways of this happening:
Direct support implemented into restic’s CLI, with a flag such as --archive zip or --archive tgz or something.
The restore functions exposed as a library so that an io.Reader can be returned such that I can pipe the contents of the restore into a zipper or targz-er function. Ideally avoiding writing a whole archive t…
I think we need to decouple this from backup/restore and make two new commands: `import` and `export`, which import from and export to e.g. tar files.
The pieces are in place so far, and @matt offered to do the export part, so it "just" needs to be done
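As a rough sketch of the "targz-er" half of the quoted request: if the restore side handed back content as a stream of path/bytes pairs (restic exposes no such library API today, so the dict input here is purely illustrative), packaging it into a `.tar.gz` without touching the filesystem is straightforward:

```python
import io
import tarfile

def archive_restore(files, out):
    """Stream restored content (here an in-memory dict of path -> bytes,
    standing in for a hypothetical restore API's output) into a .tar.gz
    written to `out`; mode 'w|gz' compresses on the fly, so no intermediate
    archive file is ever written to disk."""
    with tarfile.open(fileobj=out, mode="w|gz") as tar:
        for path, data in files.items():
            info = tarfile.TarInfo(name=path)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

# Demo: archive one fake restored file and read the result back.
buf = io.BytesIO()
archive_restore({"etc/hosts": b"127.0.0.1 localhost\n"}, buf)
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    restored_names = tar.getnames()
print(restored_names)  # ['etc/hosts']
```

This corresponds to the second option in the quote (restore exposed as a library returning a reader), with the archive format chosen by the caller rather than a CLI flag.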
Ah, just a Small Matter of Programming then!
I take it the refactor previously mentioned as making this easier to implement has already happened?
fd0 (October 2, 2018, 12:41pm):
It has, but at the moment we have several big things to work on, so it's not something I would start right now. So don't get your hopes up that this will be added near-term.
Is this feature request tracked on GitHub?