Technically it’s possible. However, it is not implemented so far. The only way to chunk something is currently to run the backup command. So all you can do is access your saved data via restore or mount and run a backup on it.
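Roughly, that workaround looks like this (repository paths, the mount point, and the backed-up path are placeholders for your own setup):

```
# In one terminal: mount the source repository; its snapshots appear
# below the mount point (restic mount keeps running until unmounted)
restic -r /srv/source-repo mount /mnt/restic

# In another terminal: back up the contents of a mounted snapshot into
# the target repository - this re-chunks everything with the target
# repo's chunking parameters
restic -r /srv/target-repo backup /mnt/restic/snapshots/latest/home/user
```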
Actually, IMO the best solution would be to implement two cases in the copy command:
- chunking parameters of source and target repo are identical => only copy missing blobs (see the sketch below for how to set this up for a new repo)
- chunking parameters of source and target do not match => re-chunk all files to copy
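As a side note, the first case can already be arranged when the target repository is created from scratch: restic init can copy the chunker parameters from an existing repository. A minimal sketch (paths are placeholders; on older restic versions the source repo is given with --repo2 instead of --from-repo):

```
# Initialize the target repo with the same chunker parameters as the
# source repo, so copy only needs to transfer missing blobs
restic -r /srv/target-repo init --from-repo /srv/source-repo --copy-chunker-params
```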
Moreover, there could be a check in the check --read-data command which verifies that the saved blobs are “valid chunks” with respect to the repo’s chunking parameters.
And - yes - there could also be an in-repo repair which re-chunks all files if the blobs are not “valid chunks”. This would, however, be a completely different algorithm compared to prune: prune works solely on the blob level, whereas this would need to work on the tree level - look for files to (possibly) re-chunk, do the re-chunking, and then save the modified tree.
Thank you very much for your reply. Re-chunking data during copy sounds like a good idea. I do not fully understand the implications, but it sounds like it would add a lot of complexity to the copy process.
I’m now following your suggestion to mount and back up the data again. This works for me with two caveats:
1. I now have an additional path prefix (mountpoint + snapshot prefix). Not beautiful but also not a problem for me at all.
2. When backing up snapshot after snapshot from the restic-mounted filesystem, I have to download data many times from the source repository. After three snapshots I tried to download the source repository as a whole and mount the local copy. This is faster by a factor of about 5 in my case.
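For reference, a rough sketch of that local-copy approach (the sync tool and paths are just examples - use whatever fits the backend the source repository lives on):

```
# Fetch the whole source repository once, e.g. with rclone
rclone sync remote:source-repo /srv/source-repo-local

# Mount the local copy instead of the remote repository...
restic -r /srv/source-repo-local mount /mnt/restic

# ...then run the backup commands against /mnt/restic as before
```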