Question about backup with Amazon S3 without running restic as root

Hi, I would like to back up to an Amazon S3 bucket and I am working with CentOS 7. I have created an S3 bucket and I can back up a folder when I run restic as user A, where user A is the owner of that folder.
I followed the guide here: Examples — restic 0.14.0 documentation, created a restic user, and then tried to initialize the restic repository, but I get an error. The steps are as follows:
Case 1:
As root user, I run the following:
$ unset HISTFILE
$ export RESTIC_REPOSITORY="s3:https://s3.amazonaws.com/bucket_name"
$ export AWS_ACCESS_KEY_ID="my_AWS_ACCESS_KEY_ID"
$ export AWS_SECRET_ACCESS_KEY="my_AWS_SECRET_ACCESS_KEY"
$ export RESTIC_PASSWORD="my_RESTIC_PASSWORD"
Then, I run the following:
$ sudo -u restic /home/restic/bin/restic init
And then I received the message:
Fatal: Please specify repository location (-r or --repository-file)

I also tried to run it in another way.

Case 2:
As root user, I run the following:
$ unset HISTFILE
$ export RESTIC_REPOSITORY="s3:https://s3.amazonaws.com/bucket_name"
$ export AWS_ACCESS_KEY_ID="my_AWS_ACCESS_KEY_ID"
$ export AWS_SECRET_ACCESS_KEY="my_AWS_SECRET_ACCESS_KEY"
$ export RESTIC_PASSWORD="my_RESTIC_PASSWORD"
Then, I run the following:
$ sudo -u restic /home/restic/bin/restic -r s3:s3.amazonaws.com/bucket_name init
And then I received the message:
Fatal: create key in repository at s3:s3.amazonaws.com/bucket_name failed: client.PutObject: Access Denied

May I know how to run the backup as the restic user and back up the files to the Amazon S3 bucket?

Thanks,
Joe

For case #1 you’ll need to use the -E flag with sudo, which preserves environment variables.
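For example, with the variables exported as in your Case 1 in the same shell, something like this should then pass RESTIC_REPOSITORY and the AWS credentials through to the restic user (just a sketch, using the binary path from your post):

$ sudo -E -u restic /home/restic/bin/restic init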

Thank you. It works fine now.
I have one more question about the environment variables.
Suppose I would like to back up multiple folders to the Amazon S3 bucket (folder_A to bucket/backup_A, folder_B to bucket/backup_B, folder_C to bucket/backup_C, …) every day at 2am, and I set up multiple shell scripts as follows:

Shell script for backing up folder A:
unset HISTFILE
export AWS_ACCESS_KEY_ID="AWS_ACCESS_KEY_ID"
export RESTIC_REPOSITORY="s3:https://s3.amazonaws.com/bucket/backup_A"
export AWS_SECRET_ACCESS_KEY="AWS_SECRET_ACCESS_KEY"
export RESTIC_PASSWORD="RESTIC_PASSWORD"
sudo -E -u restic /home/restic/bin/restic backup /path/to/the/folder
sudo -E -u restic /home/restic/bin/restic check >> output.txt

For the shell script for backing up folder B, I change RESTIC_REPOSITORY to "s3:https://s3.amazonaws.com/bucket/backup_B".

Then, I set up the cron jobs as follows:
0 2 * * * /usr/bin/bash /path/to/shell/script/A
0 2 * * * /usr/bin/bash /path/to/shell/script/B

In this case, as multiple shell scripts run at nearly the same time, will it cause any issue in the backup process? (For example, backing up folder A to bucket/backup_C because the environment variable RESTIC_REPOSITORY has been changed to "s3:https://s3.amazonaws.com/bucket/backup_C".)
Can I assume that environment variables with the same name will not overwrite each other across the shell scripts? Or is it better to do the backups one by one, with a single shell script running the individual scripts in order?

Thanks,
Joe

Both scripts run in separate processes and therefore won’t interfere with each other’s environment variables.

Running multiple backups in parallel is possible, but running check multiple times in parallel is not. Running check after every backup is also very likely overkill: it will mostly just generate traffic costs, as check doesn’t use the local cache and has to download all the data it needs from the repository.
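If you want to be on the safe side, one option is to have cron call a single wrapper script that runs the per-folder scripts one after another. A minimal sketch (the wrapper name run_all_backups.sh is just a placeholder; the script paths are the ones from your post):

#!/bin/bash
# Run the per-folder backup scripts sequentially instead of in parallel.
/usr/bin/bash /path/to/shell/script/A
/usr/bin/bash /path/to/shell/script/B

and then a single crontab entry:

0 2 * * * /usr/bin/bash /path/to/shell/script/run_all_backups.sh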

Why do you run restic using sudo?
As user A is the owner of folder A (and so on), I see no reason to run restic as root.
Just export all the environment variables and run restic as a normal user.

However, one thing to keep in mind is: If all users know the credentials for the same repository, they can all read the data of all other users. (Which might not be an issue in your case…)
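For example, user A could run something like this directly (just a sketch; it assumes the restic binary is somewhere user A can execute it, and the paths are the placeholders from your scripts):

unset HISTFILE
export RESTIC_REPOSITORY="s3:https://s3.amazonaws.com/bucket/backup_A"
export AWS_ACCESS_KEY_ID="AWS_ACCESS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="AWS_SECRET_ACCESS_KEY"
export RESTIC_PASSWORD="RESTIC_PASSWORD"
restic backup /path/to/the/folder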