Initial commit
15
Dockerfile
Normal file
@@ -0,0 +1,15 @@
FROM ubuntu:14.04.3

MAINTAINER technik@myfoodmap.de

RUN apt-get update && apt-get install -y \
    python-pip \
    xz-utils

RUN pip install awscli

ADD backup.sh /backup.sh
ADD restore.sh /restore.sh
ADD run.sh /run.sh
RUN chmod 755 /*.sh

CMD ["/run.sh"]
170
README.md
Normal file
@@ -0,0 +1,170 @@
docker-backup-gpg-s3
================

Compress a folder, encrypt it and store it on AWS S3.

Why should you encrypt your private files before uploading them to S3? Because Amazon is part of an international policy that treats everyone like a terrorist.


Quick Start
================

Step 1. Create an S3 bucket on AWS. Write down the AWS region that was used to create the bucket and don't lose it.
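
If you prefer the command line over the console, the bucket can also be created with the same awscli the image installs (a sketch; the bucket name and region are placeholders):

```bash
# create the bucket in the region you want to use for backups
aws s3 mb s3://myBackupBucket --region eu-central-1

# confirm it exists and look up its region later if you forget it
aws s3api get-bucket-location --bucket myBackupBucket
```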

Step 2. Create an AWS user in AWS IAM that is going to be used to back up a folder into the bucket you just created. Write down the ```Access Key ID``` and the ```Secret Access Key``` and don't lose them.
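
This step can also be done with the AWS CLI (a sketch; ```myBackupUser``` is a placeholder name). Note that ```create-access-key``` prints the Secret Access Key only once:

```bash
aws iam create-user --user-name myBackupUser
aws iam create-access-key --user-name myBackupUser
```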

Step 3. Create the following policy in AWS IAM

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1454689922000",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::myBackupBucket/*"
      ]
    }
  ]
}
```

and attach it to the user created in Step 2. Replace ```myBackupBucket``` with the name you gave the bucket in Step 1 and be careful to append ```/*``` to it.
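
Creating and attaching the policy can be scripted as well (a sketch; the policy name, user name, and account ID are placeholders, and ```backup-policy.json``` is assumed to contain the JSON above):

```bash
aws iam create-policy --policy-name myBackupWritePolicy \
  --policy-document file://backup-policy.json

aws iam attach-user-policy --user-name myBackupUser \
  --policy-arn arn:aws:iam::123456789012:policy/myBackupWritePolicy
```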

Step 4. Copy a public gpg key into a folder that can be mounted by the docker container later. It is going to be used to encrypt your backup. Write down the email address of the gpg key and don't lose it.

Build the image:

```bash
docker build -t backup-gpg-s3 .
```

Step 5. Run the container

```bash
docker run -d \
  --name my-backup \
  --restart=always \
  --volume /folder/to/backup:/backup/:ro \
  --volume /folder/to/backup/keys/:/keys/:ro \
  --env "CRON_INTERVAL=0 4 * * *" \
  --env "GPG_RECIPIENT=myBackup@myDomain.com" \
  --env "S3_BUCKET_NAME=myBackupBucket" \
  --env "AWS_ACCESS_KEY_ID=myAWSAccessKey" \
  --env "AWS_SECRET_ACCESS_KEY=myAWSSecretAccess" \
  --env "AWS_DEFAULT_REGION=eu-central-1" \
  backup-gpg-s3
```

This container is going to perform a backup every day at 4 am. You can define the backup schedule with ```CRON_INTERVAL```.
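
```CRON_INTERVAL``` uses standard five-field cron syntax (minute, hour, day of month, month, day of week). A few example values:

```
0 4 * * *     every day at 04:00
0 4 * * 0     every Sunday at 04:00
30 2 1 * *    on the first day of each month at 02:30
```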


Confirm that your backup container is set up properly
===========

Step 1. Check if cron is set up

```bash
docker exec my-backup crontab -l
```

It should show your environment variables and the cron interval, followed by ```/backup.sh```.
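
Based on how ```run.sh``` builds the crontab (the container's environment dumped first, then the schedule line), the output should look roughly like this, with the values you passed to ```docker run```:

```
GPG_RECIPIENT=myBackup@myDomain.com
S3_BUCKET_NAME=myBackupBucket
AWS_DEFAULT_REGION=eu-central-1
0 4 * * * /backup.sh
```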

Step 2. Check if your public gpg key was imported

```bash
docker exec my-backup gpg --list-keys
```

and confirm that the email address is the same as the one you assigned to ```GPG_RECIPIENT``` when starting the backup container.

Step 3. Initiate a backup manually.

```bash
docker exec my-backup bash /backup.sh
```

This can take a while if the folder being backed up is larger than 100 MB. After it's done, check that a file appeared in your AWS bucket.
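
The file name to look for follows the pattern ```backup.sh``` uses: the bucket name followed by an underscore-prefixed timestamp. A quick sketch with a made-up date:

```shell
# placeholders mirroring the variables backup.sh uses; the date is invented
S3_BUCKET_NAME=myBackupBucket
BACKUP_DATE=_2016-02-05_04-00
OBJECT="$S3_BUCKET_NAME$BACKUP_DATE.tar.xz.gpg"
echo "s3://$S3_BUCKET_NAME/$OBJECT"
```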


Prepare Emergency Restore
===========

Create another policy that is needed for restoring from a previously made backup:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1456142648000",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::myBackupBucket/*",
        "arn:aws:s3:::myBackupBucket"
      ]
    }
  ]
}
```


Backup Restore
===========

You should perform a backup restore before you actually need to restore from a backup, just to make sure that everything works the way it's supposed to.

Step 1. Attach the policy created in [Prepare Emergency Restore](#prepare-emergency-restore) to the user that is used for making backups. Now that user is able to restore from backups, too.

Step 2. Copy the private gpg key into a folder that can be mounted by the restore container later.

Step 3. Start the restore container

```bash
docker run -it --rm \
  --volume /path/to/restore/folder:/restore/:rw \
  --volume /path/to/backup/keys/:/keys/:ro \
  --env "GPG_RECIPIENT=myBackup@myDomain.com" \
  --env "S3_BUCKET_NAME=myBackupBucket" \
  --env "AWS_ACCESS_KEY_ID=myAWSAccessKey" \
  --env "AWS_SECRET_ACCESS_KEY=myAWSSecretAccess" \
  --env "AWS_DEFAULT_REGION=eu-central-1" \
  backup-gpg-s3 bash /restore.sh
```

You will be asked to enter the name of the backup. If your private gpg key has a password, you will be asked for it, too.


FAQs
===========

How do I generate a GPG key?
-----------

Create a key pair with ```gpg --gen-key```. See the next question for how to export it.

How do I export a GPG key from my key chain, so that it can be used in a container volume?
-----------

```bash
gpg --output ~/path/to/volume/myKey.gpg.pub --export myBackup@myDomain.com

gpg --output ~/path/to/volume/myKey.gpg --export-secret-keys myBackup@myDomain.com
```

What can I do if I generate a GPG key and it tells me I need more entropy?
-----------

On Fedora/RHEL/CentOS: ```sudo yum install rng-tools```

On Debian-based systems: ```sudo apt-get install rng-tools```

Then run ```sudo rngd -r /dev/urandom```

The backup container makes backups every day / every week, but it doesn't delete old backup files. How can I delete old backups?
-----------

You can define a lifecycle rule in the properties of your S3 bucket.
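
For example, a lifecycle configuration that expires every object in the bucket after 90 days could look like this (a sketch; the rule ID and retention period are arbitrary choices). It can be set in the S3 console or with ```aws s3api put-bucket-lifecycle-configuration```:

```json
{
  "Rules": [
    {
      "ID": "expire-old-backups",
      "Prefix": "",
      "Status": "Enabled",
      "Expiration": { "Days": 90 }
    }
  ]
}
```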
13
backup.sh
Normal file
@@ -0,0 +1,13 @@
#!/bin/bash

: ${BACKUP_DATE:=_$(date +"%Y-%m-%d_%H-%M")}

cd /backup
tar cJf ~/$S3_BUCKET_NAME$BACKUP_DATE.tar.xz ./*
cd /

gpg --trust-model always --output ~/$S3_BUCKET_NAME$BACKUP_DATE.tar.xz.gpg --encrypt --recipient $GPG_RECIPIENT ~/$S3_BUCKET_NAME$BACKUP_DATE.tar.xz
rm ~/$S3_BUCKET_NAME$BACKUP_DATE.tar.xz

aws s3 cp ~/$S3_BUCKET_NAME$BACKUP_DATE.tar.xz.gpg s3://$S3_BUCKET_NAME/$S3_BUCKET_NAME$BACKUP_DATE.tar.xz.gpg --storage-class STANDARD_IA
rm ~/$S3_BUCKET_NAME$BACKUP_DATE.tar.xz.gpg
20
restore.sh
Normal file
@@ -0,0 +1,20 @@
gpg --import /keys/*

aws s3 ls s3://$S3_BUCKET_NAME
echo "These are the files currently available in your backup bucket."
echo "Which file contains the backup you want to restore from?"
echo -n "File name: "
read RESTORE_FILE

cd /restore

aws s3 cp s3://$S3_BUCKET_NAME/$RESTORE_FILE .

gpg --output ./restore.tar.xz --decrypt $RESTORE_FILE

tar xf ./restore.tar.xz

rm restore.tar.xz
rm $RESTORE_FILE

exit
16
run.sh
Normal file
@@ -0,0 +1,16 @@
#!/bin/bash

gpg --import /keys/*

cron

# LS_COLORS is set to nothing and for some strange reason crontabs are not allowed to contain such env vars
unset LS_COLORS

# Create crontab file
env | cat - > /backup.cron
echo "$CRON_INTERVAL /backup.sh" >> /backup.cron

crontab /backup.cron

tail -f /dev/null