Commit 002f4cc

Add ability to upload mongo dump directly to S3 without saving to local disk.
1 parent d579766 commit 002f4cc

File tree

3 files changed: +69 −30 lines

README.md  (+42 −20)

@@ -7,26 +7,48 @@ Docker image with `mongodump`, `cron` and AWS CLI to upload backups to AWS S3.
 
 | Env var | Description | Default |
 |-----------------------|-------------|-------------------------|
-| MONGO_URI | Mongo URI. | `mongodb://mongo:27017` |
-| CRON_SCHEDULE | Cron schedule. Leave empty to disable cron job. | `''` |
-| TARGET_S3_FOLDER | Folder to upload backups. Leave it empty to disable upload to S3. **If enabled, backups are deleted from the local folder after a successful upload.** | `''` |
-| AWS_ACCESS_KEY_ID | AWS Access Key ID. Leave empty if you want to use AWS IAM Role instead. | `''` |
-| AWS_SECRET_ACCESS_KEY | AWS Access Key ID. Leave empty if you want to use AWS IAM Role instead. | `''` |
-
-### Example
-
-Run container with cron job (once a day at 1am), upload backups to AWS S3 folder:
-
-    docker run -d \
-        -v /path/to/target/folder:/backup \
-        -e 'MONGO_URI=mongodb://mongo:27017' \
-        -e 'CRON_SCHEDULE=0 1 * * *' \
-        -e 'TARGET_S3_FOLDER=s3://my_bucket/backup/' \
-        -e 'AWS_ACCESS_KEY_ID=my_aws_key' \
-        -e 'AWS_SECRET_ACCESS_KEY=my_aws_secret' \
-        istepanov/mongodump:4.2
-
-Docker Compose example (no S3 upload, keep backups in `mongo-backup` Docker volume):
+| `MONGO_URI` | Mongo URI. | `mongodb://mongo:27017` |
+| `CRON_SCHEDULE` | Cron schedule. Leave empty to disable cron job. | `''` |
+| `TARGET_FOLDER` | Local folder (inside the container) to save backups. Mount a volume to this folder. Set it to null (empty string) to disable local backups (this makes `TARGET_S3_FOLDER` a required parameter). | `'/backup'` |
+| `TARGET_S3_FOLDER` | Folder to upload backups. Leave it empty to disable upload to S3. | `''` |
+| `AWS_ACCESS_KEY_ID` | AWS Access Key ID. Leave empty if you want to use an AWS IAM Role instead. | `''` |
+| `AWS_SECRET_ACCESS_KEY` | AWS Secret Access Key. Leave empty if you want to use an AWS IAM Role instead. | `''` |
+
+### Examples
+
+Run container with cron job (once a day at 1am), save backups to `/path/to/target/folder`, upload backups to an AWS S3 folder:
+
+    docker run -d \
+        -v /path/to/target/folder:/backup \
+        -e 'MONGO_URI=mongodb://mongo:27017' \
+        -e 'CRON_SCHEDULE=0 1 * * *' \
+        -e 'TARGET_S3_FOLDER=s3://my_bucket/backup/' \
+        -e 'AWS_ACCESS_KEY_ID=my_aws_key' \
+        -e 'AWS_SECRET_ACCESS_KEY=my_aws_secret' \
+        istepanov/mongodump:4.2
+
+Same, but runs once, no cron job:
+
+    docker run -ti \
+        -v /path/to/target/folder:/backup \
+        -e 'MONGO_URI=mongodb://mongo:27017' \
+        -e 'TARGET_S3_FOLDER=s3://my_bucket/backup/' \
+        -e 'AWS_ACCESS_KEY_ID=my_aws_key' \
+        -e 'AWS_SECRET_ACCESS_KEY=my_aws_secret' \
+        istepanov/mongodump:4.2
+
+Run container with cron job (once a day at 1am), upload backups to an AWS S3 folder, do not create local backups:
+
+    docker run -d \
+        -e 'MONGO_URI=mongodb://mongo:27017' \
+        -e 'CRON_SCHEDULE=0 1 * * *' \
+        -e 'TARGET_FOLDER=' \
+        -e 'TARGET_S3_FOLDER=s3://my_bucket/backup/' \
+        -e 'AWS_ACCESS_KEY_ID=my_aws_key' \
+        -e 'AWS_SECRET_ACCESS_KEY=my_aws_secret' \
+        istepanov/mongodump:4.2
+
+Docker Compose example - run container with cron job (once a day at 1am), save backups to the `mongo-backup` volume:
 
     version: '3'
 

backup.sh  (+23 −10)

@@ -1,21 +1,34 @@
 #!/bin/bash
 
-set -e
+set -eo pipefail
 
 echo "Job started: $(date)"
 
 DATE=$(date +%Y%m%d_%H%M%S)
-FILE="/backup/backup-$DATE.tar.gz"
 
-mkdir -p dump
-mongodump --uri "$MONGO_URI"
-tar -zcvf "$FILE" dump/
-rm -rf dump/
+if [[ -z "$TARGET_FOLDER" ]]; then
+    # dump directly to AWS S3
 
-if [[ "$TARGET_S3_FOLDER" ]]; then
-    aws s3 cp "$FILE" "$TARGET_S3_FOLDER"
-    echo "$FILE uploaded to $TARGET_S3_FOLDER"
-    rm -rf "$FILE"
+    if [[ -z "$TARGET_S3_FOLDER" ]]; then
+        >&2 echo "If TARGET_FOLDER is null/unset, TARGET_S3_FOLDER must be set"
+        exit 1
+    fi
+
+    mongodump --uri "$MONGO_URI" --gzip --archive | aws s3 cp - "$TARGET_S3_FOLDER/backup-$DATE.tar.gz"
+    echo "Mongo dump uploaded to $TARGET_S3_FOLDER"
+else
+    # save dump locally (and optionally to AWS S3)
+
+    FILE="$TARGET_FOLDER/backup-$DATE.tar.gz"
+
+    mkdir -p "$TARGET_FOLDER"
+    mongodump --uri "$MONGO_URI" --gzip --archive="$FILE"
+    echo "Mongo dump saved to $FILE"
+
+    if [[ -n "$TARGET_S3_FOLDER" ]]; then
+        aws s3 cp "$FILE" "$TARGET_S3_FOLDER"
+        echo "$FILE uploaded to $TARGET_S3_FOLDER"
+    fi
 fi
 
 echo "Job finished: $(date)"
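The switch from `set -e` to `set -eo pipefail` is load-bearing for the new streaming path: a pipeline's exit status is normally that of its last command, so a failing `mongodump` piped into a succeeding `aws s3 cp` would let the job report success. A minimal sketch of the behavior, using `false` as a stand-in for a failing producer and `cat` for a succeeding consumer (not the real tools):

```shell
#!/bin/bash
# Without pipefail, the pipeline's status is that of the LAST command,
# so the producer's failure is hidden.
set +e   # probe exit statuses without aborting the demo

false | cat
echo "without pipefail: exit=$?"   # exit=0 -- failure hidden

set -o pipefail
false | cat
echo "with pipefail:    exit=$?"   # exit=1 -- failure propagates
```

With `set -eo pipefail`, the real script aborts (and cron logs a failed job) the moment `mongodump` exits non-zero, instead of uploading a truncated archive and reporting success.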

entrypoint.sh  (+4 −0)

@@ -3,6 +3,7 @@
 set -e
 
 export MONGO_URI=${MONGO_URI:-mongodb://mongo:27017}
+export TARGET_FOLDER=${TARGET_FOLDER-/backup}    # can be set to null
 
 # Optional env vars:
 # - CRON_SCHEDULE

@@ -16,6 +17,9 @@ if [[ "$CRON_SCHEDULE" ]]; then
         mkfifo "$LOGFIFO"
     fi
     CRON_ENV="MONGO_URI='$MONGO_URI'"
+    if [[ "$TARGET_FOLDER" ]]; then
+        CRON_ENV="$CRON_ENV\nTARGET_FOLDER='$TARGET_FOLDER'"
+    fi
     if [[ "$TARGET_S3_FOLDER" ]]; then
         CRON_ENV="$CRON_ENV\nTARGET_S3_FOLDER='$TARGET_S3_FOLDER'"
     fi
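The `${TARGET_FOLDER-/backup}` expansion (note `-`, not `:-`) is what makes "set to null" possible: `-` substitutes the default only when the variable is *unset*, while `:-` would also substitute when it is set but empty, so `-e 'TARGET_FOLDER='` could never disable local backups. A quick sketch of the distinction (variable names taken from this commit):

```shell
#!/bin/bash
# ${VAR-default}  -> default only if VAR is unset
# ${VAR:-default} -> default if VAR is unset OR empty

unset TARGET_FOLDER
echo "unset, -:  '${TARGET_FOLDER-/backup}'"    # -> '/backup'

TARGET_FOLDER=""   # e.g. docker run -e 'TARGET_FOLDER='
echo "empty, -:  '${TARGET_FOLDER-/backup}'"    # -> ''  (empty kept: local backups disabled)
echo "empty, :-: '${TARGET_FOLDER:-/backup}'"   # -> '/backup' (':-' would clobber the empty value)
```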
