AWS S3 CLI docker for the impatient (also wasabi)

What's the problem with the AWS CLI guide (it has good docs, anyway)? Well, nothing really complex, unless you want to use it ASAP. This is more of a basic setup and quick run using Docker.

0. Things needed: Docker and AWS S3 creds (access key ID / secret access key)

There's an install guide for each Linux distro, plus post-installation steps to run Docker as a non-root user; the short version of the non-root part is sketched below.
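
Assuming a typical distro, the non-root setup boils down to adding your user to the docker group:

# create the docker group (it usually exists already) and add your user to it
sudo groupadd docker
sudo usermod -aG docker $USER
# log out and back in (or run `newgrp docker`) for the change to take effect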

1. Pull Docker image

docker image pull amazon/aws-cli
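
The image's entrypoint is the aws binary itself, so a quick way to confirm the pull worked is asking for the version:

docker container run --rm amazon/aws-cli --version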

2. Make credentials

Create the ~/.aws dir if it doesn't exist, then inside it create two files, credentials and config:

# ~/.aws/credentials
[default]
aws_access_key_id=fffff3333ff55555GGGG
aws_secret_access_key=kkkkkkkkkkkkkkkeeeeeeeeeeeeeyyyyyyyyyyyy
# ~/.aws/config
[default]
region=us-west-2
output=json
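
If you'd rather script it than open an editor, a quick sketch that writes both files (same placeholder keys as above, swap in your own):

mkdir -p ~/.aws
cat > ~/.aws/credentials <<'EOF'
[default]
aws_access_key_id=fffff3333ff55555GGGG
aws_secret_access_key=kkkkkkkkkkkkkkkeeeeeeeeeeeeeyyyyyyyyyyyy
EOF
cat > ~/.aws/config <<'EOF'
[default]
region=us-west-2
output=json
EOF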

Or use an interactive setup:

# interactive setup for creds
docker container run --rm -it \
  -v ~/.aws:/root/.aws \
  amazon/aws-cli \
  configure

Check creds:

docker container run --rm -it \
  -v ~/.aws:/root/.aws \
  amazon/aws-cli \
  configure
# or view
cat ~/.aws/credentials
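
There's also configure list, which prints the resolved settings with the keys partially masked (handy later when juggling profiles):

docker container run --rm -it \
  -v ~/.aws:/root/.aws \
  amazon/aws-cli \
  configure list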

3. cp (copy), sync, mv (move) and ls (list) files

3.a cp (copy) command

$ aws s3 cp <source> <target> [--options]

The source can be a local file on your computer or an object in S3.

Add the --recursive option when copying a dir:

# upload a local file hyena.png (current dir) to mammals dir
docker container run --rm \
  -v ~/.aws:/root/.aws \
  -v $(pwd):/aws \
  amazon/aws-cli s3 \
  cp hyena.png \
  s3://animal-bucket/mammals/
# download lynx.png from s3 to current dir
docker container run --rm \
  -v ~/.aws:/root/.aws \
  -v $(pwd):/aws amazon/aws-cli s3 \
  cp s3://animal-bucket/mammals/lynx.png ./
# upload all contents from reptiles dir (local) to s3
docker container run --rm \
  -v ~/.aws:/root/.aws \
  -v $(pwd):/aws amazon/aws-cli s3 \
  cp reptiles/ s3://animal-bucket/reptiles/ \
  --recursive
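
Not sure what a cp/mv/sync invocation will actually do? Add --dryrun to print the operations without performing them, e.g. reusing the reptiles upload:

docker container run --rm \
  -v ~/.aws:/root/.aws \
  -v $(pwd):/aws amazon/aws-cli s3 \
  cp reptiles/ s3://animal-bucket/reptiles/ \
  --recursive --dryrun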

3.b ls (list) command

# list all files in reptiles dir
docker container run --rm \
  -v ~/.aws:/root/.aws \
  amazon/aws-cli s3 \
  ls s3://animal-bucket/reptiles/

e.g. to get the total size of a bucket:

docker container run --rm \
  -v ~/.aws:/root/.aws \
  amazon/aws-cli s3 \
  ls s3://animal-bucket/ \
  --summarize \
  --human-readable \
  --recursive \
  --profile profile-name \
  --endpoint-url=https://s3.<region>.<provider>.com

3.c sync command

aws s3 sync <source> <target> [--options]

Using cp requires the --recursive option to copy multiple files in a dir, while sync copies a whole directory by default, and it only copies new/modified files.

Add the --delete option to remove <target> files that aren't in the <source> dir (shown after the basic example below).

# using new-reptiles folder as basis
docker container run --rm \
  -v ~/.aws:/root/.aws \
  -v $(pwd):/aws \
  amazon/aws-cli s3 \
  sync new-reptiles/ s3://animal-bucket/old-reptiles/
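
Same thing with --delete, so files removed from new-reptiles also disappear from the bucket (be careful with this one):

docker container run --rm \
  -v ~/.aws:/root/.aws \
  -v $(pwd):/aws \
  amazon/aws-cli s3 \
  sync new-reptiles/ s3://animal-bucket/old-reptiles/ \
  --delete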

3.d mv (move) command

aws s3 mv <source> <target> [--options]

# Only markdown files are moved from new-reptiles
# to animal-bucket/old-reptiles. The rest aren't included.
docker container run --rm \
  -v ~/.aws:/root/.aws \
  -v $(pwd):/aws \
  amazon/aws-cli s3 \
  mv s3://new-reptiles/ s3://animal-bucket/old-reptiles/ \
  --recursive --exclude "*" --include "*.md" \
  --profile wasabi \
  --endpoint-url=https://s3.<region>.<provider>.com

The rest of the commands can be found in the docs.

4. Using another cloud storage provider (e.g. Wasabi)

Same as step 2:

~/.aws/credentials

[default]
aws_access_key_id=fffff3333ff55555GGGG
aws_secret_access_key=kkkkkkkkkkkkkkkeeeeeeeeeeeeeyyyyyyyyyyyy
[wasabi]
aws_access_key_id=wwwaaasssaaabbbiiiii
aws_secret_access_key=wwwwwwwaaaaaassssssaaaaaabbbbbbbiiiiiiii

~/.aws/config

[default]
region=us-west-2
output=json
[wasabi]
region=us-east-1
output=table

Or use an interactive setup. Notice I added the --profile <PROFILE_NAME> option:

# interactive setup for creds
docker container run --rm -it \
  -v ~/.aws:/root/.aws \
  amazon/aws-cli \
  configure --profile wasabi

Check creds:

docker container run --rm -it \
  -v ~/.aws:/root/.aws \
  amazon/aws-cli \
  configure --profile wasabi

Running a command (with --profile and --endpoint-url):

docker container run --rm \
  -v ~/.aws:/root/.aws \
  amazon/aws-cli s3 \
  ls s3://wasabi-bucket/ \
  --profile wasabi \
  --endpoint-url=https://s3.us-east-1.wasabisys.com

Shorten the command with an alias ($(pwd) expands each time the alias is used, so run it from the directory you want mounted):

alias aws='docker container run --rm -it -v ~/.aws:/root/.aws -v $(pwd):/aws amazon/aws-cli'
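
With the alias in place, the earlier commands shrink to the plain form, e.g.:

aws s3 ls s3://animal-bucket/reptiles/
aws s3 cp hyena.png s3://animal-bucket/mammals/

Only the current directory gets mounted into the container (as /aws), so local paths have to live under it.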

Errors

An error occurred (NoSuchTagSetError) when calling the GetObjectTagging operation: There is no tag set associated with the bucket.

If you come across this error while using sync, a fix could be to either grant the s3:GetObjectTagging permission to that user, or add the --copy-props metadata-directive option (it tells the CLI not to copy the source object's tags).
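
For example, a sync against the wasabi-bucket from above with that option added (the destination path is made up, adjust to yours):

docker container run --rm \
  -v ~/.aws:/root/.aws \
  -v $(pwd):/aws \
  amazon/aws-cli s3 \
  sync new-reptiles/ s3://wasabi-bucket/reptiles/ \
  --copy-props metadata-directive \
  --profile wasabi \
  --endpoint-url=https://s3.us-east-1.wasabisys.com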

Source can be found here.