Amazon-AWS-tools

AWS CLI basic workflow

If you want to quickly list instances and their Security Groups for a particular environment, here is a one-liner:

    aws ec2 describe-instances --filter Name=tag:Environment,Values=ENVIRONMENT_NAME --query 'Reservations[*].Instances[*].{ID:InstanceId,SG:SecurityGroups,Tags:Tags}' --output text --profile profile_name

This will produce a nice output of your instances and their assigned Security Groups. The output can be filtered in multiple ways, but the best way to start is to select only the first element without any filters:

aws ec2 describe-instances --query 'Reservations[0].Instances[0]' --output json

That way you can see which elements are available in the response, and based on that you can build your filter.

This is a VERY important feature: passing multiple filters like this performs a logical AND, so only instances matching ALL of the requirements appear in the output:

aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" "Name=tag:Environment,Values=ENV_NAME" "Name=tag:Project,Values=PROJECT_NAME" --profile PROFILE_NAME --query 'Reservations[*].Instances[*].State'

If you specify them like this (all filters merged into a single expression), output will be produced if ANY of those filters is matched:

aws ec2 describe-instances --filters "Name=instance-state-name,Values=running,Name=tag:Environment,Values=ENV_NAME,Name=tag:Project,Values=PROJECT_NAME" --profile PROFILE_NAME --query 'Reservations[*].Instances[*].State'

If you want to query and output certain values from Tags, for example the instance Name, along with other details, you can do it like this:

aws ec2 describe-instances --filters 'Name=tag:KEY,Values=VALUE' --query 'Reservations[].Instances[].[Tags[?Key==`Name`] | [0].Value,InstanceId,InstanceType]' --output table

Also, you can configure the AWS CLI to work with profiles, so if you have different access keys for different environments you can access them by using --profile (as shown above).
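
A named profile can be created interactively with the standard configure command; the profile name below is just a placeholder:

aws configure --profile profile_name

This prompts for the access key ID, secret access key, default region, and output format, and stores them under that profile name in ~/.aws/credentials and ~/.aws/config.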

Other AWS CLI commands

List instanceIDs

aws ec2 describe-instances --output text --query "Reservations[].Instances[].InstanceId"

List Architectures

aws ec2 describe-instances --output text --query "Reservations[].Instances[].Architecture"

Search instances with tag “Name” and value “TAG”

aws ec2 describe-instances --filters "Name=tag:Name,Values=TAG"

Search instances with tag values containing “VALS”

aws ec2 describe-instances --filters "Name=tag-value,Values=*VALS*"

Search instances with tag values beginning with “VALS”

aws ec2 describe-instances --filters "Name=tag-value,Values=VALS*"

Search instances with tag keys “Name”

aws ec2 describe-instances --filters "Name=tag-key,Values=Name"

Delete an S3 bucket and all its contents with just one command

aws s3 rb s3://bucket-name --force

Recursively copy a directory and its subfolders from your PC to Amazon S3.

aws s3 cp MyFolder s3://bucket-name --recursive [--region us-west-2]

Display a subset of all available EC2 Ubuntu images

aws ec2 describe-images | grep ubuntu

List users in a table format

aws iam list-users --output table

List the total size of an S3 bucket and the number of its objects

aws s3api list-objects --bucket BUCKETNAME --output json --query "[sum(Contents[].Size), length(Contents[])]"

Get the total size of your S3 bucket:

aws s3 ls s3://ascribebackup --recursive | grep -v -E "(Bucket: |Prefix: |LastWriteTime|^$|--)" | awk 'BEGIN {total=0}{total+=$3}END{print total/1024/1024" MB"}'

Get bucket size and number of files in a list

aws s3api list-objects --bucket ascribebackup --output json --query "[sum(Contents[].Size), length(Contents[])]"

Move an S3 bucket to a different region

aws s3 sync s3://oldbucket s3://newbucket --source-region us-west-1 --region us-west-2

List all of your instances that are currently stopped, and the reason for the stop

aws ec2 describe-instances --filters Name=instance-state-name,Values=stopped --region eu-west-1 --output json | jq -r '.Reservations[].Instances[].StateReason.Message'

Test one of your public CloudFormation templates

aws cloudformation validate-template --region eu-west-1 --template-url https://s3-eu-west-1.amazonaws.com/ca/ca.cftemplate

Request a spot instance

aws ec2 request-spot-instances --spot-price "0.5995" --instance-count 1 --type "one-time" --launch-specification '{"ImageId":"ami-062c161b","InstanceType":"c4.4xlarge","Placement":{"AvailabilityZone":"eu-central-1a"},"SecurityGroupIds":["sg-de11b4b7"]}'

s3cmd-tool

Install s3cmd

sudo apt-get install s3cmd

List all buckets

s3cmd ls

List the contents of the bucket

s3cmd ls s3://my-bucket-name

Upload a file into the bucket (private)

s3cmd put myfile.txt s3://my-bucket-name/myfile.txt

Upload a file into the bucket (public)

s3cmd put --acl-public --guess-mime-type myfile.txt s3://my-bucket-name/myfile.txt

Recursively upload a directory to s3

s3cmd put --recursive my-local-folder-path/ s3://my-bucket-name/mydir/

Download a file

s3cmd get s3://my-bucket-name/myfile.txt myfile.txt

Recursively download files that start with myfile

s3cmd --recursive get s3://my-bucket-name/myfile

Delete a file

s3cmd del s3://my-bucket-name/myfile.txt

Delete a bucket

s3cmd del --recursive s3://my-bucket-name/

Create a bucket

s3cmd mb s3://my-bucket-name

List bucket disk usage (human readable)

s3cmd du -H s3://my-bucket-name/

Sync local (source) to s3 bucket (destination)

s3cmd sync my-local-folder-path/ s3://my-bucket-name/

Sync s3 bucket (source) to local (destination)

s3cmd sync s3://my-bucket-name/ my-local-folder-path/

Do a dry-run (do not perform actual sync, but get information about what would happen)

s3cmd --dry-run sync s3://my-bucket-name/ my-local-folder-path/

Apply a standard shell wildcard include to sync s3 bucket (source) to local (destination)

s3cmd sync --exclude '*' --include '2014-05-01*' s3://my-bucket-name/ my-local-folder-path/

Working with boto (examples)

Get instance information

from pprint import pprint
import boto.ec2
import os

# Credentials are read from environment variables
AWS_ACCESS_KEY_ID = os.environ["AWS_ACCESS_KEY_ID"]
AWS_SECRET_ACCESS_KEY = os.environ["AWS_SECRET_ACCESS_KEY"]

conn = boto.ec2.connect_to_region("eu-central-1",
                aws_access_key_id=AWS_ACCESS_KEY_ID,
                aws_secret_access_key=AWS_SECRET_ACCESS_KEY)

reservations = conn.get_all_instances()
instances = [i for r in reservations for i in r.instances]
for instance in instances:
    pprint(instance.__dict__)
    break  # remove this break to list all instances;
           # it is only here so a test run prints a single record

Listing all of your EC2 Instances using boto

import boto.ec2
import os

AWS_ACCESS_KEY_ID = os.environ["AWS_ACCESS_KEY_ID"]
AWS_SECRET_ACCESS_KEY = os.environ["AWS_SECRET_ACCESS_KEY"]

def get_ec2_instances(region):
    # Connect to the region passed in rather than a hard-coded one
    conn = boto.ec2.connect_to_region(region,
                aws_access_key_id=AWS_ACCESS_KEY_ID,
                aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
    reservations = conn.get_all_reservations()
    for reservation in reservations:
        print(region + ':', reservation.instances)

    for vol in conn.get_all_volumes():
        print(region + ':', vol.id)

def main():
#    regions = ['us-east-1','us-west-1','us-west-2','eu-west-1','sa-east-1',
#                'ap-southeast-1','ap-southeast-2','ap-northeast-1']
    regions = ['eu-central-1']
    for region in regions:
        get_ec2_instances(region)

if __name__ == '__main__':
    main()

Retrieving basic information from the running EC2 instances

    from collections import defaultdict
    import boto3

    # Connect to EC2 (credentials come from the environment or ~/.aws)
    ec2 = boto3.resource('ec2')

    # Get information for all running instances
    running_instances = ec2.instances.filter(Filters=[{
        'Name': 'instance-state-name',
        'Values': ['running']}])

    ec2info = defaultdict()
    for instance in running_instances:
        name = None
        for tag in instance.tags or []:
            if tag['Key'] == 'Name':
                name = tag['Value']

        ec2info[instance.id] = {
            'Tag': name,
            'Type': instance.instance_type,
            'State': instance.state['Name'],
            'Private IP': instance.private_ip_address,
            'Public IP': instance.public_ip_address,
            'DNS Name': instance.public_dns_name,
            'Launch Time': instance.launch_time
            }

    attributes = ['Tag', 'Type', 'State', 'Private IP', 'Public IP', 'DNS Name', 'Launch Time']
    for instance_id, instance in ec2info.items():
        for key in attributes:
            print("{0}: {1}".format(key, instance[key]))
        print("------")

Moving S3 buckets between AWS accounts

Before using the sync command, you must give the destination AWS account access to the source AWS account's resources by using Amazon S3 ACLs or bucket policies. First, get the 12-digit account ID of the destination account.
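If you already have a CLI profile configured for the destination account, one way to look up its account ID is via STS; the profile name below is just a placeholder:

aws sts get-caller-identity --query Account --output text --profile destination_profile

Next, in the source account, attach the following policy to the bucket you want to copy: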

  • Bucket policy in the source AWS account

     {
         "Version": "2012-10-17",
         "Statement": [
             {
                 "Sid": "DelegateS3Access",
                 "Effect": "Allow",
                 "Principal": {"AWS": "destinationAccountNumber"},
                 "Action": "s3:*", "Resource": [
                     "arn:aws:s3:::sourcebucket/*",
                     "arn:aws:s3:::sourcebucket"
                 ]
             }
         ]
     }
    

Next, attach a policy to a user in the destination AWS account to delegate access to the bucket in the source AWS account:

  • User or group policy in the destination AWS account

     {
         "Version": "2012-10-17",
         "Statement": {
             "Effect": "Allow",
             "Action": "s3:*",
             "Resource": [
                 "arn:aws:s3:::sourcebucket",
                 "arn:aws:s3:::sourcebucket/*",
                 "arn:aws:s3:::destinationbucket",
                 "arn:aws:s3:::destinationbucket/*",
             ]
         }
     }
    

When these steps are completed, you can copy objects by using the AWS CLI.

aws s3 sync s3://sourcebucket s3://destinationbucket

For more information on transferring ownership of bucket objects to a different account, please refer to the following links:
https://aws.amazon.com/premiumsupport/knowledge-center/account-transfer-s3/
http://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html
http://docs.aws.amazon.com/cli/latest/reference/s3/sync.html

You can use Cross-Region Replication (CRR), which copies objects across buckets in different AWS Regions automatically and asynchronously. Please refer to the following link for more:
http://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html
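
A minimal sketch of enabling CRR from the CLI, assuming versioning can be turned on for both buckets and an IAM replication role already exists (the account ID and role name below are placeholders):

aws s3api put-bucket-versioning --bucket sourcebucket --versioning-configuration Status=Enabled
aws s3api put-bucket-versioning --bucket destinationbucket --versioning-configuration Status=Enabled
aws s3api put-bucket-replication --bucket sourcebucket --replication-configuration '{"Role": "arn:aws:iam::123456789012:role/s3-replication-role", "Rules": [{"Status": "Enabled", "Prefix": "", "Destination": {"Bucket": "arn:aws:s3:::destinationbucket"}}]}'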

You can also use a shell script run by cron, a time-based job scheduler, to perform the sync periodically at fixed times, dates, or intervals (a sample crontab entry is sketched below the link). Please refer to this link for how to create a cron job:
http://www.thesitewizard.com/general/set-cron-job.shtml
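
For example, a crontab entry along these lines would run the sync every night at 02:00; the bucket names, binary path, and log file are assumptions to adapt:

0 2 * * * /usr/bin/aws s3 sync s3://sourcebucket s3://destinationbucket >> /var/log/s3-sync.log 2>&1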

Best practices

Tag EBS Volumes

Indicate the name of the instance and the volume's purpose (boot, database, etc.) so you know what a volume in the "available" state was used for!
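
Tags can be added from the CLI as well; the volume ID and tag values here are placeholders:

aws ec2 create-tags --resources vol-0123456789abcdef0 --tags Key=Name,Value=webserver-01 Key=Purpose,Value=database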

Power Down for the Weekend

Turn off your development environment during weekends, holidays, or whenever it is not in use to save some bucks.
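
For example, assuming your development instances carry an Environment=dev tag (an assumption for this sketch), you could stop all running ones like this:

aws ec2 stop-instances --instance-ids $(aws ec2 describe-instances --filters "Name=tag:Environment,Values=dev" "Name=instance-state-name,Values=running" --query 'Reservations[].Instances[].InstanceId' --output text)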

Throw out unattached IP Addresses

Remember that an Elastic IP address is charged for every hour it is not attached to a running instance.
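
A sketch for listing unassociated Elastic IPs and then releasing one; the allocation ID below is a placeholder:

aws ec2 describe-addresses --query 'Addresses[?AssociationId==null].[PublicIp,AllocationId]' --output text
aws ec2 release-address --allocation-id eipalloc-0123456789abcdef0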

Replace Outdated S3 Objects with Glacier

You don't have to delete out-of-date S3 objects. S3 lifecycle rules provide an automated process that transitions objects to Glacier (see the sketch below).
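
A minimal lifecycle sketch that moves objects to Glacier after 90 days; the bucket name, rule ID, and the 90-day threshold are assumptions:

aws s3api put-bucket-lifecycle-configuration --bucket BUCKETNAME --lifecycle-configuration '{"Rules": [{"ID": "archive-to-glacier", "Status": "Enabled", "Filter": {"Prefix": ""}, "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}]}]}'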

License

The documents in this repository are licensed under a Creative Commons Attribution 4.0 International license.
