Tuesday, October 15, 2013

Amazon Web Services CloudFront Edge Locations Codes


Following the thread opened with my previous post about AWS CloudFront Edge Location codes, here is an updated list.

Amazon Web Services names its Edge Locations after the IATA code of the closest international airport.


AMS1  Amsterdam, The Netherlands
AMS50 Amsterdam, The Netherlands
ARN1  Stockholm, Sweden
ATL50 Atlanta, Georgia
BOM2  Mumbai, India
CDG3  Paris, France
CDG50 Paris, France
CDG51 Paris, France
DFW3  Dallas, Texas
DFW50 Dallas, Texas
DUB2  Dublin, Ireland
EWR2  Newark, New Jersey
FRA2  Frankfurt, Germany
FRA50 Frankfurt, Germany
FRA6  Frankfurt, Germany
GRU1  São Paulo, Brazil
GIG50 Rio de Janeiro, Brazil
HKG1  Hong Kong Island, Hong Kong
HKG50 Hong Kong Island, Hong Kong
IAD12 Ashburn, Virginia
IAD2  Ashburn, Virginia
IAD53 Ashburn, Virginia
ICN50 Seoul, South Korea
IND6  South Bend, Indiana
JAX1  Jacksonville, Florida
JFK1  New York, New York
JFK5  New York, New York
JFK6  New York, New York
LAX1  Los Angeles, California
LAX3  Los Angeles, California
LHR3  London, United Kingdom
LHR5  London, United Kingdom
LHR50 London, United Kingdom
MAA3  Chennai, India
MAD50 Madrid, Spain
MIA3  Miami, Florida
MIA50 Miami, Florida
MNL50 Manila, Philippines
MRS50 Marseille, France 
MXP4  Milan, Italy
NRT12 Tokyo, Japan
NRT52 Tokyo, Japan
NRT53 Tokyo, Japan
NRT54 Tokyo, Japan
SEA4  Seattle, Washington
SEA50 Seattle, Washington
SFO4  San Francisco, California
SFO5  San Francisco, California
SFO9  San Francisco, California
SIN2  Republic of Singapore
SIN3  Republic of Singapore
STL2  St. Louis, Missouri
SYD1  Sydney, Australia
TPE50 Taipei, Taiwan
WAW50 Warsaw, Poland 

Total = 55
Note: This is a historical list. Some of these Edge Location codes are no longer active.

Official Information: AWS Global Infrastructure

The Edge Location code appears in the CloudFront access logs (3rd field). To enable access logs for your CloudFront distribution follow these instructions.
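
As an illustration, a made-up access log line could look like the following (real logs are tab-separated and contain more fields; the values and the distribution domain here are invented). The Edge Location code, "AMS1" in this case, is the 3rd field:

2013-10-15  10:01:23  AMS1  2390  192.0.2.10  GET  d111111abcdef8.cloudfront.net  /index.html  200  ...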

Would you like to see them on a map? Check this Google Map of Amazon Web Services.


Sunday, October 6, 2013

Deploy SSH Authorized Keys using S3 and AWS CLI with temporary credentials



Disclaimer: Modifying security credentials could result in losing access to your server if something goes wrong. I strongly suggest you test the method described here in your Development environment before using it in Production.


Key pairs are the standard method to authenticate SSH access to our EC2 instances based on the Amazon Linux AMI. We can easily create new key pairs for our team using the ssh-keygen command and, with root access, manually add them to the file /home/ec2-user/.ssh/authorized_keys.
Format:

/home/ec2-user/.ssh/authorized_keys
ssh-rsa AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== main-key
ssh-rsa BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB== juan
ssh-rsa CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC== pedro
ssh-rsa DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD
DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD== luis
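
As a quick sketch, a new team member's entry could be generated and appended like this (the file name and comment below are just illustrative):

# Generate a 2048-bit RSA key pair for a new team member
ssh-keygen -t rsa -b 2048 -C "pedro" -f pedro-key
# Append the public half to the instance's authorized_keys file (run on the instance)
cat pedro-key.pub >> /home/ec2-user/.ssh/authorized_keys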

But as the number of instances and team members grows, we need a centralized way to distribute this file.

Goal

- Store an authorized_keys file in S3, encrypted "at rest".
- Transfer this file from S3 to the instance securely.
- Give access to this file only to the right instances.
- Do not store any API Access Keys in the script involved.
- Keep all temporary files in RAM.


S3

- Create a bucket. In this example it is "tarro".

- Create an authorized_keys file locally and upload it to the new bucket.

- Open the file's Properties in the S3 Console, select Server Side Encryption = AES256, and Save.
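
If you prefer the command line over the console, the upload and the encryption setting can be done in one step (a sketch, assuming the AWS CLI is installed locally and configured with credentials allowed to write to the bucket):

# Upload the file and request server-side encryption (AES256) in the same call
aws s3api put-object --bucket tarro --key authorized_keys --body authorized_keys --server-side-encryption AES256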


- Calculate the MD5 of the file with md5sum
Example:


$ md5sum authorized_keys
690f9d901801849f6f54eced2b2d1849  authorized_keys

- Create a file called authorized_keys.md5, copy the md5sum result into it (only the hexadecimal string of numbers and letters), and upload it to the same S3 bucket.
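
For example, the .md5 file can be produced and uploaded from the command line like this (same assumption about a locally configured AWS CLI):

# Keep only the hexadecimal hash and write it to authorized_keys.md5
md5sum authorized_keys | awk '{print $1}' > authorized_keys.md5
# Upload it next to the keys file
aws s3api put-object --bucket tarro --key authorized_keys.md5 --body authorized_keys.md5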


IAM

We will use an EC2 IAM instance role. This way we don't need to store a copy of our API Access Keys on the instances that will be accessing the secured files. The AWS Command Line Interface (AWS CLI) will automatically query the EC2 Instance Metadata and retrieve the temporary security credentials needed to connect to S3. We will specify a role policy to grant read access to the bucket that contains those files.
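
To see the mechanism at work, you can inspect those temporary credentials from inside a running instance; the role name below is the one used in this example:

# List the role(s) attached to this instance
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Show the temporary AccessKeyId, SecretAccessKey, Token and their expiration
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/demo-role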

- Create a role using the IAM Console. In my example it is "demo-role".
- Select Role Type = Amazon EC2.
- Select Custom Policy.
- Create a role policy to grant read access only to the "tarro" bucket. Example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Sid": "Stmt1380102067000",
      "Resource": [
        "arn:aws:s3:::tarro/*"
      ],
      "Effect": "Allow"
    }
  ]
}

EC2

We will use Amazon Linux AMI 2013.09, which includes the AWS CLI.

- Launch your instance as you usually do, but now select the IAM Role and choose the appropriate one. In my example it is "demo-role", but you could have different roles for every application tier: web servers, databases, test, etc.
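
If you launch from the command line instead of the console, the role is attached as an instance profile (a sketch; the AMI ID, key pair name and instance type below are placeholders):

# Launch an instance with the "demo-role" instance profile attached
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t1.micro --key-name my-keypair --iam-instance-profile Name=demo-role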


- As root, create /root/bin/

- In /root/bin/ create the file deploy-keys.sh with the following content:

#!/bin/bash
#
# /root/bin/deploy-keys.sh
# Install centralized authorized_keys file from S3 securely using temporary security credentials
# blog.domenech.org
#

### User defined variables
BUCKET="tarro"
TMPFOLDER="/media/tmpfs/"
# (Finish TMPFOLDER variable with slash)

# Note:
# The temporary folder in RAM is 1 Megabyte in size.
# If you plan to deal with files bigger than that, change the tmpfs size in the mount command below accordingly

# Create temporary folder
if [ ! -e $TMPFOLDER ] 
then
 mkdir $TMPFOLDER
fi

# Mount temporary folder in RAM
mount -t tmpfs -o size=1M,mode=700 tmpfs $TMPFOLDER

# Get-Object from S3
COMMAND=`aws s3api get-object --bucket $BUCKET --key "authorized_keys" $TMPFOLDER"authorized_keys"`
if [ ! $? -eq 0 ]
then
 umount $TMPFOLDER
 logger "deploy-keys.sh: aws s3api get-object authorized_keys failed! Exiting..."
 exit 1
fi 

# Get-Object from S3 (MD5)
COMMAND=`aws s3api get-object --bucket $BUCKET --key "authorized_keys.md5" $TMPFOLDER"authorized_keys.md5"`
if [ ! $? -eq 0 ]
then
 umount $TMPFOLDER
 logger "deploy-keys.sh: aws s3api get-object authorized_keys.md5 failed! Exiting..."
 exit 1
fi

# Check MD5, copy the new file if it matches, and clean up
MD5=`cat $TMPFOLDER"authorized_keys.md5"`
MD5NOW=`md5sum $TMPFOLDER"authorized_keys" | awk '{print $1}'`
if [ "$MD5" == "$MD5NOW" ]
then
 mv --update /home/ec2-user/.ssh/authorized_keys /home/ec2-user/.ssh/authorized_keys.original
 cp --force $TMPFOLDER"authorized_keys" /home/ec2-user/.ssh/authorized_keys
 chown ec2-user:ec2-user /home/ec2-user/.ssh/authorized_keys
 chmod go-rwx /home/ec2-user/.ssh/authorized_keys
 # The umount command will discard the files held in RAM but we are extra cautious here, shredding and removing them
 shred $TMPFOLDER"authorized_keys"; shred $TMPFOLDER"authorized_keys.md5"
 rm $TMPFOLDER"authorized_keys"; rm $TMPFOLDER"authorized_keys.md5"; umount $TMPFOLDER
 logger "deploy-keys.sh: Keys updated successfully."
 exit 0
else
 shred $TMPFOLDER"authorized_keys"; shred $TMPFOLDER"authorized_keys.md5"
 rm $TMPFOLDER"authorized_keys"; rm $TMPFOLDER"authorized_keys.md5"; umount $TMPFOLDER
 logger "deploy-keys.sh: MD5 check failed! Exiting..."
 exit 1
fi



- Give execution permission to root and remove unnecessary read/write permissions for group and others.

Or you can do it all at once by executing this command:


mkdir /root/bin/; cd /root/bin/; wget -q http://www.domenech.org/files/deploy-keys.sh; chmod u+x deploy-keys.sh; chmod go-rwx deploy-keys.sh; chown root:root deploy-keys.sh

and test the script. You can check the script results in /var/log/messages.
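
For example, a minimal manual check relies only on the logger calls already present in the script:

# Run the script once by hand and then look for its messages in the system log
/root/bin/deploy-keys.sh
grep "deploy-keys.sh" /var/log/messages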

- Trigger the script after reboot by adding the line /root/bin/deploy-keys.sh to the system init script /etc/rc.local:

#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local

/root/bin/deploy-keys.sh




Friday, October 4, 2013

Amazon Web Services Certification: Solutions Architect - Associate


Amazon Web Services (AWS) has launched its professional certification program, and the first certification is now available to sit: AWS Certified Solutions Architect - Associate Level. The exam is administered at Kryterion centers and their partner network. In Spain the only two options are Madrid and Barcelona, and it costs $150. The syllabus is publicly detailed in the Exam Guide and the exam is in English.

According to the published information, the road-map foresees a total of three certifications: Solutions Architect (the one discussed here) and the upcoming SysOps Administrator and Developer. The intention is to create 3 different roles, each grouping the different kinds of professionals who use AWS services. The main purpose of a professional certification is to make it easier for employers to choose professionals and for customers to choose service companies. A certification guarantees a minimum level of competence, which saves us time when choosing our employees and our providers.

It is worth noting that, as is common with certifications from technology companies, each of them will have several levels. AWS plans three levels for each one: Associate, Professional and Master. My interpretation of these levels, based on my previous experience with other companies, is the following:
- The first level is a gateway, and most technicians with a broad knowledge of the product can aim for it. The exam is completely theoretical. It is easy.
- The second demonstrates a deep knowledge of the product and is usually held by professionals who work with that technology frequently in their day-to-day jobs. In addition to theory, the exam usually includes practical exercises that simulate tasks the professional will encounter in the real world, with a limited amount of time to solve them. It is difficult.
- The third level indicates an expert whose entire professional activity is tied to the product the certification refers to. Their knowledge goes beyond what is necessary to make full use of that technology and is closer to that of a teacher than of an expert. The exam is usually completely practical and usually takes place at the company's headquarters, supervised by staff directly involved in creating that technology. It is extremely difficult.

I am eager for the new certifications and the next level of Solutions Architect to become available. I invite all professionals involved in systems engineering and cloud computing to follow this certification closely, as it will very soon become a new standard in our industry.


Update 8-Oct-2013
Certification Roadmap AWS Certified Solutions Architect, Developer and SysOps Administrator