Tuesday, October 30, 2012

AWS API How To

Dealing with the Amazon Web Services API can be frustrating for a beginner. Here is a small example to help you get started.

Some concepts:
The AWS API is a resource that can be accessed from anywhere by an authenticated application to manage all kinds of elements in the AWS infrastructure. You can create a new EC2 instance, manage the contents of your S3 Bucket, modify an alarm in CloudWatch, etc. (the "programmable data center" concept). You can either create your own application to interact with the AWS API (for example, a smartphone app to start/stop your EC2 instances) or use someone else's application to do it (that's what I do). Amazon Web Services provides convenient, ready-to-use command line tools for their API.
There are different APIs inside the AWS cloud and different methods of authentication. Currently, the official way to authenticate is with your Access Key and Secret Key; certificate authentication is now obsolete.
By default, all API calls are directed to the us-east-1 Region (N. Virginia).


First Step:
Deploy an EC2 instance using the Amazon Linux AMI. The basic Amazon Linux AMI includes command line tools to interact with the previously mentioned APIs (and others). You still have the option of downloading those command line tools and using them from your laptop, but using this preconfigured AMI is the easiest way to start.

This is the list of API tools currently included in the EC2 Amazon Linux AMI, and their versions:

$ ssh -i juankeys.pem ec2-user@ec2-50-16-155-40.compute-1.amazonaws.com
Last login: Tue Oct 30 10:25:19 2012 from 28.red-28-28-28.adsl.static.ccgg.telefonica.net
       __|  __|_  )
       _|  (     /   Amazon Linux AMI
      ___|\___|___|
https://aws.amazon.com/amazon-linux-ami/2012.09-release-notes/

$ sudo -i

# ll /opt/aws/apitools/
total 36
lrwxrwxrwx 1 root root   11 Oct 25 18:51 as -> as-1.0.61.1
drwxr-xr-x 4 root root 4096 Oct 25 18:51 as-1.0.61.1
lrwxrwxrwx 1 root root   22 Oct 25 18:51 cfn-init -> ./cfn-init-1.3-6.amzn1
drwxr-xr-x 5 root root 4096 Mar 24  2012 cfn-init-1.1-0.amzn1
drwxr-xr-x 5 root root 4096 Oct 25 18:51 cfn-init-1.3-6.amzn1
lrwxrwxrwx 1 root root   11 Oct 25 18:51 ec2 -> ec2-1.6.3.0
drwxr-xr-x 4 root root 4096 Oct 25 18:51 ec2-1.6.3.0
lrwxrwxrwx 1 root root   12 Oct 25 18:51 elb -> elb-1.0.17.0
drwxr-xr-x 4 root root 4096 Oct 25 18:51 elb-1.0.17.0
lrwxrwxrwx 1 root root    9 Oct 25 18:51 iam -> iam-1.5.0
drwxr-xr-x 4 root root 4096 Oct 25 18:51 iam-1.5.0
lrwxrwxrwx 1 root root   12 Oct 25 18:51 mon -> mon-1.0.13.4
drwxr-xr-x 4 root root 4096 Oct 25 19:45 mon-1.0.13.4
lrwxrwxrwx 1 root root   12 Oct 25 18:51 rds -> rds-1.10.003
drwxr-xr-x 4 root root 4096 Oct 25 18:51 rds-1.10.003
lrwxrwxrwx 1 root root   14 Oct 25 18:51 ses -> ses-2012.07.09
drwxr-xr-x 3 root root 4096 Oct 25 18:51 ses-2012.07.09


Credentials for EC2 API command line tools:
(Logged in as root) Export the variables AWS_ACCESS_KEY and AWS_SECRET_KEY with your credentials and test the configuration with a simple EC2 command like ec2-describe-regions. The Access Key and Secret Key are obtained when you create a new user in the IAM console. You have an example of creating a new user in this article. Please note you will need a user with admin privileges to interact with the AWS API.

# export AWS_ACCESS_KEY=(your access key without parentheses)
# export AWS_SECRET_KEY=(your secret key without parentheses)

# ec2-describe-regions
REGION eu-west-1 ec2.eu-west-1.amazonaws.com
REGION sa-east-1 ec2.sa-east-1.amazonaws.com
REGION us-east-1 ec2.us-east-1.amazonaws.com
REGION ap-northeast-1 ec2.ap-northeast-1.amazonaws.com
REGION us-west-2 ec2.us-west-2.amazonaws.com
REGION us-west-1 ec2.us-west-1.amazonaws.com
REGION ap-southeast-1 ec2.ap-southeast-1.amazonaws.com 
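
If you plan to call these tools from scripts, a small guard at the top avoids confusing errors when the two variables are not exported yet. A minimal sketch (it only checks the environment and then runs the same test command as above):

#!/bin/bash
# Fail early if the EC2 API tools credentials are missing from the environment
: "${AWS_ACCESS_KEY:?export AWS_ACCESS_KEY first}"
: "${AWS_SECRET_KEY:?export AWS_SECRET_KEY first}"

# Same test command as above
ec2-describe-regions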


Credentials for Auto Scaling, CloudWatch, RDS and ELB API command line tools:
- Create a text file at this path: /opt/aws/apitools/mon/credential-file-path.template with the following contents:

AWSAccessKeyId=(your access key without parentheses)                      
AWSSecretKey=(your secret key without parentheses)

- Prevent other users from reading it:

# chmod go-rwx /opt/aws/apitools/mon/credential-file-path.template

# ll /opt/aws/apitools/mon/credential-file-path.template
-rw------- 1 root root 91 Oct 25 19:45 /opt/aws/apitools/mon/credential-file-path.template

- Export the AWS_CREDENTIAL_FILE variable with the file location:

export AWS_CREDENTIAL_FILE=/opt/aws/apitools/mon/credential-file-path.template

- And test the configuration with some simple commands like as-describe-scaling-activities, mon-list-metrics, elb-describe-lbs and rds-describe-db-engine-versions:

# as-describe-scaling-activities
ACTIVITY  fddbfad9-3383-4cdd-bbaa-fb843ff1141a  2012-10-29T14:30:50Z  grupo-prueba  Successful
ACTIVITY  02ff2071-1ec5-45c4-936d-76620a8ff0b0  2012-10-29T13:57:28Z  grupo-prueba  Successful
ACTIVITY  a18a5ce2-c28f-4531-abe6-6bde9d3713fd  2012-10-29T13:57:13Z  grupo-prueba  Successful
ACTIVITY  320d8cf7-adab-4085-becf-25fbd29d89ee  2012-10-29T13:43:49Z  grupo-prueba  Successful

# mon-list-metrics | head
"             AutoScalingGroupName             grupo-prueba        
"             AutoScalingGroupName             grupo-prueba        
"             AutoScalingGroupName             grupo-prueba        
"             AutoScalingGroupName             grupo-prueba        
"             AutoScalingGroupName             grupo-prueba        
"             AutoScalingGroupName             grupo-prueba                  "    

# elb-describe-lbs
LOAD_BALANCER  domenech    domenech-1821931935.us-east-1.elb.amazonaws.com   2012-05-31T15:16:17.630Z  internet-facing
LOAD_BALANCER  elb-prueba  elb-prueba-926661513.us-east-1.elb.amazonaws.com  2012-10-29T12:49:17.750Z  internet-facing

# rds-describe-db-engine-versions | head
VERSION  mysql          5.1.45            mysql5.1            MySQL Community Edition                  MySQL 5.1.45                                
VERSION  mysql          5.1.49            mysql5.1            MySQL Community Edition                  MySQL 5.1.49                                
VERSION  mysql          5.1.50            mysql5.1            MySQL Community Edition                  MySQL 5.1.50                                
VERSION  mysql          5.1.57            mysql5.1            MySQL Community Edition                  MySQL 5.1.57                                
VERSION  mysql          5.1.61            mysql5.1                
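
Once the credential file works, you can go one step further with CloudWatch and ask for actual datapoints. Something along these lines should return the recent average CPU utilization of the Auto Scaling group seen above (a sketch only; the group name comes from the mon-list-metrics output and the exact flags may vary slightly between tool versions):

# mon-get-stats CPUUtilization --namespace "AWS/EC2" --statistics "Average" --dimensions "AutoScalingGroupName=grupo-prueba" --period 300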


A visit to the 783 kHz Medium Wave transmitter station

It has been a very pleasant day. Thanks to my friend Josep I had the opportunity to visit the Medium Wave transmitter station of COPE Barcelona (783 kHz). For those of us who love the world of radio, everything related to the "low frequencies" and the generation of kilowatts has a special charm. It is a world where vacuum tubes are the queens and voltages and powers are measured in the thousands.
This installation has two transmitters working in a high-availability arrangement: a modern one (the active one) and an older one (which takes over if there is a problem with the main one). The latter, built by Continental Electronics, is the one that caught my attention. Its "retro" atmosphere has a special charm that only a few of us can appreciate. Its spacious layout lets you follow the whole signal path: starting with the electrical stage that raises its 380 V three-phase input up to 17 kV, continuing through its control cabinet where everything is relays and contactors, and ending at the power stage where two majestic vacuum tubes raise the signal up to 50 kW.
In the photographs you can see the different elements. Worth noting are the enormous copper pipe, which is the feed line that takes the signal out of the room to the exterior, and Josep literally inside the control cabinet doing routine maintenance.
I could not resist having my picture taken next to the tubes (with the transmitter switched off, of course :)


[Photos: the 783 kHz / 50 kW Medium Wave transmitter, the base station, the high power transmitter, and Josep Vernet and Juan Domenech]


Monday, October 29, 2012

AWS EC2 Instance Metadata

There is an easy way to access Instance information from within the Instance itself. This is very useful when writing scripts that are executed inside the Instance. The method is to access the Instance Metadata with an HTTP GET call to the IP 169.254.169.254. It works on any EC2 instance and the IP address is always the same.

Examples:

- Obtaining the Instance ID:

$ wget -q -O - http://169.254.169.254/latest/meta-data/instance-id
i-87eef4e2

- Public Hostname:

$ wget -q -O - http://169.254.169.254/latest/meta-data/public-hostname
ec2-50-17-85-234.compute-1.amazonaws.com

- Public IPv4 Address:

$ wget -q -O - http://169.254.169.254/latest/meta-data/public-ipv4
50.17.85.234

Check the Instance Metadata manual page for further reference. 
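
You can also ask for the full list of available keys by requesting the meta-data root; every key is returned on its own line and can be appended to the URL as in the examples above:

$ wget -q -O - http://169.254.169.254/latest/meta-data/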

Unfortunately, the Instance Tags are not available through the Metadata yet (forums). There is a workaround using the ec2-describe-instances API command.

Example: obtaining the whole Tag list of the instance we are running on:

$ ec2-describe-instances `wget -q -O - http://169.254.169.254/latest/meta-data/instance-id` --show-empty-fields | grep TAG

TAG instance i-87eef4e2 Another Tag Another Value
TAG instance i-87eef4e2 Name mymachine1
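
If a script only needs one specific Tag, both techniques can be combined to extract its value directly. A minimal sketch for the Name tag (the awk field positions assume the default tab-separated output of ec2-describe-instances):

$ INSTANCE_ID=$(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id)
$ ec2-describe-instances $INSTANCE_ID --show-empty-fields | awk -F'\t' '$1 == "TAG" && $4 == "Name" {print $5}'
mymachine1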

Using AWS S3 with s3cmd

Our goal is to use a simple command line utility to access our S3 resources from our Linux laptop.

First, you should create an IAM User with permissions to access only your S3 Buckets. There are several ways to configure an IAM User, but my suggestion is to create an admin user with access restricted to S3. The process is similar to the Read Only User we created before for Newvem, but this time selecting the "Amazon S3 Full Access" Policy Template. This way we are sure that anything that happens to this user will only affect our S3 Buckets.

In this example I will download all the log files automatically created by AWS CloudFront during the experiment of exploring how many CloudFront Edge Locations exist today, and then delete those files.

- Install s3cmd

$ sudo apt-get install s3cmd 

- Configure:

$ s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3
Access Key: (Access Key for the admin S3 User we have created before)
Secret Key: (Secret Key for the admin S3 User we have created before)

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password: (your-password)
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP and can't be used if you're behind a proxy
Use HTTPS protocol [No]:

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't conect to S3 directly
HTTP Proxy server name:

New settings:
  Access Key: ***********************
  Secret Key: ***************************
  Encryption password: ***********
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] y
Please wait...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Success. Encryption and decryption worked fine :-)

Save settings? [y/N] y
Configuration saved to '/home/joan/.s3cfg'

- Listing Buckets:

$ s3cmd ls 

- Listing Bucket contents (folders):

$ s3cmd ls s3://Bucket-Name 

- Listing Bucket contents (files):

$ s3cmd ls s3://Bucket-Name/Folder-Name 

- Download all folder content:

$ s3cmd get s3://Bucket-Name/Folder-Name/* 

- Delete all folder content:

$ s3cmd del s3://Bucket-Name/Folder-Name/* 

Note: This last command will delete all the files AND the folder.
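
Putting it together for the CloudFront logs use case mentioned at the beginning, something like this would download everything locally and only then delete it from the Bucket (a sketch reusing the same placeholder Bucket and Folder names as above):

$ mkdir cloudfront-logs && cd cloudfront-logs
$ s3cmd get s3://Bucket-Name/Folder-Name/* && s3cmd del s3://Bucket-Name/Folder-Name/*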

Note: Keep in mind that every access to AWS S3 becomes a GET, PUT, POST or LIST request and some cost may apply. Refer to the Amazon Web Services S3 Pricing page for details.

Friday, October 19, 2012

New AWS Edge Location Madrid, Spain. Welcome baby!

New AWS Edge Location Madrid Official Announcement:
http://aws.amazon.com/about-aws/whats-new/2012/09/12/amazon-cloudfront-announces-madrid-edge-location/

Let's take a look. Some traceroutes from Spain using http://www.rediris.es/red/lg/

Espere, por favor...
Please wait...
traceroute to 54.240.186.41 (54.240.186.41), 30 hops max, 40 byte packets
 1  ge-0.espanix.mdrdsp02.es.bb.gin.ntt.net (193.149.1.36)  6.446 ms  7.397 ms  2.023 ms
 2  * 81.19.109.138 (81.19.109.138)  1.758 ms  1.689 ms
 3  54.240.186.41 (54.240.186.41)  1.730 ms  1.721 ms  6.010 ms
{master}

Espere, por favor...
Please wait...
traceroute to 54.240.186.41 (54.240.186.41), 30 hops max, 40 byte packets
 1  CICA.GE0-2-0.ciemat.rt1.mad.red.rediris.es (130.206.245.37)  13.676 ms  47.461 ms  8.326 ms
 2  ge-0.espanix.mdrdsp02.es.bb.gin.ntt.net (193.149.1.36)  8.620 ms  8.306 ms  8.184 ms
 3  81.19.109.138 (81.19.109.138)  8.473 ms  11.382 ms  23.109 ms
 4  54.240.186.41 (54.240.186.41)  8.322 ms  7.995 ms  9.795 ms
{master}

Few hops. Good

Espere, por favor...
Please wait...
PING 54.240.186.41 (54.240.186.41): 56 data bytes
64 bytes from 54.240.186.41: icmp_seq=0 ttl=61 time=8.081 ms
64 bytes from 54.240.186.41: icmp_seq=1 ttl=61 time=8.006 ms
64 bytes from 54.240.186.41: icmp_seq=2 ttl=61 time=8.144 ms
64 bytes from 54.240.186.41: icmp_seq=3 ttl=61 time=8.129 ms
64 bytes from 54.240.186.41: icmp_seq=4 ttl=61 time=8.221 ms
64 bytes from 54.240.186.41: icmp_seq=5 ttl=61 time=12.117 ms
--- 54.240.186.41 ping statistics ---
6 packets transmitted, 6 packets received, 0% packet loss
round-trip min/avg/max/stddev = 8.006/8.783/12.117/1.492 ms
{master}

Espere, por favor...
Please wait...
PING 54.240.186.41 (54.240.186.41): 56 data bytes
64 bytes from 54.240.186.41: icmp_seq=0 ttl=63 time=1.749 ms
64 bytes from 54.240.186.41: icmp_seq=1 ttl=63 time=2.006 ms
64 bytes from 54.240.186.41: icmp_seq=2 ttl=63 time=6.273 ms
64 bytes from 54.240.186.41: icmp_seq=3 ttl=63 time=5.281 ms
64 bytes from 54.240.186.41: icmp_seq=4 ttl=63 time=4.484 ms
64 bytes from 54.240.186.41: icmp_seq=5 ttl=63 time=4.070 ms
--- 54.240.186.41 ping statistics ---
6 packets transmitted, 6 packets received, 0% packet loss
round-trip min/avg/max/stddev = 1.749/3.977/6.273/1.637 ms
{master}

And a short round trip. Good. Yep, looks like this guy is right where it should be. Nice!

Reading the CloudFront logs, I've noticed that the ID for this Edge Location is MAD50:

#Version: 1.0
#Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status cs(Referer) cs(User-Agent) cs-uri-query cs(Cookie) x-edge-result-type x-edge-request-id
2012-10-19 10:49:11 MAD50 2111 80.28.28.28 GET d310qyhvupgmlk.cloudfront.net / 200 - Mozilla/5.0%20(X11;%20Linux%20x86_64)%20AppleWebKit/536.11%20(KHTML,%20like%20Gecko)%20Ubuntu/12.04%20Chromium/20.0.1132.47%20Chrome/20.0.1132.47%20Safari/536.11 - - Miss hnyCtybbSg9HGO5NWsBwUMqvd-1JoXqPeo3CN9VxD4SYVKH99rEleQ==
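
Since x-edge-location is the third field of every log line, a quick way to collect all the codes present in a batch of downloaded log files could be something like this (a sketch; CloudFront delivers the logs gzip-compressed and the file name pattern is only a placeholder):

$ zcat *.gz | grep -v '^#' | awk '{print $3}' | sort -u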


For an updated Edge Location list, check this post: http://blog.domenech.org/2013/10/amazon-web-services-cloudfront-edgelocation-codes.html


Update February-24-2013: Collected Edge Locations so far: 46

AMS1  Amsterdam
AMS50 Amsterdam
ARN1  Stockholm
BOM2  Mumbai, India
CDG3  Paris, France
CDG50 Paris, France
CDG51 Paris, France
DFW3  Dallas, TX
DFW50 Dallas, TX
DUB2  Dublin, Ireland
EWR2  Newark
FRA2  Frankfurt, Germany
FRA6  Frankfurt, Germany
GRU1  São Paulo, Brazil
HKG1  Hong Kong
HKG50 Hong Kong
IAD12 Ashburn
IAD2  Ashburn
IAD53 Ashburn
ICN50 Seoul, South Korea
IND6  South Bend
JAX1  Jacksonville
JFK1  New York
JFK5  New York
JFK6  New York
LAX1  Los Angeles
LAX3  Los Angeles
LHR3  London, UK
LHR5  London, UK
MAA3  Chennai, India
MAD50 Madrid, Spain
MIA3  Miami, FL
MIA50 Miami, FL
MXP4  Milan, Italy
NRT52 Tokyo, Japan
NRT53 Tokyo, Japan
NRT54 Tokyo, Japan
SEA4  Seattle, WA
SEA50 Seattle, WA
SFO4  San Jose, CA
SFO5  San Jose, CA
SFO9  San Jose, CA
SIN2  Singapore
SIN3  Singapore
STL2  St Louis
SYD1  Sydney, Australia

Friday, October 12, 2012

Spreading the word about AWS in Barcelona and Madrid

My collaboration in Navigate the Cloud Spain 2012 in Barcelona and Madrid has been a pleasant experience. I am grateful to the Celingest team for the opportunity to represent them, and to Amazon Web Services EMEA for their invitation.

Here are some of the presentations by the brilliant Carlos Conde (Principal Solutions Architect, Amazon Web Services):

How Customers are Using AWS

Getting started with AWS

Deploying applications on AWS

Choosing the Right Data Storage

Big Data Analytics in the Cloud

And mine: Plataforma flexible para aplicaciones móviles de alta demanda (a flexible platform for high-demand mobile applications)

and some photos :)