Thursday, December 27, 2012

How to Watch Netflix in Spain?


This popular video-on-demand provider is not available in our country, and its website will refuse to register us for the service. There is currently a way to sign up for Netflix from here and enjoy its content using UnBlockUS. UnBlockUS offers a one-week free trial and afterwards costs $4.99/month. Netflix offers a one-month free trial and afterwards costs $7.99/month. We can also turn our Ubuntu PC into a Netflix-compatible device using the PPA for Netflix Desktop. This utility is free.

Steps:

- On the UnBlockUS home page, we sign up for their trial offer by entering our email address.

- Then we configure our DNS client to use their DNS servers. Our /etc/resolv.conf should end up like this:

nameserver 208.122.23.22
nameserver 208.122.23.23
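
Note that if the machine gets its network configuration via DHCP, /etc/resolv.conf will be rewritten on every lease renewal. A minimal sketch to make the change persistent, assuming Ubuntu's dhclient (the file path may vary between releases):

# Add to /etc/dhcp/dhclient.conf so the DHCP lease can't override our DNS choice
supersede domain-name-servers 208.122.23.22, 208.122.23.23;

Then renew the lease (sudo dhclient -r && sudo dhclient) or reboot.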

Or, if we use DHCP and the graphical network configuration console, we set the same two servers there.



- We log in and go to their help page for accessing Netflix. It is important to note that we will create our Netflix account from that help page, entering through http://join-us.netflix.com/ This page will not work if the previous steps were not completed correctly.

- We sign up for Netflix.

- We install the Netflix client following this guide. The client is a Wine adaptation of the Microsoft Windows client. The main steps are:

sudo apt-add-repository ppa:ehoover/compholio
sudo apt-get update
sudo apt-get install netflix-desktop

- From the Ubuntu Dash we type Netflix and run the application. On the first run, Wine will download some additional required components. Errors at this stage are common: ignore them and retry.

- We log in to Netflix and we're done.



Additional considerations:

We should think of UnBlockUS as a proxy service, like other similar services that exist for accessing content providers that only work with a US source IP. But it has the peculiarity that its configuration method consists of delegating all our DNS resolution to them. When our client sends a resolution request to their servers, they decide whether or not it is a service they want to proxy. If not, their DNS servers return the authentic IP unchanged and our computer accesses that content without using the UnBlockUS infrastructure. Only for certain services (Netflix, Vudu and Hulu Plus) do their DNS servers "fake" the resolution and return the IPs of the UnBlockUS proxies. This behavior can be verified with a simple dig:

First, the standard resolution using a public DNS server:

# dig netflix.com A @8.8.8.8

; <<>> DiG 9.8.1-P1 <<>> netflix.com A @8.8.8.8
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 3635
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;netflix.com. IN A

;; ANSWER SECTION:
netflix.com. 3 IN A 69.53.236.17

;; Query time: 54 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Thu Dec 27 17:19:36 2012
;; MSG SIZE  rcvd: 45

And then using the UnBlockUS DNS server:

# dig netflix.com A @208.122.23.22 

; <<>> DiG 9.8.1-P1 <<>> netflix.com A @208.122.23.22
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9078
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;netflix.com. IN A

;; ANSWER SECTION:
netflix.com. 180 IN A 173.208.170.14
netflix.com. 180 IN A 173.230.240.197
netflix.com. 180 IN A 204.12.200.14
netflix.com. 180 IN A 67.216.222.14
netflix.com. 180 IN A 147.255.171.14
netflix.com. 180 IN A 147.255.227.14

;; Query time: 80 msec
;; SERVER: 208.122.23.22#53(208.122.23.22)
;; WHEN: Thu Dec 27 17:20:49 2012
;; MSG SIZE  rcvd: 125

It is a clever solution, but it raises doubts about privacy. In a configuration like this, UnBlockUS has visibility into all our DNS queries and can decide which ones to alter and which not. If we use this service to stream from a device like a Roku, this detail hardly matters; but if we use our personal computer or tablet, things change. Reader's discretion advised.
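
A quick way to probe which names UnBlockUS intercepts is to compare both resolutions side by side. A minimal sketch (the domain list is illustrative, and IP sets can legitimately differ because of geo-DNS, so treat a mismatch only as a hint):

#!/bin/bash
# compare-dns.sh - compare a public resolver against UnBlockUS
for host in netflix.com vudu.com google.com; do
  public=$(dig +short $host A @8.8.8.8 | sort | head -1)
  unblockus=$(dig +short $host A @208.122.23.22 | sort | head -1)
  if [ "$public" = "$unblockus" ]; then status="direct"; else status="possibly proxied"; fi
  echo "$host: public=$public unblockus=$unblockus ($status)"
done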

Friday, November 30, 2012

Amazon Web Services Re:Invent Report

amazon-web-services-re-invent-blog-domenech-org


Following up on my previous article about my trip to re:Invent...

"T-Shirt" Project: Success!

Successfully delivered to Jeff Barr. Notice my face: I usually don't look so silly... I was nervous! :)

aws-reinvent-t-shirt-blog-domenech-org-jeff-barr
Jeff Barr, AWS

Carlos Conde was very difficult to locate at the event: he's an important man. But "the creator" deserves a t-shirt, and a special version at that.

aws-reinvent-t-shirt-blog-domenech-org-carlos-conde
Carlos Conde, AWS

It took some courage to give my present to Adrian Cockcroft. He's like a star! :)

aws-reinvent-t-shirt-blog-domenech-org-adrian-cockcroft
Adrian Cockcroft, Netflix


Bringing ideas and finding out about future plans: Success!

aws-re-invent-t-shirt-blog-domenech-org-anil-hinduja
Anil Hinduja, CloudFront

aws-re-invent-t-shirt-blog-domenech-org-tom-rizzo
Tom Rizzo, EC2 AWS

aws-reinvent-blog-domenech-org-training-certification
AWS Training Team
I had a good chat with the Training Team and there is VERY interesting news about Certification. I'm pretty sure we will have an official announcement in the following weeks. We'll wait for that.


News:


Zadara Storage: A surprising and interesting approach to providing high-end storage for EC2 Instances. They've managed to get space at AWS data centers to install their SAN disk arrays there, and they connect them to your EC2 Instances using Direct Connect. This connection method is normally used to connect your office or on-premises infrastructure to your VPC, but in this case they connect storage through iSCSI or NFS. The service is priced per hour, and you get full access to the admin tool to define your volumes and parameters such as RAID configuration. With a solution like that, there is no limit to the kind of application you can run on EC2, even the most I/O-demanding ones. We are talking here about non-virtualized storage: the old-fashioned SAN array. It is currently only available in the US-East Region, but with plans to expand to other regions.
Beyond the technical and commercial considerations, this product/service says a lot about how open AWS is when it comes to giving tools to their customers. It's hard for me to imagine other companies letting a competitor into their buildings. Well done!


New EC2 Instance Types: A "Cluster High Memory" instance with 240 GB RAM and two 120 GB SSD disks. A "High Storage" instance with 117 GB RAM and 24 hard drives (48 TB total). I can only say: Awesome! According to the EC2 Team, this internal storage will be managed like any other kind of Instance Storage and is therefore ephemeral. Using their words: "It will be amazing to see how you (the customers) create new ways to use this storage". I couldn't agree more.


AWS Marketplace is not just a place to sell AMIs. Thanks to Craig Carl's talk I got a wider perspective on AWS Marketplace. We should see it as a tool to sell anything you are able to create in the Amazon Web Services cloud. Not just an AMI with an application stack in it, but a dynamic configuration set: a configuration that adapts to the consumer's needs, gathering information automatically or by interacting with the user.
And a new concept of product just emerged: a Marketplace application could be something other than an application. I'll try to explain it with an example: you could create an application to access some information. The information is what the customer wants (not the application itself). As long as the application is running, the customer is accessing the information and is therefore billed (and you get your cut). When the contract expires, the application shuts down and the deal ceases. Commercial or infrastructure costs on your side (the provider) = zero. Awesome.
In my opinion, a new job role has just been created: "Marketplace application stack developer".

An EC2 Spot Instance can be automatically terminated at any given minute. We knew that they can be terminated without previous warning when an "On Demand" user needs the resources you're using, but we didn't know exactly when it could happen: now we know it can be at any minute within the hour.

"AMI" could be spelled as "A.M.I." or can be pronounced as /æˈmɪ/


And some more pictures:

amazon-web-services-re-invent-blog-domenech-Simone-Brunozzi
amazon-web-services-re-invent-blog-domenech-org-jeff-bezos
amazon-web-services-re-invent-blog-domenech-org-matt-wood

amazon-web-services-re-invent-blog-domenech-org-lego
amazon-web-services-re-invent-blog-domenech-org-think-big-start-small
aws-amazon-web-services-reinvent-blog-domenech-org-only-you-can-protect-amazon-security

amazon-web-services-re-invent-blog-domenech-org-netfilx-adrian-cockcroft
amazon-web-services-re-invent-blog-domenech-org-s3-simple-storage-service
aws-amazon-fcb-futbol-club-barcelona

aws-amazon-web-services-reinvent-blog-domenech-org-badges
aws-amazon-web-services-reinvent-blog-domenech-org-nasa-jpl
aws-amazon-web-services-reinvent-blog-domenech-org-nasa-jpl-curiosity

aws-amazon-web-services-reinvent-blog-domenech-org-obama-diagram
aws-amazon-web-services-reinvent-blog-domenech-org-jeff-barr
aws-amazon-web-services-reinvent-blog-domenech-org-max-spevack-linux-ami

aws-amazon-web-services-reinvent-blog-domenech-org-spot-instances-1
aws-amazon-web-services-reinvent-blog-domenech-org-spot-instances-2
aws-amazon-web-services-reinvent-blog-domenech-org-werner-vogels

aws-reinvent-blog-domenech-org-craig-carl
aws-amazon-web-services-reinvent-blog-domenech-org-tiffani-bova


Saturday, November 17, 2012

What would I like to bring back from AWS re:Invent 2012 in Las Vegas?

Amazon Web Services Re:Invent 2012 Las Vegas

My wish list:

- I would like a handshake with Jeff Barr, AWS Evangelist and lead author of the official AWS blog. I think he's doing an excellent job and I admire how he manages to find time to accomplish all his tasks.

- I would like a handshake with Carlos Conde, AWS Europe Solutions Architect. I had the opportunity to help him at the last Navigate the Cloud in Barcelona/Madrid, and there I discovered that he is the author of the awesome visual style used in all the official AWS architecture diagrams. He is an excellent communicator and, as it turns out, a brilliant graphic designer. I have no words to express my admiration.

- I would like a handshake with Adrian Cockcroft, Cloud Architect at Netflix. I read him (without being aware of it) back when I was a Solaris enthusiast, and I like his way of communicating: sharp, sober and with a little touch of humor.

- I would like to have some beers with my friends at Celingest.com. They are going to be there and I have a present for them (and for the people mentioned above). What is it? You will see ;)

- I would like to know if there is an AWS Architect Certification on the roadmap and, if so, details about it. There is now an official architecting training course, but I hope there is more coming on this topic.

- I would like to know the plans to implement native hot-link protection for CloudFront. This was an issue for S3 some time ago, but that is now solved with referrer control. Some of my customers would like the same for CloudFront.

- I would like to know if there is any plan to adopt BGP routing for Disaster Recovery solutions. AWS is making an effort to become the perfect choice when it comes to DR, and I think it is. The option of having a "sleeping infrastructure" waiting for a disaster to happen and booting up when it does is... priceless. And the cherry on the cake would be the option of routing customers' public IP traffic (only for customers with their own Autonomous System, of course).

- I would like to suggest to the EC2 Team the idea of not auto-terminating EC2 Instances living in an Auto Scaling Group until their "paying hour" has been spent. In an Auto Scaling Group, EC2 instances are automatically launched and terminated, and that's the way it should be. But if the application load decreases, it can happen that an instance brought to life 30 minutes ago is terminated (no longer needed) and you waste the remaining 30 minutes you've already paid for. It would be nice to have an option telling Auto Scaling not to terminate an instance until the whole hour has passed.

- And learn, meet interesting people and have fun :)


My tentative agenda:

Tuesday 11/27/2012
APN Partner Summit 

Wednesday 11/28/2012 

10:30 AM-11:20 AM Room 3205: RMG205 Decoding Your AWS Bill 
10:30 AM-11:20 AM Room 3004: STP204 Pinterest Pins AWS! Running Lean on AWS Once You've Made It 

01:00 PM-01:50 PM Room Venetian A: RMG204 Optimizing Costs with AWS 
01:00 PM-01:50 PM Room 3404: ENT205 Drinking our own Champagne: Amazon.com's Adoption of AWS 

02:05 PM-02:55 PM Room Venetian B: STG301 Using Amazon Elastic Block Store 
02:05 PM-02:55 PM Room 3205: CPN203 Saving with EC2 Spot Instances 

03:25 PM-04:15 PM Room 3004: BDT301 High Performance Computing in the Cloud 
03:25 PM-04:15 PM Room 3202: SPR208 Hitting Your Cloud's Usage Sweet Spot (Presented by Newvem) 

04:30 PM-05:20 PM Room 3404: STP101 What Can You Do With $100? 
04:30 PM-05:20 PM Room Venetian C: ARC203 Highly Available Architecture at Netflix 

Thursday 11/29/2012 

10:30 AM-11:20 AM Room Venetian C: ARC204 AWS Infrastructure Automation 
10:30 AM-11:20 AM Room Venetian D: STG205 Amazon S3: Reduce costs, save time, and better protect your data 

11:35 AM-12:25 PM Room Venetian A: ARC202 Architecting for High Availability & Multi-Availability Zones on AWS 
11:35 AM-12:25 PM Room Venetian B: CPN208 Failures at Scale and How to Ignore Them 

03:00 PM-03:50 PM Room 3305: CPN202 Run More for Less 
03:00 PM-03:50 PM Room 3101B: CPN206 Learning From the Masters 

04:05 PM-04:55 PM Room 3404: BDT204 Awesome Applications of Open Data 
04:05 PM-04:55 PM Room Venetian D: STG302 Archive in the Cloud with Amazon Glacier 

05:10 PM-06:00 PM Room Venetian B: CPN209 Your Linux Amazon Machine Image 
05:10 PM-06:00 PM Room 3205: CPN211 My Data Center Has Walls that Move 


To anyone around Las Vegas those days, drop me a line:

juan...@gmail.com

Thursday, November 15, 2012

Newvem First Contact and EC2 Reserved Instances

newvem logo

Thanks to a friend I had the opportunity to test the Newvem Beta tool connected to his AWS Customer account and I'd like to share some conclusions.
With the fast growth of the cloud market, some tools are emerging to help us manage those "invisible" and fast-growing architectures. Some of them are trying to help us answer the question: "How can I pay less each month?". I have to say in advance that there is no magic answer: what is good for me might not be good for you. But there are some common scenarios where a bit of help can be useful.

Security:

The first thing that caught my eye was the security recommendations. I wasn't expecting them here, but I have to admit they're convenient. With a constantly growing infrastructure and a group of admins taking care of it, there is no such thing as an unnecessary security recommendation.

newvem-aws-blog-domenech-security-recomendations

Tell me about the money:

With the Spend Efficiency chart, Newvem points out some items to pay attention to. The tool has no way to tell what is normal for us and what is not. For example, in this evaluation a bunch of instances were manually stopped after a special event, and this was detected as an abnormal situation and an alert was generated (monthly cost changed by -34.00%). So those warnings should be considered suggestions coming from someone who can't read your mind: the "better safe than sorry" approach.

newvem-aws-blog-domenech-spend-efficiency


Reserved Instances Recommendation:

newvem-aws-blog-domenech-reserved-instances-3


newvem-aws-blog-domenech-reserved-instances-list

Well, this is not rocket science. An instance that has been up 100% of the time during the last 2 months should be a Reserved Instance, and among Light, Medium and Heavy it should be Heavy. That's the recommendation.
This RI Calculator also gives us numbers showing how much money we would have to pay in advance (upfront) if we decided to purchase RIs for all those Instance-Types, in 1-year and 3-year scenarios.
What I really appreciate here is that this simple table is a good starting point for understanding the concept behind EC2 Reserved Instances. This is a confusing topic for beginners, no matter which area of the company they work in. Thanks to this table, 3 key concepts are explained using our current AWS infrastructure: RI Instance-Type, RI Availability Zone and RI Hourly Price.

RI Instance-Type:
A Reserved Instance purchase applies to an EC2 Instance-Type, not to an instance hostname or Instance ID, present or future. An RI gives you a better price for an Instance-Type regardless of its usage or which of your EC2 instances ends up using it.

RI Availability Zone:
A Reserved Instance applies to an Availability Zone. If you run two different instances in two different AZs within a Region, you will have to purchase two RIs, one for each AZ.

RI Hourly Price:
The yearly savings shown in the table above are the product of the better price/hour you get when buying an RI and the number of hours in a year. What it tells us is the potential benefit we would get if our machine were up and running 100% of the time: the benefit of the RI model compared to On-Demand. But this doesn't mean we have to (or will) keep our instance always up. We will do whatever we need, starting and stopping it, but with a better hourly price.
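
To make the arithmetic concrete, here is a minimal sketch with hypothetical prices (these numbers are assumptions for illustration, not actual AWS rates):

# Hypothetical prices, not actual AWS rates
awk 'BEGIN {
  od = 0.080;      # On-Demand $/hour (assumed)
  ri = 0.026;      # effective RI $/hour with upfront amortized (assumed)
  hours = 24 * 365;
  printf "Maximum yearly saving at 100%% uptime: $%.2f\n", (od - ri) * hours
}'

Run fewer hours and the saving shrinks proportionally, which is why the upfront payment only pays off above a certain utilization.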

And again, when it comes to recommendations, they are not flawless and we need a human in the process. For example, here:

newvem-aws-blog-domenech-reserved-instances-recommendation-light

For this m1.small in us-east-1d we have an RI Light recommendation, but the historic chart shows me that this Instance-Type is no longer used in that particular AZ and probably won't be in the future. Obviously, this is something I know and the RI Calculator doesn't. The human touch.


S3

Newvem also gives us information about our Simple Storage Service, but with my current scenario there is little to say. This website stores its static content in S3 and, with "only" 12 GBytes of total space used, no recommendations are needed.

newvem-aws-blog-domenech-s3-buckets


In conclusion,

I think this kind of tool is useful now, and it will be much more so in the near future. There is no limit to what the software could learn and predict, and all those third-party products will advance faster than the cloud provider (AWS in this case) when it comes to "high level" management. I'm not saying we will never see a button named "How to pay less" on our cloud console. Just saying that someone else will always be faster to put it to work.

There are areas not covered where help is needed to handle important cost sources, like Internet and CloudFront traffic. This is a burden for heavy-traffic sites, and currently AWS doesn't give you a report to understand where your traffic spending is going. You need third-party software to collect and process logs, so here there is... room for improvement.

The application covered here is in Beta stage and free. I'm looking forward to knowing its final price... That will be the key to conclude whether it is useful for my customers or not.

Last minute note! I've just noticed their General Availability announcement today. It seems this product is no longer beta. Good luck, boys!


Monday, November 12, 2012

Automatically Manage your AWS EC2 Instance Public IP Addresses with Route53

aws-route-53-rest-api-dns-ec2-call


Our Goal: Easy access to our instances by name, instead of locating them through the EC2 Console after an IP change caused by a stop/start action.

It is quite tedious to have to open the AWS Console to find an instance's Public IP after a stop/start action, or when we forget what it previously was. Here I show you a tool consisting of a script, executed inside the instance, that updates its DNS records in Route53 using the instance Tag "Name". This is an optional Tag we can use to store the "Host Name" when launching a new instance, or edit at any time afterwards. If this optional tag is not present, the script will use the Instance ID to update (or create) the corresponding DNS A record. This way the instance will always be accessible through its FQDN, and that name will be stable (it won't change over time).
Example: My-Instance-Host-Name.ec2.My-Domain.com

$ ssh -i juankeys.pem ec2-user@webserver1.ec2.donatecpu.com

Last login: Mon Nov 12 00:14:35 2012 from 77.224.98.33
       __|  __|_  )
       _|  (     /   Amazon Linux AMI
      ___|\___|___|
https://aws.amazon.com/amazon-linux-ami/2012.09-release-notes/ 
There are 4 total update(s) available
Run "sudo yum update" to apply all updates.

[ec2-user@webserver1 ~]$ 


Instance Tag Name
Configure your EC2 instance with a Tag Name using the Console. Usually the Instance Launch Wizard will ask you for it, but if it is empty you can update it any time you want. In this example the Tag Name will be "webserver1".

aws-route-53-ec2-tag-name-donatecpu-com


Preparations
Log into your instance and make sure the EC2 API tools are ready to run. Follow this previous post if you need help with that. You will need an IAM user with admin permissions on Route53.
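
If you prefer not to grant full admin rights, a narrower policy along these lines should be enough for the script in this post (a sketch, using the example Hosted Zone ID from this post; ec2:DescribeInstances is needed for the Tag lookup):

{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:GetHostedZone",
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": "arn:aws:route53:::hostedzone/Z1F5BRDVBM"
    },
    {
      "Effect": "Allow",
      "Action": "ec2:DescribeInstances",
      "Resource": "*"
    }
  ]
}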


Route53
Create a new zone in Route53 (if you don't have any created yet) and save the assigned Hosted Zone ID:

aws-route-53-ec2-tag-name-donatecpu-com


dnscurl.pl
dnscurl.pl is an AWS Perl tool that will help you use the Route53 API. Unlike other AWS APIs, Route53's API uses REST methods. This means it is accessible using plain HTTP calls (similar to accessing instance metadata), which looks good, but the authentication process is painful. dnscurl.pl simplifies that authentication process and generates the calls (GET and POST) to the Route 53 API.
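
For the curious, this is roughly what dnscurl.pl does under the hood with the AWS3-HTTPS scheme: it signs the Date header with your secret key. A minimal sketch using curl and openssl (assuming $ACCESSKEY, $SECRETKEY and $HOSTEDZONEID are set in your shell):

# Fetch the server date and sign it with HMAC-SHA1
date=$(curl -s -I https://route53.amazonaws.com/date | grep -i '^Date:' | cut -d' ' -f2- | tr -d '\r')
sig=$(echo -n "$date" | openssl dgst -binary -sha1 -hmac "$SECRETKEY" | base64)
curl -H "Date: $date" \
     -H "X-Amzn-Authorization: AWS3-HTTPS AWSAccessKeyId=$ACCESSKEY,Algorithm=HmacSHA1,Signature=$sig" \
     https://route53.amazonaws.com/2012-02-29/hostedzone/$HOSTEDZONEID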

Create a directory called /root/bin/ to store our tools, download dnscurl.pl, and make it executable:

# cd /root

# mkdir bin

# cd bin

# wget -q http://awsmedia.s3.amazonaws.com/catalog/attachments/dnscurl.pl

# chmod u+x dnscurl.pl

Note: You can also download the dnscurl.pl from here using a browser.

Create in the same folder a file called ".aws-secrets" (note the dot at the beginning of the file name) with the following content, and make it readable only by root:

%awsSecretAccessKeys = (
    '(your key name without parentheses)' => {
        id => '(your access key without parentheses)',
        key => '(your secret key without parentheses)', 
    },
);

# chmod go-rwx .aws-secrets 

Test dnscurl.pl with a simple read-only call. If everything is good, you should see something like this:

# ./dnscurl.pl --keyfile ./.aws-secrets --keyname juan -- -v -H "Content-Type: text/xml; charset=UTF-8" https://route53.amazonaws.com/2012-02-29/hostedzone/Z1F5BRDVBM
                                                                           0.0%
* About to connect() to route53.amazonaws.com port 443 (#0)
*   Trying 72.21.194.53...
* connected
* Connected to route53.amazonaws.com (72.21.194.53) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* SSL connection using SSL_RSA_WITH_RC4_128_MD5
* Server certificate:
* subject: CN=route53.amazonaws.com,O=Amazon.com Inc.,L=Seattle,ST=Washington,C=US
* start date: Nov 05 00:00:00 2010 GMT
* expire date: Nov 04 23:59:59 2013 GMT
* common name: route53.amazonaws.com
* issuer: CN=VeriSign Class 3 Secure Server CA - G3,OU=Terms of use at https://www.verisign.com/rpa (c)10,OU=VeriSign Trust Network,O="VeriSign, Inc.",C=US
> GET /2012-02-29/hostedzone/Z1F5BRDVBM HTTP/1.1
> User-Agent: curl/7.24.0 (x86_64-redhat-linux-gnu) libcurl/7.24.0 NSS/3.13.5.0 zlib/1.2.5 libidn/1.18 libssh2/1.2.2
> Host: route53.amazonaws.com
> Accept: */*
> Content-Type: text/xml; charset=UTF-8
> Date: Sun, 11 Nov 2012 23:21:26 GMT
> X-Amzn-Authorization: AWS3-HTTPS AWSAccessKeyId=AKIAJ5,Algorithm=HmacSHA1,Signature=/i+0d=

< HTTP/1.1 200 OK
< x-amzn-RequestId: 843632ca-2c56-11e2-94bf-3b3ef9a8f457
< Content-Type: text/xml
< Content-Length: 582
< Date: Sun, 11 Nov 2012 23:21:26 GMT

<?xml version="1.0"?>
* Connection #0 to host route53.amazonaws.com left intact
<GetHostedZoneResponse xmlns="https://route53.amazonaws.com/doc/2012-02-29/"><HostedZone><Id>/hostedzone/Z1F5BRDVBM</Id><Name>donatecpu.com.</Name><CallerReference>454848C9-18D1-2DDB-AC24-B629E</CallerReference><Config/><ResourceRecordSetCount>2</ResourceRecordSetCount></HostedZone><DelegationSet><NameServers><NameServer>ns-1146.awsdns-15.org</NameServer><NameServer>ns-1988.awsdns-56.co.uk</NameServer><NameServer>ns-228.awsdns-28.com</NameServer><NameServer>ns-783.awsdns-33.net</NameServer></NameServers></DelegationSet></GetHostedZoneResponse>* Closing connection #0

You should see a correctly created AWSAccessKeyId and Signature, no error messages, and at the bottom an XML output showing the DNS servers for your zone.


start-up-names.sh
Download my script start-up-names.sh and make it executable:
# wget -q http://www.domenech.org/files/start-up-names.sh 

# chmod u+x start-up-names.sh

Or copy and paste the following text into a file called start-up-names.sh

#!/bin/bash
# start-up-names.sh
# http://blog.domenech.org

logger start-up-name.sh Started

#More environment variables than we need but... we always do that
export AWS_CREDENTIAL_FILE=/opt/aws/apitools/mon/credential-file-path.template
export AWS_CLOUDWATCH_HOME=/opt/aws/apitools/mon
export AWS_IAM_HOME=/opt/aws/apitools/iam
export AWS_PATH=/opt/aws
export AWS_AUTO_SCALING_HOME=/opt/aws/apitools/as
export AWS_ELB_HOME=/opt/aws/apitools/elb
export AWS_RDS_HOME=/opt/aws/apitools/rds
export EC2_AMITOOL_HOME=/opt/aws/amitools/ec2
export EC2_HOME=/opt/aws/apitools/ec2
export JAVA_HOME=/usr/lib/jvm/jre
export PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/aws/bin:/root/bin

# *** Configure these values with your settings ***
#API Credentials
AWSSECRETS="/root/bin/.aws-secrets"
KEYNAME="juan"
#Hosted Zone ID obtained from Route53 Console once the zone is created
HOSTEDZONEID="Z1F5BRDVBM"
#Domain name configured in Route53 and used to store our server names
DOMAIN="ec2.donatecpu.com"
# *** Configuration ends here ***

#Let's get the Credentials that EC2 API needs from .aws-secrets dnscurl.pl file
ACCESSKEY=`cat $AWSSECRETS | grep id | cut -d\' -f2`
SECRETKEY=`cat $AWSSECRETS | grep key | cut -d\' -f2`

#InstanceID Obtained from MetaData 
INSTANCEID=`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`

#Public Instance IP obtained from MetaData
PUBLICIP=`wget -q -O - http://169.254.169.254/latest/meta-data/public-ipv4`

#IP Currently configured in the DNS server (if exists)
CURRENTDNSIP=`dig $INSTANCEID"."$DOMAIN A | grep -v ^\; | sort | tail -1 | awk '{print $5}'`

#Instance Name obtained from the Instance Custom Tag NAME
WGET="`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`"
INSTANCENAME=`ec2-describe-instances -O $ACCESSKEY -W $SECRETKEY $WGET --show-empty-fields | grep TAG | grep Name | awk '{ print $5 }'`

echo $INSTANCEID $PUBLICIP $CURRENTDNSIP $INSTANCENAME
logger $INSTANCEID $PUBLICIP $CURRENTDNSIP $INSTANCENAME

#Set the new Hostname using the Instance Tag OR the Instance ID
if [ -n "$INSTANCENAME" ]; then
hostname $INSTANCENAME
logger Hostname from InstanceName set to $INSTANCENAME
else
hostname $INSTANCEID
logger Hostname from InstanceID set to $INSTANCEID
fi

#dnscurl.pl Delete Current InstanceID Public IP A Record to allow Later Update
COMMAND="<?xml version=\"1.0\" encoding=\"UTF-8\"?><ChangeResourceRecordSetsRequest xmlns=\"https://route53.amazonaws.com/doc/2012-02-29/\"><ChangeBatch><Changes><Change><Action>"DELETE"</Action><ResourceRecordSet><Name>"$INSTANCEID"."$DOMAIN".</Name><Type>A</Type><TTL>600</TTL><ResourceRecords><ResourceRecord><Value>"$CURRENTDNSIP"</Value></ResourceRecord></ResourceRecords></ResourceRecordSet></Change></Changes></ChangeBatch></ChangeResourceRecordSetsRequest>"

/root/bin/dnscurl.pl --keyfile $AWSSECRETS --keyname $KEYNAME -- -v -H "Content-Type: text/xml; charset=UTF-8" -X POST https://route53.amazonaws.com/2012-02-29/hostedzone/$HOSTEDZONEID/rrset -d "$COMMAND"

#dnscurl.pl Create InstanceID Public IP A Record
COMMAND="<?xml version=\"1.0\" encoding=\"UTF-8\"?><ChangeResourceRecordSetsRequest xmlns=\"https://route53.amazonaws.com/doc/2012-02-29/\"><ChangeBatch><Changes><Change><Action>"CREATE"</Action><ResourceRecordSet><Name>"$INSTANCEID"."$DOMAIN".</Name><Type>A</Type><TTL>600</TTL><ResourceRecords><ResourceRecord><Value>"$PUBLICIP"</Value></ResourceRecord></ResourceRecords></ResourceRecordSet></Change></Changes></ChangeBatch></ChangeResourceRecordSetsRequest>"

/root/bin/dnscurl.pl --keyfile $AWSSECRETS --keyname $KEYNAME -- -v -H "Content-Type: text/xml; charset=UTF-8" -X POST https://route53.amazonaws.com/2012-02-29/hostedzone/$HOSTEDZONEID/rrset -d "$COMMAND"

logger Entry $INSTANCEID.$DOMAIN sent to Route53

#Create DNS A record for Instance Name (if exists)
if [ -n "$INSTANCENAME" ]; then

#dnscurl.pl Delete Current Instance Name Public IP A Record to allow Later Update
COMMAND="<?xml version=\"1.0\" encoding=\"UTF-8\"?><ChangeResourceRecordSetsRequest xmlns=\"https://route53.amazonaws.com/doc/2012-02-29/\"><ChangeBatch><Changes><Change><Action>"DELETE"</Action><ResourceRecordSet><Name>"$INSTANCENAME"."$DOMAIN".</Name><Type>A</Type><TTL>600</TTL><ResourceRecords><ResourceRecord><Value>"$CURRENTDNSIP"</Value></ResourceRecord></ResourceRecords></ResourceRecordSet></Change></Changes></ChangeBatch></ChangeResourceRecordSetsRequest>"

/root/bin/dnscurl.pl --keyfile $AWSSECRETS --keyname $KEYNAME -- -v -H "Content-Type: text/xml; charset=UTF-8" -X POST https://route53.amazonaws.com/2012-02-29/hostedzone/$HOSTEDZONEID/rrset -d "$COMMAND"

#dnscurl.pl Create Instance Name Public IP A Record
COMMAND="<?xml version=\"1.0\" encoding=\"UTF-8\"?><ChangeResourceRecordSetsRequest xmlns=\"https://route53.amazonaws.com/doc/2012-02-29/\"><ChangeBatch><Changes><Change><Action>"CREATE"</Action><ResourceRecordSet><Name>"$INSTANCENAME"."$DOMAIN".</Name><Type>A</Type><TTL>600</TTL><ResourceRecords><ResourceRecord><Value>"$PUBLICIP"</Value></ResourceRecord></ResourceRecords></ResourceRecordSet></Change></Changes></ChangeBatch></ChangeResourceRecordSetsRequest>"

/root/bin/dnscurl.pl --keyfile $AWSSECRETS --keyname $KEYNAME -- -v -H "Content-Type: text/xml; charset=UTF-8" -X POST https://route53.amazonaws.com/2012-02-29/hostedzone/$HOSTEDZONEID/rrset -d "$COMMAND"

logger Entry $INSTANCENAME.$DOMAIN sent to Route53
fi

logger start-up-names.sh Ended

Edit the script and adapt the variables from the "*** Configure these values with your settings ***" section with your parameters.

Test it:

# ./start-up-names.sh

(text output)

# tail /var/log/messages

Nov 11 23:30:57 ip-10-29-30-48 ec2-user: start-up-name.sh Started
Nov 11 23:30:59 ip-10-29-30-48 ec2-user: i-87eef4e1 54.242.191.68 ns-1146.awsdns-15.org. webserver1
Nov 11 23:30:59 ip-10-29-30-48 ec2-user: Hostname from InstanceName set to webserver1
Nov 11 23:31:00 ip-10-29-30-48 ec2-user: Entry i-87eef4e1.ec2.donatecpu.com sent to Route53
Nov 11 23:31:00 ip-10-29-30-48 ec2-user: Entry webserver1.ec2.donatecpu.com sent to Route53
Nov 11 23:31:00 ip-10-29-30-48 ec2-user: start-up-names.sh Ended

Reading /var/log/messages, you should see output like the above. First the script gathers the Instance ID and the Public IP by reading the instance metadata, then the current IP configured in the DNS ($CURRENTDNSIP, if any) using dig, and the Instance Tag Name using the ec2-describe-instances command. The first change to happen is the host name: if the Instance Tag Name is present it becomes the machine's host name, and if not, the Instance ID plays this role. One way or the other, we have a stable way to identify our servers; the Instance ID is unique and won't change over time. Then we call the Route53 API using dnscurl.pl four times. There is no API call to "overwrite" an existing DNS record, so we need to delete it first and create it afterwards. The Delete call has to include the exact values the current entry has (quite silly if you ask me...), which is why the script needs the currently configured Public IP. We delete using the old values and create using the new ones: one delete/create pair for the Instance ID (which always exists) and another for the Instance Tag Name (if present).

Two entries should have been automatically created in your Hosted Zone and be visible in the Route53 console for our instance:

aws-route-53-dns-record-set

Those entries are ready to use, and now you can forget the Instance ID or the volatile Public IP and just ping or ssh to the name, e.g. webserver1.ec2.donatecpu.com.
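
You can also verify the records from the shell without opening the console. For example:

$ dig +short webserver1.ec2.donatecpu.com A
54.242.191.68

The answer should match the Public IP the script logged in /var/log/messages.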


Auto Start
The main purpose is to keep our servers' IPs automatically updated in our DNS, so we need the script to be executed every time the machine starts. Once we've verified that it works fine, it is time to edit /etc/rc.local and add the start-up-names.sh full path to it:

#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local

/root/bin/start-up-names.sh

And that is it. I suggest you manually stop and start your instance and verify that its newly assigned Public IP is updated in the DNS. All AMIs you generate from this instance will include the configuration described here and will therefore dynamically maintain their IPs. Cool!

Note: When playing with changes to DNS records, their TTL value matters. In this exercise we've used a value of 600 seconds, so a change could take up to 10 minutes to become visible on your local network if your DNS server has cached the record.
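
To see how long a cached answer will live, check the TTL column of the response, which counts down on a caching resolver (the value below is a hypothetical snapshot):

$ dig webserver1.ec2.donatecpu.com A +noall +answer
webserver1.ec2.donatecpu.com. 472 IN A 54.242.191.68

Here 472 means the cached record will expire in 472 seconds, after which the resolver fetches the fresh IP from Route53.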