from:http://blog.codesta.com/codesta_weblog/2008/02/amazon-ec2—wh.html
by:Oliver Chan
Amazon’s Elastic Compute Cloud (EC2) has the goal of providing flexible computing capacity in the form of a service. This service gives the user the ability to quickly scale to the demands of an application by booting or shutting down servers in a matter of minutes. Since all these machines run in a virtual environment, you only pay for the resources you use. More detailed information can be found on Amazon’s EC2 home page: http://aws.amazon.com/ec2/. Much of the documentation provided by Amazon is straightforward and easy to follow, so for a full walk-through see http://docs.amazonwebservices.com/AWSEC2/2007-08-29/GettingStartedGuide/. We will assume that the reader is familiar with the basics of EC2.
This article focuses on the problematic aspects of EC2: issues that can lead to serious problems, or technicalities that, if ignored, can cost you frustrating hours of troubleshooting and debugging. We’ve learned that the single most important thing you can do for your EC2 environment is to give it a dynamic DNS solution that overcomes the DHCP nature of virtual machines. Now what can you do for yourself, you ask? Take a look at the gotchas we encountered and save yourself from dealing with the same problems.
DHCP, Dynamic IPs and DynDNS.com
One side-effect of these virtual servers is that each time one boots up, DHCP assigns it a new IP address. In all of our experience we’ve never been assigned the same IP after deploying a new instance of a machine. This is highly undesirable, since our web application running on Amazon EC2 would become unreachable by its DNS name and old external IP if we ever had to re-deploy a server instance after a failure. It became evident that if a server were to go down, we’d be dealing with a significant amount of down-time. To avoid the time-consuming process of modifying server configurations and waiting for updates to our domain provider’s DNS to propagate (up to 96 hours), we implemented a dynamic DNS solution.
Our application had the additional complexity of requiring inter-server communication, but with a new internal IP address on every re-deployment, a re-deployed server’s new location would be unknown to the others. Once again we decided that these servers needed stable aliases so we could avoid reconfiguring each machine whenever we had to re-deploy a server.
In order to ensure a quick recovery after one of our servers went down, we needed to be able to update DNS entries and have the changes take effect immediately. Amazon suggests using dynamic DNS solutions such as DynDNS and ZoneEdit. We decided to go with DynDNS because it appeared to offer better support and documentation for the service itself, as well as better instructions on how to set up recommended update clients such as ddclient. The ddclient tool is responsible for monitoring a machine’s IP address and updating DNS entries when a change is detected. Here is what we did to implement the dynamic DNS service for EC2:
- Go to https://www.dyndns.com/services/ and sign up for a free ‘Dynamic DNS’ account or a paid ‘Custom DNS’ account if you want to stick with an existing domain.
- Create place holder records for entries that you expect to be updated dynamically by ddclient (you can start with a bogus value like 10.10.10.10 to make it obvious when it changes).
- Under your preferences you can also pre-activate your solution to speed things up if you plan on delegating the name service over to DynDNS soon.
- Go to https://www.dyndns.com/support/clients/unix.html to download ddclient and follow the instructions in the Knowledge Base article to get the client installed.
- Paste the following and update your login, password and ‘custom’ server list in your ddclient.conf file:
use=cmd, cmd='curl http://169.254.169.254/2007-08-29/meta-data/public-ipv4'
login=xxxxx
password=xxxxx
protocol=dyndns2
server=members.dyndns.org
wildcard=YES
custom=yes, your.server1.com, your.server2.com
- Note that the client requires the perl-IO-Socket-SSL module to be installed, so using yum, the following command should do the trick: "yum install perl-IO-Socket-SSL.noarch"
- You can also make sure the ddclient daemon runs when the machine boots by registering the service with chkconfig, or start it immediately with "service ddclient start".
Note that the URL in the configuration above is Amazon’s recommended way to obtain a server’s external IP address, and http://169.254.169.254/2007-08-29/meta-data/local-ipv4 will give you a machine’s internal IP. That’s it. After getting the client set up, you can log into your DynDNS account to see that your records are being updated, and you can now access your servers by those DNS names without worrying about unexpected changes to their IP addresses.
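The two metadata endpoints are easy to mistype, so a tiny helper can keep the version-stamped base URL in one place. This is a minimal sketch; metadata_url is our own name, not part of any Amazon tooling, and the endpoints themselves only respond from inside an instance:

```shell
#!/bin/sh
# Base URL of the EC2 instance metadata service; the date-stamped
# version matches the one used in the ddclient configuration above.
METADATA=http://169.254.169.254/2007-08-29/meta-data

# Compose the full query URL for a given metadata key
metadata_url() {
    echo "$METADATA/$1"
}

# From a shell on the instance, these return the machine's addresses:
#   curl -s "$(metadata_url public-ipv4)"   # external IP
#   curl -s "$(metadata_url local-ipv4)"    # internal IP
metadata_url public-ipv4
```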
Here are some things to consider when using ddclient. By default, ddclient maintains a cache in "/var/cache/ddclient/" which can prevent updates to DynDNS if a record is changed outside of the client; remember to delete the cache in this situation. If you want to keep both a machine’s external and internal DNS names up to date, you need to run multiple instances of the ddclient daemon. Note, however, that you must modify both the startup script (provided by DynDNS) to handle multiple instances and the .pid values so they are unique in each of the .conf files. This may be too much work, so a simple alternative is to have separate cron-jobs that call ddclient with the '--force' flag along with the location of each ddclient.conf file in the '--file' parameter.
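The cron alternative might look like the following crontab fragment (the two .conf paths and the 10-minute interval are our own choices, not anything DynDNS prescribes):

```shell
# Force a DNS update every 10 minutes, one ddclient run per config file.
# -daemon=0 makes each invocation exit after a single update.
*/10 * * * * root /usr/sbin/ddclient -daemon=0 --force --file /etc/ddclient-external.conf
*/10 * * * * root /usr/sbin/ddclient -daemon=0 --force --file /etc/ddclient-internal.conf
```

Because each run passes --force, the cached-IP issue mentioned above does not block these updates.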
Other ‘Gotchas’ and Issues We Ran Into
This section covers unexpected issues that we ran into the first time we worked with EC2. These issues center around three areas: choices in machine images, packaging your images, and disk usage on the virtual servers. It’s helpful to be aware of them because you may otherwise end up wasting time troubleshooting the same problems we did.
Choices in Machine Images
Remember that you don’t have to use Amazon’s base Fedora Core 4 images to build your machines. Before spending too much time configuring and customizing an AMI, find one that suits your needs from the start so you won’t have to redo any work later on down the road. Check out the list of public AMIs in Amazon’s resource center for something more suitable for your needs: http://developer.amazonwebservices.com/connect/kbcategory.jspa?categoryID=101. We started with the standard Fedora Core 4 build but eventually moved to a more up-to-date enterprise CentOS build provided by rightscale.com for better maintainability and security.
Packaging Your Image
When packaging up your own image using the 'ec2-bundle-vol' command, make sure you specify a clean folder using the '-d' flag; otherwise bundling the same image twice will result in an error due to the conflicting sets of temporary files. Also, use the '-p' flag to specify a prefix/name for your image; otherwise, when you upload the AMI and look at your list of images with the "ec2-describe-images -o self" command, it will be very hard to differentiate between all the images you’ve created. For example, we used something like "ec2-bundle-vol -k pk-XXXXXXXXXXXXXX.pem -u 123456789 -c cert-XXXXXXXXXXXXX.pem -d /mnt/cleantempfolder -p web-server-v1".
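Those flags are easy to forget under pressure, so it can help to wrap the invocation in a small script. The sketch below only composes the command line (bundle_cmd is a hypothetical helper of ours, and the key, cert, and user ID are the same placeholder values as above); on the instance you would execute its output after emptying the scratch folder:

```shell
#!/bin/sh
# Placeholder credentials -- substitute your own key pair and user ID.
PRIVATE_KEY=pk-XXXXXXXXXXXXXX.pem
CERT=cert-XXXXXXXXXXXXX.pem
USER_ID=123456789
BUNDLE_DIR=/mnt/cleantempfolder   # clean scratch folder for -d
PREFIX=web-server-v1              # image name for -p

# Compose the full ec2-bundle-vol command line
bundle_cmd() {
    echo "ec2-bundle-vol -k $PRIVATE_KEY -u $USER_ID -c $CERT -d $BUNDLE_DIR -p $PREFIX"
}

# On the instance: empty the scratch folder, then run the command, e.g.
#   rm -rf "$BUNDLE_DIR" && mkdir -p "$BUNDLE_DIR" && eval "$(bundle_cmd)"
bundle_cmd
```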
Machine Disk Usage
When working with your image, note that the main drive/partition (where the system files live) has very limited capacity (10 GB in our case), so when dealing with large files or directories use '/mnt', which has over 100 GB. We experienced all sorts of failures after accidentally maxing out the main partition. If you are running an application that generates log files or temporary/residual files on disk, make sure you don’t cause failures by filling up the main partition with large files and directories.
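A simple guard in cron can catch the partition before it fills. This is a sketch under our own assumptions (the 90% threshold is arbitrary, and the df field handling assumes POSIX 'df -P' output):

```shell
#!/bin/sh
# Warn when the root partition crosses a usage threshold; bulky logs
# and temporary files belong on /mnt, which is far larger.
THRESHOLD=90   # percent

# Percent-used of the root filesystem, e.g. 42 from "42%"
usage=$(df -P / | awk 'NR==2 { sub(/%/, "", $5); print $5 }')

if [ "$usage" -ge "$THRESHOLD" ]; then
    echo "WARNING: / is ${usage}% full -- consider moving large files to /mnt"
fi
```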
If a machine is terminated, all your data is lost except what was captured the last time you ran 'ec2-bundle-vol'. Be mindful of where you put your files, because many directories are excluded by default when bundling a machine image, so it’s easy to lose data. Check the sample output of the 'ec2-bundle-vol' command to see which directories don’t get backed up: http://docs.amazonwebservices.com/AWSEC2/2007-08-29/GettingStartedGuide/creating-an-image.html
Good Luck!
With a dynamic DNS solution, Amazon EC2 servers face significantly less down-time if something goes wrong and are much easier to maintain in the long run. Amazon has provided a set of very useful tools that make it simple to build, upload and deploy customized machine images. Overall, I have to say that Amazon’s EC2 platform has been relatively easy to work with and is worth considering as a hosting solution.