Utility Computing is here: meet the Amazon Elastic Compute Cloud

Amazon has been pushing the limits of distributed computing, offering very useful, reasonably priced computing services like their awesome online storage service (S3) and their queuing service (SQS). Now they’ve released something MUCH more generic and powerful: a hosting infrastructure that lets you preconfigure your desired servers (by giving Amazon a disk image of a Linux machine). It’s called the Elastic Compute Cloud, or EC2 for short. When you want a server, you order it via the website and have it online within minutes. Pricing is a very reasonable 10 cents an hour ($72/month) plus bandwidth. Each instance provides the computing equivalent of a dedicated system with a 1.7 GHz Xeon CPU, 1.75 GB of RAM, 160 GB of local disk, and 250 Mb/s of network bandwidth.

This is game-changing. Let’s say you had a web application that suddenly got very popular. If you built on top of EC2, you could start out small (4 machines at $72/month each, maybe 2 web servers and 2 database servers) and add virtual servers as they become necessary. You get techcrunched? Log on to the admin interface and provision another 3 web servers and 3 database servers. Your extra capacity is up and running within minutes.
Or let’s say you have some expensive back-end compute process (like image or video processing). Simply provision one machine from Amazon to do it, and send the jobs over to it via the Amazon queuing service (SQS). When you need more capacity, add more machines: the Amazon queue makes sure that each computing task goes to only one machine. This highlights the sweet spot for EC2: applications that scale up simply by adding more computers that pull tasks from a queue. I’d be more willing to host a back-end process like this on EC2 as a first step.
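To make the pull-from-a-queue pattern concrete, here is a minimal sketch using Python’s in-process `queue.Queue` as a stand-in for SQS (the worker loop, the `process` function, and the job names are illustrative, not Amazon’s API):

```python
import queue
import threading

jobs = queue.Queue()      # stands in for the SQS job queue
results = queue.Queue()   # collected output

def process(job):
    # placeholder for the expensive work (image/video processing)
    return f"processed {job}"

def worker():
    # each EC2 instance would run a loop like this, pulling jobs;
    # the queue hands each job to exactly one worker
    while True:
        job = jobs.get()
        if job is None:   # sentinel: shut this worker down
            break
        results.put(process(job))

# to add capacity, just start more workers (i.e. provision more instances)
workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()

for i in range(10):
    jobs.put(f"video-{i}")

for _ in workers:
    jobs.put(None)        # one shutdown sentinel per worker
for w in workers:
    w.join()

print(results.qsize())    # all 10 jobs completed, each exactly once
```

The key property is that adding capacity means only starting more workers; nothing else in the system changes.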
I expect a market for disk images will grow around this service. For example, most people using this service to run a web service will need some kind of load balancer. An intrepid developer could build a load-balancing appliance from open source tools, and release it as a standard EC2 disk image. This logic applies not just to load balancers, but to almost any kind of computing appliance on the market.
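At its core, such a load-balancing appliance is just a policy for picking a backend per request. Here is a toy round-robin sketch (the backend addresses are made up; a real appliance would also handle health checks and actual connection forwarding):

```python
import itertools

# hypothetical pool of EC2 web-server instances behind the balancer
backends = ["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"]
pool = itertools.cycle(backends)

def pick_backend():
    """Return the next backend in round-robin order."""
    return next(pool)

# six incoming requests cycle through the three backends twice
assigned = [pick_backend() for _ in range(6)]
print(assigned)
```

Provisioning another web server would then be a matter of launching the instance and appending its address to the pool.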
The game-changing aspect of all this is similar to S3, but with broader implications. One of the biggest startup expenses is hardware. As a startup, you end up buying hardware well before you need it (often at capacities that you will never use!). Distributed infrastructure reduces startup risk by making it possible to pay for computing on an as-needed basis.
As always, feel free to comment below. And if you have ideas about how to do load-balancing in the context of EC2, or how to handle security (is EC2 firewalled?) PLEASE comment: I’m especially interested in these topics!
Updates and links:
Obligatory techcrunch story
Windy pundit writes
Suppose you have a site that runs a weekly contest. Each contest opens on Sunday and closes on the following Saturday. You get a burst of people checking it out when it starts, so you need 10 servers on Monday as people check it out from work. The rest of the week you only need 3 servers…Amazon EC2 lets you do exactly that. Each day, you can allocate however many servers you need. In fact, EC2 rents time by the hour, so you can idle with 1 server overnight, bring another server online as people get to work, and another server to handle the lunch break.
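At the 10-cents-an-hour price quoted above, the savings in that scenario are easy to work out. A back-of-the-envelope sketch (server counts from the quote; I assume full 24-hour days and ignore bandwidth charges):

```python
RATE = 0.10  # dollars per instance-hour

# elastic: 10 servers on Monday, 3 servers the other six days
elastic_hours = 10 * 24 + 3 * 24 * 6
elastic_cost = elastic_hours * RATE

# fixed: provision for the Monday peak all week long
fixed_hours = 10 * 24 * 7
fixed_cost = fixed_hours * RATE

print(elastic_cost, fixed_cost)  # roughly $67 vs $168 per week
```

Even before hour-by-hour tuning (idling down overnight, as the quote suggests), matching capacity to the daily pattern cuts the bill by more than half.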
John Lam writes that EC2 is disruptive technology. He says “Makes me want to do a startup company now :)”
Oliver Gutknecht describes the firewalling and security features.
Tabulas is sceptical, but puts forward a GREAT idea for Amazon’s next service (are you listening, Amazon?)
Now when will somebody offer the S3 equivalent for mySQL hosting? *That* would be a total killer app – imagine if I never had to worry about backing up mySQL or slaving additional machines or scaling out as usage grew… I could simply drop in my data and have it there. Pay as you go mySQL hosting … yummy!

3 thoughts on “Utility Computing is here: meet the Amazon Elastic Compute Cloud”

  1. Jon August 24, 2006 / 10:06 am

    OK so it’s firewalled. And presumably all data is being kept on a GFS-like redundant file system. So you’re getting the two most expensive aspects of a server setup (firewall and RAID) for FREE!
    This is way more than just a cheap server. This is cheap enterprise-class infrastructure.
    An option with more RAM would be good (for a main database server, you’d probably want 4GB or 8GB, and be willing to pay for it). I’m sure they’ll add a range of configurations as this matures.

  2. Tom Kerswill November 9, 2006 / 3:49 am

    I have been very impressed with EC2. An unexpected side effect is that it’s made me think, by default, in a “my server might crash or become unavailable” mindset; and with the servers being cheap, it’s easier to think about adding more servers and keeping many copies of all the data.
    Re: load balancing — I’m just going to use “balance” – ultra simple. But the disadvantage is paying for one instance that’s effectively just a load-balancer. A few people on the Amazon forums have suggested a really low-spec instance, that’s dirt-cheap (like, 1 cent an hour or something) that can be used for this purpose – which would be *very* cool.
    The other option is using a cheap machine outside of Amazon as the load balancer… but I don’t really know the implications of having a load balancer on a different network and whether this would really work…

Comments are closed.