The game is afoot in the race to develop the next best thing, and Amazon has released some rather interesting technologies that let developers build that next best thing cheaply. Jeff Bezos strikes again with services that leverage the idle parts of Amazon's computing infrastructure and provide them to the public at extremely reasonable prices. Here is a quick overview of the various technologies described below.
Mechanical Turk
"Developers use the Amazon Mechanical Turk web services API to submit tasks to the Amazon Mechanical Turk web site, approve completed tasks, and incorporate the answers into their software applications."
This web service is in reality a masked human-computer interface. Instead of humans asking computers to perform actions, computers are asking humans to perform actions, then using the results in whatever computations are being performed. The whole developing area of funnelling off work that is easy for humans but hard for computers (visual pattern recognition, reading obscured text, etc.) is interesting, and efficient, assuming you can assemble enough people to actually do the work.
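To make the computer-asks-human flow concrete, here is a rough sketch of posting a task, written against the modern boto3 SDK (the service originally exposed SOAP/REST APIs directly). The question-wrapping XML, the helper names, the example URL, and the reward values are all illustrative, not taken from the original post.

```python
def build_html_question(prompt: str) -> str:
    """Wrap a prompt in an HTMLQuestion-style XML envelope for MTurk.
    The schema URL and structure here are an illustrative sketch."""
    return (
        '<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/'
        'AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">'
        "<HTMLContent><![CDATA["
        "<html><body><form>"
        f"<p>{prompt}</p>"
        '<input type="text" name="answer"/>'
        "</form></body></html>"
        "]]></HTMLContent><FrameHeight>450</FrameHeight></HTMLQuestion>"
    )

def post_transcription_hit(image_url: str):
    """Submit a 'read the obscured text' task to real humans.
    Calling this needs AWS credentials and network access."""
    import boto3  # modern SDK; not what existed when this was written
    client = boto3.client("mturk")
    return client.create_hit(
        Title="Transcribe the text in an image",
        Description="Type the words you see in the linked image.",
        Keywords="transcription, image",
        Reward="0.05",                    # dollars, as a string
        MaxAssignments=3,                 # ask three workers
        LifetimeInSeconds=86400,          # accept work for one day
        AssignmentDurationInSeconds=600,  # ten minutes per worker
        Question=build_html_question(
            f"What does the image at {image_url} say?"
        ),
    )
```

Your application then polls for the completed assignments, approves them, and feeds the human-supplied answers back into its own computation.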
S3: Simple Storage Service
This is truly a simple web service that lets you store and retrieve any amount of data on Amazon's infrastructure, and you pay only for what you use.
"It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites."
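Under the hood this is a plain REST API: every request is authenticated by an HMAC-SHA1 signature over a canonical description of the request, sent in the Authorization header as `AWS <AccessKeyId>:<signature>`. The sketch below follows Amazon's signing scheme as I understand it; treat the exact canonicalization details as illustrative.

```python
import base64
import hashlib
import hmac

def sign_s3_request(secret_key: str, verb: str, resource: str,
                    date: str, content_md5: str = "",
                    content_type: str = "") -> str:
    """Compute the base64 HMAC-SHA1 signature for an S3 REST request."""
    string_to_sign = "\n".join(
        [verb, content_md5, content_type, date, resource]
    )
    digest = hmac.new(
        secret_key.encode("utf-8"),
        string_to_sign.encode("utf-8"),
        hashlib.sha1,
    ).digest()
    return base64.b64encode(digest).decode("ascii")

# Illustrative values only; a real request uses your secret key
# and the actual Date header you send.
sig = sign_s3_request(
    "EXAMPLE-SECRET-KEY", "GET", "/mybucket/photos/puppy.jpg",
    "Tue, 27 Mar 2007 19:36:42 +0000",
)
```

Because the whole protocol is just signed HTTP, any language with an HTTP client and HMAC support can talk to S3 directly.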
This is a very simple idea, and its significance is easy to grasp. Storage is cheap, so make it available anywhere through standardized methods. The first obvious application of this technology, online backup, has already been done by a few companies. The cost, you ask? $0.15 per GB-month of storage used, and $0.20 per GB of data transferred.
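Those two prices are the whole bill, so estimating a month of online backup is one line of arithmetic (the 20 GB figure below is just an example, not from the original post):

```python
def s3_monthly_cost(gb_stored: float, gb_transferred: float) -> float:
    """S3 bill from the quoted prices: $0.15/GB-month stored,
    $0.20/GB transferred."""
    return 0.15 * gb_stored + 0.20 * gb_transferred

# Backing up a 20 GB home directory and restoring it once:
cost = s3_monthly_cost(20, 20)  # 20*0.15 + 20*0.20 = $7.00
```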
EC2: Elastic Compute Cloud
EC2 is to computing what S3 is to storage. EC2 provides a web service that lets you increase your computing capacity on the fly.
"Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change."
The first obvious application I can see for this is softening the Digg / Slashdot / reddit effect on smaller sites. I hate clicking on a link from one of those sites only to find out that it maxed out its bandwidth limits two hours ago.
Virtualizing all computing resources like this is quite interesting - I'd be interested in reading a story by someone who runs a site entirely from Amazon's infrastructure. Requests made to the domain would use the EC2 computing cloud to retrieve the page requested from the S3 storage service. Traffic would be monitored, and in the case of a statistical spike, more computing resources could be brought online within minutes to serve your new readers.
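The monitor-and-scale loop described above boils down to very little logic. Here is a minimal sketch of the capacity decision; the function name, the headroom factor, and the per-instance capacity figure are all hypothetical choices, not anything Amazon specifies.

```python
import math

def instances_needed(requests_per_sec: float,
                     capacity_per_instance: float,
                     headroom: float = 1.2) -> int:
    """How many instances to run for the current traffic level,
    with some headroom so the next spike doesn't catch you flat."""
    return max(1, math.ceil(
        requests_per_sec * headroom / capacity_per_instance
    ))
```

A quiet baseline of 50 requests/sec against a (hypothetical) 300 requests/sec per instance stays on one machine; a front-page spike to 2,000 requests/sec would call up eight. The scaling monitor just compares that number to what's running and starts or stops instances accordingly.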
Again, the cost is very reasonable. $0.10 per instance-hour consumed, $0.20 per GB of data transferred, and any associated S3 costs.
SQS: Simple Queue Service
"Amazon SQS offers a reliable, highly scalable hosted queue for storing messages as they travel between computers."
SQS is a giant distributed queue built on Amazon's messaging infrastructure. This kind of hosted asynchronous messaging is a little less obvious in terms of realistic applications, but very geeky, and hence, very interesting. It reminds me of when Google released Google Sets. My first thought was, "What sane person would use this?", and my second thought was, "I wish I could think of a reason to use this".
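The pattern itself is easy to show with a local stand-in: one side enqueues work, the other drains it on its own schedule, and neither has to know about the other. SQS gives you the same decoupling across machines (with at-least-once delivery rather than the strict ordering a local queue provides). This sketch uses Python's standard-library queue purely to illustrate the shape of it.

```python
import queue
import threading

q = queue.Queue()   # stand-in for a hosted SQS queue
results = []

def producer():
    """One application drops messages into the queue."""
    for i in range(5):
        q.put(f"job-{i}")          # e.g. "resize image 4"

def consumer():
    """Another application, on its own schedule, works them off."""
    for _ in range(5):
        msg = q.get()              # blocks until a message arrives
        results.append(msg)        # do the work
        q.task_done()              # then acknowledge it

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```

The producer could be a web front end accepting uploads and the consumer a batch of EC2 instances doing the heavy lifting - which may be exactly the sane reason to use this that I was looking for.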