I just attended the 2011 Cloud Computing Expo, which was an interesting experience. It was great to see people talk so passionately about the current trends and the future of cloud computing. I heard a lot of technologists and vendors talking about various ways to make it easier to deploy and run applications in the cloud and to get elasticity in the application layer. Almost everyone was talking about scaling out, and almost no one was talking about scaling up. That is striking at a time when you can deploy a High Memory Quadruple Extra Large instance on Amazon EC2 with 68.4 GB of memory.

I have been talking about scaling up AND scaling out in enterprise data centers for a while. It is a simple concept: the more data you can pull close to the application, into its local memory, the faster you can access that data. That is what BigMemory lets you do with the flip of a switch. Especially when an average commodity server ships with anywhere from 32 to 128 GB of RAM, that is a lot of capacity in the application layer. Terracotta founder Ari Zilka has been talking about how Disk is the New Tape and Memory is the New Disk.

That got me thinking about the same concept in the public cloud. So I did some math and came to a startling conclusion: in the public cloud, it is not just about speed but also about direct cost.
Here is the math:
For 1 TB of data:

Amazon default (m1.small)
  @ 1.7 GB per instance
  @ $0.085 per hour
  = 588 instances and $1,200 a day

Amazon High Memory Quadruple Extra Large (m2.4xlarge)
  @ 68.4 GB per instance
  @ $2.00 per hour
  = 14.6 instances and $701.70 a day
That is a cluster roughly 97.5% smaller, with about a 41.5% reduction in direct cost alone. And how would you even manage close to 600 instances?
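For anyone who wants to check the arithmetic, here is the same calculation as a small Python sketch. The instance sizes and on-demand prices are the 2011 figures quoted above; the function name and the assumption that 1 TB is roughly 1,000 GB are mine.

```python
# Back-of-the-envelope EC2 cost comparison for holding 1 TB of data in RAM.
# Instance sizes and on-demand prices are the 2011 figures quoted above.

HOURS_PER_DAY = 24
DATA_GB = 1000  # treating 1 TB as roughly 1,000 GB, as in the text

def instances_and_daily_cost(ram_gb_per_instance, price_per_hour):
    """Instances needed to hold DATA_GB in memory, and what they cost per day."""
    instances = DATA_GB / ram_gb_per_instance
    return instances, instances * price_per_hour * HOURS_PER_DAY

small_n, small_cost = instances_and_daily_cost(1.7, 0.085)  # m1.small
big_n, big_cost = instances_and_daily_cost(68.4, 2.00)      # m2.4xlarge

print(f"m1.small:   {small_n:.0f} instances, ${small_cost:,.2f}/day")
print(f"m2.4xlarge: {big_n:.1f} instances, ${big_cost:,.2f}/day")
print(f"Cluster size reduction: {1 - big_n / small_n:.1%}")
print(f"Direct cost reduction:  {1 - big_cost / small_cost:.1%}")
```

Running this prints roughly 588 instances at $1,200/day for m1.small versus 14.6 instances at about $701.75/day for m2.4xlarge, which is where the reduction figures above come from.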
Lastly, don't forget that memory is only getting bigger and cheaper. I spoke to an AWS EC2 representative who said that Amazon is already offering 160 GB instances.
It pays to go big.