Is Cloud Computing Truly Ready for the Enterprise?

Cloud computing involves the delivery of computing resources such as hardware and software; users of such resources save on upfront investments and instead rent these resources on an as-needed basis, says Umesh Bellur

01 July, 2010

Umesh Bellur, Associate Professor, IIT-B, Mumbai

The past couple of years have witnessed exponential growth in the amount of attention that cloud computing has received. It has now reached a point where it is being touted as the solution to everything from controlling capital expenditure in IT organisations to growth on demand in start-ups. However, enterprises that already have significant investments in computer systems are unsure what exactly cloud computing delivers for them and whether they should “move to the cloud”.

Let us first examine what cloud computing really is. The widely accepted definition is one where “resources are delivered as a utility”. These resources can be hardware, infrastructure stacks such as those containing production web servers and application servers, application development frameworks such as Google’s App Engine or Force.com, and entire software applications such as SalesForce.com and Gmail. The notion of a utility is derived from analogues such as power and water, where the consumer of the end product does not need to invest in the infrastructure required to deliver the product, and the business model is one of “pay as you use”, with the appropriate metering tools to realise it.

The success of the utility model lies in its ability to amortise the cost of the infrastructure over multiple users. For instance, in the case of a power company, the investment in power generation, storage and transmission is returned by renting the facility, by means of the power it generates, to multiple users over long periods of time. A similar concept can be applied to computing resources such as hardware and software: the users of such resources can avoid an upfront investment (which can be prohibitive) in high-quality compute resources by “renting” them on an as-needed basis from a cloud provider. A secondary advantage is the ability to grow your enterprise on demand, which is particularly attractive to small and medium businesses that are growing rapidly and are consequently unable to predict demand in advance.
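To make the amortisation argument concrete, here is a back-of-the-envelope sketch in Python. All the figures – purchase price, lifetime, rental rate – are hypothetical, chosen only to show how low utilisation inflates the effective cost of owned hardware:

```python
# Back-of-the-envelope comparison of an owned server vs. utility rental.
# Every figure here is hypothetical, chosen only to illustrate the trade-off.

SERVER_COST = 10_000.0                  # upfront purchase price (USD)
SERVER_LIFETIME_HOURS = 3 * 365 * 24    # ~3-year useful life
RENTAL_RATE = 0.50                      # assumed cloud price per server-hour (USD)

def owned_cost_per_hour(utilisation: float) -> float:
    """Effective hourly cost of an owned server at a given utilisation.

    Low utilisation inflates the effective cost, because the capital
    expense is amortised over fewer useful hours.
    """
    useful_hours = SERVER_LIFETIME_HOURS * utilisation
    return SERVER_COST / useful_hours

for u in (0.1, 0.3, 0.6, 0.9):
    print(f"utilisation {u:.0%}: owned ~${owned_cost_per_hour(u):.2f}/h "
          f"vs rented ${RENTAL_RATE:.2f}/h")
```

At these invented prices, owning only beats renting once the server is kept busy most of the time – precisely the amortisation effect the utility model exploits.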

The Enterprise Dilemma

But what about the enterprises that have sunk cost into purchasing large servers to run their current set of applications? Of what use could cloud computing be to them? Realise that these organisations purchased “large” servers to run legacy applications that could only scale vertically. However, a recent trend in almost all of these organisations is to move away from this scenario and re-architect their applications to be horizontally scalable using technologies such as J2EE. This serves multiple purposes: first, it gets them out of the vendor lock-in situation they are currently in (blades for horizontal scalability are available from many vendors and can be mixed and matched); second, it is an opportunity to modernise their code base, leading to reduced maintenance costs – after all, Java programmers are far less expensive than mainframe programmers!

So far, so good. However, enterprises have now begun to realise that the large servers they purchased earlier are overkill for running a single tier of their newly multi-tier applications, and that utilisation levels are abysmally low. The consequent question is: can I somehow partition these servers into nicely isolated “smaller servers” that can then host different applications simultaneously? And the answer? Yes! In fact, we have the technology for it today – virtualisation! The brainwave is to simply amortise the existing set of servers over many applications, instead of the previous approach of vertically stove-piping hardware on a per-application basis, thereby consolidating applications on a smaller set of servers and freeing up capacity to host new applications. Fantastic! In fact, this is nothing but what cloud computing promises. There is even a term for this – private clouds, where all the resources of the cloud are owned and used by a single organisation.
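As a hedged illustration of the consolidation payoff – the utilisation figures below are invented – a few lines of arithmetic show how several under-utilised servers collapse onto one virtualised host:

```python
# Invented figures illustrating consolidation: four legacy servers, each
# running one application at low utilisation, packed onto virtualised hosts.
import math

utilisations = [0.12, 0.18, 0.10, 0.20]   # fraction of one server each app uses
headroom = 0.75                            # keep each host below 75% busy

demand = sum(utilisations)                 # total work, in whole-server units
hosts_needed = math.ceil(demand / headroom)
print(f"{len(utilisations)} servers -> {hosts_needed} host(s); "
      f"{len(utilisations) - hosts_needed} freed for new applications")
# 4 servers -> 1 host(s); 3 freed for new applications
```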

The Technology Angle

So far it sounds like a bag of resource-reorganisation tricks! What, then, are the technology advances in cloud computing? At the heart of making all this work lies virtualisation, whereby we consolidate the set of resources being virtualised and allocate the right amount of resource on demand to the application. When demand goes down, the resource is reclaimed and added back to the available pool. Present virtualisation technology allows us to do this with entire computers, mostly along the dimensions of CPU and memory. So, I can take a computer and carve it up into multiple virtual machines, each of which is allowed to consume a certain amount of CPU and memory – of course, the sum across these machines cannot exceed the whole that we started with! Each virtual machine can host an application that needs just the amount of resources the machine has – in other words, a custom fit made just for that application! Virtualisation really does work. What we need to do is manage these resources in the face of dynamic changes in demand – this management layer over virtualisation technology is what makes up the technology of cloud computing. It includes solutions to questions such as those listed below (a small sketch of the underlying capacity bookkeeping follows the list):

  • When should I create a new VM and of what size?
  • Where should I create this VM on the set of physical servers that are available to me?
  • Can I resize a previously created VM because of increased load on the application executing there?
  • How can I take advantage of the VM migration technology that allows me to move around an executing VM along with its application to a different server?
  • How many resources have I used up over a period of time (aka metering)?
And so on.
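The following is a minimal sketch, assuming a host is described only by CPU and memory, of the capacity bookkeeping behind VM creation and metering; all names and sizes are invented:

```python
# A minimal sketch of the capacity bookkeeping behind these questions.
# A host is reduced to CPU and memory; all names and sizes are invented.
from dataclasses import dataclass, field
from typing import List

@dataclass
class VM:
    name: str
    cpu: int                    # CPU units reserved
    mem_gb: int                 # memory reserved (GB)
    metered_hours: float = 0.0  # accumulated usage, for charge-back

@dataclass
class Host:
    cpu: int
    mem_gb: int
    vms: List[VM] = field(default_factory=list)

    def can_fit(self, vm: VM) -> bool:
        # The sum of reservations may not exceed the physical whole.
        used_cpu = sum(v.cpu for v in self.vms)
        used_mem = sum(v.mem_gb for v in self.vms)
        return used_cpu + vm.cpu <= self.cpu and used_mem + vm.mem_gb <= self.mem_gb

    def create(self, vm: VM) -> bool:
        if self.can_fit(vm):
            self.vms.append(vm)
            return True
        return False  # reject, resize another VM, or burst to a public cloud

host = Host(cpu=16, mem_gb=64)
print(host.create(VM("web", cpu=4, mem_gb=8)))    # True
print(host.create(VM("app", cpu=8, mem_gb=16)))   # True
print(host.create(VM("db", cpu=8, mem_gb=32)))    # False: would exceed 16 CPUs
```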

While current virtualisation technologies (both open source and proprietary) have tackled some of the mundane and not-so-mundane issues here, many unanswered questions remain. A few of them are:

  • Do I really know the optimal place to create a new VM? Most current solutions use what is called a “first fit” approach – find the first physical server that can hold the VM and create it there! This often turns out to be non-optimal (see the toy comparison after this list).
  • How do I decide which VM to migrate and when? To which physical server should I migrate the VM? Being reactive and then simply picking the least loaded server currently may result in cascading migrations.
  • If I run out of resources in my data centre, can I automatically extend it using public cloud resources? This process is termed cloud bursting and is not very well understood currently.
  • What are the trust and security implications of virtualisation? How can I guarantee that a prior footprint does not interfere with my application on the same physical server?

Given these questions and more that are yet to be answered satisfactorily, I would surmise that we are still early in the maturity curve of cloud computing technology, and definitely not yet ready to entrust mission-critical enterprise applications to it. Quite apart from the technology question marks, there are concerns suggesting that we also need to be process-ready to receive this technology.

CIO Headaches

A CIO contemplating setting up a private cloud or even hosting his applications on a public cloud must necessarily ask himself the following questions:

  • How do I decide which applications to move to a cloud first? Or should I move them all together?
  • How do I come up with a plan to translate my physical environment hosting applications into a virtualised one? In other words: how many VMs? What does each VM host? What is the sizing of each VM? These and many other such questions need to be answered before taking the plunge.
  • Is it necessary to re-architect my applications to be “cloud ready”?
  • If I decide to use public clouds, how do I ensure that I do not get into a provider lock in situation? Are there emerging standards that will ensure that I can move seamlessly from one provider to another?
  • If many applications execute in a virtualised environment, are there really tools to charge usage back to the owners of these applications? What kind of economic model should I use when a resource such as a DB server is shared amongst many applications? (A toy charge-back model follows this list.)
  • Will my organisational structure have to change with the move to this paradigm?
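On the charge-back question in particular, one plausible – and entirely hypothetical – model is to apportion a shared resource's monthly cost in proportion to metered usage:

```python
# A hypothetical charge-back model: split the monthly cost of a shared
# DB server across applications in proportion to their metered usage.

def charge_back(monthly_cost: float, usage_by_app: dict) -> dict:
    """Return each application's share of the shared resource's cost."""
    total = sum(usage_by_app.values())
    return {app: monthly_cost * used / total for app, used in usage_by_app.items()}

# Query-hours metered per application over a month (invented numbers):
print(charge_back(9000.0, {"billing": 300, "crm": 150, "reports": 50}))
# -> {'billing': 5400.0, 'crm': 2700.0, 'reports': 900.0}
```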

Clearly, the concept is great! Who would not sign up to consolidate their servers and free up capacity for hosting newer applications, thereby gaining significant cost savings? But the devil, as they say, lies in the details, and while virtualisation technology companies have made tremendous strides over the last five years, I believe it will take a new breed of start-ups to complete the picture – start-ups whose sole focus is managing the virtualisation layer: in other words, cloud computing technology providers. For the present, the CIO must tread carefully, especially when it comes to moving mission-critical, revenue-generating applications to the cloud.
