July 2004 Issue > Feature: On Demand Computing
Are You Ready for On Demand?
Karthik Sundaram
Wednesday, July 9, 2008
Heard this one about the salesman in the eyewear showroom? He says a pair of glasses will cost you $100. If you remain silent, he then adds that the lenses will cost you another $100. And if you still show no sign of unease, he caps it all by saying that the price he quoted was for each eye. Are you feeling the same sense of being crowned with the dunce cap? They first sold you super large boxes for your IT plan. And then some super intelligent software was billed to you. Now they are telling you all that is not efficient or cost-effective, and that they will run your entire IT for you. Aha!

But the problem does seem to be real. There are basic shortcomings that IT must address before it can credibly support today’s dynamic businesses. The IT infrastructure is:

Inefficient: There are many factors, but the one that gets the most attention is that the computing capacity far exceeds current and projected needs. Also, managing a diverse hardware and software environment is hard and expensive.

Unresponsive: Changing the infrastructure to support dynamic business needs is slow and difficult, limiting the agility of the business.

Unaligned: IT is not organized or managed around business services.

The indiscriminate addition of servers to manage new application delivery demands has wreaked havoc on load balancing in server “farms,” rendering the entire framework prone to manual error. Corey Ferengul of the META Group provides valuable analysis of these challenges from the perspective of the data center:

Data center budgets will grow 7–9% in 2003, driven by people costs, and account for 50–75% of the IT budget.

Utilization of UNIX and Windows servers is under 25% over 24 hours.

So how will on-demand computing help you? By one definition, on-demand computing is the ability to manage the corporate infrastructure as an internal computing service, similar to an electric utility. The on-demand computing environment must be driven by formal service level definitions and include these key capabilities:

Dynamic Provisioning. The goal is to allocate computing resources dynamically to meet current and projected needs. This is a variable consumption model; you use only what you need. An interesting side benefit of this approach is that it also provides strong resilience in the event of failure. Resources can be re-allocated to meet needs when one node or system fails.
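The variable-consumption idea above, including its resilience side benefit, can be sketched in a few lines. This is a toy illustration only; the node and workload names are hypothetical and do not reflect any vendor's product.

```python
# Minimal sketch of dynamic provisioning: a pool spreads workloads
# across healthy nodes and reallocates when a node fails.
# All names are illustrative.

class NodePool:
    def __init__(self, nodes):
        # map node name -> healthy flag
        self.nodes = {n: True for n in nodes}

    def fail(self, node):
        self.nodes[node] = False

    def allocate(self, workloads):
        """Spread workloads across healthy nodes round-robin."""
        healthy = [n for n, ok in self.nodes.items() if ok]
        if not healthy:
            raise RuntimeError("no capacity available")
        return {w: healthy[i % len(healthy)] for i, w in enumerate(workloads)}

pool = NodePool(["node-a", "node-b", "node-c"])
plan = pool.allocate(["billing", "crm", "web"])
pool.fail("node-b")                              # simulate a node failure
plan = pool.allocate(["billing", "crm", "web"])  # work shifts to survivors
```

The point of the sketch is the last line: after a failure, the same allocation call transparently redistributes work to the surviving nodes, which is the resilience benefit the paragraph describes.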

Self-Managing Systems. Automation and intelligence enable the flexibility to address changing conditions and ensure administrative scalability. Any changed resources should be discovered and managed automatically based on service-level requirements.

The long-term trend is clearly towards a world where there is dynamic integration between an organization’s internal IT infrastructure and the computing utilities of other organizations, using the Internet as the communications vehicle. This is seen both in the IT trends toward on-demand computing and outsourcing, and in the business trends toward supply chain and partner integration. Today, the on-demand computing phenomenon is focused on flexible management of each of these utilities independently. Over time, the boundaries between organizations will fade, and the location of the computing infrastructure used to support any business process will be dynamic.
The immediate benefits of an on-demand computing environment are clear, yet the proper path to implementing one effectively is less obvious. The Web has some good pointers, and here are five basic steps to build your on-demand computing environment.

Identify On-Demand Computing Opportunities. Review the current environment to identify pools of similar resources that could be dynamically managed, focusing on servers and storage. For example, consider two applications running on separate groups of Intel-based servers with similar configurations. Note that the operating system and all layered software can be different. If the utilization of this pool of servers is below your desired goals, and the business processes supported do not have simultaneous peaks, it should be considered a pool to target. The use of automated asset inventory and asset management solutions is strongly advised. The benefit of this step is formal analysis of the on-demand computing opportunities.

Collect Service Level Definitions. Review the service level requirements for the business processes. If no formal agreements exist, it will be necessary to review business needs and set objectives. Next, implement a service level management solution to gauge compliance with defined service levels. The benefit of this step is having clear service levels and an accurate benchmark of the state of delivery.
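The screening logic described above, low average utilization plus staggered peaks, can be sketched as a quick check. The utilization figures here are hypothetical; a real analysis would pull them from asset-management and monitoring tools.

```python
# Sketch of the pool-identification screen: a group of servers is a
# consolidation candidate if average utilization is below target AND
# the applications' daily peaks do not coincide. Data is invented.

def is_consolidation_candidate(pool_util, target=0.5):
    """pool_util maps application name -> 24 hourly utilization samples."""
    avg = sum(sum(h) / len(h) for h in pool_util.values()) / len(pool_util)
    # hour at which each application peaks
    peak_hours = {app: hours.index(max(hours)) for app, hours in pool_util.items()}
    no_shared_peaks = len(set(peak_hours.values())) == len(peak_hours)
    return avg < target and no_shared_peaks

# 24 hourly samples per application (fractions of capacity)
pool = {
    "app-a": [0.1] * 8 + [0.7] + [0.1] * 15,   # peaks at hour 8
    "app-b": [0.1] * 20 + [0.6] + [0.1] * 3,   # peaks at hour 20
}
print(is_consolidation_candidate(pool))  # True: low average, staggered peaks
```

Note how this mirrors the META Group data point quoted earlier: with utilization under 25% over 24 hours, most pools would pass the average-utilization test, and the deciding factor becomes whether peaks overlap.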

Identify Standard Configurations. Create templates for operating system releases, application servers (along with patch and service pack levels), databases, and other supporting software required by the targeted business processes. Asset management software helps by creating detailed records of the hardware and software configuration. The benefit is the ability to reproduce production systems quickly and reliably.
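A configuration template of the kind described above can be as simple as a record of expected versions, checked against each server's inventory. The field names and software versions below are illustrative only, not drawn from any specific asset-management product.

```python
# Sketch of a standard-configuration check: compare a server's
# recorded inventory against a template and report drift.
# Template fields and versions are invented for illustration.

TEMPLATE = {
    "os": "Windows 2000 Server SP4",
    "app_server": "WebSphere 5.0",
    "database": "DB2 8.1",
}

def drift(server_record, template=TEMPLATE):
    """Return {field: (actual, expected)} where the server deviates."""
    return {k: (server_record.get(k), v)
            for k, v in template.items()
            if server_record.get(k) != v}

server = {"os": "Windows 2000 Server SP3",
          "app_server": "WebSphere 5.0",
          "database": "DB2 8.1"}
print(drift(server))  # {'os': ('Windows 2000 Server SP3', 'Windows 2000 Server SP4')}
```

A server that matches the template produces an empty drift report, which is exactly the condition for reproducing production systems quickly and reliably.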

Automate Provisioning. Use software delivery technology to configure a newly discovered server, or to reconfigure an underutilized server to run another application. This process can be manually driven by Unicenter NSM for initial deployment. Administrative staff can be alerted when utilization on a server falls below a threshold; then an operator can easily re-provision the system using a single click. The same process can be applied to provisioning new storage devices. The benefits of this step include higher utilization and flexibility in deployment of blades, servers and storage supporting business processes.

Implement Automated Management. After the alert-driven approach has been successfully implemented, move to fully automate the process of server and storage provisioning based on formally defined service levels. At this point, it may be useful to update the billing and chargeback mechanisms to take full advantage of the dynamic on-demand computing model by leveraging asset management solutions. This step provides significant benefits in terms of productivity of administrative staff and the ability to meet service levels cost effectively.
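The progression described above, from operator alerts to fully automated action, can be sketched as a single threshold check run in two modes. The threshold, server names, and action labels are illustrative, not taken from Unicenter NSM or any other product.

```python
# Sketch of the alert-then-automate progression: the same utilization
# check first raises operator alerts, then (once trusted) triggers
# re-provisioning directly. All values are invented.

UTILIZATION_FLOOR = 0.25   # servers below 25% are re-provisioning candidates

def review(servers, automated=False):
    """Return (action, server) pairs for underutilized servers.

    In alert mode an operator confirms each move; in automated mode
    the re-provisioning would happen without intervention."""
    actions = []
    for name, util in servers.items():
        if util < UTILIZATION_FLOOR:
            verb = "reprovision" if automated else "alert-operator"
            actions.append((verb, name))
    return actions

servers = {"srv-1": 0.12, "srv-2": 0.60, "srv-3": 0.08}
print(review(servers))                  # alerts for srv-1 and srv-3
print(review(servers, automated=True))  # same servers, automatic action
```

The design point is that both steps share one policy: proving the policy out in alert mode is what justifies flipping the `automated` switch later.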

Having a clear picture of what already has been deployed is crucial before companies start to roll out new, intelligent devices. This might sound obvious, but experts say it's not always done. Gartner analyst John Phelps says many companies don't know where all their servers are located, who controls and owns them, and the main functions and applications running on them.

The only way to do it is to first understand the relationships between hardware and software resources delivering a particular business service. Inventory discovery and relationship mapping are the keys to starting. While companies focus on the fundamentals, vendors are working to create intelligent devices, management tools and services for utility consumption. The field is crowded, led by heavyweights HP, IBM and Sun. IBM uses “on-demand” to describe its initiative, HP has its Utility Data Center lineup, and Sun has its N1 data-center architecture. Gartner estimates that 15% of corporations will adopt a utility computing arrangement this year, and that the market for utility services in North America will increase from $8.6 billion this year to more than $25 billion in 2006. By 2006, 30% of companies will have some sort of utility computing arrangement. Are you ready?