Sajai Krishnan
Saturday, August 1, 2009
Every two to three years the hi-tech industry throws up the next shiny thing. But every decade or so comes a game-changing transformational technology wave. Cloud computing is one such technological innovation. Ten years from now, cloud computing could be 20-30 percent of the architecture inside a data center. But more importantly, a generation is growing up now that is used to ubiquitous computing and communications everywhere. The applications serving this community are completely based in the cloud. This generation is going to expect nothing less than the ability to have access to their data and applications anywhere they are; in other words, this community is going to be served from the cloud, with applications specifically designed to make use of the cloud and deliver the benefits of the cloud.

Setting the clock back to the present, the early cloud applications are clearly visible in the iPhone store, in the CRM and email spaces, and with applications like Facebook and Twitter. But, while this is clearly exciting and is going to be unfolding broadly and with a tidal force in the coming years, today most compute and data are centered in the enterprise.

Cloud computing can be a significant efficiency driver in enterprises in the near term. In a recessionary environment especially, infrastructure savings are at a premium. Looking simplistically at the three key cloud components (applications, compute, and storage), one can see the emergent benefits. Cloud applications (software-as-a-service, SaaS) have already proved their benefits clearly, providing significant savings over custom deployment of software. The benefits of cloud compute (compute-as-a-service, CaaS) are becoming better understood with the increasing penetration of its foundational virtualization technologies from VMware and Citrix Xen. Cloud storage (storage-as-a-service, StaaS), on the other hand, is a relatively new phenomenon and has gained some early exposure via the services offered by Amazon Web Services and a few others. In cloud storage, the foundational technology equivalent to VMware's virtualization layer is relatively nascent, and firms like ParaScale and EMC are just starting to offer it.

Interestingly, while CaaS is somewhat better understood than StaaS, it is harder to realize its benefits beyond the relatively familiar virtualization layer, for two reasons. First, CaaS requires provisioning and workload management tools and APIs, which means legacy applications must go back from production into development and test (to be reworked against the new APIs) before redeployment. Second, leveraging CaaS requires careful, though not overly complex, reconfiguration of the associated networks and storage. Taken together, CaaS requires relatively more expertise to deploy than StaaS, and enterprises and service providers have only begun to recognize this.

StaaS, on the other hand, is the low-hanging fruit among cloud infrastructure services. Some cloud storage vendors even provide access via legacy protocols (NFS, CIFS, FTP), so existing applications can easily benefit from the low-cost commodity hardware economics of the cloud architecture, rather than paying for proprietary storage appliances as before. One of ParaScale's service provider partners even connected an IBM mainframe application to a cloud store via NFS, bringing StaaS economics to the most legacy of applications in a matter of one week. In many respects, enterprises can stop thinking of storage clouds as clouds per se. It is easier to think of a storage cloud as an additional tier of NAS (tier-2 or tier-3) that can offload expensive tier-1 storage. Instead of continuing to load the primary tier with files that are somewhat old and rarely accessed but still important enough to keep, enterprises can save a great deal by using storage clouds built from commodity servers to offload and supplement that expensive primary tier.
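
Because the cloud tier looks like ordinary NAS once mounted, the tiering policy itself can be a very small script. The sketch below (illustrative only; the directory names and the 90-day threshold are assumptions, not anything prescribed by a vendor) moves files that have not been accessed recently from an expensive tier-1 volume to a cloud store mounted via NFS or CIFS:

```python
import os
import shutil
import time

def offload_cold_files(tier1_dir, cloud_dir, max_age_days=90):
    """Move files not accessed in `max_age_days` from expensive
    tier-1 storage to a cloud tier mounted at `cloud_dir`.
    Returns the names of the files that were offloaded."""
    cutoff = time.time() - max_age_days * 86400
    moved = []
    for name in os.listdir(tier1_dir):
        src = os.path.join(tier1_dir, name)
        # Only plain files whose last-access time is older than the cutoff
        if os.path.isfile(src) and os.path.getatime(src) < cutoff:
            shutil.move(src, os.path.join(cloud_dir, name))
            moved.append(name)
    return moved
```

Run from cron against the mount point, a policy like this quietly drains cold data off the primary tier without any change to the applications that own the files.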

Certainly there is concern in enterprises about lock-in. What if we were to pick a particular vendor's cloud APIs and program to them? The APIs are fairly simple, but in general the same API calls will not work against another vendor's cloud. So while one is not, in theory, irrevocably locked in, switching is not possible without some reprogramming, and that is clearly a barrier. Among the new APIs, Amazon's EC2 compute and S3 storage APIs are gaining dominance as de facto standards. This was not clear even six months ago, but the community building tools around the Amazon APIs is rapidly establishing them as the preferred standard. In addition, VMware, with its dominance, and Microsoft may establish some additional standards.
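
To see how thin the S3 API surface actually is, consider its request signing. Per Amazon's REST documentation, each request carries an HMAC-SHA1 signature over a short "string to sign". The sketch below shows that scheme (it omits the optional canonicalized x-amz headers for brevity, and the key and resource in the example are the placeholder values from Amazon's own documentation, not real credentials):

```python
import base64
import hmac
from hashlib import sha1

def sign_s3_request(secret_key, verb, content_md5, content_type, date, resource):
    """Build the string-to-sign used by the S3 REST API and sign it
    with HMAC-SHA1, returning the base64 signature that goes into the
    Authorization header as 'AWS <access_key>:<signature>'."""
    string_to_sign = "\n".join([verb, content_md5, content_type, date, resource])
    digest = hmac.new(secret_key.encode("utf-8"),
                      string_to_sign.encode("utf-8"), sha1).digest()
    return base64.b64encode(digest).decode("ascii")

# Example with the placeholder values from the S3 documentation:
sig = sign_s3_request("uV3F3YluFJax1cknvbcGwgjvx4QpvB+leU8dUj2o",
                      "GET", "", "",
                      "Tue, 27 Mar 2007 19:36:42 +0000",
                      "/johnsmith/photos/puppy.jpg")
```

The point for the lock-in discussion is that there is very little ceremony here, which is exactly why moving to a different vendor's API is "some simple programming" rather than a rewrite, and also why it still cannot be avoided entirely.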

In the cloud storage space there is huge interest in REST API standards. But even here there is the Amazon REST API, and the 'rest' are just roadkill (pun intended). WebDAV, an open IETF standard that extends HTTP, is another route to gaining WAN access to storage, and it may yet prove strong enough to become a second standard. At this point, it is the author's opinion that customers looking to deploy storage clouds should stay with the Amazon S3 REST API or WebDAV for WAN access.
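
WebDAV's appeal is that it is just HTTP with a few extra verbs, so any HTTP stack can speak it. As a rough illustration (the host name is hypothetical), this composes the raw PROPFIND request, defined in RFC 4918, that a client would send to list the immediate members of a collection on a WebDAV-capable cloud store:

```python
def build_propfind_request(host, path, depth=1):
    """Compose a raw WebDAV PROPFIND request (RFC 4918) asking the
    server for all properties of a collection and, with Depth: 1,
    of its immediate members."""
    body = ('<?xml version="1.0" encoding="utf-8"?>\n'
            '<D:propfind xmlns:D="DAV:"><D:allprop/></D:propfind>')
    headers = [
        "PROPFIND %s HTTP/1.1" % path,
        "Host: %s" % host,
        "Depth: %d" % depth,          # 0 = collection only, 1 = plus members
        "Content-Type: application/xml",
        "Content-Length: %d" % len(body),
    ]
    return "\r\n".join(headers) + "\r\n\r\n" + body

request = build_propfind_request("cloud.example.com", "/archive/")
```

Because the protocol is an open standard rather than one vendor's API, a client written this way is not tied to any particular storage cloud behind the URL.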

As enterprises consider the various cloud options, Q4 in the U.S. is going to see a veritable tsunami of CaaS and StaaS announcements. There will be plenty of options, giving customers many choices in price, performance, and service. On the compute side, the gating factor will be the availability of professional services to get enterprises onboarded and integrated with the right APIs. On the storage side, storage clouds with NFS and CIFS access will allow easy onboarding of legacy data. There will, of course, be a lot of noise about standard WAN access methods to storage, and a lot of marketing around REST APIs. Unfortunately, among the WAN access approaches, only the Amazon S3 REST API and potentially WebDAV provide a way around lock-in.

The variety of choices around public (external) clouds and private (internal) clouds is very useful for a CIO. In many cases it is not a question of either/or but of "it depends". Depending on the nature of the application, a workload may be a better fit for the public cloud, the private cloud, or some hybrid combination, moving back and forth depending on factors like seasonal demand and the age of the data.

In all this, yet another customer concern centers on security and privacy: the worry that a multi-tenant public cloud could accidentally result in involuntary leakage of data. While today's technology should be able to provide the necessary level of isolation, human error can always be a factor, and ultimately, if there is a perception of a security issue, then that perception is the reality. To address security concerns in the shadow of SAS 70 audits or HIPAA regulations, private clouds or hosted private clouds (run in a dedicated setting by your favorite service provider) are certainly a practical option.

Overall, cloud services are a very recession-appropriate offering: a practical approach to delaying or reducing expensive IT capex. And as mentioned earlier, Q4 is going to give customers 5x more choices in SLAs, features, and prices. It is a great time to make modest bets with this technology and meet your business objectives while keeping cash in your pocket.

The author of the article is Sajai Krishnan, President and CEO of ParaScale.