The Internet @ 15: It's Still about Performance
Anshu Agarwal
Monday, March 29, 2010
There have been a lot of 15-year commemorations lately. 1995 was, in a sense, the start of the commercial Internet: it's the year in which Yahoo, Amazon, and eBay were all founded and Internet Explorer came to life (Netscape Navigator came out the year before).

Another company was founded right about this time: a company created to test and measure the performance of Websites. The name of that company is Keynote. For 15 years we've worked with all of the Internet giants, and with those that would like to become giants one day, helping them understand how fast Web pages and online transactions load and how often their online properties are unavailable to their customers. We monitor thousands of businesses from 240 locations around the world. On any given day we take close to 400 million Internet and mobile measurements.

We've seen a lot in the wild, wild west that is the Internet. And if you think all the bugs and kinks have been worked out, I'm here to tell you that performance isn't one of them. You experience it every day, and you can see how leading companies, in practically every industry, are doing on our indices (www.keynote.com/indices). In any given week, you'll see leading retailers range in response time from 3 seconds to 30 seconds, and banks from 5 seconds to 25 seconds.

Where do the performance problems on the Internet today come from? From a myriad of sources, to be sure, but Cisco's recent announcement of a router capable of 322 terabits per second will not do much to change the problems we see every day. Among Web operations teams, the widely used rule of thumb popularized by Google's Steve Souders is that, on average, 75 percent of bottlenecks happen on the front end, in the browser.
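To make that concrete, consider what a single page asks the browser to do: every image, script, and stylesheet is another request to queue, fetch, and process. Here is a rough sketch in Python that tallies those extra downloads for one page; the URL is a placeholder, and this is an illustration, not Keynote's tooling.

# Tally the tags on one page that trigger additional downloads in the
# browser; each one is work the front end must do before the page is done.
from html.parser import HTMLParser
from urllib.request import urlopen

class ResourceCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.counts = {"img": 0, "script": 0, "stylesheet": 0}

    def handle_starttag(self, tag, attrs):
        if tag in ("img", "script"):
            self.counts[tag] += 1
        elif tag == "link" and ("rel", "stylesheet") in attrs:
            self.counts["stylesheet"] += 1

html = urlopen("http://www.example.com/").read().decode("utf-8", "replace")
counter = ResourceCounter()
counter.feed(html)
print(counter.counts)  # e.g. {'img': 40, 'script': 12, 'stylesheet': 6}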

When you switched from IE 6 to Firefox and felt like things were moving faster, you were right. Every subsequent version of Firefox has gotten faster, and its rivals IE and Chrome have been battling over who is faster at executing everything you encounter on a Web page today, whether you are updating Facebook, reserving your weekend getaway at Zipcar, or collaborating on Google Wave.

No doubt, we've come a long way since dial-up, and many of us take wireless for granted. But network speed only gets you so far. First, not all applications are data-intensive, and most bandwidth is used by a small percentage of users. Second, according to research by Souders, depending on what you are doing, a faster connection accounts for less than 25 percent of the improvement. The real improvement has to come from how Web applications are built and presented in the browser, and that's where the problem resides; more on that later.
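A back-of-envelope model makes the point. Treat load time as one round trip of latency per request plus the raw transfer time; this is a toy simplification I'm assuming here, not Souders' study or Keynote's methodology, and real browsers parallelize requests, but the proportions hold:

def load_time(num_requests, rtt_ms, page_kb, bandwidth_mbps):
    # One round trip per request, serially, plus the time to move the bytes.
    latency_s = num_requests * rtt_ms / 1000.0
    transfer_s = page_kb * 8 / (bandwidth_mbps * 1000.0)
    return latency_s + transfer_s

base       = load_time(60, rtt_ms=80, page_kb=500, bandwidth_mbps=5)   # 5.60 s
fat_pipe   = load_time(60, rtt_ms=80, page_kb=500, bandwidth_mbps=50)  # 4.88 s
fewer_reqs = load_time(20, rtt_ms=80, page_kb=500, bandwidth_mbps=5)   # 2.40 s

In this toy example a tenfold bandwidth increase shaves about 13 percent off the load time, while cutting the request count by two-thirds more than halves it. That is the front end at work.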
And I think things may actually get a little worse in the not too distant future; the culprit is cloud computing. To be fair, it isn't an entirely new phenomenon; after all, colos and hosting facilities were practically birthed simultaneously with Web browsers. But rather than just renting rack space or boxes, companies are moving their entire IT stack to someone else's environment, everything from application servers to network engineering to storage. It's no mean feat to handle one instance of, say, a travel site at a wholly owned data center; now imagine a single company having to handle multiple businesses.

Outages happen; it's just the nature of being on the Web. Most internal Web teams are extraordinarily adroit at responding quickly. But when you outsource your entire data center to a third party, your ability to respond is severely limited. I recently read about the CEO of a small company lamenting the amount of time it takes to get a response from the SaaS vendor that took the place of her internal Exchange team. While companies are switching to cloud computing for cost and staffing reasons, the longer-term impact of the decision will be judged on customer satisfaction, and that means the response time of Web transactions and the availability of online portals.

So this brings me to why more isn't being done at the front end, at the browser level. The answer is fairly straightforward: many companies don't know what the end user is experiencing, and the engineering staff building the applications isn't part of the Web operations group that measures the performance. Let's take each of these separately.

So why don't companies know what their customers are experiencing? Because they are measuring performance from inside their own data center. An accurate real-world view requires monitoring a Web page or transaction from geographically dispersed locations around the globe, where customers reside. (Ask your own Web operations team how they are getting the outside-in picture.)
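A single outside-in check is simple to write; what a monitoring service adds is running it continuously, from many geographies, and trending the results. A minimal sketch in Python, with a placeholder URL; run from your own desk it gives exactly one vantage point:

import time
from urllib.request import urlopen

def measure(url, timeout=30):
    # Time one full fetch, roughly the way a visitor's browser starts a
    # page load; treat any error or timeout as unavailability.
    start = time.perf_counter()
    try:
        with urlopen(url, timeout=timeout) as resp:
            resp.read()
            ok = resp.status == 200
    except Exception:
        ok = False
    return time.perf_counter() - start, ok

elapsed, ok = measure("http://www.example.com/")
print("%.2f seconds, available=%s" % (elapsed, ok))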
Next, the apps-ops divide is a very real problem when it comes to Web performance. The nature of Web application development means that real-world performance testing only happens after an application is deployed into production. And because Web app changes happen so quickly, performance monitoring and testing lag behind and often drag on until the issue comes to a head. This has famously happened at companies such as Friendster and Twitter; in the case of the former, it cost them the lead and ultimately a lot more.

If you are getting ready to move your online business to the cloud or swap an on-premise application for a SaaS vendor, it's critical that you put a service level agreement (SLA) in place and monitor performance against it. And make sure that SLA has teeth. When I spoke with Ben Pring, an analyst with Gartner, last month, he said, "We're pointing out to people that without a penalty structure, the SLA isn't worth the paper it's written on."
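It also helps to do the downtime arithmetic before you negotiate, because each extra nine in the availability target shrinks the allowance dramatically. A quick calculation, assuming a 30-day month as the measurement window:

MONTH_MINUTES = 30 * 24 * 60  # 43,200 minutes in a 30-day month

for target in (0.99, 0.999, 0.9999):
    allowed = MONTH_MINUTES * (1 - target)
    print("%.2f%% uptime allows %.0f minutes of downtime per month"
          % (target * 100, allowed))
# 99.00% allows 432, 99.90% allows 43, 99.99% allows 4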

Specifically, here are three keys to getting SaaS or cloud computing right for your organization:

First, make sure that the SaaS provider defines what they use to monitor the performance and uptime of their applications. Ask them to show you exactly how and what they monitor; a sketch of checking those numbers against an SLA target follows these three keys.
Next, have the vendor walk you through their internal alerting, escalation, and communications process. Brad DeSent, president of Buffalo Grove, Illinois-based Apex Consulting Group, says, "Specifically ask the provider, tell me about an outage that you had and how you reacted to it and how you were communicating with your clients during that period."

Last but not least, be certain that they have a recovery plan for their systems when an outage occurs because, inevitably, one will. "Know what they are going to do for you," DeSent says. "This can be as simple as a one-page description of how the SaaS provider deals with the problem."
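Putting the pieces together: once you have outside-in availability samples, checking them against the SLA target, and the credit owed for a miss, takes only a few lines. The target and credit tiers below are illustrative assumptions, not terms from any real contract:

def service_credit(availability, target=0.999):
    # Credit owed, as a fraction of the monthly fee, when measured
    # availability misses the target (these tiers are hypothetical).
    if availability >= target:
        return 0.0
    if target - availability < 0.005:
        return 0.10  # minor miss: 10 percent credit
    return 0.25      # major miss: 25 percent credit

samples = [True] * 9970 + [False] * 30  # outside-in checks: True = available
availability = sum(samples) / len(samples)
print(availability, service_credit(availability))  # 0.997 -> 0.1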

So while the Web might be an oldster in Internet time, it's actually still very much an adolescent. Measuring and monitoring connected and wireless site performance from the outside in will remain an important issue for the foreseeable future, and implementing ironclad SLAs with your SaaS and cloud providers is the latest management challenge that must be addressed.

Anshu Agarwal, Vice President – Marketing, Keynote Systems