Getting The Network In “Shape” For Business
Ajmal Noorani
Tuesday, September 30, 2003
In the battle for bandwidth on congested wide-area network (WAN) links, aggressive applications like music downloads and large email attachments can flood capacity, jeopardizing business applications and the business performance that they are expected to provide. Abundant data, protocols that swell to use any available bandwidth, network bottlenecks, and new, popular, and bandwidth-hungry applications—they all seem to conspire against network and application performance. The selection and deployment of a suitable application traffic management system can help IT overcome these problems, align resource use with business priorities, and ensure a satisfactory end-user experience.

Identifying performance problems is a good first step, but it’s not enough. You need the ability to solve performance problems as well. Application traffic management systems that protect, pace, contain and provision bandwidth on a per-user, per-session, per-stream, per-location or per-application basis are a necessity in today’s enterprise networks.

Suppose an insurance firm posts a new file to be downloaded by customers and prospects. Although these file transfers are critically important, they are not time-sensitive. When too many users grab the file, interactive applications grind to a halt, including urgent applications that support claim processing and sales. If WAN application traffic is unmanaged, or monitored but not controlled, network and application performance will fall short of supporting the company's business goals.

What’s needed? After all, the file transfers are necessary, just not at the expense of more business-critical applications. The company needs to be able to divide its link into multiple, unequal portions. Downloads should have some bandwidth but should be capped. And urgent applications need to be protected. Bandwidth partitions do just that.

Partitions
A partition creates a virtual separate pipe for a traffic class. You specify the size of the reserved link, designate whether it can expand, and optionally cap its growth. Partitions function similarly to frame-relay PVCs, but with the added benefit of sharing unused excess bandwidth with other traffic.
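
As a rough illustration (a minimal sketch, not any vendor's actual API), a partition's reserved size, burstability, and cap might be modeled like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Partition:
    """Illustrative model of a bandwidth partition (hypothetical, not a product API)."""
    name: str
    reserved_kbps: int              # guaranteed minimum for this traffic class
    burstable: bool = True          # may the partition grow beyond its reservation?
    cap_kbps: Optional[int] = None  # optional ceiling on growth

    def allowed_rate(self, demand_kbps: int, spare_kbps: int) -> int:
        """Rate granted to the class given its demand and the link's unused bandwidth."""
        granted = min(demand_kbps, self.reserved_kbps)
        if self.burstable and demand_kbps > granted:
            extra = min(demand_kbps - granted, spare_kbps)
            if self.cap_kbps is not None:
                extra = min(extra, self.cap_kbps - granted)
            granted += max(extra, 0)
        return granted

# FTP gets 256 Kbps reserved and may burst to 512 Kbps when the link has spare capacity.
ftp = Partition("ftp", reserved_kbps=256, cap_kbps=512)
print(ftp.allowed_rate(demand_kbps=800, spare_kbps=1000))  # -> 512
```

When the class demands less than its reservation, the unused portion simply returns to the spare pool for other classes, which mirrors the sharing behavior described above.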

With an application traffic management system, the administrator at the insurance firm can create an FTP traffic class and assign it to its own partition. When someone initiates a file transfer, bandwidth is available no matter what other traffic is present. But the transfers are contained to a maximum amount of bandwidth. It’s unlikely that transfers taking three minutes instead of two would impact employee productivity. When there’s less FTP demand, other traffic can lay claim to the FTP partition’s unused bandwidth.

Similarly, another partition can reserve an appropriate amount of bandwidth for claim-processing applications. If unused, the bandwidth is automatically allocated to other high-priority applications. But if needed, no amount of FTP, Web surfing or large emails can infringe on the bandwidth needed to support claims processing.

Two variations on the partition theme are of particular interest: hierarchical partitions and dynamic partitions.

Hierarchical partitions are embedded in larger, parent partitions. They carve a large bandwidth allocation into managed subsets. For example, you could reserve 40 percent of your link capacity for applications running over Citrix and then reserve portions of that 40 percent for each application running over Citrix—perhaps half for PeopleSoft and a quarter each for Great Plains and SalesLogix.
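
The arithmetic works out as follows; the percentages come from the example above, while the T1 link size is an assumption added for illustration:

```python
# Hypothetical sketch: carving a 1,536 Kbps (T1) link into hierarchical partitions.
LINK_KBPS = 1536                    # assumed link capacity

citrix_parent = 0.40 * LINK_KBPS    # 40% reserved for all Citrix traffic -> 614.4 Kbps
children = {
    "PeopleSoft":   0.50 * citrix_parent,   # half of the Citrix partition
    "Great Plains": 0.25 * citrix_parent,   # a quarter
    "SalesLogix":   0.25 * citrix_parent,   # a quarter
}

for app, kbps in children.items():
    print(f"{app}: {kbps:.0f} Kbps")
# PeopleSoft: 307 Kbps, Great Plains: 154 Kbps, SalesLogix: 154 Kbps
```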

Dynamic partitions are per-user partitions that manage each user’s bandwidth allocation across one or more applications. In addition, dynamic partitions can be created for a group of users within an IP address range.

Dynamic partitions are useful in situations where you care more about equitable bandwidth allocation than about how the bandwidth is put to use.

As users initiate traffic for a particular class, dynamic partitions are created on the fly. When the maximum number of partitions is reached, an inactive slot is released for each new active user. Dynamic partitions greatly reduce administrative overhead and allow over-subscription.

For example, a university can give each dormitory student a minimum of 20 Kbps and a maximum of 60 Kbps to use in any way he/she wishes. Or a business can protect and/or cap bandwidth for distinct departments (accounting, human resources, marketing and so on).
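
A minimal sketch of how such per-user partitions might be tracked, assuming a simple model in which an inactive slot is reclaimed when a new user appears (the class and parameter names are hypothetical):

```python
from typing import Optional, Tuple

class DynamicPartitions:
    """Illustrative sketch of per-user partitions created on demand (hypothetical model)."""

    def __init__(self, min_kbps: int, max_kbps: int, max_slots: int):
        self.min_kbps = min_kbps
        self.max_kbps = max_kbps
        self.max_slots = max_slots
        self.slots = {}  # user IP -> currently active?

    def get_slot(self, user_ip: str) -> Optional[Tuple[int, int]]:
        """Return (min, max) Kbps for this user, creating a slot on first traffic."""
        if user_ip not in self.slots:
            if len(self.slots) >= self.max_slots:
                # Reclaim an inactive slot for the new active user, if one exists.
                idle = next((ip for ip, active in self.slots.items() if not active), None)
                if idle is None:
                    return None  # fully subscribed with active users
                del self.slots[idle]
            self.slots[user_ip] = True
        return (self.min_kbps, self.max_kbps)

# Each dormitory user gets 20-60 Kbps; the slot count can be over-subscribed deliberately.
dorm = DynamicPartitions(min_kbps=20, max_kbps=60, max_slots=500)
print(dorm.get_slot("10.1.2.3"))  # -> (20, 60)
```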

Rate Policies
VoIP (Voice over IP) can be a convenient and cost-saving option, but only if it delivers good service consistently. When delay-sensitive voice traffic traverses congested WAN links on a shared network, the result can be delay, jitter, packet loss, and poor reception. Each flow requires a guaranteed minimum rate, or the service is unusable. After all, a video or voice stream that randomly speeds up and slows down as packets arrive in clumps is unlikely to attain wide commercial acceptance.

Application traffic management systems administer rate policies that can deliver a minimum rate for each individual session of a traffic class, allow that session prioritized access to excess bandwidth, and set a limit on the total bandwidth it can use. A policy can keep greedy traffic in line or can protect latency-sensitive sessions. As with partitions, any unused bandwidth is automatically allocated to other applications.
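
The three knobs described above (a per-session minimum, a priority for claiming excess bandwidth, and a per-session limit) might be represented roughly as follows; the field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class RatePolicy:
    """Illustrative per-session rate policy (hypothetical model, not a product API)."""
    traffic_class: str
    guaranteed_kbps: int  # minimum rate delivered to each session (0 = none)
    priority: int         # precedence when competing for excess bandwidth (higher wins)
    limit_kbps: int       # ceiling on what any one session may consume

# A latency-sensitive class: every VoIP call gets at least 24 Kbps and high priority.
voip = RatePolicy("voip", guaranteed_kbps=24, priority=7, limit_kbps=64)
```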

All types of streaming media would benefit from rate policies with per-session minimums to secure effective performance. Many thin-client or server-based applications also benefit from such policies.

Print traffic, emails with large attachments, and file transfers are all examples of bandwidth-greedy traffic that also benefit from rate policies; here, though, the policy would provide no guaranteed minimum, impose a bandwidth limit, and assign a lower priority than critical traffic receives.

The insurance firm in the partition example might want to go one step further to ensure that file transfers do not create problems. Suppose someone who is equipped with a T3 initiates a file transfer. Even with the partition in place and doing its job, that user could dominate the entire FTP partition, leaving other potential FTP users without resources. Because a partition applies only to the aggregate total of a traffic class, individual users would still be operating in a free-for-all environment. A policy could fix this problem. For instance, a policy that caps each FTP session at 100 Kbps, or any appropriate amount, would keep downloads equitable.
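
Continuing the hypothetical RatePolicy sketch above, that fix might look like this:

```python
# No guaranteed minimum, low priority, and a 100 Kbps cap per FTP session,
# so no single T3-attached user can monopolize the FTP partition.
ftp_session = RatePolicy("ftp", guaranteed_kbps=0, priority=1, limit_kbps=100)
```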

TCP Rate Control
TCP Rate Control operates behind the scenes for all traffic with rate policies, optimizing a limited-capacity link. It overcomes TCP’s shortcomings, proactively preventing congestion on both inbound and outbound traffic. TCP Rate Control paces traffic, telling the end stations to slow down or speed up. It’s no use sending packets any faster if they will be accepted only at a particular rate once they arrive. Rather than discarding packets from a congested queue, TCP Rate Control paces packets to prevent congestion. It forces a smooth, even flow rate that maximizes throughput.
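
One general way to achieve this kind of end-station pacing (shown here as an illustration of the principle, not a description of any particular product's internals) is to shrink the TCP window advertised back to the sender, since a sender's rate is roughly its window divided by the round-trip time:

```python
def advertised_window_bytes(target_rate_kbps: float, rtt_ms: float,
                            mss: int = 1460) -> int:
    """
    Illustrative calculation: to pace a TCP sender to roughly target_rate,
    advertise a receive window of about rate * RTT (the bandwidth-delay product),
    rounded down to whole segments.
    """
    bytes_per_sec = target_rate_kbps * 1000 / 8
    window = bytes_per_sec * (rtt_ms / 1000.0)
    return max(mss, int(window // mss) * mss)

# Pace a session to ~512 Kbps over an 80 ms round trip.
print(advertised_window_bytes(512, 80))  # -> 4380 bytes (three full segments)
```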

Unlike TCP Rate Control, queuing-based bandwidth-management products wait for queues to form and congestion to occur and then reorder and discard packets. Queuing-based solutions do not proactively control the rate at which traffic enters the wide-area network at the other edge. More importantly, queuing-based solutions are not bi-directional and do not control the rate at which traffic travels into a LAN from a WAN, where there is no queue.

Other Considerations
Shaping coupled with compression is a win-win, but compression by itself is no different than a bandwidth upgrade: While compression squeezes down files, which allows more traffic to get through on a link, it does not manage how that link is utilized. Compression does not guarantee that the most deserving application will receive its appropriate share of bandwidth, and it does not prevent unsanctioned or less-urgent business traffic from grabbing an inequitable share of the link.

Therefore it is important to understand that compression is a feature, not a product. If leveraged in the context of a comprehensive application traffic management system—one armed with application-layer monitoring and bandwidth management—compression provides powerful benefits that enhance IT’s ability to extend existing resources, contain costs, and manage the WAN’s total cost of ownership.

It is also important that, before any form of QoS is applied, applications be accurately and granularly identified. Many vendors provide Layer 3-4 classification, but that visibility is limited because traffic can be distinguished only by port numbers and user/server addresses.

Hence the need for application-layer (Layer 7) classification. This level of visibility enables organizations to identify applications at a very granular level (for example, SAP Web front end from other Web traffic, PeopleSoft from Exchange within Citrix, Oracle print traffic from transaction traffic). With Layer 7 classification and TCP Rate Control, an application traffic management system is a very effective way to ensure reliable business application performance and achieve quick ROI.
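
A toy example of the difference: the two requests below are indistinguishable at Layer 3-4 (both are TCP port 80), but inspecting the HTTP payload separates the SAP Web front end from other Web traffic. The host-name test is an assumption made purely for illustration:

```python
# Hypothetical sketch: why Layer 7 matters. Layer 3-4 classification sees only
# "TCP port 80"; inspecting the application payload (here, the HTTP Host header)
# tells the applications apart.

def classify(dst_port: int, payload: bytes) -> str:
    if dst_port == 80:
        host = b""
        for line in payload.split(b"\r\n"):
            if line.lower().startswith(b"host:"):
                host = line.split(b":", 1)[1].strip()
        if b"sap" in host:  # assumed naming convention for the SAP web front end
            return "SAP-Web"
        return "General-Web"
    return "Other"

req = b"GET /irj/portal HTTP/1.1\r\nHost: sap.example.com\r\n\r\n"
print(classify(80, req))  # -> SAP-Web
```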
