Making it work for your enterprise
Ajmal Noorani
Friday, January 30, 2004
MPLS (multi-protocol label switching) technology can improve network performance for select traffic. In a typical network without MPLS, packet paths are determined in real time as routers decide each packet’s appropriate next hop. This conventional IP routing takes time and leaves no opportunity to influence the paths packets take. MPLS instead predefines explicit paths for specific types of traffic, identified by labels placed in each packet.

Though it has been around for several years, MPLS has garnered much attention recently, with most major carriers rolling out network-based virtual private network services built on the technology. But the real promise of MPLS has gone unfulfilled: it has been delivered only in carriers’ backbones, where prioritization makes little difference in multigigabit networks, and has been limited to service within a single carrier’s network. In recent months, MPLS has been delivered over multiple-carrier networks, and it is becoming more useful as prioritization extends from a carrier’s backbone to the customer’s edge router.

According to Gartner, MPLS has accelerated the migration away from established managed data services (frame relay, ATM and private line), and is destined to become the fastest-growing WAN solution for enterprises by 2006. But does MPLS work? Is it enough? If you use an MPLS network and your service provider says it offers QoS (quality of service) with MPLS, can you rest easy? Will critical applications perform consistently and well? Can you or your provider enforce end-to-end performance commitments and assess compliance?

David Willis, VP of Infrastructure Strategies at META Group, says, “Quality-of-service (QoS) capabilities are essential to provide the full range of services needed by a demanding customer base, and MPLS provides a strong core platform for these services. Yet MPLS addresses only the carrier’s own network problem, and other technologies must be employed to extend QoS to the customer, where it is most needed.”

How MPLS works
Packet marking, in all its flavors, has survived as an enduring vehicle for ensuring speedy treatment across the WAN and across heterogeneous network devices. First, CoS/ToS (class- and type-of-service bits) were incorporated into IP. Then DiffServ (Differentiated Services) became the newer marking protocol for uniform quality of service, essentially the same idea as ToS bits, just more of them. More recently, MPLS has emerged as the newest standard, providing the ability to specify a network path for consistent performance.
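
To make the marking idea concrete, here is a minimal sketch, assuming a Linux host and Python, that stamps outgoing packets with a DiffServ code point by setting the IP ToS byte (the DSCP occupies its upper six bits). The choice of EF (46) is just an illustrative value.

```python
import socket

# DiffServ code points occupy the upper 6 bits of the IP ToS byte.
# EF (Expedited Forwarding) = 46 is commonly used for latency-sensitive traffic.
DSCP_EF = 46
tos_byte = DSCP_EF << 2  # shift into the upper 6 bits; the lower 2 bits are ECN

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Mark every packet this socket sends with the chosen code point.
# (IP_TOS is honored on Linux; other platforms may ignore or restrict it.)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)

sock.sendto(b"latency-sensitive payload", ("192.0.2.10", 5004))
```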

Routing occurs in layer three of the OSI’s standard networking model, and MPLS attempts to solve layer three’s issues: time-consuming routing and the lack of deterministic traffic engineering. MPLS is a standards-based technology for improving network performance. It enables network administrators to define explicit paths (called LSPs, or label switched paths) for specific traffic and to assign numbers (called labels) that identify those paths or portions of them. One or more nested labels are embedded in an MPLS header on each packet, inserted between the packet's layer-two and layer-three headers.
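
For reference, each entry in that MPLS header is a 32-bit word: a 20-bit label, three traffic-class bits, a bottom-of-stack flag and an eight-bit TTL. The sketch below packs and unpacks one such entry in Python; it is illustrative only, not a protocol implementation.

```python
import struct

def pack_label_entry(label, tc=0, bottom_of_stack=True, ttl=64):
    """Pack one 32-bit MPLS label stack entry:
    label (20 bits) | TC (3 bits) | S (1 bit) | TTL (8 bits)."""
    word = (label & 0xFFFFF) << 12 | (tc & 0x7) << 9 | (int(bottom_of_stack) << 8) | (ttl & 0xFF)
    return struct.pack("!I", word)

def unpack_label_entry(data):
    """Unpack a 32-bit entry back into its fields."""
    (word,) = struct.unpack("!I", data)
    return {
        "label": word >> 12,
        "tc": (word >> 9) & 0x7,
        "bottom_of_stack": bool((word >> 8) & 0x1),
        "ttl": word & 0xFF,
    }

entry = pack_label_entry(label=1001, tc=5, bottom_of_stack=True, ttl=64)
print(unpack_label_entry(entry))  # {'label': 1001, 'tc': 5, 'bottom_of_stack': True, 'ttl': 64}
```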

As labeled packets traverse the MPLS network, MPLS-enabled routers (called LSRs or label switched routers) needn’t calculate the next hop. Instead, they read a packet's label and look up the appropriate next hop in their forwarding tables. Then they swap and/or add MPLS labels in the packet’s header so that the next LSR will find the proper label for the correct path. Finally, the LSR forwards the packet.
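
The forwarding step amounts to a table lookup keyed on the incoming label rather than a route calculation. The sketch below models that with a hypothetical in-memory table and label values invented for illustration; real LSRs do this in hardware.

```python
# Hypothetical label forwarding table for one LSR:
# incoming label -> (outgoing label, outgoing interface)
FORWARDING_TABLE = {
    1001: (2005, "ge-0/0/1"),   # premium path
    1002: (2010, "ge-0/0/2"),   # standard path
}

def forward(packet):
    """Swap the top label and pick the outgoing interface.
    `packet` is a dict with a 'label' key standing in for a real frame."""
    out_label, interface = FORWARDING_TABLE[packet["label"]]
    packet["label"] = out_label   # label swap: no IP route lookup needed
    return interface              # where the LSR would transmit the packet

pkt = {"label": 1001, "payload": b"..."}
print(forward(pkt), pkt["label"])  # ge-0/0/1 2005
```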

Improved performance results from using optimum paths, diverting competing traffic from optimum paths, sparing routers extra tasks, and reducing the need for layer-three routing. (For more on MPLS, check the MPLS Resource Center at mplsrc.com.)

Issues with MPLS
MPLS can deliver real performance benefits, but two problems remain:
Overuse: An optimal path is not likely to remain prompt under heavy use. Defining paths for certain traffic is only as effective as your ability to distinguish one type of traffic from another. Perhaps you'd like SAP to get the optimum path, Oracle and web traffic to get a fairly speedy path, and email and FTP to get what's left. How can you ensure that each type of traffic gets the right label? MPLS-enabled devices that cannot accurately identify packets from a specific application cannot assign labels correctly, defeating the whole purpose (see the classification sketch after this list).
Bottleneck: The transition point from a non-MPLS edge to an MPLS core or WAN typically becomes a speed-conversion bottleneck as traffic from 100-megabit or gigabit Ethernet LANs funnels into slower access links where MPLS is not yet lending its performance advantages to critical traffic. The link from the local LAN to the MPLS core is typically the lowest-capacity portion of the network; it backs up with deep queues and introduces the most latency. Even if SAP has the most preferred path in your MPLS network, it might wait behind FTP packets and web traffic before entering.
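
To see why classification is the weak link, consider the sketch below, which assigns labels from nothing more than TCP port numbers. The ports and label values are illustrative assumptions; the point is that port-based identification misfiles any application that shares ports or uses dynamic ones.

```python
# Illustrative port-based classifier: port numbers alone often cannot
# distinguish, say, SAP from other traffic riding the same or non-standard
# ports, so labels end up on the wrong packets.
PORT_TO_LABEL = {
    3200: 1001,   # SAP GUI (assumed port) -> premium path label
    1521: 1002,   # Oracle                 -> standard path label
    80:   1002,   # web                    -> standard path label
    443:  1002,
    25:   1003,   # email                  -> best-effort label
    21:   1003,   # FTP control            -> best-effort label
}
DEFAULT_LABEL = 1003  # everything unrecognized falls into best effort

def classify(dst_port):
    return PORT_TO_LABEL.get(dst_port, DEFAULT_LABEL)

print(classify(3200))  # 1001
print(classify(8080))  # 1003 -- web traffic on a non-standard port is misclassified
```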

Compromise Approach
The compromise approach, promoted by some large industry players, uses a simple mechanism to separate traffic into broad categories via DiffServ classifications, which are then mapped into MPLS flows across the carrier network. As with any compromise, this approach has several weaknesses. The most significant is that customer edge routers lack a sufficiently rich application discovery mechanism, limiting visibility into what the customer cares about most: the successful and efficient delivery of business applications. Further, once traffic is classified it must also be managed, and router-based traffic-conditioning mechanisms are rudimentary and inexact. Routers rely on queues to delay and/or drop packets to produce the necessary traffic profile, a passive technique that leads to inefficient network utilization.
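
A rough picture of that compromise: a handful of DiffServ code points squeezed into the three traffic-class (EXP) bits of an MPLS label. The particular mapping below is an assumption for illustration; carriers define their own class schemes.

```python
# Illustrative DSCP -> MPLS EXP/TC mapping (3 bits, so at most 8 classes).
# Many carriers simply reuse the IP precedence (top 3 bits of the DSCP).
DSCP_TO_EXP = {
    46: 5,   # EF (voice/real-time)
    34: 4,   # AF41 (interactive business apps)
    26: 3,   # AF31
    18: 2,   # AF21
    0:  0,   # best effort
}

def exp_for_dscp(dscp):
    # Fall back to the IP-precedence convention for unlisted code points.
    return DSCP_TO_EXP.get(dscp, dscp >> 3)

print(exp_for_dscp(46))  # 5
print(exp_for_dscp(10))  # 1  (AF11 falls back to its precedence bits)
```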

The Solution
The ability to distinguish one type of traffic from another as it passes from a non-MPLS edge to an MPLS core is essential for assigning proper MPLS labels and for treating traffic according to your preferences. The solution is to use QoS appliances that can accurately and granularly detect and identify a wide variety of applications and other traffic types. These intelligent appliances can also establish a separate demarcation point for carriers that wish to give their customers better application visibility without having to manage the router.

As the QoS device detects and identifies each application or traffic type needing a particular MPLS label, you can also determine how that traffic should be escorted through the funnel point. With policy-based bandwidth allocation and traffic shaping, you can protect critical applications, pace those that are less urgent, and optimize performance of a limited-capacity access link. You specify bandwidth minimums and/or maximums on a per-application, per-session, or per-user basis, controlling performance to suit application characteristics, business requirements, and user needs.
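
Such a policy can be pictured as a table of per-application minimums and maximums plus a shaper at the access link. The sketch below enforces only the maximum with a simple token bucket; the class names and rates are illustrative assumptions, not vendor defaults.

```python
import time

# Illustrative per-application policy: guaranteed minimum and hard maximum, in kbit/s.
POLICY = {
    "sap":    {"min_kbps": 512, "max_kbps": 1024},
    "oracle": {"min_kbps": 256, "max_kbps": 768},
    "ftp":    {"min_kbps": 0,   "max_kbps": 128},
}

class TokenBucket:
    """Caps an application class at its maximum rate (a simple shaper sketch)."""
    def __init__(self, max_kbps):
        self.rate_bytes = max_kbps * 1000 / 8   # refill rate in bytes per second
        self.capacity = self.rate_bytes         # allow up to one second of burst
        self.tokens = self.capacity
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate_bytes)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True        # send now
        return False           # queue or drop until tokens accumulate

ftp_shaper = TokenBucket(POLICY["ftp"]["max_kbps"])
print(ftp_shaper.allow(1500))  # True until FTP exceeds its 128 kbit/s ceiling
```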

MPLS is a significant industry standard that will deliver performance benefits to users who treat it as one component of application traffic management, not as a silver bullet that will deliver end-to-end performance by itself. Combining QoS capabilities at the edge with MPLS-based quality of service through the core of the network is the best-practice approach to ensuring the performance of WAN applications.