
Server-Based Caching to Accelerate Enterprise Applications

Ashutosh Parasnis
Managing Director, QLogic India
Tuesday, December 10, 2013
Increased server performance, higher virtual machine density, advances in network bandwidth, and more demanding business application workloads create a critical I/O performance imbalance between servers, networks, and storage subsystems. Storage I/O is the primary performance bottleneck for most virtualized and data-intensive applications. While processor and memory performance have grown in step with Moore's Law (getting faster and smaller), storage performance has lagged far behind, as shown in Figure 1. This performance gap is further widened by the rapid growth in data volumes that most organizations are experiencing today. For example, IDC predicts that the volume of data in the digital universe will increase by a factor of 44 over the next 10 years.
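To put the gap in perspective, a back-of-the-envelope comparison of access latencies shows why storage I/O dominates application response time. The figures below are rough, commonly cited orders of magnitude rather than measurements from the article, and the snippet is only an illustrative sketch:

    # Rough, order-of-magnitude access latencies (assumed figures, not from the article)
    latencies_ns = {
        "DRAM access":     100,         # ~100 nanoseconds
        "SSD read":        100_000,     # ~100 microseconds
        "HDD random read": 10_000_000,  # ~10 milliseconds (seek + rotational delay)
    }

    dram = latencies_ns["DRAM access"]
    for device, ns in latencies_ns.items():
        print(f"{device:16s}: {ns:>12,} ns  (~{ns // dram:,}x DRAM latency)")

Even with generous assumptions, a single random read from spinning disk costs on the order of a hundred thousand DRAM accesses; that is the gap caching is meant to hide.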

While business applications continue to put more and more usable data at the fingertips of those who need it most, organizations are also generating, capturing, and purchasing new data at exponential rates. All this data creates an opportunity for improved efficiency and more powerful business intelligence, while at the same time contributing to application performance issues.

The recent Storage Acceleration and Performance Technologies study by the Taneja Group* noted that 42 percent of respondents reported I/O performance problems causing unresponsive applications or outright application failures. Half of all respondents identified storage performance that undermines IT efficiency, including maximizing server virtualization initiatives and meeting service level agreements (SLAs), as a top driver for deploying storage acceleration technologies.

SSD Caching

The introduction of SSD cache has created an opportunity to accelerate enterprise applications because the combination of flash's speed, capacity, and price fills the gap between spinning disk and RAM. SSD-based caching addresses the I/O performance gap by reducing I/O latency and increasing IOPS. Any enterprise caching solution under consideration should be easy to deploy, remain transparent to the operating system and applications, and support caching on individual servers as well as multi-server clusters, including highly virtualized environments and clustered applications. It should also preserve existing SAN data protection and compliance policies and deliver benefits across the widest possible range of enterprise applications. Today, there are three main approaches to SSD caching in networks: array-based caching, caching appliances, and server-based caching. While SSD can be deployed as primary storage media, it is still relatively expensive and potentially cost prohibitive at scale; flash used as cache is more cost effective when it is targeted at the specific workloads that need acceleration.
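The basic mechanics of a server-side read cache can be sketched in a few lines. The example below is a simplified illustration only: the class and function names are invented for this sketch, and real server-based caching products operate in the block I/O path rather than in Python. Reads are served from the flash tier when the block is cached, misses fall through to the SAN and populate the cache, and writes go through to the SAN so existing data protection and compliance policies still apply.

    from collections import OrderedDict

    class WriteThroughReadCache:
        """Illustrative LRU read cache in front of slower backing storage.

        'read_from_san' and 'write_to_san' stand in for the real block I/O
        path; they are placeholders, not an actual storage API.
        """

        def __init__(self, capacity_blocks, read_from_san, write_to_san):
            self.capacity = capacity_blocks
            self.read_from_san = read_from_san
            self.write_to_san = write_to_san
            self.cache = OrderedDict()   # block number -> block data, in LRU order

        def read(self, block):
            if block in self.cache:              # cache hit: serve from the flash tier
                self.cache.move_to_end(block)
                return self.cache[block]
            data = self.read_from_san(block)     # cache miss: fetch from the SAN
            self._insert(block, data)
            return data

        def write(self, block, data):
            self.write_to_san(block, data)       # write-through keeps the SAN authoritative
            self._insert(block, data)            # keep the cached copy consistent

        def _insert(self, block, data):
            self.cache[block] = data
            self.cache.move_to_end(block)
            if len(self.cache) > self.capacity:  # evict the least recently used block
                self.cache.popitem(last=False)

Production server-based caching implements the same idea inside the I/O stack itself, typically in adapter firmware or a kernel-level driver, which is how it stays transparent to the operating system and applications.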

