The Ripple Effects of Flash
Date: Tuesday, October 20, 2015
Flash memory is rapidly overtaking spinning hard disk drives (HDDs) as the storage technology of choice in PCs and notebook computers, and it is moving into data centers to improve the performance of storage systems and servers. The move to flash, however, involves more than just an upgrade from one storage medium to another. It is fostering a once-in-a-generation transformation that will have profound repercussions for how IT systems are designed, deployed, and used, comparable to the shift from vacuum tubes to transistors in the 1950s. With flash, organizations will get more value and far more work out of smaller, more powerful data centers that also cost less to own and manage. Startups will scale more rapidly than ever, and established companies will roll out new and innovative services.
Change That Will Reverberate Through Digital History
The key to understanding this transition is that hard drives are mechanical devices, in some ways more akin to the fan inside your laptop than to a chip. SSDs, by contrast, contain no moving parts, just moving electrons, and flash storage will steadily improve in performance and density thanks in part to the trends embodied in Moore's Law.
These differences put HDDs at an inherent disadvantage. Mechanical devices break and consume inordinate amounts of energy, and rotational speed, a key metric of HDD performance, has been stuck at 15K rpm for over 15 years. Making drives spin any faster would require disproportionate additional energy, according to analysts.
The one frontier where HDDs have historically had an edge, price per GB, is also dwindling. Lenovo now sells a 4TB solid state drive for servers at a lower cost per GB than an equivalent amount of capacity built from 15K drives.
The case for SSDs could end there, but in reality it's just the beginning. Their real advantage is that the speed, reliability, and performance of solid state technology let you optimize the entire data center.
Server-level flash regularly outperforms traditional HDDs by 50x to 100x on core functions like random reads and can improve application performance by 20x to 50x. By transferring and storing data at a much faster rate, flash effectively lets organizations recover computing cycles 'lost' waiting on hard drives, increasing the productivity of servers and other equipment and turning the initial savings in storage into savings that cascade across the data center.
To wring the needed performance out of slow HDDs, for example, IT managers often buy many small-capacity, high-speed drives rather than a few large-capacity ones, reducing time lost in the 'spin cycle' of data retrieval. Multiplying the number of drives, however, increases energy consumption and points of failure, and it raises the need for battery backup, DRAM, or additional tiers of storage. Papering over the problem only multiplies complexity. Replacing 15K and 10K HDDs with SSDs streamlines both operations and costs.
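To make that trade-off concrete, here is a rough sketch of the IOPS arithmetic. The figures are illustrative assumptions on my part (roughly 200 random-read IOPS per 15K HDD, 50,000 per enterprise SSD, and typical active power draws), not vendor specifications:

```python
import math

# Illustrative, assumed figures -- not vendor specs.
HDD_15K_IOPS = 200        # random-read IOPS for one 15K rpm drive (assumed)
SSD_IOPS = 50_000         # random-read IOPS for one enterprise SSD (assumed)
HDD_WATTS = 10            # active power per 15K HDD, in watts (assumed)
SSD_WATTS = 7             # active power per SSD, in watts (assumed)

def drives_needed(target_iops: int, per_drive_iops: int) -> int:
    """How many drives must be striped together to reach a target IOPS figure."""
    return math.ceil(target_iops / per_drive_iops)

hdd_count = drives_needed(SSD_IOPS, HDD_15K_IOPS)
print(f"15K HDDs needed to match one SSD: {hdd_count}")               # 250
print(f"Power for that array: {hdd_count * HDD_WATTS} W vs {SSD_WATTS} W")
```

Even with these rough numbers, needing hundreds of spindles to match a single SSD shows why drive counts, power draw, and points of failure balloon together.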
Optimized data centers, meanwhile, can lead directly to bottom-line gains. For example, JD.Com, China's ecommerce giant, accelerated customer queries by 9x while cutting its server count by 3x by integrating flash technology into its operations. ZenDesk, a maker of helpdesk software, increased the number of simultaneous queries its database could handle from 1,000 per second to 3,000 per second with flash while lowering its hardware footprint, raising reliability, and increasing uptime.
The ripple effects of flash technology are even having an impact beyond the hardware realm. By eliminating servers, many customers are finding that they can also reduce their enterprise software licensing costs, which can far outstrip hardware expenditures. AT Internet, an analytics leader in Europe, reduced its software license spend by 85 percent when it reduced the number of database servers from 12 to two with flash.
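The licensing arithmetic behind such savings is easy to sketch if we assume, purely for illustration, a flat per-server license fee. The $40,000 figure below is an assumption, not AT Internet's actual cost, and many real licenses are per-socket or per-core, which changes the math:

```python
# Sketch of per-server licensing math, assuming a flat per-server fee.
LICENSE_PER_SERVER = 40_000   # assumed annual cost per database server, USD

before, after = 12, 2         # server counts from the AT Internet example
saving = (before - after) * LICENSE_PER_SERVER
pct = 100 * (before - after) / before
print(f"License spend drops by ${saving:,} ({pct:.0f}%)")  # $400,000 (83%)
```

Under this flat model, going from 12 servers to two yields about an 83 percent saving, in the same ballpark as the 85 percent AT Internet reports.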
Clearly, Less Is More
Idle and under-utilized equipment remains one of the uncomfortable facts of the IT industry. Stanford University's Jonathan Koomey and other analysts have estimated that approximately 30 percent of servers are 'comatose,' i.e., plugged in but not performing any useful functions. Even most active servers aren't being fully utilized. In fact, reports from McKinsey, Gartner, and others have put server utilization at around seven to eight percent.
If a manufacturing facility used its production lines only 8 percent of the time, it likely wouldn't survive long. Or imagine if your own employees were occupied only two working days a month. Data center assets can never be used at 100 percent capacity, but the potential for improvement from current levels is vast.
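The employee analogy checks out with back-of-envelope arithmetic, assuming roughly 21 working days in a month:

```python
# Back-of-envelope check on the utilization analogy.
UTILIZATION = 0.08            # ~8 percent server utilization, per the reports cited
WORKING_DAYS_PER_MONTH = 21   # assumed working days in a typical month

busy_days = UTILIZATION * WORKING_DAYS_PER_MONTH
print(f"{busy_days:.1f} busy days per month")  # 1.7 -- under two working days
```

At eight percent utilization, a server is doing useful work for well under two working days' worth of each month, which is where the analogy comes from.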
Don't get me wrong. Hard drives have been a great technology, and they will have a role in the data centers of tomorrow. But it will be a far smaller role over time, as flash penetrates data centers at an accelerating rate.