Performance: The Key to Data Efficiency By @Permabit | @CloudExpo [#Cloud]

Data efficiency encompasses a variety of different technologies that enable the most effective use of space on a storage device

Data efficiency - the combination of technologies including data deduplication, compression, zero elimination and thin provisioning - transformed the backup storage appliance market in well under a decade. Why has it taken so long for the same changes to occur in the primary storage appliance market? The answer can be found by looking back at the early evolution of the backup appliance market, and understanding why EMC's Data Domain continues to hold a commanding lead in that market today.

Data Efficiency Technologies
The term "data efficiency" encompasses a variety of different technologies that enable the most effective use of space on a storage device by both reducing wasted space and eliminating redundant information. These technologies include thin provisioning, which is now commonplace in primary storage, as well as less extensively deployed features such as compression and deduplication.

Compression is the use of an algorithm to identify data redundancies within a small distance, for example, finding repeated words within a 64 KB window. Compression algorithms often take additional steps to increase the information density (entropy per byte) of a data set, such as representing rarely changing bits more compactly, like the high bits of ASCII text. These algorithms always operate "locally", within a data object like a file, or more frequently on only a small portion of that data object at a time. As such, compression is well suited to provide savings on textual content, databases (particularly NoSQL databases), and mail or other content servers. Compression algorithms typically achieve savings of 2x to 4x on such data types.
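To make the idea of local-window compression concrete, here is a minimal sketch in Python using the standard-library zlib module. The 64 KB chunk size mirrors the windowed example above; the sample data and the ratio it prints are illustrative only, and highly repetitive data will compress far better than the typical 2x to 4x.

import zlib

def compression_ratio(data: bytes, chunk_size: int = 64 * 1024) -> float:
    """Compress data one chunk at a time, as a local-window compressor would,
    and return the space-savings ratio (original size / compressed size)."""
    compressed_total = 0
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        compressed_total += len(zlib.compress(chunk, 6))
    return len(data) / compressed_total if compressed_total else 1.0

if __name__ == "__main__":
    # Text-like data with repeated words compresses well within a small window.
    sample = b"status=ok user=alice action=read path=/var/log/app\n" * 20000
    print(f"~{compression_ratio(sample):.1f}x savings on this repetitive sample")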

Deduplication, on the other hand, identifies redundancies across a much larger set of data, for example, finding repeated 4 KB blocks across an entire storage system. This requires both more memory and much more sophisticated data structures and algorithms, so deduplication is a relative newcomer to the efficiency game compared to compression. Because deduplication has a much greater scope, it has the opportunity to deliver much greater savings - as much as 25x on some data types. Deduplication is particularly effective on virtual machine images as used for server virtualization and VDI, as well as on development file shares. It also shows very high space savings in database environments as used for DevOps, where multiple similar copies may exist for development, test and deployment purposes.
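The core idea of fixed-block deduplication can be sketched in a few lines of Python: fingerprint each 4 KB block and count how many are unique. The block size, the SHA-256 fingerprints, and the synthetic "cloned image" data below are illustrative assumptions, not any vendor's implementation.

import hashlib
import os

BLOCK_SIZE = 4 * 1024  # 4 KB blocks, as in the example above

def dedup_ratio(data: bytes) -> float:
    """Split data into fixed 4 KB blocks and return the logical-to-physical ratio."""
    fingerprints = set()
    total_blocks = 0
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        fingerprints.add(hashlib.sha256(block).digest())  # block fingerprint
        total_blocks += 1
    return total_blocks / len(fingerprints) if fingerprints else 1.0

if __name__ == "__main__":
    # Ten "VM images" cloned from one 4 MiB golden image, each differing only in its last block.
    golden = os.urandom(4 * 1024 * 1024)
    clones = b"".join(golden[:-BLOCK_SIZE] + bytes([i]) * BLOCK_SIZE for i in range(10))
    print(f"~{dedup_ratio(clones):.0f}x deduplication across the clones")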

The Evolution of Data Efficiency
In less than ten years, data deduplication and compression shifted billions of dollars of customer investment from tape-based backup solutions to purpose-built disk-based backup appliances. The simple but incomplete reason for this is that these technologies made disk cheaper to use for backup. While this particular aspect enabled the switch to disk, it wasn't the driver for the change.

The reason customers switched from tape to disk was that backup to disk, and especially restore from disk, is much, much faster. Enterprise environments were facing increasing challenges in meeting their backup windows, recovery point objectives, and (especially) recovery time objectives with tape-based backup systems. Customers were already using disk-based backup in critical environments, and they were slowly expanding its use as the gradual decline in disk prices allowed.

Deduplication enabled a media transition for backup by dramatically changing the price structure of disk-based versus tape-based backup. Disk-based backup is still more expensive, but with deduplication it became affordable as well as faster and better.

It's also worth noting that Data Domain, the market leader early on, still commands a majority share of the market. This can be partially explained by history, reputation and the EMC sales machine, but other early market entrants including Quantum, Sepaton and IBM have struggled to gain share, so this doesn't fully explain Data Domain's prolonged dominance.

The rest of the explanation is that deduplication technology is extremely difficult to build well, and Data Domain's product is a solid solution for disk-based backup. In particular, it is extremely fast for sequential write workloads like backup, and thus doesn't compromise the performance of streaming to disk. Remember, customers aren't buying these systems for "cheap disk-based backup;" they're buying them for "affordable, fast backup and restore." Performance is the most important feature. Many of the competitors still deliver the former - cost savings - without delivering the golden egg: performance.

Lessons for Primary Data Efficiency
What does the history of deduplication in the backup storage market teach us about the future of data efficiency in the primary storage market? First, we should note that data efficiency is catalyzing the same media transition in primary storage as it did in backup, on the same timeframe - this time from disk to flash, instead of tape to disk.

As was the case in backup, cheaper products aren't the major driver for customers in primary storage. Primary storage solutions still need to perform as well as (or better than) systems without data efficiency, under the same workloads. Storage consumers want more performance, not less, and technologies like deduplication enable them to get that performance from flash at a price they can afford. A flash-based system with deduplication doesn't have to be cheaper than the disk-based system it replaces, but it does have to be better overall!

This also explains the slow adoption of efficiency technologies by primary storage vendors. Building compression and deduplication for fully random access storage is an extremely difficult and complex thing to do right. Doing this while maintaining performance - a strict requirement, as we learn from the history of backup - requires years of engineering effort. Most of the solutions currently shipping with data efficiency are relatively disappointing and many other vendors have simply failed at their efforts, leaving only a handful of successful products on the market today.

It's not that vendors don't want to deliver data efficiency on their primary storage; it's that they have underestimated the difficulty of the task and simply haven't been able to build it yet.

Hits and Misses (and Mostly Misses)
If we take a look at primary storage systems shipping with some form of data efficiency today, we see that the offerings are largely lackluster. The reason that offerings with efficiency features haven't taken the market by storm is that they deliver the same thing as the less successful disk backup products - cheaper storage, not better storage. Almost universally, they deliver space savings at a steep cost in performance, a tradeoff no customer wants to make. If customers simply wanted to spend less, they would buy bulk SATA disk rather than fast SAS spindles or flash.

Take NetApp, for example. One of the very first to market with deduplication, NetApp proved that customers wanted efficiency - but those customers were also quickly turned off by the limitations of the ONTAP implementation. Take a look at NetApp's Deduplication Deployment and Implementation Guide (TR-3505). Some choice quotes include, "if 1TB of new data has been added [...], this deduplication operation takes about 10 to 12 hours to complete," and "With eight deduplication processes running, there may be as much as a 15% to 50% performance penalty on other applications running on the system." Their "50% Virtualization Guarantee* Program" has 15 pages of terms and exceptions behind that little asterisk. It's no surprise that most NetApp users choose not to turn on deduplication.

VNX is another case in point. The "EMC VNX Deduplication and Compression" white paper is similarly frightening. Compression is offered, but it's available only as a capacity tier: "compression is not suggested to be used on active datasets." Deduplication is available as a post-process operation, but "for applications requiring consistent and predictable performance [...] Block Deduplication should not be used."

Finally, I'd like to address Pure Storage, which has set the standard for offering "cheap flash" without delivering the full performance of the medium. They represent the most successful of the all-flash array offerings on the market today and have deeply integrated data efficiency features, but they struggle to sustain 150,000 IOPS. Their arrays deliver a solid win on price over flash arrays without optimization, but that level of performance is not going to tip the balance for primary storage the way Data Domain did for backup.

To be fair to the products above, there are plenty of others that must have tried to build their own deduplication and simply failed to deliver something that meets their exacting business standards. IBM, EMC VMAX, Violin Memory and others have surely tried to build their own efficiency features, and have even announced plans to deliver them over the years, but none have shipped to date.

There are, however, some leaders in the primary efficiency game so far! Hitachi is delivering "Deduplication without Compromise" on their HNAS and HUS platforms, providing deduplication (based on Permabit's Albireo™ technology) that doesn't impact the fantastic performance of the platform. This solution delivers savings and performance for file storage, although the block side of HUS still lacks efficiency features.

EMC XtremIO is another winner in the all-flash array sector of the primary storage market. XtremIO has been able to deliver outstanding performance with fully inline data deduplication capabilities. The platform isn't yet scalable or dense in capacity, but it does deliver the required savings and performance necessary to make a change in the market.

Requirements for Change
The history of the backup appliance market makes the requirement for change in the primary storage market clear. Data efficiency simply cannot compromise performance, which is the reason why a customer is buying a particular storage platform in the first place. We're seeing the seeds of this change in products like HUS and XtremIO, but it's not yet clear who will be the Data Domain of the primary array storage deduplication market. The game is still young.

The good news is that data efficiency can do more than just reduce cost; it can increase performance as well - making a better product overall, as we saw in the backup market. Inline deduplication can eliminate writes before they ever reach disk or flash, and deduplication can inherently sequentialize writes in a way that vastly improves random write performance in critical environments like OLTP databases. These are some of the requirements for a tipping point in the primary storage market.
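As a rough illustration of how inline deduplication eliminates writes before they reach the media, consider the following sketch. The class and its data structures are hypothetical and greatly simplified; a real array would add compression, durable metadata, reference counting, and a far more sophisticated index.

import hashlib

class InlineDedupVolume:
    """Toy write path: duplicate blocks update only metadata, never the media."""
    def __init__(self):
        self.index = {}        # fingerprint -> physical block number
        self.media = []        # stand-in for the backing disk or flash
        self.block_map = {}    # logical block address -> physical block number
        self.writes_avoided = 0

    def write(self, lba: int, block: bytes) -> None:
        fp = hashlib.sha256(block).digest()
        phys = self.index.get(fp)
        if phys is None:                      # new data: must be written to the media
            phys = len(self.media)
            self.media.append(block)
            self.index[fp] = phys
        else:                                 # duplicate: the device write is eliminated
            self.writes_avoided += 1
        self.block_map[lba] = phys            # metadata update only

vol = InlineDedupVolume()
for lba in range(1000):
    vol.write(lba, b"A" * 4096)               # 1,000 logical writes of the same block
print(vol.writes_avoided, "of 1000 writes never reached the media")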

Data efficiency in primary storage must deliver uncompromising performance in order to be successful. At a technical level, this means that any implementation must deliver predictable inline performance, a deduplication window that spans the entire capacity of the existing storage platform, and performance scalability to meet the application environment. The current winning solutions provide some of these features today, but it remains to be seen which product will capture them all first.

Inline Efficiency
Inline deduplication and compression - eliminating duplicates as they are written, rather than with a separate process that examines data hours (or days) later - is an absolute requirement for performance in the primary storage market, just as we've previously seen in the backup market. By operating in an inline manner, efficiency operations provide immediate savings, deliver greater and more predictable performance, and allow for greatly accelerated data protection.

With inline deduplication and compression, the customer sees immediate savings because duplicate data never consumes additional space. This is critical in high data change rate scenarios, such as VDI and database environments, because non-inline implementations can run out of space and prevent normal operation. In a post-process implementation, or one using garbage collection, duplicate copies of data can pile up on the media waiting for the optimization process to catch up. If a database, VM, or desktop is cloned many times in succession, the storage rapidly fills and becomes unusable. Inline operation prevents this bottleneck - one called out explicitly in the NetApp documentation above, where at most about 2 TB of new data can be deduplicated per day. In a post-process implementation, a heavily utilized system may never catch up with the new data being written!
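A back-of-the-envelope model makes the backlog problem concrete. The roughly 2 TB/day post-process rate is implied by the guide's own figures; the 3 TB/day ingest rate is an assumed workload chosen only for illustration.

# Backlog of unoptimized data in a post-process system (illustrative model).
POST_PROCESS_TB_PER_DAY = 2.0   # implied by "1 TB ... takes about 10 to 12 hours"
INGEST_TB_PER_DAY = 3.0         # assumed workload, not a vendor figure

backlog_tb = 0.0
for day in range(1, 8):
    backlog_tb += INGEST_TB_PER_DAY                          # new data lands unreduced
    backlog_tb = max(0.0, backlog_tb - POST_PROCESS_TB_PER_DAY)
    print(f"day {day}: {backlog_tb:.0f} TB still waiting for deduplication")
# With inline deduplication the backlog is always zero: data is reduced as it is written.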

Inline operation also provides for the predictable, consistent performance required by many primary storage applications. In this case, deduplication and compression occur at the time of data write and are balanced with the available system resources by design. This means that performance will not fluctuate wildly as with post-process operation, where a 50% impact (or more) can be seen on I/O performance, as optimization occurs long after the data is written. Additionally, optimization at the time of data write means that the effective size of DRAM or flash caches can be greatly increased, meaning that more workloads can fit in these caching layers and accelerate application performance.

A less obvious advantage of inline efficiency is the ability for a primary storage system to deliver faster data protection. Because data is reduced immediately, it can be replicated immediately in its reduced form for disaster recovery. This greatly shrinks recovery point objectives (RPOs) as well as bandwidth costs. In comparison, a post-process operation requires either waiting for deduplication to catch up with new data (which could take days to weeks), or replicating data in its full form (which could also take days to weeks of additional time).
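Some illustrative arithmetic shows how much replicating reduced data can shrink daily replication time. The change rate, reduction ratio, and link speed below are assumptions chosen for the example, not measurements from any product.

# Time to replicate a day's changes in full vs. reduced form (illustrative figures).
DAILY_CHANGE_TB = 5.0        # new or changed data per day (assumed)
REDUCTION_RATIO = 5.0        # combined deduplication + compression savings (assumed)
LINK_GBPS = 1.0              # replication link speed in gigabits per second (assumed)

def hours_to_replicate(terabytes: float, gbps: float) -> float:
    bits = terabytes * 8e12                  # decimal terabytes to bits
    return bits / (gbps * 1e9) / 3600        # seconds at line rate, converted to hours

print(f"full copies:    {hours_to_replicate(DAILY_CHANGE_TB, LINK_GBPS):.1f} hours/day")
print(f"reduced copies: {hours_to_replicate(DAILY_CHANGE_TB / REDUCTION_RATIO, LINK_GBPS):.1f} hours/day")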

Capacity and Scalability
Capacity and scalability of a data efficiency solution would seem to be obvious requirements, but they're not evident in the products on the market today. As we've seen, a storage system incorporating deduplication and compression must be a better product, not just a cheaper product. This means that it must support the same storage capacity and the same performance scalability as the primary storage platforms that customers are deploying today.

Deduplication is a relative newcomer to the data efficiency portfolio, and this is largely because the system resources required, in terms of CPU and memory, are much greater than older technologies like compression. The amount of CPU and DRAM in modern platforms means that even relatively simple deduplication algorithms can now be implemented without substantial hardware cost, but they're still quite limited in the amount of storage that they can address, or the data rate that they can accommodate.

For example, even the largest systems from all-flash array vendors like Pure and XtremIO support well under 100 TB of storage capacity, far smaller than the primary storage arrays being broadly deployed today. NetApp, while it supports large arrays, only identifies duplicates within a very small window of history - perhaps 2 TB or smaller. To deliver effective savings, duplicates must be identified across the entire storage array, and the storage array must support the capacities that are being delivered and used in the real world. Smaller systems may be able to peel off individual applications like VDI, but they'll be lost in the noise of the primary storage data efficiency tipping point to come.
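Some rough arithmetic shows why a deduplication window spanning an entire large array is hard to build. Assuming 4 KB blocks and roughly 32 bytes per index entry - both illustrative figures, not any product's actual layout - a naive in-memory fingerprint index grows very quickly with capacity.

# Naive fingerprint index sizing under assumed parameters.
BLOCK_SIZE_BYTES = 4 * 1024   # 4 KB blocks (assumed)
BYTES_PER_ENTRY = 32          # fingerprint fragment plus block location (assumed)

def index_gib(capacity_tb: float) -> float:
    blocks = capacity_tb * 1e12 / BLOCK_SIZE_BYTES
    return blocks * BYTES_PER_ENTRY / 2**30

for capacity in (10, 100, 1000):   # TB of addressable storage
    print(f"{capacity:>5} TB capacity -> ~{index_gib(capacity):,.0f} GiB of index memory")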

Shifting the Primary Storage Market to Greater Efficiency
A lower cost product is not sufficient to substantially change customers' buying habits, as we saw from the example of the backup market. Rather, a superior product is required to drive rapid, revolutionary change. Just as the backup appliance market is unrecognizable from a decade ago, the primary storage market is on the cusp of a similar transformation. A small number of storage platforms are now delivering limited data efficiency capabilities with some of the features required for success: space savings, high performance, inline deduplication and compression, and capacity and throughput scalability. No clear winner has yet emerged. As the remaining vendors implement data efficiency, we will see who will play the role of Data Domain in the primary storage efficiency transformation.

More Stories By Jered Floyd

Jered Floyd, Chief Technology Officer and Founder of Permabit Technology Corporation, is responsible for exploring strategic future directions for Permabit's products and providing thought leadership to guide the company's data optimization initiatives. He previously established Permabit's effective software development methodologies and was responsible for developing the core protocol and the initial server and system architectures of Permabit's products.

Prior to Permabit, Floyd was a Research Scientist on the Microbial Engineering project at the MIT Artificial Intelligence Laboratory, working to bridge the gap between biological and computational systems. Earlier at Turbine, he developed a robust integration language for managing active objects in a massively distributed online virtual environment. Floyd holds Bachelor’s and Master’s degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology.
