The Risks of Over-Virtualization

Or why virtualization is not high availability

The computing industry goes in cycles. The latest trend, growing in buzz over the past year, is server consolidation aided by virtualization software. Virtualization software for a computer allows a single machine to behave as though it were many different, separate computing systems; each virtualized instance behaves almost identically to an independent physical machine. Using virtualization software, a roomful of servers can be consolidated onto a single physical box (provided it's powerful enough). Pundits claim this trend is cyclical because it's returning us to the old days of a single large, powerful computer (a la the mainframe) running all of the tasks in an organization. Although the modern consolidated, virtualized server is unlikely to look anything like the old mainframes, it's instructive to examine the virtualization trend in light of this mainframe comparison to see if there are any lessons to be learned.

When Time Began: The Mainframe Era
The Mainframe Era marked the beginning of commercial computing. By looking at how computing evolved after that era, we can anticipate the problems we might encounter in returning to a mainframe-like model. The post-mainframe age was marked by the networked minicomputer (or server). The generally perceived advantage of this shift was that it moved computing resources out of the glass house and closer to the user; another significant advantage was enhanced fault resilience.

Now, if the mainframe went down for some reason, not everyone in the organization was affected: the local network of minicomputers would still provide any local services that didn't depend on the mainframe. Conversely, any server crash in this network affected only its local users, not the whole organization. The end point of this expansion into networks was the total decentralization of services, resulting in the decommissioning of many of the central mainframes and nearly complete reliance instead on a distributed network of servers. However, the essential problem with a distributed network of servers, and the one virtualization promises to solve, is that the servers are hard to find (they're not centrally located) and hard and costly to manage (most run operating systems that aren't amenable to easy remote management). Worse still, if something goes wrong with the hardware or the operating system, there's almost no remote diagnostic capability, so someone has to find the server and sort it out manually. While newer remote management technologies alleviate some of the administration burden, the problems of server proliferation and remote accessibility persist.

Computing Grows Up: The Server Age
So the server age heralded unparalleled management headaches. They were so great that, after the initial heady decentralization that saw servers running in any available network-connected space, most business-critical servers were tracked down and forcibly repatriated to the old glass houses (or what had now become the modern server room), where they could at least be corralled and the remote-management nightmare considerably reduced.

However, the management problem still isn't eradicated: just because you have 20-odd servers physically located in the same place doesn't mean you have the expertise to cope with all the failures that can still occur. This aspect of the problem arises because the servers that replaced the mainframe were probably purchased over a considerable span of time, often from different manufacturers. Differences in internal components, Basic Input/Output System (BIOS) configuration, and software configuration make diagnosing and fixing problems in an aging server very difficult and necessitate a large amount of in-house expertise.

In many large organizations, the server management problem has become the single largest concern of the IT department. Even in small and medium-sized businesses, concern is growing about the multiplicity of server types in the environment and how they can be effectively managed and repaired without affecting business-critical operations.

The Future: The Promise of The Virtualization Age
The promise of the virtualization age is that of server consolidation: all those individual servers in the machine room can become "virtual" servers running on a single (very powerful) physical machine. That solves the management problem because now there's only a single physical machine to understand. Well, that's the theory. In practice one also needs to understand the virtualization environment; however, that's still only two pieces of knowledge as opposed to the much broader knowledge set required to understand the original multi-server environment being replaced by the virtualization setup. To understand exactly what this replacement entails, we must examine the nature of a virtualized environment.

Understanding the Virtualization Environment
The first thing to understand when choosing a virtualization environment (VE) is that VEs come in two flavors:

  • Standard Virtualization: This presents a set of known device drivers to the operating system running inside the VE. Note that the devices presented by the VE are often not the actual devices present on the platform, but are emulated by the virtualization system to look like real devices. The advantage of doing this is that the operating system uses its standard device drivers to drive these pseudo-devices, and so no modifications to the operating system are required. Any standard operating system can run in this type of environment. The disadvantage is obviously that two levels of device drivers are involved: the one that the operating system uses to drive the pseudo-device and the one that the virtualization environment uses to emulate the pseudo-device. This increases the complexity of the I/O path, and very often slows it down.
  • Para-Virtualization: This presents a set of para-virtual devices to the operating system that require special drivers to operate. This "special" set of devices isn't ordinarily found in the operating system, so the operating system itself requires modifications to talk to them. Since the operating system is being modified anyway, additional changes are often made so that it runs more efficiently in the VE. Although the necessity of modifying the operating system appears at first glance to be a significant drawback, the resulting efficiency of the virtualized operating system is often an overriding justification for taking this route. And because the para-virtual device drivers are crafted exactly for the VE, they're often as efficient as the operating system driving the hardware natively.
A third class of virtualization is now emerging: virtualized hardware. In this scenario, the hardware card itself expects to be driven simultaneously by multiple virtualized operating systems. The virtualization software merely presents the virtualized device instances to the operating system, to be driven by the operating system's own device driver (although the native driver usually has to be enhanced to understand the hardware virtualization). This kind of hardware virtualization promises to blur the distinction between Standard and Para-virtualization in the field. Even for hardware that isn't natively virtualized, the major processor makers are adding virtualization technologies to their chipsets (Intel with its VT architecture and AMD with Pacifica), which promises to erase the Standard vs. Para distinction altogether.
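The practical difference between the two flavors is the length of the I/O path. A toy sketch (the layer names below are illustrative only, not any real hypervisor's architecture) makes the extra hop in standard virtualization visible:

```python
# Toy model of the two virtualized I/O paths described above.
# Layer names are invented for illustration, not a real hypervisor API.

def emulated_io_path():
    """Standard virtualization: the guest's stock driver talks to an
    emulated pseudo-device, which the VE then re-translates onto the
    real hardware with its own driver."""
    return [
        "guest stock device driver",   # drives the emulated pseudo-device
        "VE device emulation layer",   # traps and translates the access
        "VE real device driver",       # finally drives the hardware
    ]

def paravirtual_io_path():
    """Para-virtualization: a special guest driver hands requests
    straight to the VE, skipping the emulation layer entirely."""
    return [
        "guest para-virtual driver",   # modified guest talks to the VE directly
        "VE real device driver",
    ]

# The extra emulation hop is why the standard path is usually slower.
assert len(emulated_io_path()) > len(paravirtual_io_path())
```

Note that in both flavors the bottom layer is the same: the VE, not the guest, owns the real device driver. That detail matters later when we consider driver faults.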

A Comparison of a Virtualization Environment and a Mainframe
A virtualization environment and a mainframe are very similar from the point-of-view of being "just a large machine," and that's not all they have in common: In an effort to make mainframes relevant to the modern world, mainframe manufacturers became the first true pioneers of virtualization (and the first business groups to tout the benefits of server consolidation). The current generation of virtualization technology is really a "second wave," moving virtualization from the province of highly specialized (and expensive) mainframes to commodity systems. However, one of the chief disadvantages comes from the very fact that the virtualization environment is now running on commodity hardware. Although this might be cheaper by a factor of 10 to 100 over the old mainframe, the flip side is less individual tailoring and burn-in testing. So the failure potential of commodity hardware is far higher than with a mainframe.

There's also a disadvantage inherent in the commodity environment: diversity. Although diversity is often a good thing, in hardware terms the extreme diversity of so-called commodity hardware results in a plethora of device drivers (and, in Open Source operating systems, the risk that some of the hardware won't even have drivers available). Whether you regard this hardware diversity as good or bad, it's certain that device drivers (in both open and closed source operating systems) are the single most significant source of operating system faults [1]. Since, in both standard and para-virtualization, the virtualization software itself contains the "real" device driver, this type of fault can still bring down the virtualization layer, and so potentially every virtual machine running on the box.

So what lessons can we learn?

The lessons of virtualization are several. First, the very act of virtualizing servers increases the vulnerability of your application environment to both hardware failures and driver faults. Second, the consequences of these faults, when they occur, will be more catastrophic than when the environment was distributed among a large pool of servers, since all of the virtualized servers will be taken down by a single machine or driver failure. Therefore, while virtualization may solve the server management problem, the cost of doing so is to increase the potential and scope of failures in the enterprise, thus causing an availability crisis.
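The widened failure scope can be made concrete with some back-of-the-envelope arithmetic. The failure probability below is an invented figure purely for illustration, and the machines are assumed to fail independently:

```python
# Probability of a *total* outage (every service down at once),
# before and after consolidation. The per-machine failure probability
# p is an assumed, illustrative number.
p = 0.01   # chance a given physical machine fails in some period
n = 20     # services, previously one per physical server

# Distributed: a total outage requires all 20 independent machines
# to be down simultaneously.
p_total_outage_distributed = p ** n        # about 1e-40: vanishingly small

# Fully consolidated: one hardware fault (or one driver fault in the
# virtualization layer) takes down every virtual server at once.
p_total_outage_consolidated = p            # 0.01

# Consolidation leaves the expected amount of downtime per service
# roughly unchanged, but concentrates it into rare, total outages.
print(p_total_outage_consolidated / p_total_outage_distributed)  # roughly 1e+38
```

The point is not the exact numbers but the shape of the risk: consolidation trades many small, independent failures for a single, all-encompassing one.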

Solving the Availability Crisis
The beauty of this problem is that the solution is the same as it was in the many-server environment: high-availability clustering. High-availability clustering software is designed to take a group of servers and ensure that a set of services (application, database, file shares) is always available across them.

This same paradigm applies in a virtualized environment with the single caveat that you must still have at least two physical machines to guard against failures of the hardware or virtualization environment.

In general, since high-availability software is designed to run on servers, it will mostly run unmodified in the virtualized server environment, so if you used high-availability software in your original environment, it will be perfectly possible to use the same software in your virtualized environment. The only caveat is that the high-availability software should be configured so that every service has a backup on a separate physical machine. Thus, the virtualization setup desired to achieve the benefits of server consolidation without sacrificing protection against unplanned outages is two physical machines, each initially running about half of the virtual machines, and each acting as a failover target for the services that it doesn't run.
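The two-machine layout described above can be expressed as a simple placement table and sanity-checked. All host and service names here are invented for the example:

```python
# Illustrative two-node consolidation layout: each virtual server has a
# primary host and a failover target on the other physical machine.
# Host and service names are invented for the example.
placement = {
    # service:      (primary host, backup host)
    "web-frontend": ("hostA", "hostB"),
    "database":     ("hostA", "hostB"),
    "file-share":   ("hostB", "hostA"),
    "mail":         ("hostB", "hostA"),
}

def valid_ha_layout(placement):
    """A layout protects against hardware failure only if every
    service's backup sits on a different physical machine from its
    primary; otherwise one box failure takes out both copies."""
    return all(primary != backup for primary, backup in placement.values())

print(valid_ha_layout(placement))  # True
```

A check like this captures the single caveat in the text: consolidation onto one physical machine can never pass it, because every backup would share hardware with its primary.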

Choosing a high-availability clustering software solution that monitors the entire application stack (application services, database, client and network connections as well as the OS, virtualization layer, and underlying hardware) provides the highest levels of protection against crippling downtime.

Conclusion
In studying the impact of migrations to virtualized environments, we can find lessons from previous cycles in the computer industry. However, the primary points to bear in mind are:

  1. Virtualization is not high availability. It's a solution for the server management problem, not a solution for the service availability problem.
  2. If carried too far, virtualization can, in fact, lead to a decrease in the availability of your services, not an increase.
Therefore, the deployment of a high-availability solution becomes more critical in a virtualized environment. Since deploying a high-availability solution will likely require a modification of the virtualized configuration (i.e., you need two virtualization servers, not one), plans for implementing virtualization should include high-availability planning from the outset of the design stage.

By combining server virtualization with high-availability clustering, IT organizations can realize the benefits of increased manageability and savings from server consolidation without risking increased downtime for business-critical applications.

Reference
1. Chou, A., Yang, J., Chelf, B., Hallem, S., and Engler, D. "An Empirical Study of Operating System Errors." In Proceedings of the 18th ACM Symposium on Operating Systems Principles (SOSP '01), October 2001.

More Stories By James Bottomley

Dr. James Bottomley is chief technology officer of SteelEye Technology (www.steeleye.com). As CTO, he provides the technical strategic vision for SteelEye's future products and research programs. He is also a committed member of the Open Source community, currently holding the Linux Kernel SCSI Maintainership, and is a frequent speaker at industry trade shows and conferences. James is also an active member of the SteelEye engineering team, directly applying his experience and expertise to SteelEye's ongoing product development efforts. He has 12 years of prior experience in academia and at AT&T Bell Labs and NCR, working on diverse enterprise and clustering technologies. He holds an MA and a PhD from Cambridge University.
