The Risks of Over-Virtualization

Or why virtualization is not high availability

The computing industry goes in cycles. The latest trend, growing in buzz over the past year, is server consolidation aided by virtualization software. Virtualization software for a computer allows a single machine to behave as though it were many different, separate computing systems; each virtualized instance behaves almost identically to an independent physical machine. Using virtualization software, a roomful of servers can be consolidated onto a single physical box (provided it's powerful enough). Pundits claim this trend is cyclical because it's returning us to the old days of a single large, powerful computer (a la the mainframe) running all of the tasks in an organization. Although the modern consolidated, virtualized server is unlikely to look anything like the old mainframes, it's instructive to examine the virtualization trend in light of this mainframe comparison to see if there are any lessons to be learned.

When Time Began: The Mainframe Era
The Mainframe Era marked the beginning of commercial computing. By looking at how computing evolved after that era, we can anticipate the problems we might encounter in returning to a mainframe-like model. The post-mainframe age was defined by the networked minicomputer (or server). The generally perceived advantage was moving computing resources out of the glass house and closer to the user; another significant advantage was enhanced fault-resilience.

Now, if the mainframe went down for some reason, not everyone in the organization was affected: the local network of minicomputers would still provide any local services that didn't depend on the mainframe. Conversely, any server crash in this network only affected its local users, not the whole organization. The ultimate end point of this expansion into networks was the total decentralization of services, resulting in the decommissioning of many of the central mainframes and the nearly complete reliance instead on a distributed network of servers. However, the essential problem of a distributed network of servers, which virtualization promises to solve, is that they are hard to find (they're not centrally located) and hard and costly to manage (most are running operating systems that aren't amenable to easy remote management). Worse still, if something goes wrong with the hardware (or the operating system), there's pretty much no remote diagnostic ability, so a person has to find the server and manually sort it out. While new remote management system technologies help alleviate some of the administration burden, the issues of server proliferation and remote accessibility continue to exist.

Computing Grows Up: The Server Age
So the server age heralded unparalleled management headaches. They were so great that, after the initial heady decentralization, which saw servers running in any available network-connected space, most business-critical servers were tracked down and forcibly repatriated to the old glass houses (or what had now become the modern server room), where they could at least be corralled and the remote-management nightmare considerably reduced.

However, the management problem still isn't eradicated: just because you have 20-odd servers physically located in the same place doesn't mean that you have the expertise to cope with all the failures that can still occur. This aspect of the management problem arises because the servers that replaced the mainframe were probably purchased over a considerable span of time, often from different manufacturers. Differences in internal components, Basic Input/Output System (BIOS) configuration, and software configuration make diagnosing and fixing problems in an aging server very difficult and necessitate the acquisition of large amounts of in-house expertise.

In many large organizations, the server management problem has become the single largest concern of the IT department. Even in small and medium-sized businesses, concern is growing about the multiplicity of server types in the environment and how they can be effectively managed and repaired without affecting business-critical operations.

The Future: The Promise of The Virtualization Age
The promise of the virtualization age is that of server consolidation: all those individual servers in the machine room can become "virtual" servers running on a single (very powerful) physical machine. That solves the management problem because now there's only a single physical machine to understand. Well, that's the theory. In practice one also needs to understand the virtualization environment; however, that's still only two pieces of knowledge as opposed to the much broader knowledge set required to understand the original multi-server environment being replaced by the virtualization setup. To understand exactly what this replacement entails, we must examine the nature of a virtualized environment.

Understanding the Virtualization Environment
The first thing you need to understand when choosing a virtualization environment (VE) is that such environments come in two flavors:

  • Standard Virtualization: This presents a set of known device drivers to the operating system running inside the VE. Note that the devices presented by the VE are often not the actual devices present on the platform, but are emulated by the virtualization system to look like real devices. The advantage of doing this is that the operating system uses its standard device drivers to drive these pseudo-devices, and so no modifications to the operating system are required. Any standard operating system can run in this type of environment. The disadvantage is obviously that two levels of device drivers are involved: the one that the operating system uses to drive the pseudo-device and the one that the virtualization environment uses to emulate the pseudo-device. This increases the complexity of the I/O path, and very often slows it down.
  • Para-Virtualization: This presents a set of para-virtual devices to the operating system that require special drivers to operate. This "special" set of devices isn't ordinarily found in the operating system, and so the operating system itself requires modifications to talk to them. Since the operating system is being modified anyway, additional changes are often made to it to make it run more efficiently in the VE. Although the necessity of modifying the operating system appears at first glance to be a significant drawback, the resulting efficiency of the virtualized operating system is often an overriding justification for taking this route. And because the para-virtual device drivers are crafted exactly for the VE, they're often as efficient as the operating system driving the hardware natively.
Now there's a third class of virtualization coming: virtualized hardware. In this scenario, the hardware card itself is designed to be driven simultaneously by multiple virtualized operating systems. The virtualization software merely presents the virtualized device instances to the operating system, which drives them with its own device driver (although the native driver usually has to be enhanced to understand the hardware virtualization). This kind of hardware virtualization promises to blur the distinction between Standard and Para-virtualization in the field. Even for hardware that isn't natively virtualized, the major processor makers are adding virtualization technologies to their chipsets (Intel with its VT architecture and AMD with Pacifica), which may erase the Standard vs. Para distinction altogether.
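
As a concrete aside on the processor side of this, whether a given Linux host advertises these extensions can be read straight from the CPU flags: "vmx" is reported for Intel VT and "svm" for AMD's Pacifica (AMD-V). The following is a minimal, illustrative Python sketch under that assumption; it is not tied to any particular virtualization product.

```python
# Minimal sketch: report hardware virtualization support on a Linux host.
# Assumes an x86 system exposing /proc/cpuinfo; "vmx" marks Intel VT,
# "svm" marks AMD Pacifica/AMD-V.

def hardware_virt_support(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                if "vmx" in flags:
                    return "Intel VT (vmx)"
                if "svm" in flags:
                    return "AMD Pacifica/AMD-V (svm)"
                return None
    return None

if __name__ == "__main__":
    support = hardware_virt_support()
    print(support or "no hardware virtualization extensions reported")
```

Note that the flags only say what the processor offers; whether the firmware has enabled the extensions and whether the VE actually uses them is a separate question.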

A Comparison of a Virtualization Environment and a Mainframe
A virtualization environment and a mainframe are very similar from the point-of-view of being "just a large machine," and that's not all they have in common: In an effort to make mainframes relevant to the modern world, mainframe manufacturers became the first true pioneers of virtualization (and the first business groups to tout the benefits of server consolidation). The current generation of virtualization technology is really a "second wave," moving virtualization from the province of highly specialized (and expensive) mainframes to commodity systems. However, one of the chief disadvantages comes from the very fact that the virtualization environment is now running on commodity hardware. Although this might be cheaper by a factor of 10 to 100 over the old mainframe, the flip side is less individual tailoring and burn-in testing. So the failure potential of commodity hardware is far higher than with a mainframe.

There's also a disadvantage inherent in the commodity environment: diversity. Although diversity is often a good thing, in hardware terms the extreme diversity of so-called commodity hardware results in a plethora of device drivers for that hardware (and, indeed, in Open Source operating systems, the risk that some of the hardware won't even have device drivers available). Whether you regard this hardware diversity as a good thing or a bad thing, it's certain that device drivers (in both open and closed source operating systems) are the single most significant source of operating system faults [1]. Since, in both standard and para-virtualization, the virtualization software itself contains the "real" device driver, this type of fault can still bring down the virtualization layer, and so potentially every virtual machine running on the box.

So what lessons can we learn?

The lessons of virtualization are several. First, the very act of virtualizing servers increases the vulnerability of your application environment to both hardware failure and driver faults. Second, the consequences of these faults, when they occur, will be more catastrophic than when the environment was distributed among a large pool of servers, since all of the virtualized servers will be taken down by a single machine or driver failure. Therefore, while virtualization may solve the server management problem, it does so at the cost of increasing the potential and scope of failures in the enterprise, thus creating an availability crisis.

Solving the Availability Crisis
The beauty of this problem is that the solution is the same as it was in the many-server environment: high-availability clustering. High-availability clustering software is designed to take a group of servers and ensure that a set of services (application, database, file shares) is always available across them.

This same paradigm applies in a virtualized environment with the single caveat that you must still have at least two physical machines to guard against failures of the hardware or virtualization environment.

In general, since high-availability software is designed to run on servers, it will mostly run unmodified in the virtualized server environment, so if you used high-availability software in your original environment, it will be perfectly possible to use the same software in your virtualized environment. The only caveat is that the high-availability software should be configured so that every service has a backup on a separate physical machine. Thus, the virtualization setup desired to achieve the benefits of server consolidation without sacrificing protection against unplanned outages is two physical machines, each initially running about half of the virtual machines, and each acting as a failover target for the services that it doesn't run.
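
To make that topology concrete, here is an illustrative Python sketch of such a placement plan (the host and service names are hypothetical, standing in for whatever configuration your clustering software actually uses): each service has a preferred physical host and a designated failover host, and the two hosts back each other up.

```python
# Illustrative sketch (hypothetical names): a two-node consolidation plan in which
# each service normally runs on one physical host and fails over to the other.

PLACEMENT = {
    # service:            (preferred_host, failover_host)
    "web-frontend":       ("physical-a", "physical-b"),
    "application-server": ("physical-a", "physical-b"),
    "database":           ("physical-b", "physical-a"),
    "file-share":         ("physical-b", "physical-a"),
}

def active_host(service, failed_hosts=()):
    """Return the host a service should run on, given the set of failed hosts."""
    preferred, failover = PLACEMENT[service]
    if preferred not in failed_hosts:
        return preferred
    if failover not in failed_hosts:
        return failover
    return None  # no surviving host: the service is down

# Example: if physical-a fails, everything consolidates onto physical-b.
if __name__ == "__main__":
    for svc in PLACEMENT:
        print(svc, "->", active_host(svc, failed_hosts={"physical-a"}))
```

The point of the even split is simply that, in normal operation, neither machine sits idle, while either one can absorb the full load when the other fails.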

Choosing a high-availability clustering software solution that monitors the entire application stack (application services, database, client and network connections as well as the OS, virtualization layer, and underlying hardware) provides the highest levels of protection against crippling downtime.
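
As an illustration of what monitoring the entire stack means in practice, the sketch below walks a layered set of health checks and reports the first layer that fails. The check functions are hypothetical placeholders rather than any particular vendor's API; the shape of the loop is the essential part.

```python
# Illustrative sketch of whole-stack health monitoring (hypothetical checks,
# not a specific product's API). Each check returns True when healthy.

import socket

def check_network(host="127.0.0.1", port=80, timeout=2.0):
    """Can the service endpoint be reached at all?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_database():
    # Placeholder: in a real deployment this would run a trivial query.
    return True

def check_application():
    # Placeholder: in a real deployment this would exercise an application health URL.
    return True

CHECKS = [("network", check_network),
          ("database", check_database),
          ("application", check_application)]

def stack_healthy():
    """Return (healthy, failed_layer); a failure at any layer is a failover trigger."""
    for name, check in CHECKS:
        if not check():
            return False, name
    return True, None

if __name__ == "__main__":
    healthy, failed = stack_healthy()
    print("stack healthy" if healthy else f"failover required: {failed} check failed")
```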

Conclusion
In studying the impact of migrations to virtualized environments, we can find lessons from previous cycles in the computer industry. However, the primary points to bear in mind are:

  1. Virtualization is not high availability. It's a solution for the server management problem, not a solution for the service availability problem.
  2. If carried too far, virtualization can, in fact, lead to a decrease in the availability of your services, not an increase.
Therefore, the deployment of a high-availability solution becomes more critical in a virtualized environment. Since deploying a high-availability solution will likely require a modification of the virtualized configuration (i.e., you need two virtualization servers, not one), plans for implementing virtualization should include high-availability planning from the outset of the design stage.

By combining server virtualization with high-availability clustering, IT organizations can realize the benefits of increased manageability and savings from server consolidation without risking increased downtime for business-critical applications.

Reference
1. Chou, A.; Yang, J.; Chelf, B.; Hallem, S.; and Engler, D. "An Empirical Study of Operating System Errors." In Proceedings of the 18th ACM Symposium on Operating Systems Principles (SOSP), October 2001.

More Stories By James Bottomley

Dr. James Bottomley is chief technology officer of SteelEye Technology (www.steeleye.com). As CTO, he provides the technical strategic vision for SteelEye's future products and research programs. He is also a committed member of the Open Source community, currently holding the Linux kernel SCSI maintainership, and is a frequent speaker at industry trade shows and conferences. James is also an active member of the SteelEye engineering team, directly applying his experience and expertise to SteelEye's ongoing product development efforts. He has 12 years of prior experience in academia and at AT&T Bell Labs and NCR, working on diverse enterprise and clustering technologies. He holds an MA and a PhD from Cambridge University.
