The Risks of Over-Virtualization

Or why virtualization is not high availability

The computing industry goes in cycles. The latest trend, growing in buzz over the past year, is server consolidation aided by virtualization software. Virtualization software for a computer allows a single machine to behave as though it were many different, separate computing systems; each virtualized instance behaves almost identically to an independent physical machine. Using virtualization software, a roomful of servers can be consolidated onto a single physical box (provided it's powerful enough). Pundits claim this trend is cyclical because it's returning us to the old days of a single large, powerful computer (a la the mainframe) running all of the tasks in an organization. Although the modern consolidated, virtualized server is unlikely to look anything like the old mainframes, it's instructive to examine the virtualization trend in light of this mainframe comparison to see if there are any lessons to be learned.

When Time Began: The Mainframe Era
The Mainframe Era marked the beginning of commercial computing. By looking at how computing evolved after the Mainframe Era, we can anticipate the problems we might encounter by, in effect, going back to mainframes. The post-mainframe age was marked by the networked minicomputer (or server). The generally perceived advantage of this shift was that it moved computing resources out of the glass house and closer to the user; another significant advantage was enhanced fault resilience.

Now, if the mainframe went down for some reason, not everyone in the organization was affected: the local network of minicomputers would still provide any local services that didn't depend on the mainframe. Conversely, any server crash in this network only affected its local users, not the whole organization. The ultimate end point of this expansion into networks was the total decentralization of services, resulting in the decommissioning of many of the central mainframes and the nearly complete reliance instead on a distributed network of servers. However, the essential problem of a distributed network of servers, which virtualization promises to solve, is that they are hard to find (they're not centrally located) and hard and costly to manage (most are running operating systems that aren't amenable to easy remote management). Worse still, if something goes wrong with the hardware (or the operating system), there's pretty much no remote diagnostic ability, so a person has to find the server and manually sort it out. While new remote management system technologies help alleviate some of the administration burden, the issues of server proliferation and remote accessibility continue to exist.

Computing Grows Up: The Server Age
So the server age heralded unparalleled management headaches. They were so great that, after the initial heady decentralization that saw servers running in any available network-connected space, most business-critical servers were tracked down and forcibly repatriated to the old glass houses (or what had now become the modern server room), where they could at least be corralled and the remote management nightmare considerably reduced.

However, the management problem still isn't eradicated: just because you have 20-odd servers physically located in the same place doesn't mean that you have the expertise to cope with all the failures that can still occur. This aspect of the management problem arises because the servers that replaced the mainframe were probably purchased over a considerable span of time, often from different manufacturers. Differences in internal components, Basic Input/Output System (BIOS) settings, and software configuration make diagnosing and fixing problems in an aging server very difficult and necessitate the acquisition of large amounts of in-house expertise.

In many large organizations, the server management problem has become the single largest concern of the IT department. Even in small and medium-sized businesses, concern is growing about the multiplicity of server types in the environment and how they can be effectively managed and repaired without affecting business-critical operations.

The Future: The Promise of The Virtualization Age
The promise of the virtualization age is that of server consolidation: all those individual servers in the machine room can become "virtual" servers running on a single (very powerful) physical machine. That solves the management problem because now there's only a single physical machine to understand. Well, that's the theory. In practice one also needs to understand the virtualization environment; however, that's still only two pieces of knowledge as opposed to the much broader knowledge set required to understand the original multi-server environment being replaced by the virtualization setup. To understand exactly what this replacement entails, we must examine the nature of a virtualized environment.

Understanding the Virtualization Environment
The first thing you need to understand when choosing a virtualization environment (VE) is that VEs come in two flavors:

  • Standard Virtualization: This presents a set of known device drivers to the operating system running inside the VE. Note that the devices presented by the VE are often not the actual devices present on the platform, but are emulated by the virtualization system to look like real devices. The advantage of doing this is that the operating system uses its standard device drivers to drive these pseudo-devices, and so no modifications to the operating system are required. Any standard operating system can run in this type of environment. The disadvantage is obviously that two levels of device drivers are involved: the one that the operating system uses to drive the pseudo-device and the one that the virtualization environment uses to emulate the pseudo-device. This increases the complexity of the I/O path, and very often slows it down.
  • Para-Virtualization: This presents a set of para-virtual devices to the operating system that require special drivers to operate. This "special" set of devices isn't ordinarily found in the operating system, and so the operating system itself requires modifications to talk to them. Since the operating system is being modified anyway, additional changes are often made to it to make it run more efficiently in the VE. Although the necessity of modifying the operating system appears at first glance to be a significant drawback, the resulting efficiency of the virtualized operating system is often an overriding justification for taking this route. And because the para-virtual device drivers are crafted exactly for the VE, they're often as efficient as the operating system driving the hardware natively.
A third class of virtualization is now emerging: virtualized hardware. In this scenario, the hardware card itself expects to be driven simultaneously by multiple virtualized operating systems. The virtualization software merely presents the virtualized device instances to the operating system, which drives them with its own device driver (although the native device driver usually has to be enhanced to understand the hardware virtualization). This kind of hardware virtualization promises to blur the distinction between Standard and Para-Virtualization in the field. Even for hardware that isn't natively virtualized, the major processor makers are adding virtualization technologies to their chipsets (Intel with its VT architecture and AMD with Pacifica), which promises to erase the Standard-versus-Para distinction altogether.
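To make the Standard-versus-Para distinction concrete, the short sketch below (Python, purely for illustration) classifies a few device model names by the virtualization style they imply. The model names and the mapping are assumptions chosen for demonstration; they are not drawn from any particular virtualization product's configuration format.

```python
# Illustrative only: classify virtual device models by virtualization style.
# The model names below are examples of devices commonly exposed by hypervisors;
# the mapping is an assumption for demonstration, not an official or exhaustive list.

EMULATED_MODELS = {"e1000", "rtl8139", "ide"}                      # standard virtualization:
                                                                   # guest uses its stock drivers,
                                                                   # hypervisor emulates real hardware
PARAVIRTUAL_MODELS = {"virtio-net", "virtio-blk", "xen-netfront"}  # para-virtualization:
                                                                   # guest needs drivers written
                                                                   # specifically for the hypervisor

def classify(model: str) -> str:
    """Return the virtualization style implied by a device model name."""
    if model in EMULATED_MODELS:
        return "standard (emulated device, two driver layers in the I/O path)"
    if model in PARAVIRTUAL_MODELS:
        return "para-virtual (hypervisor-aware driver, shorter I/O path)"
    return "unknown"

if __name__ == "__main__":
    for m in ("e1000", "virtio-net", "scsi-foo"):
        print(f"{m}: {classify(m)}")
```

The practical point is the one made above: with an emulated model the I/O path traverses two driver layers, while a para-virtual model needs only the single hypervisor-aware driver.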

A Comparison of a Virtualization Environment and a Mainframe
A virtualization environment and a mainframe are very similar from the point-of-view of being "just a large machine," and that's not all they have in common: In an effort to make mainframes relevant to the modern world, mainframe manufacturers became the first true pioneers of virtualization (and the first business groups to tout the benefits of server consolidation). The current generation of virtualization technology is really a "second wave," moving virtualization from the province of highly specialized (and expensive) mainframes to commodity systems. However, one of the chief disadvantages comes from the very fact that the virtualization environment is now running on commodity hardware. Although this might be cheaper by a factor of 10 to 100 over the old mainframe, the flip side is less individual tailoring and burn-in testing. So the failure potential of commodity hardware is far higher than with a mainframe.

There's also a disadvantage inherent in the commodity environment: diversity. Although diversity is often a good thing, in hardware terms the extreme diversity of so-called commodity hardware results in a plethora of device drivers for that hardware (and, indeed, in Open Source operating systems, the risk that some of the hardware won't even have device drivers available). Whether you regard this hardware diversity as a good thing or a bad thing, it's certain that device drivers (in both open and closed source operating systems) are the single most significant source of operating system faults [1]. Since, in both standard and para-virtualization, the virtualization software itself contains the "real" device driver, this type of fault can still bring down the virtualization layer, and so potentially every virtual machine running on the box.

So what lessons can we learn?

The lessons of virtualization are several: First, the very act of virtualizing servers increases the vulnerability of your application environment to both hardware failures and driver faults. Second, the consequences of these faults, when they occur, will be more catastrophic than when the environment was distributed among a large pool of servers, since all of the virtualized servers will be taken down by a single machine or driver failure. Therefore, while virtualization may solve the server management problem, the cost of doing so is to increase the potential and scope of failures in the enterprise, thus causing an availability crisis.

Solving the Availability Crisis
The beauty of this problem is that the solution is the same as it was in the many-server environment: high-availability clustering. High-availability clustering software is designed to take a group of servers and ensure that a set of services (application, database, file shares) is always available across them.

This same paradigm applies in a virtualized environment with the single caveat that you must still have at least two physical machines to guard against failures of the hardware or virtualization environment.

In general, since high-availability software is designed to run on servers, it will mostly run unmodified in the virtualized server environment. So if you used high-availability software in your original environment, it will be perfectly possible to use the same software in your virtualized environment. The key requirement is that the high-availability software be configured so that every service has a backup on a separate physical machine. Thus, the virtualization setup needed to achieve the benefits of server consolidation without sacrificing protection against unplanned outages is two physical machines, each initially running about half of the virtual machines and each acting as a failover target for the services that it doesn't run.
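As a minimal sketch of that two-machine layout, the following Python fragment models a hypothetical placement table in which each host is primary for about half of the virtual machines and the failover target for the rest. The host and VM names are invented for illustration, and a real high-availability clustering product would of course manage placement and failover itself.

```python
# A minimal sketch of a two-node consolidation layout with cross-failover.
# Host and VM names are hypothetical; a real HA product manages this automatically.
from typing import Dict, Optional, Tuple

PLACEMENT: Dict[str, Tuple[str, str]] = {
    # vm name: (primary host, failover host)
    "web-vm":  ("hostA", "hostB"),
    "mail-vm": ("hostA", "hostB"),
    "db-vm":   ("hostB", "hostA"),
    "file-vm": ("hostB", "hostA"),
}

def active_hosts(failed_host: Optional[str] = None) -> Dict[str, str]:
    """Return which host should run each VM, given an optionally failed host."""
    result = {}
    for vm, (primary, backup) in PLACEMENT.items():
        if primary != failed_host:
            result[vm] = primary          # normal case: VM stays on its primary host
        elif backup != failed_host:
            result[vm] = backup           # primary is down: fail over to the other node
        else:
            result[vm] = "unavailable"    # both nodes down: no protection remains
    return result

if __name__ == "__main__":
    print(active_hosts())            # normal operation: load split across both hosts
    print(active_hosts("hostA"))     # hostA fails: its VMs restart on hostB
```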

Choosing a high-availability clustering software solution that monitors the entire application stack (application services, database, client and network connections as well as the OS, virtualization layer, and underlying hardware) provides the highest levels of protection against crippling downtime.
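As a rough illustration of what monitoring the entire application stack means in practice, the following hypothetical Python health check probes an application port, a database port, and a client-facing gateway. The hostnames and ports are assumptions for illustration, and a real clustering agent would also monitor the OS, the virtualization layer, and the underlying hardware, which a simple TCP probe cannot observe.

```python
# Hypothetical whole-stack health check: service port, database port, and a
# client-gateway probe. Hostnames and ports are illustrative assumptions.
import socket

CHECKS = [
    ("application service", "app.example.internal", 8080),
    ("database",            "db.example.internal",  5432),
    ("client gateway",      "gw.example.internal",  443),
]

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def stack_healthy() -> bool:
    """Run every check and report overall health; a real HA agent would act on failures."""
    healthy = True
    for name, host, port in CHECKS:
        ok = port_reachable(host, port)
        print(f"{name:20s} {'OK' if ok else 'FAILED'}")
        healthy = healthy and ok
    return healthy

if __name__ == "__main__":
    stack_healthy()
```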

Conclusion
In studying the impact of migrations to virtualized environments, we can find lessons from previous cycles in the computer industry. However, the primary points to bear in mind are:

  1. Virtualization is not high availability. It's a solution for the server management problem, not a solution for the service availability problem.
  2. If carried too far, virtualization can, in fact, lead to a decrease in the availability of your services, not an increase.
Therefore, the deployment of a high-availability solution becomes more critical in a virtualized environment. Since deploying a high-availability solution will likely require a modification of the virtualized configuration (i.e., you need two virtualization servers, not one), plans for implementing virtualization should include high-availability planning from the outset of the design stage.

By combining server virtualization with high-availability clustering, IT organizations can realize the benefits of increased manageability and savings from server consolidation without risking increased downtime for business-critical applications.

Reference
1. Chou, A., Yang, J., Chelf, B., Hallem, S., and Engler, D. "An empirical study of operating system errors." In Proceedings of the 18th ACM Symposium on Operating Systems Principles, October 2001.

About the Author

Dr. James Bottomley is chief technology officer of SteelEye Technology (www.steeleye.com). As CTO, he provides the technical strategic vision for SteelEye's future products and research programs. He is also a committed member of the Open Source community, currently holding the Linux Kernel SCSI maintainership, and is a frequent speaker at industry trade shows and conferences. James is also an active member of the SteelEye engineering team, directly applying his experience and expertise to SteelEye's ongoing product development efforts. He has 12 years of prior experience in academia and at AT&T Bell Labs and NCR, working on diverse enterprise and clustering technologies. He holds an MA and a PhD from Cambridge University.
