The Risks of Over-Virtualization

Or why virtualization is not high availability

The computing industry goes in cycles. The latest trend, growing in buzz over the past year, is server consolidation aided by virtualization software. Virtualization software for a computer allows a single machine to behave as though it were many different, separate computing systems; each virtualized instance behaves almost identically to an independent physical machine. Using virtualization software, a roomful of servers can be consolidated onto a single physical box (provided it's powerful enough). Pundits claim this trend is cyclical because it's returning us to the old days of a single large, powerful computer (a la the mainframe) running all of the tasks in an organization. Although the modern consolidated, virtualized server is unlikely to look anything like the old mainframes, it's instructive to examine the virtualization trend in light of this mainframe comparison to see if there are any lessons to be learned.

When Time Began: The Mainframe Era
The Mainframe Era marked the beginning of commercial computing. By looking at how computing evolved after the Mainframe Era, we can anticipate the problems we might encounter in going back to a mainframe-like model. The post-mainframe age was marked by the networked minicomputer (or server). The generally perceived advantage of this shift was to move computing resources out of the glass house and closer to the user. Another significant advantage was enhanced fault-resilience.

Now, if the mainframe went down for some reason, not everyone in the organization was affected: the local network of minicomputers would still provide any local services that didn't depend on the mainframe. Conversely, any server crash in this network only affected its local users, not the whole organization. The ultimate end point of this expansion into networks was the total decentralization of services, resulting in the decommissioning of many of the central mainframes and nearly complete reliance instead on a distributed network of servers. However, the essential problem of a distributed network of servers, which virtualization promises to solve, is that the servers are hard to find (they're not centrally located) and hard and costly to manage (most run operating systems that aren't amenable to easy remote management). Worse still, if something goes wrong with the hardware (or the operating system), there's pretty much no remote diagnostic ability, so a person has to find the server and manually sort it out. While newer remote management technologies help alleviate some of the administration burden, the issues of server proliferation and remote accessibility persist.

Computing Grows Up: The Server Age
So the server age heralded unparalleled management headaches. They were so great that, after the initial heady decentralization that saw servers running in any available network-connected space, most business-critical servers were tracked down and forcibly repatriated to the old glass houses (what had now become the modern server room), where they could at least be corralled and the remote management nightmare considerably reduced.

However, the management problem still isn't eradicated: just because you have 20-odd servers physically located in the same place doesn't mean that you have the expertise to cope with all the failures that can still occur. This aspect of the management problem arises because the servers that replaced the mainframe were probably purchased over a considerable span of time, often from different manufacturers. Differences in internal components, Basic Input/Output System (BIOS) configuration, and software configuration make diagnosing and fixing problems in an aging server very difficult, and necessitate the acquisition of a large amount of in-house expertise.

In many large organizations, the server management problem has become the single largest concern of the IT department. Even in small and medium-sized businesses, concern is growing about the multiplicity of server types in the environment and how they can be effectively managed and repaired without affecting business-critical operations.

The Future: The Promise of The Virtualization Age
The promise of the virtualization age is that of server consolidation: all those individual servers in the machine room can become "virtual" servers running on a single (very powerful) physical machine. That solves the management problem because now there's only a single physical machine to understand. Well, that's the theory. In practice one also needs to understand the virtualization environment; however, that's still only two pieces of knowledge as opposed to the much broader knowledge set required to understand the original multi-server environment being replaced by the virtualization setup. To understand exactly what this replacement entails, we must examine the nature of a virtualized environment.

Understanding the Virtualization Environment
The first thing you need to understand when choosing a virtualization environment (VE) is that VEs come in two flavors:

  • Standard Virtualization: This presents a set of known devices, for which standard drivers already exist, to the operating system running inside the VE. Note that the devices presented by the VE are often not the actual devices present on the platform, but are emulated by the virtualization system to look like real devices. The advantage of doing this is that the operating system uses its standard device drivers to drive these pseudo-devices, so no modifications to the operating system are required; any standard operating system can run in this type of environment. The disadvantage is obviously that two levels of device drivers are involved: the one the operating system uses to drive the pseudo-device and the one the virtualization environment uses to emulate the pseudo-device. This increases the complexity of the I/O path and very often slows it down.
  • Para-Virtualization: This presents a set of para-virtual devices to the operating system that require special drivers to operate. This "special" set of devices isn't ordinarily found in the operating system, so the operating system itself requires modifications to talk to them. Since the operating system is being modified anyway, additional changes are often made to it so that it runs more efficiently in the VE. Although the necessity of modifying the operating system appears at first glance to be a significant drawback, the resulting efficiency of the virtualized operating system is often an overriding justification for taking this route. And because the para-virtual device drivers are crafted exactly for the VE, they're often as efficient as the operating system driving the hardware natively.
Now there's a third class of virtualization coming, and that's virtualized hardware. In this scenario, the hardware card itself is designed to be driven simultaneously by multiple virtualized operating systems. The virtualization software merely presents the virtualized device instances to the operating system, to be driven by the device driver the operating system provides (although the native device driver usually has to be enhanced to understand the hardware virtualization). This kind of hardware virtualization promises to blur the distinction between Standard and Para-virtualization in the field. Even for hardware that isn't natively virtualized, the major processor makers are adding virtualization technologies to their chipsets (Intel with its VT architecture and AMD with Pacifica), which promise to erase the Standard-versus-Para distinction altogether.
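To make the difference between the two I/O paths concrete, here is a minimal, purely illustrative Python sketch. The class and method names are invented for this example and don't correspond to any real hypervisor's interfaces; the point is simply that standard virtualization inserts an emulation step between the guest's stock driver and the real one, while a para-virtual driver hands requests to the virtualization layer directly.

```python
# Illustrative model of the two I/O paths described above.
# All class and method names are hypothetical; no real hypervisor API is implied.

class PhysicalDisk:
    """The real device, driven by the hypervisor's own driver."""
    def write(self, block, data):
        return f"physical write of {len(data)} bytes to block {block}"

class EmulatedIdeDisk:
    """Standard virtualization: the guest sees a familiar IDE disk, but every
    request is decoded by the emulator before reaching the real driver --
    two driver layers per I/O."""
    def __init__(self, backend):
        self.backend = backend

    def guest_write(self, block, data):
        # 1. The guest's stock IDE driver issues the request.
        # 2. The virtualization layer decodes the emulated IDE protocol.
        block, data = self._emulate_ide_protocol(block, data)
        # 3. The hypervisor's real driver finally performs the I/O.
        return self.backend.write(block, data)

    def _emulate_ide_protocol(self, block, data):
        return block, data  # stands in for the (costly) emulation work

class ParaVirtualDisk:
    """Para-virtualization: a guest driver written specifically for the VE
    hands the request almost directly to the real driver -- one hop."""
    def __init__(self, backend):
        self.backend = backend

    def guest_write(self, block, data):
        return self.backend.write(block, data)

disk = PhysicalDisk()
print(EmulatedIdeDisk(disk).guest_write(7, b"hello"))  # longer, emulated path
print(ParaVirtualDisk(disk).guest_write(7, b"hello"))  # shorter, para-virtual path
```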

A Comparison of a Virtualization Environment and a Mainframe
A virtualization environment and a mainframe are very similar from the point of view of being "just a large machine," and that's not all they have in common: in an effort to make mainframes relevant to the modern world, mainframe manufacturers became the first true pioneers of virtualization (and the first business groups to tout the benefits of server consolidation). The current generation of virtualization technology is really a "second wave," moving virtualization from the province of highly specialized (and expensive) mainframes to commodity systems. However, one of the chief disadvantages comes from the very fact that the virtualization environment is now running on commodity hardware. Although this might be cheaper by a factor of 10 to 100 than the old mainframe, the flip side is less individual tailoring and burn-in testing, so the failure potential of commodity hardware is far higher than that of a mainframe.

There's also a disadvantage inherent in the commodity environment: diversity. Although diversity is often a good thing, in hardware terms the extreme diversity of so-called commodity hardware results in a plethora of device drivers for that hardware (and, indeed, in open source operating systems, the risk that some of the hardware won't even have device drivers available). Whether you regard this hardware diversity as a good thing or a bad thing, it's certain that device drivers (in both open and closed source operating systems) are the single most significant source of operating system faults [1]. Since, in both standard and para-virtualization, the virtualization software itself contains the "real" device driver, this type of fault can still bring down the virtualization layer, and so potentially every virtual machine running on the box.

So what lessons can we learn?

The lessons of virtualization are several. First, the very act of virtualizing servers increases the vulnerability of your application environment to both hardware failures and driver faults. Second, the consequences of these faults, when they occur, will be more catastrophic than when the environment was distributed among a large pool of servers, since all of the virtualized servers will be taken down by a single machine or driver failure. Therefore, while virtualization may solve the server management problem, the cost of doing so is to increase both the likelihood and the scope of failures in the enterprise, thus causing an availability crisis.
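Some back-of-the-envelope arithmetic makes the point. The numbers below are assumptions chosen purely for illustration, not measurements: they show that consolidation doesn't so much change the expected amount of service downtime as concentrate it into rarer but organization-wide events; and if the single box's effective failure rate is higher (commodity hardware plus driver faults in the virtualization layer), the expected loss actually rises.

```python
# Back-of-the-envelope illustration; all rates are assumed, not measured.

services = 20                  # business services to host
box_failure_rate = 0.05        # assumed chance a given physical box fails in a year

# Distributed: one service per physical server.
distributed_incidents = services * box_failure_rate    # ~1 incident/year somewhere
distributed_blast_radius = 1                           # each incident downs ONE service

# Consolidated: all services as VMs on a single physical server.
consolidated_incidents = 1 * box_failure_rate          # rarer incidents...
consolidated_blast_radius = services                   # ...but each downs EVERYTHING

print(f"Distributed:  ~{distributed_incidents:.2f} incidents/yr, "
      f"{distributed_blast_radius} service lost per incident")
print(f"Consolidated: ~{consolidated_incidents:.2f} incidents/yr, "
      f"{consolidated_blast_radius} services lost per incident")

# Expected service-outages per year are the same (1.0) in both cases, but the
# consolidated failure is one catastrophic, organization-wide event.  And if a
# driver fault in the virtualization layer pushes the single box's failure rate
# above the assumed 5%, the consolidated setup's expected loss is strictly worse.
```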

Solving the Availability Crisis
The beauty of this problem is that the solution is the same as it was in the many-server environment: high-availability clustering. High-availability clustering software is designed to take a group of servers and ensure that a set of services (application, database, file shares) is always available across them.

This same paradigm applies in a virtualized environment with the single caveat that you must still have at least two physical machines to guard against failures of the hardware or virtualization environment.

In general, since high-availability software is designed to run on servers, it will mostly run unmodified in the virtualized server environment, so if you used high-availability software in your original environment, it will be perfectly possible to use the same software in your virtualized environment. The only caveat is that the high-availability software should be configured so that every service has a backup on a separate physical machine. Thus, the virtualization setup desired to achieve the benefits of server consolidation without sacrificing protection against unplanned outages is two physical machines, each initially running about half of the virtual machines, and each acting as a failover target for the services that it doesn't run.
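The layout described above can be sketched in a few lines of Python. The host and virtual machine names are invented for illustration; in practice this split is expressed in the configuration of whatever high-availability clustering software you use.

```python
# A minimal sketch of the recommended layout: two physical machines, each
# normally running about half of the virtual machines and acting as the
# failover target for the other half.  Names are hypothetical.

hosts = ["physical-a", "physical-b"]
virtual_machines = ["web", "mail", "db", "files", "crm", "erp"]

placement = {}
for i, vm in enumerate(virtual_machines):
    primary = hosts[i % 2]            # alternate VMs across the two hosts
    failover = hosts[(i + 1) % 2]     # the other box is always the backup
    placement[vm] = {"primary": primary, "failover": failover}

def plan_after_failure(failed_host):
    """Where every VM should run once one physical machine has failed."""
    return {vm: (p["failover"] if p["primary"] == failed_host else p["primary"])
            for vm, p in placement.items()}

print(placement)
print(plan_after_failure("physical-a"))   # every service survives on physical-b
```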

Choosing a high-availability clustering software solution that monitors the entire application stack (application services, database, client and network connections as well as the OS, virtualization layer, and underlying hardware) provides the highest levels of protection against crippling downtime.
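By way of illustration only, the sketch below shows what monitoring the entire stack means in practice: a service is reported healthy only if every layer it depends on passes its check. The check functions are placeholders, not any particular product's API.

```python
# Hypothetical whole-stack health check; each function stands in for a real probe.

def check_hardware():        return True   # e.g., disk, RAID, and sensor status
def check_virtualization():  return True   # e.g., hypervisor/VE responding
def check_os():              return True   # e.g., heartbeat from the guest OS
def check_database():        return True   # e.g., a test query succeeds
def check_application():     return True   # e.g., an application-level request succeeds
def check_network():         return True   # e.g., client connections reachable

STACK_CHECKS = [
    ("hardware", check_hardware),
    ("virtualization layer", check_virtualization),
    ("operating system", check_os),
    ("database", check_database),
    ("application service", check_application),
    ("network/client connections", check_network),
]

def service_health():
    """The service counts as up only when every layer beneath it is up."""
    failures = [name for name, check in STACK_CHECKS if not check()]
    return ("healthy", []) if not failures else ("failover required", failures)

print(service_health())
```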

Conclusion
In studying the impact of migrations to virtualized environments, we can find lessons from previous cycles in the computer industry. However, the primary points to bear in mind are:

  1. Virtualization is not high availability. It's a solution for the server management problem, not a solution for the service availability problem.
  2. If carried too far, virtualization can, in fact, lead to a decrease in the availability of your services, not an increase.
Therefore, the deployment of a high-availability solution becomes more critical in a virtualized environment. Since deploying a high-availability solution will likely require a modification of the virtualized configuration (i.e., you need two virtualization servers, not one), plans for implementing virtualization should include high-availability planning from the outset of the design stage.

By combining server virtualization with high-availability clustering, IT organizations can realize the benefits of increased manageability and savings from server consolidation without risking increased downtime for business-critical applications.

Reference
1. Chou, A., Yang, J., Chelf, B., Hallem, S., and Engler, D. "An Empirical Study of Operating System Errors." In Proceedings of the 18th ACM Symposium on Operating Systems Principles (SOSP), October 2001.

More Stories By James Bottomley

Dr. James Bottomley is chief technology officer of SteelEye Technology (www.steeleye.com). As CTO, he provides the technical strategic vision for SteelEye's future products and research programs. He is also a committed member of the Open Source community, currently holding the Linux Kernel SCSI maintainership, and is a frequent speaker at industry trade shows and conferences. James is also an active member of the SteelEye engineering team, directly applying his experience and expertise to SteelEye's ongoing product development efforts. He has 12 years of prior experience in academia and at AT&T Bell Labs and NCR, working on diverse enterprise and clustering technologies. He holds an MA and a PhD from Cambridge University.
