The Risks of Over-Virtualization

Or why virtualization is not high availability

The computing industry goes in cycles. The latest trend, growing in buzz over the past year, is server consolidation aided by virtualization software. Virtualization software for a computer allows a single machine to behave as though it were many different, separate computing systems; each virtualized instance behaves almost identically to an independent physical machine. Using virtualization software, a roomful of servers can be consolidated onto a single physical box (provided it's powerful enough). Pundits claim this trend is cyclical because it's returning us to the old days of a single large, powerful computer (a la the mainframe) running all of the tasks in an organization. Although the modern consolidated, virtualized server is unlikely to look anything like the old mainframes, it's instructive to examine the virtualization trend in light of this mainframe comparison to see if there are any lessons to be learned.

When Time Began: The Mainframe Era
The Mainframe Era marked the beginning of commercial computing. By looking at how computing evolved after the Mainframe Era, we can anticipate the problems we might encounter in moving back toward a mainframe-like model. The post-mainframe age was marked by the networked minicomputer (or server). The generally perceived advantage of this was to move computing resources out of the glass house and closer to the user. Another significant advantage was enhanced fault resilience.

Now, if the mainframe went down for some reason, not everyone in the organization was affected: the local network of minicomputers would still provide any local services that didn't depend on the mainframe. Conversely, any server crash in this network only affected its local users, not the whole organization. The ultimate end point of this expansion into networks was the total decentralization of services, resulting in the decommissioning of many of the central mainframes and the nearly complete reliance instead on a distributed network of servers. However, the essential problem of a distributed network of servers, which virtualization promises to solve, is that they are hard to find (they're not centrally located) and hard and costly to manage (most are running operating systems that aren't amenable to easy remote management). Worse still, if something goes wrong with the hardware (or the operating system), there's pretty much no remote diagnostic ability, so a person has to find the server and manually sort it out. While new remote management system technologies help alleviate some of the administration burden, the issues of server proliferation and remote accessibility continue to exist.

Computing Grows Up: The Server Age
So the server age heralded unparalleled management headaches. They were so great that, after the initial heady decentralization that saw servers running in any available network-connected space, most business-critical servers were tracked down and forcibly repatriated to the old glass houses (or what had now become the modern server room), where they could at least be corralled and the remote-management nightmare considerably reduced.

However, the management problem still isn't eradicated: just because you have 20-odd servers physically located in the same place doesn't mean that you have the expertise to cope with all the failures that can still occur. This aspect of the management problem arises because the servers that replaced the mainframe were probably purchased over a considerable span of time, often from different manufacturers. Differences in internal components, Basic Input/Output System (BIOS) configurations, and software configurations make diagnosing and fixing problems in an aging server very difficult and necessitate the acquisition of large amounts of in-house expertise.

In many large organizations, the server management problem has become the single largest concern of the IT department. Even in small and medium-sized businesses, concern is growing about the multiplicity of server types in the environment and how they can be effectively managed and repaired without affecting business-critical operations.

The Future: The Promise of The Virtualization Age
The promise of the virtualization age is that of server consolidation: all those individual servers in the machine room can become "virtual" servers running on a single (very powerful) physical machine. That solves the management problem because now there's only a single physical machine to understand. Well, that's the theory. In practice one also needs to understand the virtualization environment; however, that's still only two pieces of knowledge as opposed to the much broader knowledge set required to understand the original multi-server environment being replaced by the virtualization setup. To understand exactly what this replacement entails, we must examine the nature of a virtualized environment.

Understanding the Virtualization Environment
The first thing you need to understand when choosing a virtualization environment (VE) is that VEs come in two flavors:

  • Standard Virtualization: This presents a set of known device drivers to the operating system running inside the VE. Note that the devices presented by the VE are often not the actual devices present on the platform, but are emulated by the virtualization system to look like real devices. The advantage of doing this is that the operating system uses its standard device drivers to drive these pseudo-devices, and so no modifications to the operating system are required. Any standard operating system can run in this type of environment. The disadvantage is obviously that two levels of device drivers are involved: the one that the operating system uses to drive the pseudo-device and the one that the virtualization environment uses to emulate the pseudo-device. This increases the complexity of the I/O path, and very often slows it down.
  • Para-Virtualization: This presents a set of para-virtual devices to the operating system that require special drivers to operate. This "special" set of devices isn't ordinarily found in the operating system, and so the operating system itself requires modifications to talk to them. Since the operating system is being modified anyway, additional changes are often made to make it run more efficiently in the VE. Although the necessity of modifying the operating system appears at first glance to be a significant drawback, the resulting efficiency of the virtualized operating system is often an overriding justification for taking this route. And because the para-virtual device drivers are crafted exactly for the VE, they're often as efficient as the operating system driving the hardware natively.
A third class of virtualization is now emerging: virtualized hardware. In this scenario, the hardware card itself expects to be driven simultaneously by multiple virtualized operating systems. The virtualization software merely presents the virtualized device instances to the operating system, to be driven by the device driver provided by the operating system (although the native device driver usually has to be enhanced to understand the hardware virtualization). This kind of hardware virtualization promises to blur the distinction between Standard and Para-virtualization in the field. Even for hardware that might not be thought of as natively virtualized, the major processor makers are adding virtualization technologies to their chipsets (Intel with its VT architecture and AMD with Pacifica), which promises to erase the Standard vs. Para distinction altogether.
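
To make the Standard vs. Para-virtualization distinction concrete, here is a minimal sketch of how the same guest disk might be presented either way. It uses present-day QEMU/KVM and libvirt conventions purely as an illustration; the hypervisor, image path, and device names are assumptions, not anything specified in this article.

# A minimal sketch, not taken from the article: two libvirt-style <disk> fragments
# showing how the same backing file can be handed to a guest either as a fully
# emulated IDE disk (standard virtualization) or as a virtio disk (para-virtualization).
# The hypervisor (QEMU/KVM), the image path, and the device names are assumptions.

EMULATED_DISK = """
<disk type='file' device='disk'>
  <source file='/var/lib/libvirt/images/guest.img'/>
  <!-- bus='ide': the VE emulates a standard IDE controller, so the guest's stock
       IDE driver works unchanged, at the cost of a second, slower driver layer -->
  <target dev='hda' bus='ide'/>
</disk>
"""

PARAVIRTUAL_DISK = """
<disk type='file' device='disk'>
  <source file='/var/lib/libvirt/images/guest.img'/>
  <!-- bus='virtio': a para-virtual device; the guest must carry a virtio block
       driver, but I/O reaches the host with far less emulation overhead -->
  <target dev='vda' bus='virtio'/>
</disk>
"""

if __name__ == "__main__":
    print("Standard (emulated) device:", EMULATED_DISK)
    print("Para-virtual device:", PARAVIRTUAL_DISK)

Either way, note that the "real" driver for the physical hardware still lives below the VE, which is exactly why driver faults matter so much in the discussion that follows.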

A Comparison of a Virtualization Environment and a Mainframe
A virtualization environment and a mainframe are very similar from the point of view of being "just a large machine," and that's not all they have in common: in an effort to make mainframes relevant to the modern world, mainframe manufacturers became the first true pioneers of virtualization (and the first business groups to tout the benefits of server consolidation). The current generation of virtualization technology is really a "second wave," moving virtualization from the province of highly specialized (and expensive) mainframes to commodity systems. However, one of the chief disadvantages comes from the very fact that the virtualization environment is now running on commodity hardware. Although this might be cheaper by a factor of 10 to 100 than the old mainframe, the flip side is less individual tailoring and burn-in testing. So the failure potential of commodity hardware is far higher than with a mainframe.

There's also a disadvantage inherent in the commodity environment: diversity. Although diversity is often a good thing, in hardware terms the extreme diversity of so-called commodity hardware results in a plethora of device drivers for that hardware (and, indeed, in Open Source operating systems, the risk that some of the hardware won't even have device drivers available). Whether you regard this hardware diversity as a good thing or a bad thing, it's certain that device drivers (in both open and closed source operating systems) are the single most significant source of operating system faults [1]. Since, in both standard and para-virtualization, the virtualization software itself actually contains the "real" device driver, this type of fault can still bring down the virtualization layer, and so potentially every virtual machine running on the box.

So what lessons can we learn?

The lessons of virtualization are several. First, the very act of virtualizing servers increases the vulnerability of your application environment to both hardware failures and driver faults. Second, the consequences of these faults, when they occur, will be more catastrophic than when the environment was distributed among a large pool of servers, since all of the virtualized servers will be taken down by a single machine or driver failure. Therefore, while virtualization may solve the server management problem, the cost of doing so is to increase the likelihood and scope of failures in the enterprise, thus causing an availability crisis.

Solving the Availability Crisis
The beauty of this problem is that the solution is the same as it was in the many-server environment: high-availability clustering. High-availability clustering software is designed to take a group of servers and ensure that a set of services (application, database, file shares) is always available across them.

This same paradigm applies in a virtualized environment with the single caveat that you must still have at least two physical machines to guard against failures of the hardware or virtualization environment.

In general, since high-availability software is designed to run on servers, it will mostly run unmodified in a virtualized server environment, so if you used high-availability software in your original environment, you can use the same software in your virtualized one. The only caveat is that the high-availability software should be configured so that every service has a backup on a separate physical machine. Thus, the virtualization setup that achieves the benefits of server consolidation without sacrificing protection against unplanned outages is two physical machines, each initially running about half of the virtual machines and each acting as the failover target for the services it doesn't run.
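
As a rough illustration of that two-machine arrangement, the following sketch shows one half of such a pair monitoring its peer and taking over the peer's virtual servers when the heartbeat stops. It is not the interface of any particular high-availability product; the host name, port, virtual-server names, and the start_virtual_server() hook are hypothetical placeholders.

# Illustrative sketch only (not any specific HA product's API): this host watches its
# peer physical machine and, after several missed heartbeats, starts the peer's
# virtual servers locally. Names, port, and actions are hypothetical placeholders.
import socket
import time

PEER_HOST = "host-b.example.com"   # the other physical machine (hypothetical name)
HEARTBEAT_PORT = 7789              # port the peer's HA agent is assumed to listen on
PEER_VMS = ["web-vm", "db-vm"]     # virtual servers normally running on the peer

def peer_is_alive(timeout=2.0):
    """Heartbeat check: can we open a TCP connection to the peer's HA agent?"""
    try:
        with socket.create_connection((PEER_HOST, HEARTBEAT_PORT), timeout=timeout):
            return True
    except OSError:
        return False

def start_virtual_server(name):
    """Placeholder failover action. A real HA product would also fence the failed
    host and re-point shared storage and IP addresses before starting the VM here."""
    print(f"failover: starting {name} on this physical machine")

def monitor(interval=5.0, misses_allowed=3):
    """Declare the peer dead only after several consecutive missed heartbeats,
    then bring its virtual servers up on this machine."""
    misses = 0
    while True:
        misses = 0 if peer_is_alive() else misses + 1
        if misses >= misses_allowed:
            for vm in PEER_VMS:
                start_virtual_server(vm)
            return
        time.sleep(interval)

if __name__ == "__main__":
    monitor()

The design point, regardless of product, is that the failover target is a separate physical machine, so neither a hardware fault nor a crash of the virtualization layer can take out both copies of a service.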

Choosing a high-availability clustering software solution that monitors the entire application stack (application services, database, client and network connections as well as the OS, virtualization layer, and underlying hardware) provides the highest levels of protection against crippling downtime.
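
As a hedged sketch of what "monitoring the entire application stack" might mean in practice, the snippet below runs one stub check per layer and treats the service as healthy only when every layer passes. The hosts, ports, and layer boundaries are illustrative assumptions, not a description of any specific product.

# Minimal sketch: one health check per layer of the stack; hosts/ports are placeholders.
import socket

def _port_open(host, port, timeout=2.0):
    """Shared helper: is something listening at host:port?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

STACK_CHECKS = {
    "network":     lambda: _port_open("gateway.example.com", 53),
    "database":    lambda: _port_open("db-vm.example.com", 5432),
    "application": lambda: _port_open("web-vm.example.com", 80),
}

def failing_layers():
    """Return the layers whose checks fail; an empty list means the whole stack is up."""
    return [layer for layer, check in STACK_CHECKS.items() if not check()]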

Conclusion
In studying the impact of migrations to virtualized environments, we can find lessons from previous cycles in the computer industry. However, the primary points to bear in mind are:

  1. Virtualization is not high availability. It's a solution for the server management problem, not a solution for the service availability problem.
  2. If carried too far, virtualization can, in fact, lead to a decrease in the availability of your services, not an increase.
Therefore, the deployment of a high-availability solution becomes more critical in a virtualized environment. Since deploying a high-availability solution will likely require a modification of the virtualized configuration (i.e., you need two virtualization servers, not one), plans for implementing virtualization should include high-availability planning from the outset of the design stage.

By combining server virtualization with high-availability clustering, IT organizations can realize the benefits of increased manageability and savings from server consolidation without risking increased downtime for business-critical applications.

Reference
1. Chou, A.; Yang, J.; Chelf, B.; Hallem, S.; and Engler, D. "An Empirical Study of Operating System Errors." In Proceedings of the 18th ACM Symposium on Operating Systems Principles (SOSP '01), October 2001.

More Stories By James Bottomley

Dr. James Bottomley is chief technology officer of SteelEye Technology (www.steeleye.com). As CTO, he provides the technical strategic vision for SteelEye's future products and research programs. He is also a committed member of the Open Source community, currently holding the Linux Kernel SCSI maintainership, and is a frequent speaker at industry trade shows and conferences. James is also an active member of the SteelEye engineering team, directly applying his experience and expertise to SteelEye's ongoing product development efforts. He has 12 years of prior experience in academia and industry, including AT&T Bell Labs and NCR, working on diverse enterprise and clustering technologies. He holds an MA and a PhD from Cambridge University.
