
The Other Virtualization Technology... OS Virtualization

There's more than one way to skin a cat

Server virtualization is rapidly becoming a common undertaking for IT departments. In fact, many organizations are now taking the next step: evaluating their virtualization investment to see whether it has helped them achieve the server consolidation goals they set. Along the way, many IT departments have encountered surprises, such as a "performance tax" - a reduction in server or application performance that results from virtualizing applications with hardware virtualization technology. With the advent of operating system virtualization, this evaluation process is starting to reveal that some deployments and uses are perfectly suited to hardware virtualization, while others are better suited to OS virtualization.

Using Hardware Virtualization
Hardware emulation or virtualization is probably the best-known virtualization technology. VMware and Microsoft both have product offerings that fit this class of virtualization (although the products themselves are quite different in functionality and maturity). Hardware virtualization technology creates a full emulation of hardware at the host OS level. Full emulation typically runs some part of the guest OS code on the native CPU, but some privileged instructions are emulated and handled separately. Besides the CPU, the software has to emulate the BIOS, video adapters, network adapters, storage, and input/output devices to provide a normal environment for the guest OS's operation. The extent of the emulation allows almost any existing OS to run inside the virtual environment. Each virtual environment (a generic term used across all technologies for virtual machines, virtual servers, and virtual private servers) has an entire OS structure, keeping all the software and processes completely isolated. The primary use of this technology has been to provide multiple different OSes on the same physical piece of hardware. Multiple OSes are, in particular, a requirement in development and testing scenarios, where engineers are developing software simultaneously on different operating systems. Many organizations have deployed hardware virtualization in full production scenarios as well.
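A quick practical check related to this class of technology: modern x86 hosts advertise the CPU extensions that hardware virtualization products can take advantage of (Intel VT-x, AMD-V) as flags in /proc/cpuinfo. The sketch below is a minimal illustration; the sample flags string is an assumption for demonstration, and on a live Linux host you would read the real line from /proc/cpuinfo.

```shell
# Sketch: detect x86 hardware-virtualization extensions by CPU flag name.
# The flags string below is a stand-in; on a real Linux host, use:
#   flags=$(grep -m1 '^flags' /proc/cpuinfo)
flags="fpu vme de pse tsc msr pae mce cx8 apic sep vmx"

case "$flags" in
    *vmx*|*svm*) echo "hardware virtualization extensions present" ;;
    *)           echo "no vmx/svm flag found" ;;
esac
```

The vmx flag indicates Intel VT-x and svm indicates AMD-V; their absence doesn't prevent full emulation, but it does affect which products and performance levels are available.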

Experimenting with Paravirtualization
The Xen Open Source project and various paravirtualization approaches have been hot technology topics over the past year. Paravirtualization is built on the same concepts as hardware virtualization: emulating hardware and creating a server infrastructure for multiple OSes on the same server. The most important difference is that Xen modifies the guest operating system to cooperate with the virtualization layer, yielding important performance improvements - although that requires altering the operating system. Paravirtualization solutions are still only available in Open Source form, so they haven't been fully adopted or tested yet by many organizations. Xen also has a very limited set of tools, so managing these technologies can be cumbersome.
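To make the Xen workflow concrete, the sketch below writes a minimal paravirtualized guest definition and notes the classic xm commands used to boot it. The file name, kernel path, and disk image are assumptions for illustration; xm itself only works on a Xen host, so those commands are shown as comments.

```shell
# Hedged sketch: a minimal Xen 3.x-era paravirtualized guest definition.
# All names and paths here are illustrative assumptions.
cat > guest01.cfg <<'EOF'
name    = "guest01"
kernel  = "/boot/vmlinuz-2.6-xen"
memory  = 256
disk    = ['file:/var/xen/guest01.img,xvda,w']
vif     = ['']
root    = "/dev/xvda ro"
EOF
echo "wrote guest01.cfg"

# On an actual Xen host (not run here):
#   xm create guest01.cfg   # boot the paravirtualized guest
#   xm list                 # confirm it is running
```

Note the kernel line: the guest boots a Xen-aware kernel rather than emulated firmware, which is exactly the "altering the operating system" trade-off paravirtualization makes.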

Exploring OS Virtualization
OS virtualization is an established virtualization technology that's now gaining significant interest and momentum in the marketplace. Cited by Gartner at its Data Center conference in December 2005 as the virtualization technology with the best potential for server consolidation, OS virtualization is now being evaluated and examined by many organizations new to virtualization, as well as many that have already deployed other virtualization technologies. The two primary OS virtualization solutions are SWsoft's Virtuozzo and Zones in Sun's Solaris OS. There are also several Open Source OS virtualization projects like OpenVZ and Vserver.

OS virtualization technology leverages a single common OS and creates separate, isolated virtual environments on a single physical server, sharing hardware and software efficiently. The available technologies differ in maturity, but the optimal design ensures complete isolation and security between virtualized servers. The overall design intent of OS virtualization is to produce a more flexible, lower-overhead virtualization option that uses a single operating system and leverages the hardware support built into that OS. The result is reduced server and application overhead, better density without redundant OSes, and lower total cost of ownership. The primary use of OS virtualization technology is to deploy production applications and data on live servers, although in the right circumstances it's also quite effective for development environments.
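The life cycle of such a virtual environment can be sketched with OpenVZ-style commands. vzctl is OpenVZ's real management tool, but the container ID, template name, and addresses below are assumptions for illustration; the sketch defaults to a dry run that prints each command rather than executing it, since the commands require an OpenVZ host.

```shell
# Hedged sketch: creating, configuring, and starting an OS-virtualized
# environment, OpenVZ-style. DRY_RUN=1 (the default) prints the commands;
# set DRY_RUN=0 on a real OpenVZ host to execute them.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run vzctl create 101 --ostemplate centos-4 --config vps.basic
run vzctl set 101 --ipadd 10.0.0.101 --hostname web01 --save
run vzctl start 101
run vzctl exec 101 ps ax   # list processes inside the isolated environment
```

Notice what is absent: no BIOS, no emulated devices, no guest kernel. The environment is a set of isolated processes on the host's single OS, which is where the density and overhead advantages come from.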

A Choice for Server Consolidation?
What are the primary reasons most IT organizations are considering server virtualization and consolidation projects?

The list is long, but the primary reasons include controlling server sprawl, reducing the total cost of ownership, and increasing manageability. The following are some of the key considerations when deciding between hardware and OS virtualization approaches.

Hardware Virtualization for Server Consolidation
Many organizations have deployed hardware virtualization projects and have seen some improvements in server sprawl, but in some cases not as much as originally hoped. This is largely due to the performance penalty: I/O-heavy and production applications often don't perform well enough in a hardware-virtualized environment to be deployed there.

The total cost of ownership varies tremendously among the different hardware virtualization options. Not all hardware virtualization solutions are the same: the better-performing solutions with richer management tools carry a much higher price premium than the entry-level solutions. The higher-priced solutions offer better performance and higher virtualized server density, while the lower-cost solutions provide only basic virtualization capabilities with a more severe performance tax. Initially, some organizations thought they might also see savings in software, but the hardware virtualization architecture yields savings only in hardware, by better utilizing server resources.

Finally, hardware virtualization has improved server infrastructure manageability in some areas through its toolsets. Provisioning and centralized server management are areas of obvious improvement. The areas that haven't seen much improvement are the day-to-day management tasks associated with servers: upgrades, updates, and backups. Because each virtualized server is a complete and isolated server, the administrator still must manage each virtualized server as though it were a separate physical server. While the management toolsets can be helpful, they can't overcome the architectural management limitations.

OS Virtualization for Server Consolidation
The first consideration for server consolidation is addressing server sprawl. OS virtualization can run on a single operating system per server, while some technologies also allow multiple variations of operating systems. OS virtualization carries the leanest software overhead. Thanks to this efficient architecture, it has the best potential for virtualized server density and therefore the fewest total physical servers. OS virtualization installations can support an order of magnitude more virtualized servers than a typical hardware virtualization deployment.

Due to both the single operating system structure and the lack of hardware emulation overhead, the performance of OS virtualization also outstrips the performance of hardware virtualization. The low overhead enables all applications, from databases to DNS servers, to be loaded and have near-native performance in a virtual server. More applications and servers are suited to OS virtualization and therefore more servers with various purposes and levels of applications can be consolidated safely.

Total cost of ownership is the main driver behind the architecture and intent of OS virtualization. The sheer hardware savings achieved through high virtualized server density is just one component. The physical server requires only a single copy of the operating system, dramatically reducing the total cost of ownership through lower software licensing costs. There is a perception that the single OS is a limitation, but certain OS virtualization technologies have taken great pains to alleviate these concerns. Some have built-in layers for isolation, security, and resource control that make the single-OS approach safe and controlled. The shared OS is sometimes identified as a single point of failure, but every virtualization approach has one - whether it's a hypervisor created by the virtualization vendor or, as here, the operating system itself. Again, some OSes are perceived to have more security issues than a proprietary hypervisor, but some OS virtualization technologies have taken this into account as well and use security layers to mediate kernel access. With these precautions in place, OS virtualization can reap the considerable benefits of a standard OS, such as supporting any hardware the OS supports, while minimizing risk and keeping virtualized servers isolated and secure from one another.
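The licensing argument can be made concrete with a little arithmetic. The sketch below compares OS-copy counts per physical host for the two approaches under assumed numbers (20 virtualized servers per host); actual license terms vary by vendor and are not modeled here.

```shell
# Sketch: OS copies needed per physical host, under assumed numbers.
# Hardware virtualization: one OS per guest plus the host/parent OS.
# OS virtualization: a single shared OS for all virtual environments.
guests=20

hw_os_copies=$((guests + 1))
osv_os_copies=1

echo "hardware virtualization: $hw_os_copies OS copies"
echo "OS virtualization:       $osv_os_copies OS copy"
echo "copies saved per host:   $((hw_os_copies - osv_os_copies))"
```

The gap widens as density rises, which is why the savings compound with OS virtualization's higher virtualized-server counts per machine.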

Lastly, OS virtualization has the best potential for manageability improvements. The available technologies have varying levels of toolsets, but the most established toolset is very comprehensive. As with hardware virtualization, management improvements come from fast provisioning and centralized server management - and OS virtualization provisioning is even faster. Because images or copies of operating systems aren't required, a very thin virtualized server structure can be created in a matter of seconds. In addition, OS virtualization can be configured with a single OS version loaded for the entire server, which enables the whole server to be updated and patched with a single action. Administrators can manage several virtualized servers as easily as a single physical server. Some of the technologies offer additional capabilities such as cloning, zero-downtime migration, and backup, allowing virtualized servers to be tested at a new patch level if necessary and then deployed in seconds. Some see the single patch level as a limitation of OS virtualization, but this too has been addressed in some of the available technologies: in the case of Linux, some allow different distributions to be loaded on a single server, and on Windows, different patch levels. Loading different patch levels works against the lean intent of the design, but it provides the real-world flexibility many organizations require.
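The single-action update pattern can be sketched as an administrative loop. With one shared OS, patching the host covers every virtualized server at once; any remaining per-container actions can still be fanned out programmatically. The vzlist/vzctl names match OpenVZ's real tools, but the container IDs below are stand-ins, and the per-container command is shown as a comment.

```shell
# Hedged sketch: one shared OS means one patch action covers every
# virtualized server; per-container follow-ups fan out in a loop.
# The IDs are stand-ins; on an OpenVZ host: containers=$(vzlist -H -o veid)
containers="101 102 103"

for ve in $containers; do
    echo "container $ve: inherits the host's single patched OS"
    # vzctl exec "$ve" uname -r   # would show the one shared kernel
done
```

Contrast this with hardware virtualization, where the same patch must be applied inside each guest OS individually.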

Moving Forward
Many IT departments have already had very successful server virtualization deployments and are now looking for opportunities to gain even more benefits from this important technology. As options such as OS virtualization become more prevalent, more and more organizations will consider the architectures and approaches and deploy the appropriate technologies with the appropriate applications and virtualized servers. In some cases, virtualization technologies will replace each other, but more commonly, virtualization and consolidation strategies will become more finely tuned and matched with the appropriate technology.

More Stories By Carla Safigan

Carla Safigan is director of product management at Parallels Inc.

