
VM: Virtual Memory or Virtual Mayhem?

How to solve the virtual memory problem in all Linux kernels.

(LinuxWorld) -- The fur has been flying in the Linux kernel development community lately, largely over how Linux should handle virtual memory management (VM). Andrea Arcangeli and Rik van Riel hold the two most visible opinions on how the VM should work, so they're often seen as being at the heart of the struggle. But it goes much deeper than that. The VM even seems to be a hot-button issue between Linus Torvalds and Alan Cox, although I'm guessing they're not as passionate about it as journalists would like them to be in order to get mileage out of the topic.

Take us, for example. None other than our own Joshua Drake called the 2.4 kernel the "kernel of pain," primarily because of problems with the VM algorithms and changes to them. Yours truly complained a little about the VM last August.

I'm about to weigh in once again about how I feel about VM, only in more detail this time. We're definitely into milking this topic for all it's worth.

After looking at the various VM algorithms in play, I've come up with an alternative. I'm sure Andrea Arcangeli and Rik van Riel are flooded with the same old suggestions over and over again, and it's entirely possible they've heard this one before. Nevertheless, I'll propose it, anyway.

First, let me walk you through a little of the history that gave me the notion of how best to deal with the Linux VM issue.

The brain-dead system that wouldn't die

Back when I was a programmer by trade, one of the projects I worked on involved Fortran programming for a farm of PDP-11s. I honestly don't remember which OS we had installed on the PDPs, but if I had to guess I'd say it was probably RSX-11/M.

What I'll never forget, however, is the one quirk I hated beyond all others. You couldn't run executable files unless they were stored in contiguous blocks on the disk.

Programmers are a virtual disk-frag factory, so we ran out of contiguous space almost daily. That meant we often found ourselves in the uncomfortable position of being unable to test the modifications to our programs until someone defragmented the drives. Worst of all, if a disk defragmentation utility existed, the systems guys obviously didn't know about it. They solved the problem by taking the systems offline, backing up the drives, and then restoring their contents. We couldn't even edit code while we waited.

By the way, Microsoft credits Windows NT architect Dave Cutler as the designer of RSX-11/M (see resources). As tempting as it might be to compare the brain-dead pieces of Windows NT to the bizarre behavior of RSX-11/M, it would be unfair to do so for two reasons. First, sources more reliable than Microsoft credit Cutler only with the design of VMS. Cutler borrowed some RSX-11/M code for VMS, but he didn't design RSX-11/M. Second, RSX-11/M was probably only brain-dead because the PDP-11 wasn't much of a brain.

Any computer historian knows the PDP-11 was a breakthrough in affordable computing in its time. Compared to today's desktops, though, buying one would be like taking out a second mortgage to buy an abacus. A PDP-11 with 8K words of core memory and a 256K disk cost about $30,000 in 1972, and an additional 4K of memory could run you thousands of dollars.

It's possible the RSX-11/M architect was an idiot. It's also possible that changing this behavior would have required the resident portion of the RSX-11/M kernel to exceed the typical memory configuration of a PDP-11, inflating the price of the system in the process.

I don't happen to recall how much memory we had on our PDP-11s or how much of it was used by RSX-11/M. However, I do know we never had enough. I had to implement overlays to make my Fortran programs work. In case you're not an old fart like me, overlays are a bit like implementing virtual memory at the application level instead of in the OS. You split off parts of your program into modules called overlays, and the main program loads an overlay whenever it needs the functions in that overlay.

The trick is to make sure that none of your overlays rely on functions that reside in any other overlays, because your goal is to have only one overlay in memory at a time. I'm not even sure it was possible to have more than one loaded, but it's been too long to remember. Regardless, I'd bet the company paid at least double my yearly salary for some of those PDP-11s, so it was worth the relatively minimal effort for me to break up my program into overlays.
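The overlay discipline above translates surprisingly directly into modern terms: at most one overlay is "resident" at a time, loading a new one evicts the old one, and no overlay may call into another. A toy Python sketch of that discipline (all class and function names here are hypothetical, invented for illustration):

```python
# Toy sketch of PDP-11-style overlays: at most one overlay's
# functions are resident at a time. Names are hypothetical.

class OverlayManager:
    def __init__(self, overlays):
        # overlays: dict mapping overlay name -> dict of functions
        self._overlays = overlays
        self._resident = None    # name of the overlay currently "in memory"
        self._functions = {}     # functions of the resident overlay

    def call(self, overlay, func, *args):
        # Loading a new overlay evicts the previous one entirely,
        # which is why no overlay may depend on another overlay.
        if self._resident != overlay:
            self._functions = self._overlays[overlay]
            self._resident = overlay
        return self._functions[func](*args)

mgr = OverlayManager({
    "math":   {"square": lambda x: x * x},
    "string": {"shout":  lambda s: s.upper()},
})
print(mgr.call("math", "square", 7))      # loads the "math" overlay
print(mgr.call("string", "shout", "hi"))  # evicts "math", loads "string"
```

The point of the sketch is the eviction rule: the second `call` silently replaces the resident overlay, exactly the behavior that made cross-overlay dependencies fatal.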

The new VM proposal

What does this have to do with virtual memory performance on the Linux 2.4 kernel?

Everything.

First, think about what the VM is for. The VM is like an extremely sophisticated automated overlay system, only it deals with many more types of memory storage. It is the part of the OS that comes to the rescue when you have used up all your expensive memory and need more.

Here's how it works. The OS finds data in memory that can safely be removed and writes it out to cheaper storage (disk swap space), which frees up expensive memory for other uses. When a program needs data that has been swapped to disk, the OS swaps something else out and brings the needed data back into memory.

Most of the arguments about the VM in Linux revolve around how the OS should decide which memory is swapped and when, and methods to make the process fast and painless.

The remaining arguments are usually about what the OS needs to do when you've filled up all available memory and swap space but some task still needs even more memory. In this case, most people agree that the OS needs to kill one or more running tasks to free up memory. Since you're talking about stopping programs dead in their tracks, you have to address the issue of how the OS decides which tasks are less important than others and can afford to be killed.
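The two failure modes described above can be sketched in a few lines of userspace code driven by /proc/meminfo-style counters. This is a minimal illustration, not the kernel's actual heuristics: the thrashing threshold is made up, and the parser takes meminfo text as a string so it isn't tied to a live system:

```python
# Sketch of the two memory-pressure conditions discussed in the text,
# classified from /proc/meminfo-style counters (values in kB).
# The 5% threshold is illustrative, not a real kernel heuristic.

def parse_meminfo(text):
    """Parse 'Key:  12345 kB' lines into a dict of ints (kB)."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            info[key.strip()] = int(fields[0])
    return info

def memory_state(info, thrash_pct=5):
    """Classify memory pressure from parsed counters."""
    free = info["MemFree"]
    total = info["MemTotal"]
    swap_free = info["SwapFree"]
    if free == 0 and swap_free == 0:
        return "out-of-memory"    # RAM and swap both exhausted
    if swap_free < info["SwapTotal"] and 100 * free // total < thrash_pct:
        return "thrashing-risk"   # swap in use and almost no free RAM
    return "ok"

sample = """MemTotal:  8000000 kB
MemFree:    200000 kB
SwapTotal: 2000000 kB
SwapFree:  1500000 kB"""
print(memory_state(parse_meminfo(sample)))  # → thrashing-risk
```

The "out-of-memory" branch is where the real arguments start: once that state is reached, something has to be killed, and the interesting question is how to pick the victim.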

If you don't see how this relates to my PDP story yet, here's a hint. The controversy over the VM involves several extremely talented programmers, each of whom could command hundreds of thousands of dollars per year in salary. They have spent a great deal of their time and brainpower over the past few years figuring out how to squeeze the best performance out of systems with limited RAM and drive space.

Still don't get it? Then let me get right to my proposal for a new VM algorithm for Linux. Granted, this VM algorithm is not meant for typical system loads, but it would solve the most annoying VM problems.

I propose that we create a kernel daemon that checks for either of the following two conditions:

  1. The system swaps to disk so much that you see a severe degradation in performance.
  2. A task needs memory after all available RAM and swap is filled.

If either condition is met, the kernel then kills all tasks except those it needs to display the following message on the screen: "Lay off the doughnuts this week and spend the money to buy another DIMM, you penny pinching skinflint!"

As a bonus, the daemon could check the Internet for current pricing and replace the part about "doughnuts" with some comparison that better represents the current state of the market.
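In the spirit of the proposal, here's what the daemon might look like as a userspace sketch. Everything here is hypothetical and tongue-in-cheek to match the proposal: `check_pressure` stands in for detecting condition 2 by reading /proc/meminfo, and the "kill all tasks" step is reduced to printing the message:

```python
# Whimsical sketch of the proposed "buy more RAM" daemon.
# check_pressure() is a stand-in for condition 2 in the proposal;
# the kill-everything step is reduced to printing the message.

import time

MESSAGE = ("Lay off the doughnuts this week and spend the money to buy "
           "another DIMM, you penny pinching skinflint!")

def check_pressure(meminfo_path="/proc/meminfo"):
    """Return True if RAM and swap both look exhausted (condition 2)."""
    info = {}
    with open(meminfo_path) as f:
        for line in f:
            key, _, rest = line.partition(":")
            if rest.split():
                info[key.strip()] = int(rest.split()[0])
    return info.get("MemFree", 1) == 0 and info.get("SwapFree", 1) == 0

def daemon_loop(poll_seconds=5, once=False):
    """Poll for memory exhaustion; scold the owner when it hits."""
    while True:
        if check_pressure():
            print(MESSAGE)  # in the full proposal: kill everything else first
            return
        if once:
            return
        time.sleep(poll_seconds)
```

Detecting condition 1 (thrashing) is left out of the sketch; in practice it would need rate-of-swapping data rather than a snapshot of free counters.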

Again, I apologize to Andrea and Rik if this has already been suggested, and I suspect it has. Nevertheless, it was therapeutic, if not useful, to offer the advice.

More Stories By Nicholas Petreley

Nicholas Petreley is a computer consultant and author in Asheville, NC.

