VM: Virtual Memory or Virtual Mayhem?

How to solve the virtual memory problem in all Linux kernels.

(LinuxWorld) -- The fur has been flying in the Linux kernel development community of late, particularly because there's a lot of contention about how Linux should do virtual memory management (VM). Andrea Arcangeli and Rik van Riel hold the two most visible opinions on how the VM should work, so they're often seen as being at the heart of the struggle. However, it goes much deeper than that. The VM even seems to be a hot button between Linus Torvalds and Alan Cox, although I'm guessing they're not as passionate about the issue as the journalists milking the topic would like them to be.

Take us, for example. None other than our own Joshua Drake called the 2.4 kernel the "kernel of pain," primarily because of problems with the VM algorithms and the changes made to them. Yours truly complained a little about the VM last August.

I'm about to weigh in once again about how I feel about VM, only in more detail this time. We're definitely into milking this topic for all it's worth.

After looking at the various VM algorithms in play, I've come up with an alternative. I'm sure Andrea Arcangeli and Rik van Riel are flooded with the same old suggestions over and over again, and it's entirely possible they've heard this one before. Nevertheless, I'll propose it, anyway.

First, let me walk you through a little of the history that gave me the notion of how best to deal with the Linux VM issue.

The brain-dead system that wouldn't die

Back when I was a programmer by trade, one of the projects I worked on involved Fortran programming for a farm of PDP-11s. I honestly don't remember which OS we had installed on the PDPs, but if I had to guess I'd say it was probably RSX-11/M.

What I'll never forget, however, is the one quirk I hated beyond all others. You couldn't run executable files unless they were stored in contiguous blocks on the disk.

Programmers are a virtual disk-frag factory, so we ran out of contiguous space almost daily. That meant we often found ourselves in the uncomfortable position of being unable to test our modified programs until someone defragged the drives. The worst of it was that, if a disk defrag utility existed, the systems guys obviously didn't know about it. They solved the problem by taking the systems offline, backing up the drives, and then restoring their contents. We couldn't even edit code while we waited.

By the way, Microsoft credits Windows NT architect Dave Cutler as the designer of RSX-11/M (see resources). As tempting as it might be to compare the brain-dead pieces of Windows NT to the bizarre behavior of RSX-11/M, it would be unfair to do so for two reasons. First, more reliable sources than Microsoft only credit Cutler for the design of VMS. Cutler borrowed some RSX-11/M code for VMS, but didn't design RSX-11/M. Second, RSX-11/M was probably only brain-dead because the PDP-11 wasn't much of a brain.

Any computer historian knows the PDP-11 was a breakthrough in affordable computing in its time. Compared to today's desktops, it would be like taking out a second mortgage to buy an abacus. A PDP-11 computer with 8K words of core memory and a 256K disk cost about $30,000 in 1972. It could run you thousands of dollars for an additional 4K of memory.

It's possible that the RSX-11/M architect was an idiot. It's also possible that changing this behavior would have forced the resident portion of the RSX-11/M kernel to exceed the typical PDP-11 memory configuration, inflating the price of the system in the process.

I don't happen to recall how much memory we had on our PDP-11s or how much of it was used by RSX-11/M. However, I do know we never had enough. I had to implement overlays to make my Fortran programs work. In case you're not an old fart like me, overlays are a bit like implementing virtual memory at the application level instead of in the OS. You split off parts of your program into modules called overlays. The main program loads an overlay whenever it needs the functions in that overlay.

The trick is to make sure that none of your overlays rely on functions that reside in any other overlays, because your goal is to have only one overlay in memory at a time. I'm not even sure it was possible to have more than one loaded, but it's been too long to remember. Regardless, I'd bet the company paid at least double my yearly salary for some of those PDP-11s, so it was worth the relatively minimal effort for me to break up my program into overlays.

The new VM proposal

What does this have to do with virtual memory performance on the Linux 2.4 kernel?

Everything.

First, think about what the VM is for. The VM is like an extremely sophisticated automated overlay system, only it deals with many more types of memory storage. It is the part of the OS that comes to the rescue when you have used up all your expensive memory and need more.

Here's how it works. The OS finds some data in memory that can safely be removed and stores it to the cheaper storage (disk swap space), which frees up some expensive memory for other use. When a program needs the data that has been swapped to disk, the OS swaps something else out to disk and brings the needed data back into memory.

Most of the arguments about the VM in Linux revolve around how the OS should decide which memory is swapped and when, and methods to make the process fast and painless.

The remaining arguments are usually about what the OS needs to do when you've filled up all available memory and swap space but some task still needs even more memory. In this case, most people agree that the OS needs to kill one or more running tasks to free up memory. Since you're talking about stopping programs dead in their tracks, you have to address the issue of how the OS decides which tasks are less important than others and can afford to be killed.

If you don't see how this relates to my PDP story yet, then here's a hint. The controversy over the VM involves several extremely talented programmers, each of whom could command hundreds of thousands of dollars per year in salary. They have spent a great deal of their time and brainpower over the past few years figuring out how to squeeze the best performance out of systems with limited RAM and drive space.

Still don't get it? Then let me get right to my proposal for a new VM algorithm for Linux. Granted, this VM algorithm is not meant for typical system loads, but it would solve the most annoying VM problems.

I propose that we create a kernel daemon that checks for either of the following two conditions:

  1. The system swaps to disk so much that you see a severe degradation in performance.
  2. A task needs memory after all available RAM and swap is filled.

If either condition is met, the kernel then kills all tasks except those it needs to display the following message on the screen: "Lay off the doughnuts this week and spend the money to buy another DIMM, you penny pinching skinflint!"

As a bonus, the daemon could check the Internet for current pricing and replace the part about "doughnuts" with some comparison that better represents the current state of the market.

Again, I apologize to Andrea and Rik if this has already been suggested, and I suspect it has. Nevertheless, it was therapeutic, if not useful, to offer the advice.

More Stories By Nicholas Petreley

Nicholas Petreley is a computer consultant and author in Asheville, NC.
