
VM: Virtual Memory or Virtual Mayhem?

How to solve the virtual memory problem in all Linux kernels.

(LinuxWorld) -- The fur has been flying in the Linux kernel development community of late, particularly because there's a lot of contention about how Linux should do virtual memory (VM) management. Andrea Arcangeli and Rik van Riel hold the two most visible opinions on how the VM should work, so they're often seen as being at the heart of the struggle. However, it goes much deeper than that. The VM even seems to be a hot button between Linus Torvalds and Alan Cox, although I'm guessing they're not as passionate about the issue as journalists would like them to be in order to get mileage out of the topic.

Take us, for example. None other than our own Joshua Drake called the 2.4 Kernel the Kernel of pain, primarily because of problems with the VM algorithms and changes. Yours truly complained a little about the VM last August.

I'm about to weigh in once again about how I feel about VM, only in more detail this time. We're definitely into milking this topic for all it's worth.

After looking at the various VM algorithms in play, I've come up with an alternative. I'm sure Andrea Arcangeli and Rik van Riel are flooded with the same old suggestions over and over again, and it's entirely possible they've heard this one before. Nevertheless, I'll propose it, anyway.

First, let me walk you through a little of the history that gave me the notion of how best to deal with the Linux VM issue.

The brain-dead system that wouldn't die

Back when I was a programmer by trade, one of the projects I worked on involved Fortran programming for a farm of PDP-11s. I honestly don't remember which OS we had installed on the PDPs, but if I had to guess I'd say it was probably RSX-11/M.

What I'll never forget, however, is the one quirk I hated beyond all others. You couldn't run executable files unless they were stored in contiguous blocks on the disk.

Programmers are a virtual disk-frag factory, so we ran out of contiguous space almost daily. This meant we often found ourselves in the uncomfortable situation of being unable to test the modifications to our programs until someone defragged the drives. The worst of it was that, if a disk defrag utility existed, the systems guys obviously didn't know about it. They solved the problem by taking the systems offline, backing up, and then restoring the contents of the drives. We couldn't even edit code while we waited.

By the way, Microsoft credits Windows NT architect Dave Cutler as the designer of RSX-11/M (see resources). As tempting as it might be to compare the brain-dead pieces of Windows NT to the bizarre behavior of RSX-11/M, it would be unfair to do so for two reasons. First, sources more reliable than Microsoft credit Cutler only with the design of VMS. Cutler borrowed some RSX-11/M code for VMS, but he didn't design RSX-11/M. Second, RSX-11/M was probably only brain-dead because the PDP-11 wasn't much of a brain.

Any computer historian knows the PDP-11 was a breakthrough in affordable computing in its time. Compared to today's desktops, it would be like taking out a second mortgage to buy an abacus. A PDP-11 computer with 8K words of core memory and a 256K disk cost about $30,000 in 1972. It could run you thousands of dollars for an additional 4K of memory.

It's possible that the RSX-11/M architect was an idiot. It's also possible that changing this behavior would have required the resident portion of the RSX-11/M kernel to exceed the typical PDP-11 memory configuration, inflating the price of the system in the process.

I don't happen to recall how much memory we had on our PDP-11s or how much of it was used by RSX-11/M. However, I do know we never had enough. I had to implement overlays to make my Fortran programs work. In case you're not an old fart like me, overlays are a bit like implementing virtual memory at the application level instead of at the OS level. You split off parts of your program into modules called overlays. The main program loads an overlay whenever it needs the functions in that overlay.

The trick is to make sure that none of your overlays rely on functions that reside in any other overlays, because your goal is to have only one overlay in memory at a time. I'm not even sure it was possible to have more than one loaded, but it's been too long to remember. Regardless, I'd bet the company paid at least double my yearly salary for some of those PDP-11s, so it was worth the relatively minimal effort for me to break up my program into overlays.
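The scheme above can be sketched in a few lines. This is a toy simulation (all names are hypothetical, and Python is standing in for Fortran) of an overlay loader that keeps only one overlay resident at a time and counts how often it has to hit the "disk":

```python
# Toy model of a PDP-11-style overlay loader: only one overlay's
# routines may occupy memory at a time.

class OverlayManager:
    """Keeps at most one overlay resident, like a Fortran overlay loader."""

    def __init__(self, overlays):
        self.overlays = overlays  # name -> dict of routines on "disk"
        self.resident = None      # name of the overlay currently in "memory"
        self.loads = 0            # count of load operations (disk I/O)

    def call(self, overlay, routine, *args):
        # Loading an overlay evicts whatever was resident before it.
        if self.resident != overlay:
            self.resident = overlay
            self.loads += 1
        return self.overlays[overlay][routine](*args)

mgr = OverlayManager({
    "payroll": {"gross": lambda hours, rate: hours * rate},
    "reports": {"header": lambda title: f"*** {title} ***"},
})

print(mgr.call("payroll", "gross", 40, 12.5))   # loads "payroll"
print(mgr.call("payroll", "gross", 10, 12.5))   # already resident: no load
print(mgr.call("reports", "header", "WEEKLY"))  # evicts "payroll"
print(mgr.loads)                                # 2 loads so far
```

The whole game, then as now, is minimizing the `loads` counter: every call that crosses an overlay boundary costs a trip to the disk.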

The new VM proposal

What does this have to do with virtual memory performance on the Linux 2.4 kernel?

Everything.

First, think about what the VM is for. The VM is like an extremely sophisticated automated overlay system, only it deals with many more types of memory storage. It is the part of the OS that comes to the rescue when you have used up all your expensive memory and need more.

Here's how it works. The OS finds some data in memory that can safely be removed and stores it to the cheaper storage (disk swap space), which frees up some expensive memory for other use. When a program needs the data that has been swapped to disk, the OS swaps something else out to disk and brings the needed data back into memory.
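The swap dance described above can be modeled in miniature. This is a toy sketch, not how any real kernel implements it: RAM holds a fixed number of page frames, and touching a page that isn't resident evicts the least-recently-used victim to swap:

```python
# Toy model (deliberately simplified) of demand paging with LRU eviction.
from collections import OrderedDict

class ToyVM:
    def __init__(self, ram_frames):
        self.ram = OrderedDict()   # page -> data, ordered by recency of use
        self.swap = {}             # page -> data, sitting on cheap "disk"
        self.frames = ram_frames   # how much expensive memory we have
        self.swap_ins = 0          # page faults serviced from swap

    def touch(self, page):
        if page in self.ram:
            self.ram.move_to_end(page)          # mark as recently used
            return
        if page in self.swap:                   # fault: bring it back in
            self.swap_ins += 1
            data = self.swap.pop(page)
        else:                                   # first use: fresh page
            data = f"data-{page}"
        if len(self.ram) >= self.frames:        # RAM full: evict LRU victim
            victim, vdata = self.ram.popitem(last=False)
            self.swap[victim] = vdata
        self.ram[page] = data

vm = ToyVM(ram_frames=2)
for p in ["a", "b", "a", "c"]:   # "b" is least recently used when "c" arrives
    vm.touch(p)
print(sorted(vm.ram))   # ['a', 'c']
print(sorted(vm.swap))  # ['b']
```

Even at this scale you can see where the arguments come from: change the victim-selection line and the whole access pattern swaps differently.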

Most of the arguments about the VM in Linux revolve around how the OS should decide which memory is swapped and when, and methods to make the process fast and painless.

The remaining arguments are usually about what the OS needs to do when you've filled up all available memory and swap space but some task still needs even more memory. In this case, most people agree that the OS needs to kill one or more running tasks to free up memory. Since you're talking about stopping programs dead in their tracks, you have to address the issue of how the OS decides which tasks are less important than others and can afford to be killed.
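To make the victim-selection problem concrete, here's a made-up "badness" heuristic. The weights and fields are pure illustration, not the actual Linux kernel scoring: it prefers to kill large, short-lived, unprivileged tasks:

```python
# Illustrative (invented) out-of-memory victim scoring: not the real
# Linux OOM killer algorithm, just one plausible set of trade-offs.

def badness(task):
    score = task["rss_pages"]                   # big memory hogs score high
    if task["root"]:
        score //= 4                             # spare privileged tasks
    score //= (1 + task["runtime_s"] // 3600)   # spare long-running tasks
    return score

def pick_victim(tasks):
    return max(tasks, key=badness)["pid"]

tasks = [
    {"pid": 1,   "rss_pages": 200,  "root": True,  "runtime_s": 90000},
    {"pid": 314, "rss_pages": 5000, "root": False, "runtime_s": 120},
    {"pid": 42,  "rss_pages": 800,  "root": False, "runtime_s": 7200},
]
print(pick_victim(tasks))   # 314: the fresh, unprivileged memory hog
```

Every constant in that function is an argument waiting to happen, which is roughly the state of the kernel mailing list.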

If you don't see how this relates to my PDP story yet, then here's a hint. The controversy over the VM involves several extremely talented programmers, each of whom could command hundreds of thousands of dollars per year in salaries. They have been spending a great deal of their time and brain power over the past years figuring out how to squeeze the best performance out of systems with limited RAM and drive space.

Still don't get it? Then let me get right to my proposal for a new VM algorithm for Linux. Granted, this VM algorithm is not meant for typical system loads, but it would solve the most annoying VM problems.

I propose that we create a kernel daemon that checks for either of the following two conditions:

  1. The system swaps to disk so much that you see a severe degradation in performance.
  2. A task needs memory after all available RAM and swap is filled.

If either condition is met, the kernel then kills all tasks except those it needs to display the following message on the screen: "Lay off the doughnuts this week and spend the money to buy another DIMM, you penny pinching skinflint!"

As a bonus, the daemon could check the Internet for current pricing and replace the part about "doughnuts" with some comparison that better represents the current state of the market.
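For the record, the daemon practically writes itself. A tongue-in-cheek sketch, with the caveats that the thresholds are arbitrary and a real version would poll /proc/meminfo and /proc/vmstat rather than take numbers as arguments:

```python
# Tongue-in-cheek sketch of the proposed VM daemon. The thresholds are
# arbitrary assumptions; real input would come from /proc/meminfo
# and /proc/vmstat.

NAG = ("Lay off the doughnuts this week and spend the money to buy "
       "another DIMM, you penny pinching skinflint!")

def should_nag(mem_free_kb, swap_free_kb, swap_total_kb, pages_swapped_per_s):
    # Condition 1: thrashing -- heavy, sustained swap traffic.
    if pages_swapped_per_s > 500:       # arbitrary pain threshold
        return True
    # Condition 2: a task wants memory but RAM and swap are both exhausted.
    if mem_free_kb == 0 and swap_free_kb == 0 and swap_total_kb > 0:
        return True
    return False

# A quiet system keeps running; a thrashing one gets the message.
print(should_nag(512000, 1000000, 2000000, 3))    # False
print(should_nag(8000, 400000, 2000000, 2200))    # True
if should_nag(0, 0, 2000000, 0):
    print(NAG)
```

The kill-everything-else part is left as an exercise for the reader, which is the traditional fate of the hard part.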

Again, I apologize to Andrea and Rik if this has already been suggested, and I suspect it has. Nevertheless, it was therapeutic, if not useful, to offer the advice.

More Stories By Nicholas Petreley

Nicholas Petreley is a computer consultant and author in Asheville, NC.


