
Linux: A Revolution in Scientific and Technical Computing

Linux clusters are the fastest-growing type of HPC system

It seems that Linux is everywhere you look these days. Among enterprise, desktop, and even wireless users, Linux's versatility and portability have rapidly made it the operating system of choice. At academic institutions in particular, Linux is quickly becoming the lingua franca through which researchers investigate and collaborate, and Linux-based clusters have become a prerequisite for many modern research environments.

However, as use of Linux clusters becomes more widespread and the applications run on them grow more complex, more and more researchers and engineers are running into a fundamental problem: when Linux is scaled to the high processor counts that the most challenging HPC applications demand (such as those involving highly complex mathematical models, numerical methods, and scientific visualizations), the operating system faces stresses it was never designed to handle. For some applications, that means extremely poor performance. For others, it means that Linux is simply not a viable option.

Fortunately, new HPC systems can optimize Linux for the most challenging HPC environments and let it meet the unique communications, management, and reliability demands of running complex applications at high processor counts. As more institutions deploy such systems, investigators worldwide are now using Linux for a wider range of HPC applications than ever before. As the only HPC vendor dedicated solely to supercomputing, Cray is working at the forefront of this movement and believes that Linux will be a key operating environment for academic HPC users for years to come.

Linux in HPC Environments
Many researchers and engineers have made Linux their operating environment of choice due to its wide familiarity, broad support, and ease of use for both users and administrators. Today Linux clusters are the fastest-growing type of HPC system. Clusters represented a third of the technical server market in 2004 and half the market by the first quarter of 2005, and between 2002 and 2004, revenue from Linux-based systems more than quadrupled. In addition, key applications in many fields - including most independent software vendor (ISV) applications, such as the computational fluid dynamics codes used for computer-aided engineering (CAE) - are now certified for Linux, and many were even designed to run specifically on Linux.

However, in large-scale HPC environments running the most demanding HPC applications, the standard Linux operating system can be hard pressed to maintain acceptable levels of performance. For example, standard Linux sporadically executes low-priority housekeeping tasks such as operating system daemons. On a desktop or small cluster, this activity is usually beneficial and has little impact on application performance. But in more advanced HPC environments, with hundreds or thousands of processors working in close coordination, it can lead to "operating system jitter," in which most processors sit idle at application barriers waiting for a few processors to catch up, causing significant performance degradation.
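
The effect is easy to demonstrate. What follows is a minimal sketch, not drawn from the article, that assumes an MPI environment such as Open MPI or MPICH: every rank performs identical work between barriers, so the spread between the fastest and slowest ranks approximates the jitter that each barrier converts into machine-wide idle time.

/* Minimal jitter probe -- an illustrative sketch, not from the article.
   Assumes an MPI toolchain: mpicc jitter.c -o jitter && mpirun -np 64 ./jitter */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int iter = 0; iter < 10; iter++) {
        double t0 = MPI_Wtime();

        volatile double x = 0.0;            /* identical work on every rank */
        for (long i = 0; i < 50000000L; i++)
            x += 1e-9;

        double elapsed = MPI_Wtime() - t0, min_t, max_t;
        MPI_Reduce(&elapsed, &min_t, 1, MPI_DOUBLE, MPI_MIN, 0, MPI_COMM_WORLD);
        MPI_Reduce(&elapsed, &max_t, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
        if (rank == 0)                      /* spread = time the fast ranks idle */
            printf("iter %d: fastest %.4f s, slowest %.4f s, spread %.4f s\n",
                   iter, min_t, max_t, max_t - min_t);

        MPI_Barrier(MPI_COMM_WORLD);        /* everyone waits for the slowest */
    }
    MPI_Finalize();
    return 0;
}

On a quiet system the reported spread stays near zero; sporadic daemon activity on any one node widens it, and the barrier stretches every rank's iteration to the slowest time.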

Standard Linux has other difficulties when scaling to large-scale HPC systems, including handling I/O to a shared global file system, managing thousands of instances of Linux booting off local disks on hundreds of nodes, and simply coordinating basic functions (such as starting and stopping processes) across hundreds or thousands of processors. Aside from impeding system performance, these issues also mean that typical Linux cluster middleware may not be reliable enough to support the most demanding HPC applications. For a computation that requires several weeks to complete, problems like these can cause an entire run to abort and days' worth of computation to be lost.

Recognizing these challenges on the one hand and the enormous utility of Linux for academic HPC users on the other, researchers at Cray and elsewhere have worked to optimize Linux for advanced HPC environments. Today, both the Cray XD1 and Cray XT3 systems incorporate some of these techniques. (While both Cray systems are purpose-built to deliver high sustained application performance, the Cray XD1 is more commonly used for mid-range scientific and technical computing, such as running ISV codes, while the Cray XT3 is typically deployed in environments with thousands of processors. The two systems resolve some of these issues differently.) These strategies allow larger-scale HPC systems to address challenges such as:

  • OS jitter: The Cray XD1 uses a Linux Synchronized Scheduler (LSS) to synchronize Linux housekeeping functions system-wide with better-than-microsecond resolution (a simplified sketch of this epoch-alignment idea follows this list). The Cray XT3 takes an alternative approach, using full Linux only on service nodes that handle administrative, user, and I/O functions, where Linux offers the greatest advantages. Compute nodes run a specialized lightweight kernel that minimizes application interrupts; system calls that require a full-featured operating system are forwarded to the Linux nodes. The system handles this division of labor dynamically, with no additional management required by the user.
  • File system efficiency: Both the Cray XD1 and XT3 systems use the Lustre parallel file system from Cluster File Systems, Inc. to provide the scalability and reliability that traditional NFS lacks in an HPC environment. A high-performance, highly available, object-based architecture, Lustre was designed specifically for HPC systems.
  • System management: The Cray XD1 employs Cray's Active Manager software, which streamlines the management of hundreds of copies of Linux across an HPC system. The Cray XT3 uses a single shared root file system that lets administrators view and manage hundreds of nodes as a single system. Both Cray systems also include sophisticated workload management and monitoring systems.
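
To make the first bullet concrete, here is a hypothetical sketch of epoch-aligned housekeeping in the spirit of the LSS; it is not Cray's implementation. It assumes that node clocks are already tightly synchronized (for example via NTP or PTP) and simply defers housekeeping work to the next global one-second boundary, so that every node takes the interruption at the same instant rather than at random times.

/* Hypothetical epoch-aligned housekeeping, in the spirit of (but not) the LSS.
   Assumes node clocks are synchronized externally (e.g., NTP/PTP). POSIX C;
   compile with cc -O2 lss_sketch.c (add -lrt on older glibc). */
#include <stdio.h>
#include <time.h>

#define EPOCH_NS 1000000000LL            /* one housekeeping slot per second */

/* Sleep until the next whole-epoch boundary on the shared clock. */
static void sleep_until_next_epoch(void)
{
    struct timespec now, target;
    clock_gettime(CLOCK_REALTIME, &now);
    long long ns   = (long long)now.tv_sec * 1000000000LL + now.tv_nsec;
    long long next = (ns / EPOCH_NS + 1) * EPOCH_NS;     /* round up */
    target.tv_sec  = (time_t)(next / 1000000000LL);
    target.tv_nsec = (long)(next % 1000000000LL);
    clock_nanosleep(CLOCK_REALTIME, TIMER_ABSTIME, &target, NULL);
}

int main(void)
{
    for (int tick = 0; tick < 5; tick++) {
        sleep_until_next_epoch();
        /* Housekeeping (daemon work, statistics flush, etc.) would run here,
           at the same instant on every node, instead of at random times. */
        printf("housekeeping tick %d\n", tick);
    }
    return 0;
}

Because the stalls now coincide across the machine, an application barrier waits once for everyone instead of once per straggler.
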
With these enhancements, scientists and engineers are using Linux-based Cray systems to successfully run even the most complex HPC applications in their operating environment of choice. The following are just three examples of the breakthrough science currently being done on Linux-based Cray HPC systems.

High-Resolution Earthquake Modeling at Pittsburgh Supercomputing Center
Modeling earthquakes can be a monumental task. Accurate simulations must resolve phenomena across vast spatial scales from meters to hundreds of kilometers, and time scales from hundredths of a second to hundreds of seconds. Compounding the complexity of the problem, ground motion is strongly influenced by complex soil properties, which can be observed only indirectly.

Researchers Jacobo Bielak and David O'Hallaron from Carnegie Mellon University, Omar Ghattas from the University of Texas at Austin, Steven Day from San Diego State University, and Kwan-Liu Ma from the University of California at Davis have taken up the challenge of developing three-dimensional seismic models of earthquakes. Their breakthrough application Quake uses an innovative three-dimensional "inverse modeling" approach to model the geologically complex Greater Los Angeles Basin. Using seismic measurements from the surface (such as data from past earthquakes), the Quake team can create an improved model of the current subsurface geology. The technique provides detailed information on the three-dimensional structure of the sub-surface region, including the impact that recent quakes have had on that geology and associated faults. The project is supported by the National Science Foundation and the Southern California Earthquake Center (SCEC).
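
The inverse-modeling idea can be illustrated with a deliberately tiny toy problem; the sketch below is entirely hypothetical and far simpler than Quake's actual formulation. It assumes a linear forward model d = Gm that maps subsurface parameters m to surface observations d, and recovers m from measured data by gradient descent on the misfit ||Gm - d||^2.

/* Toy linear inverse problem: recover 2 subsurface parameters m from 3
   surface observations d, given a known forward model d = G m.
   A hypothetical illustration only, not Quake's actual formulation. */
#include <stdio.h>

#define NOBS 3
#define NPAR 2

int main(void)
{
    /* Forward operator G (think: ray path lengths through two layers). */
    double G[NOBS][NPAR] = {{1.0, 2.0}, {2.0, 1.0}, {1.0, 1.0}};
    double m_true[NPAR]  = {0.5, 0.25};     /* the "unknown" subsurface model */
    double d[NOBS];

    /* Synthesize the surface observations d = G m_true. */
    for (int i = 0; i < NOBS; i++) {
        d[i] = 0.0;
        for (int j = 0; j < NPAR; j++) d[i] += G[i][j] * m_true[j];
    }

    /* Gradient descent on J(m) = 1/2 ||G m - d||^2 ; grad J = G^T (G m - d). */
    double m[NPAR] = {0.0, 0.0};
    double rate = 0.1;
    for (int it = 0; it < 500; it++) {
        double r[NOBS];                     /* residual G m - d */
        for (int i = 0; i < NOBS; i++) {
            r[i] = -d[i];
            for (int j = 0; j < NPAR; j++) r[i] += G[i][j] * m[j];
        }
        for (int j = 0; j < NPAR; j++) {
            double g = 0.0;
            for (int i = 0; i < NOBS; i++) g += G[i][j] * r[i];
            m[j] -= rate * g;
        }
    }
    printf("recovered m = (%.4f, %.4f), true m = (%.4f, %.4f)\n",
           m[0], m[1], m_true[0], m_true[1]);
    return 0;
}

Real seismic inversion is nonlinear and vastly larger, but the loop is the same in spirit: predict surface observations from a candidate model, compare against measurements, and adjust the model to reduce the misfit.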

Seismic wavelengths are determined by the stiffness of sub-surface materials, which can vary significantly (especially in highly heterogeneous regions such as the Greater Los Angeles Basin), and by the frequency range of the propagating waves. Softer material, as is prevalent in the Greater Los Angeles Basin, produces shorter seismic wavelengths. Shorter wavelengths require much higher model resolution - and a much denser mesh - to model seismic wave propagation, and hence, an enormous amount of computation.

Adding to the challenge, the investigators wish to model higher-frequency ground motion, since it's seismic waves in the range of 1 Hz to 5 Hz that present the greatest danger to common low-rise structures. But each doubling of frequency requires a 16-fold increase in computing power. As a result, previous simulations have modeled frequencies only up to 0.5 Hz.
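
The 16-fold figure follows from a standard back-of-the-envelope scaling argument, sketched below under the assumption of a roughly uniform mesh and explicit time stepping (the article does not spell out Quake's exact discretization):

% Resolving a wavelength \lambda = v/f with a fixed number of points per
% wavelength forces the grid spacing h \propto 1/f, so the point count grows
% as f^3 in three dimensions; the CFL stability condition then shrinks the
% time step as 1/f, adding one more factor of f:
\[
  N_{\text{grid}} \propto f^{3}, \qquad
  N_{\text{steps}} \propto f, \qquad
  \text{cost} \propto N_{\text{grid}} \, N_{\text{steps}} \propto f^{4}.
\]
% Hence doubling f costs 2^4 = 16 times the work, and quadrupling it
% (0.5 Hz -> 2 Hz) yields 4^3 = 64 times the spatial resolving power.

The same arithmetic underlies the resolving-power figure cited in the next paragraph: going from 0.5 Hz to 2 Hz quadruples the frequency and multiplies the spatial resolving power by 4^3 = 64.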

To create higher-frequency, higher-resolution simulations than have been done previously, the Quake team is using the Cray XT3 system at the Pittsburgh Supercomputing Center (PSC). The system will run a highly parallel, scalable meshing algorithm to create an extremely fine computational mesh composed of approximately 10 billion elements. This meshing algorithm is integrated with a parallel seismic wave propagation solver and a parallel volume renderer, creating an end-to-end parallel simulation capability that will produce some of the largest unstructured mesh simulations ever conducted. At 2 Hz, the simulation will model four times the frequency range of previous models and create a grid with 64 times the resolving power of SCEC's previous "Terashake" simulation - quantifying the effect of higher-frequency seismic waves for the first time.

Using PSC's Cray XT3 system, the Quake team hopes to simulate a magnitude 7.7 earthquake centered over a 230-kilometer portion of the San Andreas fault. By more accurately forecasting ground motion over shorter distances, the investigators hope this work can help identify regions that will be hardest hit in a major earthquake and discover which seismic frequencies will be amplified most by the soil. Ultimately the data can be used to modify building codes in high-risk areas, help engineers design safer building structures, and potentially save lives. (For more information on Quake, visit www.cs.cmu.edu/~quake/.)

Igniting Combustion Modeling at the National Center for Computational Sciences
The physics of combustion are extremely complex, involving numerous dynamic elements across a wide range of scales. For example, studying the physics of turbulence/chemistry interactions in combustion flows requires a detailed understanding of the phenomena occurring in turbulent flows, spanning an enormous range of length and time scales and involving hydrocarbon fuels described by hundreds of chemical species and thousands of elementary reactions. And yet, if researchers want to design more fuel-efficient, environmentally friendly combustion devices, understanding such interactions is critical.

Historically, scientists studying combustion processes relied on physical experiments in which it was impossible to completely characterize the physical processes, even with state-of-the-art laser diagnostics. Today, researchers from the Combustion Research Facility at Sandia National Laboratories are using innovative techniques to perform detailed combustion simulations that were previously beyond their capabilities. With the aid of the National Center for Computational Sciences at Oak Ridge National Laboratory (ORNL) and its Cray XT3 system, the Sandia team can take advantage of new high-fidelity numerical approaches that can more fully and accurately resolve the component processes of combustion. Unlike physical experiments, these "numerical" experiments can expose and emphasize the role of phenomena that were previously impossible to explore, and reveal the causal relationships at the heart of the physical processes.

More Stories By Jeff Brooks

Jeff Brooks is a product manager for Cray's massively parallel processor (MPP) systems, including the Cray XT3 and its descendants. In that role, he leverages his in-depth knowledge of high-performance computing (HPC) to direct Cray XT3 product design and development, bringing new levels of scalability and sustained application performance to HPC.
