Linux: A Revolution in Scientific and Technical Computing

Linux clusters are the fastest-growing type of HPC system

It seems that Linux is everywhere you look these days. Among enterprise, desktop, and even wireless users, Linux's versatility and portability have rapidly made it the operating system of choice. At academic institutions in particular, Linux is quickly becoming the lingua franca through which researchers investigate and collaborate, and Linux-based clusters have become a prerequisite for many modern research environments.

However, as use of Linux clusters becomes more widespread and the applications run on them become more complex, more and more researchers and engineers are running into a fundamental problem: As Linux is scaled to higher processor counts to support the most challenging HPC applications (such as those involving highly complex mathematical models, numerical methods, and scientific visualizations), the operating system faces stresses that it was never designed for. For some applications, that can mean extremely poor performance. For others, it means that Linux is simply not a viable option.

Fortunately, new HPC systems can optimize Linux for the most challenging HPC environments and let it meet the unique communications, management, and reliability demands of running complex applications at high processor counts. As more institutions deploy such systems, investigators worldwide are now using Linux for a wider range of HPC applications than ever before. As the only HPC vendor dedicated solely to supercomputing, Cray is working at the forefront of this movement and believes that Linux will be a key operating environment for academic HPC users for years to come.

Linux in HPC Environments
Many researchers and engineers have made Linux their operating environment of choice due to its wide familiarity, broad support, and ease of use for both users and administrators. Today Linux clusters are the fastest-growing type of HPC system. Clusters represented a third of the technical server market in 2004 and half the market by the first quarter of 2005. And, between 2002 and 2004, revenue from Linux-based systems more than quadrupled. In addition, key applications in many fields - including most independent software vendor (ISV) applications such as computational fluid dynamics codes used for computer-aided engineering (CAE) - are now certified for Linux and many were even designed to run specifically on Linux.

However, in large-scale HPC environments running the most demanding HPC applications, the standard Linux operating system can be hard pressed to maintain acceptable levels of performance. For example, standard Linux sporadically executes low-priority functions such as operating system daemons. In a desktop or small cluster, this kind of activity is usually beneficial and has little impact on application performance. But in more advanced HPC environments with hundreds or thousands of processors working in close coordination, it can lead to "operating system jitter" in which most processors must sit idle at application barriers waiting for a few processors to catch up, causing significant performance degradation.
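The effect compounds with scale. The toy simulation below is only a sketch (the one-percent interruption probability and the half-step delay are assumed, illustrative values, not measurements from any Cray or cluster system), but it shows how a rare, cheap interruption on any one processor becomes a near-certain delay for the whole machine once thousands of processors synchronize at a barrier.

```python
import random

def barrier_step(num_procs, work_time=1.0, daemon_prob=0.01, daemon_delay=0.5):
    """Time for one compute phase that ends in a barrier.

    Every processor does the same amount of work, but each one has a small
    chance of being interrupted by an OS daemon. The barrier completes only
    when the slowest processor finishes, so one interruption delays them all.
    """
    per_proc = [work_time + (daemon_delay if random.random() < daemon_prob else 0.0)
                for _ in range(num_procs)]
    return max(per_proc)

def average_barrier_time(num_procs, steps=1000):
    """Mean time per phase; with work_time=1.0 the ideal is 1.0,
    so this value is also the slowdown factor caused by jitter."""
    return sum(barrier_step(num_procs) for _ in range(steps)) / steps

if __name__ == "__main__":
    random.seed(0)
    for n in (4, 64, 1024, 4096):
        print(f"{n:5d} processors: mean barrier time {average_barrier_time(n):.3f}x ideal")
```

With these assumptions, a handful of processors lose only a percent or two per phase, while a few thousand processors almost always wait for at least one straggler - which is why eliminating or synchronizing housekeeping activity becomes critical at scale.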

Standard Linux has other difficulties when scaling to large-scale HPC systems, including handling I/O to a shared global file system, managing thousands of instances of Linux booting off hundreds of nodes in local disks, and simply coordinating basic functions (such as starting and stopping processes) across hundreds or thousands of processors. Aside from impeding system performance, these issues also mean that typical Linux cluster middleware may not be reliable enough to support the most demanding HPC applications. For a computation that requires several weeks to complete, problems like these can cause an entire run to abort and days' worth of computation to be lost.

Recognizing these challenges on the one hand and the enormous utility of Linux for academic HPC users on the other, researchers at Cray and elsewhere have worked to optimize Linux for advanced HPC environments. Today, both the Cray XD1 and Cray XT3 systems incorporate some of these techniques. (While both Cray systems are purpose-built to deliver high sustained application performance, the Cray XD1 is more commonly used for mid-range scientific and technical computing, such as running ISV codes, while the Cray XT3 is typically deployed in environments with thousands of processors. The two systems resolve some of these issues differently.) These strategies allow larger-scale HPC systems to address challenges such as:

  • OS jitter: The Cray XD1 uses a Linux Synchronized Scheduler (LSS) to synchronize Linux housekeeping functions system-wide to better than a microsecond resolution. The Cray XT3 takes an alternative approach, using full Linux only on service nodes that handle administrative, user, and I/O functions, where Linux offers the greatest advantages. Compute nodes run a specialized lightweight kernel that minimizes application interrupts. System calls that require a full-featured operating system are forwarded to the Linux nodes. The system handles this distribution of labor dynamically, with no additional management required by the user.
  • File system efficiency: Both the Cray XD1 and XT3 systems use the Lustre parallel file system from Cluster File Systems, Inc. to provide the scalability and reliability that traditional NFS lacks in an HPC environment. A high-performance, highly available, object-based architecture, Lustre was designed specifically for HPC systems.
  • System management: The Cray XD1 employs Cray's Active Manager software, which streamlines the management of hundreds of copies of Linux across an HPC system. The Cray XT3 uses a single shared root file system that lets administrators view and manage hundreds of nodes as a single system. Both Cray systems also include sophisticated workload management and monitoring systems.
With these enhancements, scientists and engineers are using Linux-based Cray systems to successfully run even the most complex HPC applications in their operating environment of choice. The following are just three examples of the breakthrough science currently being done on Linux-based Cray HPC systems.

High-Resolution Earthquake Modeling at Pittsburgh Supercomputing Center
Modeling earthquakes can be a monumental task. Accurate simulations must resolve phenomena across vast spatial scales from meters to hundreds of kilometers, and time scales from hundredths of a second to hundreds of seconds. Compounding the complexity of the problem, ground motion is strongly influenced by complex soil properties, which can be observed only indirectly.

Researchers Jacobo Bielak and David O'Hallaron from Carnegie Mellon University, Omar Ghattas from the University of Texas at Austin, Steven Day from San Diego State University, and Kwan-Liu Ma from the University of California at Davis have taken up the challenge of developing three-dimensional seismic models of earthquakes. Their breakthrough application Quake uses an innovative three-dimensional "inverse modeling" approach to model the geologically complex Greater Los Angeles Basin. Using seismic measurements from the surface (such as data from past earthquakes), the Quake team can create an improved model of the current subsurface geology. The technique provides detailed information on the three-dimensional structure of the sub-surface region, including the impact that recent quakes have had on that geology and associated faults. The project is supported by the National Science Foundation and the Southern California Earthquake Center (SCEC).
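In its simplest form, inverse modeling means estimating parameters you cannot observe directly from measurements that depend on them. The sketch below is only a toy, one-dimensional illustration of that idea; the forward operator, noise level, and regularization are arbitrary assumptions with no connection to the actual Quake code, which solves a far larger, nonlinear, three-dimensional problem.

```python
import numpy as np

# Toy 1-D "subsurface" with n layers; each layer has an unknown property m[i]
# (for example, a stiffness-like parameter). Surface observations d are a
# smoothed, noisy view of m: d = G @ m + noise, with G a known forward operator.
rng = np.random.default_rng(42)
n = 50
true_m = 1.0 + 0.5 * np.sin(np.linspace(0, 3 * np.pi, n))   # unknown profile

# Assumed forward operator: each observation averages a few neighboring layers.
G = np.zeros((n, n))
for i in range(n):
    lo, hi = max(0, i - 2), min(n, i + 3)
    G[i, lo:hi] = 1.0 / (hi - lo)

d = G @ true_m + 0.01 * rng.standard_normal(n)               # "measured" data

# Inverse step: regularized least squares, minimizing ||G m - d||^2 + lam ||m||^2
lam = 1e-3
m_est = np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ d)

print("max reconstruction error:", np.max(np.abs(m_est - true_m)))
```

The structure, however, is the same: observations made at the surface constrain an estimate of the material properties below it.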

Seismic wavelengths are determined by the stiffness of sub-surface materials, which can vary significantly (especially in highly heterogeneous regions such as the Greater Los Angeles Basin), and by the frequency range of the propagating waves. Softer material, as is prevalent in the Greater Los Angeles Basin, produces shorter seismic wavelengths. Shorter wavelengths require much higher model resolution - and a much denser mesh - to model seismic wave propagation, and hence, an enormous amount of computation.

Adding to the challenge, the investigators wish to model higher-frequency ground motion, since it's seismic waves in the range of 1Hz to 5Hz that present the greatest danger to common low-rise structures. But each doubling of frequency requires a 16-fold increase in computing power. As a result, previous simulations have only modeled up to 0.5Hz.
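The 16-fold figure follows from simple dimensional counting: doubling the highest resolved frequency halves the shortest wavelength, so the mesh spacing must be halved along each of the three spatial dimensions (eight times as many elements), and the time step must be halved as well, doubling the number of steps. The short sketch below works through that arithmetic; the scaling exponents come from this counting argument, not from any published benchmark.

```python
def cost_factor(freq_ratio):
    """Relative compute cost when the resolved frequency grows by freq_ratio.

    Mesh spacing shrinks like 1/freq in each of 3 spatial dimensions and the
    time step shrinks like 1/freq, so cost scales roughly as freq_ratio**4.
    """
    return freq_ratio ** 4

def grid_factor(freq_ratio):
    """Relative number of mesh elements (3 spatial dimensions only)."""
    return freq_ratio ** 3

print("doubling frequency, compute:", cost_factor(2))        # 16x
print("0.5 Hz -> 2 Hz, mesh elements:", grid_factor(2 / 0.5))  # 64x
print("0.5 Hz -> 2 Hz, compute:", cost_factor(2 / 0.5))        # 256x
```

Going from 0.5Hz to 2Hz is two doublings, hence roughly 64 times as many mesh elements and 256 times the computation.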

To create higher-frequency, higher-resolution simulations than have been done previously, the Quake team is using the Cray XT3 system at the Pittsburgh Supercomputing Center (PSC). The system will run a highly parallel, scalable meshing algorithm to create an extremely fine computational mesh composed of approximately 10 billion elements. This parallel meshing algorithm is integrated with a parallel seismic wave propagation solver and a parallel volume renderer to create an end-to-end parallel simulation capability, enabling some of the largest unstructured mesh simulations ever conducted. At 2Hz, the simulation will model four times the frequency range of previous models and create a grid with 64 times the resolving power of the SCEC's previous "Terashake" simulation - quantifying the effect of higher-frequency seismic waves for the first time.

Using PSC's Cray XT3 system, the Quake team hopes to simulate a magnitude 7.7 earthquake centered over a 230-kilometer portion of the San Andreas fault. By more accurately forecasting ground motion over shorter distances, the investigators hope this work can help identify regions that will be hardest hit in a major earthquake and discover which seismic frequencies will be amplified most by the soil. Ultimately the data can be used to modify building codes in high-risk areas, help engineers design safer building structures, and potentially save lives. (For more information on Quake, visit www.cs.cmu.edu/~quake/.)

Igniting Combustion Modeling at the National Center for Computational Sciences
The physics of combustion are extremely complex, involving numerous dynamic elements across a wide range of scales. For example, studying the physics of turbulence/chemistry interactions in combustion flows requires a detailed understanding of the phenomena occurring in turbulent flows, spanning an enormous range of length and time scales and involving hydrocarbon fuels described by hundreds of chemical species and thousands of elementary reactions. And yet, if researchers want to design more fuel-efficient, environmentally friendly combustion devices, understanding such interactions is critical.
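A rough back-of-the-envelope estimate shows why the size of the chemical mechanism dominates the cost: every grid cell must carry a mass fraction for each species, and every elementary reaction must be evaluated in every cell at every time step. The grid size, species count, and reaction count in the sketch below are purely illustrative assumptions, not figures from the Sandia simulations.

```python
# Illustrative (assumed) problem sizes; not taken from any actual simulation.
grid_cells = 512 ** 3          # uniform 3-D grid
num_species = 100              # "hundreds of chemical species"
num_reactions = 1000           # "thousands of elementary reactions"
bytes_per_value = 8            # double precision

# Memory for the species mass fractions plus a few flow variables per cell.
flow_variables = 5             # e.g. density, three velocity components, energy
state_bytes = grid_cells * (num_species + flow_variables) * bytes_per_value
print(f"state size: {state_bytes / 2**30:.0f} GiB")

# Reaction-rate evaluations per time step (one per reaction per cell).
rate_evals = grid_cells * num_reactions
print(f"reaction-rate evaluations per step: {rate_evals:.2e}")
```

Even these modest assumptions put the working set above a hundred gigabytes and the per-step chemistry work well over a hundred billion rate evaluations, which is why such simulations demand a large-scale system.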

Historically, scientists studying combustion processes relied on physical experiments in which it was impossible to completely characterize the physical processes even with state-of-the-art laser diagnostics. Today, researchers from the Combustion Research Facility at Sandia National Laboratories are using innovative techniques to do detailed combustion simulations that were previously beyond their capabilities. With the aid of the National Center for Computational Sciences at Oak Ridge National Laboratory (ORNL) and its Cray XT3 system, the Sandia team can take advantage of new high-fidelity numerical approaches that can more fully and accurately resolve the component processes of combustion. Unlike physical experiments, these 'numerical' experiments can expose and emphasize the role of phenomena that were previously impossible to explore and reveal the causal relationships at the heart of the physical processes.

More Stories By Jeff Brooks

Jeff Brooks is a product manager for Cray's massively parallel processor (MPP) systems, including the Cray XT3 and its descendants. As such, he leverages his in-depth knowledge of high-performance computing (HPC) to direct Cray XT3 product design and development, bringing new levels of scalability and sustained application performance to HPC.
