Can Linux Clusters Move into Mainstream Information Technology?

Exclusive Interview with Rob Lucke, Chief Solutions Officer for Vista Solutions Corp

The emergence of commodity supercomputing has driven clusters based on the Linux operating system into engineering and scientific research organizations that previously couldn't afford their own supercomputing resources. But Linux clusters have the potential to become a hot topic in traditional information technology circles as well. The coming year may well be the tipping point when Linux cluster technology escapes its current home in research organizations and moves into the traditional data center.

Building Clustered Linux Systems by Robert Lucke, recently published by Prentice Hall Professional Technical Reference and HP Books, attempts to provide a starting point for organizations interested in building or evaluating their first Linux cluster.

We took this opportunity to have a chat with Robert and ask him a few questions on the subject of Linux and clustering.

What made you want to write a book about building clusters?

Before starting work on clusters, I spent a considerable amount of time tackling workgroup architecture and large-scale system administration problems for my engineering and scientific customers. When I got the opportunity to work on a prototype Itanium 2 cluster at Pacific Northwest National Laboratory, I was fascinated by the new technologies, like the high-speed interconnect from Quadrics, and recognized many familiar management and architectural issues. The more I learned, the more I saw applications for clusters in other, more "traditional" areas. The book itself was a learning experience for me and an attempt to collect and organize cluster-building information for organizations that are investigating clustered solutions.

Why do you think that clusters are an important architecture?

If you have the proper application software, a cluster can scale high-performance, high-availability, or high-throughput resources far beyond anything that is available in a single SMP system. Being able to do this with commodity hardware brings tremendous compute resources within reach of organizations that previously couldn't afford them. I see cluster architectures and techniques as the gateway to some of the resource virtualization that seems to be the Holy Grail of traditional IT departments today. I think this is exciting! I guess it's the love of finding an elegant solution to a problem that drives my excitement. Instead of being a marketecture, clusters represent a real solution to a group of scalable problems.
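The interview doesn't give numbers, but the caveat about "proper application software" can be made concrete with Amdahl's law, the standard back-of-the-envelope model for parallel speedup. The sketch below (an illustration, not anything from the book) shows why the serial fraction of a workload, not the node count, ultimately caps what a cluster can deliver:

```python
# Illustrative sketch: Amdahl's law gives the ideal speedup of a workload
# on N nodes when only a fraction p of the work parallelizes.
# speedup(N) = 1 / ((1 - p) + p / N)

def amdahl_speedup(parallel_fraction: float, nodes: int) -> float:
    """Ideal speedup on `nodes` nodes for a workload whose
    parallelizable fraction is `parallel_fraction`."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / nodes)

# Even at 95% parallel, speedup saturates near 1/(1-p) = 20x:
for n in (4, 64, 1024):
    print(f"{n:5d} nodes: {amdahl_speedup(0.95, n):5.1f}x speedup")
```

Running this shows speedup climbing quickly at first and then flattening toward the 20x ceiling imposed by the 5% serial portion, which is why adding racks of commodity hardware pays off only when the application software truly scales.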

When someone says cluster, what does it mean to you?

I have learned to be very, very careful with the word cluster. It's overloaded and the meaning depends on the audience. Using cluster in a scientific context evokes a different mental picture than if it were used by traditional IT folks. In general, I think of a cluster as a group of separate resources like systems, CPUs and RAM that gets poured into a mold. The shape of the mold determines the final shape and behavior of the clustered solution. Sizing the problem and determining the shape of the mold is the fun part for me.

Besides scientific and engineering environments, do you see any other applications for clusters?

I sure do! There are database clusters, web server clusters, file server clusters, visualization clusters, and on and on. Instead of building clusters that push the upper limits of RAM and CPU resources (thousands to tens of thousands of CPUs, for example), the company I work for concentrates on application-specific clusters. These are smaller, single-function clusters that are meant to run application configurations that would otherwise have required a large and expensive SMP system. The intent is to lower the complexity of building and managing the cluster while still providing a more cost-effective solution for the application. I think this type of approach is generally applicable in any type of computing environment.

What are some of the common mistakes you've encountered in cluster building?

The single biggest problem I run into is what I call pile o'hardware syndrome. That's the notion that you just buy a whole bunch of cheap hardware, rack it up, and a cluster will magically appear out of a pile of pieces. It's still very common to underestimate the amount of work required to make physically separate resources work together as if they were one very large, manageable SMP system. A cluster is still a systems engineering problem that can turn nasty if you aren't careful. But, with advances in pre-racked, pre-cabled hardware from some of the hardware vendors and the cluster software toolkits like OSCAR and Rocks, I see cluster building getting easier all the time.

Why do you think that Linux is the best cluster operating system?

One simple answer is choice. There are commercial distributions, free distributions, white-box distributions, and so forth. If you have a commercial software package like an Oracle database that's qualified against a particular commercial Linux distribution like SuSE or Red Hat, you can build a fully supported cluster configuration. If you want to do research or custom work, there are free distributions like Debian or Fedora. Because the source code is available, you can choose your starting point and degree of customization. This is the best of all possible worlds.

The Linux operating system is stable, manageable and flexible. You are free to configure Linux as you see fit instead of trying to chip away at a black-box operating system that fights you every step of the way. There's a wealth of free management and development tools available. Oh, did I mention that Linux runs on a wide range of commodity hardware, both 32- and 64-bit? What's not to like? Nothing else comes close in my estimation.

What do you see ahead for clustered architectures?

I definitely see Linux clusters moving into mainstream information technology environments. If you look back, the scientific community tends to drive computing technologies that are later adopted by the more conservative IT organizations as business solutions. One modest example I can think of might be the World Wide Web and the Mosaic browser. I firmly believe that clusters, specifically Linux clusters, are poised to repeat this type of adoption pattern. I think we are very close, if not past, the tipping point.

What would you say to someone who is thinking about building his first cluster?

Do it, but do it with your eyes open. Do your homework before starting. Give yourself time to learn. Try not to fall into the pile o'hardware trap. Start small and scale up. Investigate starting points like openMosix, Rocks and OSCAR first. If you don't have time for the learning curve, then have a replicable solution designed and implemented for you.

Conclusion

In addition to their usefulness in scientific and engineering computing environments, I believe that Linux clusters and clustering techniques will be an important addition to the standard information technology solutions in the corporate datacenter. The trick is going to be sharing the cluster-building knowledge that's available in universities and research institutions with the traditional information technology organization. Because of its stability, flexibility, open nature, manageability and availability on a wide range of commodity hardware, I believe that Linux is the correct choice for the creation of clustered solutions. I am really looking forward to the next few years. I believe it will be an exciting time for both Linux and clusters.

About Rob Lucke

Rob Lucke is currently chief solutions officer for Vista Solutions Corp. (http://www.VistaSolutions.Net), concentrating on technical and scientific computing. Rob's fields of expertise include Linux compute clusters, technical systems architecture, large-scale system administration techniques, network file systems, heterogeneous interoperability, software development, and application- and system-level performance tuning. Rob has over 30 years of experience in computing and software of all types, from real-time data acquisition to transaction processing. His first book, Designing and Implementing Computer Workgroups, was published in 1999. His second book, Building Clustered Linux Systems, was published in September of 2004. Rob is Red Hat Certified Engineer #807200931604117.

More Stories By Ibrahim Haddad

Ibrahim Haddad is a member of the management team at The Linux Foundation responsible for technical, legal and compliance projects and initiatives. Prior to that, he ran the Open Source Office at Palm, the Open Source Technology Group at Motorola, and Global Telecommunications Initiatives at The Open Source Development Labs. Ibrahim started his career as a member of the research team at Ericsson Research focusing on advanced research for system architecture of 3G wireless IP networks and on the adoption of open source software in telecom. Ibrahim graduated from Concordia University (Montréal, Canada) with a Ph.D. in Computer Science. He is a Contributing Editor to the Linux Journal. Ibrahim is fluent in Arabic, English and French. He can be reached via http://www.IbrahimHaddad.com.
