NoHadoop: Big Data Requires Not Only Hadoop

The one area where MapReduce/Hadoop wins today is that it's freely available to anyone

Over the past few years, Hadoop has become something of a poster child for the NoSQL movement. Whether it's interpreted as "No SQL" or "Not Only SQL", the message has been clear: if you have big data challenges, then your programming tool of choice should be Hadoop. Sure, continue to use SQL for your ancient legacy stuff, but when you need cutting-edge performance and scalability, it's time to go Hadoop.
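For readers unfamiliar with the model under discussion, MapReduce reduces a computation to a map phase, a framework-managed shuffle, and a reduce phase. The following single-process word-count sketch in plain Python is only an illustration of that programming model, not Hadoop's actual Java API; all function names here are made up for the example.

```python
from collections import defaultdict

# Minimal single-process sketch of the MapReduce model (word count).
# Hadoop distributes these same phases across a cluster; this code just
# shows the shape of the model, not the Hadoop API.

def map_phase(documents):
    """Map: emit one (word, 1) pair per word in each document."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle: group values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine each key's list of values into a final result."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data needs big tools", "big data"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'big': 3, 'data': 2, 'needs': 1, 'tools': 1}
```

The appeal of the model is that only `map_phase` and `reduce_phase` are user code, which is also the root of the limitations the article goes on to describe: everything must be forced into this batch-oriented, two-phase shape.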

The only problem with this story is that the people who really do have cutting-edge performance and scalability requirements today have already moved on from the Hadoop model. A few have moved back to SQL, but the much more significant trend is that, having come to understand the capabilities and limitations of MapReduce and Hadoop, developers are now building a whole raft of new post-Hadoop architectures that are, in most cases, orders of magnitude faster at scale than Hadoop.

The problem with simple batch processing tools like MapReduce and Hadoop is that they are just not powerful enough in any one of the dimensions of the big data space that really matters. If you need complex joins or ACID guarantees, SQL beats Hadoop easily. If you have realtime requirements, Cloudscale beats Hadoop by three or four orders of magnitude. If you have supercomputing requirements, MPI or BSP beats Hadoop easily. If you have graph computing requirements, Google's Pregel beats Hadoop by orders of magnitude. If you need interactive analysis of web-scale data sets, then Google's Dremel architecture beats Hadoop by orders of magnitude. And if you need to incrementally update the analytics on a massive data set continuously, as Google now has to do on its index of the web, then an architecture like Percolator beats Hadoop easily.
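To make the BSP/Pregel contrast concrete: those systems organize computation into synchronized supersteps in which vertices receive messages, update their state, and send messages to neighbors, with a global barrier between supersteps. Here is a toy single-process sketch of that pattern, computing hop distances from vertex 0 on a tiny graph; it is an illustration under invented data, not Pregel's actual API.

```python
# Toy single-process sketch of the BSP/Pregel superstep model.
# Each iteration of the while loop is one superstep: vertices with
# pending messages update their state and send to neighbors; swapping
# inbox/outbox plays the role of the global barrier.

INF = float("inf")
edges = {0: [1, 2], 1: [3], 2: [3], 3: []}          # tiny example graph
dist = {v: (0 if v == 0 else INF) for v in edges}   # hop count from vertex 0

inbox = {v: [] for v in edges}
inbox[0] = [0]                      # source activates itself in superstep 0

while any(inbox.values()):          # run until no vertex has messages
    outbox = {v: [] for v in edges}
    for v, msgs in inbox.items():
        if not msgs:
            continue                # inactive vertex this superstep
        best = min(msgs)
        if best <= dist[v]:         # accept the update, notify neighbors
            dist[v] = best
            for w in edges[v]:
                outbox[w].append(best + 1)
    inbox = outbox                  # barrier: deliver next superstep's messages

print(dist)  # {0: 0, 1: 1, 2: 1, 3: 2}
```

The point of the design is that vertices only ever see local state plus incoming messages, so the framework can partition the graph across machines and run each superstep in parallel, iterating over the graph many times without re-reading it from disk the way a chain of MapReduce jobs would.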

The one area where MapReduce/Hadoop wins today is that it's freely available to anyone, but for those who have reasonably challenging big data requirements, that simple type of architecture is nowhere near enough.

More Stories By Bill McColl

Bill McColl left Oxford University to found Cloudscale. At Oxford he was Professor of Computer Science, Head of the Parallel Computing Research Center, and Chairman of the Computer Science Faculty. Along with Les Valiant of Harvard, he developed the BSP approach to parallel programming. He has led research, product, and business teams, in a number of areas: massively parallel algorithms and architectures, parallel programming languages and tools, datacenter virtualization, realtime stream processing, big data analytics, and cloud computing. He lives in Palo Alto, CA.
