Real Time Big Data or Data in Motion

Data in motion includes social network data feeds, clickstreams, trading data, sensor data, etc.

Most discussions of Big Data begin and end with Hadoop. Hadoop is essentially the commercial incarnation of HPC (High Performance Computing), whose underlying technologies have been around for years: clustering, parallel processing, and distributed file systems. In today’s parlance, read those as Hadoop clusters on commodity hardware, the MapReduce algorithm, and HDFS, in that order. There is no doubt that Hadoop has taken off in a big way, but it does not address one big emerging area: real-time query and analysis on data that is moving all the time. Data can be categorized into three buckets – transactional data, analytics on data at rest, and analytics on data in flight (streaming, real time). We are talking about the last one here.
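
To make the map-reduce idea concrete, here is a minimal sketch in plain Python (not Hadoop itself, which runs the same pattern across a cluster on top of HDFS): a map phase emits key/value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. The clickstream records and function names are illustrative only.

```python
from collections import defaultdict

# Illustrative clickstream records; in Hadoop these would live in HDFS blocks.
records = [
    "user1 /home", "user2 /products", "user1 /products",
    "user3 /home", "user2 /checkout",
]

def map_phase(record):
    """Map: emit a (key, value) pair for each input record."""
    user, page = record.split()
    yield (page, 1)

def reduce_phase(key, values):
    """Reduce: aggregate all values that share the same key."""
    return (key, sum(values))

# Shuffle: group intermediate pairs by key (Hadoop does this across the cluster).
grouped = defaultdict(list)
for record in records:
    for key, value in map_phase(record):
        grouped[key].append(value)

results = [reduce_phase(key, values) for key, values in grouped.items()]
print(results)  # [('/home', 2), ('/products', 2), ('/checkout', 1)]
```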

It is not just about velocity, but also latency. When an event occurs, we need to act on it within seconds or minutes. We have to “react in the moment”. First, the enterprise data warehouse (EDW) needs to be loaded with real-time data, as opposed to offline batch loading; what we need is continuous loading and data ingestion. Second, we have to query and analyze this fresh data as it arrives in order to make split-second decisions. EDWs were designed years ago for offline batch processing and are unsuited for this new role. Hence newer technologies for in-memory processing, querying, and ingestion have to be considered. As someone said – RAM is the new disk, disk is the new tape, and tape is the new microfiche (if that still exists). One TB of RAM costs around $4K today, and the price keeps falling. Most EDWs are under 5 TB, so enterprises should evaluate the economics of in-memory processing.
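
As a back-of-the-envelope check on the economics mentioned above, the sketch below simply multiplies the ballpark figures quoted in this paragraph ($4K per TB of RAM, a 5 TB warehouse); these are the article's estimates, not current pricing.

```python
# Back-of-the-envelope cost of keeping an entire EDW in memory,
# using the ballpark figures quoted above (illustrative, not current prices).
RAM_COST_PER_TB_USD = 4_000   # "One TB of RAM costs around $4K today"
EDW_SIZE_TB = 5               # "Most EDWs are under 5 TB"

ram_cost = RAM_COST_PER_TB_USD * EDW_SIZE_TB
print(f"RAM to hold a {EDW_SIZE_TB} TB warehouse: ~${ram_cost:,}")  # ~$20,000
```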

Data in motion includes social network data feeds, clickstreams, trading data, sensor data, etc. Velocity is the new big thing, and actions on such data must be taken within seconds. There is economic value as well as safety value. For example, at Citibank a 100-millisecond processing delay can cost $1 million of business. Such latency requirements also drastically shrink the analysis window for finding root causes. Scale-out solutions on commodity hardware offer a big economic advantage. Solutions such as MemSQL, SAP HANA, Argyle Systems, Twitter's Storm, and Apache Spark/Shark are bringing in-memory processing architectures to this area of data in motion.
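
To give a feel for what acting on data in motion within seconds looks like, here is a minimal sliding-window sketch in plain Python; a production system would run an engine such as Storm or Spark Streaming, and the sensor feed, alert threshold, and window length below are made up for illustration.

```python
import time
from collections import deque

# Hypothetical alerting rule: flag a sensor if it reports ALERT_THRESHOLD or more
# readings above LIMIT within a WINDOW_SECONDS sliding window.
WINDOW_SECONDS = 10
ALERT_THRESHOLD = 3
LIMIT = 100.0

window = deque()  # (timestamp, value) pairs of over-limit readings in the window

def on_event(timestamp, value):
    """Process one streaming reading and react 'in the moment'."""
    # Evict readings that have fallen out of the sliding window.
    while window and timestamp - window[0][0] > WINDOW_SECONDS:
        window.popleft()
    if value > LIMIT:
        window.append((timestamp, value))
        if len(window) >= ALERT_THRESHOLD:
            print(f"ALERT at {timestamp:.1f}s: {len(window)} readings over {LIMIT}")

# Simulated sensor feed; a real system would consume a message queue.
for i, reading in enumerate([98.0, 101.5, 103.2, 99.0, 104.8, 107.1]):
    on_event(time.time() + i, reading)
```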

More Stories By Jnan Dash

Jnan Dash is Senior Advisor at EZShield Inc., Advisor at ScaleDB, and Board Member at Compassites Software Solutions. He has lived in Silicon Valley since 1979. Formerly he was the Chief Strategy Officer (Consulting) at Curl Inc., before which he spent ten years at Oracle Corporation, where he was Group Vice President, Systems Architecture and Technology, until 2002. He was responsible for setting Oracle’s core database and application server product directions and interacted with customers worldwide in translating future needs into product plans. Before that he spent 16 years at IBM. He blogs at http://jnandash.ulitzer.com.