In-Memory Database vs. In-Memory Data Grid By @GridGain | @CloudExpo [#BigData]

It's easy to start with technical differences between the two categories

A few months ago, I spoke at a conference where I explained the difference between caching and an in-memory data grid. Today, having realized that many people are also looking to better understand the difference between the two major categories of in-memory computing - the In-Memory Database and the In-Memory Data Grid - I am sharing the succinct version of my thinking on this topic, thanks to a recent analyst call that helped to put everything in place :)


Skip to the conclusion to get the bottom line.

Let's clarify the naming and buzzwords first. In-Memory Database (IMDB) is a well-established category name and it is typically used unambiguously.

It is important to note that there is a new crop of traditional databases with serious In-Memory "options". These include MS SQL Server 2014, Oracle's Exalytics and Exadata, and IBM DB2 with BLU. The line between these and the new pure In-Memory Databases is blurry, and for simplicity I'll continue to call them all In-Memory Databases.

In-Memory Data Grids (IMDGs) are sometimes (but not very frequently) called In-Memory NoSQL/NewSQL Databases. Although the latter can be more accurate in some cases, I am going to use the term In-Memory Data Grid in this article, as it tends to be the more widely used one.

Note that there are also In-Memory Compute Grids and In-Memory Computing Platforms that include or augment many of the features of In-Memory Data Grids and In-Memory Databases.

Confusing, eh? It is... so for consistency, going forward we'll just use these terms for the two main categories:

  • In-Memory Database
  • In-Memory Data Grid

Tiered Storage
It is also important to nail down what we mean by "In-Memory". Surprisingly, there's a lot of confusion here as well, since some vendors refer to SSDs, Flash-on-PCI, Memory Channel Storage, and, of course, DRAM as "In-Memory".

In reality, most vendors support a Tiered Storage Model in which some portion of the data is stored in DRAM (the fastest storage, but with limited capacity) and the rest overflows to a variety of flash or disk devices (slower, but with more capacity) - so it is rarely a DRAM-only or flash-only product. However, it's important to note that most products in both categories are biased toward either mostly-DRAM or mostly-flash/disk storage in their architecture.
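To make the tiered model concrete, here is a minimal sketch (not any vendor's actual implementation) of a DRAM tier that overflows its least recently used entries to a slower tier; the DiskStore interface and the capacity limit are hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Hypothetical interface over a slower tier (flash or disk). */
interface DiskStore {
    void write(String key, byte[] value);
    byte[] read(String key);
}

/** DRAM tier with a fixed entry budget; evicted entries overflow to the slower tier. */
class TieredStore {
    private final DiskStore disk;
    private final Map<String, byte[]> dram;

    TieredStore(final int dramCapacity, final DiskStore disk) {
        this.disk = disk;
        // Access-ordered map: the eldest entry is the least recently used one.
        this.dram = new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<String, byte[]> e) {
                if (size() > dramCapacity) {
                    disk.write(e.getKey(), e.getValue()); // overflow to flash/disk
                    return true;
                }
                return false;
            }
        };
    }

    void put(String key, byte[] value) { dram.put(key, value); }

    byte[] get(String key) {
        byte[] v = dram.get(key);              // fast path: DRAM
        return v != null ? v : disk.read(key); // slow path: flash/disk tier
    }
}
```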

The bottom line is that products vary greatly in what they mean by "In-Memory", but in the end they all have a significant in-memory component.

Technical Differences
It's easy to start with technical differences between the two categories.

Most In-Memory Databases are your father's RDBMS that store data "in memory" instead of on disk. That's practically all there is to it. They provide good SQL support with only a modest list of unsupported SQL features, ship with ODBC/JDBC drivers, and can be used in place of an existing RDBMS, often without significant changes.
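As a minimal sketch of that drop-in property, here is plain JDBC code that would work unchanged against either a disk-based RDBMS or an In-Memory Database; only the connection URL (hypothetical below) and the driver on the classpath change:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ImdbQuery {
    public static void main(String[] args) throws Exception {
        // Only the URL changes: point it at the vendor's in-memory database.
        String url = "jdbc:imdb-vendor://localhost:5433/sales"; // hypothetical URL
        try (Connection conn = DriverManager.getConnection(url, "app", "secret");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT customer_id, SUM(total) FROM orders GROUP BY customer_id");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next())
                System.out.println(rs.getLong(1) + " -> " + rs.getBigDecimal(2));
        }
    }
}
```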

In-Memory Data Grids typically lack full ANSI SQL support but instead provide MPP-based (Massively Parallel Processing) capabilities where data is spread across a large cluster of commodity servers and processed in an explicitly parallel fashion. The main access patterns are key/value access, MapReduce, various forms of HPC-like processing, and limited distributed SQL querying and indexing capabilities.
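For contrast, here is a sketch of the key/value access pattern using the standard JCache (JSR 107) API, which a number of In-Memory Data Grids implement; the cache name and key/value types are made up for illustration:

```java
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;

public class ImdgKeyValue {
    public static void main(String[] args) {
        // Uses whatever JCache provider (i.e., data grid client) is on the classpath.
        CacheManager mgr = Caching.getCachingProvider().getCacheManager();
        MutableConfiguration<Long, String> cfg =
                new MutableConfiguration<Long, String>().setTypes(Long.class, String.class);
        Cache<Long, String> orders = mgr.createCache("orders", cfg); // hypothetical cache

        // Keys are hashed to partitions spread across the cluster.
        orders.put(42L, "{\"customer\": 7, \"total\": 99.95}");
        System.out.println(orders.get(42L)); // served from memory, possibly a remote node
    }
}
```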

It is important to note that there is a significant crossover from In-Memory Data Grids to In-Memory Databases in terms of SQL support. GridGain, for example, provides serious and constantly growing support for SQL, including pluggable indexing, distributed join optimization, custom SQL functions, etc.
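As an illustration of that crossover, here is a sketch of a distributed SQL query on Apache Ignite (the open-source project built from GridGain's data grid core); the cache name and Person type are illustrative:

```java
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class GridSql {
    /** Value type; annotated fields become queryable SQL columns. */
    static class Person {
        @QuerySqlField String name;
        @QuerySqlField int age;
        Person(String name, int age) { this.name = name; this.age = age; }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Long, Person> cfg = new CacheConfiguration<>("people");
            cfg.setIndexedTypes(Long.class, Person.class);
            IgniteCache<Long, Person> people = ignite.getOrCreateCache(cfg);

            people.put(1L, new Person("Alice", 34));
            people.put(2L, new Person("Bob", 27));

            // The query runs in parallel on every node that owns a partition
            // of the "people" cache, and the partial results are merged.
            List<List<?>> rows = people.query(
                    new SqlFieldsQuery("select name from Person where age > ?").setArgs(30))
                    .getAll();
            rows.forEach(r -> System.out.println(r.get(0)));
        }
    }
}
```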

Speed Only vs. Speed + Scalability
One of the crucial differences between In-Memory Data Grids and In-Memory Databases lies in the ability to scale to hundreds and thousands of servers. In-Memory Data Grids have an inherent capability for such scale thanks to their MPP architecture, while In-Memory Databases are explicitly unable to scale this way due to the fact that SQL joins, in general, cannot be performed efficiently in a distributed context.

It's one of the dirty secrets of In-Memory Databases: one of their most useful features, SQL joins, is also their Achilles' heel when it comes to scalability. This is the fundamental reason why most existing SQL databases (disk- or memory-based) are built on a vertically scalable SMP (Symmetric Multiprocessing) architecture, unlike In-Memory Data Grids, which utilize the much more horizontally scalable MPP approach.
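To see why, consider hash partitioning - the usual way a grid spreads rows across nodes. This tiny sketch (illustrative only, not any product's code) shows that once two tables are partitioned on different keys, the matching rows for a join typically live on different nodes, so the join forces a network shuffle:

```java
public class JoinShuffle {
    /** The node that owns a key under simple hash partitioning. */
    static int partition(Object key, int nodes) {
        return Math.floorMod(key.hashCode(), nodes);
    }

    public static void main(String[] args) {
        int nodes = 4;
        Long customerId = 1001L; // customers partitioned by customer_id
        Long orderId = 8818L;    // orders partitioned by order_id

        // For "orders JOIN customers ON orders.customer_id = customers.id",
        // the order row and its matching customer row land on different
        // nodes whenever these partitions differ - hence the shuffle.
        System.out.println("customer row on node " + partition(customerId, nodes)); // 1
        System.out.println("order row on node    " + partition(orderId, nodes));    // 2
    }
}
```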

It's important to note that both In-Memory Data Grids and In-Memory Databases can achieve similar speed in a local, non-distributed context. In the end, they both do all processing in memory.

But only In-Memory Data Grids can natively scale to hundreds and thousands of nodes providing unprecedented scalability and unrivaled throughput.

Replace Database vs. Change Application
Apart from scalability, there is another difference that is important for use cases where In-Memory Data Grids or In-Memory Databases are tasked with speeding up existing systems or applications.

An In-Memory Data Grid always works with an existing database, providing a layer of massively distributed in-memory storage and processing between the database and the application. Applications then rely on this layer for super-fast data access and processing. Most In-Memory Data Grids can seamlessly read through and write through from and to the database when necessary, and are generally highly integrated with existing databases.
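Here is a sketch of the read-through wiring using the standard JCache API (which many grids implement); the OrderLoader below is hypothetical, standing in for a loader that fetches missing entries from the backing RDBMS over JDBC:

```java
import java.util.HashMap;
import java.util.Map;
import javax.cache.Cache;
import javax.cache.Caching;
import javax.cache.configuration.FactoryBuilder;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.integration.CacheLoader;

/** Hypothetical loader that fetches missing entries from the backing database. */
public class OrderLoader implements CacheLoader<Long, String> {
    @Override public String load(Long key) {
        // In a real loader: SELECT ... FROM orders WHERE id = ? over JDBC.
        return "order-" + key;
    }

    @Override public Map<Long, String> loadAll(Iterable<? extends Long> keys) {
        Map<Long, String> result = new HashMap<>();
        for (Long k : keys) result.put(k, load(k));
        return result;
    }

    public static void main(String[] args) {
        MutableConfiguration<Long, String> cfg = new MutableConfiguration<Long, String>()
                .setTypes(Long.class, String.class)
                .setReadThrough(true) // cache misses fall through to the loader
                .setCacheLoaderFactory(FactoryBuilder.factoryOf(OrderLoader.class));

        Cache<Long, String> orders =
                Caching.getCachingProvider().getCacheManager().createCache("orders", cfg);

        // First access misses in memory, loads from the database, then stays cached.
        System.out.println(orders.get(42L));
    }
}
```

Write-through works symmetrically via a CacheWriter, so updates made in the grid are propagated back to the database.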

In exchange, developers need to make some changes to the application to take advantage of these new capabilities. The application no longer "talks" only SQL, but needs to learn how to use MPP, MapReduce or other data processing techniques.

In-Memory Databases present almost a mirror-opposite picture: they often require replacing your existing database (unless you use one of those In-Memory "options" to temporarily boost your database performance) - but they demand significantly fewer changes to the application itself, as it continues to rely on SQL (albeit a modified dialect of it).

In the end, both approaches have their advantages and disadvantages, and the choice may often depend as much on organizational policies and politics as on technical merits.

Conclusion
The bottom line should be pretty clear by now.

If you are developing a green-field, brand-new system or application, the choice is pretty clear in favor of In-Memory Data Grids. You get the best of both worlds: you get to work with the existing databases in your organization where necessary, and you enjoy the tremendous performance and scalability benefits of In-Memory Data Grids - the two being highly integrated.

If you are, however, modernizing your existing enterprise system or application the choice comes down to this:

You will want to use an In-Memory Database if the following applies to you:

  • You can replace or upgrade your existing disk-based RDBMS
  • You cannot make changes to your applications
  • You care about speed, but don't care as much about scalability

In other words - you boost your application's speed by replacing or upgrading RDBMS without significantly touching the application itself.

On the other hand, you want to use an In-Memory Data Grid if the following applies to you:

  • You cannot replace your existing disk-based RDBMS
  • You can make changes to (the data access subsystem of) your application
  • You care about speed and especially about scalability, and don't want to trade one for the other

In other words - with an In-Memory Data Grid you can boost your application's speed and provide massive scale by tweaking the application, but without making changes to your existing database.

This can be summarized in the following table:

                       In-Memory Data Grid    In-Memory Database
Existing Application   Changed                Unchanged
Existing RDBMS         Unchanged              Changed or Replaced
Speed                  Yes                    Yes
Max. Scalability       Yes                    No

More Stories By Nikita Ivanov

Nikita Ivanov is founder and CEO of GridGain Systems, started in 2007 and funded by RTP Ventures and Almaz Capital. Nikita has led GridGain to develop advanced distributed in-memory data processing technologies - the top Java in-memory computing platform, starting every 10 seconds around the world today.

Nikita has over 20 years of experience in software application development, building HPC and middleware platforms, and contributing to the efforts of other startups and notable companies including Adaptec, Visa and BEA Systems. Nikita was one of the pioneers in using Java technology for server-side middleware development while working for one of Europe's largest system integrators in 1996.

He is an active member of the Java middleware community, a contributor to the Java specification, and holds a Master's degree in Electro Mechanics from Baltic State Technical University, Saint Petersburg, Russia.
