Data Demands of DevOps By @Delphix | @DevOpsSummit [#DevOps]

Technologies such as Chef, Puppet, and Docker have automated environment standup and configuration

Today, the demand for new applications is growing at an unprecedented rate throughout lines of business and across industries. Customer expectations for mobile and e-commerce capabilities are transforming software development speed and quality into a competitive differentiator for even the most unlikely businesses. For existing software development shops, the proliferation of platforms, increasing need for total global uptime, and accelerating pace of industry disruption by fast-paced startups have all put increased pressure on development. In every vertical, more code must be shipped than ever before, faster and at higher quality.

These pressures have forced a search for new methods, practices, and solutions that allow organizations to accelerate application development and maintain quality standards without additional resources. The DevOps movement has shown particular promise in meeting these challenges: in an IDC survey, 43% of Fortune 1000 respondents had adopted DevOps practices, while another 40% were investigating them.

What Is DevOps?
According to Gartner, DevOps is "not a market, but a tool-centric philosophy that supports a continuous delivery value chain." DevOps supports continuous delivery and a fast flow of features from concept to customer with tools that decrease feature friction and accelerate feedback at every phase of the process. These objectives are achieved with solutions that accelerate environment standup and enhance environment reproducibility.

Environment standup times are the key constraint in software development timelines. While code can easily be versioned, shared, and pushed with tools like Git, environment provisioning remains a complex, manual process, requiring multiple touchpoints with administrators and extensive, delicate configuration work. Business users have defined the needs and development teams have transformed their workflows, but everyone still waits on environments.

Environment reproducibility is the key constraint on friction and feedback, just as standup times are for initial code work. When dev and test environments are faithful copies of production, feature functionality is tested effectively and often. Errors are detected early and remediation is performed on the fly, preventing huge delays in the final testing stages or catastrophes in production. Features move seamlessly from environment to environment. Having parallel identical environments multiplies developer flexibility, allowing low-cost experimentation in both Dev and Test phases. But until recently, dev and test environments could not be such faithful copies.

The DevOps ecosystem of tools is transforming that landscape. Technologies such as Chef, Puppet, and Docker have automated environment standup and configuration. This automation both accelerates environment standup and enhances environment consistency. Replacing manual configuration tasks with automated processes reduces the load on ops staff and accelerates standup timelines, while automating or containerizing app states ensures that each developer or tester is working on an identical environment, maximizing consistency.
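To make the pattern concrete, here is a minimal sketch of automated environment standup using the Docker SDK for Python. The image name and configuration values are hypothetical, but the point stands: because the environment is defined in code, every instance comes up identically, with no manual configuration to drift.

```python
# Minimal sketch: automated, repeatable environment standup with the
# Docker SDK for Python. The image name and settings are hypothetical.
import docker

client = docker.from_env()

# Stand up an identically configured app environment for each developer.
# The configuration lives in code, so there are no manual steps to drift.
for dev in ("alice", "bob"):
    container = client.containers.run(
        "example/app:1.4.2",             # hypothetical application image
        name=f"dev-env-{dev}",
        environment={"APP_ENV": "dev"},  # same config for every instance
        detach=True,
    )
    print(f"{dev}: container {container.short_id} is up")
```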

The Data Gap
However, even organizations with cutting-edge DevOps practices are finding that standup and reproducibility constraints still apply to data.

A tool like Docker may be able to stand up a lightweight application instance with consistent configuration, using minimal hardware and requiring no ops time. However, applications require data, and not only when they are deployed. Dev and test environments require full and faithful copies of production data. And they need that data to be delivered at the same pace and with the same automation as VMs are configured and cloud infrastructure is made available.
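A hedged sketch of that gap, using the Docker SDK for Python and the standard Postgres command-line tools (the image tag, credentials, and dump path are all hypothetical): the database engine comes up in seconds, but the production data still has to arrive through a separate, slow channel.

```python
# Sketch of the data gap: the database engine is automated and instant,
# but loading production data is a separate, slow step outside the
# container workflow. Image tag, credentials, and paths are hypothetical.
import os
import subprocess
import docker

client = docker.from_env()

# Fast and automated: a fresh, *empty* Postgres instance.
db = client.containers.run(
    "postgres:13",
    environment={"POSTGRES_PASSWORD": "devonly"},
    ports={"5432/tcp": 5433},
    detach=True,
)
# (In practice, wait here until the server accepts connections.)

# Slow and manual by comparison: restoring a full production dump can
# take hours for a large database, and the data is stale on arrival.
subprocess.run(
    ["pg_restore", "--host", "localhost", "--port", "5433",
     "--username", "postgres", "--dbname", "postgres",
     "/backups/prod_full.dump"],          # hypothetical dump location
    env={**os.environ, "PGPASSWORD": "devonly"},
    check=True,
)
```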

Current data management technology is not up to the challenge. With existing solutions, you can have your data slowly, at poor quality, or both.

If high-quality data is the highest priority, organizations can opt to create full clones of production data. But these processes take as much time as, or more than, all other stages of environment setup combined. To get a full clone of production, a backup admin has to extract data from production, system and storage administrators need to authorize and provision infrastructure, and (if the data is relational) a DBA must set up the database. And since each copy is full-sized, infrastructure constraints limit the number of copies available for experimentation or on-demand testing.

The slow timeline hurts data quality as well. In a continuous deployment world, the features in production today are not the same as those in production last month or even last week, and data changes even faster. So even a perfect copy of production from weeks or months ago - and traditional data management techniques take that long - is a poor approximation of today's data. And features succeed or fail depending on their interaction with current data.
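The handoffs above can be read as a pipeline. The sketch below labels each stage with the role that performs it (hostnames, database names, and paths are hypothetical); on production-scale data, the scripted steps alone can run for hours, and the ticket-driven middle step for days.

```python
# Hedged sketch of the traditional full-clone pipeline, one step per
# role. Hostnames, database names, and paths are hypothetical.
import subprocess
import time

start = time.time()

# Step 1 (backup admin): export a full copy out of production.
subprocess.run(
    ["pg_dump", "--host", "prod-db.internal", "--format", "custom",
     "--file", "/backups/prod_full.dump", "appdb"],
    check=True,
)

# Step 2 (system/storage admins): authorize and provision storage large
# enough for a complete copy. Often a ticket queue, not a script.

# Step 3 (DBA): create the target database and restore into it.
subprocess.run(["createdb", "--host", "test-db.internal", "appdb_clone"],
               check=True)
subprocess.run(
    ["pg_restore", "--host", "test-db.internal", "--dbname", "appdb_clone",
     "/backups/prod_full.dump"],
    check=True,
)

elapsed_hours = (time.time() - start) / 3600
print(f"Full clone ready after {elapsed_hours:.1f} hours; already aging.")
```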

If, on the other hand, rapid access to data is the priority, organizations can employ shared data environments. Theoretically, sharing provides efficiency benefits by giving multiple teams immediate, concurrent access to a common data environment. But in practice, conflicts occur when more than one stakeholder contends for the same resources at the same time. The result is often a low-quality, chaotic environment in which data changes from different projects collide, yielding unreliable code and untrustworthy tests.
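A toy model of that contention in plain Python (the shared dictionary stands in for a shared test database, and the fixture values are invented): two teams reset and populate the same environment concurrently, and each run's assertions can be invalidated by the other's writes.

```python
# Toy model of shared-environment contention. The dict stands in for a
# shared test database; the sleep stands in for test execution time.
import threading
import time

shared_db = {"orders": 100}  # one environment shared by every team

def team_test_run(team, fixture_rows):
    shared_db["orders"] = 100             # "reset" the shared environment
    shared_db["orders"] += fixture_rows   # load this team's fixtures
    time.sleep(0.01)                      # run tests; other teams interleave
    expected = 100 + fixture_rows
    actual = shared_db["orders"]
    verdict = "pass" if actual == expected else "FAIL (another team's data)"
    print(f"team {team}: expected {expected}, saw {actual}: {verdict}")

threads = [threading.Thread(target=team_test_run, args=(t, n))
           for t, n in (("A", 10), ("B", 20))]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Depending on thread timing, at least one team usually sees the other's rows, which is exactly the unreliable-code, untrustworthy-test outcome described above.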

Solutions like subsetting or synthetic data are often also mentioned in discussions about providing data to developers and testers. However, they do not address the need for full and faithful production copies at all. By definition, a subset or a synthetic data set is not an accurate copy of production. That means that testing on a full production copy must be relegated to a special pre-production phase of the SDLC, which undermines the DevOps emphasis on consistent environments, regular tests, and continuous adjustments to hit project targets.
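A toy sketch of why (the table sizes and the VIP flag are invented): naive row sampling discards most of the data volume and, worse, most of the rare-segment records that features are most likely to break on.

```python
# Toy sketch: a naive 10% subset of customers loses most order history
# and most rare-segment (VIP) customers. All data here is invented.
import random

random.seed(7)
customers = [{"id": i, "vip": i % 50 == 0} for i in range(1000)]  # 20 VIPs
orders = [{"id": i, "customer_id": random.randrange(1000)}
          for i in range(5000)]

# Subset: keep 10% of customers and only the orders that reference them.
kept_ids = {c["id"] for c in random.sample(customers, 100)}
kept_orders = [o for o in orders if o["customer_id"] in kept_ids]

vips_kept = sum(1 for c in customers if c["vip"] and c["id"] in kept_ids)
print(f"orders kept: {len(kept_orders)}/5000")
print(f"VIP edge cases kept: {vips_kept}/20")
```

Whatever the exact numbers on a given run, the subset behaves like a smaller, blander database than production, which is why full-copy testing ends up deferred to that late pre-production phase.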

DevOps Data Tools
With challenges like these, a new set of data management tools is required, one that brings data delivery capabilities up to speed with DevOps needs. In the second part of this series, we'll look at the new category of solutions emerging to meet this need.

More Stories By Louis Evans

Louis Evans is a Product Marketing Manager at Delphix. He is a subject-matter expert developing content, surveys, and best practices pertinent to the DevOps community. Evans is also a speaker at DevOps-focused industry events. He is a graduate of Harvard College, with a degree in Social Studies and Mathematics.
