
Putting the Test Back into DevOps
by Justin Rohrman

DevOps has become synonymous with empowering developers to move faster and deliver more software, but it has unintentionally pushed software quality and testing into a corner. The pendulum that swings between very slow testing and avoiding overly technical solutions has swung hard with DevOps: teams ignore the value testers can bring, instead forcing customers to test new code and relying on monitoring systems to find problems that could have been caught before customers ever noticed.

DevOps has moved from being something based on speculation about what development could be, to a real part of the developer role. Let's see how we can swing this trend back to a more responsible place, still using DevOps to release software faster but with a level of quality that customers will be happy with.

Continuous Integration and Delivery
Continuous Integration (CI) and Continuous Delivery (CD) are two of the most fundamental concepts behind DevOps. They describe tooling that builds the product every time a new line of code lands in the repository and then, as soon as that build passes, deploys it to production. Some of the more technology-focused companies, like GitHub, have pushed this idea as far as it can go and deploy new software to production many times every day.

The side effect of this fast-paced style of development and release is that any time for a real, live person to try the software before delivery gets squeezed out. As soon as there is software that can run, it goes into production. Paying customers are the new testers, and monitoring and reporting systems are the new bug reports.

If we dial this back a little, we get a strategy where developers run CI in their own environment, getting a new build every time they commit to their local source code repository, and then, after they check in to the main repo, code is continuously deployed to a staging environment.
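The build-on-commit, deploy-to-staging flow above can be sketched as a tiny pipeline model. This is a minimal illustration, not any specific CI tool's API; the stage names and commit fields are assumptions for the example.

```python
# Hypothetical sketch of a commit-triggered pipeline: build on every
# commit, run automated checks, and deploy only to a staging environment
# (never straight to production) when everything passes.

def run_pipeline(commit, run_checks=True):
    """Return the list of pipeline stages executed for a commit."""
    stages = ["build"]
    if not commit.get("build_ok", True):
        return stages  # a failed build stops the pipeline here
    if run_checks:
        stages.append("automated-checks")
        if not commit.get("checks_ok", True):
            return stages  # failing checks block deployment
    stages.append("deploy-to-staging")  # staging, not production
    return stages

# A clean commit reaches staging; a broken build never deploys.
good = run_pipeline({"build_ok": True, "checks_ok": True})
bad = run_pipeline({"build_ok": False})
```

The point of the sketch is the gate: deployment is still automatic, but it lands somewhere a person can look at it first.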

I have had a lot of success with continuous deployment to a headless API server on smaller teams. One team I worked on had two people building the API platform that the rest of our product was built on. Each time they committed code, a new build went to that server. Usually before that happened, we would sit together and talk through the changes and their concerns; I might start stubbing out a few automated checks and writing down some test ideas.

We were still using DevOps concepts to build an API quickly, but we were doing it in a way that didn't force customers to deal with our problems.

Monitoring Is Only Part of the Solution
A few large companies run complex API and Web monitoring systems in the background, ingesting large amounts of API log data and looking for a few important keywords such as "Exception" and "Error". Every time one of these words is found, emails and text messages go out to let the development staff know that something has gone wrong.
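The keyword-scanning part of that setup is simple enough to sketch. This is an illustrative fragment only; the log lines are invented and the alert transport (email, SMS) is deliberately left out.

```python
import re

# Minimal sketch of keyword-based log monitoring: flag any log line
# containing the words "Exception" or "Error" so it can be forwarded
# to the on-call developers. Pattern and log format are illustrative.
ALERT_PATTERN = re.compile(r"\b(Exception|Error)\b")

def scan_log(lines):
    """Return the log lines that should trigger an alert."""
    return [line for line in lines if ALERT_PATTERN.search(line)]

log = [
    "12:00:01 INFO  request served in 42ms",
    "12:00:02 Error: unhandled Exception in /stores/nearby",
    "12:00:03 INFO  request served in 38ms",
]
alerts = scan_log(log)  # only the faulty request is flagged
```

Notice what this catches and what it misses: it finds errors only after a real request has already failed for a real customer, which is exactly the limitation the next section is about.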

Maybe you have a completely componentized product where you can flip bits to turn features on and off quickly, reducing exposure to faults. Some companies have bits and pieces of the product built as components; a handful have a large amount; most have little or none. For smaller companies, getting there would mean spending as much time on architecture as on building product, and that isn't a ratio most founders and investors want to see.
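The "flip bits" idea is a feature flag. A minimal sketch, with flag and function names invented for the example, looks like this:

```python
# Hypothetical feature-flag sketch: each feature checks a flag before
# running, so a faulty feature can be switched off in seconds without
# a redeploy. In a real system FLAGS would live in a config service,
# not a module-level dict.
FLAGS = {"geolocation-search": True}

def nearby_stores(city):
    """Illustrative store-finder endpoint guarded by a flag."""
    if not FLAGS.get("geolocation-search", False):
        return {"stores": [], "reason": "feature disabled"}
    return {"stores": ["Main St"], "reason": "ok"}

before = nearby_stores("Honolulu")   # feature on
FLAGS["geolocation-search"] = False  # ops flips the bit
after = nearby_stores("Honolulu")    # feature safely off
```

The guard itself is trivial; the expensive part is the architecture that lets every feature be isolated behind one, which is why this is out of reach for most young companies.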

Monitoring and rollback systems are great for API products, but are usually not feasible for young companies that urgently need product to sell.

Landmines for Consumers and Holes in Your Revenue
Using monitoring without pre-production testing is dangerous for you and your customers. Imagine you are releasing a new version of your API that helps customers use geolocation information on their phones to find nearby stores. The code under the hood has some automated checks that use faked data to verify stores are returned correctly, but there are only a few and they are fairly simple.

One of your first customers to use this feature lives in the island paradise of Hawaii, and it just so happens that many cities there have special characters in their names. A few hours after the deploy, the server log files are blowing up with errors from people in Kāne‘ohe and ‘Ewa Gentry trying to find the closest Dunkin' Donuts.

Emails start flying around the development office and the feature is shut off minutes later, but the damage is done. By the time your support person gets in touch with a developer, and that developer investigates and takes action, your customers have given up. About half of those users have uninstalled the app and are searching with Google instead.

"Testing new API changes can expose your product to black swan problems in a way that DevOps never will."

There are real consequences to using your customers to test new code. Placing a skilled tester in front of that API would probably have surfaced questions like "What happens for cities with really long names, or special characters, or very small cities that might not be in every mapping system?" Testing new API changes can expose your product to black swan problems in a way that DevOps never will.
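Those tester questions translate directly into checks. A sketch, assuming a hypothetical city-name normalization step a store-finder API might need (the function and test cases are invented for illustration):

```python
import unicodedata

def normalize_city(name):
    """Illustrative normalization: strip accents, okinas, and spaces
    so a name like 'Kāne‘ohe' still matches a plain-ASCII index key."""
    decomposed = unicodedata.normalize("NFKD", name)
    kept = "".join(c for c in decomposed if c.isascii() and c.isalnum())
    return kept.lower()

# The tester's questions as data: special characters, long names,
# and small towns -- not just the happy-path fake data.
cases = ["Kāne‘ohe", "‘Ewa Gentry", "Llanfairpwllgwyngyll", "Hana"]
results = {c: normalize_city(c) for c in cases}
```

Running exactly these inputs before deploy is the difference between a failing check on a staging server and a log file full of angry customers in Hawaii.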

Cases like this actually happen, and they are good examples of where using DevOps concepts to deliver internally, rather than straight to production, would have saved a few customers and the company money.

DevOps presents a powerful set of ideas that help us deliver code to customers faster than we could before. Those same ideas can also deliver bad code and buggy product much faster than we would like. If we slow down a little by pairing DevOps with skilled testers, these ideas and tools can help us deliver software and API updates faster than would otherwise be possible, without exposing our customers to new kinds of risk.

Do you have experience integrating DevOps with testing? We would love to hear your story.


More Stories By SmartBear Blog

As the leader in software quality tools for the connected world, SmartBear supports more than two million software professionals and over 25,000 organizations in 90 countries that use its products to build and deliver the world’s greatest applications. With today’s applications deploying on mobile, Web, desktop, Internet of Things (IoT) or even embedded computing platforms, the connected nature of these applications through public and private APIs presents a unique set of challenges for developers, testers and operations teams. SmartBear's software quality tools assist with code review, functional and load testing, API readiness as well as performance monitoring of these modern applications.
