How Testing Fits into DevOps, Because It's Here to Stay
by Justin Rohrman

When DevOps first appeared on the scene, no one really knew what it meant. Books were defining the term in completely different ways; conference speakers were sending out conflicting messages about tools that you absolutely must use (or not) to do "real" DevOps. I distinctly remember seeing a job advertisement or two that were hiring a DevOps person to "dev all the ops."

We all know better now.

Or, at least, some of us know (a little) better now. Having some time to experiment taught us that DevOps is a lot like "agile". It describes a set of methods and tools that help programmers deliver software faster. This isn't something one person does; it is part of daily life as a programmer. There is still one big question in all of this: how do testers fit into development groups? How can testers continue to make a difference as software companies drive further and further into technical practices?


Teaching Developers to Test
Anyone can test software. Tell them to "play with it," or give them a spec, and they'll find a bug or two. If a bug is big and obvious, for example, login is broken, they'll probably find that too.

Even programmers can do this sort of testing. Programmer testing tends to be verification that what they thought they built works; it tends to miss the difference between what the customer needed and what the programmer understood, verifying only the latter. Because what they do is write low-level code, a programmer might create a code-level 'test' before writing production code to see that some value is set, then write that production code, and then finally run the code and test in concert. This usually comes in the form of TDD, BDD, or unit testing. These tools are a nice way to help a person check their work, but they often aren't enough. Anything surprising, anything the programmer didn't guess ahead of time might be a problem, is still a mystery. One way to go beyond this is to have the programmer test someone else's code.
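To make that distinction concrete, here is a minimal sketch (in Python, using a made-up apply_discount function, not anything from the article) of the kind of check a programmer writes before the production code exists. It verifies exactly what the programmer expected to happen, and nothing else.

```python
# A minimal "test first" sketch: the check pins down what the programmer
# *expects*; anything the programmer didn't anticipate stays untested.
# apply_discount is an illustrative function, not from the original post.
import unittest


def apply_discount(price, percent):
    """Production code written after the checks below: reduce price by percent."""
    return round(price * (1 - percent / 100.0), 2)


class ApplyDiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        # The value the programmer expected to be set
        self.assertEqual(apply_discount(100.00, 10), 90.00)

    def test_zero_percent_is_unchanged(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)


if __name__ == "__main__":
    unittest.main()
```

Both checks will pass every build, which is exactly the point: they confirm the programmer's understanding, not the customer's need.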

Jesse Alford, who is on the technical staff at Pivotal Labs, did a talk at CAST2015 about how he spends his time at Pivotal teaching programmers to test software. Through a combination of pairing with programmers and then talking about the work, and playing games with software-testing themes built in (like Zendo and the infamous dice game), the programmers learn more about what skilled software testing looks like, and Jesse learns more about writing good code.

Pivotal has created a stronger team through this process of programmer/tester pairing and teaching exercises.

What Can't Be Done in CI
Honestly, I think Continuous Integration is pretty cool. Why wouldn't you want to get some baseline feedback on the software for every single build? Why not build every few minutes, or every few hours at the slowest?

Yet there are other quality questions that CI just can't answer, where delivering faster might not help.

Stability and Reliability
This is the question of how well your product runs over time. A quick one-hour exercise of the software won't find a memory leak. If something goes wrong, and eventually it will (this is software we're talking about), how does the product recover, and can I continue using it without intervention from some sort of administrator?

Continuous Integration runs on the short term. Get the latest code from the repository, mash it up into something testable, run the automated checks, and spin things back down. These environments don't exist long enough to get a meaningful feel for how stable or reliable the product will be.
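A soak check is one way to ask that longer-running question. Here is a rough sketch, assuming the psutil package and a hypothetical process ID for the service under test; the duration and threshold are illustrative, not recommendations.

```python
# Rough sketch of a soak check that short-lived CI environments can't run:
# poll a process's resident memory for hours and flag steady growth.
# TARGET_PID and the thresholds are illustrative assumptions.
import time
import psutil

TARGET_PID = 12345            # hypothetical: PID of the service under test
SAMPLE_INTERVAL_SECONDS = 60  # one sample per minute
DURATION_HOURS = 8            # far longer than a typical CI run

proc = psutil.Process(TARGET_PID)
samples = []
for _ in range(int(DURATION_HOURS * 3600 / SAMPLE_INTERVAL_SECONDS)):
    samples.append(proc.memory_info().rss)
    time.sleep(SAMPLE_INTERVAL_SECONDS)

growth = samples[-1] - samples[0]
print(f"RSS grew by {growth / 1024 / 1024:.1f} MiB over {DURATION_HOURS} hours")
if growth > 200 * 1024 * 1024:  # arbitrary 200 MiB threshold
    print("Possible memory leak - investigate before shipping")
```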

Performance
Most of us understand the idea behind performance testing: run the software with a great number of simultaneous users for an extended period of time and see if it slows down. Yet how to do performance testing, and what to make of the results, is an interesting combination of technical skill, mathematics, and social science. Single-user performance testing can be as easy as sitting at a computer with a stopwatch to see how long a page takes to load or a form to submit. More complicated versions include running a series of HTTP requests, measuring various aspects of each call, comparing that to previous measurements in different environments, and then trying to decide if the results matter.
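A bare-bones version of that "series of HTTP requests" approach might look like the sketch below. The URL, sample size, and baseline number are all invented for illustration; the point is only to show measurement and comparison, not a real harness.

```python
# Time the same call many times, summarize, and compare against a baseline
# recorded from the previous build. Endpoint and baseline are hypothetical.
import statistics
import requests

URL = "https://staging.example.com/api/orders"   # hypothetical endpoint
SAMPLES = 50
BASELINE_MEDIAN_MS = 120.0                       # measured on the previous version

timings_ms = []
for _ in range(SAMPLES):
    response = requests.get(URL, timeout=10)
    timings_ms.append(response.elapsed.total_seconds() * 1000)

median_ms = statistics.median(timings_ms)
p95_ms = sorted(timings_ms)[int(SAMPLES * 0.95)]
print(f"median {median_ms:.1f} ms  p95 {p95_ms:.1f} ms")
print(f"change vs. baseline: {median_ms - BASELINE_MEDIAN_MS:+.1f} ms")
# Whether that difference matters is the tester's call, not the script's.
```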

The important thing to note here is that the tester is the most important part of the equation. The performance tester needs to observe differences, then decide whether a 25 millisecond difference in one HTTP call between versions is important enough to do something about, or whether the fact that one button click triggers 30 HTTP POSTs should be reported. The context is important, and the tester has it. The CI system never will.

Usability
The term 'usability' encompasses a great number of factors, including utility (can it do the job?), usability (does it work for me?), and identity (do I think of it as compatible with my sense of self?). Figuring out whether software is usable, and how to improve it, is 'soft,' but it is still science. Ideas like the affordance of devices and user interviews, which both come from the 'soft' science of anthropology, can help us answer these questions and improve our product. There is also the intuition of the user, which is even harder to understand and measure. This intuition can manifest when customers use the software and rub their foreheads trying to figure out what to do next, become frustrated and ask someone for help, or even give up altogether. (When customers abandon a request for vacation or a reimbursement submission, that system has problems.)

In some cases, usability studies and design are carefully handled ahead of time and then forgotten. More than once, I've worked on a product that was immaculately designed but, after performing one task many times, I found it very tedious. That feeling of tediousness is a hint that something is going on, and it is not good.

Compatibility
No software is an island. Even very small programs like games that run on your phone or tablet have to play well with their environment and the other software running there. Business systems integrate with user accounts and often send and receive data with other pieces of software. Healthcare software is constantly sending patient information for health records and insurance information for billing.

Often the fastest way to learn whether your software is sending the right medical billing codes in the right format for a patient to get insurance coverage is to create the scenario in your product and then send the output to your test system. That might be a third-party test system - when millions of dollars are involved, don't worry, they'll have one you can use. You might be able to do this every build, but by the time you built the file and the scripts, you could already have discovered what was broken and started fixing it.
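That manual step can be as small as building one payload and sending it to the partner's test endpoint. The sketch below is illustrative only: the endpoint, payload fields, and codes are invented, and a real claim would follow the partner's actual file format.

```python
# Sketch of "build the scenario, send it to the partner's test system."
# Endpoint and payload shape are hypothetical; a real submission would use
# the partner's documented format rather than ad hoc JSON.
import requests

TEST_ENDPOINT = "https://test.claims-partner.example.com/submit"  # hypothetical
payload = {
    "patient_id": "TEST-0001",
    "procedure_code": "99213",   # sample office-visit style code
    "diagnosis_code": "J02.9",   # sample diagnosis style code
    "amount": "125.00",
}

response = requests.post(TEST_ENDPOINT, json=payload, timeout=30)
print(response.status_code, response.text)
# A rejection read by a person often explains the problem faster than
# waiting for a scripted end-to-end check to be written.
```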

How Continuous
Where Continuous Integration will get every new line of code into a build and checked against the unit tests, Continuous Delivery (CD) takes it to the next level. CD takes the latest build and automatically deploys it to an environment along with whatever database and frameworks go with that build. Some companies have pushed this concept to its logical conclusion and push new code to production on every commit, something they call Continuous Deployment. These terms are used so interchangeably, and are so confusing, that I prefer "Continuous Delivery (to where)" - for example, continuous delivery to a staging server or continuous delivery to production. CD to production ("true" Continuous Deployment) takes a variety of engineering practices designed to enable partial features: turning features on and off, shipping new features "dark," making database changes that run side by side so you can cut back if needed, and more.
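A toy illustration of the "partial features" idea, assuming a simple in-process flag store (real systems would use a flag service or configuration database), might look like this: code ships dark and is switched on per environment.

```python
# Feature-flag sketch: the new code path ships to production but stays dark
# until someone flips the flag. The flag store here is just a dict.
FLAGS = {
    "staging":    {"new_checkout": True},    # testers explore it here first
    "production": {"new_checkout": False},   # shipped, but dark
}

def is_enabled(flag, environment):
    return FLAGS.get(environment, {}).get(flag, False)

def new_checkout_flow(cart):
    return f"new flow for {len(cart)} items"       # hypothetical new path

def legacy_checkout_flow(cart):
    return f"legacy flow for {len(cart)} items"    # existing behaviour, unchanged

def checkout(cart, environment):
    if is_enabled("new_checkout", environment):
        return new_checkout_flow(cart)
    return legacy_checkout_flow(cart)

print(checkout(["book", "pen"], "staging"))     # new flow
print(checkout(["book", "pen"], "production"))  # legacy flow
```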

The first step is usually CD to staging, and then only if all the automated checks run green. Deploying every build automatically to staging gives the benefit of fast visibility, but also protects users from big, unanticipated, black swan problems that would ruin their day. Deploying continuously to staging has the added benefit of allowing testers to control their own test environments.

One other strategy I've had work well is deploying automatically to a test environment after getting a green light from suites of tests that cover multiple layers of the product: unit, service, and UI. Although these are just checks and usually won't show unexpected problems, they will show that certain aspects of the software still function the way you think they do. Having a second test environment to control and compare against the latest is a nice touch, too.

Not Everything Is Functional
One important aspect of DevOps is defining when a feature or code change is officially done. When a company releases quickly and often, that definition can be as light as a green light from all automated checks run on a given build. This method treats software as a simple set of functions: I can enter a value in this text field, select one of these radio buttons, click a button, and get a value out. With some higher-level (above unit) automated checking, we can string these functions together to get something a little more complicated.
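The kind of check that "green light" relies on is usually a simple, linear UI script along these lines. This sketch uses Selenium; the page URL, element IDs, and expected value are invented for the example.

```python
# A simple, linear UI check: type a value, click, assert the output.
# URL and element IDs are hypothetical; requires Selenium and a local driver.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://staging.example.com/calculator")   # hypothetical page
    driver.find_element(By.ID, "amount").send_keys("5")
    driver.find_element(By.ID, "option-standard").click()  # a radio button
    driver.find_element(By.ID, "submit").click()
    result = driver.find_element(By.ID, "result").text
    assert result == "5.00", f"unexpected result: {result}"
finally:
    driver.quit()
```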

That isn't enough though, and it certainly doesn't represent how people use software.

The main problem with relying on this type of testing is how simple and linear it is. When we use software, we don't take perfectly predefined and clean paths. Instead of performing a series of steps - submit 5, assert value, select check box, assert value, check for NULL, assert value - testers take a loosely guided path. We meander here and there looking for hints of something interesting and then strike when a clue shows itself. This kind of activity can happen all the time, both at the macro level ("what new features could use a little more attention on staging, or even in production, right now?") and at the micro level, exploring a story just a little bit more before the code goes live. That micro-exploring work can even happen with continuous delivery to production, by turning the feature on in staging and "off" in production until the tester has completed an exploration run.

DevOps tends to treat testing as an activity to be completely automated. Over time, as DevOps gains maturity, I see that changing. Human, thinking, in-the-moment testing might be different each time, and needs to be done by someone, a tester, a developer, or someone else, while the things that run every time according to an algorithm, the checking, might be automated. Cutting out the exploring causes us to lose perspective in a way that was probably unanticipated.

The push toward DevOps can be scary for testers; it isn't hard to imagine that the methods and tools in the wrong hands could squeeze our special role out of development groups. The best way to stay relevant is by understanding your unique contribution, being able to explain it, and excelling at it.

So keep calm and Excel On
