How Testing Fits into DevOps, Because It's Here to Stay
by Justin Rohrman

When DevOps first appeared on the scene, no one really knew what it meant. Books were defining the term in completely different ways; conference speakers were sending out conflicting messages about tools that you absolutely must use (or not) to do "real" DevOps. I distinctly remember seeing a job advertisement or two hiring a DevOps person to "dev all the ops."

We all know better now.

Or, at least, some of us know (a little) better now. Having some time to experiment taught us that DevOps is a lot like "agile": it describes a set of methods and tools that help programmers deliver software faster. This isn't something one person does; it is part of daily life as a programmer. There is still one big question in all of this: How do testers fit into development groups? How can testers continue to make a difference as software companies drive further and further into technical practices?


Teaching Developers to Test
Anyone can test software. Tell them to "play with it," or give them a spec, and they'll find a bug or two. If it's big and obvious (for example, login is broken), they'll probably find that too.

Even programmers can do this sort of testing. Programmer testing tends to be verification that what they thought they built works; it tends to miss the difference between what the customer needed and what the programmer understood, verifying only the latter. Because their work is low-level code, a programmer might write a "test" before writing production code to see that some value is set, then write that production code, and finally run the code and test in concert. This usually comes in the form of TDD, BDD, or unit testing. These tools are a nice way to help a person check their own work, but often they aren't enough. Anything surprising, anything the programmer didn't guess ahead of time might be a problem, is still a mystery. One way to go beyond this is to have the programmer test someone else's code.
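
To make that loop concrete, here is a minimal test-first sketch in Python. The discounted_price function and its pricing rule are invented for illustration; nothing here comes from the original post.

```python
import unittest

# Hypothetical production code. In the test-first loop, the checks below
# were written and run (red) before this function existed.
def discounted_price(price: float, is_member: bool) -> float:
    """Members get 10% off; everyone else pays full price."""
    return round(price * 0.9, 2) if is_member else price

class DiscountTest(unittest.TestCase):
    def test_member_gets_ten_percent_off(self):
        self.assertEqual(discounted_price(100.0, is_member=True), 90.0)

    def test_non_member_pays_full_price(self):
        self.assertEqual(discounted_price(100.0, is_member=False), 100.0)

if __name__ == "__main__":
    unittest.main()
```

Both checks pass, and both only confirm what the programmer already believed; a misunderstanding of the discount rule itself would sail right through.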

Jesse Alford, who is on the technical staff at Pivotal Labs, gave a talk at CAST 2015 about how he spends his time at Pivotal teaching programmers to test software. Through a combination of pairing with programmers and then talking about the work, and playing games with software-testing themes built in (like Zendo and the infamous dice game), the programmers learn more about what skilled software testing looks like, and Jesse learns more about writing good code.

Pivotal has created a stronger team through this process of programmer/tester pairing and teaching exercises.

What Can't Be Done in CI
Honestly, I think Continuous Integration is pretty cool. Why wouldn't you want some baseline feedback on the software for every single build? Why not build every few minutes, or every few hours at the slowest?

Yet there are other quality questions that CI just can't answer, where delivering faster might not help.

Stability and Reliability
This is the question of how well your product runs over time. A quick one-hour exercise of the software won't find a memory leak. If something goes wrong, and eventually it will (this is software we're talking about), how does the product recover, and can I continue using it without intervention from some sort of administrator?

Continuous Integration runs on the short term. Get the latest code from the repository, mash it up into something testable, run the automated checks, and spin things back down. These environments don't exist long enough to get a meaningful feel for how stable or reliable the product will be.
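
One way to get that longer-running signal is a soak test that lives outside the CI loop. Here's a hedged sketch in Python; exercise_product() is a hypothetical stand-in for driving your product, and peak memory is just one example of what you might watch:

```python
import resource  # Unix-only; ru_maxrss is reported in kilobytes on Linux
import time

def exercise_product():
    """Hypothetical stand-in for one user-level pass through the product."""
    time.sleep(0.1)

def soak(hours: float, sample_every_s: int = 60):
    """Drive the product for hours and log peak memory as it goes.
    A number that climbs steadily is the leak a one-hour run won't show."""
    deadline = time.time() + hours * 3600
    next_sample = time.time()
    while time.time() < deadline:
        exercise_product()
        if time.time() >= next_sample:
            peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
            print(f"{time.strftime('%H:%M:%S')} peak RSS: {peak_kb} KB")
            next_sample += sample_every_s

if __name__ == "__main__":
    soak(hours=8.0)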

Performance
Most of us understand the idea behind performance testing: run the software with a great number of simultaneous users for an extended period of time and see if it slows down. Yet how to do performance testing, and what to make of the results, is an interesting combination of technical skill, mathematics, and social science. Single-user performance testing can be as easy as sitting at a computer with a stopwatch to see how long a page takes to load or a form takes to submit. More complicated versions include running a series of HTTP requests, measuring various aspects of each call, comparing that to previous measurements in different environments, and then trying to decide if the results matter.
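
As a sketch of that "series of HTTP requests" version, here's what the measurement half might look like in Python. The endpoint and the baseline number are invented for the example; only the measuring and comparing are the point:

```python
import statistics
import time
import urllib.request

URL = "https://staging.example.com/api/login"  # hypothetical endpoint
BASELINE_MS = 180.0  # invented median from a previous build

def time_request(url: str) -> float:
    """Return wall-clock time for one request, in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000

def measure(samples: int = 30) -> None:
    times = [time_request(URL) for _ in range(samples)]
    median = statistics.median(times)
    print(f"median {median:.1f} ms vs baseline {BASELINE_MS:.1f} ms "
          f"(delta {median - BASELINE_MS:+.1f} ms)")
    # The script reports the difference; deciding whether the delta
    # matters is the tester's call, not the tool's.

if __name__ == "__main__":
    measure()
```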

The important thing to note here is that the tester is the most important part of the equation. The performance tester needs to observe differences, then decide whether a 25-millisecond difference in one HTTP call between versions is important enough to do something about, or whether the fact that one button click triggers 30 HTTP POSTs should be reported. The context is important, and the tester has it. The CI system never will.

Usability
The term "usability" encompasses a great number of factors, including utility (can it do the job?), usability (does it work for me?), and identity (do I think of it as compatible with my sense of self?). Figuring out whether software is usable, and how to improve it, is "soft", but it is still science. Ideas like the affordance of devices and interviews, which both come from the "soft" science of anthropology, can help us answer these questions and improve our product. There is also the intuition of the user, which is even harder to understand and measure. This intuition can show itself when customers use the software and rub their foreheads trying to figure out what to do next, become frustrated and ask someone for help, or even give up altogether. (When customers abandon a request for vacation or a reimbursement submission, that system has problems.)

In some cases, usability studies and design are carefully handled ahead of time and then forgotten. More than once, I've worked on an immaculately designed product and, after performing one task many times, found it very tedious. That feeling of tedium is a hint that something is going on, and it is not good.

Integration
No software is an island. Even very small programs, like games that run on your phone or tablet, have to play well with their environment and the other software running there. Business systems integrate with user accounts and often send data to and receive data from other pieces of software. Healthcare software is constantly sending patient information for health records and insurance information for billing.

Often the fastest way to learn whether your software is sending the right medical billing codes in the right format for a patient to get insurance coverage is to create the scenario in your product and then send the output to your test system. That might be a third-party test system; when millions of dollars are involved, don't worry, they'll have one you can use. You might be able to do this on every build, but by the time you had built the file and the scripts, you could already have discovered what was broken and started fixing it.
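
A hedged sketch of that flow in Python; the claim fields, the procedure code, and the partner's test endpoint are all invented for illustration:

```python
import json
import urllib.request

# Everything here is hypothetical: the claim layout and the
# partner's test endpoint are invented for this example.
TEST_ENDPOINT = "https://partner-test.example.com/claims"

def build_claim(patient_id: str, procedure_code: str) -> dict:
    """Capture the output of a scenario created in the product (stubbed)."""
    return {"patient": patient_id, "procedure": procedure_code}

def send_to_test_system(claim: dict) -> int:
    """POST the claim to the partner's test system and return the HTTP
    status. The real validation happens on their side."""
    req = urllib.request.Request(
        TEST_ENDPOINT,
        data=json.dumps(claim).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    print(send_to_test_system(build_claim("P12345", "99213")))
```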

How Continuous
Where Continuous Integration gets every new line of code into a build and checked against the unit tests, Continuous Delivery (CD) takes it to the next level. CD takes the latest build and automatically deploys it to an environment, along with whatever database and frameworks go along with that build. Some companies have pushed this concept to its logical conclusion and push new code to production on every commit, something they call Continuous Deployment. These terms are used so interchangeably, and are so confusing, that I prefer "Continuous Delivery (to where)": for example, continuous delivery to a staging server or continuous delivery to production. CD to production ("true" Continuous Deployment) takes a variety of engineering practices designed to enable partial features: turning features on and off, sending new features out "dark", database changes that run side by side so you can cut back if needed, and more.
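
To show what "turning features on and off" can look like in code, here is a minimal feature-flag sketch; the flag store, flag name, and environments are invented, and real systems would read them from configuration or a flag service:

```python
# Hypothetical flag store; real systems read this from configuration
# or a feature-flag service rather than a hard-coded dict.
FLAGS = {
    "new_checkout": {"staging": True, "production": False},  # shipped "dark"
}

def is_enabled(flag: str, environment: str) -> bool:
    """The code for the feature is deployed everywhere; the flag decides
    where it runs, so a bad feature can be switched off without a rollback."""
    return FLAGS.get(flag, {}).get(environment, False)

def checkout(environment: str) -> str:
    if is_enabled("new_checkout", environment):
        return "new checkout flow"
    return "old checkout flow"

if __name__ == "__main__":
    print(checkout("staging"))     # -> new checkout flow
    print(checkout("production"))  # -> old checkout flow
```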

The first step is usually CD to staging, and then only if all the automated checks run green. Deploying every build automatically to staging gives the benefit of fast visibility, but also protects users from the big, unanticipated, black-swan problems that would ruin their day. Deploying continuously to staging has the added benefit of allowing testers to control their own test environments.

One other strategy I've had work well is deploying automatically to test after getting a green light from suites of tests that cover multiple layers of the product: unit, service, and UI. Although these are just checks and usually won't show unexpected problems, they will show that certain aspects of the software still function the way you think they do. Having a second test environment to control and compare against the latest build is a nice touch, too.

Not Everything Is Functional
One important aspect of DevOps is defining when a feature or code change is officially done. When a company releases quickly and often, that definition can be as light as a green light from all automated checks run on a given build. This method treats software as a simple set of functions: I can enter a value in this text field, select one of these radio buttons, click a button, and get a value out. With some higher-than-unit types of automated checking, we can string these functions together into something a little more complicated.
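
That "enter a value, click a button, check the output" style of check might look something like this Selenium sketch in Python; the page, element IDs, and expected value are all invented for the example:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

URL = "https://staging.example.com/calculator"  # hypothetical page

def check_submit_five():
    """Enter a value, pick a radio button, click, and assert the output."""
    driver = webdriver.Chrome()  # assumes a local Chrome/chromedriver setup
    try:
        driver.get(URL)
        driver.find_element(By.ID, "amount").send_keys("5")
        driver.find_element(By.ID, "mode-add").click()  # a radio button
        driver.find_element(By.ID, "submit").click()
        result = driver.find_element(By.ID, "result").text
        assert result == "5", f"expected 5, got {result}"
    finally:
        driver.quit()

if __name__ == "__main__":
    check_submit_five()
```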

That isn't enough though, and it certainly doesn't represent how people use software.

The main problem with relying on this type of testing is how simple and linear it is. When we use software, we don't take perfectly predefined, clean paths. Instead of performing a fixed series of steps (submit 5, assert value, select check box, assert value, check for NULL, assert value), testers take a loosely guided path. We meander here and there looking for hints of something interesting and then strike when a clue shows itself. This kind of activity can happen all the time, both on a macro level ("what new features could use a little more attention on staging, or even in production, right now?") and at the micro level, exploring a story just a little bit more before the code goes live. That micro-exploring work can even happen with continuous delivery to production, by turning the feature on in staging and "off" in production until the tester has completed an exploration run.

DevOps tends to treat testing as an activity to be completely automated. Over time, as DevOps gains maturity, I see that changing. Human, thinking, in-the-moment testing might be different each time and needs to be done by someone: a tester, a developer, or someone else. The things that run every time according to an algorithm, the checking, can be automated. Cutting out the exploring causes us to lose perspective in a way that was probably unanticipated.

The push toward DevOps can be scary for testers; it isn't hard to imagine that, in the wrong hands, the methods and tools could squeeze our special role out of development groups. The best way to stay relevant is to understand your unique contribution, be able to explain it, and excel at it.

So keep calm and excel on.
