Continuous — Build, Break, and Fix Fast By @HoardingInfo | @DevOpsSummit #DevOps

By Chris Riley

This is one of two PagerDuty posts on Continuous. Check out our first one: Are You Ready for Continuous Deployment?

Continuous Overload
If you pay any attention to modern software delivery conversations, it sometimes feels like you are being beaten over the head with a Continuous magic wand. Continuous Integration, Continuous Delivery, Continuous Deployment, Continuous Documentation, etc. The idea is so easy that it’s frustrating: Go fast. But the benefits of going fast are well beyond more builds to your user base. Perhaps the greatest value is long term, and hidden in how you can break the application continuously without fear because you can fix it continuously as well.

What does speed get you in the long term? Over a period of three months, can you say something more about your pipeline than "We had more builds"? More releases are one thing. But a delivery chain that does not support innovation is nothing more than a way to get from point A to point B. To demonstrate real success in a DevOps-driven organization, you need to be able to show that your software quality and functionality increase as well, which means all that cool functionality buried in your backlog finally percolates to the top.

Why Continuous?
What Continuous affords us is the ability to break our applications with confidence, because we know we can rapidly alert on any issues and iterate to a new build to address them. Teams can now implement features they have been dying to ship but have avoided due to perceived risk.

What break/fail fast really means is that you can be opportunistic about your functionality, which is quite possibly the only way to build functionality that responds to user demands in near-real-time, or to learn how new functionality impacts the application and its adoption. Without fail fast, applications may be doomed to purely linear releases, no matter how short your sprints are, and they can fall into the trap we are all too familiar with: stagnation, the eventual slowing of new functionality in favor of very small changes to existing functionality. When this happens, your application goes stale, and the only remedies are major rewrites, refactoring, or entirely new applications. That is not Continuous at all, or at least not sustained Continuous.

Canary Releases
The extreme of fail fast and fix fast is something called canary releases. In a canary release, you run one or more new releases of the application in parallel with the existing production version. Once the release(s) are deployed, you divert all traffic, or sub-segments of it, from production to the new release environments. The purpose of sub-segments is to A/B test slight variations of the releases.
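The traffic-splitting idea can be sketched in a few lines. This is a minimal illustration only; the environment names and weights below are assumptions for the example, not anything from a specific canary tool, and real setups would split traffic at the load balancer or router rather than in application code.

```python
import random

# Hypothetical traffic split: 90% to production, 5% to each canary
# variant so the two variants can be compared A/B style.
SPLITS = {
    "production": 0.90,  # existing stable release
    "canary-a":   0.05,  # new release, variant A
    "canary-b":   0.05,  # new release, variant B
}

def route(request_id: str) -> str:
    """Pick a target environment for a request via weighted random choice."""
    r = random.random()
    cumulative = 0.0
    for env, weight in SPLITS.items():
        cumulative += weight
        if r < cumulative:
            return env
    return "production"  # fallback if weights don't sum to exactly 1.0
```

Over many requests, production keeps the bulk of the traffic while each canary variant sees just enough to generate a meaningful signal.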

If anything goes wrong with a canary release, a robust alerting mechanism will tell you quickly, and you can revert traffic back to production. The exercise might have a small negative impact on your users, but the response is so rapid that it feels like a glitch. New functionality can be developed within day-length time frames.
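The revert decision itself can be as simple as a threshold check on the canary's error rate. The thresholds and function name below are illustrative assumptions, not part of any real alerting product's API:

```python
# Illustrative revert rule for a canary, assuming we can count
# total and failed requests served by the canary environment.
ERROR_RATE_THRESHOLD = 0.05  # revert if more than 5% of requests fail
MIN_SAMPLE = 100             # don't judge the canary on too few requests

def should_revert(total_requests: int, failed_requests: int) -> bool:
    """Decide whether to send canary traffic back to production."""
    if total_requests < MIN_SAMPLE:
        return False  # not enough data yet to make a call
    return (failed_requests / total_requests) > ERROR_RATE_THRESHOLD
```

The minimum-sample guard matters: a single early failure out of three requests should not trigger a revert, while a sustained 10% error rate should.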

Because you’ve learned more about the new functionality, you can either drop it, or fix it within that canary release, and rapidly test again. This truly is an iterative model. And it can be set up as not just one iteration, but rather multiple iterations going on in parallel.

This model can also be considered the extreme of continuous deployment, though it skips some of the automated functional and system testing that normally precedes a release; those test runs can take too long to support the concept. It does assume a few things. The biggest is that your application has a large enough user base and traffic volume to support quick tests of new functionality.

I have not figured out how this can work with a highly distributed microservices application, but I am sure it is possible, if you have:

The Tools to Get It Done
Of course, such an advanced way of looking at both releases and application architectures requires great tooling to execute. The top three tools for fail fast are as follows:

  1. Release Automation: Your release automation tool needs to handle several releases at once. They all do, but that is not what I mean. Supporting multiple parallel releases requires very good state management and dashboards to visualize releases. Without this visibility, it is very difficult to know which releases are where and when they have been reverted, which can cause serious problems, and the additional overhead might not make the process worth it.
  2. On-Demand Cloud Environments: The infrastructure to support this is not about power, but flexibility. Platform as a Service (PaaS) is the most suitable form of infrastructure for canary releases, because with PaaS you provision against a pool of resources, not actual VMs. This makes provisioning faster and easier to manage, because you do not have to worry about orchestration. Most PaaS environments also make traffic swapping easier. With Infrastructure as a Service (IaaS), you will need to control traffic via your DNS or load balancer, which is likely one additional step. However, there is no reason IaaS should be excluded from break-fast processes, especially if you leverage container technology like Docker, which makes it nearly as simple as PaaS. Whether IaaS or PaaS, developers need to be able to spin up and tear down as many environments as they desire, on demand. If access to environments is gated, there is no way to achieve parallelism, or to respond to issues with a new, updated release. The processes I'm pitching require full-stack deployments, so deploying on existing environments is also not an option: with such a rapid release-and-revert cycle, it is very easy to accumulate variables that contaminate persistent environments.
  3. Alerting: Logging your environments is one thing. It is important to implement logging, but mostly for historical data, and in canary releases historical is too late. Responses to what happens in each build need to be quick, and the lifespan of an iteration is very short, so you need a strong alerting platform that can push alerts to you as they happen. The platform must be smart as well: because of the frequency and number of parallel releases, too much information quickly becomes a problem, and without filtering and de-duplication, responding to any issue means wading through noise first.
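One way a smart alerting platform keeps parallel canaries from drowning you in noise is de-duplication: suppressing repeats of the same alert within a time window. The class and field names below are an illustrative sketch, not the schema of PagerDuty or any other real alerting product:

```python
import time

class AlertDeduplicator:
    """Suppress repeat alerts with the same fingerprint within a window."""

    def __init__(self, window_seconds: float = 300.0):
        self.window = window_seconds
        self.last_seen = {}  # (release, error_signature) -> last notify time

    def should_notify(self, release, error_signature, now=None):
        """Return True only for the first alert of this kind per window."""
        now = time.time() if now is None else now
        key = (release, error_signature)
        last = self.last_seen.get(key)
        self.last_seen[key] = now
        # Notify only if this (release, error) pair is new or stale.
        return last is None or (now - last) > self.window
```

Keying on the release as well as the error means the same failure surfacing in two parallel canaries still pages once per canary, which is exactly the signal you need to decide which variant to revert.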

It Is Not All Technology
The concepts around releasing faster, breaking faster, and fixing faster are not complex. Implementing them in existing environments can be. Any experienced developer, operations engineer, or QA person knows that you cannot simply flip the canary release switch.

Implementation is a journey, and the process above is a goal. What is nice about the already popular practice of continuous integration is that the experimental fail-and-revert process can first be implemented in integration environments, where the impact is only on your internal team. In that case, the impact mostly falls on QA, who will be responsible for testing releases as soon as they are built, so the biggest organizational change happens there. (Still far less than implementing team-wide and in production.)

The bottom line is that the tools are available to build faster, fail faster, and fix faster, a process that will not only increase the number of builds you do a year, but also the innovation that produces new functionality and quality. Because the tools already exist, the burden is on the team to find a path from their existing release processes to the new ones.


The post Continuous — Build, Break, and Fix Fast appeared first on PagerDuty.


More Stories By PagerDuty Blog

PagerDuty’s operations performance platform helps companies increase reliability. By connecting people, systems and data in a single view, PagerDuty delivers visibility and actionable intelligence across global operations for effective incident resolution management. PagerDuty has over 100 platform partners, and is trusted by Fortune 500 companies and startups alike, including Microsoft, National Instruments, Electronic Arts, Adobe, Rackspace, Etsy, Square and Github.
