To Ensure Continuous Delivery By @TrevParsons | @DevOpsSummit [#DevOps]

Although continuous integration and continuous delivery are sometimes used synonymously, they are actually separate activities

Four Ways to Ensure Continuous Delivery Helps, Not Hurts

Customer demands aren't the only thing pushing development and operations teams into more frequent software releases. It's also the need for quicker feedback on product quality, the desire to reduce bottlenecks in operations teams, and the goal of carrying less overhead on projects.

The concepts and spirit of continuous delivery are well known. However, organizations with existing applications that are starting to implement continuous delivery still have a lot to consider.

Making sure continuous delivery helps, not hurts your organization, is key to long-term success.

Faster features, quicker feedback, fewer snags, and more efficient teams are all good reasons to adopt continuous delivery (CD) and continuous integration (CI).

Unless your organization was born with continuous delivery, making the move can be tricky. You have a few choices:

  1. Slipstream CI or CD into the current delivery pipeline. Introducing CI into your development processes is often not too much of a challenge; the hardest part is constructing a flexible integration lab. Because you can keep your existing waterfall release process, it looks and feels a lot like a more flexible UAT environment (see the sketch after this list). Inserting CD into existing processes is a different story; it can be complicated and can create timing issues with roadmaps and previous releases.
  2. Start with a parallel development and operations group that is built from the bottom up for continuous delivery. Usually this requires a compelling event to make sense, like a major new version of an application or an on-premises client application moving to the web. The cost of another group is high, but it is an ideal opportunity to leverage all DevOps has to offer.
  3. Rebuild the existing development process from scratch. While this is the best way to keep team structure intact (compared to option two), it requires you to allocate significant time when you're not releasing, and it has a high failure rate.
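
To make option one concrete, here is a minimal, hypothetical sketch in Python of a CI gate bolted onto an existing process: it runs the build and test steps on every commit and reports pass/fail, while the downstream waterfall release stays untouched. The make targets, step names, and reporting behavior are placeholders, not a prescription for any particular CI tool.

    #!/usr/bin/env python3
    """Minimal post-commit CI gate (hypothetical sketch).

    Runs the existing build and test commands on every commit and reports
    the result, without touching the downstream waterfall release process.
    The commands below are placeholders for whatever your build already uses.
    """
    import subprocess
    import sys

    CI_STEPS = [
        ("build", ["make", "build"]),
        ("unit tests", ["make", "test"]),
        ("integration tests", ["make", "integration-test"]),
    ]

    def run_ci() -> bool:
        for name, cmd in CI_STEPS:
            print(f"[ci] running {name}: {' '.join(cmd)}")
            if subprocess.run(cmd).returncode != 0:
                # Fail fast: the existing release process downstream is never triggered.
                print(f"[ci] {name} failed -- stopping here")
                return False
        print("[ci] all steps passed -- artifact is eligible for the existing release process")
        return True

    if __name__ == "__main__":
        sys.exit(0 if run_ci() else 1)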

What's the difference between CI and CD?
Although continuous integration and continuous delivery are sometimes used synonymously, they are actually separate activities with nearly identical tooling and processes. CI can be leveraged by almost any organization, even those stuck in waterfall-based projects, simply because the risk is much lower. CD, however, impacts all users and production infrastructure.

Now let's explore continuous delivery further.

In the spirit of DevOps, the first and second options above make the most sense.

Option one allows you to start learning and leveraging a results-driven culture without having a large impact on current production releases. The team can learn about the processes and the speed at which things happen while maintaining the comfort level they're used to. The downside is that it might encourage teams to stick with old habits and, if not pushed forward, eventually revert to their old ways.

Option two is ideal if there is a compelling event, staff, and budget to support it. This option allows you to leverage all DevOps tools right away, without much concern about integrating with existing components. The goal is that eventually everyone moves to the new processes once support for the older, waterfall-driven applications dies off.

Several very large software companies have successfully leveraged this approach. Now, large (previously non-technical) organizations with a combination of line-of-business applications and mobile and web applications benefit from this approach tremendously.

Both of these options have their pros and (more importantly) their cons. Startups designed from scratch around continuous delivery may not have had to address them, but you can.

There are four elements of CD that can pose serious challenges down the road.

  1. Ability to revert
  2. Tying Infrastructure and Application Layers
  3. Bugs, bugs, and more bugs
  4. Blind spots

1. Ability to revert
"Delivery" is the most important part of the modern release pipeline. It's emphasis is on getting code to market faster and being results driven.

Inevitably, things will break (if they don't, your team is not moving at the appropriate pace). Because things will break, reverting releases to previous versions is a very important component of the process. But reverts are not always code; they can be configurations and even machines.

Building in solid revert mechanisms, fully tested against a series of previous releases, helps teams know that their revert engine is there when they need it. It is wishful thinking to believe that release automation tools will do this for you; they won't. You need to perform regression testing on your revert process, at least early on, because teams often forget dependencies on the first run.
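
As a rough illustration, the revert path can be exercised the same way a release is: as an automated regression test run against real previous versions. The pytest-style sketch below uses an in-memory stand-in for release tooling; the versions, helpers, and state it tracks are illustrative assumptions, not any particular product's API.

    """Regression test for the revert path (hypothetical sketch).

    The in-memory RELEASES store stands in for real release tooling; in
    practice deploy_release and revert_to would drive your actual pipeline.
    The point is that the revert engine is tested against real previous
    releases rather than assumed to work.
    """
    import pytest

    # Known-good code and configuration for past releases (placeholder data).
    RELEASES = {
        "1.4.1": {"code": "1.4.1", "config": {"workers": 4}},
        "1.4.2": {"code": "1.4.2", "config": {"workers": 4}},
        "1.5.0-rc1": {"code": "1.5.0-rc1", "config": {"workers": 8}},
    }

    current_state: dict = {}

    def deploy_release(version: str) -> None:
        # In practice: run the real deploy (code, configuration, machines).
        current_state.clear()
        current_state.update(RELEASES[version])

    def revert_to(version: str) -> None:
        # In practice: run the real revert engine, not simply a re-deploy.
        current_state.clear()
        current_state.update(RELEASES[version])

    @pytest.mark.parametrize("previous", ["1.4.2", "1.4.1"])
    def test_revert_restores_known_good_state(previous):
        deploy_release(previous)
        baseline = dict(current_state)   # capture the known-good state

        deploy_release("1.5.0-rc1")      # roll forward to the candidate
        revert_to(previous)              # then exercise the revert path

        # The revert must restore code *and* configuration, not just the binary.
        assert current_state == baseline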

Having the analytics to show teams what configurations look like before and after a release is critical to staying aware of all the changes that happen between versions, both in the application and the infrastructure.
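
One hedged sketch of what that before-and-after step might look like: snapshot the configuration the release touches on either side of the deploy and report only what changed. The keys and values below are placeholders for whatever your own pipeline records.

    """Before/after configuration snapshot diff (hypothetical sketch).

    Reports which settings changed across a release, in the application and
    the infrastructure alike. The example snapshots are placeholder data.
    """
    import json

    def diff_snapshots(before: dict, after: dict) -> dict:
        """Return only the keys whose values differ between two snapshots."""
        keys = set(before) | set(after)
        return {k: {"before": before.get(k), "after": after.get(k)}
                for k in sorted(keys) if before.get(k) != after.get(k)}

    # In practice these would be gathered from config files, package managers,
    # and machine metadata immediately before and after the deploy.
    before = {"app_version": "1.4.2", "nginx_version": "1.18.0",
              "feature_flags": {"new_checkout": False}}
    after = {"app_version": "1.5.0", "nginx_version": "1.18.0",
             "feature_flags": {"new_checkout": True}}

    print(json.dumps(diff_snapshots(before, after), indent=2))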

2. Tying Infrastructure and Application Layers
With the speed of releases, and the pace at which the frameworks, web servers, and backends that applications rely on are updated, it is critical that software releases be tied to their associated infrastructure.

A revert will get you back to a previously known good state, but it won't fix the problem. Every organization will have a different way of doing this, but without the correlation, development and operations will play ping-pong with issues and their potential resolutions.

It's a classic problem to have things such as new frameworks and patches running in integration environments but not in production. This is a catalyst for widespread issues. Without knowing the relationship of a release to its infrastructure, a huge amount of time can be wasted trying to spot these issues, and the process quickly starts looking like waterfall again.
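
A lightweight way to keep that release-to-infrastructure correlation is a manifest generated at release time, recording the infrastructure each version was built and validated against so integration and production can be compared. The Python sketch below is illustrative; the field names and the web server and framework versions are assumptions a real pipeline would fill in from its own sources.

    """Release manifest tying application and infrastructure (hypothetical sketch).

    Stored alongside the release artifact so that a production issue can be
    traced to the exact framework, web server, or OS difference between
    environments. Field names and versions are illustrative only.
    """
    import json
    import platform
    import sys
    from datetime import datetime, timezone

    def build_manifest(app_version: str) -> dict:
        return {
            "app_version": app_version,
            "released_at": datetime.now(timezone.utc).isoformat(),
            # Infrastructure this release was built and tested against.
            "runtime": sys.version.split()[0],
            "os": platform.platform(),
            # Placeholders a real pipeline would fill in:
            "web_server": "nginx 1.18.0",
            "framework": "django 3.2.12",
        }

    print(json.dumps(build_manifest("1.5.0"), indent=2))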

3. Bugs, bugs, and more bugs
Let's say a small bug makes it through one release, two releases, or maybe even three. This means code has been released on top of bad code. Unfortunately, this happens a lot; however, with the power of frequent releases and active feedback, it will eventually get caught.

Catching the bug is not the problem.

The problem is understanding exactly where the bug is. Sometimes a new feature using buggy functionality might be operating exactly as intended, but what it was written on top of is not. This is why having a strong system for performance testing, QA, and QE is as critical to continuous delivery as tools like Jenkins and Go.
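
One way to make the "where exactly is the bug" question tractable is to test the layer a feature was written on top of directly, not only through the feature itself, so a failure points at the right layer. The sketch below is purely illustrative; tax_rate_for and checkout_total are made-up stand-ins for an underlying function and a feature built on it.

    """Localizing a bug to the right layer (hypothetical sketch).

    A feature-level test can pass while the function underneath it is wrong;
    testing both layers separately makes it clear where a fix belongs.
    tax_rate_for and checkout_total are illustrative stand-ins.
    """

    def tax_rate_for(region: str) -> float:
        # Underlying functionality that newer features were built on top of.
        return {"EU": 0.20, "US": 0.07}.get(region, 0.0)

    def checkout_total(subtotal: float, region: str) -> float:
        # Newer feature: behaves exactly as intended *given* the rate it is handed.
        return round(subtotal * (1 + tax_rate_for(region)), 2)

    def test_underlying_tax_rate():
        # Pins the layer underneath, so a bad rate fails here first.
        assert tax_rate_for("EU") == 0.20

    def test_checkout_total_uses_rate():
        # Feature-level test: fails too when the rate is wrong, but the test
        # above pinpoints which layer the fix belongs in.
        assert checkout_total(100.0, "EU") == 120.0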

4. Blind spots
These bugs often occur because of blind spots in the environment and poor analytics: areas in the infrastructure and code that teams can't always get a clear picture of. The issues end up surfacing as support tickets, complaining users, or even outages.

Blind spots should be avoided at all costs.

You achieve this by building in a culture of analytics first, and analytics everything, very early on. Make sure that operations and development teams know to produce analytics for all systems and applications, and where to push them. Leverage integrations of tools like Logentries with APM tools like New Relic to help gain the insights you need.
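
As one hedged example of "analytics everything," services can emit structured, machine-parsable events from day one using nothing but Python's standard library, leaving shipping to whichever agent or forwarder your log management and APM tools provide. The event names and fields below are illustrative assumptions.

    """Structured event logging (hypothetical sketch).

    Emits JSON-formatted events via the standard library so a log management
    or APM integration can pick them up. Event names and fields are
    illustrative; shipping is left to whatever agent or forwarder you run.
    """
    import json
    import logging
    import time

    class JsonFormatter(logging.Formatter):
        def format(self, record: logging.LogRecord) -> str:
            event = {
                "ts": round(time.time(), 3),
                "level": record.levelname,
                "event": record.getMessage(),
            }
            event.update(getattr(record, "fields", {}))
            return json.dumps(event)

    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    log = logging.getLogger("release-analytics")
    log.addHandler(handler)
    log.setLevel(logging.INFO)

    # Every deploy, revert, and configuration change gets an event -- no blind spots.
    log.info("release.deployed", extra={"fields": {"version": "1.5.0", "env": "prod"}})
    log.info("release.reverted", extra={"fields": {"from": "1.5.0", "to": "1.4.2"}})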

CI and CD give teams more flexibility and the ability to create better products. For teams with existing applications, they are also a fantastic opportunity to move in a new direction and stay competitive. The shift may not always be easy, but the rewards are well worth it. Taking into consideration the aspects of CD above that could pose challenges down the road will help teams implement CD without harm. Let us know what you think in the comments below.

More Stories By Trevor Parsons

Trevor Parsons is Chief Scientist and Co-founder of Logentries. Trevor has over 10 years of experience in enterprise software and, in particular, has specialized in developing enterprise monitoring and performance tools for distributed systems. He is also a research fellow at the Performance Engineering Lab Research Group and was formerly a Scientist at the IBM Center for Advanced Studies. Trevor holds a PhD from University College Dublin, Ireland.
