Four Ways to Ensure Continuous Delivery Helps, Not Hurts

Customer demands aren't the only thing pushing development and operations teams toward more frequent software releases. There is also the need for quicker feedback on product quality, the desire to reduce bottlenecks in operations teams, and the goal of carrying less overhead on projects.

The concepts and spirit of continuous delivery are well known. However, organizations with existing applications that are starting to implement continuous delivery still have a lot to consider.

Making sure continuous delivery helps, not hurts, your organization is key to long-term success.

Faster features, quicker feedback, fewer snags, and more efficient teams are all good reasons to adopt continuous delivery (CD) and continuous integration (CI).

Unless your organization was born with continuous delivery, making the move can be tricky. You have a few choices:

  1. Slipstream CI or CD into the current delivery pipeline. Introducing CI into your development process is often not much of a challenge; the hardest part is building a flexible integration lab. Because you can keep your existing waterfall release process, it looks and feels a lot like a more flexible UAT environment (see the sketch after this list). CD is different, however; inserting CD into existing processes can be complicated and can create timing issues with roadmaps and previous releases.
  2. Stand up a parallel development and operations group that is built from the ground up for continuous delivery. This usually requires a compelling event to make sense, such as a major new version of an application or an on-premises client application moving to the web. The cost of another group is high, but it is an ideal opportunity to leverage everything DevOps has to offer.
  3. Rebuild the existing development process from scratch. While this is the best way to keep team structure intact (compared to option two), it requires you to set aside significant time when you're not releasing, and it has a high failure rate.
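
As a rough sketch of how option one might look in practice, the snippet below slips a CI gate in front of an otherwise unchanged release process. The `make build` and `make test` commands are hypothetical placeholders for whatever build and test tooling your project already uses.

```python
# Minimal sketch of a CI gate slipstreamed into an existing release process:
# build and test on every commit, then hand off to the current (waterfall)
# release steps untouched. The step commands are placeholders, not a real
# project's build system.
import subprocess
import sys

CI_STEPS = [
    ["make", "build"],   # hypothetical build command
    ["make", "test"],    # hypothetical test suite
]

def run_ci() -> int:
    for step in CI_STEPS:
        print(f"running: {' '.join(step)}")
        try:
            result = subprocess.run(step)
        except FileNotFoundError:
            print(f"step not available in this environment: {' '.join(step)}")
            return 1
        if result.returncode != 0:
            print(f"CI failed at step: {' '.join(step)}")
            return result.returncode
    print("CI passed; hand off to the existing release process unchanged")
    return 0

if __name__ == "__main__":
    sys.exit(run_ci())
```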

What's the difference between CI and CD?
Although continuous integration and continuous delivery are sometimes used synonymously, they are actually separate activities with nearly identical tooling and processes. CI can be leveraged by almost any organization, even those stuck in waterfall-based projects, simply because the risk is much lower. CD, however, impacts every user and your production infrastructure.
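
To make the distinction concrete, here is a minimal sketch of the same pipeline viewed two ways: the CI stages (build and test) run on every change and carry little risk, while the CD run adds the one stage that actually touches production. The stage functions are hypothetical placeholders, not any particular tool's API.

```python
# Sketch contrasting CI and CD in one pipeline: CI stops after build and test;
# CD adds the deploy stage, which is the only step that reaches production.
from typing import Callable, List, Tuple

def build() -> bool:
    print("compiling and packaging the application")
    return True

def test() -> bool:
    print("running unit and integration tests")
    return True

def deploy() -> bool:
    print("releasing to production infrastructure")  # the CD-only step
    return True

CI_STAGES: List[Tuple[str, Callable[[], bool]]] = [("build", build), ("test", test)]
CD_STAGES = CI_STAGES + [("deploy", deploy)]

def run_pipeline(stages, label: str) -> bool:
    print(f"--- {label} ---")
    for name, stage in stages:
        if not stage():
            print(f"{label} stopped at {name}")
            return False
    return True

if __name__ == "__main__":
    run_pipeline(CI_STAGES, "continuous integration")  # safe even for waterfall shops
    run_pipeline(CD_STAGES, "continuous delivery")     # impacts users and production
```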

Now let's explore continuous delivery further.

In the spirit of DevOps, the first and second options above make the most sense.

Option one lets you start learning and leveraging a results-driven culture without a large impact on current production releases. The team learns about the processes and the speed at which things happen while keeping the comfort level they're used to. The downside is that it might encourage teams to stick with old habits and, if not pushed forward, eventually revert to their old ways.

Option two is ideal if there is a compelling event, plus the staff and budget to support it. It lets you leverage all of the DevOps tooling right away, without much concern about integrating with existing components. The goal is that everyone eventually moves to the new processes as support for the older, waterfall-driven applications dies off.

Several very large software companies have successfully used this approach. Now large, previously non-technical organizations with a mix of line-of-business, mobile, and web applications benefit from it tremendously.

Both of these options have their pros and (more importantly) their cons. Startups built from scratch around continuous delivery may never have had to address them, but you can.

There are four elements of CD that can pose serious challenges down the road.

  1. Ability to revert
  2. Tying Infrastructure and Application Layers
  3. Bugs, bugs, and more bugs
  4. Blind spots

1. Ability to revert
"Delivery" is the most important part of the modern release pipeline. It's emphasis is on getting code to market faster and being results driven.

Inevitably, things will break (if they don't, your team is not moving at the appropriate pace). Because things will break, reverting releases to previous versions is a very important part of the process. But reverts are not always code; they can be configurations and even machines.

Building in great revert mechanisms that are fully tested against a series of previous releases helps teams know that their revert engine is there when they need it. It is wishful thinking to believe that release automation tools will do this for you; they won't. You need to perform regression testing on your revert process, at least early on, because teams often forget dependencies on the first run.
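
Here is a minimal sketch of what such a revert mechanism might look like, assuming a hypothetical Release record that bundles the code artifact, configuration snapshot, and machine image for each version. The point is that the revert path itself gets exercised against a series of previous releases, not just trusted.

```python
# Sketch of a revert that treats code, configuration, and machine image as one
# unit, plus a smoke check of the revert path against recent releases.
# The Release fields and restore steps are hypothetical placeholders for the
# artifacts your own pipeline actually produces.
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Release:
    version: str
    code_artifact: str      # e.g. a build id or container image tag
    config_snapshot: str    # e.g. a config bundle checksum
    machine_image: str      # e.g. a base machine image id

def revert_to(release: Release) -> None:
    # In a real pipeline, each step would call your deployment tooling.
    print(f"restoring code artifact {release.code_artifact}")
    print(f"restoring configuration {release.config_snapshot}")
    print(f"restoring machine image {release.machine_image}")

def test_revert_path(history: List[Release]) -> None:
    # Exercise the revert against several previous releases so the team
    # knows the revert engine works before it is needed in anger.
    for release in history:
        revert_to(release)
        print(f"smoke-checked revert to {release.version}\n")

if __name__ == "__main__":
    history = [
        Release("1.2.3", "build-451", "cfg-9f2a", "img-2024-01"),
        Release("1.2.2", "build-440", "cfg-77b1", "img-2023-12"),
    ]
    test_revert_path(history)
```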

Having the analytics to show teams what configurations look like before and after a release is critical to staying aware of all the changes happening between versions, both in the application and in the infrastructure.
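
One lightweight way to get that before-and-after visibility is to snapshot configuration around each release and diff the snapshots. The sketch below uses plain dictionaries as stand-ins for whatever configuration source you actually manage.

```python
# Sketch of before/after configuration snapshots around a release, so every
# change between versions is visible. The example keys and values are invented.
from typing import Dict

def diff_config(before: Dict[str, str], after: Dict[str, str]) -> None:
    for key in sorted(set(before) | set(after)):
        old, new = before.get(key), after.get(key)
        if old == new:
            continue
        if old is None:
            print(f"added   {key} = {new}")
        elif new is None:
            print(f"removed {key} (was {old})")
        else:
            print(f"changed {key}: {old} -> {new}")

if __name__ == "__main__":
    before = {"web.workers": "4", "db.pool": "20", "feature.beta": "off"}
    after  = {"web.workers": "8", "db.pool": "20", "cache.ttl": "300"}
    diff_config(before, after)
```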

2. Tying Infrastructure and Application Layers
Given the speed of releases and the pace at which the frameworks, web servers, and backends that applications rely on are updated, it is critical that software releases be tied to their associated infrastructure.

A revert will get you back to a previously known good state, but it won't fix the problem. Every organization will have a different way of doing this, but without the correlation, development and operations will play ping-pong with issues and their potential resolutions.

It's a classic problem to have things such as new frameworks and patches running in integration environments but not in production. This is a catalyst for widespread issues. Without knowing the relationship of a release to its infrastructure, a huge amount of time can be wasted trying to spot these issues, and the process quickly starts looking like waterfall again.
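
A simple way to keep that relationship explicit is a release manifest that pins the infrastructure versions alongside the application version, plus a drift check between environments. The version strings below are invented purely for illustration.

```python
# Sketch of tying a release to its infrastructure: a manifest records the
# framework, web server, and backend versions next to the application version,
# and a drift check compares environments so that patches applied in
# integration but not production are caught early.
from typing import Dict

integration_manifest: Dict[str, str] = {
    "app": "2.4.0",
    "framework": "rails-5.2.8",
    "web_server": "nginx-1.24",
    "backend": "postgres-14.9",
}

production_manifest: Dict[str, str] = {
    "app": "2.4.0",
    "framework": "rails-5.2.8",
    "web_server": "nginx-1.22",   # drifted: patch applied only in integration
    "backend": "postgres-14.9",
}

def check_drift(integration: Dict[str, str], production: Dict[str, str]) -> bool:
    drifted = {k for k in set(integration) | set(production)
               if integration.get(k) != production.get(k)}
    for key in sorted(drifted):
        print(f"drift on {key}: integration={integration.get(key)} "
              f"production={production.get(key)}")
    return not drifted

if __name__ == "__main__":
    if not check_drift(integration_manifest, production_manifest):
        print("block the release until the environments match")
```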

3. Bugs, bugs, and more bugs
Let's say a small bug makes it through one release, two releases, or maybe even three. That means code has been released on top of bad code. Unfortunately this happens a lot; however, with the power of frequent releases and active feedback, it will eventually get caught.

Catching the bug is not the problem.

The problem is understanding exactly where the bug is. Sometimes a new feature that uses buggy functionality operates exactly as intended, but what it was written on top of does not. This is why a strong system for performance testing, QA, and QE is as critical to continuous delivery as tools like Jenkins and Go.
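
When a bug has survived several releases, one common way to pin down where it entered is a binary search over release history, in the spirit of git bisect. The sketch below assumes you have an automated check that reproduces the defect against a given release; the release numbers and regression data are made up.

```python
# Sketch of locating the release that introduced a bug: binary search over an
# ordered release history, where is_buggy stands in for an automated
# regression test that reproduces the defect.
from typing import Callable, List

def first_bad_release(releases: List[str], is_buggy: Callable[[str], bool]) -> str:
    # releases are ordered oldest -> newest; the oldest is assumed good.
    lo, hi = 0, len(releases) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_buggy(releases[mid]):
            hi = mid            # bug already present at mid
        else:
            lo = mid + 1        # bug introduced after mid
    return releases[lo]

if __name__ == "__main__":
    releases = ["1.4.0", "1.4.1", "1.4.2", "1.4.3", "1.4.4"]
    buggy_since = {"1.4.2", "1.4.3", "1.4.4"}       # pretend regression data
    print(first_bad_release(releases, lambda r: r in buggy_since))  # -> 1.4.2
```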

4. Blind spots
These bugs often occur because of blind spots in the environment and poor analytics: areas in the infrastructure and code that teams can't always get a clear picture of. The issues end up surfacing as support tickets, complaining users, or, potentially, outages.

Blind spots should be avoided at all costs.

You achieve this by building a culture of analytics first, and analytics everything, very early on. Make sure that operations and development teams know to produce analytics for all systems and applications, and where to push them. Leverage integrations of tools like Logentries with APM tools like New Relic to help gain the insights you need.
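
As a small illustration of the "analytics everything" habit, the sketch below emits structured, machine-readable events from application code using Python's standard logging module. In practice the handler would forward these events to your log management or APM tooling rather than stdout; the service and event names here are made up.

```python
# Sketch of emitting structured events so no part of the system stays a blind
# spot. Structured, machine-readable events are easier to aggregate and alert
# on than free-form log lines.
import json
import logging
import time

logger = logging.getLogger("checkout")          # hypothetical service name
logging.basicConfig(level=logging.INFO, format="%(message)s")

def emit(event: str, **fields) -> None:
    fields.update({"event": event, "ts": time.time()})
    logger.info(json.dumps(fields))

if __name__ == "__main__":
    emit("order.placed", order_id="A-1001", latency_ms=87, status="ok")
    emit("order.failed", order_id="A-1002", latency_ms=5321, status="timeout")
```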

CI/CD gives teams more flexibility and the ability to create better products. For teams with existing applications, it is also a fantastic opportunity to move in a new direction and stay competitive. The shift may not always be easy, but the rewards are well worth it. Considering the aspects of CD above that could pose challenges down the road will help these teams implement CD without harm. Let us know what you think in the comments below.

More Stories By Trevor Parsons

Trevor Parsons is Chief Scientist and Co-founder of Logentries. Trevor has over 10 years experience in enterprise software and, in particular, has specialized in developing enterprise monitoring and performance tools for distributed systems. He is also a research fellow at the Performance Engineering Lab Research Group and was formerly a Scientist at the IBM Center for Advanced Studies. Trevor holds a PhD from University College Dublin, Ireland.

IoT & Smart Cities Stories
René Bostic is the Technical VP of the IBM Cloud Unit in North America. Enjoying her career with IBM during the modern millennial technological era, she is an expert in cloud computing, DevOps and emerging cloud technologies such as Blockchain. Her strengths and core competencies include a proven record of accomplishments in consensus building at all levels to assess, plan, and implement enterprise and cloud computing solutions. René is a member of the Society of Women Engineers (SWE) and a m...
Early Bird Registration Discount Expires on August 31, 2018 Conference Registration Link ▸ HERE. Pick from all 200 sessions in all 10 tracks, plus 22 Keynotes & General Sessions! Lunch is served two days. EXPIRES AUGUST 31, 2018. Ticket prices: ($1,295-Aug 31) ($1,495-Oct 31) ($1,995-Nov 12) ($2,500-Walk-in)
According to Forrester Research, every business will become either a digital predator or digital prey by 2020. To avoid demise, organizations must rapidly create new sources of value in their end-to-end customer experiences. True digital predators also must break down information and process silos and extend digital transformation initiatives to empower employees with the digital resources needed to win, serve, and retain customers.
IoT is rapidly becoming mainstream as more and more investments are made into the platforms and technology. As this movement continues to expand and gain momentum it creates a massive wall of noise that can be difficult to sift through. Unfortunately, this inevitably makes IoT less approachable for people to get started with and can hamper efforts to integrate this key technology into your own portfolio. There are so many connected products already in place today with many hundreds more on the h...
Digital Transformation: Preparing Cloud & IoT Security for the Age of Artificial Intelligence. As automation and artificial intelligence (AI) power solution development and delivery, many businesses need to build backend cloud capabilities. Well-poised organizations, marketing smart devices with AI and BlockChain capabilities prepare to refine compliance and regulatory capabilities in 2018. Volumes of health, financial, technical and privacy data, along with tightening compliance requirements by...
Charles Araujo is an industry analyst, internationally recognized authority on the Digital Enterprise and author of The Quantum Age of IT: Why Everything You Know About IT is About to Change. As Principal Analyst with Intellyx, he writes, speaks and advises organizations on how to navigate through this time of disruption. He is also the founder of The Institute for Digital Transformation and a sought after keynote speaker. He has been a regular contributor to both InformationWeek and CIO Insight...
Digital Transformation is much more than a buzzword. The radical shift to digital mechanisms for almost every process is evident across all industries and verticals. This is often especially true in financial services, where the legacy environment is many times unable to keep up with the rapidly shifting demands of the consumer. The constant pressure to provide complete, omnichannel delivery of customer-facing solutions to meet both regulatory and customer demands is putting enormous pressure on...
Andrew Keys is Co-Founder of ConsenSys Enterprise. He comes to ConsenSys Enterprise with capital markets, technology and entrepreneurial experience. Previously, he worked for UBS investment bank in equities analysis. Later, he was responsible for the creation and distribution of life settlement products to hedge funds and investment banks. After, he co-founded a revenue cycle management company where he learned about Bitcoin and eventually Ethereal. Andrew's role at ConsenSys Enterprise is a mul...
Business professionals no longer wonder if they'll migrate to the cloud; it's now a matter of when. The cloud environment has proved to be a major force in transitioning to an agile business model that enables quick decisions and fast implementation that solidify customer relationships. And when the cloud is combined with the power of cognitive computing, it drives innovation and transformation that achieves astounding competitive advantage.
Machine learning has taken residence at our cities' cores and now we can finally have "smart cities." Cities are a collection of buildings made to provide the structure and safety necessary for people to function, create and survive. Buildings are a pool of ever-changing performance data from large automated systems such as heating and cooling to the people that live and work within them. Through machine learning, buildings can optimize performance, reduce costs, and improve occupant comfort by ...