
Four Ways to Ensure Continuous Delivery Helps, Not Hurts

Customer demands aren't the only thing pushing development and operations teams toward more frequent software releases. There is also the need for quicker feedback on product quality, the desire to reduce bottlenecks in operations teams, and the goal of carrying less overhead on projects.

The concepts and spirit of continuous delivery are well known. However, organizations with existing applications that are just starting to implement continuous delivery still have a lot to consider.

Making sure continuous delivery helps, not hurts, your organization is key to long-term success.

Faster features, quicker feedback, fewer snags, and more efficient teams are all good reasons to adopt continuous delivery (CD) and continuous integration (CI).

Unless your organization was born with continuous delivery, making the move can be tricky. You have a few choices:

  1. Slipstream CI or CD into the current delivery pipeline. Introducing CI into your development processes is often not too much of a challenge; the hardest part is constructing a flexible integration lab. Because you can keep your existing waterfall release process, it looks and feels a lot like a more flexible UAT environment. CD is different, however; inserting CD into existing processes can be complicated, and it can create timing issues with roadmaps and previous releases.
  2. Start a parallel development and operations group that is built from the bottom up for continuous delivery. Usually this requires a compelling event to make sense, such as a new major version of an application, or an on-premises client application moving to the web. The cost of another group is high, but it is an ideal opportunity to leverage all DevOps has to offer.
  3. Rebuild the existing development process from scratch. While this is the best way to keep team structure intact (compared to option two), it requires you to allocate significant time when you're not releasing, and it has a high failure rate.

What's the difference between CI and CD?
Although continuous integration and continuous delivery are sometimes used synonymously, they are actually separate activities with nearly identical tooling and processes. CI can be leveraged by almost any organization, even those stuck in waterfall-based projects, simply because the risk is much lower. CD, however, impacts all users and production infrastructure.
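To make the distinction concrete, here is a minimal sketch in Python (the stage names are hypothetical, not any particular tool's configuration): CI runs the build-and-test stages on every commit, while CD is the extra step that carries the same pipeline through to production.

```python
# Minimal sketch of the CI/CD distinction (hypothetical stage names; a real
# pipeline would shell out to build and deploy tooling at each stage).

CI_STAGES = ["checkout", "build", "unit_tests", "integration_tests"]
CD_STAGES = CI_STAGES + ["deploy_to_production"]  # CD = CI + delivery

def run_pipeline(name, stages):
    """Pretend-run each stage in order and report what was reached."""
    for stage in stages:
        print(f"[{name}] {stage} ... ok")

if __name__ == "__main__":
    run_pipeline("CI", CI_STAGES)   # stops short of production: low risk
    run_pipeline("CD", CD_STAGES)   # touches all users and production infra
```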

Now let's explore continuous delivery in more depth.

In the spirit of DevOps, the first and second options above make the most sense.

Option one allows you to start learning and leveraging a results-driven culture without a large impact on current production releases. The team can learn about the processes and the speed at which things happen while maintaining the comfort level they're used to. The downside is that it might encourage teams to stick with old habits and, if not pushed forward, eventually revert to old ways.

Option two is ideal if there is a compelling event, and the staff and budget to support it. This option allows you to leverage all the DevOps tooling right away, without much concern about integrating with existing components. The goal is that everyone will eventually move to the new processes as support for the older waterfall-driven applications dies off.

Several very large software companies have successfully taken this approach. Now large, previously non-technical organizations with a combination of line-of-business, mobile, and web applications are benefiting from it tremendously.

Both of these options have their pros and, more importantly, their cons. Startups designed around continuous delivery from day one may never have had to address those cons, but you can.

There are four elements of CD that can pose serious challenges down the road.

  1. Ability to revert
  2. Tying Infrastructure and Application Layers
  3. Bugs, bugs, and more bugs
  4. Blind spots

1. Ability to revert
"Delivery" is the most important part of the modern release pipeline. It's emphasis is on getting code to market faster and being results driven.

Inevitably, things will break (if they don't, your team is not moving at the appropriate pace). Because things will break, reverting releases to previous versions is a very important component of the process. But reverts are not always code; they can also involve configurations and even machines.

Building in solid revert mechanisms that are fully tested against a series of previous releases helps teams know their revert engine is there when they need it. It is wishful thinking to believe that release automation tools will do this for you; they won't. You need to perform regression testing on your revert process, at least early on, because teams often forget dependencies on the first run.
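As an illustration of that regression testing, here is a minimal sketch in Python (the release data and field names are hypothetical): it walks back through a series of previous releases and fails if a revert would leave any dependency, not just the code, at the wrong version.

```python
# Minimal sketch of regression-testing a revert path (all names and version
# data here are hypothetical). A real test would drive your actual release
# tooling instead of this in-memory model.

# Each release pins the app version *and* the config/machine dependencies
# that shipped with it -- reverts are not always code.
RELEASES = [
    {"app": "1.0.0", "config": "cfg-17", "machine_image": "ami-a"},
    {"app": "1.1.0", "config": "cfg-21", "machine_image": "ami-a"},
    {"app": "1.2.0", "config": "cfg-22", "machine_image": "ami-b"},
]

def revert(current_index):
    """Return the full state the environment should be restored to."""
    assert current_index > 0, "nothing to revert to"
    return RELEASES[current_index - 1]

def test_revert_restores_all_dependencies():
    # Walk back through the series of previous releases, checking that the
    # revert target matches every recorded dependency, not just the app.
    for i in range(len(RELEASES) - 1, 0, -1):
        target = revert(i)
        expected = RELEASES[i - 1]
        for key in ("app", "config", "machine_image"):
            assert target[key] == expected[key], f"{key} left behind on revert"

if __name__ == "__main__":
    test_revert_restores_all_dependencies()
    print("revert path verified against all previous releases")
```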

Having the analytics to show teams what configurations look like before and after a release is critical for staying aware of all the changes happening between versions, both in the application and in the infrastructure.
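A minimal sketch of that before-and-after analysis follows, with hypothetical snapshot contents; diffing a configuration snapshot taken on each side of a release surfaces every change between versions.

```python
# Minimal sketch of before/after configuration analytics (hypothetical
# snapshot contents). Capturing a snapshot on each side of a release and
# diffing them makes every change between versions visible.

def snapshot_diff(before, after):
    """Return added, removed, and changed keys between two config snapshots."""
    added = {k: after[k] for k in after.keys() - before.keys()}
    removed = {k: before[k] for k in before.keys() - after.keys()}
    changed = {k: (before[k], after[k])
               for k in before.keys() & after.keys() if before[k] != after[k]}
    return added, removed, changed

before = {"app_version": "1.1.0", "jvm_heap": "2g", "feature_x": "off"}
after = {"app_version": "1.2.0", "jvm_heap": "4g", "new_cache_ttl": "60"}

added, removed, changed = snapshot_diff(before, after)
print("added:", added)      # settings introduced by the release
print("removed:", removed)  # settings the release dropped
print("changed:", changed)  # settings whose values moved
```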

2. Tying Infrastructure and Application Layers
With the speed of releases, and the speed at which the frameworks, web servers, and backends that applications rely on are updated, it is critical that software releases be tied to their associated infrastructure.

A revert will get you back to a previously known good state, but it won't fix the problem. Every organization will have a different way of doing this, but without the correlation, development and operations will play ping-pong with issues and their potential resolutions.

It's a classic problem for things such as new frameworks and patches to run in integration environments but not in production, and it is a catalyst for widespread issues. Without knowing the relationship of a release to its infrastructure, a huge amount of time can be wasted trying to spot these issues, and the process quickly starts looking like waterfall again.
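One way to express that relationship, sketched here with hypothetical version numbers, is a release manifest that records the infrastructure a build was integration-tested against and flags any drift before the release is promoted to production.

```python
# Minimal sketch (hypothetical versions) of tying a release to its
# infrastructure: record the framework/server versions a build was
# integration-tested against, then refuse to promote if production differs.

RELEASE_MANIFEST = {
    "app": "2.4.1",
    "framework": "spring-5.3.2",
    "web_server": "nginx-1.21",
    "os_patch_level": "2021-06",
}

def infra_drift(manifest, environment):
    """Return the keys where an environment diverges from the manifest."""
    return {k: (manifest[k], environment.get(k))
            for k in manifest
            if k != "app" and environment.get(k) != manifest[k]}

production = {"framework": "spring-5.3.2", "web_server": "nginx-1.19",
              "os_patch_level": "2021-04"}

drift = infra_drift(RELEASE_MANIFEST, production)
if drift:
    # Surfacing the mismatch up front avoids the dev/ops ping-pong.
    print("blocked: production differs from tested infrastructure:", drift)
```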

3. Bugs, bugs, and more bugs
Let's say a small bug makes it through one release, two releases, or maybe even three. This means code has been released on top of bad code. Unfortunately, this happens a lot; however, with the power of frequent releases and active feedback, the bug will eventually get caught.

Catching the bug is not the problem.

The problem is understanding exactly where the bug is. Sometimes a new feature that uses buggy functionality is operating exactly as intended, but what it was written on top of is not. This is why having a strong system for performance testing, QA, and QE is as critical to continuous delivery as tools like Jenkins and Go.
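A small sketch of why layer-by-layer testing localizes the bug (both functions here are hypothetical): the new feature's own test passes, while a test against the layer beneath it pins the failure where it actually lives.

```python
# Minimal sketch (hypothetical functions): the new feature behaves exactly
# as intended, but the helper it was written on top of carries the bug.
# A test per layer localizes the fault immediately.

def round_price(value):          # shipped two releases ago, subtly wrong:
    return int(value)            # truncates instead of rounding

def apply_discount(price, pct):  # the new feature, correct in itself
    return round_price(price * (1 - pct))

def test_round_price():
    assert round_price(9.99) == 10, "bug is in the underlying layer"

def test_apply_discount():
    # Passes even though the layer below is buggy, because this input
    # happens to truncate cleanly; only the layer test exposes the fault.
    assert apply_discount(100.0, 0.1) == 90

if __name__ == "__main__":
    test_apply_discount()
    print("feature test passed")
    try:
        test_round_price()
    except AssertionError as e:
        print("layer test failed:", e)  # the bug is located, not just caught
```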

4. Blind spots
These bugs often occur because of blind spots in the environment and poor analytics: areas in the infrastructure and code that teams can't always get a clear picture of. The issues end up surfacing in support tickets, user complaints, or, potentially, outages.

Blind spots should be avoided at all costs.

You achieve this by building a culture of analytics-first, analytics-everything very early on. Make sure that operations and development teams know to produce analytics for all systems and applications, and where to push them. Leverage integrations between tools like Logentries and APM tools like New Relic to help gain the insights you need.
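As a sketch of what "analytics everything" can look like in practice (the event and field names are hypothetical), here is a service emitting structured JSON events with Python's standard logging module; lines like these are what a log management service such as Logentries would collect and correlate with APM data.

```python
# Minimal sketch of an "analytics first" habit (hypothetical field names):
# every service emits structured, machine-readable events so nothing in the
# stack becomes a blind spot. A log collector can then index every field.

import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("checkout-service")

def emit(event, **fields):
    """Log one structured event as a single JSON line."""
    record = {"ts": time.time(), "event": event, **fields}
    log.info(json.dumps(record))

# Instrument both application behavior and infrastructure context.
emit("release_deployed", version="2.4.1", host="web-03")
emit("request_served", route="/cart", latency_ms=42, status=200)
emit("config_changed", key="cache_ttl", old=30, new=60)
```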

CI and CD give teams more flexibility and the ability to create better products. For teams with existing applications, they are also a fantastic opportunity to move in a new direction that keeps you competitive. The shift may not always be easy, but the rewards are well worth it. Taking into consideration the aspects of CD above that could pose challenges down the road will help these teams implement CD without harm. Let us know what you think in the comments below.

More Stories By Trevor Parsons

Trevor Parsons is Chief Scientist and Co-founder of Logentries. Trevor has over 10 years' experience in enterprise software and, in particular, has specialized in developing enterprise monitoring and performance tools for distributed systems. He is also a research fellow at the Performance Engineering Lab Research Group and was formerly a Scientist at the IBM Center for Advanced Studies. Trevor holds a PhD from University College Dublin, Ireland.

IoT & Smart Cities Stories
The deluge of IoT sensor data collected from connected devices and the powerful AI required to make that data actionable are giving rise to a hybrid ecosystem in which cloud, on-prem and edge processes become interweaved. Attendees will learn how emerging composable infrastructure solutions deliver the adaptive architecture needed to manage this new data reality. Machine learning algorithms can better anticipate data storms and automate resources to support surges, including fully scalable GPU-c...
Machine learning has taken residence at our cities' cores and now we can finally have "smart cities." Cities are a collection of buildings made to provide the structure and safety necessary for people to function, create and survive. Buildings are a pool of ever-changing performance data from large automated systems such as heating and cooling to the people that live and work within them. Through machine learning, buildings can optimize performance, reduce costs, and improve occupant comfort by ...
The explosion of new web/cloud/IoT-based applications and the data they generate are transforming our world right before our eyes. In this rush to adopt these new technologies, organizations are often ignoring fundamental questions concerning who owns the data and failing to ask for permission to conduct invasive surveillance of their customers. Organizations that are not transparent about how their systems gather data telemetry without offering shared data ownership risk product rejection, regu...
René Bostic is the Technical VP of the IBM Cloud Unit in North America. Enjoying her career with IBM during the modern millennial technological era, she is an expert in cloud computing, DevOps and emerging cloud technologies such as Blockchain. Her strengths and core competencies include a proven record of accomplishments in consensus building at all levels to assess, plan, and implement enterprise and cloud computing solutions. René is a member of the Society of Women Engineers (SWE) and a m...
Poor data quality and analytics drive down business value. In fact, Gartner estimated that the average financial impact of poor data quality on organizations is $9.7 million per year. But bad data is much more than a cost center. By eroding trust in information, analytics and the business decisions based on these, it is a serious impediment to digital transformation.
Digital Transformation: Preparing Cloud & IoT Security for the Age of Artificial Intelligence. As automation and artificial intelligence (AI) power solution development and delivery, many businesses need to build backend cloud capabilities. Well-poised organizations, marketing smart devices with AI and BlockChain capabilities prepare to refine compliance and regulatory capabilities in 2018. Volumes of health, financial, technical and privacy data, along with tightening compliance requirements by...
Predicting the future has never been more challenging - not because of the lack of data but because of the flood of ungoverned and risk laden information. Microsoft states that 2.5 exabytes of data are created every day. Expectations and reliance on data are being pushed to the limits, as demands around hybrid options continue to grow.
Digital Transformation and Disruption, Amazon Style - What You Can Learn. Chris Kocher is a co-founder of Grey Heron, a management and strategic marketing consulting firm. He has 25+ years in both strategic and hands-on operating experience helping executives and investors build revenues and shareholder value. He has consulted with over 130 companies on innovating with new business models, product strategies and monetization. Chris has held management positions at HP and Symantec in addition to ...
Enterprises have taken advantage of IoT to achieve important revenue and cost advantages. What is less apparent is how incumbent enterprises operating at scale have, following success with IoT, built analytic, operations management and software development capabilities - ranging from autonomous vehicles to manageable robotics installations. They have embraced these capabilities as if they were Silicon Valley startups.
As IoT continues to increase momentum, so does the associated risk. Secure Device Lifecycle Management (DLM) is ranked as one of the most important technology areas of IoT. Driving this trend is the realization that secure support for IoT devices provides companies the ability to deliver high-quality, reliable, secure offerings faster, create new revenue streams, and reduce support costs, all while building a competitive advantage in their markets. In this session, we will use customer use cases...