Time to Invest in Deployment By @Itransition | @DevOpsSummit [#DevOps]

You can’t be sure that every one of your app deployments will be smooth sailing

Why Now Is the Right Time to Invest in Deployment Automation

Isn't it great to treat your girlfriend by cooking her favorite omelet every morning? In theory, sure, but in reality, chances are most of the time you end up with darn scrambled eggs instead. Let's face it: you're a great boyfriend but a terrible cook. Believe it or not, this is quite analogous to app deployment. You can't be sure that every one of your app deployments will be smooth sailing; every now and then you will mess up a thing or two (or a dozen) along the way.

Be it a critical urge to rapidly roll back to a previous release or the inability to find the phone number of that one guy responsible for deployment, the opportunities for things to go terribly wrong are endless. As a rule, there are two reasons behind your worst nightmares coming true:

  • You're good at development but operations isn't your strong suit.
  • You're not using deployment automation.

In this article, we'll focus on the second point - deployment automation.

Automating the software deployment process for .NET has pretty much become a 'no-brainer' over the past few years. New tools have made it extremely easy to make deployments faster and less risky, at costs tending to zero. Not using automated deployment in 2014 is like not using source control: it's possible to live without it, but having it in place keeps you safe while requiring so little effort. Yet many still resist, put off by the perceived hassle of creating, configuring and maintaining automated deployment. And they're really missing out.

There are many reasons to start investing in deployment automation for .NET, including a drastic increase in deployment success rates and frequency, but most importantly, it's good for business. Here's why:

Stable Manual Deployment Is a Utopia
Let's have a brief look at what it usually takes to deploy an ordinary application:

  1. Checking out the version of the source code that you want to deploy (e.g. the latest commit of the /Release_01 branch);
  2. Building solution with appropriate settings applied;
  3. Transforming configuration appropriately;
  4. Publishing/packaging new version;
  5. Stopping the application/tuning load balancer so that users don't hit the app in the middle of deployment;
  6. Backing up the database;
  7. Updating the database structure/data;
  8. Removing old files (but keeping some, e.g. /Uploads folder);
  9. Copying new version to production server;
  10. Setting appropriate ACL permissions/other environment settings;
  11. Deploying dependencies recursively;
  12. Starting the application;
  13. Executing health-checks.

That's only the basic list; every application is unique and has a slightly different process (your app may need additional steps or skip some of the above), but most deployments look broadly similar.
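A checklist like the one above maps naturally onto a scripted pipeline that stops at the first failure. Here is a minimal sketch in Python; the step names and commands are purely illustrative assumptions, not a prescription for any particular tool:

```python
import subprocess

# Each step is (name, command). Commands here are illustrative placeholders
# for a real project's checkout/build/backup/deploy/health-check steps.
STEPS = [
    ("checkout",    ["git", "checkout", "Release_01"]),
    ("build",       ["msbuild", "App.sln", "/p:Configuration=Release"]),
    ("backup-db",   ["backup_db.cmd"]),
    ("deploy",      ["copy_release.cmd"]),
    ("healthcheck", ["check_health.cmd"]),
]

def run_pipeline(steps, runner=subprocess.run):
    """Execute steps in order, stopping at the first failure so a broken
    deployment never half-applies. Returns (completed, failed_step)."""
    completed = []
    for name, cmd in steps:
        if runner(cmd).returncode != 0:
            return completed, name
        completed.append(name)
    return completed, None
```

Because the runner is injectable, the same script can be exercised with a fake runner before it ever touches a real environment; the point is simply that the checklist becomes executable, ordered and repeatable.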

All those steps may seem easy enough to perform manually without anything going wrong. Well, day-to-day experience says that manual deployments do go wrong from time to time, mainly because of human nature. Humans (particularly creative people like developers) are not very good at routine repetitive tasks; that's why we have computers. Here are a few of the most avoidable manual deployment errors I've seen:

  • Checking out the version of the source code that you want to deploy:
      • Ever had the /Dev branch deployed to the Production environment just because the person running the deployment was sure he had switched to the release branch when, in actuality, he hadn't?
      • Or even worse: deploying the /Production branch with local intermediate changes that were never checked into source control (it's so easy to forget about local changes when you're rushing out a hotfix). Then no one understands why the application misbehaves. I've seen people decompile a production .dll with a reflector tool just to find out what's in there, because the production code differed from every version in the repository.
  • Building the solution with the appropriate settings applied:
      • It's so easy to forget to switch Visual Studio to the Release configuration before building the source code. And it's so sad to find out that your app is too slow in production because of a DEBUG build.
  • Transforming configuration appropriately:
      • There's not much joy in discovering that your production application has been using a development database since the last release, once end users start reporting "data loss" (somebody forgot to replace the connection string in web.config). In general, manual configuration transformation is a bad idea because it usually isn't versioned: developers have to figure out how the production configuration differs from the dev version and "merge" them by hand every time, so nobody is actually sure what the production configuration is or when and why it changed. "What's the requestTimeout in production?" "Hmm, I see it's 300 seconds now, but it was around 100 last week. Who changed it?"
  • Backing up the database:
      • Discovering you forgot to back up the production database at the exact moment your migration script fails and corrupts the data.
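The configuration mistakes in particular have a straightforward automated fix: keep a single versioned base configuration plus explicit per-environment overrides, and generate the deployed config from them, so history answers "who changed it and when". A minimal sketch, with all keys and values hypothetical:

```python
# One versioned base config plus per-environment overrides. Because both live
# in source control, the production connection string can never be "forgotten"
# by hand, and every change to it has an author and a date in history.
BASE = {
    "connectionString": "Server=dev-db;Database=App",
    "requestTimeout": 100,
    "debug": True,
}

OVERRIDES = {
    "production": {
        "connectionString": "Server=prod-db;Database=App",
        "requestTimeout": 300,
        "debug": False,
    },
}

def build_config(env):
    """Merge the base settings with the given environment's overrides."""
    config = dict(BASE)
    config.update(OVERRIDES.get(env, {}))
    return config
```

In the .NET world the same idea is what web.config transforms implement; the sketch just shows the principle in a tool-agnostic way.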

You get the idea. I have no doubt you've encountered some of these errors and can easily add a ton of others. The key problem is that they're very easy to make but very hard to get to the bottom of; as a rule, detecting and fixing them takes a lot of effort and many nervous hours.

Therefore, you are forced to have either a deployment document (checklist) or a special "deployment" guy on your team (or both). Each of these approaches has major drawbacks:

  • Deployment checklists are often outdated (developers are generally bad at maintaining documentation, and for a reason: maintaining docs is boring). Moreover, new team members still need to undergo deployment training when they join the project, and that takes time.
  • Your team's bus factor is 1. No new version/hotfix can be deployed if your "deployment" guy is on vacation.

These issues will not occur if deployments are done automatically, because computers are good at repetitive tasks (humans are not).

Time Is Money: Gain Both
If you'd rather have a million dollars straight away than a penny doubled every day for a 31-day month, you should revisit your math, because the latter would leave you over $10M richer. The story is exactly the same when investing in deployment automation. In addition to streamlining release operations, you're gaining profit and saving resources; in other words, you're increasing ROI.
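The doubling claim is easy to verify for a 31-day month:

```python
# A penny on day 1, doubled each subsequent day: 30 doublings by day 31.
penny = 0.01
for _day in range(30):
    penny *= 2
# penny is now 0.01 * 2**30 -- well over ten million dollars
```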

In my experience, carefully going through a deployment process manually takes at least an hour for an average developer on a small project. In an agile environment you usually want frequent deployments to QA/UAT platforms so features are delivered and validated quickly; with 3-4 QA deployments per week, manual deployment costs you at least 12-16 hours per month. On the other hand, configuring automated deployment for a simple project rarely takes more than 16 hours, and deployment itself is then just a click away. It truly is as simple as that: automation wins even on a tight time frame. Now add the time needed to train new developers to do deployments, plus the time spent troubleshooting deployment errors. It turns out you can save about 25-40 hours per month with automation, which translates to roughly $500-800.
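As a back-of-the-envelope check of those numbers (all inputs are illustrative, and the $20/hour rate is an assumption implied by the $500-800 figure):

```python
manual_hours_per_deploy = 1
deploys_per_month = 16        # ~4 QA/UAT deployments a week
setup_hours = 16              # one-time cost of automating the process
hourly_rate = 20              # assumed $/hour

monthly_manual_hours = manual_hours_per_deploy * deploys_per_month
breakeven_months = setup_hours / monthly_manual_hours  # pays off within month one

saved_hours_low, saved_hours_high = 25, 40   # incl. training + troubleshooting
savings_low = saved_hours_low * hourly_rate
savings_high = saved_hours_high * hourly_rate
```

Even under conservative inputs, the one-time setup cost is recovered by the first month of skipped manual deployments.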

Another important aspect is that overall team performance increases, because developers no longer need to be pulled away from new features whenever the QA team needs a fresh build for validation; this, in turn, means greater flexibility.

Take a look at this graph from McConnell's "Software Project Survival Guide":

The longer it takes from introducing an error to detecting it, the more time (and money) it takes to fix. Being able to deploy without a developer's involvement means much more frequent deployments, which means much quicker error detection (i.e., it allows your team to move even faster!).

It's also about making errors less risky, which again means higher deployment frequency. And higher frequency entails faster feedback from testers and end users.

But there's more. If done intelligently, deployment automation also builds automated reporting into the process, which means virtually zero effort and money spent on complying with audit requirements, and a much lower chance of failing an audit.

Sound too good to be true? Here's a summary of what our team achieved in terms of ROI when we introduced deployment automation in one of our .NET projects:

Overall efforts: 90 man-months
Duration: 20 months

                        Manual Deployment         Automated Deployment
Deployment time         2 man-hours/deployment    16 man-hours (one-time implementation)
Deployment frequency    Twice a week              Once every day
Deployment error rate   10%                       2%
Deployment cost         16 man-hours/month        16 man-hours (one-time)

Overall benefits:

  • $6,400 (320 man-hours) saved
  • 200% increase in deployment frequency
  • 5x decrease in deployment errors

And that's just a simple case. For complex projects, or projects with high deployment frequencies, teams can save more than a thousand hours per year.

New Opportunities Made Possible By Deployment Automation
I am a strong advocate of deployment automation for two main reasons:

  • It saves developers from routine error-prone tasks (which saves money).
  • It opens up several new opportunities that can make a great difference to your project's overall success.

I personally think the second reason is the most important for business, because of the following:

  • Integration tests as part of your daily workflow:

Everybody knows it is cost-effective to automate regression checks, and nowadays we have plenty of good tools for implementing integration tests. A solid suite of UI regression tests is crucial for the sustainable development of long-running projects; otherwise, in a year or two, you reach a point where you can't add new features because you're in a constant rush fixing old ones. Automated deployment is required to run these tests frequently and without human involvement. I recommend running them throughout the day (ideally, on every check-in); this saves money and helps you keep to the schedule, as you detect errors within an hour of introducing them (see the graph above).
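The same automation also makes the final health-check step (and any post-deploy smoke test) trivial to run after every deployment. A minimal sketch; the URLs are hypothetical, and the fetcher is injected so the check can be tested without a live server:

```python
def smoke_test(urls, fetch):
    """Return the list of URLs that failed their health check.

    `fetch` is any callable taking a URL and returning an HTTP status
    code; network errors are treated as failures.
    """
    failures = []
    for url in urls:
        try:
            status = fetch(url)
        except Exception:
            status = None
        if status != 200:
            failures.append(url)
    return failures
```

In a real pipeline `fetch` could be as simple as `lambda u: urllib.request.urlopen(u).status`, with the deploy rolled back automatically if the returned list is non-empty.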

  • Provide visibility with nightly builds:

Everybody wants visibility in agile development. The sooner you get something done and put it in front of product owners/target users, the sooner you understand what they actually need from the software (eliminating the "you built exactly what I asked for, but it is not what I need" situation). A great way to provide visibility at almost no cost is to set up a nightly build that deploys the current development version to a test environment just for reference. That way, everybody who is interested (stakeholders, managers, beta users, etc.) can see what has been built on a daily basis.

  • DevOps:

If you want your team to get into the DevOps world, be ready to invest in deployment automation. Developers tend not to like ops tasks, but they love automating stuff (that's why we automate email replies and program our coffee maker to run every morning). So the boring task of configuring a new UAT platform becomes a challenging, exciting one once you set out to automate it.

  • Continuous deployment:

Continuous deployment, where every change that passes automated testing is deployed to production automatically, has been gaining traction over the last few years with most major tech companies adopting it (WordPress, Google, Facebook, Amazon).

Standard workflow with automated acceptance (integration tests) employed.

The bottom line: there is no excuse these days not to automate deployment on .NET web projects. The technology is mature and easy to use; it saves time and money; it eliminates hard-to-troubleshoot errors because the process is reproducible and versioned; and it allows your team to be more flexible and move significantly faster.

More Stories By Ivan Antsipau

Ivan Antsipau is a senior .NET developer at Itransition specializing in the architecture and implementation of business-specific web applications. With a specialist degree in Radiophysics and Computer Science, a knack for team leading, and years of hands-on programming experience under his belt, he sees the key to sustainable, accelerated delivery of software projects in eliminating stressful manual effort with the help of continuous integration and automated testing.

