

Does Your DevOps Department Need More Attention?
By Josh Symonds

There are some big red flags that signify your DevOps department needs an overhaul.

Your deployment process seems to take forever. It only works from a few developers’ computers. It’s different for each server you deploy to.

Sound familiar?

Luckily the warning signs of a DevOps department in need of help are pretty easy to recognize. Read on to learn how to identify if and when your infrastructure team needs more attention—plus a few suggestions to implement those changes as smoothly and seamlessly as possible.

All your servers are slightly different
Automation is the byword of DevOps. With automation, you remove manual intervention (and possible human error), which means you can deploy new services and recover from critical events faster. If a server were to go down, a new one should be automatically created.

That’s a good ideal, but don’t worry if you’re not there yet. Just having a process to create servers is an enormous step in the right direction. Investigate tools such as Chef, Puppet, Ansible, or Salt to standardize your provisioning process. You should be able to take a server from a bare image to a full-fledged member of your cluster in one command. If you can’t, you’re in danger of losing important infrastructure knowledge when a server inevitably dies. And recreating server configuration after it’s been destroyed is not a fun experience.
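The heart of every tool mentioned above is idempotency: declare the desired state, then apply only what diverges from it. A minimal sketch of that idea, with invented names (`ServerState`, `ensure_package`, the manifest contents) that belong to no real tool's API:

```python
# Illustrative sketch of idempotent provisioning in the spirit of
# Chef/Puppet/Ansible/Salt. A manifest declares desired state; each
# run converges toward it and reports only what actually changed.
from dataclasses import dataclass, field

@dataclass
class ServerState:
    """Stand-in for a real server; tracks installed packages."""
    packages: dict = field(default_factory=dict)  # name -> version

def ensure_package(server: ServerState, name: str, version: str) -> bool:
    """Converge one resource; return True only if a change was needed."""
    if server.packages.get(name) == version:
        return False  # already at desired state: do nothing
    server.packages[name] = version
    return True

def provision(server: ServerState, manifest: dict) -> list:
    """Apply a whole manifest; the returned list is the change set."""
    return [name for name, ver in manifest.items()
            if ensure_package(server, name, ver)]

manifest = {"nginx": "1.24", "openssl": "3.0"}
fresh = ServerState()
print(provision(fresh, manifest))  # first run installs everything
print(provision(fresh, manifest))  # second run is a no-op: []
```

Because each resource check is a no-op when the server already matches, the same "one command" works equally well on a bare image and on a server that is already a full-fledged cluster member.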

A huge bonus of a standardized stack is liberation from correcting strange, difficult-to-trace server problems. Sorting through header files and C source trying to track down an error, only to discover that a disk experienced a freakish one-time mishap, will become a thing of the past. The next time your OS acts up in an unexpected way, just destroy the entire server and let your provisioning system bring it back, fresh and new.

You may be surprised how, through no fault of your application or your own, entropy can infest a system and gradually introduce errors, bloat, and bugs. Fighting server divergence is one of the hardest tasks in operations, but configuration management tools and a standardized server creation process are the most important steps to ensure conformity among all members of your cluster. The surest way to improve your DevOps game is to establish a streamlined, automated provisioning process you know works on all your servers—and don’t be afraid to use it!
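Detecting that divergence is half the battle. One hedged sketch, assuming servers can report simple key/value "facts" (the fact names here are invented for the example), is to diff each server against a known-good baseline:

```python
# Illustrative drift detection: compare each server's reported facts
# against a baseline and flag every key that diverges.
def drift(baseline: dict, server_facts: dict) -> dict:
    """Return {key: (expected, actual)} for every diverging fact."""
    keys = set(baseline) | set(server_facts)
    return {k: (baseline.get(k), server_facts.get(k))
            for k in keys
            if baseline.get(k) != server_facts.get(k)}

baseline = {"os": "ubuntu-22.04", "nginx": "1.24", "timezone": "UTC"}
servers = {
    "web-1": {"os": "ubuntu-22.04", "nginx": "1.24", "timezone": "UTC"},
    "web-2": {"os": "ubuntu-22.04", "nginx": "1.18", "timezone": "UTC"},
}
for name, facts in servers.items():
    report = drift(baseline, facts)
    if report:
        print(f"{name} has drifted: {report}")  # only web-2 is flagged
```

Run a check like this on a schedule and divergent servers identify themselves, instead of surfacing months later as a mystery bug.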

Change is hard
Another sign you need to reinvest in your DevOps stack is if you spend a lot of time trying to manually change parts of your infrastructure. If a simple version upgrade takes weeks of manual work by your systems administrators, there’s definitely something wrong. No piece of software should be manually installed on a server (except maybe to test how it functions). Administrators should largely write and correct software in repositories, not fix it on servers.

On the provider side, if creating new load balancers, databases, or other provider-mediated resources takes a while and requires you to use your provider’s management console, consider a tool like Terraform or CloudFormation to automate and manage your infrastructure backend. Changes you make to any part of your infrastructure should be tracked, managed, and understood through your version control system. This includes both the software running on servers and the commands used to provision those servers and all associated resources.
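The core mechanic of tools like Terraform and CloudFormation is a "plan" step: diff the resources you declared in version control against what actually exists, and emit only the actions needed to converge. A toy sketch of that idea (no real provider API; resource names and specs are invented):

```python
# Toy "plan" step in the spirit of Terraform/CloudFormation: compare
# declared resources against actual ones and emit create/update/delete
# actions needed to converge the real infrastructure on the declaration.
def plan(declared: dict, actual: dict) -> list:
    actions = []
    for name, spec in declared.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in declared:
            actions.append(("delete", name, None))
    return actions

declared = {"lb-main": {"type": "load_balancer", "port": 443},
            "db-primary": {"type": "database", "size": "medium"}}
actual = {"lb-main": {"type": "load_balancer", "port": 80}}
for action in plan(declared, actual):
    print(action)  # update lb-main, then create db-primary
```

Because the declaration lives in a repository, every infrastructure change arrives as a reviewable diff rather than a sequence of clicks in a management console.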

And similarly, changes to the infrastructure should be quick and transparent. A new version of your application should be delivered via a continuous deployment process that runs automatically after a merge or release. Needing a developer or administrator to perform deployments by hand is a serious problem; waiting for deployments is an artificial bottleneck that takes time and saps focus. And unless the process is incredibly well documented, you can be sure someone will eventually forget how it works, leading to a breakdown of the process.

And if you’re documenting it that well, why not just write code that performs the documented steps for you?
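That runbook-as-code idea can be sketched very simply: each documented step becomes a function, and a small driver runs them in order, halting at the first failure. The step names and checks below are invented for illustration:

```python
# Minimal "runbook as code" sketch: documented deployment steps become
# functions executed in order; the pipeline stops at the first failure.
def run_pipeline(steps, context):
    """Execute steps in order; return the names of steps that completed."""
    completed = []
    for name, step in steps:
        if not step(context):
            print(f"pipeline halted at: {name}")
            break
        completed.append(name)
    return completed

steps = [
    ("run tests",      lambda ctx: ctx["tests_pass"]),
    ("build artifact", lambda ctx: ctx.setdefault("artifact", "app-1.0.tgz") is not None),
    ("deploy",         lambda ctx: ctx.get("artifact") is not None),
]
print(run_pipeline(steps, {"tests_pass": True}))
# ['run tests', 'build artifact', 'deploy']
```

Unlike a wiki page, this "documentation" cannot silently fall out of date: if a step stops working, the pipeline fails loudly instead of relying on someone's memory.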

Developer environments are inconsistent
When a new developer joins your company—or an existing engineer buys a new computer—hours must be devoted to installing the proper tooling, ensuring local software versions are correct, and debugging any application-specific problems that crop up. This may seem like a small issue, but it can rear its ugly head at unexpected times. Even six months after an engineer joins, the code he or she developed locally may behave differently once deployed. Figuring out the problem can turn into a days-long slog that craters productivity.

A developer should be able to work in an environment identical to your production stack. Tools such as Vagrant and Docker let you bring the same provisioning and containerization processes your servers use to your developers’ workstations, which helps make versioning problems a headache of the past.

But even if you can’t introduce Vagrant and Docker, having an automated install process and a standardized development environment can alleviate a lot of the pain caused by inconsistencies. Your Windows or Linux developers may chafe when required to use Macs, but if you can ensure Macs always install the correct version of your software tools, it may be worth asking them to make that sacrifice.
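A low-tech piece of that automated setup is a check script that compares a developer's local tool versions against a manifest shared with production. The manifest contents below are invented for the example; in practice the versions would be read from the tools themselves:

```python
# Illustrative environment check: compare local tool versions against a
# version manifest shared with production and report every mismatch.
def check_versions(manifest: dict, local: dict) -> list:
    """Return human-readable mismatches between manifest and local tools."""
    problems = []
    for tool, wanted in manifest.items():
        have = local.get(tool)
        if have is None:
            problems.append(f"{tool}: not installed (want {wanted})")
        elif have != wanted:
            problems.append(f"{tool}: have {have}, want {wanted}")
    return problems

manifest = {"python": "3.11.4", "node": "20.5.0", "postgres": "15.3"}
local = {"python": "3.11.4", "node": "18.17.0"}
for problem in check_versions(manifest, local):
    print(problem)  # node mismatch and missing postgres are reported
```

Run on every workstation (say, as part of the install script), a check like this turns "works on my machine" surprises into an explicit, fixable list.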

Of course, developing with virtual machines means a developer could use whatever platform they’re most comfortable with and still be guaranteed to receive the same software. But getting there takes a lot more work than having an automated install script.

Conclusion
If your DevOps initiative is suffering from some or all of these issues, it’s clear your organization is experiencing drag caused by bad tooling or a lack of process. Thankfully, most of these issues are easy to fix. Streamlining your DevOps flow will save your engineers and administrators countless hours of manual management and debugging. Paying a little more attention to your DevOps can make formerly intractable, difficult-to-debug problems easy to fix through automation and standardization.

The post Does Your DevOps Department Need More Attention? appeared first on Application Performance Monitoring Blog | AppDynamics.


