Deployment Management with @Plutora | @CloudExpo [#DevOps #APM]

Organizations miss opportunities to take advantage of dynamic cloud-based deployments for non-production environments

Adapt Environment Management to Cloud Deployments with Plutora

Despite the prevalence of public and private clouds in the enterprise, most IT departments still adhere to operational models designed for physical infrastructure and servers, models that involve complex environment setup processes. As a result, organizations miss opportunities to take advantage of dynamic cloud-based deployments for non-production environments.

There are a number of reasons why it is often too difficult for enterprises to stand up a non-production environment for testing or staging. Databases have to be provisioned, application servers need to be configured, and a testing environment for a complex system can take weeks or months to certify as ready for use. All of this takes significant coordination and effort. Organizations typically simplify setup by treating environments as permanent, because they lack a tool like Plutora to help them keep track of environment management efforts.

  • Without Plutora, you'll use your clouds just like a colo facility and won't be able to adapt your processes to new possibilities. You'll over-provision your environments to make up for the lack of visibility into what's really needed.
  • With Plutora, you gain insight into how much effort is required to coordinate environment management tasks and you get a dashboard showing you when environments are critical during a release timeline. These two capabilities give you the opportunity to adapt environment management to the dynamic possibilities of cloud-based deployments.

Cloud vs. Colo: What's the Difference?

Before the advent of cloud computing systems such as OpenStack and EC2, we had colocation centers. Companies rented physical space and installed servers in data centers; I remember shipping expensive physical infrastructure to colo facilities in 2001. Applications were installed directly on these physical assets, and if you needed a new testing environment you ordered more physical servers and sent them to your data center. Colos were all CapEx; your environments were your physical servers.

Fast-forward 14 years and most large enterprises are either using or planning to use a private cloud such as OpenStack. Many of these companies still rent space in colo facilities and still run dedicated data centers for their private clouds, but there is now an opportunity to model environment capacity as a dynamic function. If you need a new environment, you don't need to purchase more servers. You just provision more cores on the fly to adapt to changing demand.
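Treating capacity as a dynamic function rather than a fixed pool of servers can be sketched in a few lines. Everything below (environment names, dates, core counts) is hypothetical and purely illustrative; the point is that demand is a schedule, not a constant:

```python
from datetime import date

# Hypothetical environment bookings: (name, start, end, cores needed).
# None of these figures come from the original post.
BOOKINGS = [
    ("qa-release-1",      date(2015, 3, 1),  date(2015, 3, 20), 16),
    ("staging-release-1", date(2015, 3, 10), date(2015, 3, 25), 32),
    ("perf-test",         date(2015, 4, 1),  date(2015, 4, 5),  64),
]

def cores_needed(on_day):
    """Capacity as a function of time: sum cores across active bookings."""
    return sum(cores for _, start, end, cores in BOOKINGS
               if start <= on_day <= end)

# Peak demand occurs where bookings overlap; a static, colo-style model
# would pay for that peak all year round.
print(cores_needed(date(2015, 3, 15)))  # qa + staging overlap -> 48
print(cores_needed(date(2015, 5, 1)))   # nothing booked -> 0
```

With physical servers you must provision for the peak (48 cores in this toy example) permanently; with a cloud you can provision for each day's actual demand.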

This is the vision of cloud computing - the ability to react to demand as needed. Unfortunately, this vision runs into an obstacle - most organizations don't plan for dynamic environments because they are too difficult to manage and model.

So, what happens? A company invests millions in standing up internal clouds only to have application teams fire up static environments that run for years. In other words, your application teams use your cloud like an old colo because they haven't adapted to the cloud. They don't take advantage of the dynamic capabilities of cloud computing because it takes too much effort to "hydrate" an environment.

Infrastructure Isn't the Problem: It's the Applications

Application setup time is prohibitive, and the challenge isn't purely technical. When a large e-commerce site needs to set up an end-to-end testing environment, this might require ten departments to coordinate the efforts of over fifty people on infrastructure, database, and application configuration efforts. This work may span the end-to-end architecture, everything from the website to the transaction databases that drive orders. Instead of accounting for this setup time, many organizations just write it off as lost time.

In a static environment, this setup process happens so infrequently that it isn't even tracked, and this is the root of the problem. If you only have to set up your QA and staging environments once every two years, why bother optimizing this process? (Hint: In the cloud it should happen more frequently.)

There is a better way to address setup complexity. Start using Plutora to orchestrate the hundreds of steps necessary for standing up an environment. Use the tools we've designed to properly account for the time and effort it takes to get an environment ready for use. Once you've done this and identified the necessary effort, you can use Plutora to factor environment setup time into your release schedules.

Use Plutora to Track Environments: Take Advantage of the Cloud

You have two sides to the environment setup problem. On one side you have the requirement for many application teams to maintain multiple environments for parallel development tracks. On the other side you have the effort involved in setting up and tearing down complex end-to-end testing environments. What can be done?

First, no amount of planning is going to do away with the requirement for QA and staging environments, but properly planning when capacity is needed can help you identify opportunities for environment sharing and reuse. Even though your teams may require multiple staging and QA systems, a tool like Plutora can help you establish schedules for these systems so that you can effectively scale up or scale down cloud-based resources for environments as they are needed.
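To make schedule-driven scaling concrete, the booking windows below (all names and dates invented for illustration) can be turned into an ordered list of scale-up and scale-down actions. Nothing here uses Plutora's actual API, which this post does not describe; it only sketches the kind of plan such a schedule enables:

```python
from datetime import date, timedelta

# Hypothetical release schedule: environment -> (needed_from, needed_until).
SCHEDULE = {
    "qa-team-a": (date(2015, 6, 1),  date(2015, 6, 14)),
    "qa-team-b": (date(2015, 6, 10), date(2015, 6, 30)),
    "staging":   (date(2015, 6, 20), date(2015, 7, 5)),
}

def events(schedule):
    """Turn booking windows into a date-ordered list of scaling actions."""
    out = []
    for env, (start, end) in schedule.items():
        out.append((start, "scale-up", env))
        out.append((end + timedelta(days=1), "scale-down", env))
    return sorted(out)

for when, action, env in events(SCHEDULE):
    print(when, action, env)
```

Once the scale-down dates are explicit, cloud resources backing an idle environment can be released instead of running for years, and overlapping windows reveal where teams could share an environment instead of standing up another one.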

Planning and tracking tasks involved in environment setup can also help you identify potential areas for automation that will reduce the time required to stand up and tear down testing environments.
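Once setup tasks are tracked, finding automation candidates becomes a ranking exercise: automate the tasks that consume the most total manual effort per year. The task names and effort figures below are invented for illustration:

```python
# Hypothetical tracked setup tasks: (task, manual hours per run, runs per year).
TASKS = [
    ("provision database",            6,  12),
    ("configure app servers",         10, 12),
    ("load test data",                4,  24),
    ("certify end-to-end environment", 40, 2),
]

def automation_candidates(tasks, top=2):
    """Rank tasks by total annual manual effort; the biggest sinks come first."""
    return sorted(tasks, key=lambda t: t[1] * t[2], reverse=True)[:top]

for task, hours, runs in automation_candidates(TASKS):
    print(f"{task}: {hours * runs} hours/year")
```

Note that a task run frequently with modest per-run effort can outrank a painful task run rarely, which is exactly the kind of insight that only appears once the work is tracked rather than written off.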

Times Have Changed, So Should Your Approach to Environment Management...

In the static world of 2001, where you provisioned physical servers and mapped them to non-production environments, it didn't make sense to optimize environment setup and teardown, because automating across the full architecture was impractical.

In 2015, it only takes a few minutes to spin up hundreds of virtual machines on a private cloud. Your organization needs to use a solution like Plutora to start capturing and optimizing the environment setup process so that you can take full advantage of the cloud. When you install OpenStack or start using Amazon EC2, you should adopt Plutora to help you manage your environments.


More Stories By Plutora Blog

Plutora provides Enterprise Release and Test Environment Management SaaS solutions aligning process, technology, and information to solve release orchestration challenges for the enterprise.

Plutora’s SaaS solution enables organizations to model release management and test environment management activities as a bridge between agile project teams and an enterprise’s ITSM initiatives. Using Plutora, you can orchestrate parallel releases from several independent DevOps groups all while giving your executives as well as change management specialists insight into overall risk.

Supporting the largest releases for the largest organizations throughout North America, EMEA, and Asia Pacific, Plutora provides proof that large companies can adopt DevOps while managing the risks that come with wider adoption of self-service and agile software development in the enterprise. Aligning process, technology, and information to solve increasingly complex release orchestration challenges, this Gartner "Cool Vendor in IT DevOps" upgrades enterprise release management from spreadsheets, meetings, and email to an integrated dashboard, giving release managers insight and control over large software releases.
