Docker Containers and #Microservices | @DevOpsSummit #DevOps #Docker

How Docker has transformed continuous delivery and how Linux and Windows can further accelerate our digital future

A History of Docker Containers and the Birth of Microservices
by Scott Willson

From the conception of Docker containers to the unfolding microservices revolution we see today, here is a brief history of what I like to call 'containerology'.

In 2013, we were solidly in the monolithic application era. I had noticed that a growing amount of effort was going into deploying and configuring applications. As applications had grown in complexity and interdependency over the years, the effort to install and configure them was becoming significant. But the road did not end with a single deployment; the installation and configuration work was repeated over and over again, not only for each software release but for each and every environment that applications were promoted to on the way to production, where the exercise was repeated one last time.

What struck me in 2013 was that these monolithic apps were overwhelmingly being deployed inside virtual machines (VMs). Whether the targeted environment was for Development, QA or Production, VMs were the deployment endpoint that hosted the applications.

Promoting the VM image
At that time, I thought it would save considerable time and effort to promote the VM image directly instead of a myriad of application artifacts. Think of it, I told people: IT personnel need only perform the update and configuration tasks once; then, after the application proves stable, the VM image can be promoted up the delivery pipeline as a ready-to-run container. Conceptually, all that was needed was for an IT professional to make a few network changes to the VM at each step along the way, then swap out the older versioned VM image for the new one.

It sounded simple enough. As is often the case, reality turned out to be more difficult. The problem was that VM images were too big to be considered conveniently deployable artifacts, and there were more changes needed to the VMs than simple network settings, such as infrastructure, security and storage properties.

Though using a VM image as a transportable application container wasn't feasible at the time, I fell in love with the idea of being able to promote an immutable package that was tested, verified and warranted, rather than deploying numerous files that required various configuration changes. What I didn't realize at the time was that Linux kernel partitioning would provide the foundation for fulfilling my vision.

Docker is born
The Linux kernel had been gaining isolation features along these lines since 2006, and by 2013 the technology had matured enough for a company called dotCloud to release the Docker project. Docker wasn't just a framework for spinning Linux kernel partitions, or containers, up and down; it focused on using these containers as stateless, app-in-a-box hosts for single applications. Docker containers were set to fundamentally change the way applications are architected, developed and delivered.

In 2013 Docker Inc. was born, and in 2016 Docker Datacenter and Docker Cloud came online. Docker provides an abstraction layer over Linux containers, which guarantees that the runtime environment exposed to an application will be identical no matter where the container is hosted, as long as it runs in a Docker host. The Docker image is now the immutable package that can be promoted up through the Continuous Delivery pipeline and can safely enable Continuous Deployment.
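The promotion flow described above can be sketched with the Docker CLI. This is a minimal, hypothetical example (the registry, image name and version are assumptions, not from the article); the key idea is that the image is built once and promoted by retagging the same immutable artifact, never rebuilt per environment. The docker steps are guarded so the sketch is a no-op without a Docker host and a Dockerfile.

```shell
# Hypothetical image coordinates; replace with your own registry and app.
IMAGE=registry.example.com/myapp
VERSION=1.4.2

# Guarded: only runs where a Docker host and a Dockerfile are present.
if command -v docker >/dev/null 2>&1 && [ -f Dockerfile ]; then
  docker build -t "$IMAGE:$VERSION" .            # build the immutable image once
  docker push "$IMAGE:$VERSION"                  # publish it to the registry
  # Promote by retagging the *same* image, never rebuilding per environment:
  docker tag  "$IMAGE:$VERSION" "$IMAGE:staging"
  docker push "$IMAGE:staging"
fi
echo "promoted $IMAGE:$VERSION"
```

Because every environment pulls the identical image ID, what was tested in QA is byte-for-byte what runs in production.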

[Figure 1: Container-Map.jpg]

Since containers are isolated process spaces running inside the Linux OS (Figure 1), their "boot time" is measured in seconds, if not milliseconds. Swapping a new image in for an old one happens virtually instantaneously, and Docker images are small enough to reside in versioned repositories, meaning rolling back a failed deployment is easy and nearly instantaneous. If an error is detected post-deployment, simply swap out the current image for the previous version.
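The rollback described above amounts to replacing a running container with one started from the previous image tag. A hedged sketch, assuming hypothetical names (the `web` container and the versioned image tags are illustrative); the docker steps are guarded so they only run where the image actually exists:

```shell
# Hypothetical names: a failed 1.4.2 rollout is replaced by the known-good 1.4.1.
APP=web
GOOD=registry.example.com/myapp:1.4.1

if command -v docker >/dev/null 2>&1 && docker image inspect "$GOOD" >/dev/null 2>&1; then
  docker rm -f "$APP" >/dev/null 2>&1 || true          # remove the failed deployment
  docker run -d --name "$APP" -p 8080:8080 "$GOOD"     # previous image starts in seconds
fi
echo "rolled back $APP to $GOOD"
```

Because the previous image is already in the local cache or the registry, the rollback costs one container start rather than a reinstall-and-reconfigure cycle.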

Microsoft joins the 'containerology' party
Microsoft also recognized the benefit of containerology - the architecting, developing, hosting, and deploying of containerized applications. In 2015, Microsoft announced that Windows, too, would offer container technology (Figure 2).

Windows containers come in two runtime flavors: Windows Server Core and Nano Server. Windows containers also provide two different types of isolation. A Windows Server Container is like its Linux counterpart, in that it is an isolated process space running inside the Windows OS, and, as on Linux, all containers share the same kernel. However, Microsoft offers a second, more secure, version of a container called a Hyper-V container. In a Hyper-V container, the kernel the application interacts with is a virtual kernel, not the OS's actual kernel. In other words, Hyper-V containers are completely isolated from one another, even down to the kernel level.

[Figure 2: Windows-Service-Docker-Engine.jpg]
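On a Windows Docker host, the choice between the two isolation types above is a single flag on `docker run`. A sketch, with the caveat that it requires a Windows host and that the image name is a present-day example rather than one from the article; the commands are guarded so they are skipped on a Linux host:

```shell
# Present-day example image; the original article predates this registry path.
IMG=mcr.microsoft.com/windows/nanoserver:ltsc2022

# Only attempt this where the Docker daemon is actually running Windows containers.
if command -v docker >/dev/null 2>&1 && docker info --format '{{.OSType}}' 2>/dev/null | grep -q windows; then
  docker run --rm --isolation=process "$IMG" cmd /c "echo shared kernel"    # Windows Server Container
  docker run --rm --isolation=hyperv  "$IMG" cmd /c "echo virtual kernel"   # Hyper-V container
fi
```

The application inside the container is unchanged either way; only the isolation boundary differs.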

Not only has Microsoft jumped on the container bandwagon, but it also shared the vision of Docker's application-focused model for containers. Microsoft partnered with Docker, and as a result, one can run Linux or Windows containers with Docker. Being able to run applications in either Linux- or Windows-hosted containers will provide companies flexibility and reduce any refactoring costs associated with rewriting, tweaking or re-architecting existing applications.

The bold new world that containerology will take us to is that of microservices. In my opinion, microservices (specifically as enabled by Docker) represent the first feasible step towards mechanized or industrialized applications. In the mechanical engineering world, complex systems are built by buying off-the-shelf components and widgets. In contrast, the software world was accustomed to fabricating every part needed to build complex applications.

Microsoft and the Object Management Group attempted to address this problem by defining COM and CORBA, respectively; however, these had their challenges, and neither standard ever fully realized a universal market of reusable components that any developer could assemble to build any application on any platform. I am not going to go into SOA or SOAP in this article, but suffice it to say, the software industry has tried and failed to deliver anything approaching the standardization, and standardized tooling, of the manufacturing sector.

How microservices can revolutionize app development
Microservices can change that. Each microservice is a single, focused application that performs a particular function. A Docker container provides an immutable, portable and stateless package for a microservice. Docker container images can be shipped, shared and versioned, as well as used as a foundation for building new containers. Docker Hub provides ready-to-use images that can be downloaded and assembled into more complex applications.
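The assembly idea above can be sketched with a Compose file: each service is an off-the-shelf image pulled from Docker Hub rather than something fabricated in-house. The service names and images here are illustrative assumptions, and `docker compose up` would require a Docker host, so the sketch only writes and inspects the file:

```shell
# Work in a throwaway directory so nothing in the current tree is touched.
cd "$(mktemp -d)"

# Two "prefab" components from Docker Hub assembled into one application.
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:alpine        # off-the-shelf reverse proxy
    ports: ["8080:80"]
  cache:
    image: redis:alpine        # off-the-shelf cache
EOF

# 'docker compose up -d' on a Docker host would start both services.
grep -c 'image:' docker-compose.yml
```

Neither component was written by the application team; both are versioned, shared images reused as-is.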

Need the software equivalent of an actuator, a cog, a wheel or a gear? As of today, and going forward, one will be able to download the desired "prefab" component rather than having to build each and every widget, component or interface from scratch. Docker has addressed the security concerns that come with this level of sharing and reuse with Docker Content Trust, which makes it possible to verify the publisher of Docker images and guarantees the contents of the image.
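In practice, Content Trust is switched on with a single environment variable; with it set, `docker pull` and `docker push` verify publisher signatures and refuse unsigned tags. A minimal sketch (the pull is guarded and best-effort, since it needs a Docker host and network access):

```shell
# Enable signature verification for all subsequent docker pull/push commands.
export DOCKER_CONTENT_TRUST=1

if command -v docker >/dev/null 2>&1; then
  # With trust enabled, this pull succeeds only if the tag is signed by its publisher.
  docker pull nginx:alpine >/dev/null 2>&1 || true
fi
echo "content trust enabled: $DOCKER_CONTENT_TRUST"
```

The variable applies per shell session, so CI pipelines typically export it once at the top of the job.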

We are heading into yet another technology transformation, which is both exciting and challenging - it always is. The word 'disruptive' has come into vogue of late, but when hasn't the software industry been disruptive? Look what the invention of the spreadsheet did to floors of accounting departments.

Word processors, ERP systems, RDBMSs, smartphones, the Internet. The list goes on, and will continue to go on - change is the norm in the world of technology, and especially software. I share Docker's vision: a world of downloadable, reusable and adaptable components that can be used to assemble sophisticated or complex applications. I hope that the need to continually reinvent the wheel will become more of an exception than the rule in the future.

More Stories By Automic Blog

Automic, a leader in business automation, helps enterprises drive competitive advantage by automating their IT factory - from on-premise to the Cloud, Big Data and the Internet of Things.

With offices across North America, Europe and Asia-Pacific, Automic powers over 2,600 customers including Bosch, PSA, BT, Carphone Warehouse, Deutsche Post, Societe Generale, TUI and Swisscom. The company is privately held by EQT. More information can be found at www.automic.com.
