
A Deep Dive into Docker - Part 2
By Anand Akela

In Part One of this Docker primer I gave you an overview of Docker, how it came about, why it has grown so fast and where it is deployed. In this second part, I'll delve deeper into the technical aspects of Docker: the differences between Docker and virtual machines, the Docker parts and elements, and the basics of how to get started.

Docker vs. Virtual Machines
First, I will contrast Docker containers with virtual machines run under hypervisors like VirtualBox or VMware. With a virtual machine, an entire guest operating system lives inside the environment, running on top of the host through a hypervisor layer. In effect, there are two operating systems running at the same time.

In contrast, Docker virtualizes the operating system services a container needs inside the container itself, including the file system, while sharing the host's kernel. Although there is a single operating system, containers are self-contained and cannot see the files or processes of another container.

Differences Between Virtual Machines and Docker

  • Each virtual machine has its own guest operating system, whereas all Docker containers share the host's kernel.

  • A virtual machine keeps running after its primary command finishes; a Docker container, on the other hand, stops as soon as the command it was started with completes.

  • Due to the high CPU and memory usage, a typical computer can only run one or two virtual machines at a time. Docker containers are lightweight and can run alongside several other containers on an average laptop computer. Docker's excellent resource efficiency is changing the way developers approach creating applications.

  • Virtual machines boot their own operating system, so they might take several minutes to start up. Docker containers do not need to load an operating system and start in a fraction of a second.

  • Virtual machine images cannot easily be diffed and are not version controlled. With Docker you can run docker diff on a container and see the changes made to its file system (see the sketch after this list); Docker Hub also lets you check images in and out, and both private and public repositories are available.

  • A single virtual machine is launched from a set of VMDK or VMX files, while several Docker containers can be started from a single Docker image.

  • A virtual machine's guest operating system does not have to be the same as the host operating system. Docker containers do not have their own independent operating system, so they must use the same kernel as the host (the Linux kernel).

  • Virtual machine snapshots are used sparingly - they are expensive and mostly reserved for backups. Docker containers are built from an imaging system in which new images are layered on top of existing ones, so creating and keeping many lightweight snapshots is cheap and routine.
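
As a quick illustration of the diff point above, here is a minimal sketch; the image name, container name, and file path are only examples:

    # Start a container from a base image and create a file inside it
    $ sudo docker run --name demo ubuntu /bin/bash -c "touch /tmp/hello.txt"

    # Show what changed in the container's file system relative to its image
    # (A = added, C = changed, D = deleted)
    $ sudo docker diff demo

    # Remove the stopped container when finished
    $ sudo docker rm demo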

Similarities Between Virtual Machines and Docker

  • For both Docker containers and virtual machines, processes in one cannot see the processes in another.

  • Docker containers are running instances of a Docker image, whereas virtual machines are considered running instances of their VMX and VMDK files.

  • Docker containers and virtual machines both have a root file system.

  • A single virtual machine has its own virtual network adapter and IP address; a Docker container can also have a virtual network adapter, an IP address, and exposed ports (see the sketch after this list).
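
To see the networking similarity in practice, a small sketch might look like the following; the nginx image, container name, and port numbers are only examples:

    # Run a web server container in the background, mapping host port 8080 to container port 80
    $ sudo docker run -d --name web -p 8080:80 nginx

    # Print the container's IP address on the default bridge network
    $ sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' web

    # List the container's port mappings
    $ sudo docker port web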

Virtual machines let you access multiple platforms, so users across an organization will have similar workstations. IT professionals have plenty of flexibility in building out new workstations and servers in response to expanding demand, which provides significant savings over investing in costly dedicated hardware.

Docker is excellent for coordinating and replicating deployment. Instead of using a single instance for a robust, full-bodied operating system, applications are broken down into smaller pieces that communicate with each other.

Installing Docker
Docker gives you a fast and efficient way to port apps across machines and systems. Using Linux Containers (LXC), you can place apps in their own containers and operate them in a secure, self-contained environment. The important Docker parts are as follows:

  1. Docker daemon manages the containers.

  2. Docker CLI is used to communicate with and command the daemon (see the quick check after this list).

  3. Docker image index is either a private or public repository for Docker images.
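
A quick way to confirm that the CLI can reach the daemon is to ask both sides to report their status; this is only a minimal check, assuming Docker is already installed and running:

    # Prints client and server (daemon) version information;
    # an error here usually means the daemon is not running
    $ sudo docker version

    # Shows daemon-wide details such as the number of containers and images on the host
    $ sudo docker info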

Here are the major Docker elements:

  1. Docker containers hold everything, including the application.

  2. Docker images are snapshots of containers or of the operating system.

  3. Dockerfiles are scripts that build images automatically.

Applications using the Docker system employ these elements.

Linux Containers - LXC
Docker containers can be thought of as directories that can be archived or packed up and shared across a variety of platforms and machines. All dependencies and libraries live inside the container, and the container itself depends on Linux Containers (LXC). Linux Containers let developers box up applications and their dependent resources in their own environment inside the container. The container takes advantage of Linux features such as security profiles, cgroups, chroots and namespaces to manage the app and limit its resources.
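
For example, cgroup-based limits can be applied directly when starting a container; the values below are arbitrary and only meant as a sketch:

    # Limit the container to 256 MB of memory and half a CPU
    # (on older Docker releases, --cpu-shares is the rough equivalent of --cpus)
    $ sudo docker run -it -m 256m --cpus="0.5" ubuntu /bin/bash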

Docker Containers
Among other things, Docker containers provide isolation of processes, portability of applications, resource management, and security from outside attacks. At the same time, a container cannot interfere with the processes of another container, cannot run an operating system other than the host's, and cannot abuse the resources of the host system.

This flexibility allows containers to be launched quickly and easily. Gradual, layered changes lead to a lightweight container, and the simple file system means it is not difficult or expensive to roll back.
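
One simple way to see that process isolation is to list the processes visible from inside a container; the busybox image here is just a convenient, tiny example:

    # ps inside the container shows only the container's own processes,
    # not those of the host or of other containers
    $ sudo docker run --rm busybox ps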

Docker Images
Docker containers begin with an image, which is the platform upon which applications and additional layers are built. Images are almost like disk images for a desktop machine, and they create a solid base to run all operations inside the container. Each image is not dependent on outside modifications and is highly resistant to outside tampering.

As developers create applications and tools and add them to the base image, they can create new image layers when the changes are committed. Docker uses a union file system to keep all of those layers together and present them as a single, unified file system.
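
One way to see those layers is to ask Docker for an image's history; a minimal sketch, assuming the ubuntu image has already been pulled:

    # Each line corresponds to one committed layer of the image
    $ sudo docker history ubuntu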

Dockerfiles
Docker images can be built automatically by reading a Dockerfile, a text document that contains all of the commands needed to assemble the image. The instructions are executed in order, and the build context is either a directory (PATH) on the local file system or the URL of a Git repository; subdirectories under the PATH are included in the context, and for a Git URL the repository's submodules are included as well.
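
As a small sketch of what a build looks like, the following creates a one-off Dockerfile and builds an image from it; the base image, installed package, and tag are only illustrative:

    # Write a minimal Dockerfile into the current directory (the build context)
    $ printf 'FROM ubuntu\nRUN apt-get update && apt-get install -y curl\nCMD ["curl", "--version"]\n' > Dockerfile

    # Build an image from the Dockerfile and tag it my-curl
    $ sudo docker build -t my-curl .

    # Run a container from the newly built image
    $ sudo docker run --rm my-curl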

Getting Started
Here is a shortened example of how to get started using Docker on Ubuntu Linux - enter these Docker Engine CLI commands at a terminal command line. If you prefer to use a package manager, you can install Docker with apt (on Ubuntu and Debian) or yum (on RHEL and CentOS) instead.

  1. Log into Ubuntu with sudo.

  2. Make sure curl is installed:
    $ which curl

  3. If it is not installed, update the package manager and then install it:
    $ sudo apt-get update
    $ sudo apt-get install curl

  4. Grab the latest Docker version using the official install script:
    $ curl -fsSL https://get.docker.com/ | sh

  5. Enter your sudo password when prompted. Docker and its dependencies will then be downloaded and installed.

  6. Check that Docker is installed correctly:
    $ docker run hello-world

You should see a "Hello from Docker!" message, which indicates Docker is installed and working correctly. Consult the Docker installation guide for more details and for installation instructions for Mac and Windows.

Ubuntu Images
Docker is reasonably easy to work with once it is installed, since the Docker daemon should already be running. Get a list of all Docker commands by running sudo docker with no arguments.

You can search for a Docker image - for example, one of the official Ubuntu images - directly from the command line. Keep in mind that an image must be present on the host machine where the containers will reside; you can pull an image from the registry or view all the images already on the host using sudo docker images
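
For example, a minimal sequence for finding and pulling an image might look like this; ubuntu is just an example image name:

    # Search Docker Hub for Ubuntu images
    $ sudo docker search ubuntu

    # Pull an Ubuntu image down to the host
    $ sudo docker pull ubuntu

    # List all images now present on the host
    $ sudo docker images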

Commit a container to save its current state as an image - that way everything is the same as where you last left it, and it will be at the same point when you are ready to use it again: sudo docker commit [container ID] [image name]

To create a container, start with an image and indicate a command to run. You'll find complete instructions and commands with the official Linux installation guide.
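
A minimal sketch of that run-and-commit workflow follows; the image name, the package installed inside the container, and the new image name are only examples:

    # Start an interactive container from the ubuntu image and get a shell
    $ sudo docker run -it ubuntu /bin/bash

    # Inside the container, make a change and then exit
    root@<container ID>:/# apt-get update && apt-get install -y git
    root@<container ID>:/# exit

    # Back on the host, find the ID of the stopped container
    $ sudo docker ps -a

    # Commit the container's current state as a new image
    $ sudo docker commit [container ID] my-ubuntu-git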

Technical Differences
In this second part of my two-part series on Docker, I compared the technical differences between Docker and virtual machines, broke down the Docker parts and elements, and reviewed the steps to get started on Linux. The process is straightforward - it just takes some practice implementing these steps to start launching containers with ease.

Begin with a small, controlled environment to ensure the Docker ecosystem will work properly for you; you'll probably find, as I did, that the application delivery process is easy and seamless. In the end, the containers themselves are not the real advantage: the real game-changer is the opportunity to deliver applications in a much more efficient and controlled way. I believe you will enjoy how Docker allows you to migrate from dated monolithic architectures to fast, lightweight microservices faster than you thought possible.

Docker is changing app development at a rapid pace. It allows you to create and test apps quickly in any environment, provides access to big data analytics for the enterprise, helps knock down walls separating Dev and Ops, makes the app development process better and brings down the cost of infrastructure while improving efficiency.
