Immutable Infrastructure with Ansible and Packer

By Marko Locher from Codeship

At Codeship we run immutable servers which we internally call Checkbot. These are the machines responsible for running your tests, deploying your software and reporting the results back to our web application. Of course, there are constant changes to the setup of these images. New software needs to be installed, packages upgraded, old software versions removed. Let’s see how we do that!

Vagrant and Packer Workflow

The software stack used for building and testing these images in our current workflow consists of Vagrant for development, Packer for the actual image generation, and a series of shell scripts for provisioning. This has worked fine over the last few years, but as our team grows and more people make changes to the scripts, things can easily get out of hand and become confusing. So we were looking for a lightweight tool to replace our shell scripts with. Since we didn't want an agent running to watch over the host, most configuration management tools were not an acceptable solution.

Using Ansible

Ansible, with its YAML-based syntax and agentless model, fits quite nicely. We are still in the process of getting started, but the experience has been so good that I couldn't wait to share my findings. Maybe this post can convince you to take a look at Ansible and get started with configuration management yourself.

Getting started with Ansible

According to its website, "Ansible is the simplest way to automate IT." You could compare it to other configuration management systems like Puppet or Chef. Those are complicated to set up and require an agent to be installed on every node. Ansible is different: you simply install it on your machine, and every command you issue is run via SSH on your servers. There is nothing to install on your servers, and there are no running agents either.

# Ansible installation via pip
$ sudo pip install ansible
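
To get a feel for the agentless model, here is a minimal sketch of running ad-hoc commands; the inventory file and host name are placeholders, not our actual setup. Everything is executed on the remote machines over plain SSH.

# a tiny inventory file listing the servers Ansible should reach over SSH
$ cat inventory
[checkbot]
build01.example.com

# run a module against every host in the inventory - nothing is installed remotely
$ ansible all -i inventory -m ping
$ ansible all -i inventory -m command -a "uptime"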

Something that took me a while to appreciate is the fact that Ansible playbooks (the counterpart to Chef cookbooks or Puppet modules) are plain YAML files. This makes certain aspects a bit harder, but it keeps the playbooks simple and easy to understand, even for somebody who doesn't know a lot about Ansible. (Try writing complicated shell commands with multiple levels of quoting and you will see what I mean.) For a more thorough introduction, please see the Ansible homepage and don't forget to check the fantastic docs available at http://docs.ansible.com.

Building Immutable Infrastructure with Ansible

I started with the default integrations in Packer and Vagrant, which are straightforward to set up and require just a few lines of configuration.

Packer

{
    "provisioners": [
        {
            "type": "shell",
            "execute_command": "echo 'vagrant' | {{ .Vars }} sudo -E -S sh '{{ .Path }}'",
            "inline": [
                "sleep 30",
                "apt-add-repository ppa:rquillo/ansible",
                "/usr/bin/apt-get update",
                "/usr/bin/apt-get -y install ansible"
            ]
        },
        {
            "type": "ansible-local",
            "playbook_file": "../ansible/checkbot.yml",
            "role_paths": [
                "../ansible/roles/*"
            ]
        }
    ]
}
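
With the provisioners defined, producing an image is a single Packer invocation. A minimal sketch, assuming the template above is saved as checkbot.json (the file name is purely illustrative):

# check the template for errors, then build the image
$ packer validate checkbot.json
$ packer build checkbot.json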

Vagrant

# Provisioning with ansible
config.vm.provision "ansible" do |ansible|
    ansible.inventory_path = "ansible/inventory"
    ansible.playbook = "ansible/checkbot.yml"
    ansible.sudo = true
end
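
For development, the same playbook is applied to a local box. A short sketch of the usual Vagrant cycle:

# create the box and run the Ansible provisioner from the Vagrantfile
$ vagrant up
# re-apply the playbook to the running box while iterating on changes
$ vagrant provision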

But I decided to replace those defaults with a couple of shell scripts to get more flexibility when calling Ansible. This also lets me compensate for certain differences in the way Ansible is integrated with Packer and Vagrant; removing any such differences is key to avoiding subtle bugs between testing and production. A rough sketch of such a wrapper follows, and after that our actual code for creating an LXC container and configuring some basic settings. I'm sure that, even without any further explanation, you can quite easily figure out what each item is supposed to do.
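
The sketch is purely illustrative; the paths, default playbook, and inventory are assumptions, not our actual scripts.

#!/bin/sh
# provision.sh - one entry point for both Packer and Vagrant (illustrative sketch)
set -e

# hypothetical defaults, override by passing arguments
PLAYBOOK=${1:-ansible/checkbot.yml}
INVENTORY=${2:-ansible/inventory}

# identical flags no matter which tool calls the script, so test and
# production images are provisioned exactly the same way
# (--sudo matches ansible.sudo = true in the Vagrantfile above)
ansible-playbook -i "$INVENTORY" "$PLAYBOOK" --sudo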

config.j2

# Template used to create this container: /usr/share/lxc/templates/lxc-ubuntu
# Parameters passed to the template:
# For additional config options, please look at lxc.conf(5)

# Common configuration
lxc.include = /usr/share/lxc/config/ubuntu.common.conf

# Container specific configuration
lxc.rootfs = /var/lib/lxc/{{lxc_container}}/rootfs
lxc.mount = /var/lib/lxc/{{lxc_container}}/fstab
lxc.utsname = {{lxc_container}}
lxc.arch = amd64

# Network configuration
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.hwaddr = 00:16:3e:11:f6:6c

# cgroup configuration
lxc.cgroup.memory.limit_in_bytes = {{lxc_memory_limit}}M

# Hooks
lxc.hook.pre-start = /var/lib/lxc/{{lxc_container}}/pre-start

config.yml

---
# file: host/defaults/main.yml

# LXC
lxc_container: codeship
lxc_memory_limit: 15360
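
These defaults are what the {{lxc_container}} and {{lxc_memory_limit}} placeholders in the template above resolve to, and they can be overridden at run time if needed. A small illustrative sketch (the playbook and inventory paths are assumptions):

# build a container with a smaller memory limit without touching the defaults file
$ ansible-playbook -i ansible/inventory ansible/checkbot.yml --extra-vars "lxc_memory_limit=8192"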

lxc.yml

---
# file: host/tasks/lxc.yml

- name: LXC | Installation
  apt:
    pkg: "{{item}}"
    state: present
  with_items:
    - lxc
    - lxc-templates
    - debootstrap
    - bridge-utils
    - socat

- name: LXC | Check configuration
  command: lxc-checkconfig

- name: LXC | Create new container
  command: "lxc-create -n {{lxc_container}} -t ubuntu creates=/var/lib/lxc/{{lxc_container}}/"

- template: src=lxc/config.j2 dest=/var/lib/lxc/{{lxc_container}}/config
- template: src=lxc/pre-start.j2 dest=/var/lib/lxc/{{lxc_container}}/pre-start mode=0744 owner=root group=root

pre-start.j2

#!/bin/sh

# setup ssh access for the root user
mkdir -p /var/lib/lxc/{{lxc_container}}/rootfs/root/.ssh/
cp ~ubuntu/.ssh/id_rsa.pub /var/lib/lxc/{{lxc_container}}/rootfs/root/.ssh/authorized_keys

# setup ssh access for the rof user
if [ -d "/var/lib/lxc/{{lxc_container}}/rootfs/home/rof/" ]; then
  mkdir -p /var/lib/lxc/{{lxc_container}}/rootfs/home/rof/.ssh/
  cp ~ubuntu/.ssh/id_rsa.pub /var/lib/lxc/{{lxc_container}}/rootfs/home/rof/.ssh/authorized_keys
fi

This is only the beginning and a small step in configuring a whole build system for use by Codeship, but it shows the beauty of Ansible. It is extremely simple to understand. It provides a good abstraction of commonly needed patterns: package installation, templates for configuration files, variables to be used by playbooks or configuration files, and a lot more. And it doesn't require any software installation on the host except an SSH server, which is pretty standard anyway.

And in combination with Packer, we have an environment that lets us build our production system running on EC2 as easily as a box used for development with Vagrant. And that's great, because it makes our team more productive.
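
Since both targets share the same playbooks, switching between them mostly comes down to which builder Packer runs. A hedged sketch, assuming the template also defines an amazon-ebs builder for EC2 alongside the local one (the builders section is not shown in the snippet above):

# build only the EC2 image from the shared template (builder name is an assumption)
$ packer build -only=amazon-ebs checkbot.json
# or bring up the same provisioning locally for development
$ vagrant up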


What’s possible with Ansible

Nevertheless, we are far from finished. I am just starting to learn what is possible with Ansible and which modules are available. Some of the items on my checklist for the next few months include:

  • running multiple playbooks in parallel to speed up provisioning
  • getting to know the module system a lot better, and possibly writing some modules myself
  • fine-tuning the output generated by Ansible
  • converting all the remaining shell scripts to playbooks, which is going to be the biggest part

What do YOU think about Ansible? If you have ideas or suggestions to improve our workflow, please let us know in the comments!


