The Official Ionic Blog

Build amazing native and progressive web apps with HTML5

We are huge fans of Docker here at Ionic. Docker keeps our code and its dependencies together and lets us more fully utilize our computing resources for products like Ionic Creator and the upcoming Ionic.io services.

One challenge we faced with Docker, though, was that any time we made even the smallest change to our code, we had to go through the process of building a new container, pulling it down to our servers, and replacing the running version.

We store all of our code in GitHub, use the Docker Registry to automatically build and store our containers, and use Ansible to script the management and deployment of our containers to our servers. Even with a fully automated process, deploying that one small change could take us 20 minutes or more. After some brainstorming, we realized there is a better way for us to utilize Docker.

After the initial container build, 99% of our changes are purely code. We aren’t adding any new dependencies or changing any of the requirements for running the code. Docker is really just a way to encapsulate the infrastructure required to run our code in a self-contained package. Because 99% of our changes are to code, not infrastructure, we realized we didn’t need to go through the effort to rebuild the infrastructure on every change.

The killer Docker feature that lets us solve this problem is volumes. In the first iterations of our Dockerfiles, our code was pulled from GitHub and built directly into the container. Now, we deliberately leave the code out of the container and instead load it through a host volume on container start. When we want to do a new deploy, Ansible pulls down our master branch from GitHub into an app directory on our servers. Then it checks to make sure that the associated container is running, and if it’s not, it will start the container and map the app code into the container.

The other component that makes this work for us is that most of our apps are Python (Django), and we serve them with uWSGI inside the Docker container. uWSGI has a touch-reload feature that watches a specified file and reloads the uWSGI server when the file is touched. After pulling our changes from GitHub, Ansible touches the uwsgi.ini file, which triggers uWSGI to reload inside the running container. Just like that, we’re running the updated version of our code!

What this means, in brief, is that we took our deployments from a 20+ minute process that looked like this:

  1. Commit and push to GitHub.
  2. Docker Registry pulls the changes and builds a new container.
  3. Ansible connects to our servers and pulls the new container.
  4. Ansible finds any running instances of the old container and stops them.
  5. Ansible starts new instances of the container.

to this roughly 10-second process:

  1. Commit and push to GitHub.
  2. Ansible connects to our servers, pulls latest master from GitHub.
  3. Ansible touches the app’s uwsgi.ini file to trigger uWSGI to reload.
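Stripped of the Ansible wrapping, the fast deploy is just a git pull followed by a touch of the trigger file. The sketch below simulates that second step locally so the mechanism is visible, using a scratch directory in place of the real app directory; all paths are placeholders, not the actual ones from our servers:

```shell
#!/bin/sh
# Simulate the "touch the trigger file" step of a fast deploy.
set -e

APP_DIR=$(mktemp -d)             # stands in for /path/to/code/on/host
TRIGGER="$APP_DIR/uwsgi.ini"     # the file uWSGI watches via --touch-reload

touch -d '2000-01-01' "$TRIGGER" # pretend it was last touched long ago
BEFORE=$(stat -c %Y "$TRIGGER")

# On a real server, step 1 would be: git -C "$APP_DIR" pull origin master
# Step 2: touch the trigger file so uWSGI reloads inside the container.
touch "$TRIGGER"
AFTER=$(stat -c %Y "$TRIGGER")

[ "$AFTER" -gt "$BEFORE" ] && echo "trigger touched: uWSGI would reload"
```

uWSGI compares the file’s modification time, so any update to the trigger file’s mtime, from Ansible or a plain touch, causes the reload.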

Breaking it down

Supervisor / uWSGI

We are using Supervisor inside the Docker container to start the processes run by the container. Our supervisord.conf looks like the following:

[supervisord]
nodaemon=true

[program:uwsgi]
command = /usr/local/bin/uwsgi --touch-reload=/path/to/code/in/container/uwsgi.ini --ini /path/to/code/in/container/uwsgi.ini

We are using the uwsgi.ini file as the trigger file via the --touch-reload flag.
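For reference, a minimal uwsgi.ini for a Django app in this kind of setup might look like the following. The module name, socket, and process count are illustrative assumptions, not values from our actual configuration:

```ini
[uwsgi]
chdir = /path/to/code/in/container
; "myapp.wsgi:application" is a placeholder Django WSGI entry point
module = myapp.wsgi:application
master = true
processes = 4
http-socket = :8000
; touch-reload can also be set here instead of on the command line:
; touch-reload = /path/to/code/in/container/uwsgi.ini
```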

Docker

When we start our container, we add a host volume that contains the code for our app. That host volume is mapped to an app path in the container from which uWSGI will load the app.

docker run -d -P -v /path/to/code/on/host:/path/to/code/in/container --name=container_name driftyco/testapp
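The same run command could be expressed as a Compose file if that fits your workflow better. This is just a translation sketch reusing the placeholder names and paths from the docker run line above; note it omits the -P flag’s publish-all-ports behavior:

```yaml
# docker-compose.yml sketch of the docker run command above
services:
  testapp:
    image: driftyco/testapp
    container_name: container_name
    volumes:
      - /path/to/code/on/host:/path/to/code/in/container
```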

Ansible

Ansible is in charge of cloning the application code from GitHub into our host’s app directory, ensuring that the Docker container is running, and touching the uWSGI touch-reload trigger file. We have created playbooks to direct the deployment of each of our services, so deploying is just a matter of running the right one.

For a quick code deploy, we run a playbook that contains these tasks and takes only a few seconds to run:

- set_fact: host_volume="/path/to/code/on/host"
- name: Git pull the latest code
  git: repo=git@github.com:{{ org }}/{{ container }}.git
       dest={{ host_volume }}
       accept_hostkey=yes
       force=yes

- name: Gracefully reload uwsgi
  file: path={{ touch_file }} state=touch
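In context, those tasks sit inside a small playbook. A skeleton might look like the following; the hosts group, variable values, repo URL, and playbook name are hypothetical stand-ins, not our actual inventory:

```yaml
# deploy-code.yml -- hypothetical wrapper around the fast-deploy tasks
- hosts: appservers
  vars:
    org: driftyco
    container: testapp
    touch_file: "{{ host_volume }}/uwsgi.ini"
  tasks:
    - set_fact: host_volume="/path/to/code/on/host"
    - name: Git pull the latest code
      git: repo=git@github.com:{{ org }}/{{ container }}.git
           dest={{ host_volume }}
           accept_hostkey=yes
           force=yes
    - name: Gracefully reload uwsgi
      file: path={{ touch_file }} state=touch
```

It would then be run with something like ansible-playbook -i inventory deploy-code.yml.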

If we need to restart the entire container or update system packages, we can do a container deploy, which takes a few minutes, with these tasks:

- name: Add app dir if it doesn't yet exist
  file: path={{ host_volume }} owner=nobody group=docker recurse=yes state=directory
  sudo: yes
- name: Pull Docker image
  command: "{{ item }}"
  ignore_errors: yes
  with_items:
    - docker pull {{ org }}/{{ container }}
    - docker stop {{ container }}
    - docker rm {{ container }}
- name: Run Docker image with app volumes
  command:  docker run -d -P -v {{ host_volume }}:{{ container_volume }} --name={{ container }} {{ extra_params }} {{ org }}/{{ container }}

For a full deploy, we run both playbooks together in sequence; it’s that simple. 😉

Conclusion

Because Docker is primarily a way to encapsulate infrastructure into a self-contained, deployable package, there is often no need to rebuild the entire container just to deploy a couple of code changes. By using Docker volumes, we keep the code out of the container, so it can be updated independently of the container that runs it. Finally, the uWSGI touch-reload feature reloads uWSGI inside the container, picking up the updated code from the volume.

  • http://www.regnoult.com Regnoult François

    Hi, interesting procedure. But does it mean that your source code is present on ALL your boxes? You also still need to reload the change on all your boxes. I don’t really see where you saved the 20min you presented

  • Joshua Kugler

    I would be interested in seeing your startup scripts and/or config. Since you’ve mounted the code on a volume, I would assume it’s not in the standard Python sys.path. What do you do to fiddle with sys.path before you start your app to make sure Python can import all needed code?

  • http://iteam.se/ Christian Landgren

    If you add your files on different lines in your dockerfile it will automatically cache the previous layers if no changes were made. This speeds up the deployment a lot. Example:

    WORKDIR /app
    ADD packages.json /app/
    RUN npm install --production
    ADD index.js /app/
    ADD images /app/
    ADD out /app/
    CMD node index.js

    This means that npm install will only run if the packages.json was changed.

  • Ashwin

    Can I do configuration management for the WAS ND, Weblogic, TIBCO EMS, WMQ, BPM products through Docker or Chef + Docker?

    By configuration management, what I mean is, for example: with WAS ND, would I be able to create/modify/delete JDBC data sources, JMS Topics/Queues, shared libraries, user modifications, etc. on a WAS ND instance without using Jython/JACL or any external scripts? The point is to update runtime configurations for a product solely using Docker or Chef + Docker; if it’s possible, kindly let me know how.

  • agentfitz

    Thank you for the article! I want to understand how you mounted the volume, but I’m a bit confused. For a developer working on OSX, such as myself, it’s important to bear in mind there are 3 possible “levels” here, host (OSX), docker-machine (could also be considered the host of the docker engine), and the docker engine itself.

    Considering this, when you are referring to the “host” in this article, are you referring to OSX (if that’s what you’re running), or are you referring to the docker-machine?

    Assuming by host you mean OSX, then is /path/to/code/on/host relative to the Dockerfile or is it a path from the system base? In other words, on OSX: /my-code (relative to Dockerfile) or /Users/me/projects/www/myproject/my-code?

    Thanks

  • jroubieu

    Hi! It’s been more than a year since this post. I’m curious about whether you still use this deployment workflow, and if it has evolved since then and why. Could you give us an insight in a few lines?