
How to really use Docker?

Docker is a fast-growing technology for managing multiple processes in an isolated environment: think of it as a lightweight virtual machine (which shares the kernel with the host computer) or as a package manager for Linux web apps.
 
 
The idea isn’t new: FreeBSD Jails and Linux OpenVZ containers existed before Docker, but Docker is mainstream on Linux and already has a huge community and ecosystem.
 
Currently I use Docker on a development server and I really like it. I hope to put it in production soon, as soon as I can manage a Docker-based setup in a way similar to Debian Proxmox.
 
Here are the questions I had and the possible solutions I’ve found.
 

Process Management: supervisord vs systemd

Docker recommends running one process per container. Unfortunately that isn’t always possible or smart, so several solutions came out, and Docker suggests supervisord.
But Red Hat and I don’t like this option: it means shipping a Python runtime even if your software doesn’t use Python, and Linux already has a standard, modern (and controversial) process manager called systemd. Its unit files are simple and work the same way inside and outside the container, as the sketch below shows.
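 
The actual aria2.service used later in this post isn’t shown here, so the following is only a hypothetical sketch of what such a unit could look like (the ExecStart options and paths are my assumptions, not the real file):
 
# aria2.service (hypothetical example)
[Unit]
Description=aria2 download daemon
After=network.target

[Service]
# --enable-rpc lets webui-aria2 talk to the daemon; /data is the shared volume
ExecStart=/usr/bin/aria2c --enable-rpc --rpc-listen-all --dir=/data
Restart=on-failure

[Install]
WantedBy=multi-user.target
 
The same file works unchanged on a bare-metal systemd host, which is the whole point.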
 

Which base image should I use?

I’ve seen many new containers based on debian:jessie, the first Debian release with systemd; unfortunately it currently doesn’t play well as a systemd container.
ubuntu:14.04 and fedora:21 are respectively a very common distro and a systemd-based distro, but you probably don’t need a full distro inside the container, so I currently recommend Arch Linux as a base, with some systemd-specific tweaks:
 
FROM base/archlinux
# more info: http://rhatdan.wordpress.com/2014/04/30/running-systemd-within-a-docker-container/
 
# tell systemd it is running inside a container
ENV container docker
# update the system, then strip the units that make no sense inside a container
RUN pacman -Syu --noconfirm; \
(cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ "$i" == systemd-tmpfiles-setup.service ] || rm -f "$i"; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*; \
rm -f /etc/systemd/system/*.wants/*; \
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;
# systemd expects the host's cgroup hierarchy to be mounted here
VOLUME [ "/sys/fs/cgroup" ]
# systemd must be PID 1 in the container
CMD ["/usr/bin/init"]
 
And that’s it. If you have a systemd-based base image, it’s really easy to start system or custom .service units.
 
FROM grigio/archlinux-systemd
 
# 6800: aria2 RPC, 8080: web UI
EXPOSE 8080 6800
RUN pacman -S aria2 git busybox --noconfirm
RUN git clone https://github.com/ziahamza/webui-aria2
 
# install the unit files and enable them for multi-user.target
ADD ./aria2.service /etc/systemd/system/multi-user.target.wants/
ADD ./aria2-web.service /etc/systemd/system/multi-user.target.wants/
RUN ln -s /etc/systemd/system/multi-user.target.wants/aria2-web.service /usr/lib/systemd/system/
RUN ln -s /etc/systemd/system/multi-user.target.wants/aria2.service /usr/lib/systemd/system/
 
CMD ["/usr/sbin/init"]
 
What you get is a working systemd inside the container that runs your services as well as a host system would. You can even enter the container and disable or restart services, even if your host init system is Upstart.
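 
For example, once you are inside the container (see the nsenter section below), the usual systemctl commands work; a quick sketch using the unit names from the Dockerfile above:
 
systemctl status aria2        # check the download daemon
systemctl restart aria2-web   # restart the web UI service
systemctl disable aria2       # don't start it at the next container boot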
 
Unfortunately systemd needs to be the first process (PID 1) in the container, and systemd containers currently need extra privileges, so you have to run them with the “--privileged” flag (see this bug).
 

How do I keep my data persistent?

You can bind-mount a host directory into your container every time, or you can create a “data container”, a container used only to share data.
 
In brief, I like something like this:
 
docker run -d -v /Volumes/download:/data --name=downloads ubuntu:14.04 echo Data-only container
 
docker run --volumes-from=downloads -ti ubuntu:14.04 /bin/bash
 
So anything you save in /data will survive the next “docker run”.
 

How do I restart a container at boot?

Running containers don’t restart when you restart the Docker daemon, so if you really care about a container, use a restart policy.
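 
For example (a sketch; “myapp” is a hypothetical image and container name):
 
docker run -d --restart=always --name=myapp myapp          # restart it whenever it stops, including when the daemon starts
docker run -d --restart=on-failure:5 --name=myapp2 myapp   # retry at most 5 times after a non-zero exit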
 

Security

Containers aren’t virtual machines, so you share the CPU architecture of the host (e.g. x86_64, ARM, ...) and you shouldn’t rely on a particular feature of the host’s Linux kernel version or drivers. Sharing also means there are many more areas to look at. Here are some slides which analyze the problems and the solutions in their complexity.
 

Monitoring

When many containers are running simultaneously, I like to have an overview of “who does what”; “top” and Dockerana are the tools I prefer.
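 
For a quick look from the command line, plain Docker commands already help; for example, with the web-aria2 container defined at the end of this post:
 
docker ps              # which containers are running
docker top web-aria2   # which processes run inside that container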
 

How do I enter a running Docker container?

SSH. I’m joking, but many popular containers run an openssh-server inside the container. That’s wrong, and there is a simpler way: nsenter.
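 
A minimal sketch of the nsenter approach, run as root on the host (web-aria2 is the container from the end of this post):
 
# find the PID of the container's first process, then join its namespaces
PID=$(docker inspect --format '{{ .State.Pid }}' web-aria2)
nsenter --target "$PID" --mount --uts --ipc --net --pid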
 

How do I migrate a running database with Docker?

This is a complex topic, and the solution will probably live outside Docker.
 

Beyond Docker, orchestration and Web UI

CoreOS is a new kind of Linux distro which manages software in Docker containers and supports orchestration across multi-host installations.
Panamax is a CoreOS-based distro with a Web UI.
 
Currently I use Ubuntu 12.04.5 (Linux 3.13) with the Docker 1.2 PPA and Docker UI; the layers on top of Docker are very interesting, but for now I prefer to manage Docker by hand :)
 
I’ve just packaged a well-known command-line downloader (aria2) with its web UI (webui-aria2); it won’t pollute your OS, just let me know if it works on yours.
 
source:
 
run:
docker run -d --restart=always -v /sys/fs/cgroup --privileged --volumes-from=downloads -p 6800:6800 -p 9100:8080 --name="web-aria2" grigio/web-aria2