Can I use Docker as a hypervisor for OpenStack (Juno)?
There are several possible ways of using Docker with OpenStack. The one described here is based on OpenStack's Nova Docker driver; that is, containers are managed through the Nova Compute API.
Not all Docker features are supported by the Nova Docker Driver.
Features that are either not supported, or have a non-standard implementation are:
- Attaching volumes to Docker containers is not supported
- Disk usage is limited to 10 GB per container regardless of flavor settings
- Compute resources on a host node are distributed according to a CPU shares constraint value that depends on the OpenStack flavor used. Each container gets a CPU shares value of 1024 * VCPUs; thus a 2-VCPU flavor results in a CPU shares value of 2048. More about CPU constraints can be read in the Docker run reference documentation.
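As a quick sanity check, the CPU shares formula above can be evaluated directly in the shell (the VCPU count of 2 is just an example value):

```shell
# CPU shares granted by the nova-docker driver: 1024 * VCPUs
VCPUS=2                       # example: a 2-VCPU flavor
CPU_SHARES=$((1024 * VCPUS))  # shell arithmetic
echo "$CPU_SHARES"            # prints 2048
```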
Note: Mixing a libvirt (KVM/QEMU) hypervisor with a Docker hypervisor on the same host is not recommended.
If libvirt (KVM/QEMU) hypervisors are to be mixed with Docker hypervisors on different hosts, but still as part of the same OpenStack deployment, then it might be necessary to let OpenStack know which compute hosts can support Docker. This can be done, for example, by creating a special flavor and restricting it (via a shared custom extra specs key=value pair) to a particular Host Aggregate, thus aggregating the docker hypervisor hosts. Another way of restricting it would be by setting a special image property and relying on the 'ImagePropertiesFilter' property of Nova Scheduler to schedule the image properly.
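As a sketch of the Host Aggregate approach, an aggregate and a matching flavor could be created as follows. All names (docker-hosts, m1.docker, the hypervisor=docker key) are example values, and the AggregateInstanceExtraSpecsFilter must be enabled in the Nova scheduler for the extra spec to be honored:

```
master# nova aggregate-create docker-hosts
master# nova aggregate-add-host docker-hosts node001
master# nova aggregate-set-metadata docker-hosts hypervisor=docker
master# nova flavor-create m1.docker auto 512 10 1
master# nova flavor-key m1.docker set aggregate_instance_extra_specs:hypervisor=docker
```

Instances booted with the m1.docker flavor would then only be scheduled onto hosts in the docker-hosts aggregate.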
A working deployment of Bright OpenStack 7.1 (Juno) is needed.
Code snippets used in this article are meant to be executed in the environment indicated by the shell prompts:
|master#|Shell on the head node|
|node001#|Shell on the compute node|
|docker#|Shell inside a running container|
|docker-image#|Shell inside the software image chroot|
|cmsh#|cmsh on the head node|
Docker Nova driver installation
1. Preparing software image
First of all, the existing OpenStack software image is cloned for use in Docker:
cmsh# softwareimage
cmsh# clone openstack-image docker-image
cmsh# commit
After the image is cloned, a chroot is opened to this image:
cmsh# rshell docker-image
Required packages are installed:
docker-image# yum install docker git python-pip python-pbr python-babel python-six python-oslo-serialization python-oslo-utils python-oslo-config
Additional python modules are installed:
docker-image# pip install "oslo.concurrency<=0.2.0" "docker-py>=0.5.1"
A nova rootwrap file
/etc/nova/rootwrap.d/docker.filter is created, with a directory created if needed. The file contains:
# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user

[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root
The Nova Docker driver is downloaded, and some patches are applied in order to make it work with Bright 7.1. The driver is then installed:
docker-image# git clone https://github.com/stackforge/nova-docker.git -b stable/juno
docker-image# cd nova-docker
docker-image# git show 04059f9c540bf4531a7a3294c09807f275208af4 | git apply
docker-image# sed -i '/oslo.serialization/d' requirements.txt
docker-image# python setup.py install
Docker is set up to use a TCP socket instead of a Unix socket, in order to allow the Nova service to connect to it:
docker-image# echo "DOCKER_NETWORK_OPTIONS=-H tcp://127.0.0.1:2375" > /etc/sysconfig/docker-network
A bash alias is added for convenience:
docker-image# echo 'alias docker="docker -H tcp://127.0.0.1:2375"' >> /root/.bashrc
The chroot environment is exited:
docker-image# exit
2. Setting up CMDaemon
A new category is created for Docker host nodes, and the newly-created software image is set for that category:
cmsh# category clone openstack-compute-hosts openstack-docker-compute-hosts
cmsh# category use openstack-docker-compute-hosts
cmsh# set softwareimage docker-image
The Docker service is added to the services:
cmsh# services
cmsh# add docker
cmsh# set autostart yes
cmsh# set monitored yes
cmsh# commit
The nova-compute service is set up to use the Docker driver as its hypervisor:
cmsh# roles
cmsh# use openstack::node
cmsh# customizations
cmsh# add nova.conf
cmsh# entries
cmsh# add DEFAULT:compute_driver=novadocker.virt.docker.DockerDriver
cmsh# commit
/var/lib/docker is excluded from image updates by editing the category's update exclude list:
cmsh# set excludelistupdate
using the text editor to add the line (the exact pattern may need adjusting to the local exclude list conventions):
- /var/lib/docker/*
The nova-docker category is assigned to the nodes to be used as Docker hosts:
cmsh# device cmsh# set node001 category openstack-docker-compute-hosts cmsh# commit
The node filesystems are updated according to the image:
cmsh# device imageupdate -w -c openstack-docker-compute-hosts
After the operation is finished, nova-compute and Docker services are restarted:
cmsh# foreach -c openstack-docker-compute-hosts (services; restart openstack-nova-compute; restart docker)
On the head node, /etc/glance/glance-api.conf is edited, and docker is added to the list of supported container formats, e.g.:
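The edited option might then look like this (assuming the default Juno container_formats values; docker is the addition):

```
[DEFAULT]
container_formats = ami,ari,aki,bare,ovf,docker
```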
The glance-api service is restarted:
master# service openstack-glance-api restart
Now Docker should be up and running, and the Docker hosts should be visible in the OpenStack Dashboard.
3. Testing your docker setup
To test the newly-created setup, some images are first added to Glance. This requires a running machine with Docker and access to the head node. One of the newly-created Docker compute hosts (for example node001) can be used for this.
First of all, a Docker image is downloaded (e.g. docker.io/redis):
node001# docker pull redis
OpenStack credentials environment variables are set up:
node001# scp master:/root/.openstackrc /tmp/openstackrc
node001# source /tmp/openstackrc
node001# rm -f /tmp/openstackrc
The Docker image is added to Glance:
node001# docker save redis | glance image-create --is-public=True --container-format=docker --disk-format=raw --name redis
Note: The Glance image name must match the name in the Docker registry.
Now an attempt should be made to launch an instance:
node001# nova boot --image "redis" --flavor m1.tiny test
It should be possible to locate the instance's host node in the Dashboard (Admin→Instances).
It should be possible to ssh to the host node and issue some Docker commands.
node001# docker ps
A shell can now be run inside the container:
node001# CONTAINER_ID=d913b9975cc4 # your container id
node001# docker exec -i -t $CONTAINER_ID /bin/bash
4. Setting up access to metadata API
In order to be able to get instance metadata (e.g. instance ID, name, or flavor), the instance network must be properly configured. Below is an example of such a network setup.
First a network is created, a subnet is added, and its gateway is set to some address:
Next, the router is created, and its gateway set to the external network:
After that, a second interface is added to the router, connected to the newly-created network, and its IP address set with a value for the subnet gateway:
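The network setup steps above can be sketched with the Neutron CLI. The network, subnet, and router names, the 10.0.0.0/16 range, and the ext-net external network are all example values:

```
master# neutron net-create docker-net
master# neutron subnet-create --name docker-subnet --gateway 10.0.0.1 docker-net 10.0.0.0/16
master# neutron router-create docker-router
master# neutron router-gateway-set docker-router ext-net
master# neutron router-interface-add docker-router docker-subnet
```

By default, neutron router-interface-add attaches the router to the subnet using the subnet's gateway IP, which matches the last step described above.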
A new Docker instance is now launched using this network. The routes of the container should be checked:
docker# ip route list
The result should look something like this (note the default gateway value):
default via 10.0.0.1 dev nsa806d1df-d7
10.0.0.0/16 dev nsa806d1df-d7 proto kernel scope link src 10.0.0.3
It should now be possible to access the metadata API from within the Docker container:
docker# curl http://169.254.169.254/openstack
2012-08-10
2013-04-04
2013-10-17
latest
Troubleshooting
Most of the time, if the Docker driver fails to boot the container, the instance creation fails with the error "No valid host was found". The real error message is located in /var/log/nova/nova-compute.log on the host node. Below is a list of the most common nova-docker driver errors, along with ways to solve them.
1. Error: the compute node fails to spawn a new Docker instance. The nova-compute.log contains the following error:
RuntimeError: Cannot find any PID under container "CONTAINER_ID"
Reason: This error can occur for some images (e.g. busybox). It happens when the container shuts down immediately after being created; this can be verified by running the image directly with Docker on a host node.
Solution: Use images that have a long-running service as their entry point.
2. Error: the compute node fails to spawn a new Docker instance. The nova-compute.log contains following error:
NovaException: Cannot load repository file: HTTPConnectionPool(host='127.0.0.1', port=2375): Read timed out. (read timeout=10)
Reason: Several Docker operations such as transferring the Docker image from Glance require some time. The default timeout value is 10 seconds, and the nova-docker driver raises a timeout error after that.
Solution: increase the timeout value within /usr/lib/python2.7/site-packages/novadocker/virt/docker/client.py (search for the word "timeout")
3. Error: the compute node fails to spawn a new Docker instance. The nova-compute.log contains the error:
APIError: 404 Client Error: Not Found ("No such image: someimage")
Reason: most likely the Glance image name differs from the Docker image name.
Solution: recreate the Glance image with the same name as the Docker image.
4. Error: failed to create a Glance image using docker save | glance image-create. The command fails with:
[Errno 32] Broken pipe when trying to create glance image
Reason: it takes time for Docker to create the image file, and Glance can close the connection after a certain timeout.
Solution: change/add timeout parameter in /etc/glance/glance-api.conf
Alternatively the Docker image can be saved to a file and then supplied to a glance command:
node001# docker save your_image > your_image.img
node001# cat your_image.img | glance image-create --container-format=docker --disk-format=raw --name your_image