Why would we want to deploy CoprHD on docker?

Well, first of all, while CoprHD itself can only be deployed on openSUSE hosts for the moment, the CoprHD docker container can run on virtually any Linux distribution (and even Mac OS X).

This makes it a lot easier for users to adopt CoprHD in their own environment.

Also, docker compares favorably with traditional virtualization architectures in terms of performance and resource consumption.

This page covers the basics of deploying standalone/1+0 and eventually 2+1 CoprHD on a single docker host.

Preparations

In order to deploy CoprHD on docker, the first problem we need to solve is how to assign a static IP address to a CoprHD container.

Then there are two more ingredients we need to prepare: a docker image with CoprHD rpm installed, and an ovfenv.properties file that contains the CoprHD configurations.

How to assign a static IP address to a docker container

By default, docker assigns a dynamic IP address to a container when it starts, and that's not what we want, because all the services require a static IP to communicate with each other.

While docker might support this with the libnetwork library in the long run, for now, in order to assign a static IP address to a container, we basically need to do the following:

  1. Start a container without a network stack, using the "--net=none" option of "docker run".
  2. Create a pair of veth interfaces on the docker0 bridge.
  3. Give one interface of the pair to the docker host.
  4. Give the other interface to the container, name it "eth0", and assign it an IP address within the bridge subnet as well as a MAC address (steps 1-4 are sketched below).
  5. Create static NAT rules on the host to redirect access to ports 443/4443 to the container.
  6. (Optional, but you will probably need this) Update the /etc/hosts file and the hostname with the correct IP address/hostname. Note that docker does not persist these, so you will have to do this again each time you start the container.
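
Here is a minimal sketch of steps 1-4 done by hand with iproute2, following the docker networking article linked below; the interface names, the MAC address, and the IP addresses are illustrative:

# Step 1: start a container without a network stack
CID=$(docker run --net=none -d coprhd-devkit)
PID=$(docker inspect -f '{{.State.Pid}}' ${CID})
mkdir -p /var/run/netns
ln -sf /proc/${PID}/ns/net /var/run/netns/${PID}   # expose the container's netns
# Step 2: create a pair of veth interfaces
ip link add vethhost type veth peer name vethguest
# Step 3: give one end to the docker host, attached to the docker0 bridge
ip link set vethhost master docker0
ip link set vethhost up
# Step 4: give the other end to the container as eth0, with a static IP/MAC
ip link set vethguest netns ${PID}
ip netns exec ${PID} ip link set vethguest name eth0
ip netns exec ${PID} ip link set eth0 address 02:42:ac:11:00:01
ip netns exec ${PID} ip link set eth0 up
ip netns exec ${PID} ip addr add 172.17.0.1/16 dev eth0
ip netns exec ${PID} ip route add default via 172.17.42.1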

Refer to https://docs.docker.com/articles/networking/#container-networking for more details on how steps 1-4 are done manually.

Fortunately, the open-source tool pipework automates these steps, and we use it to simplify our own script.

To install pipework, run the following:

curl https://raw.githubusercontent.com/jpetazzo/pipework/master/pipework > /usr/local/bin/pipework
chmod +x /usr/local/bin/pipework
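
With pipework installed, steps 2-4 above collapse into a single command. For example, to give a container the addresses used throughout this page:

pipework docker0 -i eth0 ${CONTAINER_ID} 172.17.0.1/16@172.17.42.1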

The NAT rules can be created using iptables like the following:

iptables -t nat -A DOCKER -p tcp --dport 443 -j DNAT --to-destination 172.17.0.1:443

The above example forwards all TCP connections to port 443 on the host to the same port on the container IP 172.17.0.1.
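
You can verify the rules afterwards with:

iptables -t nat -L DOCKER -n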

The following diagram depicts how the NAT rules work for a container on a docker host:

[Diagram: the DOCKER chain in the nat table redirects host ports 443/4443 to the container's static IP on the docker0 bridge]

Prepare CoprHD docker image

A CoprHD docker image should contain every library required to run CoprHD (what we refer to as a "runtime"), with the CoprHD rpm installed.

This can be done by executing the following make target:

make BUILD_TYPE=oss docker 

Below are some details on how the image is built.

Essentially, a runtime image is an opensuse image that contains:

  1. A specially compiled nginx (with the nginx_upstream_check and headers-more-nginx modules, and HTTPS support enabled)
  2. The correct Java version (1.7.0) as well as Python (to run some of the internal scripts).
  3. The svcuser and storageos users/groups
  4. Some other dependencies

One of the CoprHD contributors, Erik Henrikson, provided a Dockerfile that makes it possible to build such a runtime image:

FROM opensuse:13.2
#
# Install prerequisite software
#
# The following packages are required to RUN CoprHD and are mandatory
RUN zypper --non-interactive install keepalived wget openssh-fips telnet aaa_base arping2 python python-base mozilla-nss sudo ipcalc java-1_7_0-openjdk
# The following packages are required to build nginx and may be removed after nginx gets installed
RUN zypper --non-interactive install --no-recommends patch gcc-c++ pcre-devel libopenssl-devel tar make
#
# Need to get sipcalc - using "unstable" one from opensuse
#
ADD http://download.opensuse.org/repositories/home:/seife:/testing/openSUSE_13.2/x86_64/sipcalc-1.1.6-5.1.x86_64.rpm /
RUN rpm -Uvh --nodeps sipcalc-1.1.6-5.1.x86_64.rpm && \
    rm -f sipcalc-1.1.6-5.1.x86_64.rpm
#
# Create users/groups
#
RUN groupadd storageos && useradd -d /opt/storageos -g storageos storageos
RUN groupadd svcuser && useradd -g svcuser svcuser
#
# Download, patch, compile, and install nginx, clean up the source files at the end
# All the commands are squeezed into a single RUN command in order to save some space within an image layer
#
RUN wget http://nginx.org/download/nginx-1.6.2.tar.gz && \
    wget --no-check-certificate https://github.com/yaoweibin/nginx_upstream_check_module/archive/v0.3.0.tar.gz && \
    wget --no-check-certificate https://github.com/openresty/headers-more-nginx-module/archive/v0.25.tar.gz && \
    tar xvzf nginx-1.6.2.tar.gz && tar xvzf v0.3.0.tar.gz && tar xvzf v0.25.tar.gz && \
    cd nginx-1.6.2 && \
    patch -p1 < ../nginx_upstream_check_module-0.3.0/check_1.5.12+.patch && \
    ./configure --add-module=../nginx_upstream_check_module-0.3.0 \
                --add-module=../headers-more-nginx-module-0.25 \
                --with-http_ssl_module --prefix=/usr --conf-path=/etc/nginx/nginx.conf && \
    make && make install && cd .. && \
    rm -f nginx-1.6.2.tar.gz v0.3.0.tar.gz v0.25.tar.gz && \
    rm -rf nginx-1.6.2 nginx_upstream_check_module-0.3.0 headers-more-nginx-module-0.25
#
# Copy the storageos rpm into the container and install it without starting any service (since systemd is not yet available)
#
ADD storageos-*.x86_64.rpm /
RUN DO_NOT_START="yes" rpm -iv storageos-*.x86_64.rpm && \
    rm -f /storageos-*.x86_64.rpm
#
# Prepare a hook for the ovfenv.properties file
# An actual file needs to be provided when the container starts
#
RUN ln -s /coprhd/ovfenv.properties /etc
#
# Start /sbin/init in the background to enable systemd
#
CMD ["/sbin/init"]

Our new docker make target depends on the existing rpm target, which builds the storageos rpm.

The next part of the Dockerfile copies the rpm into the build container and installs it with the DO_NOT_START flag.

This skips the part that generates all the configuration files, which relies on the ovfenv.properties file that is unknown at build time.

The actual ovfenv.properties file is passed into the container at runtime via an external data volume, to be mounted at /coprhd.

In order to start the CoprHD services with systemd, we need /sbin/init running as the container's main process, hence the CMD line at the end.

Deploying CoprHD docker image

Now that we have the docker image, the next step is to deploy it with custom IP address configurations (via the ovfenv.properties file).

The CoprHD docker image can be deployed into a standalone deployment, a 1+0 deployment or even 2+1 or 3+2 deployments on the same docker host.

Deploying as a standalone CoprHD

Deploying a standalone node is the easiest of all: only one IP address is required, and you don't have to worry about the VIP.

It involves the following steps:

  1. Prepare a /data dir for the container as an external data volume, since by default docker doesn't persist anything for a container.
  2. Start the container and assign a static IP address to it (we already talked about how earlier).
  3. Provide an ovfenv.properties file containing all the custom network settings (it becomes /etc/ovfenv.properties inside the container via the symlink created in the Dockerfile).
  4. Wait for the services to come up.

We've prepared a simple script to automate the installation:

#!/bin/bash
STANDALONE_ADDR=172.17.0.1
GATEWAY=172.17.42.1
HOSTNAME=standalone
NETMASK_BITS=16
DATA_DIR=${PWD}/standalone
SETUP_DIR=${PWD}/data
CLEANUP_OLD=true

if ${CLEANUP_OLD}; then
    echo "Cleaning up old containers and NAT rules"
    docker stop $(docker ps --no-trunc -q)
    docker rm $(docker ps --no-trunc -aq)
    iptables -F DOCKER -t nat
fi

# Ensure that pipework is installed
if ! which pipework > /dev/null 2>&1; then
    curl https://raw.githubusercontent.com/jpetazzo/pipework/master/pipework > /usr/local/bin/pipework
    chmod +x /usr/local/bin/pipework
fi

# Ensure that the /data directory exists and has proper permissions
if [ ! -d ${DATA_DIR} ]; then
    echo "creating data directory"
    mkdir ${DATA_DIR}
fi
chmod 777 ${DATA_DIR}

# Generate the ovfenv.properties file; for a standalone node the VIP
# is the node's own address
mkdir -p ${SETUP_DIR}
cat > ${SETUP_DIR}/ovfenv.properties <<EOF
network_gateway=${GATEWAY}
network_netmask=255.255.0.0
network_prefix_length=64
network_standalone_ipaddr=${STANDALONE_ADDR}
network_vip=${STANDALONE_ADDR}
network_gateway6=::0
network_standalone_ipaddr6=::0
network_vip6=::0
node_count=1
EOF

# Start the container
CONTAINER_ID=$(docker run --net=none -ti --privileged -v ${SETUP_DIR}:/coprhd:ro -v ${DATA_DIR}:/data:rw -d coprhd-devkit)
echo "Created container ${CONTAINER_ID}"

# Configure the container's network and hostname
pipework docker0 -i eth0 ${CONTAINER_ID} ${STANDALONE_ADDR}/${NETMASK_BITS}@${GATEWAY}
docker exec -it ${CONTAINER_ID} hostname ${HOSTNAME}
docker exec -it ${CONTAINER_ID} /bin/bash -c "echo ${STANDALONE_ADDR} ${HOSTNAME} >> /etc/hosts"

# Configure static NAT on the docker host
iptables -t nat -A DOCKER -p tcp --dport 443 -j DNAT --to-destination ${STANDALONE_ADDR}:443
iptables -t nat -A DOCKER -p tcp --dport 4443 -j DNAT --to-destination ${STANDALONE_ADDR}:4443

A few words on the script.

The gateway should be the same as the IP address of the docker0 bridge. You can get it from the docker host:

ip addr show docker0 | grep -w inet | cut -d' ' -f6 | cut -d/ -f1

The standalone IP address can be any unused address in the docker0 subnet. In our case the bridge address is 172.17.42.1/16, i.e. the subnet is 172.17.0.0/16, so we choose 172.17.0.1.
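
For reference, with the values above, the script renders the following ovfenv.properties (as noted in the script, for a standalone node the VIP is the node's own address):

network_gateway=172.17.42.1
network_netmask=255.255.0.0
network_prefix_length=64
network_standalone_ipaddr=172.17.0.1
network_vip=172.17.0.1
network_gateway6=::0
network_standalone_ipaddr6=::0
network_vip6=::0
node_count=1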

To run this, create the setup dir (SETUP_DIR, ${PWD}/data in the script, mounted into the container at /coprhd) containing the storageos rpm, then run the script above; the data dir (DATA_DIR) is created by the script if it is missing.

Deploying as a 1+0 CoprHD

A 1+0 deployment is actually quite similar to a standalone deployment, except that it also has a VIP, which is supposed to be the access point from the outside world.

Knowing this, the only change we need to make to the installation script is to create the NAT rules for the VIP instead of the standalone IP address.

Also, the node_id should be vipr1 instead of standalone.
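
Concretely, with the values used in the script below, the generated ovfenv.properties becomes:

network_gateway=172.17.42.1
network_netmask=255.255.0.0
network_prefix_length=64
network_1_ipaddr=172.17.0.1
network_vip=172.17.0.2
network_gateway6=::0
network_1_ipaddr6=::0
network_vip6=::0
node_count=1
node_id=vipr1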

The rpm installation script for 1+0 will look like the following:

#!/bin/bash
VIPR1_ADDR=172.17.0.1
GATEWAY=172.17.42.1
VIP=172.17.0.2
HOSTNAME=vipr1
NETMASK_BITS=16
DATA_DIR=${PWD}/vipr1
SETUP_DIR=${PWD}/data
CLEANUP_OLD=true

if ${CLEANUP_OLD}; then
    echo "Cleaning up old containers and NAT rules"
    docker stop $(docker ps --no-trunc -q)
    docker rm $(docker ps --no-trunc -aq)
    iptables -F DOCKER -t nat
fi

# Ensure that pipework is installed
if ! which pipework > /dev/null 2>&1; then
    curl https://raw.githubusercontent.com/jpetazzo/pipework/master/pipework > /usr/local/bin/pipework
    chmod +x /usr/local/bin/pipework
fi

# Ensure that the /data directory exists and has proper permissions
if [ ! -d ${DATA_DIR} ]; then
    echo "creating data directory"
    mkdir ${DATA_DIR}
fi
chmod 777 ${DATA_DIR}

# Generate the ovfenv.properties file
mkdir -p ${SETUP_DIR}
cat > ${SETUP_DIR}/ovfenv.properties <<EOF
network_gateway=${GATEWAY}
network_netmask=255.255.0.0
network_prefix_length=64
network_1_ipaddr=${VIPR1_ADDR}
network_vip=${VIP}
network_gateway6=::0
network_1_ipaddr6=::0
network_vip6=::0
node_count=1
node_id=${HOSTNAME}
EOF

# Start the container
CONTAINER_ID=$(docker run --net=none -ti --privileged -v ${SETUP_DIR}:/coprhd:ro -v ${DATA_DIR}:/data:rw -d coprhd-devkit)
echo "Created container ${CONTAINER_ID}"

# Configure the container's network and hostname
pipework docker0 -i eth0 ${CONTAINER_ID} ${VIPR1_ADDR}/${NETMASK_BITS}@${GATEWAY}
docker exec -it ${CONTAINER_ID} hostname ${HOSTNAME}
docker exec -it ${CONTAINER_ID} /bin/bash -c "echo ${VIPR1_ADDR} ${HOSTNAME} >> /etc/hosts"

# Configure static NAT on the docker host: the VIP is the external access point
iptables -t nat -A DOCKER -p tcp --dport 443 -j DNAT --to-destination ${VIP}:443
iptables -t nat -A DOCKER -p tcp --dport 4443 -j DNAT --to-destination ${VIP}:4443

Deploying as a 2+1 CoprHD

Similarly, we can repeat the steps above to deploy a 2+1 cluster in a single docker host.

Basically we need to create three containers with different IP addresses and the same VIP, as the following diagram shows.

[Diagram: three CoprHD containers on a single docker host sharing one VIP]
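
Extrapolating from the 1+0 format above (this is a sketch, not verified output), each of the three containers would get an ovfenv.properties that is identical except for node_id (vipr1, vipr2, vipr3), along the lines of:

network_gateway=172.17.42.1
network_netmask=255.255.0.0
network_prefix_length=64
network_1_ipaddr=172.17.0.1
network_2_ipaddr=172.17.0.2
network_3_ipaddr=172.17.0.3
network_vip=172.17.0.4
network_gateway6=::0
network_1_ipaddr6=::0
network_2_ipaddr6=::0
network_3_ipaddr6=::0
network_vip6=::0
node_count=3
node_id=vipr1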

Note that, for the moment, it is a limitation that all containers need to reside on the same docker host.

You will need a very powerful docker host (e.g., 24 GB+ of memory) in order to run three CoprHD containers.

I haven't really tried this (since I don't have the equipment), but I've verified that keepalived is able to fail over the VIP between containers in this case.
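
If you do try it, one way to check which container currently holds the VIP is to look for it on eth0, since keepalived adds the VIP as a secondary address on the active node (172.17.0.4 is the VIP from the sketch above):

docker exec -it ${CONTAINER_ID} ip addr show eth0 | grep 172.17.0.4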

Pending Issues

  • Trim the size of the runtime image. Right now it is about 800 MB.
  • Figure out a way to build an "appliance" image, with the storageos rpm installed by default, and be able to start a new container from it with specified network configurations (in the form of a /etc/ovfenv.properties file).
