Nvidia Docker and Docker Compose Enabled

January 11, 2019
docker tensorflow-gpu nvidia-docker

Recently, I have come to rely heavily on Docker. Almost all of my projects are already dockerised and they work flawlessly. The main reason for using Docker is that it is easy to maintain and isolated, so it does not clutter your host OS with tons of files and dependencies. And, since I work closely with my colleagues at the office, we use Docker as part of our workflow. Sharing projects, doing CI/CD, and releasing updates to our Kubernetes cluster has never been as easy and seamless as this.

The one project that is not dockerised yet is my deep learning project. It covers my own workspace (a Jupyter environment with GPU support) and TensorFlow Serving for deploying my deep learning models in production. The reason I had not dockerised it was the lack of GPU support back then. However, NVIDIA provides nvidia-docker, which has full GPU support (as long as the driver is installed correctly), isolation of individual devices (handy for someone like me with 2 GTX 1070 Ti cards from the office's GPU rig), and ease of deployment.

Here is how I set up nvidia-docker 2 with docker-compose for my deep learning workspace.

First, make sure you have already installed the NVIDIA driver and that it is working properly by checking with the nvidia-smi command. This is my GPU, for example:

habsari@box:~$ nvidia-smi
Sun Jan 20 15:47:35 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.87                 Driver Version: 390.87                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 107...  Off  | 00000000:01:00.0 Off |                  N/A |
|  0%   52C    P0    39W / 180W |      0MiB /  8117MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 107...  Off  | 00000000:02:00.0 Off |                  N/A |
|  0%   53C    P0    40W / 180W |      0MiB /  8119MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Second, we can now install nvidia-docker2. If you have a previous version of nvidia-docker installed, you might need to uninstall it first. To install nvidia-docker2, just follow the steps from the README.md file on the nvidia-docker GitHub page.
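For the uninstall step, the upstream README suggests first removing any containers that were started with nvidia-docker 1.0 and then purging the old package; roughly along these lines (a sketch based on those instructions, adjust to your own setup):

```
# Remove containers started with nvidia-docker 1.0,
# then purge the old package (per the nvidia-docker README)
docker volume ls -q -f driver=nvidia-docker | \
  xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo apt-get purge -y nvidia-docker
```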

# Add the package repositories
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
  sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
# Install nvidia-docker2 and reload the Docker daemon configuration
sudo apt-get install -y nvidia-docker2
sudo pkill -SIGHUP dockerd

And, to check whether nvidia-docker was installed successfully:

# Test nvidia-smi with the latest official CUDA image
docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi

Third, we want to install docker-compose and add some configuration to make it work with the nvidia-docker runtime, since I am more comfortable working with docker-compose.

# downloading latest version of docker compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

# add executable permission to docker-compose binary
sudo chmod +x /usr/local/bin/docker-compose

# add symbolic link 
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

Test docker-compose:

habsari@box:~$ docker-compose --version
docker-compose version 1.17.1, build unknown

To use docker-compose with nvidia-docker, we need to update Docker's daemon.json config file so that docker-compose will use nvidia as the default runtime. We simply need to add "default-runtime": "nvidia" to /etc/docker/daemon.json:

# filename: /etc/docker/daemon.json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}

Lastly, do not forget to reload the Docker daemon configuration:

sudo pkill -SIGHUP dockerd
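As an aside, if you would rather not change the global default runtime, Compose file format 2.3 and later supports a per-service runtime key instead. A minimal sketch (the service name and image here are just placeholders):

```yaml
# docker-compose.yml using a per-service runtime
# instead of a global default-runtime in daemon.json
version: '2.3'
services:
  jupyter:
    image: tensorflow/tensorflow:latest-gpu-py3
    runtime: nvidia
```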

Now, we can create a Dockerfile and a docker-compose.yml file to kick-start our deep learning environment using TensorFlow GPU and Jupyter.

# filename: Dockerfile
FROM tensorflow/tensorflow:latest-gpu-py3

# filename: docker-compose.yml
version: '2'
services:
  jupyter:
    restart: always
    container_name: 'garfield_gpu'
    build: .
    volumes:
      - ./workdir:/notebooks # notebook directory
    ports:
      - 8888:8888 # exposed port for jupyter
    environment:
      - NVIDIA_VISIBLE_DEVICES=0 # which gpu do you want to use for this container
      - PASSWORD=yourstrongpassword
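For reference, NVIDIA_VISIBLE_DEVICES also accepts a comma-separated list of GPU indices (or GPU UUIDs) as well as the special value all; for example, to expose both of my cards to the container:

```yaml
    environment:
      - NVIDIA_VISIBLE_DEVICES=0,1 # or 'all' to expose every GPU
```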

Also, do not forget to create the workdir directory for Jupyter's persistent storage.
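Creating it is a one-liner:

```shell
# create the host directory that docker-compose mounts at /notebooks
mkdir -p workdir
```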

To run the container, just type docker-compose up --build. To access Jupyter, point your browser to localhost:8888 and log in with your Jupyter password.
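Once the stack is up, you can double-check that the GPU is actually visible inside the container (assuming the service is named jupyter as above):

```
# run nvidia-smi inside the running jupyter container;
# it should print the same GPU table as on the host
docker-compose exec jupyter nvidia-smi
```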

The details of the Docker files for this project can be found in my repository: https://github.com/yudanta/jupyter-tf-gpu-docker
