November 28, 2017

Set up TensorFlow with Docker + GPU in Minutes

Why Docker is the best platform to use TensorFlow with a GPU.

Docker is the best platform to easily install TensorFlow with a GPU. This tutorial aims to demonstrate this setup and to test it on a real-time object recognition application.

Docker Image for TensorFlow with GPU

Docker is a tool which allows us to pull predefined images. The image we will pull contains TensorFlow and the NVIDIA tools, as well as OpenCV. The idea is to package all the necessary tools for image processing, so that we can run any image processing algorithm within minutes.

First of all, we need to install Docker.

> curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
> sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
> sudo apt-get update
> apt-cache policy docker-ce
> sudo apt-get install -y docker-ce
> sudo systemctl status docker

After that, we will need to install nvidia-docker if we want to use the GPU. Download the .deb package from the nvidia-docker GitHub releases page, then install it:

> sudo dpkg -i nvidia-docker*.deb

This installation may fail if nvidia-modprobe is not installed. In that case, try running (GPU only):

> sudo apt-get install nvidia-modprobe
> sudo nvidia-docker-plugin &

Finally, run this command to test your installation. If everything went well, you will get the following output (GPU only):

> sudo nvidia-docker run --rm nvidia/cuda nvidia-smi
Result of nvidia-smi

Fetch Image and Launch Jupyter

You are probably familiar with Jupyter Notebook. Jupyter notebooks are human-readable documents that contain both the description of an analysis and its results (figures, tables, etc.), and they are also executable: they can be run to perform the data analysis. Jupyter can also run code on the GPU.

To run a Jupyter notebook with TensorFlow on the GPU and OpenCV, launch the following command (port 8888 serves Jupyter, port 6006 is mapped for TensorBoard):

> sudo nvidia-docker run --rm --name tf1 -p 8888:8888 -p 6006:6006 redaboumahdi/image_processing:gpu jupyter notebook --allow-root

If you just want to run a Jupyter notebook with TensorFlow on the CPU and OpenCV, run the following command instead:

> sudo docker run --rm --name tf1 -p 8888:8888 -p 6006:6006 redaboumahdi/image_processing:cpu jupyter notebook --allow-root

You will get the following output in your terminal. You can then navigate to localhost on port 8888; for me, the link looks like this: http://localhost:8888/


You will need to paste your token to authenticate and access your Jupyter notebooks: 3299304f3cdd149fe0d68ce0a9cb204bfb80c7d4edc42687


Eventually, you will see the list of available notebooks. You can then test your installation by running them.

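Once you are logged in, a quick way to confirm that the image really ships the advertised stack is to run a sanity-check cell. This is only a sketch: `check_stack` is a helper name of my own, and the module list is illustrative.

```python
# Run this in a notebook cell to check which libraries the image provides.
import importlib

def check_stack(names):
    """Return {name: version} for importable modules, None for missing ones."""
    versions = {}
    for name in names:
        try:
            module = importlib.import_module(name)
            versions[name] = getattr(module, "__version__", "unknown")
        except ImportError:
            versions[name] = None
    return versions

for name, version in check_stack(["tensorflow", "cv2", "numpy"]).items():
    print(name, version if version else "NOT installed")
```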

The first link is a hello-TensorFlow notebook to get more familiar with this tool. TensorFlow is an open-source software library for dataflow programming across a range of tasks; it is principally used to build deep neural networks. The third link gives an example of using TensorFlow to build a simple fully connected neural network. You can also find a TensorFlow implementation of a convolutional neural network. I highly recommend using a GPU to train CNN / RNN / LSTM networks.
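To give a flavor of what the fully connected network notebook builds, here is the forward pass of a single dense layer with a ReLU activation, sketched in plain Python rather than TensorFlow; `dense_relu` and its parameter names are illustrative, not taken from the notebook.

```python
def dense_relu(x, W, b):
    """Forward pass of one fully connected layer:
    y[j] = relu(sum_i x[i] * W[i][j] + b[j])."""
    columns = zip(*W)  # iterate over output units (columns of W)
    return [max(0.0, sum(xi * wij for xi, wij in zip(x, col)) + bj)
            for col, bj in zip(columns, b)]

# Two inputs, two output units: W is the 2x2 identity, the biases shift each unit.
print(dense_relu([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]], [0.5, -3.0]))
# → [1.5, 0.0]  (the second unit, 2.0 - 3.0 = -1.0, is clipped to 0 by the ReLU)
```

In a real network, a framework like TensorFlow stacks many such layers and learns `W` and `b` by gradient descent, which is where GPU acceleration pays off.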

Real-Time Object Recognition

Now it is time to test our configuration and spend some time with our machine learning algorithms. The following code helps us track objects across frames with our webcam. It is a code sample taken from the internet; you can find the GitHub repository at the end of the article.

First of all, we need to give our Docker image access to the X server. There are different ways of doing so; the first one grants access to your X server to anyone. Other methods are described in the links at the end of the article.

> xhost +local:root

Then we will open a bash shell inside our Docker image using this command:

> sudo docker run -p 8888:8888 --device /dev/video0 --env="DISPLAY" \
>   --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" -it redaboumahdi/image_processing:gpu bash

We will need to clone the GitHub repository of the real-time object detector:

> git clone https://github.com/datitran/object_detector_app.git && cd object_detector_app/

Finally, you can launch the python code:

> python object_detection_app.py

The code that we are using relies on OpenCV, which is known as one of the most widely used libraries for image processing, available for C++ as well as Python.

You should see the following output: OpenCV opens your webcam and renders the video stream. It also detects objects in the frame and prints the label of each predicted object.
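The structure of such an app boils down to a read-detect-annotate loop. The sketch below shows that loop in a library-agnostic way, not the repository's actual code: `capture` stands in for `cv2.VideoCapture(0)`, and `detect` for the TensorFlow detector; all names are illustrative.

```python
def label_frames(capture, detect, max_frames=None):
    """Read frames from `capture`, run `detect` on each one, yield the labels.

    `capture` needs a read() -> (ok, frame) method, the same interface that
    cv2.VideoCapture exposes; `detect` maps a frame to its predicted labels.
    """
    count = 0
    while max_frames is None or count < max_frames:
        ok, frame = capture.read()
        if not ok:  # webcam closed or stream exhausted
            break
        yield detect(frame)
        count += 1
```

The real app additionally draws bounding boxes on each frame and displays it with `cv2.imshow`, but the control flow is the same.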

Conclusion

I showed how to use Docker to get your computer ready for image processing. The image contains OpenCV and TensorFlow, with either GPU or CPU support, and we tested the installation with a real-time object detector. I hope this convinced you that most of what you need to process images is contained in this Docker image. Thank you for following my tutorial, and please don't hesitate to send me any feedback!

Useful Links

Thanks to Adil Baaj and Charles Bochet. 
