
Docker for Data Scientists


Docker is hot in the developer world, and although data scientists aren’t strictly software developers, Docker has some very useful features for everything from data exploration and modeling to deployment.

Since major services like AWS support Docker containers, it’s even easier to implement continuous integration and continuous delivery (CI/CD) with Docker. In this post, I’ll show you how to use Docker as a data scientist.

What is Docker?

It’s a software container platform that gives us an isolated container holding everything our experiments need to run. Essentially, it’s a lightweight VM built from a script that can be version controlled, which means we can now version control our data science environment!

Developers use Docker when collaborating on code with coworkers and they also use it to build agile software delivery pipelines to ship new features faster. Any of this sound familiar?

Docker terminology

I have a mathematics background, so I can’t avoid definitions.

Containers: a lightweight, user-level virtualization that helps you build, install, and run your code

Images: a snapshot of your container

Dockerfile: a text file of instructions used to build your image; this is what we can version control

Docker Hub: GitHub for your Docker images; you can set up Docker Hub to automatically rebuild an image any time you update your Dockerfile in GitHub
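To see how these pieces fit together, here’s a quick sketch of the typical lifecycle, assuming a Dockerfile in the current directory (the image and container names are just placeholders):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t my-image .

# Start a container (a running instance of the image)
docker run --name my-container -d my-image

# List running containers and local images
docker ps
docker images

# Snapshot a container's current state as a new image
docker commit my-container my-image:snapshot
```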

Why Docker is so awesome for data science

Worked fine in dev

Ever heard these comments from your coworkers?

For the most part, these concerns are easily resolved by Docker. The exception, as of this writing, is GPU support, which is only available for Docker containers running on Linux hosts. Other than that, you’re golden.

Docker for Python and Jupyter Notebook

Look at this Dockerfile.

# reference: https://hub.docker.com/_/ubuntu/
FROM ubuntu:16.04

# Add metadata to the image as key-value pairs, e.g. LABEL version="1.0"
LABEL maintainer="Bobby Lindsey <[email protected]>"

# Set environment variables

# Create empty directory to attach volume
RUN mkdir ~/GitProjects

# Install Ubuntu packages
RUN apt-get update && apt-get install -y \
    wget \
    bzip2 \
    ca-certificates \
    build-essential \
    curl \
    git-core \
    htop \
    pkg-config \
    unzip \
    unrar \
    tree

# Clean up
RUN apt-get clean && rm -rf /var/lib/apt/lists/*

# Install Jupyter config
RUN mkdir ~/.ssh && touch ~/.ssh/known_hosts
RUN ssh-keygen -F github.com || ssh-keyscan github.com >> ~/.ssh/known_hosts
RUN git clone https://github.com/bobbywlindsey/dotfiles.git
RUN mkdir ~/.jupyter
RUN cp /dotfiles/jupyter_configs/jupyter_notebook_config.py ~/.jupyter/
RUN rm -rf /dotfiles

# Install Anaconda
RUN echo 'export PATH=/opt/conda/bin:$PATH' > /etc/profile.d/conda.sh
RUN wget --quiet https://repo.anaconda.com/archive/Anaconda3-5.2.0-Linux-x86_64.sh -O ~/anaconda.sh
RUN /bin/bash ~/anaconda.sh -b -p /opt/conda
RUN rm ~/anaconda.sh

# Set path to conda
ENV PATH /opt/conda/bin:$PATH

# Update Anaconda
RUN conda update conda && conda update anaconda && conda update --all

# Install Jupyter theme
RUN pip install msgpack jupyterthemes
RUN jt -t grade3

# Install other Python packages
RUN conda install pymssql
RUN pip install SQLAlchemy \
    missingno \
    json_tricks \
    bcolz \
    gensim \
    elasticsearch

# Configure access to Jupyter
WORKDIR /root/GitProjects
CMD jupyter lab --no-browser --ip=0.0.0.0 --allow-root --NotebookApp.token='data-science'

If you’ve ever installed packages in Ubuntu, this should look very familiar. In short, this Dockerfile is a script that automatically builds and sets up a lightweight version of Ubuntu with all the Ubuntu packages and Python libraries needed for my data science exploration with Jupyter notebooks.

The best part is that this will run the same way whether I’m on macOS, Linux, or Windows, with no need to maintain separate install scripts or third-party tools to reproduce the same environment on each operating system.

To build a Docker image from this Dockerfile, all you need to do is execute

docker build -t bobbywlindsey/docker-data-science .

in the command line and Bob’s your uncle. To run the image, you have two options: you can run it interactively (which means you’ll see the output of your Jupyter notebook server in real time) or in detached mode (where you can drop into the container’s terminal and play around).

To run the image interactively on Linux, execute

docker run -it -v ~/GitProjects:/root/GitProjects --network=host bobbywlindsey/docker-data-science

or for macOS and Windows:

docker run -it -v ~/GitProjects:/root/GitProjects -p 8888:8888 bobbywlindsey/docker-data-science

To run the image in detached mode on Linux:

docker run -d --name data-science -v ~/GitProjects:/root/GitProjects --network=host -i bobbywlindsey/docker-data-science
docker exec -it data-science bash

or for macOS and Windows:

docker run -d --name data-science -v ~/GitProjects:/root/GitProjects -p 8888:8888 -i bobbywlindsey/docker-data-science
docker exec -it data-science bash

Not too bad! I realize those run commands might be a bit much to type, so you have a couple of options: you can alias the commands, or you can use a docker-compose file instead.
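For example, here’s one way to wrap the run command in a shell function; this is just a sketch for your ~/.bashrc or ~/.zshrc, and the function name and volume path are my own choices, so adjust them to taste:

```shell
# Hypothetical wrapper for the interactive run command (macOS/Windows port style).
# Adjust the mounted path for your machine.
run-data-science() {
    docker run -it \
        -v ~/GitProjects:/root/GitProjects \
        -p 8888:8888 \
        bobbywlindsey/docker-data-science "$@"
}
```

Then typing `run-data-science` from any directory starts the Jupyter container.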

Using multiple containers

Dockerize all the things

I won’t go into Docker Compose much here, but as an example, here’s the docker-compose file I use to run a Docker image for my Jekyll blog:

version: '3'
services:
  site:
    image: bobbywlindsey/docker-jekyll
    environment:
      - JEKYLL_ENV=docker
    volumes:
      - ~/Dropbox/me/career/website-and-blog/bobbywlindsey:/root/bobbywlindsey
    ports:
      - 4000:4000
      - 35729:35729

With that file, your run command then becomes:

docker-compose run --service-ports site

But Docker Compose is capable of much more than substituting for aliased run commands. A docker-compose file can configure multiple images, and a single command creates and starts all your services at once.

For example, let’s say you build one Docker image to preprocess your data, another to model the data, and another to deploy your model as an API. You can use docker-compose to manage each image’s configurations and run them with a single command.
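A sketch of what that might look like, where the service names, image names, and ports are all hypothetical, not from an actual project:

```yaml
version: '3'
services:
  preprocess:
    image: my-registry/preprocess
    volumes:
      - ./data:/data
  model:
    image: my-registry/model
    volumes:
      - ./data:/data
    depends_on:
      - preprocess
  api:
    image: my-registry/model-api
    ports:
      - 5000:5000
    depends_on:
      - model
```

A single `docker-compose up` then starts all three services together, with `depends_on` controlling the startup order.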


Even though Docker might have a learning curve for some data scientists, I believe it’s well worth the effort, and it doesn’t hurt to brush up on those DevOps skills. Have you used Docker for your data science efforts?
