
5 Docker Best Practices to Adopt in 2018

Published Jan 03, 2018 · Last updated Jan 05, 2018

Docker adoption increases with each year, and it’s proving to be a lifesaver for software development teams who can use Docker containers to package and ship their apps with everything they need to run (base images, libraries, binaries, database systems, runtime, etc.), without worrying about dependency issues. In this sense, Docker is great for reducing conflict between developers and sysadmins over applications behaving differently in different environments.

Even though the concept of containerized applications stretches back as far as FreeBSD jails in the early noughties, Docker has taken container use into the mainstream. Its popularity stems largely from the way its unified API simplifies the entire container ecosystem, ensuring portability and seamless operation between its components.

In this article, you’ll learn about five Docker best practices to adopt as a developer in 2018. For overviews and other resources to get you started with Docker, check out this Docker wiki page.

Graceful Termination

Shutting down Docker apps gracefully is an often overlooked topic. However, it’s important to know how to do this, particularly for apps that need to serve some requests or write data to a file before terminating.

There are various ways to shut down a Docker container, but for graceful termination you’ll want to use the docker stop command. It sends SIGTERM to the container’s main process and only follows up with SIGKILL after a timeout you define in seconds, giving your app that window to flush caches or write data to relevant files before the container goes down. For example, the following command gives your container five minutes to terminate gracefully:

docker stop --time=300 foo
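
The timeout only helps if the process inside the container actually handles SIGTERM. Here’s a minimal sketch, assuming a hypothetical shell entrypoint (the cleanup logic and file path are placeholders for your own shutdown work):

#!/bin/sh
# Hypothetical entrypoint: run cleanup when Docker sends SIGTERM on "docker stop".
cleanup() {
    echo "flushing state to /data/state.dump"   # stand-in for your real flush/write logic
    exit 0
}
trap cleanup TERM

# Stand-in for the actual long-running service process.
while true; do
    sleep 1
done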

Don’t Store Data In Containers

While it’s possible to store data in containers, what many people forget is that containers are disposable. This means that you must be able to stop a container at any moment and replace it with another version of that process with no loss of data.

Consider the case where you run two copies of an application on one machine, say a cloud instance, and balance load between them. For such a setup to work, the two containers have to share data that lives outside either container’s filesystem; otherwise each copy would only ever see its own local state. With the data kept external to the containers, you can safely shut down either copy of the application with no loss of data.

It’s much more sensible to store any data you need on the host system, in a shared directory mounted into each container, as in the sketch below.
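
A minimal sketch, assuming a hypothetical image named myapp and arbitrary paths:

# Both containers read and write /srv/app-data on the host, so either one
# can be stopped and replaced without losing data.
docker run -d --name app1 -v /srv/app-data:/var/lib/app myapp:latest
docker run -d --name app2 -v /srv/app-data:/var/lib/app myapp:latest

A named volume created with docker volume create would work just as well as the bind mount shown here.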

Verify Image Authenticity

Security is a vital aspect of running containers—there is an enormous amount of choice for images on which you can base containers, from images provided by official repositories to ones provided by total strangers.

It’s imperative, therefore, not to just trust any image. You need to be 100 percent confident that the image you’ve chosen for your app doesn’t contain malicious code. There are two easy ways to verify authenticity:

Only pull base images from official repositories, such as Docker Hub.
Use Docker Content Trust to verify the authenticity of images in registries (see the sketch below).
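
Docker Content Trust is switched on with an environment variable; once it’s set, docker pull refuses image tags that aren’t signed in the registry. A minimal sketch:

# With content trust enabled, pulls of unsigned image tags fail.
export DOCKER_CONTENT_TRUST=1
docker pull ubuntu:xenial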

Reduce Image Layers

To run containerized applications efficiently, it’s best to keep image sizes down. To do this, remember that images are composed of layers: most Dockerfile instructions, such as RUN, add a new layer, so fewer instructions generally mean a leaner image.

For example, a Dockerfile that installs the Apache HTTP server daemon and the cURL command-line tool on Ubuntu Xenial can do so in a single RUN instruction:

FROM ubuntu:xenial
RUN apt-get update && apt-get install -y curl apache2

An inefficient way to do the same thing would be to split the single RUN command into two commands, one for installing cURL and the other for installing the Apache server daemon.
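
For contrast, the split version would look something like this, creating an extra layer for no benefit:

FROM ubuntu:xenial
# Each RUN instruction below adds its own layer to the image.
RUN apt-get update && apt-get install -y curl
RUN apt-get install -y apache2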

Write Readable Dockerfiles

It’s important to remember that you probably won’t be the only person using a Dockerfile. So, in much the same way as readable code is encouraged as a best practice when developing apps normally, the same rings true for writing Dockerfiles.

One of the best ways to achieve readability in Dockerfiles is to split long RUN commands across multiple lines with backslash continuations. Instead of one lengthy and confusing line specifying everything to install, each item gets its own clear, readable line.

Continuing the example from the previous point, let’s say that besides cURL and the Apache server daemon you had other things to install, such as the net-tools package, the Git version control system, and the tar archiving utility.

Instead of your RUN command looking like this:

RUN apt-get install -y tar git curl apache2 net-tools

Writing a readable Dockerfile would make the RUN command look like this:

RUN apt-get install -y tar \
    git \
    curl \
    apache2 \
    net-tools

Closing Thoughts

Hopefully, these tips give you a solid foundation as a developer to get the most out of Docker in 2018, improving how you write Dockerfiles, how secure your containers are, and how efficiently you build them.
