Enzo Radnaï

Jul 24, 2025

Dockerfile: from zero to hero

I like my Dockerfiles like my desserts: with few layers

Whether you're a platform engineer, SRE, software developer, or simply curious about IT, you'll end up working with containers, and you should know how to create and run them.

This post won't explain in detail what happens under the hood but will serve primarily as a cheat sheet to optimize image size and build time.

All commands are simplified. Building a real production-grade application container would require a few extra steps.

‘Member the basics?

  • FROM Where everything usually starts. This is the base image your container will be built on. These are mostly Linux distributions shipped with more or fewer dependencies, such as your favorite programming language.

  • ARG and ENV Both are related to environment variables, but the first one only exists while the image is building, while the other one is also available during the run phase.

  • RUN Allows you to execute shell commands while the image is building. Pretty useful to install dependencies or libraries.

  • WORKDIR Basically the equivalent of doing mkdir /my-folder && cd /my-folder. Since you don’t want to work at root level (remember, your image is like a Linux distribution), this syntactic sugar is kinda useful.

  • COPY or ADD Both serve the same purpose of importing files into your image, but ADD also lets you fetch a remote file (like a basic curl) or extract a local archive.

  • ENTRYPOINT and CMD Usually you’ll see only one of them in a Dockerfile, but in the state of the art they’re complementary. ENTRYPOINT should be used to call your binary or script, while CMD should be used to pass the default flags required by your app.

A piece of code is worth a thousand words

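The original code block didn’t survive this extract, so here’s a sketch consistent with the rest of the post: twelve instructions, a Node.js Alpine base, a NODE_VERSION build argument, and a USER_NAME environment variable. The file names (server.js) and package choices are assumptions:

```dockerfile
# Build-time argument: lets us pick the Node.js version
ARG NODE_VERSION=22
FROM node:${NODE_VERSION}-alpine
# Run-time environment variable, overridable with -e
ENV USER_NAME=default
WORKDIR /app
RUN apk update
RUN apk add jq
RUN apk add curl
COPY package.json package-lock.json ./
RUN npm install
COPY . .
ENTRYPOINT ["node", "server.js"]
CMD ["--port", "3000"]
```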

To build this image while overriding the default NODE_VERSION, then run it while overriding USER_NAME, execute the following commands (note that --build-arg belongs to docker build, not docker run, and the my-app tag is just an example name):

docker build --build-arg NODE_VERSION=24 -t my-app .
docker run -e USER_NAME=toto my-app

Like a boss

Am I downloading the Internet?

When creating a Dockerfile, you'll often need to execute scripts requiring extra dependencies like jq or specific media libraries, and that's perfectly fine! What isn't acceptable is keeping dependencies you no longer need.

For example, if you've upgraded from the H.264 to the H.265 video codec, remove the old one!

While this advice might seem obvious, I can assure you it's frequently overlooked.
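In Dockerfile terms, cleaning up simply means deleting the stale install line rather than leaving it around (package names below are illustrative, not real apk packages):

```dockerfile
# Before the upgrade, this line installed the old codec:
#   RUN apk add x264-libs
# After upgrading, keep only what the app actually uses:
RUN apk add x265-libs
```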

I am Root

If we look at our Dockerfile, we execute many shell commands (RUN). In Linux distributions, what a user is allowed to do is governed by permissions and the sudoers configuration. And guess which user is the default in most base images (FROM)? Root!

This means anyone executing a shell in your container will have unrestricted privileges. To fix this security issue, you can add the following to your Dockerfile:

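The snippet isn’t shown in this extract; a common way to do this on an Alpine-based image (the appuser and appgroup names are assumptions) is:

```dockerfile
# Create an unprivileged group and user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# Every instruction from here on runs as this user
USER appuser
```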

Now all subsequent commands will run with that user's permissions, and anyone executing a shell in the container will be logged in as this user.

Order and control

Docker images are built with overlapping layers. When you rebuild an image and no layer has changed, the process is fast because layers are cached.

Looking at our Dockerfile, we have twelve layers, one for each line we wrote.

Think of a Docker image like a building constructed from the ground up, where the first line of the file is the foundation.

What happens if you need to rebuild the first floor of a completed building? You must reconstruct everything from that floor to the top. Docker images work the same way. If you change the second line of your Dockerfile, all subsequent layers lose their cache and must be rebuilt.

A "change" here means either editing the Dockerfile itself or modifying the content you're working with. For example, if you COPY a package.json that has changed between builds, all following layers will need rebuilding!

This is why you should order your layers from least frequently changing to most frequently changing. This approach saves build time and uses cache more efficiently.
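For a Node.js app, that ordering typically means copying the dependency manifest and installing before copying the sources, so editing your code doesn't invalidate the cached install layer (file names as in the earlier sketch):

```dockerfile
# Dependencies change rarely: cache-friendly, put them first
COPY package.json package-lock.json ./
RUN npm install
# Source files change often: put them last
COPY . .
```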

Let's extend our building analogy. Which consumes fewer resources: one six-meter-high floor or two three-meter floors?

Obviously the first option! Layers work similarly—grouping instructions reduces image size.

We can improve our Dockerfile by consolidating the apk commands like this:

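Instead of one RUN per package, chain them in a single instruction (mirroring the commands assumed in the earlier sketch), producing one layer instead of three:

```dockerfile
# One layer for all system dependencies
RUN apk update && \
    apk add jq curl
```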

Be a bit rude

When it comes to image building, you will (nearly) always need to copy some files from your repository.

You’ll have two ways to do it:

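The two options look like this (file names are illustrative): cherry-pick exactly the files your app needs, or copy the whole repository in one go:

```dockerfile
# Option 1: copy only what you need, file by file
COPY package.json package-lock.json server.js ./
# Option 2: copy everything
COPY . .
```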

The second approach is more convenient, but remember, our goal is to keep Docker images as small as possible. We don't want to copy the entire repository into our app. This is where we need to ignore (sigh) irrelevant files.

You can create a .dockerignore file next to your Dockerfile and list all files or paths you want to exclude from the copy process.

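A typical .dockerignore for a Node.js project might look like this (adjust to your repository):

```bash
node_modules
.git
*.md
Dockerfile
.dockerignore
```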

Semver, semver everywhere

Software development requires regular updates, and everyone wants to use the latest versions with new features and resolved security vulnerabilities. Docker makes keeping our running environments up to date remarkably simple.

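Using the latest tag does exactly that:

```dockerfile
FROM node:latest
```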

TADAAAA! Every time I build my image, I’ll get the latest version of Node.js.

However, new versions can introduce breaking changes. In my opinion, we should maintain control over versions and update them manually, as demonstrated in our earlier Dockerfile example. Since most images use semantic versioning, it’s easy to do properly.

While this approach requires dedicating time to keep applications updated, it ensures version upgrades won't unexpectedly break your system. More importantly, it prevents you from pushing non-functional images to your production registry.

I’d also suggest using semantic versioning for your own image tags, and not only pushing latest, in order to easily handle rollbacks. IMHO, both should be used. I know it also sounds obvious, but I’ve seen it overlooked too many times.

Alpine won’t always reach new heights

When it comes to Docker images, Alpine seems to be everywhere, but it isn’t always the solution, and you can really get into deep s**t at some point. I could explain it myself, but someone already did it well enough.

Cherry on the cake

Now you have a finely tuned Dockerfile that's as small and fast-building as possible.

What if I told you we can do even better?

Let me introduce you to my dear friend, multi-staging. Remember earlier when we were building a Dockerfile from an app repository? To have something executable, we had to copy all the necessary files and install all the required dependencies on top of a specific (and heavy) image.

It’s time to use AS!

In our first Dockerfile, we built and ran our app in the same image. This meant the image we pushed to our registry contained all the source code.

We can improve that !

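A multi-stage version of our Dockerfile could look like this (the build command and output paths like dist/ are assumptions; adapt them to your app):

```dockerfile
# Stage 1: build the app with everything it needs
FROM node:22-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: ship only the runtime artifacts
FROM node:22-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
USER node
ENTRYPOINT ["node", "dist/server.js"]
CMD ["--port", "3000"]
```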

Since the first stage is just a building process, you don't need to set a user with restricted permissions. This image won't be run, so security concerns are minimal.

You also don't need ENTRYPOINT and CMD in the first stage, as you'll never execute this intermediate image.

When you build and push your image, only the lightweight second stage will be stored in your repository, significantly reducing your final image size.


Enzo Radnaï, Software Engineer