The evolution of containers and why Docker is so complex

An overview of the many layers that make up the Docker engine

Eytan Manor
5 min read · Oct 5, 2022

People love containers nowadays; they just make deployments so much easier. I’ll admit, I’ve also been using them quite a bit recently, from running an application locally to CI/CD and scaling. To be honest, I’ve always treated containers like a black box and never put too much thought into them: there’s a CLI tool that, given a set of commands, will magically carve out a separate piece of the system using this thing called a “container”. But what exactly is a container? And how is it created? I suggest we go through some history first.

Before you dive in, make sure you’re familiar with Docker ;)

Some History

The Big-Bang 💥

No, I don’t mean the beginning of the universe; I mean the big bang in the containerization ecosystem, one that you’ve probably never heard of. In other words, what started as a single project that was supposed to be a complete solution (a monolith) was later split into many sub-components handling different tasks, as new requirements arose over time.

Before

It started as a monolith. Source: Below Kubernetes: Demystifying container runtimes by Thierry Carrez

After

As new requirements arose over time, things were split into sub-components

Don’t worry about all the details in the diagram yet — it’ll all make sense soon. What’s important to understand is the following:

  • New projects came along and they only needed to consume certain parts of Docker.
  • As a result, Docker’s implementation got split.
  • Projects depend on one another according to a certain hierarchy. So even if it looks like chaos, there’s a strict order to things.

Many events led everything to this point, but I’ll cherry-pick the relevant ones so we can build a simplified timeline of what happened.

The Evolution 🧬

Docker made its initial release in March 2013. At the time, there were several other solutions for containers, like LXC and OpenVZ.

It’s not necessary to memorize the names. The idea is that, consistently throughout history, there were always multiple implementations trying to achieve exactly the same thing.

Cloud providers quickly started adopting the concept of containers, because it gave consumers much more fine-grained control over the environment. However, since there were so many ways to achieve the same thing, it led to a problem: it was difficult to migrate from one cloud service to another, because there was no unified API.

In July 2015, in an attempt to solve this issue and further advance container technology, a group named CNCF — Cloud Native Computing Foundation — was formed under the Linux Foundation. Founding members included people from Google, CoreOS, Mesosphere, Red Hat, Twitter, Huawei, Intel, Cisco, IBM, Docker, Univa, and VMware, which shows you how important this effort was to everyone.

Around the same time (June 2015), the Linux Foundation also launched a sibling project called OCI — Open Container Initiative, which defines a set of specifications to create a standard in the world of containers. As of today, the specs contain rules around the following (there’s a small hands-on sketch right after the list):

  • Runtime spec — specifies the configuration, execution environment, and lifecycle of a container.
  • Image spec — specifies the process of building, transporting, and preparing a container image to run.
  • Distribution spec — specifies an API protocol to facilitate and standardize the distribution of content.
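
To get a feel for the runtime spec, here’s a minimal sketch of an OCI “bundle” built by hand, assuming you have runc and Docker installed (exporting a busybox image is just one convenient way to obtain a root filesystem):

$ mkdir -p mycontainer/rootfs && cd mycontainer
$ docker export $(docker create busybox) | tar -C rootfs -xf -   # populate the root filesystem
$ runc spec                 # generates a default config.json as defined by the runtime spec
$ sudo runc run demo        # spawns and runs a container from the bundle

The generated config.json is the runtime spec in action: it describes the process to run, the root filesystem, the mounts, and the Linux namespaces and cgroups to apply.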

Docker was the first to release a runtime (a program that implements the OCI runtime spec) called “runc”, which it contributed to the OCI as the reference implementation. Over time, other runtime implementations were released, such as crun, gVisor’s runsc, and Kata Containers.

In December 2016, as part of the Kubernetes project, which at the time relied heavily on Docker to achieve containerization, a new specification was announced, called CRI — Container Runtime Interface. As great as Docker was, it was overkill for Kubernetes’ use case. In other words, there was a need for an additional kind of runtime, a stripped-down version of Docker that can manage multiple containers, also known as a “high-level runtime” (vs. a “low-level runtime”, i.e., runc).
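
As a rough illustration of what being CRI compliant means in practice: the kubelet (or you, via the generic crictl client) can talk to containerd, CRI-O, or any other compliant runtime through exactly the same interface. A quick sketch, assuming containerd’s CRI plugin is enabled and listening on its default socket:

$ crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods     # list pod sandboxes
$ crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps       # list containers
$ crictl --runtime-endpoint unix:///run/containerd/containerd.sock images   # list images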

As a result, Docker further broke down its implementation and released the following: containerd, a high-level runtime extracted out of the Docker engine (and later donated to the CNCF), along with cri-containerd, a shim that makes it CRI compliant.

An additional high-level runtime was released, called CRI-O, which is CRI compliant and was built from the ground up exclusively with Kubernetes in mind.

Docker Today

To make sense of everything, I’ve mapped some of Docker’s components into a flowchart. So given a certain request, for example creating a container, these are all the stations it will have to go through before it reaches the Linux kernel, depending on the platform you’re using:

A container creation flow, initiated by either Docker or Kubernetes

Let me briefly walk you through these, from bottom to top:

  • Linux Kernel — There are mainly two features in Linux that, when used together, can achieve nearly complete isolation: the first is “namespaces”, used to isolate things like processes, users, and network adapters (soft isolation), and the second is “cgroups”, used to limit things like CPU usage, memory, and disk I/O (hard isolation). As of today, all runtimes use these two to create a container (see the sketch right after this list).
  • runc — A CLI tool for spawning and running containers on Linux according to the OCI spec.
  • containerd — A daemon that’s responsible for turning images into containers and managing them throughout their lifecycle. When an image is pulled from a registry, it will be handed over to runc (or any other low-level runtime) for spawning and running a container.
  • cri-containerd — A shim around containerd to make it CRI compliant, which is utilized by Kubernetes’ “kubelet”. To learn more, see my article — “Scalability: What is Kubernetes trying to achieve exactly?”.
  • dockerd — A daemon that includes all the business logic of the Docker engine and its rich features, like the desktop client and CLI. User interactions will always be directed at this daemon, which will then propagate them down to containerd, and so on.
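
To make the kernel bullet a bit more concrete, here’s a rough sketch of using those two features by hand, assuming a Linux machine with util-linux’s unshare, a cgroup v2 hierarchy mounted at /sys/fs/cgroup, and the memory controller enabled for child groups (this is essentially what runtimes do, just programmatically):

$ sudo unshare --pid --fork --mount-proc /bin/sh      # soft isolation: a shell in new PID + mount namespaces
$ sudo mkdir /sys/fs/cgroup/demo                      # hard isolation: create a new cgroup...
$ echo 100M | sudo tee /sys/fs/cgroup/demo/memory.max     # ...capped at 100 MB of memory
$ echo $$ | sudo tee /sys/fs/cgroup/demo/cgroup.procs     # move the current shell into it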

Accordingly, if you have Docker installed, you should be able to access its dependencies via the command line:

$ runc --help
$ containerd --help
$ dockerd --help
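
You can also see the layering in action by skipping dockerd and talking to containerd directly through its bundled ctr client. A quick sketch, assuming containerd is running with its default configuration:

$ sudo ctr images pull docker.io/library/alpine:latest     # no dockerd involved
$ sudo ctr run --rm -t docker.io/library/alpine:latest demo sh

$ docker run --rm -it alpine sh                            # the same thing through the full Docker stack

Both paths eventually reach containerd, which hands the actual spawning off to runc.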

If you happen to run into references on the internet to other components which I haven’t listed, always keep in mind: they have to stick to the CRI or OCI specs, and they probably interact with one of the runtime types.
