The aim of this article is to provide a better understanding of containers, independent of the provider (whether Docker, Rkt, etc.). It provides a useful overview, as well as some thoughts on what we can expect to see in the future with containers and virtualization.
What are containers?
Containers are isolated processes that help us avoid the problems of traditionally installing and running apps. They let us efficiently address tasks related not only to the deployment but also to the development of software. For example, we can run different versions of the same app on the same operating system without causing conflicts between them.
Containers are key in cloud computing, as they help to distribute and spin up software efficiently in different stages, such as development and production, without the need to use virtual machines.
The container ecosystem has been growing constantly. For example, this is a set of tools found in the Cloud Native Computing Foundation (CNCF) landscape at the time I wrote this article:
As you can see, giant providers such as AWS, Azure, and Google (Kubernetes) are involved in the creation of tools to manage containers.
Containers add more isolation to processes, on top of the default isolation they already have. In a traditional process architecture, each process is scheduled independently on the CPU and has its own virtual memory, so one process cannot read another's address space. Furthermore, in operating systems like Windows, processes are isolated by restricting their permissions to certain users.
These are some of the characteristics of container isolation:
- Among containers, we can either isolate a unique file system for each of them while sharing the same network configuration, or isolate both the file system and the network configuration.
- A container can have its own locally allocated file system, so that its processes see that isolated file system as their root directory.
- A container can be limited in its use of system resources such as memory, CPU, or block I/O.
- A container can have restricted access to persistent data.
- A container can have isolated storage by using mount namespaces, and even isolated drives.
- Containers rely on a Linux technology called cgroups (control groups), a kernel feature that limits the usage of resources such as memory, CPU, and disk I/O inside a container.
- Container namespaces isolate global system resources. A container can use different types of namespaces:
- Mount points. Isolates the set of file systems that a process can see.
- Process IDs (PID). Gives the processes in a namespace their own independent set of process identifiers.
- UTS. Isolates system identifiers such as the hostname.
- IPC. Isolates inter-process communication resources, such as message queues and shared memory, so only processes in the same namespace can communicate through them.
- Network. Gives a namespace its own network stack, interfaces, and IP addresses.
- User. Limits access to system resources by mapping users and groups, so a process can have privileges inside the container without having them on the host.
Tools like Docker save us from configuring these namespaces manually, providing, in sum, a higher degree of isolation.
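As a rough sketch of how this looks in practice, the `docker run` flags below map to the namespaces and cgroup limits described above. This assumes a local Docker daemon and uses the public `alpine` image purely for illustration:

```shell
# Sketch only: assumes a local Docker daemon and the public "alpine" image.

# UTS namespace: give the container its own hostname.
# Network namespace: attach it to the default bridge network.
# cgroups: cap memory at 256 MB and CPU at half a core.
docker run --rm --hostname demo-host --network bridge \
  --memory 256m --cpus 0.5 \
  alpine sh -c 'hostname && ip addr'

# By contrast, --network host shares the host's network namespace,
# so the container sees the host's interfaces:
docker run --rm --network host alpine ip addr
```

Docker sets up the corresponding namespaces and cgroups behind each of these flags, which is exactly the manual work it spares us.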
What is an image and how are they related to containers?
An image is an executable file that acts as a snapshot of a container. A container is the result of running an image. So when we talk about images, we are also talking about containers.
An image can be divided into abstraction layers. One of them is the container layer, which is the only layer a process can write to; the other layers, such as the file system layers, are read-only. For instance, an image can have layers created when the image was built, and others needed to run your app inside the container. Every change made to the container goes to the writable container layer.
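As a minimal sketch, each instruction in a Dockerfile produces one of these read-only layers; the base image, file name, and dependency below are assumptions for illustration:

```dockerfile
# Each instruction below creates a read-only image layer.
FROM python:3-slim             # base layers: OS plus Python runtime
COPY app.py /app/app.py        # layer: application code
RUN pip install flask          # layer: installed dependencies
CMD ["python", "/app/app.py"]  # metadata only: the default command
```

When a container starts from this image, a thin writable container layer is stacked on top of these, and every change the running process makes lands there.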
Some file system layer technologies used in images to manage system resources efficiently are:
- Union filesystem. A union filesystem combines the layers of multiple directories into a single unified view, letting images share common layers, which makes them fast.
- Overlayfs. Similar to union file systems, but with an implementation that provides even more performance benefits. For instance, previously pulled layers can be cached and reused by future images that share the same layers.
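To make the idea concrete, here is a hedged sketch of mounting an overlay filesystem by hand. It requires Linux and root privileges, and the `/tmp/demo` paths are made up for the example:

```shell
# Sketch only: requires Linux and root privileges; paths are illustrative.
mkdir -p /tmp/demo/lower /tmp/demo/upper /tmp/demo/work /tmp/demo/merged
echo "from the read-only layer" > /tmp/demo/lower/base.txt

# Combine a read-only lower layer and a writable upper layer
# into one unified view at "merged".
sudo mount -t overlay overlay \
  -o lowerdir=/tmp/demo/lower,upperdir=/tmp/demo/upper,workdir=/tmp/demo/work \
  /tmp/demo/merged

# Writes land in the upper layer; the lower layer is never modified.
echo "written by the container" > /tmp/demo/merged/new.txt
ls /tmp/demo/upper    # new.txt ends up here, not in lower/
```

This also shows why layers are cheap to share: many containers can reuse the same read-only lower layers while each gets its own writable upper layer.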
Images can be created from running containers
Sometimes it is necessary to create an image from the state of a running container. The resulting image can be pushed to a registry so that it can be pulled in the future to create new containers. This is useful, for example, when you want developers to start from a modified image, to avoid repeating the process of changing the container's state every time the original image is pulled from the cloud.
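A sketch of this workflow with the Docker CLI; the container name `webapp`, the registry host, and the tag are all hypothetical:

```shell
# Sketch: container name, registry host, and tag are assumptions.
# Capture the current state of a running container as a new image:
docker commit webapp myregistry.example.com/team/webapp:configured

# Push it so other developers can start from the modified image:
docker push myregistry.example.com/team/webapp:configured

# Later, on another machine, pull it instead of redoing the setup:
docker pull myregistry.example.com/team/webapp:configured
```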
Containers use Linux resource management mechanisms
Another Linux technique containers use to manage filesystems is the copy-on-write strategy, in which processes initially share the same filesystem instead of each getting its own copy. Only when a process changes the filesystem is a copy of the affected data made, decreasing the overhead of these operations.
Using containers on non-Linux or desktop platforms, for development
The vast majority of containers are Linux based and, of course, are capable of running on Linux distributions. However, sometimes for development purposes we want to be able to run them on other platforms. For instance, Docker Community Edition (CE) is available for many popular desktop and cloud platforms, such as Windows, macOS, AWS, or Azure.
Container distribution: Registries
Containers are commonly made available in the cloud through storage systems known as registries, from which they can be distributed. The following is a list of popular container registries:
Container storage: Repositories
A registry can have a set of repositories, and each repository usually groups different versions of the same image.
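For example, a fully qualified image reference combines the registry host, the repository, and a tag identifying one version. The commands below pull two versions from the official `redis` repository on Docker Hub, assuming a local Docker daemon:

```shell
# Sketch: a fully qualified image reference has the form
#   <registry host>/<repository>:<tag>
# "library/redis" is the repository; each tag is one version of the image.
docker pull docker.io/library/redis:6.2
docker pull docker.io/library/redis:7.0

# List the locally stored images from that repository:
docker image ls redis
```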
Orchestration refers to automation tools for managing groups of containers. Here is a list of common container orchestration tools:
In the past, the cloud exposed templates inside which an application could run. Even though these templates helped with that process, you still had to build the image and configure all the necessary system resources, such as the file system, before you could run your app inside your container.
Today, this process is much more automated. For instance, Docker provides the Docker Hub registry, where you can find a container repository for your app. That way, all you need to do to get your app running in a container is use a single command in your terminal to pull the image from the right repo and smoothly spin up your container. This process of running containers from a registry is pretty similar to using a package manager like yarn.
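As a sketch, one command is enough to pull an image (if it is not already cached locally) and spin up a container from it; `nginx` here is the public image on Docker Hub, and the container name and port mapping are arbitrary choices:

```shell
# Sketch: assumes a local Docker daemon; "nginx" is a public Docker Hub image.
# One command pulls the image if needed and starts the container,
# mapping host port 8080 to the container's port 80:
docker run -d --name web -p 8080:80 nginx

# Roughly analogous to a package manager:
#   yarn add <package>   vs.   docker run <image>
```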
Container security in an organization
There is a plethora of recommendations when it comes to security in general. In the case of containers, I would like to point out the following two key points:
- Store containers in a private registry, controlled at least by your DevOps team.
- Run security scans on containers frequently, in order to mitigate threats.
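As one hedged example of the second point, Trivy is a popular open-source image scanner; the image reference below is hypothetical:

```shell
# Sketch: Trivy is one open-source scanner; the image name is illustrative.
trivy image myregistry.example.com/team/webapp:latest

# In CI, fail the job when critical vulnerabilities are found:
trivy image --exit-code 1 --severity CRITICAL \
  myregistry.example.com/team/webapp:latest
```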
The Open Container Initiative (OCI) is a community for creating standards around containers. In fact, this year they released their first spec, version 1.0.0. The spec applies mainly to container providers such as Docker or Rkt. It has two main pieces:
- Container format specification: standards about the format of container images.
- Container runtime specification: the state and the lifecycle of a container.
Conclusion: Will virtual machines fade away in the future?
Containers are a new approach to virtualization in general. They offer a few remarkable advantages over virtual machines: a smaller footprint and better performance. This is because multiple containers are able to share the same operating system's resources, saving storage and making startup faster.
Containers can potentially become the future of virtual machines, as both technologies are constantly evolving toward the point where they will be comparable with each other.
What have been your experiences with containers? Leave a comment below!