What are Docker Containers?

Docker containers are a form of "lightweight" virtualization. They allow a process or group of processes to run in an environment with its own file system, somewhat like a chroot jail, and also with its own process table, users and groups, and, optionally, its own virtual network and resource limits. For most purposes, the processes in a container behave as if they have an entire OS to themselves and cannot access anything outside the container unless explicitly granted. This lets you precisely control the environment in which your processes run, allows multiple processes with completely different (even conflicting) requirements to run on the same (virtual) machine, and significantly improves isolation and container security.
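As a minimal illustration (assuming Docker is installed; the alpine image is used here only as an example), you can start a container and see that it has its own file system and process table:

    # Start a throwaway container with an interactive shell.
    docker run --rm -it alpine:3.19 sh

    # Inside the container:
    #   touch /only-in-container   # this file does not appear on the host
    #   ps aux                     # shows only the container's own processes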

In addition to containers, Docker makes it easy to build and distribute images that wrap up an application with its complete runtime environment.

For more information, see What are containers and why do you need them? and What Do Containers Have to Do with DevOps, Anyway?

Containers vs Virtual Machines (VMs)

The difference between the "lightweight" virtualization of containers and the "heavyweight" virtualization of VMs boils down to where the virtualization happens: for containers it happens at the kernel level, while for VMs it happens at the hypervisor level. In other words, all the containers on a machine share the same kernel, and code in the kernel isolates the containers from each other, whereas each VM acts like separate hardware and runs its own kernel.
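One quick way to see this for yourself (assuming Docker is installed; the image is just an example) is to compare kernel versions on the host and inside a container:

    # Both commands report the same kernel version, because the container has
    # no kernel of its own -- only the user-space environment differs.
    uname -r
    docker run --rm alpine:3.19 uname -r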

[Image: Docker carrying Haskell]

Containers are much less resource intensive than VMs because they do not need exclusive memory and file system space allocated to them, nor do they carry the overhead of running an entire operating system. This makes it possible to run many more containers than VMs on a machine. Containers start nearly as fast as regular processes (there is no OS to boot), and parts of the host's file system can easily be mounted into the container's file system without the overhead of a network file system protocol.
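For example, a host directory can be bind mounted directly into a container (a sketch; the paths and image are arbitrary):

    # Mount the host's ./data directory read-only at /data inside the container.
    # Files are shared directly, with no network file system in between.
    docker run --rm -v "$(pwd)/data:/data:ro" alpine:3.19 ls /data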

On the other hand, isolation is weaker. If you are not careful, you can oversubscribe a machine by running containers that together need more resources than the machine has available (this can be mitigated by setting appropriate resource limits on containers). And while container security is an improvement over that of ordinary processes, the shared kernel means the attack surface is larger and there is more risk of leakage between containers than between VMs.
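Resource limits can be set per container when it is started, for example (a sketch with arbitrary values):

    # Cap this container at half a CPU core and 256 MB of memory so it cannot
    # starve other workloads on the same machine.
    docker run --rm --cpus=0.5 --memory=256m alpine:3.19 echo "resource-limited hello"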

For more information, see Docker containers vs. virtual machines: What's the difference? and DevOps Best Practices: Immutability.

How Docker Containers Enhance Continuous Delivery Pipelines

There are, broadly, two areas where containers fit into your DevOps workflow: builds and deployment. They are often used together, but do not have to be.

Builds

Deployment

For more information, see:

Implementing Containers into Your DevOps Workflow

Containers can be integrated into your DevOps toolchain incrementally. Often it makes sense to start with the build environment and then move on to the deployment environment. What follows is a broad overview of a simple approach, without delving very deep into technical details or covering every possible variation.

Requirements

Containerizing the build environment

Many CI/CD systems now include built-in Docker support or can easily enable it through plugins, but docker is a command-line application that can be called from any build script even if your CI/CD system has no explicit support.
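For instance, a generic CI step can simply shell out to the CLI; the image name, registry, and test script below are placeholders:

    #!/bin/sh -e
    # A CI step that needs no special Docker integration: it just calls the
    # docker CLI like any other command-line tool.
    docker build -t registry.example.com/myapp-build:latest -f Dockerfile.build .
    docker run --rm registry.example.com/myapp-build:latest ./run-tests.sh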

  1. Determine your build environment requirements and write a Dockerfile (the specification used to build an image for your build containers) based on an existing Docker image. If you already use a configuration management tool, you can run it from within the Dockerfile. Always specify precise versions of base images and installed packages so that image builds are reproducible and upgrades are deliberate (see the Dockerfile sketch after this list).

  2. Build the image using docker build and push it to your Docker registry using docker push.

  3. Create a Dockerfile for the application, based on the build image (specify the exact version of the base build image). This file builds the application, adds any required runtime dependencies that are not in the build image, and tests the application. A multi-stage Dockerfile can be used if you don't want the application's deployment image to include all of the build dependencies.

  4. Modify your CI build scripts to build the application image and push it to the Docker registry, as in the script sketch after this list. The image should be tagged with the build number, and possibly with additional information such as the branch name.

  5. If you are not yet ready to deploy with Docker, you can extract the build artifacts from the resulting Docker image.
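To make steps 1 and 3 concrete, here is a minimal multi-stage Dockerfile sketch. The image names, versions, and build commands are placeholders; the point is that base images are pinned to exact versions and the final stage contains only what the application needs at runtime:

    # Stage 1: the build environment, pinned to an exact version of the build image.
    FROM registry.example.com/myapp-build:1.0.0 AS build
    WORKDIR /src
    COPY . .
    # Hypothetical build and test scripts provided by the project.
    RUN ./build.sh && ./run-tests.sh

    # Stage 2: a slim runtime image containing only the built artifact.
    FROM debian:12.5-slim
    COPY --from=build /src/dist/myapp /usr/local/bin/myapp
    CMD ["myapp"]

And here is a sketch of the CI script for steps 2, 4, and 5, assuming the CI system provides a BUILD_NUMBER variable and credentials for the (hypothetical) registry:

    #!/bin/sh -e
    IMAGE="registry.example.com/myapp:${BUILD_NUMBER}"

    # Build the application image and tag it with the build number.
    docker build -t "$IMAGE" .

    # Push it to the registry so later pipeline stages (or deployment) can pull it.
    docker push "$IMAGE"

    # Optional (step 5): extract build artifacts without deploying with Docker.
    mkdir -p artifacts
    docker create --name extract "$IMAGE"
    docker cp extract:/usr/local/bin/myapp artifacts/myapp
    docker rm extract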

It is best to also integrate building the build image itself into your DevOps automation tools.

Containerizing deployment

This can be easier if your CD tool has built-in Docker support, but that is by no means necessary. In most cases we also recommend deploying to a container orchestration system such as Kubernetes.

Half the work has already been done, since the build process creates and pushes an image containing the application and its environment.
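As a sketch, the image the pipeline pushed can then be rolled out to a cluster (assuming Kubernetes and kubectl access; the names and tags are placeholders):

    # Create a Deployment for the image produced by the CI pipeline...
    kubectl create deployment myapp --image=registry.example.com/myapp:123
    # ...and roll out a new build later by updating the image tag.
    kubectl set image deployment/myapp myapp=registry.example.com/myapp:124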

Once deployed, tools such as Prometheus are well suited to Docker container monitoring and alerting, but containers can also be plugged into your existing monitoring systems.

FP Complete has implemented this kind of DevOps workflow, and significantly more complex ones, for many clients and would love to count you among them! Contact us or visit our DevOps Services page.

For more information, see How to secure the container lifecycle and Containerizing a legacy application: an overview.

