

By elizabeth.lavelle
August 17, 2022
In recent years, a technology known as "containerization" has become very popular. As a concept, it refers to packaging up everything required to run an application, including runtime dependencies, configuration files, and settings, and then running the application in an isolated, pre-prepared environment. This is similar to running separate applications in entirely separate virtual machines, but is generally much more lightweight, as the containerized application runs directly on the host operating system and is simply isolated using features the host already provides.
There are multiple software solutions that allow for containerization, but one of the most popular is Docker.
Docker is a containerization solution that allows containers to be created quickly and easily. It runs on Windows, macOS, and Linux, and is used at Enable in both development and production.
Container images can be built locally to run your own software, or you can make use of already published images for third-party dependencies, hosted on public container registries, the best known of which is likely Docker's own Docker Hub.
The simplest use case for Docker is running one or more pre-built containers as dependencies. An example of this might be if your application uses a database, such as PostgreSQL, and some sort of cache, such as Redis. Rather than installing these directly on a machine, along with any other dependencies you may have, they can be run inside Docker by pulling their container images from a container registry. This can be done by running the following commands on a machine with Docker installed.
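The container names and the PostgreSQL password used here are only examples; adjust them to suit your setup.

```
docker run -d --name redis -p 6379:6379 redis
docker run -d --name postgres -p 5432:5432 -e POSTGRES_PASSWORD=example postgres
```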
This will quickly set up a Redis and a PostgreSQL instance, using locally cached images if available or otherwise pulling them from the Docker Hub container registry. Any application running on the host will be able to access them via ports `6379` and `5432` respectively. Additionally, if the host's ports are exposed on some network, then anything else on that network will also be able to access the container applications on the same ports, as the containers' ports are now mapped to the host's ports.
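For example, assuming the Redis and PostgreSQL client tools are installed on the host, you can connect to each container directly:

```
# Check that the Redis container responds
redis-cli -h localhost -p 6379 ping

# Connect to the PostgreSQL container, using the password set when it was started
psql -h localhost -p 5432 -U postgres
```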
We exposed these ports on the Docker containers with the `-p` switch. Docker also allows us to map to different ports. For example, if the application inside the container used port `123`, we could map that to port `456` on the host using `-p 456:123`. This is particularly useful if you need two instances of the same application running on the same machine, such as might arise during local development, and need them to be completely independent.
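As a sketch, two independent Redis instances (with illustrative container names) could run side by side like this:

```
# Both containers listen on 6379 internally, but are mapped to different host ports
docker run -d --name redis-one -p 6379:6379 redis
docker run -d --name redis-two -p 6380:6379 redis
```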
We also used the `-d` switch, which simply detaches the containers from our terminal and allows them to run in the background, and the `-e` switch, which allows us to set an environment variable inside the container. In this case we set the password for the default PostgreSQL user. Lastly, there's the `--name` switch, which we used to give our containers friendly names.
We can now see our running containers by running the command `docker ps`. Running this should produce an output similar to the one below.
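The container IDs and timestamps shown here are only illustrative:

```
CONTAINER ID   IMAGE      COMMAND                  CREATED         STATUS         PORTS                    NAMES
f1a2b3c4d5e6   postgres   "docker-entrypoint.s…"   2 minutes ago   Up 2 minutes   0.0.0.0:5432->5432/tcp   postgres
a9b8c7d6e5f4   redis      "docker-entrypoint.s…"   3 minutes ago   Up 3 minutes   0.0.0.0:6379->6379/tcp   redis
```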
In addition to allowing us to easily set up applications as a dependency, Docker also allows us to containerize our own applications. To do this, we must create a Dockerfile, which defines the steps to build our image. A simple Dockerfile to host an ASP.NET Core web API might look like the following, assuming we build our web API beforehand on the host and place the output in a folder called `BuildOutput` alongside the Dockerfile.
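```
# Start from Microsoft's ASP.NET Core 3.1 runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1

# Work inside /WebApi within the container
WORKDIR /WebApi

# Copy the pre-built application from the host into the working directory
COPY BuildOutput/ .

# Run the web API when the container starts
ENTRYPOINT ["dotnet", "WebApi.dll"]
```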
In order, we start from a container image (`mcr.microsoft.com/dotnet/core/aspnet:3.1`) that already contains the runtime for our app, provided by Microsoft, and build on that. We set the working directory in which we want to make changes, and then copy the build output from outside our container into the current working directory `/WebApi` inside our container. We then tell Docker to run the command `dotnet WebApi.dll` when the container is started by a user.
To build the image, the above Dockerfile must be placed alongside our `BuildOutput` directory, in a file named `Dockerfile`. We then build the image by running the command below in the same directory, giving our image a helpful tag with the `-t` switch (image tags must be lowercase).
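```
# The trailing "." tells Docker to use the current directory as the build context;
# the tag name here is just an example
docker build -t my-image-tag .
```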
Having built our image, we will find it in the local image cache, and it can be run using the `docker run` command as above. Alternatively, it could be pushed to a container registry, using the commands below.
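The registry address below is a placeholder for your own registry, which will usually require a `docker login` first:

```
# Tag the local image with the registry's address, then push it
docker tag my-image-tag myregistry.example.com/my-image-tag:latest
docker push myregistry.example.com/my-image-tag:latest
```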
The image can now be pulled and run on other machines without rebuilding it, similar to how we did in the previous section. It is fully self-contained, with no need for the source code or build tooling on the machine that runs it.
Traditionally, software in production would have to run on "bare metal", that is to say directly on the host operating system with no containerization or virtualization, and therefore no isolation between it and other applications on the system. Any dependency conflicts would have to be resolved by hand. There was also the option of running applications inside separate virtual machines, each with its own guest operating system. This is very effective at isolating software, but is much heavier in terms of both resource utilization and setup time.
However, containerization, and by extension Docker, offers a much more lightweight alternative to running a full set of virtual machines while still providing similarly effective isolation. In fact, running applications inside a Docker container is not significantly more resource hungry than simply running them on the host directly.
A Docker container is its own environment: it contains all the libraries and runtimes necessary for a web application to run. This means that anywhere a Docker container can run, your application can run. And as the environment is already fully set up by the time the container is built, there is minimal configuration required at the point where you actually deploy your application.
What all this means is that your application has little reason to care about where it is deployed. In principle, Microsoft Azure, AWS, Linode, or any other hosting provider with some facility for hosting Docker containers needs only to be given the image to run and told how to connect that container to the outside world. The application should then run the same as it would with any other hosting provider.
Similar to being agnostic toward hosting providers, spinning up a Docker container on an engineer's machine will produce the exact same environment inside the container as it would in production, for the same reasons as stated above. Having a local environment match the production environment as closely as possible is important for ensuring that the behavior an engineer sees is the same behavior that the production instance produces. Docker makes this easy to achieve.
At Enable, we use Docker to quickly spin up instances of third-party dependencies our software requires, such as Redis. This reduces the complexity of setting up an engineer's machine, and therefore shortens the time before an engineer is able to get to work. Time saved during setup translates directly into increased productivity.
Additionally, we have started to containerize some of our own software using Docker. For example, our own central PDF generation app now runs entirely inside a set of Docker containers when running as a dependency on an engineer's machine. This reduces the setup time before we're able to start working on a solution that depends on our PDF app. Once a container is built, it is published to our own private Azure Container Registry.
Meanwhile, in production, our central PDF generation app is partly containerized. Some components of it run directly on an Azure App Service, while other parts of it run inside a container, where we have much greater control over the environment in which our app runs.
If you'd like to get set up with Docker yourself, you can find out more via the official Getting Started guide.