Containerizing toolchains to streamline customer deliveries




Recently, I completed a project that targeted the ESP32 microcontroller. Unfortunately, things got messy when it came time for my customer to verify the implementation on their end. While I had figured out the proper system configuration some time ago to get the source to build successfully on my development machine, I now had to recall all of those steps for my customer. This resulted in a painful back and forth, and we still couldn’t figure out why my customer couldn’t get the build to work. I then had the idea to use a container to ease the pain. I had been using containers for another open-source Linux project and thought it made sense to create a container for my client. Within a few hours, I had a container set up, and the client was able to build the codebase with a few commands. I had previously relegated containers to the realm of system administrators and was pleasantly surprised by how well they provide a self-contained, independent environment for building source code.

In this article, we’ll briefly go over what containers are, what goes into setting up a container (with specific examples from my experience), and finally, what value they can provide for customer deliveries. While there are different container technologies and products, this article will focus on Docker.

What Are Containers?

To understand what containers are, let’s consider the alternative: virtual machines (VMs). A VM contains all of the software components necessary to run a self-contained operating system on your host machine, as shown in Figure 1.


Figure 1: Virtual machine’s interaction with the host PC. (Source: Author)

The VM is an application that represents a complete system, with its own OS (and kernel) that may be completely different from the OS running on the host PC. For example, the host PC could run Windows while the VM runs Linux. In fact, with emulation, the host PC can even be an x86 machine while the VM presents an ARM architecture! A VM gives the most flexibility in running a virtualized environment. In our specific use case, we only need to configure the VM for the target toolchain once. Then we can ship that VM to our clients, and they simply run it and execute the appropriate commands to invoke the toolchain. While this seems to be the simplest option, it comes with a few drawbacks.

First, if an issue is discovered in one of the “Toolchain Dependencies” of Figure 1, we would need to rebuild and reship the VM to our client. While the specifics of a VM configuration are encapsulated in a file, the contents of the VM are not. Second, VM images are inherently large, which doesn’t lend them to version control. Third, the large size of VMs makes the shipping process difficult. Since we can’t easily check a VM image into a source control repository, we would have to rely on cloud storage solutions, which can quickly become costly and make tracking difficult and awkward. Finally, VMs can make our clients’ workflow awkward. For example, while the focus of the VM is to run the toolchain specific to the project, what if there are changes to the codebase that our client needs to pull? They will have to configure the VM for their version control workflow. This quickly becomes a disaster if they need to update their VM (see the first point): every time they download and run a new version of our VM, they will have to reapply their own customizations.

Containers address these issues by running natively on the host PC/OS; a container is simply another application, as shown in Figure 2.


Figure 2: Container’s interaction with the host PC. (Source: Author)

A container isn’t a complete operating system. Instead, it only consists of the tools and applications that we require and specify. The container is isolated from and runs independently of the host PC configuration. Of course, as seen in Figure 2, the major drawback to using a container is that it shares the OS kernel of the host PC. Thus, if the basis of our toolchain is Linux, then our customer will need to run Linux. While there are techniques to run a Linux container on a Windows host PC, they are not straightforward. However, if our customer is unfamiliar with setting up Linux, we can provide them with a barebones VM running the same OS as our host PC, with the appropriate container technology configured. They can then simply run the container inside the VM to build our project’s source code. While this may seem to defeat the purpose of a container, remember that the VM configuration is minimal: it’s only configured to run the container.

How Can We Set Up a Docker Container?

Now that we’ve discussed what a container is, we can discuss how to set one up, specifically a Docker container. The first step is to obtain a Docker image to run. While there are three common techniques for doing this, we’re going to focus on the first two. We’ll only briefly discuss the last technique and why it doesn’t necessarily apply to our use case of creating an isolated environment to serve as a build system.

The first technique is also the simplest. In fact, it doesn’t involve any building at all. Instead, we simply run a container from an image that is hosted on Docker’s “hub” and has been pre-built by the vendor. In the case of the Espressif toolchain, which is used to build projects targeting the ESP32 microcontroller, the command shown in Listing 1 would be executed at the root of the source code directory:

esp32_source $> docker run --rm -v $PWD:/project -w /project espressif/idf idf.py build

Listing 1: Running a Docker container from the hub

The command being invoked is “docker run” (not just “docker”), and its options are outlined below:

  • --rm: Automatically remove the container when it exits. Without this flag, every invocation of the command leaves behind another stopped container that unnecessarily occupies space on your hard drive.
  • -v <host dir>:<container dir>: Mounts the <host dir> directory on the host PC to <container dir> on the container. When the container is running, any operations it performs in <container dir> are effectively performed in <host dir>. In the above example, the current working directory (“$PWD”) will be mounted as the directory “/project” in the container.
  • -w <container dir>: The directory inside the container where operations will be performed (i.e. the “working directory”). Since the working directory is set to “/project”, which is mounted from the current working directory on the host, any operations performed in the container are effectively performed in the root directory of the codebase.
  • espressif/idf: The image from which to run the container. If we were to create our own custom image, which we’ll discuss next, Docker would first look for the named image locally before reaching out to its hub. Since we haven’t created one, Docker will simply pull the image with that name from Docker Hub.
  • idf.py build: This is the command to execute in a bash terminal of the container with the environment appropriately configured.

And with that, all of the dependencies necessary to build source code for the ESP32 are isolated to the container, while the final binary is placed in the build directory at the root of the codebase, as intended.
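
As a side note, if you’d like to explore the toolchain environment interactively rather than running a one-shot build, the same invocation can be used with the -i and -t flags and no trailing command. This is just a sketch; it relies on the image’s default command being a bash shell (which, as we’ll see in Listing 2, it is):

esp32_source $> docker run --rm -it -v $PWD:/project -w /project espressif/idf

This drops you at a prompt inside the container with the ESP-IDF environment already configured, which can be handy for debugging build issues.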

While the above is very simple, it relies on some assumptions that may not be desirable. For example, the image provided by Espressif uses the master branch of the esp-idf GitHub repository as its basis. Since the master branch of a repository can change over time, it can have a dire effect on our project. At best, there may be an innocuous change to an API that we’ve been using in our project. At worst, a subsystem may have been completely reworked, and a core piece of logic in our project may need to be redone. To avoid these sorts of issues, the recommendation is to create a fork of the repository.

That means we can’t use the standard image provided by Espressif on Docker Hub. Instead, we have to create our own. Fortunately, there’s a straightforward process we can follow that also helps us learn how to create containers from scratch. To understand the process, we must first understand how a Docker image is created, even one that resides on Docker Hub. Every Docker image is built from a “Dockerfile”. This is simply a text file with a particular syntax that instructs Docker how to put together different pieces to form an image. To avoid having to create a Dockerfile from scratch, we can simply take the Espressif-idf Dockerfile and modify it to point to our fork of the esp-idf GitHub repository. It’s worthwhile to go through the Dockerfile, which is reproduced in Listing 2, and understand its syntax.

FROM ubuntu:18.04
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y \
    apt-utils \
    bison \
    ca-certificates \
    ccache \
    check \
    curl \
    flex \
    git \
    gperf \
    lcov \
    libncurses-dev \
    libusb-1.0-0-dev \
    make \
    ninja-build \
    python3 \
    python3-pip \
    unzip \
    wget \
    xz-utils \
    zip \
   && apt-get autoremove -y \
   && rm -rf /var/lib/apt/lists/* \
   && update-alternatives --install /usr/bin/python python /usr/bin/python3 10
RUN python -m pip install --upgrade pip virtualenv
# To build the image for a branch or a tag of IDF, pass --build-arg IDF_CLONE_BRANCH_OR_TAG=name.
# To build the image with a specific commit ID of IDF, pass --build-arg IDF_CHECKOUT_REF=commit-id.
# It is possible to combine both, e.g.:
#   IDF_CLONE_BRANCH_OR_TAG=release/vX.Y
#   IDF_CHECKOUT_REF=<some commit on release/vX.Y branch>.
ARG IDF_CLONE_URL=https://github.com/espressif/esp-idf.git
ARG IDF_CLONE_BRANCH_OR_TAG=master
ARG IDF_CHECKOUT_REF=
ENV IDF_PATH=/opt/esp/idf
ENV IDF_TOOLS_PATH=/opt/esp
RUN echo IDF_CHECKOUT_REF=$IDF_CHECKOUT_REF IDF_CLONE_BRANCH_OR_TAG=$IDF_CLONE_BRANCH_OR_TAG && \
    git clone --recursive \
      ${IDF_CLONE_BRANCH_OR_TAG:+-b $IDF_CLONE_BRANCH_OR_TAG} \
      $IDF_CLONE_URL $IDF_PATH && \
    if [ -n "$IDF_CHECKOUT_REF" ]; then \
      cd $IDF_PATH && \
      git checkout $IDF_CHECKOUT_REF && \
      git submodule update --init --recursive; \
    fi
# Install all the required tools, plus CMake
RUN $IDF_PATH/tools/idf_tools.py --non-interactive install required \
  && $IDF_PATH/tools/idf_tools.py --non-interactive install cmake \
  && $IDF_PATH/tools/idf_tools.py --non-interactive install-python-env \
  && rm -rf $IDF_TOOLS_PATH/dist
# Ccache is installed, enable it by default
ENV IDF_CCACHE_ENABLE=1
COPY entrypoint.sh /opt/esp/entrypoint.sh
ENTRYPOINT [ "/opt/esp/entrypoint.sh" ]
CMD [ "/bin/bash" ]

Listing 2: Espressif-idf Dockerfile

As can be seen above, there are only a handful of instructions that make up a Dockerfile. These instructions are described below:

  • FROM: Every Dockerfile must start with a base image, which is defined by this instruction. This simply tells Docker the starting point for the container.
  • RUN: This instruction executes a command in the container while the image is being built. It is important to keep in mind that each RUN instruction is executed in its own layer and shell session, so shell state (such as the current directory) doesn’t carry over from one RUN to the next. That’s why we see related commands chained together with “&&” inside a single RUN instruction rather than spread across separate ones.
  • ENV: This instruction sets a particular environment variable in the container.
  • ARG: This instruction defines a variable that a user can pass to the Docker builder. Notice that instead of creating a new Dockerfile, we could have simply set the IDF_CLONE_URL variable to our forked repository when building the image (we sketch this option below, after Listing 5).
  • COPY: This instruction copies files from the host PC to a location in the container.
  • CMD: This instruction specifies the default command the container runs if none is given in the “docker run” invocation.
  • ENTRYPOINT: This instruction specifies an executable that always runs when the container starts; the command specified by the user during the “docker run” invocation is passed to it as arguments. For example, in the case of ESP-IDF, we know that the “export.sh” file in the esp-idf source directory must be sourced to set the appropriate environment variables, and this step is most likely performed in the “/opt/esp/entrypoint.sh” script. Unfortunately, since Docker Hub doesn’t show us the contents of entrypoint.sh to confirm this, we need to try something else. We can simply run the container with the invocation in Listing 3:
esp32_source $> docker run --rm -v $PWD:/project -w /project espressif/idf cat /opt/esp/entrypoint.sh

Listing 3: Retrieving the contents of entrypoint.sh

The above command simply shows the contents of the entrypoint.sh script, which is reproduced in Listing 4:

#!/usr/bin/env bash
set -e
. $IDF_PATH/export.sh
echo "$@"
exec "$@"

Listing 4: The contents of entrypoint.sh

And our hunch is confirmed! The entrypoint.sh script sets up the ESP-IDF environment and then executes the command provided in the “docker run” invocation.
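
With the Dockerfile and its entrypoint understood, pointing the image at our fork only requires changing the ARG defaults in the middle of the Dockerfile. A minimal sketch of that modification, where the fork URL is a hypothetical placeholder (we could also pin IDF_CLONE_BRANCH_OR_TAG to a specific release tag for extra stability):

ARG IDF_CLONE_URL=https://github.com/our-company/esp-idf.git
ARG IDF_CLONE_BRANCH_OR_TAG=master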

We can package the Dockerfile, modified to use our forked repository, together with the entrypoint.sh file, and send those two files to our client. We can then instruct our client to execute the commands in Listing 5 to build the Docker image (assuming that the two files have been downloaded to the “cool-container” directory):

cool-container $> ls
Dockerfile entrypoint.sh
cool-container $> docker build -t cool-container .

Listing 5: Building a Docker image from a Dockerfile
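
Alternatively, if we’d rather not maintain a modified Dockerfile at all, the ARG mechanism described earlier lets us select the fork at build time instead. A sketch, again with a hypothetical fork URL:

cool-container $> docker build -t cool-container --build-arg IDF_CLONE_URL=https://github.com/our-company/esp-idf.git .

Either way, the -t flag simply gives the resulting image the local name “cool-container” that we’ll reference when running it.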

Finally, we can instruct our client to execute the command in Listing 6, at the root of the project source code, to run a container from the image that was just built:

esp32_source $> docker run --rm -v $PWD:/project -w /project cool-container idf.py build

Listing 6: Running our custom Docker container!

Notice that the invocation is almost the same as before, except that the image name now refers to the one we built locally. With that, our client can successfully build the project using just two commands, without having to worry about dependencies.

While getting the project to successfully build is a great first step, there’s more that we can do. Since the ESP-IDF tool natively supports displaying console output from the ESP32 in its “idf.py” Python script, there must be a way to do the same using our Docker container. The answer is very simple and shown in Listing 7.

esp32_source $> docker run --rm -v $PWD:/project -w /project --device=/dev/ttyUSB0 cool-container idf.py -p /dev/ttyUSB0 monitor

Listing 7: Using our Docker container to get console output

The --device flag simply makes a local device visible to the container. We can then use the standard invocation of idf.py to use that particular serial port inside the container to monitor console output.
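
The same pattern extends to flashing the device. Since idf.py also provides a flash command, something along the following lines (assuming the board still enumerates as /dev/ttyUSB0 on the host) should program the freshly built binary from within the container:

esp32_source $> docker run --rm -v $PWD:/project -w /project --device=/dev/ttyUSB0 cool-container idf.py -p /dev/ttyUSB0 flash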

I mentioned earlier that there was a third way to build Docker containers. This final method assumes that we have multiple Docker containers that are responsible for different tasks. While this is useful when creating web applications, it’s not generally needed for embedded software build systems, since the point of your Docker container is to perform the single task of building source code. Nonetheless, if you are interested, you can look up “docker compose” on the Internet.
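
That said, if you’d like to spare your client from remembering the long “docker run” invocation, a minimal compose file can capture the same options; this is only a sketch of the idea (the file and service names below are arbitrary):

# docker-compose.yml
services:
  build:
    image: cool-container
    volumes:
      - .:/project
    working_dir: /project
    command: idf.py build

esp32_source $> docker compose run --rm build

Again, for a single build task this is optional; the plain “docker run” invocation in Listing 6 is all our client really needs.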

In summary, using Docker containers to encapsulate toolchains for customers is an effective way to mitigate configuration issues between our system and our client’s system. While there are some key assumptions that need to be made (e.g. the host PC and container should run Linux), containers provide a quick and manageable way to ensure consistency of builds.


Mohammed Billoo is Founder of MAB Labs, LLC. He has over 12 years of experience architecting, designing, implementing, and testing embedded software, from Linux device drivers to applications on resource constrained microcontrollers. Mohammed has also led numerous teams in launching embedded systems, from R&D prototypes to large-scale TRL 9 government solutions. He is also an Adjunct Professor of Electrical Engineering at The Cooper Union for the Advancement of Science and Art, where he teaches courses in Computer Architecture and Advanced Computer Architecture.

 
