Making the world a safer place with systemd, docker, persistence, and patience

Sometimes things don’t go the way you want. But if you’re Dan Walsh, you don’t give up. You keep working because security is important. Containers are also important, so surely there’s a way to bring these two important things together. Surely.

Dan, a senior principal software engineer for Red Hat, tackled the touchy subject of systemd running with docker. Now, ordinarily, talking about docker or systemd would cause a flurry of responses–champions and detractors. Talking about them together? You must be crazy, Dan. Well maybe so, but Dan is also Dan. That means he’s all for doing the right thing–the right way–to keep businesses and Red Hat customers happy.

A confession

Before we get into the nitty-gritty, your friendly blogger has a confession to make: I know nothing about systemd and very little about containers. I want to. I’m learning. But, this talk was a bit over my head and this session recap will be very high-level. All hope is not lost for those of you wanting to dig deeper–the session was recorded and will be available soon.

Defined goals and issues

Dan needed to find a way to integrate docker with systemd to be able to manage processes and ensure everything stays secure. With the speed that containers are moving today, the fear is that implementation will outpace security. Not a good thing, but it’s what happens with any new kind of technology.

“I don’t want this to run until systemd is ready to receive requests.”

So, we delved into getting systemd working with the docker daemon. There are a few issues here, especially around who owns restarts: the docker daemon wants to be a process manager, and systemd is a process manager. After considering the options, it may be best for docker to own and perform the auto restarts.
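As a sketch of what that division of labor could look like (the file path and values here are illustrative, not from the talk): systemd keeps the daemon itself alive, while individual containers rely on docker's own restart policies.

```ini
# Hypothetical drop-in: /etc/systemd/system/docker.service.d/restart.conf
# systemd restarts only the daemon; it does not touch individual containers.
[Service]
Restart=on-failure
RestartSec=5
```

Containers then opt into restarts on the docker side, e.g. `docker run --restart=always …`, so exactly one process manager owns each restart.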

Dan then moved on to systemd working with docker containers directly. He issued a word of warning for anyone trying to get socket activation to work in docker containers: “Don’t.” It’s way too complicated. The bottom line: sd_notify and socket activation don’t work well in docker containers.

The progress that is runC

Over the last few years, there’s been an effort to create standards around running containers. A lot of these efforts have been concentrated into the Open Container Initiative (OCI), and one of the things that came out of that is a standard runtime environment for containers. runC is that environment: a lightweight, universal container runtime. Docker is now using this advancement. A big win.

So, now that we have this standard container runtime, let’s look at using systemd with it. After much effort, sd_notify (a mechanism for notifying the service manager of service status changes) and socket activation have both been merged.
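The sd_notify protocol itself is tiny: a datagram containing a message like `READY=1`, written to the unix socket whose path systemd passes in the `NOTIFY_SOCKET` environment variable. A minimal sketch in Python (not the libsystemd API, and not tied to runC):

```python
import os
import socket

def sd_notify(message="READY=1"):
    """Send a status message to the service manager (the sd_notify protocol).

    Returns True if a message was sent, False when not running under
    systemd (NOTIFY_SOCKET unset).
    """
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False
    if addr.startswith("@"):  # abstract-namespace socket
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(message.encode(), addr)
    return True
```

With `Type=notify` in a unit file, systemd holds the service in the activating state until it receives `READY=1`, which addresses exactly the “I don’t want this to run until it's ready to receive requests” class of ordering problem.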

Issues arise in logging

Dan then addressed a concern with docker: there are multiple logging agents. What’s more, the logs aren’t persistent, so removing a container can accidentally remove its logs too. Not cool. To fix this, docker now supports journald and, through it, now has persistent logging. This improves security: attackers can no longer do nefarious stuff in a container and then remove that container, destroying the logs in the process.
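Enabling it host-wide is a one-line daemon configuration (a sketch; `log-driver` is docker's standard daemon.json option):

```json
{
  "log-driver": "journald"
}
```

Or per container, with `docker run --log-driver=journald …`. Because the journal owns the data, `docker rm` no longer takes the logs with it; they stay queryable with `journalctl CONTAINER_NAME=<name>`.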

This is available in Fedora rawhide now.

The PID 1 problem

Now, one of the biggest, most contested issues with docker and systemd: the PID 1 problem. Essentially, whatever runs as PID 1 inside a container inherits orphaned child processes, and if it never reaps them they linger as zombies. That can cause some serious havoc. When running in production, the last thing you want is anything identified as “havoc.”
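To make the zombie problem concrete, here is a minimal sketch of the reaping loop a proper init process (like systemd) performs and that most applications skip; the function name is mine:

```python
import os

def reap_children():
    """Reap all exited children without blocking, as an init process must.

    A child that exits before being wait()ed on becomes a zombie: its
    process-table entry lingers until someone collects its exit status.
    PID 1 inherits every orphan, so if PID 1 never calls waitpid,
    zombies accumulate. Returns the list of pids that were reaped.
    """
    reaped = []
    while True:
        try:
            pid, _status = os.waitpid(-1, os.WNOHANG)
        except ChildProcessError:  # no children at all
            break
        if pid == 0:  # children exist but none have exited yet
            break
        reaped.append(pid)
    return reaped
```

A real init would run this from a SIGCHLD handler; the point is that an application dropped in as a container's PID 1 typically never does this, which is the gap the compromise below fills.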

Dan recalled several attempts to introduce process management–systemd–into containers, while keeping base image sizes small. After these attempts, which failed for several reasons, a compromise was reached that all parties agreed upon: systemd-container, a sleek version of systemd that has been cleaned up and streamlined for containers. This is now available in Red Hat Enterprise Linux 7.2.

Okay. Prove it.

Dan then launched into a short and sweet demo. He had a Dockerfile running Apache on Fedora 23. He then showed systemd running with no privileges–and no errors. Dan then issued a docker stop command, and we watched as systemd exited cleanly. From there, he turned it over to questions.

The takeaway: It’s not perfect and needs some work, but we’re getting there with container security. Despite the disagreements. Despite the roadblocks. Prepare your containers: security is coming.



  1. I am developing a utility called `dockerfy` that you may find useful for starting services and pre-running initialization commands before the primary command starts.

    For example:

    RUN wget; \
    tar -C /usr/local/bin -xvzf dockerfy-linux-amd64-*.tar.gz; \
    rm dockerfy-linux-amd64-*.tar.gz;

    ENTRYPOINT dockerfy
    CMD --run bash -c 'echo -e "STARTING UP\n"' -- \
    --start bash -c "while true; do echo 'Ima Service'; sleep 1; done" -- \
    --stdout /var/log/nginx/access.log \
    --stderr /var/log/nginx/error.log \
    --reap -- \
    nginx -g "daemon off;"

    Would run a bash script as a service, echoing “Ima Service” every second, while the primary command `nginx` runs. If nginx exits, then the service will automatically be stopped.

    As an added benefit, dockerfy can run as PID 1, and any zombie processes left over by nginx will be automatically cleaned up (reaped).

    You can also tail log files such as /var/log/nginx/error.log to stderr using the `--stderr` and `--stdout` flags, edit nginx’s configuration prior to startup, and much more.
