Enterprise Containers 101 with Red Hat platform architect, Langdon White

This was a full session with a line out the door just minutes before it was scheduled to start. A majority of the attendees self-identified as system administrators. A handful were developers. Almost none had experience with containers.

Red Hat platform architect Langdon White has experience as a back-end developer, and the session was intended to walk the crowd through exactly how he built a container, and why. He listed the steps he goes through to build applications:

  • First, applications run on a VM
  • Build a “monolith” (take what’s on the VM and put it in a container)
  • Break that into a set of containers
  • Finally, package it as a “nulecule”

When Langdon was building his container, he started with Drupal running on a single VM, and he used Software Collections to help install more than one version of software on a single machine (he used MariaDB, PHP, and nginx).
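
As a rough sketch of that starting point: Software Collections lets multiple versions live side by side on one machine. The collection names below are illustrative, not necessarily the ones Langdon used.

    # Install parallel-installable versions from Red Hat Software Collections
    # (collection names vary by release; these are examples).
    sudo yum install rh-mariadb102 rh-php72 rh-nginx114

    # Run a command with the collections enabled; the system's default
    # versions are left untouched.
    scl enable rh-php72 rh-mariadb102 -- php -v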

Langdon put it all in a container “because you have this nice ‘blob’ that you completely control, but it’s not as heavyweight as a VM. There’s a shared kernel, but each container is also independent (it can run different versions).”

He recommended the Red Hat Developer program website as a place to learn about creating containers. Specifically, there’s a lot of information there on using Vagrant to launch containers.

Next, Langdon talked about when to create separate containers. You can break the monolith apart along the lines of the microservices you may want to live on their own. Since you’re still testing your containers at this point, you don’t want to run them in production. Kubernetes can help keep track of your containers. (Langdon said to set up the 2 containers as 2 different pods.)
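
A minimal sketch of the “2 containers as 2 different pods” idea, written as Kubernetes pod manifests (the image names, port, and password are placeholders, not from the talk):

    # drupal-pods.yaml -- two single-container pods, one for the web tier
    # and one for the database.
    apiVersion: v1
    kind: Pod
    metadata:
      name: drupal-web
    spec:
      containers:
      - name: web
        image: drupal:8          # placeholder image
        ports:
        - containerPort: 80
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: drupal-db
    spec:
      containers:
      - name: db
        image: mariadb:10.2      # placeholder image
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: changeme        # placeholder; use a secret in practice

Creating both with kubectl create -f drupal-pods.yaml gives Kubernetes two separate pods to keep track of for you.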

“Nulecule” comes in when you want to launch OpenStack from Docker containers.

He shared a few lessons he’s learned by doing:

  1. When you’re doing this kind of work, just get the “blob” out first, then shave off individual services and think about how you can package them in a nulecule and share them across the organization.
  2. If you’ve worked with Docker containers, you may start with architecture and orchestration, then build the containers. He recommends doing the opposite, so you’re not building documentation for documentation’s sake.
  3. Visit the Red Hat Customer Portal and look at Linter for Dockerfile. Linter can tell you whether you built your files correctly. When writing a Dockerfile, White said to imagine each line has a reboot afterwards, so do related things on the same line when needed (see the sketch just below).
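
To illustrate the “reboot after each line” point, here is a hedged Dockerfile sketch (the base image and packages are placeholders): each instruction becomes its own layer, and shell state does not carry over between lines, so related steps belong on one line.

    # Placeholder base image; use whatever base your organization standardizes on.
    FROM centos:7

    # Good: install and clean up in a single RUN, so the cleanup actually
    # shrinks this layer instead of leaving the cache behind in it.
    RUN yum install -y httpd php && \
        yum clean all

    # The “reboot” in action: the cd below is forgotten once this line ends...
    RUN cd /var/www/html
    # ...so this next instruction runs in the default working directory again.
    RUN pwd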

Langdon ended the talk with a sort of lightning round of tips and tricks:

  • The stuff you don’t change much should stay near the top of your Dockerfile.
  • Minimize the number of layers before production; when you get closer to deployment, clean up the code. (See the Dockerfile sketch after this list.)
  • The “offline first” movement for mobile apps is interesting. Everyone loves applications that work offline, and depending on bandwidth, you start adding features. Think about that in relation to datacenter services: can you operate in error modes at a lesser quality of service?
  • OpenShift 3 by Red Hat is just another Kubernetes. This “blob” is made of a few containers with orchestration built in via Atomic; you can push it onto OpenShift and it should be seamless.
  • I don’t know that containerizing databases is a good idea. Most of the time, there is some big-iron thing that holds all of your data. There are use cases such as old-school data marts where the data is ephemeral, though, and those might be a good fit.
  • Sys admins in the audience said that developers haven’t taken on containers yet: how are containers managed in orgs, and who owns the containers? Langdon says that today, everybody does everything. One of the huge benefits of moving toward DevOps is that you really can say who owns what. This is why he likes OpenShift and Software Collections: he knows where things stop and start. Developers know what they’re going to get paged about, and operators know what they’re going to get paged about.
  • He said he’d like to see all Red Hat products ship in a container.
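
On the ordering and layer-count tips above, a hedged Dockerfile sketch (packages and paths are placeholders): the rarely-changing steps sit near the top so their layers stay cached, and related commands are combined so the image carries fewer layers into production.

    # Placeholder base image, packages, and paths -- adjust for your stack.
    FROM centos:7

    # Rarely changes: keep near the top so the build cache can reuse it.
    RUN yum install -y httpd php php-mysql && \
        yum clean all

    # Changes occasionally: configuration.
    COPY php.ini /etc/php.ini

    # Changes constantly: application code goes last, so an edit here only
    # rebuilds this final layer instead of invalidating everything above it.
    COPY src/ /var/www/html/

    EXPOSE 80
    CMD ["httpd", "-DFOREGROUND"]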