Red Hat Container Technology Strategy

“Control and choice”

In the late 20th century, the datacenter transitioned from proprietary mainframes to UNIX to Linux®. This was largely the result of software innovation offering robust, cheaper alternatives to the previous extremes of vertical integration. As new freedoms arose, the centralized control once held by IT administrators was fragmented, and developers began assuming some of those responsibilities. With diversified ownership and the ability to combine and tailor software, innovation became a major factor in creating new markets and technologies. Things became more flexible.

Today, containers are changing the role of operating systems in similar ways. In this Summit break-out session, Red Hatters Daniel Riek, senior director of systems design and engineering, and Clayton Coleman, architect for containerized application infrastructure, gave attendees some insight into Red Hat’s future strategy for container technology.

“What you’re actually running in production is never what you actually tested.”

Managed code is the name of the game. As Riek points out, in any given development process, bug fixes or security patches will likely arrive during or after testing, only to be applied straight into production. The more complex the system, the more likely this is to happen. Lacking alternatives, we often commit these new, untested patches in production. With only slight changes, the fixes may integrate well, but when they don’t… “it’s a big disaster.” This creates an “eternal spiral of change,” in which development never achieves consistency with production. As soon as 1 patch is applied, the next version has already been released, and the entire cycle begins again. The cluster remains in a variable, unvalidated state, never quite reflecting the original design or the tested application.
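
To make that drift concrete, here is a minimal sketch (mine, not from the session) of the kind of check a pipeline could run: it compares the image digest that passed testing against the digest reported as running in production. The digest values and the driftDetected helper are illustrative placeholders, not anything Red Hat ships.

```go
package main

import "fmt"

// driftDetected is a hypothetical helper: it flags the case where the digest
// that passed testing no longer matches what production is actually running.
func driftDetected(testedDigest, deployedDigest string) bool {
	return testedDigest != deployedDigest
}

func main() {
	// Illustrative placeholder values only.
	tested := "sha256:3f1c…"   // recorded when the release candidate passed testing
	deployed := "sha256:9be2…" // reported by the cluster after an emergency hotfix
	if driftDetected(tested, deployed) {
		fmt.Println("production no longer matches what was tested")
	}
}
```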

“Containers are redefining the OS.”

One of the benefits of the Linux transition was that standardized builds could be issued: app behavior was no longer dependent on the state of the machine at compile time. Reproducible builds were great, but not particularly scalable. The monolithic nature of the stack made for maintenance headaches and imposed a serious time burden on system updates. Managing huge arrays of these builds as virtual machines (VMs) leads to sprawl and other challenges, and requires a good deal of management resources. Containers, on the other hand, bring all of the dependencies for an application into 1 binary runtime; updates don’t affect the host system. No more VM sprawl. Instead, there’s a new problem: the container begins to resemble the mainframe of yore. It is monolithic, it is distributed with its state, and it has to be tracked across what are now huge, cloud-based systems.
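
As a small illustration of that isolation (my sketch, not the presenters’): the snippet below runs a container with podman and prints the container’s /etc/os-release next to the host’s, showing that the application’s userspace travels inside the image and can be updated without touching the host. It assumes podman is installed, and the Universal Base Image reference is used purely as an example.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The host's view of its own userspace.
	hostRelease, err := os.ReadFile("/etc/os-release")
	if err != nil {
		fmt.Fprintln(os.Stderr, "reading host os-release:", err)
	}

	// The container's view: everything it needs ships in the image,
	// so this can differ from the host and be updated independently.
	out, err := exec.Command(
		"podman", "run", "--rm",
		"registry.access.redhat.com/ubi8/ubi", // illustrative image reference
		"cat", "/etc/os-release",
	).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "running container:", err)
		return
	}

	fmt.Println("host userspace:\n" + string(hostRelease))
	fmt.Println("container userspace:\n" + string(out))
}
```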

“Tooling has to support the spectrum”

A fundamental part of the Red Hat® strategy is to establish a strongly defined API connection between kernel and application. With containers reaching this state of maturity, and DevOps being the way to administer them, IT is ceding a large amount of control to the developer. Seeing this as a facet of development highlights the need for tools that span the spectrum from development to production. With dev and production sharing host resources, handling their diverse workloads mandates transparent, unequivocal communication with the host.

The resulting continuum from Mode 1 to Mode 2 enables convergence on a single platform: an open, packaged-services ecosystem. Such a platform also requires high availability and integrated security. Red Hat delivers high performance by focusing on, for instance, direct hardware access, GPU and NUMA awareness, disruption avoidance, and hardware scheduling.
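
As one hypothetical illustration of that hardware awareness, the sketch below uses the Kubernetes Go types to declare a container that requests a GPU through the standard extended-resource mechanism; the pod name, image, and resource name are assumptions made for the example, not details from the talk.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod spec that asks the scheduler for one GPU as an extended resource;
	// names and image below are illustrative placeholders.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "gpu-workload"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "trainer",
				Image: "example.com/ml/trainer:latest", // placeholder image
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						"nvidia.com/gpu": resource.MustParse("1"),
					},
				},
			}},
		},
	}
	fmt.Printf("requested limits: %+v\n", pod.Spec.Containers[0].Resources.Limits)
}
```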

“Finding the least common denominator”

Innovation and disruption are not, therefore, a matter of architecting, but of packaging. Through standardization and optimization (across communities and industry), Red Hat hopes to establish a platform that lets any app run on your infrastructure of choice, with standardized container support extended into higher levels, such as Kubernetes. Remaining completely open will allow people and systems to move easily across ecosystems while continuing to work in familiar environments. And, with CloudForms, Red Hat provides a “single pane of glass” view of all your managed containers.

“Breaking vertical integration”

Coleman stated that cloud, in many ways, is currently similar to the mainframe mentality of the late 20th century. Vertically integrated solutions create vendor lock-in and unfair pricing models and result in lost control over your IT. This effectively stifles innovation. Red Hat wants to break this trend by providing an abstraction layer across the cloud, where apps are portable and come with an open service and communication ecosystem. And, if you insist, you can even bring along your vertically integrated solution.
