Live from the Summit: Using Red Hat products in public clouds


When you’re looking to run your Red Hat-based applications in a public cloud—almost always as part of a hybrid cloud deployment—there are two broad aspects to consider. The first is the overall economics and suitability of public clouds for a specific workload. The second is the specific Red Hat offerings available through the Certified Cloud Provider (CCP) program. Those were the topics covered by Red Hat’s Gordon Haff and Jane Circle in their “How to use Red Hat solutions in a public cloud” presentation.

Haff focused on general considerations associated with using public clouds. Consider the nature of your workloads. Public clouds (and indeed private clouds on infrastructure such as Red Hat Enterprise Linux OpenStack Platform) are optimally matched with workloads that are stateless, latency-insensitive, and scale out rather than up. See, for example, “Cloud Infrastructure for the Real World.”

Workload usage patterns matter as well. A workload with low background usage and only infrequent spikes may call for a different type of cloud instance (a different EC2 instance type, in the case of Amazon) than a workload with more frequent spikes. Haff offered an example: just for a single instance, the difference between using an Amazon Web Services (AWS) medium and a 2xlarge instance at 50 percent utilization over the course of a year works out to about a 6x difference in cost—roughly $4,000. Multiply that by the hundreds of instances you might see in a typical production deployment and you get a sense for how important understanding your workloads can be.
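The arithmetic behind that comparison is easy to reproduce. The sketch below is a back-of-the-envelope calculator; the hourly rates are hypothetical placeholders, not actual 2014 AWS pricing, so plug in current on-demand rates for a real estimate.

```python
# Back-of-the-envelope annual cost comparison for two instance sizes.
# NOTE: the hourly rates below are hypothetical placeholders, not real
# AWS pricing -- substitute current on-demand rates for a real estimate.
HOURS_PER_YEAR = 24 * 365

def annual_cost(hourly_rate, utilization):
    """Cost of one on-demand instance running at the given utilization."""
    return hourly_rate * HOURS_PER_YEAR * utilization

medium = annual_cost(hourly_rate=0.10, utilization=0.50)   # smaller instance
xxlarge = annual_cost(hourly_rate=0.60, utilization=0.50)  # 2xlarge-class instance

print(f"medium:  ${medium:,.0f}/year")
print(f"2xlarge: ${xxlarge:,.0f}/year")
print(f"difference: ${xxlarge - medium:,.0f} ({xxlarge / medium:.0f}x)")
```

With these illustrative rates the gap is about 6x per instance, which is why the same calculation across hundreds of instances dominates a deployment's economics.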

Of course, using public clouds isn’t just about the economics. Some organizations choose to use public clouds to allow them to focus on core competencies—which may not include running data centers. It also allows them, or their investors, to avoid making capital outlays for server gear against an uncertain future.

Finally, Haff discussed some of the compliance and governance issues associated with public clouds. In general, the concern isn’t so much security in the classic sense as audit and data management. Of particular concern of late are the regulatory regimes governing data placement and notifications. These differ widely by country and state, and even the provider’s nationality can matter, regardless of where the data physically resides. (Regional providers are sometimes preferred as a result.)

Circle then discussed how to consume Red Hat products—including but not limited to Red Hat Enterprise Linux—on Red Hat Certified Cloud Providers.  There are currently about 75 CCPs. These are trusted destinations for customers who want to use public clouds as an integral element of a hybrid cloud implementation. They offer images certified by Red Hat, provide the same updates and patches that you get directly from Red Hat, and are backed by Red Hat Global Support Services.

You can use Red Hat products in public clouds through two basic mechanisms: on-demand and Cloud Access.

On-demand consumption is available in monthly and hourly models, and some public cloud providers also offer reserved instances for long-term workloads. You engage with the CCP for all support issues, backed by Red Hat Global Support Services, and the CCP bills you for both resource consumption and Red Hat products. The CCP handles updates through its Red Hat Update Infrastructure.

You can think of this as “RHN for the public cloud,” and it’s immediately available and transparent to you. Certain CCPs (currently AWS and Google Cloud Platform) also offer a “bring your own subscription” option called Cloud Access. Cloud Access provides portability of some Red Hat subscriptions between on-premises and public clouds: you keep your direct relationship with Red Hat but consume on a public cloud. A new Cloud Access feature, introduced on April 7, lets you import your own image using a cloud provider’s import tools rather than using only a standard image. With Cloud Access, you will typically use Red Hat Satellite to manage updates for both on-premises and CCP images.

The takeaways from this talk?

  • Develop an appropriate application architecture
  • Ensure data is portable: test, test, test!
  • Understand the legal and regulatory compliance requirements of your applications
  • Isolate workloads as needed in a public cloud
  • Choose a cloud provider that is trusted and certified
  • Do the ROI to determine the right consumption model
  • Ensure consistent updates for your images to maintain application certifications
  • Enable hybrid cloud management, policy, and governance


More information


Event: Red Hat Summit 2014
Date: Tue, April 15, 2014
Type: Session
Track: Cloud readiness
Title: How to use Red Hat solutions in a public cloud
Speaker: Jane Circle (Red Hat), Gordon Haff (Red Hat)

Executive Exchange: Why every business and executive needs DevOps now: My 15-year journey studying high-performance IT organizations

In a game of charades, if you were asked to describe IT innovation in two syllables, you’d be right if you guessed either DevOps or Gene Kim. Kim is an author and speaker, and he describes his work bringing developers and IT operations together as a “moral crusade.” He is singularly focused on preventing the signs of the downward spiral of IT: broken promises and lengthening deployment times.

And that starts with unifying developers and IT operations teams. And that’s not easy.

Kim describes the differences between developers and operations like a true Trekkie:

Developers are Spock: A little bit weird, sits closer to the boss, thinks too hard
Operations are Scotty: Pulls levers and turns knobs, easily excited, yells a lot in emergencies.

The opportunity cost of wasted IT efforts is, Kim says, approximately $2.5 trillion/year. That’s why he co-wrote the book The Phoenix Project: A novel about IT, DevOps, and helping your business win. Kim believes the next surge of productivity will come from applying manufacturing’s lean principles to the IT value stream.

The characteristics of the “downward spiral” Kim described might sound familiar. The first is the backlog of promises made when, say, product managers take what one Red Hat Summit Executive Exchange attendee called the “PowerPoint over the wall” approach and request new projects. Our fragile infrastructure breaks, causing an outage. (Kim’s joke: “Show me a developer who’s not causing an outage, and I’ll show you one who’s on vacation.”) When we break promises, we tend to then over-promise, and end up with what Kim calls “technical debt.” This is an accumulation of “crap” in a datacenter that accrues—and “we’ll fix it when we have time,” Kim said.

Kim’s second downward-spiral characteristic: deployments take longer. When operations and developers are at war, we can’t be agile and nimble. There are too many steps to completion. And operations can get mired in unplanned work, whittling away at all the technical debt it’s accrued.

Everyone loses. And IT becomes an order taker.

There is a better way. Companies like Google, Amazon, Netflix, Spotify, Twitter, and Facebook have all succeeded at DevOps—and created competitive advantage. They’re prolific. And they’ve found a way to escape what slower IT departments risk: irrelevance.

In 2009, 10 deploys a day was fast. Today, Amazon deploys once every 11.6 seconds. “Winning in the marketplace involves out-experimenting the competition,” Kim said.

Kim quoted Intuit founder Scott Cook. “By installing a rampant innovation culture, they now do 165 experiments in the three months of tax season,” Cook said. “Business result? Conversion rate of the website is up 50%. Employee result? The folks just love it, because now their ideas can make it to market.”

He also shared some stats on why companies from many different verticals are adopting a DevOps workflow:

  • They’re more agile: 30x more frequent deployments and 8,000x faster lead time than peers
  • They’re more reliable: 2x the change success rate and 12x faster MTTR (mean time to recover)

As a student of efficiency and better workflow management, Kim has identified three characteristics of DevOps:

Flow – This measures deployments per day versus lead time. Kim finds that long lead times lead to catastrophic deployment mistakes. That’s why he suggests operations provide environments on demand so anyone who wants one can get one—which is why PaaS [Platform-as-a-Service] is so important. Developing code in a production-like environment lessens deployment errors and business risk. The highest-performing IT departments share two characteristics: 89% use both infrastructure version control and automated code deployments. Flow bottlenecks include environment creation, code deployment, test setup and run, overly tight architecture, development, and product management.

Feedback – On the assembly line at Toyota, there’s an “andon” cord anyone can pull to stop production to solve a problem. It’s pulled 3,500 times a day. It’s the only way they can build 2,000 vehicles a day. So many stops and starts might seem disruptive, but they prevent technical debt from building up. If it goes downstream, we can’t overcome it, Kim said. And he suggests that operations pull that cord as well. “To have shared goals, you have to have shared pain,” Kim said.

Culture of experimentation and learning – Break things early and often.

If your team exhibits these behaviors, they aren’t high performers:

  • Info is hidden
  • Messengers are shot
  • Responsibilities are shirked
  • Failure is covered up
  • New ideas are crushed
  • Bridging is discouraged

Kim says the downward spiral happens everywhere. But if you employ DevOps and let people control their own outcomes, you can avoid it.



Event: Red Hat Executive Exchange
Date: Tue, April 15, 2014

Executive Exchange: IT business leaders roundtable

The best part of the inaugural Red Hat Summit Executive Exchange was seeing strangers come together for one day to openly discuss their IT challenges. The executive leadership roundtable was a more organized discussion, with five attendees and a moderator, Peter High, president of Metis Strategy.

Roundtable participants:

Adam Burden, Global managing director, emerging technology innovation, Accenture
Aziz Safa, VP & GM, Enterprise applications and applications strategy, Intel IT
Lee Congdon, CIO, Red Hat
Tim Dickson, CIO, Emerging technologies and user experience, Dell
Tony McGivern, CIO, FICO

The panel shared stories about their small and large successes connecting IT to business goals, and supporting fresh new ideas to completion. Some of the many tips they shared:


  • Build a strong IT core and increase business value of IT.
  • Find a partner in another part of the organization and cross-pollinate.
  • Develop an R&D mentality to bring ideas to the rest of the organization.
  • Build skills on your team so IT speaks the language of business: finance.
  • There is only one customer: the external one. Don’t refer to colleagues as customers.
  • Hire people who know both the business and IT. They’ll come up with more ideas.
  • Be customer #1: Use your own products. It gives you a fast feedback loop. Strengthens your support org.
  • Get executive support. If this isn’t organization-wide, it’s not going to work. Be prepared for tough feedback. Convert the ones who are the most noisy.
  • The organization needs to decide that part of it will focus on innovation. But it has to involve everyone. Those on the service desk are fixing problems, and if they don’t “bubble up” the pervasive ones, we won’t succeed. Everybody is an innovator.
  • Everyone is under cost pressure. Are you ready to justify the cost of growth? Or find efficiencies in your own organization? Or do you sign up with a big, bold project and find ways to fund it?
  • Be ready for serendipity. While teams have their sleeves rolled up, go for it.
  • Innovation comes from people who are doing their day-to-day jobs in the field.
  • A lot of innovation comes out of recessions when money is tight.
  • Allocate 3 days a month for operations, engineering, and product teams to work on nothing other than a wild, crazy idea.
  • Create different domains to innovate inside of. Don’t just look at a single product’s progress—look at the portfolio return.


Event: Red Hat Executive Exchange
Date: Tue, April 15, 2014

Executive Exchange: Manufacturing and DevOps with Intel

The ties between DevOps and manufacturing are clear today, but to Glenn Rudolph, IT director of Data Centers and Hosting at Intel, that tie simply describes his career path. Rudolph was a manufacturing supervisor, and even before he read Gene Kim’s book on DevOps, he had made the connection between efficient manufacturing practices and IT efficiencies.

Manufacturing is linear. There’s process control. Lean manufacturing methodologies. Modular progression.

When Rudolph joined Intel’s IT team, he was faced with a sea of acronyms (PaaS, SaaS, IaaS, DaaS, BMaaS) and many mentions of cloud. “I’m so sick of cloud now. It’s a word I love to hate.” He also faced a litany of IT challenges: “public/private/hybrid, multiple BU teams demanding different capabilities, security, support, confusing marketplace, open versus proprietary,” etc.

Rudolph’s fresh perspective helped him navigate the behind-the-scenes world of IT. “Think about this from a customer’s point of view. Customers don’t know all of this.” They just interact with the end product.

Rudolph also advocates for creating a “model of record” (MOR). He defines it as a high-level model for all IT processes, one that might seem difficult to achieve.

IT is at an inflection point, according to Rudolph. He recommends three areas of focus for CIOs.

Compete (lower your TCO):
  • Infrastructure process – Define a MOR
  • Build enterprise cloud – Enhance user experience

Business process:
  • ID and focus on share of available market globally – You can’t be everything to everyone
  • Increase standardization – Reduce variability and complexity

Human process:
  • Transform the workforce – Change the toolbox. Technology requires different competencies. Make sure you have the right people and skills in place to compete.
  • Integrate teams and drive collaboration – Remove waste from the system

While the term “cloud” seemed overused in Rudolph’s early days in the datacenter, he has embraced OpenStack as a platform for IT hosting. It’s flexible enough to work alongside his existing infrastructure, because it’s open source technology. He also values open APIs. “People will create their own solutions if you don’t keep yourself relevant,” Rudolph said. He wants to know who the next player is, and said open source helps you use the best in class, and plan ahead.

But cloud management is hard, according to Rudolph. He is no longer looking for narrow specialists with a focus on hardware. He’d prefer to have generalists who know a lot about different areas of the datacenter, and have a software focus. He sees the silos of hardware and software becoming obsolete.

Rudolph is building an “open cloud,” and he is using the DevOps (or manufacturing) model to do it. He said it’s helped his team “spread its wings, and build expertise in a production environment. We’re learning a lot.”

So what’s the business value of all this? Rudolph is, of course, tracking his acronyms—specifically KPIs and TCO. He also compares these to 3rd party figures to validate his team’s value to the business. “Compute numbers, storage numbers up. Head count costs are down. Physical and virtual server costs (TCO) have been very competitive compared to 3rd party vendors. Cost is going down, but you need people to support it on your side. TCO tells you.”

Enterprise hosting is alive and well, according to Rudolph.



Event: Red Hat Executive Exchange
Date: Tue, April 15, 2014

Live from the Summit: PaaS and DevOps, Red Hat and Cisco


Hicham Tout, senior technical leader at Cisco, and Red Hat’s Dan Juengst partnered to teach attendees how to quantify the financial benefits of Platform-as-a-Service (PaaS) to build business cases within their IT teams. They talked about Cisco’s OpenShift deployment and its benefits, and weighed in on where PaaS fits within DevOps.

Tout began the discussion with the application development challenges IT teams are facing with regards to scripts:

  • It often takes too long to build scripts and build and deploy apps.
  • Because all development environments are different, the same scripts can’t be used for different languages.
  • Developers prefer different languages but choice can be costly when more scripts are required.

“The number of apps skyrocketed at Cisco,” said Tout. “Not just for products but because users wanted to prototype apps to demonstrate to their own business owners that it could be done.” Cisco saw a massive migration of applications to the cloud, which created multiple copies of data.

Struggles like these are what drove the company to look at OpenShift and its capabilities. “Usually when you provision an infrastructure, you put a package app on top of it, you build your own, or use a hybrid between the two,” said Tout. “You need a platform to build on top of. We looked at the platform aspect of it.”

The IT team wanted an environment and the business needed a platform with more diverse languages. “This is why we started looking at OpenShift—this is what drove us there,” said Tout.

The other driver is prototyping. “There’s significant hunger within IT—especially from those who work on the developer side—to show what can be done within an application,” said Tout. “We wanted to give them an environment where they could do this quickly… and provide them with the ability to control the life cycle of demos and applications.”

OpenShift Enterprise by Red Hat provided Cisco with life cycle improvements. It made it easier to create new apps and retire old ones. “We wanted to manage the environment rather than be involved in it app-by-app or environment-by-environment,” said Tout. “The density piece was interesting to us too. With OpenShift, I can squeeze 10 to 15 apps in the same virtual machine… a much smaller footprint.”

Tout highlighted the total cost of ownership (TCO) trend from Cisco’s legacy, bare-metal datacenter in 2009 to a fully self-provisioned datacenter in 2014.

Juengst elaborated on Tout’s life cycle improvements and connected them to DevOps. “Developers want the latest tools—the latest PHP, Ruby, whatever—and it’s problematic to support all those different languages,” he said. “The business wants more speed, greater social presence, and more apps—demanding more from the organization. What can IT do? Take this world of siloed developers and operations of the past, and turn them into DevOps.”

DevOps helps developers get immediate feedback about their apps—creating opportunities for continued experimentation. “There’s tremendous value from being able to experiment a lot,” said Juengst. DevOps and PaaS give IT teams the abilities to automate code check-in, testing, and delivery, so teams can use code to push successful apps to production and reject bad applications, creating continuous delivery. DevOps provides accelerated app delivery, self-service access to the latest tools for developers, and standardized and controlled environments for operations.
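The check-in/test/promote flow Juengst describes can be sketched as a simple gate. This is a conceptual stand-in for real CI tooling (Jenkins jobs, OpenShift build hooks, and the like), not an actual OpenShift API; all the names here are illustrative.

```python
# Minimal sketch of an automated delivery gate: every build is tested,
# passing builds are promoted to production, failing ones are rejected.
# The stages are stand-ins for real CI tooling, not an OpenShift API.

def run_tests(build):
    """Pretend test stage: a build passes only if all its checks are green."""
    return all(build["checks"].values())

def deliver(build):
    """Promote a build to production, or reject it, based on test results."""
    if run_tests(build):
        return f"deployed {build['id']} to production"
    return f"rejected {build['id']}"

good = {"id": "app-42", "checks": {"unit": True, "integration": True}}
bad = {"id": "app-43", "checks": {"unit": True, "integration": False}}

print(deliver(good))
print(deliver(bad))
```

The point of the sketch is that the gate is code: once testing and delivery are automated, "continuous delivery" is just running this loop on every check-in.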

“OpenShift can be a key enabler for DevOps, providing tooling that enables you to do that automation,” said Juengst. It provides auto scaling, security, and gives developers choice of language without the costs associated with new scripts.

How do you map all of this back to ROI? You use the OpenShift ROI calculator, which Juengst used to walk attendees through a company’s application development, system configuration and management, and IT capital cost savings. He also revealed the cost savings and business value associated to accelerating your application development.


More information


Event: Red Hat Summit 2014
Date: Tue, April 15, 2014
Type: Session
Track: Business & IT strategy alignment
Title: Quantifying the financial benefits of PaaS: Build your business case
Speaker: Hicham Tout (Cisco), Dan Juengst (Red Hat)

Live from the Summit: Docker, Atomic, and application-centric packaging


Lars Herrmann, senior director of Product and Business Strategy, and Daniel Riek, senior director of Systems Design and Engineering—both from Red Hat—entertained a capacity crowd during their afternoon session addressing Docker, Linux containers, and packaging.

In a series of short skits, Riek and Herrmann assumed enterprise IT roles as an applications owner (Herrmann) and the person who manages infrastructure (Riek). They demonstrated a common scenario in many IT organizations:

Herrmann: I have a project. It’s going to make us millionaires. I need a couple servers.
Riek: You’ll need hardware. That’ll be 3-4 months.
Herrmann: My developers need a library that’s not in the standard build.
Riek: I think we’ll get that in the next update–maybe 6 months from now?

The new library might satisfy the needs of development, but applications that needed the old library would require updating.  Moving applications to production requires security testing, a maintenance window, approval from IT, and more.

Then they took their scenario forward–to the years of virtualization.

Herrmann: I need to start a new project. It’s going to make us billionaires.
Riek: It’s self-service. Pick an image that works for you.
Herrmann: That sounds easy.
Riek: You just have to make sure you adhere to these management and security requirements.


Though application developers can set up their own servers, it does leave them responsible for compliance and security–something that takes time away from innovation. Things have gotten better, but there’s room for improvement.

This is what Docker (and Atomic–more about that in a moment) can bring. “The infrastructure is just the infrastructure,” said Riek.

Herrmann: I have a trillion-dollar application. What do I have to do so my developers can work on it?
Riek: Take your Docker image, build your stuff on it, then hand it over when it’s done.
Herrmann: What if I have a custom dependency for something?
Riek: Create a new layer, add that to your image, and we can roll that into the next gen of your container image.

Linux containers provide lightweight isolation of process, network, and filesystem spaces. Docker is a toolchain that builds on Linux containers, aggregating packaging, adding an API, an image format, and a delivery and sharing model.

Containers can replace virtualization where containers are more applicable, such as for:

  • Horizontal application isolation
  • Lightweight delegation
  • Application virtualization
  • Density

“What really makes this work is the introduction of the concept of layering,” said Riek. “You start with a base image and can then specialize the image by adding a differential layer on top.  From an operational point of view, this is very agile.”
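Riek's layering concept can be illustrated with a chain of dictionaries: reads fall through to lower layers, and writes land only in the topmost one. This is an analogy for Docker's copy-on-write layers, not its actual on-disk format.

```python
# Conceptual sketch of image layering: lookups fall through from the
# differential layer to the base layer; "specializing" the image writes
# only to the top layer. An analogy for Docker's copy-on-write layers,
# not its real storage mechanism.
from collections import ChainMap

base_image = {"glibc": "2.17", "bash": "4.2"}       # shared base layer
app_layer = {"myapp": "1.0", "custom-lib": "0.9"}   # differential layer

image = ChainMap(app_layer, base_image)

assert image["glibc"] == "2.17"   # resolved from the base layer
assert image["myapp"] == "1.0"    # resolved from the app layer

# Updating the image touches only the top layer; the base stays untouched,
# so many specialized images can share one base.
image["custom-lib"] = "1.0"
assert base_image == {"glibc": "2.17", "bash": "4.2"}
```

Sharing an immutable base layer across many specialized images is also what makes the operational story "very agile": updating the base or the differential layer are independent operations.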


Project Atomic, the upstream community for Red Hat Enterprise Linux Atomic Host, integrates a number of technologies, including:

  • Docker
  • SELinux
  • cgroups
  • namespaces
  • RPM
  • yum

Red Hat Enterprise Linux Atomic Host contains:

  • A minimal host with atomic updates
  • systemd for process management
  • Generic container orchestration primitives
  • Integration with OpenShift GearD for cross-node PaaS orchestration
  • Shared services and management agents deployed as privileged containers

Red Hat Enterprise Linux Atomic Host intends to take technical risk out of the infrastructure. Updates are made all at once. When something breaks, you can roll back to the last known-good version.
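The update-all-at-once, roll-back-on-failure model can be sketched as keeping every deployed tree and a pointer to the current one. This mimics the idea behind Atomic Host's image-based updates; the class and version names are illustrative, not Atomic's real mechanism.

```python
# Sketch of the atomic update-and-rollback model: the whole system image
# is swapped as a unit, and a pointer tracks the current version so a bad
# update can be rolled back to the last known-good tree. Illustrative
# only -- not Atomic Host's actual implementation.

class AtomicHost:
    def __init__(self, initial_version):
        self.versions = [initial_version]  # every deployed tree is kept
        self.current = 0                   # index of the running version

    def update(self, new_version):
        """Stage the new tree and switch to it in one atomic step."""
        self.versions.append(new_version)
        self.current = len(self.versions) - 1

    def rollback(self):
        """Fall back to the previous (last known-good) version."""
        if self.current > 0:
            self.current -= 1
        return self.versions[self.current]

host = AtomicHost("tree-7.0")
host.update("tree-7.1")              # atomic switch to the new image
assert host.rollback() == "tree-7.0" # something broke; roll back
```

Because the old tree is never modified in place, rollback is just moving the pointer back, which is what takes the technical risk out of updates.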

“With this, you have a generic atomic image and you can deploy exactly the same binary container image that you deploy in-house, in the cloud. It gives an abstraction layer for aggregate package deployment,” said Riek.

“We actually believe that this is fairly disruptive. We are not here announcing the end of something, but the beginning of something.”


Container certification gives ISVs a new way to deliver value to customers and creates new opportunities for interaction. “The enthusiasm we see on the Red Hat side around Docker is because it solves our own problems as an ISV to Red Hat Enterprise Linux,” said Riek.

Other benefits of application-centric packaging include:

  • Portable and reliable application deployments
  • Lightweight footprint and minimal overhead
  • Rapid and efficient application delivery
  • Simplified application development life cycle
  • Isolated and secure deployments
  • Fine-grained control

“As you might have noticed, we’re really excited about this stuff,” said Herrmann. “It’s about portable deployments. It’s about applications moving in the open hybrid cloud. It’s about efficiency in delivery, in packaging and maintenance, and organizationally. It’s about speed, agility, the promise of cloud and big data. This reduces the time it takes to get something created, copied, deployed, and updated.  We’re not talking days, hours, weeks–we’re talking seconds.”

“This is really a big deal.”


More information


Event: Red Hat Summit 2014
Date: 4:50 p.m., Tue April 16, 2014
Type: Session
Track: Cloud readiness (Cloud deep dive)
Technical difficulty: 3
Title: Application-centric packaging with Docker and Linux containers
Speaker: Daniel Riek (Red Hat), Lars Herrmann (Red Hat)

Executive Exchange: HBR research – Business transformation and the CIO role



How do some “accelerated” IT departments succeed while others simply survive? Our Red Hat Summit Executive Exchange attendees know. Abbie Lundberg, president of Lundberg Media and former CIO magazine editor in chief, reviewed the latest Harvard Business Review Analytic Services survey of 400 business leaders with Executive Exchange attendees.

The study identified three different types of businesses and their posture toward innovation:

  1. Innovation accelerators – Innovation is in company DNA and consciously pursued strategy throughout organization
  2. Ad hoc innovators – Pockets of ad hoc innovation but not pervasive or replicated across whole company
  3. Innovation not a priority – Innovation not a top priority; focus elsewhere

So what do these “accelerators” do to get to better ideas, faster? Lundberg shared a few questions from the survey. But the most telling answer came from this question: Does your IT org have the right talent? 62% of innovation accelerators say yes; 53% of companies that say innovation is a “low priority” say no. Embracing innovation is something business leaders clearly look for in their IT staff.

Question 1:  What areas of your business will be most affected by IT-enabled innovation over the next 3 years?

  • Customer engagement and insights
  • Business models and products
  • Services (includes agile development, being iterative, improving time to market)

Accelerator companies know how to commercialize IT

Lundberg says they use IT to make products and services smarter and sell the information around them, much like smart vehicles provide information about our surroundings. These accelerated IT departments offer internally developed capabilities as cloud-based services, make analytics capabilities available for a fee, and redefine the product value chain as a service to others.

Question 2:  What mechanism does your company employ to stay on top of new IT developments? (TIP: Accelerators create emerging tech groups, tech labs, futurist groups)

  • Industry conferences and events
  • Super users in the business bring in new ideas to IT
  • Emerging technology group that is part of IT (Accelerators do this the most — 63%)

Question 3:  What’s the approach you take to IT-enabled business innovations?

Accelerators take a collaborative approach (48%) rather than stealth IT, IT led, or business led. “It’s out of necessity that people are coming together. The only way we’ll compete is if we come together to solve problems,” Lundberg said.

Customers expect IT and marketing to work together

Increasingly, customers expect their experiences with businesses to be high quality (Lundberg calls it the “iPad effect”). And the link between IT and marketing creates these kinds of experiences. IT departments are beginning to understand the value of storytelling, such as video storytelling. Marketing and IT together understand the value of building technology into that video so that someone can interact with it. Lundberg also suggested improving the customer experience by having end-users work with IT interact to create insights.

Back to the survey questions.

Question 4:  Which of the following approaches does your company employ for IT-driven business innovation?

  • Cross-functional innovation board – People who are running marketing, IT, and operations.
  • We don’t have 2 years to fix this. Our competitors are Walmart and Amazon – using a social tool so cashiers know more about customers.
  • Bringing down walls and getting everyone working together

Question 5:  How would you characterize your company’s current IT organization?

  • 53% of companies that don’t see IT as an innovator view theirs as a “cost center”

Question 6:  What qualities do accelerators rate highest on a list of attributes?

  • 59% say access to the right technology
  • 52% say knowledge of the business
  • 52% say technical skills and expertise

One of the Executive Exchange attendees said he thinks the “dumbing down of IT” has been a large problem over the last several years. Upskilling IT is paramount, he added, and we need people significantly skilled in deployment and management. It’s increasingly important to have business managers understand IT. Lundberg suggested creating training for IT staff to hone their skills without requiring them to become managers.

Lundberg asked attendees to write down 1 word that describes the role of IT. Hers was “catalyst.” She said, “It’s not easy to do. Keep it up. Accelerate it if you can. Bring your teams along with you. IT has to be happening beyond your department, across the organization.”


Event: Red Hat Executive Exchange
Date: Tue, April 15, 2014

Executive Exchange: All-day meeting of IT business leaders at Red Hat Summit

“Expunge IT from our vocabulary. It is not separate from the business. It is the business, dammit.” – Thomas Koulopoulos, Chairman and founder, Delphi Group.

For the first time, Red Hat hosted an Executive Exchange during the Red Hat Summit, bringing IT leaders and their peers from almost every business type together for one day. The day was packed with discussions with Red Hat executives; research from IDC and Harvard Business Review; a panel discussion with Accenture, Intel, Red Hat, Dell and FICO; and a passionate plea for DevOps from Gene Kim, author of The Phoenix Project; and think tank founder and author, Thomas Koulopoulos.


The day began with a Q&A session with Red Hat CEO, Jim Whitehurst, and Paul Cormier, president, Red Hat products and technologies.

Some questions from the crowd included:

  • The maturation process from project to product. Cormier explained the open source development model, describing how more mature products like Red Hat Enterprise Linux can have more time between releases. Cormier hopes more contributors will surface. OpenShift is like Linux, according to Cormier: it has a rich blend of communities and features supporting it.
  • Thoughts on the services and how they can help customers upgrade to new technologies. The Red Hat services team works directly with our partner ecosystem to help on the edges of technology innovation, and on key issues like product upgrades or migrations.
  • How Red Hat Enterprise Linux can help power different types of clouds. “The value of Red Hat Enterprise Linux is that it runs on any hardware platform,” Cormier said. “That value isn’t changed. It’s become more and more important.”
  • Creating common messaging frameworks. It’s critical, but Cormier pointed out that the process of developing these frameworks has to be managed and worked through communities like JBoss and through OpenStack projects.


“We’re not building a stack of products,” said Brian Stevens, Red Hat CTO of emerging technologies. “We’re making IT more efficient.”

Why should people care about emerging tech? Half the time, ideas die early and often, Stevens said. But, “If we went around the room, 90% of the innovations you mention are happening in the open.”

The open source model allows us to make lighter investments before we eventually commit. For example, a dozen people are actively working on OpenDaylight. But when it begins to transform from community to product, upwards of 100 people will invest in the product and in training people to use it. Stevens also discussed OpenStack adoption drivers, big data and scale-out storage as a way to manage data horizontally across the enterprise, and containers. With emerging technology, Stevens said, Red Hat’s role is to enable dialogue and experimentation.


Gillen of IDC gave the group an overview of how the market is dealing with the “3rd platform,” his way of describing mobility integration, social business, and networks, and how they are changing the way customers interact with us and one another.

A few quick insights from the IDC research:

  • OpenStack is the next “big thing” for Linux growth
  • Virtualization is reaching a saturation point
  • More than half of the companies IDC surveyed said that, when building a private cloud, they’re looking to use a different hypervisor
  • The KVM hypervisor will be pulled forward by OpenStack
  • The amount of attention OpenStack gets from enterprise customers “is really impressive.”

Gillen said that organizations face a choice: function like a service provider and be threatened by outside public clouds, or treat projects like private cloud as an opportunity to rethink their virtualization strategy and trajectory.


Event: Red Hat Executive Exchange
Date: Tue, April 15, 2014