OpenShift 3 and the next generation of PaaS


  • Ashesh Badani, vice president and general manager, OpenShift
  • Matt Hicks, senior director, Engineering
  • Joe Fernandes, vice president, Management and Security
  • Clayton Coleman, lead engineer, OpenShift

In front of a standing-room-only crowd, a roundtable of Red Hat cloud experts discussed the new features in OpenShift Enterprise 3 by Red Hat and laid out the roadmap for where it’s going in the future.


Red Hat system management vision and strategy

Joe Fitzgerald, Red Hat’s vice president of management and security, took us on a quick tour through the past, current, and future plans for one of the next frontiers: integrated, open management. Joe has more than 30 years of system management experience; a dozen of those as an open source supporter. With a few management paradigm shifts under his belt, he takes the coming changes in stride. But first, a little context.


Red Hat Enterprise Linux roadmap: Summit Q&A

If you attended the Red Hat Enterprise Linux roadmap session at this year’s Red Hat Summit, you saw it was a packed house with a lot of great questions. And now that Red Hat Enterprise Linux 7 is generally available, we want to provide even more explanation and details on questions asked by attendees.

Check out what attendees wanted to learn more about – sorted by topic – in the 2014 roadmap session below. You’ll also find links to additional information where available. In case you missed it, you can download the presentation slides here.


Live from the Summit: Steering wheels and easy buttons for deploying OpenStack with high availability

With an opening slide of a Formula 1 racer, Red Hat’s Arthur Berezin, senior technical product manager for virtualization, drew parallels between the sport and deploying OpenStack.

“Driving a car without your steering wheel is something you obviously don’t want to do,” Berezin said, “and sometimes it feels like you’re doing that with OpenStack—you’re the driver, and you need a way to control your deployment.”

High availability means “99.999% uptime and making sure everything runs consistently at high scale,” he said, before introducing a new, easy way to deploy OpenStack and ensure high availability.

Berezin demonstrated upstream features in the RDO community that will be coming downstream to Red Hat Enterprise Linux OpenStack Platform soon: Foreman and Staypuft.

Foreman is an open source system for managing configurations, provisioning, and monitoring on multiple cloud providers including OpenStack. It’s a steering wheel of sorts, while Staypuft is “an easy button,” as Berezin described it, for installing Foreman.


Berezin walked through OpenStack examples with Horizon, Cinder, and Network, explaining what features the RDO community uses now to ensure high availability for services, database, and messaging:

  • Services
    • Pacemaker cluster
    • HAProxy load balancer
  • Database
    • Galera DB replication
  • Messaging
    • RabbitMQ mirrored queues
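As a rough illustration of how those pieces fit together, the sketch below shows an HAProxy front end for the Horizon endpoint, with the Pacemaker and RabbitMQ pieces noted as comments. This is a non-runnable sketch, not configuration from the session; every hostname, IP, and resource name is a hypothetical placeholder.

```text
# haproxy.cfg excerpt: load-balance Horizon across three controllers
# (IPs and server names are illustrative assumptions)
listen horizon
    bind 192.168.0.10:80            # virtual IP shared by the controllers
    balance roundrobin
    option httpchk
    server ctrl1 192.168.0.11:80 check inter 2000 rise 2 fall 5
    server ctrl2 192.168.0.12:80 check inter 2000 rise 2 fall 5
    server ctrl3 192.168.0.13:80 check inter 2000 rise 2 fall 5

# Pacemaker owns the virtual IP as a cluster resource, e.g.:
#   pcs resource create vip-horizon ocf:heartbeat:IPaddr2 ip=192.168.0.10

# RabbitMQ mirrored queues come from an HA policy on all non-internal queues:
#   rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'
```

Galera handles the database tier separately: each controller runs a MySQL/Galera node, and writes are replicated synchronously across the cluster rather than load-balanced by HAProxy alone.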

“Choose your options, hit deploy, and you’ve got a highly available environment,” said Berezin.

Staypuft puts you in control with dead-simple features to reduce deployment time and make life a little easier.

“With a single selection button, you control your nodes and you choose the services you want,” Berezin said. “It’s that easy.”


More information


Event: Red Hat Summit 2014
Date: 2:30 p.m., Wed April 16, 2014
Type: Session
Track: Cloud deep dive
Technical difficulty: 3
Title: Red Hat Enterprise Linux OpenStack Platform high availability
Speaker: Arthur Berezin

Live from the Summit: Reset expectations for building enterprise clouds


As a former analyst, Alessandro Perilli has found a way to see through the promises of cloud to the real-world problems every enterprise encounters when building a private cloud.

When we think about cloud, he says, we have certain expectations:

  • Simple
  • Cheap-ish
  • Self service
  • Standardized and fully automated
  • Elastic (application level)
  • Resource consumption-based
  • Infinitely scalable (infrastructure level)
  • Minimally governed

But what we actually get is dramatically different. (If you need a visual, Perilli compared our current expectations of private cloud to a winged Pegasus, and our actual progress on private cloud to a donkey to drive the point home.)

The differences between private enterprise clouds and public clouds are enormous. Compare the (enterprise) private cloud, below, to our expectations above:

  • Undeniably complex
  • Expensive
  • Self-service (the only similarity to the list above)
  • Partially standardized and barely automated
  • All but elastic
  • Mostly unmetered
  • Capacity constrained
  • Heavily governed

As an ex-Gartner analyst, Perilli shared what he called a (very mature) private cloud reference architecture. It includes five layers plus self-service provisioning and chargeback on top of a physical infrastructure. Several vendors (including Red Hat) can sell all of these components.

So what actually makes up a cloud? Do you need all those layers? And if you don’t have them today, when will you get them?

Gartner identified 5 phases to get you from a traditional datacenter to a public cloud. Getting from phase 1 (technologically proficient) to phase 2 (operationally ready) can take years. That’s why Perilli recommends you take a “dev & test” approach to private clouds. “Very few private clouds are in production. There are a few industries where we’ve seen them successful.”


“It’s easier said than done” to be fully standardized and automated. There’s a progression of complexity as you try to provision applications. The more components an app has, the more provisioning is required. The more pieces you try to automate, the harder your job will be. You also have legacy systems that need to be integrated. And when you design applications in the physical world, provisioning isn’t always well documented and, therefore, not repeatable. And, truth be told, it may take you 1 year to convert 10 applications (of the thousands in the enterprise).


What does “cloud application” mean anyway, Perilli asked. The application should have cloud characteristics like rapid scaling and elasticity, self-service, etc. And it should be architected to be automated, failure aware, and parallelizable. Perilli noted that most developers and applications don’t fit the cloud model. He says it’s a matter of culture and training. And most organizations, except for Netflix, don’t have everything it takes to do it right. “Enterprises take a long, long, long time to get there,” Perilli said.


Because we’re too busy with provisioning and orchestration issues, chargeback capabilities are taking a back seat. There are resources and licenses to manage, and if you aren’t careful, you’ll miss the chargeback opportunity.


There’s a massive need for governance, but customers aren’t yet successful with it in production clouds. You need an approval workflow, and applications need to be removed when they’re no longer needed. We need process governance, but we aren’t at the maturity level to provide it.

So what do we do? Stay away from private cloud altogether? Nope.


The payoff is speed to market and quality, and there’s huge demand for Platform-as-a-Service (PaaS) bundles. Perilli says customers have shared these kinds of successes with him. The key, according to Perilli, is to buy only the cloud you need. Some providers sell you more than you need, and you end up using only 1/12th of their modules.

More tips from Perilli:

  • Don’t believe the promise of a fully automated production cloud. Build a test cloud instead and be pragmatic.
  • Introduce support for scale-out applications in a meaningful way. Consider a multi-tier cloud architecture.

As a new Red Hatter, Perilli provided a quick overview of the capabilities of Red Hat’s cloud architecture. In particular, he pointed out CloudForms as a way to manage disparate systems under a single entity. And instead of buying more than you need, Red Hat plans to rely on certified ISVs to provide more management capabilities on top of our cloud. It’s a lean, smart approach, and one we’ll hear more of in the months to come.


More information


Event: Red Hat Summit 2014
Date: 2:30 p.m., Wed April 16, 2014
Type: Session
Track: Red Hat Cloud
Technical difficulty: 1
Title: Building enterprise clouds: Resetting expectations to get them right
Speaker: Alessandro Perilli

Live from the Summit: Build scalable infrastructure with Red Hat Enterprise Linux OpenStack Platform

Will Foster, Dan Radez, and Kambiz Aghaiepour–all senior engineers with Red Hat–wanted more automation in their environment. “What we did not want to do was be in the business of manually managing the building of [OpenStack] clusters,” Aghaiepour said. That’s a common problem for enterprises–or anyone–thinking about performance benchmarking and scalability testing on OpenStack. But it was also an opportunity.

Before too long, they had 9 racks with 200 baremetal nodes running Red Hat Enterprise Linux OpenStack Platform 4 (based on Havana) on Red Hat Enterprise Linux 6.5. They used Foreman 1.5 (part of Red Hat Satellite 6) for node provisioning and hostgroup-driven OpenStack deployment. Other tools or technologies used included:

  • OpenFlow 1.1
  • IPMI (intelligent platform management interface)
  • Nova Compute
  • Neutron networking
  • Controller
  • OpenStack storage (GlusterFS)
  • Puppet
  • Staypuft (OpenStack Foreman installer)
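Hostgroup-driven deployment means a node’s role (compute, controller, storage) is just a Foreman hostgroup assignment, so reprovisioning a node is a matter of moving it between groups. A sketch using Foreman’s hammer CLI follows; the hostgroup and host names are hypothetical, and exact option names can vary between hammer versions, so treat this as an assumption-laden illustration rather than a recipe from the session.

```text
# Define a role as a hostgroup, then provision a node into it
# (names are hypothetical; run against a real Foreman/Satellite server)
hammer hostgroup create --name osp-compute --environment production
hammer host create --name compute01.example.com --hostgroup osp-compute
```

Because the hostgroup carries the Puppet classes and provisioning templates, rebuilding a cluster becomes “reassign hosts, trigger a rebuild” rather than hand-configuring each node.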

Recommended best practices were around utility services and configuration management. For services, the group administered Puppet, PXE, DHCP, and DNS through Foreman, which keeps things in one place and eases administrative sprawl. For config management, they used Puppetmaster through Foreman and distributed revision control systems. “Anything we do once or twice, we never want to do again. We automate it,” said Foster.

Foster also recommended doing as much as you can through Kickstart %post. Foreman, he said, makes this easy. They also used Linux software RAID. Foster noted that most modern CPUs can handle RAID overhead. His final bit of systems design advice: Keep it simple. Use shared storage for important stuff.  Nodes should be a commodity–it should be faster to spin up a new one than to fix an old one.
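Those two recommendations—software RAID plus heavy use of %post—can be sketched in a kickstart excerpt. Device names, mount layout, and the Puppet master hostname below are assumptions for illustration, not details from the talk.

```text
# Kickstart excerpt (illustrative): software RAID 1 across two disks
part raid.01 --size=1 --grow --ondisk=sda
part raid.02 --size=1 --grow --ondisk=sdb
raid / --device=md0 --level=RAID1 raid.01 raid.02

%post
# Per-node setup at install time, before first boot:
# check in with the (hypothetical) Puppet master once, then enable the agent
puppet agent --server puppetmaster.example.com --onetime --no-daemonize
chkconfig puppet on
%end
```

Foreman can render snippets like this into per-host kickstarts, which is what makes %post a convenient place to hang one-time node configuration.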

Aghaiepour demoed the automated environment–showing off to the packed room how quickly (and easily) a 70-node cluster could be erased, redefined, reprovisioned, and deployed. The whole process took less than 10 minutes. He also demonstrated how the instance exported data to a calendaring tool. This calendar keeps track of the node types that are going to be present in the cluster, so that engineers can figure out when they should plan to test applications or services that require a particular node type.

Want to try setting up a scalable infrastructure of your own?

> Watch the demo: Automated OpenStack deployments with Foreman and Puppet

> Get the provisioning demo tools and scripts from GitHub.

This kind of tooling and DevOps work is intended to help engineers get their jobs done.  IT no longer has to provision servers, or even set up VMs–with enough careful planning and the right automation tools, users can spin up and take down their own instances. And the instances themselves can be responsive to the intended use with proper APIs, groups, and settings.  No new hardware needed.

More information


Event: Red Hat Summit 2014
Date: 10:40 a.m., Wed April 16, 2014
Type: Session
Track: Application and platform infrastructure
Technical difficulty: 3
Title: Building scalable cloud infrastructure using RHEL-OSP
Speakers: Will Foster (senior systems engineer, Red Hat), Kambiz Aghaiepour (principal software engineer, Red Hat), Dan Radez (senior software engineer, Red Hat)

Executive Exchange: Leading in the era of hyperconnectivity with Thomas Koulopoulos

“Expunge IT from our vocabulary. It is not separate from the business,” said author and Red Hat Executive Exchange speaker Thomas Koulopoulos. “It is the business, damn it.”

That’s the way Koulopoulos ended his talk with executives attending the one-day conference in San Francisco on Tuesday. Everything that came before built the case for us to completely rethink our approach to IT.

He doesn’t subscribe to the notion that everything that can be invented has been invented. “We’re not even close,” Koulopoulos said. “But we behave as though the best stuff has already been invented.” And we create generational chasms to help justify why we aren’t keeping up. Generation X. Millennials. Baby boomers. He believes these titles were created as a way for us to excuse ourselves from adapting to a rapidly changing world.


So, if you had to choose a word that best defines our ability to reshape society, technology, and business over the past 200 years, what would it be? Attendees guessed “information,” “democracy,” “capitalism,” and “the Internet.” And what is the word that defines the greatest challenge to innovation? Guesses included ubiquity, focus, communication, and intelligence. Koulopoulos suggested that one word works for both: connections.

“This is what makes us different from Socrates and Plato,” he said. “[Connections] are fundamentally what has changed the human experience more than anything else.” So how are we connecting, and how will our connections change us in the future? No one can predict the future (except sci-fi writers), he said. (See this AT&T ad for proof.) But the velocity at which we’re creating new technology, and the sheer number of people on the planet (who are living longer), means we are on a path of imminent change.

How are we connecting?

  • In 1800, the global population reached 1 billion people for the first time
  • We are projected to have 10 billion people by 2080
  • It is estimated that by 2020 we will have 2.8 trillion machine (or computer-based) connections

The confluence of machine, data, and human connections is creating a new form of intelligence. Cloud is becoming an intelligent organism. And we’re surrounded by sensors in our cars, homes, stores, and cities. A virtual tsunami of information is coming at us. “The number of grains of sand in the world is less than 1% of the data we will have in 2100,” Koulopoulos said.


It’s only natural that we have a hard time processing big numbers like this. And in his upcoming book, The Gen Z Effect: How the Hyperconnected Generation is Changing Business Forever, he explores how younger folks will look at business entirely differently than we do. For example, to them, IT isn’t a separate department from the business—it IS the business. Kids don’t get “aloneness,” he said. They are always connected to their friends in different ways. As his son told him when Koulopoulos told him to go outside and play instead of playing video games: “Dad, this [game] is my cul-de-sac.”

But blaming behaviors on a title like “millennial” is a mistake. “You have a set of behaviors that define what it means to be a part of a new society,” he said. “If you do adopt these behaviors then you become a functioning member of the new society. If you don’t, you’re disconnected. ‘Gen Z’ is just a set of behaviors we decide to take on.” If you’ve read Sherry Turkle’s book, Alone Together: Why we expect more from technology and less from each other, you know that our interactions with technology are at an early stage, and we have control over our future—if we choose to take it.


So how do we boldly go into the future, knowing that what’s waiting for us is this chaotic and overwhelming? Koulopoulos suggests that we can’t go directly into the future. We can take the techie path, a twisty, winding path with many diversions and turns. Or we can skip some steps, as our grandparents did with iPads. They didn’t dabble on Commodore computers with us. They used typewriters, then slingshotted to an iPad, and now they’re texting us.

The connections we talked about earlier are shaping tomorrow’s trends. “We will invent highly personalized communities that are hyper local and hyper global at the same time,” he said. And we’ll trade on behavior. We’ll have deep, predictive knowledge of immensely complex systems.

Another prediction is that transparency will apply to business and government. Transparency creates an understanding of behavior, and so do technology and data. So the challenge is to live with transparency while providing security. Your reaction times are shortened, and the amount of data you have access to is radically bigger. “The threat is never where you look for it,” he said.


Those in IT and marketing are swimming in the rip currents, Koulopoulos said. And the frustration we feel as CIOs and leaders of IT is that we’re treading water. Just don’t get stuck.

“You can’t use the patterns of the past to navigate the future,” he said. “Our role is as leaders—not just a catalyst. Business people won’t truly understand the power available or the quagmire. You are the leaders. Your job is to get a seat at the table. If you can’t, you’ll get commoditized.”

Try to find any model, any business, where the innovation hasn’t been foundational for that industry and hasn’t been built and supported on the bedrock of IT.

“You are the innovators,” he said. “If you’re still thinking, ‘But I’m not,’ that has to be your mission. If you don’t do it, data scientists will. They are not IT; they’re business folks. That’s a huge threat.”

  • Move from product ownership to strategy and service organization
  • We are missing the predictive view. Operations looks at dashboards, and business analysts look through the rear-view mirror, he said. So who’s looking through the windshield? IT should do that. Change the way things are done and choose the behaviors you want to adopt.
  • What is IT? That’s the question.

As for Koulopoulos, “I look forward to that day when I say, ‘I want to get off the train.’ The world will look different to me then. Completely unrecognizable.”


Event: Red Hat Executive Exchange
Date: Tue, April 15, 2014