Live from the Summit: Reset expectations for building enterprise clouds


As a former analyst, Alessandro Perilli has found a way to see through the promises of cloud to the real-world problems every enterprise encounters when building a private cloud.

When we think about cloud, he says, we have certain expectations:

  • Simple
  • Cheap-ish
  • Self service
  • Standardized and fully automated
  • Elastic (application level)
  • Resource consumption-based
  • Infinitely scalable (infrastructure level)
  • Minimally governed

But what we actually get is dramatically different. (If you need a visual, Perilli compared our current expectations of private cloud to a winged Pegasus, and our actual progress on private cloud to a donkey to drive the point home.)

The differences between enterprise private clouds and public clouds are enormous. Compare this list, which describes the (enterprise) private cloud as it actually exists, to our expectations above:

  • Undeniably complex
  • Expensive
  • Self-service (the only similarity to the list above)
  • Partially standardized and barely automated
  • All but elastic
  • Mostly unmetered
  • Capacity constrained
  • Heavily governed

As an ex-Gartner analyst, Perilli shared what he called a (very mature) private cloud reference architecture. It includes five layers plus self-service provisioning and chargeback on top of a physical infrastructure. Several vendors (including Red Hat) can sell all of these components.

So what actually makes up a cloud? Do you need all those layers? And if you don’t have them today, when will you get them?

Gartner identified 5 phases to get you from a traditional datacenter to a public cloud. Getting from phase 1 (technologically proficient) to phase 2 (operationally ready) alone can take years. That’s why Perilli recommends you take a “dev & test” approach to private clouds. “Very few private clouds are in production. There are a few industries where we’ve seen them successful.”

PARTIALLY STANDARDIZED AND BARELY AUTOMATED

“It’s easier said than done” to be fully standardized and automated. There’s a progression of complexity as you try to provision applications: the more components an app has, the more provisioning is required, and the more pieces you try to automate, the harder your job will be. You also have legacy systems that need to be integrated. And when you design applications in the physical world, provisioning isn’t always well documented and, therefore, isn’t repeatable. Truth be told, it may take you a year to convert 10 applications (of the thousands in the enterprise).
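
To make that progression concrete, here’s a minimal sketch, with hypothetical component names, of what provisioning automation has to encode: every component adds its own steps, and dependencies force an ordering your scripts must get right for each application.

```python
# A sketch of why multi-component provisioning is hard to automate:
# dependencies force a strict ordering, and every component multiplies
# the steps your scripts must encode. Component names are hypothetical.
from graphlib import TopologicalSorter  # Python 3.9+

# Each component maps to the set of components that must exist first.
app = {
    "load_balancer": {"web"},
    "web": {"app_server"},
    "app_server": {"database"},
    "database": set(),
}

def provision(component: str) -> None:
    # Stand-in for the real work: deploy image, configure, open firewall
    # ports, register with monitoring -- each step its own script to maintain.
    print(f"provisioning {component}")

for component in TopologicalSorter(app).static_order():
    provision(component)
# -> database, app_server, web, load_balancer
```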

ALL BUT ELASTIC

What does “cloud application” mean anyway, Perilli asked. The application should have cloud characteristics like rapid scaling and elasticity, self-service, and so on. And it should be architected to be automated, failure aware, and parallelizable. Perilli noted that most developers and applications don’t fit the cloud model. He says it’s a matter of culture and training. And most organizations, Netflix excepted, don’t have everything it takes to do it right. “Enterprises take a long, long, long time to get there,” Perilli said.

MOSTLY UNMETERED

Because we’re too busy with provisioning and orchestration issues, chargeback capabilities are taking a back seat. There are resources and licenses to manage, and if you aren’t careful, you’ll miss the chargeback opportunity.
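
For perspective, the arithmetic behind chargeback is the easy part; it’s the metering discipline that gets crowded out. A toy model, with invented rates and usage figures:

```python
# Toy chargeback arithmetic with invented rates and usage. The metering
# itself is the hard part; the billing math is trivial once data exists.
RATES = {"cpu_hours": 0.05, "gb_ram_hours": 0.01, "gb_storage_days": 0.002}

# Hypothetical metered consumption for one tenant over one month.
usage = {"cpu_hours": 1440, "gb_ram_hours": 5760, "gb_storage_days": 3000}

charge = sum(RATES[metric] * amount for metric, amount in usage.items())
print(f"monthly chargeback: ${charge:,.2f}")  # $135.60
```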

HEAVILY GOVERNED

There’s a massive need for governance, but few customers are succeeding with it in production clouds. You need an approval workflow, and applications need to be retired when they’re no longer needed. We need process governance, but most organizations aren’t at the maturity level to provide it.

So what do we do? Stay away from private cloud altogether? Nope.

ENTERPRISES SEE ENORMOUS VALUE IN PRIVATE CLOUDS

Speed to market and quality gains, plus huge demand for Platform-as-a-Service (PaaS) bundles: Perilli says customers have shared these kinds of successes with him. The key, according to Perilli, is to buy only the cloud you need. Some providers sell you more than you need, and you end up using only 1/12th of their modules.

More tips from Perilli:

  • Don’t believe the promise of a fully automated production cloud. Build a test cloud instead and be pragmatic.
  • Introduce support for scale-out applications in a meaningful way. Consider a multi-tier cloud architecture.

As a new Red Hatter, Perilli provided a quick overview of the capabilities of Red Hat’s cloud architecture. In particular, he pointed out CloudForms as a way to manage disparate systems under a single entity. And instead of buying more than you need, Red Hat plans to rely on certified ISVs to provide more management capabilities on top of our cloud. It’s a lean, smart approach, and one we’ll hear more of in the months to come.


More information


Event: Red Hat Summit 2014
Date: 2:30 p.m., Wed April 16, 2014
Type: Session
Track: Red Hat Cloud
Technical Difficulty: 1
Title: Building enterprise clouds: Resetting expectations to get them right
Speaker: Alessandro Perilli

Live from the Summit: Using Red Hat products in public clouds


When you’re looking to run your Red Hat-based applications in a public cloud—almost always as part of a hybrid cloud deployment—there are two broad aspects to consider. The first is the overall economics and suitability of public clouds for a specific workload. The second is the specific Red Hat offerings available through the Certified Cloud Provider (CCP) program. Those were the topics covered by Red Hat’s Gordon Haff and Jane Circle in their “How to use Red Hat solutions in a public cloud” presentation.

Haff focused on general considerations associated with using public clouds. Consider the nature of your workloads. Public clouds (and indeed private clouds on infrastructure such as Red Hat Enterprise Linux OpenStack Platform) are optimally matched with workloads that are stateless, latency insensitive, and that scale out rather than up. See, for example, Cloud Infrastructure for the real world.

Workload usage matters as well. A workload with low background usage and only infrequent spikes may require using a different type of cloud instance (EC2 in the case of Amazon) than a workload with more frequent spikes. Haff offered an example of how—just for a single instance—the difference between using an Amazon Web Services (AWS) medium and 2xlarge instance at 50 percent utilization over the course of a year would result in about a 6x and $4,000 difference in cost. Multiply that by hundreds of instances, as you might see in a typical production deployment, and you get a sense of how important understanding your workloads can be.
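
The back-of-the-envelope math is easy to reproduce. The hourly rates in this sketch are illustrative assumptions rather than quoted AWS prices, but they land near the figures from the talk:

```python
# Reproducing the back-of-the-envelope comparison. Rates are illustrative
# assumptions, not quoted AWS prices; real rates vary by region and OS.
HOURS_PER_YEAR = 24 * 365          # 8,760
UTILIZATION = 0.50                 # running half the time, on average

rates = {"medium": 0.18, "2xlarge": 1.08}  # hypothetical $/hour

billed = HOURS_PER_YEAR * UTILIZATION      # 4,380 hours
cost = {size: rate * billed for size, rate in rates.items()}

print(cost["2xlarge"] / cost["medium"])    # 6.0    -> the ~6x difference
print(cost["2xlarge"] - cost["medium"])    # 3942.0 -> roughly the $4,000 gap
```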

Of course, using public clouds isn’t just about the economics. Some organizations choose to use public clouds to allow them to focus on core competencies—which may not include running data centers. It also allows them, or their investors, to avoid making capital outlays for server gear against an uncertain future.

Finally, Haff discussed some of the issues associated with compliance and governance in public clouds. In general, the concern isn’t so much security in the classic sense as audit and data management. Of particular concern of late are regulatory regimes governing data placement and notifications. These differ widely by country and state, and even the provider’s nationality can matter wherever the data physically resides. (Regional providers are sometimes preferred as a result.)

Circle then discussed how to consume Red Hat products—including but not limited to Red Hat Enterprise Linux—on Red Hat Certified Cloud Providers.  There are currently about 75 CCPs. These are trusted destinations for customers who want to use public clouds as an integral element of a hybrid cloud implementation. They offer images certified by Red Hat, provide the same updates and patches that you get directly from Red Hat, and are backed by Red Hat Global Support Services.

You can use Red Hat products in public clouds through two basic mechanisms: on-demand and Cloud Access.

On-demand consumption is available in monthly and hourly consumption models. Some public cloud providers also have reserved instances for long-term workloads. You engage with the CCP for all support issues, backed by Red Hat Global Support Services, and the CCP bills you for both resource consumption and Red Hat products. The CCP handles updates through its Red Hat Update Infrastructure.

You can think of this as “RHN for the Public Cloud,” and it’s immediately available and transparent to you. Certain CCPs (currently AWS and Google Cloud Platform) also offer a “bring your own subscription” option called Cloud Access. Cloud Access provides portability of some Red Hat subscriptions between on-premise deployments and public clouds. You keep your direct relationship with Red Hat but consume on a public cloud. A new Cloud Access feature, introduced on April 7, lets you import your own image using a cloud provider’s import tools rather than using only a standard image. With Cloud Access, you will typically use Red Hat Satellite to manage updates for both on-premise and CCP images.
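
Which model makes sense is largely a utilization question. Here’s a hedged sketch, with invented prices standing in for real on-demand rates and subscription costs:

```python
# A hedged sketch of the on-demand vs. Cloud Access tradeoff. Both prices
# are invented for illustration; infrastructure charges apply either way.
ON_DEMAND_RATE = 0.10        # hypothetical $/hour premium for RHEL on-demand
SUBSCRIPTION_COST = 350.00   # hypothetical annual portable subscription

breakeven_hours = SUBSCRIPTION_COST / ON_DEMAND_RATE  # 3,500 hours/year
print(f"breakeven: {breakeven_hours:,.0f} hours/year "
      f"(~{breakeven_hours / 8760:.0%} utilization)")
# Below that utilization, pay on-demand; above it, bringing your own
# subscription via Cloud Access comes out cheaper.
```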

The takeaways from this talk?

  • Develop an appropriate application architecture
  • Ensure data is portable: test, test, test!
  • Understand the legal and regulatory compliance requirements of your applications
  • Isolate workloads as needed in a public cloud
  • Choose a cloud provider that is trusted and certified
  • Do the ROI to determine the right consumption model
  • Ensure consistent updates for your images to maintain application certifications
  • Enable hybrid cloud management, policy, and governance


More information


Event: Red Hat Summit 2014
Date: Tues. April 15, 2014
Type: Session
Track: Cloud readiness
Title: How to use Red Hat solutions in a public cloud
Speaker: Jane Circle (Red Hat), Gordon Haff (Red Hat)

Live from the Summit: PaaS and DevOps, Red Hat and Cisco


Hicham Tout, senior technical leader at Cisco, and Red Hat’s Dan Juengst partnered to teach attendees how to quantify the financial benefits of Platform-as-a-Service (PaaS) to build business cases within their IT teams. They talked about Cisco’s OpenShift deployment and its benefits, and weighed in on where PaaS fits within DevOps.

Tout began the discussion with the application development challenges IT teams face with regard to scripts:

  • It often takes too long to build scripts and build and deploy apps.
  • Because all development environments are different, the same scripts can’t be used for different languages.
  • Developers prefer different languages but choice can be costly when more scripts are required.

“The number of apps skyrocketed at Cisco,” said Tout. “Not just for products but because users wanted to prototype apps to demonstrate to their own business owners that it could be done.” Cisco saw a massive migration of applications to the cloud, which created multiple copies of data.

Struggles like these are what drove the company to look at OpenShift and its capabilities. “Usually when you provision an infrastructure, you put a package app on top of it, you build your own, or use a hybrid between the two,” said Tout. “You need a platform to build on top of. We looked at the platform aspect of it.”

The IT team wanted an environment and the business needed a platform with more diverse languages. “This is why we started looking at OpenShift—this is what drove us there,” said Tout.

The other driver is prototyping. “There’s significant hunger within IT—especially from those who work on the developer side—to show what can be done within an application,” said Tout. “We wanted to give them an environment where they could do this quickly… and provide them with the ability to control the life cycle of demos and applications.”

OpenShift Enterprise by Red Hat provided Cisco with life cycle improvements. It made it easier to create new apps and retire old ones. “We wanted to manage the environment rather than be involved in it app-by-app or environment-by-environment,” said Tout. “The density piece was interesting to us too. With OpenShift, I can squeeze 10 to 15 apps in the same virtual machine… a much smaller footprint.”
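
The footprint math is easy to check. Using an assumed portfolio size and the midpoint of Tout’s 10-to-15 figure:

```python
import math

# Rough math behind the density point, with an assumed portfolio size and
# the midpoint of Tout's 10-to-15 apps-per-VM figure.
TOTAL_APPS = 300             # hypothetical app portfolio
APPS_PER_VM_LEGACY = 1       # one app per VM in the old model
APPS_PER_VM_OPENSHIFT = 12   # midpoint of 10-15

legacy_vms = math.ceil(TOTAL_APPS / APPS_PER_VM_LEGACY)        # 300 VMs
openshift_vms = math.ceil(TOTAL_APPS / APPS_PER_VM_OPENSHIFT)  # 25 VMs
print(f"{legacy_vms} VMs -> {openshift_vms} VMs "
      f"({legacy_vms / openshift_vms:.0f}x smaller footprint)")
```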

Tout highlighted Cisco’s total cost of ownership (TCO) trend from its legacy, bare-metal datacenter in 2009 to a fully self-provisioned datacenter in 2014.

Juengst elaborated on Tout’s life cycle improvements and connected them to DevOps. “Developers want the latest tools—the latest PHP, Ruby, whatever—and it’s problematic to support all those different languages,” he said. “The business wants more speed, greater social presence, and more apps—demanding more from the organization. What can IT do? Take this world of siloed developers and operations of the past, and turn them into DevOps.”

DevOps helps developers get immediate feedback about their apps—creating opportunities for continued experimentation. “There’s tremendous value from being able to experiment a lot,” said Juengst. DevOps and PaaS give IT teams the ability to automate code check-in, testing, and delivery, so teams can push successful apps to production and reject bad ones, creating continuous delivery. DevOps provides accelerated app delivery, self-service access to the latest tools for developers, and standardized and controlled environments for operations.
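
In code terms, that gate is the heart of continuous delivery. A minimal sketch, standing in for whatever CI tooling a team actually uses; the function and path names are hypothetical:

```python
# A minimal continuous-delivery gate in plain Python, standing in for
# whatever CI tooling a team actually uses. Names are hypothetical.
import subprocess

def deliver(build_dir: str) -> bool:
    """Test a checked-in build; promote it on success, reject on failure."""
    result = subprocess.run(["python", "-m", "pytest", build_dir])
    if result.returncode == 0:
        print("tests passed: promoting build to production")
        return True
    print("tests failed: rejecting build, feedback returned to developers")
    return False

if __name__ == "__main__":
    deliver("./checkout")  # hypothetical checkout directory
```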

“OpenShift can be a key enabler for DevOps, providing tooling that enables you to do that automation,” said Juengst. It provides auto scaling and security, and gives developers a choice of language without the costs associated with new scripts.

How do you map all of this back to ROI? You use the OpenShift ROI calculator, which Juengst used to walk attendees through a company’s application development, system configuration and management, and IT capital cost savings. He also revealed the cost savings and business value associated with accelerating application development.
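
The sketch below is only a toy stand-in for the calculator’s arithmetic; every figure is invented, and the real calculator models many more inputs:

```python
# A toy stand-in for the ROI calculator's arithmetic -- every figure below
# is invented, and the real calculator models many more inputs.
savings = {
    "app_development": 120_000,   # faster delivery, fewer hand-built scripts
    "config_management": 60_000,  # standardized environments
    "capital_costs": 45_000,      # higher density, fewer VMs and hosts
}
paas_cost = 90_000  # hypothetical annual platform cost

roi = (sum(savings.values()) - paas_cost) / paas_cost
print(f"annual ROI: {roi:.0%}")  # 150%
```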


More information


Event: Red Hat Summit 2014
Date: Tues, April 15, 2014
Type: Session
Track: Business & IT strategy alignment
Title: Quantifying the financial benefits of PaaS: Build your business case
Speaker: Hicham Tout (Cisco), Dan Juengst (Red Hat)

Live from the Summit: Best practices for PaaS, OpenStack, & cloud adoption

Defining a cloud strategy for your organization comes with a lot of questions, but once you’ve answered the why, you need the how. Enter best practices for Platform-as-a-Service (PaaS), OpenStack, and more with industry expert David S. Linthicum and Red Hat cloud evangelist Gordon Haff.

After covering emerging standards in cloud adoption, Linthicum discussed 3 important questions to ask yourself as you look for solutions to fit your IT needs:

1. What is open and extensible?
2. What is cost-effective?
3. What meets your damn requirements?

That led him to best practice #1: Open your mind. “Coming in with a pure-born view of everything you’re going to leverage and ‘damn if it doesn’t meet your requirements’ is a dangerous approach,” said Linthicum. An open mind is essential for a mix-and-match environment, where you have to find what just works. “What impresses me about Red Hat is that it works and plays with other stuff really well,” he said.

The audience comprised a variety of verticals—when asked, hands shot up representing transportation, healthcare, tech, retail, finance, and more. That certainly speaks to the broad appeal of open cloud solutions, but Linthicum brought up an interesting and often overlooked trend: “The larger the industry gets, the more likely they are to fail because they don’t like to share [IT services or knowledge],” he warned.

That brought us to best practice #2: Go hire someone with a brain. “You need someone who can make the appropriate calls so that you’re marching in the right direction,” said Linthicum.

Most cloud-based systems are lacking architecture, and what’s more, solutions architects can get too narrowly focused on their own areas. “Typically, people aren’t going to have a range of skills that lets them be agnostic architects to make the right decision from all available choices,” Linthicum said. Hence, the need for open minds and sharp brains.

CAUGHT IN THE TRAP

A common pitfall organizations fall into when investing in cloud resources is ignoring training, proofs of concept, or support. As a result, Linthicum described, many clouds are not meeting expectations.

Additionally, customers sometimes get caught up in the technology itself. “They call up and the first thing they want to know is ‘what’s the best out there? Amazon? Google? Red Hat?’ instead of asking ‘what’s the best solution for me?’” said Linthicum.

That reinforced the advantage of keeping an open mind and choosing open cloud infrastructures.

A FIRESIDE CHAT

The session pivoted from structured presentation to a “fireside chat” of sorts as Red Hat’s Gordon Haff steered the hour into audience interaction, which he found in spades.

One audience member asked about adoption habits or trends with PaaS, which coincided with another question about multi-hypervisor strategies. Linthicum explained that a lot of PaaS use starts small but is expanding to more mission-critical apps. “If you look at IDC and Gartner, they show a lot more multi-hypervisor use out there. It’s becoming the norm,” he said. “People aren’t throwing out VMware but they’re initially adopting KVM, RHEV, and Hyper-V for new types of projects so they don’t have to increase their VMware spending.”

Another audience member asked about private/public hybrid infrastructure approaches to PaaS. Haff described the variety of options within the OpenShift portfolio—Red Hat’s offering—including OpenShift Online for public and OpenShift Enterprise for private PaaS. Both got ringing endorsements from Linthicum. “Theirs is pretty much the only one in the industry that just works right now. It’s rock-solid, and I have no problems saying that,” he said.


More information


Event: Red Hat Summit 2014
Date: Tues, April 15, 2014
Type: Session
Title: Best practices for PaaS, OpenStack, & cloud adoption
Speaker: David S. Linthicum (Cloud Technology Partners), Gordon Haff (Red Hat)