Spanning the development and production spectrum
Tuesday’s discussion of the Red Hat Cloud Roadmap began with a brief overview of the current portfolio. Host James Labocki, Red Hat senior manager of strategic design practice for integrated solutions, described Red Hat’s vision of a unified cloud solution that meets IT needs “all the way from development to production.”
Four use cases
James was joined by product managers for several of the Red Hat Cloud Suite components. Given the comprehensive nature of this cloud solution, the group found it useful to describe the cloud roadmap by way of four common use cases and their relevant components.
How do we deliver apps faster?
Many organizations take weeks or months to deliver even a simple stack to developers, so there is great demand for self-service and automation across hybrid cloud environments. Providing shared tools and a comprehensive self-service microservices catalog brings dev closer to ops and enables efficient communication, so that what is ordered is what gets delivered. With everybody working in a system of shared-but-separate resources, development and testing can happen in real production environments without disrupting the business.
Red Hat has also focused optimization efforts on the biggest source of deployment delay: waiting on other people. The example given illustrated how automation across DevOps can reduce a 10-day deployment cycle to a matter of minutes: the waiting is effectively eliminated, and the actual work time drops from 10 hours to minutes as well.
Using Ansible, everything can be automated, even networking, with no more waiting on other departments for provisioning. Future development in the Ansible networking initiative will focus on network function virtualization (NFV). The Tower ecosystem will continue to expand, bringing automation to other tools and providing analytics and environment analysis.
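As a minimal sketch of the kind of automation described above, an Ansible playbook declares the desired end state and lets Ansible do the provisioning. The host group, package, and service names here are illustrative, not from the talk:

```yaml
# Hypothetical playbook: provision a web tier end to end.
# "webservers" and "httpd" are illustrative choices.
- name: Provision web servers
  hosts: webservers
  become: true
  tasks:
    - name: Install the web server package
      yum:
        name: httpd
        state: present

    - name: Start and enable the service
      service:
        name: httpd
        state: started
        enabled: true
```

Because the playbook is declarative and repeatable, the same run that takes minutes today takes minutes next month, with no ticket queue in between.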
How do we optimize IT?
IT optimization includes standardizing and optimizing existing footprints and applying cohesive life cycle management to disparate, multi-platform systems. It also includes system discovery to identify new workloads and direct development activities. Migration to an open platform is treated as a secondary use case, as organizations increasingly abandon inflexible, vertically integrated solutions.
Principal concerns are controlling VM sprawl and learning how to migrate existing cloud and virtual systems to new, open source solutions. Hyperconvergence via Red Hat Enterprise Virtualization and Red Hat Gluster Storage can let you cut the number of required servers by half, a useful innovation for telcos and other massive-scale services.
Automated discovery, pulled metrics, and monitoring help anticipate resource shortfalls, provisioning new resources as demand rises. A real-time risk assessment can help predict a kernel panic, for instance, and tell you how to prevent it.
How do we modernize dev and ops?
A third use case is that of the organization bent on modernizing development and operations. Here, the concern is rebuilding for the cloud, and employing microservices, containers, and pipeline delivery systems. These groups are interested in building cloud-native applications, where previously separate, monolithic development has proven hard to maintain and scale.
Problems are further compounded with stateful applications and persistence. With Red Hat OpenShift, persistence is achieved by orchestrating the linkage of storage to containers, so that both can be moved around as needed. With pipelines that interface with Jenkins, container movement can then be tracked across the environment. Service linking reduces the need to start and stop services during migration and update.
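The storage-to-container linkage described above is expressed in OpenShift, as in Kubernetes, through persistent volume claims. A minimal sketch, with the claim name and size chosen purely for illustration:

```yaml
# Hypothetical claim: name and size are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

A pod then references the claim by name rather than any specific disk, so when the container is rescheduled, the platform re-attaches the same storage wherever the container lands.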
Ansible will handle container creation, feeding into dev, testing, and deployment as directed by Playbooks. Satellite will allow for creation of a trusted content source, pulling in various resources for downstream consumption by, for example, OpenShift. This layer helps unify DevOps and ProdOps, removing roadblocks and ensuring consistency at all phases. CloudForms helps manage dependencies between containers and their underlying systems, and returns chargeback data to reveal the system cost in cycles and space.
How do we make sure infrastructure is scalable?
“It doesn’t help if you can build applications really fast, if you can’t scale them globally.” Exponential scalability derives from asynchronous, API-based infrastructure, as found in OpenStack. Scale-up infrastructure has proven costly and unsustainable, and do-it-yourself scale-out is difficult and similarly expensive. To provide enterprise-ready scalable infrastructure, Red Hat is focusing on composability across single and multi-site zones via Red Hat OpenStack Platform Director. Composability here means setting up templates and deployment parameters, then using Director to deploy them across the infrastructure. This composability then extends to upgrades and ensuring high availability (HA) for the undercloud.
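The templates and deployment parameters mentioned above are plain Heat environment files. A minimal sketch, assuming standard TripleO parameter names, with node counts chosen for illustration:

```yaml
# Hypothetical environment file; the counts are illustrative.
parameter_defaults:
  ControllerCount: 3
  ComputeCount: 8
  CephStorageCount: 3
```

Passed to Director with `openstack overcloud deploy --templates -e node-counts.yaml`, the same file can be versioned and reused across sites, which is what makes the deployment composable rather than hand-built.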
The Red Hat Cloud roadmap will continue to strive for simplicity in management and monitored, responsive scalability (including for Director and other tools). The team is also working towards open access to all public cloud services, reduced HA constraints, and consolidated life cycle knowledge for all deployments. The Red Hat Integrated Solutions business unit will, in turn, focus on consolidating these innovations into a single consumable package, eliminating installation complexity to allow robust, reproducible integrated installations.