Tom Callway
on 12 February 2016
We’ve submitted several talks to the OpenStack Summit in Austin. We’ve listed them all below with links to where to vote for each talk, so if you think they are interesting, please vote for them!
Understanding updates to the Ubuntu Cloud Archive
Speaker: Mark Baker
Over 2,000 organisations build OpenStack clouds using packages from the Ubuntu Cloud Archive, and with all projects following the same release cycle, it was easy for end users to know what versions to expect from releases and updates. Now, with the advent of Core and Big Tent in Liberty, OpenStack projects are free to follow their own schedules, posting stable updates, milestones, release candidates or final releases when ready. In this talk Mark Baker, OpenStack Product Manager at Canonical, will explain how the Cloud Archive will be maintained and updated in light of these changes, so that end users know what to expect and when. If you use the Cloud Archive and want to know more, or have strong feelings about how updates should be managed, come along!
Why should I consider a converged architecture for my OpenStack cloud?
Speaker: James Page
The typical approach to architecting an OpenStack cloud deployment places the control plane on dedicated server infrastructure, physically separated from the storage and compute services that provide resources to tenants of the cloud. This approach has some limitations in terms of flexibility, fault tolerance and scalability.
The Ubuntu OpenStack converged cloud architecture instead treats the control plane of the cloud as a discrete set of services. By spreading those services as far and wide as possible (including onto storage and compute servers), we can achieve a high level of resilience, improve fault tolerance and increase the scalability of the individual components of the control plane, resulting in an OpenStack cloud with no ‘special place’ for control plane services.
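As a purely illustrative sketch (not taken from the talk), Juju placement directives can express this kind of converged layout. In the Python snippet below, the charm names, machine numbers and the lxc:<machine> container syntax are assumptions that vary between Juju versions.

#!/usr/bin/env python3
"""Hypothetical sketch: spread control-plane services into containers on the
same machines that run compute and storage, using Juju placement directives.
Charm names and machine numbers are placeholders; 'lxc:<machine>' follows
Juju 1.x-era placement syntax."""
import subprocess


def juju(*args):
    cmd = ("juju",) + args
    print("+", " ".join(cmd))
    subprocess.check_call(cmd)


# Compute and storage services occupy the physical machines.
juju("deploy", "nova-compute", "--to", "1")
juju("deploy", "ceph-osd", "--to", "2")

# Control-plane services are spread across containers on those same hosts,
# so no machine is a 'special place' for the control plane.
juju("deploy", "keystone", "--to", "lxc:1")
juju("deploy", "rabbitmq-server", "--to", "lxc:2")
juju("deploy", "mysql", "--to", "lxc:1")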
Deploying OpenStack from Source to Scalable Multi-Node Environments
Speaker: Corey Bryant
OpenStack is a complex system with many moving parts. DevStack has provided a solid foundation for developers and CI to test OpenStack deployments from source, and has been an essential part of the gating process since OpenStack’s inception.
DevStack typically presents a single-node OpenStack deployment, which has testing limitations as it lacks the complexities of real, scalable, multi-node OpenStack deployments.
Ubuntu now addresses the complexity of multi-node service orchestration of OpenStack deployments and has the ability to deploy OpenStack from source rather than from binary packages.
Come and hear about how we’ve implemented this feature for Ubuntu OpenStack, how to use it yourself, and even see a live deployment of OpenStack Newton from source!
Multi-unit OpenStack cloud deployment using LXD containers on your laptop
Speaker: James Page
Testing OpenStack deployments without access to multiple pieces of physical server infrastructure can be challenging. Find out how to use LXD (the container hypervisor for Linux) with Juju (the service modelling tool from Canonical) to deploy OpenStack in LXC containers on your laptop, simulating a real-world multi-node deployment with Open vSwitch overlay networking and running KVM instances without the overhead of nested virtualization.
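To give a flavour of what such a laptop deployment involves, here is a minimal, hedged Python sketch that drives the Juju and LXD command-line tools; the controller name and bundle path are placeholders, and the 'juju bootstrap localhost' form follows Juju 2.x conventions.

#!/usr/bin/env python3
"""Hypothetical sketch: bootstrap Juju on the local LXD provider and deploy
an OpenStack bundle into containers on a single laptop. The bundle file is a
placeholder; 'localhost' is the built-in LXD cloud name in Juju 2.x."""
import subprocess


def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.check_call(cmd)


# Stand up a Juju controller inside an LXD container on this machine.
run("juju", "bootstrap", "localhost", "openstack-lab")

# Deploy a bundle describing the OpenStack services; each unit lands in its
# own LXD container rather than on a separate physical server.
run("juju", "deploy", "./openstack-on-lxd-bundle.yaml")

# Check progress: Juju's view of the model, and LXD's view of the containers.
run("juju", "status")
run("lxc", "list")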
Accelerating Production OpenStack using Low-Latency, Peer-to-Peer Storage and Networking
Speakers: Brian Fromme (Canonical) & Stephen Bates (Microsemi Corporation)
Enterprise workloads in OpenStack require low-latency, high-performance storage and networking to achieve real-world performance objectives. Specific workloads can benefit from accelerated network speeds and lower latency between VMs and their block storage. PCIe-based flash storage further accelerates the storage layer.
To satisfy real-world performance needs, CPU offloading is required. In this session, Microsemi and Canonical will show how production OpenStack performance can be accelerated through the use of Peer-to-Peer (p2p) communication between all PCIe devices, including RDMA-capable NICs and NVM Express SSDs. Technical details of the PCIe implementation will be described, with a focus on database acceleration. This presentation is targeted at technical OpenStack architects.
Deploying Agile and Secure OpenStack Networks for organizations with highly sensitive data (e.g. Telco/Government)
Speakers: Mark Baker (Canonical), Mike Meskill (Awnix) & Ali Khayam (CTO Office)
The need to secure data and tightly control access to resources and administrative functions, while remaining agile and responsive to internal customer and business/mission needs, is the #1 requirement for companies and government organizations today, and needs to be designed into every phase of the cloud lifecycle, from deployment to configuration to operations. These requirements apply to all organizations, but are especially important for those with highly sensitive data and services, such as telecommunications companies and government entities. While respecting these requirements, bringing up the cloud should be fully automatable and complete in a few minutes. The deployed cloud solution should then satisfy a wide range of requirements, from DDoS attack prevention to separation of tenant and provider networks, perimeter endpoint security and encryption in flight.
This session will cover security at scale with OpenStack and Software Defined Networks.
Confronting Complexity – The Number One Barrier to Enterprise Adoption
Speakers: Mark Baker (Canonical), Kenny Johnston (Rackspace) & Keiichiro Tokunaga (Fujitsu)
We’ve all heard it before. OpenStack is too complex. There are too many projects, governance procedures, and communities to keep up with. There are too many deployment architectures, tools and configurations to get started quickly. Complexity costs. It requires me to have an entire team of OpenStack professionals which I simply can’t afford. In survey after survey, enterprises evaluating OpenStack cite complexity as the number one barrier to adoption.
What is the OpenStack community doing to confront this complexity? What more can we do? How will this concern improve or dissipate over coming releases? With a coordinated effort, how could we make OpenStack easy to understand and evaluate, and cost effective to deploy and operate?
Making the economics of OpenStack work
Speaker: Tom Callway
For OpenStack to become the way for organisations to deploy, manage and scale applications in the next decade, the economics need to stack up. How do the economics of OpenStack compare to existing virtualisation solutions or to using public cloud platforms? As organisations look at the ever-increasing options for workload and service delivery, this talk examines the costs of OpenStack today, how it measures up against the alternatives, and what OpenStack users can do to improve the economics of running applications.
A little of what you fancy: multi-hypervisor cloud deployment with Hyper-V, KVM and LXD
Speakers: James Page (Canonical) & Gabriel Adrian Samfira (Cloudbase)
For clouds running mixed operating system workloads, the right choice of hypervisor is not always KVM. Learn how to deploy and use OpenStack clouds that make seamless use of multiple hypervisors in a single compute region, providing the ability to deploy each type of workload on the best hypervisor for the job.
Modeling, Copying and Pasting an OpenStack Cloud
Speaker: Ryan Beisner
What exactly is your OpenStack cloud’s topology? How many machines? How many containers? Which services are deployed, and where? What are the configurations of each service? Can you re-deploy or reproduce that cloud topology? OpenStack has no notion of a cloud. Users have a need to describe their particular cloud deployment in a repeatable, consistent way. In this talk we will describe modeling, documenting, redeploying and reproducing OpenStack deployments — and why that is useful.
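As a rough illustration of the kind of introspection this requires, the hedged Python sketch below summarises a deployment’s topology from Juju’s YAML status output; the key names (‘machines’, ‘services’, ‘units’) follow Juju 1.x output and are assumptions here, and PyYAML is required.

#!/usr/bin/env python3
"""Hypothetical sketch: answer 'how many machines, and what runs where?' by
parsing Juju's YAML status output. Key names follow Juju 1.x conventions
('services' rather than 'applications') and are assumptions."""
import subprocess

import yaml  # PyYAML

status = yaml.safe_load(
    subprocess.check_output(["juju", "status", "--format", "yaml"])
)

machines = status.get("machines", {})
print(f"{len(machines)} machines:")
for machine_id, machine in sorted(machines.items()):
    print(f"  {machine_id}: {machine.get('instance-id', 'unknown')}")

print("service placement:")
for name, service in sorted(status.get("services", {}).items()):
    for unit, info in sorted(service.get("units", {}).items()):
        print(f"  {unit} -> machine {info.get('machine', '?')}")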
Master on Metal
Speaker: Ryan Beisner
DevStack can deploy OpenStack from source, true. But how do specific OpenStack commit levels stack up in your rack? In this talk we will describe our experience in repeatedly and consistently deploying OpenStack, from source, to bare metal, and why that is useful. Master, specific tags, or your own repos.
How I Deployed 14,000 OpenStack Clouds in 12 Months (And Tested Them)
Speaker: Ryan Beisner
This talk covers automating the deployment and validation of a multi-dimensional matrix of operating systems, OpenStack releases, topologies, configurations and substrates. It describes open source toolsets, testing approaches and validation methodologies as they relate to validating pre-production systems and functional test environments. Attendees can expect to gain a high-level understanding of how to get vast leverage over a daunting task with a very low person-to-machine ratio, and perhaps apply that knowledge to improve their own processes.
Is bigger really better?
Speakers: Billy Olsen & Jill Rouleau
In today’s OpenStack deployments you are faced with a multitude of decisions, such as which database, hypervisor, network, block storage or object storage to use. In this talk, we consider the ‘Micro Cloud’ architecture, in which many small clouds are deployed instead of one large cloud, and explore the advantages and disadvantages of both architectures.
Getting containers for free
Speaker: Tycho Andersen
The performance and density advantages containers offer are well understood; the security model and the restrictions they place on workloads are not. There has been a lot of buzz about the advantages, but without a clear understanding of the security model and execution environment, operators are not well positioned to decide whether or not containers are right for them. In this talk, I’ll cover both of these topics to shed some light for operators trying to make an informed decision on containers.
Application abstraction enables application scalability
Speaker: Bill Bauman
As cloud computing infrastructure scales, the applications that run there need to scale as well. To scale apps, just like infrastructure, a certain amount of abstraction must take place. Much like virtualization and intelligently managed machine containers have created an abstraction layer from hardware, OpenStack clouds need a similar approach to abstracting the applications themselves. The benefits of app abstraction enable hyperscale and hyper-dynamic deployment of complex, enterprise workloads in clouds. Traditional scripting and configuration management approaches no longer meet the needs of a modern OpenStack cloud.
This session will discuss Juju Charms and Puppet Application Orchestration, and how they both illustrate the imperative for a model-based approach to the application lifecycle in an OpenStack cloud.
Testing real world OpenStack deployments
Speaker: Gema Gomez-Solano
Desperate times require desperate measures. Testing real-world OpenStack deployments is no easy task, and doing so in a repeatable and reliable manner can become complicated really fast. Deploying reliably? Testing virtualised? In containers? What test cases?
This is the story of my journey from nothing to testing OpenStack upgrades in an automated and reliable fashion. The ups, the downs and the desperation along with the tips and tricks that will help you get there sooner.
OpenStack Yoga training: how distributed can OpenStack for NFV go?
Speaker: Nicolas Thomas
NFV deployments must be able to cope with distributed networks: points of presence, and multiple domains/regions/zones.
What is the maximum distance allowed between compute and control? Between storage and control? Between redundant units of control services? And how can developers test and appreciate the impact without access to a Tier 1 SP network?
Baby Steps: Get Your Deploy Scripts Bulletproof
Speaker: Greg Lutostanski
With so many architectural options available for OpenStack, it’s hard for your cloud to be exactly the same as everyone else’s, because frankly you have different workloads. Great, but that puts the burden of testing and maintenance squarely on your shoulders. As consumers of OpenStack we need to know we can stand up our cloud with minimal headache and make sure everything is rock solid. This is a walkthrough of how I run my little CI farm for my deployments, both on bare metal and with TripleO, which open source tools I lean heavily on, and what to do when something starts failing.
Deploying OpenStack (and more!) from the ToR switch
Speakers: David Duffey (Canonical) & Fernando Sanchez (PLUMgrid)
Whitebox switches are rapidly changing the way we manage datacenter networks. Large datacenters are developing their own network software and operating systems. We will demonstrate a new, modern, open source operating system that disaggregates the core operating system distribution from the network control software (i.e. disaggregates the NOS) and provides atomic updates, application isolation and security running on the switch. In this presentation, we will demonstrate this flexibility by running an OpenStack installer on the ToR switch to deploy OpenStack with SDN to a rack of machines. We will also demonstrate how to deploy network operating systems, network control software and applications to bare metal switches.
How bcache can be utilized in an OpenStack environment
Speakers: Matt Rae & Josh McJilton
Bcache allows one or more SSDs to act as a cache for one or more slower hard disk drives. We will demonstrate scenarios where bcache is used within OpenStack environments to provide SSD-like performance on slower block devices.
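As a small, hedged illustration of watching such a setup in practice, the Python sketch below reads a bcache device’s hit and miss counters from sysfs; the exact sysfs paths vary by kernel version and are assumptions here.

#!/usr/bin/env python3
"""Hypothetical sketch: report cache effectiveness for a bcache device by
reading its sysfs statistics. The /sys/block/bcache0/... layout is typical
but may differ between kernel versions."""
from pathlib import Path

STATS = Path("/sys/block/bcache0/bcache/stats_total")


def read_stat(name):
    """Return an integer counter from the stats directory, or 0 if absent."""
    try:
        return int((STATS / name).read_text().strip())
    except (FileNotFoundError, ValueError):
        return 0


hits = read_stat("cache_hits")
misses = read_stat("cache_misses")
total = hits + misses
ratio = 100.0 * hits / total if total else 0.0
print(f"bcache0: {hits} hits, {misses} misses ({ratio:.1f}% hit rate)")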