Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph


Transcript of Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph

Page 1: Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph

Protecting the Galaxy: Multi-Region Disaster Recovery with OpenStack and Ceph

Sean Cohen, Sébastien Han, Federico Lucifredi

Page 2: Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph

Your savers

Sean Cohen - Principal Product Manager, Red Hat OpenStack Platform

Sébastien Han - Senior Domain Architect - http://www.sebastien-han.fr/

Federico Lucifredi - Product Management Director, Red Hat Ceph Storage

Page 3: Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph

DISCLAIMER

THIS PRESENTATION ONLY FOCUSES ON DATA DISASTER RECOVERY

Page 4: Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph

Our Mission

IT organizations require a disaster recovery strategy that addresses outages involving loss of storage or an extended loss of availability at the primary site.

The general idea is to seamlessly and transparently back up OpenStack images and block devices from one site to another, so that in the event of a failure, resources in site A can be manually brought online in site B.

Page 5: Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph

Assumptions

While designing your cloud environment, you must make sure that:

1. Images are templates of your applications
2. Application data is always hosted on Cinder block devices
3. Only ephemeral data is stored on the virtual machine root disk
4. Your application stack is managed by Heat (or another automation tool)

In a failure scenario, the user 'simply' re-bootstraps the application stack using Heat (sketched below), configures it with its configuration management system, then starts the application.
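To make that step concrete, here is a minimal sketch of such a re-bootstrap with python-heatclient. The endpoint, token, template file, and parameter name are illustrative assumptions, not details from the talk:

    # Hypothetical re-bootstrap of the application stack on the recovery
    # site; endpoint, token, template and parameter names are placeholders.
    from heatclient.client import Client as HeatClient

    TOKEN = 'a-valid-keystone-token-for-site-B'
    VOLUME_ID = 'id-of-the-recovered-cinder-volume'

    heat = HeatClient('1', endpoint='http://site-b:8004/v1/tenant-id',
                      token=TOKEN)

    # Reuse the same HOT template as the primary site; only the
    # parameters (e.g. the recovered data volume) change.
    with open('app-stack.yaml') as f:
        template = f.read()

    heat.stacks.create(stack_name='app',
                       template=template,
                       parameters={'data_volume_id': VOLUME_ID})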

Page 6: Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph

Longstanding effort building our Galaxy

● We have been busy building a strong single-site model; now it's time to extend it to multi-site.

● You can start with a single site and add another one later; you don't need to re-architect your cloud when adding another location.

Page 7: Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph

Use case architectures

Page 8: Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph

Properties:
● Single OpenStack site
● A data recovery site
● Both sites share the same cluster FSID
● Same L2 segment

Challenge:
● Failover procedure

How to recover?
● Promote the secondary site (see the sketch below)
● Reconnect all the services to the recovery cluster
● Eventually move back to the primary site
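A hedged sketch of the promotion step, assuming the Jewel-era rbd Python bindings; the conffile, pool, and image names are illustrative. Force-promotion is used because the failed primary cannot cleanly demote first:

    # Promote a mirrored image on the recovery cluster so it becomes
    # writable; cluster conffile, pool and image names are placeholders.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/site-b.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('volumes')
    try:
        with rbd.Image(ioctx, 'volume-1234') as image:
            image.mirror_image_promote(True)  # force: primary site is down
    finally:
        ioctx.close()
        cluster.shutdown()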

Page 9: Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph

Expected capabilities
● Multiple isolated OpenStack environments
● Each site has a live, in-sync backup of:
○ Glance images
○ Cinder block devices
● In the event of a failure, any site can recover its data from another site
● Storage architecture based on Ceph

Page 10: Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph

Properties:
● Keystone on the controllers (as usual)
● Individual login on each region/site
● Both sites have each other's data
● Both sites have the same cluster FSID

Challenge:
● Replicate metadata for images and volumes

How to recover?
● Promote the secondary site
● Import DB records into the surviving site (a metadata-copy sketch follows)
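Since Glance cannot replicate image metadata on its own (see the gap analysis later), an operator-side copy is one stop-gap. Here is a minimal one-way sketch with python-glanceclient, assuming illustrative endpoints and tokens; the image data itself travels via RBD mirroring, not the Glance API:

    # Hypothetical one-way copy of Glance image metadata between sites;
    # endpoints and tokens are placeholders.
    from glanceclient import Client

    TOKEN_A = 'token-for-site-A'
    TOKEN_B = 'token-for-site-B'

    src = Client('2', endpoint='http://site-a:9292', token=TOKEN_A)
    dst = Client('2', endpoint='http://site-b:9292', token=TOKEN_B)

    for image in src.images.list():
        # Re-create only the metadata records; the bits behind them are
        # already synchronised by rbd-mirror.
        dst.images.create(name=image.name,
                          disk_format=image.disk_format,
                          container_format=image.container_format)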

Page 11: Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph

Properties:
● Shared Keystone
● Keystone centralized and replicated DB
● Both sites have each other's data
● Works with N sites
● Both sites share the same cluster FSID

Challenges:
● Replicate UUID tokens
● MySQL cross-replication over WAN
● Requires low latency and high bandwidth
● Fernet tokens are not ready yet

How to recover?
● Promote the secondary site
● Import DB records into the surviving site

Page 12: Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph

The road ahead with Ceph RBD Mirroring

Page 13: Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph

RBD mirroring

Now available with Ceph Jewel, and arriving this summer with the upcoming RHCS 2.0 release.

● A new daemon, 'rbd-mirror', synchronises Ceph images from one cluster to another
● Relies on two new RBD image features (sketched below):
○ journaling: enables journaling for every transaction on the image
○ mirroring: tells the rbd-mirror daemon to replicate images
● Images have states: primary and non-primary (promote and demote calls)
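As a rough illustration, creating an image with journaling enabled through the Jewel Python bindings might look like the following; journaling requires the exclusive-lock feature, and all names here are illustrative:

    # Create an RBD image with the features mirroring relies on;
    # conffile, pool and image names are placeholders.
    import rados
    import rbd

    FEATURES = (rbd.RBD_FEATURE_EXCLUSIVE_LOCK |  # required by journaling
                rbd.RBD_FEATURE_JOURNALING)       # journal every write

    cluster = rados.Rados(conffile='/etc/ceph/site-a.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('volumes')
    try:
        rbd.RBD().create(ioctx, 'volume-demo', 10 * 1024 ** 3,  # 10 GiB
                         old_format=False, features=FEATURES)
    finally:
        ioctx.close()
        cluster.shutdown()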

Page 14: Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph

RBD mirroring write path

Page 15: Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph

RBD Mirroring Setup
● Use different cluster names; routable connectivity
● Deploy the rbd-mirror daemon on each cluster
● Same pool configuration at both sites
● Add the peering pool
● Add RBD image settings:

○ Enable journaling on the image
○ Mirror the whole pool or specific images (see the sketch below)
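A hedged sketch of the pool-level part of this setup, assuming the Jewel Python bindings expose the mirroring calls (the same steps can be done with the rbd CLI); pool, cluster, and client names are illustrative:

    # Enable pool-mode mirroring and register the remote peer;
    # all names are placeholders.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/site-a.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('volumes')
    try:
        r = rbd.RBD()
        # Mirror every journaling-enabled image in the pool...
        r.mirror_mode_set(ioctx, rbd.RBD_MIRROR_MODE_POOL)
        # ...and point the pool at the remote cluster as a peer.
        r.mirror_peer_add(ioctx, 'site-b', 'client.site-b')
    finally:
        ioctx.close()
        cluster.shutdown()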

Challenges:

● No HA support for rbd-mirror yet
● Two sites only
● librbd only; no kRBD support currently

Page 16: Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph

Where are we in Mitaka?

Page 17: Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph

New APIs
● Cinder Replication v2.1 ("Cheesecake") was implemented to support a disaster recovery scenario in which an entire host can be failed over to a secondary site.
○ It preserves user data access for 'replication-enabled' volume types, allowing cloud admins to rebuild/recover their cloud.
○ The new model is backend/pool-based rather than volume-based, so in a failover you fail over an entire backend.
○ This is a building block for Ceph Cinder replication support (a volume-type sketch follows).
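For illustration, here is what an admin-side 'replication-enabled' volume type looks like through python-cinderclient; the credentials and type name are placeholders, and the '<is> True' syntax is Cinder's boolean extra-spec convention:

    # Create a volume type whose volumes Cheesecake can fail over;
    # credentials and names are placeholders.
    from cinderclient import client as cinder_client

    cinder = cinder_client.Client('2', 'admin', 'secret', 'admin',
                                  'http://keystone:5000/v2.0')

    vtype = cinder.volume_types.create('replicated')
    vtype.set_keys({'replication_enabled': '<is> True'})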

Page 18: Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph

Gap analysis
● Keystone: no real production readiness for Fernet tokens yet
● Glance: no way to replicate image metadata to another site
● Nova: no way to replicate quotas, flavors, SSH keys, etc.
● Cinder: pending support for the Cinder replication API in the RBD driver with RBD mirroring

Some of these issues (metadata replication) are addressed by the Kingbird project, a centralized service for multi-region OpenStack deployments.

Page 19: Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph

Putting it all together

The road ahead in Newton

● RBD driver Cinder replication support
○ Make use of the new replication API to support RBD mirroring (promote/demote location)
● Necessary changes in the Cinder RBD driver to support RBD mirroring
○ A Cinder volume type that points to a replicated Ceph pool
● Cinder replication with more granularity ("Tiramisu")
○ The Tiramisu API will be tenant-facing, giving tenants more control over what is replicated together, i.e., a volume or a group of volumes (using replication groups)

● Kingbird is still really young, but it is a real enabler and points the way toward multi-site

Page 20: Protecting the Galaxy - Multi-Region Disaster Recovery with OpenStack and Ceph

THANKS

May the Force be with you!