OpenStack with Ceph

Posted on 05-Dec-2014


Inktank

OpenStack with Ceph

Ian Colle, Ceph Program Manager, Inktank | ian@inktank.com | @ircolle | www.linkedin.com/in/ircolle | ircolle on freenode | inktank.com | ceph.com

Who is this guy?

People need storage solutions that…
• …are open
• …are easy to manage
• …satisfy their requirements: performance, functional, financial (cha-ching!)

Selecting the Best Cloud Storage System

Hard Drives Are Tiny Record Players and They Fail Often (photo: jon_a_ross, Flickr / CC BY 2.0)

[Diagram: one failing drive, multiplied across a fleet of 1 MILLION drives, means roughly 55 drive failures per day]
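The "55 times per day" figure is easy to sanity-check. Assuming an annualized failure rate of about 2% (a common published ballpark; the rate is my assumption, not from the slides), a fleet of a million drives loses about 55 drives every day:

```python
# Back-of-the-envelope check of the "55 failures per day" claim.
# The ~2% annualized failure rate (AFR) is an assumption, not from the slides.
def failures_per_day(num_drives: int, afr: float = 0.02) -> float:
    """Expected drive failures per day for a fleet with the given AFR."""
    return num_drives * afr / 365

print(round(failures_per_day(1_000_000)))  # 55
```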

“That’s why I use Swift in my OpenStack implementation.” Hmmm, what about block storage?

I got it!

• Persistent: more familiar to users
• Not tied to a single host
  - Decouples compute and storage
  - Enables live migration
• Extra capabilities of the storage system
  - Efficient snapshots
  - Different types of storage available
  - Cloning for fast restore or scaling

Benefits of Block Storage

Ceph has reduced administration costs
- “Intelligent devices” use a peer-to-peer mechanism to detect failures and react automatically, rapidly ensuring replication policies are still honored if a node becomes unavailable.
- Swift requires an operator to notice a failure and update the ring configuration before redistribution of data can start.

Ceph guarantees the consistency of your data

- Even with large volumes of data, Ceph ensures clients get a consistent copy from any node within a region.
- Swift’s replication model means users may get stale data, even within a single site, due to slow asynchronous replication as the volume of data builds up.

Ceph over Swift

Swift has quotas; we do not (coming this Fall).
Swift has object expiration; we do not (coming this Fall).

Swift over Ceph

Ceph

Ceph provides object AND block storage in a single system that is compatible with the Swift and Cinder APIs and is self-healing without operator intervention.

Swift

If you use Swift, you still have to provision and manage a totally separate system to handle your block storage (in addition to paying the poor guy to go update the ring configuration)

Total Solution Comparison

OpenStack I know, but what is Ceph?

OPEN SOURCE

COMMUNITY-FOCUSED

SCALABLE

NO SINGLE POINT OF FAILURE

SOFTWARE BASED

SELF-MANAGING

Philosophy & Design

RADOS A Reliable, Autonomous, Distributed Object Store comprised of self-healing, self-managing, intelligent storage nodes

LIBRADOS A library allowing apps to directly access RADOS, with support for C, C++, Java, Python, Ruby, and PHP

RBD (RADOS Block Device) A reliable and fully-distributed block device, with a Linux kernel client and a QEMU/KVM driver

CEPH FS A POSIX-compliant distributed file system, with a Linux kernel client and support for FUSE

RGW (RADOS Gateway) A bucket-based REST gateway, compatible with S3 and Swift

APP APP HOST/VM CLIENT


Monitors:
• Maintain the cluster map
• Provide consensus for distributed decision-making
• Must be deployed in an odd number
• Do not serve stored objects to clients
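The odd-number rule follows directly from majority-based consensus: a quorum needs more than half the monitors, so a fourth monitor adds load without adding failure tolerance. The arithmetic (illustrative only, not Ceph source code):

```python
# Majority-quorum arithmetic behind the "odd number of monitors" rule.
def quorum_size(monitors: int) -> int:
    """Smallest strict majority of the monitor set."""
    return monitors // 2 + 1

def failures_tolerated(monitors: int) -> int:
    """Monitors that can be lost while a majority survives."""
    return monitors - quorum_size(monitors)

for n in (3, 4, 5):
    print(n, quorum_size(n), failures_tolerated(n))
# 3 and 4 monitors both tolerate only 1 failure; 5 monitors tolerate 2.
```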


OSDs:
• One per disk (recommended)
• At least three per cluster
• Serve stored objects to clients
• Intelligently peer to perform replication tasks
• Support object classes

[Diagram: each OSD runs on top of a local filesystem (btrfs, xfs, or ext4) on its disk; monitors (M) coordinate the cluster, and the human administrator talks only to the monitors]



LIBRADOS:
• Provides direct access to RADOS for applications
• C, C++, Python, PHP, Java
• No HTTP overhead




RADOS Gateway:
• REST-based interface to RADOS
• Supports buckets and accounting
• Compatible with S3 and Swift applications



RADOS Block Device:
• Storage of virtual disks in RADOS
• Allows decoupling of VMs and containers
• Live migration!
• Images are striped across the cluster
• Boot support in QEMU, KVM, and OpenStack Nova (more on that later!)
• Mount support in the Linux kernel
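For QEMU/KVM guests, an RBD image is typically attached through libvirt's network-disk support. A hedged sketch of the disk element (the pool name, image name, and monitor hostname are placeholders, and exact attributes vary by libvirt/QEMU version, so check your release's documentation):

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <!-- 'rbd' is a pool name and 'vm-disk-1' an image name; both hypothetical -->
  <source protocol='rbd' name='rbd/vm-disk-1'>
    <host name='mon1.example.com' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```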


What Makes Ceph Unique? Part one: CRUSH

[Diagram: an application asks “where does this object go?” across many identical disk nodes; a naive scheme shards by name range (A-G, H-N, O-T, U-Z), so locating “F*” requires a lookup table that must be rebuilt whenever nodes change]

hash(object name) % num_pg → placement group
CRUSH(pg, cluster state, rule set) → set of OSDs

CRUSH:
• Pseudo-random placement algorithm
• Ensures even distribution
• Repeatable, deterministic
• Rule-based configuration: replica count, infrastructure topology, weighting
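Ceph's two-step placement (hash the object name to a placement group, then map the PG to OSDs) can be sketched in a few lines. This is a toy stand-in, not real CRUSH: rendezvous (highest-random-weight) hashing plays CRUSH's role here, but it shows the same key properties the slide lists: pseudo-random, repeatable, evenly spread, and computable by any client with no central lookup table.

```python
import hashlib

def _h(s: str) -> int:
    # Stable hash; Python's built-in hash() is salted per process.
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

def pg_for(obj_name: str, num_pg: int) -> int:
    """Step 1: hash(object name) % num_pg."""
    return _h(obj_name) % num_pg

def crush_like(pg: int, osds: list, replicas: int = 3) -> list:
    """Step 2 (toy version): deterministically pick `replicas` OSDs for a PG.

    Real CRUSH walks a weighted hierarchy of failure domains; this sketch
    uses rendezvous hashing to get the same properties: pseudo-random,
    repeatable, and requiring no central directory.
    """
    return sorted(osds, key=lambda osd: _h(f"{pg}:{osd}"), reverse=True)[:replicas]

osds = [f"osd.{i}" for i in range(12)]
pg = pg_for("my-object", num_pg=1024)
print(pg, crush_like(pg, osds))
```

Any client running the same function against the same cluster map computes the same answer, which is why no metadata server sits between applications and OSDs.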


What Makes Ceph Unique? Part two: thin provisioning


HOW DO YOU SPIN UP THOUSANDS OF VMs INSTANTLY AND EFFICIENTLY?
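The trick behind that question is copy-on-write cloning: a clone shares the parent image's data and stores only the blocks written since, so a thousand clones of a golden image initially cost almost nothing. A toy model of the idea (not RBD's actual on-disk format):

```python
# Copy-on-write cloning: the idea behind spinning up many VM disks from one
# golden image almost instantly. Toy model, not RBD's actual format.
class Image:
    def __init__(self, parent=None):
        self.parent = parent
        self.blocks = {}  # only blocks written to THIS image

    def clone(self) -> "Image":
        return Image(parent=self)  # O(1): no data is copied

    def write(self, block_no: int, data: bytes) -> None:
        self.blocks[block_no] = data

    def read(self, block_no: int) -> bytes:
        if block_no in self.blocks:
            return self.blocks[block_no]
        if self.parent is not None:
            return self.parent.read(block_no)  # fall through to the parent
        return b"\0"  # unwritten space reads as zeros

golden = Image()
golden.write(0, b"bootloader")
vms = [golden.clone() for _ in range(1000)]  # 1000 "VM disks", no copying
vms[0].write(1, b"vm-local state")           # only this clone stores block 1
print(vms[0].read(0), vms[0].read(1))
```

Reads fall through to the parent until a clone writes its own copy of a block, so storage grows with divergence, not with the number of clones.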


How Does Ceph Work with OpenStack?

RBD support was initially added in Cactus, with increased features and integration in each subsequent release.
• You can use both the Swift (object store) and Keystone (identity service) APIs to talk to RGW
• Cinder (block storage as a service) talks directly to RBD
• Nova (the cloud computing controller) talks to RBD via the hypervisor
• Coming in Havana: the ability to create a volume from an RBD image via the Horizon UI

Ceph / OpenStack Integration
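On the Cinder side, pointing the volume service at RBD is a few lines of cinder.conf. A hedged sketch for releases of that era (the pool, user, and secret UUID are placeholders, and option names have shifted between releases, so verify against the docs for yours):

```ini
# cinder.conf: back Cinder volumes with Ceph RBD (all values are placeholders)
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = 00000000-0000-0000-0000-000000000000
```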

What is Inktank? I really like your polo shirt; please tell me what it means!

The majority of Ceph contributors
Formed in 2011 by Sage Weil (CTO), the creator of Ceph
Funded by DreamHost and other investors (Mark Shuttleworth, etc.)

Who?

To ensure the long-term success of Ceph
To help companies adopt Ceph through services, support, training, and consulting

Why?

Guide the Ceph roadmap
- Hosting a virtual Ceph Design Summit in early May
Standardize the Ceph development and release schedule
- Quarterly stable releases, interim releases every 2 weeks
  * May 2013 – Cuttlefish: RBD incremental snapshots!
  * Aug 2013 – Dumpling: disaster recovery (multisite), admin API
  * Nov 2013 – Some really cool cephalopod name that starts with an E

Ensure quality
- Maintain the Teuthology test suite
- Harden each stable release via extensive manual and automated testing

Develop reference and custom architectures for implementation

What?

• Inktank is a strategic partner for Dell in Emerging Solutions
• The Emerging Solutions Ecosystem Partner Program is designed to deliver complementary cloud components
• As part of this program, Dell and Inktank provide:
  > Ceph Storage Software
    - Adds scalable cloud storage to the Dell OpenStack-powered cloud
    - Uses Crowbar to provision and configure a Ceph cluster (Yeah, Crowbar!)
  > Professional Services, Support, and Training
    - Collaborative support for Dell hardware customers
  > Joint Solution
    - Validated against Dell Reference Architectures via the Technology Partner program

Inktank/Dell Partnership

Try Ceph and tell us what you think! http://ceph.com/resources/downloads/ http://ceph.com/resources/mailing-list-irc/

- Ask if you need help.
- Help others if you can!

Ask your company to start dedicating dev resources to the project! (http://github.com/ceph)
Find a bug (http://tracker.ceph.com) and fix it!
Participate in our Ceph Design Summit!

What do we want from you??

We’re planning the next release of Ceph and would love your input. What features would you like us to include?

iSCSI?

Live Migration?

One final request…

Questions?


Ian Colle, Ceph Program Manager, Inktank | ian@inktank.com | @ircolle | www.linkedin.com/in/ircolle | ircolle on freenode | inktank.com | ceph.com