Ceph Day LA: Adventures in Ceph & ISCSI


Transcript of Ceph Day LA: Adventures in Ceph & ISCSI

Page 1

ADVENTURES IN iSCSI

SQUEEZING WAY-COOL STORAGE TECH INTO A LEGACY MODEL

Paul Evans, Principal Architect
Daystrom Technology Group
paul at daystrom dot com

+ Tu Holmes, Storage Lead
Electronic Arts
tu at ea dot com

Ceph Day Los Angeles

Page 2

WHAT’S IN THIS TALK

• Why (add) iSCSI?

• the How of iSCSI + Ceph

• the Good & Bad

• Lessons (hopefully) Learned

Page 3

CEPH: THE FUTURE OF STORAGE
WHY SADDLE IT WITH OLD (SCSI) TECH?

Page 4

REASONS FOR iSCSI

• Incompatibilities
• Virtualization
• Databases

Page 5

EA iSCSI REASONS…

✓ Kernel Incompatibility (and desire to use VolMgr)

Page 6

CEPH + iSCSI

[Diagram: Ceph OSD hosts and MON hosts plus iSCSI gateway hosts, all connected over the main network to client hosts]
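To make the gateway hosts' role concrete, here is a minimal sketch of exporting one RBD image over iSCSI with LIO's targetcli. The pool, image, IQN, and portal address are hypothetical, and the sketch takes the simple route of a plain block backstore over the kernel RBD device (the feature table later in the deck treats native RBD backstore support separately):

    # On an iSCSI gateway host (sketch; pool/image/IQN are hypothetical)
    # 1. Map the RBD image into the kernel; it appears as /dev/rbd0.
    rbd map vmpool/lun0

    # 2. Wrap the block device in a LIO backstore and export it as a LUN.
    targetcli /backstores/block create name=lun0 dev=/dev/rbd0
    targetcli /iscsi create iqn.2015-07.com.example:gw1
    targetcli /iscsi/iqn.2015-07.com.example:gw1/tpg1/luns create /backstores/block/lun0
    targetcli /iscsi/iqn.2015-07.com.example:gw1/tpg1/portals create 10.0.0.1 3260
    targetcli saveconfig

Initiators then log in to the portal and see an ordinary SCSI disk, which is exactly how the legacy hosts in the diagram consume Ceph storage.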

Page 7

DESIRED iSCSI FEATURES

Maximum Throughput

Good Support

HA Options

Page 8

CEPH + iSCSI OPTIONS

                 LIO (targetcli)                SCST       TGT                 iET
Maintainer       Datera (rt) / Red Hat (fb)     SCST Ltd.  Community           Community
                                                           (FUJITA Tomonori)
Latest Stable    2.1.0 (rt) / 2.1.fb41 (fb)     3.0.1      1.0.60              1.4.20
Latest Commit    Jan 2015 (rt) / Jun 2015 (fb)  Jun 2015   Jul 2015            Jun 2014
Next Release     3.0 (?)                        3.1        ?                   ?
Mainline Kernel  Yes                            No         No                  No

Page 9

SUPPORTED FEATURES

          LIO   SCST         STGT        iET
Kernel    Yes   Yes          User Space  Split
RBD       Yes   Yes          Yes         No
iSER      Yes   Pre-Release  Yes         No
SRP (IB)  Yes   Yes          No          No
ALUA      Yes   Yes          No          No

Page 10

EA CLUSTER: CURRENT

[Diagram: OSD service, MON service, and iSCSI gateway service hosts, serving a StorNext host and a StorNext backup host running the volume manager (VolMgr)]

Page 11

EA CLUSTER POOL CONFIG

[Diagram: the backup host (VolMgr) speaks iSCSI to the iSCSI gateways, which sit in front of the Ceph cluster; the cluster is configured as a cache tier over an erasure-coded (EC) tier, exposing multiple RBD images]
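As a reference point, the cache-over-EC arrangement in the diagram is typically assembled like this. Pool names, PG counts, and sizes are hypothetical placeholders, not the EA cluster's actual commands; in Ceph releases of this era, an RBD image on an erasure-coded pool has to be reached through such a cache tier:

    # Base EC pool plus replicated cache pool (names/PG counts hypothetical)
    ceph osd pool create ec-data 2048 2048 erasure default
    ceph osd pool create cache-hot 2048 2048 replicated

    # Put the cache tier in front of the EC tier in writeback mode
    ceph osd tier add ec-data cache-hot
    ceph osd tier cache-mode cache-hot writeback
    ceph osd tier set-overlay ec-data cache-hot
    ceph osd pool set cache-hot hit_set_type bloom

    # RBD images land on the EC tier, reached through the cache
    rbd create -p ec-data --size 102400 lun0

With the overlay set, the gateways simply address the base pool and the cache tier absorbs the hot working set.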

Page 12

HW SPECS

SIMPLE, LOW-COST, SCALE-OUT STORAGE
SCALE-OUT CLUSTER OF STORAGE: OBJECT + FILE + BLOCK ACCESS
HIGH AVAILABILITY, SELF-HEALING

THE SIMPLE SOLUTION FOR THE STORAGE OF EVERYTHING

The storageFOUNDRY Nautilus system is the smart choice for organizations with diverse storage needs and explosive data growth. Nautilus is a massively scalable, open-source storage solution developed from the ground up to provide object, block, and file-system access in one self-managing, self-healing platform. Using a reliable non-stop architecture, this multi-streamed approach to storage grids is the flexible solution for Data Automation, High-Throughput Media or Analytics, and OpenStack Clouds. The Nautilus C100 series are fast, dense 1U storage nodes delivering from 48TB to 120TB in a single rack unit, with a low-service architecture to minimize in-field support needs. The C100 nodes provide a range of protocols and speeds, from 1G Ethernet to 56G InfiniBand, enabling stand-alone C-series Cloud deployments or drop-in, application-tuned roles for 4K-Media, GS-Genomics, and E-series deployments. Nautilus answers the call for multi-protocol, cost-effective, and scalable storage that serves up classic Enterprise Storage I/O as well as innovative direct-access data, making it the simple choice for today's unpredictable computing needs and THE STORAGE OF EVERYTHING.

NFS / SMB / FTP / HTTP / iSCSI | 1/10/40G Ethernet | 56G InfiniBand | 8G/16G FC

storageFOUNDRY Nautilus 5600NS object storage
C100/C140/C150 SCALE-OUT STORAGE NODES

8x C100-48T 1U Nodes:
• 12 x 4TB SATA HDD
• 1 x 400G SATA SSD
• 1 x E3-1200 4-Core CPU
• 32G ECC DRAM
• 1 x 10GE / 4 x 1GE

www.storageFOUNDRY.net
Nautilus C100-G3 / Nautilus C300-G3 / Nautilus 4K

Page 13

GOOD & BAD

Performance
✓ Up to 1 GB/sec burst
✓ 600 MB/sec sustained write

Scalability
✓ Add cluster (Ceph) storage on-the-fly
✓ Dynamically expand/add LUNs

Availability
✓ Ceph is naturally highly available
✓ iSCSI HA via active/passive gateways

Reliability (the Bad)
✓ Current config: resource constrained
✓ Difficult to recover from LIO faults
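On the initiator side, the active/passive gateway pairing is usually consumed through dm-multipath, so a gateway failure triggers a path failover instead of surfacing errors to the application. A sketch of an /etc/multipath.conf stanza for LIO-backed LUNs follows; the settings are illustrative rather than the EA configuration, though "LIO-ORG" is LIO's stock vendor string:

    devices {
        device {
            vendor                "LIO-ORG"
            product               ".*"
            path_grouping_policy  failover   # one active path; others standby
            path_checker          tur        # probe paths with TEST UNIT READY
            no_path_retry         queue      # queue I/O while the passive gateway takes over
        }
    }

The failover grouping policy is what makes this active/passive: only one path carries I/O, and queued requests drain to the surviving gateway after a switch.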

Page 14

RELIABILITY TUNINGS

✓ Limit scrubbing to low-load times
✓ Limit background maintenance/repair
✓ Limit PGs to maximize resources
✓ Monitor/track occurrences of blocked I/O
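A sketch of how the scrub and recovery limits above are commonly expressed in ceph.conf; the values are illustrative placeholders rather than EA's settings, and the PG limit is a pool-creation choice (pg_num) rather than a conf option:

    [osd]
    # Limit scrubbing to low-load times (here: overnight)
    osd scrub begin hour = 22
    osd scrub end hour = 6

    # Limit background maintenance/repair so client I/O wins
    osd max backfills = 1
    osd recovery max active = 1
    osd recovery op priority = 1

    # Report ops as blocked/slow sooner so they can be tracked
    osd op complaint time = 10

Blocked I/O then shows up in `ceph health detail` as slow requests and can be inspected per OSD with `ceph daemon osd.<id> dump_historic_ops`.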

Page 15

EA CLUSTER: NEXT-GEN

[Diagram: next-gen layout of the same components: OSD, MON, and iSCSI gateway services plus the StorNext host and StorNext backup host (VolMgr)]

Page 16

INTO THE FUTURE

Active-Active (i)SCSI Support is coming:

➡ Updates to both LIO & Ceph occurring
➡ ‘Support’ for VMware / Persistent Reservations
➡ Timeline: 60-180 days… YMMV

Page 17

THE THREE LESSONS LEARNED

It’s mostly about…

1. Resource Management
2. Resource Management
3. Resource Management

Page 18

thank you!

Paul Evans, Principal Architect
Daystrom Technology Group
paul at daystrom dot com

Tu Holmes, Storage Lead
Electronic Arts
tu at ea dot com