GlusterFS Presentation FOSSCOMM2013 HUA, Athens, GR



Transcript of GlusterFS Presentation FOSSCOMM2013 HUA, Athens, GR

Page 1: GlusterFS Presentation FOSSCOMM2013 HUA, Athens, GR

Store your trillions of bytes using commodity hardware and open source

(GlusterFS)

Theophanis K. Kontogiannis

RHC{SA,E,EV,ESM,I,X}

[email protected]

@tkonto

Page 2:

The problem

● Data growth beyond manageable sizes
● Data growth beyond cost-effective sizes

How much would it cost to store 100 PB of unstructured data?

Page 3:

The idea

Create a scalable data storing infrastructure uniformly presented to clients using:

● Commodity (even off-the-shelf) hardware

● Open standards

Page 4:

The concept

Page 5:

The vision

GlusterFS:

Open – Unified - Extensible

Scalable – Manageable - Reliable

Scale-out Network Attached Storage (NAS) Software Solution for

On Premise - Virtualized - Cloud Environments

Page 6:

The implementation

● Open source, distributed file system capable of scaling to thousands of petabytes (theoretically, 72 brontobytes!) and handling thousands of clients.

For perspective:

1024 Terabytes = 1 Petabyte
1024 Petabytes = 1 Exabyte
1024 Exabytes = 1 Zettabyte
1024 Zettabytes = 1 Yottabyte
1024 Yottabytes = 1 Brontobyte

● Clusters together storage building blocks over InfiniBand RDMA or TCP/IP interconnects, aggregating disk and memory resources and managing data in a single global namespace.

● Based on a stackable user-space design; it can deliver exceptional performance for diverse workloads.

● Self-healing
● Not tied to I/O profiles, hardware, or OS

- The question is: how much is a brontobyte?
- The question is: WHO CARES?

Page 7:

Can it really support that much?

Yes it can!

2^32 (max subvolumes of the distribute translator)

× 18 exabytes (max XFS volume size)

= 72 brontobytes

(or 89,131,682,828,547,379,792,736,944,128 bytes)

GlusterFS supports 2^128 inodes (UUIDs).
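The arithmetic above checks out if the 18 exabytes are read as 18 × 2^60 bytes, so that 2^32 × 18 × 2^60 = 18 × 2^92 bytes. A quick one-liner to verify (assuming python3 is available):

```shell
# 2^32 subvolumes times 18 * 2^60 bytes per XFS volume = 18 * 2^92 bytes
python3 -c 'print(18 * 2**92)'
# prints 89131682828547379792736944128
```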

Page 8:

And this is how it goes

Page 9:

A bit of (business as usual) history

● Gluster Inc. was founded in 2005
● Focused on public and private cloud storage
● Its main product, GlusterFS, was written by Anand Babu Periasamy, Gluster's founder and CTO
● Received $8.5M in VC funding in 2010
● Acquired by Red Hat for $136M in 2011

Page 10:

GlusterFS <--> Red Hat Storage

● Gluster.com redirects to Red Hat Storage pages
● Gluster.org is actively supported by Red Hat

What is important is the integration of technologies in ways that demonstrably benefit the customers.

Page 11:

Components

● brick
The storage filesystem that has been assigned to a volume.
● client
The machine that mounts the volume (this may also be a server).
● server
The machine (virtual or bare metal) that hosts the actual filesystem in which data will be stored.
● subvolume
A brick after being processed by at least one translator.
● volume
The final share, after it passes through all the translators.
● translator
Code that determines file geometry, location, and distribution on the disks comprising a volume, and is largely responsible for the perceived performance.

Page 12:

The Outer Atmosphere View

Page 13:

The 100,000 ft view

Storage Node

Page 14:

The 50,000 ft view

Page 15:

The 10,000 ft view

Page 16:

The ground level view

Page 17:

...and the programmer's view

if (!(xl->fops = dlsym (handle, "fops"))) {
        gf_log ("xlator", GF_LOG_WARNING, "dlsym(fops) on %s", dlerror ());
        goto out;
}
if (!(xl->cbks = dlsym (handle, "cbks"))) {
        gf_log ("xlator", GF_LOG_WARNING, "dlsym(cbks) on %s", dlerror ());
        goto out;
}
if (!(xl->init = dlsym (handle, "init"))) {
        gf_log ("xlator", GF_LOG_WARNING, "dlsym(init) on %s", dlerror ());
        goto out;
}
if (!(xl->fini = dlsym (handle, "fini"))) {
        gf_log ("xlator", GF_LOG_WARNING, "dlsym(fini) on %s", dlerror ());
        goto out;
}

Page 18:

Course of action

● Partition the disks
● Format the partition
● Mount the partition as a Gluster "brick"
● Add an entry to /etc/fstab
● Install Gluster packages on the nodes
● Run the gluster peer probe command
● Configure your Gluster volume (and the translators)
● Test using the volume
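As a hedged sketch, those steps look like this for a two-node replicated volume (the hostnames server1/server2, device /dev/sdb1, and volume name testvol are illustrative, not from the slides):

```shell
# On each node: format a partition as XFS and mount it as a brick
mkfs.xfs -i size=512 /dev/sdb1          # 512-byte inodes leave room for xattrs
mkdir -p /export/brick1
mount /dev/sdb1 /export/brick1
echo '/dev/sdb1 /export/brick1 xfs defaults 0 0' >> /etc/fstab

# From server1: form the trusted pool
gluster peer probe server2

# Create and start a 2-way replicated volume across both bricks
gluster volume create testvol replica 2 \
    server1:/export/brick1 server2:/export/brick1
gluster volume start testvol

# On a client: mount the volume and start testing
mount -t glusterfs server1:/testvol /mnt/glusterfs
```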

Page 19:

Translators?

Translator types and their functional purpose:

Storage: Lowest-level translator; stores and accesses data on the local file system.

Debug: Provides an interface and statistics for errors and debugging.

Cluster: Handles distribution and replication of data as it is written to and read from bricks and nodes.

Encryption: Extension translators for on-the-fly encryption/decryption of stored data.

Protocol: Interface translators for client/server authentication and communication.

Performance: Tuning translators to adjust for workload and I/O profiles.

Bindings: Add extensibility, e.g. the Python interface written by Jeff Darcy to extend API interaction with GlusterFS.

System: System access translators, e.g. interfacing with file-system access control.

Scheduler: I/O schedulers that determine how to distribute new write operations across clustered systems.

Features: Add additional features such as quotas, filters, locks, etc.
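To make the stacking concrete, here is a hand-written volfile sketch in the classic volfile syntax, showing a storage translator wrapped by a performance translator and exported by a protocol translator (names, paths, and option values are illustrative; recent releases generate these files for you via the gluster CLI):

```
# Lowest level: the storage/posix translator, backed by a local directory
volume brick
  type storage/posix
  option directory /export/brick1
end-volume

# A performance translator stacked on the brick (which is now a subvolume)
volume iothreads
  type performance/io-threads
  option thread-count 8
  subvolumes brick
end-volume

# The protocol/server translator exports the stack to clients over TCP
volume server
  type protocol/server
  option transport-type tcp
  subvolumes iothreads
end-volume
```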

Page 20:

Not comfortable with the command line?

Page 21:

Benchmarks?

Method and platforms are pretty much standard:

● Multiple 'dd' runs with varying block sizes are read and written from multiple clients simultaneously.

● GlusterFS brick configuration (16 bricks)

Processor - Dual Intel(R) Xeon(R) CPU 5160 @ 3.00GHz
RAM - 8GB FB-DIMM
Disk - SATA-II 500GB
HCA - Mellanox MHGS18-XT/S InfiniBand HCA

● Client configuration (64 clients)

Processor - Single Intel(R) Pentium(R) D CPU 3.40GHz
RAM - 4GB DDR2 (533 MHz)
Disk - SATA-II 500GB
HCA - Mellanox MHGS18-XT/S InfiniBand HCA

● Interconnect switch - Voltaire InfiniBand switch (14U)
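A single client's probe in that method looks roughly like the following sketch (the /mnt/glusterfs mount point is an assumption; the script falls back to a scratch directory so it can be dry-run on a machine without a Gluster mount):

```shell
# Pick the Gluster mount if present, otherwise a scratch dir for a dry run
TARGET=/mnt/glusterfs
[ -d "$TARGET" ] || TARGET=$(mktemp -d)

# Sequential write: 64 MiB in 1 MiB blocks, flushed to disk at the end
dd if=/dev/zero of="$TARGET/ddtest" bs=1M count=64 conv=fdatasync

# Sequential read of the same file (dd reports throughput on stderr)
dd if="$TARGET/ddtest" of=/dev/null bs=1M
```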

Page 22:

Size does not matter....

Page 23:

...number of participants does

Page 24:

Suck the throughput. You can!

Page 25:

And you can GeoDistribute it :)

Multi-site cascading
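Setting up a geo-replication session with the 3.x-era CLI looks roughly like this sketch (the volume name mastervol, the remote host, and the remote path are illustrative):

```shell
# Start geo-replication from local volume "mastervol" to a remote directory
gluster volume geo-replication mastervol remote.example.com:/data/backup start

# Check the session status
gluster volume geo-replication mastervol remote.example.com:/data/backup status
```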

Page 26:

Enough food for thought...

● www.redhat.com/products/storage-server/
● www.gluster.org

Now back to your consoles!!!!

Thank you...