GlusterFS Presentation FOSSCOMM2013 HUA, Athens, GR
Store your trillions of bytes using commodity hardware and open source
(GlusterFS)
Theophanis K. Kontogiannis
RHC{SA,E,EV,ESM,I,X}
@tkonto
The problem
● Data growth beyond manageable sizes
● Data growth beyond cost-effective sizes
How much would it cost to store 100 PB of unstructured data in a traditional storage array?
The idea
Create a scalable data-storage infrastructure, presented uniformly to clients, using:
● Commodity (even off-the-shelf) hardware
● Open standards
The concept
The vision
GlusterFS:
Open – Unified - Extensible
Scalable – Manageable - Reliable
Scale-out Network Attached Storage (NAS) Software Solution for
On Premise - Virtualized - Cloud Environments
The implementation
● Open source, distributed file system capable of scaling to thousands of petabytes (in fact, up to 72 brontobytes!) and handling thousands of clients.
For scale:
1024 terabytes = 1 petabyte
1024 petabytes = 1 exabyte
1024 exabytes = 1 zettabyte
1024 zettabytes = 1 yottabyte
1024 yottabytes = 1 brontobyte
● Clusters together storage building blocks over InfiniBand RDMA or TCP/IP interconnects, aggregating disk and memory resources and managing data in a single global namespace.
● Based on a stackable user space design and can deliver exceptional performance for diverse workloads.
● Self healing
● Not tied to I/O profiles or hardware or OS
- The question is: how much is a brontobyte?
- The question is: WHO CARES?
Can it really support that much?
Yes it can! 2^32 (maximum subvolumes of the distribute translator)
× 18 exabytes (maximum XFS volume size)
= 72 brontobytes
(or 89,131,682,828,547,379,792,736,944,128 bytes)
GlusterFS supports 2^128 (UUID) inodes
And this is how it goes
A bit of (business as usual) history
● Gluster Inc. was founded in 2005
● Focused on public & private cloud storage
● Its main product, GlusterFS, was written by Anand Babu Periasamy, Gluster’s founder and CTO
● Received $8.5M in 2010 via VC funding
● Acquired for $136M by Red Hat in 2011
GlusterFS <--> Red Hat Storage
● Gluster.com redirects to RHS pages
● Gluster.org is actively supported by Red Hat
What is important is the integration of technologies in ways that demonstrably
benefit the customers
Components
● brick
The storage filesystem that has been assigned to a volume.
● client
The machine that mounts the volume (this may also be a server).
● server
The machine (virtual or bare metal) that hosts the actual filesystem in which data will be stored.
● subvolume
A brick after it has been processed by at least one translator.
● volume
The final share, after it has passed through all the translators.
● translator
Code that implements the geometry/location/distribution of the files on the disks comprising a volume, and is largely responsible for the perceived performance.
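The brick → translator → subvolume → volume chain is easiest to see in a hand-written volfile. A minimal sketch (the directory path and volume names are illustrative; current GlusterFS versions generate volfiles automatically from the CLI):

```
volume posix-brick
  type storage/posix          # storage translator: the brick itself
  option directory /export/brick1
end-volume

volume locks
  type features/locks         # a translator stacked on the brick...
  subvolumes posix-brick      # ...making "posix-brick" a subvolume
end-volume

volume gv0
  type performance/io-threads # topmost translator: the exported volume
  subvolumes locks
end-volume
```

Each `volume ... end-volume` stanza wraps the one below it, so reads and writes pass through the whole translator stack on their way to the brick.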
The Outer Atmosphere View
The 100,000 ft view
Storage Node
The 50,000 ft view
The 10,000 ft view
The ground level view
...and the programmers view
if (!(xl->fops = dlsym (handle, "fops"))) {
        gf_log ("xlator", GF_LOG_WARNING, "dlsym(fops) on %s", dlerror ());
        goto out;
}

if (!(xl->cbks = dlsym (handle, "cbks"))) {
        gf_log ("xlator", GF_LOG_WARNING, "dlsym(cbks) on %s", dlerror ());
        goto out;
}

if (!(xl->init = dlsym (handle, "init"))) {
        gf_log ("xlator", GF_LOG_WARNING, "dlsym(init) on %s", dlerror ());
        goto out;
}

if (!(xl->fini = dlsym (handle, "fini"))) {
        gf_log ("xlator", GF_LOG_WARNING, "dlsym(fini) on %s", dlerror ());
        goto out;
}
Course of action
● Partition, format and mount the bricks
● Format the partition
● Mount the partition as a Gluster "brick"
● Add an entry to /etc/fstab
● Install Gluster packages on the nodes
● Run the gluster peer probe command
● Configure your Gluster volume (and the translators)
● Test using the volume
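The steps above can be sketched as a short provisioning session. Device names, hostnames (node1, node2), mount points and the volume name gv0 are all illustrative:

```shell
# On each storage node (device and paths are illustrative):
mkfs.xfs -i size=512 /dev/sdb1                 # format the partition
mkdir -p /export/brick1
echo '/dev/sdb1 /export/brick1 xfs defaults 0 0' >> /etc/fstab
mount /export/brick1                           # mount the brick

# From one node, form the trusted pool and create a replicated volume:
gluster peer probe node2
gluster volume create gv0 replica 2 \
    node1:/export/brick1 node2:/export/brick1
gluster volume start gv0

# On a client, mount and test the volume:
mount -t glusterfs node1:/gv0 /mnt/gv0
```

This is a sketch under those assumptions, not a copy-paste recipe; it needs at least two prepared nodes and the Gluster packages already installed.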
Translators?
Translator Type: Functional Purpose
Storage: Lowest-level translator; stores and accesses data on the local filesystem.
Debug: Provides interfaces and statistics for errors and debugging.
Cluster: Handles distribution and replication of data as it is written to and read from bricks and nodes.
Encryption: Extension translators for on-the-fly encryption/decryption of stored data.
Protocol: Interface translators for client/server authentication and communication.
Performance: Tuning translators to adjust for workload and I/O profiles.
Bindings: Add extensibility, e.g. the Python interface written by Jeff Darcy to extend API interaction with GlusterFS.
System: System-access translators, e.g. interfacing with filesystem access control.
Scheduler: I/O schedulers that determine how to distribute new write operations across clustered systems.
Features: Add additional features such as quotas, filters, locks, etc.
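Many of these translators are tuned per volume from the CLI rather than by editing volfiles. A couple of examples against a hypothetical volume gv0 (the option values are illustrative, not recommendations):

```shell
# Tune the io-cache performance translator's cache size for volume gv0
gluster volume set gv0 performance.cache-size 256MB

# Adjust the io-threads performance translator's thread count
gluster volume set gv0 performance.io-thread-count 32

# Review the volume's current configuration
gluster volume info gv0
```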
Not flexible with command line?
Benchmarks?
Method and platforms are pretty much standard:
● Multiple 'dd' runs with varying block sizes are read and written from multiple clients simultaneously.
● GlusterFS Brick Configuration (16 bricks)
Processor - Dual Intel(R) Xeon(R) CPU 5160 @ 3.00GHz
RAM - 8GB FB-DIMM
Disk - SATA-II 500GB
HCA - Mellanox MHGS18-XT/S InfiniBand HCA
● Client Configuration (64 clients)
RAM - 4GB DDR2 (533 Mhz)
Processor - Single Intel(R) Pentium(R) D CPU 3.40GHz
Disk - SATA-II 500GB
HCA - Mellanox MHGS18-XT/S InfiniBand HCA
● Interconnect switch: Voltaire InfiniBand switch (14U)
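The "multiple dd of varying blocks" method can be sketched as below. TARGET would normally be a GlusterFS mount; /tmp is used only so the sketch is self-contained, and the block sizes and 16 MiB file size are illustrative:

```shell
#!/bin/sh
# Sketch of the benchmark method above: repeated dd runs with varying
# block sizes, writing then reading back each test file.
TARGET=${TARGET:-/tmp/gluster-bench}
mkdir -p "$TARGET"

# block-size / block-count pairs, each writing 16 MiB in total
for spec in "64K 256" "256K 64" "1M 16"; do
    set -- $spec
    # sequential write; dd's final line reports the throughput
    dd if=/dev/zero of="$TARGET/file.$1" bs="$1" count="$2" conv=fsync 2>&1 | tail -n 1
    # sequential read back
    dd if="$TARGET/file.$1" of=/dev/null bs="$1" 2>&1 | tail -n 1
done
```

In the benchmark described above this loop would run on all 64 clients at once against the mounted volume, not on a single machine.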
Size does not matter....
...number of participants does
Suck the throughput. You can!
And you can GeoDistribute it :)
Multi-site cascading
Enough food for thought...
● www.redhat.com/products/storage-server/
● www.gluster.org
Now back to your consoles!!!!
Thank you...