GFS
Servers are a mix of commodity machines and machines specifically designed for Google:
- Not necessarily the fastest; purchases are based on the best performance per dollar
- They run customized versions of Linux
Servers are put into racks and connected through gigabit Ethernet
Racks are organized into clusters and connected to a gigabit cluster switch
A data center consists of clusters; clusters run an application/service
Google has numerous data centers scattered around the world. As of 2012:
- 11 in the United States, plus 3 being built
- 1 in Canada
- 17 international (outside of the US and Canada)
- No one is sure exactly how many there are, or where
Motivation
Google needs a good distributed file system:
- Redundant storage of massive amounts of data on cheap and unreliable computers
Why not use an existing file system?
- Google's problems are different from anyone else's: different workload and design priorities
- GFS is designed for Google apps and workloads; Google apps are designed for GFS
Google Workload Characteristics
- Most files are mutated by appending new data, i.e., large sequential writes
- Random writes are very uncommon
- Files are written once, then they are only read
- Reads are sequential: large streaming reads and small random reads
- High sustained throughput is favoured over low latency
Google Workload Characteristics
Google applications include:
- Data analysis programs that scan through data repositories
- Data streaming applications
- Archiving
- Applications producing (intermediate) search results
Assumptions
- High component failure rates: inexpensive commodity components fail all the time
- A "modest" number of HUGE files: just a few million, each 100 MB or larger; multi-GB files are typical
GFS Architecture
- A single master per cluster
- Multiple chunkservers
Chunks
- Files are divided into fixed-size chunks of 64 MB (see the sketch below)
- Each chunk has an identifier, the chunk handle, assigned by the master at the time of chunk creation; a client names a chunk by file name and chunk number
- Each chunk is replicated 3 times
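The offset-to-chunk translation is simple arithmetic. A minimal sketch in Go (the type names and the handle representation are illustrative, not GFS's actual API):

```go
package main

import "fmt"

// ChunkSize is the fixed GFS chunk size: 64 MB.
const ChunkSize = 64 << 20

// ChunkHandle stands in for the globally unique 64-bit identifier
// the master assigns to each chunk at creation time.
type ChunkHandle uint64

// chunkIndex maps a byte offset within a file to the index of the
// chunk containing it, which is the translation a client performs
// before contacting the master.
func chunkIndex(byteOffset int64) int64 {
	return byteOffset / ChunkSize
}

func main() {
	// A read at byte offset 200 MB falls in chunk 3 (zero-based).
	fmt.Println(chunkIndex(200 << 20)) // 3
}
```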
GFS Architecture
Chunkservers:
- Store chunks on local disks as Linux files
- Read/write chunk data specified by a chunk handle and byte range
GFS Architecture
Master:
- Stores metadata:
  - The file and chunk namespaces
  - The mapping from files to chunks
  - The locations of each chunk's replicas (referred to as chunk locations)
- Interacts with clients
- Creates chunk replicas
- Orchestrates chunk modifications across multiple replicas
- Deletes old files (via garbage collection)
Read
- The client translates the file name and byte offset specified by the application into a chunk index within the file, and sends the file name and chunk index to the master
- The master replies with the corresponding chunk handle (the information needed to find a chunk) and the locations of the replicas
Read
- The client then sends a request to one of the replicas, most likely the closest one
- Note: further reads of the same chunk require no more client-master interaction (see the sketch below)
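A sketch of that read path in Go. Everything here is hypothetical (Master, ChunkLocation, the cache key format); it only illustrates the two-step lookup and the client-side caching that lets repeat reads skip the master:

```go
package gfssketch

import "fmt"

const ChunkSize = 64 << 20 // 64 MB

type ChunkHandle uint64

// ChunkLocation is what the master returns: the chunk's handle plus
// the chunkservers holding its replicas.
type ChunkLocation struct {
	Handle   ChunkHandle
	Replicas []string // chunkserver addresses
}

// Master models the single master's lookup service.
type Master interface {
	Lookup(file string, chunkIndex int64) ChunkLocation
}

type Client struct {
	master Master
	cache  map[string]ChunkLocation // chunk locations seen so far
}

// Read resolves (file, offset) to a chunk index, asks the master for
// the chunk's location only on a cache miss, then reads the byte
// range from one replica, typically the closest.
func (c *Client) Read(file string, offset, length int64) ([]byte, error) {
	idx := offset / ChunkSize
	key := fmt.Sprintf("%s:%d", file, idx)
	loc, ok := c.cache[key]
	if !ok {
		loc = c.master.Lookup(file, idx) // one master round trip
		c.cache[key] = loc               // later reads of this chunk skip the master
	}
	return readFromChunkserver(loc.Replicas[0], loc.Handle, offset%ChunkSize, length)
}

// readFromChunkserver stands in for the RPC that sends a chunk handle
// and byte range to a chunkserver.
func readFromChunkserver(addr string, h ChunkHandle, off, n int64) ([]byte, error) {
	return make([]byte, n), nil
}
```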
Updates of Replicated Data
- Each mutation (modification) is performed at all the replicas
- Modifications are applied in the same order across all replicas: if you have three replicas, do you want one to hold "a b c" and another "a c b"?
- For each chunk there is a primary replica that coordinates updates
Updates of Replicated Data
- After finding the replicas, the client pushes the data to them
- The replicas do not write the data yet; they buffer it
- The primary tells the replicas in which order they should apply modifications: for each update the primary assigns a sequence number, and replicas update chunks in sequence-number order
Updates of Replicated Data
- Let us say there are two updates, wa and wb, from two clients, to be processed by a chunk
- Their requests are received by the replicas
- Different replicas may receive the requests in different orders
Updates of Replicated Data
- Replicas should apply the writes in the same order
- The primary replica assigns a sequence number to each write: wb may get sequence number 1, wa may get sequence number 2
- The sequence numbers are sent to each replica
Updates of Replicated Data
- Let's say that a replica got wa but not wb
- It receives sequence number 2 for wa and sequence number 1 for wb
- It delays executing wa until it receives wb (see the sketch below)
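The delay-and-apply rule is easy to state in code. A minimal sketch in Go, with hypothetical types (real replicas mutate chunk data, not strings), showing a replica that buffers out-of-order writes and applies them strictly in the primary's sequence order:

```go
package main

import "fmt"

// Write is one mutation with the sequence number the primary assigned.
type Write struct {
	Seq  int
	Data string
}

// Replica applies writes strictly in sequence-number order, buffering
// any write that arrives before its predecessors.
type Replica struct {
	nextSeq int           // next sequence number to apply
	pending map[int]Write // writes that arrived out of order
	applied []string      // the mutation history, for illustration
}

func NewReplica() *Replica {
	return &Replica{nextSeq: 1, pending: make(map[int]Write)}
}

// Receive buffers the incoming write, then applies every write that
// is now in sequence. A gap in the sequence stalls everything behind it.
func (r *Replica) Receive(w Write) {
	r.pending[w.Seq] = w
	for {
		next, ok := r.pending[r.nextSeq]
		if !ok {
			return // missing a predecessor: delay until it arrives
		}
		r.applied = append(r.applied, next.Data)
		delete(r.pending, r.nextSeq)
		r.nextSeq++
	}
}

func main() {
	r := NewReplica()
	r.Receive(Write{Seq: 2, Data: "wa"}) // arrives first, but is delayed
	r.Receive(Write{Seq: 1, Data: "wb"}) // unblocks both writes
	fmt.Println(r.applied)               // [wb wa]: same order at every replica
}
```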
Updates of Replicated Data
1. The client asks the master for the replica locations
2. The master responds
3. The client pushes the data to all replicas; the replicas store it in a buffer cache
4. The client sends a write request to the primary (identifying the data that was pushed)
5. The primary forwards the request to the secondaries (identifying the order)
6. The secondaries respond to the primary
7. The primary responds to the client
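From the client's side, the seven steps separate data flow (step 3) from control flow (steps 4-7). A sketch in Go with hypothetical interfaces; the real protocol pipelines the data push along a chain of chunkservers, which is elided here:

```go
package gfssketch

import "fmt"

// Master resolves a chunk to its primary and secondary replicas.
type Master interface {
	LookupForWrite(file string, chunkIndex int64) (primary string, secondaries []string)
}

// Chunkserver models the two calls a client-driven write needs.
type Chunkserver interface {
	PushData(id string, data []byte) error                 // step 3: buffer, do not apply
	CommitAsPrimary(id string, secondaries []string) error // steps 4-7, driven by the primary
}

// write performs one mutation: locate the replicas, push the data
// everywhere, then ask the primary to commit it in order.
func write(m Master, servers map[string]Chunkserver, file string, idx int64, data []byte) error {
	primary, secondaries := m.LookupForWrite(file, idx) // steps 1-2
	id := fmt.Sprintf("%s:%d", file, idx)               // identifies the pushed data

	// Step 3: every replica buffers the data in a cache.
	for _, addr := range append([]string{primary}, secondaries...) {
		if err := servers[addr].PushData(id, data); err != nil {
			return err
		}
	}
	// Steps 4-7: the primary assigns the order, forwards the request
	// to the secondaries, collects their replies, and answers us.
	return servers[primary].CommitAsPrimary(id, secondaries)
}
```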
Failure Handling During Updates
- If a write fails at the primary: the primary may report the failure to the client, and the client will retry
- If the primary does not respond, the client retries from Step 1 by contacting the master
- If a write succeeds at the primary but fails at several replicas, the client retries several times (Steps 3-7)
Metadata On Master
- File names and chunk mappings are kept persistent in an operation log
- Chunk locations are kept in memory only; they are lost in a crash
- The master asks chunkservers about their chunks at startup and builds a table of chunk locations (see the sketch below)
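The split between persistent and volatile metadata might look like this. A sketch in Go with illustrative types; the point is that only the namespace and file-to-chunk mapping go through the operation log, while chunk locations are rebuilt by polling:

```go
package gfssketch

type ChunkHandle uint64

// Master's metadata, split by durability.
type Master struct {
	// Persistent: every mutation here is recorded in the operation
	// log before being applied.
	namespace map[string][]ChunkHandle // file name -> ordered chunk handles

	// Volatile: lost on a crash, never logged.
	locations map[ChunkHandle][]string // chunk handle -> chunkserver addresses
}

// rebuildLocations repopulates the in-memory location table by asking
// each chunkserver which chunks it holds, as the master does at startup.
// report stands in for the RPC that queries one chunkserver.
func (m *Master) rebuildLocations(servers []string, report func(server string) []ChunkHandle) {
	m.locations = make(map[ChunkHandle][]string)
	for _, s := range servers {
		for _, h := range report(s) {
			m.locations[h] = append(m.locations[h], s)
		}
	}
}
```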
Why Keep Metadata In Memory?
- To keep master operations fast
- The master can periodically scan its internal state in the background, in order to implement:
  - Re-replication (in case of chunkserver failures)
  - Chunk migration (for load balancing)
Why Not Keep Chunk Locations Persistent?
- Chunk location: which chunkserver has a replica of a given chunk
- The master polls chunkservers for that information at startup
- Thereafter, the master keeps itself up to date:
  - It controls all initial chunk placement, migration, and re-replication
  - It monitors chunkserver status with regular HeartBeat messages
Why Not Keep Chunk Locations Persistent?
- Motivation: simplicity
- Eliminates the need to keep the master and chunkservers synchronized
- Synchronization would be needed whenever chunkservers join and leave the cluster, change names, or fail and restart
Operation Log
- A historical record of metadata changes
- Maintains the logical order of concurrent operations
- The log is used for recovery: the master replays it in the event of a failure
- The master periodically checkpoints the log (see the sketch below)
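Checkpointing bounds recovery time: the master loads the latest checkpoint and replays only the log tail written after it. A minimal sketch in Go with the record format and state left abstract:

```go
package gfssketch

// LogRecord is one metadata mutation, e.g. a file creation or deletion.
type LogRecord struct {
	Op   string
	Args []string
}

// MasterState holds the namespace and file-to-chunk mapping (elided).
type MasterState struct{}

// apply replays one log record against the metadata (elided).
func (s *MasterState) apply(r LogRecord) {}

// recoverState rebuilds the master's state from the most recent
// checkpoint plus the log records appended after it, so recovery cost
// depends on the log tail, not the log's entire history.
func recoverState(checkpoint MasterState, tail []LogRecord) MasterState {
	state := checkpoint
	for _, rec := range tail {
		state.apply(rec)
	}
	return state
}
```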
Chunk Size
- The chunk size is 64 MB, larger than typical file system block sizes
- Advantages:
  - Reduces a client's need to interact with the master
  - Reduces network overhead by letting the client keep a persistent TCP connection to the chunkserver over a period of time
  - Reduces the size of the metadata stored on the master (see the example below)
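To put the last advantage in numbers (assuming the figure from the GFS paper of under 64 bytes of master metadata per chunk): a petabyte stored as 64 MB chunks is about 16 million chunks, or roughly 1 GB of metadata, which fits comfortably in the master's memory; with 1 MB chunks the same petabyte would need around 64 GB of metadata.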
Chunk Size
- Disadvantages:
  - A small file consists of a small number of chunks
  - The chunkservers storing those chunks may become hot spots if many clients access the same file (this does not occur much in practice)
Deployment in Google
- Many GFS clusters
- Hundreds/thousands of storage nodes each
- Managing petabytes of data
Summary
GFS demonstrates how to support large-scale processing workloads on commodity hardware:
- Design to tolerate frequent component failures
- Optimize for huge files that are mostly appended to and then read
- Feel free to relax and extend the file system interface as required
- Go for simple solutions (e.g., a single master)
GFS has met Google's storage needs.