HDFS Namenode High Availability

Slides from the HDFS NameNode High Availability talk given at Hadoop World 2011 by Suresh Srinivas (Hortonworks) and Aaron T. Myers (Cloudera).

Transcript of HDFS Namenode High Availability

Page 1: HDFS Namenode High Availability

NameNode HA

Suresh Srinivas - Hortonworks
Aaron T. Myers - Cloudera

Page 2: HDFS Namenode High Availability


Overview

• Part 1 – Suresh Srinivas (Hortonworks)
  − HDFS Availability and Data Integrity – what is the record?
  − NN HA Design

• Part 2 – Aaron T. Myers (Cloudera)
  − NN HA Design continued
  − Client-NN connection failover
  − Operations and Admin of HA
  − Future Work

Page 3: HDFS Namenode High Availability

Current HDFS Availability & Data Integrity

• Simple design, storage fault tolerance
  − Storage: rely on the OS's file system rather than raw disk
  − Storage fault tolerance: multiple replicas, active monitoring
  − Single NameNode master
    Persistent state: multiple copies + checkpoints
    Restart on failure

• How well did it work?
  − Lost 19 out of 329 million blocks on 10 clusters with 20K nodes in 2009
    Seven 9s of reliability (see the arithmetic after this list)
    Fixed in releases 0.20 and 0.21
  − 18-month study: 22 failures on 25 clusters, i.e. 0.58 failures per cluster per year
    Only 8 would have benefited from HA failover! (0.23 failures per cluster per year)
  − NN is very robust and can take a lot of abuse
    NN is resilient against overload caused by misbehaving apps
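A quick back-of-the-envelope check of the reliability claim above, using only the numbers on the slide:

    1 - 19 / 329,000,000 ≈ 0.999999942, i.e. roughly seven 9s of block reliability.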


Page 4: HDFS Namenode High Availability

HA NameNode

Active work has started on HA NameNode (failover)

• HA NameNode
  − Detailed design and subtasks in HDFS-1623

• HA: related work
  − Backup NN (0.21)
  − Avatar NN (Facebook)
  − HA NN prototype using Linux HA (Yahoo!)
  − HA NN prototype with Backup NN and block report replicator (eBay)

HA is the highest priority

Page 5: HDFS Namenode High Availability


Approach and Terminology

• Initial goal is Active-Standby
  − With Federation, each namespace volume has a NameNode
    Single active NN for any namespace volume

• Terminology
  − Active NN – actively serves read/write operations from clients
  − Standby NN – waits; becomes active when the Active dies or is unhealthy
    Could serve read operations
  − Standby's state may be cold, warm, or hot
    Cold: Standby has zero state (e.g. started after the Active is declared dead)
    Warm: Standby has partial state:
      • has loaded fsImage & edit logs but has not received any block reports
      • has loaded fsImage, rolled logs, and received all block reports
    Hot: Standby has almost all of the Active's state and can take over immediately

Page 6: HDFS Namenode High Availability


High Level Use Cases

Supported failures

• Single hardware failure
  − Double hardware failure not supported

• Some software failures
  − Same software failure affects both active and standby

• Planned downtime
  − Upgrades
  − Config changes
  − Main reason for downtime

• Unplanned downtime
  − Hardware failure
  − Server unresponsive
  − Software failures
  − Occurs infrequently

Page 7: HDFS Namenode High Availability


Use Cases

• Deployment models
  − Single NN configuration; no failover
  − Active and Standby with manual failover
    Standby could be cold/warm/hot
    Addresses downtime during upgrades – the main cause of unavailability
  − Active and Standby with automatic failover
    Hot standby
    Addresses downtime during upgrades and other failures

See HDFS-1623 for detailed use cases

Page 8: HDFS Namenode High Availability


Design

• Failover control outside NN

• Parallel Block reports to Active and Standby (Hot failover)

• Shared or non-shared NN state

• Fencing of shared resources/data
  − Datanodes
  − Shared NN state (if any)

• Client failover
  − IP failover
  − Smart clients (e.g. configuration, or ZooKeeper for coordination)

Page 9: HDFS Namenode High Availability

Failover Control Outside NN

• HA Daemon outside NameNode

• Daemon manages resources
  − All resources modeled uniformly
  − Resources: OS, HW, network, etc.
  − NameNode is just another resource

• Heartbeat with other nodes

• Quorum-based leader election
  − ZooKeeper for coordination and quorum (sketched after the figure below)

• Fencing during split brain
  − Prevents data corruption

[Figure: HA daemon architecture – an HA daemon on each node manages its resources (the NameNode, OS, HW, network) and shared resources, heartbeats with its peers, runs leader election through a quorum service, and issues actions (start, stop, failover, monitor, …) including fencing/STONITH.]
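To make the leader-election piece concrete, here is a minimal sketch of the pattern using the plain ZooKeeper Java client: whichever daemon creates an ephemeral lock znode is the leader, and its disappearance signals the peer to fence the old active and take over. This is only an illustration of the technique, not the actual FailoverController code; the znode path, quorum string, and class name are made up.

    import java.nio.charset.StandardCharsets;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    // Leader-election sketch: each HA daemon tries to create the same ephemeral znode;
    // whoever succeeds is the leader (its NN becomes Active). If the leader's ZooKeeper
    // session dies, the znode disappears and the watch fires, so the peer can fence the
    // old active and take over. The parent znode (/nn-ha) is assumed to already exist.
    public class LeaderElectionSketch implements Watcher {
      private static final String LOCK_PATH = "/nn-ha/active-lock";  // made-up path
      private final ZooKeeper zk;
      private final byte[] myId;

      public LeaderElectionSketch(String zkQuorum, String nodeId) throws Exception {
        this.zk = new ZooKeeper(zkQuorum, 5000, this);   // e.g. "zk1:2181,zk2:2181,zk3:2181"
        this.myId = nodeId.getBytes(StandardCharsets.UTF_8);
      }

      /** Try to become active; returns true if this daemon won the election. */
      public boolean tryToBecomeActive() throws Exception {
        try {
          zk.create(LOCK_PATH, myId, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
          return true;                 // we hold the lock: fence peers, transition NN to active
        } catch (KeeperException.NodeExistsException e) {
          zk.exists(LOCK_PATH, true);  // someone else is active; leave a watch on the lock
          return false;
        }
      }

      @Override
      public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeDeleted) {
          // The previous active lost its ZooKeeper session: try to take over.
          try { tryToBecomeActive(); } catch (Exception ignored) { }
        }
      }
    }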

Page 10: HDFS Namenode High Availability

NN HA with Shared Storage and ZooKeeper

[Figure: Active NN and Standby NN share NN state through shared storage with a single writer (fencing). DataNodes send block reports to both the Active and the Standby; DN fencing ensures DNs accept update commands from only one NN. A FailoverController (Active/Standby) sits next to each NN, monitors the health of the NN, OS, and HW, and heartbeats with the ZooKeeper (ZK) quorum.]

Page 11: HDFS Namenode High Availability

HA Design Details


Page 12: HDFS Namenode High Availability


Client Failover Design

• Smart clients
  − Users use one logical URI; the client selects the correct NN to connect to

• Implementing two options out of the box
  − Client knows of multiple NNs (configuration sketched below)
  − Use a coordination service (ZooKeeper)

• Common things between these
  − Which operations are idempotent, and therefore safe to retry on a failover
  − Failover/retry strategies

• Some differences
  − Expected time for client failover
  − Ease of administration
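As a concrete illustration of the first option, the sketch below wires up a logical URI backed by two NameNodes using the client-side configuration keys that later shipped with HDFS HA in Hadoop 2.x. The nameservice ID "mycluster", the NN IDs, and the hostnames are made-up examples, and the keys postdate this talk, so treat the exact names as assumptions.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HaClientSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // One logical nameservice backed by two NameNodes (IDs and hosts are examples).
        conf.set("dfs.nameservices", "mycluster");
        conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.mycluster.nn1", "nn1.example.com:8020");
        conf.set("dfs.namenode.rpc-address.mycluster.nn2", "nn2.example.com:8020");
        // Proxy provider that retries against the other NN when a failover happens.
        conf.set("dfs.client.failover.proxy.provider.mycluster",
            "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

        // Clients only ever see the logical URI; failover is handled underneath.
        FileSystem fs = FileSystem.get(URI.create("hdfs://mycluster"), conf);
        System.out.println(fs.exists(new Path("/")));
      }
    }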

Page 13: HDFS Namenode High Availability


Ops/Admin: Shared Storage

• To share NN state, need shared storage
  − Needs to be HA itself to avoid just shifting the SPOF
    BookKeeper, etc. will likely take care of this in the future
  − Many come with IP fencing options
  − Recommended mount options:
      tcp,soft,intr,timeo=60,retrans=10

• Not all edits directories are created equal
  − All edits dirs used to be just a pool of redundant dirs
  − Some edits directories can now be configured as required
  − The number of tolerated edits-dir failures can now be configured
  − You want at least 2 for durability, 1 remote for HA (configuration sketched below)
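A minimal sketch of such an edits-directory layout, using the configuration keys that later shipped in Hadoop 2.x (dfs.namenode.edits.dir, dfs.namenode.edits.dir.required, dfs.namenode.shared.edits.dir). The paths and NFS mount point are made up, and the key controlling the number of tolerated failures is omitted since the slide does not name it.

    import org.apache.hadoop.conf.Configuration;

    public class EditsDirsSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Local edit-log dirs (at least two for durability); paths are made up.
        conf.set("dfs.namenode.edits.dir",
            "file:///data/1/dfs/edits,file:///data/2/dfs/edits");
        // The remote, HA shared-storage dir (e.g. an NFS filer mount) both NNs use.
        conf.set("dfs.namenode.shared.edits.dir", "file:///mnt/filer/dfs/edits");
        // Mark one dir as required so its failure aborts the NN rather than being
        // silently tolerated; which dir to require is a deployment choice.
        conf.set("dfs.namenode.edits.dir.required", "file:///data/1/dfs/edits");
      }
    }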

Page 14: HDFS Namenode High Availability


Ops/Admin: NN fencing

• Client failover does not solve this problem

• Out of the box
  − RPC to the active NN to tell it to go to standby (graceful failover)
  − SSH to the active NN and `kill -9` the NN

• Pluggable options
  − Many filers have protocols for IP-based fencing
  − Many PDUs have protocols for IP-based plug-pulling (STONITH)

Nuke the node from orbit. It’s the only way to be sure.

• Configure extra options if available to you (configuration sketched below)
  − They will be tried in order during a failover event
  − Escalate the aggressiveness of the methods
  − Fencing is critical for correctness of NN metadata
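A minimal sketch of such an escalating fencing configuration, using the dfs.ha.fencing.* keys that later shipped with HDFS HA. The shell script path and key file are made up, and the key names postdate this talk, so treat them as assumptions.

    import org.apache.hadoop.conf.Configuration;

    public class FencingConfigSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Methods are tried in order: first sshfence (SSH in and kill the old active's
        // process), then an escalating custom script, e.g. one that talks to a filer
        // or a PDU. The script path is hypothetical.
        conf.set("dfs.ha.fencing.methods",
            "sshfence\nshell(/usr/local/bin/fence-filer-and-pdu.sh)");
        // Private key used by sshfence to log into the other NN host.
        conf.set("dfs.ha.fencing.ssh.private-key-files", "/home/hdfs/.ssh/id_rsa");
      }
    }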

Page 15: HDFS Namenode High Availability


Ops/Admin: Monitoring

• New NN metrics
  − Size of pending DN message queues
  − Seconds since the standby NN last read from the shared edit log
  − DN block report lag
  − All are measurements of standby NN lag – monitor/alert on all of these

• Monitor the shared storage solution
  − Volumes fill up, disks go bad, etc.
  − Should configure a paranoid edit log retention policy (default is 2)

• Canary-based monitoring of HDFS is a good idea (a minimal sketch follows)
  − Pinging both NNs is not sufficient
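A minimal sketch of such a canary, assuming an HA client configuration with a logical URI like hdfs://mycluster is already in place; the URI, path, and latency threshold are arbitrary choices for illustration. Unlike pinging the NNs, it exercises a real metadata round trip through whichever NN is active.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsCanary {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();       // assumes HA client config is present
        FileSystem fs = FileSystem.get(URI.create("hdfs://mycluster"), conf);
        Path canary = new Path("/tmp/.hdfs-canary");    // arbitrary canary path

        long start = System.currentTimeMillis();
        try (FSDataOutputStream out = fs.create(canary, true)) {
          out.writeUTF("canary");                       // forces a real write through the active NN
        }
        boolean ok = fs.exists(canary);
        fs.delete(canary, false);
        long elapsed = System.currentTimeMillis() - start;

        // Alert if the round trip failed or took too long (threshold is arbitrary).
        if (!ok || elapsed > 10_000) {
          System.err.println("HDFS canary FAILED or slow: " + elapsed + " ms");
          System.exit(1);
        }
        System.out.println("HDFS canary OK in " + elapsed + " ms");
      }
    }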

Page 16: HDFS Namenode High Availability


Ops/Admin: Hardware

• Active/Standby NNs should be on separate racks

• Shared storage system should be on separate rack

• Active/Standby NNs should have close to the same hardware
  − Same amount of RAM – they need to store the same things
  − Same number of processors – they need to serve the same number of clients

• All the usual NN recommendations still apply
  − ECC memory, 48GB
  − Several separate disks for NN metadata directories
  − Redundant disks for OS drives, probably RAID 5 or mirroring
  − Redundant power

Page 17: HDFS Namenode High Availability


Future Work

• Other options to share NN metadata
  − BookKeeper
  − Multiple, potentially non-HA filers
  − An entirely different metadata system

• More advanced client failover / load shedding
  − Serve stale reads from the standby NN
  − Speculative RPC
  − Non-RPC clients (IP failover, DNS failover, proxy, etc.)

• Even higher HA
  − Multiple standby NNs

Page 18: HDFS Namenode High Availability


Q&A

• Detailed design (HDFS-1623)
  − Community effort
  − HDFS-1971, 1972, 1973, 1974, 1975, 2005, 2064, 1073