
Running Hadoop on Windows

What is Hadoop?

Hadoop is an open source Apache project written in Java and designed to provide users with two things: a distributed file system (HDFS) and a method for distributed computation. It’s based on Google’s published Google File System and MapReduce papers, which discuss how to build a framework capable of executing intensive computations across tons of computers. Something that might, you know, be helpful in building a giant search index. Read the Hadoop project description and wiki for more information and background on Hadoop.

What’s the big deal about running it on Windows?

Looking for Linux? If you’re looking for a comprehensive guide to getting Hadoop running on Linux, please check out Michael Noll’s excellent guides: Running Hadoop on Ubuntu Linux (Single Node Cluster) and Running Hadoop on Ubuntu Linux (Multi-Node Cluster). This post was inspired by these very informative articles.

Hadoop’s key design goal is to provide storage and computation on lots of homogeneous “commodity” machines, usually fairly beefy machines running Linux. With that goal in mind, the Hadoop team has logically focused on Linux platforms in their development and documentation. Their Quickstart even includes the caveat that “Win32 is supported as a development platform. Distributed operation has not been well tested on Win32, so this is not a production platform.” If you want to use Windows to run Hadoop in pseudo-distributed or distributed mode (more on these modes in a moment), you’re pretty much left on your own. Most people will still probably not run Hadoop in production on Windows machines, but the ability to deploy on the most widely used platform in the world is a good way to open Hadoop up to the many developers who use Windows on a daily basis.

Caveat Emptor

I’m one of the few who have invested the time to set up an actual distributed Hadoop installation on Windows. I’ve used it for some successful development tests. I have not used this in production. Also, although I can get around in a Linux/Unix environment, I’m no expert, so some of the advice below may not be the correct way to configure things. I’m also no security expert. If any of you out there have corrections or advice for me, please let me know in a comment and I’ll get it fixed.

This guide uses Hadoop v0.17 and assumes that you don’t have any previous Hadoop installation. I’ve also done my primary work with Hadoop on Windows XP. Where I’m aware of differences between XP and Vista, I’ve tried to note them. Please comment if something I’ve written is not appropriate for Vista.

Bottom line: your mileage may vary, but this guide should get you started running Hadoop on Windows.

A quick note on distributed Hadoop

Hadoop runs in one of three modes:

Standalone: All Hadoop functionality runs in one Java process. This works “out of the box” and is trivial to use on any platform, Windows included.

Pseudo-Distributed: Hadoop functionality all runs on the local machine but the various components will run as separate processes. This is much more like “real” Hadoop and does require some configuration as well as SSH. It does not, however, permit distributed storage or processing across multiple machines.

Fully Distributed: Hadoop functionality is distributed across a “cluster” of machines. Each machine participates in somewhat different (and occasionally overlapping) roles. This allows multiple machines to contribute processing power and storage to the cluster.

The Hadoop Quickstart can get you started on Standalone mode and, to some degree, Pseudo-Distributed mode. Take a look at that if you’re not ready for Fully Distributed. This guide focuses on the Fully Distributed mode of Hadoop. After all, it’s the most interesting mode: it’s where you’re actually doing real distributed computing.

Pre-Requisites

Java

I’m assuming if you’re interested in running Hadoop that you’re familiar with Java programming and have Java installed on all the machines on which you want to run Hadoop. The Hadoop docs recommend Java 6 and require at least Java 5. Whichever you choose, you need to make sure that you have the same major Java version (5 or 6) installed on each machine. Also, any code you write to run on Hadoop’s MapReduce must be compiled with the version you choose. If you don’t have Java installed, go get it from Sun and install it. I will assume you’re using Java 6 in the rest of this guide.
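
An easy way to confirm that every machine has the same major Java version is to check it from a command prompt (or from Cygwin once you’ve installed it in the next step). This assumes java is on your PATH; if it isn’t, run it from your JDK’s bin directory:

$> java -version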

Cygwin

As I said in the introduction, Hadoop assumes Linux (or another Unix-flavored OS) is being used to run it. This assumption is buried pretty deeply. Various parts of Hadoop are executed using shell scripts that will only work in a Linux-style shell. It also uses passwordless secure shell (SSH) to communicate between computers in the Hadoop cluster. The best way to do these things on Windows is to make Windows act more like Linux. You can do this using Cygwin, which provides a “Linux-like environment for Windows” that allows you to use Linux-style command line utilities as well as run really useful Linux-centric software like OpenSSH. Go download the latest version of Cygwin. Don’t install it yet. I’ll describe how you need to install it below.

Hadoop

Go download Hadoop core. I’m writing this guide for version 0.17 and I will assume that’s what you’re using.

More than one Windows PC on a LAN

It should probably go without saying that to follow this guide, you’ll need more than one PC. I’m going to assume you have two computers and that they’re both on your LAN. Go ahead and designate one to be the Master and one to be the Slave. These machines together will be your “cluster”. The Master will be responsible for ensuring the Slaves have work to do (such as storing data or running MapReduce jobs). The Master can also do its share of this work. If you have more than two PCs, you can always set up Slave2, Slave3 and so on. Some of the steps below will need to be performed on all your cluster machines, some on just the Master or the Slaves. I’ll note which applies for each step.

Step 1: Configure your hosts file (All machines)

This step isn’t strictly necessary but it will make your life easier down the road if your computers change IPs. It’ll also help you keep things straight in your head as you edit configuration files. Open your Windows hosts file located at c:\windows\system32\drivers\etc\hosts (the file is named “hosts” with no extension) in a text editor and add the following lines (replacing the NNNs with the IP addresses of both master and slave):

NNN.NNN.NNN.NNN master
NNN.NNN.NNN.NNN slave

Save the file.
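
To confirm the entries work, you can ping each machine by name from the other; if the names don’t resolve, double-check the hosts file before going any further:

$> ping master
$> ping slave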

Step 2: Install Cygwin and Configure OpenSSH sshd (All machines)

Cygwin has a bit of an odd installation process because it lets you pick and choose which libraries of useful Linux-y programs and utilities you want to install. In this case, we’re really installing Cygwin to be able to run shell scripts and OpenSSH. OpenSSH is an implementation of a secure shell (SSH) server (sshd) and client (ssh). If you’re not familiar with SSH, you can think of it as a secure version of telnet. With the ssh command, you can login to another computer running sshd and work with it from the command line. Instead of reinventing the wheel, I’m going to tell you to go here for step-by-step instructions on how to install Cygwin on Windows and get OpenSSH’s sshd server running. You can stop after instruction 6. Like the linked instructions, I’ll assume you’ve installed Cygwin to c:\cygwin though you can install it elsewhere.

If you’re running a firewall on your machine, you’ll need to make sure port 22 is open for incoming SSH connections. As always with firewalls, open your machine up as little as possible. If you’re using Windows firewall, make sure the open port is scoped to your LAN. Microsoft has documentation for how to do all this with Windows Firewall (scroll down to the section titled “Configure Exceptions for Ports”).

Step 3: Configure SSH (All Machines)

Hadoop uses SSH to allow the master computer(s) in a cluster to start and stop processes on the slave computers. One of the nice things about SSH is that it supports several modes of secure authentication: you can use passwords or you can use public/private keys to connect without passwords (“passwordless”). Hadoop requires that you set up SSH to do the latter. I’m not going to go into great detail on how this all works, but suffice it to say that you’re going to do the following:

1. Generate a public/private key pair for your user on each cluster machine.
2. Exchange each machine user’s public key with each other machine user in the cluster.

Generate public/private key pairs

To generate a key pair, open Cygwin and issue the following commands ($> is the command prompt):

$> ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$> cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Now, you should be able to SSH into your local machine using the following command:

$> ssh localhost

When prompted for your password, enter it. You’ll see something like the following in your Cygwin terminal.

hayes@localhost's password:
Last login: Sun Jun 8 19:47:14 2008 from localhost

hayes@calculon ~
$>

To quit the SSH session and go back to your regular terminal, use:

$> exit

Make sure to do this on all computers in your cluster.

Exchange public keys

Now that you have public and private key pairs on each machine in your cluster, you need to share your public keys around to permit passwordless login from one machine to the other. Once a machine has a public key, it can safely authenticate a request from a remote machine that is encrypted using the private key that matches that public key.

On the master issue the following command in cygwin (where “<slaveusername>” is the username you use to login to Windows on the slave computer):

$> scp ~/.ssh/id_dsa.pub <slaveusername>@slave:~/.ssh/master-key.pub

Enter your password when prompted. This will copy your public key file in use on the master to the slave.

On the slave, issue the following command in cygwin:

$> cat ~/.ssh/master-key.pub >> ~/.ssh/authorized_keys

This will append your public key to the set of authorized keys the slave accepts for authentication purposes.

Back on the master, test this out by issuing the following command in cygwin:

$> ssh <slaveusername>@slave

If all is well, you should be logged into the slave computer with no password required.

Repeat this process in reverse, copying the slave’s public key to the master. Also, make sure to exchange public keys between the master and any other slaves that may be in your cluster.

Configure SSH to use default usernames (optional)

If all of your cluster machines are using the same username, you can safely skip this step. If not, read on.

Most Hadoop tutorials suggest that you set up a user specific to Hadoop. If you want to do that, you certainly can. Why set up a specific user for Hadoop? Well, in addition to being more secure from a file permissions and security perspective, when Hadoop uses SSH to issue commands from one machine to another it will automatically try to log in to the remote machine using the same user as the current machine. If you have different users on different machines, the SSH login performed by Hadoop will fail. However, most of us on Windows typically use our machines with a single user and would probably prefer not to have to set up a new user on each machine just for Hadoop.

The way to allow Hadoop to work with multiple users is by configuring SSH to automatically select the appropriate user when Hadoop issues its SSH command. (You’ll also need to edit the hadoop-env.sh config file, but that comes later in this guide.) You can do this by editing the file named “config” (no extension) located in the same “.ssh” directory where you stored your public and private keys for authentication. Cygwin stores this directory under “c:\cygwin\home\<windowsusername>\.ssh”.

On the master, create a file called config and add the following lines (replacing “<slaveusername>” with the username you’re using on the Slave machine):

Host slave
User <slaveusername>

If you have more slaves in your cluster, add Host and User lines for those as well.

On each slave, create a file called config and add the following lines (replacing “<masterusername>” with the username you’re using on the Master machine):

Host master
User <masterusername>

Now test this out. On the master, go to cygwin and issue the following command:

$> ssh slave

You should be automatically logged into the slave machine with no username and no password required. Make sure to exit out of your ssh session.

For more information on this configuration file’s format and what it does, go here or run man ssh_config in cygwin.

Step 4: Extract Hadoop (All Machines)

If you haven’t downloaded Hadoop 0.17, go do that now. The file will have a “.tar.gz” extension which is not natively understood by Windows. You’ll need something like WinRAR to extract it. (If anyone knows something easier than WinRAR for extracting tarred-gzipped files on Windows, please leave a comment.)

Once you’ve got an extraction utility, extract it directly into c:\cygwin\usr\local. (Assuming you installed Cygwin to c:\cygwin as described above.)

The extracted folder will be named hadoop-0.17.0. Rename it to hadoop. All further steps assume you’re in this hadoop directory and will use relative paths for configuration files and shell scripts.
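
As an aside, if you selected GNU tar during the Cygwin setup, you can skip WinRAR and extract straight from the Cygwin shell. A sketch, assuming the download is sitting in c:\downloads (adjust the path to wherever you actually saved the file):

$> cd /usr/local
$> tar xzf /cygdrive/c/downloads/hadoop-0.17.0.tar.gz
$> mv hadoop-0.17.0 hadoop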

Step 5: Configure hadoop-env.sh (All Machines)

The conf/hadoop-env.sh file is a shell script that sets up various environment variables that Hadoop needs to run. Open conf/hadoop-env.sh in a text editor. Look for the line that starts with “#export JAVA_HOME”. Change that line to something like the following:

export JAVA_HOME=c:\\Program\ Files\\Java\\jdk1.6.0_06

This should be the home directory of your Java installation. Note that you need to remove the leading “#” (comment) symbol and that you need to escape both backslashes and spaces with a backslash.

Next, locate the line that starts with “#export HADOOP_IDENT_STRING”. Change it to something like the following:

export HADOOP_IDENT_STRING=MYHADOOP

Where MYHADOOP can be anything you want to identify your Hadoop cluster with. Just make sure each machine in your cluster uses the same value.

To test these changes issue the following commands in cygwin:

$> cd /usr/local/hadoop
$> bin/hadoop version

You should see output similar to this:

Hadoop 0.17.0
Subversion http://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.17 -r 656523
Compiled by hadoopqa on Thu May 15 07:22:55 UTC 2008

If you see output like this:

bin/hadoop: line 166: c:\Program Files\Java\jdk1.6.0_05/bin/java: No such file or directory
bin/hadoop: line 251: c:\Program Files\Java\jdk1.6.0_05/bin/java: No such file or directory
bin/hadoop: line 251: exec: c:\Program Files\Java\jdk1.6.0_05/bin/java: cannot execute: No such file or directory

This means that your Java home directory is wrong. Go back and make sure you specified the correct directory and used the appropriate escaping.

Step 6: Configure hadoop-site.xml (All Machines)

The conf/hadoop-site.xml file is basically a properties file that lets you configure all sorts of HDFS and MapReduce parameters on a per-machine basis. I’m not going to go into detail here about what each property does, but there are 3 that you need to configure on all machines: fs.default.name, mapred.job.tracker and dfs.replication. You can just copy the XML below into your conf/hadoop-site.xml file.

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:47110</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:47111</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>

For more information about what these configuration properties (and others) do, see the Hadoop cluster setup docs and the hadoop-default.xml documentation.

Step 7: Configure slaves file (Master only)

The conf/slaves file tells the master where it can find slaves to do work. Open yours in a text editor. It will probably have one line which says “localhost”. Replace that with the following:

master
slave

Step 8: Firewall Configuration (All Machines)

If you’re using Windows Firewall, you will need to ensure that the appropriate ports are open so that the slaves can make HTTP requests for information from the master. (This is different from the port 22 needed for SSH.) The list of ports for which you should make exceptions is as follows: 47110, 47111, 50010, 50030, 50060, 50070, 50075, 50090. These should all be open on the master for requests coming from your local network. For more information about these ports, see the Hadoop default configuration file documentation.
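
If you prefer to script the firewall changes rather than click through the dialogs, Windows XP’s netsh firewall context can open a port scoped to your subnet. A hedged sketch for a single port (repeat it for each port in the list above; the name is arbitrary, and the syntax differs on Vista and later, which use netsh advfirewall instead):

$> netsh firewall add portopening protocol=TCP port=50070 name=HadoopWeb mode=ENABLE scope=SUBNET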

You should also make sure that Java applications are allowed by the firewall to connect to the network on all your machines including the slaves.

Step 9: Starting your cluster (Master Only)

To start your cluster, make sure you’re in cygwin on the master and have changed to your hadoop installation directory. To fully start your cluster, you’ll need to start DFS first and then MapReduce.
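
One caveat before the very first start: on a brand-new installation, HDFS normally has to be formatted once on the master before the namenode will start cleanly. If your setup hasn’t already done this, the standard command is below; run it only once, since reformatting later will wipe the data in your DFS:

$> cd /usr/local/hadoop
$> bin/hadoop namenode -format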

Starting DFS

Issue the following command:

$> bin/start-dfs.sh

You should see output somewhat like the following (note that I have 2 slaves in my cluster, which has a cluster ID of Appozite; your mileage will vary somewhat):

starting namenode, logging to /usr/local/hadoop/bin/../logs/hadoop-Appozite-namenode-calculon.out
master: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-Appozite-datanode-calculon.out
slave: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-Appozite-datanode-hayes-daviss-macbook-pro.local.out
slave2: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-Appozite-datanode-XTRAPUFFYJR.out
master: starting secondarynamenode, logging to /usr/local/hadoop/bin/../logs/hadoop-Appozite-secondarynamenode-calculon.out

To see if your distributed file system is actually running across multiple machines, you can open the Hadoop DFS web interface, which will be running on your master on port 50070. You can probably open it by clicking this link: http://localhost:50070. On my cluster, the interface showed 3 nodes with a total of 712.27 GB of space.
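
If you’d rather check from the command line than the browser, the dfsadmin report lists every datanode that has joined the cluster; run it from the hadoop directory on the master:

$> bin/hadoop dfsadmin -report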

Starting MapReduce

To start the MapReduce part of Hadoop, issue the following command:

$> bin/start-mapred.sh

You should see output similar to the following (again noting that I’ve got 3 nodes in my cluster):

starting jobtracker, logging to /usr/local/hadoop/bin/../logs/hadoop-Appozite-jobtracker-calculon.out
master: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-Appozite-tasktracker-calculon.out
slave: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-Appozite-tasktracker-hayes-daviss-macbook-pro.local.out
slave2: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-Appozite-tasktracker-XTRAPUFFYJR.out

You can view your MapReduce setup using the MapReduce monitoring web app that comes with Hadoop, which runs on port 50030 of your master node. You can probably open it by clicking this link: http://localhost:50030. There’s not much exciting to see there until you have an actual MapReduce job running.

Testing it out

Now that you’ve got your Hadoop cluster up and running, executing MapReduce jobs or writing to and reading from DFS are no different on Windows than any other platform, so long as you use Cygwin to execute commands. At this point, I’ll refer you to Michael Noll’s Hadoop on Ubuntu Linux tutorial for an explanation of how to run a MapReduce job large enough to take advantage of your cluster. (Note that he’s using Hadoop 0.16.0 instead of 0.17.0, so you’ll replace "0.16.0" with "0.17.0" where applicable.) Follow his instructions and you should be good to go. The Hadoop site also offers a MapReduce tutorial so you can get started writing your own jobs in Java. If you’re interested in writing MapReduce jobs in other languages that take advantage of Hadoop, check out the Hadoop Streaming documentation.
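
If you just want a quick end-to-end smoke test before working through those tutorials, the examples jar that ships with the 0.17.0 release includes a word-count job. A sketch, assuming you’ve put a few text files in a local folder such as c:\tmp\books (the local folder and the HDFS paths here are only placeholders):

$> cd /usr/local/hadoop
$> bin/hadoop dfs -copyFromLocal /cygdrive/c/tmp/books books
$> bin/hadoop jar hadoop-0.17.0-examples.jar wordcount books books-output
$> bin/hadoop dfs -cat books-output/part-00000 | head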

How to stop your cluster

When you’re ready to stop your cluster, it’s simple. Just stop MapReduce and then DFS.

To stop MapReduce, issue the following command on the master:

$> bin/stop-mapred.sh

You should see output similar to the following:

stopping jobtracker
slave: stopping tasktracker
master: stopping tasktracker
slave2: stopping tasktracker

To stop DFS, issue the following command on the master:

$> bin/stop-dfs.sh

You should see output similar to the following:

stopping namenode
master: stopping datanode
slave: stopping datanode
slave2: stopping datanode
master: stopping secondarynamenode

And that’s it

I hope this helps anyone out there trying to run Hadoop on Windows. If any of you have corrections, questions or suggestions please comment and let me know. Happy Hadooping!

What is Apache Hadoop?

A look at the components and functions of the Hadoop ecosystem.

Apache Hadoop has been the driving force behind the growth of the big data industry. You'll hear it mentioned often, along with associated technologies such as Hive and Pig. But what does it do, and why do you need all its strangely-named friends, such as Oozie, Zookeeper and Flume?

Hadoop brings the ability to cheaply process large amounts of data, regardless of its structure. By large, we mean from 10-100 gigabytes and above. How is this different from what went before?

Existing enterprise data warehouses and relational databases excel at processing structured data and can store massive amounts of data, though at a cost: This requirement for structure restricts the kinds of data that can be processed, and it imposes an inertia that makes data warehouses unsuited for agile exploration of massive heterogeneous data. The amount of effort required to warehouse data often means that valuable data sources in organizations are never mined. This is where Hadoop can make a big difference.

This article examines the components of the Hadoop ecosystem and explains the functions of each.

The core of Hadoop: MapReduce

Created at Google in response to the problem of creating web search indexes, the MapReduce framework is the powerhouse behind most of today's big data processing. In addition to Hadoop, you'll find MapReduce inside MPP and NoSQL databases, such as Vertica or MongoDB.

The important innovation of MapReduce is the ability to take a query over a dataset, divide it, and run it in parallel over multiple nodes. Distributing the computation solves the issue of data too large to fit onto a single machine. Combine this technique with commodity Linux servers and you have a cost-effective alternative to massive computing arrays.
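
To make the map and reduce steps concrete, here is a minimal word-count sketch written in Java against Hadoop's older org.apache.hadoop.mapred API. It's the standard introductory example; treat it as an illustration of the programming model rather than code tied to any particular release:

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class WordCount {

  // Map phase: emit (word, 1) for every word in the input split.
  public static class Map extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output, Reporter reporter)
        throws IOException {
      StringTokenizer tokens = new StringTokenizer(value.toString());
      while (tokens.hasMoreTokens()) {
        word.set(tokens.nextToken());
        output.collect(word, one);
      }
    }
  }

  // Reduce phase: sum the counts for each word; many reduce tasks run in
  // parallel, each handling a partition of the key space.
  public static class Reduce extends MapReduceBase
      implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<Text, IntWritable> output, Reporter reporter)
        throws IOException {
      int sum = 0;
      while (values.hasNext()) {
        sum += values.next().get();
      }
      output.collect(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(WordCount.class);
    conf.setJobName("wordcount");
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);
    conf.setMapperClass(Map.class);
    conf.setCombinerClass(Reduce.class);   // combine locally before the shuffle
    conf.setReducerClass(Reduce.class);
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));
    JobClient.runJob(conf);
  }
}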

At its core, Hadoop is an open source MapReduce implementation. Funded by Yahoo, it emerged in 2006 and, according to its creator Doug Cutting, reached "web scale" capability in early 2008.

As the Hadoop project matured, it acquired further components to enhance its usability and functionality. The name "Hadoop" has come to represent this entire ecosystem. There are parallels with the emergence of Linux: The name refers strictly to the Linux kernel, but it has gained acceptance as referring to a complete operating system.

Hadoop's lower levels: HDFS and MapReduce

Above, we discussed the ability of MapReduce to distribute computation over multiple servers. For that computation to take place, each server must have access to the data. This is the role of HDFS, the Hadoop Distributed File System.

HDFS and MapReduce are robust. Servers in a Hadoop cluster can fail and not abort the computation process. HDFS ensures data is replicated with redundancy across the cluster. On completion of a calculation, a node will write its results back into HDFS.

There are no restrictions on the data that HDFS stores. Data may be unstructured and schemaless. By contrast, relational databases require that data be structured and schemas be defined before storing the data. With HDFS, making sense of the data is the responsibility of the developer's code.

Programming Hadoop at the MapReduce level is a case of working with the Java APIs, and manually loading data files into HDFS.
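
For a sense of what working with the Java API to load files into HDFS looks like, here is a small hypothetical sketch using the FileSystem class (the local and HDFS paths are placeholders, and in practice many people simply use the hadoop dfs shell commands for this):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LoadIntoHdfs {
  public static void main(String[] args) throws Exception {
    // Reads fs.default.name from the Hadoop configuration on the classpath,
    // so this talks to whichever cluster your site configuration points at.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Copy a local file into HDFS (both paths are placeholders).
    fs.copyFromLocalFile(new Path("/tmp/sample.txt"),
                         new Path("/user/hayes/input/sample.txt"));

    // List what is now stored under the target directory.
    for (FileStatus status : fs.listStatus(new Path("/user/hayes/input"))) {
      System.out.println(status.getPath() + " (" + status.getLen() + " bytes)");
    }
  }
}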

Improving programmability: Pig and Hive

Working directly with Java APIs can be tedious and error prone. It also restricts usage of Hadoop to Java programmers. Hadoop offers two solutions for making Hadoop programming easier.

Pig is a programming language that simplifies the common tasks of working with Hadoop: loading data, expressing transformations on the data, and storing the final results. Pig's built-in operations can make sense of semi-structured data, such as log files, and the language is extensible using Java to add support for custom data types and transformations.

Hive enables Hadoop to operate as a data warehouse. It superimposes structure on data in HDFS and then permits queries over the data using a familiar SQL-like syntax. As with Pig, Hive's core capabilities are extensible.

Choosing between Hive and Pig can be confusing. Hive is more suitable for data warehousing tasks, with predominantly static structure and the need for frequent analysis. Hive's closeness to SQL makes it an ideal point of integration between Hadoop and other business intelligence tools.

Pig gives the developer more agility for the exploration of large datasets, allowing the development of succinct scripts for transforming data flows for incorporation into larger applications. Pig is a thinner layer over Hadoop than Hive, and its main advantage is to drastically cut the amount of code needed compared to direct use of Hadoop's Java APIs. As such, Pig's intended audience remains primarily the software developer.

Improving data access: HBase, Sqoop and Flume

At its heart, Hadoop is a batch-oriented system. Data are loaded into HDFS, processed, and then retrieved. This is somewhat of a computing throwback, and often, interactive and random access to data is required.

Enter HBase, a column-oriented database that runs on top of HDFS. Modeled after Google's BigTable, the project's goal is to host billions of rows of data for rapid access. MapReduce can use HBase as both a source and a destination for its computations, and Hive and Pig can be used in combination with HBase.

In order to grant random access to the data, HBase does impose a few restrictions: Hive performance with HBase is 4-5 times slower than with plain HDFS, and the maximum amount of data you can store in HBase is approximately a petabyte, versus HDFS' limit of over 30PB.

HBase is ill-suited to ad-hoc analytics and more appropriate for integrating big data as part of a larger application. Use cases include logging, counting and storing time-series data.

The Hadoop Bestiary

Ambari: Deployment, configuration and monitoring
Flume: Collection and import of log and event data
HBase: Column-oriented database scaling to billions of rows
HCatalog: Schema and data type sharing over Pig, Hive and MapReduce
HDFS: Distributed redundant file system for Hadoop
Hive: Data warehouse with SQL-like access
Mahout: Library of machine learning and data mining algorithms
MapReduce: Parallel computation on server clusters
Pig: High-level programming language for Hadoop computations
Oozie: Orchestration and workflow management
Sqoop: Imports data from relational databases
Whirr: Cloud-agnostic deployment of clusters
Zookeeper: Configuration management and coordination

Getting data in and out

Improved interoperability with the rest of the data world is provided by Sqoop and Flume. Sqoop is a tool designed to import data from relational databases into Hadoop, either directly into HDFS or into Hive. Flume is designed to import streaming flows of log data directly into HDFS.

Hive's SQL friendliness means that it can be used as a point of integration with the vast universe of database tools capable of making connections via JDBC or ODBC database drivers.

Coordination and workflow: Zookeeper and Oozie

With a growing family of services running as part of a Hadoop cluster, there's a need for coordination and naming services. As computing nodes can come and go, members of the cluster need to synchronize with each other, know where to access services, and know how they should be configured. This is the purpose of Zookeeper.

Production systems utilizing Hadoop can often contain complex pipelines of transformations, each with dependencies on each other. For example, the arrival of a new batch of data will trigger an import, which must then trigger recalculations in dependent datasets. The Oozie component provides features to manage the workflow and dependencies, removing the need for developers to code custom solutions.

Management and deployment: Ambari and Whirr

One of the commonly added features incorporated into Hadoop by distributors such as IBM and Microsoft is monitoring and administration. Though in an early stage, Ambari aims to add these features to the core Hadoop project. Ambari is intended to help system administrators deploy and configure Hadoop, upgrade clusters, and monitor services. Through an API, it may be integrated with other system management tools.

Though not strictly part of Hadoop, Whirr is a highly complementary component. It offers a way of running services, including Hadoop, on cloud platforms. Whirr is cloud neutral and currently supports the Amazon EC2 and Rackspace services.

Machine learning: Mahout

Every organization's data are diverse and particular to their needs. However, there is much less diversity in the kinds of analyses performed on that data. The Mahout project is a library of Hadoop implementations of common analytical computations. Use cases include user collaborative filtering, user recommendations, clustering and classification.

Using Hadoop

Normally, you will use Hadoop in the form of a distribution. Much as with Linux before it, vendors integrate and test the components of the Apache Hadoop ecosystem and add in tools and administrative features of their own.

Though not per se a distribution, a managed cloud installation of Hadoop's MapReduce is also available through Amazon's Elastic MapReduce service.