
Hadoop at ContextWeb

February 2009


ContextWeb: Traffic

Traffic – up to 6 thousand Ad requests per second. Comscore Trend Data:


ContextWeb Architecture highlights

Pre-Hadoop aggregation framework:
• Logs are generated on each server and aggregated in memory into 15-minute chunks
• Aggregation of logs from different servers into one log
• Load to DB
• Multi-stage aggregation in DB
• About 20 different jobs end-to-end
• Could take 2 hours to process through all stages


Hadoop Data Set

• Up to 100 GB of raw log files per day; 40 GB compressed
• 40 different aggregated data sets
• 15 TB total (compressed) to cover 1 year
• Multiply by 3 replicas …


Architectural Challenges

• How to organize the data set so that aggregated data sets stay fresh? Logs are constantly appended to the main data set, and reports and aggregated data sets should be refreshed every 15 minutes.
• Mix of .NET and Java applications (80%+ .NET, 20% Java). How to make .NET applications write logs to Hadoop?
• Some 3rd-party applications consume the results of MapReduce jobs (e.g. the reporting application). How to make 3rd-party or internal legacy applications read data from Hadoop?


Hadoop Cluster

Today:
• 26 nodes / 208 cores: DELL 2950, 1.8 TB per node, 43 TB total capacity
• NameNode high availability using DRBD replication
• Hadoop 0.17.1 -> 0.18.3
• In-house developed Java framework on top of hadoop.mapred.* (see the sketch below)
• PIG and Perl streaming for ad-hoc reports
• ~1,000 MapReduce jobs per day
• Opswise scheduler
• Exposing data to Windows: WebDAV server with WebDrive clients
• Reporting application: QlikView
• Cloudera support for Hadoop
• Archival/backup: Amazon S3

By end of 2009: ~50 nodes / 400 cores, ~85 TB total capacity
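As a rough sketch of the layer the in-house framework builds on, a minimal job written directly against the 0.18-era hadoop.mapred.* API might look like the following; the job name, classes, and the assumption of tab-delimited log lines are illustrative, not ContextWeb code.

    // Minimal old-API (hadoop.mapred.*) job: counts log lines per first field.
    // Illustrative only -- names and the tab-delimited input format are assumptions.
    import java.io.IOException;
    import java.util.Iterator;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.*;

    public class LogCountJob {

        // Emits (first field of the log line, 1).
        public static class Map extends MapReduceBase
                implements Mapper<LongWritable, Text, Text, LongWritable> {
            private static final LongWritable ONE = new LongWritable(1);
            public void map(LongWritable offset, Text line,
                            OutputCollector<Text, LongWritable> out,
                            Reporter reporter) throws IOException {
                String[] fields = line.toString().split("\t");
                out.collect(new Text(fields[0]), ONE);
            }
        }

        // Sums the counts for each key.
        public static class Reduce extends MapReduceBase
                implements Reducer<Text, LongWritable, Text, LongWritable> {
            public void reduce(Text key, Iterator<LongWritable> values,
                               OutputCollector<Text, LongWritable> out,
                               Reporter reporter) throws IOException {
                long sum = 0;
                while (values.hasNext()) sum += values.next().get();
                out.collect(key, new LongWritable(sum));
            }
        }

        public static void main(String[] args) throws IOException {
            JobConf conf = new JobConf(LogCountJob.class);
            conf.setJobName("log-count");
            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(LongWritable.class);
            conf.setMapperClass(Map.class);
            conf.setReducerClass(Reduce.class);
            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));
            JobClient.runJob(conf);
        }
    }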


Internal Components

• Disks
  o 2x 300 GB 15k RPM SAS
  o Hardware RAID 1 mirroring
  o SMART monitoring
• Network
  o Dual 1 Gbps on-board NICs
  o Linux bonding with LACP


Redundant Network Architecture

• Linux bonding (see the config sketch below)
  o See bonding.txt from the Linux kernel docs
  o LACP, aka 802.3ad, aka mode=4 (http://en.wikipedia.org/wiki/Link_Aggregation_Control_Protocol)
  o Must be supported by your switches
  o Throughput advantage: observed at 1.76 Gb/s
  o Allows for failure of either NIC, instead of a single heartbeat connection via crossover
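As a sketch of what such a bonding setup can look like on a node (RHEL-style syntax; device names and the address are assumptions, not the actual ContextWeb configuration):

    # /etc/modprobe.conf -- load the bonding driver in LACP (802.3ad) mode
    alias bond0 bonding
    options bond0 mode=4 miimon=100 lacp_rate=fast

    # /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bonded interface
    DEVICE=bond0
    IPADDR=10.0.0.21
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none

    # ifcfg-eth0 (and likewise ifcfg-eth1) -- enslave both on-board NICs
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

The two switch ports the NICs plug into must be configured as an LACP aggregation group, as the slide notes.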


The Data Flow

[Diagram: the Ad Serving Platform feeds a log collection process into Hadoop. Wide historic data is kept as raw daily logs (RawLogD yyyymmdd, files by day, no rollup, 60 days) and copied to S3 by an archival process. Middle-level rollups (Rpt, Geo, RefData) feed 3rd-level rollups (Pub, Adv, State, City, …). Results leave the cluster as scheduled drops of txt files, over WebDAV, and into QlikView for internal and external reports/portals, and also drive optimization recommendations.]


Partitioned Data Set: approach

• Date/Time as the dimension for partitioning
• Segregate the results of MapReduce jobs into daily and hourly directories
• Each daily/hourly directory is regenerated if the input to the MR job contains data for that day/hour
• Use a revision number for each directory/file; this way multi-stage jobs can overlap during processing (see the sketch below)
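A minimal sketch of how a job can resolve the current and next revision of a daily directory under this convention; the /data/RawLogD location and the helper itself are hypothetical, only the 0215_r4-style naming mirrors the next slide.

    // Hypothetical helper around the revisioned daily directories (e.g. 0215_r4):
    // a job reads the latest existing revision and writes the next one, so
    // overlapping multi-stage jobs never see a half-written directory.
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class RevisionedPartitions {

        // Highest existing revision for the given day, or 0 if none exists yet.
        static int latestRevision(FileSystem fs, Path dataSet, String day) throws IOException {
            int latest = 0;
            FileStatus[] entries = fs.listStatus(dataSet);
            if (entries == null) return 0;
            for (FileStatus entry : entries) {
                String name = entry.getPath().getName();          // e.g. "0215_r4"
                if (name.startsWith(day + "_r")) {
                    int rev = Integer.parseInt(name.substring(name.indexOf("_r") + 2));
                    if (rev > latest) latest = rev;
                }
            }
            return latest;
        }

        public static void main(String[] args) throws IOException {
            FileSystem fs = FileSystem.get(new Configuration());
            Path rawLogs = new Path("/data/RawLogD");             // hypothetical location
            int current = latestRevision(fs, rawLogs, "0215");
            Path readDir  = new Path(rawLogs, "0215_r" + current);
            Path writeDir = new Path(rawLogs, "0215_r" + (current + 1));
            System.out.println("read " + readDir + ", write " + writeDir);
        }
    }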


Partitioned Data Set: processing flow

[Diagram: in HDFS, incoming 15-minute logs from the Ad Serving Platform (LogRpt15 yyyy0215_hhmm) are merged by the IncomingMR Map/Reduce job into the revisioned historic daily raw logs, bumping the affected day to a new revision (RawLogD 0214_r4 -> 0214_r5, alongside 0215_r4 and 0216_r4). The AdvMR Map/Reduce job then regenerates the matching daily advertiser aggregate (AdvD 0214_r3 -> 0214_r4, alongside 0215_r4 and 0216_r4), which feeds reporting and predictions.]


Workflow

Opswise scheduler


Getting Data in and out

• Mix of .NET and Java applications (80%+ .NET, 20% Java). How to make .NET applications write logs to Hadoop?
• Some 3rd-party applications consume the results of MapReduce jobs (e.g. the reporting application). How to make 3rd-party or internal legacy applications read data from Hadoop?


Getting Data in and out: distcp

hadoop distcp <src> <trgt> (see the example below)
  <src> – an HDFS path
  <trgt> – /mnt/abc – a network share

Easy to start – just allocate storage on a network share. But…
• Difficult to maintain if there are more than 10 types of data to copy
• Needs extra storage, outside of HDFS (oxymoron!)
• Extra step in processing
• Clean-up
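For example, the scheduled copy of one day's aggregated output to the share might look like this (the NameNode address and paths are hypothetical); note that distcp runs as a MapReduce job, so the file:// target has to be mounted at the same path on every node:

    hadoop distcp hdfs://namenode:9000/data/AdvD/0215_r4 file:///mnt/abc/AdvD/0215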


Getting Data in and out: WebDAV driver

The WebDAV server is part of the Hadoop source code tree. It needed some minor clean-up and was co-developed with IponWeb.

Available at http://www.hadoop.iponweb.net/Home/hdfs-over-webdav

There are multiple commercial Windows WebDAV clients you can use (we use WebDrive).

Linux mount modules are available from http://dav.sourceforge.net/
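For example, on a Linux client the export might be mounted with davfs (the server name, port, and mount point here are hypothetical):

    mount -t davfs http://webdav-server:9800/ /mnt/hdfs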


Getting Data in and out: WebDAV (Windows/Linux)

[Diagram: Windows/Linux clients (data consumers) run a WebDAV client and send List/getProperties and data requests to the WebDAV server; the WebDAV server talks to the Hadoop/HDFS cluster (master node plus data nodes) through the HDFS API and streams the data back to the clients.]


WebDAV and compression

But your results are compressed… Options:
• Decompress files on HDFS – an extra step again
• Refactor your application to read compressed files…
  o Java – OK
  o .NET – much more difficult; cannot decompress SequenceFiles
  o 3rd party – not possible


WebDAV and compression

Solution – extend WebDAV to support compressed SequenceFiles
• The same driver can provide compressed and uncompressed files
• If a file with the requested name foo.bar exists – return foo.bar as is
• If a file with the requested name foo.bar does not exist – check whether there is a compressed version foo.bar.seq; uncompress it on the fly and return it as if it were foo.bar (see the sketch below)
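A minimal sketch of that fallback logic, assuming SequenceFile values that render as text lines; the real code lives in the WebDAV server project linked earlier, this is only an illustration.

    // Serve foo.bar if it exists; otherwise decode foo.bar.seq on the fly.
    // Assumes the SequenceFile values print as text lines -- an assumption,
    // not the actual WebDAV server implementation.
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.util.ReflectionUtils;

    public class TransparentDecompression {

        public static void serve(FileSystem fs, Configuration conf,
                                 Path requested, OutputStream out) throws IOException {
            if (fs.exists(requested)) {
                // Plain file exists: stream it back unchanged.
                InputStream in = fs.open(requested);
                try {
                    IOUtils.copyBytes(in, out, conf, false);
                } finally {
                    in.close();
                }
                return;
            }
            // Fall back to the compressed SequenceFile sibling and decode on the fly.
            Path seq = new Path(requested.getParent(), requested.getName() + ".seq");
            SequenceFile.Reader reader = new SequenceFile.Reader(fs, seq, conf);
            try {
                Writable key = (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
                Writable value = (Writable) ReflectionUtils.newInstance(reader.getValueClass(), conf);
                while (reader.next(key, value)) {
                    out.write(value.toString().getBytes("UTF-8"));
                    out.write('\n');
                }
            } finally {
                reader.close();
            }
        }
    }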

Outstanding issues:
• Temporary files are created on the Windows client side
• There are no native Hadoop (de)compression codecs on Windows


QlikView Reporting Application

• Load from TXT files is supported
• In-memory DB
• AJAX support for integration into web portals