Hadoop performance optimization tips

Page 1: Hadoop performance optimization tips

Performance Optimization tips

Page 2: Hadoop performance optimization tips

Compression

• Parameter: mapred.compress.map.output: Map Output Compression

• Default: false

• Pros: Faster disk writes, lower disk space usage, and less time spent on data transfer (from mappers to reducers).

• Cons: Overhead in compression at Mappers and decompression at Reducers.

• Suggestions: For large clusters and large jobs, this property should be set to true. The compression codec can also be set through the property mapred.map.output.compression.codec (default: org.apache.hadoop.io.compress.DefaultCodec), as in the sketch below.
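
A minimal sketch, assuming the old (pre-YARN) mapred API; the class name is a placeholder. It enables map-output compression and pins the codec named on this slide:

    import org.apache.hadoop.mapred.JobConf;

    public class CompressionConfig {
        public static void main(String[] args) {
            JobConf conf = new JobConf();
            // Compress intermediate map output to cut disk writes and shuffle traffic.
            conf.setBoolean("mapred.compress.map.output", true);
            // Codec for the compressed map output (the default named above).
            conf.set("mapred.map.output.compression.codec",
                     "org.apache.hadoop.io.compress.DefaultCodec");
        }
    }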

Page 3: Hadoop performance optimization tips

Speculative Execution

• Parameter: mapred.map.tasks.speculative.execution / mapred.reduce.tasks.speculative.execution: enable/disable speculative execution for map/reduce tasks

• Default: true

• Pros: Reduces the job time if the task progress is slow due to memory unavailability or hardware degradation.

• Cons: Increases the job time if the task progress is slow due to complex and large calculations. On a busy cluster, speculative execution can reduce overall throughput, since redundant tasks are executed in an attempt to bring down the execution time of a single job.

• Suggestions: For large jobs where the average task completion time is significant (> 1 hr) due to complex and large calculations, and where high throughput is required, speculative execution should be set to false, as in the sketch below.
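
A minimal sketch, again assuming the old mapred API, that turns speculation off for a long-running, compute-heavy job:

    import org.apache.hadoop.mapred.JobConf;

    public class SpeculationConfig {
        public static void main(String[] args) {
            JobConf conf = new JobConf();
            // Redundant speculative copies of heavy tasks only waste slots, so disable them.
            conf.setMapSpeculativeExecution(false);     // mapred.map.tasks.speculative.execution
            conf.setReduceSpeculativeExecution(false);  // mapred.reduce.tasks.speculative.execution
        }
    }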

Page 4: Hadoop performance optimization tips

Number of Maps/Reducers

• Parameter: mapred.tasktracker.map.tasks.maximum / mapred.tasktracker.reduce.tasks.maximum: maximum concurrent tasks (map/reduce) per TaskTracker

• Default: 2

• Suggestions: Recommended range: (cores_per_node)/2 to 2×(cores_per_node), especially for large clusters. This value should be set according to the hardware specification of the cluster nodes and the resource requirements of the tasks (map/reduce); the sketch below shows the rule of thumb.
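
A minimal sketch of the slide's rule of thumb; coresPerNode is an assumed input, and the chosen value ultimately belongs in mapred-site.xml on each TaskTracker node:

    public class SlotSizing {
        public static void main(String[] args) {
            int coresPerNode = 8;                  // hypothetical node hardware
            int minSlots = coresPerNode / 2;       // lower bound from this slide
            int maxSlots = 2 * coresPerNode;       // upper bound from this slide
            System.out.println("mapred.tasktracker.map.tasks.maximum: between "
                    + minSlots + " and " + maxSlots);
        }
    }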

Page 5: Hadoop performance optimization tips

File block size

• Parameter: dfs.block.size: File system block size

• Default: 67108864 bytes (64 MB)

• Suggestions:

• Small cluster and large data set: default block size will create a large number of map tasks.

– e.g. Input data size = 160 GB and dfs.block.size = 64 MB, then the minimum no. of maps = (160*1024)/64 = 2560 maps.

– If dfs.block.size = 128 MB, minimum no. of maps = (160*1024)/128 = 1280 maps.

– If dfs.block.size = 256 MB, minimum no. of maps = (160*1024)/256 = 640 maps.

• In a small cluster (6-10 nodes) the map task creation overhead is considerable.

• So dfs.block.size should be large in this case but small enough to utilize all the cluster resources.

• The block size should be set according to the size of the cluster, the map task complexity, the map task capacity of the cluster, and the average size of the input files; the sketch below reproduces the arithmetic.
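
A minimal sketch reproducing the arithmetic above: the minimum number of maps is roughly the input size divided by dfs.block.size:

    public class MapCountEstimate {
        public static void main(String[] args) {
            long inputMB = 160L * 1024;            // 160 GB of input, as in the example
            for (long blockMB : new long[] {64, 128, 256}) {
                // Prints 2560, 1280 and 640 maps respectively.
                System.out.println("dfs.block.size = " + blockMB + " MB -> "
                        + (inputMB / blockMB) + " maps");
            }
        }
    }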

Page 6: Hadoop performance optimization tips

Sort size

• Parameter: io.sort.mb: Buffer size (MBs) for sorting

• Default: 100

• Suggestions:

• For large jobs (in which the map output is very large), this value should be increased, keeping in mind that it will increase the memory required by each map task.

• So the increase in this value should be made according to the available memory at the node.

• The greater the value of io.sort.mb, the fewer the spills to disk, saving disk writes (see the sketch below).
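
A minimal sketch of raising io.sort.mb for a job with large map output; the 200 MB figure is illustrative and must fit inside the child task's heap:

    import org.apache.hadoop.mapred.JobConf;

    public class SortBufferConfig {
        public static void main(String[] args) {
            JobConf conf = new JobConf();
            // Default is 100; every map task reserves this much memory for sorting.
            conf.setInt("io.sort.mb", 200);
        }
    }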

Page 7: Hadoop performance optimization tips

Sort factor

• Parameter: io.sort.factor: Stream merge factor

• Default: 10

• Suggestions: For large jobs (in which the map output is very large and the number of maps is also large) with a large number of spills to disk, the value of this property should be increased.

• The number of input streams (files) to be merged at once in the map/reduce tasks, as specified by io.sort.factor, should be set to a sufficiently large value (for example, 100) to minimize disk accesses.

• Increasing io.sort.factor also benefits the merge at the reducers, since the last batch of streams (up to io.sort.factor of them) is fed to the reduce function without a final merge pass, saving merge time.
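
A minimal sketch of widening the merge factor for a spill-heavy job, using the example value from this slide:

    import org.apache.hadoop.mapred.JobConf;

    public class MergeFactorConfig {
        public static void main(String[] args) {
            JobConf conf = new JobConf();
            // Default is 10; merge up to 100 spill files in a single pass.
            conf.setInt("io.sort.factor", 100);
        }
    }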

Page 8: Hadoop performance optimization tips

JVM reuse

• Parameter: mapred.job.reuse.jvm.num.tasks: number of tasks to run per JVM

• Default: 1

• Suggestions: The minimum overhead of JVM creation for each task is around 1 second. So for tasks that live for seconds or a few minutes and have lengthy initialization, this value can be increased to gain performance, as in the sketch below.
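
A minimal sketch, assuming the old mapred API, that lets each JVM run many short tasks instead of one:

    import org.apache.hadoop.mapred.JobConf;

    public class JvmReuseConfig {
        public static void main(String[] args) {
            JobConf conf = new JobConf();
            // Sets mapred.job.reuse.jvm.num.tasks; -1 means unlimited reuse.
            conf.setNumTasksToExecutePerJvm(-1);
        }
    }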

Page 9: Hadoop performance optimization tips

Reduce parallel copies

• Parameter: mapred.reduce.parallel.copies: the number of threads used to copy map outputs to the reducer in parallel

• Default: 5

• Suggestions: For large jobs (in which the map output is very large), the value of this property can be increased, keeping in mind that it will increase total CPU usage (see the sketch below).
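
A minimal sketch of doubling the copier threads for a job with very large map output; 10 is an illustrative value:

    import org.apache.hadoop.mapred.JobConf;

    public class ShuffleCopyConfig {
        public static void main(String[] args) {
            JobConf conf = new JobConf();
            // Default is 5; more threads speed the shuffle but raise CPU usage.
            conf.setInt("mapred.reduce.parallel.copies", 10);
        }
    }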

Page 10: Hadoop performance optimization tips

The Other Threads

• dfs.namenode.handler.count / mapred.job.tracker.handler.count: server threads that handle remote procedure calls (RPCs) – Default: 10

– Suggestions: This can be increased for larger clusters (50-64).

• dfs.datanode.handler.count: server threads that handle remote procedure calls (RPCs) – Default: 3

– Suggestions: This can be increased for a larger number of HDFS clients (6-8).

• tasktracker.http.threads: number of worker threads on the HTTP server on each TaskTracker – Default: 40

– Suggestions: This can be increased for larger clusters (50). The sketch below gathers these values.
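
A minimal sketch gathering the values suggested above. These are daemon-side settings that normally live in hdfs-site.xml / mapred-site.xml; the Java form is only to show the names and values together:

    import org.apache.hadoop.conf.Configuration;

    public class DaemonThreadConfig {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            conf.setInt("dfs.namenode.handler.count", 50);  // larger clusters: 50-64
            conf.setInt("dfs.datanode.handler.count", 6);   // many HDFS clients: 6-8
            conf.setInt("tasktracker.http.threads", 50);    // larger clusters: ~50
        }
    }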

Page 11: Hadoop performance optimization tips

Revelation: Temporary space

• Temporary space allocation:

– Jobs which generate large intermediate data (map output) should have enough temporary space, controlled by the property mapred.local.dir.

– This property specifies a list of directories where MapReduce stores intermediate data for jobs.

– The data is cleaned-up after the job completes.

– By default, replication factor for file storage on HDFS is 3, which means that every file has three replicas.

– As a rule of thumb, at least 25% of the total hard disk should be allocated for intermediate temporary output.

– So effectively, only ¼ of the hard disk space is available for business use: of the remaining 75%, three-way replication leaves about 25% of raw capacity for unique data.

– The default value for mapred.local.dir is ${hadoop.tmp.dir}/mapred/local.

– So if mapred.local.dir is not set, hadoop.tmp.dir must have enough space to hold job’s intermediate data.

– If the node doesn’t have enough temporary space, the task attempt will fail and a new attempt will be started, impacting performance. The sketch below spreads mapred.local.dir across disks.
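
A minimal sketch of spreading intermediate data across several local disks; the paths are hypothetical and should each sit on a separate physical disk:

    import org.apache.hadoop.conf.Configuration;

    public class LocalDirConfig {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Comma-separated list; MapReduce spreads spills across these directories.
            conf.set("mapred.local.dir",
                    "/disk1/mapred/local,/disk2/mapred/local,/disk3/mapred/local");
        }
    }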

Page 12: Hadoop performance optimization tips

Java - JVM

• JVM tuning:

– Besides normal Java code optimizations, the JVM settings for each child task also affect the processing time.

– On the slave node, the TaskTracker and DataNode daemons use 1 GB of RAM each.

– Effective use of the remaining RAM as well as choosing the right GC mechanism for each Map or Reduce task is very important for maximum utilization of hardware resources.

– The default max RAM for child tasks is 200 MB, which might be insufficient for many production-grade jobs.

– The JVM settings for child tasks are governed by mapred.child.java.opts property.

– Use a 64-bit JDK 1.6

• -XX:+UseCompressedOops is helpful in dealing with OOM errors

– Do remember to raise the Linux open file descriptor limit:

• Check: more /proc/sys/fs/file-max

• Change: vi /etc/sysctl.conf -> fs.file-max = 331287

• Set: sysctl -p

– Set java.net.preferIPv4Stack to true, to avoid timeouts in cases where the OS/JVM picks up an IPv6 address and must resolve the hostname. The sketch below combines these child-JVM settings.
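
A minimal sketch combining this slide's child-JVM advice: a heap larger than the 200 MB default, compressed oops, and the IPv4 preference. The 512 MB figure is illustrative:

    import org.apache.hadoop.mapred.JobConf;

    public class ChildJvmConfig {
        public static void main(String[] args) {
            JobConf conf = new JobConf();
            conf.set("mapred.child.java.opts",
                    "-Xmx512m -XX:+UseCompressedOops -Djava.net.preferIPv4Stack=true");
        }
    }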

Page 13: Hadoop performance optimization tips

Logging: a friend to developers, a foe in production

• Default - INFO level

– dfs.namenode.logging.level

– hadoop.job.history

– hadoop.logfile.size/count

Page 14: Hadoop performance optimization tips

Static Data strategies

• Available Approaches

– JobConf.set("key", "value")

– Distributed cache

– HDFS shared file

• Suggested approaches if the above are not efficient (see the sketch after this list)

– Memcached

– Tokyo Cabinet / Tokyo Tyrant

– Berkeley DB

– HBase

– MongoDB
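
A minimal sketch of the DistributedCache approach listed above; the HDFS path is hypothetical. It suits read-only artifacts shipped once per node rather than per task:

    import java.net.URI;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.mapred.JobConf;

    public class StaticDataConfig {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf();
            // Ship a lookup file to every node before the job starts.
            DistributedCache.addCacheFile(new URI("/shared/lookup.dat"), conf);
            // Tasks later read the local copy via DistributedCache.getLocalCacheFiles(conf).
        }
    }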

Page 15: Hadoop performance optimization tips

Tuning as suggested by Arun C Murthy

• Tell HDFS and Map-Reduce about your network! – Rack locality script: topology.script.file.name

• Number of maps – Data locality

• Number of reduces – You don’t need a single output file!

• Amount of data processed per Map - Consider fatter maps, Custom input format

• Combiner - multi-level combiners at both Map and Reduce

• Check to ensure the combiner is useful!

• Map-side sort – io.sort.mb, io.sort.factor, io.sort.record.percent, io.sort.spill.percent

• Shuffle – Compression for map-outputs – mapred.compress.map.output, mapred.map.output.compression.codec, lzo via libhadoop.so, tasktracker.http.threads

– mapred.reduce.parallel.copies, mapred.reduce.copy.backoff, mapred.job.shuffle.input.buffer.percent, mapred.job.shuffle.merge.percent, mapred.inmem.merge.threshold, mapred.job.reduce.input.buffer.percent

• Compress the job output

• Miscellaneous -Speculative execution, Heap size for the child, Re-use jvm for maps/reduces, Raw Comparators

Page 16: Hadoop performance optimization tips

Anti-Patterns

• Applications not using a higher-level interface such as Pig unless really necessary.

• Processing thousands of small files (sized less than 1 HDFS block, typically 128MB) with one map processing a single small file.

• Processing very large data-sets with a small HDFS block size, that is, 128 MB, resulting in tens of thousands of maps.

• Applications with a large number (thousands) of maps with a very small runtime (e.g., 5s).

• Straightforward aggregations without the use of the Combiner.

• Applications with greater than 60,000-70,000 maps.

• Applications processing large data-sets with very few reduces (e.g., 1).

Page 17: Hadoop performance optimization tips

Anti-Patterns

• Applications using a single reduce for total order among the output records.

• Applications processing data with a very large number of reduces, such that each reduce processes less than 1-2 GB of data.

• Applications writing out multiple, small, output files from each reduce.

• Applications using the DistributedCache to distribute a large number of artifacts and/or very large artifacts (hundreds of MBs each).

• Applications using tens or hundreds of counters per task.

• Applications doing screen scraping of JobTracker web-ui for status of queues/jobs or worse, job-history of completed jobs.

• Workflows comprising hundreds or thousands of small jobs processing small amounts of data.

Page 18: Hadoop performance optimization tips

End of session

Day – 3: Performance Optimization tips