Apache Mesos
Transcript of Apache Mesos
![Page 1: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/1.jpg)
Benjamin Hindman – @benh
Apache Mesos
http://incubator.apache.org/mesos
@ApacheMesos
![Page 2: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/2.jpg)
history
Berkeley research project including Benjamin Hindman, Andy Konwinski, Matei Zaharia, Ali Ghodsi, Anthony D. Joseph, Randy Katz, Scott Shenker, Ion Stoica
http://incubator.apache.org/mesos/research.html
![Page 3: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/3.jpg)
Mesos aims to make it easier to build distributed applications/frameworks and share cluster resources
![Page 4: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/4.jpg)
applications/frameworks
services analytics
![Page 5: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/5.jpg)
analytics services
applications/frameworks
![Page 6: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/6.jpg)
how?
[Diagram: Mesos spanning a cluster of nodes, with Hadoop and other services sharing the same nodes]
![Page 7: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/7.jpg)
level of abstraction
more easily share the resources via multi-tenancy and elasticity (improve utilization)
run on bare metal or virtual machines – develop against the Mesos API, run in a private datacenter (Twitter), or the cloud, or both!
![Page 8: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/8.jpg)
[Diagram: static partitioning vs. sharing with Mesos – Hadoop, Spark, and a service on a shared cluster]
![Page 9: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/9.jpg)
features
APIs in C++, Java, Python
high availability via ZooKeeper
isolation via Linux control groups (LXC)
![Page 10: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/10.jpg)
in progress
» official Apache release
» more Linux cgroup support (OOM and I/O, in particular networking)
» resource usage monitoring, reporting
» new allocators (priority based, usage based)
» new frameworks (Storm)
» scheduler management (launching, watching, re-launching, etc.)
![Page 11: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/11.jpg)
400+ nodes running production services
genomics researchers using Hadoop and Spark
Spark in use by Yahoo! Research
Spark for analytics
Hadoop and Spark used by machine learning researchers
Your Name Here
![Page 12: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/12.jpg)
demonstration
![Page 13: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/13.jpg)
linux environment
$ yum install -y gcc-c++
$ yum install -y java-1.6.0-openjdk-devel.x86_64
$ yum install -y make.x86_64
$ yum install -y patch.x86_64
$ yum install -y python26-devel.x86_64
$ yum install -y ant.noarch
![Page 14: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/14.jpg)
get mesos
$ wget http://people.apache.org/~benh/mesos-0.9.0-incubating-RC3/mesos-0.9.0-incubating.tar.gz
$ tar zxvf mesos-0.9.0-incubating.tar.gz
$ cd mesos-0.9.0
![Page 15: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/15.jpg)
build mesos
$ mkdir build
$ cd build
$ ../configure.amazon-linux-64
$ make
$ make install
![Page 16: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/16.jpg)
deploy mesos (1)
/usr/local/var/mesos/deploy/masters:
ec2-50-17-28-135.compute-1.amazonaws.com
/usr/local/var/mesos/deploy/slaves:
ec2-184-73-142-43.compute-1.amazonaws.com
ec2-107-22-145-31.compute-1.amazonaws.com
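The two host-list files above can be generated with a short script. The hostnames are the demo's EC2 nodes; the output directory below is a local stand-in for /usr/local/var/mesos/deploy so the sketch runs anywhere:

```shell
# Generate the Mesos deploy host lists from the slides.
# DEPLOY_DIR defaults to a local directory for illustration; on the
# master it would be /usr/local/var/mesos/deploy.
DEPLOY_DIR=${DEPLOY_DIR:-./mesos-deploy}
mkdir -p "$DEPLOY_DIR"

echo "ec2-50-17-28-135.compute-1.amazonaws.com" > "$DEPLOY_DIR/masters"

cat > "$DEPLOY_DIR/slaves" <<'EOF'
ec2-184-73-142-43.compute-1.amazonaws.com
ec2-107-22-145-31.compute-1.amazonaws.com
EOF
```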
![Page 17: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/17.jpg)
deploy mesos (2)
on slaves (i.e., ec2-184-73-142-43.compute-1.amazonaws.com, ec2-107-22-145-31.compute-1.amazonaws.com)
/usr/local/var/mesos/conf/mesos.conf:
master=ec2-50-17-28-135.compute-1.amazonaws.com
![Page 18: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/18.jpg)
deploy mesos (3)
$ /usr/local/sbin/mesos-start-cluster.sh
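mesos-start-cluster.sh reads the masters and slaves lists and launches the daemons on each machine. A hypothetical sketch of that idea (the real script's details differ, and `echo` stands in for `ssh` so the sketch runs standalone; the example.com hostnames are made up):

```shell
# Hypothetical sketch of a start-cluster script: read the host lists
# and launch a daemon per host (echo stands in for ssh).
DEMO_DIR=${DEMO_DIR:-./demo-deploy}
mkdir -p "$DEMO_DIR"
echo "master-host.example.com" > "$DEMO_DIR/masters"
printf 'slave-1.example.com\nslave-2.example.com\n' > "$DEMO_DIR/slaves"

started=0
while read -r host; do
  echo "ssh $host mesos-master"   # would start the master daemon over ssh
  started=$((started + 1))
done < "$DEMO_DIR/masters"

while read -r host; do
  echo "ssh $host mesos-slave"    # would start a slave daemon over ssh
  started=$((started + 1))
done < "$DEMO_DIR/slaves"

echo "started $started daemons"
```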
![Page 19: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/19.jpg)
build hadoop
$ make hadoop
$ mv hadoop/hadoop-0.20.205.0 /etc/hadoop
$ cp protobuf-2.4.1.jar /etc/hadoop
$ cp src/mesos-0.9.0.jar /etc/hadoop
![Page 20: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/20.jpg)
configure hadoop (1)
conf/mapred-site.xml:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>ip-10-108-207-105.ec2.internal:9001</value>
  </property>
  <property>
    <name>mapred.jobtracker.taskScheduler</name>
    <value>org.apache.hadoop.mapred.MesosScheduler</value>
  </property>
  <property>
    <name>mapred.mesos.master</name>
    <value>ip-10-108-207-105.ec2.internal:5050</value>
  </property>
</configuration>
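Since the JobTracker's internal hostname appears in several values, the file can be templated with a small script. A sketch, assuming the demo's hostname and ports (9001 for the JobTracker, 5050 for the Mesos master) and writing to a local path for illustration:

```shell
# Render mapred-site.xml from the demo's values. JT_HOST and the ports
# come from the slides; OUT is a local stand-in for conf/mapred-site.xml.
JT_HOST=${JT_HOST:-ip-10-108-207-105.ec2.internal}
OUT=${OUT:-./mapred-site.xml}
cat > "$OUT" <<EOF
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>${JT_HOST}:9001</value>
  </property>
  <property>
    <name>mapred.jobtracker.taskScheduler</name>
    <value>org.apache.hadoop.mapred.MesosScheduler</value>
  </property>
  <property>
    <name>mapred.mesos.master</name>
    <value>${JT_HOST}:5050</value>
  </property>
</configuration>
EOF
```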
![Page 21: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/21.jpg)
configure hadoop (2)
conf/hadoop-env.sh:
#!/bin/sh
export JAVA_HOME=/usr/lib/jvm/jre
# Google protobuf (necessary for running the MesosScheduler).
export PROTOBUF_JAR=${HADOOP_HOME}/protobuf-2.4.1.jar

# Mesos.
export MESOS_JAR=${HADOOP_HOME}/mesos-0.9.0.jar

# Native Mesos library.
export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so

export HADOOP_CLASSPATH=${HADOOP_HOME}/build/contrib/mesos/classes:${MESOS_JAR}:${PROTOBUF_JAR}
...
![Page 22: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/22.jpg)
configure hadoop (3)
conf/core-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://ip-10-108-207-105.ec2.internal:9000</value>
  </property>
</configuration>
![Page 23: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/23.jpg)
configure hadoop (4)
conf/masters:
ec2-50-17-28-135.compute-1.amazonaws.com
conf/slaves:
ec2-184-73-142-43.compute-1.amazonaws.com
ec2-107-22-145-31.compute-1.amazonaws.com
![Page 25: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/25.jpg)
starting hadoop
$ pwd
/etc/hadoop
$ ./bin/hadoop jobtracker
![Page 26: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/26.jpg)
running wordcount
$ ./bin/hadoop jar hadoop-examples-0.20.205.0.jar wordcount macbeth.txt output
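For a quick sanity check of what the wordcount job computes, the same counting can be done in plain shell on a tiny sample (the sample text below is made up so the sketch is self-contained; macbeth.txt is whatever input you staged):

```shell
# Plain-shell word count: the same computation the Hadoop example performs,
# on a locally generated sample file.
printf 'to be or not to be\n' > sample.txt
tr -s '[:space:]' '\n' < sample.txt | sort | uniq -c | sort -rn
```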
![Page 27: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/27.jpg)
starting another hadoop
conf/mapred-site.xml (second instance, with a different JobTracker port and HTTP ports):

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>ip-10-108-207-105.ec2.internal:9002</value>
  </property>
  <property>
    <name>mapred.job.tracker.http.address</name>
    <value>0.0.0.0:50032</value>
  </property>
  <property>
    <name>mapred.task.tracker.http.address</name>
    <value>0.0.0.0:50062</value>
  </property>
</configuration>
![Page 28: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/28.jpg)
get and build spark
$ git clone git://github.com/mesos/spark.git
$ cd spark
$ git checkout --track origin/mesos-0.9
$ sbt/sbt compile
![Page 29: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/29.jpg)
configure spark
$ cp conf/spark-env.sh.template conf/spark-env.sh
conf/spark-env.sh:
#!/bin/sh
export SCALA_HOME=/root/scala-2.9.1-1
export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so
export SPARK_MEM=1g
![Page 30: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/30.jpg)
run spark shell
$ pwd
/root/spark
$ MASTER=$HOSTNAME:5050 ./spark-shell
![Page 31: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/31.jpg)
setting log_dir
on slaves (i.e., ec2-184-73-142-43.compute-1.amazonaws.com, ec2-107-22-145-31.compute-1.amazonaws.com)
/usr/local/var/mesos/conf/mesos.conf:
master=ec2-50-17-28-135.compute-1.amazonaws.com
log_dir=/tmp/mesos
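The updated slave configuration can be written out in one step. A sketch using the values from the slides, with a local output path standing in for /usr/local/var/mesos/conf/mesos.conf on each slave:

```shell
# Render the updated per-slave mesos.conf (master and log_dir values
# from the slides). CONF is a local stand-in for the real path.
CONF=${CONF:-./mesos.conf}
cat > "$CONF" <<'EOF'
master=ec2-50-17-28-135.compute-1.amazonaws.com
log_dir=/tmp/mesos
EOF
```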
![Page 32: Apache Mesos](https://reader033.fdocuments.in/reader033/viewer/2022051004/568163be550346895dd4db9c/html5/thumbnails/32.jpg)
re-deploy mesos
$ /usr/local/sbin/mesos-stop-slaves.sh
$ /usr/local/sbin/mesos-start-slaves.sh