Making sense of Apache Bigtop's role in ODPi and how it matters to Apache Apex


Making sense of Apache Bigtop, ODPi and why it all matters to Apache Apex

Roman Shaposhnik, rvs@apache.org, @rhatr
Director of Open Source Strategy, Pivotal Inc.

A slide deck built via the "Apache Way"
• Bigtop community contributors
• Roman Shaposhnik
• Konstantin Boudnik
• Nate D'Amico
• Evans Ye & Darren Chen (Trend Micro)

What is Apache Bigtop?
• Apache Bigtop is to Hadoop what Debian is to Linux
• A 100% open, community-driven distribution of a big data management platform based on Apache Hadoop
• A place where all communities around big data come together
• The thing everybody (Pivotal, Cloudera, Hortonworks, WANdisco, IBM, Amazon, Trend Micro) is building off of
• A cutting-edge, quickly evolving distribution and a set of tools

[Diagram: GNU software atop the Linux kernel, analogous to the Hadoop ecosystem (Pig, Hive, Spark) atop Hadoop (HDFS + YARN + MR)]

ODPi is a nonprofit organization committed to simplification & standardization of the big data ecosystem with a common reference specification called ODPi Core. As a shared industry effort, ODPi is focused on promoting and advancing the state of Apache Hadoop® and Big Data technologies for the Enterprise.

[Timeline: February 2015, September 2015, December 2015]

What has ODPi done so far (1.0.1)?
• Runtime specification
  • https://github.com/odpi/specs/blob/master/ODPi-Runtime.md
• Validation test suite
  • http://repo.odpi.org/ODPi/1.0/acceptance-tests/
• Reference implementation binaries
  • http://repo.odpi.org/ODPi/1.0/{centos6, ubuntu-14.04}

What are we working on?
• Operations specification
  • https://github.com/odpi/specs/blob/master/ODPi-Operations.md
• ISV "ODPi compatible" policy
• Expanding ODPi Core beyond Apache Hadoop & Ambari
  • Hive
  • ????
• How can you help?
  • Share use cases
  • Test against the reference implementation
  • Contribute to upstream ASF projects

What's in Bigtop?

• A set of binary packages
  • just like CDH/PHD/HDP/ODPi/etc.
• Integration code
• Packaging code
• Deployment code
• Orchestration code
• Validation code
• Continuous Integration infrastructure
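These pieces map directly onto directories in the Bigtop source tree; a quick way to get oriented is sketched below (the `ls` output is abridged and the comments are an informal gloss, not official descriptions):

  $ git clone https://github.com/apache/bigtop.git && cd bigtop
  $ ls
  bigtop-deploy/     # deployment & orchestration: Puppet code, VM/Docker provisioners
  bigtop-packages/   # packaging code: RPM/DEB specs and build scripts per component
  bigtop-tests/      # validation code: iTest framework and smoke tests
  bigtop.bom         # the "bill of materials": which components and versions make up the stack
  ...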

Integration/packaging

• Linux packages
  • RPM, DEB
  • RHEL/CentOS (Fedora), SLES (openSUSE), Debian, Ubuntu
  • VirtualBox, VMware, etc. VM images
• Challenge: Linux packaging is node-centric
  • "smart" tarballs
  • Docker or BOSH images
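As a rough sketch of what consuming those packages looks like on an RPM-based node (the repo definition is illustrative: point baseurl at the Bigtop release repo for your distro and architecture, and package names can vary by release):

  $ sudo tee /etc/yum.repos.d/bigtop.repo <<'EOF'
  [bigtop]
  name=Apache Bigtop
  baseurl=<Bigtop release repo URL for your distro/arch>
  enabled=1
  gpgcheck=0
  EOF
  $ sudo yum install -y hadoop-hdfs-namenode hadoop-yarn-resourcemanager hadoop-conf-pseudo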

Integration testing based on iTest

• Clean-room provisioning
  • these ain't your grandpa's unit tests
• Versioned test artifacts
• JVM-based test artifacts
• Matching stacks of components and integration tests
• Plug'n'play architecture: Gradle/Groovy, JARs/artifacts
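For a feel of how these tests are driven, here is a hedged sketch of running a couple of Bigtop smoke-test suites from a source checkout against an already-deployed cluster; exact Gradle task names and properties differ between Bigtop releases:

  $ cd bigtop
  # run the HDFS and MapReduce smoke-test suites; -Psmoke.tests asks the build
  # to execute tests against the live cluster rather than just compile them
  $ ./gradlew bigtop-tests:smoke-tests:hdfs:test \
              bigtop-tests:smoke-tests:mapreduce:test \
              -Psmoke.tests --info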

Puppet 3.x deployment

• Master-less Puppet
  $ puppet apply bigtop-deploy/puppet/manifests/site.pp   # on each node
• Cluster topology is kept in Hiera:

  bigtop::hadoop_head_node: "hadoopmaster.example.com"
  hadoop::hadoop_storage_dirs:
    - "/mnt"
  hadoop_cluster_node::cluster_components:
    - yarn
    - zookeeper
  bigtop::bigtop_repo_uri: "http://bigtop-repos.s3.amazonaws.com/releases/1.1.0/…"
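Putting the two together, a per-node run might look roughly like this; the Hiera data file location and the --modulepath layout follow recent Bigtop checkouts and may differ in older releases:

  # 1. drop the Hiera data above into the deploy tree (site.yaml is the usual name)
  $ cp site.yaml bigtop-deploy/puppet/hieradata/site.yaml
  # 2. apply the master-less manifest on every node of the cluster
  $ puppet apply -d \
      --modulepath="bigtop-deploy/puppet/modules:/etc/puppet/modules" \
      bigtop-deploy/puppet/manifests/site.pp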

One click Bigtop provisioning
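Concretely, "one click" here is the provisioner shipped in the Bigtop source tree; a hedged example using the Docker-backed flavour (the script's location has moved between releases, and the flags shown are illustrative):

  $ cd bigtop/bigtop-deploy/vm/vagrant-puppet-docker   # provisioner/docker in newer releases
  $ ./docker-hadoop.sh --create 3                      # spin up a 3-node, fully deployed cluster
  $ ./docker-hadoop.sh --destroy                       # tear it down when done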

Who is this for?

• For Hadoop app developers, cluster admins, and users
  • Run a Hadoop cluster to test your code on
  • Try & test configurations before applying them to production
  • Play around with the Bigtop big data stack
• For contributors
  • Easy to test your packaging, deployment, and testing code
• For vendors
  • CI out of the box -> patching upstream code made easier

Works great, but…

• Need to add the Vagrant public key into Docker images
• Too many issues with the auto-created boot2docker hosting VM
• A bug in the Vagrant Docker provider has been open for almost 2 years
  • "Waiting for machine to boot" hangs indefinitely
• Cannot share the same code across different providers anyway
• Not all Docker options are supported in the Vagrantfile
• Does not support Docker Swarm
• Slow

Docker Compose

Implementation

• Create Docker containers:
  $ docker-compose scale bigtop=3
• Volumes:
  • Bigtop Puppet configurations
  • Bigtop Puppet code
  • /etc/hosts
• Compatible with Docker Machine and Swarm
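A minimal sketch of what such a compose file could look like (v1 compose format, as used at the time; the image name and volume paths are illustrative, not the exact ones from the Bigtop provisioner):

  $ cat > docker-compose.yml <<'EOF'
  bigtop:
    image: bigtop/deploy:centos-7                     # assumed image name
    privileged: true
    volumes:
      - ../bigtop-deploy/puppet:/bigtop-puppet        # Bigtop Puppet code
      - ./config/hiera.yaml:/etc/puppet/hiera.yaml    # Bigtop Puppet configuration
      - ./config/hosts:/etc/hosts                     # shared /etc/hosts across containers
  EOF
  $ docker-compose scale bigtop=3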

Docker Machine and Swarm
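A hedged sketch of that combination: Docker Machine stands up a classic Swarm, and the same Compose workflow then scales containers across the Swarm hosts (driver, host names, and token handling here are placeholders):

  $ TOKEN=$(docker run --rm swarm create)              # obtain a discovery token for classic Swarm
  $ docker-machine create -d virtualbox --swarm --swarm-master \
      --swarm-discovery "token://$TOKEN" swarm-master
  $ docker-machine create -d virtualbox --swarm \
      --swarm-discovery "token://$TOKEN" swarm-node-1
  $ eval "$(docker-machine env --swarm swarm-master)"  # point the Docker client at the Swarm
  $ docker-compose scale bigtop=3                      # same workflow, now spread across hosts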

Juju orchestration

$ juju bootstrap
$ juju deploy hadoop-processing

https://jujucharms.com/hadoop-processing/

Juju orchestration

$ juju add-unit slave -n 2

Juju orchestration

$ juju action do namenode/0 smoke-test
$ juju action do resourcemanager/0 smoke-test
$ watch -n 0.5 juju action status

Early Mission Accomplished
Foundation for commercial Hadoop distros/services

Leveraged by app providers…

Blueprints for data engineering

• BigPetStore
  • Data generator
  • Examples using tools in the Hadoop ecosystem to process data
  • Build system and tests for integrating tools and multiple JVM languages
  • Started by Dr. Jay Vyas, principal software engineer at Red Hat, Inc.

Data model

Transaction Purchase Model

Lambda/Stream Architectures

HDFS + ZooKeeper + …

New focus and target end users

Data engineers vs distro builders

Enhance Operations/Deployment

Reference implementations & tutorials

Data, data, data… smarter/realistic test data:
  - bigpetstore
  - bigtop-bazaar
  - weather data gen

Tutorial/learning data sets:
  - githubarchive.org
  - more TBD…

Thank You, Q&A