101 ways to configure kafka - badly (Kafka Summit)


Transcript of 101 ways to configure kafka - badly (Kafka Summit)

Page 1: 101 ways to configure kafka - badly (Kafka Summit)

101* ways to configure Kafka - badly

Audun Fauchald Strand, Lead Developer Infrastructure
@audunstrand
bio: gof, mq, ejb, mda, wli, bpel, eda, soa, ws*, esb, ddd

Henning Spjelkavik, Architect
@spjelkavik
bio: Skiinfo (Vail Resorts), FINN.no
enjoys reading jstacks

Page 2: 101 ways to configure kafka - badly (Kafka Summit)

agenda

introduction to kafka

kafka @ finn.no

101* mistakes

questions

“From a certain point onward there is no longer any turning back. That is the point that must be reached.”
― Franz Kafka, The Trial

Page 3: 101 ways to configure kafka - badly (Kafka Summit)

Top 5

1. no consideration of data on the inside vs outside

2. schema not externally defined
3. same config for every client/topic
4. 128 partitions as default config
5. running on 8 overloaded nodes

Page 4: 101 ways to configure kafka - badly (Kafka Summit)

FINN.no

2nd largest website in Norway

classified ads (eBay and Zillow in one)

60 million pageviews a day

80 microservices

130 developers

1000 deploys to production a week

6 minutes from commit to deploy (median)

Page 5: 101 ways to configure kafka - badly (Kafka Summit)


FINN.no is a part of Schibsted Media Group: 6800 people in 30 countries

Page 6: 101 ways to configure kafka - badly (Kafka Summit)

kafka @ finn.no

Page 7: 101 ways to configure kafka - badly (Kafka Summit)

kafka @finn.no

architecture

use cases

tools

Page 8: 101 ways to configure kafka - badly (Kafka Summit)


in the beginning ...

The architecture governance board decided to use RabbitMQ as the message queue.

Kafka was installed for a proof of concept after developers spotted it in January 2013.

Page 9: 101 ways to configure kafka - badly (Kafka Summit)


2013 - POC

“High” volume

Stream of classified ads

Ad matching

Ad indexed

[Diagram: 8 nodes (mod01-mod08), each running ZooKeeper and Kafka, split across dc 1 and dc 2]

Version 0.8.1

4 partitions

common client java library

thrift

Page 10: 101 ways to configure kafka - badly (Kafka Summit)


2014 - Adoption and complaining: low volume / high reliability

Ad Insert

Product Orchestration

Payment

Build Pipeline

click streams

[Diagram: 8 nodes (mod01-mod08), each running ZooKeeper and Kafka, split across dc 1 and dc 2]

Version 0.8.1

4 partitions

experimenting with configuration

common java library

Page 11: 101 ways to configure kafka - badly (Kafka Summit)


tooling: alerting

Page 12: 101 ways to configure kafka - badly (Kafka Summit)


2015 - Migration and consolidation: “reliable messaging”

asynchronous communication between services

store and forward

zipkin

slack notifications

Version 0.8.2

5-20 partitions

multiple configurations

[Diagram: 5 brokers (broker01-broker05), each running ZooKeeper and Kafka, split across dc 1 and dc 2]

Page 13: 101 ways to configure kafka - badly (Kafka Summit)


tooling: Grafana dashboard visualizing JMX stats

kafka-manager

kafkacat

Page 14: 101 ways to configure kafka - badly (Kafka Summit)


2016 - Confluent

[Diagram: 5 Kafka brokers (broker01-broker05) and a separate 5-node ZooKeeper ensemble (zk01-zk05)]

platform

schema registry

data replication

kafka connect

kafka streams

Page 15: 101 ways to configure kafka - badly (Kafka Summit)

101* mistakes

“God gives the nuts, but he does not crack them.” ― Franz Kafka

Page 16: 101 ways to configure kafka - badly (Kafka Summit)

Pattern Language

why is it a mistake

what is the consequence

what is the correct solution

what has finn.no done

Page 17: 101 ways to configure kafka - badly (Kafka Summit)

Top 5

1. no consideration of data on the inside vs outside

2. schema not externally defined
3. same config for every client/topic
4. 128 partitions as default config
5. running on 8 overloaded nodes

Page 18: 101 ways to configure kafka - badly (Kafka Summit)


mistake: no consideration of data on the inside vs outside

https://flic.kr/p/6MjhUR

Page 19: 101 ways to configure kafka - badly (Kafka Summit)


why is it a mistake
everything published on Kafka (0.8.2) is visible to any client that can access the cluster

Page 20: 101 ways to configure kafka - badly (Kafka Summit)


what is the consequence
direct reads across services/domains are quite normal in legacy and/or enterprise systems

coupling makes it hard to make changes

unknown and unwanted coupling has a cost

Kafka had no security per topic - you must add that yourself

Page 21: 101 ways to configure kafka - badly (Kafka Summit)


what is the correct solution
Consider what is data on the inside versus data on the outside

Convention for what is private data and what is public data

If you want to change your internal representation often, map it before publishing it publicly (Anti corruption layer)

Page 22: 101 ways to configure kafka - badly (Kafka Summit)


what has finn.no done
Decided on a naming convention (e.g. Public.xyzzy) for public topics

Communicates the intention (contract)
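As a rough sketch of how such a convention can be encoded in a shared client library (the Public. prefix is from the slide; the helper class and method names are hypothetical):

    // Hypothetical helper encoding a FINN.no-style convention: only topics with
    // the agreed "Public." prefix are data on the outside; everything else is
    // private to the owning service.
    public final class TopicNames {
        private static final String PUBLIC_PREFIX = "Public.";

        private TopicNames() {}

        public static boolean isPublic(String topic) {
            return topic.startsWith(PUBLIC_PREFIX);
        }

        // Internal topics carry the owning service's name, so any cross-service
        // read is visible at a glance in code review.
        public static String internal(String service, String name) {
            return service + "." + name;
        }
    }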

Page 23: 101 ways to configure kafka - badly (Kafka Summit)


mistake: schema not externally defined

Page 24: 101 ways to configure kafka - badly (Kafka Summit)


why is it a mistake
data and code need separate versioning strategies

version should be part of the data

defining schema in a java library makes it more difficult to access data from non-jvm languages

very little discoverability of data, people chose other means to get their data

difficult to create tools

Page 25: 101 ways to configure kafka - badly (Kafka Summit)


what is the consequence
development speed outside the jvm has been slow

change of data needs coordinated deployment

no process for data versioning, like backwards compatibility checks

difficult to create tooling that needs to know data format, like data lake and database sinks

Page 26: 101 ways to configure kafka - badly (Kafka Summit)


what is the correct solution
the confluent.io platform has a separate schema registry

apache avro

multiple compatibility settings and evolution strategies

connect

Take complexity out of the applications
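A minimal sketch of what publishing with an externally defined Avro schema and the Confluent schema registry looks like; the broker address, registry URL, topic name and schema fields below are made-up examples, not FINN.no's actual setup:

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import java.util.Properties;

    public class AvroProducerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker01:9092");                 // example broker
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
            props.put("schema.registry.url", "http://schema-registry:8081"); // hypothetical URL

            // The schema is plain Avro and gets registered centrally, so non-JVM
            // consumers can fetch it from the registry instead of a Java library.
            Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Ad\",\"fields\":["
              + "{\"name\":\"adId\",\"type\":\"long\"},"
              + "{\"name\":\"title\",\"type\":\"string\"}]}");
            GenericRecord ad = new GenericData.Record(schema);
            ad.put("adId", 12345L);
            ad.put("title", "Ski boots");

            try (Producer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("Public.Ads", "12345", ad));
            }
        }
    }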

Page 27: 101 ways to configure kafka - badly (Kafka Summit)


what has finn.no done
still using the java library, with schemas in builders

confluent platform 2.0 is planned as the next step, not (just) kafka 0.9

Page 28: 101 ways to configure kafka - badly (Kafka Summit)


mistake: running mixed load with a single, default configuration

https://flic.kr/p/qbarDR

Page 29: 101 ways to configure kafka - badly (Kafka Summit)


why is it a mistake
Historically - One Big Database with Expensive License

Database world - OLTP and OLAP

Changed with Open Source software and Cloud

Tried to simplify the developer's day with a single config

Kafka supports both very high throughput and highly reliable delivery

Page 30: 101 ways to configure kafka - badly (Kafka Summit)


what is the consequence
Trade-off between throughput and degree of reliability

With a single configuration - the last commit wins

Either high throughput, and risk of loss - or potentially too slow

Page 31: 101 ways to configure kafka - badly (Kafka Summit)


what is the correct solution
Understand your use cases and their needs!

Use proper per-topic configuration

Consider splitting / isolation
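For illustration, a sketch of per-topic configuration using the modern Java AdminClient (which did not exist in the 0.8-era clusters described in this talk); the topic name and values are examples only:

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;

    public class CreateTopicExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker01:9092"); // example broker

            try (AdminClient admin = AdminClient.create(props)) {
                // This topic gets its own partition count, replication factor and
                // retention instead of inheriting one cluster-wide default.
                NewTopic topic = new NewTopic("Public.Ads", 5, (short) 3)
                        .configs(Map.of(
                                "min.insync.replicas", "2",
                                "retention.ms", "604800000")); // 7 days
                admin.createTopics(Collections.singleton(topic)).all().get();
            }
        }
    }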

Page 32: 101 ways to configure kafka - badly (Kafka Summit)


what has finn.no done
Defaults that are quite reliable

Exposing configuration variables in the client

Ask the questions:

● at least once delivery
● ordering - if you partition, what must have strict ordering?
● 99% delivery - is that good enough?
● what level of throughput is needed?

Page 33: 101 ways to configure kafka - badly (Kafka Summit)


Configuration
Configuration for production:

● Partitions
● Replicas (default.replication.factor)
● Minimum ISR (min.insync.replicas)
● Wait for acknowledgement when producing messages (request.required.acks, block.on.buffer.full)
● Retries
● Leader election

Configuration for the consumer:

● Number of threads
● When to commit (auto.commit.enable vs consumer.commitOffsets)
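A minimal sketch of the commit-after-processing approach with the newer Java consumer (0.9+), rather than the 0.8 high-level consumer referenced above; group id and topic are placeholders:

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    public class CommitAfterProcessing {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker01:9092");   // example broker
            props.put("group.id", "ad-indexer");                // example group
            props.put("enable.auto.commit", "false");           // commit manually
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("Public.Ads"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        process(record);                         // do the work first
                    }
                    consumer.commitSync();                       // only commit after processing
                }
            }
        }

        private static void process(ConsumerRecord<String, String> record) {
            System.out.println(record.key() + " -> " + record.value());
        }
    }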

Page 34: 101 ways to configure kafka - badly (Kafka Summit)


Gwen Shapira recommends...

● acks = all
● block.on.buffer.full = true
● retries = MAX_INT
● max.in.flight.requests.per.connection = 1
● Producer.close()
● replication-factor >= 3
● min.insync.replicas = 2
● unclean.leader.election.enable = false
● enable.auto.commit = false
● commit after processing
● monitor!
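A sketch of a producer configured along those lines with current Java client config names (block.on.buffer.full has since been replaced by max.block.ms, so it is omitted); broker and topic are placeholders:

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import java.util.Properties;

    public class ReliableProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker01:9092");          // example broker
            props.put("acks", "all");                                  // wait for all in-sync replicas
            props.put("retries", Integer.toString(Integer.MAX_VALUE)); // retry until it succeeds
            props.put("max.in.flight.requests.per.connection", "1");   // keep ordering despite retries
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            Producer<String, String> producer = new KafkaProducer<>(props);
            try {
                producer.send(new ProducerRecord<>("Public.Ads", "12345", "ad updated"));
            } finally {
                producer.close(); // flushes outstanding records before exit
            }
        }
    }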

Page 35: 101 ways to configure kafka - badly (Kafka Summit)


mistake: default configuration of 128 partitions for each topic

https://flic.kr/p/6KxPgZ

Page 36: 101 ways to configure kafka - badly (Kafka Summit)


why is it a mistake
partitions are Kafka's way of scaling consumers; 128 partitions can handle 128 consumer processes

in 0.8, clusters could not reduce the number of partitions without deleting data

highest number of consumers today is 20

Page 37: 101 ways to configure kafka - badly (Kafka Summit)


what is the consequence
our 0.8 cluster was configured with 128 partitions as the default, for all topics

many partitions and many topics creates many datapoints that must be coordinated

zookeeper must coordinate all this

rebalance must balance all clients on all partitions

zookeeper and kafka went down (May 2015)

Users could not create ads for two days

Page 38: 101 ways to configure kafka - badly (Kafka Summit)


what is the correct solution
small number of partitions as default

increase number of partitions for selected topics

understand your use case (throughput target)

reduce length of transactions on consumer side

Max partitions on a broker: ~1500 advised in our case - we had 38k

http://www.confluent.io/blog/how-to-choose-the-number-of-topics-partitions-in-a-kafka-cluster/
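That blog post sizes partitions from throughput, roughly max(target / per-partition producer throughput, target / per-partition consumer throughput). A toy calculation with made-up numbers:

    public class PartitionCount {
        public static void main(String[] args) {
            double targetMbPerSec = 100;                      // desired topic throughput (example)
            double producerMbPerSecPerPartition = 20;         // measured per-partition producer rate
            double consumerMbPerSecPerPartition = 25;         // measured per-partition consumer rate

            // partitions ~ max(t/p, t/c), rounded up
            long partitions = (long) Math.ceil(Math.max(
                    targetMbPerSec / producerMbPerSecPerPartition,
                    targetMbPerSec / consumerMbPerSecPerPartition));
            System.out.println("partitions needed ~ " + partitions); // prints 5
        }
    }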

Page 39: 101 ways to configure kafka - badly (Kafka Summit)


what has finn.no done
5 partitions as default

2 heavy-traffic topics have more than 5 partitions

Page 40: 101 ways to configure kafka - badly (Kafka Summit)


mistake: deploying a proof-of-concept hack in production - i.e. why we had 8 zk nodes

https://flic.kr/p/6eoSgT

Page 41: 101 ways to configure kafka - badly (Kafka Summit)


why is it a mistake
Kafka was set up by Ops for a proof of concept - not for hardened production use

By coincidence we had 8 nodes for Kafka, and the same 8 nodes for ZooKeeper

Zookeeper depends on a majority quorum and low latency between nodes

The 8 nodes were NOT dedicated - in fact - they were overloaded already

Page 42: 101 ways to configure kafka - badly (Kafka Summit)


what is the consequence
Zookeeper recommends 3 nodes for normal usage, 5 for high load; any more is questionable

More nodes lead to a longer time to reach consensus and more communication

If we get a split between the data centers, there will be 4 nodes in each - and no majority on either side

You should not run Zk between data centers, due to latency and outage possibilities

Page 43: 101 ways to configure kafka - badly (Kafka Summit)


what is the correct solution
Have an odd number of Zookeeper nodes - preferably 3, at most 5

Don’t cross data centers

Check the documentation before deploying serious production load

Don’t run a sensitive service (Zookeeper) on a server with 50 jvm-based services, 300% overcommitted on RAM

Watch GC times

Page 44: 101 ways to configure kafka - badly (Kafka Summit)


what has finn.no done

[Diagram: 5 brokers (broker01-broker05), each running ZooKeeper and Kafka, split across dc 1 and dc 2]

Version 0.8.2

5-20 partitions

multiple configurations

Page 45: 101 ways to configure kafka - badly (Kafka Summit)
Page 46: 101 ways to configure kafka - badly (Kafka Summit)


“They say ignorance is bliss.... they're wrong ” ― Franz Kafka

Page 47: 101 ways to configure kafka - badly (Kafka Summit)


References / Further reading

Designing Data-Intensive Applications, Martin Kleppmann

Data on the Outside versus Data on the Inside, Pat Helland

I Heart Logs, Jay Kreps

The Confluent Blog, http://confluent.io/

Kafka: The Definitive Guide

https://cwiki.apache.org/confluence/display/KAFKA/Kafka+papers+and+presentations

http://www.finn.no/apply-here
http://www.schibsted.com/en/Career/

Page 48: 101 ways to configure kafka - badly (Kafka Summit)

“It's only because of their stupidity that they're able to be so sure of themselves.” ― Franz Kafka, The Trial

Audun Fauchald Strand

@audunstrand

Henning Spjelkavik

@spjelkavik

http://www.finn.no/apply-here
http://www.schibsted.com/en/Career/

Q?

Page 49: 101 ways to configure kafka - badly (Kafka Summit)


Runner up
Using pre-1.0 software

Have control of topic creation

Kafka is storage - treat it like storage, also ops-wise

Client-side rebalancing, misunderstood

Committing on all consumer threads, believing that you only committed on one
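To illustrate that last pitfall: with the old 0.8 high-level consumer, commitOffsets() on the connector commits the position of every stream it owns, not just the calling thread's. A sketch, with made-up topic, group and thread count:

    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;
    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;

    public class CommitAllThreads {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "zk01:2181");   // example address
            props.put("group.id", "ad-indexer");           // example group
            props.put("auto.commit.enable", "false");

            ConsumerConnector connector =
                    Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                    connector.createMessageStreams(Collections.singletonMap("ads", 4));
            // Each of the 4 streams is typically consumed on its own thread.
            // This call commits the current offset of *all* streams owned by the
            // connector - not only the stream of the thread that happens to call it.
            connector.commitOffsets();
        }
    }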