ThingMonk 2016 - Concursus: Event Sourcing for the IoT, by Tareq Abedrabbo & Dominic Fox

Concursus: Event Sourcing for the Internet of Things (ThingMonk 2016)

Introductions

Dominic Fox

Twitter: @dynamic_proxy

Email: dominic.fox@opencredo.com

Tareq Abedrabbo

Twitter: @tareq_abedrabbo

Email: tareq.abedrabbo@opencredo.com

Concursus

Page: https://opencredo.com/publications/concursus/

Github: http://github.com/opencredo/concursus

What is Concursus?

A framework (or toolkit) for processing and organising messy data in a distributed context.

Event Sourcing

“Event Sourcing ensures that all changes to application state are stored as a sequence of events. Not just can we query these events, we can also use the event log to reconstruct past states, and as a foundation to automatically adjust the state to cope with retroactive changes.”

http://martinfowler.com/eaaDev/EventSourcing.html
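
A tiny illustration of that definition, using the lightbulb domain this talk returns to later (purely illustrative Java, not Concursus code; the third event is an invented example):

import java.util.AbstractMap.SimpleEntry;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: state is never stored directly, it is rebuilt by replaying events.
public class LightbulbReplay {

    public static void main(String[] args) {
        // Each recorded event captures a single change to the lightbulb's state.
        List<Map.Entry<String, Object>> eventLog = Arrays.asList(
                new SimpleEntry<String, Object>("wattage", 60),
                new SimpleEntry<String, Object>("location", "hallway"),
                new SimpleEntry<String, Object>("wattage", 40)); // invented third event

        // Replaying the whole log yields the current state; replaying a prefix of the
        // log reconstructs the state at any earlier point in time.
        Map<String, Object> state = new LinkedHashMap<>();
        eventLog.forEach(event -> state.put(event.getKey(), event.getValue()));

        System.out.println(state); // {wattage=40, location=hallway}
    }
}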

What is Concursus?

Problems Concursus addresses:

Processing events in a scalable and reliable way

Processing guarantees and ordering: exactly-once processing, out-of-order arrival, repeated or missed delivery, etc.

Building meaningful domain models to reason about and build business logic around

Flexibility: building additional views as needed

Building Blocks

• Java 8 (Kotlin is supported too)

• Cassandra as an event store (other backends are supported)

• Kafka or RabbitMQ as message brokers

• Hazelcast for transient state

Sources of Inspiration

Stream processing frameworks such as Apache Storm and Spark

Google papers: Cloud Dataflow, MillWheel

Apache Spark papers

The Axon CQRS framework

Domain Driven Design

Tendencies:

• From internet of users to internet of things

• From “presence” to “presents”

• From monoliths to microservices

Why Concursus?

“Write First, Reason Later”

Example

Domain Model: Events

aggregateType: lightbulb
aggregateId: 69016fb5-1d69-4a34-910b-f8ff5c702ad9
eventTimestamp: 2016-03-31T10:31:17.981Z
parameters: { “wattage”: 60 }

Domain Model: Events

aggregateType: lightbulb
aggregateId: 69016fb5-1d69-4a34-910b-f8ff5c702ad9
eventTimestamp: 2016-03-31T10:36:42.171Z
parameters: { “location”: “hallway” }

Domain Model: Events

aggregateType: lightbulb
aggregateId: 69016fb5-1d69-4a34-910b-f8ff5c702ad9
eventTimestamp: 2016-03-31T10:36:42.171Z
processingTimestamp: 2016-03-31T10:36:48.3904Z
parameters: { “location”: “hallway” }

Domain Model: Summary

Every Event occurs to an Aggregate, identified by its type and id.
Every Event has an eventTimestamp, generated by the source of the event.
An Event History is a log of Events, ordered by eventTimestamp, with an additional processingTimestamp which records when the Event was captured.
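
As a sketch only (the field names mirror the slides, but the class itself is hypothetical rather than the actual Concursus type), the Event described above might be modelled in Java like this:

import java.time.Instant;
import java.util.Map;
import java.util.Optional;
import java.util.UUID;

// Hypothetical value class mirroring the event structure on the slides.
public final class Event {

    private final String aggregateType;                  // e.g. "lightbulb"
    private final UUID aggregateId;                       // e.g. 69016fb5-1d69-4a34-910b-f8ff5c702ad9
    private final Instant eventTimestamp;                 // assigned by the event source
    private final Optional<Instant> processingTimestamp;  // assigned when the event is captured
    private final Map<String, Object> parameters;         // e.g. { "wattage": 60 }

    public Event(String aggregateType, UUID aggregateId, Instant eventTimestamp,
                 Optional<Instant> processingTimestamp, Map<String, Object> parameters) {
        this.aggregateType = aggregateType;
        this.aggregateId = aggregateId;
        this.eventTimestamp = eventTimestamp;
        this.processingTimestamp = processingTimestamp;
        this.parameters = parameters;
    }

    public String getAggregateType() { return aggregateType; }
    public UUID getAggregateId() { return aggregateId; }
    public Instant getEventTimestamp() { return eventTimestamp; }
    public Optional<Instant> getProcessingTimestamp() { return processingTimestamp; }
    public Map<String, Object> getParameters() { return parameters; }
}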

[Diagram: event sources and event processors connected by a network.]

Events arrive:
• Partitioned
• Interleaved
• Out-of-order

Processing Model: Ordering

Log is:
• Partitioned by aggregate id
• Ordered by event timestamp

Processing Model: Ordering
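
A rough sketch of that ordering step, assuming the hypothetical Event class above (the framework does this for you): incoming events are grouped by aggregate id and sorted by event timestamp.

import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.stream.Collectors;

public class EventHistories {

    // Group interleaved, out-of-order events into per-aggregate histories,
    // ordered by eventTimestamp rather than by arrival order.
    public static Map<UUID, List<Event>> toHistories(List<Event> incoming) {
        return incoming.stream()
                .sorted(Comparator.comparing(Event::getEventTimestamp))
                .collect(Collectors.groupingBy(Event::getAggregateId));
    }
}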

CREATE TABLE IF NOT EXISTS concursus.Event (
    aggregateType text,
    aggregateId text,
    eventTimestamp timestamp,
    streamId text,
    processingId timeuuid,
    name text,
    version text,
    parameters map<text, text>,
    characteristics int,
    PRIMARY KEY ((aggregateType, aggregateId), eventTimestamp, streamId)
) WITH CLUSTERING ORDER BY (eventTimestamp DESC);

Cassandra Schema
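
Given that schema, an aggregate's history can be read back from Cassandra already ordered by event timestamp. A minimal sketch, assuming the DataStax Java driver 3.x and a local contact point (both assumptions, not part of the slides):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class ReadEventHistory {

    public static void main(String[] args) {
        // Placeholder contact point; adjust for your own cluster.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {

            // The partition key is (aggregateType, aggregateId); the clustering order
            // on eventTimestamp DESC returns each aggregate's events newest-first.
            ResultSet rows = session.execute(
                    "SELECT eventTimestamp, name, parameters "
                  + "FROM concursus.Event "
                  + "WHERE aggregateType = ? AND aggregateId = ?",
                    "lightbulb", "69016fb5-1d69-4a34-910b-f8ff5c702ad9");

            for (Row row : rows) {
                System.out.printf("%s %s %s%n",
                        row.getTimestamp("eventTimestamp"),
                        row.getString("name"),
                        row.getMap("parameters", String.class, String.class));
            }
        }
    }
}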

[Diagram: Cassandra event store, RabbitMQ topic and downstream processing, with flows labelled “publish events” and “log events”.]

Cassandra & AMQP

[Diagram: Cassandra event store, RabbitMQ topic and downstream processing; out-of-order events in, ordered query results out.]

Cassandra & AMQP

[Diagram: Cassandra event store, Kafka topic, downstream processing and an event store listener, with flows labelled “publish events” and “log events”.]

Cassandra & Kafka

Processing Model: Summary

Events arrive partitioned, interleaved and out-of-order.
Events are sorted into event histories by aggregate type and id.
Events are sorted within event histories by event timestamp, not processing timestamp.
Event consumers need to take into account the possibility that an event history may be incomplete at the time it is read: consider using a watermark to give incoming events time to “settle”, as sketched below.
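
A minimal sketch of the watermark idea, assuming the hypothetical Event class above (one possible policy, not the framework's own mechanism): only the part of a history older than “now minus a settling window” is treated as complete.

import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.stream.Collectors;

public class WatermarkedHistory {

    // How long late-arriving events are given to "settle" before a history is trusted.
    private static final Duration SETTLE_WINDOW = Duration.ofSeconds(30);

    // Return only the prefix of the history that is older than the watermark.
    public static List<Event> settledPrefix(List<Event> history) {
        Instant watermark = Instant.now().minus(SETTLE_WINDOW);
        return history.stream()
                .filter(event -> event.getEventTimestamp().isBefore(watermark))
                .collect(Collectors.toList());
    }
}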

Programming Model: Core Metaphor

Consumer<Event>

Programming Model: Core Metaphor

You give me a Consumer<Event>, and I send Events to it one at a time:

Emitting Events
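
A sketch of the emitting side of that contract, using the hypothetical Event class above rather than the real Concursus types: the source is handed a Consumer<Event> and pushes events into it as they occur.

import java.time.Instant;
import java.util.Collections;
import java.util.Optional;
import java.util.UUID;
import java.util.function.Consumer;

// Illustrative emitter: "you give me a Consumer<Event>, and I send Events to it one at a time."
public class LightbulbEmitter {

    private final Consumer<Event> downstream;
    private final UUID bulbId;

    public LightbulbEmitter(Consumer<Event> downstream, UUID bulbId) {
        this.downstream = downstream;
        this.bulbId = bulbId;
    }

    public void wattageSet(int wattage) {
        downstream.accept(new Event(
                "lightbulb",
                bulbId,
                Instant.now(),                        // eventTimestamp comes from the source
                Optional.empty(),                     // processingTimestamp is added on capture
                Collections.singletonMap("wattage", wattage)));
    }
}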

I implement Consumer<Event>, and handle Events that are sent to me.

Handling Events
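
And a sketch of the handling side: an event handler is just an implementation of Consumer<Event>, here maintaining a simple view of the last known wattage per lightbulb (again illustrative, not the actual Concursus API).

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import java.util.function.Consumer;

// Illustrative handler: implements Consumer<Event> and maintains a simple
// view of the last known wattage of each lightbulb.
public class WattageView implements Consumer<Event> {

    private final Map<UUID, Object> wattageByBulb = new HashMap<>();

    @Override
    public void accept(Event event) {
        if ("lightbulb".equals(event.getAggregateType())
                && event.getParameters().containsKey("wattage")) {
            wattageByBulb.put(event.getAggregateId(), event.getParameters().get("wattage"));
        }
    }

    public Object lastKnownWattage(UUID bulbId) {
        return wattageByBulb.get(bulbId);
    }
}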

Java 8 Mapping
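
One way to picture the Java 8 mapping, as an illustrative approximation rather than Concursus's actual annotations and proxy machinery: an interface whose methods stand for event types, with a dynamic proxy turning method calls into Event values.

import java.lang.reflect.Proxy;
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.UUID;
import java.util.function.Consumer;

// Illustrative approximation: an interface whose methods represent event types,
// with a dynamic proxy converting method calls into Event values.
public class EventProxies {

    // Hypothetical event interface for the lightbulb aggregate.
    public interface LightbulbEvents {
        void created(UUID bulbId, int wattage);
        void moved(UUID bulbId, String location);
    }

    public static LightbulbEvents proxying(Consumer<Event> downstream) {
        return (LightbulbEvents) Proxy.newProxyInstance(
                LightbulbEvents.class.getClassLoader(),
                new Class<?>[] { LightbulbEvents.class },
                (proxy, method, args) -> {
                    // First argument is taken as the aggregate id; the rest become parameters.
                    // (Real parameter names need compilation with -parameters; otherwise
                    // they show up as arg0, arg1, ...)
                    Map<String, Object> parameters = new HashMap<>();
                    for (int i = 1; i < args.length; i++) {
                        parameters.put(method.getParameters()[i].getName(), args[i]);
                    }
                    downstream.accept(new Event("lightbulb", (UUID) args[0],
                            Instant.now(), Optional.empty(), parameters));
                    return null;
                });
    }
}

Calling proxying(downstream).created(bulbId, 60) then emits an event shaped like the lightbulb examples earlier.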

Event-handling middleware is a chain of Consumer<Event>s that transforms, routes, persists and dispatches events. A single event submitted to this chain may be:
■ Checked against an idempotency filter (e.g. a Hazelcast distributed cache)
■ Serialised to JSON
■ Written to a message queue topic
■ Retrieved from the topic and deserialised
■ Persisted to an event store (e.g. Cassandra)
■ Published to an event handler which maintains a query-optimised view of part of the system
■ Published to an event handler which maintains an index of aggregates by event property values (e.g. lightbulbs by wattage)

Event-Handling Middleware
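
A minimal sketch of such a chain, with hypothetical helpers rather than the framework's own middleware classes: each stage is itself a Consumer<Event> that decides whether and how to pass the event on.

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Illustrative middleware: each stage is a Consumer<Event> wrapping the next one.
public class Middleware {

    // A filtering stage, e.g. an idempotency check in front of the rest of the chain.
    public static Consumer<Event> filtering(Predicate<Event> keep, Consumer<Event> next) {
        return event -> {
            if (keep.test(event)) {
                next.accept(event);
            }
        };
    }

    public static Consumer<Event> examplePipeline(Consumer<Event> persistToStore,
                                                  Consumer<Event> updateView) {
        // In-memory stand-in for a distributed idempotency cache (e.g. Hazelcast).
        Set<String> seen = ConcurrentHashMap.newKeySet();

        // Drop duplicates, then fan out to the event store and the query-optimised view.
        return filtering(
                event -> seen.add(event.getAggregateId() + "@" + event.getEventTimestamp()),
                persistToStore.andThen(updateView));
    }
}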

Thank you for listening. Any questions?

Three Processing Schedules

1. Transient - what happens

2. Durable - what’s happening

3. Persistent - what happened
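
One way to read the three schedules, sketched with hypothetical consumers (an interpretation, not code from the talk): the same event stream can feed a transient reaction, a durable view of current state, and a persistent log of everything that happened.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.function.Consumer;

// Illustrative only: the same event stream feeding consumers with three different lifetimes.
public class ProcessingSchedules {

    public static Consumer<Event> allSchedules() {
        // 1. Transient - "what happens": react to each event as it occurs, keep nothing.
        Consumer<Event> transientConsumer = event ->
                System.out.println("observed: " + event.getParameters());

        // 2. Durable - "what's happening": maintain the current state of each aggregate.
        Map<UUID, Map<String, Object>> currentState = new HashMap<>();
        Consumer<Event> durableConsumer = event ->
                currentState.computeIfAbsent(event.getAggregateId(), id -> new HashMap<>())
                        .putAll(event.getParameters());

        // 3. Persistent - "what happened": append every event to the permanent log.
        List<Event> eventLog = new ArrayList<>();
        Consumer<Event> persistentConsumer = eventLog::add;

        return transientConsumer.andThen(durableConsumer).andThen(persistentConsumer);
    }
}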

“Write First, Reason Later”