
Page 1: Data Science methods for treatment personalization in ...

Data Science methods for treatment personalization in Persuasive Technology

Prof. dr. M.C. Kaptein, Professor Data Science & Health

Principal Investigator @ JADS

12 April 2019

Page 2: Data Science methods for treatment personalization in ...

Maurits Kaptein · Edwin van den Heuvel

Statistics for Data Scientists: An introduction to probability, statistics, and data analysis

Undergraduate Topics in Computer Science (UTiCS), Series Editor: Ian Mackie

This book provides an undergraduate introduction to analysing data for data science, computer science, and quantitative social science students. It uniquely combines a hands-on approach to data analysis – supported by numerous real data examples and reusable [R] code – with a rigorous treatment of probability and statistical principles. Where contemporary undergraduate textbooks in probability theory or statistics often miss applications and an introductory treatment of modern methods (bootstrapping, Bayes, etc.), and where applied data analysis books often miss a rigorous theoretical treatment, this book provides an accessible but thorough introduction to data analysis using statistical methods, combining the two viewpoints. The book further focuses on methods for dealing with large datasets and streaming data, and hence provides a single-course introduction to statistical methods for data science.

ISBN 978-3-030-10530-3

Page 3: Data Science methods for treatment personalization in ...

Personalization

With personalization we try to find the “right content for the right person at the right time” [10].

Applications in Communication, Persuasive Technology, Marketing, Healthcare, etc.

More formally, we assume we have a population of N units which present themselves sequentially. For each unit i = 1, …, N we first observe their properties x_i (a vector of covariates) and subsequently, using some decision policy π(), choose a treatment a_i (i.e., π : (x_i, d) → a_i, where d is the data collected so far). After the content is shown, we observe the associated outcome, or reward, r_i, and our aim is to choose π() such that we maximize ∑_{i=1}^N r_i.

Page 4: Data Science methods for treatment personalization in ...

Overview

Selecting persuasive interventions

Selecting personalized persuasive interventions

Applications in persuasive technology design

Available software

Page 5: Data Science methods for treatment personalization in ...

Section 1

Selecting persuasive interventions

Page 6: Data Science methods for treatment personalization in ...

The multi-armed bandit problem

For i = 1, …, N:

- We select an action a_i (often from a discrete set of actions k = 1, …, K, but not always).
- We observe a reward r_i.

Actions are selected according to some policy π : {a_1, …, a_{i−1}, r_1, …, r_{i−1}} ↦ a_i.

Aim: maximize the (expected) cumulative reward ∑_{i=1}^N r_i, or, equivalently, minimize the regret ∑_{i=1}^N (π_max − π()), i.e., the expected reward lost relative to always playing the optimal action [3].
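To make the regret definition concrete (using the arm probabilities of the simulation shown two slides ahead, p_1 = .6 and p_2 = p_3 = .4): every round in which a policy plays arm 2 or arm 3 adds .6 − .4 = .2 to the expected regret, so a policy that chooses a suboptimal arm in 100 of N = 1000 rounds accumulates an expected regret of 100 × .2 = 20.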

Page 7: Data Science methods for treatment personalization in ...

The canonical solution: the “experiment”

For i = 1, …, n (where n ≪ N):

- Choose arm k with Pr(a_i = k) = 1/K.
- Observe the reward.

Compute the average rewards r̄_1, …, r̄_K and create a guideline / business rule.

For i > n:

- Choose a_i = arg max_k (r̄_1, …, r̄_K) [12, 6, 9].
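As a minimal illustration (not code from the talk), the R sketch below simulates this "experiment first, then business rule" (epsilon-first) policy on a 3-armed Bernoulli bandit. The success probabilities p, the horizon N, and the experiment size n are assumed, illustrative values.

```r
# Epsilon-first: explore uniformly for the first n rounds, then exploit.
set.seed(1)
K <- 3                      # number of arms
p <- c(0.6, 0.4, 0.4)       # true (unknown) success probabilities -- illustrative
N <- 1000                   # total number of units
n <- 150                    # size of the up-front experiment (n << N)

successes <- rep(0, K)
pulls     <- rep(0, K)

for (i in 1:N) {
  if (i <= n) {
    a <- sample(1:K, 1)                          # explore: uniform random assignment
  } else {
    a <- which.max(successes / pmax(pulls, 1))   # exploit: best observed mean reward
  }
  r <- rbinom(1, 1, p[a])                        # observe the reward
  successes[a] <- successes[a] + r
  pulls[a]     <- pulls[a] + 1
}
successes / pulls           # observed mean reward per arm
```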

Page 8: Data Science methods for treatment personalization in ...

Alternative solutions

1. ε-greedy: For i = 1, …, N:

- With probability ε, choose arm k with Pr(a_i = k) = 1/K.
- With probability 1 − ε, choose a_i = arg max_k (r̄_1, …, r̄_K), given the data up to that point [2].

2. Thompson sampling: Set up a Bayesian model for r_1, …, r_K. For i = 1, …, N:

- Play each arm with a probability proportional to your belief that it is the best arm.
- Update the model parameters.

Easily implemented by taking a draw from each arm's posterior and playing the arm with the highest draw [4, 1].
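A minimal R sketch of Thompson sampling for a K-armed Bernoulli bandit, assuming independent Beta(1, 1) priors per arm (an illustration, not the talk's implementation):

```r
# Thompson sampling with conjugate Beta-Bernoulli updates.
set.seed(1)
K <- 3
p <- c(0.6, 0.4, 0.4)       # true success probabilities -- illustrative
N <- 1000

alpha <- rep(1, K)          # posterior Beta parameters: 1 + successes
beta  <- rep(1, K)          #                            1 + failures

for (i in 1:N) {
  draws <- rbeta(K, alpha, beta)   # one draw from each arm's posterior
  a <- which.max(draws)            # play the arm that "looks best" this round
  r <- rbinom(1, 1, p[a])          # observe the Bernoulli reward
  alpha[a] <- alpha[a] + r         # conjugate posterior update
  beta[a]  <- beta[a] + (1 - r)
}
alpha / (alpha + beta)      # posterior means; should concentrate on arm 1
```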

Page 9: Data Science methods for treatment personalization in ...

Performance of different content selection policies

[Figure omitted: cumulative regret (0–30) over 1,000 time steps for the EpsilonFirst, EpsilonGreedy, and ThompsonSampling policies.]

Figure: Comparison, in terms of regret, of three different bandit policies on a 3-arm Bernoulli bandit problem with true probabilities p_1 = .6, p_2 = p_3 = .4. The figure averages over m = 10,000 simulation runs. Thompson sampling outperforms the other policies.

Page 10: Data Science methods for treatment personalization in ...

Intuition behind a well performing allocation policy

A good policy effectively balances exploration and exploitation:

- Exploration: try out the content we are unsure about: learn.
- Exploitation: use the knowledge we have and choose the content we think is effective: earn.

We can think of the classical experiment as moving all exploration up front. In that case it is (a) hard to determine how much we need to explore (since there is no outcome data yet), and (b) we might commit to a wrong decision.

Page 11: Data Science methods for treatment personalization in ...

Section 2

Selecting personalized persuasive interventions

Page 12: Data Science methods for treatment personalization in ...

The problem

For i = 1, …, N:

- We observe the context x_i.
- We select an action a_i.
- We observe the reward r_i.

The aim remains the same, but the problem is more challenging: the best action might depend on the context.

Page 13: Data Science methods for treatment personalization in ...

The current approach

- Do experiments within subgroups of users (or re-analyze existing RCT data to find heterogeneity).
- Subgroup selection is driven by a theoretical understanding of the underlying mechanism.
- Effectively, we solve a non-contextual problem within each context; we treat the problem as many separate problems.

In the limit, when users are fully unique (N = 1), there is no room left for exploration!

Page 14: Data Science methods for treatment personalization in ...

Searching the context × action space

[Figure omitted: two 3D surfaces of Survival as a function of Dose (0–1) and Weight (20–80), and two 2D slices showing Survival versus Dose at Weight = 20 and Weight = 60.]

- A different outcome for each action at each covariate value.
- We need to learn this relation efficiently.

Page 15: Data Science methods for treatment personalization in ...

An alternative approach

It is easy to extend Thompson sampling to include a context.

For i = 1, …, N:

- Create a model to predict E(r_t) = f(a_t, x_t) and quantify your uncertainty (e.g., using a Bayesian model).
- Exploration: choose actions with uncertain outcomes.
- Exploitation: choose actions with high expected outcomes.

Very flexible models are available for E(r_t) = f(a_t, x_t) [8, 7, 13], and efficient procedures are available for incorporating uncertainty: LinUCB [5], Thompson sampling [11], Bootstrap Thompson sampling [6], etc.
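As one concrete instance, the R sketch below implements the disjoint LinUCB policy of [5] on a simulated contextual bandit. The exploration parameter alpha_ucb, the context distribution, and the reward-generating coefficients are illustrative assumptions, not values from the talk.

```r
# Disjoint LinUCB: one ridge-regression model per arm plus an optimism bonus.
set.seed(1)
K <- 3; d <- 2; N <- 1000
alpha_ucb <- 1.0                      # exploration parameter (assumed value)

A <- lapply(1:K, function(k) diag(d)) # per-arm statistics: A_k = I + sum x x'
b <- lapply(1:K, function(k) rep(0, d)) #                   b_k = sum r x

# Illustrative "true" coefficients linking context and arm to success probability.
true_theta <- matrix(c(0.7, -0.3,  0.2, 0.4,  0.4, 0.1), nrow = d)

for (i in 1:N) {
  x <- c(1, rbinom(1, 1, 0.5))        # context: intercept plus one binary covariate
  ucb <- sapply(1:K, function(k) {
    A_inv <- solve(A[[k]])
    theta_hat <- A_inv %*% b[[k]]     # ridge estimate of arm k's coefficients
    drop(t(x) %*% theta_hat) + alpha_ucb * sqrt(drop(t(x) %*% A_inv %*% x))
  })
  a <- which.max(ucb)                 # optimistic (upper-confidence-bound) choice
  r <- rbinom(1, 1, plogis(sum(true_theta[, a] * x)))  # simulated Bernoulli reward
  A[[a]] <- A[[a]] + x %*% t(x)       # online update of the chosen arm's statistics
  b[[a]] <- b[[a]] + r * x
}
```

A contextual Thompson-sampling variant would replace the upper-confidence term by a draw from the posterior of each arm's coefficients; the rest of the loop stays the same.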

Page 16: Data Science methods for treatment personalization in ...

Performance

[Figure omitted: average reward and cumulative reward rate over 400 time steps for the EpsilonFirst, EpsilonGreedy, and LinUCBDisjoint policies.]

Figure: Simple comparison of LinUCB (“frequentist Thompson sampling”) with non-contextual approaches for a 3-armed Bernoulli bandit with a single binary context variable. Even in this very simple case, the difference between the contextual and non-contextual approaches is very large.

Page 17: Data Science methods for treatment personalization in ...

Section 3

Applications in persuasive technology design

Page 18: Data Science methods for treatment personalization in ...

Personalized reminder messages¹

- Susceptibility estimated based on behavioral responses
- Selection of strategies: Adaptive, Original, Pre-tested, Random
- Adaptation done using (hierarchical) Thompson sampling
- Large differences in success probability
- N = 1,129

[Figure omitted: estimated probability of success (0.0–1.0) per reminder number (0–30) for the Adaptive, Original, Pre-tested, and Random message conditions.]

¹ Kaptein & van Halteren (2012). Adaptive Persuasive Messaging to Increase Service Retention. Journal of Personal and Ubiquitous Computing.

Page 19: Data Science methods for treatment personalization in ...

Optimizing the decoy effect²

[Figure omitted: probability of selecting the target option over time for the original and alternative decoy attribute values (A2_orig, A2_alt), shown for five product categories: Laptop, Economist, Hotel, Juice, and Beer.]

² Kaptein, M.C., van Emden, D., & Iannuzzi, D. (2016). “Tracking the decoy: Maximizing the decoy effect through sequential experimentation.” Palgrave Communications.

Page 20: Data Science methods for treatment personalization in ...

Personalized persuasive messages in e-commerce³

- Change persuasive messages
- Using online hierarchical models
- Three large-scale (n > 100,000) evaluations

³ Kaptein, Parvinen, & McFarland (2018). “Adaptive Persuasive Messaging.” European Journal of Marketing.

Page 21: Data Science methods for treatment personalization in ...

Personalized persuasive messages in e-commerce: Results

Page 22: Data Science methods for treatment personalization in ...

Section 4

Available software

Page 23: Data Science methods for treatment personalization in ...

Streaming Bandit

- Back-end solution for online execution of bandit policies
- Sets up a REST server to handle action selection
- First stable version recently released

We identify two steps:

1. The summary step: in each summary step θ_{t′−1} is updated with the new information {x_{t′}, a_{t′}, r_{t′}}. Thus, θ_{t′} = g(θ_{t′−1}, x_{t′}, a_{t′}, r_{t′}), where g() is some update function.

2. The decision step: the model r = f(a, x_{t′}; θ_{t′}) is evaluated for the current context and the possible actions, and the recommended action at time t′ is selected.

These two steps are implemented in the getAction() and setReward() calls.
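To make the two steps concrete, the sketch below mirrors this decision/summary split for a simple Beta-Bernoulli Thompson-sampling model (the context x_{t′} is omitted for brevity). The function names get_action() and set_reward() only echo the StreamingBandit calls mentioned above; this is a hypothetical illustration, not the package's implementation.

```r
# Decision step: evaluate the model for all actions and recommend one (Thompson draw).
get_action <- function(theta) {
  which.max(rbeta(length(theta$alpha), theta$alpha, theta$beta))
}

# Summary step: theta_t' = g(theta_{t'-1}, a_t', r_t'), here a conjugate Beta update.
set_reward <- function(theta, a, r) {
  theta$alpha[a] <- theta$alpha[a] + r
  theta$beta[a]  <- theta$beta[a] + (1 - r)
  theta
}

# One round trip, as a client of the REST server would experience it:
theta <- list(alpha = rep(1, 3), beta = rep(1, 3))   # summary state for 3 actions
a <- get_action(theta)                    # corresponds to the getAction() call
r <- rbinom(1, 1, c(0.6, 0.4, 0.4)[a])    # observed outcome (simulated here)
theta <- set_reward(theta, a, r)          # corresponds to the setReward() call
```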

Page 24: Data Science methods for treatment personalization in ...

Streaming Bandit⁴

- See http://sb.nth-iteration.com
- Currently used in several online evaluations of bandit policies

⁴ Kruijswijk, van Emden, Parvinen, & Kaptein (2018). StreamingBandit: Experimenting with Bandit Policies. Journal of Statistical Software.

Page 25: Data Science methods for treatment personalization in ...

Contextual — [R]

Package for offline evaluation of bandit policies; see https://github.com/Nth-iteration-labs/contextual.
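For intuition, the sketch below shows one common way to evaluate a bandit policy offline: replay logged data collected under a uniformly random policy and keep only the rounds in which the candidate policy would have chosen the logged action. This is a generic, assumed illustration in base R and does not use the contextual package's actual API.

```r
# Offline evaluation by replay of uniformly logged data.
set.seed(1)
K <- 3
p <- c(0.6, 0.4, 0.4)                       # illustrative "true" success probabilities
logged <- data.frame(action = sample(1:K, 5000, replace = TRUE))
logged$reward <- rbinom(nrow(logged), 1, p[logged$action])

replay_evaluate <- function(policy, logged) {
  state <- policy$init()
  matched <- 0; total_reward <- 0
  for (i in seq_len(nrow(logged))) {
    a <- policy$choose(state)
    if (a == logged$action[i]) {            # keep only rounds where actions match
      r <- logged$reward[i]
      state <- policy$update(state, a, r)
      matched <- matched + 1
      total_reward <- total_reward + r
    }
  }
  total_reward / matched                    # estimated per-round reward of the policy
}

# Candidate policy: Thompson sampling with Beta(1, 1) priors.
ts_policy <- list(
  init   = function() list(alpha = rep(1, K), beta = rep(1, K)),
  choose = function(s) which.max(rbeta(K, s$alpha, s$beta)),
  update = function(s, a, r) {
    s$alpha[a] <- s$alpha[a] + r
    s$beta[a]  <- s$beta[a] + 1 - r
    s
  }
)
replay_evaluate(ts_policy, logged)
```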

Page 26: Data Science methods for treatment personalization in ...

Questions?

Page 27: Data Science methods for treatment personalization in ...

Contact

Prof. Dr. Maurits Kaptein

Archipelstraat 13, 6524 LK Nijmegen, The Netherlands

email: [email protected]
+31 6 21262211

Page 28: Data Science methods for treatment personalization in ...
Page 29: Data Science methods for treatment personalization in ...

Section 5

References

Page 30: Data Science methods for treatment personalization in ...

[1] Shipra Agrawal and Navin Goyal. Analysis of Thompson sampling for the multi-armed bandit problem. In Conference on Learning Theory, pages 39–1, 2012.

[2] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002.

[3] Donald A. Berry and Bert Fristedt. Bandit Problems: Sequential Allocation of Experiments (Monographs on Statistics and Applied Probability). London: Chapman and Hall, 5:71–87, 1985.

[4] Olivier Chapelle and Lihong Li. An empirical evaluation of Thompson sampling. In Advances in Neural Information Processing Systems, pages 2249–2257, 2011.

[5] Wei Chu, Lihong Li, Lev Reyzin, and Robert Schapire. Contextual bandits with linear payoff functions. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 208–214, 2011.

Page 31: Data Science methods for treatment personalization in ...

[6] Dean Eckles and Maurits Kaptein. Thompson sampling with the online bootstrap. arXiv preprint arXiv:1410.4009, 2014.

[7] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. The Elements of Statistical Learning, volume 1. Springer Series in Statistics, New York, NY, USA, 2001.

[8] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning, volume 1. MIT Press, Cambridge, 2016.

[9] Maurits Kaptein. The use of Thompson sampling to increase estimation precision. Behavior Research Methods, 47(2):409–423, 2015.

[10] Maurits C. Kaptein. Computational Personalization: Data Science Methods for Personalized Health. Tilburg University, 2018.

[11] Emilie Kaufmann, Nathaniel Korda, and Rémi Munos. Thompson sampling: An asymptotically optimal finite-time analysis. In International Conference on Algorithmic Learning Theory, pages 199–213. Springer, 2012.

[12] Lihong Li, Wei Chu, John Langford, and Robert E. Schapire. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web, pages 661–670. ACM, 2010.

Page 32: Data Science methods for treatment personalization in ...

[13] Abdolreza Mohammadi and M.C. Kaptein. Contributed discussion on article by Pratola [comment on “M.T. Pratola, Efficient Metropolis–Hastings proposal mechanisms for Bayesian regression tree models”]. 2016.