Product Adoption in Social Networks - HKBU, econ.hkbu.edu.hk/eng/Doc/NetLearn170921m_slides.pdf
Introduction Model Examples General Networks Tree Networks Examples The End
Product Adoption in Social Networks
Simon Board and Moritz Meyer-ter-Vehn
UCLA
September 25, 2017
Motivation

How do societies learn about new innovations?
- New products, e.g. electric bikes
- New behaviors, e.g. smoking
- New principles, e.g. racial equality

We study a tractable model of learning on social networks:
- Agents have the opportunity to try the product in random order.
- Before trying, an agent sees which neighbors have adopted.
- After trying, an agent adopts the product if he likes it.

How does learning depend on network structure?
Results

Any network
- Characterize diffusion via a system of ODEs.
- Show existence of a unique equilibrium.

Large random networks
- Product adoption improves over time.
- Product adoption improves in the number of links.
- Impact of conversion rates on learning.

Network structure
- Directed vs. undirected networks
- Direct information vs. indirect information
- Cyclical vs. acyclical networks
Literature

Social learning with sampling
- Smith and Sorensen (1996)
- Banerjee and Fudenberg (2004)
- Monzon and Rapp (2014)

Social learning on networks
- Celen and Kariv (2004)
- Acemoglu, Dahleh, Lobel and Ozdaglar (2011)
- Lobel and Sadler (2015, 2016)

Asymmetric social learning
- Guarino, Harmgart and Huck (2011)
- Herrera and Horner (2013)
- Hendricks, Sorensen and Wiseman (2012)
Yet Another Paper on Social Learning?

"A significant gap in our knowledge concerns short-run dynamics and rates of learning in these models... The complexity of Bayesian updating in a network makes this difficult, but even limited results would offer a valuable contribution to the literature."

The Oxford Handbook of the Economics of Networks, 2016

Contribution
- Characterize such short-run dynamics via ODEs.
- Provide comparative statics in time and linkage.
- Social learning on a given network.
- Modeling innovation: action depends only on public info.
Model

Players, Actions, Payoffs
- Product quality θ ∈ {L = 0, H = 1}, with Pr(θ = H) = 1/2.
- At time t_i ∼ U[0, 1], agent i = 1, …, N chooses to test or not.
- Adopts the tested product with prob. z_θ, where z_L < z_H.
- Benefit θ, and cost of trying c_i ∼ F.

Information
- Commonly known directed network G = {(i, j)}.
- Agent i observes which of his successors S_i use the product at t_i.
The Inference Problem

Why has j not adopted the product?
- j tried the product, but did not like it?
- j is not yet aware of the product?
- j chose not to try the product (because k did not adopt it)?
i → j

Definition: Adoption Rate
x^θ_{i,t}: probability that i has adopted product θ by time t.

Leader, j:

ẋ^θ_{j,t} = Pr(j adopts | j tests) · Pr(j tests) = z_θ F(1/2)

Follower, i:

ẋ^θ_{i,t} = Pr(i adopts | i tests) · Pr(i tests)
          = z_θ [ x^θ_{j,t} F(x^H_{j,t} / x^L_{j,t}) + (1 − x^θ_{j,t}) F((1 − x^H_{j,t}) / (1 − x^L_{j,t})) ]

where F(x^H / x^L) := F(x^H / (x^H + x^L)).
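As a numeric sanity check, the follower flow above can be evaluated directly. The cost cdf F(c) = c on [0, 1] and the leader levels x^H_j = 0.5, x^L_j = 0.25 below are illustrative assumptions, not the slides' parameters, and `follower_flow` is a hypothetical helper name:

```python
def follower_flow(x_h_j, x_l_j, z, F):
    """Flow of adoption for follower i facing leader j:
    z_theta * [ x_j * F(psi_adopt) + (1 - x_j) * F(psi_not) ],
    where psi is the posterior Pr(H | observation of j)."""
    psi_adopt = x_h_j / (x_h_j + x_l_j)                   # posterior after seeing j adopt
    psi_not = (1 - x_h_j) / ((1 - x_h_j) + (1 - x_l_j))   # posterior after seeing no adoption
    flow = {}
    for theta, (z_theta, x_j) in {"H": (z["H"], x_h_j), "L": (z["L"], x_l_j)}.items():
        flow[theta] = z_theta * (x_j * F(psi_adopt) + (1 - x_j) * F(psi_not))
    return flow

# Illustrative numbers: x^H_j = 0.5, x^L_j = 0.25, z_H = 1, z_L = 1/2, F(c) = c on [0, 1].
flows = follower_flow(0.5, 0.25, {"H": 1.0, "L": 0.5},
                      F=lambda c: min(max(c, 0.0), 1.0))
```

As expected, the high-quality flow exceeds the low-quality flow, since z_H > z_L and good news arrives more often when θ = H.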
Adoption curves

[Figure: adoption curves over time t ∈ [0, 1] for leader j and follower i, for high and low quality, together with the trial probabilities Pr(Try|A) and Pr(Try|N).]

Assumptions: c ∼ U[1/2, 2/3], z_L = 1/2, z_H = 1.
i ↔ j

The ignorant neighbor
- Before t_i, agent j never observes adoption by i.
- x^θ_{j,∅,t}: probability that j has adopted product θ by t ≤ t_i.

Since i has not yet moved by t ≤ t_i, j has seen no adoption by i and updates only on i's non-adoption:

ẋ^θ_{j,∅,t} = z_θ Pr(j tests | i has not adopted) = z_θ F((1 − x^H_{i,∅,t}) / (1 − x^L_{i,∅,t}))

By symmetry, drop the agent labels:

ẋ^θ_{∅,t} = z_θ F((1 − x^H_{∅,t}) / (1 − x^L_{∅,t}))

Actual adoption rate:

ẋ^θ_t = z_θ [ x^θ_{∅,t} F(x^H_{∅,t} / x^L_{∅,t}) + (1 − x^θ_{∅,t}) F((1 − x^H_{∅,t}) / (1 − x^L_{∅,t})) ]
Adoption curves

[Figure: adoption curves over time for the real agent and the ignorant agent, for high and low quality, together with Pr(Try|A) and Pr(Try|N).]

Assumptions: c ∼ U[1/2, 2/3], z_L = 1/2, z_H = 1.
Individual Adoption Rates ... Are Not Enough

A general formula for individual adoption rates
- x^θ_{A,t}: probability that exactly the agents A ⊆ S_i have adopted θ by t.

ẋ^θ_i = z_θ Σ_{A ⊆ S_i} x^θ_A F(x^H_A / x^L_A)

But we cannot recover the joint x^θ_A from the marginals x^θ_j.
A Bigger State Space

State of network λ ∈ {∅, a, b}^N
- λ_i = ∅: i hasn't moved yet, t ≤ t_i.
- λ_i = a: i has moved, tried, and adopted the product.
- λ_i = b: i has moved, but not adopted the product.

Additional Notation
- Distribution x = (x_λ), and x_Λ := Σ_{λ∈Λ} x_λ for sets Λ.
- If λ_i = a, b, write λ_{−i} for "the same state with λ_i = ∅".

Inference
- If A ⊆ S_i have adopted at t_i, then i knows λ ∈ Λ(i, A), namely:
  λ_j = a for all j ∈ A;  λ_j ≠ a for all j ∈ S_i \ A;  λ_i = ∅.
ODE for General Networks

Theorem 1.

ẋ^θ_λ = − (1/(1−t)) Σ_{i: λ_i = ∅} x^θ_λ
      + (1/(1−t)) Σ_{i: λ_i = a} x^θ_{λ_{−i}} z_θ F(x^H_{Λ(i,A)} / x^L_{Λ(i,A)})
      + (1/(1−t)) Σ_{i: λ_i = b} x^θ_{λ_{−i}} [1 − z_θ F(x^H_{Λ(i,A)} / x^L_{Λ(i,A)})]

Implications
- Existence, uniqueness, discrete-time approximation ...
- But: the ODE cannot be computed in practice, since it is 2 · 3^N-dimensional.
Trees

Large Random Networks
- Analysis is complicated by (a) self-reference, (b) correlation.
- Neither matters in large random networks.

(Homogeneous) Trees: Network G is
- ... a tree if there is at most one path i → … → j.
- ... homogeneous of degree k if every node has out-degree k.
Adoption in Trees
- Successors' adoption (x^θ_j)_{j∈S_i} is conditionally independent.
- Probability that exactly A ⊆ S_i adopt:

  x^θ_A = Π_{j∈A} x^θ_j · Π_{j∈S_i\A} (1 − x^θ_j)

- Adoption rates:

  ẋ^θ_i = z_θ Σ_{A⊆S_i} x^θ_A F(x^H_A / x^L_A)

- 2N-dimensional ODE.
Adoption in Homogeneous Trees
- All nodes are symmetric.
- Probability that ν of k successors adopt:

  x^θ_{(ν,k)} := (k choose ν) (x^θ)^ν (1 − x^θ)^{k−ν}

- Adoption rate:

  ẋ^θ = z_θ Σ_{ν=0}^{k} x^θ_{(ν,k)} F(x^H_{(ν,k)} / x^L_{(ν,k)})

- 2-dimensional ODE.
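A minimal sketch of Euler-integrating this 2-dimensional homogeneous-tree ODE. The cost cdf F(c) = c on [0, 1] is an illustrative assumption (it avoids the knife-edge start of the slides' U[1/2, 2/3] example), and `tree_adoption` is a hypothetical helper name:

```python
from math import comb

def binom_pmf(nu, k, x):
    return comb(k, nu) * x**nu * (1 - x)**(k - nu)

def tree_adoption(k=1, z_h=1.0, z_l=0.5,
                  F=lambda c: min(max(c, 0.0), 1.0), dt=1e-3):
    """Euler scheme for  x'_theta = z_theta * sum_nu x_(nu,k) F(psi_nu)
    on a homogeneous tree of out-degree k; returns the path (t, xH, xL)."""
    x_h = x_l = 0.0
    path = []
    for s in range(int(1 / dt)):
        d_h = d_l = 0.0
        for nu in range(k + 1):
            b_h = binom_pmf(nu, k, x_h)
            b_l = binom_pmf(nu, k, x_l)
            if b_h + b_l > 0:
                psi = b_h / (b_h + b_l)   # posterior Pr(H) after seeing nu of k adopters
                d_h += b_h * F(psi)
                d_l += b_l * F(psi)
        x_h = min(x_h + dt * z_h * d_h, 1.0)
        x_l = min(x_l + dt * z_l * d_l, 1.0)
        path.append((s * dt, x_h, x_l))
    return path

path = tree_adoption(k=1)
```

Both curves are monotone, and the high-quality curve separates above the low-quality one, in line with the adoption-curve figures.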
Adoption curves for the infinite line, k = 1.

[Figure: adoption rate over time for high and low quality, together with Pr(Try|A) and Pr(Try|N).]

Assumptions: c ∼ U[1/2, 2/3], z_L = 1/2, z_H = 1.
Mean Field Approximation

Large (homogeneous) Random Networks
- Fix N nodes and randomly draw k out-links for every node i.
- x^θ_{(i,G),t}: adoption rate of agent i in network G.

Theorem 2 (Mean Field Approximation).
Adoption rates in a large random network converge to those of the homogeneous tree, x^θ_t:

lim_{N→∞} Pr(|x^θ_{(i,G),t} − x^θ_t| < ε) = 1 for all t, θ, ε > 0

Proof Sketch
- A large (homogeneous) random network is "locally" a tree (with Pr → 1).
- In discrete time t = ν/m (Euler), only the "local" network matters.
- The discrete-time solution converges to continuous time.
Speed of Convergence: N-cycles
- Agents i ∈ {1, …, n} with links i → i + 1 mod n.
- x^θ_{j,i,t}: probability that j has adopted at t ≤ t_i.

ẋ^θ_{j,i,t} = z_θ x^θ_{j+1,i,t} F(x^H_{j+1,j,t} / x^L_{j+1,j,t})
[Figure: adoption curves for n-cycles with n = 1, n = 2, and n = 3, 4, for high and low quality.]
Comparative Statics (for finite trees)
Adoption curves

[Figure: adoption curves over time for k = 1, 5, 20, for high and low quality.]

Assumptions: c ∼ U[1/2, 2/3], z_L = 1/2, z_H = 1.
Ranking Product Adoption

Definition
Information structure (µ^H, µ^L) has better product adoption than (µ̄^H, µ̄^L) if the test chance is greater for H and smaller for L:

Pr_µ(test|H) ≥ Pr_µ̄(test|H) and Pr_µ̄(test|L) ≥ Pr_µ(test|L)

Application
- Learning from successors' adoption: (x^H_{A,t}, x^L_{A,t})_{A⊆S_i}.
- Allows comparison across: time t, agents i, networks G.
- E.g., i's adoption improves over time iff for all t > s:

  x^H_{i,t} ≥ x^H_{i,s} and x^L_{i,s} ≥ x^L_{i,t}
Product Adoption Improves in the Blackwell Order

Assumption. F(c)·c is convex, and F(c)·(1 − c) is concave.
- Satisfied if |F''(c)/F'(c)| < 2.

Lemma 1.
If experiment (µ^H, µ^L) is Blackwell-sufficient for (µ̄^H, µ̄^L), then it has better product adoption.

Proof
- The posterior ψ = ψ(A) = µ^H(A) / (µ^H(A) + µ^L(A)) is sufficient for signal A.
- Try-out probability:

  Pr(test|H) = Pr(c ≤ ψ | H) = Σ_ψ µ^H(ψ) F(ψ) = Σ_ψ µ(ψ) 2ψ F(ψ),

  where µ := (µ^H + µ^L)/2, so that µ^H(ψ) = 2ψ µ(ψ).
- Since 2ψF(ψ) is convex, and µ(ψ) is a mean-preserving spread of µ̄(ψ):

  Pr_µ(test|H) = Σ_ψ µ(ψ) 2ψ F(ψ) > Σ_ψ µ̄(ψ) 2ψ F(ψ) = Pr_µ̄(test|H)
Sufficiency for Dichotomies with Binary Signals

Lemma 2.
A draw from (x^H, x^L) is sufficient for a draw from (x̄^H, x̄^L) iff

x^H / x^L ≥ x̄^H / x̄^L and (1 − x^H) / (1 − x^L) ≤ (1 − x̄^H) / (1 − x̄^L)

Thus: adoption improves over time if the trace (x^H_t, x^L_t)_t is concave.
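The two likelihood-ratio inequalities of Lemma 2 are straightforward to check numerically; the pairs below are made-up illustrations, and `sufficient` is a hypothetical helper name:

```python
def sufficient(x_h, x_l, xb_h, xb_l):
    """Lemma 2 check: a draw from (x_h, x_l) is Blackwell-sufficient for a
    draw from (xb_h, xb_l) iff the 'adopt' likelihood ratio is higher and
    the 'not adopt' likelihood ratio is lower."""
    return (x_h / x_l >= xb_h / xb_l) and \
           ((1 - x_h) / (1 - x_l) <= (1 - xb_h) / (1 - xb_l))

# (0.6, 0.2) dominates (0.5, 0.3): 3 >= 5/3 and 0.5 <= 5/7
better = sufficient(0.6, 0.2, 0.5, 0.3)
# (0.6, 0.2) vs (0.1, 0.01): the adopt ratio 3 < 10 fails, so not sufficient
worse = sufficient(0.6, 0.2, 0.1, 0.01)
```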
Product Adoption as a Function of Time

Theorem 3.
For any agent in any tree, product adoption improves over time.

For s < t: x^H_t ≥ x^H_s and x^L_s ≥ x^L_t

Induction over distance to leaves, m:
- Anchor, m = 0: no info., so ẋ^θ = z_θ Pr(test) ≡ z_θ F(1/2).
- Step m → m + 1:
  • Adoption improves in t for level-m agents j.
  • The trace (x^H_{j,t}, x^L_{j,t})_t is concave.
  • Lemma 2: (x^H_{j,t}, x^L_{j,t}) is Blackwell-sufficient for (x^H_{j,s}, x^L_{j,s}) if t > s.
  • Lemma 1: adoption improves in t for level-(m+1) agent i.
Product Adoption as a Function of Conversion Rates
- More informative conversion rates: z̄_H ≥ z_H and z_L ≥ z̄_L.
- Normalize product adoption: x^θ_t / z_θ.

Theorem 4.
Product adoption improves in the informativeness of conversion rates. For any t:

x̄^H / z̄_H ≥ x^H / z_H and x^L / z_L ≥ x̄^L / z̄_L   (*)

Proof: Induction Step
- Assume (*) for all j ∈ S_i. Then, a fortiori,

  x̄^H_j ≥ x^H_j and x^L_j ≥ x̄^L_j

- (x̄^H_j, x̄^L_j)_{j∈S_i} is Blackwell-sufficient for (x^H_j, x^L_j)_{j∈S_i}.
- Lemma 1: (*) holds for i.
Product Adoption as a Function of Links
- Tree Ḡ with sub-tree G ⊆ Ḡ.
- Agent i ∈ G's adoption rates: x̄^θ_i in Ḡ, x^θ_i in G.

Theorem 5.
Product adoption improves in links.

x̄^H_i ≥ x^H_i and x^L_i ≥ x̄^L_i   (*)

Proof: Induction Step
- Assume (*) for all j ∈ S_i. Then also

  x̄^H_j ≥ x^H_j and x^L_j ≥ x̄^L_j

- (x̄^H_j, x̄^L_j)_{j∈S_i} is Blackwell-sufficient for (x^H_j, x^L_j)_{j∈S_i} (more and better signals).
- Lemma 1: (*) holds for i.
Reconsidering the Comparative Static in Links

Rationale for Theorem 5
- More links generate more information.
- More information improves product adoption.

But do more links generate more information?
The Self-Referential Link Harms Product Adoption

i has better adoption in i → j than in i ↔ j
- Idea: j's choice is more informative in i → j than in i ↔ j.
- i → j: i updates based on x^θ_{j,t}, where ẋ^θ_{j,t} = z_θ F(1/2).
- i ↔ j: i updates based on x^θ_{∅,t}, where ẋ^θ_{∅,t} = z_θ F((1 − x^H_{∅,t}) / (1 − x^L_{∅,t})).
- Thus, x^θ_{j,t} = λ_t x^θ_{∅,t} with λ_t > 1 independent of θ.
- Conclude with Lemmas 1 and 2.

Contrast to Trees
- In a tree, an additional link generates additional information.
- But an agent cannot learn from a link back to himself, j → i.
- Rather, j → i reduces the probability that j generates information.
The Correlating Link is Ambiguous

Information of i is not Blackwell-ranked:
- Correlated: greater chance of the best signal {j, k}.
- Independent: stronger inference from the worst signal ∅.

Numerical simulation: independence is slightly better.
Conclusion

A Tractable Model of Social Learning on Networks
- Describes adoption rates via ODEs.
- Allows for comparative statics because of complementarity.
- Captures real network features.
- Modeling innovation: learn private information after move.

Quo vadis?
- Relax assumptions: perfect info, homogeneity, undirected...
- Examples: small networks with rich structure.
Relaxing Three Modeling Assumptions

Homogeneous number of links
- Real networks are heterogeneous.
- Heterogeneity is covered, but the ODE becomes N-dimensional.

Common Knowledge
- One may know the popularity of friends, but of friends of friends?
- Relaxing common knowledge simplifies the analysis.

Directed Network
- Reasonable for blogs, Twitter, but not for friends.
- i ↔ j is allowed, but doesn't arise in large random networks.
Poisson Network with Known Successors

Model
- Out-degrees drawn iid from P(·|k); no cycles.
- i observes only the number of successors |S_i| (and their adoption).
- Limit of a large random network with link probability k/N.

Adoption rate
- Probability of ι successors: P(ι|k) := e^{−k} k^ι / ι!
- Probability that ν of ι have adopted: B(ν, ι; x) := (ι choose ν) x^ν (1 − x)^{ι−ν}

ẋ^θ = z_θ Σ_{ι=0}^{∞} P(ι|k) Σ_{ν=0}^{ι} B(ν, ι; x^θ) F(B(ν, ι; x^H) / B(ν, ι; x^L))
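Since P(ι|k) decays factorially, the double sum can be truncated in practice. A sketch of evaluating the right-hand side, with an illustrative cost cdf and truncation point (both assumptions, as is the helper name `poisson_flow`):

```python
from math import comb, exp, factorial

def pois(i, lam):
    return exp(-lam) * lam**i / factorial(i)

def binom(nu, i, x):
    return comb(i, nu) * x**nu * (1 - x)**(i - nu)

def poisson_flow(x_h, x_l, k=3.0,
                 F=lambda c: min(max(c, 0.0), 1.0), i_max=40):
    """One evaluation of  sum_i P(i|k) sum_nu B(nu,i;x_theta) F(psi),
    returned per unit of z_theta (multiply by z_H or z_L for the flow)."""
    d_h = d_l = 0.0
    for i in range(i_max + 1):
        p_i = pois(i, k)
        for nu in range(i + 1):
            b_h = binom(nu, i, x_h)
            b_l = binom(nu, i, x_l)
            if b_h + b_l > 0:
                psi = b_h / (b_h + b_l)   # posterior Pr(H) after nu of i adopters
                d_h += p_i * b_h * F(psi)
                d_l += p_i * b_l * F(psi)
    return d_h, d_l

d_h, d_l = poisson_flow(0.4, 0.2)
```

With x^H > x^L, the high-quality flow exceeds the low-quality flow: more adopters are seen under H, and the posterior is increasing in the number of adopters.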
Poisson Network with Unknown Successors

Model
- Idea: it is not so clear how many pertinent friends one has.
- i observes only the adopting successors A ⊆ S_i, but not S_i.

Adoption rate
- Prob. of ι adopting successors: P(ι|k x^θ) := e^{−k x^θ} (k x^θ)^ι / ι!

ẋ^θ = z_θ Σ_{ι=0}^{∞} P(ι|k x^θ) F(P(ι|k x^H) / P(ι|k x^L))

- Less separation than with known successors.
Undirected Poisson Networks

Complications
- i's neighbors j have P(·|k) + 1 neighbors.
- Before t_i, j conditions on λ_i = ∅.

But, assume neighbors are unknown
- The complications cancel, since j can't see i before t_i.
- j observes ν ∼ P(·|k x^θ) adoptions from ι ∼ P(·|k) neighbors.
- Same adoption rates x^θ_t as in the directed Poisson network.
Complete Network

Definitions
- State: (α, β) agents have (adopted, passed).
- x^θ_{α,β,t}: prob. that α others have adopted and β others have passed at t ≤ t_i.
- Aggregate: x^θ_α := Σ_{β < n−α} x^θ_{α,β}

Adoption Rates

ẋ^θ_{α,β} = − (1/(1−t)) (n − 1 − β − α) x^θ_{α,β}
         + (1/(1−t)) (n − 1 − β − (α−1)) x^θ_{α−1,β} z_θ F(x^H_{α−1} / x^L_{α−1})
         + (1/(1−t)) (n − 1 − (β−1) − α) x^θ_{α,β−1} (1 − z_θ F(x^H_α / x^L_α))
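A sketch of Euler-integrating this (α, β)-indexed system for a small n, under an illustrative cost cdf F(c) = c (the helper name `complete_network` and all parameters are assumptions). A useful check is that total probability mass is conserved, mirroring the exact cancellation of the three terms:

```python
def complete_network(n=3, z={"H": 1.0, "L": 0.5},
                     F=lambda c: min(max(c, 0.0), 1.0),
                     dt=1e-4, t_end=0.99):
    """Euler scheme for the (alpha, beta) system from a focal agent's view:
    alpha others adopted, beta others passed, n - 1 others in total."""
    # x[theta][(a, b)] = prob. alpha = a others adopted, beta = b others passed
    x = {th: {(a, b): 0.0 for a in range(n) for b in range(n - a)} for th in z}
    for th in z:
        x[th][(0, 0)] = 1.0
    t = 0.0
    while t < t_end:
        # aggregates x_alpha and posteriors, from the pre-step distribution
        agg = {th: [sum(x[th][(a, b)] for b in range(n - a)) for a in range(n)]
               for th in z}
        psi = [agg["H"][a] / (agg["H"][a] + agg["L"][a])
               if agg["H"][a] + agg["L"][a] > 0 else 0.5 for a in range(n)]
        haz = dt / (1 - t)
        for th, z_th in z.items():
            new = {}
            for (a, b), v in x[th].items():
                d = -(n - 1 - a - b) * v                      # outflow: someone moves
                if a > 0:                                     # inflow: a mover adopts
                    d += (n - a - b) * x[th][(a - 1, b)] * z_th * F(psi[a - 1])
                if b > 0:                                     # inflow: a mover passes
                    d += (n - a - b) * x[th][(a, b - 1)] * (1 - z_th * F(psi[a]))
                new[(a, b)] = v + haz * d
            x[th] = new
        t += dt
    return x

x = complete_network()
```

Under H, mass shifts toward higher α faster than under L, since z_H > z_L; the expected number of adopters is correspondingly larger.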