Learning Summary Statistics for Approximate Bayesian Computation

Yiwen Chen, Department of Management Science and Engineering, Stanford University

1. Background Information



Purpose
•  The purpose of my study is to use various machine learning methods to facilitate the generation of summary statistics in Approximate Bayesian Computation (ABC). Specifically, I will use the following methods:
–  Moment estimation and linear regression
–  Radial basis function
–  Neural network

What is an ABC algorithm?
•  In Bayesian estimation, one of the most important steps is to calculate the likelihood function p(X | θ).
•  However, in many important applications this is infeasible. The ABC algorithm can obtain a posterior estimate without knowing the likelihood function.
•  So the central question is to find a good summary statistic, using machine learning methods, to reduce the dimension.
•  The following theorem guarantees the validity of this approach. [Theorem statement not reproduced in the transcript.]
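To make the algorithm concrete, here is a minimal sketch of ABC rejection sampling (my own illustration, not code from the poster): draw parameters from the prior, simulate data from the model, and keep a draw only when its summary statistic is close to the observed one. The `simulate`, `summary`, and `prior_sample` callables and the tolerance are assumed placeholders.

```python
import numpy as np

def abc_rejection(x_obs, simulate, summary, prior_sample,
                  n_draws=10000, tolerance=0.1):
    """Basic ABC rejection sampling with a user-supplied summary statistic."""
    s_obs = summary(x_obs)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()            # draw a parameter from the prior
        x_sim = simulate(theta)           # simulate data; no likelihood needed
        s_sim = summary(x_sim)
        # keep theta when the simulated summary is close to the observed one
        if np.linalg.norm(np.asarray(s_sim) - np.asarray(s_obs)) <= tolerance:
            accepted.append(theta)
    return np.array(accepted)             # approximate posterior sample
```

The quality of the accepted sample depends almost entirely on how informative `summary` is, which is exactly the problem the methods listed above are meant to address.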

2. Method
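The method details are not captured in this transcript, so the sketch below is an assumed illustration of the regression-based approach listed in the Purpose (in the spirit of Fearnhead and Prangle, 2012): simulate (θ, X) pairs, regress θ on hand-picked features of X, and use the fitted prediction as a low-dimensional summary statistic. The feature map and least-squares fit are assumptions, not the poster's actual pipeline.

```python
import numpy as np

def fit_linear_summary(thetas, sims, features):
    """Fit S(x) = b0 + b . f(x) by least squares so that S(x) predicts theta.

    thetas:   (n, k) array of parameters used to generate each simulated data set
    sims:     list of n simulated data sets
    features: function mapping one data set to a 1-D feature vector f(x)
    """
    F = np.array([features(x) for x in sims])      # n x d feature matrix
    F = np.column_stack([np.ones(len(F)), F])      # prepend an intercept column
    coef, *_ = np.linalg.lstsq(F, np.asarray(thetas), rcond=None)

    def summary(x):
        # the learned summary statistic: the predicted parameter value(s)
        return np.concatenate([[1.0], features(x)]) @ coef
    return summary
```

In this framework, moment estimation can be read as using fixed moments of X directly as the summary, while the RBF and neural-network methods replace the linear map with a richer nonlinear regression.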

3. Evaluate with Moving Average Model

•  Assuming the underlying model X_i = ε_i + θ ε_{i-1} + γ ε_{i-2}
•  And underlying parameter (θ, γ) = (1, −1)
•  A typical time series with the above parameters [figure]
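A minimal simulator for this MA(2) model, which would supply the `simulate` step of the ABC algorithm above; the series length and the standard normal noise are assumptions, since the transcript does not state them.

```python
import numpy as np

def simulate_ma2(theta, gamma, n=100, rng=None):
    """Simulate X_i = eps_i + theta*eps_(i-1) + gamma*eps_(i-2) with N(0,1) noise."""
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal(n + 2)               # two extra lags to start the series
    return eps[2:] + theta * eps[1:-1] + gamma * eps[:-2]

# The poster's setting: (theta, gamma) = (1, -1)
x_obs = simulate_ma2(1.0, -1.0, n=100)
```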

[Posterior plots: (a) Linear Regression, (b) Moment, (c) RBF, (d) Neural Network]

•  Linear regression and the neural network result in better predictions than the other two methods.
•  Moment estimation tends to have a more centered posterior, while RBF results in a more scattered one.

4. Evaluate with Hidden MC model

•  The hidden Markov chain model is as follows:
[Figure: state diagram with hidden system states 0 and 1, transition probabilities θ and 1−θ, and measurement probabilities γ and 1−γ.]
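Reading the diagram as a two-state chain that keeps its current state with probability θ and switches with probability 1−θ, with each binary measurement reporting the true state with probability γ, a simulator sketch might look as follows; this reading of the diagram, the initial state, and the chain length are assumptions.

```python
import numpy as np

def simulate_hmm(theta, gamma, n=100, rng=None):
    """Two-state hidden chain: stay with prob. theta, switch with prob. 1-theta;
    each binary measurement equals the hidden state with prob. gamma."""
    rng = np.random.default_rng() if rng is None else rng
    states = np.zeros(n, dtype=int)                # start the chain in state 0
    for i in range(1, n):
        if rng.random() < theta:
            states[i] = states[i - 1]              # stay in the same state
        else:
            states[i] = 1 - states[i - 1]          # switch state
    correct = rng.random(n) < gamma                # which measurements are correct
    return np.where(correct, states, 1 - states)   # observed measurement sequence
```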

[Posterior plots: (a) RBF, (b) Neural Network]

•  All three methods tend to have similar results in this case, while the complex methods have better predictions.

5. Conclusions and Insights

•  Automatic summary statistic generation is a good way to run the ABC algorithm when we do not know the exact sufficient statistics.
•  More complex models tend to generate a more scattered posterior distribution, but the mean and standard deviation seem better.
•  In practice, the efficiency of the above methods is:
   moment method > linear regression > NN > RBF



6. Reference

•  Blum, M. G. (2010). Choosing the summary statistics and the acceptance rate in approximate Bayesian computation. In Proceedings of COMPSTAT'2010, pages 47-56. Springer.
•  Fearnhead, P. and Prangle, D. (2012). Constructing summary statistics for approximate Bayesian computation: semi-automatic approximate Bayesian computation. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74(3):419-474.
•  Jiang, B., Wu, T.-y., Zheng, C., and Wong, W. H. (2015). Learning summary statistic for approximate Bayesian computation via deep neural network. arXiv preprint arXiv:1510.02175.