Center for Radiative Shock Hydrodynamics Fall 2011 Review
Transcript of Center for Radiative Shock Hydrodynamics Fall 2011 Review
Assessment of predictive capability
Derek Bingham
o Experiment design
o Screening (identifying the most important inputs)
o Emulator construction
o Prediction
o Calibration/tuning (solving inverse problems)
o Confidence/prediction interval estimation
o Analysis of multiple simulators
We will focus on a framework in which we can quantify uncertainties in predictions and the impact of each source of variability
CRASH has required innovations in most of these UQ activities
The predictive modeling approach is often called model calibration*,

where:
o x: model or system inputs
o y(x): system response
o η(x, θ): simulator response
o θ: calibration parameters
o ε: observational error

*Kennedy and O'Hagan (2001); Higdon et al. (2004)

Gaussian Process Models (looking at other models)

The goal is to estimate the unknown calibration parameters and also to make predictions of the physical system
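The equation on this slide did not survive transcription. A reconstruction consistent with the legend, in the notation of the cited Kennedy and O'Hagan (2001) formulation (our reconstruction, not the slide's exact rendering), is:

```latex
y(x) = \eta(x, \theta) + \delta(x) + \varepsilon,
\qquad \varepsilon \sim N(0, \sigma^2_\varepsilon),
```

where \delta(x) is a model-discrepancy term accounting for systematic differences between the simulator and the physical system; the discrepancies reappear on the multi-fidelity slides later in the talk.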
The vector of observations and simulations is modeled jointly; the Gaussian process model specification links the simulations and observations through the covariance
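A sketch of how the covariance links the two data sources (notation assumed, not transcribed from the slide): stack the field observations and simulator runs into one vector and model it as a single multivariate normal,

```latex
z = \bigl( y(x_1), \dots, y(x_n),\; \eta(x^*_1, t^*_1), \dots, \eta(x^*_m, t^*_m) \bigr)^{T},
\qquad z \mid \theta \sim N\bigl( \mu \mathbf{1},\; \Sigma(\theta) \bigr),
```

where the blocks of \Sigma(\theta) contain covariances between observations and simulations evaluated at the calibration parameters, so the observations carry information about \theta.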
We have used 2-D CRASH simulations and observations to build and explore the predictive model for shock location and breakout time.

Experiment data:
o 2008 and 2009 experiments
o Experiment variables: Be thickness, Laser energy, Xe fill pressure, Observation time
o Response: Shock location (2008) and shock breakout time (2009)

2-D CRASH simulations:
o 104 simulations, varied over 5 inputs
o Experiment variables: Be thickness, Laser energy, Observation time
o Calibration parameters: Electron flux limiter, Be gamma, Wall opacity
Can sample from joint posterior distribution of the calibration parameters
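As a generic illustration of how such posterior samples can be drawn (a random-walk Metropolis sketch on a toy one-parameter calibration posterior, not the CRASH implementation):

```python
import numpy as np

def log_posterior(theta, data, sigma=0.2):
    """Toy log-posterior: Gaussian likelihood around a hypothetical
    simulator eta(theta) = theta**2, with a flat prior on [0, 2]."""
    if not (0.0 <= theta <= 2.0):
        return -np.inf
    eta = theta ** 2  # stand-in for a simulator/emulator evaluation
    return -0.5 * np.sum((data - eta) ** 2) / sigma ** 2

def metropolis(data, n_samples=5000, step=0.1, seed=0):
    """Random-walk Metropolis over a single calibration parameter."""
    rng = np.random.default_rng(seed)
    theta, samples = 1.0, []
    for _ in range(n_samples):
        prop = theta + step * rng.standard_normal()
        delta = log_posterior(prop, data) - log_posterior(theta, data)
        if delta > 0 or rng.random() < np.exp(delta):
            theta = prop
        samples.append(theta)
    return np.array(samples)

obs = np.array([1.02, 0.95, 1.08])  # pretend observations
draws = metropolis(obs)
print(draws[2500:].mean())          # posterior mean after discarding burn-in
```

In practice the CRASH analysis samples several calibration parameters jointly with the GP hyperparameters, but the accept/reject mechanics are the same.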
Calibration result figures: breakout time calibration, shock location calibration, and joint calibration
A look at the posterior marginal distributions of the calibration parameters (figure)
The statistical model can be used to evaluate the sensitivity of the codes or the system to the inputs.

2-D CRASH shock breakout time sensitivity plots (figure)
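Sensitivity plots of this kind are often produced from the emulator by Monte Carlo main effects; a generic sketch with a hypothetical emulator (not the CRASH code):

```python
import numpy as np

def emulator(x):
    """Hypothetical emulator over [0, 1]^3 (not the CRASH emulator)."""
    return np.sin(np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

def main_effect(f, dim, index, grid=21, n_mc=2000, seed=0):
    """Monte Carlo main effect: pin input `index` at each grid value and
    average the emulator over the remaining inputs."""
    rng = np.random.default_rng(seed)
    grid_vals = np.linspace(0.0, 1.0, grid)
    means = []
    for g in grid_vals:
        x = rng.random((n_mc, dim))
        x[:, index] = g            # pin one input, average out the rest
        means.append(f(x).mean())
    return grid_vals, np.array(means)

g, effect = main_effect(emulator, dim=3, index=0)
print(effect.max() - effect.min())  # a large range signals a sensitive input
```

Plotting `effect` against `g` for each input gives the kind of sensitivity plot shown on the slide.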
The statistical model is used to predict shock breakout time, incorporating the sources of uncertainty (prediction figures; time axes in μs).

The statistical model is used to predict shock location, incorporating the sources of uncertainty.
We have simulations from 1-D and 2-D models. The 2-D model runs come at a higher computational cost. We would like to use all simulations, and the experiments, to make predictions. We developed a new statistical model for combining outputs from multi-fidelity simulators.

1-D CRASH simulations:
o 1024 simulations
o Experiment variables: Be thickness, Laser energy, Xe fill pressure, Observation time
o Calibration parameters: Electron flux limiter, Laser energy scale factor

2-D CRASH simulations:
o 104 simulations
o Experiment variables: Be thickness, Laser energy, Xe fill pressure, Observation time
o Calibration parameters: Electron flux limiter, Wall opacity, Be gamma
The available shock information comes from models and experiments,

where:
o x: model or system inputs
o y(x): system response
o η: simulator response
o θ1, θ2: vectors of calibration parameters

Modeling approach in the spirit of Kennedy and O'Hagan (2000); Kennedy and O'Hagan (2001); Higdon et al. (2004):
o 1-D simulator … calibration parameters are adjusted
o 2-D simulator … calibration parameters are adjusted
o Experiments … calibration parameters are fixed and unknown
The idea is that the 1-D code does not match the 2-D code for two reasons
Calibrate the lower-fidelity code to the higher-fidelity code
Link the simulator responses and observations through a joint model and discrepancies.

Comments:
o For deciding what variables belong in the discrepancy, one can ask "what is fixed at this level?"
o The interpretation of the calibration parameters changes somewhat
o Discrepancies are almost guaranteed for this specification
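The linking equations themselves were lost in transcription; a plausible reconstruction, in the spirit of the Kennedy and O'Hagan (2000) multi-fidelity setup (notation assumed, not from the slide):

```latex
\eta_2(x, \theta_1, \theta_2) = \eta_1(x, \theta_1) + \delta_1(x),
\qquad
y(x) = \eta_2(x, \theta_1, \theta_2) + \delta_2(x) + \varepsilon,
```

where \delta_1 is the discrepancy between the 1-D and 2-D simulators, \delta_2 is the discrepancy between the 2-D simulator and the physical system, and the \eta's and \delta's carry Gaussian process priors.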
Gaussian Process Models
The approach is Bayesian, so we need to specify prior distributions:
o Inverted-gamma priors for the variance components
o Beta priors for the correlation parameters
o Log-normal priors for the calibration parameters
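These choices can be sketched with NumPy (hypothetical hyperparameters; the slide does not give the actual values):

```python
import numpy as np

def sample_priors(rng):
    """Draw one sample from each prior family named on the slide.
    Hyperparameters here are illustrative, not the CRASH values."""
    return {
        # inverted-gamma(a, b) for a variance component, via 1 / Gamma(a)
        "variance": 1.0 / rng.gamma(shape=2.0, scale=1.0),
        # beta prior keeps a GP correlation parameter in (0, 1)
        "correlation": rng.beta(2.0, 1.0),
        # log-normal prior for a positive calibration parameter
        "calibration": rng.lognormal(mean=0.0, sigma=0.5),
    }

rng = np.random.default_rng(0)
print(sample_priors(rng))  # e.g. to initialize an MCMC chain
```

The beta prior's (0, 1) support matches correlation parameters of a GP written in product-correlation form, which is one reason that family is convenient here.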
We can illustrate using a simple example (figures):
o Low-fidelity model
o High-fidelity model
o True model + replication error
How would this work in practice? We evaluate each computer model at different input settings:
o The low-fidelity (LF) model was evaluated 20 times, with inputs (x, t1, tf) chosen according to a Latin hypercube design
o The high-fidelity (HF) model was evaluated 5 times, with inputs (x, t2, tf) chosen according to a Latin hypercube design
o The experimental data were generated by evaluating the true model 3 times and adding replication error from a N(0, 0.2)
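A sketch of this setup in Python. The slide's actual LF/HF/true functions are not in the transcript, so the functions below are hypothetical stand-ins; the design sizes (20 LF runs, 5 HF runs, 3 field observations) and the N(0, 0.2) replication error follow the slide:

```python
import numpy as np

def latin_hypercube(n, dim, rng):
    """Simple Latin hypercube on [0, 1]^dim: one point per stratum in each column."""
    pts = (np.arange(n)[:, None] + rng.random((n, dim))) / n
    for j in range(dim):
        pts[:, j] = rng.permutation(pts[:, j])
    return pts

# Hypothetical models (the slide's actual functions were not transcribed).
def true_model(x):         return np.sin(2 * np.pi * x)
def high_fidelity(x, t2):  return np.sin(2 * np.pi * x) + 0.1 * t2
def low_fidelity(x, t1):   return np.sin(2 * np.pi * x) + 0.3 * t1 + 0.2

rng = np.random.default_rng(0)
lf_design = latin_hypercube(20, 3, rng)   # 20 LF runs over (x, t1, tf)
hf_design = latin_hypercube(5, 3, rng)    # 5 HF runs over (x, t2, tf)

y_lf = low_fidelity(lf_design[:, 0], lf_design[:, 1])
y_hf = high_fidelity(hf_design[:, 0], hf_design[:, 1])

# 3 field observations: true model plus N(0, 0.2) replication error
x_obs = rng.random(3)
y_obs = true_model(x_obs) + rng.normal(0.0, 0.2, size=3)
print(y_lf.shape, y_hf.shape, y_obs.shape)
```

The Latin hypercube guarantees each input is stratified marginally, which is why it is the standard space-filling design for small emulator training sets.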
Observations and response functions at the true value of the calibration parameters (figure)
We can construct 95% posterior prediction intervals at the observations
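Given posterior predictive draws, however they are obtained, the 95% intervals are just sample quantiles; a generic sketch with simulated draws:

```python
import numpy as np

rng = np.random.default_rng(0)

truth = np.array([0.0, 1.0, 2.0])  # hypothetical true responses at 3 observation points
# Stand-in for posterior predictive draws: rows are MCMC draws, columns are locations.
draws = rng.normal(loc=truth, scale=0.2, size=(4000, 3))

# 95% posterior prediction interval at each observation point
lower, upper = np.percentile(draws, [2.5, 97.5], axis=0)
covered = (lower <= truth) & (truth <= upper)
print(covered)
```

In the real analysis the draws come from the fitted joint GP model rather than a known normal, but the interval construction is identical.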
Comparison of predicted response surfaces (figure)
New methodology applied to CRASH for breakout time
Observations:
o Able to build a statistical model that appears to predict the observations well
o The prediction error is on the order of the experimental uncertainty
o Care must be taken in choosing priors for the variances of the GPs
An approach to combine outputs from experiments and several different computer models:
o The mean function is just one of many possible response functions
o View computer model evaluations as biased versions of this "super-reality"
o Each computer model will be calibrated directly to the observations

We are developing a new statistical model for combining simulations and experiments.
Information for estimating the individual unknown calibration parameters comes from the observations and the models with that parameter as an input
Super-reality model for prediction and calibration
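The slide's equations are not in the transcript; a plausible sketch of such a "super-reality" formulation (notation assumed, not from the slide):

```latex
y(x) = \zeta(x) + \varepsilon,
\qquad
\eta_k(x, \theta_k) = \zeta(x) + \delta_k(x), \quad k = 1, \dots, K,
```

where \zeta(x) is the super-reality response, each of the K computer models \eta_k is a biased (discrepancy \delta_k) version of it, and each \theta_k is informed only by the observations and the models that take that parameter as an input.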
Use the model calibration framework to perform a variety of tasks, such as exploring the simulation response surfaces, making predictions for experiments, and sensitivity analysis
Developed a new statistical model for calibration of multi-fidelity computer models with field data
Can make predictions with associated uncertainty informed by multi-fidelity models
Developing a model to combine several codes (not necessarily ranked by fidelity) and observations
Have deployed state-of-the-art UQ techniques to leverage the CRASH codes and experiments
Allocation of computational budget
The goal is to use the available simulations and experiments to evaluate the allocation of the computational budget across computational models
Since prediction is our goal, we use the reduction in the integrated mean squared error (IMSE), which measures the prediction variance averaged across the input space
The optimal set of simulations is the one that maximizes the expected reduction in the IMSE
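In symbols, with \hat{y}(x) the model's predictor at input x (a standard definition consistent with the slide; notation assumed):

```latex
\mathrm{IMSE} = \int_{\mathcal{X}} \operatorname{Var}\!\bigl[\hat{y}(x) \mid \text{data}\bigr]\, dx,
```

and a candidate set of new trials is scored by its expected reduction, E[IMSE_current − IMSE_after].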
The criterion can be evaluated in the current statistical framework
Can compute an estimate of the mean squared error at any potential input, conditional on the model parameters
Would like a new trial to improve the prediction everywhere in the input region
This criterion is difficult to optimize
A quick illustration – CRASH 1-D using shock location
Can use the 1-D predictive calibration model to evaluate the value of adding new trials
Suppose we wish to conduct 10 new field trials
Which 10? What do we expect to gain?
Expected reduction in IMSE for up to 10 new experiments (figure; y-axis: expected reduction in IMSE; x-axis: number of follow-up experiments)
We can compare the value of new experiments to that of simulations:
o One new field trial yields an expected reduction in the IMSE of about 5%
o The optimal IMSE design with 200 new 1-D computer trials yields an expected reduction of about 3%
o The value of an experiment is substantially more than that of a computer trial
o Can do the same exercise when there are multiple codes
Fin