Reference Value Estimators in Key Comparisons


Transcript of Reference Value Estimators in Key Comparisons

Page 1: Reference Value Estimators in Key Comparisons

Reference Value Estimators in Key Comparisons

Margaret Polinkovsky

Advisor: Nell Sedransk

NIST, May 2004

Page 2: Reference Value Estimators in Key Comparisons

Statistics, Metrology and Trade

Statistics:
- Estimation for measurements (1st moment)
- Attached uncertainty (2nd moment)

Incredible precision in National Metrology Institutes (NMIs):
- Superb science
- Exquisite engineering
- Statistical analysis

Page 3: Reference Value Estimators in Key Comparisons

What are Key Comparisons?

Each comparability experiment:
- Selected critical and indicative settings (the "Key")
- Tightly defined and uniform experimental procedures

Purpose:
- Establish the degree of equivalence between national measurement standards

Mutual Recognition Arrangement (MRA):
- 83 nations
- Experiments for over 200 different criteria

Page 4: Reference Value Estimators in Key Comparisons

[Diagram: a seller (a) and a buyer (b) exchange products and money. The seller's specifications and the buyer's requirements are each backed by measurements, traceable to the respective national metrology institutes NMIa and NMIb. Comparability between NMIa and NMIb establishes equivalence of the measurements.]

NMI: National Metrology Institute

Page 5: Reference Value Estimators in Key Comparisons

Elements of Key Comparisons

- Key points for comparisons
- Experimental design for testing:
  - Participating NMIs
  - Measurement and procedure for testing
  - Statistical design of experiments
- Analysis of target data:
  - Statistical analysis of target data
  - Scientific review of the measurement procedure

Page 6: Reference Value Estimators in Key Comparisons

Issues for Key Comparisons

Goals:
- To estimate NMI-NMI differences
- To attach uncertainty to NMI-NMI differences
- To estimate the Key Comparison Reference Value (KCRV)
- To establish individual NMI conformance to the group of NMIs
- To estimate the associated uncertainty

Complexity:
- Artifact stability; artifact compatibility; other factors

[Diagram: the pilot NMI and the participating NMIs]

Page 7: Reference Value Estimators in Key Comparisons

Statistical Steps

Step 1: Design the experiment (statistical design)

Step 2: Data collected and statistically analyzed (full statistical analysis)

Step 3: Reference value and degrees of equivalence determined; corresponding uncertainties estimated

Page 8: Reference Value Estimators in Key Comparisons

Present State of Key Comparisons

- No consensus among NMIs on the best choice of procedures at each step
- Need for a statistical roadmap:
  - Clarify choices
  - Optimize the process

Page 9: Reference Value Estimators in Key Comparisons

Outcomes of Key Comparisons

Idea: a "true value"
- Near-complete adjustment for other factors (model based, physical-law based)
- Non-measurement factors below the threshold for measurement

Precision methodology assumptions:
- Highly precise equipment used to minimize variation
- Repetition to reduce measurement error (see the relation below)
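Why repetition reduces measurement error (a standard fact, not stated on the slide): averaging $n$ independent repeats with standard deviation $\sigma$ shrinks the standard error by a factor of $\sqrt{n}$:

$$\bar{y} = \frac{1}{n}\sum_{j=1}^{n} y_j, \qquad \operatorname{se}(\bar{y}) = \frac{\sigma}{\sqrt{n}}$$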

Page 10: Reference Value Estimators in Key Comparisons

Outcomes of Key Comparisons

For each NMI:

Observation = "True Value" + measurement error + non-measurable error

- "True Value": the same for all NMIs (after adjustment, if any), measuring a common artifact
- Measurement error: varies by NMI; data-based estimate ("statistical uncertainty")
- Non-measurable error: varies by NMI; a different expert assessment for each NMI ("non-statistical or physical event uncertainty")

Combined uncertainty (see the formula below)

Goal:
- Estimate the "True Value": the KCRV
- Estimated combined uncertainty and degrees of equivalence
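The combined uncertainty mentioned here is, under the usual GUM-style assumption (not spelled out on the slide), the root-sum-of-squares of the statistical (Type A) and non-statistical (Type B) components:

$$u_c = \sqrt{u_A^2 + u_B^2}$$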

Page 11: Reference Value Estimators in Key Comparisons

Problems to Solve: Define the Best Estimator for the KCRV

Data from all NMIs combined; many competing estimators:

- Unweighted estimators
  - Median (of all NMIs)
  - Simple mean
- Weighted by Type A
  - Weighted by 1/(Type A) (Graybill-Deal)
- Weighted by both Type A and Type B
  - Weighted by 1/(Type A + Type B) (weighted sum)
  - DerSimonian-Laird
  - Mandel-Paule

A code sketch of these estimators follows.
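A minimal sketch (not from the slides) of how these competing estimators can be computed from per-NMI values `x` with Type A variances `s2` and, for the weighted-sum variant, Type B variances `u2`. The function names, the starting bracket for the Mandel-Paule search, and the use of bisection to solve its defining equation are illustrative assumptions; only the weighting schemes themselves follow the list above.

```python
import numpy as np

def simple_mean(x):
    return np.mean(x)

def median_estimator(x):
    return np.median(x)

def graybill_deal(x, s2):
    """Weighted by 1/(Type A variance)."""
    w = 1.0 / np.asarray(s2)
    return np.sum(w * x) / np.sum(w)

def weighted_sum(x, s2, u2):
    """Weighted by 1/(Type A + Type B variance)."""
    w = 1.0 / (np.asarray(s2) + np.asarray(u2))
    return np.sum(w * x) / np.sum(w)

def dersimonian_laird(x, s2):
    """Random-effects weights 1/(s_i^2 + tau^2), tau^2 from the DL moment estimator."""
    x, s2 = np.asarray(x), np.asarray(s2)
    n = len(x)
    w = 1.0 / s2
    xbar = np.sum(w * x) / np.sum(w)
    q = np.sum(w * (x - xbar) ** 2)                     # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (n - 1)) / c)                  # truncated at zero
    w = 1.0 / (s2 + tau2)
    return np.sum(w * x) / np.sum(w)

def mandel_paule(x, s2, n_iter=100):
    """Choose tau^2 >= 0 so that sum_i w_i (x_i - xbar_w)^2 = n - 1, w_i = 1/(s_i^2 + tau^2)."""
    x, s2 = np.asarray(x), np.asarray(s2)
    n = len(x)

    def excess(tau2):                                   # decreasing in tau^2
        w = 1.0 / (s2 + tau2)
        xbar = np.sum(w * x) / np.sum(w)
        return np.sum(w * (x - xbar) ** 2) - (n - 1)

    if excess(0.0) <= 0:                                # no between-laboratory variance needed
        tau2 = 0.0
    else:
        lo, hi = 0.0, np.var(x, ddof=1) + np.max(s2)    # starting bracket (assumption)
        while excess(hi) > 0:                           # widen until the root is bracketed
            hi *= 2.0
        for _ in range(n_iter):                         # bisection
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if excess(mid) > 0 else (lo, mid)
        tau2 = 0.5 * (lo + hi)
    w = 1.0 / (s2 + tau2)
    return np.sum(w * x) / np.sum(w)
```

With `x` holding one reported value per NMI and `s2` the corresponding Type A variances, each function returns one candidate KCRV.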

Page 12: Reference Value Estimators in Key Comparisons

Role of KCRV

Used as the reference value, with a "95% confidence interval"

$$\mathrm{KCRV} \pm 2\,U(\mathrm{KCRV})$$

Equivalence condition for an NMI:

$$\lvert \text{NMI value} - \mathrm{KCRV} \rvert \le 2\,U(\text{NMI value} - \mathrm{KCRV})$$
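Restating the equivalence condition in code (a trivial sketch; the standard uncertainty `u_diff` of the difference, which must account for any correlation between the NMI value and the KCRV, is assumed to be supplied by the analyst):

```python
def equivalent_to_kcrv(nmi_value, kcrv, u_diff):
    """True if |NMI value - KCRV| <= 2 * U(NMI value - KCRV)."""
    return abs(nmi_value - kcrv) <= 2.0 * u_diff
```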

Page 13: Reference Value Estimators in Key Comparisons

Research Objectives

Objectives:
- Characterize the behavior of the 6 estimators
- Examine differences among the 6 estimators
- Identify conditions governing estimator performance

Method:
- Define the structure of the inputs to the data
- Simulate
- Analyze the results of the simulation:
  - Estimator performance
  - Comparison of the estimators

Page 14: Reference Value Estimators in Key Comparisons

Model for NMI_i

$$Y_i = \mu + \beta_i + \beta_i^* + \varepsilon_i$$

- $\mu$: reference value (KCRV)
- $\beta_i$: (laboratory/method) bias, Type B
- $\beta_i^*$: (laboratory/method) deviation, Type B
- $\varepsilon_i$: measurement error, Type A

Page 15: Reference Value Estimators in Key Comparisons

Sources of Uncertainty

For each NMI and each measurement process, two sources feed the analysis:

Expertise (Type B):

$$u^2 = \beta^2 + \sigma^2 \quad \text{(Type B)}$$

- $\beta$ = systematic bias
- $\sigma^2$ = extra-variation

Data ($y$, $s^2$):

$$y = \mu + \beta + \beta^* + \varepsilon$$

- $\beta$ = experiment-specific bias
- $\beta^*$ = experiment-specific deviation
- $\varepsilon$ = random variation
- $s^2$ = data-based variance estimate (Type A)

[Diagram: each of several NMIs runs the process; expertise supplies $\beta$ and $\sigma^2$, the data supply $y$ and $s^2$]

Page 16: Reference Value Estimators in Key Comparisons

Translation to Simulation

Scientist (unobservable):
- Systematic bias: $\beta \sim N(\cdot\,,\cdot)$, simulated as random
- Extra-variation: $\sigma^2 \sim k\,\chi^2$, simulated as random

Data (observable):
- Observed "best value" $y$
- Variance estimate: $s^2 \sim (\sigma_\varepsilon^2/\mathrm{df})\,\chi^2_{\mathrm{df}}$, simulated as random
- Experimental bias: $\beta^* \sim N(\beta, \sigma^2)$, simulated as random
- Experimental deviation: simulated as random

Uncertainty:
- Type A: $s^2$
- Type B: $u^2 = \beta^2 + \sigma^2$

A simulation sketch along these lines follows.
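A minimal sketch of this translation to simulation, under assumptions: the parameter values, the Gaussian and scaled chi-square choices for the pieces whose distributions are garbled in the transcript, and the use of the simple mean and median as stand-ins for the six estimators are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2004)

def simulate_key_comparison(n_nmi=10, mu=0.0, beta=0.1, sigma2=0.04,
                            sigma_eps2=0.01, df=9):
    """One simulated key comparison: per-NMI best values y_i and Type A estimates s_i^2.

    mu         -- the "true value" the KCRV should estimate
    beta       -- systematic bias (Type B component)
    sigma2     -- extra-variation among laboratories (Type B component)
    sigma_eps2 -- within-laboratory measurement variance (Type A)
    df         -- degrees of freedom behind each variance estimate
    """
    beta_star = rng.normal(beta, np.sqrt(sigma2), n_nmi)      # experimental bias
    eps = rng.normal(0.0, np.sqrt(sigma_eps2), n_nmi)         # random measurement error
    y = mu + beta_star + eps                                   # observed "best values"
    s2 = sigma_eps2 * rng.chisquare(df, n_nmi) / df            # Type A variance estimates
    return y, s2

def mse_of_estimators(n_rep=5000, mu=0.0, **kwargs):
    """Monte Carlo MSE about mu for two example KCRV estimators."""
    draws = {"mean": [], "median": []}
    for _ in range(n_rep):
        y, _ = simulate_key_comparison(mu=mu, **kwargs)
        draws["mean"].append(np.mean(y))
        draws["median"].append(np.median(y))
    return {name: float(np.mean((np.array(v) - mu) ** 2)) for name, v in draws.items()}

print(mse_of_estimators())
```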

Page 17: Reference Value Estimators in Key Comparisons

Increase in Bias

[Figure: MSE (y-axis, roughly 0.28 to 0.53) by estimator (Mean, Median, GD_o, DL, MP) for bias = 0%, 10%, 30%, 50%, all with uncer = 1.]
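A standard decomposition (not shown on the slide) that helps read these MSE plots: for an estimator $\widehat{\mu}$ of the KCRV $\mu$, the mean squared error is the variance plus the squared bias:

$$\operatorname{MSE}(\widehat{\mu}) = E\big[(\widehat{\mu}-\mu)^2\big] = \operatorname{Var}(\widehat{\mu}) + \big(E[\widehat{\mu}]-\mu\big)^2$$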

Page 18: Reference Value Estimators in Key Comparisons

Difference in Uncertainty

[Figure: MSE (y-axis, roughly 0.28 to 0.53) by estimator (Mean, Median, GD_o, DL, MP) for bias = 0% with uncer = 1, 1/2, 1/3, 1/5, 1/10.]

Page 19: Reference Value Estimators in Key Comparisons

Bias and Uncertainty

[Figure: MSE (y-axis, roughly 0.28 to 0.53) by estimator (Mean, Median, GD_o, DL, MP) for (bias, uncer) = (0%, 1), (0%, 1/2), (0%, 1/10), (20%, 1/2), (20%, 1/10), (100%, 1/25), (100%, 1/100).]

Page 20: Reference Value Estimators in Key Comparisons

Conclusions and Future Work

Conclusions:
- Uncertainty affects MSE more than bias does
- Estimator performance:
  - The Graybill-Deal estimator is the least robust
  - DerSimonian-Laird and Mandel-Paule perform well
- When one NMI is not exchangeable, the coverage is affected
- The number of laboratories changes the parameters

Future work:
- Use on real Key Comparison data
- Examine other possible scenarios
- Further study of degrees of equivalence
- Pair-wise differences

Page 21: Reference Value Estimators in Key Comparisons

Looking Ahead

- Use on real Key Comparison data
- Examine other possible scenarios
- Further study of degrees of equivalence
- Pair-wise differences

Page 22: Reference Value Estimators in Key Comparisons

References

- R. DerSimonian and N. Laird. Meta-analysis in clinical trials. Controlled Clinical Trials, 7:177-188, 1986.
- F. A. Graybill and R. B. Deal. Combining unbiased estimators. Biometrics, 15:543-550, 1959.
- R. C. Paule and J. Mandel. Consensus values and weighting factors. Journal of Research of the National Bureau of Standards, 87:377-385, 1982.
- P. S. R. S. Rao. Cochran's contributions to the variance component models for combining estimators. In P. Rao and J. Sedransk, editors, W. G. Cochran's Impact on Statistics, Volume II. J. Wiley, New York, 1981.
- A. L. Rukhin. Key Comparisons and Interlaboratory Studies (work in progress).
- A. L. Rukhin and M. G. Vangel. Estimation of a common mean and weighted means statistics. Journal of the American Statistical Association, 93:303-308, 1998.
- J. S. Maritz and R. G. Jarrett. A note on estimating the variance of the sample median. Journal of the American Statistical Association, 73:194-196, 1978.
- SED Key Comparisons Group (work in progress).