Model-Data Weak Formulation (MDWF): Galerkin Approximation ...


Transcript of Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Page 1: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Model-Data Weak Formulation (MDWF): Galerkin Approximation

- Recapitulation
- Unlimited-Observations Statement
- Limited-Observations Statement
- Experimentally Observable Spaces
- Interpretation: Variational Data Assimilation


Page 3: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Abstraction

We consider a physical system and associated true state field

$u^{\mathrm{true}} \in X$,

which we assume is deterministic and stationary.

We wish to predict the field $u^{\mathrm{true}}$ based on

- a best-knowledge mathematical model $A$, $f$, and
- $M$ experimental observations,

within an integrated (variational) weak formulation.

Page 4: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Model Bias . . .

Given the true state of the physical system (field) $u^{\mathrm{true}} \in X$, define the model bias as the residual

$g \equiv A u^{\mathrm{true}} - f \in Y',$

such that

$A u^{\mathrm{true}} = f + g.$

Recall $A u^{\mathrm{bk}} = f$: $u^{\mathrm{bk}} = u^{\mathrm{true}}$ if and only if $g = 0$.

Page 5: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

. . . Model Bias

Given that

$A u^{\mathrm{true}} = f + g,$

we must simultaneously estimate

- the model bias, $g \in Y'$, and
- the state, $u^{\mathrm{true}} \in X$;

a unique decomposition of $f = A u^{\mathrm{true}} - g$ in terms of $g$ and $u^{\mathrm{true}}$ requires experimental observations.
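
A minimal finite-dimensional sketch of this non-uniqueness, with hypothetical stand-ins (a random matrix $A$, vectors $f$ and $u^{\mathrm{true}}$) rather than the actual spaces and operator:

```python
import numpy as np

# Finite-dimensional stand-ins: X = Y = R^n, A a nonsingular matrix.
rng = np.random.default_rng(0)
n = 5
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # best-knowledge operator (stand-in)
u_true = rng.standard_normal(n)                    # "true" state (unknown in practice)
f = rng.standard_normal(n)                         # best-knowledge source (stand-in)

g = A @ u_true - f                                 # model bias: residual at the true state
assert np.allclose(A @ u_true, f + g)

# Without observations the split of f into (state, bias) is not unique:
# any candidate state u_cand is consistent with *some* bias g_cand = A u_cand - f.
u_cand = rng.standard_normal(n)
g_cand = A @ u_cand - f
assert np.allclose(A @ u_cand, f + g_cand)
print("both (u_true, g) and (u_cand, g_cand) satisfy A u = f + g")
```

Any candidate state reproduces $f$ once paired with its own residual, which is why data are needed to pin down the pair $(u^{\mathrm{true}}, g)$.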

Page 6: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Model-Data Weak Formulation (MDWF): Galerkin Approximation

- Recapitulation
- Unlimited-Observations Statement
  - Preliminaries
  - Saddle Formulation
- Limited-Observations Statement
- Experimentally Observable Spaces
- Interpretation: Variational Data Assimilation


Page 8: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Operators and Sesquilinear Forms

Recall

- inner products $(\cdot,\cdot)_X$, $(\cdot,\cdot)_Y$,
- duality pairings $\langle\cdot,\cdot\rangle_{X'\times X}$, $\langle\cdot,\cdot\rangle_{Y'\times Y}$,
- representation operators $\mathrm{X}\colon X \to X'$, $\mathrm{Y}\colon Y \to Y'$,

associated with our spaces $X(\Omega)$, $Y(\Omega)$, and

- operator $A\colon X \to Y'$ (with $Y$-representation $\mathcal{A} \equiv \mathrm{Y}^{-1}A\colon X \to Y$), or equivalently
- sesquilinear form $a\colon X \times Y \to \mathbb{C}$, and
- source $f \in Y'$,

associated with our best-knowledge model.

Page 9: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Model Bias

Recall the definition of the model bias,

$g \equiv A u^{\mathrm{true}} - f \in Y',$

in terms of the best-knowledge model and the true state.

Now introduce the $Y$-representation of $g$,

$q^{\mathrm{mod}} \equiv \mathrm{Y}^{-1} g \in Y;$

we shall refer to both $g$ and $q^{\mathrm{mod}}$ as model bias.

Recall that model bias can also represent error in the best-knowledge model $A$, $f$ ($\delta A$, $\delta f$).

Page 10: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Observable Field (Assumption: Strong)

Recall the "true state" of the physical system,

$u^{\mathrm{true}} \in X,$

which we assume is deterministic and stationary.

Now express the "observable" state as

$u^{\mathrm{obs}} \equiv u^{\mathrm{true}} - q^{\mathrm{obs}},$

for $q^{\mathrm{obs}} \in X$ deterministic and stationary:

- $q^{\mathrm{obs}} = 0$: perfect observations;
- $q^{\mathrm{obs}} \ne 0$: imperfect observations;
- $q^{\mathrm{obs}} \to \infty$: uninformative observations;

$q^{\mathrm{obs}}$ represents a measurement-induced perturbation.

Page 11: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Weighting Parameter

Introduce the non-negative real parameter $\nu \in \mathbb{R}_{+,0}$, which can be interpreted (conceptually) as†

$\dfrac{\text{trust in best-knowledge model}}{\text{trust in experimental observations}}.$

A priori theory shall suggest

$\nu^{\mathrm{opt}} \equiv \dfrac{\|\mathcal{A}q^{\mathrm{obs}}\|_Y}{\|q^{\mathrm{mod}}\|_Y}$

as a guideline for the choice of $\nu$.

Note as we approach perfect observations, $\nu \to 0$; as we approach uninformative observations, $\nu \to \infty$.

†We can view $\nu^{-1}$ as the gain on the data "innovation".

Page 12: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Model-Data Weak Formulation (MDWF): Galerkin Approximation

- Recapitulation
- Unlimited-Observations Statement
  - Preliminaries
  - Saddle Formulation
- Limited-Observations Statement
- Experimentally Observable Spaces
- Interpretation: Variational Data Assimilation

Page 13: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Statement

Given $\nu \in \mathbb{R}_{+,0}$, find $(u,\psi) \in X \times Y$ such that

$a(u,v) - (\psi,v)_Y = \langle f, v\rangle_{Y'\times Y}, \quad \forall v \in Y,$
$-a(u,\varphi) - \nu(\psi,\varphi)_Y = -a(u^{\mathrm{true}},\varphi) - \nu(q^{\mathrm{mod}},\varphi)_Y, \quad \forall \varphi \in Y,$

or

$a(u,v) - (\psi,v)_Y = \langle f, v\rangle_{Y'\times Y}, \quad \forall v \in Y,$
$-a(u,\varphi) - \nu(\psi,\varphi)_Y = -a(u^{\mathrm{obs}},\varphi) - a(q^{\mathrm{obs}},\varphi) - \nu(q^{\mathrm{mod}},\varphi)_Y, \quad \forall \varphi \in Y,$

since $u^{\mathrm{true}} = u^{\mathrm{obs}} + q^{\mathrm{obs}}$.
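
For reading convenience, the same statement can be collected into a single block (saddle) operator form; this is only a rearrangement of the two equations above, with $\mathrm{Y}$ the representation operator recalled in the Preliminaries:

$\begin{pmatrix} A & -\mathrm{Y} \\ -A & -\nu \mathrm{Y} \end{pmatrix} \begin{pmatrix} u \\ \psi \end{pmatrix} = \begin{pmatrix} f \\ -A u^{\mathrm{true}} - \nu \mathrm{Y} q^{\mathrm{mod}} \end{pmatrix} \quad \text{in } Y' \times Y'.$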

Page 14: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Solution (by Construction)

Proposition 1. The unique solution to our saddle problem is $(u,\psi) = (u^{\mathrm{true}}, q^{\mathrm{mod}}) \in X \times Y$.

Sketch of proof:

Satisfaction of the second equation: trivial.

Satisfaction of the first equation (also trivial):

$a(u, v) - (\psi, v)_Y = \langle f, v\rangle_{Y'\times Y},$

from the definitions of $q^{\mathrm{mod}}$ and $g$.

Uniqueness follows from our assumptions on $A$.

Page 15: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Solution (by Construction)

Proposition 1. The unique solution to our saddle problem is $(u,\psi) = (u^{\mathrm{true}}, q^{\mathrm{mod}}) \in X \times Y$.

Sketch of proof:

Satisfaction of the second equation: trivial.

Satisfaction of the first equation (also trivial):

$a(u^{\mathrm{true}}, v) - (q^{\mathrm{mod}}, v)_Y = \langle f, v\rangle_{Y'\times Y},$

from the definitions of $q^{\mathrm{mod}}$ and $g$.

Uniqueness follows from our assumptions on $A$.

Page 16: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Solution (by Construction)

Proposition 1. The unique solution to our saddle problem is $(u,\psi) = (u^{\mathrm{true}}, q^{\mathrm{mod}}) \in X \times Y$.

Sketch of proof:

Satisfaction of the second equation: trivial.

Satisfaction of the first equation (also trivial):

$a(u^{\mathrm{true}}, v) - (q^{\mathrm{mod}}, v)_Y = \langle A u^{\mathrm{true}} - \mathrm{Y} q^{\mathrm{mod}}, v\rangle_{Y'\times Y} = \langle f, v\rangle_{Y'\times Y},$

from the definitions of $q^{\mathrm{mod}}$ and $g$.

Uniqueness follows from our assumptions on $A$.

Page 17: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Solution (by Construction)

Proposition 1. The unique solution to our saddle problem is $(u,\psi) = (u^{\mathrm{true}}, q^{\mathrm{mod}}) \in X \times Y$.

Sketch of proof:

Satisfaction of the second equation: trivial.

Satisfaction of the first equation (also trivial):

$a(u^{\mathrm{true}}, v) - (q^{\mathrm{mod}}, v)_Y = \langle A u^{\mathrm{true}} - \mathrm{Y} q^{\mathrm{mod}}, v\rangle_{Y'\times Y} = \langle A u^{\mathrm{true}} - g, v\rangle_{Y'\times Y} = \langle f, v\rangle_{Y'\times Y},$

from the definitions of $q^{\mathrm{mod}}$ and $g$.

Uniqueness follows from our assumptions on $A$.


Page 19: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Interpretation

Consider again

$a(u,v) - (\psi,v)_Y = \langle f, v\rangle_{Y'\times Y}, \quad \forall v \in Y,$
$-a(u,\varphi) - \nu(\psi,\varphi)_Y = -a(u^{\mathrm{obs}},\varphi) - a(q^{\mathrm{obs}},\varphi) - \nu(q^{\mathrm{mod}},\varphi)_Y, \quad \forall \varphi \in Y.$

We interpret the first equation as the "model" equation: best-knowledge plus model bias.

We interpret the second equation as the "data" equation: the model-observation connection.

Page 20: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Model-Data Weak Formulation (MDWF): Galerkin Approximation

- Recapitulation
- Unlimited-Observations Statement
- Limited-Observations Statement
- Experimentally Observable Spaces
- Interpretation: Variational Data Assimilation

Page 21: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Unlimited-Observations: Impractical

Our infinite-observations saddle

$a(u,v) - (\psi,v)_Y = \langle f, v\rangle_{Y'\times Y}, \quad \forall v \in Y,$
$-a(u,\varphi) - \nu(\psi,\varphi)_Y = -\underbrace{a(u^{\mathrm{obs}},\varphi)}_{(\mathcal{A}u^{\mathrm{obs}},\varphi)_Y} - \underbrace{a(q^{\mathrm{obs}},\varphi)}_{(\mathcal{A}q^{\mathrm{obs}},\varphi)_Y} - \nu(q^{\mathrm{mod}},\varphi)_Y, \quad \forall \varphi \in Y,$

is not actionable since we do not know

1. the observable field, $u^{\mathrm{obs}}$, everywhere;
2. (precisely) the observational imperfection, $q^{\mathrm{obs}}$;
3. the model bias, $q^{\mathrm{mod}}$.

We cannot evaluate the "knowns" for the second equation.

Page 22: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Limited-Observations: Discretization Procedure

To construct our limited-observations saddle we

1a. project $\mathcal{A}u^{\mathrm{obs}}$ over an appropriate approximation space $Y_M \subset Y$ of finite dimension $M \ge 0$;
1b. search for $\psi_{M,\nu} \approx \psi$ in $Y_M$ to ensure uniqueness of the discrete solution $(u_{M,\nu}, \psi_{M,\nu})$;
2. assume that $\|\mathcal{A}q^{\mathrm{obs}}\|_Y$ is negligibly small;
3. assume that $\nu\|q^{\mathrm{mod}}\|_Y$ is negligibly small.

The latter two assumptions reflect the achievable accuracy (for $\nu \approx \nu^{\mathrm{opt}}$).

Page 23: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Limited-Observations Saddle

Given $\nu \in \mathbb{R}_{+,0}$, find $(u_{M,\nu},\psi_{M,\nu}) \in X \times Y_M$ such that

$a(u_{M,\nu},v) - (\psi_{M,\nu},v)_Y = \langle f, v\rangle_{Y'\times Y}, \quad \forall v \in Y,$
$-a(u_{M,\nu},\varphi) - \nu(\psi_{M,\nu},\varphi)_Y = -a(u^{\mathrm{obs}},\varphi), \quad \forall \varphi \in Y_M;$

recall that $u^{\mathrm{obs}}$ is the experimentally observable field.

Note for $M = 0$ ($Y_M \equiv \{0\}$): $\psi_{M,\nu} = 0$ and hence

$u_{M,\nu} = u^{\mathrm{bk}}$

since

$a(u^{\mathrm{bk}},v) = \langle f, v\rangle_{Y'\times Y}, \quad \forall v \in Y;$

in the absence of data, we recover the best-knowledge model.
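
A minimal discrete sketch of this saddle, with Euclidean stand-ins for $X$, $Y$ (so all representation operators are identities), a 1D finite-difference Laplacian as a hypothetical best-knowledge operator, and an artificial true state; it is meant only to show the block structure, not the deck's implementation:

```python
import numpy as np

# Discrete surrogate: X = Y = R^n, Euclidean inner products, A a 1D Dirichlet Laplacian.
n, M, nu = 60, 6, 1e-3
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

f = np.ones(n)                                 # best-knowledge source (stand-in)
u_bk = np.linalg.solve(A, f)                   # best-knowledge solution
u_true = u_bk + 0.2 * np.sin(2 * np.pi * x)    # "true" state: the model is biased
u_obs = u_true                                 # perfect observations in this sketch

# Pointwise observation functionals o_m and observable space Y_M = span{A^{-T} o_m}.
idx = np.linspace(0, n - 1, M).round().astype(int)
O = np.zeros((n, M)); O[idx, np.arange(M)] = 1.0
V = np.linalg.solve(A.T, O)                    # columns: A^{-T} o_m

# Limited-observations saddle: find (u, c) with psi_{M,nu} = V c in Y_M.
K = np.block([[A, -V],
              [-V.T @ A, -nu * V.T @ V]])
rhs = np.concatenate([f, -V.T @ A @ u_obs])    # V.T A u_obs = O.T u_obs in exact arithmetic
u_Mnu = np.linalg.solve(K, rhs)[:n]

print("error of best-knowledge model:", np.linalg.norm(u_bk - u_true))
print("error of MDWF estimate       :", np.linalg.norm(u_Mnu - u_true))
```

Because the columns of V are adjoint solves $A^{-T}o_m$, the right-hand side entry $V^{T}A\,u^{\mathrm{obs}}$ reduces to the point values $O^{T}u^{\mathrm{obs}}$, i.e., to realizable observations (anticipating the Experimentally Observable Spaces section).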

Page 24: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Model-Data Weak Formulation (MDWF): Galerkin Approximation

- Recapitulation
- Unlimited-Observations Statement
- Limited-Observations Statement
- Experimentally Observable Spaces
- Interpretation: Variational Data Assimilation

Page 25: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Observation Functionals [F]

We introduce $M_{\max}$ observation functionals

$o_m \in X', \quad m = 1,\dots,M_{\max},$

which reflect

- transducer position or focus, and
- transducer filter characteristics,

associated with the data-acquisition procedure.

We define a single observation as

$m \in \{1,\dots,M_{\max}\} \mapsto o_m(u^{\mathrm{obs}}) \in \mathbb{C},$

where (recall) $u^{\mathrm{obs}} = u^{\mathrm{true}} - q^{\mathrm{obs}}$ is the observable field.

Page 26: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Example: Gaussian Observation Functionals

Consider $\Omega \subset \mathbb{R}^d$, $H^1_0(\Omega) \subset X \subset H^1(\Omega)$, and define

$o_m(v) = \mathrm{Gauss}(v;\, x^{\mathrm{c}}_m, \sigma) \equiv \int_\Omega \frac{1}{(2\pi)^{d/2}\sigma^d}\, e^{-\frac{|x - x^{\mathrm{c}}_m|^2}{2\sigma^2}}\, v(x)\, dx$

for

- observation centers $x^{\mathrm{c}}_m \in \Omega$, $m = 1,\dots,M_{\max}$,
- observation filter width $\sigma \in \mathbb{R}_+$;

we apply corrections near the domain boundary $\partial\Omega$.

In theory, we may consider $\sigma \to 0$, but only for $d = 1$; for real transducers, $\sigma$ will be finite for any $d$.
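
A small quadrature sketch of one such functional for $d = 1$, with an assumed grid, field, and transducer parameters (and without the boundary correction mentioned above):

```python
import numpy as np

# o_m(v) = \int_Omega (2 pi sigma^2)^{-1/2} exp(-|x - x_c|^2 / (2 sigma^2)) v(x) dx, d = 1.
def gauss_observation(v, x, xc, sigma):
    """Approximate Gauss(v; xc, sigma) from samples v at uniformly spaced nodes x."""
    w = np.exp(-(x - xc) ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)
    return np.sum(w * v) * (x[1] - x[0])   # simple rectangle-rule quadrature

x = np.linspace(0.0, 1.0, 4001)
v = np.sin(np.pi * x)                      # a sample field
xc, sigma = 0.5, 0.02                      # transducer center and filter width (assumed)

# As sigma -> 0 the value approaches the point value v(xc), but only in d = 1.
print(gauss_observation(v, x, xc, sigma), "vs point value", np.sin(np.pi * xc))
```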

Page 27: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Hierarchical Spaces $E^o_M$

Given choices of observation functionals $o_m$, $m = 1,\dots,M_{\max}$, define, for $1 \le M \le M_{\max}$,

$E^o_0 \equiv \{0\},$
$E^o_M \equiv \mathrm{span}\{A^{-*} o_m,\ 1 \le m \le M\};$

note the spaces are hierarchical, $E^o_M \subset E^o_{M+1}$.

We say that $E^o_M$ is experimentally observable with respect to the observation functionals $\{o_m\}_{1\le m\le M}$.
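
In a discrete surrogate (Euclidean stand-ins, hypothetical operator and functionals), the native basis of $E^o_M$ is obtained by adjoint solves, and hierarchy is automatic since increasing $M$ only appends basis functions:

```python
import numpy as np

# Discrete stand-ins: the native basis of E^o_M is phi_m = A^{-*} o_m,
# and E^o_M is spanned by the first M columns of the basis for E^o_{M+1}.
rng = np.random.default_rng(1)
n, Mmax = 40, 5
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stand-in operator
O = rng.standard_normal((n, Mmax))                  # stand-in observation functionals o_m

Phi = np.linalg.solve(A.conj().T, O)                # columns: phi_m = A^{-*} o_m
spaces = [Phi[:, :M] for M in range(Mmax + 1)]      # bases of E^o_0, ..., E^o_Mmax

print([B.shape[1] for B in spaces])                 # dimensions 0, 1, ..., Mmax
```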

Page 28: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Projection of Observable Field

Recall that $(u_{M,\nu},\psi_{M,\nu}) \in X \times Y_M$ satisfies

$a(u_{M,\nu},v) - (\psi_{M,\nu},v)_Y = \langle f, v\rangle_{Y'\times Y}, \quad \forall v \in Y,$
$-a(u_{M,\nu},\varphi) - \nu(\psi_{M,\nu},\varphi)_Y = -\underbrace{a(u^{\mathrm{obs}},\varphi)}_{\text{projection of observable field: } (\mathcal{A}u^{\mathrm{obs}},\varphi)_Y}, \quad \forall \varphi \in Y_M.$

We now require $Y_M = E^o_M$: then, for $\varphi \in Y_M$,

$\Big(\mathcal{A}u^{\mathrm{obs}}, \underbrace{\textstyle\sum_{m=1}^M \alpha_m A^{-*} o_m}_{\varphi\,\in\,Y_M}\Big)_Y = \Big\langle A^* \textstyle\sum_{m=1}^M \alpha_m A^{-*} o_m,\ u^{\mathrm{obs}}\Big\rangle_{X'\times X} = \textstyle\sum_{m=1}^M \alpha_m \big\langle o_m, u^{\mathrm{obs}}\big\rangle_{X'\times X}$

corresponds to $M$ (realizable) single observations.
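
A quick numerical check of this realizability identity in a real, Euclidean surrogate (stand-in matrix and functionals):

```python
import numpy as np

# For any phi = sum_m alpha_m A^{-T} o_m in Y_M, the "unknown" left-hand side
# (A u_obs, phi)_Y equals a weighted sum of realizable observations o_m(u_obs).
rng = np.random.default_rng(2)
n, M = 30, 4
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stand-in operator
O = rng.standard_normal((n, M))                     # stand-in observation functionals
alpha = rng.standard_normal(M)
u_obs = rng.standard_normal(n)                      # observable field (never needed globally)

phi = np.linalg.solve(A.T, O) @ alpha               # phi = sum_m alpha_m A^{-T} o_m
lhs = phi @ (A @ u_obs)                             # (A u_obs, phi)_Y
rhs = alpha @ (O.T @ u_obs)                         # sum_m alpha_m o_m(u_obs)
print(np.allclose(lhs, rhs))
```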

Page 29: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Model-Data Weak Formulation (MDWF): Galerkin Approximation

- Recapitulation
- Unlimited-Observations Statement
- Limited-Observations Statement
- Experimentally Observable Spaces
- Interpretation: Variational Data Assimilation

Page 30: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Regularization-Misfit Minimization [ZN] (restricted to Galerkin approximation)

The state $u_{M,\nu} \in X$ satisfies

$u_{M,\nu} = \arg\min_{w\in X}\ \underbrace{\|f - Aw\|_{Y'}^2}_{\text{regularization}} + \frac{1}{\nu}\underbrace{\|\Pi_M \mathcal{A}(u^{\mathrm{obs}} - w)\|_Y^2}_{\text{model-observation misfit}},$

where the projector $\Pi_M\colon Y \to Y_M$ is given by

$(\Pi_M w, v)_Y = (w,v)_Y, \quad \forall v \in Y_M, \text{ for any } w \in Y.$

For perfect observations and $\nu \to 0$ we recover a penalty formulation for constrained estimation.

Saddle: primal-dual Euler-Lagrange equations.†

†The primal-only Euler-Lagrange equation constitutes a single-field formulation for $u_{M,\nu} \in X$ (not well-suited to computation).
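
A discrete check, again with Euclidean stand-ins (so the dual norm and the $Y$-norm coincide), that the saddle state coincides with the minimizer of the regularization-plus-misfit functional:

```python
import numpy as np

# Compare the saddle solve with the stacked least-squares minimization of
# ||A w - f||^2 + (1/nu) ||Pi_M A (u_obs - w)||^2  in a discrete surrogate.
rng = np.random.default_rng(3)
n, M, nu = 30, 5, 0.1
A = np.eye(n) + 0.2 * rng.standard_normal((n, n))   # stand-in operator
f = rng.standard_normal(n)
u_obs = rng.standard_normal(n)
V, _ = np.linalg.qr(rng.standard_normal((n, M)))    # orthonormal basis of Y_M (stand-in)
P = V @ V.T                                         # projector Pi_M onto Y_M

# Saddle formulation (psi_{M,nu} = V c in Y_M).
K = np.block([[A, -V], [-V.T @ A, -nu * np.eye(M)]])
rhs = np.concatenate([f, -V.T @ A @ u_obs])
u_saddle = np.linalg.solve(K, rhs)[:n]

# Regularization-misfit minimization, written as a stacked least-squares problem.
B = np.vstack([A, P @ A / np.sqrt(nu)])
b = np.concatenate([f, P @ A @ u_obs / np.sqrt(nu)])
u_min = np.linalg.lstsq(B, b, rcond=None)[0]

print(np.allclose(u_saddle, u_min))
```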

Page 31: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

MDWF Galerkin: Analysis

- A Priori Estimates
- Stability Constant: Monotonic Improvement
- Approximation Theory: Simple Example
- Limit of Small Model Bias

Page 32: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

MDWF Galerkin: Analysis

- A Priori Estimates
  - Perfect Observations (P-O) [QV]
  - Imperfect Observations (I-O)
- Stability Constant: Monotonic Improvement
- Approximation Theory: Simple Example
- Limit of Small Model Bias


Page 34: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Formulation Rappel (I-O) . . .

Recall $(u,\psi) \in X \times Y$ satisfies

$a(u,v) - (\psi,v)_Y = \langle f, v\rangle_{Y'\times Y}, \quad \forall v \in Y,$
$-a(u,\varphi) - \nu(\psi,\varphi)_Y = -a(u^{\mathrm{obs}},\varphi) - a(q^{\mathrm{obs}},\varphi) - \nu(q^{\mathrm{mod}},\varphi)_Y, \quad \forall \varphi \in Y,$

and $(u_{M,\nu},\psi_{M,\nu}) \in X \times Y_M$ satisfies

$a(u_{M,\nu},v) - (\psi_{M,\nu},v)_Y = \langle f, v\rangle_{Y'\times Y}, \quad \forall v \in Y,$
$-a(u_{M,\nu},\varphi) - \nu(\psi_{M,\nu},\varphi)_Y = -a(u^{\mathrm{obs}},\varphi), \quad \forall \varphi \in Y_M.$

Now recall (P-O) $\equiv$ $q^{\mathrm{obs}} = 0$ $\Rightarrow$ choose $\nu = 0$.

Page 35: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

. . . Formulation Rappel (P-O) . . .

Then $(u,\psi) \in X \times Y$ satisfies†

$a(u,v) - (\psi,v)_Y = \langle f, v\rangle_{Y'\times Y}, \quad \forall v \in Y,$
$-a(u,\varphi) = -a(u^{\mathrm{obs}},\varphi) - a(q^{\mathrm{obs}},\varphi), \quad \forall \varphi \in Y,$

and $(u_{M,0},\psi_{M,0}) \in X \times Y_M$ satisfies

$a(u_{M,0},v) - (\psi_{M,0},v)_Y = \langle f, v\rangle_{Y'\times Y}, \quad \forall v \in Y,$
$-a(u_{M,0},\varphi) = -a(u^{\mathrm{obs}},\varphi), \quad \forall \varphi \in Y_M.$

We henceforth suppress the subscript $\nu\,(=0)$.

†This nonsymmetric saddle may be converted to a symmetric saddle.

Page 36: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

. . . Formulation Rappel (P-O)

Equivalently, $(u,\psi) \in X \times Y$ satisfies

$(\mathcal{A}u, v)_Y - (\psi,v)_Y = \langle f, v\rangle_{Y'\times Y}, \quad \forall v \in Y,$
$-(\mathcal{A}u, \varphi)_Y = -(\mathcal{A}u^{\mathrm{obs}}, \varphi)_Y, \quad \forall \varphi \in Y,$

and $(u_M,\psi_M) \in X \times Y_M$ satisfies

$(\mathcal{A}u_M, v)_Y - (\psi_M,v)_Y = \langle f, v\rangle_{Y'\times Y}, \quad \forall v \in Y,$
$-(\mathcal{A}u_M, \varphi)_Y = -(\mathcal{A}u^{\mathrm{obs}}, \varphi)_Y, \quad \forall \varphi \in Y_M.$

Recall $\mathcal{A} \equiv \mathrm{Y}^{-1}A$ such that

$a(w,v) = \langle Aw, v\rangle_{Y'\times Y} = (\mathcal{A}w, v)_Y,$

for any $w \in X$, $v \in Y$.

Page 37: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

A Priori Estimates (P-O): Definitions

Define the observation-constrained spaces

$Y^\perp_M \equiv \{w \in Y : (w,v)_Y = 0,\ \forall v \in Y_M\},$
$X^\perp_M \equiv \{w \in X : \underbrace{(\mathcal{A}w,v)_Y}_{a(w,v)} = 0,\ \forall v \in Y_M\},$

and an associated inf-sup constant† (for $\nu = 0$)

$\beta_{M,\nu=0} \equiv \inf_{w \in X^\perp_M} \frac{\|\mathcal{A}w\|_Y}{\|w\|_X};$

note

$\beta_{M=0,\nu=0} = \beta_0 \equiv \inf_{w\in X}\sup_{v\in Y} \frac{\langle Aw, v\rangle_{Y'\times Y}}{\|w\|_X \|v\|_Y} > 0.$

†The norm $\|\cdot\|_X$ can be replaced with a semi-norm $|\cdot|_X$.
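
A discrete sketch of $\beta_{M,\nu=0}$ (Euclidean stand-ins): restrict to the constrained space $\{w : V^{T}Aw = 0\}$, where the columns of $V$ span $Y_M$, and take the smallest restricted singular value:

```python
import numpy as np

# beta_{M,0} = min over X_M^perp of ||A w|| / ||w||, with X_M^perp = {w : V^T A w = 0}.
def beta_PO(A, V):
    n, M = A.shape[0], V.shape[1]
    if M == 0:
        N = np.eye(n)                       # no constraint: beta_0 = sigma_min(A)
    else:
        _, _, Vt = np.linalg.svd(V.T @ A)   # V^T A has full row rank here
        N = Vt[M:].T                        # orthonormal basis of {w : V^T A w = 0}
    return np.linalg.svd(A @ N, compute_uv=False).min()

rng = np.random.default_rng(4)
n, Mmax = 30, 6
A = np.eye(n) + 0.3 * rng.standard_normal((n, n))    # stand-in operator
O = rng.standard_normal((n, Mmax))                   # stand-in observation functionals
V = np.linalg.solve(A.T, O)                          # observable basis A^{-T} o_m

print([round(beta_PO(A, V[:, :M]), 4) for M in range(Mmax + 1)])
# First entry is beta_0; the sequence is non-decreasing in M (cf. Propositions 5 and 7 below).
```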

Page 38: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

A Priori Estimates (P-O)

Proposition 2: The error $e \equiv u^{\mathrm{true}} - u_M$ satisfies

$\|\mathcal{A}e\|_Y = \inf_{\varphi\in Y_M} \|\underbrace{q^{\mathrm{mod}}}_{\text{model bias}} - \varphi\|_Y,$

or, in the $X$-norm,

$\|e\|_X \le \frac{1}{\beta_{M,\nu=0}} \inf_{\varphi\in Y_M} \|q^{\mathrm{mod}} - \varphi\|_Y;$

furthermore,

$\|q^{\mathrm{mod}} - \psi_M\|_Y = \inf_{\varphi\in Y_M} \|q^{\mathrm{mod}} - \varphi\|_Y.$

Recall $\nu = \nu^{\mathrm{opt}} = 0$ (for $q^{\mathrm{obs}} = 0$).

Page 39: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Proof of Proposition 2 (P-O): Preliminaries

Error Equation: The error $e \equiv u^{\mathrm{true}} - u_M$ satisfies

$(\mathcal{A}e, v)_Y - (q^{\mathrm{mod}} - \psi_M, v)_Y = 0, \quad \forall v \in Y,$   (EQN1)
$-(\mathcal{A}e, \varphi)_Y = 0, \quad \forall \varphi \in Y_M,$   (EQN2)

since $\psi = q^{\mathrm{mod}}$.

Projection Operator: Recall $\Pi_M\colon Y \to Y_M\ (\equiv E^o_M)$,

$(\Pi_M w, v)_Y = (w,v)_Y, \quad \forall v \in Y_M, \text{ for any } w \in Y.$

Constrained Spaces: Recall

$Y^\perp_M = \{w \in Y : (w,v)_Y = 0,\ \forall v \in Y_M\},$
$X^\perp_M = \{w \in X : \mathcal{A}w \in Y^\perp_M\}.$

Page 40: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Proof of Proposition 2 (P-O) . . .

To recover the first result:

(a) from EQN2, $\mathcal{A}e \in Y^\perp_M$, $e \in X^\perp_M$;

(b) from EQN1 tested on $v \in Y^\perp_M \subset Y$,

$(\mathcal{A}e, v)_Y = (q^{\mathrm{mod}}, v)_Y, \quad \forall v \in Y^\perp_M;$

(c) from (a) and (b),

$\mathcal{A}e = q^{\mathrm{mod}} - \Pi_M q^{\mathrm{mod}};$

(d) from the definition of the projection,

$\|\mathcal{A}e\|_Y = \inf_{\varphi\in Y_M} \|q^{\mathrm{mod}} - \varphi\|_Y.$

Error in bias induces error in state.

Page 41: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

. . . Proof of Proposition 2 (P-O) . . .

or, in a picture: [figure]


Page 45: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

. . . Proof of Proposition 2 (P-O)

To recover the second result:

(e) from (a), (d), and the definition of the stability constant†

$\beta_{M,\nu=0} \equiv \inf_{w\in X^\perp_M} \frac{\|\mathcal{A}w\|_Y}{\|w\|_X},$

we obtain

$\beta_{M,\nu=0}\, \|e\|_X \le \inf_{\varphi\in Y_M} \|q^{\mathrm{mod}} - \varphi\|_Y.$

To recover the third result:

(f) from EQN1 tested on $v \in Y$, $q^{\mathrm{mod}} - \psi_M = \mathcal{A}e$;

(g) from (c), $\psi_M = \Pi_M \psi$.

†We may replace $\|w\|_X$ with any desired semi-norm $|w|_X$.

Page 46: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

A Priori Estimates (P-O): Contributions

(Recall) Proposition 2:

$\|u^{\mathrm{true}} - u_M\|_X \le \frac{1}{\beta_{M,\nu=0}} \inf_{\varphi\in Y_M} \|q^{\mathrm{mod}} - \varphi\|_Y.$

The state error depends on (for hierarchical $Y_M \subset Y_{M+1}$)

1. the stability constant: $\beta_{M,\nu=0}$ (↑) as $M$ (↑);
2. the model-bias best-fit error: $\inf_{\varphi\in Y_M} \|q^{\mathrm{mod}} - \varphi\|_Y$ (↓) as $M$ (↑).

Note the implicit dependence on the regularity of $q^{\mathrm{mod}}$.

Page 47: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Output Error Estimation (P-O)

Proposition 3: for any $\ell^{\mathrm{out}} \in X'$,

$|\ell^{\mathrm{out}}(e)| \le \inf_{\zeta\in Y_M} \|A^{-*}\ell^{\mathrm{out}} - \zeta\|_Y\ \inf_{\varphi\in Y_M} \|q^{\mathrm{mod}} - \varphi\|_Y.$

The output error depends on

1. the model-bias best-fit error: $\inf_{\varphi\in Y_M} \|q^{\mathrm{mod}} - \varphi\|_Y$ (↓) as $M$ (↑);
2. the adjoint best-fit error: $\inf_{\zeta\in Y_M} \|A^{-*}\ell^{\mathrm{out}} - \zeta\|_Y$ (↓) as $M$ (↑).

Note for $\ell^{\mathrm{out}} = o_m$ ($1 \le m \le M$), $|\ell^{\mathrm{out}}(e)| = 0$.

Page 48: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

MDWF Galerkin: Analysis

- A Priori Estimates
  - Perfect Observations (P-O) [QV]
  - Imperfect Observations (I-O)
- Stability Constant: Monotonic Improvement
- Approximation Theory: Simple Example
- Limit of Small Model Bias

Page 49: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Energy Norm (I-O)

Define the energy norm (recall $\Pi_M\colon Y \to Y_M$)

$|||w|||_{M,\nu} \equiv \big(\|\mathcal{A}w\|_Y^2 + \nu^{-1}\|\Pi_M \mathcal{A}w\|_Y^2\big)^{1/2}$

and the associated stability constant†

$\beta_{M,\nu} \equiv \inf_{w\in X} \frac{|||w|||_{M,\nu}}{\|w\|_X};$

note consistency in the $\nu \to 0$ limit,

$\beta_{M,\nu\to 0} = \inf_{w\in X} \frac{|||w|||_{M,\nu\to 0}}{\|w\|_X} \to \inf_{w\in X^\perp_M} \frac{\|\mathcal{A}w\|_Y}{\|w\|_X} \equiv \beta_{M,\nu=0},$

since $\|\Pi_M \mathcal{A}w\|_Y$ is penalized by $\nu^{-1} \to \infty$.

†The norm $\|\cdot\|_X$ may again be replaced by a semi-norm $|\cdot|_X$.
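
A discrete sketch of $\beta_{M,\nu}$ as a Rayleigh-quotient minimum (Euclidean stand-ins), together with a check of the stated $\nu \to 0$ consistency:

```python
import numpy as np

# |||w|||^2_{M,nu} = ||A w||^2 + nu^{-1} ||Pi_M A w||^2 = w^T [A^T (I + Pi_M / nu) A] w,
# so beta_{M,nu}^2 is the smallest eigenvalue of that symmetric matrix.
def beta_IO(A, V, nu):
    Q, _ = np.linalg.qr(V)                          # orthonormal basis of Y_M
    P = Q @ Q.T                                     # projector Pi_M
    H = A.T @ (np.eye(A.shape[0]) + P / nu) @ A
    return np.sqrt(np.linalg.eigvalsh(H).min())

rng = np.random.default_rng(5)
n, M = 30, 5
A = np.eye(n) + 0.3 * rng.standard_normal((n, n))   # stand-in operator
V = np.linalg.solve(A.T, rng.standard_normal((n, M)))

for nu in (1.0, 1e-2, 1e-4, 1e-6):
    print(nu, beta_IO(A, V, nu))                    # non-decreasing as nu -> 0

# nu -> 0 limit: the constrained (P-O) constant over X_M^perp.
_, _, Vt = np.linalg.svd(V.T @ A)
print("beta_{M,0} =", np.linalg.svd(A @ Vt[M:].T, compute_uv=False).min())
```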

Page 50: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

A Priori Estimate (I-O): Contributions

Proposition 4: The state estimate satisfies (for $\nu = \nu^{\mathrm{opt}}$)

$\|u^{\mathrm{true}} - u_{M,\nu}\|_X \le \frac{1}{\beta_{M,\nu^{\mathrm{opt}}}} \Big( \inf_{\varphi\in Y_M} \|q^{\mathrm{mod}} - \varphi\|_Y^2 + \underbrace{4\,\|q^{\mathrm{mod}}\|_Y\, \|\mathcal{A}q^{\mathrm{obs}}\|_Y}_{\text{dictates error as } M\to\infty} \Big)^{1/2}.$

The state error depends on

1. the stability constant: $\beta_{M,\nu^{\mathrm{opt}}}$ (↑) as $M$ (↑);
2. the model-bias best-fit error: $\inf_{\varphi\in Y_M} \|q^{\mathrm{mod}} - \varphi\|_Y$ (↓) as $M$ (↑);
3. the observation imperfection: $\|\mathcal{A}q^{\mathrm{obs}}\|_Y$.

Note $\|q^{\mathrm{mod}}\|_Y\, \|\mathcal{A}q^{\mathrm{obs}}\|_Y$ does not depend on $M$.

Page 51: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

A Priori Estimate (I-O): Observations → Perfect

As $\|\mathcal{A}q^{\mathrm{obs}}\|_Y / \|q^{\mathrm{mod}}\|_Y \to 0$ (and $\nu^{\mathrm{opt}} \to 0$),

$\|u^{\mathrm{true}} - u_{M,\nu=0}\|_X \le \frac{1}{\beta_{M,\nu=0}} \inf_{\varphi\in Y_M} \|q^{\mathrm{mod}} - \varphi\|_Y;$

if $\inf_{\varphi\in Y_M} \|q^{\mathrm{mod}} - \varphi\|_Y \to 0$ as $M \to \infty$ then

$\|u^{\mathrm{true}} - u_{M,\nu=0}\|_X \to 0$ as $M \to \infty$.

Note the best-knowledge model $A$, $f$ will still play a role in convergence:

- the inf-sup constant, $\beta_{M,\nu=0}$;
- the magnitude and regularity of $q^{\mathrm{mod}}$;
- the experimentally observable spaces $Y_M$.

Page 52: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

MDWF Galerkin: Analysis

- A Priori Estimates
- Stability Constant: Monotonic Improvement
  - Perfect Observations (P-O)
  - Imperfect Observations (I-O)
- Approximation Theory: Simple Example
- Limit of Small Model Bias


Page 54: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Stabilization (P-O)

Proposition 5. The inf-sup constant

$\beta_{M,0} \equiv \inf_{w\in X^\perp_M} \frac{\|\mathcal{A}w\|_Y}{\|w\|_X}$

is a non-decreasing function of $M$.

Proof. We note that

$Y_{M+1} \supset Y_M \ \Rightarrow\ X^\perp_{M+1} \subset X^\perp_M;$

more data implies greater stability.

Page 55: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Improvement in Stability: Ideal Case . . .

Consider a generalized SVD of $\mathcal{A} \in \mathcal{L}(X, Y)$,

$(\mathcal{A}\xi_j, \mathcal{A}v)_Y = \sigma_j^2\, (\xi_j, v)_X, \quad \forall v \in X,$
$\eta_j = \frac{1}{\sigma_j}\, \mathcal{A}\xi_j,$

for

- singular values in $\mathbb{R}_+$: $\sigma_1 \le \sigma_2 \le \cdots$;
- trial singular functions in $X$: $(\xi_m, \xi_n)_X = \delta_{mn}$;
- test singular functions in $Y$: $(\eta_m, \eta_n)_Y = \delta_{mn}$.

Note $\sigma_1 = \beta_0 > 0$ (by assumption).

Page 56: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

. . . Improvement in Stability: Ideal Case

Proposition 6: For P-O $\Rightarrow$ $\nu = 0$, the choice†

$o_m = \mathrm{X}\xi_m, \quad 1 \le m \le M,$

yields

$\beta_{M,\nu=0} = \sigma_{M+1}:$

the first $M$ singular functions are "deflated."

Proof. Express the inf-sup as a Rayleigh quotient.

†Note this choice yields non-experimentally-observable spaces $Y_M$; however, an SVD Anti-Node computational heuristic will be extracted.
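
A numerical check of the deflation result in a Euclidean surrogate where the representation operator $\mathrm{X}$ is the identity (so $o_m = \xi_m$); per the footnote, this choice is not experimentally observable and serves only to verify $\beta_{M,\nu=0} = \sigma_{M+1}$:

```python
import numpy as np

# Choose o_m = xi_m (trial singular vectors with the smallest singular values);
# then X_M^perp = span{xi_{M+1}, ...} and beta_{M,0} should equal sigma_{M+1}.
rng = np.random.default_rng(6)
n = 20
A = np.eye(n) + 0.5 * rng.standard_normal((n, n))   # stand-in operator
U, s, Vt = np.linalg.svd(A)                          # numpy sorts sigma in decreasing order
order = np.argsort(s)                                # re-order so sigma_1 <= sigma_2 <= ...
s, Xi = s[order], Vt[order].T                        # Xi[:, m] corresponds to xi_{m+1}

for M in range(4):
    O = Xi[:, :M]                                    # observation functionals o_m = xi_m
    if M:
        YM = np.linalg.solve(A.T, O)                 # basis A^{-T} o_m of Y_M
        _, _, W = np.linalg.svd(YM.T @ A)
        N = W[M:].T                                  # orthonormal basis of X_M^perp
    else:
        N = np.eye(n)
    beta = np.linalg.svd(A @ N, compute_uv=False).min()
    print(M, beta, s[M])                             # beta_{M,0} vs sigma_{M+1}
```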

Page 57: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

MDWF Galerkin: Analysis

- A Priori Estimates
- Stability Constant: Monotonic Improvement
  - Perfect Observations (P-O)
  - Imperfect Observations (I-O)
- Approximation Theory: Simple Example
- Limit of Small Model Bias

Page 58: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Monotonicity: Statement (P-O, I-O)

Proposition 7: For any $\nu \in \mathbb{R}_{+,0}$ and hierarchical $Y_M$,

$\beta_{M',\nu} \ge \beta_{M'-1,\nu}, \quad M' = 1,\dots,M;$

and in particular

$\beta_{M',\nu} \ge \beta_{M=0,\nu} = \beta_0, \quad M' = 1,\dots,M,$

where (recall)

$\beta_0 \equiv \inf_{w\in X}\sup_{v\in Y} \frac{\langle Aw, v\rangle_{Y'\times Y}}{\|w\|_X\|v\|_Y}$

is the "no-data" inf-sup constant of the best-knowledge model.

Page 59: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Proposition 7: Proof Sketch

Define the minimizer (for fixed $\nu$)

$\chi_{M'} \equiv \arg\min_{w\in X} \frac{|||w|||^2_{M'}}{\|w\|_X^2};$

for any $w \in X$, $\|\Pi_{M'} \mathcal{A}w\|_Y^2 \ge \|\Pi_{M'-1} \mathcal{A}w\|_Y^2$, and hence

$\beta^2_{M'} \equiv \frac{|||\chi_{M'}|||^2_{M'}}{\|\chi_{M'}\|_X^2} = \frac{\|\mathcal{A}\chi_{M'}\|_Y^2 + \nu^{-1}\|\Pi_{M'}\mathcal{A}\chi_{M'}\|_Y^2}{\|\chi_{M'}\|_X^2} \ge \frac{\|\mathcal{A}\chi_{M'}\|_Y^2 + \nu^{-1}\|\Pi_{M'-1}\mathcal{A}\chi_{M'}\|_Y^2}{\|\chi_{M'}\|_X^2} = \frac{|||\chi_{M'}|||^2_{M'-1}}{\|\chi_{M'}\|_X^2} \ge \frac{|||\chi_{M'-1}|||^2_{M'-1}}{\|\chi_{M'-1}\|_X^2} \equiv \beta^2_{M'-1}.$

Page 60: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

MDWF Galerkin: Analysis

- A Priori Estimates
- Stability Constant: Monotonic Improvement
- Approximation Theory: Simple Example
- Limit of Small Model Bias

Page 61: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

One Space Dimension: Definitions

Consider

- $\Omega = \,]0,1[\,$;
- $X = Y = H^1_0(\Omega)$;
- $\|\cdot\|_X = \|\cdot\|_Y = |\cdot|_{H^1(\Omega)}$;
- $A = \mathrm{Y}$;
- point-wise observation functionals, $o_m \equiv \delta(\,\cdot\,; x^{\mathrm{c}}_m)$, $m = 1,\dots,M$;
- uniformly spaced observation centers $\{x^{\mathrm{c}}_m\}_{m=1}^M$.

Note $o_m$ is bounded for $\Omega \subset \mathbb{R}^1$.

Page 62: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

One Space Dimension: Experimentally Observable Spaces $Y_M$

[figure]

Native basis functions for $Y_M$: $\varphi_m = A^{-*} o_m$, $1 \le m \le M$.

Page 63: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

One Space Dimension: Model Bias Approximation

Proposition 8: For

$q^{\mathrm{mod}} \in H^2(\Omega),$

we obtain

$\inf_{\varphi\in Y_M} \|q^{\mathrm{mod}} - \varphi\|_Y \le C M^{-1} \|q^{\mathrm{mod}}\|_{H^2(\Omega)}$

for $C$ independent of $M$ and $q^{\mathrm{mod}}$.

Proof. Note $Y_M \equiv \mathrm{span}\{A^{-*}o_m\}_{m=1}^M$ is equivalent to piecewise-linear polynomials over the nodes $\{x^{\mathrm{c}}_m\}_{m=1}^M$; now apply standard approximation theory.†

†Note that $q^{\mathrm{mod}} = \mathrm{Y}^{-1}g$ vanishes at the end points of $\Omega$.
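
A numerical illustration of the $M^{-1}$ rate: in this 1D setting the $H^1_0$ best fit of a smooth bias by the piecewise-linear space is its nodal interpolant, so the best-fit error can be measured directly ($q^{\mathrm{mod}} = \sin(\pi x)$ is a hypothetical bias):

```python
import numpy as np

# H^1-seminorm of q - (piecewise-linear interpolant of q) over M uniformly spaced
# interior nodes (plus the end points, where q^mod vanishes).
def bestfit_h1_error(M, nfine=20000):
    x = np.linspace(0.0, 1.0, nfine + 1)
    dx = x[1] - x[0]
    q = np.sin(np.pi * x)                        # hypothetical model bias q^mod
    xc = np.linspace(0.0, 1.0, M + 2)            # M interior observation centers + end points
    Iq = np.interp(x, xc, np.sin(np.pi * xc))    # piecewise-linear interpolant (= H^1_0 best fit)
    d = np.diff(q - Iq) / dx                     # derivative of the error at cell midpoints
    return np.sqrt(np.sum(d * d) * dx)           # |q - Iq|_{H^1(0,1)}

for M in (4, 8, 16, 32, 64):
    print(M, bestfit_h1_error(M))                # error scales roughly like C / M
```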

Page 64: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

MDWF Galerkin: Analysis

- A Priori Estimates
- Stability Constant: Monotonic Improvement
- Approximation Theory: Simple Example
- Limit of Small Model Bias

Page 65: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Strategy: Stabilization

Recall the a priori estimate:

$\|u^{\mathrm{true}} - u_{M,\nu}\|_X \le \frac{1}{\beta_{M,\nu^{\mathrm{opt}}}} \Big( \inf_{\varphi\in Y_M} \|q^{\mathrm{mod}} - \varphi\|_Y^2 + \underbrace{4\,\|q^{\mathrm{mod}}\|_Y\, \|\mathcal{A}q^{\mathrm{obs}}\|_Y}_{\text{dictates error as } M\to\infty} \Big)^{1/2};$

if $\|q^{\mathrm{mod}}\|_Y \ll \|u^{\mathrm{true}}\|_X$ and $\|\mathcal{A}q^{\mathrm{obs}}\|_Y \ll \|u^{\mathrm{true}}\|_X$,

we can control the state error (solely) by improvements in $\beta_{M,\nu^{\mathrm{opt}}}$.

Best conditions: few small singular values.

Page 66: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

"Optimal" Spaces for Stabilization (P-O)

Goal: deflate dangerous modes (E-optimality [FM]), but remain experimentally observable.

Algorithm SVD Anti-Node: compute

$w_{\min} \equiv \arg\inf_{w\in X^\perp_M} \frac{\|\mathcal{A}w\|_Y}{\|w\|_X};$

locate the most sensitive observation point,

$x^{\mathrm{c}}_{M+1} = \arg\sup_{x\in\Omega} |w_{\min}(x)|;$

and set

$Y_{M+1} = Y_M \oplus \mathrm{span}\{A^{-*}\,\mathrm{Gauss}(\,\cdot\,;\, x^{\mathrm{c}}_{M+1}, \sigma)\};$

observe the least stable mode.
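
A sketch of this greedy loop in a 1D discrete surrogate (Euclidean stand-ins, discrete Laplacian, discretized Gaussian functionals; the grid size, $\sigma$, and number of steps are illustrative choices, not the deck's implementation):

```python
import numpy as np

# Greedy SVD Anti-Node sketch (P-O) on a 1D Dirichlet Laplacian surrogate.
n, sigma, steps = 80, 0.05, 4
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def gauss_functional(xc):
    """Discretized Gaussian observation functional centered at xc."""
    w = np.exp(-(x - xc) ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)
    return w * h                                   # o(v) ~ sum_i w(x_i) v(x_i) h

O = np.zeros((n, 0))                               # no observations yet
for M in range(steps):
    if M:
        V = np.linalg.solve(A.T, O)                # observable basis A^{-T} o_m
        _, _, W = np.linalg.svd(V.T @ A)
        N = W[M:].T                                # orthonormal basis of X_M^perp
    else:
        N = np.eye(n)
    _, s, Wt = np.linalg.svd(A @ N)
    beta, w_min = s[-1], N @ Wt[-1]                # least-stable mode over X_M^perp
    xc_new = x[np.argmax(np.abs(w_min))]           # anti-node: most sensitive point
    print(M, beta, xc_new)
    O = np.column_stack([O, gauss_functional(xc_new)])
```

Each pass reports the current $\beta_{M,0}$ and the next observation center; the constant is non-decreasing, consistent with Proposition 5.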

Page 67: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

"Optimal" Spaces for Stabilization (I-O)

Goal: deflate dangerous modes (E-optimality [FM]), but remain experimentally observable.

Algorithm SVD Anti-Node: compute

$w_{\min} \equiv \arg\inf_{w\in X} \frac{|||w|||_{M,\nu}}{\|w\|_X};$

locate the most sensitive observation point,

$x^{\mathrm{c}}_{M+1} = \arg\sup_{x\in\Omega} |w_{\min}(x)|;$

and set

$Y_{M+1} = Y_M \oplus \mathrm{span}\{A^{-*}\,\mathrm{Gauss}(\,\cdot\,;\, x^{\mathrm{c}}_{M+1}, \sigma)\};$

observe the least stable mode.

Page 68: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

MDWF: Petrov-Galerkin Approach (in brief)

- Motivation
- Formulation
- Analysis


Page 70: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Conundrum

For the MDWF formulation, in the data equation

$-a(u_{M,\nu},\varphi) - \nu(\psi_{M,\nu},\varphi)_Y = -\underbrace{a(u^{\mathrm{obs}},\varphi)}_{\text{projection of observable field: } (\mathcal{A}u^{\mathrm{obs}},\varphi)_Y}, \quad \forall \varphi \in Y_M,$

we need $Y_M \equiv E^o_M$: experimentally observable.

For Galerkin, in the model equation

$a(u_{M,\nu},v) - (\psi_{M,\nu},v)_Y = \langle f, v\rangle_{Y'\times Y}, \quad \forall v \in Y,$

we need $\psi_{M,\nu} \in Y^{\mathrm{trial=test}}_M = Y_M = E^o_M$.

Unfortunately, $Y_M = E^o_M$ provides

- good stability: small model bias,
- but poor approximation: large model bias.

Page 71: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

MDWF: Petrov-Galerkin Approach (in brief)

- Motivation
- Formulation
- Analysis

Page 72: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Unlimited Observations

Given $\nu \in \mathbb{R}_{+,0}$, find $(u,\psi) \in X \times Y$ such that

$a(u,v) - (\psi,v)_Y = \langle f, v\rangle_{Y'\times Y}, \quad \forall v \in Y,$
$-a(u,\varphi) - \nu(\psi,\varphi)_Y = -a(u^{\mathrm{obs}},\varphi) - a(q^{\mathrm{obs}},\varphi) - \nu(q^{\mathrm{mod}},\varphi)_Y, \quad \forall \varphi \in Y.$

The unlimited-observations formulation is unchanged from the case of Galerkin approximation.

Page 73: Model-Data Weak Formulation (MDWF): Galerkin Approximation ...

Limited Observations

Given $\nu \in \mathbb{R}_{+,0}$, find $(u_{M,\nu},\psi_{M,\nu}) \in X \times Y^{\mathrm{trial}}_M$ such that

$a(u_{M,\nu},v) - (\psi_{M,\nu},v)_Y = \langle f, v\rangle_{Y'\times Y}, \quad \forall v \in Y,$
$-a(u_{M,\nu},\varphi) - \nu(\psi_{M,\nu},\varphi)_Y = -a(u^{\mathrm{obs}},\varphi), \quad \forall \varphi \in Y^{\mathrm{test}}_M;$

for

$Y^{\mathrm{test}}_M \equiv E^o_M$: experimentally observable,

but

$Y^{\mathrm{trial}}_M$: informed only by approximation considerations.

Strategy [DG,DPW]: choose the $o_m$ to maximize stability from SVD anti-node considerations.