Transcript of young1980.pdf
This article was downloaded by: [Ohio State University Libraries] on: 19 June 2012, at: 05:18. Publisher: Taylor & Francis. Informa Ltd, registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.
International Journal of Control. Publication details, including instructions for authors and subscription information:
http://www.tandfonline.com/loi/tcon20
Refined instrumental variable methods of recursive time-series analysis. Part III. Extensions
PETER YOUNG a & ANTHONY JAKEMAN a
a Centre for Resource and Environmental Studies, Australian National University, Canberra, Australia
Available online: 21 May 2007
To cite this article: PETER YOUNG & ANTHONY JAKEMAN (1980): Refined instrumental variable methods of recursive time-series analysis Part III. Extensions, International Journal of Control, 31:4, 741-764
To link to this article: http://dx.doi.org/10.1080/00207178008961080
Refined instrumental variable methods of recursive time-series analysis
Part III. Extensions

PETER YOUNG†‡ and ANTHONY JAKEMAN†

This is the final paper in a series of three which have been concerned with the comprehensive evaluation of the refined instrumental variable (IV) method of recursive time-series analysis. The paper shows how the refined IV procedure can be extended in various important directions and how it can provide the basis for the synthesis of optimal generalized equation error (GEE) algorithms for a wide class of stochastic dynamic systems. The topics discussed include the estimation of parameters in continuous-time differential equation models from continuous or discrete data; the estimation of time-variable parameters in continuous or discrete-time models of dynamic systems; the design of stochastic state reconstruction (Wiener-Kalman) filters direct from data; the estimation of parameters in multi-input, single output (MISO) transfer function models; the design of simple stochastic approximation (SA) implementations of the refined IV algorithms; and the use of the recursive algorithms in self-adaptive (self-tuning) control.
1. Introduction
In the first two parts of this paper (Young and Jakeman 1979 a, Jakeman and Young 1979 a) we have been concerned with the description and comprehensive evaluation of the refined instrumental variable (IV) approach to time-series analysis for single input, single output (SISO) and multivariable dynamic systems described by discrete time-series models. In this, the third and final part of the paper, we consider how the refined IV method can be extended in various directions to handle continuous time-series models and discrete or continuous time-series models with time-variable parameters. We also discuss briefly other extensions including off-line and on-line adaptive methods of designing state reconstruction (Kalman) filters for stochastic systems; the development of IV estimation procedures for specific time-series models, such as the multiple input-single output transfer function model; and finally the estimation of parameters in multivariable system models in those situations where the observation space is less than the dimension of the model space. For convenience, in all cases except the latter, we shall consider refined IV estimation algorithms with non-symmetric matrix gains. Bearing in mind the results of the first two parts of the paper, however, it is clear that symmetric matrix gain alternatives could be implemented and it is likely that, at least for reasonable sample size, they would perform in a similar manner.
Received 10 August 1979.
† Centre for Resource and Environmental Studies, Australian National University, Canberra, Australia.
‡ Currently Visiting Professor, Control and Management Systems Division, Engineering Department, University of Cambridge.
0020-7179/80/3104 0741 $02.00 © 1980 Taylor & Francis Ltd
2. Continuous-time dynamic systems described by ordinary differential equation models
The refined IV procedure can be applied to both SISO and multivariable continuous time-series models but, for simplicity of exposition, we will describe here only the SISO implementation. The extension of the SISO procedures to the multivariable situation is, however, quite obvious by analogy with the discrete-time case discussed in Part II of the paper (Jakeman and Young 1979 a). Using nomenclature similar to that used previously, the continuous-time SISO model is illustrated in block M (within dotted lines) of Fig. 1 and can be written as
(i) x(t) = [B(s)/A(s)] u(t)
(ii) ξ(t) = [D(s)/C(s)] e(t)
(iii) y(t) = x(t) + ξ(t)        (1)

where s is the differential operator, i.e. sx(t) = dx(t)/dt (loosely interpreted here as the Laplace operator); A, B, C and D are polynomials in s of the following form,

A(s) = s^n + a_1 s^(n-1) + ... + a_n
B(s) = b_0 s^m + b_1 s^(m-1) + ... + b_m,   m < n

with C(s) and D(s) defined similarly, and e(t) is a continuous-time white noise process.
It is well known (e.g. Åström 1970, Jazwinski 1970) that theoretical and analytical difficulties arise because of the use of continuous white noise in mathematical models of dynamic systems, particularly transfer function formulations such as eqn. (1) (ii). In the present context, this difficulty is manifested in the form of practical problems associated with the recursive estimation of the parameters in the C and D polynomials characterizing the noise model. For the moment, however, it is convenient to assume a continuous-time model of the form (1) (ii) although, as we shall see, it is necessary in practice to evaluate the noise components of the model in discrete time in order to circumvent estimation problems.
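As a concrete (and entirely illustrative) sketch of the deterministic part of model (1), x(t) = [B(s)/A(s)] u(t), the fragment below simulates the first-order case A(s) = s + a_1, B(s) = b_0 by forward-Euler integration; the system, its coefficients and the step size are our assumptions, not values from the paper.

```python
# A minimal sketch of the deterministic part of model (1) for the first-order
# case A(s) = s + a1, B(s) = b0, i.e. dx/dt = -a1*x + b0*u(t), integrated by
# forward Euler. All numerical values are illustrative assumptions.

def simulate_first_order(u, a1=1.0, b0=2.0, dt=0.01):
    """Integrate dx/dt = -a1*x + b0*u(t) from x(0) = 0."""
    x, out = 0.0, []
    for uk in u:
        x += dt * (-a1 * x + b0 * uk)   # forward-Euler step
        out.append(x)
    return out

# Unit step input: x(t) should approach the steady-state gain b0/a1 = 2.0.
x = simulate_first_order([1.0] * 2000)
```

A unit step drives the state towards the steady-state gain b0/a1, which gives a quick sanity check on the discretization.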
2.1. Discrete and continuous-time recursive algorithms
It is clear that the model (1) is algebraically equivalent to the discrete-time SISO model discussed in previous parts of this paper. Let us consider, therefore, the situation where we wish to implement the estimation algorithm in discrete time using sampled data from the continuous-time system; we will refer to this as CD (continuous-discrete) analysis (Young 1979 a). Using an approach similar to that used in previous parts of the paper, it is then possible to obtain estimates of the parameters in the continuous-time model polynomials A and B by minimizing a least squares cost function of the form
Here y = [y_01, y_02, ..., y_0T]^T and u = [u_01, u_02, ..., u_0T]^T, where the first zero subscript on u and y indicates that the variables are, respectively, the basic input and output variables (i.e. the 'zeroth' derivatives of u and y), while the second subscript i = 1, 2, ..., T denotes the sampled values of the variables at time t_i, i.e. y(t_i) and u(t_i).
Figure 1. Refined IV algorithm for continuous-time systems.
Now, by direct analogy with the analysis in the DD (discrete-discrete) case of Part I, the recursive estimate â_k of the unknown parameter vector a = [a_1, ..., a_n, b_0, b_1, ..., b_m]^T can be obtained from the following discrete-time algorithm,

(i) â_k = â_(k-1) − P_(k-1) x̂_k* [σ̂² + ẑ_k*^T P_(k-1) x̂_k*]^(-1) {ẑ_k*^T â_(k-1) − y_k*}

or

(ii) â_k = â_(k-1) − P_k x̂_k* σ̂^(-2) {ẑ_k*^T â_(k-1) − y_k*}

and

(iii) P_k = P_(k-1) − P_(k-1) x̂_k* [σ̂² + ẑ_k*^T P_(k-1) x̂_k*]^(-1) ẑ_k*^T P_(k-1)        (4)
where the vectors ẑ_k* and x̂_k* of filtered variables and their derivatives are defined in eqn. (5). Here σ̂² is an estimate of the variance of e(t); x̂_k is the output of an adaptive 'auxiliary model', as shown in Fig. 1, and the star superscript indicates that
the variables are filtered by adaptive prefilters Ĉ/ÂD̂, again as shown in Fig. 1. The i = 1, 2, ..., n subscript on the variables within the square brackets in (5) denotes the ith time-derivative of the variable, while the k subscript outside the brackets indicates that the enclosed variables are all sampled at the kth sampling instant.
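One correction step of an algorithm of the form (4) can be sketched in a few lines. This is our own pure-Python illustration (with the scalar-inverse form of the gain and made-up data), not code from the paper; the noise-free check simply sets the instrument vector equal to the data vector, which reduces the update to ordinary recursive least squares.

```python
# A sketch of one correction step of the recursive IV algorithm (4):
# scalar-inverse update of the gain matrix P and the parameter vector, using
# an instrument vector xhat and a data vector zhat. Our illustration only.

def iv_step(a, P, xhat, zhat, y, sigma2=1.0):
    n = len(a)
    Px = [sum(P[i][j] * xhat[j] for j in range(n)) for i in range(n)]   # P*xhat
    zP = [sum(zhat[i] * P[i][j] for i in range(n)) for j in range(n)]   # zhat^T * P
    denom = sigma2 + sum(zhat[i] * Px[i] for i in range(n))             # scalar bracket
    err = sum(zhat[i] * a[i] for i in range(n)) - y                     # innovation
    a_new = [a[i] - Px[i] * err / denom for i in range(n)]
    P_new = [[P[i][j] - Px[i] * zP[j] / denom for j in range(n)] for i in range(n)]
    return a_new, P_new

# Noise-free check with instruments equal to the data vector (ordinary RLS):
true_a = [0.5, -0.3]
a, P = [0.0, 0.0], [[100.0, 0.0], [0.0, 100.0]]
data = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]]
for z in data * 10:
    y = sum(zi * ai for zi, ai in zip(z, true_a))
    a, P = iv_step(a, P, z, z, y)
```

After repeated passes over noise-free data the estimate settles close to the true parameter vector, which is the expected RLS behaviour of this special case.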
This algorithm has close similarity with the IV algorithm suggested some years ago by Young (1969); the only difference lies in the nature of the prefilters: in the previous algorithm, these were termed 'state variable filters' and were introduced mainly to avoid direct differentiation of noisy signals. In this sense, the function of the present filters is identical: their presence means that it is not the direct derivatives of the variables y(t), x̂(t) and u(t) that are required for estimation but the derivatives of the filtered variables y*(t), x̂*(t) and u*(t). And these filtered derivatives, unlike the direct derivatives, are physically realizable as a product of the filtering operation (Young 1964, 1969). Of course the prefilters here do more than just avoid differentiation of noisy signals; they also represent the mechanism for inducing asymptotic statistical efficiency.
In the present case, the 'optimal' prefilters are defined in terms of estimates of the a priori unknown polynomials A, C and D. It is necessary, therefore, to define some adaptive procedure for synthesizing the prefilters as the estimation proceeds. In the situation where C = D = 1.0, i.e. ξ(t) is white noise, the adaptive synthesis of the prefilters 1/Â is fairly straightforward: both the prefilter and auxiliary model parameters can be updated either recursively or iteratively as shown in Fig. 1, exactly as in the discrete-time model case described in Part I of this paper. When the noise ξ(t) is coloured (i.e. C ≠ 1.0 and/or D ≠ 1.0), however, the situation is not so straightforward: in contrast to the discrete-time model situation, it is not easy to construct a similarly motivated recursive estimator for the continuous-time noise model parameters since the derivatives of the white noise e(t) do not exist in theory.
While it may be possible to solve this noise estimation problem by considering either band-limited noise or purely autoregressive noise (where derivatives of e(t) do not occur), we feel that it may be better to consider a hybrid approach. Here, the noise is estimated in purely discrete-time (DD) terms by the use of the AML or refined AML algorithms described previously. This does not create any implementation problems because the noise model is only required for adaptive prefiltering operations, which can easily be carried out in discrete time when using CD analysis. The general implementation in this case is shown in Fig. 2 (a) and the detailed structure of the derivative-generating filters 1/Â(s) is illustrated in Fig. 2 (b).
It should be noted here that the filter in Fig. 2 (b) is similar to the 'state variable filter' suggested by Kohr (see, e.g. Kohr and Hoberock 1966): the only difference is that the coefficients â_i, i = 1, 2, ..., n are not constant, as in the Kohr case, but are adaptively adjusted, either iteratively or recursively, as the estimation proceeds.
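The derivative-generating filter of Fig. 2 (b) can be sketched as a chain of integrators. The version below is our illustration, with fixed rather than adaptively adjusted coefficients and illustrative values for a_1, a_2 and the integration step; it shows how the filtered signal and its derivatives are obtained without ever differentiating the raw signal directly.

```python
# A sketch of a second-order derivative-generating state variable filter
# 1/A(s), A(s) = s^2 + a1*s + a2: a chain of integrators whose states are the
# filtered signal y*(t) and its first derivative, with the second derivative
# available as the integrator input. Coefficients and step size are
# illustrative assumptions, here held fixed rather than adapted.

def svf_second_order(y, a1=2.0, a2=1.0, dt=0.001):
    ystar, dystar = 0.0, 0.0
    out = []                              # (y*, dy*/dt, d2y*/dt2) samples
    for yk in y:
        d2ystar = yk - a1 * dystar - a2 * ystar
        out.append((ystar, dystar, d2ystar))
        ystar += dt * dystar              # forward-Euler integration
        dystar += dt * d2ystar
    return out

# Feed a constant input: the filtered signal settles at 1/a2 and the
# derivative estimates decay towards zero.
samples = svf_second_order([1.0] * 20000)
```

The point of the structure is that differentiation is replaced by integration, so the derivative signals are physically realizable even for noisy inputs.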
Up to this point we have assumed that, while the algorithm (4) is implemented in discrete time, the signals y(t), x̂(t) and u(t) are available in continuous-time form so that they can be passed through the continuous-time prefilters prior to sampling. In practice, however, it could well be that both input and output signals are naturally in sampled data form. This difficulty can be circumvented, albeit in an approximate manner, by assuming that the signals remain constant over the sampling interval and passing them directly into the continuous-time filters. In other words, the sampled data are converted to a continuous-time staircase form prior to filtering. In this manner, the prefilters perform an additional, useful, interpolation role and provide estimates of the continuous-time filtered variables.
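The staircase (zero-order hold) conversion described above amounts to nothing more than holding each sample over its interval. In the sketch below (ours, with a fine grid standing in for continuous time), the held signal is what would then be fed into the prefilters.

```python
# A minimal sketch of the zero-order-hold device described above: sampled data
# are held constant over each sampling interval, producing the continuous-time
# staircase passed into the prefilters. The "continuous" signal is represented
# on a fine grid of `substeps` points per sampling interval (our assumption).

def staircase(samples, substeps=10):
    held = []
    for s in samples:
        held.extend([s] * substeps)   # hold each sample over the interval
    return held

u_cont = staircase([0.0, 1.0, -1.0], substeps=4)
```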
Figure 2. Refined IV algorithm: CD implementation. (a) Overall implementation (X closed: continuous-time data available; X operative: discrete data only available); (b) state variable filter 1/Â(s) applied to u(t).
Of course the estimates of the filtered variables emerging from the prefilters are in no sense optimal and the efficacy of this approach is clearly dependent upon the sampling period T_s: the approximation will be good for small T_s and will become progressively worse as T_s is lengthened. Fortunately the estimation results do not appear particularly sensitive to the choice of T_s and acceptable performance can be obtained from quite coarse sampling frequencies.
Finally, it is worth noting that, if continuous-time measurements of y(t) and u(t) are available, then it is possible to consider a continuous-time implementation of the estimation algorithm itself (i.e. using CC analysis). This is a logical development of early continuous-time gradient procedures for estimating dynamic system parameters (see, e.g. Young 1965 a, Levadi 1964, Kaya and Yamamura 1962, Young 1976). The most obvious implementation would be an estimation algorithm of the form

(i) dâ(t)/dt = −P(t) x̂*(t) σ̂^(-2) [ẑ*(t)^T â(t) − y*(t)]
(ii) dP(t)/dt = −P(t) x̂*(t) σ̂^(-2) ẑ*(t)^T P(t)        (6)

which is a continuous-time equivalent of the discrete-time recursive algorithm. Algorithms of this form are also discussed by Solo (1978).

Note that it would be difficult to implement the estimation algorithm (6) for other than C = D = 1.0 because of the difficulty in estimating the C and D polynomials (unless we once again consider some hybrid mechanization, which would be rather impractical). Thus when ξ(t) is not white noise, the estimates produced by the algorithm will not have any optimal properties. They will, however, be consistent, asymptotically unbiased and, on the basis of previous experience, they should be reasonably efficient (see Jakeman 1979). Note also that we can reduce the computational complexity further by replacing P(t) in (6) by a simpler stochastic approximation (SA) gain (e.g. Young 1976). This would be a continuous-time equivalent of the SA algorithms discussed in § 7.
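The flavour of such an SA simplification can be seen in a scalar sketch (our illustration, not the paper's algorithm): the matrix gain P is replaced by a decreasing scalar gain 1/k, so each update costs only a handful of operations.

```python
# A sketch of a stochastic approximation (SA) gain replacing the matrix gain:
# a scalar model y_k = a*u_k is estimated with the decreasing gain g_k = 1/k.
# Scalar example with noise-free data; all values are illustrative.

def sa_estimate(us, ys):
    a = 0.0
    for k, (u, y) in enumerate(zip(us, ys), start=1):
        g = 1.0 / k                      # decreasing SA gain
        a -= g * u * (u * a - y)         # same error term as the IV-type update
    return a

us = [1.0, -1.0] * 50
ys = [0.7 * u for u in us]
a_hat = sa_estimate(us, ys)
```

Convergence is slower than with a full matrix gain, which is the usual price of the cheaper update.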
2.2. Experimental results
The CD approach to the continuous-time model estimation discussed in the previous section has been evaluated by Monte Carlo simulation analysis applied to two systems described by second order differential equations. In the first case, the system was of the form

with u(t) chosen as a random binary signal with levels plus and minus 1.0. In the second, the system was modified to

with K = 0.781, ω_n = 1.6, ζ = 0.5 and u(t) chosen as the following combination of three sinusoidal signals,

u(t) = sin (0.5 ω_d t) + sin (ω_d t) + sin (1.5 ω_d t)

where ω_d is the damped natural frequency of the system, i.e. ω_d = ω_n √(1 − ζ²). In both of these examples, the noise ξ(t) was simulated white noise adjusted to give several different signal/noise ratios S (defined as in Young and Jakeman 1979 a).
Tables 1 (a) and 1 (b) are typical of the results obtained during the analysis. For each sample size, column 1 represents the average parameter value over 10 experiments, while columns 2 and 3 represent standard deviation from the true and average parameter value, respectively. In Table 1 (a), the sampling interval T_s is chosen to be quite rapid at 0.1 sec, which represents 1/31.4 of the Shannon maximum sampling period, P_s = π/ω_n. In Table 1 (b), two different sampling intervals are compared, one fairly coarse (P_s/8), the other quite small (P_s/40); there seems to be some bias on a_1 in the coarse sampling situation.
Table 1 (a). S = 10, T_s = 0.1.

Table 1 (b). S = 10, 500 samples.
Other simulations have tended to confirm that quite coarse sampling intervals can be tolerated but suggest that the degree of bias is, not surprisingly, a function of the system dynamic characteristics and the value of S. As a result, the algorithm should always be used with great care if the sampling rate is low and, as a rule of thumb, sampling intervals should always be chosen less than P_s/10. But there is clearly a need for more research on this topic before the algorithm can be used with confidence with coarsely sampled data.
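The P_s/10 rule of thumb is a one-line computation. In the sketch below the natural frequency ω_n = 1.6 is borrowed from the second simulation example purely for illustration.

```python
# The P_s/10 rule of thumb as code: P_s = pi/omega_n is the Shannon maximum
# sampling period, and the rule suggests T_s < P_s/10. The value omega_n = 1.6
# is taken from the second simulation example for illustration only.

import math

def max_recommended_interval(omega_n):
    ps = math.pi / omega_n        # Shannon maximum sampling period
    return ps / 10.0              # rule-of-thumb upper bound on T_s

t_max = max_recommended_interval(1.6)
```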
The algorithm (4) has also been applied, with some success, both to multivariable systems (Jakeman 1979) and to real data. Typical of the latter are the results shown in Table 1 (c) and Fig. 3. Table 1 (c) compares the estimates obtained when carrying out CD and DD time-series analysis on data obtained during fluorescence decay experiments on 1-naphthol (Jakeman et al. 1978). It is clear that the continuous-time and discrete-time models have virtually identical dynamic characteristics in this case, where the sampling period was short in relation to P_s (approximately P_s/113).
Figure 3. Results from model of dye tracer concentration in Murrumbidgee River, Australia (dye tracer data and model output ĉ(t) against time in hours).
Table 1 (c).

Model                                      Estimates                   Time constant (nsec)   Steady state gain
Discrete time (T_s = 0.212 nsec)           â = 0.9724, b̂_0 = 0.0276    7.575                  1.0
Continuous time (time unit = 0.212 nsec)   â = 35.9622, b̂_0 = 1.00     7.624                  1.0
Figure 3 shows the observed and estimated dye concentration in a river, where the estimated concentration is generated by a second order differential equation model estimated using algorithm (4). The data used in this exercise were collected during dye tracer experiments carried out on the Murrumbidgee River system in Australia (Whitehead et al. 1978). Here, as can be seen from Fig. 3, it was not possible to maintain a completely regular sampling interval but T_s was approximately half an hour (P_s/30). This demonstrates how data with irregular sampling intervals can be used, provided the longest sampling interval does not lead to serious interpolation errors and estimation bias.
3. Systems described by time-variable parameter models
The idea of modifying recursive algorithms to allow for the estimation of time-variable model parameters has been exploited many times since the publication of R. E. Kalman's seminal papers on state variable filter-estimation theory in the early nineteen sixties (Kalman 1960, Kalman and Bucy 1961). In the case of IV algorithms, this particular extension has so far been heuristic (see Young 1969). But now, with the advent of the refined IV algorithm, it is possible to put such modifications on a sounder theoretical base and to construct algorithms which have greater practical potential.

There are a number of ways in which the time-variable parameter modifications can be introduced. The most straightforward is simply to take note of the relationship between the refined IV algorithm and the Kalman estimation algorithms and introduce additional a priori information in the form of a stochastic model for the parameter variations. The general form of this model is the following discrete-time, Gauss-Markov model,

a_k = Φ a_(k-1) + Γ q_(k-1)        (7)

Here Φ and Γ are assumed known, and possibly time-variable, matrices, while q_k is a discrete white noise vector with zero mean and covariance matrix Q, which is independent of the observational white noise source e_k, i.e.

E{q_k e_j} = 0 for all k, j;   E{q_k q_j^T} = Q δ_kj

where δ_kj is the Kronecker delta function.
This device is now well known in the recursive parameter estimation literature and it is straightforward to show (see, e.g. Young 1974) how the simple recursive linear least squares regression equation can be modified in the light of the additional a priori information inherent in (7) to include additional prediction equations which allow for the update between samples of both the parameter estimates and the covariance matrix of the parametric estimation errors, P_k.
Unlike the basic IV algorithm, the refined IV algorithm would appear to have certain optimal properties. In particular, the theoretical results of Pierce (1972), together with the stochastic simulation results reported in Parts I and II of this paper, have shown that the P̂_k matrix generated by the refined algorithm provides a good empirical estimate of the covariance matrix P_k* of the estimation errors, where

P_k* = E{ã_k ã_k^T},   ã_k = a − â_k

It is possible, therefore, to employ the same approach used to modify the recursive linear regression equations to similarly modify the refined IV algorithms. The resulting algorithm† takes the following prediction-correction form (see, e.g. Young 1974, p. 214)
prediction:
(i) â_(k|k-1) = Φ â_(k-1)
(ii) P̂_(k|k-1) = Φ P̂_(k-1) Φ^T + Γ Q Γ^T

correction on receipt of the kth sample:
(iii) â_k = â_(k|k-1) − P̂_(k|k-1) x̂_k* [σ̂² + ẑ_k*^T P̂_(k|k-1) x̂_k*]^(-1) {ẑ_k*^T â_(k|k-1) − y_k*}
(iv) P̂_k = P̂_(k|k-1) − P̂_(k|k-1) x̂_k* [σ̂² + ẑ_k*^T P̂_(k|k-1) x̂_k*]^(-1) ẑ_k*^T P̂_(k|k-1)        (8)
Equations (8) (i) to (iv) constitute the refined IV algorithm for estimating stochastically variable parameters in a discrete time-series model of a SISO system. At first sight, this algorithm appears somewhat restrictive since it requires a priori knowledge of the matrices Φ and Γ in the stochastic model of the parameter variations. But past experience with similar algorithms (e.g. Young 1969, Norton 1975) has indicated that the assumption is not as limiting as it might appear. First, it is possible to consider a class of simple random walk models which represent special cases of (7) with very simple Φ and Γ matrices and which seem to offer some considerable practical potential. Second, a priori information on the nature of parameter variations can sometimes be utilized to arrive at simple Gauss-Markov models.

† It will be noted that the derivation of this algorithm is made a little more obvious if the symmetric gain matrix form of the refined IV algorithm is utilized. P̂ from this algorithm is a somewhat closer approximation to P* than in the non-symmetric gain case (see Young and Jakeman 1979 a).
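To fix ideas, the prediction-correction form (8) can be sketched in pure Python. Everything below (the two-parameter example, the random walk model with Φ = I and Γ = I, the diagonal Q, the noise-free data and all numerical values) is our own illustrative assumption, not an experiment from the paper.

```python
# A sketch of the prediction-correction form (8): between samples the
# parameter estimate and its covariance are propagated through the
# Gauss-Markov model (7); on receipt of the kth sample they are corrected as
# in the refined IV algorithm. Here Gamma = I and Q is diagonal (a list of
# diagonal elements), and a step change in one parameter is tracked.

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def predict(a, P, Phi, Q):
    a_pred = matvec(Phi, a)                          # (8)(i)
    PhiT = [list(r) for r in zip(*Phi)]
    P_pred = matmul(matmul(Phi, P), PhiT)            # (8)(ii) with Gamma = I
    for i in range(len(Q)):
        P_pred[i][i] += Q[i]
    return a_pred, P_pred

def correct(a, P, xhat, zhat, y, sigma2=1.0):
    n = len(a)
    Px = matvec(P, xhat)
    zP = matvec([list(r) for r in zip(*P)], zhat)    # zhat^T * P
    denom = sigma2 + sum(z * px for z, px in zip(zhat, Px))
    err = sum(z * ai for z, ai in zip(zhat, a)) - y  # innovation, (8)(iii)
    a_new = [a[i] - Px[i] * err / denom for i in range(n)]
    P_new = [[P[i][j] - Px[i] * zP[j] / denom for j in range(n)]
             for i in range(n)]                      # (8)(iv)
    return a_new, P_new

Phi = [[1.0, 0.0], [0.0, 1.0]]                       # random walk: Phi = I
Q = [0.01, 0.0]                                      # only the first parameter drifts
a, P = [0.0, 0.0], [[10.0, 0.0], [0.0, 10.0]]
for k in range(200):
    true_a = [0.5 if k < 100 else 0.8, -0.3]         # step change at k = 100
    z = [1.0, 1.0] if k % 2 else [1.0, -1.0]
    y = sum(zi * ai for zi, ai in zip(z, true_a))
    a, P = predict(a, P, Phi, Q)
    a, P = correct(a, P, z, z, y)
```

With a non-zero Q entry the filter keeps a finite gain on the first parameter and so follows the step change, while the zero entry lets the constant parameter settle.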
In the first case, the three random walk models that have proven most useful in practice are as follows.

The pure random walk (RW),

a_k = a_(k-1) + q_(k-1)        (9)

The smoothed random walk (SRW),

a_k = a_(k-1) + d_(k-1),   d_k = (1 − α) d_(k-1) + q_(k-1)        (10)

where α is a constant scalar with 0 < α < 1.0.

The integrated random walk (IRW),

a_k = a_(k-1) + d_(k-1),   d_k = d_(k-1) + q_(k-1)        (11)
The RW model (9) was first used in the early nineteen sixties (see, e.g. Kopp and Orford 1963, Lee 1964). The IRW model (11) was suggested in the parameter estimation context by Norton (1975), who has used it successfully in a number of practical applications. The SRW model (10) is of more recent origin (Young and Kaldor 1978) and seems to provide a good compromise between models (9) and (11), although it requires the specification of one additional parameter, the smoothing constant α = 1/τ, where τ is the approximate exponential smoothing constant in sampling intervals†.
All of the models (9) to (11) are non-stationary in a statistical sense and so they allow for wide variation in the parameters. Their different characteristics are described fully by Norton (1975) and Young and Kaldor (1978). Put simply, the progression from model (9) through (10) to (11) allows for greater overall variation in the estimated parameters for any specified covariance matrix Q, accompanied by greater smoothing of the short-term variations.
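In state-space terms the three models share a common two-state structure (parameter and slope). The sketch below is our illustration of that structure; the exact parameterization (in particular where the smoothing constant enters the SRW transition) is an assumption and is not reproduced from the paper.

```python
# Transition structure (Phi, Gamma) for one parameter modelled by the RW, SRW
# and IRW models, written with a two-state vector [parameter a, slope d].
# The two-state layout and the (1 - alpha) smoothing factor are assumptions
# made for illustration.

def grw_model(kind, alpha=0.1):
    """Return (Phi, Gamma) for the state vector [a, d]."""
    if kind == "RW":                      # a_k = a_{k-1} + q_{k-1}; slope unused
        return [[1.0, 0.0], [0.0, 0.0]], [1.0, 0.0]
    if kind == "SRW":                     # smoothed slope: d_k = (1-alpha)*d_{k-1} + q
        return [[1.0, 1.0], [0.0, 1.0 - alpha]], [0.0, 1.0]
    if kind == "IRW":                     # slope itself a random walk
        return [[1.0, 1.0], [0.0, 1.0]], [0.0, 1.0]
    raise ValueError(kind)

def propagate(model, state, q):
    Phi, Gamma = model
    a, d = state
    return [Phi[0][0] * a + Phi[0][1] * d + Gamma[0] * q,
            Phi[1][0] * a + Phi[1][1] * d + Gamma[1] * q]

# With zero noise an IRW state drifts linearly: slope 0.1 per step.
state = [1.0, 0.1]
for _ in range(10):
    state = propagate(grw_model("IRW"), state, 0.0)
```

The progression RW → SRW → IRW is visible directly in the slope row of Φ: absent, damped, then fully integrated.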
In the case where more general Φ and Γ matrices are considered, it may often be possible to assume that, for physical reasons, the variations in the parameters are correlated with the variations in other measured variables affecting the system. For example, the parameters in an aerospace vehicle are known to be functions of variables such as dynamic pressure, Mach number, altitude, etc. (Young 1979 b). Or again, the numerator coefficients in a transfer function model between rainfall and runoff flow in hydrological systems are known to be functions of soil moisture and evapo-transpiration (Young 1975, Whitehead and Young 1975).

† Strictly, the time constant, T = −T_s/log_e (1 − α) time units, where T_s is the sampling interval in time units.
In such examples, it is often possible to define a_k in the following form

a_k = T_k a_k*        (12)

where T_k is a matrix (often diagonal) of the relevant measured variables; a_k* is a vector of residual parameter variations which, if the T transformation is effective, will be only slowly variable and can be described, for example, by one of the random walk models. In the case of the RW (9), i.e.

a_k* = a_(k-1)* + q_(k-1)        (13)

we see that, upon substitution from (13) into (12), the variations in a_k are given by a Gauss-Markov model such as (7) with Φ = Φ_k = T_k T_(k-1)^(-1) and Γ = Γ_k = T_k. It is clearly possible, therefore, to utilize the refined IV algorithm (8) with Φ and Q in the prediction eqns. (8) (i) and (ii) defined accordingly. Such an approach has been used previously with other recursive algorithms by Young (1969, 1979 b).
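The substitution of (13) into (12) can be checked numerically for diagonal T_k. The matrices and numbers below are arbitrary illustrative values (ours); the check is simply that the direct route and the Gauss-Markov route give the same a_k.

```python
# A numerical check of the substitution of (13) into (12) for diagonal T_k:
# a_k = T_k * a_k* with a_k* a pure random walk is the same as
# a_k = (T_k T_{k-1}^{-1}) a_{k-1} + T_k q_{k-1}, i.e. Phi_k = T_k T_{k-1}^{-1}
# and Gamma_k = T_k in the Gauss-Markov model (7). Values are illustrative.

def step_direct(T_now, astar_prev, q):
    astar_now = [ap + qi for ap, qi in zip(astar_prev, q)]        # (13)
    return [t * a for t, a in zip(T_now, astar_now)]              # (12)

def step_gauss_markov(T_now, T_prev, a_prev, q):
    phi = [tn / tp for tn, tp in zip(T_now, T_prev)]              # Phi_k diag
    return [p * a + tn * qi for p, a, tn, qi in zip(phi, a_prev, T_now, q)]

T_prev, T_now = [2.0, 0.5], [3.0, 0.25]       # diagonal entries of T_{k-1}, T_k
astar_prev = [1.0, -2.0]
a_prev = [t * a for t, a in zip(T_prev, astar_prev)]
q = [0.1, 0.2]

a_direct = step_direct(T_now, astar_prev, q)
a_gm = step_gauss_markov(T_now, T_prev, a_prev, q)
```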
The implementation of the algorithm defined by eqns. (8) (i) to (iv) offers several problems. In particular, the equations imply the parallel implementation of the refined AML algorithm and its interactive use with the refined IV algorithm, as described by Young and Jakeman (1979 a). This introduces considerable complexity and, for the present paper, we have once again implemented only the special case where ξ_k is white noise, i.e. C(z^(-1)) = D(z^(-1)) = 1.0. Here, the full refined AML is not required and the prefilters (nominally Ĉ/ÂD̂) are defined as 1/Â. This simpler algorithm works very well and seems to give good results even if ξ is coloured noise. Moreover, the algorithm in this form has also been modified further to allow for an off-line smoothing solution in which the recursive estimate at any sampling instant k is a conditional estimate â_(k|N) based on the whole data set of N samples. The smoothing algorithm is an extension of Norton's work (Norton 1975) within an IV context and it requires both forward and backward recursive processing of the data (Young and Kaldor 1978, Kaldor 1978, Gelb 1974).
Finally, it should be remarked that the above approach to time-variable parameter estimation can be extended straightforwardly both to the multivariable and continuous-time situations. Such extensions are fairly obvious and so they are not considered in detail in the present paper.
3.1. Experimental results
The IV algorithm (8) has been applied to the following second order discrete-time system

where ξ_k is white noise chosen to make S = 20; while a_2k = 0.5 for all k, and both b_k and a_1k are time-variable with

b_k = 0.3, k = 1, ..., 30;   0.5, k = 31, ..., 60;   0.4, k = 61, ..., 100

and

a_1k = −0.35, k = 1, ..., 60;   −0.6, k = 61, ..., 100
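The parameter schedules above translate directly into code. Since eqn. (14) itself is not legible in this transcript, the second order ARX-type structure used below, y_k = −a_1k y_(k-1) − a_2 y_(k-2) + b_k u_(k-1), is our assumption for illustration; only the schedules are taken from the text.

```python
# The piecewise-constant parameter schedules from the text, driving an
# assumed second order ARX-type system (the exact form of eqn. (14) is not
# reproduced here, so this structure is an illustrative assumption).

def b_sched(k):
    return 0.3 if k <= 30 else (0.5 if k <= 60 else 0.4)

def a1_sched(k):
    return -0.35 if k <= 60 else -0.6

def simulate(u, n=100):
    y = [0.0, 0.0]
    for k in range(2, n):
        # y_k = -a1_k*y_{k-1} - a2*y_{k-2} + b_k*u_{k-1}, with a2 = 0.5
        y.append(-a1_sched(k) * y[k - 1] - 0.5 * y[k - 2] + b_sched(k) * u[k - 1])
    return y

ys = simulate([1.0] * 100)
```

Both parameter regimes leave the characteristic polynomial stable (pole modulus √0.5), so the simulated output stays bounded through the step changes.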
Figures 4 (a), (b) and (c) show the estimation results obtained when an IRW model (11) is assumed, with Q selected as a diagonal matrix with elements 0.001, 0, and 0.001, respectively. Both the recursive filtering and smoothing estimates are shown in all cases (for the constant parameter a_2, the smoothed estimate is, of course, itself a constant). It is interesting to note that very similar results to these were obtained using the SRW model with α = 0.9 and the diagonal elements of Q set to 0.05, 0, and 0.05, respectively.
It is clear that in this example, where step changes in parameters occur, the smoothing algorithm is not particularly appropriate, since it attempts to provide a smooth transition where abrupt changes are actually being encountered. In practice, however, it is quite likely that smoother changes in parameters will often occur, and it is here that the smoothing algorithm will have maximum potential. But it should be emphasized that the smoothing algorithm used here is an off-line procedure and is computationally expensive in comparison to the filtering algorithm (8). On-line, fixed-lag smoothing algorithms (Gelb 1974) could be developed, however, if circumstances so demanded.
Figure 4. Time-variable parameter estimation for second order, stochastic system (recursive and smoothed estimates against number of samples).
Algorithms such as (8), with parameter variation models (12) and (13), have been used successfully for the estimation of the rapidly changing parameters of a simulated missile system (e.g. Young 1979 b). In this example, additional flexibility was required to estimate the particularly rapid changes in parameters that occurred over the rocket boost period and this was introduced by making the covariance matrix Q also a function of k.
The algorithm (8) has also been applied to many other sets of simulated and real data. These include time-series obtained from a large econometric model of the Australian economy (Young and Jakeman 1979 b); real data from the United States economy (Young 1978); and various sets of environmental data (Young 1975, Whitehead and Young 1975, Young 1978). In the latter examples, the time-variable estimation was utilized specifically for the identification of non-linearities in the model structure, a procedure for which it seems singularly well suited.
4. State reconstruction filter design
Recently Young (1979 c) has shown how the estimates of the state of a stochastic, discrete-time, SISO system can be obtained as a linear function of the outputs of the adaptive prefilters used in the refined IV algorithm when it is applied to the following ARMAX model

In particular, it can be shown that the state estimate x̂_k is generated theoretically from a relationship of the following form,

x̂_k = Z_k a        (16)

where Z_k is a matrix assembled from the outputs of the prefilters.
The state reconstruction filter (16) can be implemented in practice by replacing a by its estimate â obtained from a recursive IVAML algorithm and using the outputs of the prefilters in the same algorithm to define Z_k.
If the fully recursive IV solution is utilized in the above fashion then the overall procedure constitutes an optimal adaptive Kalman filter, in the sense that the asymptotically optimal state estimates are obtained as a by-product of an asymptotically efficient parameter estimation scheme. In the off-line recursive-iterative implementation, use of the algorithm in this fashion represents a Kalman filter design procedure in which the asymptotic gain Kalman (or Wiener) filter represented by eqn. (16) is synthesized directly from the system data†. This should be particularly useful in practice because it obviates the need for specifying noise statistics and solving the covariance matrix Riccati equation, as required in normal Kalman filter design. Note also that this approach to state estimation can also be applied in the purely stochastic situation where no input variables u are present: here the refined AML algorithm would provide the source of parameter estimates and prefiltered variables.
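The practical content of this design procedure can be illustrated with a small numerical sketch (ours, not from the paper; the ARMAX coefficients and the Python setting are purely illustrative): the asymptotically optimal one-step-ahead predictor is synthesized directly from the A, B and C polynomials, with no noise covariances specified and no Riccati equation solved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ARMAX model: A = 1 - 1.5 q^-1 + 0.7 q^-2, B = q^-1, C = 1 + 0.5 q^-1
N = 2000
u = rng.standard_normal(N)
e = rng.standard_normal(N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = 1.5*y[k-1] - 0.7*y[k-2] + 1.0*u[k-1] + e[k] + 0.5*e[k-1]

# One-step-ahead predictor in innovations form, built directly from the
# polynomials:  C(q) yhat_k = [C(q) - A(q)] y_k + B(q) u_k
yhat = np.zeros(N)
for k in range(2, N):
    yhat[k] = -0.5*yhat[k-1] + 2.0*y[k-1] - 0.7*y[k-2] + 1.0*u[k-1]

# The prediction error recovers the white innovations e (after transients)
resid = (y - yhat)[10:]
print(np.var(resid))   # close to var(e) = 1
```

The design choice being demonstrated is exactly the one argued in the text: the steady-state (Wiener) predictor gain is implicit in the estimated polynomials, so no separate noise-statistics specification is needed.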
An extension of the above approach to continuous-time systems with deterministic inputs is fairly obvious but involves some technical problems. The expression for the estimate of the continuous-time state vector x̂(t) is of the same basic form as (16) (see Fig. 1), i.e.

x̂(t) = Z(t) a    (22)

where, theoretically, Z(t) is the continuous equivalent of Z_k. But the noise model in the continuous-time equivalent of eqn. (15) has equal order numerator and denominator polynomials, which introduces estimation problems (see, e.g. Phillips 1959, Phadke and Wu 1974).
We will not discuss these problems in this present paper but will merely note that, in this situation, a straightforward yet clearly suboptimal approach is to use a more arbitrary state variable filter (1/D). For example, the choice of D could be based on the heuristic notion that its passband should encompass the passband of the system under study (e.g. D = A). In this manner, the filter will pass frequency components of interest in the estimation but attenuate high frequency noise. The resultant IV algorithm is then identical to that suggested by Young (1969), and the state estimate obtained from (22) with a replaced by â, although not optimal in any minimum variance sense, will be asymptotically unbiased and consistent, i.e. x̂(t) → x(t) for t → ∞, where x(t) is the true state vector.
The IV algorithm in this latter case can be considered as a stochastic equivalent of the adaptive observer suggested by Kreisselmeier (1977), which is based on the deterministic equivalent of eqn. (22) and uses a continuous-time deterministic estimation algorithm with constant gains. But unlike the Kreisselmeier observer, this stochastic state reconstruction filter will function satisfactorily, albeit non-optimally, in the presence of even high levels of noise.
† This would seem to satisfy Kalman's requirement (1960) that the two problems (parameter and state estimation) should be optimized jointly if possible.
Finally, with the utilization of suitable multivariable canonical forms, it is possible to extend the arguments in this section to multivariable systems (Jakeman and Young 1979 c). Such extensions will, however, suffer from the disadvantages of complexity associated with all multivariable black box methods (see Jakeman and Young 1979 a) and they will need to be considered in detail before their practical potential can be evaluated.
4.1. Experimental results
Figures 5 (a) and (b) show the results obtained when the off-line Kalman filter design procedure is applied to the following model, which is simply the innovations (or Kalman filter) description of model (5) considered in Part I of the paper. These results were obtained using Monte Carlo analysis with ten random simulations and the figures show the ensemble averages of the two state variable estimates compared with the true state variables generated by the model. The variance associated with the ensemble averages was quite small, as shown by the standard error bounds marked on the
Figure 5. Joint parameter-state estimation: output of adaptive state reconstruction (Kalman) filter for second order, stochastic system.
plots. It is interesting to observe that the estimate of the first element of x̂_k is the optimally filtered output of the system, ŷ_k, which corresponds with the optimal one step ahead prediction of the output.
5. The general refined IV approach to estimation and its application to some special model forms
So far in this paper we have talked mainly in terms of specific mathematical model forms. One attraction of the refined IV method, however, is that it suggests a general approach to stochastic model estimation that has some similarities with the alternative prediction error (PE) minimization approach, but which leads to algorithms with subtle but important practical differences of mechanization. In this section, we will outline this approach and show that it can be considered to arise from a conceptual basis which we will term generalized equation-error (GEE) minimization†. We will also demonstrate the efficacy of the approach by showing how it can be applied to two specific model forms, namely the multi-input, single output (MISO) transfer function model; and the tanks-in-series representation, as used in chemical engineering and water resources modelling work.
5.1. Generalized equation-error (GEE) minimization
The general refined IV approach to time-series analysis consists of the following three steps.
(1) Formulate the stochastic, dynamic model so that the stochastic characteristics are defined in relation to a source term consisting of a white noise innovations process e_k (or its vector equivalent in the multivariable case), and then obtain an expression for e_k in terms of all the other model variables.
(2) By defining appropriate prefilters, manipulate the expression for e_k until it is a linear relationship (or set of relationships) in the unknown parameters of the basic deterministic part of the model (i.e. the A and B polynomial coefficients in all examples considered so far). Because of their similarity to the equation-error relationships used in ordinary IV analysis, these linear expressions can be considered as generalized equation-error (GEE) functions.
(3) Apply the recursive or recursive-iterative IV algorithms to the estimation of the parameters in the GEE model(s) obtained in step (2), with the IVs generated in the usual manner as functions of the output of an adaptive auxiliary model (in the form of the deterministic part of the system model). If prefilters are required in the definition of the GEE, then they will also need to be made adaptive, if necessary by reference to a noise model parameter estimation algorithm (e.g. the refined AML algorithm) utilized in parallel and co-ordinated with the IV algorithm.
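As a concrete illustration of steps (1)-(3), the following sketch (our own toy example in Python, not from the paper; the prefilters and the parallel noise-model estimation of step (3) are omitted, so this is the basic rather than the refined IV scheme) applies the iterative IV idea to a first order system with heavily coloured noise: least squares gives a biased starting estimate, and each pass rebuilds the auxiliary model whose output supplies the instrumental variables.

```python
import numpy as np

rng = np.random.default_rng(1)

# System: y_k = 0.8 y_{k-1} + 1.0 u_{k-1} + coloured noise
N = 5000
u = rng.standard_normal(N)
e = rng.standard_normal(N)
xi = np.zeros(N)
x = np.zeros(N)
for k in range(1, N):
    xi[k] = 0.9*xi[k-1] + e[k]          # AR(1) noise violates the LS assumptions
    x[k] = 0.8*x[k-1] + 1.0*u[k-1]      # noise-free (deterministic) output
y = x + xi

Z = np.column_stack([y[:-1], u[:-1]])   # data (regression) matrix
Y = y[1:]
theta = np.linalg.lstsq(Z, Y, rcond=None)[0]   # biased least-squares start

for _ in range(3):                       # recursive-iterative IV, cf. step (3)
    xh = np.zeros(N)
    for k in range(1, N):                # adaptive auxiliary model output
        xh[k] = theta[0]*xh[k-1] + theta[1]*u[k-1]
    Zhat = np.column_stack([xh[:-1], u[:-1]])  # instrumental variables
    theta = np.linalg.solve(Zhat.T @ Z, Zhat.T @ Y)

print(theta)   # approaches the true parameters [0.8, 1.0]
```

Because the instruments are functions of the input alone, they remain uncorrelated with the noise however the noise is structured, which is the property the GEE/IV decomposition exploits.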
This decomposition of the estimation problem into parallel but co-ordinated system and noise model estimation, as outlined in step (3), is central to the concept of GEE minimization; and it contributes to the robustness of the resultant algorithms in comparison with equivalent prediction error (PE) minimization algorithms (Ljung 1976, Young and Jakeman 1979 c).
† GEE has been used previously to denote any EE function defined in terms of prefiltered variables; here we use it more specifically to mean an optimal GEE function with prefilters chosen to induce asymptotic statistical efficiency.
The robustness is enhanced further by the IV mechanization, which ensures that the algorithm is not susceptible to contravention of theoretical assumptions about the nature of the noise process. In particular, the supposition that the noise arises from a white noise source, usually via some dynamic model (e.g. a rational transfer function, as in the ARMA model), is not restrictive: provided that the system input signals (u_t or u_k) are independent of the noise, then the refined IV algorithms will yield estimates which are asymptotically unbiased and consistent even if the noise assumptions are incorrect. This remains true even if the noise is highly structured, e.g. a periodic signal or d.c. bias. On the other hand, if the assumptions are valid, then the resulting estimates will, as we have seen in this series of papers, have the additional desirable property of asymptotic statistical efficiency.
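The claim about highly structured noise is easy to check numerically. In the sketch below (our own illustration in Python, not from the paper), the "noise" is a deterministic d.c. bias plus a periodic component; least squares on the lagged output is visibly biased, while the IV estimate, here using the noise-free model output as an idealized instrument (the role played in practice by the adaptive auxiliary model output), remains close to the true values.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 4000
u = rng.standard_normal(N)
t = np.arange(N)
xi = 2.0 + 3.0*np.sin(0.3*t)             # structured "noise": d.c. bias + periodic term
x = np.zeros(N)
for k in range(1, N):
    x[k] = 0.7*x[k-1] + 2.0*u[k-1]        # noise-free output
y = x + xi

Z = np.column_stack([y[:-1], u[:-1]])     # regressors (contaminated by xi)
Zhat = np.column_stack([x[:-1], u[:-1]])  # idealized instruments, independent of xi
Y = y[1:]
th_ls = np.linalg.lstsq(Z, Y, rcond=None)[0]
th_iv = np.linalg.solve(Zhat.T @ Z, Zhat.T @ Y)
print(th_ls[0], th_iv[0])   # LS pulled away from the true 0.7; IV close to 0.7
```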
5.2. The MISO transfer function model
To exemplify the GEE minimization approach, let us consider the MISO transfer function model. Most time-series research in the MISO case has been directed towards the so-called ARMAX representation, where the transfer functions relating each separate input to the output have common denominator polynomials. An alternative and potentially more useful model form is the following MISO transfer function model, where the m individual input-output transfer functions are defined independently (see, e.g. Box and Jenkins 1970),

y_k = (B_1/A_1) u_{1k} + (B_2/A_2) u_{2k} + … + (B_m/A_m) u_{mk} + (D/C) e_k    (24)

This model can be considered as the dynamic equivalent of regression analysis, with the regression coefficients replaced by transfer functions. In this sense, such a model has wide potential for application.
Considering the two input case for convenience, we note from (24) that the white noise source e_k is defined as

e_k = (C/D)[y_k − (B_1/A_1) u_{1k} − (B_2/A_2) u_{2k}]    (25)

It is now straightforward to show that (25) can be written in two GEE forms. First, if a single star superscript is utilized to denote prefiltering by C/DA_1, then

e_k = A_1 ζ_{1k}* − B_1 u_{1k}*    (26)

Here ζ_{1k} is defined as

ζ_{1k} = y_k − x̂_{2k}    (27)

where x̂_{2k} is the output of the auxiliary model between the second input u_2 and the output, i.e. it is that part of the output explained by the second input alone. Similarly, e_k can be defined in terms of ζ_{2k}, where

ζ_{2k} = y_k − x̂_{1k}    (28)

In this case,

e_k = A_2 ζ_{2k}** − B_2 u_{2k}**    (29)

where the double star superscript denotes prefiltering by C/DA_2.
By decomposing the problem into the two expressions (26) and (29), we have been able to define two separate GEEs which are linear in the unknown model parameters for each transfer function in turn.
Now let us define,

a_i = [a_{i1}, …, a_{in}, b_{i0}, …, b_{in}]^T

where a_{ij}, j = 1, 2, …, n, and b_{ij}, j = 0, 1, …, n, are the jth coefficients of A_i and B_i respectively. It is now possible to obtain estimates â_1 and â_2 of a_1 and a_2 from the following refined IV algorithm

â_{i,k} = â_{i,k−1} − P̂_{ik} ẑ_{ik}* {z_{ik}*^T â_{i,k−1} − ζ_{ik}*}    (30)

where ẑ_{ik}* and z_{ik}* are defined as follows

ẑ_{ik}* = [x̂*_{i,k−1}, …, x̂*_{i,k−n}, u*_{ik}, …, u*_{i,k−n}]^T

z_{ik}* = [ζ*_{i,k−1}, …, ζ*_{i,k−n}, u*_{ik}, …, u*_{i,k−n}]^T
Algorithm (30) is used twice, for i = 1, 2, but when i = 2 the single star superscripts are replaced by double star superscripts. The adaptive prefiltering is then executed in the same manner as for the SISO case, with the refined AML algorithm providing estimates of the C and D polynomial coefficients. The extension to the general case of m inputs is obvious. There is also a symmetric gain version of (30), with z_{ik}*^T being replaced by ẑ_{ik}*^T everywhere except within the braces.
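A toy numerical illustration of this decomposition (our own sketch in Python; it substitutes plain least squares at a low noise level for the refined IV machinery, and the prefilters are omitted) alternates between the two inputs: each pass forms ζ_i by subtracting the other channel's auxiliary-model output from y, re-estimates (a_i, b_i), and rebuilds that channel's auxiliary model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-input MISO model: y = (B1/A1) u1 + (B2/A2) u2 + small white noise
N = 4000
u1 = rng.standard_normal(N)
u2 = rng.standard_normal(N)
x1 = np.zeros(N)
x2 = np.zeros(N)
for k in range(1, N):
    x1[k] = 0.5*x1[k-1] + 1.0*u1[k-1]
    x2[k] = 0.8*x2[k-1] + 0.5*u2[k-1]
y = x1 + x2 + 0.05*rng.standard_normal(N)

def fit_and_simulate(zeta, u):
    """LS fit of a first-order channel to zeta, then auxiliary-model output."""
    Z = np.column_stack([zeta[:-1], u[:-1]])
    a, b = np.linalg.lstsq(Z, zeta[1:], rcond=None)[0]
    xh = np.zeros(len(u))
    for k in range(1, len(u)):
        xh[k] = a*xh[k-1] + b*u[k-1]
    return (a, b), xh

xh1 = np.zeros(N)
xh2 = np.zeros(N)
for _ in range(5):                    # relaxation sweeps over the two channels
    th1, xh1 = fit_and_simulate(y - xh2, u1)   # zeta_1 = y - xhat_2, cf. (27)
    th2, xh2 = fit_and_simulate(y - xh1, u2)   # zeta_2 = y - xhat_1, cf. (28)
print(th1, th2)   # near (0.5, 1.0) and (0.8, 0.5)
```

With independent inputs the cross-channel residual shrinks on each sweep, which is why the two GEEs can be handled one transfer function at a time.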
5.3. The tanks-in-series model
In chemical engineering and water resources research it is quite often useful to describe a dynamic system by means of a serial connection of identical tanks, with each tank described by a first order differential equation with transfer function b/(s+a); in other words the input u(t) and the output y(t) are related by

y(t) = [b/(s+a)]^m u(t) + ξ(t)

where m is the number of tanks in series and ξ(t) is a noise term. Using the GEE approach, it is possible to obtain refined IV estimates of a and b for different m and so identify and estimate the tanks-in-series model. We will not discuss this in detail here since it is done elsewhere (Jakeman and Young 1979 b), but an example of its use will be described in the next section.
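For intuition, the serial structure is easy to simulate: discretize a single tank and pass the signal through it m times. The sketch below (our own Python illustration, not from the paper; a zero-order-hold discretization is assumed and the noise term is omitted) checks the step response of two tanks against the expected steady-state gain (b/a)^m.

```python
import numpy as np

def tank_chain(u, a, b, m, T):
    """Response of m identical tanks b/(s+a) in series (ZOH discretization)."""
    alpha = np.exp(-a*T)
    beta = (b/a)*(1.0 - alpha)      # discrete gain of one tank
    x = np.asarray(u, dtype=float)
    for _ in range(m):              # pass the signal through each tank in turn
        out = np.zeros_like(x)
        for k in range(1, len(x)):
            out[k] = alpha*out[k-1] + beta*x[k-1]
        x = out
    return x

# Step response of m = 2 tanks settles to the d.c. gain (b/a)^m
u_step = np.ones(400)
y_step = tank_chain(u_step, a=2.0, b=1.5, m=2, T=0.05)
print(y_step[-1])   # approximately (1.5/2.0)**2 = 0.5625
```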
5.4. Experimental results
Jakeman et al. (1979) have evaluated the MISO transfer function model estimation procedure using both simulated and real data. Figure 6 compares the deterministic output of a MISO air pollution model obtained in this manner with the measured data. Here the data are in the form of atmospheric ozone measurements at a downstream location in the San Joaquin Valley of California. These are modelled in terms of two input ozone measurements at
upstream (in relation to the prevailing wind) locations. This analysis proved particularly useful for interpreting across gaps in downstream data.
Table 2 shows the results obtained when the tanks-in-series estimation procedure was applied to 100 samples of simulated data obtained from a second order system (m = 2). The results again include averages and standard errors over ten experiments.
Figure 6. Results from model of ozone concentration in San Joaquin Valley, California.
Table 2. Parameter estimates: true values, with averages and standard errors at signal/noise ratios of 10 and 30 (sampling interval 0.2 sec).
6. Multivariable systems with limited dimension observation space
Another example of the many possibilities which are opened up by the refined IV approach to time-series analysis is the case where the number of observed variables in a multivariable system is less than the number of output variables in the model. In theory, the symmetric matrix gain refined IV algorithm for multivariable systems described by Jakeman and Young (1979 a) can be modified to allow for such a situation. This will only be possible, of course, provided the complete model is identifiable from the limited observations. The conditions for identifiability in these situations are not the subject of the present paper but, if we assume that the model is identifiable (i.e. unique estimates of all the model parameters can be obtained from the available observations) then the modifications to the symmetric gain refined IV algorithm are fairly straightforward.
7. Stochastic approximation algorithms
It is well known that the recursive least squares and related algorithms, such as those discussed in this series of papers, can be interpreted as special examples of matrix-gain, multi-dimensional, stochastic approximation (SA) procedures (Tsypkin 1971, Young 1976). It is clearly possible, therefore, to modify any of the refined IV algorithms to form simpler, scalar gain alternatives. While such SA procedures are computationally efficient, they will not usually possess the rapid convergence characteristics and small sample statistical efficiency of their matrix gain equivalents. They may prove advantageous, however, where data are plentiful but computational load must be kept to a minimum.
In the basic SA algorithms, the matrix gain P̂_k is replaced by a scalar gain which obeys the conditions of Dvoretzky (see, e.g. Tsypkin 1971). In the discrete-time case, the best known gain sequence of this type is γ_k = γ/k, where γ is a constant scalar: in other words, the gain sequence is made a monotonically decreasing function of the recursive step number, k. In the continuous-time case the best known example is simply the continuous-time equivalent of γ_k.
Such SA algorithms can also be modified (normally heuristically) to allow for variation in the estimated parameters: this is achieved by restricting the monotonic decrease in gain in some manner, usually by making γ_k or γ(t) approach a constant γ_0 exponentially as k or t approaches infinity. This modification is based on a partial analogy with the behaviour of the P̂_k matrix in algorithm (8) when a random walk model (9) is used to characterize the parameter variations (Young 1979 d).
The simple SA versions of the refined IV algorithms cannot be recommended for general application since their rates of convergence can be intolerably low (see, e.g. Ho and Blaydon 1966). But it is possible to consider somewhat more complicated algorithms which represent a compromise between the simplicity of basic SA and the complexity of the fully recursive matrix-gain algorithms. A simple example would be the following modification to the refined IV algorithm given by eqn. (4) (i) in Part I of the paper (Young and Jakeman 1979 a, p. 4)

â_k = â_{k−1} − γ_k D̂_k ẑ_k* {z_k*^T â_{k−1} − y_k*}    (36)

Here D̂_k is a (2n+1) × (2n+1) diagonal matrix with elements (x̂*_{k−i})^{−2}, i = 1, 2, …, n, and (u*_{k−i})^{−2}, i = 0, 1, …, n; while γ_k is a SA sequence, say γ/k. In other words, the gain matrix P̂_k is replaced by a purely diagonal approximation γ_k D̂_k, so that the computational burden is proportional to n rather than the n² of the full refined IV algorithm.
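The flavour of such a diagonal-gain compromise can be seen in the following sketch (our own Python illustration on a plain regression problem, not the refined IV recursion itself; the diagonal normalization by running mean-square regressor values stands in for the (·)^{−2} elements of D̂_k, and is floored to avoid division by near-zero values).

```python
import numpy as np

rng = np.random.default_rng(3)

N = 20000
theta_true = np.array([2.0, -1.0])
Z = rng.standard_normal((N, 2))
y = Z @ theta_true + 0.1*rng.standard_normal(N)

theta_sa = np.zeros(2)
d2 = np.ones(2)                        # running mean-square of each regressor
for k in range(N):
    z = Z[k]
    d2 = np.maximum(d2 + (z**2 - d2)/(k + 1), 0.1)  # diagonal of D-hat, floored
    gamma = 1.0/(k + 1)                # Dvoretzky-type gain sequence, gamma/k
    theta_sa = theta_sa + gamma*(z/d2)*(y[k] - z @ theta_sa)
print(theta_sa)   # close to [2.0, -1.0]
```

Only two scalars per parameter are stored and updated, which is the point: storage and arithmetic grow linearly in the number of parameters rather than quadratically as with a full gain matrix.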
Algorithms such as (36) seem to work reasonably well (see, e.g. Kumar and Moore 1979). As might be expected, their performance seems to fall somewhere between that of the full algorithm and the scalar gain equivalent. In general, however, the simpler algorithms should only be used when this is necessitated by computational limitations, as for example in on-line applications using special low storage capacity microprocessors.
8. Self-adaptive control
Perhaps the prime motivation for the development of recursive estimation algorithms during the early nineteen sixties was their potential use in
self-adaptive control. Of late, this early stimulus has been revived with the development of the self-tuning regulator (STR) based on recursive least squares (RLS) estimation (e.g. Astrom and Wittenmark 1973, Clarke and Gawthrop 1975). In the STR the effect of the noise induced asymptotic bias on the RLS parametric estimates is cleverly neutralized by embedding the algorithm within a closed adaptive loop which automatically adjusts the estimates and the control law to yield either minimum variance regulation or some other objective, such as closed loop pole assignment (Wellstead et al. 1979).
The concept of the STR can be contrasted with the earlier concept of self-adaptive control by identification and synthesis (SAIS), where the objective is to explicitly obtain unbiased parameter estimates and then to separately synthesize the control law using these estimates (e.g. Kalman 1958, Young 1965 b, Young 1979 b).
While the STR seems to possess good practical potential, there are certain situations where the alternative of SAIS will have some advantages. For example, the stability of the adaptive loop in the STR is not easy to ensure a priori because of the close integration between the recursive algorithm and the control law, and the highly non-linear nature of the resulting closed loop system. On the other hand, the separation of estimation and synthesis in the SAIS system means that the question of convergence and stability is largely one of ensuring the identifiability of the system under closed loop control. This will always be possible provided an external command input is present which is both sufficiently exciting and statistically independent of the noise in the closed loop. The requirement for such an input can be problematical, however, in the pure regulatory situation, where the STR clearly comes into its own.
In cases where the SAIS procedure seems advantageous, the refined IV algorithm provides the best currently available recursive estimation strategy: it is robust, can be applied in continuous or discrete time, and its results can be used for either deterministic or stochastic control system design. The efficiency of such an SAIS strategy is demonstrated in the self-adaptive autostabilization system described by Young (1979 b): here the recursive estimation is used to synthesize a deterministically designed control system based on closed loop pole assignment using state variable feedback. This system achieved tight control of the simulated missile over the whole of the mission, which included a difficult boost phase where parameters changed rapidly by factors of up to 30 in seconds.
In the case of adaptive, stochastic control by SAIS, the present paper has shown that the refined IV approach provides an added bonus: the single IVAML algorithm yields not only the estimates of the model parameters but also estimates of the state variables, which can then be used in state variable feedback control. And in the discrete-time, linear case, such an SAIS system could be considered optimally adaptive, since the state variable estimates would then, as we have seen, be optimal in a Kalman sense.
9. Conclusions
This is the third of three papers which have described and comprehensively evaluated the refined IV approach to time-series analysis. In the present paper, we have seen how this approach can be extended in various important
directions and can also provide a conceptual basis for the synthesis of refined IV algorithms for a wide class of stochastic dynamic systems.
This conceptual base, which we have termed generalized equation-error (GEE) minimization, has some similarities with the alternative prediction error (PE) minimization concept but tends to yield algorithms which are more robust, both in a computational sense and to mis-specification of the noise characteristics. We feel that this robustness, which arises primarily because of the errors-in-variables formulation (Johnston 1963) and consequent IV mechanization, is an important feature of the refined IV algorithms, and more detailed discussion on this topic will appear in a forthcoming paper (Young and Jakeman 1979 c).
This paper was completed while the authors were visitors in the Control and Management Systems Division of the Engineering Department, University of Cambridge.
REFERENCES
ASTROM, K. J., 1970, Introduction to Stochastic Control Theory (New York: Academic Press).
ASTROM, K. J., and WITTENMARK, B., 1973, Automatica, 9, 185.
BOX, G. E. P., and JENKINS, G. M., 1970, Time Series Analysis (San Francisco: Holden-Day).
CLARKE, D. W., and GAWTHROP, P. J., 1975, Proc. Instn elect. Engrs, 122, 929.
GELB, A. (ed.), 1974, Applied Optimal Estimation (Boston: MIT Press).
HO, Y. C., and BLAYDON, C., 1966, Proceedings of the N.E.C. Conference, U.S.A.
JAKEMAN, A. J., 1979, Proc. IFAC Symp. on Identification and System Parameter Estimation, Darmstadt, Federal Republic of Germany.
JAKEMAN, A. J., STEELE, L. P., and YOUNG, P. C., 1978, Rep. No. AS/R26, Centre for Resource and Environmental Studies, Australian National University; 1979, Rep. No. AS/R35.
JAKEMAN, A. J., and YOUNG, P. C., 1979 a, Int. J. Control, 29, 621; 1979 b, Rep. No. AS/R36, Centre for Resource and Environmental Studies, Australian National University; 1979 c, Rep. No. AS/R37 (submitted to Electron. Lett.).
JAZWINSKI, A. H., 1970, Stochastic Processes and Filtering Theory (New York: Academic Press).
JOHNSTON, J., 1963, Econometric Methods (New York: McGraw-Hill).
KALDOR, 1978, M.A. Thesis, Centre for Resource and Environmental Studies, Australian National University.
KALMAN, R. E., 1958, Trans. Am. Soc. mech. Engrs, 80-D, 468; 1960, Trans. Am. Soc. mech. Engrs, 82-D, 35.
KALMAN, R. E., and BUCY, R. S., 1961, Trans. Am. Soc. mech. Engrs, 83-D, 95.
KAYA, Y., and YAMAMURA, S., 1962, A.I.E.E. Trans. App. Ind., 80 (11), 378.
KOHR, R. H., and HOBEROCK, L. L., 1966, Proc. J.A.C.C., p. 616.
KOPP, R. E., and ORFORD, R. J., 1963, AIAA J., 1, 2300.
KREISSELMEIER, G., 1977, IEEE Trans. autom. Control, 22, 2.
KUMAR, R., and MOORE, J. B., 1979, Automatica (to appear).
LEE, R. C. K., 1964, Optimal Estimation, Identification and Control (Boston: MIT Press).
LEVADI, V. S., 1964, International Conference on Microwaves, Circuit Theory and Information Theory, Tokyo.
LJUNG, L., 1976, System Identification: Advances and Case Studies, edited by R. K. Mehra and D. G. Lainiotis (New York: Academic Press).
NORTON, J. P., 1975, Proc. Instn elect. Engrs, 122, 663.
PHADKE, M. S., and WU, S. M., 1974, J. Am. statist. Ass., 69, 325.
PHILLIPS, A. W., 1959, Biometrika, 46, 67.
PIERCE, D. A., 1972, Biometrika, 59, 73.
SOLO, V., 1978, Rep. No. AS/R20, Centre for Resource and Environmental Studies, Australian National University.
TSYPKIN, Ya. Z., 1971, Adaption and Learning in Automatic Systems (New York: Academic Press).
WELLSTEAD, P. E., EDMUNDS, J. M., PRAGER, D., and ZANKER, P., 1979, Int. J. Control, 30, 1.
WHITEHEAD, P. G., and YOUNG, P. C., 1975, Computer Simulation of Water Resource Systems, edited by G. C. Vansteenkiste (Amsterdam: North Holland).
WHITEHEAD, P. G., YOUNG, P. C., and MICHELL, 1978, Proc. I.E. Hydrology Symp., Canberra, p. 1.
YOUNG, P. C., 1964, I.E.E.E. Trans. Aerosp., 2, 1106; 1965 a, Rad. and Elect. Eng., J. IERE, 29, 345; 1965 b, Theory of Self Adaptive Control Systems, edited by P. H. Hammond (New York: Plenum Press); 1969, Proceedings of the IFAC World Congress, Warsaw (see also Automatica, 6, 271); 1974, Bull. Inst. Math. Appl., 10, 209; 1975, Jl R. statist. Soc. B, 37, 149; 1976, Optimisation in Action, edited by L. C. W. Dixon (New York: Academic Press), 517; 1978, The Modeling of Environmental Systems, edited by G. C. Vansteenkiste (Amsterdam: North Holland); 1979 a, Proceedings of the IFAC Symp. on Identification and System Parameter Estimation, Darmstadt, Federal Republic of Germany; 1979 b, Ibid.; 1979 c, Electron. Lett., 15, 358; 1979 d, Recursive Estimation (New York: Springer).
YOUNG, P. C., and JAKEMAN, A. J., 1979 a, Int. J. Control, 29, 1; 1979 b, Proceedings of the IFAC Symposium on Computer Aided Design of Control Systems, Zurich; 1979 c, Rep. No. AS/R27, Centre for Resource and Environmental Studies, Australian National University.
YOUNG, P. C., and KALDOR, 1978, Rep. No. AS/R14, Centre for Resource and Environmental Studies, Australian National University.