Finn Haugen
EXERCISES to
Advanced Dynamics
and Control
TechTeach techteach.no
August 2012
100 NOK (techteach.no/shop)
Exercises to Advanced Dynamics and Control
Finn Haugen, TechTeach
August 2010
ISBN 978-82-91748-18-4
Contents
I EXERCISES 1
1 State-space models 3
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 A general state-space model . . . . . . . . . . . . . . . . . . . 3
1.3 Linear state-space models . . . . . . . . . . . . . . . . . . . . 4
1.4 Linearization of non-linear models . . . . . . . . . . . . . . . 5
2 Frequency response 7
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 How to calculate frequency response from sinusoidal input and output . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.3 How to calculate frequency response from transfer functions . 8
2.4 Application of frequency response: Signal filters . . . . . . . . 8
3 Frequency response analysis of feedback control systems 11
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.2 Definition of setpoint tracking and disturbance compensation 11
3.3 Definition of characteristic transfer functions . . . . . . . . . 12
3.4 Frequency response analysis of setpoint tracking and disturbance compensation . . . . . . . . . . . . . . . . . . . . . . . 12
4 Stability analysis of dynamic systems 17
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.2 Stability properties and impulse response . . . . . . . . . . . 17
4.3 Stability properties and poles . . . . . . . . . . . . . . . . . . 18
4.4 Stability properties of state-space models . . . . . . . . . . . 18
5 Stability analysis of feedback systems 19
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
5.2 Pole-based stability analysis of feedback systems . . . . . . . 19
5.3 Nyquist’s stability criterion . . . . . . . . . . . . . . . . . . . 20
5.4 Stability margins . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.5 Stability analysis in a Bode diagram . . . . . . . . . . . . . . 23
5.6 Robustness in terms of stability margins . . . . . . . . . . . . 23
6 Discrete-time signals 27
7 Difference equations 29
7.1 Difference equation models . . . . . . . . . . . . . . . . . . . 29
7.2 Calculating responses from difference equation models . . . . 30
8 Discretizing continuous-time models 33
8.1 Simple discretization methods . . . . . . . . . . . . . . . . . . 33
8.2 Discretizing a simulator of a dynamic system . . . . . . . . . 34
8.3 Discretizing a signal filter . . . . . . . . . . . . . . . . . . . . 35
8.4 Discretizing a PID controller . . . . . . . . . . . . . . . . . . 35
9 Discrete-time state space models 39
9.1 General form of discrete-time state space models . . . . . . . 39
9.2 Linear discrete-time state space models . . . . . . . . . . . . 40
9.3 Discretization of continuous-time state space models . . . . . 40
10 The z-transform 43
10.1 Definition of the z-transform . . . . . . . . . . . . . . . . . . 43
10.2 Properties of the z-transform . . . . . . . . . . . . . . . . . . 43
10.3 z-transform pairs . . . . . . . . . . . . . . . . . . . . . . . . . 44
10.4 Inverse z-transform . . . . . . . . . . . . . . . . . . . . . . . . 44
11 Discrete-time (or z-) transfer functions 45
11.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
11.2 From difference equation to transfer function . . . . . . . . . 45
11.3 From transfer function to difference equation . . . . . . . . . 45
11.4 Calculating time responses for discrete-time transfer functions 46
11.5 Static transfer function and static response . . . . . . . . . . 46
11.6 Poles and zeros . . . . . . . . . . . . . . . . . . . . . . . . . . 46
11.7 From s-transfer functions to z-transfer functions . . . . . . . 47
12 Frequency response of discrete-time systems 49
13 Stability analysis of discrete-time dynamic systems 51
13.1 Definition of stability properties . . . . . . . . . . . . . . . . . 51
13.2 Stability analysis of transfer function models . . . . . . . . . 51
13.3 Stability analysis of state space models . . . . . . . . . . . . . 52
14 Analysis of discrete-time feedback systems 53
15 Stochastic signals 55
15.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
15.2 How to characterize stochastic signals . . . . . . . . . . . . . 55
15.3 White and coloured noise . . . . . . . . . . . . . . . . . . . . 56
15.4 Propagation of mean value and co-variance through static systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
16 Estimation of model parameters 59
16.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
16.2 Parameter estimation of static models with the Least squares (LS) method . . . . . . . . . . . . . . . . . . . . . . . . . . 59
16.3 Parameter estimation of dynamic models . . . . . . . . . . . . 62
17 State estimation with observers 65
18 State estimation with Kalman Filter 69
19 Testing robustness of model-based control systems with simulators 71
20 Feedback linearization 75
21 LQ (Linear Quadratic) optimal control 81
22 Model-based predictive control (MPC) 85
23 Dead-time compensator (Smith predictor) 87
II SOLUTIONS 89
A Models with parameter values 155
A.1 Electric motor . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
A.2 Ship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Preface
This book contains exercises with solutions to Advanced Dynamics and Control, TechTeach, August 2010. The exercises can all be solved with just manual calculations (using paper and pencil). So, computer-based exercises are not covered.¹

The following freely available material may also be useful for the exercises:

• SimView, which is a collection of ready-to-run simulators.

• TechVids, which is a collection of instructional streaming videos. In the videos, theory is explained, and simulators are run and explained. You can download these simulators and run them while you are playing the videos.

This book is organized in chapters and sections which correspond to the text-book.

Appendix A describes mathematical models of some physical systems. Several exercises in this book are based on these systems. Their models can be used in computer-based exercises with e.g. Matlab/Simulink or LabVIEW.
Finn Haugen, MSc
TechTeach
Skien, Norway, August 2010
¹The formulation of computer-based exercises depends largely on the tool being used, e.g. MATLAB/SIMULINK, LabVIEW, Scilab/Scicos, or Octave, and the tool being used may vary from one school/university to another. Therefore, this book does not contain such exercises.
Part I
EXERCISES
Chapter 1
State-space models
1.1 Introduction
No exercises here.
1.2 A general state-space model
Exercise 1.1
Figure 1.1 shows two coupled liquid tanks. u1 and u2 are control signals. Mass balance of tank 1 is

\rho A_1 \dot{h}_1 = \underbrace{\rho K_p u_1}_{q_1} - \underbrace{\rho K_{v1} \sqrt{\rho g h_1 / G}}_{q_2}    (1.1)

Mass balance of tank 2 is

\rho A_2 \dot{h}_2 = \underbrace{\rho K_{v1} \sqrt{\rho g h_1 / G}}_{q_2} - \underbrace{\rho K_{v2} u_2 \sqrt{\rho g h_2 / G}}_{q_3}    (1.2)

Valve 1 has a fixed opening. Valve 2 is a control valve with control signal u2 between 0 and 1. The square root functions stem from the common valve characteristic, which expresses that the flow is proportional to the square root of the pressure drop across the valve. Here, the pressure drops are assumed to be equal to the hydrostatic pressures at the bottom of the tanks.

Figure 1.1: Two coupled liquid tanks. Pump 1 delivers the flow q1 [m³/s] into tank 1 (level h1 [m], cross-sectional area A1 [m²]); valve 1 passes the flow q2 [m³/s] into tank 2 (level h2 [m], area A2 [m²]); valve 2 passes the outflow q3 [m³/s]. The liquid density is ρ [kg/m³].

For example, for tank 1 the hydrostatic pressure is ρgh1. The parameter G is the relative density of the liquid.¹
Assume that the input variables are u1 and u2, and that the outputvariables are y1 = h1 and y2 = h2. Write the model (1.1) — (1.2) as astate-space model. Is the state-space model linear or nonlinear?
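The state derivative function of this model is also a natural starting point for later computer-based work. Below is a minimal sketch in Python; all parameter values are made-up placeholders, not values from Appendix A:

```python
import math

# Hypothetical parameter values, for illustration only
rho, g, G = 1000.0, 9.81, 1.0
A1, A2 = 0.5, 0.5              # tank cross-sectional areas [m^2]
Kp, Kv1, Kv2 = 0.001, 0.0001, 0.0001

def f(h, u):
    """State derivatives [dh1/dt, dh2/dt] of the two-tank model
    (1.1)-(1.2), divided through by rho*A."""
    h1, h2 = h
    u1, u2 = u
    q1 = Kp * u1                                   # pump flow
    q2 = Kv1 * math.sqrt(rho * g * h1 / G)         # valve 1 flow
    q3 = Kv2 * u2 * math.sqrt(rho * g * h2 / G)    # valve 2 flow
    return [(q1 - q2) / A1, (q2 - q3) / A2]
```

The square roots of the states h1 and h2 already hint at the answer to the linearity question.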
1.3 Linear state-space models
Exercise 1.2
Write the following model as a state-space model on matrix-vector form:
\dot{x}_1 = x_2

\dot{x}_2 = 8 u_2 - 6 x_2 - 2 x_1 + 4 u_1    (1.3)

y = 5 x_1 + 7 u_1 + 6 x_2

¹G = ρ/ρwater.
1.4 Linearization of non-linear models
Exercise 1.3
Given the state-space model (23.2) — (23.4). Linearize the model.
Chapter 2
Frequency response
2.1 Introduction
No exercises here.
2.2 How to calculate frequency response from sinusoidal input and output
Exercise 2.1
Figure 2.1 shows the input signal and the corresponding output signal of a system.
1. What is the frequency of the signal in Hz and in rad/s?
2. Calculate the amplitude gain and the phase lag at the frequencyfound in Problem 1 above. What is the amplitude gain in dB?
Exercise 2.2
Figure 2.2 shows a Bode diagram of a system. Assume that the input signal u is a sinusoid of amplitude U = 0.8 and frequency ω = 1.0 rad/s. Write the corresponding steady-state output response ys(t).

Figure 2.1: Input signal and corresponding output signal of the system in Exercise 2.1.

2.3 How to calculate frequency response from transfer functions
Exercise 2.3
Calculate the frequency response functions A(ω) and φ(ω) of the transfer function

H(s) = \frac{K}{(1 + T_1 s)(1 + T_2 s)} e^{-\tau s}    (2.1)
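The exercise asks for closed-form expressions, but a numerical evaluation of H(jω) is a useful cross-check of them. A sketch in Python:

```python
import cmath

def freq_resp(K, T1, T2, tau, w):
    """Evaluate H(jw) = K e^(-tau*s) / ((1 + T1*s)(1 + T2*s)) at s = jw.
    Returns the amplitude gain A(w) = |H(jw)| and the phase
    phi(w) = arg H(jw) in radians."""
    s = 1j * w
    H = K * cmath.exp(-tau * s) / ((1 + T1 * s) * (1 + T2 * s))
    return abs(H), cmath.phase(H)
```

The returned values should agree with the closed-form results A(ω) = K / (√(1 + (ωT1)²) · √(1 + (ωT2)²)) and φ(ω) = −arctan(ωT1) − arctan(ωT2) − τω.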
2.4 Application of frequency response: Signal filters
Exercise 2.4
Figure 2.2:
Assume that it is specified that a given RC filter shall have bandwidth 100 Hz. Find proper values of the resistance R and the capacitance C. (Tip: C can be selected between 10⁻⁴ and 10⁻⁶ F because this gives a practical size of the capacitor. Unless you insist on some other value, you can use 10⁻⁵ F.)
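For a standard RC lowpass filter the −3 dB bandwidth is f_b = 1/(2πRC), so R follows directly once C has been chosen. A quick check in Python:

```python
import math

f_b = 100.0   # specified bandwidth [Hz]
C = 1e-5      # chosen capacitance [F], as suggested in the tip

# RC lowpass filter: f_b = 1/(2*pi*R*C)  =>  R = 1/(2*pi*f_b*C)
R = 1.0 / (2 * math.pi * f_b * C)
print(round(R, 1))  # -> 159.2 (ohm)
```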
Chapter 3
Frequency response analysis of feedback control systems
3.1 Introduction
No exercises here.
3.2 Definition of setpoint tracking and disturbance compensation
Exercise 3.1
Figure 3.1 shows a principal block diagram of a control system. The setpoint ySP and the disturbance v are two input signals to the control system. There is actually a third input that is present — more or less — in practical control systems, namely measurement noise n, which is typically a random signal. Assume that n is added to the measurement signal that is the output of the sensor. Most control systems contain a lowpass filter which attenuates the measurement noise, so that the resulting measurement signal entering the controller becomes smoother.

Include n and the filter in the block diagram shown in Figure 3.1 (make a new drawing).

Figure 3.1: Block diagram of the control system, with controller, process, and sensor; setpoint ySP, control error e, control signal u, disturbance v, and output y.
3.3 Definition of characteristic transfer functions
Exercise 3.2
Assume given a control system as shown in Figure 3.2. The transfer functions are as follows:

H_u(s) = \frac{K_u}{T_u s + 1} e^{-\tau s}    (3.1)

H_v(s) = \frac{K_v}{T_v s + 1} e^{-\tau s}    (3.2)

H_m(s) = K_m    (3.3)

H_c(s) = K_p \frac{T_i s + 1}{T_i s}  (PI controller)    (3.4)
Find the loop transfer function L(s), the sensitivity function S(s), and thetracking function T (s).
3.4 Frequency response analysis of setpointtracking and disturbance compensation
Exercise 3.3
Figure 3.2: Detailed block diagram of the control system, with controller Hc(s), combined control variable transfer function Hu(s), disturbance transfer function Hv(s) with disturbance v(s), sensor Hsens(s), and scaling blocks Hm(s), Hs(s), Hsu(s), and 1/Hm(s). The control error is e(s) = ySP(s) − y(s).
Figure 3.3 shows the amplitude gain curves of the loop transfer function L,the tracking function T and the sensitivity function S of a feedback controlsystem.
1. Read off from the frequency response curves the following threealternative bandwidths:
• The crossover frequency ωc.
• The −3 dB frequency ωt of the tracking function.
• The −3 dB frequency ωs of the sensitivity function.
2. Assume that the setpoint is a sinusoid of amplitude ASP = 4 and frequency 1 rad/s. What is the amplitude, Ay, of the steady-state sinusoidal process output variable? What is the amplitude, Ae, of the steady-state sinusoidal control error?

3. Assume that the process disturbance is a sinusoid of frequency 1 rad/s. Assume that this disturbance creates a steady-state sinusoidal response in the process output variable of amplitude AyOL = 0.5 when the process is controlled with a constant control signal, i.e. in open loop control. What is the amplitude, AyCL, of the process output variable using feedback control, i.e. in closed loop control?

Figure 3.3: Amplitude gain curves of the loop transfer function L, the tracking function T, and the sensitivity function S.

4. Estimate the response time, Tr, of the response in the process output variable due to a step change of the setpoint.
Exercise 3.4
1. The diagram to the left of Figure 3.4 shows a non-controlled thermal process, which is a liquid tank with throughput and heating. Assume that the amplitude gain of the frequency response of the transfer function from inlet temperature Tin to outlet temperature T is as shown in the Bode diagram in Figure 3.5.

Assume that Tin contains a frequency component of amplitude ATin of frequency 0.1 rad/s. Calculate the amplitude AT of the corresponding steady-state response in T.

2. The diagram to the right of Figure 3.4 shows a temperature control system of the process. Assume that the amplitude gain of the frequency response of the transfer function from inlet temperature Tin to outlet temperature T of the control system is as shown in the Bode diagram in Figure 3.6.

Assume that Tin contains a frequency component of amplitude ATin of frequency 0.1 rad/s. What is the amplitude AT of the corresponding steady-state response in T? Compare the answer with problem 1 above. Is there any improvement by using control?

Figure 3.4: Left: the non-controlled tank, with inlet temperature Tin [K], outlet temperature T [K], mass flow F [kg/s], volume V [m³], specific heat capacity c [J/(kg K)], supplied power P [J/s], environmental temperature Te [K], heat transfer coefficient U [(J/s)/K], and a stirring motor. Right: the controlled tank, with a temperature control loop (temperature transmitter TT, temperature controller TC) and temperature setpoint TSP.
Figure 3.5:
Figure 3.6:
Chapter 4
Stability analysis of dynamic systems
4.1 Introduction
No exercises here.
4.2 Stability properties and impulse response
Exercise 4.1
Determine the stability property of the following transfer function by calculating its impulse response, h(t):

H(s) = \frac{y(s)}{u(s)} = \frac{1}{s + 1}    (4.1)

Also, make a rough sketch of h(t).

To calculate h(t), you can use the following Laplace transform pair:

\frac{k}{T s + 1} \Longleftrightarrow \frac{k}{T} e^{-t/T}    (4.2)
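With k = T = 1 in the Laplace transform pair (4.2), the impulse response of (4.1) is h(t) = e^(−t). A small numerical sketch:

```python
import math

def h(t):
    """Impulse response of H(s) = 1/(s+1), from the pair (4.2)
    with k = T = 1: h(t) = e^(-t)."""
    return math.exp(-t)

# h(t) starts at 1 and decays toward zero, which indicates
# asymptotic stability.
print(h(0.0), round(h(5.0), 4))
```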
4.3 Stability properties and poles
Exercise 4.2
Determine the stability property of the following transfer functions:

H_1(s) = \frac{1}{s + 1}    (4.3)

H_2(s) = \frac{1 - s}{1 + s}    (4.4)

H_3(s) = \frac{1}{1 - s}    (4.5)

H_4(s) = \frac{1}{(s + 1)(s - 1)}    (4.6)

H_5(s) = \frac{1}{s}    (4.7)

H_6(s) = \frac{1}{s^3}    (4.8)

H_7(s) = \frac{e^{-s}}{s + 1}    (4.9)

H_8(s) = -\frac{1}{s + 1}    (4.10)

H_9(s) = \frac{1}{s^2 + s + 1}    (4.11)

H_{10}(s) = \frac{1}{s^2 + 1}    (4.12)

H_{11}(s) = \frac{1}{(s + 1) s}    (4.13)
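The stability conclusions can be cross-checked numerically by computing the poles as roots of each denominator polynomial. A sketch for H4 (the other transfer functions can be checked the same way):

```python
import numpy as np

# H4(s) = 1/((s+1)(s-1)) has denominator s^2 - 1.
poles = np.roots([1.0, 0.0, -1.0])   # coefficients of s^2 + 0*s - 1

# A pole with positive real part means the system is unstable.
print(sorted(p.real for p in poles))  # -> [-1.0, 1.0]
```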
4.4 Stability properties of state-space models
Exercise 4.3
Determine the stability property of the following state-space model:

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & -2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u    (4.14)
Chapter 5
Stability analysis of feedback systems
5.1 Introduction
No exercises here.
5.2 Pole-based stability analysis of feedback systems
Exercise 5.1
Figure 5.1 shows a feedback control system.

Figure 5.1: Block diagram of the feedback control system: a controller with gain Kp in series with the combined process and sensor Hpm(s); setpoint ySP(s) and measurement ym(s).

The transfer function of the process and sensor is

H_{pm}(s) = \frac{1}{s}    (5.1)
1. What is the stability property of Hpm(s), i.e. of the process itself —also called the open-loop system?
2. For which values of the controller gain Kp is the control systemasymptotically stable?
3. Has this exercise demonstrated that it is possible to obtain an asymptotically stable feedback system even though the process itself is not asymptotically stable?
5.3 Nyquist’s stability criterion
Exercise 5.2
Given a control system with loop transfer function

L(s) = \frac{K_p}{(s + 1)^3 s}    (5.2)

Figure 5.2 shows the Nyquist curve of L with Kp = 0.4. (The curve actually encircles the whole right half plane.)

Figure 5.2: Nyquist curve of L(s) with Kp = 0.4.

Use Nyquist's stability criterion to calculate the values of Kp that make the control system become
• Asymptotically stable.
• Marginally stable.
• Unstable. In this case, what is the number of poles in the right half plane?
Exercise 5.3
Given a closed loop system having the following loop transfer function:

L(s) = \frac{K}{s - 1}    (5.3)

1. Show that the corresponding open loop system is unstable by calculating the pole of the system.

2. Figure 5.3 shows the Nyquist curve of L with K = 2.

Figure 5.3: Nyquist curve of L(s) with K = 2.

Using the Nyquist stability criterion, find for which values of K the closed loop system is asymptotically stable. Confirm the answer by calculating the pole of the closed loop system.
Exercise 5.4
Figure 5.4 shows the Nyquist curve of L(jω) of a feedback system which is open-loop stable. The loop gain is then K = 1.

Figure 5.4: Nyquist curve of L(jω) in the complex plane (axes Re(L) and Im(L)).

For which values of K is the feedback system asymptotically stable?
5.4 Stability margins
Exercise 5.5
Figure 5.5 shows the Nyquist curve of the loop transfer function L of an asymptotically stable control system. What is the gain margin GM and the maximum sensitivity gain |S(jω)|max?
Exercise 5.6
Figure 5.6 shows the amplitude gain curves of the loop transfer function L, the tracking function T and the sensitivity function S of a feedback control system. Determine the stability margin in terms of |S(jω)|max. Is the value in the range of reasonable values of this stability margin?

Figure 5.5: Nyquist curve of the loop transfer function L.
5.5 Stability analysis in a Bode diagram
Exercise 5.7
Figure 5.7 shows the Bode diagram of the loop transfer function of a given control system.

1. Read off the stability margins GM and PM, and the crossover frequencies ωc and ω180, in the Bode diagram.

2. How large an increase of the loop gain will bring the system to the stability limit? What is the period Tp of the steady-state oscillations existing in the system when the system is at the stability limit?
5.6 Robustness in terms of stability margins
Exercise 5.8
Figure 5.6: Amplitude gain curves of the loop transfer function L, the tracking function T, and the sensitivity function S.
Given a feedback control system with time delay τ = 4.2 min. The control system has phase margin

PM = 45°    (5.4)

and crossover frequency

\omega_c = 0.2 \text{ rad/min}    (5.5)
Assume that the time delay increases, but the controller parameters arenot changed. With which value of the time delay τ is the control systemmarginally stable?
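A pure time delay adds a phase lag of ωτ without changing the amplitude gain, so the extra delay that consumes the whole phase margin is Δτ = PM/ωc (with PM in radians). A numerical sketch of that calculation:

```python
import math

PM_deg = 45.0   # phase margin [deg]
wc = 0.2        # amplitude crossover frequency [rad/min]
tau0 = 4.2      # nominal time delay [min]

# Extra delay that rotates the Nyquist curve onto the critical point:
dtau = math.radians(PM_deg) / wc   # [min]
tau_marginal = tau0 + dtau
print(round(tau_marginal, 2))  # -> 8.13
```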
Exercise 5.9
Ziegler-Nichols' closed-loop method is based on bringing the closed loop to marginal stability with a P controller with a proper controller gain value. Explain, in terms of frequency response, why the Ziegler-Nichols' closed-loop method can not be used for tuning a PID controller for the following processes. It is assumed that the parameters of the transfer functions have positive values.

H_1(s) = \frac{K}{s}  (integrator)    (5.6)

H_2(s) = \frac{K}{T s + 1}  (1. order system)    (5.7)

H_3(s) = \frac{K \omega_0^2}{s^2 + 2 \zeta \omega_0 s + \omega_0^2}  (2. order system)    (5.8)

Figure 5.7: Bode diagram of the loop transfer function of the control system in Exercise 5.7.
Chapter 6
Discrete-time signals
Exercise 6.1
Given the following continuous-time signal (a ramp):
xc(t) = 2t (6.1)
where t is time in seconds.
1. Assume that the signal is sampled with sampling time (time-step) Ts = 0.5 s. Express xd as a function of the discrete time tk. Write the discrete signal or sequence (time series) xd from time 0 to 2 s. Plot xd with both discrete time tk and time index k along the abscissa.
2. Repeat Problem 1 above, but now with Ts = 0.1 s.
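Sampling x_c(t) = 2t at t_k = kTs gives x_d(k) = 2kTs. A minimal sketch of Problem 1 in Python:

```python
Ts = 0.5  # sampling time [s]

# x_d(k) = x_c(t_k) = 2 * k * Ts, for t_k = 0, 0.5, ..., 2.0 s
xd = [2 * k * Ts for k in range(5)]
print(xd)  # -> [0.0, 1.0, 2.0, 3.0, 4.0]
```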
Chapter 7
Difference equations
7.1 Difference equation models
Exercise 7.1
Given the following difference equation:
y(k + 3) + ay(k + 1) = b1u(k + 2) + b0u(k) (7.1)
Write the corresponding difference equation having only zero or negative time shifts.
Exercise 7.2
Particularly in the area of discrete-time (digital) signal processing, the difference equations constituting the mathematical model of signal filters are represented with mathematical block diagrams, in the same way as differential equations of continuous-time systems are represented with block diagrams.

Figure 7.1 shows the most frequently used blocks — or the elementary blocks — used in block diagrams of difference equation models.

A comment about the time delay block: the output, y(k − 1), equals the input, y(k), delayed one time step:

y(k-1) = z^{-1} y(k)    (7.2)
Figure 7.1: Elementary blocks for drawing block diagrams of difference equation models: gain (y(k) = Ku(k)), sum including subtraction (y(k) = u1(k) + u2(k) − u3(k)), and time delay of one time step (y(k−1) = z⁻¹y(k)).
Or, equivalently:

y(k) = z^{-1} y(k+1)    (7.3)

The operator z⁻¹ is here a time-step delay operator, which is actually the z-transfer function of the time-step delay. (z-transfer functions are described in Chapter 11 of the text-book.)
Draw a block diagram of the following difference equation:
y(k + 1) = ay(k) + bu(k) (7.4)
where a and b are constant parameters. The block diagram shall have u(k)as input and y(k) as output.
7.2 Calculating responses from difference equation models
Exercise 7.3
Given the signal filter
y(k) = \frac{1}{3} [u(k) + u(k-1) + u(k-2)]

which is a moving average lowpass filter.

1. What is the steady-state response in y when the input u is a constant? Does the filter let a constant input pass unchanged, in steady-state?
2. Assume that the filter input u is a ramp:
u(k) = u(0), u(1), u(2), u(3), u(4)    (7.5)
     = 0, 0.5, 1.0, 1.5, 2.0    (7.6)
     = 0.5k    (7.7)

Calculate the response y at k = 0..4. (You can assume that u is zero at negative k.)

Also, calculate the general response y(k). Will there be a constant difference from zero between the output and the input as the time index goes to infinity?
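The ramp response can be checked with a few lines of Python (u is taken as zero for negative k, as the exercise allows):

```python
def ma3(u):
    """Moving-average filter y(k) = (u(k) + u(k-1) + u(k-2))/3,
    with u = 0 for negative time indices."""
    y = []
    for k in range(len(u)):
        u1 = u[k - 1] if k >= 1 else 0.0
        u2 = u[k - 2] if k >= 2 else 0.0
        y.append((u[k] + u1 + u2) / 3)
    return y

u = [0.5 * k for k in range(5)]   # ramp: 0, 0.5, 1.0, 1.5, 2.0
y = ma3(u)
# From k = 2 on, y(k) = 0.5k - 0.5: a constant lag of 0.5 behind u(k)
print(y[2:])  # -> [0.5, 1.0, 1.5]
```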
Chapter 8
Discretizing continuous-timemodels
8.1 Simple discretization methods
Exercise 8.1
In Section 8.1 in the text-book, the Forward discretization method and the Backward discretization method are based on differentiation approximations. However, they can be interpreted as integration approximations, too. Tustin's method of discretization can also (easily) be described as an integration approximation method.
Given the following differential equation:

\dot{x} = f(x, u)    (8.1)

u is the input variable, x is the state variable, and f is some function — linear or nonlinear. The state variable at discrete time t_k is found by integration:

x(t_k) = x(t_{k-1}) + \int_{t_{k-1}}^{t_k} f \, d\tau    (8.2)

The integral in (8.2) can be approximated in several ways:

• Forward approximation:

x(t_k) \approx x(t_{k-1}) + T_s f[x(t_{k-1}), u(t_{k-1})]    (8.3)

• Backward approximation:

x(t_k) \approx x(t_{k-1}) + T_s f[x(t_k), u(t_k)]    (8.4)

• Tustin's approximation:

x(t_k) \approx x(t_{k-1}) + \frac{T_s}{2} \left\{ f[x(t_k), u(t_k)] + f[x(t_{k-1}), u(t_{k-1})] \right\}    (8.5)
Illustrate each of the above approximations in a figure. You can base your drawing on the sketch shown in Figure 8.1.

Figure 8.1: The integrand f between times tk−1 and tk. The values at the interval endpoints are fk−1 and fk, and the interval length is the time-step Ts.
8.2 Discretizing a simulator of a dynamic system
Exercise 8.2
See Exercise 1.1, which shows a mathematical model of two coupled liquid tanks. Develop a simulation algorithm for the levels h1 and h2 based on Forward discretization. Why is it better to apply Forward discretization than Backward discretization in this example?
8.3 Discretizing a signal filter
Exercise 8.3
The transfer function of a first order highpass filter is

H(s) = \frac{y(s)}{u(s)} = \frac{T_f s}{T_f s + 1}    (8.6)

where T_f [s] is the filter constant. Derive a discrete-time filter algorithm in the form of a difference equation relating the output y and the input u, based on Backward differentiation. The time-step is T_s [s].
8.4 Discretizing a PID controller
Exercise 8.4
In the text-book the continuous-time PID control function is discretized by discretizing the time-differentiated control function. An alternative way of obtaining a discrete-time PID control function is to discretize the P term, the I term, and the D term (in a parallel or additive PID controller) individually, and then sum the individual discrete-time terms. This is what this exercise is about.
For simplicity a PI (not a PID) controller will be discretized here.
The starting point is the continuous-time PI control function:

u = u_0 + \underbrace{K_p e}_{u_p} + \underbrace{\frac{K_p}{T_i} \int_0^t e \, d\tau}_{u_i}    (8.7)
1. Discretize (8.7) according to the alternative method described above. The I (integral) term ui can be discretized with the same method as used in the text-book: discretize the time-differentiated version of the I term using the Backward method.

2. Suggest a way to implement integral anti windup in your discrete-time PI controller.
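One possible sketch of a discretized PI controller with a simple back-calculation type of anti windup (the saturation limits and the closure-based structure are illustration choices, not from the text-book):

```python
def make_pi(Kp, Ti, Ts, u0=0.0, u_min=0.0, u_max=100.0):
    """Discrete PI controller u = u0 + Kp*e + ui, with the
    Backward-discretized integral ui(k) = ui(k-1) + Kp*Ts/Ti * e(k).
    Anti windup: the integral state is back-calculated so that the
    total control signal stays inside [u_min, u_max]."""
    ui = 0.0
    def step(e):
        nonlocal ui
        ui += Kp * Ts / Ti * e
        u = u0 + Kp * e + ui
        if u > u_max:          # saturate and back-calculate ui
            ui -= u - u_max
            u = u_max
        elif u < u_min:
            ui -= u - u_min
            u = u_min
        return u
    return step

pi = make_pi(Kp=2.0, Ti=10.0, Ts=0.1)
u = pi(5.0)   # large error: P term 10.0 plus a small I term
```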
Exercise 8.5
As you will see in this exercise, you may get a surprise when a discrete-time P controller operates in steady state!

Assume that the controller is originally a standard discrete-time PID controller as developed in the text-book:

u(t_k) = u(t_{k-1}) + [u_0(t_k) - u_0(t_{k-1})]    (8.8)
\quad + K_p [e(t_k) - e(t_{k-1})]    (8.9)
\quad + \frac{K_p T_s}{T_i} e(t_k)    (8.10)
\quad + \frac{K_p T_d}{T_s} [e(t_k) - 2 e(t_{k-1}) + e(t_{k-2})]    (8.11)
1. What is the corresponding discrete-time P controller (including the manual control signal)? (Just remove the integral term and the derivative term by setting Ti = ∞ and Td = 0, respectively.)

2. Assume that the control system is in steady state (all signals being constant), and that the (steady state) control error for some reason is different from zero, say es. Show that the control error is not reduced even if the controller gain is increased (you would probably expect the error to be reduced if the controller gain is increased).

3. Now, turn the controller into an I controller, with constant manual control signal. (The points of this task appear clearer with an I controller than with a PI or a PID controller.) Show that the control signal will change only as long as the control error is different from zero.
Exercise 8.6
Assume a process which is to be controlled with a discrete-time PID controller. The process has a response time of 50 sec. What is the maximum time-step or sampling time that the controller should use? A typical sampling time in industrial controllers is 0.1 s. Is this ok for the given process?
Exercise 8.7
Figure 8.2 shows an air heater. A fan with fixed speed blows air through the pipe. The air is heated by an electric heater.

Figure 8.2: Air heater: a tube with a fan, an electric heater, and temperature sensors. The control signal (from the external controller) and the measurement signal (to the external controller) are voltage signals [V], and a potentiometer measures the fan opening [V].

The control signal u is the voltage signal which controls (adjusts) the power supplied to the heater. The temperature is controlled with a discrete-time PI controller implemented in a PC.
Figure 8.3 shows the temperature reference and measurement and thecontrol signal with different time-steps Ts.
1. What is the time-step Ts in case B and C in Figure 8.3?
2. Why does the control system get reduced stability as the time-step isincreased?
Figure 8.3: Temperature reference and measurement, and control signal, for increasing time-step Ts. In the first case, Ts = 0.055 s. The time axis is in seconds.
Chapter 9
Discrete-time state space models
9.1 General form of discrete-time state space models
Exercise 9.1
In Exercise 1.1 a mathematical model of two coupled liquid tanks is presented. The model written as a continuous-time state space model is

\dot{h}_1 = \frac{1}{A_1} \left( K_p u_1 - K_{v1} \sqrt{\rho g h_1 / G} \right)    (9.1)

\dot{h}_2 = \frac{1}{A_2} \left( K_{v1} \sqrt{\rho g h_1 / G} - K_{v2} u_2 \sqrt{\rho g h_2 / G} \right)    (9.2)

By applying the Forward differentiation approximation to the time-derivatives, we get the following discrete-time model:

\dot{h}_1(t_k) \approx \frac{h_1(t_{k+1}) - h_1(t_k)}{T_s} = \frac{1}{A_1} \left( K_p u_1(t_k) - K_{v1} \sqrt{\rho g h_1(t_k) / G} \right)    (9.3)

\dot{h}_2(t_k) \approx \frac{h_2(t_{k+1}) - h_2(t_k)}{T_s} = \frac{1}{A_2} \left( K_{v1} \sqrt{\rho g h_1(t_k) / G} - K_{v2} u_2(t_k) \sqrt{\rho g h_2(t_k) / G} \right)    (9.4)
Assume that both levels are output variables. Write this model as a discrete-time state space model on the standard form

h_1(t_{k+1}) = f_1[h_1(t_k), h_2(t_k), \cdots]    (9.5)

h_2(t_{k+1}) = f_2[h_1(t_k), h_2(t_k), \cdots]    (9.6)

(which corresponds to the compact standard form x(t_{k+1}) = f[x(t_k), \cdots]). Is the state space model linear or nonlinear?
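The discrete-time model (9.3)–(9.4), solved for h1(tk+1) and h2(tk+1), is directly a simulation algorithm. A Python sketch, with made-up placeholder parameter values (not values from Appendix A):

```python
import math

# Hypothetical parameter values, for illustration only
rho, g, G = 1000.0, 9.81, 1.0
A1, A2 = 0.5, 0.5
Kp, Kv1, Kv2 = 0.002, 0.0002, 0.0002
Ts = 1.0  # time-step [s]

def step(h1, h2, u1, u2):
    """One Forward (Euler) step: h(t_{k+1}) = h(t_k) + Ts * f(h(t_k), u(t_k))."""
    q1 = Kp * u1
    q2 = Kv1 * math.sqrt(rho * g * h1 / G)
    q3 = Kv2 * u2 * math.sqrt(rho * g * h2 / G)
    return h1 + Ts * (q1 - q2) / A1, h2 + Ts * (q2 - q3) / A2

h1, h2 = 1.0, 1.0
for _ in range(100):
    h1, h2 = step(h1, h2, u1=0.5, u2=0.5)
```

The square roots of the states carry over from the continuous-time model, so the right-hand sides f1 and f2 remain nonlinear.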
9.2 Linear discrete-time state space models
Exercise 9.2
Write the following difference equation model as a state space model on matrix-vector form.
x1(k + 1) = −0.5x1(k) (9.7)
x2(k + 1) = 2u(k)− x2(k)− 3x1(k) (9.8)
y(k) = x2(k) + 4u(k) (9.9)
9.3 Discretization of continuous-time state space models
Exercise 9.3
Consider the nonlinear continuous-time state space model (9.1) — (9.2).According to Exercise 9.1: If you discretize this model using Forwarddifferentiation approximation, the resulting discrete-time model can bewritten on the following standard state-space model form:
h1 (tk+1) = f1 [h1(tk), h2(tk), · · · ] (9.10)
h2 (tk+1) = f2 [h1(tk), h2(tk), · · · ] (9.11)
What if you instead use the Backward differentiation approximation?
Exercise 9.4
1. Can you discretize the following (first order) state-space model using the ZOH (Zero Order Hold) method? (Yes or no.)

\dot{x} = -K_1 |x| x + K_2 u    (9.12)

(K1 and K2 are constants.)

2. It can be shown that linearizing (9.12) gives the following state-space model:

\Delta \dot{x} = -2 K_1 |x| \Delta x + K_2 \Delta u    (9.13)

Can this model be discretized using the ZOH method? (Yes or no.)
Chapter 10
The z-transform
10.1 Definition of the z-transform
Exercise 10.1
Calculate the Z-transform of δ(k), which is a unit impulse. This is a signal having amplitude 1 at discrete time k = 0 and zero at other points of time.
10.2 Properties of the z-transform
Exercise 10.2
The Linearity property of the Z-transform can be expressed as

k_1 y_1(z) + k_2 y_2(z) \Longleftrightarrow k_1 y_1(k) + k_2 y_2(k)    (10.1)

Show that the Linearity property holds for this example of a signal:

y(k) = A + B = A S(k) + B S(k)    (10.2)

where A and B are constants, and S(k) is the unit step function, i.e. a step occurring at time zero. (Hint: show that the Z-transform of the right side of (10.1) is equal to the left side of (10.1) for the given signal.)
10.3 z-transform pairs
Exercise 10.3
Calculate the Z-transform of a step signal of amplitude 4 which occurs at time-index 2. You can use S(k) to represent a unit step function, i.e. a step occurring at time zero.
10.4 Inverse z-transform
Exercise 10.4
Calculate the inverse Z-transform of

y(z) = \frac{z}{z - 0.5}    (10.3)
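The inverse transform can be cross-checked by noting that y(z) = z/(z − 0.5) = 1/(1 − 0.5z⁻¹) corresponds to the recursion y(k) = 0.5y(k−1) + δ(k). A sketch:

```python
y = []
y_prev = 0.0
for k in range(5):
    delta = 1.0 if k == 0 else 0.0   # unit impulse
    y_k = 0.5 * y_prev + delta       # y(k) = 0.5*y(k-1) + delta(k)
    y.append(y_k)
    y_prev = y_k
print(y)  # -> [1.0, 0.5, 0.25, 0.125, 0.0625]
```

which matches y(k) = 0.5^k.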
Chapter 11
Discrete-time (or z-) transferfunctions
11.1 Introduction
No exercises here.
11.2 From difference equation to transferfunction
Exercise 11.1
Given the signal filter

y(k) = \frac{1}{3} [u(k) + u(k-1) + u(k-2)]

which is a moving average lowpass filter. Find the z-transfer function from u to y.
11.3 From transfer function to differenceequation
Exercise 11.2
Assume that the following transfer function from input signal u to output signal y of a physical system is found by some system identification method:

H(z) = \frac{y(z)}{u(z)} = \frac{a}{b z^2 + c z + d}
What is the corresponding difference equation relating u and y?
11.4 Calculating time responses for discrete-time transfer functions
Exercise 11.3
Assume that the input signal u to the following transfer function is an impulse of amplitude A:

H(z) = \frac{y(z)}{u(z)} = \frac{z}{z - 1}    (11.1)
Calculate the output response y(k).
11.5 Static transfer function and static response
Exercise 11.4
In Exercise 11.1 the following transfer function of a moving average lowpass filter was found:

H(z) = \frac{y(z)}{u(z)} = \frac{1}{3} \left[ 1 + z^{-1} + z^{-2} \right]    (11.2)

1. Calculate the corresponding static transfer function Hs.

2. Assume that the filter input is a constant, u(k) = U. Calculate the corresponding steady-state filter output ys from Hs.
11.6 Poles and zeros
Exercise 11.5
Given the following transfer function:

H(z) = \frac{b z^{-2} + z^{-1}}{1 - a z^{-1}}    (11.3)
Calculate the poles and the zeros of the transfer function.
11.7 From s-transfer functions to z-transfer functions
Exercise 11.6
The s-transfer function of a (continuous-time) integrator is

H_{cont}(s) = \frac{y(s)}{u(s)} = \frac{1}{s}    (11.4)

Derive a corresponding z-transfer function Hdisc(z) assuming Backward discretization. The time-step is Ts.
Chapter 12
Frequency response ofdiscrete-time systems
Exercise 12.1
Given the following transfer function:
H(z) = \frac{1}{z}    (12.1)
What is the amplitude gain function and the phase lag function?
Exercise 12.2
Given a continuous-time filter with transfer function

H_{cont}(s) = \frac{1}{T_f s + 1}    (12.2)

with

T_f = 1 \text{ s}    (12.3)

Discretization of Hcont(s) using the Backward method of discretization with time-step Ts gives the following discrete-time filter:

H_{disc}(z) = \frac{a z}{z - (1 - a)}    (12.4)

where

a = \frac{T_s}{T_f + T_s}    (12.5)
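The frequency response of a discrete-time transfer function is obtained by evaluating it at z = e^(jωTs). A sketch comparing Hdisc with Hcont at one frequency:

```python
import cmath

Tf, Ts = 1.0, 0.2
a = Ts / (Tf + Ts)

def H_disc(w):
    """Evaluate H_disc(z) = a*z / (z - (1 - a)) at z = exp(j*w*Ts)."""
    z = cmath.exp(1j * w * Ts)
    return a * z / (z - (1 - a))

def H_cont(w):
    """Evaluate H_cont(s) = 1/(Tf*s + 1) at s = j*w."""
    return 1.0 / (Tf * 1j * w + 1)

# At low frequency the two amplitude gains are close; the gap grows
# toward the Nyquist frequency pi/Ts.
w = 0.1  # [rad/s]
print(abs(abs(H_disc(w)) - abs(H_cont(w))) < 0.01)  # -> True
```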
Figure 12.1 shows the frequency responses of Hcont(s) and Hdisc(z) with time-step

T_s = 0.2 \text{ s}    (12.6)
Figure 12.1:
1. Why is there a difference between the frequency responses of Hcont(s) and Hdisc(z), and why is the difference more apparent at higher frequencies than at lower frequencies? (Qualitative answers are ok.)
2. The frequency response curves of Hdisc(z) are unique up to a certain frequency — which frequency? Express this frequency in Hz and in rad/s. Is that frequency indicated in Figure 12.1?
Chapter 13
Stability analysis of discrete-time dynamic systems
13.1 Definition of stability properties
Exercise 13.1
Given the following transfer function:

H(z) = y(z)/u(z) = 1/(z − 0.5)    (13.1)

Calculate the first four values of the impulse response. Determine the stability property from the impulse response.
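A self-check sketch in Python (the exercise asks for a hand calculation first): H(z) = 1/(z − 0.5) corresponds to the difference equation y(k+1) = 0.5·y(k) + u(k), so the impulse response can be generated recursively and compared with the hand result.

```python
def impulse_response(n):
    """First n values of the impulse response of y(k+1) = 0.5*y(k) + u(k)."""
    y = 0.0
    u = 1.0                      # unit impulse at k = 0
    h = []
    for _ in range(n):
        h.append(y)
        y = 0.5 * y + u
        u = 0.0                  # the impulse is zero after k = 0
    return h

print(impulse_response(4))       # the values decay toward zero
```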
13.2 Stability analysis of transfer function models
Exercise 13.2
Determine the stability property of the following transfer functions:

H1(z) = 1/(z − 0.5)    (13.2)

H2(z) = 1/(z + 0.5)    (13.3)

H3(z) = (z − 2)/(z − 0.5)    (13.4)

H4(z) = 1/(z − 1)    (13.5)

H5(z) = 1/(z − 2)    (13.6)

H6(z) = 1/(z − 1)²    (13.7)

H7(z) = 1/z    (13.8)

H8(z) = 1/(z² − 2.5z + 1)    (13.9)
Exercise 13.3
Discretizing the continuous-time transfer function

Hcon(s) = y(s)/u(s) = K/(T·s + 1)    (13.10)

using the Forward discretization method with time-step Ts yields the following discrete-time transfer function:

Hdis(z) = y(z)/u(z) = (K·Ts/T) / [z − (1 − Ts/T)]    (13.11)

For which (positive) values of Ts is the discrete-time system asymptotically stable? (You can assume that T is positive.)
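A simulation sketch in Python (with the illustrative values K = T = 1, which are assumptions; the exercise asks for a general condition on Ts): simulating the step response of (13.11) for two different time-steps hints at where the stability boundary lies.

```python
def final_step_value(Ts, T=1.0, K=1.0, n=50):
    """Step response of Hdis(z) after n samples:
    y(k+1) = (1 - Ts/T)*y(k) + (K*Ts/T)*u(k), u(k) = 1."""
    y = 0.0
    for _ in range(n):
        y = (1.0 - Ts / T) * y + K * Ts / T
    return y

print(final_step_value(0.5))   # settles near the static gain K = 1
print(final_step_value(2.5))   # magnitude grows: unstable for this Ts
```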
13.3 Stability analysis of state space models
Exercise 13.4
Determine the stability property of the following state space model:

x(k + 1) = [ 1   0.5 ]        [ 0 ]
           [ 0   0.9 ] x(k) + [ 1 ] u(k)    (13.12)
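For state-space models the stability check is on the eigenvalues of the transition matrix; a numpy sketch that can be used to verify a hand calculation for the matrix in (13.12):

```python
import numpy as np

# Eigenvalues of the transition matrix in (13.12); their magnitudes
# relative to 1 determine the stability property of x(k+1) = A x(k) + B u(k).
A = np.array([[1.0, 0.5],
              [0.0, 0.9]])
eig = np.linalg.eigvals(A)
print(np.sort(np.abs(eig)))   # for this triangular matrix: the diagonal entries
```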
Chapter 14
Analysis of discrete-time feedback systems
Exercise 14.1
Given a feedback control system where the controller (a P controller)

Hc(z) = Kp    (14.1)

controls a process. The transfer function of the combined process and sensor (these systems are in series) is

Hpm(z) = z^(-1) / [1 − z^(-1)]

Calculate for which values of the controller gain Kp the control system is asymptotically stable.
Exercise 14.2
Assume that Figure 14.1 shows the Bode diagram of the loop transferfunction of some discrete-time control system.
Read off the stability margins GM and PM , and the crossover frequenciesωc and ω180 in the Bode diagram.
Figure 14.1:
Chapter 15
Stochastic signals
15.1 Introduction
No exercises here.
15.2 How to characterize stochastic signals
Exercise 15.1
Given the following sequence of measurement values:

x(k) = x(0), x(1), x(2) = 0.73, 1.23, 0.89    (15.1)

1. Calculate the mean value, mx.

2. Calculate the variance, σx².

3. Calculate the standard deviation, σx.

4. Calculate the auto-covariance, Rx(L), with L = 0 and L = 1 for the following options (the same options that are defined for cross-covariance in the text-book):

(a) Raw estimate

(b) Normalized estimate

(c) Unbiased estimate

(d) Biased estimate
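Tasks 1 to 3 can be checked with numpy (a sketch; note that np.var uses the biased 1/N estimate by default, which is relevant to the options in task 4):

```python
import numpy as np

# Sample statistics of the sequence in (15.1).
x = np.array([0.73, 1.23, 0.89])
mx = np.mean(x)                    # mean value
var_biased = np.var(x)             # 1/N (biased) variance estimate
var_unbiased = np.var(x, ddof=1)   # 1/(N-1) (unbiased) variance estimate
sx = np.std(x)                     # standard deviation (biased version)
print(mx, var_biased, var_unbiased, sx)
```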
Exercise 15.2
Given a random sequence, x(k), uniformly distributed between −A and +A.

1. Calculate the mean value or expectation value mx using this formula:

mx = ∫ from −∞ to ∞ of x·P(x) dx = ∫ from −A to A of x·P(x) dx    (15.2)

where P(x) is the probability density. For a uniformly distributed signal,

P(x) = 1/(2A)    (15.3)

since the area under the P(x)-curve must be 1 (the width of the curve is 2A, so the height is 1/(2A) for the area to be 1).

2. Show that the variance of the sequence is

σx² = A²/3    (15.4)
15.3 White and coloured noise
Exercise 15.3
Draw by hand the principal auto-covariance of white noise with variance 4. What is the standard deviation σx of this signal?
Exercise 15.4
The following discrete-time first order lowpass filter is one example of a shaping filter:

x(k) = a·x(k − 1) + (1 − a)·v(k)    (15.5)

Here x is the filter output, and v(k) is the filter input, which is assumed to be white noise.
1. Explain (no calculations are needed) why a = 0 makes the output become white, and hence, there is no "shaping" through the filter.

2. Explain (no calculations are needed) why a between 0 and 1 makes the output become "coloured".
15.4 Propagation of mean value and covariance through static systems
Exercise 15.5
Assume that you are to generate, in a computer tool (e.g. LabVIEW or Matlab), a random signal y having mean value 3 and variance 4, and that the tool has a function that generates a random signal u of mean value 0 and variance 1. How can you obtain y from u? (Express y as a mathematical function of u.)
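The property this exercise relies on can be verified numerically: if u has mean 0 and variance 1, then an affine transformation y = a·u + b has mean b and variance a². A Python sketch with the illustrative values a = 5, b = −1 (finding the a and b matching the exercise's targets is left to the reader):

```python
import numpy as np

# Affine transformation of a zero-mean, unit-variance signal:
# y = a*u + b has mean b and variance a**2.
rng = np.random.default_rng(1)
u = rng.standard_normal(200_000)   # mean 0, variance 1
a, b = 5.0, -1.0                   # illustrative values only
y = a * u + b
print(np.mean(y), np.var(y))       # close to b and a**2
```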
Chapter 16
Estimation of model parameters
16.1 Introduction
No exercises here.
16.2 Parameter estimation of static models with the Least squares (LS) method
Exercise 16.1
See Example 16.1 in the text-book. Draw the three data sets (points) in a Cartesian diagram. Draw in the same diagram the straight line which is the LS solution. Also indicate the prediction errors with lines in the diagram.
Exercise 16.2
Assume that the parameters a and b in the difference equation

h(k) + a·√(h(k − 1)) = h(k − 1) + b·u(k − 1)    (16.1)

are to be estimated with the Least Squares (LS) method. Assume that the following samples of the variables h and u exist:

h(0), h(1), h(2), h(3), h(4)    (16.2)

u(0), u(1), u(2), u(3), u(4)    (16.3)

Write the total regression model

Y = Φθ    (16.4)

which forms the basis for the LS estimation. However, you shall not calculate the estimate in this exercise. The regression model contains only the samples of h and u that are available. (You are to find the vector Y, the matrix Φ, and the vector θ.)
Exercise 16.3
This exercise demonstrates that an LS estimate converges towards an erroneous value (in other words, the estimate is biased) if the noise or the model error has a mean value different from zero.¹

The constant K in the model

y = K    (16.5)

is to be estimated. Assume that the real (true) value of K is K0. The observation yk is

yk = K0 + ek    (16.6)

where ek is noise (or equation error, model error, or prediction error). Calculate the LS estimate of K (you can assume that there are N observations). Assume that yk in (16.5) is given by (16.6).
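The bias mechanism can be illustrated numerically (a sketch with made-up values K0 = 10 and noise mean 0.5; for the model y = K, the LS estimate works out to the sample mean of the observations):

```python
import numpy as np

# LS estimation of a constant K from y_k = K0 + e_k, where the noise e_k
# has a nonzero mean: the estimate converges toward K0 plus the noise mean.
rng = np.random.default_rng(2)
K0, e_mean, N = 10.0, 0.5, 100_000
y = K0 + e_mean + rng.standard_normal(N)   # noise with mean e_mean
K_ls = np.mean(y)                           # LS estimate for the model y = K
print(K_ls)                                 # near K0 + e_mean, not K0
```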
Exercise 16.4
Figure 16.1 shows the criterion function V as a function of the model order n in a fictitious parameter estimation problem. Which order should be selected?
Exercise 16.5
This exercise is based on Example 16.1 in the text-book. To stick to standard nomenclature, let us here use y instead of z, which was used in the example.
¹In general, the estimate will be biased if the noise is coloured (non-white).
Figure 16.1: The criterion function V versus the model order n (candidate orders n0, n1, n2, n3, n4).
In the example, parameters c and d in the model

y = cx + d    (16.7)

are fitted to the three data points

(y1, x1) = (0.8, 1.0)    (16.8)

(y2, x2) = (3.0, 2.0)    (16.9)

(y3, x3) = (4.0, 3.0)    (16.10)

The LS solution is

[cLS; dLS] = [1.6; −0.6]    (16.11)
1. Calculate the LS criterion

Vnorm = (1/m)·V(θLS)    (16.12)

      = (1/m)·Σ from k=1 to N of ek²    (16.13)

      = (1/m)·Σ from k=1 to N of (yk − φk·θLS)²    (16.14)

      = (1/m)·Σ from k=1 to N of [yk − (cLS·xk + dLS)]²    (16.15)

for the data given above.
2. To indicate that (16.11) actually is the LS solution, you are now to calculate the criterion Vnorm with some other values of c and d than the LS values. The question is: Will Vnorm be larger with these non-LS values? You can (as an example) use the values

c = 2.2    (16.16)

d = −1.4    (16.17)

which give a perfect match to the first two of the three data sets:

(y1, x1) = (0.8, 1.0)    (16.18)

(y2, x2) = (3.0, 2.0)    (16.19)

Show that c = 2.2 and d = −1.4 is a perfect match to these two data sets.

Calculate Vnorm given by (16.15) with c = 2.2 and d = −1.4. Does Vnorm have a larger value than the LS value of Vnorm that you calculated in Problem 1 above? If yes, you have demonstrated that c = 2.2 and d = −1.4 are not LS parameters!
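A numerical companion to this exercise (a numpy sketch usable for checking the hand calculations): compute the LS solution for the three data points and compare Vnorm for the LS parameters with Vnorm for (c, d) = (2.2, −1.4).

```python
import numpy as np

# LS fit of y = c*x + d to the three data points, plus the criterion V_norm.
x = np.array([1.0, 2.0, 3.0])
y = np.array([0.8, 3.0, 4.0])
Phi = np.column_stack([x, np.ones_like(x)])          # regressor matrix
theta_ls, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # cf. (16.11)

def V_norm(c, d):
    e = y - (c * x + d)          # prediction errors
    return np.mean(e ** 2)

print(theta_ls)                               # [1.6, -0.6]
print(V_norm(*theta_ls), V_norm(2.2, -1.4))   # the LS value is the smaller one
```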
16.3 Parameter estimation of dynamic models
Exercise 16.6
Assume that you know in advance that a given physical process has a time-delay of 2 sec. You are to estimate a discrete-time transfer function from experimental input and output (measurement) sequences. The sampling time is 0.5 sec. You are not sure about the order of the transfer function, but what is the minimum order that it should have?
Exercise 16.7
Here is a mathematical model of an electric motor (a DC motor) with load torque:

J·dω/dt = (KT/Ra)·(va − Ke·ω) + TL    (16.20)

Assume that the inertia J and the torque TL will be estimated with the LS method. The motor is excited with the armature voltage va. The rotational speed ω is measured with a tachometer. KT, Ra and Ke have known values.
Write the model on the standard form y = ϕθ. Use the center difference method to calculate dω/dt.
Exercise 16.8
In e.g. subspace methods of system identification, the model that is estimated is always a linear model. If the system is actually nonlinear, a linear model will give a good representation of the system only near an operating point. How can you process the input and output data (the sequences) before using them in system identification to improve the accuracy of the estimated linear model?
Chapter 17
State estimation with observers
Exercise 17.1
Figure 17.1 shows an electric motor (a DC motor). It is manipulated with an input voltage signal, u, and the rotational speed is measured with a tachometer which produces an output voltage signal which is proportional to the speed. Assume that the speed, S [krpm = kilo revolutions per minute], is calculated continuously from the tachometer voltage.

Figure 17.1:

A proper mathematical model of the motor is

Tm·dS(t)/dt + S(t) = Km·[u(t) + L(t)]    (17.1)

L is the equivalent load torque (represented in the same unit as the control variable, namely voltage). L can be regarded as a process disturbance. Km is gain. Tm is time-constant. (Model parameter values are given in Appendix A.1, but these values are not needed in the present exercise.)

The tasks below are about designing an observer which estimates the load L (and the speed S) based on the measurement of S. Assume that the value of L and how it varies is completely unknown, so the best assumption is to model it as an unknown constant.
1. Find the observer formulas for continuous-time implementation. (You can use S and L as names of the state variables.) Assume that the observer gain is given. Task 3 is about calculating this gain.

2. Find the observer formulas (algorithm) for discrete-time implementation. The time-step is Ts. (Assume that the observer gain is given.)

3. Find the observer gains K1 and K2 as a function of the observer response-time Tr. (Use manual calculations.)

4. Check if the system is observable.

5. If you want to speed up the response of the estimates, would you then increase or decrease the observer response time Tr? Does this make the estimates more noisy or less noisy?

6. Assume that the estimates are too noisy due to measurement noise. How can you obtain smoother estimates without adjusting the parameters of the observer itself?

7. Assume that the load estimate, Le, will be used in feedforward control of the motor speed. Derive the feedforward control function, i.e. uf = ....

8. Assume that you want to build a speed control system with increased robustness against speed sensor failure. Explain how you can use the observer to obtain this.
Exercise 17.2
See Figure 17.1 in the text-book.

1. Explain that the estimation loop is based on updating the time-derivative of the estimates with a P (proportional) compensator or "controller".

2. Modeling errors, i.e. differences between the model assumed to represent the real system and the model used in the observer to calculate the state estimates, may cause non-zero steady-state errors in the state estimates with the P compensator. Think about basic controller theory. Suggest an alternative compensator that will cause the steady-state estimation error to become zero.
Exercise 17.3
This exercise is about using an observer for estimating acceleration fromposition measurement. In other words: You will design a soft
accelerometer.
1. Write a state-space model with position, velocity, and acceleration asstate variables. You can use the state names p, v, and a. Assumethat you do not know how the acceleration varies, so the bestassumption is that it’s time-derivative is zero.
2. Derive the observer formulas for continuous-time implementation.Assume that the observer gain is given. Task 3 is about calculatingthis gain.
3. Derive the observer formulas (algorithm) discrete-timeimplementation. The time-step is Ts. (Assume that the observer gainis given.)
4. Find the observer gains K1 and K2 as a function of the observerresponse-time Tr. (Use manual calculations.)
Chapter 18
State estimation with Kalman Filter
Exercise 18.1
The system in this exercise is the same as in Exercise 17.1 about the observer, namely an electric motor. Figure 18.1 shows the motor. It is manipulated with an input voltage signal, u, and the rotational speed is measured with a tachometer which produces an output voltage signal which is proportional to the speed. Assume that the speed, S [krpm = kilo revolutions per minute], is calculated continuously from the tachometer voltage.

Figure 18.1:

A proper mathematical model of the motor is

Tm·dS(t)/dt + S(t) = Km·[u(t) + L(t)]    (18.1)

L is the equivalent load torque (represented in the same unit as the control variable, namely voltage). L can be regarded as a process disturbance. Km is gain. Tm is time-constant. (Model parameter values are given in Appendix A.1, but these values are not needed in the present exercise.)

The tasks below are about designing a Kalman Filter which estimates the load L (and the speed S) based on the measurement of S. Assume that the value of L and how it varies is completely unknown, so the best assumption is to model it as an unknown constant.
1. Find the Kalman Filter formulas for estimating S and L. (You can use S and L as names of the state variables.) You can assume that the Kalman Filter gain K is given. Task 2 is about calculating this gain. The time step is Ts.

2. Define the quantities needed for calculating the steady-state Kalman Filter gain, K. (You are not required to calculate K. In practice you will use a proper function in e.g. Matlab or LabVIEW.)

3. Check if the system is observable.

4. How can you, in a practical experiment, find a reasonable value of the measurement noise covariance R?

5. How can you tune the Kalman Filter to speed up the response of the estimate Lest = Lc? Is there any drawback related to increasing the response of the estimate?

6. Assume that the estimates are too noisy due to measurement noise. How can you obtain smoother estimates without adjusting parameters of the Kalman Filter itself?

7. Assume that the load estimate, Lest = Lc, will be used in feedforward control of the motor speed. Derive the feedforward control function, i.e. uf(tk) = ....

8. Assume that you want to build a speed control system with increased robustness against speed sensor failure. Explain how you can use the Kalman Filter to obtain this.
Chapter 19
Testing robustness of model-based control systems with simulators
Exercise 19.1
Figure 19.1 shows a model-based speed control system of an electric motor. The motor is controlled with an input voltage signal, u, and the rotational speed, S, is measured with a tachometer which produces a voltage being proportional to the speed.

A proper mathematical model of the motor is the following differential equation:

dS(t)/dt = (1/Tm)·{−S(t) + Km·[u(t) + L(t)]}    (19.1)

L is equivalent load torque (represented in the same unit as the control variable, namely voltage). L can be regarded as a process disturbance. Km is gain. Tm is time-constant. (Parameter values are given in Appendix A.1, but these values are not needed in the present exercise.)

The control system is based on feedback control and feedforward control. The feedback controller is a PI controller which is tuned using Skogestad's model-based tuning formulas for a "time-constant process":

Kp = Tm/(Km·TC)    (19.2)

Ti = min[Tm, c·TC]    (19.3)
Figure 19.1: Block diagram of the model-based speed control system: the PI controller (tuned with Skogestad's method) and the feedforward controller (using the load torque estimate Lest from the estimator) produce the control signal u from the speed reference Sr; the motor with speed sensor (tachometer) gives the measured speed, which contains random measurement noise and passes through a measurement lowpass filter.
where TC is the specified closed-loop time-constant. We set c = 2. Assume (for simplicity) that it is decided to use

Ti = Tm    (19.4)

The feedforward controller is based on the motor model (19.1), from the estimated load torque Lest and the speed reference Sr:

uf(t) = (1/Km)·[Tm·dSr(t)/dt + Sr(t)] − Lest(t)    (19.5)

Lest is estimated with an estimator (an observer or a Kalman Filter; which one of these does not matter here), which uses the mathematical model of the motor to calculate the estimate.

The feedback controller, the feedforward controller, and the estimator are model-based. Hence, the whole control system is model-based.
Assume that you intend to test the robustness of the model-based control system against model errors, and also that you want to see how the measurement noise is influencing the behaviour of the control system. Unfortunately, you can not perform experiments on the real motor. Instead you must use a simulated motor. Explain how you can do these simulated experiments. Draw a block diagram similar to the one shown in Figure 19.1, where you indicate which model parameters to use in the individual blocks. You can assume that it is interesting to see if the control system is robust against model parameter variations of ±20 %.
Chapter 20
Feedback linearization
Exercise 20.1
Figure 20.1 shows a tank where continuous flows of cold liquid and hot liquid are mixed.¹ The liquid in the tank is assumed to be homogeneous. The product flow rate out of the tank is controlled with an ordinary flow control loop.

The level and the temperature of the tank shall be controlled to follow (track) their reference values. The cold liquid and the hot liquid flows can be manipulated, so they are control variables. (To make the actual flows become equal to the demanded flow as calculated by the multivariable controller, local flow control loops around the pumps may be needed, but these control loops are not shown in the figure.)

It is not obvious how to control the cold flow and the hot flow to obtain the reference level and temperature because both flows affect both the level and the temperature. The control problem may be solved with a "traditional" control structure where e.g. the level controller adjusts the cold flow, and the temperature controller adjusts the ratio between hot and cold flow. But, instead of such a traditional control structure, you will design a model-based multivariable controller based on feedback linearization.

The mathematical model of the process is as follows. The model is based on the following assumptions:

¹This example is not very realistic, but it is assumed to be relatively simple and easy to understand.
Figure 20.1: Mixing tank with cold feed Fc [kg/s] at temperature Tc [K], hot feed Fh [kg/s] at temperature Th [K], level L [m], temperature T [K], and tank cross-sectional area A [m²]. The product flow F [kg/s] is drawn by a pump under production-rate control (flow controller FC with setpoint Fsp [kg/s], flow transmitter FT). The multivariable controller (feedback linearization) uses the level and temperature measurements (LT, TT) and the references (setpoints) Lr and Tr to manipulate Fc and Fh.
• The density ρ and the specific heat capacity c are the same in allflows and in the tank.
• The temperature is homogeneous in the liquid in the tank.
• There is no heat transfer through the walls of the tank.
• Energy dependent on pressure and kinetics is disregarded.
Mass balance of the mixed liquid in the tank:

ρ·A·dL/dt = Fc + Fh − F    (20.1)

Energy balance of the liquid in the tank (it is assumed that both the level and the temperature can vary):

c·ρ·A·d(LT)/dt = c·Fc·Tc + c·Fh·Th − c·F·T    (20.2)

In (20.2),

c·ρ·A·d(LT)/dt = c·ρ·A·(dL/dt)·T + c·ρ·A·L·(dT/dt)    (20.3)

With (20.3) inserted into (20.2) and then cancelling c, (20.2) becomes

ρ·A·L·dT/dt = Fc·(Tc − T) + Fh·(Th − T)    (20.4)

The process model is now (20.1) and (20.4).
1. Write the process model (20.1) and (20.4) on the standard form to be used in a feedback linearization controller where Fc and Fh are control variables, and L and T are state variables to be controlled (to track their respective references).

2. Design the control function based on feedback linearization, but you do not have to give the formulas for tuning the gains and the integral times of the internal PI controllers, as this is the topic of the following task. The references are Lr and Tr.

3. Calculate the gains, KpL and KpT, and the integral times, TiL and TiT, of the internal PI controllers. It is specified that the control loops of the decoupled processes (which are just integrators) shall have response times (time constants) TCL and TCT, respectively.

4. Which parameters and variables must be known at any instant of time to make the control function implementable?

5. Assume that there is a change in the level reference. Will this change cause any change in the temperature?

Will a change in the temperature reference cause any change in the level?

6. Assume for example that any change in the temperature of the cold inflow, Tc, is regarded as a disturbance to the control system. Does the control function implement feedforward from this disturbance?
Exercise 20.2
Figure 20.2 shows a ship. In this exercise we concentrate on the so-called surge (forward-backward) direction, i.e., the movements in the other directions are disregarded. The wind acts on the ship with the force Fw, which is a function of the wind attack angle φ and the wind speed Vw. This function is assumed to be known for a given ship. The hydrodynamic damping force Fh (damping from the water) is proportional to the square of the difference between the ship speed, dy/dt, and the water current speed uc. The proportionality constant is D.

Figure 20.2: Ship in the surge direction, with wind force Fw [N], hydrodynamic force Fh [N], thruster force Ft [N], mass m [kg], position (relative to earth) y [m], and water current speed (relative to earth) uc [m/s].
Applying Newton's Law of Motion we obtain the following mathematical model of the surge motion:

m·d²y/dt² = −D·|dy/dt − uc|·(dy/dt − uc) + Fw(φ, Vw) + Ft    (20.5)

where the first term on the right-hand side is the hydrodynamic force Fh.
(Model parameter values are given in Appendix A.2, but these values are not needed in the present exercise.)
1. Design the position control function using feedback linearization. The position reference is r [m]. The closed loop time-constant is specified as TC. (According to the comments about Skogestad's method for tuning a PID controller for a double-integrator, you can use TC/2 in Skogestad's formulas.) Express the PID parameters as functions of TC.

2. Which variables and parameters must have known values (from measurements or estimators) to make the controller function implementable?

3. Let's define the wind force Fw as a disturbance. Explain how the feedback linearization controller implements feedforward from this disturbance. Assuming that the ship model is correct and that Fw is perfectly known, what is then the impact that Fw will have on the ship position y?
Chapter 21
LQ (Linear Quadratic) optimal control
Exercise 21.1
See Figure 20.2, which shows a ship. In this exercise we concentrate on the so-called surge (forward-backward) direction, i.e., the movements in the other directions are disregarded. The wind acts on the ship with the force Fw, which is a function of the wind attack angle φ and the wind speed Vw. This function is assumed to be known for a given ship. The hydrodynamic damping force Fh (damping from the water) is proportional to the square of the difference between the ship speed, dy/dt, and the water current speed uc. The proportionality constant is D.

Applying Newton's Law of Motion we obtain the following mathematical model of the surge motion:

m·d²y/dt² = −D·|dy/dt − uc|·(dy/dt − uc) + Fw(φ, Vw) + Ft    (21.1)

where the first term on the right-hand side is the hydrodynamic force Fh.
(Model parameter values are given in Appendix A.2, but these values arenot needed in the present exercise.)
This exercise is about ship position control using LQ optimal control. Assume that the water current speed uc has a known value at any instant of time (in a practical application it may have been estimated with a state estimator, e.g. a Kalman Filter). Also, assume that the wind force Fw has a known value at any instant of time (with a mathematical wind model the wind force can be calculated from information about the wind attack angle φ and the wind speed Vw provided by the sensor). We also assume that the ship position y and the speed dy/dt are known at any instant of time.
1. Write the ship model as a (nonlinear) state-space model using x1 = y and x2 = dy/dt as state variables. Ft is the control variable.

2. The ship position will be controlled with LQ control with integral action. Figure 21.1 shows a block diagram of the ship (with inputs u, Fw and uc, and outputs y = x1 and dy/dt = x2).

Figure 21.1:

Enhance this block diagram so that it shows the control system in detail, including the feedbacks and the integrator of the controller.¹ You can assume that the controller gains have known values (calculation of these gains is the focus of a following task).

3. Augment the state-space model found in Problem 1 above with the state variable of the integral part of the controller. (This augmented state-space model is needed in the following problem.)

4. Figure 21.2 illustrates the matrices needed to compute the steady-state LQ controller gain Gs (using a proper function in e.g. Matlab or LabVIEW). Matrices A and B are found by linearizing the augmented state-space model found in the problem above. The elements of the weight matrices Q and R (at least their initial values, since these values may be adjusted later) shall be expressed as functions of the allowable maximum values of the proper variables in the model.

5. Suppose you want to reduce the fuel consumption (less aggressive control). How can you adjust some of the weights of the LQ criterion to obtain this?

6. Suppose you want to increase the damping in the control system. In other words: You want to limit (reduce) the speed of the ship. How can you adjust some of the weights of the LQ criterion to obtain this?

¹Feedforward from the wind force Fw and the water current uc would be an enhancement of the control system, but we will not include feedforward in this exercise.
Figure 21.2: Information needed to compute the steady-state LQ controller gain, Gs: the transition matrix A, the input gain matrix B, the state weight matrix Q, and the control signal weight matrix R.
Chapter 22
Model-based predictive control (MPC)
Exercise 22.1
If the setpoint profile is known in advance, the MPC controller will start adjusting the control signal before the setpoint is actually changed. (This is demonstrated in the text-book.) Assume that a PID controller with ordinary feedforward from the setpoint is used instead of MPC. Will the control signal then start changing before the setpoint change?
Exercise 22.2
See Exercise 21.1, which is about position control of a ship. In that exercise the position is controlled with LQ control. Now, assume MPC instead of LQ control. The state variables are x1 = y and x2 = dy/dt. The control variable is Ft.
1. Assume that you want to make the control system more sluggish so that the position control error is allowed to become larger. (One possible motivation may be that tight position control is not required for a period, for example if the ship is parked, waiting for a certain operation to be initiated.) Which parameter of the criterion is proper for adjustment, and should you increase or decrease that parameter?
2. The thruster force Ft has of course a positive limit and a negative limit (assuming that the thruster can rotate in both directions). Can the MPC algorithm take these limits into account when it calculates the optimal thruster force to be applied to the ship?
Exercise 22.3
MPC may be used for direct manipulation of the control variables. MPC can also be used in a hierarchic control system. Figure 22.1 shows MPC used in an outer control loop while PID controllers are used in inner control loops. This control structure is similar to a conventional control structure frequently used in industry. Which one?
Figure 22.1: MPC in an outer loop provides the setpoints rPID1 and rPID2 to the inner-loop PID controllers PID1 and PID2, which manipulate u1 and u2 of the process (plant); the process outputs are y1 and y2, and the MPC setpoints are r1MPC and r2MPC.
Suggest concrete examples of r1MPC, rPID1, and u1 (select any application you want).
Chapter 23
Dead-time compensator (Smith predictor)
Exercise 23.1
1. Assume given a process with time-constant 1.2 min and dead-time 2 min. Will dead-time compensation improve the disturbance compensation properties of the control system compared to using standard PID control?

What about the reference tracking; will it be improved?

2. What are the answers if the time-constant is 0.4 min and the dead-time is 2 min?
Exercise 23.2
Given a process with the following transfer function from control variable to process measurement:

Hp(s) = [2/(20s + 1)]·e^(−50s)    (23.1)

Calculate the parameter values of an internal PI controller using Skogestad's tuning method from the specification that the closed-loop time-constant becomes 10 sec. (You can use parameter c = 2 in Skogestad's formula for Ti.)
Exercise 23.3
Dead-time compensators may improve control when the process has dead-time. But even better control can be obtained by eliminating the dead-time!
Figure 23.1 shows a level control system of a wood-chip tank. The level controller adjusts the inflow to control the level y [m] in the tank, despite outflow variations.

Figure 23.1: Level control system of a wood-chip tank: level transmitter (LT) and level controller (LC), inflow of wood chip via a conveyor belt (manipulated by the control variable), and outflow at the bottom of the tank.
1. In one specific paper pulp factory the level controller adjusts the speed of the inlet screw, while the conveyor belt is running with fixed speed. Explain why the control loop has dead-time.

2. The dead-time limits the speed of the level control system. Suggest an alternative way that the level controller can adjust the inflow so that the dead-time is eliminated!
Part II
SOLUTIONS
Solution to Exercise 1.1
Density ρ can be cancelled. (1.1) becomes

dh1/dt = f1(·) = (1/A1)·[Kp·u1 − Kv1·√(ρ·g·h1/G)]    (23.2)

(1.2) becomes

dh2/dt = f2(·) = (1/A2)·[Kv1·√(ρ·g·h1/G) − Kv2·u2·√(ρ·g·h2/G)]    (23.3)

The measurement equations become

y1 = g1(·) = h1    (23.4)

y2 = g2(·) = h2    (23.5)

The state-space model is nonlinear due to the square root functions.
Solution to Exercise 1.2
Firstly, we isolate the first order derivatives on the left side, and list thevariables in the proper order to prepare for the matrix-vector form:
x1 = x2x2 = −x1 − 3x2 + 2u1 + 4u2y = 5x1 + 6x2 + 7u1
(23.6)
Finally, [x1x2
]
︸ ︷︷ ︸x
=
[0 1−1 −3
]
︸ ︷︷ ︸A
[x1x2
]
︸ ︷︷ ︸x
+
[0 02 4
]
︸ ︷︷ ︸B
[u1u2
]
︸ ︷︷ ︸u
(23.7)
and
y =[5 6
]︸ ︷︷ ︸
C
[x1x2
]
︸ ︷︷ ︸x
+[7 0
]︸ ︷︷ ︸
D
[u1u2
]
︸ ︷︷ ︸u
(23.8)
or, compactly:x = Ax+Bu (23.9)
92
y = Cx+Du (23.10)
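The matrices can be checked numerically against the scalar equations (a numpy sketch evaluating both forms at one arbitrary test point):

```python
import numpy as np

# Check that A, B, C, D reproduce x1' = x2, x2' = -x1 - 3*x2 + 2*u1 + 4*u2
# and y = 5*x1 + 6*x2 + 7*u1 at a test point.
A = np.array([[0.0, 1.0], [-1.0, -3.0]])
B = np.array([[0.0, 0.0], [2.0, 4.0]])
C = np.array([[5.0, 6.0]])
D = np.array([[7.0, 0.0]])
x = np.array([1.0, 2.0])     # arbitrary test state
u = np.array([0.5, -1.0])    # arbitrary test input
xdot = A @ x + B @ u
y = C @ x + D @ u
print(xdot)   # [2.0, -10.0], matching the scalar equations
print(y)      # [20.5]
```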
Solution to Exercise 1.3
Linearization of the differential equations gives

d∆h/dt = A·∆h + B·∆u    (23.11)

where A contains the partial derivatives ∂fi/∂hj and B the partial derivatives ∂fi/∂uj, all evaluated at the operating point (indicated by the subscript 0). With f1 and f2 from (23.2) and (23.3):

A = [ ∂f1/∂h1  ∂f1/∂h2 ]     [ −(Kv1/A1)·√(ρg/G)·1/(2√h1)               0                       ]
    [ ∂f2/∂h1  ∂f2/∂h2 ]|0 = [  (Kv1/A2)·√(ρg/G)·1/(2√h1)    −(Kv2·u2/A2)·√(ρg/G)·1/(2√h2) ]|0    (23.13)

B = [ ∂f1/∂u1  ∂f1/∂u2 ]     [ Kp/A1              0               ]
    [ ∂f2/∂u1  ∂f2/∂u2 ]|0 = [   0       −(Kv2/A2)·√(ρ·g·h2/G) ]|0    (23.14)

(The minus signs in the second column follow from the outflow term −Kv2·u2·√(ρ·g·h2/G) in f2.)

Linearization of the output equations gives

∆y = C·∆h + D·∆u    (23.15)

where, correspondingly,

C = [ ∂g1/∂h1  ∂g1/∂h2 ]     [ 1  0 ]
    [ ∂g2/∂h1  ∂g2/∂h2 ]|0 = [ 0  1 ]    (23.17)

D = [ ∂g1/∂u1  ∂g1/∂u2 ]     [ 0  0 ]
    [ ∂g2/∂u1  ∂g2/∂u2 ]|0 = [ 0  0 ]    (23.18)
Solution to Exercise 2.1
1. Figure 23.2 shows the signals with amplitudes and time-lag indicated. From the figure we see that the period of the input signal is

Tp = 10 sec    (23.19)

Figure 23.2: Input u(t) and output y(t), with amplitudes and time-lag indicated.

which corresponds to the frequency

f1 = 1/Tp = 1/10 = 0.1 Hz    (23.20)

or, alternatively,

ω1 = 2πf1 = 2·π·0.1 = 0.63 rad/s    (23.21)

2. The amplitude gain (at frequency f1) is

A = Y/U = 1.4/2 = 0.7, which in decibels is 20·log10(0.7) dB = −3.1 dB    (23.22)

The phase lag φ can be calculated by firstly measuring the time lag ∆t between input u(t) and output y(t) and then calculating φ with

φ = −ω·∆t [rad]    (23.23)

From Figure 23.2 you can find

∆t = 1.8 s    (23.24)

Hence,

φ = −ω·∆t = −0.63·1.8 = −1.13 rad = −1.13·180/π deg = −65 deg    (23.25)

Alternatively, we can calculate φ from the following ratio (360 degrees corresponds to one period):

φ = −(∆t/Tp)·360 deg = −(1.8 s/10 s)·360 deg = −65 deg    (23.26)
Solution to Exercise 2.2
The steady-state response is

ys(t) = U·A·sin(ωt + φ)    (23.27)

The amplitude of the input signal is U = 0.8. The amplitude gain A is read off from the upper curve in the Bode diagram at frequency ω = 1.0 rad/s:

A = −3 dB    (23.28)

which is

A = 10^(−3/20) = 0.71    (23.29)

The phase lag φ at frequency ω = 1.0 rad/s is read off from the lower curve in the Bode diagram:

φ = −45 deg = −45·π/180 rad = −0.79 rad    (23.30)

Hence,

ys(t) = 0.8·0.71·sin(1.0·t − 0.79) = 0.57·sin(t − 0.79)    (23.31)
Solution to Exercise 2.3
We set s = jω in H(s), and then turn the factors into polar forms, whichwe then combine to finally get a polar form of H(jω):
H(jω) =K
(1 + T1jω) (1 + T2jω)e−jωτ (23.32)
=K[√
12 + (T1ω)2ej arctan
(T1ω1
)][√12 + (T2ω)
2ej arctan
(T2ω1
)]e−jωτ
(23.33)
=K
√1 + (T1ω)
2√1 + (T2ω)
2
︸ ︷︷ ︸|H(jω)|
e
j[−arctan (T1ω)− arctan (T2ω)− ωτ ]︸ ︷︷ ︸
argH(jω)
(23.34)
So, we have
A(ω) = |H(jω)| = K√1 + (T1ω)
2√1 + (T2ω)
2(23.35)
and
φ(ω) = argH(jω) = − arctan (T1ω)− arctan (T2ω) (23.36)
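The closed-form expressions (23.35) and (23.36) can be cross-checked against a direct complex evaluation of H(jω). The parameter values K, T1, T2, τ below are arbitrary test values, not values from any exercise:

```python
import cmath, math

# Spot-check (23.35)-(23.36) against direct complex evaluation of H(jw).
K, T1, T2, tau = 2.0, 1.0, 0.5, 0.3   # arbitrary test values

def H(w):
    return K / ((1 + 1j*T1*w) * (1 + 1j*T2*w)) * cmath.exp(-1j*w*tau)

for w in (0.1, 1.0, 10.0):
    A = K / (math.sqrt(1 + (T1*w)**2) * math.sqrt(1 + (T2*w)**2))
    phi = -math.atan(T1*w) - math.atan(T2*w) - w*tau
    assert abs(A - abs(H(w))) < 1e-12
    # compare phases as unit complex numbers to avoid 2*pi wrapping issues
    assert abs(cmath.exp(1j*phi) - H(w)/abs(H(w))) < 1e-12
```

Without the −ωτ term in φ(ω) the phase check fails for any τ ≠ 0, which is why the time delay must appear in (23.36).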
Solution to Exercise 2.4
The bandwidth is
fb [Hz] = ωb [rad/s]/(2π) = (1/(2π))·(1/(RC))   (23.37)
With
C = 10^(−5) F   (23.38)
we get
R = 1/(2π·fb·C) = 1/(2π·100 Hz·10^(−5) F) = 159 Ω   (23.39)
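The resistor sizing can be reproduced directly from (23.39):

```python
import math

# Resistor sizing for the RC lowpass filter in Exercise 2.4:
# bandwidth fb = 1/(2*pi*R*C)  =>  R = 1/(2*pi*fb*C).
fb = 100.0    # desired bandwidth [Hz]
C = 1e-5      # capacitance [F]
R = 1 / (2 * math.pi * fb * C)
print(round(R, 1))   # -> 159.2 (ohm)
```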
Solution to Exercise 3.1
Figure 23.3 shows a block diagram where the measurement noise n and a (lowpass) filter are included.
Figure 23.3: [Block diagram of the control loop: controller, process with disturbance v, sensor, setpoint ySP, control error e, measurement noise n, and a lowpass filter in the measurement path]

Solution to Exercise 3.2
L(s) = Hc(s)Hu(s)Hm(s) = Kp·(Ti·s + 1)/(Ti·s) · Ku/(Tu·s + 1)·e^(−τs)·Km   (23.40)

S(s) = 1/(1 + L(s))   (23.41)
     = 1/[1 + Kp·(Ti·s + 1)/(Ti·s)·Ku/(Tu·s + 1)·e^(−τs)·Km]   (23.42)
     = Ti·s·(Tu·s + 1) / [Ti·s·(Tu·s + 1) + Kp·Ku·Km·(Ti·s + 1)·e^(−τs)]   (23.43)

T(s) = L(s)/(1 + L(s))   (23.44)
     = [Kp·(Ti·s + 1)/(Ti·s)·Ku/(Tu·s + 1)·e^(−τs)·Km] / [1 + Kp·(Ti·s + 1)/(Ti·s)·Ku/(Tu·s + 1)·e^(−τs)·Km]   (23.45)
     = Kp·Ku·Km·(Ti·s + 1)·e^(−τs) / [Ti·s·(Tu·s + 1) + Kp·Ku·Km·(Ti·s + 1)·e^(−τs)]   (23.46)
Solution to Exercise 3.3

1. From Figure 3.3 we read off
ωc = 2.5 rad/s   (23.47)
ωt = 5.5 rad/s   (23.48)
ωs = 0.55 rad/s   (23.49)
(Hence, there is quite a large difference between the different bandwidths in this example.)

2. The conversion from dB is made with the formula x = 10^(x[dB]/20).
Ay = |T(j1 rad/s)|·ASP = 10^(−0.8/20)·ASP = 0.91·ASP   (23.50)
Ae = |S(j1 rad/s)|·ASP = 10^(−8/20)·ASP = 0.40·ASP   (23.51)

3.
AyCL = |S(j1 rad/s)|·AyOL = 0.40·0.5 = 0.2   (23.52)

4.
Tr ≈ 2/ωt = 2/5.5 = 0.36 s   (23.53)
Solution to Exercise 3.4

1. In Figure 3.5, which applies to the non-controlled system, we read off that the amplitude gain at frequency 0.1 rad/s is
GNC(0.1) ≈ 0 dB = 1   (23.54)

2. In Figure 3.6, which applies to the controlled system, we read off that the amplitude gain at frequency 0.1 rad/s is
GC(0.1) ≈ −32 dB = 0.025   (23.55)

Hence, with control the response in T due to Tin is about 40 times smaller than the response in the non-controlled system.
Solution to Exercise 4.1
The Laplace transform of the impulse response is
y(s) = H(s)·u(s) = H(s) = 1/(s + 1)   (23.56)
since u(s) = 1 for an impulse input.

Figure 23.4: [The impulse response h(t) = e^(−t)]

Using (4.2) with k = 1 and T = 1 we get
h(t) = (k/T)·e^(−t/T) = e^(−t)   (23.57)
Figure 23.4 shows h(t). Since h(t) goes to zero as time goes to infinity, the system is asymptotically stable.
Solution to Exercise 4.2
The transfer function
H1(s) = 1/(s + 1)   (23.58)
is asymptotically stable since the pole p = −1 is in the left half plane.

The transfer function
H2(s) = (1 − s)/(1 + s)   (23.59)
is asymptotically stable since the pole p = −1 is in the left half plane. The value of the zero, which is 1, does not determine the stability.

The transfer function
H3(s) = 1/(1 − s)   (23.60)
is unstable since the pole p = 1 is in the right half plane.

The transfer function
H4(s) = 1/[(s + 1)(s − 1)]   (23.61)
is unstable since one of the poles, p1 = 1, is in the right half plane.

The transfer function
H5(s) = 1/s   (23.62)
is marginally stable since the pole p = 0 is in the origin, which is on the imaginary axis, and this pole is single (there are no multiple poles).

The transfer function
H6(s) = 1/s³   (23.63)
is unstable since there are multiple poles, p1,2,3 = 0, on the imaginary axis.

The transfer function
H7(s) = e^(−s)/(s + 1)   (23.64)
is asymptotically stable since the pole p = −1 is in the left half plane.

The transfer function
H8(s) = −1/(s + 1)   (23.65)
is asymptotically stable since the pole p = −1 is in the left half plane.

The transfer function
H9(s) = 1/(s² + s + 1)   (23.66)
is asymptotically stable since both the poles
p1,2 = [−1 ± √(1² − 4·1·1)]/(2·1) = (−1 ± j√3)/2   (23.67)
are in the left half plane.

The transfer function
H10(s) = 1/(s² + 1)   (23.68)
is marginally stable since the poles
p1,2 = ±j   (23.69)
are on the imaginary axis and they are single (not multiple).

The transfer function
H11(s) = 1/[(s + 1)s]   (23.70)
has poles
p1,2 = 0, −1   (23.71)
One pole is on the imaginary axis, and the other is in the left half plane. The system is marginally stable.
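The pole criterion used for H1 through H11 can be mechanized. A small sketch (the tolerance handling is an implementation choice, not part of the exercise):

```python
import math

# Pole-based stability classification used throughout Exercise 4.2:
# asymptotically stable if all poles have Re < 0; unstable if some pole
# has Re > 0, or a repeated pole lies on the imaginary axis; marginally
# stable otherwise.
def classify(poles, tol=1e-9):
    if any(p.real > tol for p in poles):
        return "unstable"
    on_axis = [p for p in poles if abs(p.real) <= tol]
    # repeated poles on the imaginary axis give unbounded responses
    for p in on_axis:
        if sum(1 for q in on_axis if abs(q - p) <= tol) > 1:
            return "unstable"
    return "marginally stable" if on_axis else "asymptotically stable"

assert classify([-1]) == "asymptotically stable"            # H1
assert classify([0]) == "marginally stable"                 # H5
assert classify([0, 0, 0]) == "unstable"                    # H6
assert classify([(-1 + 1j*math.sqrt(3))/2,
                 (-1 - 1j*math.sqrt(3))/2]) == "asymptotically stable"  # H9
assert classify([1j, -1j]) == "marginally stable"           # H10
assert classify([0, -1]) == "marginally stable"             # H11
```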
Solution to Exercise 4.3
The stability property is determined by the system eigenvalues, which are the roots of the characteristic equation:

det(sI − A) = det(s·[1 0; 0 1] − [0 1; 0 −2])   (23.72)
            = det([s  −1; 0  s + 2])   (23.73)
            = s·(s + 2) + 0   (23.74)

The roots are
s1,2 = 0, −2   (23.75)
One pole is in the origin, and the other pole is in the left half plane. Therefore, the system is marginally stable.
Solution to Exercise 5.1
1. Hpm(s) has a pole equal to 0. Consequently the system is marginally stable.

2. The transfer function of the controller is
Hc(s) = Kp   (23.76)
The loop transfer function is
L(s) = Hc(s)Hpm(s) = Kp/s = n0(s)/d0(s)   (23.77)
The characteristic polynomial is
c(s) = d0(s) + n0(s) = s + Kp   (23.78)
The pole of the control system is the root of (23.78):
p = −Kp   (23.79)
The control system is asymptotically stable with Re(p) < 0, i.e.
Kp > 0   (23.80)
3. Sure!
Solution to Exercise 5.2
We start by determining the stability property with Kp = 0.4. The number of right half plane poles of the control system is
PCL = arg[1 + L(s)]/360 + POL   (23.81)
To determine PCL we need to know arg L and POL. We have
L(s) = Kp/[(s + 1)³·s]   (23.82)
which does not have any poles in the right half plane (the pole in the origin belongs to the left half plane in this context). Hence,
POL = 0   (23.83)
arg L is found from the Nyquist diagram shown in Figure 5.2. L(jω) does not encircle the critical point, and therefore the vector L has zero net change of angle, and hence arg L = 0. So, (23.81) becomes
PCL = 0/360 + 0 = 0   (23.84)
Consequently, the control system is asymptotically stable with Kp = 0.4.

From Figure 5.2 we see that the L curve with Kp = 0.4 passes through the negative real axis at −0.45. This implies that if Kp is increased by a factor of 1/0.45 = 2.22, or in other words: if Kp is increased from 0.4 to 0.4·2.22 = 0.89, the L curve will pass through the critical point. Consequently, the control system is marginally stable with Kp = 0.89.

From the results above we can conclude that the control system is asymptotically stable with (positive) Kp < 0.89.

If Kp > 0.89, the L curve encircles the critical point and arg L = 720, giving
PCL = 720/360 + 0 = 2   (23.85)
Therefore, the control system is unstable with Kp > 0.89, and it has two poles in the right half plane.

The control system is unstable also with Kp < 0. In this case arg L = 360, and PCL = 1, and the control system has one pole in the right half plane.
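The values read off from the Nyquist diagram can be verified numerically. The sketch below locates the phase crossover of L(s) = Kp/((s+1)³·s) analytically and evaluates the crossing point:

```python
import math

# Numerical check of the Nyquist readings in Exercise 5.2: find the
# phase crossover frequency w180 of L(s) = Kp/((s+1)^3 s) and the gain
# Kp that places the crossing exactly at the critical point -1.
def L(w, Kp):
    s = 1j * w
    return Kp / ((s + 1)**3 * s)

# phase of L(jw) is -90 deg - 3*arctan(w); it equals -180 deg
# when arctan(w) = 30 deg
w180 = math.tan(math.radians(30))       # about 0.577 rad/s
crossing = L(w180, 0.4).real            # about -0.45, as read from Figure 5.2
Kp_crit = 0.4 / abs(crossing)           # about 0.89

print(round(w180, 3), round(crossing, 3), round(Kp_crit, 2))
```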
Solution to Exercise 5.3

1. The open loop system is unstable because L(s) has a pole in the right half plane. (The pole is p = 1.)

2. K must be halved to make the L curve pass through the critical point:
Kcritical = 2/2 = 1   (23.86)
Since the open loop system has one pole in the right half plane, POL = 1 in the Nyquist criterion. To make PCL = 0 (asymptotically stable closed loop system) arg L must be 360, which implies that the critical point must be encircled once. This is achieved with
K > Kcritical = 1   (23.87)
The pole of the closed loop system is
p = 1 − K   (23.88)
This pole is in the left half plane with K > 1, which confirms the result of the Nyquist stability criterion above.
Solution to Exercise 5.4
The Nyquist curve passes through the critical point with the following values of K:
K = 1/0.8 = 1.25   (23.89)
K = 1/0.4 = 2.5   (23.90)
K = 1/0.2 = 5   (23.91)
The control system is asymptotically stable with
0 < K < 1.25 and 2.5 < K < 5   (23.92)
Solution to Exercise 5.5
See Figure 23.5. As indicated in the figure,
0.4 = |1 + L|min = 1/|S|max   (23.93)
which gives
|S|max = 1/0.4 = 2.5 = 8.0 dB   (23.94)

Figure 23.5: [Nyquist curve with the distances 0.45 = 1/GM and 0.4 = |1 + L|min = 1/|S|max indicated]

Furthermore,
1/GM = 0.45   (23.95)
which gives
GM = 1/0.45 = 2.22 = 6.9 dB   (23.96)
Solution to Exercise 5.6
From Figure 5.6 we read off
|S(jω)|max = 7 dB (23.97)
which is in the “reasonable” range of
3.5 dB ≤ |S(jω)|max ≤ 9.5 dB (23.98)
Solution to Exercise 5.7
1. See Figure 23.6. From the Bode diagram:
GM = 7 dB = 2.2   (23.99)
PM = 35 deg   (23.100)
ωc = 0.34 rad/s   (23.101)
ω180 = 0.58 rad/s   (23.102)

Figure 23.6: [Bode diagram with GM = 7 dB, PM = 35 deg, ωc = 0.34 rad/s and ω180 = 0.58 rad/s indicated]

2. An increase of the loop gain by a factor of
GM = 2.2   (23.103)
will bring the system to the stability limit. The period of the resulting oscillations is
Tp = 2π/ω180 = 2π/0.58 = 10.8 s   (23.104)
Solution to Exercise 5.8

When the control system is marginally stable, the phase margin PM is zero. According to Eq. (5.44) in the text-book the maximum increase of the time delay that is allowed before the system becomes marginally stable is
∆τ = (PM/ωc)·(π/180) = (45/0.2)·(π/180) = 3.9 min   (23.105)
Since the time delay before the change is 4.2 min, the total value of the time delay at marginal stability is
τ = 4.2 + 3.9 = 8.1 min   (23.106)
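The delay-margin arithmetic can be sketched as (units follow the exercise, with ωc in rad/min):

```python
import math

# Delay margin calculation of Exercise 5.8: the extra time delay that
# uses up the whole phase margin is dtau = PM/wc, with PM in radians.
PM_deg = 45.0    # phase margin [deg]
wc = 0.2         # amplitude crossover frequency [rad/min]

dtau = math.radians(PM_deg) / wc      # [min]
tau_total = 4.2 + dtau                # original delay plus the margin
print(round(dtau, 1), round(tau_total, 1))   # -> 3.9 8.1
```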
Solution to Exercise 5.9
For H1(s) the loop transfer function with a P controller is
L1(s) = Kp·H1(s) = Kp·K/s   (23.107)
The phase (angle) of L1(jω) converges to −90 deg as the frequency goes to infinity. Therefore, the closed loop system will not become marginally stable with any controller gain, and hence, the Ziegler-Nichols method can not be used.

For H2(s) the loop transfer function with a P controller is
L2(s) = Kp·H2(s) = Kp·K/(Ts + 1)   (23.108)
The phase (angle) of L2(jω) converges to −90 deg as the frequency goes to infinity. Therefore, the closed loop system will not become marginally stable with any controller gain, and hence, the Ziegler-Nichols method can not be used.

For H3(s) the loop transfer function with a P controller is
L3(s) = Kp·H3(s) = Kp·K·ω0²/(s² + 2ζω0·s + ω0²)   (23.109)
The phase (angle) of L3(jω) converges to −180 deg as the frequency goes to infinity. Therefore, the closed loop system will not become marginally stable with any limited controller gain, and hence, the Ziegler-Nichols method can not be used.
Solution to Exercise 6.1
1.
xd(tk) = 2tk   (23.110)
xd = 0, 0.5, 1.0, 1.5, 2.0   (23.111)
Figure 23.7 shows xd with Ts = 0.5 s.

2.
xd(tk) = 2tk   (23.112)
xd = 0, 0.1, 0.2, 0.3, . . . , 1.5, 1.6, 1.7, 1.8, 1.9, 2.0   (23.113)
Figure 23.8 shows xd with Ts = 0.1 s.
Solution to Exercise 7.1
Each of the time indexes is reduced by 3, giving
y(k) + ay(k − 2) = b1u(k − 1) + b0u(k − 3) (23.114)
Solution to Exercise 7.2
See Figure 23.9.
Solution to Exercise 7.3
1. Let us assume that
u = U (constant)   (23.115)
Figure 23.9: [Block diagram with gain blocks b and a, a sum junction and a time delay z^(−1), relating u(k), y(k+1) and y(k)]
In steady-state,
ys = (1/3)·(U + U + U) = U
Hence, the filter lets a constant input pass unchanged, in steady-state.

2. The ramp response is
y(0) = (1/3)·[u(0) + u(−1) + u(−2)] = (1/3)·[0 + 0 + 0] = 0
y(1) = (1/3)·[u(1) + u(0) + u(−1)] = (1/3)·[0.5 + 0 + 0] = 1/6 = 0.17
y(2) = (1/3)·[u(2) + u(1) + u(0)] = (1/3)·[1.0 + 0.5 + 0] = 0.5
y(3) = (1/3)·[u(3) + u(2) + u(1)] = (1/3)·[1.5 + 1.0 + 0.5] = 1
y(4) = (1/3)·[u(4) + u(3) + u(2)] = (1/3)·[2.0 + 1.5 + 1.0] = 1.5

The general response:
y(k) = (1/3)·[u(k) + u(k − 1) + u(k − 2)]
     = (1/3)·[0.5·k + 0.5·(k − 1) + 0.5·(k − 2)]
     = 0.5·k + (1/3)·(0 − 0.5 − 1.0)
     = 0.5·k − 0.5
     = u(k) − 0.5
Hence, there is a constant difference equal to −0.5 between the output and the input in steady-state.
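The ramp response derived above can be checked by running the averaging filter as code:

```python
# Ramp response of the averaging filter y(k) = (u(k)+u(k-1)+u(k-2))/3
# from Exercise 7.3, with u(k) = 0.5*k (and u = 0 for k < 0).
def u(k):
    return 0.5 * k if k >= 0 else 0.0

y = [(u(k) + u(k - 1) + u(k - 2)) / 3 for k in range(10)]

print([round(v, 2) for v in y[:5]])   # -> [0.0, 0.17, 0.5, 1.0, 1.5]

# After the transient (k >= 2) the output lags the ramp by exactly 0.5.
for k in range(2, 10):
    assert abs(y[k] - (u(k) - 0.5)) < 1e-12
```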
Solution to Exercise 8.1
See Figure 23.10.
Solution to Exercise 8.2
Firstly, we write the model with the time-derivatives alone on the left hand side:

dh1/dt = (1/A1)·(Kp·u1 − Kv1·√(ρ·g·h1/G))   (23.116)
dh2/dt = (1/A2)·(Kv1·√(ρ·g·h1/G) − Kv2·u2·√(ρ·g·h2/G))   (23.117)

(The model is now actually on state-space model form.)
Figure 23.10: [Integral approximations over one time step Ts, from tk−1 to tk: the exact integral is the area under the curve; the Forward approximation uses the rectangle of height fk−1, the Backward approximation the rectangle of height fk, and Tustin's method the trapezoid between fk−1 and fk]
Next, we apply the Forward differentiation approximation:

dh1/dt (tk) ≈ [h1(tk+1) − h1(tk)]/Ts = (1/A1)·(Kp·u1(tk) − Kv1·√(ρ·g·h1(tk)/G))   (23.118)
dh2/dt (tk) ≈ [h2(tk+1) − h2(tk)]/Ts = (1/A2)·(Kv1·√(ρ·g·h1(tk)/G) − Kv2·u2·√(ρ·g·h2(tk)/G))   (23.119)

Solving for h1(tk+1) and h2(tk+1), respectively:

h1(tk+1) = h1(tk) + (Ts/A1)·(Kp·u1(tk) − Kv1·√(ρ·g·h1(tk)/G))   (23.120)
h2(tk+1) = h2(tk) + (Ts/A2)·(Kv1·√(ρ·g·h1(tk)/G) − Kv2·u2·√(ρ·g·h2(tk)/G))   (23.121)

It is better to apply Forward discretization than Backward discretization in this example because the model is nonlinear (the square root function). The nonlinearity makes it difficult to solve for the levels h1 and h2 in a model created with Backward discretization, while solving for h1 and h2 is straightforward in a model created with Forward discretization (this is demonstrated in the text-book).
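As an illustration of how directly (23.120) can be simulated, here is a Forward (explicit) Euler run for the first tank alone. All parameter values below are made-up test values, not values from the exercise:

```python
import math

# Forward (explicit) Euler simulation of the tank update (23.120).
# All parameter values are made-up for the illustration.
A1, Kp, Kv1 = 2.0, 1.0, 0.5
rho, g, G = 1000.0, 9.81, 100.0
Ts = 0.1
u1 = 0.8          # constant control signal

h1 = 1.0          # initial level
for _ in range(5000):
    outflow = Kv1 * math.sqrt(rho * g * h1 / G)
    h1 = h1 + Ts / A1 * (Kp * u1 - outflow)

# In steady state the inflow balances the outflow:
# Kp*u1 = Kv1*sqrt(rho*g*h1/G)  =>  h1 = (Kp*u1/Kv1)**2 * G/(rho*g)
h1_ss = (Kp * u1 / Kv1)**2 * G / (rho * g)
assert abs(h1 - h1_ss) < 1e-6
```

The explicit update never requires solving the square-root equation for the new level, which is exactly the advantage over Backward discretization argued above.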
Solution to Exercise 8.3
Cross-multiplying the transfer function model:
(Tf·s + 1)·y(s) = Tf·s·u(s)   (23.122)
or
Tf·s·y(s) + y(s) = Tf·s·u(s)   (23.123)
Taking the inverse Laplace transform:
Tf·y'(t) + y(t) = Tf·u'(t)   (23.124)
Discrete time:
Tf·y'(tk) + y(tk) = Tf·u'(tk)   (23.125)
Backward discretization:
y'(tk) ≈ [y(tk) − y(tk−1)]/Ts   (23.126)
and
u'(tk) ≈ [u(tk) − u(tk−1)]/Ts   (23.127)
Inserting these approximations into (23.125):
Tf·[y(tk) − y(tk−1)]/Ts + y(tk) = Tf·[u(tk) − u(tk−1)]/Ts   (23.128)
Solving for y(tk):
y(tk) = Tf/(Ts + Tf)·y(tk−1) + Tf/(Ts + Tf)·[u(tk) − u(tk−1)]   (23.129)
Solution to Exercise 8.4
1. Discretizing the nominal (manual) control variable u0:
u0(tk) = u0   (23.130)
Discretizing the P term:
up(tk) = Kp·e(tk)   (23.131)
Discretizing the I term: The continuous-time I term can be written
ui(tk) = (Kp/Ti)·∫0^tk e(t) dt   (23.132)
Taking the time-derivative of (23.132):
ui'(tk) = (Kp/Ti)·e(tk)   (23.133)
Applying the Backward differentiation approximation to the time-derivative:
ui'(tk) ≈ [ui(tk) − ui(tk−1)]/Ts = (Kp/Ti)·e(tk)   (23.134)
Solving for ui(tk):
ui(tk) = ui(tk−1) + (Kp·Ts/Ti)·e(tk)   (23.135)
The total discrete-time PI control function is then
u(tk) = u0 + up(tk) + ui(tk)   (23.136)

2. Integral anti-windup can be implemented as follows:

• Calculate an intermediate value of the control variable u(tk) according to (23.136), but do not send this value to the actuator (which controls the process).

• Check if the intermediate u(tk) is greater than the maximum value umax (typically 100%) or less than the minimum value umin (typically 0%). If u(tk) exceeds one of these limits, calculate the I term ui(tk) once more, but now with
ui(tk) = ui(tk−1)   (23.137)
which implies that the I term is frozen, and calculate u(tk) once more according to (23.136) using ui(tk) given by (23.137).

• Send u(tk) to the actuator.
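The steps above can be sketched as code. The parameter values are arbitrary test values, and the final saturation of the value sent to the actuator is an assumption added for the illustration:

```python
# Discrete-time PI controller (23.135)-(23.136) with the anti-windup
# logic described above. Parameter values are arbitrary test values.
Kp, Ti, Ts = 2.0, 5.0, 0.1
u0, umin, umax = 0.0, 0.0, 100.0
ui = 0.0

def pi_step(e):
    """One sample of the PI algorithm with anti-windup."""
    global ui
    ui_cand = ui + Kp * Ts / Ti * e      # candidate I term, (23.135)
    u = u0 + Kp * e + ui_cand            # candidate control, (23.136)
    if not (umin <= u <= umax):
        ui_cand = ui                     # freeze the I term, (23.137)
        u = u0 + Kp * e + ui_cand
        u = min(max(u, umin), umax)      # saturate before sending (assumed)
    ui = ui_cand
    return u

for _ in range(1000):
    u = pi_step(100.0)                   # large persistent error
assert u == umax                         # output saturated
assert ui == 0.0                         # the I term never wound up
```

Without the freeze in (23.137), ui would keep growing during saturation and the controller would respond sluggishly once the error finally changed sign.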
Solution to Exercise 8.5
1. Setting Ti = ∞ and Td = 0 in the PID controller gives
u(tk) = u(tk−1) + [u0(tk) − u0(tk−1)] + Kp·[e(tk) − e(tk−1)]   (23.138)

2. In steady state (23.138) becomes
u(tk) = u(tk−1) + [u0(tk) − u0(tk−1)] + Kp·[es(tk) − es(tk−1)] = u(tk−1) + 0 + Kp·0   (23.139)
since both bracketed differences are zero in steady state. So, the control signal u will not change, even if the controller gain is increased. In other words: the P controller seems "dead". This is an inherent drawback of a discrete-time P controller based on the algorithm (23.138).

3. The I controller is
u(tk) = u(tk−1) + (Kp·Ts/Ti)·e(tk)   (23.140)
where the last term is the control increment ∆u. In (23.140) the control signal will change — or in other words: the control increment ∆u will be nonzero — as long as the control error is nonzero. So, the control signal will keep on changing until the control error has become zero.
Solution to Exercise 8.6
The time-step should be selected according to
Ts ≤ Tr/5 = 50/5 = 10 sec   (23.141)
So, Ts = 0.1 sec is definitely ok!
Solution to Exercise 8.7
1. Case B: Ts = 0.5 sec
Case C: Ts = 1 sec
2. Increased Ts gives increased time delay in the AD converter and consequently in the control loop, causing reduced stability of the control loop.
Solution to Exercise 9.1
Solving (9.3) — (9.4) for h1(tk+1) and h2(tk+1), respectively, gives the discrete-time state-space model:

h1(tk+1) = h1(tk) + (Ts/A1)·(Kp·u1(tk) − Kv1·√(ρ·g·h1(tk)/G)) ≡ f1   (23.142)
h2(tk+1) = h2(tk) + (Ts/A2)·(Kv1·√(ρ·g·h1(tk)/G) − Kv2·u2(tk)·√(ρ·g·h2(tk)/G)) ≡ f2   (23.143)

The output variables are
y1(tk) = h1(tk)   (23.144)
y2(tk) = h2(tk)   (23.145)

This state-space model is nonlinear due to the square root functions in which the state variables are arguments.
Solution to Exercise 9.2
[x1(k+1); x2(k+1)] = [−0.5 0; −3 −1]·[x1(k); x2(k)] + [0; 2]·u(k)   (23.146)

y(k) = [0 1]·[x1(k); x2(k)] + [4]·u(k)   (23.147)

where A = [−0.5 0; −3 −1], B = [0; 2], C = [0 1] and D = [4].
Solution to Exercise 9.3
By applying the Backward differentiation approximation to the time-derivatives, we get the following discrete-time model:

[h1(tk) − h1(tk−1)]/Ts = (1/A1)·(Kp·u1(tk) − Kv1·√(ρ·g·h1(tk)/G))   (23.148)
[h2(tk) − h2(tk−1)]/Ts = (1/A2)·(Kv1·√(ρ·g·h1(tk)/G) − Kv2·u2(tk)·√(ρ·g·h2(tk)/G))   (23.149)

By increasing the time index by one we get

[h1(tk+1) − h1(tk)]/Ts = (1/A1)·(Kp·u1(tk+1) − Kv1·√(ρ·g·h1(tk+1)/G))   (23.150)
[h2(tk+1) − h2(tk)]/Ts = (1/A2)·(Kv1·√(ρ·g·h1(tk+1)/G) − Kv2·u2(tk+1)·√(ρ·g·h2(tk+1)/G))   (23.151)

which can not be written on the following standard state-space model form:

h1(tk+1) = f1[h1(tk), h2(tk), · · · ]   (23.152)
h2(tk+1) = f2[h1(tk), h2(tk), · · · ]   (23.153)

because h1(tk+1) and h2(tk+1) appear in square root functions on the right hand sides.
Solution to Exercise 9.4
1. No, because the continuous-time state-space model is nonlinear.
2. Yes, because the model is linear.
Solution to Exercise 10.1
Z{y(k)} = y(z) = Σ_{k=0}^{∞} y(k)·z^(−k) = Σ_{k=0}^{∞} δ(k)·z^(−k)   (23.154)
= 1·z^(−0) + 0·z^(−1) + 0·z^(−2) + · · · + 0·z^(−n) + · · ·   (23.155)
= 1   (23.156)
Solution to Exercise 10.2
The Z-transform of the right side of (10.1) becomes
Z{A·S(k) + B·S(k)} = Z{(A + B)·S(k)} = (A + B)·z/(z − 1)   (23.157)
The left side of (10.1) becomes
A·Z{S(k)} + B·Z{S(k)} = A·z/(z − 1) + B·z/(z − 1) = (A + B)·z/(z − 1)   (23.158)
Hence, the Z-transform of the right side of (10.1) is equal to that of the left side, and consequently the linearity property holds.
Solution to Exercise 10.3
The signal is
y(k) = 4·S(k − 2)   (23.159)
Using the linearity property and the time-delay property of the Z-transform, we get
Z{y(k)} = y(z) = 4·z^(−2)·Z{S(k)} = 4·z^(−2)·z/(z − 1) = 4/(z² − z)   (23.160)
Solution to Exercise 10.4
Using the Z-transform pair denoted “Time exponential” we get
y(k) = 0.5^k   (23.161)
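The transform pairs of Exercises 10.3 and 10.4 can be sanity-checked numerically by truncating the defining sum Σ y(k)·z^(−k) at a point z outside the unit circle:

```python
# Numeric sanity check of the Z-transform pairs used in Exercises
# 10.3 and 10.4, evaluated at z = 2 (outside the unit circle).
z = 2.0

# Exercise 10.3: y(k) = 4*S(k-2)  <->  4/(z^2 - z)
lhs = sum(4.0 * z**(-k) for k in range(2, 200))
assert abs(lhs - 4.0 / (z**2 - z)) < 1e-12

# Exercise 10.4: y(k) = 0.5^k  <->  z/(z - 0.5)
lhs = sum(0.5**k * z**(-k) for k in range(200))
assert abs(lhs - z / (z - 0.5)) < 1e-12
```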
Solution to Exercise 11.1
Z-transformation gives
y(z) = (1/3)·[u(z) + z^(−1)·u(z) + z^(−2)·u(z)] = (1/3)·[1 + z^(−1) + z^(−2)]·u(z)   (23.162)
which gives the transfer function
y(z)/u(z) = (1/3)·[1 + z^(−1) + z^(−2)]   (23.163)
Solution to Exercise 11.2
Cross-multiplication gives
[b·z² + c·z + d]·y(z) = a·u(z)
or
b·z²·y(z) + c·z·y(z) + d·y(z) = a·u(z)   (23.164)
which inverse-transformed gives this difference equation:
b·y(k + 2) + c·y(k + 1) + d·y(k) = a·u(k)
Solution to Exercise 11.3
The Z-transform of y becomes
y(z) = H(z)·u(z) = A·z/(z − 1)   (23.165)
y(k) is given by the inverse transform of y(z):
y(k) = Z^(−1){y(z)} = Z^(−1){A·z/(z − 1)} = A   (23.166)
(which is a step of amplitude A at time zero).
Solution to Exercise 11.4
1. The static transfer function is
Hs = H(z = 1) = (1/3)·[1 + 1^(−1) + 1^(−2)] = 1   (23.167)

2. The steady-state filter output is
ys = Hs·U = 1·U = U   (23.168)
Solution to Exercise 11.5
It is convenient to start by rewriting the transfer function as follows:
H(z) = (z^(−1) + b·z^(−2))/(1 − a·z^(−1)) · z²/z² = (z + b)/(z² − a·z) = (z + b)/[(z − a)·z]   (23.169)
Thus, the zero is
z = −b   (23.170)
and the poles are
p1 = a;  p2 = 0   (23.171)
Solution to Exercise 11.6
(11.4) can be written
s·y(s) = u(s)   (23.172)
The inverse Laplace transform gives
y'(t) = u(t)   (23.173)
Applying Backward discretization to the time-derivative and introducing discrete-time notation:
y'(tk) ≈ [y(tk) − y(tk−1)]/Ts = u(tk)   (23.174)
Solving for y(tk):
y(tk) = y(tk−1) + Ts·u(tk)   (23.175)
Taking the Z-transform:
y(z) = z^(−1)·y(z) + Ts·u(z)   (23.176)
The transfer function becomes
Hdisc(z) = y(z)/u(z) = Ts/(1 − z^(−1)) = z·Ts/(z − 1)   (23.177)
Solution to Exercise 12.1
The amplitude gain function is
A(ω) = |H(e^(jωTs))| = |1/e^(jωTs)| = 1/|e^(jωTs)| = 1/1 = 1   (23.178)
The phase lag function is
φ(ω) = arg H(e^(jωTs)) = arg(1/e^(jωTs)) = arg e^(−jωTs) = −ωTs = −ω·0.05 [rad]   (23.179)
Solution to Exercise 12.2
1. There is a difference between the frequency responses of Hcont(s) and Hdisc(z) because the discrete-time filter is derived from the continuous-time filter using an approximation of the time-derivatives in the filter model when that model is written as a differential equation, cf. Section 8.3 in the text-book.

An explanation of why the difference increases with increasing frequency is that the approximation of the time-derivative becomes less accurate when signals vary faster (i.e. have higher frequency).

2. The frequency response curves of Hdisc(z) are unique up to the Nyquist frequency:
fN = 1/(2Ts) = 1/(2·0.2) = 2.5 Hz   (23.180)
ωN = π/Ts = π/0.2 = 5π rad/s = 15.7 rad/s   (23.181)
which is the frequency indicated with a vertical line in Figure 12.1.
Solution to Exercise 13.1
The impulse response is equal to the transfer function:
yδ(z) = H(z)·uδ(z) = H(z) = 1/(z − 0.5)   (23.182)
From Eq. (10.9) in the text-book,
yδ(k) = 0.5^k   (23.184)
The first five values of the impulse response are
0.5⁰, 0.5¹, 0.5², 0.5³, 0.5⁴ = 1.0, 0.5, 0.25, 0.125, 0.0625   (23.185)
Obviously, the impulse response converges towards zero as time goes to infinity. Hence, the transfer function is asymptotically stable.
Solution to Exercise 13.2

The transfer function
H1(z) = 1/(z − 0.5)   (23.186)
is asymptotically stable since the pole p = 0.5 is inside the unit circle.

The transfer function
H2(z) = 1/(z + 0.5)   (23.187)
is asymptotically stable since the pole p = −0.5 is inside the unit circle.

The transfer function
H3(z) = (z − 2)/(z − 0.5)   (23.188)
is asymptotically stable since the pole p = 0.5 is inside the unit circle. (The zero is 2, but the stability property is independent of the value of the zero.)

The transfer function
H4(z) = 1/(z − 1)   (23.189)
is marginally stable since the pole p = 1 is on the unit circle, and that pole is single.

The transfer function
H5(z) = 1/(z − 2)   (23.190)
is unstable since the pole p = 2 is outside the unit circle.

The transfer function
H6(z) = 1/(z − 1)²   (23.191)
is unstable since there are multiple (two) poles, namely p1,2 = 1, on the unit circle.

The transfer function
H7(z) = 1/z   (23.192)
is asymptotically stable since the pole p = 0 is inside the unit circle.

The transfer function
H8(z) = 1/(z² − 2.5z + 1)   (23.193)
is unstable since one of the poles is outside the unit circle. The poles are p1,2 = 0.5, 2.0.
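The discrete-time pole criterion used above is the unit-circle analogue of the continuous-time one, and can be sketched the same way:

```python
# Discrete-time pole criterion of Exercise 13.2: asymptotically stable
# if all |p| < 1; unstable if some |p| > 1, or a repeated pole lies on
# the unit circle; marginally stable otherwise.
def classify_z(poles, tol=1e-9):
    if any(abs(p) > 1 + tol for p in poles):
        return "unstable"
    on_circle = [p for p in poles if abs(abs(p) - 1) <= tol]
    for p in on_circle:
        if sum(1 for q in on_circle if abs(q - p) <= tol) > 1:
            return "unstable"
    return "marginally stable" if on_circle else "asymptotically stable"

assert classify_z([0.5]) == "asymptotically stable"    # H1
assert classify_z([1.0]) == "marginally stable"        # H4
assert classify_z([2.0]) == "unstable"                 # H5
assert classify_z([1.0, 1.0]) == "unstable"            # H6
assert classify_z([0.5, 2.0]) == "unstable"            # H8
```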
Solution to Exercise 13.3
The pole of (13.11) is
p = 1 − Ts/T   (23.194)
The system is asymptotically stable if
|p| = |1 − Ts/T| < 1   (23.195)
If
1 − Ts/T > 0   (23.196)
(23.195) becomes
1 − Ts/T < 1   (23.197)
which gives
Ts > 0   (23.198)
which is always satisfied.
If
1 − Ts/T < 0   (23.199)
(23.195) becomes
−(1 − Ts/T) < 1   (23.200)
which gives
Ts < 2T   (23.201)
So, the system is asymptotically stable if
Ts < 2T   (23.202)
Solution to Exercise 13.4
The stability property is determined by the system eigenvalues, which are the roots of the characteristic equation:
det(zI − A) = det(z·[1 0; 0 1] − [1 0.5; 0 0.9])   (23.203)
            = det([z − 1  −0.5; 0  z − 0.9])   (23.204)
            = (z − 1)·(z − 0.9) − 0·(−0.5)   (23.205)
            = (z − 1)·(z − 0.9)   (23.206)
The roots are
z1,2 = 1, 0.9   (23.207)
One pole is on the unit circle, and the other pole is inside the unit circle. Therefore, the system is marginally stable.
Solution to Exercise 14.1
The loop transfer function is
L(z) = Hc(z)·Hpm(z) = Kp·z^(−1)/(1 − z^(−1))   (23.208)
The tracking function is
T(z) = L(z)/(1 + L(z)) = [Kp·z^(−1)/(1 − z^(−1))] / [1 + Kp·z^(−1)/(1 − z^(−1))] = Kp/(z − 1 + Kp) = Kp/(z − (1 − Kp))   (23.209)
The pole of T(z) is
p = 1 − Kp   (23.210)
The control system is asymptotically stable if the absolute value of the pole is less than one:
|p| = |1 − Kp| < 1   (23.211)
which is obtained with
0 < Kp < 2   (23.212)
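The stability range 0 < Kp < 2 can be observed directly by simulating the closed loop implied by T(z):

```python
# Closed-loop simulation of Exercise 14.1: T(z) = Kp/(z - (1 - Kp))
# gives the recursion y(k+1) = (1 - Kp)*y(k) + Kp*ySP(k).
def simulate(Kp, n=200, ysp=1.0):
    y = 0.0
    for _ in range(n):
        y = (1 - Kp) * y + Kp * ysp
    return y

assert abs(simulate(0.5) - 1.0) < 1e-9    # 0 < Kp < 2: converges to ySP
assert abs(simulate(1.9) - 1.0) < 1e-3    # still stable, but oscillatory
assert abs(simulate(2.5)) > 1e6           # Kp > 2: diverges
```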
Solution to Exercise 14.2
From the Bode diagram:
GM = 7 dB = 2.2   (23.213)
PM = 35 deg   (23.214)
ωc = 0.34 rad/s   (23.215)
ω180 = 0.58 rad/s   (23.216)
Solution to Exercise 15.1
x(k) = {x(0), x(1), x(2)} = {0.73, 1.23, 0.89}   (23.217)

1. Mean value:
mx = (1/N)·Σ_{k=0}^{N−1} x(k) = (1/3)·[x(0) + x(1) + x(2)] = (1/3)·[0.73 + 1.23 + 0.89] = 0.95   (23.218)–(23.221)

2. Variance:
σx² = [1/(N − 1)]·Σ_{k=0}^{N−1} [x(k) − mx]²   (23.222)
    = [1/(3 − 1)]·{[x(0) − mx]² + [x(1) − mx]² + [x(2) − mx]²}   (23.223)–(23.224)
    = (1/2)·{[0.73 − 0.95]² + [1.23 − 0.95]² + [0.89 − 0.95]²}   (23.225)
    = 0.0652   (23.226)

3. Standard deviation:
σx = √(σx²) = √0.0652 = 0.255   (23.227)

4. Calculate the auto-covariance Rx(L) with L = 0 and L = 1 for the following options (the same options that are defined for cross-covariance in the text-book):

(a) Raw estimate of auto-covariance: General formula:
Rx(L) = Σ_{k=0}^{N−1−|L|} [x(k + L) − mx]·[x(k) − mx] = Rx(L)Raw   (23.228)
L = 0:
Rx(0) = Σ_{k=0}^{2} [x(k) − mx]²
      = [0.73 − 0.95]² + [1.23 − 0.95]² + [0.89 − 0.95]²
      = 0.1304 = Rx(0)Raw   (23.229)–(23.238)
L = 1:
Rx(1) = Σ_{k=0}^{1} [x(k + 1) − mx]·[x(k) − mx]
      = [1.23 − 0.95]·[0.73 − 0.95] + [0.89 − 0.95]·[1.23 − 0.95]
      = −0.0784 = Rx(1)Raw   (23.239)–(23.245)

(b) Normalized estimate: General formula:
Rx(L) = [1/Rx(0)Raw]·Rx(L)Raw   (23.246)
L = 0:
Rx(0) = Rx(0)Raw/Rx(0)Raw = 1   (23.247)
L = 1:
Rx(1) = Rx(1)Raw/Rx(0)Raw = (−0.0784)/0.1304 = −0.6012   (23.248)

(c) Unbiased estimate: General formula:
Rx(L) = [1/(N − L)]·Rx(L)Raw   (23.249)
L = 0:
Rx(0) = (1/3)·Rx(0)Raw = (1/3)·0.1304 = 0.0435   (23.250)
L = 1:
Rx(1) = (1/2)·Rx(1)Raw = (1/2)·(−0.0784) = −0.0392   (23.251)

(d) Biased estimate: General formula:
Rx(L) = (1/N)·Rx(L)Raw   (23.252)
L = 0:
Rx(0) = (1/3)·Rx(0)Raw = (1/3)·0.1304 = 0.0435   (23.253)
L = 1:
Rx(1) = (1/3)·Rx(1)Raw = (1/3)·(−0.0784) = −0.0261   (23.254)
Solution to Exercise 15.2
1. Mean value:
mx = ∫_{−A}^{A} x·P(x) dx
   = ∫_{−A}^{A} x·(1/(2A)) dx
   = (1/(2A))·[x²/2]_{−A}^{A}
   = (1/(2A))·[A²/2 − (−A)²/2]
   = 0   (23.255)–(23.259)

2. Variance:
σx² = ∫_{−A}^{A} (x − mx)²·P(x) dx
    = ∫_{−A}^{A} (x − 0)²·(1/(2A)) dx
    = (1/(2A))·∫_{−A}^{A} x² dx
    = (1/(2A))·[x³/3]_{−A}^{A}
    = (1/(2A))·[A³/3 − (−A)³/3]
    = A²/3   (23.260)–(23.265)
Solution to Exercise 15.3
See Figure 23.11.
The standard deviation is
σx = √(σx²) = √4 = 2   (23.266)
Solution to Exercise 15.4
1. With a = 0 the filter model is
x(k) = v(k) (23.267)
So, if the input is white, the output is white.
Figure 23.11: [Auto-correlation function Rx(L) of the white noise: a single spike of height Var(x) = 4 at lag L = 0, and zero at all other lags L = ±1, ±2, ±3, . . .]
2. If a is between 0 and 1, the filter output x(k) depends not only onthe input v(k) but also on x(k − 1) which is x at the previous timestep. Therefore, x(k) will not vary purely randomly (it will notbecome purely white) — it is “coloured”.
Solution to Exercise 15.5
y expressed as a function of u:
y = G·u + C   (23.268)
where G and C are calculated from the following formulas:
my = G·mu + C   (23.269)
and
σy² = G²·σu²   (23.270)
where
mu = 0, my = 3, σu² = 1, σy² = 4   (23.271)–(23.274)
Now, from (23.270) we get
G = √(σy²/σu²) = √(4/1) = 2   (23.275)
and from (23.269) we get
C = my − G·mu = 3 − 2·0 = 3   (23.276)
Solution to Exercise 16.1
Figure 23.12 shows the points, the line and the prediction errors ei.

Figure 23.12: [The data points, the fitted line z = 1.6x − 0.6, and the prediction errors e1, e2, e3]
Solution to Exercise 16.2
Writing the model on standard regression form:
h(k) − h(k − 1) = [−√(h(k−1))  u(k−1)]·[a; b]   (23.277)
The total model becomes
[h(1) − h(0); h(2) − h(1); h(3) − h(2); h(4) − h(3)] = [−√(h(0)) u(0); −√(h(1)) u(1); −√(h(2)) u(2); −√(h(3)) u(3)]·[a; b]   (23.278)
where the left side vector is Y, the matrix is Φ, and the parameter vector [a; b] is θ.
Solution to Exercise 16.3
K is given by
yk = K = 1·K = ϕ·θ   (23.279)–(23.281)
The LS estimate is
θLS = KLS = (ΦᵀΦ)^(−1)·Φᵀ·y   (23.282)–(23.283)
with Φ = [1; . . . ; 1] and y = [K0 + e1; K0 + e2; . . . ; K0 + eN], so
KLS = N^(−1)·[N·K0 + Σ_{k=1}^{N} ek]   (23.284)–(23.285)
    = K0 + (1/N)·Σ_{k=1}^{N} ek   (23.286)
    = K0 + me   (23.287)
where me is the mean value of the noise. From (23.287) you can see that KLS does not converge towards K0 if the mean value me is different from zero.
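A numerical illustration of the bias result (23.287): with ϕ = 1 the LS estimate of a constant K0 is just the sample mean of y, so any nonzero noise mean shows up directly in the estimate. The noise sequence below is made up for the illustration:

```python
# Bias of the LS estimate of a constant, cf. (23.287).
K0 = 2.0
e = [0.1, -0.1, 0.3, -0.1, 0.3]     # made-up noise, mean me = 0.1
y = [K0 + ek for ek in e]

# (Phi^T Phi)^-1 Phi^T y reduces to the sample mean when Phi is all ones
K_ls = sum(y) / len(y)
me = sum(e) / len(e)

assert abs(K_ls - (K0 + me)) < 1e-12    # estimate = K0 + noise mean
assert abs(K_ls - 2.1) < 1e-12          # biased away from K0 = 2.0
```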
Solution to Exercise 16.4
According to the Parsimony Principle, you should select the smallest orderwith relatively small value of the criterion function. So, you should selectorder n2.
Solution to Exercise 16.5
1.
Vnorm = (1/N)·Σ_{k=1}^{N} [yk − (cLS·xk + dLS)]²   (23.288)
      = (1/3)·{[y1 − (cLS·x1 + dLS)]² + [y2 − (cLS·x2 + dLS)]² + [y3 − (cLS·x3 + dLS)]²}   (23.289)
      = (1/3)·{[0.8 − (1.6·1 − 0.6)]² + [3.0 − (1.6·2 − 0.6)]² + [4.0 − (1.6·3 − 0.6)]²}   (23.290)
      = 0.08   (23.291)

2. With the two datasets
(y1, x1) = (0.8, 1.0)   (23.292)
(y2, x2) = (3.0, 2.0)   (23.293)
the model equations are
0.8 = c·1 + d = 2.2·1 + (−1.4) = 0.8   (23.294)
3.0 = c·2 + d = 2.2·2 + (−1.4) = 3.0   (23.295)
Hence, it is shown that c = 2.2 and d = −1.4 is a perfect match to these two data sets.

With c = 2.2 and d = −1.4 the criterion becomes
Vnorm = (1/N)·Σ_{k=1}^{N} [yk − (c·xk + d)]²   (23.296)
      = (1/3)·{[0.8 − (2.2·1 − 1.4)]² + [3.0 − (2.2·2 − 1.4)]² + [4.0 − (2.2·3 − 1.4)]²}   (23.297)–(23.298)
      = 0.48   (23.299)
which is larger than the LS value Vnorm = 0.08. Hence, it is demonstrated that c = 2.2 and d = −1.4 are not LS parameters!
Solution to Exercise 16.6
The order should be large enough to include the time-delay. One time step of 0.5 sec corresponds to a time-delay of 0.5 s, and one such time-delay is represented by the factor z^(−1) in the transfer function. The model should therefore include 2/0.5 = 4 such factors. Hence, the minimum order of the transfer function is 4.
Solution to Exercise 16.7
The model written on the standard LS form:
(KT/Ra)·[va(tk) − Ke·ω(tk)] = [ω̇(tk)  1]·[J; TL]   (23.300)
where the left hand side is y, the regressor [ω̇(tk) 1] is ϕ, and [J; TL] is the parameter vector θ. The angular acceleration ω̇(tk) is calculated with the center difference method:
ω̇(tk) ≈ [ω(tk+1) − ω(tk−1)]/(2Ts)   (23.301)
where Ts is the time step.
Solution to Exercise 16.8
You can remove the mean values of both the input and the output signals (time series or sequences), and use the deviations as new input and output. In more detail: Assume that u(k) is the original input signal and y(k) is the output signal, and that mu and my are the respective mean values. The deviation signals are then
du(k) = u(k) − mu   (23.302)
and
dy(k) = y(k) − my   (23.303)
The signals du(k) and dy(k) are used as input and output signals.
Solution to Exercise 17.1
1. L is modeled as
L'(t) = 0   (23.304)
(17.1) and (23.304) written as a state-space model (the time argument t is omitted for simplicity):
S' = (1/Tm)·[−S + Km·(u + L)] ≡ f1   (23.305)
L' = 0 ≡ f2   (23.306)
The general observer formula for continuous-time implementation is
xe' = f(xe, u) + K·e   (23.307)
In detail this becomes
Se' = f1(Se, Le, u) + K1·e = (1/Tm)·[−Se + Km·(u + Le)] + K1·e   (23.308)
Le' = f2(Se, Le, u) + K2·e = K2·e   (23.309)
where
e = S − Se = speed measurement − speed estimate   (23.310)

2. The general observer formula for discrete-time implementation is
xe(tk+1) = xe(tk) + Ts·[f(·, tk) + K·e(tk)]   (23.311)
In detail this becomes
Se(tk+1) = Se(tk) + Ts·[f1(·, tk) + K1·e(tk)]
         = Se(tk) + Ts·((1/Tm)·{−Se(tk) + Km·[u(tk) + Le(tk)]} + K1·e(tk))   (23.312)–(23.313)
Le(tk+1) = Le(tk) + Ts·[f2(·, tk) + K2·e(tk)] = Le(tk) + Ts·K2·e(tk)   (23.314)–(23.315)
3. To find the observer gains K1 and K2 we need a linearized state-space model on the form
∆x' = A·∆x + B·∆u   (23.316)
∆y = C·∆x + D·∆u   (23.317)
Linearization of the state-space model (23.305), (23.306) gives¹
∆S' = (∂f1/∂S)·∆S + (∂f1/∂L)·∆L + (∂f1/∂u)·∆u = −(1/Tm)·∆S + (Km/Tm)·∆L + (Km/Tm)·∆u   (23.318)–(23.319)
∆L' = (∂f2/∂S)·∆S + (∂f2/∂L)·∆L + (∂f2/∂u)·∆u = 0·∆S + 0·∆L + 0·∆u = 0   (23.320)–(23.322)
The measurement is S. On matrix-vector form the linearized state-space model is
[∆S'; ∆L'] = [∂f1/∂S  ∂f1/∂L; ∂f2/∂S  ∂f2/∂L]·[∆S; ∆L] + [∂f1/∂u; ∂f2/∂u]·∆u   (23.323)
           = [−1/Tm  Km/Tm; 0  0]·[∆S; ∆L] + [Km/Tm; 0]·∆u   (23.324)
where the first matrix is A and the second is B.
∆S = [1  0]·[∆S; ∆L] + [0]·∆u   (23.325)
where C = [1 0] and D = [0].
The eigenvalues of the observer error dynamics are the roots of the characteristic equation:
0 = det[sI − (A − K·C)]   (23.326)
  = det([s 0; 0 s] − ([−1/Tm  Km/Tm; 0  0] − [K1; K2]·[1  0]))   (23.327)
  = det([s + 1/Tm + K1  −Km/Tm; K2  s])   (23.328)
  = s² + (1/Tm + K1)·s + K2·Km/Tm   (23.329)
The second order polynomial (23.329) is to be compared with the second order Butterworth polynomial
B2(s) = (Ts)² + 1.4142·(Ts) + 1 = T²s² + 1.4142·T·s + 1   (23.330)
where T is
T = Tr/n   (23.331)
where Tr is the specified observer response time and n = 2 is the order of the observer. Before comparing polynomials we divide (23.330) by T² so that it gets the same form as (23.329):
B2*(s) = s² + (1.4142/T)·s + 1/T² ≡ s² + (1/Tm + K1)·s + K2·Km/Tm   (23.332)
Comparing coefficients:
1.4142/T = 1/Tm + K1   (23.333)
1/T² = K2·Km/Tm   (23.334)
which gives
K1 = 1.4142/T − 1/Tm = 1.4142·n/Tr − 1/Tm   (23.335)
K2 = Tm/(Km·T²) = Tm·n²/(Km·Tr²)   (23.336)

¹ Since the original model is linear, it is actually not necessary to perform the linearization, but I do it here anyway to demonstrate the procedure.
4. The observability test is made with the matrices A and C of the linearized state-space model. The observability matrix is (n = 2)
Mobs = [C; C·A^(2−1)] = [C; C·A]   (23.337)
     = [1  0; −1/Tm  Km/Tm]   (23.338)–(23.339)
The determinant of Mobs is
det(Mobs) = 1·(Km/Tm) − 0·(−1/Tm) = Km/Tm   (23.340)
Since Km/Tm ≠ 0, the rank of Mobs is full (2), and the system is observable.
5. To speed up the response of the estimates, the observer response time Tr can be decreased. This will increase the observer gains, and the estimates become more noisy.

6. To make the estimates smoother you can let the estimates pass through lowpass filters.
7. The feedforward control function is derived from the motor model, which is

Tm·\dot{S}(t) + S(t) = Km[u(t) + L(t)]   (23.341)

In this model, the speed is substituted by its reference or setpoint Sr, the load is substituted by its estimate, and then the equation is solved for the control variable. The result is

uf(t) = (1/Km)[Tm·\dot{S}r(t) + Sr(t)] - Le(t)   (23.342)

In Exercise 18.1 experimental results with feedforward control with this motor are shown. In that exercise a Kalman Filter is used instead of an observer, but you can expect the same results with an observer.
8. Increased robustness against sensor failure can be obtained as follows: The feedback to the speed controller (e.g. a PI controller) is permanently based on the estimated measurement, Se, as calculated by an observer. If the sensor is failing (assuming some kind of measurement error detection has been implemented, of course), the estimates are not updated by the (erroneous) measurement. This can be implemented by multiplying K1e in (23.308) and K2e in (23.309) by zero so that the effective continuous-time observer formulas are

\dot{S}e = (1/Tm)[-Se + Km(u + Le)]   (23.343)

\dot{L}e = 0   (23.344)

The discrete-time observer formulas are

Se(tk+1) = Se(tk) + Ts·(1/Tm)[-Se(tk) + Km(u(tk) + Le(tk))]   (23.345)

Le(tk+1) = Le(tk)   (23.346)
Hence, the observer is just a simulator of the motor. So, during sensor failure, the speed controller acts on the basis of the simulated speed, and this is better than acting on the basis of an erroneous speed measurement.

In Exercise 18.1 experimental results with sensor failure with this motor are shown. In that exercise a Kalman Filter is used instead of an observer, but you can expect the same results with an observer.
Solution to Exercise 17.2
1. According to Figure 17.1 in the text-book, the updating of the time-derivative of the estimates is proportional to the measurement estimation error:

\dot{x}e = f(xe, u) + Ke   (23.347)

so it is proportional compensation.
2. In control theory it is well known that proportional + integral control makes the steady-state control error become zero. So, it is reasonable to suggest that a proportional + integral (PI) based compensator in an observer may remove the steady-state estimation error. Actually, it can be shown that such proportional + integral compensation is obtained by augmenting the original state variables with state variables representing disturbances to be estimated, cf. Section 17.6 in the text-book. And such state augmentation may remove the steady-state estimation error.
Solution to Exercise 17.3
1. State equations:

\dot{p} = v (= f1)   (23.348)
\dot{v} = a (= f2)   (23.349)
\dot{a} = 0 (= f3)   (23.350)

Measurement:

y = p   (23.351)
2. The general observer formula for continuous-time implementation is
\dot{x}e = f(xe, u) + Ke   (23.352)

In detail this becomes

\dot{p}e = f1(pe, ve, ae) + K1·e = ve + K1·e   (23.353)
\dot{v}e = f2(pe, ve, ae) + K2·e = ae + K2·e   (23.354)
\dot{a}e = f3(pe, ve, ae) + K3·e = K3·e   (23.355)

where

e = p - pe = Position measurement - Position estimate   (23.356)
3. The general observer formula for discrete-time implementation is
xe(tk+1) = xe(tk) + Ts[f(·, tk) + K·e(tk)]   (23.357)

In detail this becomes

pe(tk+1) = pe(tk) + Ts[ve(tk) + K1·e(tk)]   (23.358)
ve(tk+1) = ve(tk) + Ts[ae(tk) + K2·e(tk)]   (23.359)
ae(tk+1) = ae(tk) + Ts·K3·e(tk)   (23.360)
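The discrete-time formulas (23.358)-(23.360) can be run directly in a loop. The sketch below feeds the observer with a simulated constant-acceleration motion. The gains and the sampling time are assumed example values (not the gains computed later in the exercise), chosen so that the continuous-time observer poles lie at s = -20.

```python
Ts = 0.01                          # sampling time [s] (assumed)
K1, K2, K3 = 60.0, 1200.0, 8000.0  # example gains: s^3+K1*s^2+K2*s+K3 = (s+20)^3

pe, ve, ae = 0.0, 0.0, 0.0         # observer state estimates
p, v, a = 0.0, 0.0, 2.0            # simulated true motion (constant acceleration)

for _ in range(2000):
    e = p - pe                     # innovation, eq. (23.356)
    pe += Ts * (ve + K1 * e)       # eq. (23.358)
    ve += Ts * (ae + K2 * e)       # eq. (23.359)
    ae += Ts * (K3 * e)            # eq. (23.360)
    p += Ts * v                    # simulate the true motion one Euler step
    v += Ts * a

print(abs(ae - a))   # the acceleration estimate converges to the true value
```

Because the truth and the observer are stepped with the same Euler scheme, the estimation error obeys exactly ε(k+1) = [I + Ts(A - KC)]ε(k), whose eigenvalues here are 0.8, so the estimates converge geometrically.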
4. To find the observer gains K1, K2 and K3 we need a linearized state-space model on the form

\Delta\dot{x} = A\Delta x + B\Delta u   (23.361)

\Delta y = C\Delta x + D\Delta u   (23.362)

Linearization of the state-space model (23.348)-(23.350) gives2

\Delta\dot{p} = (∂f1/∂p)\Delta p + (∂f1/∂v)\Delta v + (∂f1/∂a)\Delta a = 0·\Delta p + 1·\Delta v + 0·\Delta a   (23.363)
\Delta\dot{v} = (∂f2/∂p)\Delta p + (∂f2/∂v)\Delta v + (∂f2/∂a)\Delta a = 0·\Delta p + 0·\Delta v + 1·\Delta a   (23.364)
\Delta\dot{a} = (∂f3/∂p)\Delta p + (∂f3/∂v)\Delta v + (∂f3/∂a)\Delta a = 0·\Delta p + 0·\Delta v + 0·\Delta a   (23.365)
The measurement is p. In matrix-vector form the linearized state-space model is (the model has no B\Delta u-term and no D\Delta u-term)

[\Delta\dot{p}; \Delta\dot{v}; \Delta\dot{a}] = [∂f1/∂p, ∂f1/∂v, ∂f1/∂a; ∂f2/∂p, ∂f2/∂v, ∂f2/∂a; ∂f3/∂p, ∂f3/∂v, ∂f3/∂a] [\Delta p; \Delta v; \Delta a]   (23.366)
= [0, 1, 0; 0, 0, 1; 0, 0, 0] [\Delta p; \Delta v; \Delta a]   (23.367)

where the coefficient matrix is A. Measurement equation:

\Delta p = [1, 0, 0] [\Delta p; \Delta v; \Delta a]   (23.368)

where [1, 0, 0] is C.

2 Since the original model is linear, it is actually not necessary to perform the linearization, but I do it here anyway to demonstrate the procedure.
The eigenvalues of the observer error dynamics are the roots of the characteristic equation:

0 = det[sI - (A - KC)]   (23.369)
= det([s, 0, 0; 0, s, 0; 0, 0, s] - ([0, 1, 0; 0, 0, 1; 0, 0, 0] - [K1; K2; K3][1, 0, 0]))   (23.370)
= det[s + K1, -1, 0; K2, s, -1; K3, 0, s]   (23.371)
= (s + K1)·det[s, -1; 0, s] - (-1)·det[K2, -1; K3, s] + 0·det[K2, s; K3, 0]   (23.372)-(23.374)
= (s + K1)s² - (-1)·(K2·s + K3) + 0   (23.375)
= s³ + K1s² + K2s + K3   (23.376)
The third order polynomial (23.376) is to be compared with the third order Butterworth polynomial

B3(s) = (Ts)³ + 2(Ts)² + 2(Ts) + 1   (23.377)

where

T = Tr/n   (23.378)

where Tr is the specified observer response time and n = 3 is the order of the observer. Before comparing polynomials we divide (23.377) by T³ so that it gets the same form as (23.376):

B3*(s) = s³ + (2/T)s² + (2/T²)s + 1/T³ ≡ s³ + K1s² + K2s + K3   (23.379)

Comparing coefficients gives

K1 = 2/T = 2n/Tr   (23.380)

K2 = 2/T² = 2n²/Tr²   (23.381)

K3 = 1/T³ = n³/Tr³   (23.382)
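A numeric sketch of (23.380)-(23.382), with an assumed response time Tr = 0.6 s:

```python
Tr = 0.6        # specified observer response time [s] (assumed example)
n = 3           # order of the observer
T = Tr / n      # eq. (23.378)

K1 = 2 / T      # eq. (23.380)
K2 = 2 / T**2   # eq. (23.381)
K3 = 1 / T**3   # eq. (23.382)
print(K1, K2, K3)
```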
Solution to Exercise 18.1
1. L is modeled as

\dot{L}(t) = 0   (23.383)

(18.1) and (23.383) written as a state-space model (the time argument t is omitted for simplicity):

\dot{S} = (1/Tm)[-S + Km(u + L)] (= fcont1)   (23.384)

\dot{L} = 0 (= fcont2)   (23.385)
Applying Forward discretization with time step Ts and including white disturbance noise in the resulting difference equations yields the following discrete-time state-space model of the motor:

S(k+1) = S(k) + Ts·fcont1(·, k) + w1(k)   (23.386)
= S(k) + (Ts/Tm)[-S(k) + Km(u(k) + L(k))] + w1(k)   (23.387)-(23.388)

L(k+1) = L(k) + Ts·fcont2(·, k) + w2(k)   (23.389)
= L(k) + w2(k)   (23.390)

(The right-hand sides without the noise terms define f1 and f2, respectively.)
w1 and w2 are independent (uncorrelated) white process noises with assumed variances Q11 and Q22, respectively. The noise vector becomes

w = [w1; w2]   (23.391)

having covariance

Q = [Q11, 0; 0, Q22]   (23.392)
The speed S is measured, so we add the following measurement equation to the state-space model:

y(k) = S(k) + v(k)   (23.393)

where v is white measurement noise with assumed variance R.
The Kalman Filter formulas are as follows.
It is assumed that the predicted speed estimate Sp(k) exists (initially it is a "guess" of the estimate, and later it is predicted using the model, see below).

The innovation variable is calculated as

e(k) = S(k) - Sp(k)   (23.394)

where S(k) is the speed measurement.

The corrected state estimates, which are used as the applied estimates, are

Sc(k) = Sp(k) + K11·e(k)
Lc(k) = Lp(k) + K21·e(k)   (23.395)

The predicted state estimates for the next time step are

Sp(k+1) = Sc(k) + (Ts/Tm)[-Sc(k) + Km(u(k) + Lc(k))]
Lp(k+1) = Lc(k)   (23.396)
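The corrector/predictor pair (23.394)-(23.396) is a few lines of code. The sketch below uses the motor parameters from Appendix A.1, while the sampling time and the steady-state Kalman gains K11, K21 are assumed example values (in practice they come from a Kalman-gain computation).

```python
Tm, Km, Ts = 0.3, 0.17, 0.02   # model parameters (Appendix A.1), sampling time
K11, K21 = 0.5, 1.0            # assumed steady-state Kalman gains

def kf_step(Sp, Lp, y, u):
    """One corrector + predictor step; y is the speed measurement S(k)."""
    e = y - Sp                                        # innovation, eq. (23.394)
    Sc = Sp + K11 * e                                 # corrected estimates,
    Lc = Lp + K21 * e                                 # eq. (23.395)
    Sp_next = Sc + (Ts / Tm) * (-Sc + Km * (u + Lc))  # predictions,
    Lp_next = Lc                                      # eq. (23.396)
    return Sp_next, Lp_next, Sc, Lc

Sp_next, Lp_next, Sc, Lc = kf_step(Sp=0.0, Lp=0.0, y=0.1, u=1.0)
```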
2. The quantities needed for calculating the steady-state Kalman Filter gain, Ks, are indicated in Figure 23.13. These quantities are given below.

Figure 23.13: Illustration of what information is needed to compute the steady-state Kalman Filter gain, Ks: the transition matrix A, the process noise gain matrix G, the measurement gain matrix C, the process noise auto-covariance Q, and the measurement noise auto-covariance R.

Transition matrix of linearized model:

A = I + Ts·(∂fcont/∂x)|Sp(k), Lp(k), u(k)   (23.397)
= [1, 0; 0, 1] + Ts·[-1/Tm, Km/Tm; 0, 0]   (23.398)
= [1 - Ts/Tm, Ts·Km/Tm; 0, 1]   (23.399)
Process noise gain matrix:
Process noise gain matrix:

G = [1, 0; 0, 1]   (23.400)

Measurement gain matrix:

C = [1, 0]   (23.401)

(since the measurement is S = C·[S, L]ᵀ = [1, 0]·[S, L]ᵀ).

Process-noise covariance:

Q = [Q11, 0; 0, Q22]   (23.402)

Measurement-noise covariance: the scalar R.   (23.403)
3. The observability test is made with the matrices A and C. The observability matrix is (n = 2)

Mobs = [C; CA^(2-1)] = [C; CA]   (23.404)
= [[1, 0]; [1, 0]·[1 - Ts/Tm, Ts·Km/Tm; 0, 1]]   (23.405)
= [1, 0; 1 - Ts/Tm, Ts·Km/Tm]   (23.406)

The determinant of Mobs is

det(Mobs) = 1·(Ts·Km/Tm) - (1 - Ts/Tm)·0 = Ts·Km/Tm   (23.407)

Since Ts·Km/Tm ≠ 0, the rank of Mobs is full (2), and the system is observable.
4. A reasonable value of the measurement noise covariance R can be found by calculating the variance of a sequence (time series) of the measurement.
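For example, with a recorded measurement series taken at constant operating conditions (the numbers below are made-up sample values):

```python
import statistics

# Made-up speed measurements [krpm] recorded at a steady operating point.
samples = [1.02, 0.98, 1.01, 0.99, 1.00, 1.03, 0.97, 1.00]

R = statistics.pvariance(samples)   # variance of the series -> estimate of R
print(R)
```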
5. To speed up the response of the estimate Lest = Lc, the noise variance Q22 can be increased. This will increase the corresponding Kalman Filter gain, and the estimate becomes more noisy, which is a drawback of course.
6. To make the estimates become smoother you canlet the estimates pass through lowpass filters.
7. The feedforward control function is derived from the motor model, which is

Tm·\dot{S}(t) + S(t) = Km[u(t) + L(t)]   (23.408)

In this model, the speed is substituted by its reference or setpoint Sr, the load is substituted by its estimate, and then the equation is solved for the control variable. The result is, in discrete time:

uf(tk) = (1/Km)[Tm·\dot{S}r(tk) + Sr(tk)] - Lest(tk)   (23.409)

where \dot{S}r(tk) can be calculated with e.g. Backward discretization:

\dot{S}r(tk) ≈ [Sr(tk) - Sr(tk-1)]/Ts   (23.410)
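A sketch of (23.409)-(23.410) as a function, with the motor parameters from Appendix A.1 and an assumed sampling time; Lest would normally come from the Kalman Filter but is just an assumed number here.

```python
Tm, Km, Ts = 0.3, 0.17, 0.02   # model parameters (Appendix A.1), sampling time

def feedforward(Sr_now, Sr_prev, L_est):
    dSr = (Sr_now - Sr_prev) / Ts              # backward difference, eq. (23.410)
    return (Tm * dSr + Sr_now) / Km - L_est    # eq. (23.409)

uf = feedforward(Sr_now=1.0, Sr_prev=0.99, L_est=0.2)
```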
Here are results of experiments made with the motor shown in Figure 18.1:

• Without feedforward: Figure 23.17 (Page 150) shows the speed reference and measurement, the estimated load, and the control signal. The speed is controlled with a PI controller (with gain 0.5 and integral time 2 sec). The motor was braked with an approximately constant load torque (using the hand). The estimator gives a good estimate of the applied load torque, qualitatively.

• With feedforward: Figure 23.18 (Page 151) shows responses with feedforward control (together with feedback control with a PI controller, as in the previous experiment). It is obvious that the control system now performs much better, since the control error is much smaller compared with not using feedforward.
8. Increased robustness against sensor failure can be obtained as follows: The feedback to the speed controller (e.g. a PI controller) is permanently based on the estimated measurement, Se, as calculated by a Kalman Filter. If the sensor is failing (assuming some kind of measurement error detection has been implemented, of course), the (erroneous) measurement is removed from the corrected estimates (23.395). This can be implemented by multiplying the Kalman gains in (23.395) by zero. In this situation, the Kalman Filter is just a simulator of the motor. So, during sensor failure, the speed controller acts on the basis of the simulated speed, and this is better than acting on the basis of an erroneous speed measurement.
Here are results of experiments made with the motor:
• Full Kalman Filter - despite sensor failure: Figure 23.19 (Page 152) shows the speed reference and measurement and the control signal. The sensor fails at time t = 10 sec, as the sensor signal suddenly becomes zero. This failure lasts for the whole experiment. The speed controller (a PI controller) "thinks" that the speed is actually zero, and therefore adjusts the control signal to its maximum in an attempt to increase the speed to become equal to the speed reference. This makes the motor speed up to maximum speed, much larger than the reference. No good...

• Kalman Filter without measurement-based updating during sensor failure (i.e. Kalman Filter running as simulator): As in the previous case, the sensor fails at time t = 10 sec. Figure 23.20 (Page 153) shows the responses. Now the speed controller acts on the basis of the simulated speed. This largely improves the behaviour of the control system. The control system is even able to track the speed reference change. So, it is demonstrated that it is much better to let the controller act on the basis of a simulated measurement than an erroneous measurement.
Solution to Exercise 19.1
Figure 23.14 shows the control system that can be used in a simulator (in e.g. Simulink or LabVIEW). You can run the simulations with model parameters (indexed with 1 in the figure) in the simulated motor that are different from the model parameters (indexed with 0 in the figure) used in the feedforward controller, the feedback controller and the estimator. Measurement noise is included in the simulator using a proper random signal generator (such signal generators exist in Simulink and LabVIEW).

Parameters Km0 and Tm0 are the parameters you assume are accurately known and therefore use in the feedforward controller, the feedback controller and the estimator. Parameters Km1 and Tm1 are parameters you can use in the simulated motor when you want to introduce model errors. You can for example systematically vary Km1 between
0.8Km0 ≤ Km1 ≤ 1.2Km0 (23.411)
and vary Tm1 between
0.8Tm0 ≤ Tm1 ≤ 1.2Tm0 (23.412)
and run simulations with each of the different parameter sets.
Figure 23.14: Simulator of the control system: a PI controller (tuned with Skogestad's method), a feedforward controller, and an estimator of the load torque Lest, all using model parameters Km0, Tm0; a measurement lowpass filter; and the simulated motor model with parameters Km1, Tm1, load L, and simulated random measurement noise n added to the speed S.
Solution to Exercise 20.1
1. The standard process model form is

\dot{x} = f + Bu   (23.413)

We have

x = [L; T]   (23.414)

and

u = [Fc; Fh]   (23.415)

We get

B = [B11, B12; B21, B22] = [1/(ρA), 1/(ρA); (Tc - T)/(ρAL), (Th - T)/(ρAL)]   (23.416)

and

f = [f1; f2] = [-F/(ρA); 0]   (23.417)
2. The control function is generally
u = B⁻¹(Kp·e + Ki·∫₀ᵗ e dτ + \dot{r}f - f)   (23.418)

where (I use indices L and T instead of 1 and 2 as in the standard form)

u = [Fc; Fh]   (23.419)

B = [1/(ρA), 1/(ρA); (Tc - T)/(ρAL), (Th - T)/(ρAL)]   (23.420)

e = [Lr - L; Tr - T]   (23.421)

Kp = [KpL, 0; 0, KpT]   (23.422)

Ki = [KiL, 0; 0, KiT] = [KpL/TiL, 0; 0, KpT/TiT]   (23.423)

\dot{r}f = [\dot{L}rf; \dot{T}rf]   (23.424)

f = [-F/(ρA); 0]   (23.425)

(where index f means lowpass filtered).
3.
KpL = 1/TCL   (23.426)

TiL = k·TCL   (23.427)

KpT = 1/TCT   (23.428)

TiT = k·TCT   (23.429)

where k can be set to 1.5.
4. The following parameters and variables must be known:
• ρ
• A
• L (using a level sensor)
• T (temperature sensor)
• Tc (temperature sensor)
• Th (temperature sensor)
• F (flow sensor)
5. No. No.

6. Yes.
Solution to Exercise 20.2
1. To derive the control function we first write the process model on the standard form:

\ddot{y} = (1/m)[-D|\dot{y} - uc|(\dot{y} - uc) + Fw(φ, Vw)] + (1/m)·Ft   (23.430)

where the first term is f, the factor 1/m in the last term is B, and Ft is the control variable u. The control function becomes

Ft = B⁻¹[Kp·e + (Kp/Ti)·∫₀ᵗ e dτ + Kp·Td·(de_f/dt) + \ddot{r}f - f]   (23.431)
= m·[Kp·e + (Kp/Ti)·∫₀ᵗ e dτ + Kp·Td·(de_f/dt) + \ddot{r}f] + D|\dot{y} - uc|(\dot{y} - uc) - Fw(φ, Vw)   (23.432)

where e is the control error:

e = r - y   (23.433)

Index f in \ddot{r}f means that the reference is lowpass filtered (with a second order filter).

The internal PID controller in (23.432) is on parallel form. Skogestad's PID tuning method for the double integrator (which is the linearized process for which the internal PID controller is designed) assumes a serial PID controller. The PID settings for the serial PID controller become:

Kps = 1/[4(TC/2)²] = 1/TC²   (23.434)

Tis = 4·(TC/2) = 2TC   (23.435)

Tds = 4·(TC/2) = 2TC   (23.436)

Using the formulas for transformation of serial PID parameters to parallel PID parameters, we have

Kp = Kpp = Kps·(1 + Tds/Tis) = (1/TC²)·(1 + 2TC/(2TC)) = 2/TC²   (23.437)

Ti = Tip = Tis·(1 + Tds/Tis) = 2TC·(1 + 2TC/(2TC)) = 4TC   (23.438)

Td = Tdp = Tds·1/(1 + Tds/Tis) = 2TC·1/(1 + 2TC/(2TC)) = TC   (23.439)
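The serial-to-parallel conversion above can be checked numerically. A sketch, with an assumed closed-loop time-constant TC = 2 s:

```python
TC = 2.0                      # assumed closed-loop time-constant [s]

Kps = 1 / TC**2               # serial settings, eqs. (23.434)-(23.436)
Tis = 2 * TC
Tds = 2 * TC

Kp = Kps * (1 + Tds / Tis)    # parallel settings, eqs. (23.437)-(23.439)
Ti = Tis * (1 + Tds / Tis)
Td = Tds / (1 + Tds / Tis)

print(Kp, Ti, Td)   # 2/TC^2, 4*TC, TC
```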
2. Of course, all variables and parameters of the control function (23.432) must be known to make the controller implementable. In more detail:

• The parameters m and D must be known.

• The ship position y must be measured using e.g. GPS measurement.

• The ship speed \dot{y} can be calculated as the time-derivative of y, or it can be estimated with a Kalman Filter or an observer.

• The water current speed uc must either be measured or estimated with a Kalman Filter or an observer.3

• The wind angle φ and the wind speed Vw must be measured with a wind sensor (mounted on the ship). The wind force function Fw is assumed to be known for a given ship.4
3. In (23.432) the control variable Ft is a function of the disturbance Fw(φ, Vw). This dependency implements feedforward. The controller will compensate for the disturbance, so that the ship position y will not be influenced by Fw, whatever the value of Fw.

3 Kongsberg Maritime (Norway) uses a Kalman Filter in their ship positioning systems, or DP (Dynamic Positioning) systems.

4 Fw is calculated from the geometry of the ship.
Solution to Exercise 21.1
1.

\dot{x}1 = x2 (= f1)   (23.440)

\dot{x}2 = (1/m)[-D|x2 - uc|(x2 - uc) + Fw(φ, Vw) + Ft] (= f2)   (23.441)
2. See Figure 23.15.
Figure 23.15: Optimal controller with integral action: the control signal is u = -Gx = -G11·x1 - G21·x2 - G31·x3, where x3 is the state of an integrator driven by the error e = r - x1. The disturbances Fw and uc act on the ship, and the output is y = x1.
3. See Figure 23.15. The state-variable of the integrator is defined by

\dot{x}3 = r - x1   (23.442)

The augmented (total) state-space model becomes

\dot{x}1 = x2 (= f1)   (23.443)

\dot{x}2 = (1/m)[-D|x2 - uc|(x2 - uc) + Fw(φ, Vw) + Ft] (= f2)   (23.444)

\dot{x}3 = r - x1 (= f3)   (23.445)
4. Matrices A and B are found by linearization:

A = [∂f1/∂x1, ∂f1/∂x2, ∂f1/∂x3; ∂f2/∂x1, ∂f2/∂x2, ∂f2/∂x3; ∂f3/∂x1, ∂f3/∂x2, ∂f3/∂x3]
= [0, 1, 0; 0, -(2D/m)|x2 - uc|, 0; -1, 0, 0]   (23.446)

B = [∂f1/∂u; ∂f2/∂u; ∂f3/∂u] = [0; 1/m; 0]   (23.447)

(Element A(2, 2) can be found by resolving the absolute value by first assuming (x2 - uc) is positive and then assuming (x2 - uc) is negative, then taking the partial derivative, and finally expressing the results of the partial differentiations compactly.)

State weight matrix:

Q = [Q11, 0, 0; 0, Q22, 0; 0, 0, Q33] = diag(1/|x1max|², 1/|x2max|², 1/|x3max|²)   (23.448)

Control weight matrix:

R = [R11] = [1/|umax|²]   (23.449)
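The weight matrices (23.448)-(23.449) can be built directly from assumed maximum acceptable values of the states and of the control signal. The limits below are made-up examples; umax is of the order of the thruster force limit in Appendix A.2.

```python
import numpy as np

x1_max, x2_max, x3_max = 10.0, 2.0, 5.0   # assumed max acceptable state values
u_max = 500e3                             # assumed max thruster force [N]

Q = np.diag([1 / x1_max**2, 1 / x2_max**2, 1 / x3_max**2])  # eq. (23.448)
R = np.array([[1 / u_max**2]])                              # eq. (23.449)
print(Q.diagonal(), R[0, 0])
```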
5. To reduce the fuel consumption (less aggressive control) you can increase the cost (weight) of the control signal, i.e. increase R11.

6. To reduce the speed of the ship you can increase the cost (weight) of the speed, i.e. increase Q22.
Solution to Exercise 22.1
No.
Solution to Exercise 22.2
1. The optimization criterion is

J = Σ_{k=0}^{Np} { Q1·[e1(tk)]² + Q2·[e2(tk)]² + ... + Qn·[en(tk)]² } + Σ_{k=1}^{Nc} { R1·[Δu1(tk)]² + R2·[Δu2(tk)]² + ... + Rr·[Δur(tk)]² }   (23.450)
The state variables are x1 = y and x2 = \dot{y}. Therefore, the error e1 is the position control error. To allow for a larger e1, the cost or weight of e1, which is Q1, should be decreased.

2. Yes, MPC takes into account specific limits of the control variable(s) when calculating the optimal control signal.
Solution to Exercise 22.3
Cascade control.
r1 of the MPC may be the level reference of a tank. rPID1 may be the flow reference (setpoint) to a flow PI(D) controller. u1 may be the control signal to a control valve.
Solution to Exercise 23.1
1. The dead-time is larger than the time-constant. Therefore you can expect improved disturbance compensation with dead-time compensation.

Also, reference tracking will be improved with dead-time compensation, because dead-time compensation always improves reference tracking, no matter what the relation between the time-constant and the time-delay is.

2. The dead-time is smaller than the time-constant. Therefore you cannot expect improved disturbance compensation with dead-time compensation.

Reference tracking will be improved with dead-time compensation.
Solution to Exercise 23.2
According to Skogestad's method (with c = 2), the parameters of a PI controller are

Kp = T/[K(TC + τ)] = 20/[2(10 + 0)] = 1   (23.451)

Ti = min[T, c(TC + τ)] = min[20, 2(10 + 0)] = 20 s   (23.452)

Td = 0   (23.453)
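The calculation can be verified in a couple of lines, using the process data K = 2, T = 20 s, τ = 0 and the specification TC = 10 s appearing in the formulas above:

```python
K, T, tau, TC, c = 2.0, 20.0, 0.0, 10.0, 2.0

Kp = T / (K * (TC + tau))     # eq. (23.451)
Ti = min(T, c * (TC + tau))   # eq. (23.452)
print(Kp, Ti)   # 1.0 20.0
```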
Solution to Exercise 23.3
1. The transportation time on the conveyor belt constitutes a dead-time in the level control loop.
2. The dead-time will be eliminated if the level controller adjusts both the screw speed and the conveyor belt speed, proportionally. This is illustrated in Figure 23.16.
Figure 23.16: Wood-chip tank with level y [m], conveyor belt, level transmitter (LT), level controller (LC), control variable, and outflow.
Figure 23.17: Responses without feedforward control.

Figure 23.18: Responses with feedforward control.

Figure 23.19: Responses with full Kalman Filter despite sensor failure.

Figure 23.20: Responses with the Kalman Filter run as a simulator (no measurement-based updating) during sensor failure.
Appendix A
Models with parameter values
A.1 Electric motor
The motor is used in Exercises 19.1, 17.1, and 18.1.
Figure A.1 shows the electric motor (a DC motor).

Figure A.1: The electric motor.

The motor is manipulated with an input voltage signal, u, which is in the range [-10 V, +10 V].
The rotational speed is measured with a tachometer which produces an output voltage signal which is proportional to the speed. The speed, S [krpm = kilo revolutions per minute], is calculated continuously from the tachometer voltage, and hence the speed is assumed to be known at any instant of time. A proper mathematical model of the motor is

Tm·\dot{S}(t) + S(t) = Km[u(t) + L(t)]   (A.1)
L is the equivalent load torque (represented in the same unit as the control variable, namely voltage). L can be regarded as a process disturbance. Km is the gain, and Tm is the time-constant. Parameter values which can be used in e.g. simulations are

Tm = 0.3 sec   (A.2)

Km = 0.17 krpm/V   (A.3)
Figure A.2 shows a block diagram of the motor.

Figure A.2: Block diagram of the motor, with control variable u, disturbance (load) L, and process output variable S.
A.2 Ship
Figure 20.2 shows a ship.
The ship is used in Exercises 20.2, 21.1, and 22.2.
The parameter values presented in this appendix are realistic values of a specific ship.

Only the surge (forward-backward) direction is considered in the exercises in this book.
Figure A.3: The ship, with wind force Fw [N], hydrodynamic force Fh [N], thruster force Ft [N], mass m [kg], position (relative to earth) y [m], and water current speed (relative to earth) uc [m/s].

Applying Newton's Law of Motion we obtain the following mathematical model of the surge motion:

m·\ddot{y} = -D|\dot{y} - uc|(\dot{y} - uc) + Fw(φ, Vw) + Ft   (A.4)

where the first term on the right-hand side is the hydrodynamic force Fh.

Position is y [m]. Speed is \dot{y} [m/s].
Thruster force (applied to move the ship) is Ft [N]. Maximum force is 552 kN forwards and 467 kN backwards.

Mass:

m = 71164 tons = 71164 · 10³ kg   (A.5)

Water current speed is uc [m/s]. It typically varies in the range 0 to 3 m/s.

Hydrodynamic coefficient:

D = -8.4 kN/[(m/s)²]   (A.6)

The wind force Fw acting on the ship is a function of the wind attack angle φ and the wind speed Vw [m/s]. The wind model is

Fw = Vw²·[c1·cos(φ) + c2·cos(3φ)] [kN]   (A.7)

Wind coefficients:

c1 = 0.1838   (A.8)

c2 = -0.0068   (A.9)
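As a quick sketch of the wind model (A.7) with the coefficients (A.8)-(A.9):

```python
import math

c1, c2 = 0.1838, -0.0068   # wind coefficients, eqs. (A.8)-(A.9)

def wind_force_kN(phi, Vw):
    """Wind force [kN], eq. (A.7): attack angle phi [rad], wind speed Vw [m/s]."""
    return Vw**2 * (c1 * math.cos(phi) + c2 * math.cos(3 * phi))

F = wind_force_kN(phi=0.0, Vw=10.0)   # head-on wind at 10 m/s
```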
Figure A.4 shows the wind scale.
Figure A.4:
Figure A.5 shows a block diagram of the ship.
Figure A.5: Block diagram of the ship, with control variable Ft, disturbances Fw (produced by the wind model from Vw and φ) and uc, and process output variable y.