9781934015216 Chapter 3: Stability Analysis of Nonlinear Systems
Modern Control Systems: An Introduction by S. M. Tripathi
Infinity Science Press LLC. (c) 2008. Copying Prohibited.
Reprinted for Vineel Kumar Reddy Kovvuri, [email protected]
Reprinted with permission as a subscription benefit of Books24x7, http://www.books24x7.com/
All rights reserved. Reproduction and/or distribution in whole or in part in electronic, paper, or other forms without written permission is prohibited.
Table of Contents

Chapter 3: Stability Analysis of Nonlinear Systems
  3.1 Introduction
  3.2 Autonomous System and Equilibrium State
  3.3 Stability Definitions
    3.3.1 Stability in the Sense of Lyapunov
    3.3.2 Asymptotic Stability in the Sense of Lyapunov
    3.3.3 Asymptotic Stability in-the-Large
    3.3.4 Instability
  3.4 Concept of Sign-Definiteness
    3.4.1 Positive-Definiteness of a Scalar Function
    3.4.2 Negative-Definiteness of a Scalar Function
    3.4.3 Positive-Semidefiniteness of a Scalar Function
    3.4.4 Negative-Semidefiniteness of a Scalar Function
    3.4.5 Indefiniteness of a Scalar Function
  3.5 Quadratic Form of a Scalar Function
  3.6 Definiteness of a Matrix (Sylvester's Theorem)
  3.7 Lyapunov's Stability Criterion (Direct Method of Lyapunov)
  3.8 Lyapunov's Direct Method and Linear Time-Invariant System
  3.9 Constructing Lyapunov's Function for Nonlinear Systems (Krasovskii's Method)
  3.10 Popov's Criterion for Stability of Nonlinear Systems
  Exercises
  References
Chapter 3: Stability Analysis of Nonlinear Systems
3.1 INTRODUCTION

The stability of a system implies that small changes in the system input, in the system parameters, or in the initial conditions of the system do not result in large changes in the system output. For a given control system, stability is one of the most important characteristics to be determined. In order to analyze the stability of linear time-invariant systems, many stability criteria, such as the Nyquist stability criterion, Routh's stability criterion, etc., are available. However, for nonlinear and/or time-varying systems, such stability criteria do not apply.
In 1892, the Russian mathematician A.M. Lyapunov presented two methods (known as the first and second methods of Lyapunov) for determining the stability of dynamic systems described by ordinary differential equations. By using the second method of Lyapunov, the stability of a dynamic system can be determined without actually solving the differential equations, which is why it is also referred to as the direct method of Lyapunov. This is quite advantageous because solving nonlinear and/or time-varying state equations is usually very difficult and sometimes impossible. Thus, the direct method of Lyapunov is the most general method available for determining the stability of nonlinear and/or time-varying systems, as it avoids the necessity of solving the state equations.
3.2 AUTONOMOUS SYSTEM AND EQUILIBRIUM STATE

Consider the general state equation for a time-invariant system

    ẋ(t) = f[x(t), u(t)]          (3.1)

If the input vector u(t) is constant, we may write Equation (3.1) as

    ẋ(t) = f[x(t)]          (3.2)

The system described by Equation (3.2) is called an autonomous system.

In the system described by Equation (3.2), a state xe(t) satisfying

    f[xe(t)] = 0   for all t          (3.3)

is called an equilibrium state of the system.
For studying the system's dynamic response to small perturbations about an equilibrium point, the system is linearized at that point using the linearization technique (refer to Section 1.7).

The linearized state model of the system described by Equation (3.2) may be written as

    ẋ(t) = A x(t)

For this linear autonomous system, there exists only one equilibrium state xe(t) = 0 if A is nonsingular, i.e., |A| ≠ 0, and there exist infinitely many equilibrium states if A is singular, i.e., |A| = 0. For nonlinear systems, there may be one or more equilibrium states.

If xe(t) is an isolated equilibrium state, it can be shifted to the origin of the state-space by a translation of coordinates. In this chapter, we shall deal with the stability analysis of only such equilibrium states.
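Because a nonlinear system can have several equilibrium states, the equilibria are found by solving f(x) = 0. As a numerical sketch, the damped-pendulum system below is an assumed illustration (not a system from this chapter); its equilibria are located by seeding a root finder with different initial guesses:

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical autonomous system x' = f(x) (a damped pendulum, assumed here):
#   x1' = x2
#   x2' = -sin(x1) - x2
def f(x):
    return np.array([x[1], -np.sin(x[0]) - x[1]])

# Equilibrium states satisfy f(xe) = 0; seed the root finder near candidates.
equilibria = {tuple(np.round(fsolve(f, guess), 6))
              for guess in ([0.0, 0.0], [3.0, 0.0], [-3.0, 0.0])}
print(sorted(equilibria))  # equilibria near (0, 0) and (+/-pi, 0)
```

This also shows why the origin-shifting remark matters: an isolated equilibrium such as (π, 0) can be moved to the origin by the substitution z = x − xe before applying the stability definitions.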
Reprinted for CSC/vkovvuri2, CSC Jones and Bartlett Publishers, Infinity Science Press LLC (c) 2008, Copying Prohibited
3.3 STABILITY DEFINITIONS

As seen in Section 3.2, there may be one or more equilibrium states for a nonlinear system. Thus, in the case of nonlinear systems, we shall define stability relative to an equilibrium state rather than using the general term 'stability of a system.' There are a number of stability definitions in the control systems literature. We shall concentrate on only three of these: stability, asymptotic stability, and asymptotic stability in-the-large.
Consider an autonomous system described by the state Equation (3.2). We assume that the system has only one equilibrium state, which is the case with all properly designed systems. Moreover, without loss of generality, we assume the origin of the state-space as the equilibrium point.
3.3.1 Stability in the Sense of Lyapunov
The system described by Equation (3.2) is said to be stable at the origin if, for every real number ɛ > 0, there exists a real number δ(ɛ) > 0 such that ‖x(t0)‖ ≤ δ results in ‖x(t)‖ ≤ ɛ for all t ≥ t0.
In the preceding definition, ‖x(t)‖ is called the Euclidean norm and is defined as

    ‖x(t)‖ = (x1² + x2² + … + xn²)^(1/2)

Also, ‖x(t)‖ ≤ R represents a hyper-spherical region S(R) of radius R surrounding the equilibrium point xe(t) = 0.
Thus, the system described by Equation (3.2) is said to be stable at the origin if, corresponding to each S(ɛ), there is an S(δ) such that trajectories starting in S(δ) do not leave S(ɛ) as t increases indefinitely, as illustrated in Figure 3.1(a).
3.3.2 Asymptotic Stability in the Sense of Lyapunov
The system described by Equation (3.2) is said to be asymptotically stable at the origin if it is stable in the sense of Lyapunov and a δ can be found such that ‖x(t0)‖ ≤ δ results in x(t) → 0 as t → ∞, i.e., trajectories starting within S(δ) converge on the origin without leaving S(ɛ) as t increases indefinitely, as illustrated in Figure 3.1(b).
3.3.3 Asymptotic Stability in-the-Large
The system described by Equation (3.2) is said to be asymptotically stable in-the-large (globally asymptotically stable) at the origin if it is stable in the sense of Lyapunov and every initial state x(t0), regardless of how near or far it is from the origin, results in x(t) → 0 as t → ∞.
3.3.4 Instability
The system described by Equation (3.2) is said to be unstable if, for some real number ɛ > 0 and any real number δ(ɛ) > 0, no matter how small, there is always a state x(t0) in S(δ) such that the trajectory starting at this state leaves S(ɛ), as illustrated in Figure 3.1(c).
Modern Control Systems: An Introduction 2
Figure 3.1: (a) Stability in the sense of Lyapunov (b) Asymptotic stability (c) Instability
3.4 CONCEPT OF SIGN-DEFINITENESS

In this section, we will discuss the concept of sign-definiteness of a scalar function V of a vector x, i.e., V(x).
3.4.1 Positive-Definiteness of a Scalar Function
A scalar function V(x) which, for some real number ɛ > 0, satisfies the following properties for all x in the region ‖x‖ ≤ ɛ

    V(x) > 0 for x ≠ 0,   V(0) = 0

is called a positive-definite scalar function.

For example, V(x) = x1² + 2x2² is a positive-definite scalar function.
3.4.2 Negative-Definiteness of a Scalar Function
A scalar function V(x) which, for some real number ɛ > 0, satisfies the following properties for all x in the region ‖x‖ ≤ ɛ

    V(x) < 0 for x ≠ 0,   V(0) = 0

is called a negative-definite scalar function.

For example, V(x) = −x1² − (x1 + x2)² is a negative-definite scalar function.
3.4.3 Positive-Semidefiniteness of a Scalar Function
A scalar function V(x) which, for some real number ɛ > 0, satisfies the following properties for all x in the region ‖x‖ ≤ ɛ

    V(x) ≥ 0 for x ≠ 0,   V(0) = 0

is called a positive-semidefinite scalar function.

For example, V(x) = (x1 + x2)² is a positive-semidefinite scalar function.
3.4.4 Negative-Semidefiniteness of a Scalar Function
A scalar function V(x) which, for some real number ɛ > 0, satisfies the following properties for all x in the region ‖x‖ ≤ ɛ

    V(x) ≤ 0 for x ≠ 0,   V(0) = 0

is called a negative-semidefinite scalar function.

For example, V(x) = −(x1 + x2)² is a negative-semidefinite scalar function.
It should be noted that in the definitions given in Sections 3.4.1, 3.4.2, 3.4.3, and 3.4.4, if ɛ is chosen arbitrarily large, the definitions hold in the entire state-space and are said to be global.
3.4.5 Indefiniteness of a Scalar Function

A scalar function V(x) which, for some real number ɛ > 0, assumes both positive and negative values for x within the region ‖x‖ ≤ ɛ, no matter how small ɛ is chosen, is called an indefinite scalar function.

For example, V(x) = x1x2 + x2² is an indefinite scalar function.
3.5 QUADRATIC FORM OF A SCALAR FUNCTION

The quadratic form of a scalar function plays an important role in stability analysis based on Lyapunov's direct (second) method.
If a scalar function V(x) is of the quadratic form, then it can be expressed as

    V(x) = xᵀ P x          (3.4)

where x is a real vector and P is a real symmetric matrix.

In the expanded form, Equation (3.4) may be rewritten as

    V(x) = Σi Σj pij xi xj
3.6 DEFINITENESS OF A MATRIX (SYLVESTER'S THEOREM)

If a scalar function V(x) is of the quadratic form given by Equation (3.4), then the definiteness of V(x) is attributed to the matrix P. The definiteness of the matrix P can be determined by Sylvester's theorem, which states that the necessary and sufficient conditions for a matrix P = [pij] to be positive-definite are that all the successive principal minors of P be positive, i.e.,

    Δ1 = p11 > 0,   Δ2 = p11 p22 − p12 p21 > 0,   …,   Δn = |P| > 0
Note that:

• The matrix P is positive-definite if all the eigenvalues of P are positive.
• The matrix P is positive-semidefinite if any of the principal minors of P are zero (and the remaining minors are positive).
• The matrix P is negative-definite if the matrix −P is positive-definite.
• The matrix P is negative-semidefinite if the matrix −P is positive-semidefinite.
• If P is positive-definite, so are P² and P⁻¹.
It should be noted that the definiteness of a scalar function V(x), which is of the quadratic form givenby Equation (3.4), is global.
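Sylvester's test is straightforward to automate. In the sketch below, the helper `is_positive_definite` and the matrix P are our own illustrative assumptions (not the matrix of Example 3.1); the principal minors of this P work out to 10, 39, and 17, and the eigenvalue test from the note above serves as a cross-check:

```python
import numpy as np

def is_positive_definite(P):
    """Sylvester's test: every successive (leading) principal minor of the
    symmetric matrix P must be strictly positive."""
    P = np.asarray(P, dtype=float)
    return all(np.linalg.det(P[:k, :k]) > 0 for k in range(1, P.shape[0] + 1))

# Illustrative symmetric matrix (an assumption for this sketch).
P = np.array([[10.0,  1.0, -2.0],
              [ 1.0,  4.0, -1.0],
              [-2.0, -1.0,  1.0]])

print(is_positive_definite(P))                  # True: minors are 10, 39, 17
print(bool(np.all(np.linalg.eigvalsh(P) > 0)))  # eigenvalue cross-check: True
```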
Example 3.1
Determine whether the following quadratic form is positive-definite:
Solution. The quadratic form of scalar function V(x) can be written as
The matrix P can be found as,
We may write Equation (3.6) as
Applying Sylvestor's Theorem, we obtain
Since all the successive principal minors of the matrix P are positive, it means that P is positive-definite. Hence V(x) is positive-definite. Ans.
3.7 LYAPUNOV'S STABILITY CRITERION (DIRECT METHOD OF LYAPUNOV)

The direct method of Lyapunov is based on the concept of energy and the relation of stored energy to system stability: the energy stored in a stable system cannot increase with time. Consider an autonomous physical system described as
Let V(x) be a suitable function of the energy associated with the system. If the derivative V̇(x) is negative for all x except the equilibrium point xe, then it follows that the energy associated with the system decays with increasing time t until it finally assumes its minimum value at the equilibrium point xe. This holds good because the energy associated with a system, being a non-negative function of the system state, reaches a minimum only if the system motion stops.
For purely mathematical systems, however, there is no obvious way of defining an "energy function." In order to circumvent this difficulty, the Russian mathematician A.M. Lyapunov introduced a fictitious energy function, which is now known as Lyapunov's function. This idea is, however, more general than that of energy and is more widely applicable.
The method for investigating the stability of a system using Lyapunov's function (known as Lyapunov's direct method) is given by the following theorems:
Theorem 3.1. Consider the system described by
If there exists a scalar function V(x) which, for some real number ɛ > 0, satisfies the following properties for all x in the region ‖x‖ ≤ ɛ:

a. V(x) is a positive-definite scalar function;
b. V(x) has continuous first partial derivatives with respect to all components of x;
c. V̇(x) is a negative-semidefinite scalar function;

then the system described by (3.7) is stable at the origin.
Theorem 3.2. Consider the system described by
If there exists a scalar function V(x) which, for some real number ɛ > 0, satisfies the following properties for all x in the region ‖x‖ ≤ ɛ:

a. V(x) is a positive-definite scalar function;
b. V(x) has continuous first partial derivatives with respect to all components of x;
c. V̇(x) is a negative-definite scalar function;

then the system described by (3.8) is asymptotically stable at the origin.
Theorem 3.3. Consider the system described by
If there exists a scalar function V(x) which, for some real number ɛ > 0, satisfies the following properties for all x in the region ‖x‖ ≤ ɛ:

a. V(x) is a positive-definite scalar function;
b. V(x) has continuous first partial derivatives with respect to all components of x;
c. V̇(x) is a negative-definite scalar function;
d. V(x) → ∞ as ‖x‖ → ∞;

then the system described by (3.9) is asymptotically stable in-the-large at the origin.
The determination of stability via Lyapunov's direct method centers on the choice of a positive-definite function V(x), called the Lyapunov function. Unfortunately, there is no general method for selecting a Lyapunov function that works for every problem. The choice of a suitable Lyapunov function depends on the ingenuity of the designer. Note that if a suitable Lyapunov function cannot be found, it in no way implies that the system is unstable. It only means that our attempt to establish the stability of the system has failed. The basic instability theorem can be stated as follows:
Theorem 3.4. Consider the system described by
If there exists a scalar function V(x) which, for some real number ɛ > 0, satisfies the following properties for all x in the region ‖x‖ ≤ ɛ:

a. V(x) is a positive-definite scalar function;
b. V(x) has continuous first partial derivatives with respect to all components of x;
c. V̇(x) is a positive-definite (or positive-semidefinite) scalar function;

then the system described by (3.10) is unstable at the origin.
Example 3.2
Consider a nonlinear system described by
Investigate the system's stability using the direct method of Lyapunov.
Solution. We might choose a positive-definite scalar function as

Substituting ẋ1 and ẋ2 in Equation (3.11), we have

Clearly,

implies that V̇(x) is negative-definite.

Therefore, the system under consideration is asymptotically stable at the origin (refer to Theorem 3.2). Ans.
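When the sign of V̇(x) is hard to settle by inspection, it can at least be probed numerically: evaluate V̇ = ∇V · f over many sampled states and check its sign. The system and candidate V below are assumptions chosen for illustration (not the system of Example 3.2); for this pair, V̇ = −2(x1² + x2²)², which is negative-definite:

```python
import numpy as np

# Hypothetical autonomous system (assumed for illustration):
#   x1' =  x2 - x1*(x1^2 + x2^2)
#   x2' = -x1 - x2*(x1^2 + x2^2)
def f(x):
    r2 = x[0]**2 + x[1]**2
    return np.array([x[1] - x[0]*r2, -x[0] - x[1]*r2])

# Candidate Lyapunov function V(x) = x1^2 + x2^2 and its gradient.
def V(x):
    return x[0]**2 + x[1]**2

def gradV(x):
    return np.array([2.0*x[0], 2.0*x[1]])

# Vdot along trajectories is gradV . f; sample states away from the origin.
rng = np.random.default_rng(0)
samples = rng.uniform(-2.0, 2.0, size=(1000, 2))
vdots = [gradV(x) @ f(x) for x in samples if V(x) > 1e-9]
print(all(v < 0 for v in vdots))  # True: Vdot = -2*(x1^2 + x2^2)^2 < 0
```

A passing numerical check is only evidence, not a proof; it cannot replace the sign-definiteness argument required by Theorem 3.2.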
Example 3.3

Consider a nonlinear system described by
Modern Control Systems: An Introduction 10
Reprinted for CSC/vkovvuri2, CSC Jones and Bartlett Publishers, Infinity Science Press LLC (c) 2008, Copying Prohibited
Investigate whether the system is stable or not.
Solution. We might choose a positive-definite scalar function as

Substituting ẋ1 and ẋ2 in Equation (3.12), we have

Here, V̇(x) is indefinite (refer to Section 3.4.5), so we cannot predict the stability of the system. This implies that this particular V(x) is not a suitable Lyapunov function. Our inability to find a proper Lyapunov function in no way implies that the system is unstable. It only means that our attempt to establish the stability of the system has failed (refer to Section 3.7).

The choice of a suitable Lyapunov function depends on the ingenuity of the designer. Let us choose some other positive-definite scalar function for the same system as

Substituting ẋ1 and ẋ2 in Equation (3.13), we have

Clearly,

implies that V̇(x) is negative-definite.
Further, V(x) → ∞ as ‖ x ‖ → ∞.
Therefore, the system under consideration is asymptotically stable in-the-large at the origin (refer to Theorem 3.3). Ans.
Example 3.4

Investigate the stability of the system described by
Solution. We might choose a positive-definite scalar function as
Substituting ẋ1 and ẋ2 in Equation (3.14), we have

Clearly,

implies that V̇(x) is negative-definite.
Further, V(x) → ∞ as ‖ x ‖ → ∞.
Therefore, the system under consideration is asymptotically stable in-the-large at the origin. Ans.
Example 3.5

Determine whether the system is stable or not, given that
Solution. Given that
We may write
We might choose a positive-definite scalar function as,
Substituting ẋ1 and ẋ2 in Equation (3.15), we have

Clearly,

implies that V̇(x) is negative-definite.
Therefore, the system under consideration is asymptotically stable at the origin. Ans.
3.8 LYAPUNOV'S DIRECT METHOD AND LINEAR TIME-INVARIANT SYSTEM

Consider a linear autonomous system described by the state equation

    ẋ(t) = A x(t)          (3.16)
where x(t) is a state vector and A is an n × n constant matrix. We assume that A is nonsingular, i.e., |A| ≠ 0, so that the only equilibrium state is the origin, i.e., xe(t) = 0. We can easily investigate the stability of the system described by Equation (3.16) by use of the direct method of Lyapunov.
We choose a possible Lyapunov function for the system described by Equation (3.16) as

    V(x) = xᵀ P x

where P is a real, positive-definite, symmetric matrix.

The time-derivative of V(x) along any trajectory is given by

    V̇(x) = ẋᵀ P x + xᵀ P ẋ = xᵀ (Aᵀ P + P A) x

Since V(x) was chosen to be positive-definite, for V̇(x) to be negative-definite we require that

    Aᵀ P + P A = −Q          (3.17)

where Q is a real, positive-definite, symmetric matrix, so that

    V̇(x) = −xᵀ Q x

Moreover, V(x) = xᵀ P x → ∞ as ‖x‖ → ∞.

Hence, the system described by Equation (3.16) is asymptotically stable in-the-large at the origin; for this result to hold, it is sufficient that Q be positive-definite.
Instead of first specifying a positive-definite matrix P and examining whether Q is positive-definite, it is more convenient to specify a positive-definite matrix Q first and then examine whether P, determined from Equation (3.17), is positive-definite. Note that the conditions for the positive-definiteness of P are sufficient for the system described by (3.16) to be asymptotically stable in-the-large. The conditions are also necessary; to show this fact, suppose that the system described by (3.16) is asymptotically stable in-the-large at the origin while P is negative-definite.
Consider the scalar function
This shows a contradiction, since the V(x) given by (3.18) satisfies the instability theorem (refer to Theorem 3.4).

Therefore, we can conclude that the conditions for the positive-definiteness of P are necessary and sufficient for the system described by (3.16) to be asymptotically stable in-the-large at the origin. We shall summarize what we have just stated in the form of a theorem, given next.
Theorem 3.5. Consider a linear autonomous system, described by the state equation
where x(t) is a state vector and A is an n × n constant nonsingular matrix.

The linear system described by state Equation (3.19) is asymptotically stable in-the-large at the origin (equilibrium point) if, and only if, given any real, positive-definite, symmetric matrix Q, there exists a real, positive-definite, symmetric matrix P such that

    Aᵀ P + P A = −Q          (3.20)

The scalar function xᵀ P x is then a Lyapunov function for the system described by Equation (3.19).
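Theorem 3.5 turns the stability question into solving the Lyapunov equation AᵀP + PA = −Q for P, which SciPy can do directly. The matrix A below is an assumed stable example (eigenvalues −1 and −2), not the system of Example 3.6:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative stable system matrix (an assumption for this sketch).
A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])
Q = np.eye(2)  # convenient choice Q = I, as in Example 3.6

# solve_continuous_lyapunov(a, q) solves a X + X a^T = q, so passing A^T
# yields the form used in Theorem 3.5:  A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

print(np.allclose(A.T @ P + P @ A, -Q))         # residual check: True
print(bool(np.all(np.linalg.eigvalsh(P) > 0)))  # P positive-definite: True
```

Since P comes out positive-definite, this assumed system ẋ = Ax is asymptotically stable in-the-large, with Lyapunov function V(x) = xᵀPx.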
Example 3.6
Consider the system described by
Determine whether the system is stable or not.
Solution. Let us assume a tentative Lyapunov function as
where P is to be obtained by solving the equation
for an arbitrary choice of a positive-definite, real symmetric matrix Q. It is convenient to choose Q = I, the identity matrix. Equation (3.20) then becomes

As the matrix P is known to be a positive-definite, real symmetric matrix for a stable system, we may take p12 = p21.
Therefore,
By expanding this matrix equation, we obtain three simultaneous equations as follows
Solving for p11, p12, p22 we obtain
Now we apply Sylvester's theorem in order to test the positive-definiteness of P. We have
Since all the successive principal minors of the matrix P are positive, this means that P is positive-definite. Hence, the system under consideration is asymptotically stable in-the-large at the origin, and the Lyapunov function in this case is
Ans.
3.9 CONSTRUCTING LYAPUNOV'S FUNCTION FOR NONLINEAR SYSTEMS (KRASOVSKII'S METHOD)

As we have seen in earlier sections, the Lyapunov theorems give only sufficient conditions on system stability and, furthermore, there is no unique way of constructing a Lyapunov function except in the case of linear systems (refer to Section 3.8), where a Lyapunov function can always be constructed and both necessary and sufficient conditions established. We shall now present Krasovskii's method of constructing a Lyapunov function for nonlinear systems. Consider the system
Assume that F(x) is differentiable with respect to all components of x. Then, the Jacobian matrix J(x) = ∂F(x)/∂x for the system described by (3.21) is given by
Define also

    K(x) = J(x) + Jᵀ(x)
If K(x) is negative-definite, then
Thus, there is no other equilibrium state than x = 0 in the entire state-space, i.e.,
If we observe (3.21) and (3.23), we find that the scalar function V(x) given by

    V(x) = Fᵀ(x) F(x)

is positive-definite.
Now, since Ḟ(x) = J(x) ẋ = J(x) F(x), we can obtain V̇(x) as

    V̇(x) = Ḟᵀ(x) F(x) + Fᵀ(x) Ḟ(x) = Fᵀ(x) [Jᵀ(x) + J(x)] F(x) = Fᵀ(x) K(x) F(x)

Obviously, if K(x) is negative-definite, it follows that V̇(x) is negative-definite.

Hence, V(x) is a Lyapunov function and, therefore, the system described by (3.21) is asymptotically stable at the origin. Moreover, if V(x) = Fᵀ(x) F(x) tends to infinity as ‖x‖ → ∞, then the system described by (3.21) is asymptotically stable in-the-large at the origin.
Now, we shall summarize what we have just stated in the form of a theorem, called Krasovskii's theorem, given next.
Theorem 3.6. (Krasovskii's Theorem):
Consider the system described by
Assume that F(x) is differentiable with respect to all components of x. The Jacobian matrix for the system is
Define also

    K(x) = J(x) + Jᵀ(x)
If K(x) is negative-definite, then the system described by (3.26) is asymptotically stable at the origin. A Lyapunov function for this system is

    V(x) = Fᵀ(x) F(x)
If, in addition, Fᵀ(x) F(x) → ∞ as ‖x‖ → ∞,
then the system described by (3.26) is asymptotically stable in-the-large at the origin.
Krasovskii's theorem gives a sufficient condition for the asymptotic stability of nonlinear systems. Notice that Krasovskii's theorem differs from the usual linearization approaches: it is not limited to small departures from the equilibrium state. Moreover, failure to obtain a suitable Lyapunov function by this method does not imply instability of the system.
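Krasovskii's test, form K(x) = J(x) + Jᵀ(x) and check that it is negative-definite, can also be sampled numerically. The system below and its analytic Jacobian are assumptions for illustration; for this system K(x) has trace −4 − 6x2² and determinant 3 + 12x2², so it is negative-definite everywhere:

```python
import numpy as np

# Hypothetical nonlinear system (assumed for illustration):
#   x1' = -x1
#   x2' =  x1 - x2 - x2^3
def F(x):
    return np.array([-x[0], x[0] - x[1] - x[1]**3])

def jacobian(x):
    # Analytic Jacobian dF/dx of the system above.
    return np.array([[-1.0,  0.0],
                     [ 1.0, -1.0 - 3.0*x[1]**2]])

# Krasovskii: K(x) = J(x) + J(x)^T must be negative-definite.
rng = np.random.default_rng(1)
ok = True
for x in rng.uniform(-5.0, 5.0, size=(500, 2)):
    K = jacobian(x) + jacobian(x).T
    ok = ok and bool(np.all(np.linalg.eigvalsh(K) < 0))
print(ok)  # True: K(x) negative-definite at every sampled state
```

With K(x) negative-definite and V(x) = Fᵀ(x)F(x) → ∞ as ‖x‖ → ∞, Theorem 3.6 gives asymptotic stability in-the-large for this assumed system.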
Example 3.7
By use of Krasovskii's theorem, determine whether the system is stable or not, given
Also determine the Lyapunov function for this system.
Solution. Given,
Hence,
Hence, K(x) is negative-definite for all x ≠ 0. The Lyapunov function for this system is
Ans.
Furthermore, V(x) → ∞ as ‖x‖ → ∞.

We conclude that the system under consideration is asymptotically stable in-the-large at the origin. Ans.
Readers should note that for K(x) to be negative-definite, J(x) must have nonzero elements on its main diagonal.
3.10 POPOV'S CRITERION FOR STABILITY OF NONLINEAR SYSTEMS

V.M. Popov obtained a frequency-domain stability criterion, quite similar to the Nyquist criterion, as a sufficient condition for the asymptotic stability of an important class of nonlinear systems. Popov's stability criterion applies to a closed-loop control system that consists of a nonlinear element and a linear time-invariant plant, as shown in Figure 3.2. The nonlinearity is described by a functional relation that must lie in the first and third quadrants, as shown in Figure 3.3.
Figure 3.2: Popov's basic feedback control system
Figure 3.3: Nonlinear characteristics

Many control systems with a single nonlinearity can, in practice, be modeled by the block diagram and the nonlinear characteristics of Figures 3.2 and 3.3, respectively.
Popov's stability criterion is based on the following assumptions:
1. The transfer function G(s) of the linear part of the system has more poles than zeros, and there are no pole-zero cancellations.
2. The nonlinear characteristic is bounded by k1 and k2, as shown in Figure 3.3, i.e.,
Now, Popov's criterion is stated as follows:
"The closed-loop system is asymptotically stable if the Nyquist plot of G(jω) does not intersect or enclose the circle described by

    [x + (1/2)(1/k1 + 1/k2)]² + y² = [(1/2)(1/k1 − 1/k2)]²          (3.28)

where x and y denote the real and imaginary coordinates of the G(jω)-plane, respectively."
Note that Popov's stability criterion is sufficient but not necessary. If the preceding condition is violated, it does not necessarily mean that the system is unstable. Popov's stability criterion can be illustrated geometrically, as shown in Figure 3.4.
Figure 3.4: Geometrical interpretation of Popov's criterion

In practice, a great majority of system nonlinearities have k1 = 0. In that case, the circle of Equation (3.28) is replaced by a straight line, given by

    x = −1/k2          (3.29)
For stability, the Nyquist plot of G(jω) must not intersect this line. This fact is illustrated in Figure 3.5.
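For the common case k1 = 0, the graphical test reduces to checking that Re G(jω) never falls to the left of the line x = −1/k2. A numerical sketch follows, with an assumed linear part G(s) = 6/((s+1)(s+2)(s+3)) and an assumed sector bound k2 = 2 (neither comes from the text):

```python
import numpy as np

# Assumed linear part and sector bound (illustrative only).
k2 = 2.0
w = np.logspace(-2, 3, 20000)  # frequency grid, rad/s
s = 1j * w
G = 6.0 / ((s + 1.0) * (s + 2.0) * (s + 3.0))

# With k1 = 0 the critical circle degenerates to the vertical line
# Re = -1/k2; the Nyquist plot must stay to its right.
min_re = G.real.min()
print(bool(min_re > -1.0 / k2))  # True: the plot never crosses Re = -1/k2
```

For this G, the Nyquist plot crosses the negative real axis near −0.1, well to the right of −1/k2 = −0.5, so the sufficient condition is met; remember that violating it would not by itself prove instability.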
Figure 3.5: Popov's criterion for common nonlinearities

For linear systems, k1 = k2 = k, and it is interesting to observe that the circle of Equation (3.28) degenerates to the point (−1/k, j0), so that Popov's stability criterion becomes Nyquist's criterion.
EXERCISES

1. Consider a nonlinear system described by:

Investigate whether the system is stable or not.
2. Consider a second-order system with two nonlinearities:

Assume that f1(0) = f2(0) = 0 and that f1(x1) and f2(x2) are real and differentiable. In addition,

Determine sufficient conditions for asymptotic stability of the equilibrium state x = 0.

[Hint: Use Krasovskii's theorem.]
3. A linear system is described by

where

Determine whether the system is stable or not.
4. A nonlinear system is described by:

Investigate the stability of the equilibrium state.
5. Consider the system described by:

Determine the stability of the equilibrium state.
6. A nonlinear system is governed by the equations

Determine the region in the state-plane for which the equilibrium state x = 0 is asymptotically stable.
REFERENCES

1. A.W. Langill Jr., Automatic Control Systems Engineering, Vol. 2, Prentice-Hall, Englewood Cliffs, New Jersey, 1965.
2. D. Roy Choudhury, Modern Control Engineering, Prentice-Hall of India Pvt. Ltd., New Delhi, 2005.
3. I.J. Nagrath and M. Gopal, Control Systems Engineering, 4th ed., New Age International Publishers, New Delhi, 2005.
4. Katsuhiko Ogata, Modern Control Engineering, 3rd ed., Prentice-Hall, Upper Saddle River, New Jersey, 1997.