
A Brief Overview of Nonlinear Control

Graham C. Goodwin (with Jose De Dona, Osvaldo J. Rojas and Michel Perrier)

Centre for Integrated Dynamics and Control
Department of Electrical and Computer Engineering
The University of Newcastle
NSW 2308, Australia
(CIDAC is an Australian Government funded Special Research Centre.)

    Abstract

Control system design has reached a level of considerable maturity and there exists substantial understanding of standard linear control problems. The aim of this paper is to look beyond linear solutions and briefly review recent developments in Nonlinear Control. We adopt a practical viewpoint so as to introduce this field to those who may not be familiar with it.

    1 Introduction

1.1 Why Nonlinear Control?

Over the past 5 decades there have been remarkable developments in linear control theory. Indeed, it might be reasonably claimed that this topic has been largely resolved. Moreover, linear theory has been extensively used in applications. It is not uncommon to find practical systems [20] that utilize very sophisticated linear controllers, e.g., based on high order Kalman filters and LQR theory. However, the real world behaves in a nonlinear fashion, at least when considered over wide operating ranges. Notwithstanding this observation, linear control design methods have been hugely successful and hence it must be true that many systems can be well approximated by linear models. On the other hand, there are well known examples of nonlinear practical systems, e.g., high performance aircraft operating over a wide range of Mach number, altitude and angle of attack. A common engineering approach to these kinds of problem is to base the design on a set of linearized models valid at a set of representative operating conditions. One can then use some form of gain scheduling to change the control law settings based on the current operating condition. Recently there have been significant developments in the supporting theory [27, 33, 43, 56, 60]. Indeed, over the last few years the more general topic of nonlinear control has attracted substantial research interest. The reader is referred to some of the excellent books available on this topic for further information (see for example [35, 3, 42, 53, 61, 67, 29]). The aim of this paper is to briefly outline some of these developments. We will aim to emphasise engineering aspects rather than mathematical rigour. In particular, we will summarize a selection of the available approaches to nonlinear control and relate these, where possible, to practical problems.

1.2 Classification of Nonlinear Problems

To give a broad classification it is convenient to distinguish smooth and non-smooth nonlinearities [20]. Smooth nonlinearities are commonly met. They include products and other power law type functions as well as other continuous nonlinear operators. Non-smooth nonlinearities are also very commonly met in practice. Well known examples include stiction and backlash in valves. Another type of nonsmooth nonlinearity met in practice is that of input slew rate and amplitude constraints. Unless compensated, these effects can have a dramatic influence on the final control system performance.

1.3 Smooth Nonlinearities

Smooth nonlinearities are conceptually easier to deal with than nonsmooth nonlinearities. This is because these nonlinearities are, in a sense, invertible [20]. Indeed, there exists a class of smooth nonlinear problems which are close to linear in the sense that they can be converted, via feedback, into linear problems. The associated design strategy is known as feedback linearization. We will briefly outline the essential ideas below.

1.4 Nonsmooth Nonlinearities

Nonsmooth nonlinearities are typically more difficult to deal with because they lack a globally definable inverse [20]. One of the most common forms of nonsmooth nonlinearity is that of actuator amplitude and/or slew rate limits. Approaches to this problem can be classified [10] as being serendipitous, cautious, evolutionary, and tactical. In the serendipitous approach, one allows occasional violation of the constraints but no


also exist [2, 50] nonlinear versions of the Model Predictive Control strategies. These strategies come with a ready made Lyapunov function, provided one can force the trajectory associated with the receding horizon optimization problem into a suitable terminal set having certain properties [51]. This converts the question of stability into one of verifying feasibility of the control law.

1.8 Output Feedback Issues in Nonlinear Control

A point that has been glossed over in the above brief overview is the issue of the type of information on which the control law is based. Many of the nonlinear design methods assume that the full state is available. Clearly this is rarely the case in practice. Thus, one usually needs to add provisions for state estimation, including disturbances, in practical nonlinear control. The reader will recall that in linear control the issue of state estimation presents no major difficulties [20]. Indeed, it is well known that one can invoke Certainty Equivalence, i.e., one can treat the state estimates as if they were the true states for the purpose of control system design. Stability of the composite system follows by ensuring that the state estimator and full state feedback controller are separately stable. Also, in the linear case, one can be rather casual about how one deals with disturbances since these do not affect the stability properties. Alas, the situation for nonlinear systems is far more complex. For example:

Certainty Equivalence no longer holds in general. Indeed, there is usually a cross coupling between estimation and control. Specifically, the control law may need to be cautious to account for state uncertainty and, conversely, the control policy may incorporate learning, i.e., the input signal can enhance the state estimation accuracy [63].

Stability does not follow by separately guaranteeing that the state estimator and state feedback control law are stable. Indeed, there exist examples of systems where an exponentially convergent state estimator can still destabilize an exponentially stable state feedback closed-loop system [35].

Disturbances can present a difficulty in nonlinear control problems. On the one hand, these can be conceptually dealt with by simply modelling them along with other system states. However, there are some hidden traps in nonlinear problems. For example, if a step-type disturbance is modelled as if it were at the system output, but it actually occurs at the system input, or vice versa, then this change can affect performance and may be potentially destabilizing [21]. This is due to the inherent nonlinear nature of the problem which means that superposition does not hold. Thus, the nature of the system dynamics can, at least in theory, be dramatically altered by the point of injection and nature of disturbances.

    1.9 Complex Behaviour of Nonlinear Systems

The response of certain nonlinear systems can be exceedingly complex when viewed in a general setting. We recall that linear systems can be classified very easily as being either stable, marginally stable or unstable. However, nonlinear systems can exhibit quite bizarre behaviour. For example, one can easily generate chaotic behaviour where the trajectories are arbitrarily sensitive to the initial conditions. In case the reader should feel that chaos is a theoretical concept that would never be met in practice, one can readily think of real world nonlinear systems that do indeed exhibit this kind of behaviour. Two examples that come to mind are:

simple adaptive control laws [49] - this is a little surprising since one would not expect such bizarre behaviour by simply connecting together a parameter estimator with a feedback controller. However, we remind the reader that this is fundamentally a nonlinear problem and thus certainty equivalence cannot be expected to hold in general.

on-off controllers - these are very prevalent in practical control but the exact behaviour can be very complex including the onset of chaos.

1.10 Overview of the Remainder of the Paper

The remainder of the paper will expand on the above ideas. Wherever possible, we will link the ideas back to the linear case. We will also use simple examples to illustrate the ideas. The layout of the remainder of the paper is as follows: Section 2 gives a brief description of a practical nonlinear control problem to act as motivation; Section 3 explains how linear control can be extended to the case where actuators are subject to hard limits; Section 4 gives a brief overview of nonlinear model predictive control; Section 5 outlines feedback linearization; Section 6 gives a simple generalization of the idea of feedback linearization to include systems which are not necessarily stably invertible; Section 7 describes the idea of flatness and relates it to feedback linearization; Section 8 gives a brief introduction to the idea of backstepping; Section 9 describes several simple nonlinear control problems and illustrates the complex behaviour that can arise even from simple systems; and Section 10 draws conclusions.


2 A Motivational Example: Continuous Metal Casting

A continuous caster is a common means by which molten liquid metal is solidified. The molten metal is poured into a mould which is a rectangular container open at each end (see Figure 1). On the system with which we are familiar (BHP Steel Works, Newcastle, Australia), the mould level sustained a continuous oscillation [23]. The amplitude of this oscillation was relatively small (10 mm) but resulted in serious degradation in quality of the cast material.

Figure 1: Continuous caster schematic with slide gate valve

As in most real world control problems, it is not difficult to pin-point a raft of nonlinearities associated with this problem. For example, the flow of molten metal through the slide gate valve (SGV) depends on the type of material being cast, which can vary from sticky to freely flowing. This suggests that we may need gain scheduling or adaptation to adjust the controller settings. Also, the geometry of the SGV itself introduces nonlinear behaviour since it comprises two intersecting circular regions (see Figure 1). Fortunately, it is relatively easy to compensate for this kind of smooth nonlinearity in the design [23]. Alas, our observation has been that these steps did little, or nothing, to mitigate the mould level oscillation problem. This suggests that one should expand one's view of the problem to include the possibility of other nonlinear phenomena, including nonsmooth nonlinearities. Further examination revealed that the SGV did indeed have a fundamental problem. On the one hand, it must be sized to deal with the gross material flow. However, it then experiences slip-stick friction when dealing with the finer regulation needed to maintain the mould level. If this problem involved a material such as water, then it would be feasible to address this problem by using two valves to achieve a dual range solution. However, this is not a feasible option with molten metal. In the case with which we are familiar, a high frequency dither signal was applied to the valve. This kept the valve in motion, thus avoiding the slip-stick phenomenon. The result was that the oscillations were immediately eliminated! Of course, as in all real world designs, there is a price to be paid for the improved output quality, namely the dither causes increased wear of the SGV.

However, the cost of more frequent valve replacements was more than off-set [32] by the achieved improvements in product quality (estimated at millions of dollars annually).

Engineers involved in practical control problems may well see common elements from this example in problems that they have dealt with. Indeed, it is the authors' experience that one doesn't have to search far to find significant nonlinear issues in all real world control system design problems. With this as motivation, we proceed to examine some approaches to nonlinear control system design.

    3 Dealing with Actuator Constraints

Arguably, the most commonly met nonlinearity is that of actuator amplitude and slew rate limits. Indeed, most industrial controllers will invariably incorporate some form of anti-integral-windup protection to deal with actuator constraints [48, 15, 25, 5, 38, 65, 66]. There are many ways that can be used to describe anti-windup strategies. A systematic formulation of many schemes is given in [38]. There exists substantial work on this topic, see for example [48, 15, 25, 5, 38, 65, 66]. Many of the schemes can be understood as mechanisms for informing the control law that the constraints have been met so that appropriate modifications to the future control actions can be taken.

To illustrate, say that we are given a minimum phase, bi-proper, SISO controller transfer function C(s). We can expand the inverse of this transfer function in the form

C(s)^{-1} = H_\infty + \bar{H}(s)    (3)

where H_\infty is the high frequency gain and \bar{H}(s) is strictly proper. We can then implement the controller as shown in Figure 2. Note that, if the gain is set to unity, then the transfer function from e to u is

U(s)/E(s) = H_\infty^{-1} / (1 + H_\infty^{-1}\bar{H}(s)) = 1/(H_\infty + \bar{H}(s)) = 1/C(s)^{-1} = C(s)


    Figure 2: Anti-windup scheme

We also see from Figure 2 that all of the dynamics of the controller are contained in the transfer function \bar{H}(s) and hence are informed about the true plant input. This suggests that an evolutionary solution to the constrained control problem could be to simply replace the gain element in Figure 2 by the required nonlinear operation (saturation and/or slew rate limits). Actually, this simple idea underlies many of the available schemes for anti-windup control [20]. The idea generally works well in practice provided precautions are taken. For example, it is important that one does not call for too much oversaturation, i.e., the unsaturated control should not exceed the available limits by more than, say, 2 or 3 times.
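As an illustration, the following is a minimal discrete-time sketch of this structure for a PI controller C(s) = Kp + Ki/s (which is biproper and minimum phase); the plant, gains, reference and sample period below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch of the Figure 2 anti-windup idea for a PI controller
# C(s) = Kp + Ki/s. Writing C(s)^{-1} = H_inf + Hbar(s) gives H_inf = 1/Kp and
# Hbar(s) = -Ki / (Kp*(Kp*s + Ki)), which is strictly proper. The controller is
# realised as u = sat( Kp * (e - Hbar(s)*u_sat) ), so the controller dynamics
# always see the actual (saturated) plant input. Plant G(s) = 1/(s+1) and all
# numbers below are illustrative assumptions.
Kp, Ki = 2.0, 4.0
u_max = 1.0
dt, T = 0.001, 10.0

y = 0.0          # plant state/output of G(s) = 1/(s+1)
w = 0.0          # state of the strictly proper part Hbar(s)
r = 1.5          # step reference large enough to saturate the actuator

for _ in range(int(T / dt)):
    e = r - y
    u_unsat = Kp * (e - w)                    # u = H_inf^{-1} (e - Hbar*u)
    u = np.clip(u_unsat, -u_max, u_max)       # the nonlinear (saturation) block
    # Hbar(s) = (-Ki/Kp**2)/(s + Ki/Kp), driven by the saturated input u
    w += dt * (-(Ki / Kp) * w - (Ki / Kp**2) * u)
    y += dt * (-y + u)                        # plant: ydot = -y + u

print(round(y, 3))   # settles near the largest reachable steady state
```

Because the strictly proper part \bar{H}(s) is driven by the saturated input, the controller states remain consistent with what the plant actually receives, which is precisely the windup-protection mechanism described above.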

The same idea can be used with multiple-input multiple-output controllers. Here, however, additional issues arise with respect to the gain block (here a multivariable gain) in Figure 2. Specifically, when one input saturates, then the relative proportioning of the other inputs can be perturbed [20]. To retain good performance in the presence of constraints, the relativities between the different inputs usually need to be preserved. Methods for doing this include replacing the gain block in Figure 2 by some form of input (or error) scaling to meet the required constraints. A discussion of some of the options is given in [20].

As mentioned above, this kind of strategy performs well when the demanded level of oversaturation is not too severe (say a factor of 2:1 from the linear response). This suggests that one might be able to improve performance by using multiple linear designs of different gain so that the control action is typically scaled back by something like 2:1 under all conditions. Related methods include the multi-mode regulator [36] and the piecewise linear control (PLC) [68]. In these methods, a number of controllers is precomputed and, at each time, the best controller such that the constraints are not violated is used. A further improvement of these methods is to recognize that performance can be improved by forcing the controls into saturation. Two alternative methods that modify the schemes described evolve from this idea; i.e., the controller proposed in [11], which incorporates the concept of oversaturation to measure the extent to which the demanded input exceeds the saturation level. The alternative controller of [46] combines the ideas of PLC and of the low-and-high gain controller [45]. Finally, the stability of these kinds of schemes can be addressed using piecewise constant Lyapunov functions, as used in [11, 46]. This exploits the inherent switched nature of the control law.

    4 Nonlinear Model Predictive Control

Model Predictive Control is a control algorithm based on solving a constrained optimal control problem. Since constraints can be included from the beginning, this is an example of a tactical approach to constrained nonlinear control. A receding horizon approach is typically used, which can be summarized in the following steps:

(i) At time k and for the current state x(k), solve, on-line, an open-loop optimal control problem over some future interval, taking into account the current and future constraints.

(ii) Apply the first step in the optimal control sequence.

(iii) Repeat the procedure at time (k+1), using the (new) current state x(k+1).

The solution is converted into a closed-loop strategy by using the measured value of x(k) as the current state. When x(k) is not directly measured, then one can obtain a closed-loop policy by replacing x(k) by an estimate provided by some form of observer. The method can be summarized as follows: Given a model for a nonlinear system,

x(\ell + 1) = f(x(\ell), u(\ell)),  \quad x(k) = x    (4)

the MPC at event (x, k) is computed by solving a constrained optimal control problem:

P_N(x):  V_N^o(x) = \min_{U \in U_N} V_N(x, U)    (5)

where

U = \{u(k), u(k+1), \ldots, u(k+N-1)\}    (6)

V_N(x, U) = \sum_{\ell = k}^{k+N-1} L(x(\ell), u(\ell)) + F(x(k+N))    (7)


U_N is the set of control sequences U that satisfy the constraints over the entire interval [k, k+N-1]:

u(\ell) \in U,  \quad \ell = k, k+1, \ldots, k+N-1    (8)

x(\ell) \in X,  \quad \ell = k, k+1, \ldots, k+N    (9)

together with the terminal constraint

x(k+N) \in W    (10)

Usually, U \subset R^m is convex and compact, X \subset R^n is convex and closed, and W is a set that can be selected appropriately to achieve stability. In the above formulation, the model and cost function are time invariant. Hence, one obtains a time-invariant feedback control law. In particular, we can set k = 0 in the open-loop

control problem without loss of generality. Then, at event (x, k), we solve

P_N(x):  V_N^o(x) = \min_{U \in U_N} V_N(x, U)    (11)

where

U = \{u(0), u(1), \ldots, u(N-1)\}    (12)

V_N(x, U) = \sum_{\ell = 0}^{N-1} L(x(\ell), u(\ell)) + F(x(N))    (13)

subject to the appropriate constraints. Standard optimization methods are used to solve the above problem.

Let the minimizing control sequence be

U_x^o = \{u_x^o(0), u_x^o(1), \ldots, u_x^o(N-1)\}    (14)

Then the actual control applied at time k is the first element of this sequence, i.e.,

u = u_x^o(0)    (15)

Time is then stepped forward one instant, and the above procedure is repeated for another N-step-ahead optimization horizon. The first input of the new N-step-ahead input sequence is then applied. The above procedure is repeated endlessly.

The above MPC strategy implicitly defines a time-invariant control policy h(\cdot), i.e., a static mapping h: X \to U of the form

h(x) = u_x^o(0)    (16)
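To make the receding horizon procedure concrete, here is a minimal sketch of (11)-(16) using a generic numerical optimizer; the scalar model, stage cost, horizon and constraint level are assumptions chosen purely for illustration, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Receding-horizon MPC sketch for an assumed scalar nonlinear system
# x+ = f(x,u) = x + 0.1*(x**2 + u) with input constraint |u| <= 1.
# At each step the open-loop problem is solved numerically and only the
# first element of the optimal sequence is applied, as in (15)-(16).
def f(x, u):
    return x + 0.1 * (x**2 + u)

def VN(U, x, N):
    cost, xk = 0.0, x
    for i in range(N):
        cost += xk**2 + 0.1 * U[i]**2     # stage cost L(x,u)
        xk = f(xk, U[i])
    return cost + 10.0 * xk**2            # terminal penalty F(x(N))

def mpc_control(x, N=10):
    res = minimize(VN, np.zeros(N), args=(x, N),
                   bounds=[(-1.0, 1.0)] * N)   # input constraint set
    return res.x[0]                            # h(x) = u_x^o(0), cf. (16)

x = 0.8
for k in range(40):                            # closed loop: measure, solve, apply
    u = mpc_control(x)
    x = f(x, u)
print(round(x, 4))                             # state regulated toward the origin
```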

It is worth recalling that MPC originally arose in industry [54] as a response to the need to deal with constraints. In fact, it can be said that constraints are the raison d'être of MPC.

Until recently, MPC was restricted to relatively slow processes due to the need to carry out an on-line optimization to find the control policy u = h(x) for the current state x. However, recent work [59, 4] has obviated the need for this on-line calculation, at least for certain classes of systems. In particular, for the case of constrained control of linear systems, it turns out that the time invariant control policy h(\cdot) can be explicitly characterised, making on-line computation unnecessary. This has many advantages beyond simplification of the required on-line computations; e.g., the policy can be characterised off-line leading to better acceptance in practice; the range of performance can be examined and verified prior to implementation; and it has the potential to offer alternative mechanisms for analyzing stability. Moreover, these studies reveal a relationship between these optimality-based control policies and the heuristically-based anti-windup strategies [9]. Indeed, this body of work seems to open the door to widespread application of these methods to high speed, high performance and mission critical applications.

    5 Feedback Linearization

A well known and conceptually simple technique for the control of nonlinear systems where the nonlinearities are smooth is Feedback Linearization [41]. This method is restricted to certain classes of nonlinear systems and stable invertibility is required [41]. However, Feedback Linearization has strong practical appeal because of its simplicity (see, for example, [34, 44]).

The notion of Feedback Linearization was formally addressed for the first time by Brockett [6] in 1978, where he solved the problem of finding a smooth local change of coordinates z = \Phi(x) and a smooth feedback control law u = \alpha(x) + \beta(x)v in order to transform a general nonlinear system \dot{x} = f(x) + g(x)u into a linear system \dot{z} = Az + Bv. A closely related problem formulation was previously considered by Krener [39, 40] where the linearization of a general affine nonlinear control system was achieved by means of a local change of coordinates only. In later years, a more general version of the Feedback Linearization problem studied by Brockett


was solved using different approaches (see, for example, Korobov [37], Jacubczyk and Respondek [31], Sommer [62], Hunt et al. [28] and Su [64]). A detailed description of the evolution of Feedback Linearization and its influence over some important developments in nonlinear system theory can be reviewed in [41].

One noticeable drawback of feedback linearization is that, in order for it to be applicable, a certain involutivity condition has to hold [29, 41]; thus only a limited class of nonlinear systems is feedback linearizable.

An important limitation of feedback linearization is that it fails to ensure stability when the nonlinear system has unstable zero dynamics or, in analogy with linear systems, when the nonlinear system is nonminimum phase. It is possible to extend feedback linearization to nonminimum phase systems. One possible approach is via selecting another output of the system with respect to which the system has minimum phase characteristics. A different method [12, 1] is based on the construction of a minimum phase approximation of the original model using an inner-outer factorization.

We present below a brief review of Feedback Linearization. Consider the single-input single-output nonlinear state space system

\dot{x}(t) = f(x) + g(x) u(t)
y(t) = h(x)    (17)

Assume that x = 0 is an equilibrium point of (17), i.e., f(0) = 0, and that the nonlinear system (17) has relative degree r defined in a certain neighbourhood U of x = 0. Next, consider a stable linear differential operator p(\rho) of degree r (where \rho denotes the time-derivative operator d/dt):

p(\rho) = p_r \rho^r + p_{r-1} \rho^{r-1} + \ldots + 1

Then p(\rho), applied to the system output y(t), can be written as

p(\rho) y(t) = b(x) + a(x) u(t)    (18)

where b(x) and a(x) are suitable nonlinear functions of the system states. Also, we have that a(x) \neq 0 \; \forall x \in U, since the nonlinear system (17) has relative degree r in U. From (18) it is clear that if we apply the control law

u(t) = \frac{\bar{y}(t) - b(x)}{a(x)}    (19)

we will be able to transform the original nonlinear system into a linear system of the form

p(\rho) y(t) = \bar{y}(t)    (20)

where \bar{y}(t) can be any external signal.

The control law defined in (19) is known as Input-Output Feedback Linearization [30, 29]. One advantage of Feedback Linearization is that it is simple and it allows a straightforward design of the differential operator p(\rho), since the roots of p(\rho) determine the dynamic behaviour of the output of the closed-loop system. Input-to-state stability issues are addressed, for example, in [35].
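As a simple illustration of (18)-(19), the sketch below applies input-output feedback linearization to an assumed pendulum-like system of relative degree two; the model and the particular choice of p(\rho) are illustrative assumptions, not from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of input-output feedback linearization for an assumed example:
# x1dot = x2, x2dot = -sin(x1) - x2 + u, y = x1 (relative degree r = 2).
# With p(rho) = p2*rho^2 + p1*rho + 1 we get p(rho)y = b(x) + a(x)*u, and
# u = (ybar - b(x))/a(x) enforces the linear closed loop p(rho)y = ybar.
p2, p1 = 0.04, 0.4            # p has stable roots (s = -5, -5); DC gain of 1/p is 1
ybar = 1.0                    # constant external reference

def b(x):                     # b(x) obtained by expanding p(rho)y for this model
    x1, x2 = x
    return p2 * (-np.sin(x1) - x2) + p1 * x2 + x1

a = lambda x: p2              # a(x) = p2, nonzero as required

def closed_loop(t, x):
    u = (ybar - b(x)) / a(x)  # feedback linearizing law, cf. (19)
    return [x[1], -np.sin(x[0]) - x[1] + u]

sol = solve_ivp(closed_loop, (0.0, 3.0), [0.0, 0.0], max_step=0.005)
print(round(sol.y[0, -1], 3))  # y = x1 settles at ybar = 1.0
```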

    6 Generalized Feedback Linearization

A nonlinear control strategy related to Feedback Linearization is known as Generalized Feedback Linearization (GFL) [22]. This method aims to retain the essential simplicity of the Feedback Linearization scheme whilst overcoming some of the restrictions. We will see later that this objective will be achievable at the expense of having to compromise the linear behaviour of the closed-loop system. Therefore the Generalized Feedback Linearization name reflects the origin of the idea, rather than describing the properties of the resulting system.

In order to introduce the Generalized Feedback Linearization strategy we re-examine equality (20) which leads to the usual Feedback Linearization control policy (19). Previously, the order of the linear differential operator p(\rho) was r, i.e., the relative degree of the nonlinear system. We now allow the degree of p(\rho) to be n_p \geq r. Notice, at this stage, that with this choice of n_p the equality p(\rho)y(t) = b(x) + a(x)u(t) will no longer hold, since the relative degree of the nonlinear system has not changed.

As mentioned above, the Feedback Linearization strategy which achieves (20) suffers from a lack of internal stability in the presence of unstable zero dynamics, i.e., the input can diverge. It thus seems heuristically reasonable to match (20) by a similar requirement on the input u(t) of the nonlinear system. Thus, we might ask that the input satisfies a linear dynamic model of the form

l(\rho) u(t) = \bar{u}(t)    (21)

where l(\rho) is a differential operator of degree n_l = n_p - r


l(\rho) = l_{n_l} \rho^{n_l} + l_{n_l - 1} \rho^{n_l - 1} + \ldots + 1    (22)

and \bar{u}(t) is the steady state input signal which makes the steady state value of the output y(t) equal to \bar{y}(t). The control law expressed by (21) is clearly an open loop strategy. Thus, it is not suitable for open loop unstable systems. Also, conditions (20) and (21) are not, in general, simultaneously compatible. This suggests that we determine the control signal u(t) by combining both conditions in some way. A possible choice is to use a linear combination of (20) and (21), yielding

(1 - \lambda)\left[ p(\rho) y(t) - \bar{y}(t) \right] + \lambda \left[ l(\rho) u(t) - \bar{u}(t) \right] = 0    (23)

with \lambda \in [0, 1]. Equation (23) constitutes the Generalized Feedback Linearization (GFL) control law, which can also be written as

z(t) = \tilde{p}(\rho) y(t) + l(\rho) u(t) = \bar{z}(t)    (24)

where

\tilde{p}(\rho) = \frac{1 - \lambda}{\lambda}\, p(\rho), \qquad \bar{z}(t) = \frac{1 - \lambda}{\lambda}\, \bar{y}(t) + \bar{u}(t)

Notice that (24) implicitly defines an improper linear control law which becomes a nonlinear proper control law when the state space model is used to evaluate p(\rho)y(t). It is clear that this strategy reverts to the usual Feedback Linearization strategy by taking \lambda = 0 in (23). The strategy can handle all stable systems, whether or not they are stably invertible, by taking \lambda = 1. By continuity, various combinations of stable and stably invertible dynamics will also be able to be stabilized by this class of control law, depending upon the design of the differential operators p(\rho) and l(\rho), as well as the value of the parameter \lambda.

To develop the control law implicitly defined in (24), we introduce a dummy variable \tilde{u}(t), as follows:

\tilde{u}(t) = l(\rho)\, u(t)    (25)

In this way, if the original nonlinear system has relative degree r, then the nonlinear system that relates \tilde{u}(t) to y(t) has relative degree n_p, which is equal to the order of the p(\rho) operator. Hence, p(\rho)y(t) will depend again explicitly on \tilde{u}(t) and we are able to write p(\rho)y(t) as a nonlinear function of the states, i.e.,

p(\rho)\, y(t) = \tilde{b}(\eta) + \tilde{a}(\eta)\, \tilde{u}(t)    (26)

where \tilde{b}(\cdot) and \tilde{a}(\cdot) are not necessarily the same functions as in (18) and \eta = [x^T \; \xi^T]^T are the states of the extended system formed by the combination of the linear system defined in (25) and the original nonlinear system (17), as follows:

\dot{x} = f(x) + g(x)\, \xi_1
\dot{\xi} = A_l \xi + B_l \tilde{u}(t)    (27)
y = h(x)

where, without loss of generality, we have used a controller canonical form for the n_l-order state space linear model given by A_l and B_l:

A_l = \begin{bmatrix} 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \\ -\frac{1}{l_{n_l}} & -\frac{l_1}{l_{n_l}} & \cdots & -\frac{l_{n_l - 1}}{l_{n_l}} \end{bmatrix}; \qquad B_l = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ \frac{1}{l_{n_l}} \end{bmatrix}

The input u(t) of the original nonlinear system now becomes a state of the extended system (27); in particular, \xi_1 = u. Substituting (25) and (26) into expression (24), which defines the GFL control strategy, we finally obtain the following nonlinear control law:

\tilde{u}(t) = \frac{\bar{z}(t) - \tilde{b}(\eta)}{1 + \tilde{a}(\eta)}    (28)

Comparing the GFL control law (28) with the Feedback Linearization control law (19), we can see that there are close connections between the two strategies. The additional term in the denominator of the GFL control law (28) is the one responsible for preventing the controller from cancelling the system zero dynamics. The resulting closed loop obtained from the application of the GFL strategy is depicted in Figure 3.


Figure 3: General scheme of the application of the GFL strategy to a nonlinear system: the GFL control law \tilde{u}(t) = (\bar{z}(t) - \tilde{b}(\eta))/(1 + \tilde{a}(\eta)) is filtered by 1/l(\rho) to produce the plant input u(t).

To give further insight into the GFL strategy we next examine its application to the special case of linear systems. Consider the single-input single-output linear system defined by the following ordinary differential equation:

A(\rho)\, y(t) = B(\rho)\, u(t)    (29)

where

A(\rho) = \rho^n + a_{n-1}\rho^{n-1} + \ldots + a_0
B(\rho) = b_m \rho^m + b_{m-1}\rho^{m-1} + \ldots + b_0; \quad b_m \neq 0
r = n - m    (30)

and where A(\rho) and B(\rho) are assumed relatively prime.

In the sequel, we will be interested in the linear version of (24). In particular, we will show how, for the linear case, explicit dependence on the states can be transformed into dependence on filtered values of the input and output. Towards this end, we present the following:

Lemma 1 Consider the linear system given in (29). Then the output predictor used in (24) can always be expressed in terms of the system input and output signals.

We call this a Generalized Predictor for the system. In this case we need to choose n_p = n, that is, the order of the linear system, for reasons that will become clear later. Thus, we can write

p(\rho)\, y(t) = \frac{M}{E}\, y(t) + \frac{G B}{E}\, u(t)    (31)

where E and G are differential operators of degree r, with E stable, and M is a differential operator of degree n - 1 satisfying the following identity:

E p = M + G A    (32)

Proof: Starting from the system o.d.e. in (29) and multiplying both sides by G we obtain:

G A\, y(t) = G B\, u(t)

Then, using (32) to replace GA, we obtain

(E p - M)\, y(t) = G B\, u(t)    (33)

which establishes (31) after solving for p\, y(t).

We next show how the predictor presented in Lemma 1 can be used to define a proper feedback control law. This control law will be linear in the case of linear systems. Consider again the GFL control law introduced in equation (24). Our aim is to express this control law as a causal feedback relationship in terms of u(t) and y(t). Substituting expression (31) for the generalized predictor of Lemma 1 into equation (24), we have that

\frac{M}{E}\, y(t) + \frac{G B + E l}{E}\, u(t) = \bar{z}(t)    (34)

which clearly defines a linear proper controller, namely

u(t) = -\frac{M}{L}\, y(t) + \frac{E}{L}\, \bar{z}(t)    (35)

where

L = G B + E l

It can be easily verified that the degree of L is equal to n and therefore, in the linear case, the GFL control strategy is equivalent to a standard output feedback linear controller. The closed-loop characteristic polynomial can readily be seen to be E(A l + B p). Hence, if we choose l and p such that A l + B p = A^* is stable,


then the above design leads to a stable closed-loop system. It is also interesting to examine the implicit output defined by

z = p\, y + l\, u    (36)

It is easily seen that this output has stable zero dynamics since

A z = A p\, y + A l\, u = A^* u    (37)

where A^* is stable by choice of p and l.

The above special linear case suggests that in the nonlinear case we might choose p and l so that [A_o l + B_o p] is stable, where A_o, B_o denote the denominator and numerator of a linearized model at the required operating point. In this way, we ensure that z has stable zero dynamics locally to the desired operating point. Indeed, via this route, it is relatively easy to establish [55] that the GFL strategy is, at least, locally stabilizing for a broad class of nonlinear systems. Global stability results for systems of Lure type are also studied in [55].

    7 Flatness

Another concept related to that of stable zero dynamics is that of flatness [18]. The main feature of differential flatness is the presence of a fictitious output z (called a flat or linearizing output) such that:

every system variable may be expressed as a function of the components of z and of a finite number of its time derivatives;

z may be expressed as a function of the system variables and of a finite number of their time derivatives; and

the number of independent components of z is equal to the number of independent input elements.

Usually the search for linearizing outputs is based on physical insights into the problem [17]. To gain insight into the idea we will reexamine the special case of linear single-input single-output systems. Consider a linear, not necessarily minimum phase, system having transfer function

G(s) = \frac{B(s)}{A(s)}    (38)

where B(s), A(s) are relatively prime. Since A and B are relatively prime, we can choose l and p such that A l + B p = 1. We now define a new system variable by

z = p\, y + l\, u    (39)

Notice that

A z = u, \qquad B z = y

Thus z qualifies as a flat output. Note that a stabilizing control can be easily obtained by use of the control law implicitly defined in equation (24), i.e.,

z = \bar{z}

where \bar{z} depends on the desired operating point.

We note that there is a close connection between finding a flat output and generalized feedback linearization. Specifically, in generalized feedback linearization we use the linear form (39) to define a variable z. The key point in GFL is that we design p and l such that the zero dynamics associated with z are locally stable. However, when using flatness, we seek some other variable z (defined via a nonlinear relationship) which has stable zero dynamics associated with it.
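For the linear case just discussed, a flat output can be computed numerically by solving the Bezout identity A l + B p = 1. The sketch below does this for an assumed nonminimum phase second-order example; the particular polynomials are illustrative assumptions, not from the paper.

```python
import numpy as np

# Sketch: computing a flat output z = p(rho)y + l(rho)u for a linear system
# A(rho)y = B(rho)u by solving A*l + B*p = 1 with A, B coprime. Assumed
# example: A = rho^2 + 3rho + 2, B = rho - 1 (a nonminimum phase plant).
# Polynomial coefficients are ordered highest power first.
A = np.array([1.0, 3.0, 2.0])
B = np.array([1.0, -1.0])

# Unknowns: l of degree 0 (l0) and p of degree 1 (p1*rho + p0), so that
# A*l + B*p has degree 2. Build the 3x3 linear system in (l0, p1, p0).
S = np.zeros((3, 3))
S[:, 0] = A              # column for l0 * A
S[0:2, 1] += B           # column for p1 * rho * B
S[1:3, 2] += B           # column for p0 * B
coeffs = np.linalg.solve(S, np.array([0.0, 0.0, 1.0]))
l = coeffs[:1]           # l(rho) = l0
p = coeffs[1:]           # p(rho) = p1*rho + p0

# Check the Bezout identity; it then follows (as in the text) that
# z = p(rho)y + l(rho)u satisfies A(rho)z = u and B(rho)z = y.
print(np.polyadd(np.polymul(A, l), np.polymul(B, p)))   # approx. [0, 0, 1]
```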

    8 Backstepping

As mentioned in the introduction, backstepping is a method that can be used on systems of special structure to find an output having a passivity property, including relative degree one and stable zero dynamics. The notion of backstepping was probably first used in the context of Adaptive Control to extend known results to the relative degree > 1 case [16]. The method was developed and used for general nonlinear systems; see [57, 58]. To simplify the presentation we will first consider the simplest case of second order systems having a special form.

8.1 Backstepping in a Simple Case

Consider a system comprising a chain of integrators, i.e.,

\dot{x}_1 = f_1(x_1) + g_1(x_1)\, x_2    (40)
\dot{x}_2 = u    (41)


Notice that the first equation does not depend explicitly on u. To ensure passivity, we require z to be of relative degree one. It suffices that z depends on x_2 via an equation of the form

z = x_2 - \phi(x_1)    (42)

Next, we choose \phi(x_1) such that the system has stable zero dynamics. Setting z = 0 shows that the zero dynamics satisfy

\dot{x}_1 = f_1(x_1) + g_1(x_1)\, \phi(x_1)    (43)

Hence, we have another stabilization problem. However, this stabilization problem is for a lower dimensional system (here of order 1) in which the state x_2 = \phi(x_1) is the input. Indeed, making the special choice

\phi = \frac{-[x_1 + f_1]}{g_1}    (44)

would yield the zero dynamics as \dot{x}_1 = -x_1, which is clearly stable. We next express the original problem in terms of x_1 and z. Specifically, from (40), (42) and (44) we have

\dot{x}_1 = -x_1 + g_1 z    (45)

Also, from (42) and (41),

\dot{z} = \dot{x}_2 - \frac{d\phi}{dx_1}\dot{x}_1    (46)
       = u - \frac{d\phi}{dx_1}(-x_1 + g_1 z)    (47)

We now introduce the Lyapunov function

V = \frac{1}{2}x_1^2 + \frac{1}{2}z^2    (48)

Then

\dot{V} = x_1\dot{x}_1 + z\dot{z}
        = x_1[-x_1 + g_1 z] + z\left[u - \frac{d\phi}{dx_1}(-x_1 + g_1 z)\right]

It is then clear that, if we choose u such that

x_1 g_1 + \left[u - \frac{d\phi}{dx_1}(-x_1 + g_1 z)\right] = -z    (49)

then

\dot{V} = -x_1^2 - z^2    (50)

This line of argument can be used [57, 7] to establish that the feedback control law defined via (49), (44) and (42) is globally asymptotically stabilizing. We see again that finding an output z having stable zero dynamics plays a key role in this design.

Indeed, the closed-loop system resulting from this control law is

\dot{x}_1 = -x_1 + g_1 z    (51)
\dot{z} = -z - g_1 x_1    (52)

Of course, since z has stable zero dynamics associated with it, we could also attempt to apply feedback linearization. Indeed, choosing p(\rho) = \rho + 1, then

\dot{z} + z = \dot{x}_2 - \frac{d\phi}{dx_1}\dot{x}_1 + z    (53)
           = u - \frac{d\phi}{dx_1}[f_1 + g_1 x_2] + x_2 - \phi(x_1)    (54)

Then the alternative control law is as follows:

u = \frac{d\phi}{dx_1}[f_1 + g_1 x_2] - x_2 + \phi(x_1)    (55)

which yields the following closed-loop system:

\dot{z} = -z    (56)
\dot{x}_1 = -x_1 + g_1 z    (57)

Note that this alternative system is input-output stable and locally input-to-state stable. However, global input-to-state stability requires additional restrictions, e.g., linear growth conditions.
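A minimal simulation sketch of the backstepping law (42), (44), (49) is given below for an assumed choice f_1(x_1) = x_1^2, g_1(x_1) = 1; these functions and the initial condition are illustrative only, not from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: backstepping for the chain (40)-(41) with the assumed choices
# f1(x1) = x1**2 and g1(x1) = 1. phi is chosen as in (44), so the zero
# dynamics are x1dot = -x1, and the control law follows (49).
f1   = lambda x1: x1**2
g1   = lambda x1: 1.0
phi  = lambda x1: -(x1 + f1(x1)) / g1(x1)        # equation (44)
dphi = lambda x1: -(1.0 + 2.0 * x1)              # d(phi)/dx1 for this f1, g1

def closed_loop(t, x):
    x1, x2 = x
    z = x2 - phi(x1)                             # equation (42)
    # control law from (49): x1*g1 + [u - dphi*(-x1 + g1*z)] = -z
    u = -z - x1 * g1(x1) + dphi(x1) * (-x1 + g1(x1) * z)
    return [f1(x1) + g1(x1) * x2, u]             # plant (40)-(41)

sol = solve_ivp(closed_loop, (0.0, 8.0), [2.0, -1.0], max_step=0.01)
print(sol.y[:, -1])   # both states decay close to the origin, as (50) predicts
```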


8.2 An Algebraic View of Backstepping

Here we present a more general formulation of backstepping due to Clements and Jiang [8]. It is instructive to first examine the linear case where the system is in lower triangular form

\dot{x} = N x + A x + b u    (58)

where

N is a zero matrix save for 1's on the upper diagonal;
b is a zero vector save for the last entry, which is unity;
A is lower triangular.

This represents a string of integrators together with some feedback defined via A. We now define a new state vector z by

z = x + H x    (59)

and a new input by

v = u + k^T x    (60)

where H is strictly lower triangular and k is any row vector. The closed-loop system with input v and state z satisfies

\dot{z} = N z + \bar{A} z + b v    (61)

where

\bar{A} = (I + H)[(N + A) - b k^T](I + H)^{-1} - N    (62)

which also has the same structure as A. The associated design problem is to find k and H that yield a particular \bar{A}. Rearranging (62), we have

N H + b k^T = A + H(N + A) - \bar{A}(I + H)    (63)

Now, let
h_1, \ldots, h_n denote the rows of H;
a_1, \ldots, a_n denote the rows of A;
\bar{a}_1, \ldots, \bar{a}_n denote the rows of \bar{A}.

Then, equation (63) can be written row by row, for i = 1, \ldots, n-1, as

h_{i+1} = a_i + h_i(N + A) - \bar{a}_i - \sum_{j=1}^{i} \bar{a}_{ij}\, h_j    (64)

initialized by h_1, and from the last row

k^T = a_n + h_n(N + A) - \bar{a}_n - \sum_{j=1}^{n} \bar{a}_{nj}\, h_j    (65)

Setting h_1 = 0, these equations uniquely define k and a strictly lower triangular H. Equations (64) and (65) are the (linear version of the) backstepping equations.
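The following numerical sketch implements the row-by-row recursion (64)-(65), as reconstructed above with h_1 = 0, for randomly generated lower triangular A and \bar{A}, and checks the result against (62); the dimension and data are arbitrary assumptions.

```python
import numpy as np

# Numerical check of the linear backstepping equations (64)-(65), under the
# notational assumptions used here: N is the upshift matrix, b = e_n, A and
# Abar are lower triangular, and H is strictly lower triangular with h_1 = 0.
n = 3
rng = np.random.default_rng(0)

N = np.diag(np.ones(n - 1), k=1)             # ones on the upper diagonal
b = np.zeros((n, 1)); b[-1, 0] = 1.0
A = np.tril(rng.standard_normal((n, n)))     # given plant feedback
Abar = np.tril(rng.standard_normal((n, n)))  # desired closed-loop feedback

H = np.zeros((n, n))                         # first row h_1 = 0
k = np.zeros((1, n))
for i in range(n):                           # rows indexed 0..n-1 here
    rhs = A[i] + H[i] @ (N + A) - Abar[i] - Abar[i, :i + 1] @ H[:i + 1]
    if i < n - 1:
        H[i + 1] = rhs                       # equation (64)
    else:
        k = rhs.reshape(1, n)                # equation (65)

# Verify (62): with z = (I+H)x and v = u + k x, the closed loop is
# zdot = N z + Abar z + b v, i.e. the bracketed expression equals Abar.
lhs = (np.eye(n) + H) @ (N + A - b @ k) @ np.linalg.inv(np.eye(n) + H) - N
print(np.allclose(lhs, Abar))                # expect True
```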

We next develop the nonlinear version of this idea. Let \bar{x}_i denote the first i components of x. A matrix function f is said to be lower triangularly dependent (strictly) on x if the i-th row is a function of \bar{x}_i (\bar{x}_{i-1}). We also say that a function f is lower triangular (strictly) if all the entries above (above and on) the diagonal are zero.

Consider the nonlinear system

\dot{x} = N x + \phi(x) + b u    (66)

where \phi(x) is lower triangularly dependent on x. This system is said to be in strict feedback form. We then define a new state variable

z = x + N^T f(x)    (67)

and an auxiliary input

v = u + b^T f(x)    (68)

defined in terms of one function f which we take to be lower triangularly dependent on x. Due to the structure of N^T f(x) and b^T f(x), the above transformations are globally invertible: i.e., there exists a function \bar{f}(z) which is lower triangularly dependent on z such that

x = z + N^T \bar{f}(z)    (69)
u = v + b^T \bar{f}(z)    (70)


Also note that

N z + b v = N x + b u + f(x)    (71)

since N N^T + b b^T = I. Also,

N^T \left[\frac{\partial f}{\partial x}\right] b = 0    (72)

The desired loop dynamics for z are then simply seen to satisfy

\dot{z} = \dot{x} + N^T\left[\frac{\partial f}{\partial x}\right]\dot{x}
        = N x + b u + \phi(x) + N^T\left[\frac{\partial f}{\partial x}\right](N x + \phi(x))
        = N z + b v - f(x) + \phi(x) + N^T\left[\frac{\partial f}{\partial x}\right](N x + \phi(x))
        = N z + \bar{\phi}(z) + b v

where

\bar{\phi}(z) = \left[-f(x) + \phi(x) + N^T\left[\frac{\partial f}{\partial x}\right](N x + \phi(x))\right]_{x = z + N^T \bar{f}(z)}    (73)

It is easy to see that \bar{\phi}(z) is again lower triangularly dependent on z. Now, as in the linear case, we seek to find an f such that we achieve a given \bar{\phi}. Equation (73) can be rewritten as

f(x) = \phi(x) + N^T\left[\frac{\partial f}{\partial x}\right](N x + \phi(x)) - \bar{\phi}(x + N^T f(x))    (74)

Now, based on the triangular dependence of the various quantities, this equation can be readily solved for f in a row by row fashion. This yields the backstepping equations for systems in strict feedback form.

The above development has been purely structural. However, since we can generate an arbitrary \bar{\phi}(z), it is now reasonable to ask how we might choose \bar{\phi}(z) to ensure (Lyapunov) stability. For example, we could choose \bar{\phi}(z) as

\bar{\phi}(z) = -N^T z - C(z, t)\, z    (75)

where C(z, t) is lower triangularly dependent on z and is positive semi-definite. Then consider the Lyapunov function

V(z) = \frac{1}{2} z^T z    (76)

Then

\dot{V}(z) = z^T (N z + \bar{\phi}(z))    (77)
           = z^T (N - N^T - C(z, t))\, z = -z^T C(z, t)\, z \leq 0

which ensures that the system is Lyapunov stable.

    9 Illustrative Examples

Finally, we present some simple nonlinear control system design problems. These are motivated by physical systems. They could be used, for example, as benchmark problems to test new nonlinear design methods.

9.1 Boost Converter

We begin with an example taken from [19] of a dc-dc power converter. The average behaviour of a modulated boost converter circuit may be described by

\dot{x}_1 = -C_1 u\, x_2 + C_2
\dot{x}_2 = C_3 u\, x_1 - C_4 x_2

where x_1 is the average inductor current, x_2 is the average output capacitor voltage and u is the duty cycle, taking values in the interval [0, 1] and constituting the input. Note that x_2 has unstable zero dynamics.
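Using the parameter values quoted later in this section (C_1 = 50, C_2 = 750, C_3 = 50 \times 10^3, C_4 = 667, taken here as assumed readings of the text), the sketch below simulates the averaged model for a step in the duty cycle and exposes the inverse response of x_2 associated with its unstable zero dynamics; the step size and horizon are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Averaged boost converter model. A step in the duty cycle u from 0.25 to 0.2
# raises the steady-state voltage x2 = C2/(C1*u) from 60 to 75, but x2 first
# moves the wrong way, illustrating the nonminimum phase behaviour noted above.
C1, C2, C3, C4 = 50.0, 750.0, 50e3, 667.0
u0, u1 = 0.25, 0.20

def model(t, x, u):
    x1, x2 = x
    return [-C1 * u * x2 + C2, C3 * u * x1 - C4 * x2]

x_ss = [C4 * 60.0 / (C3 * u0), 60.0]            # steady state at u = 0.25
sol = solve_ivp(model, (0.0, 0.05), x_ss, args=(u1,), max_step=1e-4)
print(round(min(sol.y[1]), 2), round(sol.y[1, -1], 2))  # undershoot below 60, then -> 75
```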

It is shown in [19] that a suitable flat output, z, is given by the average stored energy:

z = \frac{1}{2 C_1} x_1^2 + \frac{1}{2 C_3} x_2^2    (78)

It can be shown that (as required by flatness)

x_1 = f_1(z, \dot{z})    (79)
x_2 = f_2(z, \dot{z})    (80)
u = f_3(z, \dot{z}, \ddot{z})    (81)

where f_1(\cdot), f_2(\cdot), f_3(\cdot) are particular nonlinear functions. Now, say that we want z to satisfy the following linear dynamics:


\ddot{z} + a_1 \dot{z} + a_0 z = a_0 \bar{z}    (82)

In this case, we have from (81) that a stabilizing feedback loop around the set point can be obtained by setting

u = f_3(z, \dot{z}, -a_1\dot{z} - a_0 z + a_0\bar{z})    (83)

In the simulations presented below we compare the performance of GFL and the controller based on flatness described above. As in [19], we choose C_1 = 50, C_2 = 750, C_3 = 50 \times 10^3, C_4 = 667. For the GFL design, we linearized about the operating point [u_o = 0.25, y_o = 60] and set A^*(s) = (s + 300)^2(s + 600) to design p and l, which are then converted into nonlinear feedback via the state equations. Also, we choose \lambda = 0.37. For the design based on flatness we choose a_1 = 360, a_0 = 4000.

Figure 4 shows two responses based on a flatness design. The faster trajectory (having greater undershoot) is defined as in (83). The slower response (with less undershoot) uses the alternative smooth trajectory as defined in [19]. Note that, in a similar fashion to the linear case [20], there exists a connection between the extent of undershoot and the rise time. An open research question is how to formally quantify this relationship in the nonlinear case.

Figure 4: Plant output versus time [s] for the two flatness-based designs.

Figure 5 shows the response obtained from the GFL design. Our experience was that the designs based on flatness were easier to tune once one had found the flat output. Choosing p and l in the GFL design took time. We also tested a purely linear design obtained from the linearized model. This performed well provided the linearization point was chosen carefully, but was less robust to the point of linearization than the GFL strategy.

Figure 5: Plant output versus time [s] for the GFL design.

9.2 Cyclopentenol Synthesis

This system is typical of many chemical and biochemical systems [14, 13]. Other related examples are discussed in [24]. We assume that the volume of the system is constant and that isothermal conditions apply. We consider a set of reactions, in which an input component A reacts as follows:

A \xrightarrow{k_1} B \xrightarrow{k_2} C
2A \xrightarrow{k_3} E

Let x_1, x_2, d denote the concentration of A, the concentration of B and the input concentration, respectively. Then it follows that a suitable model can be written as

\dot{x}_1 = -k_1 x_1 - k_3 x_1^2 + (d - x_1) u    (84)
\dot{x}_2 = k_1 x_1 - k_2 x_2 - x_2 u    (85)
y = x_2    (86)

where u is the incoming (and outgoing) flow rate. This system has relative degree 1 and unstable zero dynamics. In order to highlight the system zero dynamics we need to transform the system state space representation (84), (85) into an appropriate normal form [35]. We suggest using the following change of variables:

z = \Phi(x) = \begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = \begin{bmatrix} x_2 \\ (d - x_1)/x_2 \end{bmatrix}    (87)

which defines a global diffeomorphism, since its inverse is well defined \forall x, z \in R^2:


x = \Phi^{-1}(z) = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} d - z_1 z_2 \\ z_1 \end{bmatrix}    (88)

The normal form obtained from the application of the change of variables (87) is given by

\dot{z}_1 = k_1(d - z_1 z_2) - k_2 z_1 - z_1 u
\dot{z}_2 = \frac{k_3(d - z_1 z_2)^2 + k_1(1 - z_2)(d - z_1 z_2) + k_2 z_1 z_2}{z_1}
y = z_1

Assuming we want to keep the system output y(t) fixed at the value y(t) = z_1^0, the system zero dynamics are defined by

\dot{z}_2 = \frac{k_3(d - z_1^0 z_2)^2 + k_1(1 - z_2)(d - z_1^0 z_2) + k_2 z_1^0 z_2}{z_1^0}    (89)

In the sequel we take k_1 = 39.58, k_2 = 39.58, k_3 = 5.43, d = 5.1. The system shows either minimum or nonminimum-phase behaviour depending upon the operating point around which the plant is evolving. The linearized dynamics are further explored in Figure 6, which shows the steady state response in x_1 and x_2 versus the input u, the location of the (linearized) zero versus the input u_o (at the local operating point) and the location of the (linearized) poles versus the input u_o (at the local operating point).

Figure 6: Steady-state values of x_1 and x_2, and the linearized zero and pole locations, versus the input u.
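The trends shown in Figure 6 can be reproduced by linearizing (84)-(85) at the steady state associated with each constant input u_o; the sketch below does this for a few assumed values of u_o (the grid of inputs is an illustrative choice).

```python
import numpy as np

# Sketch: for each constant input uo, compute the steady state of (84)-(85),
# linearize there, and evaluate the transfer-function zero and poles from
# u to y = x2. Parameter values are those quoted in the text.
k1, k2, k3, d = 39.58, 39.58, 5.43, 5.1

for uo in [10.0, 40.0, 80.0, 160.0]:
    # steady state: k3*x1**2 + (k1+uo)*x1 - d*uo = 0 (positive root)
    x1 = (-(k1 + uo) + np.sqrt((k1 + uo)**2 + 4 * k3 * d * uo)) / (2 * k3)
    x2 = k1 * x1 / (k2 + uo)
    A = np.array([[-k1 - 2 * k3 * x1 - uo, 0.0],
                  [k1, -k2 - uo]])                # Jacobian w.r.t. the state
    B = np.array([d - x1, -x2])                   # Jacobian w.r.t. the input u
    poles = np.linalg.eigvals(A)
    zero = A[0, 0] - A[1, 0] * B[0] / B[1]        # zero of C(sI-A)^{-1}B with C = [0 1]
    print(uo, round(zero, 1), np.round(poles.real, 1))
```

Running this sketch shows the zero crossing from the right half plane to the left half plane near u_o = 40, consistent with the discussion below.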

Note that the linearized gain to x_2 is positive for u_o \in [0, 40] and negative for u_o \in [40, \infty). Similarly, the system is nonminimum phase for u_o \in [0, 40] and minimum phase for u_o \in [40, \infty). These characteristics make control of this seemingly simple system challenging. In the sequel, we assume that the input u must lie in the range [0, 200].

(i) GFL: To apply the Generalized Feedback Linearization strategy we use a differential operator p(\rho) of order n_p = n = 2 and a differential operator l(\rho) of order n_l = n_p - r = 1:

p(\rho) = p_2 \rho^2 + p_1 \rho + 1,
l(\rho) = l_1 \rho + 1.    (90)

We know that, using the auxiliary variable \tilde{u}(t) = l(\rho) u(t), the expression for p(\rho) y(t) can be written as

p(\rho)\, y(t) = \tilde{b}(\eta) + \tilde{a}(\eta)\, \tilde{u}(t)    (91)

where \eta represents the states of the extended system obtained by the inclusion of the dynamics of 1/l(\rho).

For the system described in (84) to (86) we have

\tilde{b}(\eta) = p_2\left[k_1\dot{x}_1 - (k_2 + u)\dot{x}_2 + \frac{1}{l_1}x_2 u\right] + p_1\left[k_1 x_1 - k_2 x_2 - x_2 u\right] + x_2

\tilde{a}(\eta) = -\frac{p_2}{l_1}\, x_2    (92)

Since, in this problem, the steady state gain changes sign, we were not able to use the same GFL design for widely differing operating points. Figure 7 shows the response in the nonminimum phase region for different values of \lambda. Notice that as \lambda is decreased the response becomes faster but the undershoot greater.

(ii) A Simple Switching Control Law: Finally, we adopt an engineering approach to this problem. Figure 6 indicates that x_1 uniquely defines the system steady state operating point. Since the input is constrained to lie in the range [0, 200], some ad-hoc reasoning leads to the following simple switching control law (implemented with hysteresis of 0.05):

u(t) = 200 if x_1 < \bar{x}_1
u(t) = 0 if x_1 > \bar{x}_1

where \bar{x}_1 defines the desired steady state value of x_1.

Figure 7: Plant output versus time [h] for the GFL design in the nonminimum phase region, for several values of \lambda.

Figure 8 shows the response of x_2 when \bar{x}_1 is changed from 1 to 3 (i.e., from the operating point [u_o = 10, x_2 = 0.7] to the operating point [u_o = 80, x_2 = 1]). The inverse-response characteristic of x_2 is evident in

this plot. The final limit cycle is due to the switching nature of the controller and could be removed by use of a local controller.

Figure 8: Plant output x_2 versus time [h] for the switching control law.

The responses obtained here are quite good. The moral is that one should not be afraid of nonlinear problems and that, sometimes, an acceptable control strategy can be obtained by ad-hoc design provided it captures the key features of the problem under study.
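A minimal simulation sketch of this switching strategy is given below; the \pm 0.05 hysteresis band, Euler integration step and simulation horizon are one possible reading of the description above, chosen purely for illustration.

```python
import numpy as np

# Sketch of the ad-hoc switching law applied to the CSTR model (84)-(85) with
# k1 = k2 = 39.58, k3 = 5.43, d = 5.1 and u constrained to [0, 200], using a
# +/-0.05 hysteresis band on x1 around the target xbar1.
k1, k2, k3, d = 39.58, 39.58, 5.43, 5.1
x1_ref = 3.0                          # desired steady-state value of x1
x = np.array([1.0, 0.7])              # start near the [uo = 10] operating point
u, dt = 0.0, 1e-5                     # time in hours

for _ in range(int(0.3 / dt)):        # simulate 0.3 h
    # switching law with hysteresis: full flow if x1 is low, no flow if high
    if x[0] < x1_ref - 0.05:
        u = 200.0
    elif x[0] > x1_ref + 0.05:
        u = 0.0
    x1, x2 = x
    x = x + dt * np.array([-k1 * x1 - k3 * x1**2 + (d - x1) * u,
                           k1 * x1 - k2 * x2 - x2 * u])

print(np.round(x, 2))                 # x1 hovers near 3; x2 limit-cycles near 1
```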

    10 Conclusions

Nonlinearities are rarely too far from the surface in real world control problems. Indeed, it might be argued that certain kinds of nonlinear behaviour (e.g., actuator amplitude and slew rate constraints) are ubiquitous issues in control. This paper has given a brief overview of some of the simpler strategies for nonlinear control. Some of these methods, e.g., feedback linearization and backstepping, can be seen to be very close to linear methods and, as such, are easy to use in practice. Other methods are more subtle and require specialist knowledge to understand. Our introduction to nonlinear control has necessarily been brief, in part limited by space but perhaps, more truthfully, by our own expertise.

An important question not answered here is: How can one distinguish those systems where nonlinear control will give genuine performance enhancements from those systems where a simple linear control will do just as well? This question is, to the best of the authors' knowledge, largely open. Notwithstanding this observation, nonlinear thinking can be of major benefit in practical problems and we encourage readers not to be afraid to try these techniques, which can, if the circumstances are right, lead to major benefits.

    References

[1] F. Allgower, Approximate input-output linearization of nonminimum phase nonlinear systems, Proc. 4th ECC, Belgium, 1997.

[2] F. Allgower, T.A. Badgwell, J.S. Qin, J.B. Rawlings and S.J. Wright, Nonlinear Predictive Control and Moving Horizon Estimation - An Introductory Overview, in Advances in Control, Paul M. Frank (Ed), Springer Verlag, London, 1999.

[3] D.P. Atherton, Nonlinear control engineering, Van Nostrand Reinhold, 1982.

[4] A. Bemporad, M. Morari, V. Dua and E. Pistikopoulos, The explicit solution of model predictive control via multiparametric quadratic programming, ACC, USA, 2000.

[5] D. Bernstein and A. Michel, A chronological bibliography on saturating actuators, Int. J. of Robust and Nonlinear Control, Vol.5, pp.375-380, 1995.

[6] R.W. Brockett, Feedback invariants for nonlinear systems, Proceedings of the International Congress of Mathematicians, Helsinki, pp.1357-1368, 1978.

[7] L.O. Chua, C.A. Desoer and E.S. Kuh, Linear and Nonlinear Circuits, McGraw-Hill, New York, 1987.

[8] D.J. Clements and Y.A. Jiang, Introduction to backstepping: State feedback of strict feedback systems, University of N.S.W., Australia, 2001.

[9] J.A. De Dona, G.C. Goodwin and M.M. Seron, Anti-windup and model predictive control: Reflections and connections, European Journal of Control, Vol.6, pp.467-477, 2000.

[10] J. De Dona, G.C. Goodwin and M. Seron, Constrained Control and Estimation, Springer-Verlag, 2002.


[11] J.A. De Dona, S.O.R. Moheimani, G.C. Goodwin and A. Feuer, Robust hybrid control incorporating over-saturation, Systems and Control Letters, Vol.38, pp.179-185, 1999.

[12] F.J. Doyle III, F. Allgower and M. Morari, A normal form approach to approximate input-output linearization for maximum-phase nonlinear SISO systems, IEEE Trans. on Auto. Cont., Vol.41, pp.305-309, 1996.

[13] F.J. Doyle and M.A. Henson, Nonlinear systems theory, in M.A. Henson and D.E. Seborg (Eds), Nonlinear Process Control, Prentice Hall, 1997.

[14] S. Engell and K.U. Klatt, Gain scheduling of a non-minimum phase CSTR, Proc. ECC, pp.2323-2328, 1993.

[15] H. Fertik and C. Ross, Direct digital control algorithm with anti-windup feature, Instrument Society of America, Preprint No.10-1-1CQS-67, 1967.

[16] A. Feuer and S. Morse, Adaptive control of single-input single-output linear systems, IEEE Trans. on Auto. Cont., Vol.23, pp.557-570, 1978.

[17] M. Fliess, J. Levine, P. Martin and P. Rouchon, Controlling nonlinear systems by flatness, in C.I. Byrnes (Ed), Systems and Control in the Twenty-First Century, Birkhauser, 1997.

[18] M. Fliess, J. Levine, P. Martin and P. Rouchon, Flatness and defect in nonlinear systems: Introducing theory and applications, International Journal of Control, Vol.61, pp.1327-1361, 1995.

[19] M. Fliess, H. Sira-Ramirez and R. Marquez, Regulation of nonminimum phase outputs: A flatness based approach, in D. Normand-Cyrot (Ed), Perspectives in Control, Springer-Verlag, 1998.

[20] G.C. Goodwin, S.F. Graebe and M.E. Salgado, Control System Design, Prentice-Hall, 2001.

[21] G.C. Goodwin and O.J. Rojas, Robustness issues associated with the provision of integral action in nonlinear systems, in S.O.R. Moheimani (Ed), Perspectives in Robust Control, Springer, 2001.

[22] G.C. Goodwin, O. Rojas and H. Takata, Nonlinear control via Generalized Feedback Linearization using Neural Networks, Asian Journal of Control, Vol.3, pp.79-88, 2001.

[23] S.F. Graebe, G.C. Goodwin and G. Elsley, Rapid prototyping and implementation of control in continuous steel casting, IEEE Control Systems Magazine, Vol.15, No.4, pp.64-71, 1995.

[24] M. Guay, S. Kansal and J.F. Forbes, Trajectory optimization for flat dynamic systems, Industrial Engineering Chemical Res., Vol.40, pp.2089-2102, 2001.

[25] R. Hanus, A new technique for preventing control windup, Journal A, Vol.21, No.1, pp.15-20, 1980.

[26] D.J. Hill and P.J. Moylan, Dissipative dynamical systems: Basic input-output and state properties, J. of the Franklin Institute, (309)5, pp.327-357, 1980.

[27] D. Hrovat and H. Tan, Application of gain scheduling to design of active suspensions, IEEE CDC, pp.1030-1035, San Antonio, 1993.

[28] R.L. Hunt and R. Su, Linear equivalents of nonlinear time varying systems, in N. Levan (Ed), Proceedings of the Symposium on the Mathematical Theory of Networks and Systems, Western Periodicals, North Hollywood, CA, pp.119-123, 1981.

[29] A. Isidori, Nonlinear Control Systems, Springer-Verlag, 1995.

[30] A. Isidori and A. Ruberti, On the synthesis of linear input-output responses for nonlinear systems, Systems and Control Letters, Vol.4, pp.17-22, 1984.

[31] B. Jacubczyk and W. Respondek, On linearization of control systems, Bull. Acad. Polonaise Sci. Ser. Sci. Math, Vol.28, pp.517-522, 1980.

[32] M.S. Jenkins, B.G. Thomas, W.C. Chen and R.B. Mahapatra, Investigation of strand surface defects using mould instrumentation and modelling, Proceedings 77th Steel Making Conference, Iron and Steel Society, Chicago, 1994.

[33] J. Jiang, Optimal gain scheduling controllers for a diesel engine, IEEE Control Systems Magazine, Vol.14, No.4, pp.42-48, 1994.

[34] K.M. Kam and M.O. Tade, Nonlinear control of a simulated industrial evaporation system using a feedback linearization technique with a state observer, Ind. & Engg Chem. Res., Vol.38, pp.2995-3006, 1999.

[35] H.K. Khalil, Nonlinear Systems, Prentice Hall, 1996.

[36] I. Kolmanovsky and E.G. Gilbert, Multimode regulators for systems with state and control constraints and disturbance inputs, in A.S. Morse (Ed), Control using Logic-Based Switching: Lecture Notes in Control and Information Sciences, Springer Verlag, pp.118-127, 1996.

[37] V.I. Korobov, A general approach to the solution of the problem of synthesizing bounded controls in a control problem, Math USSR-Sb, Vol.37, p.535, 1979.

[38] M. Kothare, P. Campo, M. Morari and C. Nett, A unified framework for the study of anti-windup designs, Automatica, Vol.30, No.12, pp.1869-1883, 1994.

[39] A.J. Krener, On the equivalence of control systems and the linearization of nonlinear systems, SIAM Journal on Control, Vol.11, pp.670-676, 1973.

[40] A.J. Krener, Local approximation of control systems, Journal of Differential Equations, Vol.19, pp.125-133, 1975.


[41] A.J. Krener, Feedback Linearization, in J. Baillieul and J.C. Willems (Eds.), Mathematical Control Theory, Springer Verlag, New York, pp.66-98, 1999.

[42] M. Krstic, I. Kanellakopoulos and P. Kokotovic, Nonlinear and Adaptive Control Design, Wiley, 1995.

[43] D.A. Lawrence and W.J. Rugh, Gain scheduling dynamic linear controllers for a nonlinear plant, Automatica, Vol.31, pp.381-390, 1995.

[44] D.C. Lee, G.M. Lee and K.D. Lee, DC-bus voltage control of three-phase AC/DC PWM converters using feedback linearization, IEEE Trans. Industry Applications, Vol.36, pp.826-833, 2000.

[45] Z. Lin, Global control of linear systems with saturating actuators, Automatica, 34(7), pp.897-905, 1998.

[46] Z. Lin, M. Pachter, S. Banda and Y. Shamash, Stabilizing feedback design for linear systems with rate limited actuators, in S. Tarbouriech and G. Garcia (Eds), Control of Uncertain Systems with Bounded Inputs, Vol.227 of Lecture Notes in Control and Information Sciences, Springer Verlag, pp.173-186, 1997.

[47] Z. Lin and A. Saberi, Semi-global exponential stabilization of linear systems subject to input saturation via linear feedbacks, Systems and Control Letters, Vol.21, pp.225-239, 1993.

[48] J. Lozier, A steady-state approach to the theory of saturable servo systems, IRE Transactions on Automatic Control, pp.19-39, 1956.

[49] I. Mareels and J.W. Polderman, Adaptive Systems, An Introduction, Birkhauser, 1996.

[50] D.Q. Mayne, Nonlinear model predictive control: Challenges and opportunities, in F. Allgower and A. Zheng (Eds), Nonlinear Predictive Control, Birkhauser, Basel, 1999.

[51] D.Q. Mayne, J. Rawlings, C. Rao and P. Scokaert, Constrained model predictive control: Stability and optimality, Automatica, 2000.

[52] M. Morari and J.H. Lee, Model predictive control: Past, present and future, in Proceedings PSE97-ESCAPE-7 Symposium, Trondheim, 1997.

[53] H. Nijmeijer and A.J. van der Schaft, Nonlinear Dynamic Control Systems, Springer-Verlag, 1990.

[54] S.J. Qin and T.A. Badgwell, An overview of industrial model predictive control technology, in J.C. Kantor, C.E. Garcia and B. Carnahan (Eds), CPC V, 1996.

[55] O.J. Rojas and G.C. Goodwin, A nonlinear control strategy related to feedback linearization capable of dealing with non-stably invertible nonlinear systems, Technical Note, Univ. of Newcastle, 2001.

[56] W.J. Rugh, Analytical framework for gain scheduling, IEEE Control Systems Magazine, Vol.11, No.1, pp.79-84, 1991.

[57] R. Sepulchre, M. Jankovic and P. Kokotovic, Constructive Nonlinear Control, Springer-Verlag, 1996.

[58] R. Sepulchre, M. Jankovic and P. Kokotovic, Recursive designs and feedback passivation, in C. Byrnes et al. (Eds), Systems and Control in the Twenty-First Century, Birkhauser, 1997.

[59] M.M. Seron, J.A. De Dona and G.C. Goodwin, Global analytical model predictive control with input constraints, IEEE 39th CDC, Australia, 2000.

[60] J.S. Shamma and M. Athans, Gain scheduling: potential hazards and possible remedies, IEEE Control Systems Magazine, Vol.12, pp.101-107, 1992.

[61] J.J.E. Slotine and W. Li, Applied Nonlinear Control, Prentice Hall, 1991.

[62] R. Sommer, Control design for multivariable nonlinear time varying systems, International Journal of Control, Vol.31, pp.883-891, 1980.

[63] R.F. Stengel, Stochastic Optimal Control: Theory and Application, Wiley, 1986.

[64] R. Su, On the linear equivalents of nonlinear systems, Systems & Control Letters, Vol.2, pp.48-52, 1982.

[65] A. Teel, A nonlinear control viewpoint on anti-windup and related problems, IFAC Nonlinear Control Symposium, Enschede, The Netherlands, 1998.

[66] A. Teel, Anti-windup for exponentially unstable linear systems, International Journal of Robust and Nonlinear Control, Vol.9, No.10, pp.701-716, 1999.

[67] M. Vidyasagar, Nonlinear Systems Analysis, Prentice Hall, 1993.

[68] G.F. Wredenhagen and P.R. Belanger, Piecewise-linear LQ control for systems with input constraints, Automatica, 30(3), pp.403-416, 1994.