Modifier-Adaptation Methodology for Real-Time Optimization

A. Marchetti, B. Chachuat,† and D. Bonvin*

Laboratoire d'Automatique, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland

The ability of a model-based real-time optimization (RTO) scheme to converge to the plant optimum relies on the ability of the underlying process model to predict the plant's necessary conditions of optimality (NCO). These include the values and gradients of the active constraints, as well as the gradient of the cost function. Hence, in the presence of plant-model mismatch or unmeasured disturbances, one could use (estimates of) the plant NCO to track the plant optimum. This paper shows how to formulate a modified optimization problem that incorporates such information. The so-called modifiers, which express the difference between the measured or estimated plant NCO and those predicted by the model, are added to the constraints and the cost function of the modified optimization problem and are adapted iteratively. Local convergence and model-adequacy issues are analyzed. The modifier-adaptation scheme is tested experimentally via the RTO of a three-tank system.

1. Introduction

The chemical industry is characterized by a large number of continuously operating plants, for which optimal operation is of economic importance. However, optimal operation is particularly difficult to achieve when the plant models are inaccurate or in the presence of process disturbances. In response to these difficulties, real-time optimization (RTO) has become the rule for a large number of chemical and petrochemical plants.1

In highly automated plants, optimal operation is typically addressed by a decision hierarchy involving several levels that include plant scheduling, RTO, and process control.2-4 At the RTO level, medium-term decisions are made on a time scale of hours to a few days by considering economical objectives explicitly. This step typically relies on an optimizer that determines the optimal operating point under slowly changing conditions such as catalyst decay or changes in raw material quality. The optimal operating point is characterized by setpoints that are passed to lower-level controllers. Model-based RTO typically involves nonlinear first-principles models that describe the steady-state behavior of the plant. Because these models are often relatively large, model-based optimization may be computationally intensive. RTO emerged in the chemical process industry in the late 1970s, at the time when online computer control of chemical plants became available. There has been extensive research in this area since then, and numerous industrial applications have been reported.3

Because accurate models are rarely available in industrial applications, RTO typically proceeds using an iterative two-step approach,3,5,6 namely, an identification step followed by an optimization step. The idea is to repeatedly estimate selected uncertain model parameters and use the updated model to generate new inputs via optimization. This way, the model is expected to yield a better description of the plant at its current operating point. The classical two-step approach works well provided that (i) there is little structural plant-model mismatch,7 and (ii) the changing operating conditions provide sufficient excitation for estimating the uncertain model parameters. Unfortunately, such conditions are rarely met in practice.

Regarding (i), in the presence of structural plant-model mismatch, it is typically not possible to satisfy the necessary conditions of optimality (NCO) of the plant simply by estimating the model parameters that predict the plant outputs well. Some information regarding plant gradients must be incorporated into the RTO scheme. In the so-called "integrated system optimization and parameter estimation (ISOPE) method",8,9 a gradient-modification term is added to the cost function of the optimization problem to force the iterates to converge to a point that satisfies the plant NCO. Regarding (ii), the use of multiple datasets has been suggested to increase the number of identifiable parameters.10

In response to both (i) and (ii), methods that do not rely on model-parameter update have gained in popularity recently. These methods encompass the two classes of model-free and fixed-model methods that are discussed next.

Model-free methods do not use a process model online to implement the optimization. Two approaches can be distinguished. In the first one, successive operating points are determined by "mimicking" iterative numerical optimization algorithms. For example, evolutionary-like schemes have been proposed that implement the Nelder-Mead simplex algorithm to approach the optimum.11 To achieve faster convergence rates, techniques based on gradient-based algorithms have also been developed.13 The second approach to model-free methods consists in recasting the nonlinear programming (NLP) problem into that of choosing outputs whose optimal values are approximately invariant to uncertainty. The idea is then to use measurements to bring these outputs to their invariant values, thereby rejecting the uncertainty. In other words, a feedback law is sought that implicitly solves the optimization problem, as it is done in self-optimizing control14 and NCO tracking.15

Fixed-model methods utilize both a nominal process model and appropriate measurements to guide the iterative scheme toward the optimum. The process model is embedded within an NLP problem that is solved repeatedly. However, instead of refining the parameters of a first-principles model from one RTO iteration to the next, the measurements are used to update the cost and constraint functions in the optimization problem so as to yield a better approximation of the plant cost and constraints at the current point. In a recent work,16 a modified optimization problem has been formulated wherein both the constraint values and the cost and constraint gradients are corrected. This way, an operating point that satisfies the plant NCO is obtained upon convergence. Note that one can choose to not include the gradient-modification terms, in which case the approach reduces to a simple constraint-adaptation scheme.17,18 The term "modifier adaptation" has recently been coined for those fixed-model methods that adapt correction terms (or modifiers) based on the observed difference between actual and predicted functions or gradients.19 Modifier-adaptation methods exhibit the nice feature that they use a model parametrization and an update criterion that are tailored to the tracking of the NCO.

* To whom correspondence should be addressed. E-mail address: dominique.bonvin@epfl.ch.

† Present address: Department of Chemical Engineering, McMaster University, Hamilton, Ontario L8S 4L8, Canada.

Ind. Eng. Chem. Res. 2009, 48, 6022-6033

10.1021/ie801352x CCC: $40.75 © 2009 American Chemical Society. Published on Web 03/20/2009.

This paper formalizes the idea of using plant measurements to adapt the optimization problem in response to plant-model mismatch, following the paradigm of modifier adaptation. It is organized as follows. The formulation of the RTO problem and a number of preliminary results are given in section 2. Section 3 describes the modifier-adaptation approach. Model-adequacy requirements and local convergence conditions are discussed. Two filtering strategies are proposed to achieve stability and reduce sensitivity to measurement noise. Links between modifier adaptation and related work in the literature are highlighted. The modifier-adaptation method is tested in an experimental three-tank setup in section 4, and conclusions are presented in section 5.

2. Preliminaries

2.1. Problem Formulation. The objective of RTO is the minimization or maximization of some steady-state operating performance (e.g., minimization of operating cost or maximization of production rate), while satisfying a number of constraints (e.g., bounds on process variables or product specifications). In the context of RTO, because it is important to distinguish between the plant and the model, we will use the notation (·)p for the variables that are associated with the plant. The steady-state optimization problem for the plant can be formulated as follows:


where u ∈ Rnu denotes the decision (or input) variables, and yp ∈ Rny are the measured (or output) variables; φ: Rnu × Rny → R is the scalar cost function to be minimized; gi: Rnu × Rny → R (with i = 1, ..., ng) is a set of constraint functions; and uL and uU are the lower and upper bounds on the decision variables, respectively. These latter bounds are considered separately from the constraints g, because they are not affected by uncertainty and, therefore, do not require adaptation.

An important assumption throughout this paper is that the cost function φ as well as the constraint functions g are known and can be evaluated directly from the measurements. While these assumptions are often met in practice, there also are many RTO applications in which not all output variables in the cost function and inequality constraints can be measured; that is, the RTO scheme is open-loop with respect to unmeasured outputs. Note that the lack of measurements requires the introduction of some conservatism, e.g., in the form of constraint backoffs.20 This will be addressed in future work.

In any practical application, the plant mapping yp(u) is not known accurately. However, an approximate model is often available in the form

where f ∈ Rnx is a set of process model equations, including mass and energy balances and thermodynamic relationships; x ∈ Rnx are the model states; y ∈ Rny are the output variables predicted by the model; and θ ∈ Rnθ is a set of adjustable model parameters. For simplicity, we shall assume that the outputs y can be expressed explicitly as functions of u and θ, y(u, θ). Using such a model, the solution to the original problem (1) can be approached by solving the following NLP problem:


with Φ(u, θ) := φ(u, y(u, θ)) and G(u, θ) := g(u, y(u, θ)). Assuming that the feasible set U := {u ∈ [uL, uU]: G(u, θ) ≤ 0} is nonempty and the cost function Φ is continuous for given θ, a minimizing solution u* of Problem (2) is guaranteed to exist; the set of active constraints at u* is denoted by A(u*) := {i: Gi(u*, θ) = 0, i = 1, ..., ng}.
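To make problem (2) concrete, the sketch below solves a small instance with SciPy's SLSQP solver. The quadratic cost and constraint are illustrative only (they borrow the model-side functions and parameter values of Example 1 and Table 1 later in the paper); any NLP solver could be substituted.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative model: cost Phi(u, theta) and a single constraint G(u, theta) <= 0
theta = np.array([2.0, 1.5, -0.5, 0.5])  # model parameter values (Table 1)

def Phi(u, th):
    return (u[0] - th[0])**2 + 4.0 * (u[1] - 2.5)**2

def G(u, th):
    return (u[0] - th[1])**2 - 4.0 * (u[1] - th[3]) * th[2] - 2.0

u_L, u_U = np.zeros(2), np.full(2, 10.0)

# SLSQP expects inequality constraints as c(u) >= 0, hence the sign flip on G.
res = minimize(Phi, x0=np.array([1.0, 1.0]), args=(theta,), method="SLSQP",
               bounds=list(zip(u_L, u_U)),
               constraints=[{"type": "ineq", "fun": lambda u: -G(u, theta)}])
u_star = res.x  # model optimum u*; for these numbers the constraint is active
print(u_star, G(u_star, theta))
```

The solution is the model optimum u* (point M in the terminology of Example 1); the plant optimum generally differs, which is the motivation for the adaptation scheme of section 3.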

2.2. Necessary Conditions of Optimality. Assuming that the required constraint qualification holds at the solution point u* and the functions Φ and G are differentiable at u*, there exist unique Lagrange multiplier vectors μ ∈ Rng and ζU, ζL ∈ Rnu such that the following first-order Karush-Kuhn-Tucker (KKT) conditions hold at u*:21

with L being the Lagrangian function, L := Φ + μT G + ζUT (u − uU) + ζLT (uL − u). The NCO (3) are called primal feasibility conditions, those given in (4) are called complementarity slackness conditions, and those given in (5) and (6) are called dual feasibility conditions.

2.3. Model Adequacy for Two-Step RTO Schemes. The problem of model selection in the classical two-step approach of RTO has been discussed by Forbes and Marlin.22 A process model is called adequate if values for its parameters, say θ̄, can be found such that a fixed point of the RTO scheme coincides with the plant optimum up*. Note that the parameter values θ̄ may not represent the true values of the model parameters, especially in the case of structural plant-model mismatch, for which the concept of "true values" has no meaning. Forbes and Marlin22 proposed the following model adequacy criterion.

Criterion 1 (Model Adequacy for Two-Step RTO Schemes). Let up* be the unique plant optimum and let there exist (at least) one set of values θ̄ for the model parameters such that

min_u  Φp(u) := φ(u, yp)    (1)

s.t.  Gp(u) := g(u, yp) ≤ 0

      uL ≤ u ≤ uU

f(u, x, θ) = 0,  y = h(u, x, θ)

u* = arg min_u  Φ(u, θ)    (2)

s.t.  G(u, θ) ≤ 0

      uL ≤ u ≤ uU

G ≤ 0,  uL ≤ u ≤ uU    (3)

μT G = 0,  ζUT (u − uU) = 0,  ζLT (uL − u) = 0    (4)

μ ≥ 0,  ζU ≥ 0,  ζL ≥ 0    (5)

∂L/∂u = ∂Φ/∂u + μT ∂G/∂u + ζUT − ζLT = 0    (6)

Gi(up*, θ̄) = 0,  i ∈ A(up*)    (7)


where ∇rΦ and ∇r²Φ are the reduced gradient and reduced Hessian of the cost function, respectively,23 and Jid stands for the performance index in the (unconstrained) parameter estimation problem. Then, the process model is adequate.

Conditions (7)-(10) represent sufficient conditions for up* to be a strict local minimum of (2) with the parameters chosen as θ̄, whereas conditions (11) and (12) are sufficient for θ̄ to be a strict local minimum of the estimation problem at up*. Hence, if all these conditions hold, the plant optimum up* corresponds to a (local) model optimum for θ = θ̄, and conditions (7)-(12) are sufficient for model adequacy. However, these conditions are not necessary for model adequacy. Indeed, it may be that up* corresponds to the model optimum (for θ = θ̄), i.e., the model is adequate, but conditions (7)-(12) are not met. One such situation is when the reduced Hessian is only positive semidefinite, in which case (10) does not hold.

Noting that the equalities (11) alone yield nθ conditions, it becomes clear that the full set of adequacy conditions (7)-(12) is overspecified. In other words, these model adequacy conditions are often not satisfied in the presence of plant-model mismatch.
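As a small numerical illustration of the second-order parts of these conditions, positive definiteness of a (reduced) Hessian can be checked from its eigenvalues. The quadratic cost below is purely illustrative and stands in for the reduced cost or the estimation index:

```python
import numpy as np

# Illustrative quadratic cost Phi(u) = 0.5 u^T H u + c^T u. In an adequacy
# check, H would play the role of the reduced Hessian at the plant optimum.
H = np.array([[2.0, 0.3],
              [0.3, 8.0]])
c = np.array([-1.0, -2.0])

u_star = np.linalg.solve(H, -c)   # stationary point: gradient H u + c = 0
grad = H @ u_star + c             # analogue of condition (9): should vanish
eigvals = np.linalg.eigvalsh(H)   # analogue of condition (10): all > 0

print(grad, eigvals)
```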

3. Modifier Adaptation

Modifier adaptation is presented and analyzed in this section. The idea behind modifier adaptation is introduced in subsection 3.1. The modifier-adaptation algorithm is described in subsection 3.2, followed by discussions of KKT matching, local convergence, and model adequacy in subsections 3.3, 3.4, and 3.5, respectively. Alternative modifier-adaptation schemes are considered in subsection 3.6, and links to previous work are indicated in subsection 3.7. A discussion of the estimation of experimental gradients closes the section.

3.1. Idea of Modifier Adaptation. In the presence of uncertainty, the constraint values and the cost and constraint gradients predicted by the model do not match those of the plant. Therefore, open-loop implementation of the model-based solution results in suboptimal and possibly infeasible operation.

The idea behind modifier adaptation is to use measurements for correcting the cost and constraint predictions between successive RTO iterations in such a way that a KKT point for the model coincides with the plant optimum.16 Unlike two-step RTO schemes, the model parameters θ are not updated. Instead, a linear modification of the cost and constraint functions is implemented, which relies on so-called modifiers representing the difference between the plant and predicted values of some KKT-related quantities.

At a given operating point ū, the modified constraint functions read as

where the modifiers εG ∈ Rng and λG ∈ Rnu×ng are given by

A graphical interpretation of the modifiers in the jth input direction for the constraint Gi is depicted in Figure 1. The modifier εGi corresponds to the gap between the plant and predicted constraint values at ū, whereas λGi represents the difference between the slopes of Gp,i and Gi at ū.

Likewise, the cost function is corrected as

where the modifier λΦ ∈ Rnu is given by

Observe that the cost modification comprises only a linear term in u, as the additional constant term (Φp(ū) − Φ(ū, θ) − λΦT ū) in the cost function would not affect the solution point.

The modifiers and KKT-related quantities in (14), (15), and (17) can be denoted collectively as nK-dimensional vectors, with nK = ng + nu(ng + 1). This way, (14), (15), and (17) can be rewritten as

Implementation of these modifications requires the cost and constraint gradients of the plant, ∂Φp/∂u(ū) and ∂Gp/∂u(ū), to be available at ū. These gradients can be inferred from the measured plant outputs yp(ū) and the estimated output gradients ∂yp/∂u(ū):

A discussion on how to estimate the output gradients for the plant is deferred to subsection 3.8.

Gi(up*, θ̄) < 0,  i ∉ A(up*)    (8)

∇rΦ(up*, θ̄) = 0    (9)

∇r²Φ(up*, θ̄) > 0 (positive definite)    (10)

∂Jid/∂θ (yp, up*, θ̄) = 0    (11)

∂²Jid/∂θ² (yp, up*, θ̄) > 0 (positive definite)    (12)

Figure 1. Linear modification of the constraint function Gi so that the value and gradient of the modified function Gm,i match those of Gp,i at ū.

Gm(u, θ) := G(u, θ) + εG + λGT (u − ū)    (13)

εG := Gp(ū) − G(ū, θ)    (14)

λGT := ∂Gp/∂u (ū) − ∂G/∂u (ū, θ)    (15)

Φm(u, θ) := Φ(u, θ) + λΦT u    (16)

λΦT := ∂Φp/∂u (ū) − ∂Φ/∂u (ū, θ)    (17)

ΛT := (εG1, ..., εGng, λG1T, ..., λGngT, λΦT)

CT := (G1, ..., Gng, ∂G1/∂u, ..., ∂Gng/∂u, ∂Φ/∂u)

Λ(ū) = Cp(ū) − C(ū, θ)    (18)

∂Φp/∂u (ū) = ∂φ/∂u (ū, yp(ū)) + ∂φ/∂y (ū, yp(ū)) · ∂yp/∂u (ū)

∂Gp/∂u (ū) = ∂g/∂u (ū, yp(ū)) + ∂g/∂y (ū, yp(ū)) · ∂yp/∂u (ū)


3.2. Modifier-Adaptation Algorithm. The proposed modifier-adaptation algorithm is depicted in Figure 2. It consists of applying the foregoing modification procedure to determine the new operating point. In the kth iteration, the next point uk+1 is obtained as in eqs (19) and (20). Here, uk is the current operating point; εGk and λGk are the constraint-value and constraint-gradient modifiers in the current iteration; and λΦk is the cost-gradient modifier in the current iteration. These modifiers are adapted at each iteration, using (estimates of) the constraint values and cost and constraint gradients of the plant at uk.

The simplest adaptation strategy is to implement the full modification given by (18) at each iteration, cf. eq (21). However, this simple strategy may lead to excessive correction when operating far away from the optimum, and it may also make modifier adaptation very sensitive to measurement noise. A better strategy consists of filtering the modifiers, e.g., with the first-order exponential filter of eq (22),

where K ∈ RnK×nK is a gain matrix. A possible choice for K is the block-diagonal matrix defined below, where the gain entries b1, ..., bng, q1, ..., qng, d are taken in (0, 1]. A block-diagonal gain matrix has the advantage that it naturally decouples the modifier-adaptation laws, as in eqs (23)-(25).

Of course, more-general choices of the gain matrix are possible, but they are typically more difficult to make. The condition for local convergence introduced in subsection 3.4 can be used to check a posteriori that a proposed gain matrix is indeed appropriate.

It may happen that the constraint and cost gradients cannot be reliably estimated, because of particular process characteristics or a high noise level (see subsection 3.8). In this case, one may decide not to adapt the gradient modifiers, e.g., by setting q1 = ··· = qng = d = 0; the modifier-adaptation algorithm then reduces to a simple constraint-adaptation scheme.18

The computational complexity of the modifier-adaptation algorithm is dictated by the solution of the NLP subproblems. That is, modifier adaptation exhibits a complexity similar to that of the conventional two-step approach (it is actually less computationally demanding, in that the solution of a parameter estimation problem is no longer needed at each iteration).

3.3. KKT Matching. Perhaps the most attractive property of modifier-adaptation schemes is that, upon convergence (i.e., under noise-free conditions), the resulting KKT point u∞ for the modified model-based optimization problem (20) is also a KKT point for the plant optimization problem (1). This is formalized in the following theorem.

Theorem 1 (KKT Matching). Let the gain matrix K be nonsingular and assume that the modifier-adaptation algorithm described by (19), (20), and (22) converges, with u∞ := lim k→∞ uk being a KKT point for the modified problem (20). Then, u∞ is also a KKT point for the plant optimization problem (1).

Proof. Because K is nonsingular, letting k → ∞ in (22) gives (26); that is, (27)-(29) hold. It is then readily seen that, upon convergence, the KKT elements Cm for the modified problem (20) match the corresponding elements Cp for the plant, as expressed in (30) or, considering the individual terms, in (31)-(33). Because, by assumption, u∞ is a KKT point for the modified problem (20), it satisfies eqs (3)-(6) with the associated Lagrange

Figure 2. Modifier-adaptation algorithm for real-time optimization (RTO).

uk+1 := uk+1*    (19)

uk+1* = arg min_u  Φm(u, θ) := Φ(u, θ) + λΦkT u    (20)

s.t.  Gm(u, θ) := G(u, θ) + εGk + λGkT (u − uk) ≤ 0

      uL ≤ u ≤ uU

Λk+1 = Cp(uk+1) − C(uk+1, θ)    (21)

Λk+1 = (I − K) Λk + K [Cp(uk+1) − C(uk+1, θ)]    (22)

K := diag(b1, ..., bng, q1 Inu, ..., qng Inu, d Inu)

εGi,k+1 = (1 − bi) εGi,k + bi [Gp,i(uk+1) − Gi(uk+1, θ)],  i = 1, ..., ng    (23)

λGi,k+1T = (1 − qi) λGi,kT + qi [∂Gp,i/∂u (uk+1) − ∂Gi/∂u (uk+1, θ)],  i = 1, ..., ng    (24)

λΦ,k+1T = (1 − d) λΦ,kT + d [∂Φp/∂u (uk+1) − ∂Φ/∂u (uk+1, θ)]    (25)

Λ∞ = Cp(u∞) − C(u∞, θ)    (26)

εGi,∞ = Gp,i(u∞) − Gi(u∞, θ),  i = 1, ..., ng    (27)

λGi,∞T = ∂Gp,i/∂u (u∞) − ∂Gi/∂u (u∞, θ),  i = 1, ..., ng    (28)

λΦ,∞T = ∂Φp/∂u (u∞) − ∂Φ/∂u (u∞, θ)    (29)

Cm(u∞, θ) := C(u∞, θ) + Λ∞ = Cp(u∞)    (30)

Gm(u∞, θ) := G(u∞, θ) + εG,∞ = Gp(u∞)    (31)

∂Gm/∂u (u∞, θ) := ∂G/∂u (u∞, θ) + λG,∞T = ∂Gp/∂u (u∞)    (32)

∂Φm/∂u (u∞, θ) := ∂Φ/∂u (u∞, θ) + λΦ,∞T = ∂Φp/∂u (u∞)    (33)

Table 1. Values of the Uncertain Parameters θ in Problem (34) Corresponding to the Plant and the Model

          θ1     θ2     θ3     θ4
plant     3.5    2.5   -0.4    1.0
model     2.0    1.5   -0.5    0.5


multipliers μ∞, ζU,∞, and ζL,∞. Hence, from (31)-(33), u∞ is also a KKT point for the original problem (1), with the same set of Lagrange multipliers. ∎

A direct consequence of Theorem 1 is that, at u∞, the active constraints and the corresponding Lagrange multipliers are the same for the modified problem (20) and the plant problem (1). Furthermore, note that the equalities (31)-(33) represent more than what is strictly needed for KKT matching: indeed, because the Lagrange multipliers μ∞ that correspond to inactive constraints are zero, one simply needs to match the values and gradients of the active constraints. However, adaptation of the inactive constraints and their gradients is required to detect the active set, which is not known a priori.

Also note that no guarantee can be given that a global optimizer for the plant has been determined, even if the successive operating points uk+1 correspond to global solutions of the modified problem (20). Indeed, the converged operating point u∞ may correspond to any stationary point of the plant (e.g., also to a local minimum). A special case where modifier adaptation guarantees a global optimum for the plant is when the optimization problem (1) is convex, although this condition can never be verified in practice.

The following numerical example illustrates the convergence of modifier adaptation to a KKT point in the convex case.

Example 1. Consider the convex optimization problem (34), which comprises two decision variables (u = [u1 u2]T), four model parameters (θ = [θ1 θ2 θ3 θ4]T), and a single inequality constraint G. The parameter values θ for the plant (simulated reality) and the model are reported in Table 1.

Figure 3 illustrates the convergence of several implementations of the modifier-adaptation scheme. Note that, because of parametric errors, the plant optimum (point P) and the model optimum (point M) are quite different. Starting from the model optimum, constraint adaptation alone is first applied, i.e., with d = 0 and q = 0; these a iterations with b = 0.8 converge to the feasible, yet suboptimal, operating point C. Enabling the correction of the cost-function gradient (b = d = 0.8; q = 0) gives the b iterations, while enabling the correction of the constraint gradient (b = q = 0.8; d = 0) yields the c iterations. These two intermediate cases show how the plant optimum can be approached, relative to case a, by correcting the different gradients involved in the KKT conditions. Finally, the full modifier-adaptation algorithm applied with b = d = q = 0.8 produces the d iterations, which converge to the plant optimum.
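Assuming problem (34) and the Table 1 values read as extracted, the case-d iterations (b = q = d = 0.8) can be reproduced with a short script; the inner problem (20) is solved with SciPy's SLSQP, and analytic gradients stand in for the simulated plant measurements.

```python
import numpy as np
from scipy.optimize import minimize

# Problem (34): Phi = (u1 - th1)^2 + 4 (u2 - 2.5)^2,
#               G   = (u1 - th2)^2 - 4 (u2 - th4) th3 - 2 <= 0,  u >= 0.
th_plant = np.array([3.5, 2.5, -0.4, 1.0])  # simulated reality (Table 1)
th_model = np.array([2.0, 1.5, -0.5, 0.5])  # model

def Phi(u, th):  return (u[0] - th[0])**2 + 4.0 * (u[1] - 2.5)**2
def G(u, th):    return (u[0] - th[1])**2 - 4.0 * (u[1] - th[3]) * th[2] - 2.0
def dPhi(u, th): return np.array([2.0 * (u[0] - th[0]), 8.0 * (u[1] - 2.5)])
def dG(u, th):   return np.array([2.0 * (u[0] - th[1]), -4.0 * th[2]])

gain = 0.8                        # b = q = d = 0.8
eps, lamG, lamP = 0.0, np.zeros(2), np.zeros(2)
u = np.array([1.0, 1.0])          # starting point

for k in range(30):
    # Modified problem (20) with the current modifiers
    cost = lambda v: Phi(v, th_model) + lamP @ v
    con = {"type": "ineq",
           "fun": lambda v: -(G(v, th_model) + eps + lamG @ (v - u))}
    res = minimize(cost, u, method="SLSQP", bounds=[(0, 10), (0, 10)],
                   constraints=[con])
    u = res.x
    # Filtered modifier update, eqs (23)-(25)
    eps = (1 - gain) * eps + gain * (G(u, th_plant) - G(u, th_model))
    lamG = (1 - gain) * lamG + gain * (dG(u, th_plant) - dG(u, th_model))
    lamP = (1 - gain) * lamP + gain * (dPhi(u, th_plant) - dPhi(u, th_model))

print(u, G(u, th_plant))
```

For these numbers the plant-model mismatch is affine in u, so the linear corrections (13) and (16) capture it exactly and the iterates converge to the plant optimum (point P in Figure 3), with the plant constraint active there.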

For some problems, the iterations may converge by following an infeasible path (i.e., with violation of the constraints), even if the modifier-adaptation algorithm starts at a feasible point. A way of reducing the violation of a constraint is to reduce the gain coefficients in the matrix K; however, this comes at the expense of a slower convergence rate. Constraint violation can also be prevented by combining the modifier-adaptation scheme with a constraint controller.24 Although the iterations may follow an infeasible path, a straightforward consequence of Theorem 1 is that a convergent modifier-adaptation scheme yields feasible operation after a finite number of RTO iterations upon backing off the constraints in the original problem.

Theorem 1 establishes that, under mild conditions, a convergent implementation of the scheme described by (19), (20), and (22) finds a KKT point for the plant optimization problem (1). Yet, convergence is not ensured. It may happen, for instance, that the modified NLP problem (20) becomes infeasible because a modifier is too large, or that the modifier sequence exhibits undamped oscillations when some gain coefficients in the matrix K are too large. Some guidance regarding the choice of the gain matrix K is given subsequently, based on a local convergence analysis.

3.4. Local Convergence Analysis. This subsection derives the conditions necessary for the modifier-adaptation algorithm described by (19), (20), and (22) to converge. To conduct the analysis, an auxiliary constraint modifier ε̄G is introduced, which corresponds to the sum of the constant terms in the constraint modification (20):

The corresponding vector of modifiers is denoted by Λ̄T := (ε̄G1, ..., ε̄Gng, λG1T, ..., λGngT, λΦT) ∈ RnK, and it is related to Λ as

with the matrix T(u) ∈ RnK×nK given by

Moreover, the modified optimization problem (20), expressed in terms of the auxiliary modifiers Λ̄, reads as follows:

Figure 3. Modifier adaptation applied to Problem (34). Legend: thick solidline, plant constraints; thick dash-dotted line, model constraints; dotted lines,cost function contours; point P, plant optimum; point M, model optimum;and point C, optimum found using only constraint adaptation.

minug0

Φ(u, θ) :) (u1 - θ1)2 + 4(u2 - 2.5)2 (34)

G :) (u1 - θ2)2 - 4(u2 - θ4)θ3 - 2 e 0

εkG :) εk

G - λkGTuk

Λk ) T(uk)Λk

T(u) :) (1 uT

··. ··.1 uT

Inu

··.Inu

Inu

)

6026 Ind. Eng. Chem. Res., Vol. 48, No. 13, 2009


Next, it is assumed that this problem has a unique global solution point for each iterate Λ̄_k, given by u*_{k+1} = U*(Λ̄_k). This uniqueness assumption precludes multiple global solution points, but it assumes nothing regarding the existence of local optima. Uniqueness of the global optimum is required to establish convergence: if the model-based optimization problem exhibited several global solutions, the optimizer could arbitrarily pick any one of these global solution points at any iteration, thereby making convergence hopeless.

Consider the map Γ(Λ̄) that represents the difference between the plant and predicted KKT quantities C for a given set of auxiliary modifiers Λ̄, as defined in (36).

With this, the modifier-update law (22) can be rewritten in terms of the auxiliary modifiers. Noting that T is invertible, with (T(u))^{-1} = T(−u), this law can then be written in the generic two-argument form (37), where the map M is as defined below (37).

Let Λ̄_∞ be a fixed point of the algorithm M, and let u*_∞ denote the corresponding optimal inputs. The map U* is differentiable in the neighborhood of Λ̄_∞, provided that (i) regularity conditions, (ii) second-order sufficient conditions for a strict local minimum, and (iii) strict complementarity slackness conditions hold at u*_∞ for the modified optimization problem (35) (or (20)).25 Under these conditions, a first-order approximation of M around Λ̄_∞ is obtained as in (38), where δΛ̄_k := Λ̄_k − Λ̄_∞.

Clearly, a necessary condition for the modifier-adaptation algorithm to converge to Λ̄_∞ is that the gain matrix K be chosen such that the 2n_K × 2n_K matrix Υ_∞ defined in (39) has a spectral radius smaller than 1 (equivalently, ‖Υ_∞‖ < 1 for some matrix norm). Moreover, relation (38) establishes the linear rate of convergence of the modifier-adaptation scheme. This necessary condition for convergence is illustrated in the next example.

Example 2. Consider case (d) of Example 1, where the modifier-adaptation algorithm is applied to the problem with b = d = q = 0.8. The converged inputs and modifiers, u_∞ and Λ̄_∞, are reported below. At that particular point, the matrix Υ_∞ is calculated, and its eigenvalues are determined to be 0.427, −0.395, 0.2 (with multiplicity 4), and 0 (with multiplicity 4). Therefore, the spectral radius of Υ_∞ is smaller than 1, which supports the convergence shown earlier in Figure 3.

The next example shows how the necessary condition for convergence can be used to guide the choice of the gain matrix K.

Example 3. Consider the optimization problem (40), whose unique solution is u*_p = 1. Let the model of the cost function be Φ(u, θ) := θu², where θ is a model parameter (θ = 1/4), and consider the modified optimization problem (41), in which the cost gradient modifier λ_Φ is adapted according to (25), as written in (42).

An unconstrained solution of Problem (41) is obtained as u*_{k+1} = −λ_Φ^k/(2θ), provided that −5 ≤ −λ_Φ^k/(2θ) ≤ 5. Using this solution in (42) gives (43), for which a fixed point is λ_Φ^∞ = −2θ = −1/2. Because −λ_Φ^k/(2θ) ∈ [−5, 5] in a neighborhood of λ_Φ^∞, the assumption of an unconstrained solution is verified there.

Noting that Λ = Λ̄ = λ_Φ and ∂M/∂Λ̄_{k−1}(Λ̄_∞) = 0, the matrix Υ_∞ defined in (39) reduces to the 2 × 2 matrix given below, whose eigenvalues are 1 − d/θ and 0. Therefore, a necessary condition for the modifier-adaptation algorithm to converge is −1 < 1 − d/θ < 1, i.e., 0 < d < 1/2.
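This condition is easy to verify by iterating the algorithm of Example 3 directly. The short sketch below (our code, not the paper's) applies the unconstrained solution of (41), clipped to the bounds [−5, 5], followed by the update (42); with d = 0.3 the iterates settle at the plant optimum u = 1, while with d = 0.8 (which violates 0 < d < 1/2) they oscillate between the bounds:

```python
import numpy as np

theta = 0.25   # model parameter of Example 3

def run(d, n=60):
    """Iterate the solution of (41) and the modifier update (42) n times."""
    lam, u = 0.0, 0.0
    for _ in range(n):
        u = float(np.clip(-lam/(2.0*theta), -5.0, 5.0))      # solution of (41)
        lam = (1.0-d)*lam + d*(2.0*(u-1.0) - 2.0*theta*u)    # update (42)
    return u, lam

u_ok, lam_ok = run(0.3)   # 0 < d < 1/2: converges to u = 1, lam = -0.5
u_bad, _     = run(0.8)   # d > 1/2: iterates oscillate between the bounds
print(u_ok, lam_ok, u_bad)
```

For d = 0.3 the recursion (43) contracts with factor 1 − d/θ = −0.2, so convergence is fast; for d = 0.8 the factor is −2.2 and the inputs bounce between −5 and 5.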

3.5. Model Adequacy for Modifier-Adaptation Schemes. The model-adequacy requirements for two-step RTO approaches have been reviewed in subsection 2.3 and were shown to be very restrictive. In this subsection, model adequacy is investigated in the context of modifier adaptation.

A process model is called adequate if values for the modifiers, say Λ̄, can be found such that a fixed point of the modifier-adaptation scheme coincides with the plant optimum u*_p. The situation is much simpler than that for two-step approaches, for two main reasons. First, by choosing Λ̄ = Λ̄_∞, given by (30), the analogues of Conditions (7)-(9) in Criterion 1 are automatically satisfied. Second, because the adaptation step (22) does not require the solution of an optimization problem, no conditions such as those described by (11) and (12) are needed. Therefore, the only remaining (sufficient) condition for model adequacy in modifier adaptation is the analogue of (10). In other words, modifier values such that u*_p is the solution of the modified optimization problem (20) are guaranteed to exist, provided that the reduced Hessian of the cost function Φ is positive definite at u*_p. Interestingly, this positive-definiteness requirement is independent of the modifier values themselves. Note that model adequacy is dictated only by the second-order derivatives, because any mismatch in the cost and constraint functions is corrected, up to the first-order derivatives, by the modifier-adaptation scheme. The criterion for model adequacy can be formalized as follows.

    u*_{k+1} = arg min_u  Φ_m(u, θ) := Φ(u, θ) + (λ_Φ^k)^T u    (35)
               s.t.  G_m(u, θ) := G(u, θ) + ε̄_G^k + (λ_G^k)^T u ≤ 0
                     u^L ≤ u ≤ u^U

    Γ(Λ̄) := C_p(u) − C(u, θ),   u = U*(Λ̄)    (36)

    T(u_{k+1}) Λ̄_{k+1} = (I − K) T(u_k) Λ̄_k + K Γ(Λ̄_k)

    Λ̄_{k+1} = M(Λ̄_k, Λ̄_{k−1})    (37)

    M(Λ̄_k, Λ̄_{k−1}) := T(−U*(Λ̄_k)) (I − K) T(U*(Λ̄_{k−1})) Λ̄_k + T(−U*(Λ̄_k)) K Γ(Λ̄_k)

    δΛ̄_{k+1} = ∂M/∂Λ̄_k(Λ̄_∞) δΛ̄_k + ∂M/∂Λ̄_{k−1}(Λ̄_∞) δΛ̄_{k−1}    (38)

    Υ_∞ := [ ∂M/∂Λ̄_k(Λ̄_∞)   ∂M/∂Λ̄_{k−1}(Λ̄_∞) ]    (39)
           [ I_{n_K}            0_{n_K × n_K}     ]

    u_∞ = [2.8726  2.1632]^T,   Λ̄_∞ = [3.4  −2  −0.4  −3  0]^T

    min_{−5 ≤ u ≤ 5}  Φ_p(u) := (u − 1)^2    (40)

    min_{−5 ≤ u ≤ 5}  θu^2 + λ_Φ^k u    (41)

    λ_Φ^{k+1} = (1 − d) λ_Φ^k + d [2(u*_{k+1} − 1) − 2θ u*_{k+1}]    (42)

    λ_Φ^{k+1} = (1 − d/θ) λ_Φ^k − 2d    (43)

    Υ_∞ = [ 1 − d/θ   0 ]
          [ 1         0 ]

Criterion 2 (Model Adequacy for Modifier-Adaptation Schemes). Let u*_p be the unique plant optimum, which is assumed to be a regular point for the constraints. If the process model is such that the reduced Hessian of the cost function Φ is positive definite at u*_p, i.e., Condition (44) holds, then the process model is adequate for use in the modifier-adaptation RTO scheme described by (19), (20), and (22).

The fact that Criterion 2 is much easier to meet than Criterion 1 represents a clear advantage of modifier-adaptation schemes over two-step approaches. In modifier adaptation, the key issue is not model adequacy, but rather the availability of the KKT-related quantities C_p, which include plant-gradient information.
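Criterion 2 can be checked numerically: project the model cost Hessian onto the null space of the active-constraint gradients and test positive definiteness. The helper below is our sketch (names and example matrices are illustrative, not from the paper):

```python
import numpy as np
from scipy.linalg import null_space

def adequate(H, A=None, tol=1e-9):
    """Check positive definiteness of the reduced Hessian Z^T H Z,
    where Z spans the null space of the active-constraint Jacobian A."""
    H = np.atleast_2d(H)
    Z = np.eye(H.shape[0]) if A is None or len(A) == 0 else null_space(np.atleast_2d(A))
    if Z.shape[1] == 0:           # fully constrained: nothing left to check
        return True
    Hr = Z.T @ H @ Z
    return bool(np.all(np.linalg.eigvalsh(Hr) > tol))

# Cost of Problem (34): Hessian diag(2, 8) -> adequate, even with an active constraint.
print(adequate(np.diag([2.0, 8.0]), A=[[1.0, -0.8]]))   # True
# Linear cost, as in Example 4: zero Hessian -> Criterion 2 fails.
print(adequate(np.zeros((1, 1))))                        # False
```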

It turns out that model adequacy is necessary (i.e., a prerequisite) for convergence to a KKT point of the plant. Indeed, if the model is inadequate, then, by definition, no value Λ̄ can be found such that u*_p corresponds to an optimum of the modified optimization problem (20), thereby precluding convergence of the modifier-adaptation scheme to u*_p. Note that a direct consequence of Condition (44) is that convergence to an unconstrained plant optimum cannot be achieved when the cost function Φ is linear, as illustrated in the following example.

Example 4. Consider the optimization problem (40) given in Example 3. Let the model of the cost function now be Φ(u) := 1.5u, and consider the modified optimization problem (45), where the cost gradient modifier λ_Φ^k is adapted according to (25), with d = 0.5. Because Problem (45) is linear, model-adequacy Criterion 2 cannot be satisfied. Also, because of this linearity, the solution of (45) is either u*_{k+1} = −5 or u*_{k+1} = 5, depending on whether 1.5 + λ_Φ^k is positive or negative, respectively (see Figure 4). In other words, the modifier-adaptation algorithm fails to converge.

3.6. Alternative Schemes. This paper argues in favor of the formulation presented in subsection 3.1, where the filters are placed on the modifiers Λ. This approach gives the ability to filter each modifier individually; i.e., it allows direct control of the constraints and helps to prevent constraint violation. It also permits the combination of modifier adaptation with a constraint controller.24

However, several alternative schemes are possible; they are briefly described in this subsection. These variants differ from the modifier-adaptation algorithm described by (19), (20), and (22) either in the implementation of filtering or in the way that the modification itself is made.

3.6.1. Alternative Filtering. Instead of putting the filters on the modifiers Λ as in (22), one can filter the inputs u. In this variant, the modifiers Λ_k are updated according to (18), as written in (46), while the next operating point is calculated by the first-order exponential filter (47), where K is an (n_u × n_u) gain matrix. Overall, this algorithm is given by (20), (46), and (47).

Conditions under which this variant converges to a KKT point for the plant are the same as those stated in Theorem 1. This is easily seen by noting that (18) imposes C_m(u*_∞, θ) = C_p(u*_∞).

A local convergence analysis can also be performed in the neighborhood of a converged operating point u_∞. Under the assumptions that the solution u* of Problem (20) is unique and differentiable with respect to the modifiers Λ at Λ_∞,25 it can be shown that a necessary condition for this modifier-adaptation variant to converge is that the spectral radius of the matrix I − K[I − (∂U*/∂Λ)(Λ_∞)(∂Λ/∂u)(u_∞)] be smaller than 1. This condition provides guidelines for choosing the elements of the gain matrix K in (47).
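Applied to the scalar problem of Example 3, this variant is easy to simulate. There, ∂U*/∂Λ = −1/(2θ) and ∂Λ/∂u = 2(1 − θ), so the condition above reads |1 − 4K| < 1, i.e., 0 < K < 1/2. The sketch below (our illustrative gain K = 0.2) converges accordingly:

```python
import numpy as np

theta, K = 0.25, 0.2   # model parameter; input-filter gain (illustrative)
u = 0.0                # start from the model optimum of (41)

for _ in range(40):
    lam = 2.0*(u - 1.0) - 2.0*theta*u                      # unfiltered modifier, cf. (46)
    u_star = float(np.clip(-lam/(2.0*theta), -5.0, 5.0))   # solution of (41)
    u = (1.0 - K)*u + K*u_star                             # input filter (47)

print(u)   # -> 1.0, the plant optimum
```

Here the closed-loop input map is u_{k+1} = (1 − 4K)u_k + 4K, so with K = 0.2 the error contracts by a factor 0.2 per iteration.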

Criterion 2 for model adequacy still holds for this modifier-adaptation variant.

3.6.2. Alternative Modification. This second variant of the modifier-adaptation algorithm consists of making a linear modification of the process model, rather than of the cost and constraint functions in the optimization problem. At a given operating point ū, the process output is modified as in (48), where ε^y ∈ R^{n_y} and λ^y ∈ R^{n_u × n_y} are the model modifiers. In this variant, the operating point is updated from the repeated solution of the optimization problem (49).

Figure 4. Illustration of the lack of convergence in Example 4.

    ∇_r² Φ(u*_p, θ) > 0  (positive definite)    (44)

    min_{−5 ≤ u ≤ 5}  1.5u + λ_Φ^k u    (45)

    Λ_k = C_p(u_k) − C(u_k, θ)    (46)

    u_{k+1} = (I − K) u_k + K u*_{k+1}    (47)

    y_m(u, θ) := y(u, θ) + ε^y + (λ^y)^T (u − ū)    (48)

    u*_{k+1} = arg min_u  Φ_m(u, θ) := φ(u, y(u, θ) + ε_y^k + (λ_y^k)^T (u − u_k))    (49)
               s.t.  G_m(u, θ) := g(u, y(u, θ) + ε_y^k + (λ_y^k)^T (u − u_k)) ≤ 0
                     u^L ≤ u ≤ u^U

where u_k represents the current operating point. Again, one can choose between two filtering strategies:

(1) Filter the n_y(n_u + 1) model modifiers ε^y and λ^y, in which case the adaptation proceeds as in (50) and (51), with the gain coefficients b_1, ..., b_{n_y}, q_1, ..., q_{n_y} ∈ (0, 1], while the new operating point is simply given by u_{k+1} := u*_{k+1}.

(2) Filter the n_u inputs u, which results in the same adaptation law as (47) for the determination of the new operating point u_{k+1}, while the model modifiers are calculated from (52) and (53).

If both φ(u, y) and g(u, y) are linear in u and y, the latter scheme is identical to the modifier-adaptation variant in subsection 3.6.1. The only difference between the two schemes is the definition of the modifiers and the way the approach is viewed: either the modifiers are used to adapt the model that is subsequently used in the optimization step, or the modifiers are used to adapt the cost and constraint functions in the optimization problem. It follows from these considerations that the local convergence condition for the variant with modifiers on the model outputs and filters on the inputs is identical to that given in subsection 3.6.1, where the modifiers are on the cost and constraint functions.
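By construction, the choices (52) and (53) make the modified outputs (48) match the plant output and its gradient at the current operating point u_k. A toy numerical check (scalar plant and model of our own choosing):

```python
import numpy as np

uk = 2.0
yp  = lambda u: np.sin(u) + 2.0*u             # "plant" output (illustrative)
dyp = lambda u: np.cos(u) + 2.0
y   = lambda u: 1.5*u                         # mismatched model output
dy  = lambda u: 1.5

eps_y = yp(uk) - y(uk)                        # output modifier (52)
lam_y = dyp(uk) - dy(uk)                      # gradient modifier (53)
ym = lambda u: y(u) + eps_y + lam_y*(u - uk)  # modified model output (48)

h = 1e-6
print(ym(uk) - yp(uk))                          # value mismatch at uk: ~0
print((ym(uk+h) - ym(uk-h))/(2*h) - dyp(uk))    # gradient mismatch at uk: ~0
```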

3.7. Links to Previous Work. We first highlight the distinction between the gradient modifiers used in this work and those proposed in the ISOPE literature. In the ISOPE literature, an important distinction is made between optimization problems with and without inequality constraints that depend on the output y, i.e., process-dependent constraints of the form g(u, y) ≤ 0.9,26 For instance, the original ISOPE formulation, which does not consider such constraints, modifies only the gradient of the cost function.8,27 Optimality and convergence of the original ISOPE algorithm have been analyzed.28 ISOPE was later extended to include process-dependent constraints.29,30 However, instead of including additional modifiers for the constraint functions, as in Gao and Engell,16 a Lagrangian modifier was introduced in the cost function. Also, to the best knowledge of the authors, no convergence analysis has been provided to date for the extended algorithm.

The original ISOPE algorithm, which does not consider process-dependent constraints in the optimization problem, reads as in (54). In the kth RTO iteration, a parameter estimation problem is solved at the operating point u_k, which yields the updated parameter values θ_k. The parameter estimation problem is solved under the additional constraint (55). Then, assuming that the plant output gradients ∂y_p/∂u(u_k) are available, the ISOPE modifier λ_k is computed from (56). The output-matching requirement (55) constitutes a model-qualification condition that is found throughout the ISOPE literature.8,9,28,30 The new operating point is determined by filtering the inputs through a first-order exponential filter, as in (47).

It has recently been established that Condition (55) can be satisfied without updating the model parameters θ.31 This approach, which was also used by Gao and Engell,16 introduces the model-shift term a_k := y_p(u_k) − y(u_k, θ) in the modified optimization problem, so that (54) becomes (57), with the modifier λ_k now given by (58). Because this variant eliminates the need to estimate model parameters, the name ISOPE becomes inappropriate. Interestingly, the model-shift term a_k serves the same purpose as the output modifier ε_y^k in (52).

Note that, if the gradient of the cost function is modified using (56), Condition (55) is required for the gradient of the modified cost function to match the plant gradient upon convergence. This is somewhat awkward because, in principle, output value matching should not be a prerequisite for output gradient matching. This inconsistency can be removed by defining the gradient modifier λ_Φ as described in subsection 3.1, which gives (59). Interestingly, if Condition (55) is satisfied, the gradient modifier λ_Φ, as defined in (59), reduces to the ISOPE modifier λ in (56).

Figure 5. Modifier adaptation applied to Problem (34). Legend: thick solid line, plant constraint; thick dash-dotted line, model-predicted constraint; dotted lines, cost function contours; point P, plant optimum; point M, model optimum.

    ε_{y_i}^{k+1} := (1 − b_i) ε_{y_i}^k + b_i [y_{p,i}(u_{k+1}) − y_i(u_{k+1}, θ)],   i = 1, ..., n_y    (50)

    (λ_{y_i}^{k+1})^T := (1 − q_i)(λ_{y_i}^k)^T + q_i [∂y_{p,i}/∂u(u_{k+1}) − ∂y_i/∂u(u_{k+1}, θ)],   i = 1, ..., n_y    (51)

    ε_y^k := y_p(u_k) − y(u_k, θ)    (52)

    (λ_y^k)^T := ∂y_p/∂u(u_k) − ∂y/∂u(u_k, θ)    (53)

    u*_{k+1} = arg min_u  φ(u, y(u, θ_k)) + λ_k^T u    (54)
               s.t.  u^L ≤ u ≤ u^U

    y(u_k, θ_k) = y_p(u_k)    (55)

    λ_k^T = ∂φ/∂y(u_k, y(u_k, θ_k)) [∂y_p/∂u(u_k) − ∂y/∂u(u_k, θ_k)]    (56)

    u*_{k+1} = arg min_u  φ(u, y(u, θ) + a_k) + λ_k^T u    (57)
               s.t.  u^L ≤ u ≤ u^U

    λ_k^T := ∂φ/∂y(u_k, y(u_k, θ) + a_k) [∂y_p/∂u(u_k) − ∂y/∂u(u_k, θ)]    (58)
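That (59) reduces to the ISOPE modifier (56) when Condition (55) holds can be verified on a toy example (the cost and output functions below are our own, chosen so that the model and plant outputs coincide at u_k):

```python
import numpy as np

uk = 4.0
phi    = lambda u, y: (y - 2.0)**2 + 0.1*u**2   # cost phi(u, y)
dphidu = lambda u, y: 0.2*u
dphidy = lambda u, y: 2.0*(y - 2.0)

yp, dyp = (lambda u: u),         (lambda u: 1.0)     # plant output and gradient
ym, dym = (lambda u: 0.25*u**2), (lambda u: 0.5*u)   # model output and gradient
assert abs(yp(uk) - ym(uk)) < 1e-12   # Condition (55) holds at uk

# Gradient modifier (59): plant cost gradient minus model cost gradient.
lam59 = (dphidu(uk, yp(uk)) + dphidy(uk, yp(uk))*dyp(uk)
         - dphidu(uk, ym(uk)) - dphidy(uk, ym(uk))*dym(uk))
# ISOPE modifier (56).
lam56 = dphidy(uk, ym(uk))*(dyp(uk) - dym(uk))

print(lam59, lam56)   # both equal -4.0
```

The two ∂φ/∂u terms in (59) cancel because they are evaluated at matching outputs, and what remains is exactly (56).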

To handle process-dependent constraints, the methodology presented in this paper modifies the constraint functions, following the approach introduced by Gao and Engell.16 This differs significantly from the approach used in the ISOPE literature, which introduces a Lagrangian modifier in the cost function.29,30,32

3.8. Gradient Estimation. Perhaps the major difficulty of the modifier-adaptation approach lies in the fact that the gradients of the plant outputs with respect to the input variables (also called experimental gradients, ∂y_p/∂u) must be available. In the context of RTO, several methods for estimating experimental gradients have been proposed and compared.33,34 A distinction can be made between steady-state perturbation methods and dynamic perturbation methods. Steady-state perturbation methods require at least (n_u + 1) steady-state operating points to estimate the gradients. The idea behind dynamic perturbation methods is to estimate the experimental gradient using information obtained during the transient between the steady-state operating points of successive RTO iterations. One such approach is dynamic model identification, where the experimental gradients are obtained from a dynamic model that is identified during transient operation.34-37 The major advantage of dynamic perturbation methods is that the waiting time needed to reach a new steady state can be avoided. However, they have the disadvantage of requiring additional excitation to identify the dynamic model.

With regard to steady-state perturbation methods, the most straightforward approach to estimating experimental gradients is to use finite-difference techniques directly on the plant. These techniques perturb each input variable individually around the current operating point to obtain an estimate of the corresponding gradient. The use of finite differences was suggested in the original ISOPE paper.27 However, because a new steady state must be attained for each perturbation in order to evaluate the plant derivatives, the time between successive RTO iterations increases significantly, and the approach becomes experimentally intractable with several input variables.
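A forward-difference estimate of this kind can be sketched as follows; each call to `plant` stands for driving the real process to a new steady state, which is what makes the approach costly when there are several inputs (the quadratic steady-state map below is synthetic):

```python
import numpy as np

def fd_gradient(plant, u, h=0.1):
    """Forward-difference estimate of the plant gradient at u: requires
    n_u extra steady-state experiments besides the one already at u."""
    y0 = plant(u)
    grad = np.zeros(len(u))
    for i in range(len(u)):
        up = np.array(u, dtype=float)
        up[i] += h
        grad[i] = (plant(up) - y0) / h
    return grad

# Synthetic "plant" steady-state map (illustrative only).
plant = lambda u: (u[0] - 3.0)**2 + 4.0*(u[1] - 2.5)**2
g_est = fd_gradient(plant, [2.0, 2.0], h=0.01)
print(g_est)   # close to the true gradient [-2, -4]
```

The bias of the forward difference is O(h), which is why the step size trades off truncation error against measurement noise (cf. Example 5).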

Other steady-state perturbation methods use the current and past operating points to estimate gradients. In the so-called dual ISOPE algorithms,9,38 a constraint on the search region for the next operating point, which takes into account the position of the past points, is added to the model-based optimization problem. This way, the optimization objective and the requirement of generating sufficient steady-state perturbations in the operating points for the purpose of estimating the gradient are addressed simultaneously. Although the problem of experimental gradient estimation is crucial to the modifier-adaptation methodology, it is beyond the scope of this paper. A first attempt that considers unconstrained problems has been detailed elsewhere.39

    (λ_Φ^k)^T := ∂Φ_p/∂u(u_k) − ∂Φ/∂u(u_k, θ)
              = ∂φ/∂u(u_k, y_p(u_k)) + ∂φ/∂y(u_k, y_p(u_k)) ∂y_p/∂u(u_k)
                − ∂φ/∂u(u_k, y(u_k, θ)) − ∂φ/∂y(u_k, y(u_k, θ)) ∂y/∂u(u_k, θ)    (59)

Figure 6. Schematic of the three-tank system.

Table 2. Calibration Coefficients for the Levels in the Tanks and the Flow Rates through the Pumps

    tank    z1i      z2i
    T1      37.69    -10.79
    T2      35.50    -7.224
    T3      40.94    -7.177

    pump    wi
    P1      13.22
    P2      14.96

Table 3. Calibration Coefficients for the Flow Rates from the Tanks

    tank    ai        connection    aj2
    T1      0.1203    T1-T2         0.0381
    T2      0.0613    T3-T2         0.0285
    T3      0.1141

Figure 7. Constraint adaptation alone applied to Problem (65).

Example 5. Consider the optimization problem of Example 1. To depart from the ideal case of Example 1, Gaussian noise with a standard deviation of 0.05 is added to the measured cost Φ_p and constraint G_p. To estimate the gradients of the cost and constraint functions, finite-difference perturbations with a step size of 0.1 are applied at each RTO iteration. The resulting iterations are shown in Figure 5, starting from the model optimum (point M) and using the filter parameters b = 0.7, d = 0.5, and q = 0.5. The algorithm converges to a region of the input space that contains the plant optimum (point P). To avoid constraint violations due to measurement noise and perturbation of the inputs, a backoff from the constraint could be implemented.

4. Experimental Application to a Three-Tank System

4.1. Experimental Setup. The three-tank system is depicted in Figure 6. It consists of three plexiglass cylinders T1, T2, and T3; the cross-sectional area and height of each cylinder are A = 154 cm² and H = 60 cm, respectively. Each cylinder is equipped with a manual bottom valve (labeled V1, V2, and V3, respectively). Moreover, the cylinders are serially connected through the manual valves V12 and V32. Liquid (water) is fed into cylinders T1 and T3 by the pumps P1 and P2, respectively, at a maximum flow rate of 6 L/min. Level measurements in the cylinders are performed by piezoresistive differential-pressure sensors.

Subsequently, q1, q2, and q3 denote the outlet flow rates; q12 and q32, the serial connection flow rates; qp1 and qp2, the inlet flow rates delivered by the pumps; and h1, h2, and h3, the respective levels in the cylinders.

The liquid level h_i in each tank is obtained as an affine function of the voltage signal y_i provided by the level sensor in that tank, and the flow rate q_pi delivered by each pump is proportional to the voltage u_i ∈ [0, 8] applied to that pump; both calibration relations are given just before (60). The voltages u_1 and u_2, rather than the flow rates q_p1 and q_p2, are considered as input variables. The numerical values of the coefficients z_1i, z_2i, and w_i are reported in Table 2.

4.2. Formulation of the Optimization Problem. At steady state, the mass-conservation equations read as in (60)-(62). For the sake of simplicity, the flow rates are modeled as functions of the tank levels using Torricelli's rule, (63) and (64), with d_j2 := h_j − h_2. The model parameters a_i (for i = 1, ..., 3) and a_j2 (for j = 1, 3) have been identified from steady-state measurements using a least-squares approach; the estimated values are reported in Table 3. This model is selected as the nominal model in what follows. The time constant of the system, with all valves fully open, is approximately 3 min.

The optimization problem consists of minimizing the overall pumping effort while maintaining the liquid levels between given limits, as formulated in (65). The lower and upper bounds on the liquid levels are h^L = 5 cm and h^U = 30 cm, and the lower and upper bounds on the pump voltages are u^L = 0 V and u^U = 8 V.
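The steady-state model (60)-(64), with the calibration coefficients of Tables 2 and 3, can be exercised numerically. The sketch below fixes h1 and h3 at illustrative values, solves the T2 balance (61) for h2 with a bracketed root finder, and backs out the pump voltages from the T1 and T3 balances (60) and (62); it is a model sanity check, not the RTO scheme itself:

```python
import numpy as np
from scipy.optimize import brentq

A = 154.0                                  # cross-sectional area, cm^2
a = {1: 0.1203, 2: 0.0613, 3: 0.1141}      # outlet coefficients (Table 3)
a12, a32 = 0.0381, 0.0285                  # connection coefficients (Table 3)
w1, w2 = 13.22, 14.96                      # pump gains (Table 2)

q_out = lambda i, h: A*a[i]*np.sqrt(h)                          # eq (63)
q_con = lambda aj2, dj2: np.sign(dj2)*A*aj2*np.sqrt(abs(dj2))   # eq (64)

h1, h3 = 16.0, 16.0   # chosen levels (illustrative, within [5, 30] cm)

# T2 balance (61): q12 + q32 - q2 = 0, solved for h2 in (0, min(h1, h3)).
bal2 = lambda h2: q_con(a12, h1 - h2) + q_con(a32, h3 - h2) - q_out(2, h2)
h2 = brentq(bal2, 1e-6, min(h1, h3))

# Back out the pump voltages from the T1 and T3 balances (60), (62).
u1 = (q_out(1, h1) + q_con(a12, h1 - h2)) / w1
u2 = (q_out(3, h3) + q_con(a32, h3 - h2)) / w2
print(h2, u1, u2)   # consistent steady state with voltages inside [0, 8] V
```

The bracket for `brentq` is valid because the T2 balance is positive as h2 approaches 0 and negative at h2 = min(h1, h3), so a root is guaranteed in between.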

4.3. Real-Time Optimization Using Modifier Adaptation. Modifier adaptation is implemented based on the update laws (23) and (24). Note that the cost gradient does not need adaptation, because the cost function, which depends only on the input variables u_1 and u_2, is perfectly known. An RTO period of 30 min is chosen, which leaves sufficient time for the system to reach steady state after an input change. Measurements of the tank levels are taken every second, and the average value over the last 10 min is used when adapting the modifiers.

Constraint Adaptation Alone. The results shown in Figure 7 correspond to the adaptation of the constraint values only, i.e., without modification of the constraint gradients. The algorithm starts from the conservative operating point u_1 = 6.0 V, u_2 = 6.2 V and uses the constraint-filter parameter b = 0.5.

In the first part of the experiment, all the manual valves are fully open. The iterations converge to an operating point with h_2 at its lower bound of 5 cm. At time t = 25.6 h, a perturbation is introduced by partially closing valve V32 to reduce the flow rate q_32; this time is indicated by a vertical dash-dotted line in Figure 7. Only a few iterations are needed for constraint adaptation to reject this perturbation and converge to a different operating point with, again, h_2 at its lower bound.

Adaptation of the Constraint Values and Gradients. The results obtained by applying the modifier-adaptation algorithm with adjustment of both the constraint values and gradients are depicted in Figure 8. The iterations start from the same conservative operating point as before, and the update laws (23) and (24) are applied with b = q = 0.5. To estimate the experimental gradients, finite-difference perturbations of amplitude −0.5 V are imposed on both inputs at each RTO iteration. These perturbations, visible in the upper plot of Figure 8, call for an increase of the RTO period from 30 min to 90 min. At time t = 26.24 h, the same perturbation of valve V32 is introduced.

    h_i = z_1i + z_2i y_i,   i = 1, ..., 3

    q_pi = w_i u_i,   i = 1, 2

    A dh_1/dt = q_p1 − q_1 − q_12 = 0    (60)

    A dh_2/dt = q_12 + q_32 − q_2 = 0    (61)

    A dh_3/dt = q_p2 − q_3 − q_32 = 0    (62)

    q_i = A a_i √h_i,   i = 1, ..., 3    (63)

    q_j2 = sign(d_j2) A a_j2 √|d_j2|,   j = 1, 3    (64)

    min_{u_1, u_2}  u_1² + u_2²    (65)
    s.t.  Model (60)-(64)
          h^L ≤ h_1, h_2, h_3 ≤ h^U
          u^L ≤ u_1, u_2 ≤ u^U

In either part of the experiment, the iterations converge to an operating point for which the lower bound on h_2 is active. A slight violation of the constraint on h_2 is observed at each iteration, because of the finite-difference perturbations used for gradient estimation. Note also that, as in the case without gradient correction, the constraint on h_2 is violated during the transient that follows the valve perturbation.

Comparison. A comparison of the cost values at the RTO iterations without and with constraint-gradient correction is given in Figure 9. For easier comparison, the time coordinates are shifted so that, in both experiments, the origin corresponds to the time at which the perturbation of valve V32 is applied. As expected, modifier adaptation with gradient correction yields a lower cost value upon convergence.

5. Conclusions

In this paper, a modifier-adaptation methodology has been considered in the context of real-time optimization (RTO). Unlike two-step approaches, which rely on a parameter estimation step to adapt the parameters of a process model, the modifier-adaptation approach adjusts the optimization problem by adapting linear modifier terms in the cost and constraint functions. These modifiers are based on the differences between the measured and predicted values of (i) the constraints and (ii) the constraint and cost gradients, i.e., quantities that are involved in the necessary conditions of optimality (NCO). The adaptation of these modifiers in the model-based optimization is such that a point satisfying the plant NCO is reached upon convergence.

Various aspects of modifier adaptation have been discussed and illustrated through numerical examples, including local convergence conditions and model adequacy. The rather restrictive model-adequacy conditions of two-step approaches can be relaxed considerably, which represents a clear advantage of modifier adaptation. Variants of the modifier-adaptation scheme have also been presented, including alternative filtering and an alternative modification of the process model. The differences and similarities with integrated system optimization and parameter estimation (ISOPE) and other approaches in the literature have been discussed.

Finally, the experimental application to a three-tank system shows the applicability of the approach. The major difficulty lies in the estimation of the experimental gradients; in this work, they were estimated using a finite-difference approach. However, other approaches available in the literature, such as those mentioned in subsection 3.8, can also be used.

For a two-step approach to work well, the process model should be able to predict accurately the constraint values as well as the constraint and cost gradients for the entire range of operating conditions and process disturbances. Hence, process disturbances must be taken into account in the model. Furthermore, parameter estimation is complicated by plant-model mismatch and the lack of input excitation. With additional excitation, the resulting operating point may become suboptimal or even infeasible. Hence, in the presence of plant-model mismatch and unmeasured disturbances, the modifier-adaptation approach is appealing, because it does not require the process model to be updated. Furthermore, it uses a parametrization Λ that is tailored to meeting the NCO quantities C.19 The price to pay is the need to estimate experimental gradients online. Although, in this paper, this is done at each RTO iteration, it is also possible to adapt the gradient modifiers less frequently than the constraint offsets, particularly for problems where the solution is largely determined by active constraints. Finally, we feel that modifier adaptation might not scale to large-scale problems as easily as the two-step approach or constraint adaptation, because the difficulty of estimating experimental gradients from past operating points increases with the number of inputs. This re-emphasizes the need to investigate alternative ways of estimating experimental gradients.

Figure 8. Full modifier adaptation applied to Problem (65).

Figure 9. Cost comparison, showing (-*-) constraint adaptation and (-O-) full modifier adaptation.

Note Added after ASAP Publication: The version of this paper that was published on the Web March 20, 2009 has been revised. Changes to the appearance of many of the symbols throughout the paper have been made, and reference 12 has been removed. In addition, a correction was made to eq 34. The corrected version of this paper was reposted to the Web May 4, 2009.

Literature Cited

(1) Cutler, C. R.; Perry, R. T. Real time optimization with multivariablecontrol is required to maximize profits. Comput. Chem. Eng. 1983, 7 (5),663–667.

(2) Darby, M. L.; White, D. C. On-line optimization of complex processunits. Chem. Eng. Process. 1988, 51–59.

(3) Marlin, T. E.; Hrymak, A. N. Real-time operations optimization ofcontinuous processes. In Proceedings of the 5th International Conferenceon Chemical Process ControlsV: Assessment and New Directions forResearch; Kantor, J. C., Garcia, C. E., Carnahan, B., Eds.; AIChESymposium Series, No. 316; American Institute of Chemical Engineers(AIChE): New York, 1997; pp 156-164.

(4) Young, R. E. Petroleum refining process control and real-timeoptimization. IEEE Control Syst. Mag. 2006, 26 (6), 73–83.

(5) Jang, S.-S.; Joseph, B.; Mukai, H. On-line optimization of constrainedmultivariable chemical processes. AIChE J. 1987, 33 (1), 26–35.

(6) Chen, C. Y.; Joseph, B. On-line optimization using a two-phaseapproach: An application study. Ind. Eng. Chem. Res. 1987, 26, 1924–1930.

(7) Yip, W. S.; Marlin, T. E. The effect of model fidelity on real-timeoptimization performance. Comput. Chem. Eng. 2004, 28, 267–280.

(8) Roberts, P. D.; Williams, T. W. On an algorithm for combined systemoptimisation and parameter estimation. Automatica 1981, 17 (1), 199–209.

(9) Brdys, M.; Tatjewski, P. IteratiVe Algorithms for Multilayer Optimiz-ing Control; Imperial College Press: London, U.K., 2005.

(10) Yip, W. S.; Marlin, T. E. Multiple data sets for model updating inreal-time operations optimization. Comput. Chem. Eng. 2002, 26, 1345–1362.

(11) Box, G. E. P.; Draper, N. R. EVolutionary Operation. A StatisticalMethod for Process ImproVement: Wiley: New York, 1969.

(12) Deleted in production.(13) Garcia, C. E.; Morari, M. Optimal operation of integrated processing

systems. Part II: Closed-loop on-line optimizing control. AIChE J. 1984,30 (2), 226–234.

(14) Skogestad, S. Self-optimizing control: The missing link betweensteady-state optimization and control. Comput. Chem. Eng. 2000, 24, 569–575.

(15) Francois, G.; Srinivasan, B.; Bonvin, D. Use of measurements forenforcing the necessary conditions of optimality in the presence ofconstraints and uncertainty. J. Process Control 2005, 15 (6), 701–712.

(16) Gao, W.; Engell, S. Iterative set-point optimization of batchchromatography. Comput. Chem. Eng. 2005, 29, 1401–1409.

(17) Forbes, J. F.; Marlin, T. E. Model accuracy for economic optimizingcontrollers: The bias update case. Ind. Eng. Chem. Res. 1994, 33, 1919–1929.

(18) Chachuat, B.; Marchetti, A.; Bonvin, D. Process optimization viaconstraints adaptation. J. Process Control 2008, 18, 244–257.

(19) Chachuat, B.; Srinivasan, B.; Bonvin, D. Adaptation Strategies forReal-Time Optimization. Comput. Chem. Eng., in review, 2009.

(20) de Hennin, S. R.; Perkins, J. D.; Barton, G. W. Structural decisions in on-line optimization. In Proceedings of the 5th International Conference on Process Systems Engineering (PSE '94), Kyongju, Korea, 1994; pp 297–302.

(21) Bazaraa, M. S.; Sherali, H. D.; Shetty, C. M. Nonlinear Programming: Theory and Algorithms, 2nd ed.; Wiley: New York, 1993.

(22) Forbes, J. F.; Marlin, T. E. Design cost: A systematic approach to technology selection for model-based real-time optimization systems. Comput. Chem. Eng. 1996, 20, 717–734.

(23) Gill, P. E.; Murray, W.; Wright, M. H. Practical Optimization; Academic Press: London, 2003.

(24) Marchetti, A.; Chachuat, B.; Bonvin, D. Real-time optimization via adaptation and control of the constraints. Presented at the 18th European Symposium on Computer Aided Process Engineering, ESCAPE 18, Lyon, France, 2008.

(25) Fiacco, A. V. Introduction to Sensitivity and Stability Analysis in Nonlinear Programming; Mathematics in Science and Engineering, Vol. 165; Academic Press: New York, 1983.

(26) Roberts, P. D. Coping with model-reality differences in industrial process optimisation: A review of integrated system optimisation and parameter estimation (ISOPE). Comput. Ind. 1995, 26, 281–290.

(27) Roberts, P. D. An algorithm for steady-state system optimization and parameter estimation. J. Syst. Sci. 1979, 10, 719–734.

(28) Brdys, M.; Roberts, P. D. Convergence and optimality of modified two-step algorithm for integrated system optimisation and parameter estimation. Int. J. Syst. Sci. 1987, 18 (7), 1305–1322.

(29) Brdys, M.; Chen, S.; Roberts, P. D. An extension to the modified two-step algorithm for steady-state system optimization and parameter estimation. Int. J. Syst. Sci. 1986, 17 (8), 1229–1243.

(30) Zhang, H.; Roberts, P. D. Integrated system optimization and parameter estimation using a general form of steady-state model. Int. J. Syst. Sci. 1991, 22 (10), 1679–1693.

(31) Tatjewski, P. Iterative optimizing set-point control: The basic principle redesigned. Presented at the 15th IFAC World Congress, Barcelona, Spain, 2002.

(32) Ellis, J. E.; Kambhampati, C.; Sheng, G.; Roberts, P. D. Approaches to the optimizing control problem. Int. J. Syst. Sci. 1988, 19 (10), 1969–1985.

(33) Mansour, M.; Ellis, J. E. Comparison of methods for estimating real process derivatives in on-line optimization. Appl. Math. Modell. 2003, 27, 275–291.

(34) Zhang, Y.; Forbes, J. F. Performance analysis of perturbation-based methods for real-time optimization. Can. J. Chem. Eng. 2006, 84, 209–218.

(35) Bamberger, W.; Isermann, R. Adaptive on-line steady-state optimization of slow dynamic processes. Automatica 1978, 14, 223–230.

(36) Garcia, C. E.; Morari, M. Optimal operation of integrated processing systems. Part I: Open-loop on-line optimizing control. AIChE J. 1981, 27 (6), 960–968.

(37) Zhang, H.; Roberts, P. D. On-line steady-state optimisation of nonlinear constrained processes with slow dynamics. Trans. Inst. Meas. Control 1990, 12 (5), 251–261.

(38) Tadej, W.; Tatjewski, P. Analysis of an ISOPE-type dual algorithm for optimizing control and nonlinear optimization. Int. J. Appl. Math. Comput. Sci. 2001, 11 (2), 429–457.

(39) Marchetti, A.; Chachuat, B.; Bonvin, D. Real-time optimization with estimation of experimental gradient. Submitted to Adv. Control Chem. Process., Proc. IFAC Symp., 2009.

Received for review September 8, 2008

Revised manuscript received January 25, 2009

Accepted January 28, 2009

IE801352X

Ind. Eng. Chem. Res., Vol. 48, No. 13, 2009 6033