Computational Issues in Nonlinear Control and Estimation

Arthur J. Krener

Naval Postgraduate School, Monterey, CA 93943
Modern Control Theory

Modern Control Theory dates back to the first International Federation of Automatic Control (IFAC) World Congress, which took place in Moscow in 1960.

At least three major theoretical accomplishments were reported there.

• Pontryagin, Boltyanskii, Gamkrelidze and Mishchenko presented the Maximum Principle.
• Bellman presented the Dynamic Programming Method and the Hamilton-Jacobi-Bellman (HJB) PDE.
• Kalman presented the theory of Controllability, Observability and Minimality for Linear Control Systems.

What did these accomplishments have in common?

They dealt with control and estimation problems in State Space Form.
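For reference, the common setting can be written out explicitly; the notation below is the standard state space formulation, not a quotation from the slides:

```latex
\dot{x} = f(x,u), \qquad y = h(x,u),
```

with state $x \in \mathbb{R}^n$, control $u \in \mathbb{R}^m$ and output $y \in \mathbb{R}^p$. In Kalman's linear case this specializes to $\dot{x} = Ax + Bu$, $y = Cx + Du$.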
Linear State Space Control Theory

The decade of the 1960s witnessed the explosion of linear state space control theory.

• Linear Quadratic Regulator (LQR)
• Kalman Filtering
• Separation Principle, Linear Quadratic Gaussian (LQG)
• Many other linear results.

It was the time of F, G, H, J if you lived near Stanford and A, B, C, D if you lived anywhere else.

And there was much lamenting of the gap between the linear state space theory and the linear frequency domain practice.
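To make the LQR result above concrete, here is a minimal sketch (my own illustration, not from the talk) that computes the optimal state feedback for a double integrator by solving the continuous-time algebraic Riccati equation with SciPy:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator: x1' = x2, x2' = u, with cost integral of x'Qx + u'Ru.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the algebraic Riccati equation A'P + PA - P B R^{-1} B' P + Q = 0.
P = solve_continuous_are(A, B, Q, R)

# Optimal LQR feedback u = -K x with K = R^{-1} B' P.
K = np.linalg.solve(R, B.T @ P)
```

The resulting closed-loop matrix A - BK is Hurwitz, i.e. all its eigenvalues have negative real part; for this example the classical answer is K = [1, √3].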
LINPACK

In the early 1970s something happened to change the conversation.

The Department of Energy funded a project to develop software for linear algebra applications for what were then called supercomputers.

Four numerical analysts, Dongarra, Bunch, Moler and Stewart, wrote LINPACK in Fortran. LINPACK makes use of the BLAS (Basic Linear Algebra Subprograms) libraries for performing basic vector and matrix operations.

LINPACK and a related package called EISPACK were later superseded by LAPACK, which also uses BLAS.
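The layering described above survives today. As a hedged illustration (my example, using NumPy, whose dense linear algebra wraps BLAS and LAPACK, rather than LINPACK itself):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

# A level-2 BLAS style operation: matrix-vector product (cf. dgemv).
y = A @ b

# A LAPACK-backed dense solve: LU factorization plus triangular solves (cf. dgesv).
x = np.linalg.solve(A, b)
```

The point is the division of labor: the high-level routine (solve) is built out of low-level, highly tuned vector and matrix kernels.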
MATLAB

Cleve Moler, the chairman of the computer science department at the University of New Mexico, started developing MATLAB in the late 1970s.

He designed it to give his students access to LINPACK and EISPACK without them having to learn Fortran.

Jack Little, an engineer, was exposed to it during a visit Moler made to Stanford University in 1983.

Recognizing its commercial potential, he joined with Moler and Steve Bangert. They rewrote MATLAB in C and founded MathWorks in 1984 to continue its development.

MATLAB was first adopted by researchers and practitioners in control engineering, Little's specialty, but quickly spread to many other domains.
Competitors

With BLAS readily available, several other companies arose to compete with MathWorks, among them the makers of Ctrl-C and MATRIXx. But they gradually faded away.

New competitors to MATLAB, such as Scilab, continue to arise, but with MATLAB's extensive suite of toolboxes it is hard for them to gain market share.

Alan Laub and others wrote the Control Systems Toolbox and Lennart Ljung wrote the System Identification Toolbox. Both deal primarily with linear systems.

Other controls-related toolboxes include the Aerospace Toolbox, the Model Predictive Control Toolbox, the Robotics Toolbox and the Robust Control Toolbox.

So special-purpose software is becoming available for some nonlinear systems.
Nonlinear Systems Toolbox 2016

The Control Systems Toolbox for linear systems is

• General Purpose
• Reliable
• Scalable
• Portable
• Easy to Use
• Benchmarked

Can we develop a toolbox with similar properties that is applicable to a broad class of nonlinear systems?

In the rest of this talk I would like to present a few baby steps in this direction, which can be found in NST 2016.

It should be noted that the numerical optimization community has already developed software with these properties for a wide variety of applications, such as Mixed Integer Linear Programs (e.g., GUROBI and CPLEX) and Nonlinear Programs (e.g., KNITRO and IPOPT). For optimization solver benchmarks see http://plato.la.asu.edu/bench.html
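For a small taste of what such nonlinear programming solvers do, here is a hedged sketch: a toy constrained NLP of my own, solved with SciPy's SLSQP method as a freely available stand-in for solvers like KNITRO and IPOPT:

```python
import numpy as np
from scipy.optimize import minimize

# Toy NLP: minimize (x0 - 1)^2 + (x1 - 2)^2  subject to  x0 + x1 <= 2.
# Geometrically: project the point (1, 2) onto the half-plane x0 + x1 <= 2.
res = minimize(
    lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2,
    x0=np.zeros(2),
    constraints=[{"type": "ineq", "fun": lambda x: 2.0 - x[0] - x[1]}],
    method="SLSQP",
)
```

The minimizer is (0.5, 1.5), the orthogonal projection of (1, 2) onto the constraint boundary. Production solvers differ in scale and robustness, not in the shape of the problem statement.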
![Page 32: Computational Issues in Nonlinear Control and Estimation · the Hamilton-Jacobi-Bellman (HJB) PDE. Kalman presented the theory of Controllability, Observability and Minimality for](https://reader030.fdocuments.in/reader030/viewer/2022040607/5ebe8459ba4a9944113b579a/html5/thumbnails/32.jpg)
Nonlinear Systems Toolbox 2016The Control Systems Toolbox for linear systems is
• General Purpose• Reliable
• Scalable• Portable• Easy to Use• Benchmarked
Can we develop a toolbox with similar properties that isapplicable to a broad class of nonlinear systems?
In the rest of this talk I would to present a few baby steps inthis direction, which can be found in NST 2016.
It should be noted that the numerical optimization communityhas already developed software with these properties for a widevariety of applications like Mixed Intger Linear Programs (i.e.,GUROBI and CPLEX) and Nonlinear Programs (i.e., KNITROand IPOPT). For optimzation solver benchmarks seehttp://plato.la.asu.edu/bench.html
![Page 33: Computational Issues in Nonlinear Control and Estimation · the Hamilton-Jacobi-Bellman (HJB) PDE. Kalman presented the theory of Controllability, Observability and Minimality for](https://reader030.fdocuments.in/reader030/viewer/2022040607/5ebe8459ba4a9944113b579a/html5/thumbnails/33.jpg)
Nonlinear Systems Toolbox 2016The Control Systems Toolbox for linear systems is
• General Purpose• Reliable• Scalable
• Portable• Easy to Use• Benchmarked
Can we develop a toolbox with similar properties that isapplicable to a broad class of nonlinear systems?
In the rest of this talk I would to present a few baby steps inthis direction, which can be found in NST 2016.
It should be noted that the numerical optimization communityhas already developed software with these properties for a widevariety of applications like Mixed Intger Linear Programs (i.e.,GUROBI and CPLEX) and Nonlinear Programs (i.e., KNITROand IPOPT). For optimzation solver benchmarks seehttp://plato.la.asu.edu/bench.html
![Page 34: Computational Issues in Nonlinear Control and Estimation · the Hamilton-Jacobi-Bellman (HJB) PDE. Kalman presented the theory of Controllability, Observability and Minimality for](https://reader030.fdocuments.in/reader030/viewer/2022040607/5ebe8459ba4a9944113b579a/html5/thumbnails/34.jpg)
Nonlinear Systems Toolbox 2016The Control Systems Toolbox for linear systems is
• General Purpose• Reliable• Scalable• Portable
• Easy to Use• Benchmarked
Can we develop a toolbox with similar properties that isapplicable to a broad class of nonlinear systems?
In the rest of this talk I would to present a few baby steps inthis direction, which can be found in NST 2016.
It should be noted that the numerical optimization communityhas already developed software with these properties for a widevariety of applications like Mixed Intger Linear Programs (i.e.,GUROBI and CPLEX) and Nonlinear Programs (i.e., KNITROand IPOPT). For optimzation solver benchmarks seehttp://plato.la.asu.edu/bench.html
![Page 35: Computational Issues in Nonlinear Control and Estimation · the Hamilton-Jacobi-Bellman (HJB) PDE. Kalman presented the theory of Controllability, Observability and Minimality for](https://reader030.fdocuments.in/reader030/viewer/2022040607/5ebe8459ba4a9944113b579a/html5/thumbnails/35.jpg)
Nonlinear Systems Toolbox 2016The Control Systems Toolbox for linear systems is
• General Purpose• Reliable• Scalable• Portable• Easy to Use
• Benchmarked
Can we develop a toolbox with similar properties that isapplicable to a broad class of nonlinear systems?
In the rest of this talk I would to present a few baby steps inthis direction, which can be found in NST 2016.
It should be noted that the numerical optimization communityhas already developed software with these properties for a widevariety of applications like Mixed Intger Linear Programs (i.e.,GUROBI and CPLEX) and Nonlinear Programs (i.e., KNITROand IPOPT). For optimzation solver benchmarks seehttp://plato.la.asu.edu/bench.html
Smooth Nonlinear Systems

A smooth nonlinear system is of the form

ẋ = f(x, u)
y = h(x, u)

with state x ∈ IR^(n×1), input u ∈ IR^(m×1) and output y ∈ IR^(p×1).

By smooth we mean that f(x, u), h(x, u) have as many continuous derivatives as needed by the problem and its solution. Later we shall relax this to piecewise smooth.

There may be state and control constraints of the form

g(x, u) ≤ 0

Any problem with other than linear equality constraints is inherently nonlinear even if f(x, u), h(x, u) are linear in x, u.
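As a concrete illustration (an assumed example, not one from the talk), a damped pendulum with a torque input and an actuator limit can be written in exactly this form, with the angle as the measured output:

```python
import numpy as np

# Assumed example: damped pendulum with torque input u,
# written as xdot = f(x, u), y = h(x, u), g(x, u) <= 0.
def f(x, u):
    # x[0] = angle, x[1] = angular velocity; scalar torque u
    return np.array([x[1], -np.sin(x[0]) - 0.1 * x[1] + u])

def h(x, u):
    # only the angle is measured
    return np.array([x[0]])

def g(x, u):
    # actuator limit |u| <= 1 expressed as two inequalities g(x, u) <= 0
    return np.array([u - 1.0, -u - 1.0])

x, u = np.array([0.3, 0.0]), 0.5
```

Note that (x, u) = (0, 0) is an operating point here, since f(0, 0) = 0.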
Fundamental Problems

What are the fundamental problems of control?

• Finding a feedback that stabilizes a plant to an operating point.
• Finding an open loop control trajectory that steers a plant from one state to another.
• Estimating the state of a system from partial and inexact measurements.
• Finding a feedforward and feedback that tracks a reference trajectory.
• Your favorite problem.
Linear Solutions

These are fundamental problems for both linear and nonlinear systems. For linear systems we have theoretical solutions that are easily implemented numerically. Here are a few.

• The Linear Quadratic Regulator (LQR) can be used for optimally stabilizing a linear plant.
• Linear controllability and the Controllability Gramian lead to an optimal transfer from one state to another.
• The Kalman Filter yields an optimal estimate.
• The Francis theory of regulation can be used to optimally track a reference signal.

Notice the repeated use of the word "optimal". It can be difficult to solve a problem that has many solutions. Introducing an optimality criterion narrows the search.
Optimal Stabilization

An operating point (equilibrium point) is a pair xe, ue such that f(xe, ue) = 0. Without loss of generality we can assume xe = 0, ue = 0.

A classic way to find a feedback u = κ(x) that stabilizes a nonlinear system to an operating point is to pose and solve an infinite horizon optimal control problem. Choose a Lagrangian l(x, u) that is nonnegative definite in x, u and positive definite in u and

min_{u(·)} ∫₀^∞ l(x, u) dt

subject to

ẋ = f(x, u)
x(0) = x0
Optimal Stabilization

The reason is that the optimal solution is given by a feedback u(t) = κ(x(t)) and the optimal cost π(x0) ≥ 0 starting at x0 is a potential Lyapunov function for the closed loop system,

d/dt π(x(t)) = −l(x(t), κ(x(t))) ≤ 0

that can be used to verify stability.

If we only have approximations π(x), κ(x) to the true solutions then we can verify local asymptotic stability on the largest sublevel set on which the standard Lyapunov conditions hold: if x ≠ 0 and π(x) ≤ c then

0 < π(x)
0 > ∂π/∂x(x) f(x, κ(x))
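A minimal numerical sketch of this verification, on an assumed scalar example (not from the talk) with ẋ = u − x³, approximate cost π(x) = x²/2 and feedback κ(x) = −x, sampling the sublevel set {π(x) ≤ c}:

```python
import numpy as np

# Assumed scalar example: xdot = u - x^3 with candidate
# pi(x) = x^2/2 and kappa(x) = -x from the linearization's LQR.
def f(x, u):
    return u - x**3

def pi(x):
    return 0.5 * x**2

def dpi_dx(x):
    return x

def kappa(x):
    return -x

c = 2.0
# sample the sublevel set {x != 0 : pi(x) <= c}, i.e. |x| <= sqrt(2c)
xs = np.linspace(-np.sqrt(2 * c), np.sqrt(2 * c), 201)
xs = xs[np.abs(xs) > 1e-8]   # exclude the origin

positive = np.all(pi(xs) > 0)                            # 0 < pi(x)
decreasing = np.all(dpi_dx(xs) * f(xs, kappa(xs)) < 0)   # dpi/dx * f < 0
```

Here ∂π/∂x(x) f(x, κ(x)) = x(−x − x³) = −x²(1 + x²) < 0 for x ≠ 0, so both conditions hold on the whole sublevel set; in general the largest verifiable c must be found by search.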
HJB Equations

If there exist smooth solutions π(x), κ(x) to the Hamilton-Jacobi-Bellman equations

0 = min_u { ∂π/∂x(x) f(x, u) + l(x, u) }
κ(x) = argmin_u { ∂π/∂x(x) f(x, u) + l(x, u) }

then these are the optimal cost and optimal feedback.

The control Hamiltonian is

H(λ, x, u) = λ′f(x, u) + l(x, u)
HJB Equations

If the control Hamiltonian is strictly convex in u for every λ, x then the HJB equations simplify to

0 = ∂π/∂x(x) f(x, κ(x)) + l(x, κ(x))
0 = ∂π/∂x(x) ∂f/∂u(x, κ(x)) + ∂l/∂u(x, κ(x))

If R(x) > 0 and

f(x, u) = f₀(x) + f₁(x)u
l(x, u) = q(x) + ½ u′R(x)u

then H(λ, x, u) is strictly convex in u for every λ, x.
Linear Quadratic Regulator

About the only time the HJB equations can be solved in closed form is the so-called Linear Quadratic Regulator (LQR)

min_{u(·)} ½ ∫₀^∞ x′Qx + 2x′Su + u′Ru dt

subject to

ẋ = Fx + Gu

The optimal cost is π(x) = ½ x′Px and the optimal feedback is u = Kx, where P, K satisfy

0 = F′P + PF + Q − (PG + S)R⁻¹(PG + S)′
K = −R⁻¹(PG + S)′
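The Riccati equation above is solved numerically in practice; a minimal sketch using SciPy's continuous algebraic Riccati solver (which accepts the cross-weighting S), on an assumed double-integrator plant chosen only for illustration:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed example plant: double integrator x1' = x2, x2' = u
F = np.array([[0.0, 1.0], [0.0, 0.0]])
G = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
S = np.zeros((2, 1))

# P solves 0 = F'P + PF + Q - (PG + S) R^{-1} (PG + S)'
P = solve_continuous_are(F, G, Q, R, s=S)
K = -np.linalg.solve(R, (P @ G + S).T)   # u = K x

# Riccati residual should be ~0, and F + GK should be Hurwitz
residual = F.T @ P + P @ F + Q - (P @ G + S) @ np.linalg.solve(R, (P @ G + S).T)
closed_loop_eigs = np.linalg.eigvals(F + G @ K)
```

The closed-loop eigenvalues of F + GK all have negative real part, which is the linear version of the Lyapunov argument on the next slides.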
Jacobian Linearization

Standard engineering practice is to approximate the nonlinear dynamics

ẋ = f(x, u)

by its Jacobian linearization

ẋ = Fx + Gu

where

F = ∂f/∂x(0, 0), G = ∂f/∂u(0, 0)

Then ½ x′Px and u = Kx are approximations to the true optimal cost π(x) and optimal feedback u = κ(x).
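When f is only available as a black box, F and G can be approximated by central finite differences at the operating point; a minimal sketch, with an assumed pendulum-like f for illustration:

```python
import numpy as np

# Assumed example dynamics: pendulum, f(x, u) = (x2, -sin(x1) + u)
def f(x, u):
    return np.array([x[1], -np.sin(x[0]) + u[0]])

def jacobian_linearization(f, n, m, eps=1e-6):
    """Central-difference approximation of F = df/dx(0,0), G = df/du(0,0)."""
    x0, u0 = np.zeros(n), np.zeros(m)
    F = np.zeros((n, n))
    G = np.zeros((n, m))
    for j in range(n):
        e = np.zeros(n); e[j] = eps
        F[:, j] = (f(x0 + e, u0) - f(x0 - e, u0)) / (2 * eps)
    for j in range(m):
        e = np.zeros(m); e[j] = eps
        G[:, j] = (f(x0, u0 + e) - f(x0, u0 - e)) / (2 * eps)
    return F, G

F, G = jacobian_linearization(f, 2, 1)
# For this f the exact Jacobians are F = [[0, 1], [-1, 0]], G = [[0], [1]]
```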
Lyapunov Argument

If

f(x, u) = Fx + Gu + O(x, u)²

then

d/dt (x′(t)Px(t)) = −x′(t)(Q + (PG + S)R⁻¹(PG + S)′)x(t) + O(x(t))³

so on some neighborhood of x = 0 the feedback u = Kx stabilizes the nonlinear system.

Kicker: How big is the neighborhood?
Al’brekht’s Method

In 1961 Ernst Al’brekht noticed that the first HJB equation

0 = ∂π/∂x(x) f(x, κ(x)) + l(x, κ(x))

is singular at the operating point x = 0, u = 0 because f(0, 0) = 0. This means that it is amenable to power series methods.

Let π[k](x) denote a homogeneous polynomial of degree k and consider the linear operator

π[k](x) ↦ ∂π[k]/∂x(x) (F + GK)x

The eigenvalues of this operator are the sums of k eigenvalues of F + GK, so if the latter are all in the open left half plane then so are the former.
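This eigenvalue fact can be checked numerically in the degree k = 2 case, where π[2](x) = x′Mx and the operator above reduces to the Lyapunov operator M ↦ A′M + MA with A = F + GK; in vectorized form that operator is kron(I, A′) + kron(A′, I). A minimal sketch, with an assumed stable A for illustration:

```python
import numpy as np

# Assumed stable closed-loop matrix A = F + GK (for illustration only)
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
n = A.shape[0]

# Vectorized Lyapunov operator: vec(A'M + MA) = (kron(I, A') + kron(A', I)) vec(M)
L_op = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
op_eigs = np.sort_complex(np.linalg.eigvals(L_op))

# Pairwise sums of eigenvalues of A, i.e. sums of k = 2 eigenvalues of F + GK
lam = np.linalg.eigvals(A)
pair_sums = np.sort_complex(np.array([li + lj for li in lam for lj in lam]))
```

Since A is Hurwitz every pairwise sum lies in the open left half plane, so the operator is invertible and the degree-2 equations are solvable; the same argument applies degree by degree.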
Al’brekht’s Method
Al’brekht plugged the Taylor polynomial expansions
f(x, u) ≈ Fx + Gu + f[2](x, u) + . . . + f[d](x, u)

l(x, u) ≈ (1/2)(x′Qx + 2x′Su + u′Ru) + . . . + l[d+1](x, u)

π(x) ≈ (1/2) x′Px + π[3](x) + . . . + π[d+1](x)

κ(x) ≈ Kx + κ[2](x) + . . . + κ[d](x)
into the HJB equations and collected terms of like degrees.
Al’brekht’s Method
At the lowest level he obtained the familiar LQR equations
0 = F′P + PF + Q − (PG + S)R⁻¹(PG + S)′

K = −R⁻¹(PG + S)′
At the next level he obtained linear equations for the coefficients of π[3](x) and κ[2](x).

These linear equations are solvable if the standard LQR conditions hold.

There are similar linear equations for the higher degree terms, π[k+1](x) and κ[k](x).
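The lowest level can be solved directly. Here is a minimal numpy-only sketch for a made-up 2-state, 1-input system with no cross term (S = 0), solving the Riccati equation via the stable invariant subspace of the Hamiltonian matrix (one standard method; the slides do not specify how hjb.m solves it).

```python
import numpy as np

# Hypothetical 2-state, 1-input LQR instance with S = 0: solve the Riccati
# equation via the stable invariant subspace of the Hamiltonian matrix.
F = np.array([[0.0, 1.0],
              [1.0, 0.0]])          # open-loop unstable (eigenvalues +-1)
G = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
Rinv = np.linalg.inv(R)

H = np.block([[F, -G @ Rinv @ G.T],
              [-Q, -F.T]])
w, V = np.linalg.eig(H)
stable = V[:, w.real < 0]           # eigenvectors of the stable eigenvalues
P = np.real(stable[2:, :] @ np.linalg.inv(stable[:2, :]))

K = -Rinv @ (P @ G).T               # K = -R^{-1}(PG + S)' with S = 0
residual = F.T @ P + P @ F + Q - (P @ G) @ Rinv @ (P @ G).T
print(np.max(np.abs(residual)))           # essentially zero
print(np.linalg.eigvals(F + G @ K).real)  # closed-loop poles, all negative
```

The residual confirms the Riccati equation holds and the closed-loop matrix F + GK is Hurwitz, which is exactly what the higher degree linear equations need.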
NST 2016
The routine hjb.m in my Nonlinear Systems Toolbox can implement Al'brekht's method in any state and control dimensions, to any degree, subject to machine limitations.
The routine hjb_set_up.m converts symbolic expressions for f(x, u) and l(x, u) into the matrices of coefficients that hjb.m needs.
Here is an example.
Optimal Stabilization of a Rigid Body
The example was fairly simple, with n = 4 states and m = 2 controls.
What about a more challenging problem?
Consider a rigid body with six degrees of freedom: a boat, a plane, a satellite. There are three translational degrees of freedom and three angular degrees of freedom.

A state space model has n = 12 dimensions: the six degrees of freedom and their associated velocities.

Suppose the body is fully actuated; then the control dimension is m = 6.

Our goal is to optimally stabilize the body to a desired constant velocity and desired attitude.
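For reference, a standard fully actuated rigid-body model of this size (a textbook sketch in our own notation, not taken from the slides) is

```latex
\dot p = R(\Theta)\, v, \qquad
\dot\Theta = T(\Theta)\, \omega, \qquad
m\,\dot v = f_u - \omega \times m v, \qquad
J\,\dot\omega = \tau_u - \omega \times J\omega
```

where p is the position, Θ the attitude (Euler angles), v and ω the body-frame linear and angular velocities, and (f_u, τ_u) the m = 6 control forces and torques; stacking (p, Θ, v, ω) gives the n = 12 states.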
Runtimes
First we did it to degree 4 in the cost and degree 3 in the feedback.

hjb_set_up_time = 89.7025 seconds

Solving the HJB equations of degree 1
Solving the HJB equations of degree 2
Solving the HJB equations of degree 3

hjb_time_3 = 4.0260 seconds
Runtimes

Next we did it to degree 6 in the cost and degree 4 in the feedback.

Unfortunately Matlab’s symbolic differentiation routine was not up to the task of computing Taylor polynomials of degrees 5 and 6.

So hjb_set_up.m did not run.

By padding f and l with random coefficients we can still test hjb.m.

Solving the HJB equations of degree 1
Solving the HJB equations of degree 2
Solving the HJB equations of degree 3
Solving the HJB equations of degree 4
Solving the HJB equations of degree 5

hjb_time_5 = 1.6048e+03 seconds

This is 26.7466 minutes. The number of monomials of degrees one through five in n + m = 18 variables is 33648.
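The monomial count follows from the stars-and-bars formula: a polynomial in v variables has C(v + k − 1, k) monomials of degree k. A quick check:

```python
from math import comb

# Verify the count quoted above: monomials of degrees 1 through 5
# in v = n + m = 18 variables, C(v + k - 1, k) per degree k.
v = 18
total = sum(comb(v + k - 1, k) for k in range(1, 6))
print(total)  # -> 33648
```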
Kicker
So using Al’brekht’s method we can get a local solution to the HJB equations for smooth, medium-sized systems without constraints.
To my knowledge there is no other method that can do this.
But
• How big is the neighborhood of stabilization?
• What if there are constraints?
• What if there are discontinuities?
Model Predictive Control

A variation on Model Predictive Control (MPC) is an answer to the first question.

In MPC we don’t try to solve off-line for the optimal cost π(x) and optimal feedback κ(x) of an infinite horizon optimal control problem.

Instead we solve on-line a discrete time finite horizon optimal control problem for our current state. Suppose t = t0 and x(t0) = x0. We choose a horizon length T and a Lagrangian L(x, u) and seek the optimal control sequence u*_{t0} = (u*(t0), . . . , u*(t0 + T − 1)) that minimizes

∑_{t = t0}^{t0 + T − 1} L(x(t), u(t)) + Π(x(t0 + T))

subject to the dynamics

x(t + 1) = F(x(t), u(t))
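For a linear-quadratic instance this finite horizon problem can be solved exactly by a backward Riccati recursion. The sketch below uses hypothetical double-integrator data, with stage cost L = (x′Qx + u′Ru)/2 and terminal cost Π(x) = x′P_f x/2.

```python
import numpy as np

# Toy linear-quadratic finite horizon problem (hypothetical data):
# minimize sum L(x,u) + Pi(x(t0+T)) over the horizon, solved by a
# backward Riccati recursion for time-varying feedback gains.
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])
G = np.array([[0.0],
              [0.1]])
Q, R = np.eye(2), np.array([[1.0]])
Pf = 10 * np.eye(2)                  # terminal cost weight
T = 20

P = Pf
gains = []
for _ in range(T):                   # backward pass from t0 + T to t0
    K = -np.linalg.solve(R + G.T @ P @ G, G.T @ P @ F)
    P = Q + F.T @ P @ (F + G @ K)
    gains.append(K)
gains.reverse()                      # gains[0] is the gain at time t0

x = np.array([[1.0], [0.0]])         # simulate the optimal trajectory
for K in gains:
    x = F @ x + G @ (K @ x)
print(float(np.linalg.norm(x)))      # state has moved toward the origin
```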
Model Predictive Control
There is a discrete time version of Al’brekht for the corresponding infinite horizon optimal control problem, implemented in dpe.m, that yields Taylor polynomials of the infinite horizon optimal cost Π(x) and optimal feedback K(x).

We use Π(x) as the terminal cost for the discrete time finite horizon optimal control problem, for then the finite horizon and infinite horizon costs are the same.

The discrete time finite horizon optimal control problem is a nonlinear program which we pass to a fast solver like KNITRO or IPOPT. The solver returns u*_{t0}; we implement the feedback u(t0) = u*(t0) and move one time step. We increment t0 by one, which slides the horizon forward one time step, and repeat the process.
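The receding horizon loop can be sketched as follows. A linear toy problem solved by a Riccati sweep stands in for the nonlinear program a solver like KNITRO or IPOPT would handle; all data here is hypothetical.

```python
import numpy as np

# Receding horizon sketch: at each step, solve the finite horizon problem
# from the current state, apply only the first control, slide the horizon.
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])
G = np.array([[0.0],
              [0.1]])
Q, R = np.eye(2), np.array([[1.0]])
Pi = 10 * np.eye(2)                  # terminal cost from the off-line step

def first_control(x0, T=10):
    """Solve the finite horizon problem from x0 and return only u*(t0)."""
    P = Pi
    for _ in range(T):               # backward sweep; last K is the t0 gain
        K = -np.linalg.solve(R + G.T @ P @ G, G.T @ P @ F)
        P = Q + F.T @ P @ (F + G @ K)
    return K @ x0

x = np.array([[1.0], [0.0]])
for _ in range(40):                  # apply u*(t0), slide the horizon, repeat
    x = F @ x + G @ first_control(x)
print(float(np.linalg.norm(x)))      # the state contracts toward the origin
```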
Adaptive Horizon Model Predictive Control
But how do we know that the horizon T is long enough so that the endpoint x(t0 + T) is in the domain where Π(x) is a valid Lyapunov function for the closed loop dynamics under the feedback u = K(x)?

We compute the extension of the state trajectory from x(t0 + T) for an additional S time steps using the closed loop dynamics under the feedback u = K(x).

As we do so we check that the Lyapunov conditions hold on the extension. If so, we infer that we are stabilizing.

If they are comfortably satisfied we might decrease the horizon.

If they are not satisfied we increase the horizon.
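The horizon test can be sketched in a few lines. Everything here is a stand-in for illustration: a scalar closed loop x⁺ = 0.5x, Lyapunov candidate Π(x) = x², extension length S = 5, and a current horizon T = 10.

```python
# Hypothetical adaptive-horizon check: simulate S extra closed-loop steps
# from the horizon endpoint and test the Lyapunov decrease of Pi.
def Pi(x):
    return x * x

def closed_loop(x):                   # x+ = F(x, K(x)), contracts toward 0
    return 0.5 * x

def horizon_ok(x_end, S=5):
    """True if Pi strictly decreases for S steps past the horizon endpoint."""
    x = x_end
    for _ in range(S):
        x_next = closed_loop(x)
        if Pi(x_next) >= Pi(x):       # Lyapunov condition violated
            return False
        x = x_next
    return True

T = 10
x_end = 0.8                           # endpoint of the predicted trajectory
if horizon_ok(x_end):
    T = max(1, T - 1)                 # comfortably stable: shrink the horizon
else:
    T = T + 1                         # not yet stabilizing: extend the horizon
print(T)                              # -> 9
```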
Adaptive Horizon Model Predictive Control
If there are constraints G(x, u) ≤ 0 we can take the same approach as long as they are not active at the operating point, i.e., G(0, 0) < 0.

We pass these constraints to the solver, which hopefully can handle them.

And we check that both the constraints and the Lyapunov conditions are satisfied on the computed extension.
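The added feasibility check on the extension can be sketched the same way. The constraint function G and the toy dynamics below are illustrative assumptions, not the talk's model.

```python
import numpy as np

# Sketch of the feasibility check on the extended trajectory: besides the
# Lyapunov decrease, every extension point must satisfy G(x, u) <= 0
# under the terminal feedback u = K(x).

def extension_feasible(f, K, G, xT, S):
    """Simulate S closed-loop steps from xT and check G(x, K(x)) <= 0."""
    x = xT
    for _ in range(S):
        u = K(x)
        if np.any(G(x, u) > 0):
            return False          # a constraint is violated on the extension
        x = f(x, u)
    return True

# Toy example (all assumed): a torque bound |u| <= 10 written in the
# standard form G(x, u) = |u| - 10 <= 0.
f = lambda x, u: 0.5 * x + u
K = lambda x: -0.25 * x
G = lambda x, u: np.array([abs(u) - 10.0])

print(extension_feasible(f, K, G, xT=1.0, S=5))   # prints True
```

Starting instead from xT = 100, the feedback demands u = -25, the bound is violated at the first extension step, and the check fails, signaling that the horizon should be lengthened.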
Examples
We seek to stabilize a double pendulum to the upright positionusing torques at each of the pivots.
x1 is the angle of the first leg measured in radians counter-clockwise from straight up, and x3 = ẋ1 is its angular velocity.
x2 is the angle of the second leg measured in radians counter-clockwise from straight up, and x4 = ẋ2 is its angular velocity.
u1 is the torque applied at the base of the first leg.
u2 is the torque applied at the joint between the legs.
The length of the first massless leg is 1 m.
The length of the second massless leg is 2 m.
The mass at the joint between the legs is 2 kg.
The mass at the tip of the second leg is 1 kg.
The damping coefficients at the two joints are both 0.5 s⁻¹.

The continuous time dynamics is discretized using Euler's method with time step 0.1 s, assuming the control is constant throughout the time step.
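The discretization step can be sketched as below. The stand-in dynamics f (a single pendulum-like oscillator measured from upright, with damping 0.5) is an assumption for illustration; it is not the talk's double-pendulum model.

```python
import numpy as np

# Euler discretization with the control held constant over each step:
# x_{k+1} = x_k + h * f(x_k, u_k), with h = 0.1 s as in the talk.
H = 0.1

def euler_step(f, x, u, h=H):
    """One Euler step with a zero-order hold on the control."""
    return x + h * f(x, u)

# Stand-in continuous-time dynamics (illustrative, not the double
# pendulum): angle theta from upright, damping coefficient 0.5, torque u.
def f(x, u):
    theta, omega = x
    return np.array([omega, np.sin(theta) - 0.5 * omega + u])

# Starting at theta = pi/2 with no torque, the uncontrolled pendulum
# accelerates away from the upright equilibrium.
x = np.array([np.pi / 2, 0.0])
for _ in range(3):
    x = euler_step(f, x, u=0.0)
```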
Examples
The continuous time Lagrangian is lc(x, u) = (|x|² + |u|²)/2.

Its Euler discretization is l(x, u) = (|x|² + |u|²)/20.

The initial state is x = (π/2, −π/2, 0, 0)′.

The initial horizon length is N = 5.

The terminal cost and feedback Vf(x), κf(x) are the solution of the discrete time, infinite horizon LQR problem using the linear part of the dynamics at the origin and the quadratic Lagrangian.

The class K∞ function is α(|x|) = 0.1|x|².
The extended horizon is kept frozen at L = 5.

If the feasibility and Lyapunov conditions do not hold over the extended state trajectory we do not move one time step forward; instead we increase N by 1 and recompute from the same x.
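The terminal pair Vf(x) = x′Px, κf(x) = −Kx can be obtained by iterating the discrete-time Riccati recursion to a fixed point, as sketched below. The matrices A and B are placeholders for the discretized linearization (the pendulum's actual Jacobians are not reproduced here), while Q = I/20 and R = I/20 match the discretized Lagrangian l(x, u) = (|x|² + |u|²)/20.

```python
import numpy as np

# Infinite-horizon discrete-time LQR by fixed-point Riccati iteration,
# giving Vf(x) = x'Px and kappa_f(x) = -Kx. A and B below are
# illustrative placeholders, not the double pendulum's linearization.

def dlqr(A, B, Q, R, iters=500):
    """Iterate K = (R + B'PB)^-1 B'PA, P = Q + A'P(A - BK) to a fixed point."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return P, K

A = np.array([[1.0, 0.1],
              [0.1, 1.0]])        # placeholder discrete linearization (unstable)
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2) / 20                # from l(x, u) = (|x|^2 + |u|^2)/20
R = np.eye(1) / 20
P, K = dlqr(A, B, Q, R)
# The closed loop x+ = (A - BK)x is asymptotically stable, so x'Px is a
# valid local Lyapunov function for the terminal feedback.
```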
Example 1
x0 = (0.5π,−0.5π, 0, 0)′
[Plot: AMPC stabilization of a double pendulum]
Figure: Angles Converging to the Vertical
Example 1
x0 = (0.5π,−0.5π, 0, 0)′
Figure: Adaptively Changing Horizon
Example 1
x0 = (0.5π,−0.5π, 0, 0)′
Figure: Control sequence
Example 2
x0 = (0.9π,−0.9π, 0, 0)′
[Plot: AMPC stabilization of a double pendulum]
Figure: Angles Converging to the Vertical
Example 2
x0 = (0.9π,−0.9π, 0, 0)′
Figure: Adaptively Changing Horizon
Example 2
x0 = (0.9π,−0.9π, 0, 0)′
Figure: Control sequence
Example 3
x0 = (0.9π, 0.9π, 0, 0)′
[Plot: AMPC stabilization of a double pendulum]
Figure: Angles Converging to the Vertical
Example 3
x0 = (0.9π, 0.9π, 0, 0)′
Figure: Adaptively Changing Horizon
Example 3
x0 = (0.9π, 0.9π, 0, 0)′
Figure: Control sequence
Example 4
x0 = (0.9π, 0.9π, 0, 0)′
|u1| ≤ 10
|u2| ≤ 10
[Plot: AMPC stabilization of a double pendulum]
Figure: Angles Converging to the Vertical
Example 4
x0 = (0.9π, 0.9π, 0, 0)′
|u1| ≤ 10
|u2| ≤ 10
Figure: Adaptively Changing Horizon
Example 4
x0 = (0.9π, 0.9π, 0, 0)′
|u1| ≤ 10
|u2| ≤ 10
Figure: Control sequence
Our Slogan
Think Mathematically
Act Computationally
Conclusion
Thank You