Nonlinear Random Vibration, Analytical Techniques and Applications

Nonlinear Random Vibration Analytical Techniques and Applications Second edition
Cho W.S. To Professor of Mechanical Engineering University of Nebraska-Lincoln USA
CRC Press Taylor & Francis Group 6000 Broken Sound Parkway NW, Suite 300 Boca Raton, FL 33487-2742
© 2012 by Taylor & Francis Group, LLC CRC Press is an imprint of Taylor & Francis Group, an Informa business
No claim to original U.S. Government works Version Date: 20111212
International Standard Book Number-13: 978-1-4665-1284-9 (eBook - PDF)
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com (http:// www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com
and the CRC Press Web site at http://www.crcpress.com
Table of contents
Acknowledgements xv
2 Markovian and Non-Markovian Solutions of Stochastic Nonlinear Differential Equations 3
2.1 Introduction 3
2.1.1 Classification based on regularity 3
2.1.2 Classification based on memory 4
2.1.3 Kinetic equation of stochastic processes 4
2.2 Markovian Solution of Stochastic Nonlinear Differential Equations 6
2.2.1 Markov and diffusion processes 6
2.2.2 Itô's and Stratonovich integrals 7
2.2.3 One-dimensional Fokker-Planck-Kolmogorov equation 9
2.2.4 Systems with random parametric excitations 9
2.3 Non-Markovian Solution of Stochastic Nonlinear Differential Equations 13
2.3.1 One-dimensional problem 13
2.3.2 Multi-dimensional problem 15
3 Exact Solutions of Fokker-Planck-Kolmogorov Equations 19
3.1 Introduction 19
3.2 Solution of a General Single-Degree-of-Freedom System 22
3.3 Applications to Engineering Systems 33
3.3.1 Systems with linear damping and nonlinear stiffness 33
3.3.2 Systems with nonlinear damping and linear stiffness 50
3.3.3 Systems with nonlinear damping and nonlinear stiffness 53
3.4 Solution of Multi-Degree-of-Freedom Systems 54
3.5 Stochastically Excited Hamiltonian Systems 62
4 Methods of Statistical Linearization 65
4.1 Introduction 65
4.2 Statistical Linearization for Single-Degree-of-Freedom Nonlinear Systems 66
4.2.1 Stationary solutions of single-degree-of-freedom systems under zero mean Gaussian white noise excitations 66
4.2.2 Non-zero mean stationary solution of a single-degree-of-freedom system 76
4.2.3 Stationary solution of a single-degree-of-freedom system under narrow-band excitation 78
4.2.4 Stationary solution of a single-degree-of-freedom system under parametric and external random excitations 81
4.2.5 Solutions of single-degree-of-freedom systems under nonstationary random excitations 84
4.4.1 Single-degree-of-freedom systems 94
4.4.2 Multi-degree-of-freedom systems 100
4.5 Uniqueness and Accuracy of Solutions by Statistical Linearization 112
4.5.1 Uniqueness of solutions 112
4.5.2 Accuracy of solutions 113
4.5.3 Remarks 114
5 Statistical Nonlinearization Techniques 115
5.1 Introduction 115
5.2 Statistical Nonlinearization Technique Based on Least Mean Square of Deficiency 117
5.2.1 Special case 117
5.2.2 General case 118
5.2.3 Examples 122
5.3 Statistical Nonlinearization Technique Based on Equivalent Nonlinear Damping Coefficient 133
5.3.1 Derivation of equivalent nonlinear damping coefficient 134
5.3.2 Solution of equivalent nonlinear equation of single-degree-of-freedom systems 135
5.3.3 Concluding remarks 143
5.4 Statistical Nonlinearization Technique for Multi-Degree-of-Freedom Systems 143
5.4.1 Equivalent system nonlinear damping coefficient and exact solution 144
5.4.2 Applications 146
5.5 Improved Statistical Nonlinearization Technique for Multi-Degree-of-Freedom Systems 148
5.5.1 Exact solution of multi-degree-of-freedom nonlinear systems 149
5.5.2 Improved statistical nonlinearization technique 154
5.5.3 Application and comparison 156
5.5.4 Concluding remarks 158
5.6 Accuracy of Statistical Nonlinearization Techniques 161
6 Methods of Stochastic Averaging 163
6.1 Introduction 163
6.2 Classical Stochastic Averaging Method 164
6.2.1 Stationary solution of a single-degree-of-freedom system under broad band stationary random excitation 166
6.2.2 Stationary solutions of single-degree-of-freedom systems under parametric and external random excitations 172
6.2.3 Nonstationary solutions of single-degree-of-freedom systems 178
6.2.4 Remarks 187
6.3 Stochastic Averaging Methods of Energy Envelope 188
6.3.1 General theory 190
6.3.2 Examples 194
6.3.3 Remarks 201
6.4 Other Stochastic Averaging Techniques 202
6.5 Accuracy of Stochastic Averaging Techniques 227
6.5.1 Smooth stochastic averaging 227
6.5.2 Non-smooth stochastic averaging 228
6.5.3 Remarks 229
7 Truncated Hierarchy and Other Techniques 231
7.1 Introduction 231
7.2 Truncated Hierarchy Techniques 231
7.2.1 Gaussian closure schemes 234
7.2.2 Non-Gaussian closure schemes 235
7.2.3 Examples 237
7.2.4 Remarks 239
7.3 Perturbation Techniques 239
7.3.1 Nonlinear single-degree-of-freedom systems 239
7.3.2 Nonlinear multi-degree-of-freedom systems 240
7.3.3 Remarks 242
7.4 Functional Series Techniques 242
7.4.1 Volterra series expansion techniques 242
7.4.2 Wiener-Hermite series expansion techniques 251
Appendix Probability, Random Variables and Random Processes 255
A.1 Introduction 255
A.2 Probability Theory 255
A.2.1 Set theory and axioms of probability 255
A.2.2 Conditional probability 256
A.2.3 Marginal probability and Bayes' theorem 257
A.3 Random Variables 258
A.3.1 Probability description of single random variable 258
A.3.2 Probability description of two random variables 260
A.3.3 Expected values, moment generating and characteristic functions 261
A.4 Random Processes 263
A.4.1 Ensemble and ensemble averages 263
A.4.2 Stationary, nonstationary and evolutionary random processes 264
A.4.3 Ergodic and Gaussian random processes 265
A.4.4 Poisson processes 266
References 269
Chapter 1 269
Chapter 2 271
Chapter 3 273
Chapter 4 275
Chapter 5 281
Chapter 6 283
Chapter 7 287
Appendix 291
Index 293
To My Parents
Preface to the first edition
The framework of this book was first conceptualized in the late nineteen eighties. However, the writing of this book began while the author was on sabbatical, July 1991 through June 1992, at the University of California, Berkeley, from the University of Western Ontario, London, Ontario, Canada. Over half of the book was completed before the author returned to Canada after his sabbatical. With full-time teaching, research, the arrival of a younger daughter, and the move in 1996 from Canada to the University of Nebraska, Lincoln, the author has only recently completed the project of writing this book. Owing to the long span of time taken for the writing, there is no doubt that many relevant publications may have been omitted by the author.
The latter has to admit that a book of this nature is influenced, without exception, by many authors and examples in the field of random vibration. The original purpose of writing this book was to provide an advanced graduate-level textbook dealing, in a more systematic way, with analytical techniques of nonlinear random vibration. It was also aimed at providing a textbook for a second course in the analytical techniques of random vibration for graduate students and researchers.
In the introductory chapter, reviews of the general areas of nonlinear random vibration that have appeared in the literature are quoted. Books exclusively dealing with, and related to, nonlinear random vibration are listed in this chapter. Chapter 2 begins with a brief introduction to Markovian and non-Markovian solutions of stochastic nonlinear differential equations. Chapter 3 is concerned with the exact solution of the Fokker-Planck-Kolmogorov (FPK) equation. Chapter 4 presents the methods of statistical linearization (SL). Uniqueness and accuracy of solutions by the SL techniques are summarized. An introduction to and discussion of the statistical nonlinearization (SNL) techniques are provided in Chapter 5. Accuracy of the SNL techniques is addressed. The methods of stochastic averaging are introduced in Chapter 6. Various stochastic averaging techniques are presented in detail and their accuracies are discussed. Chapter 7 briefly provides the truncated hierarchy, perturbation, and functional series techniques.
C.W.S. To Lincoln, Nebraska 2000
Preface to the second edition
Various theoretical developments in the field of nonlinear random vibration have been made since the publication of the first edition. Consequently, the latter has been expanded somewhat in the present edition, in which a number of errors and misprints have been corrected.
The organization of the present edition remains essentially the same as that of the first edition. Chapter 1 is an updated introduction to the reviews in the general areas of nonlinear random vibration. Books exclusively dealing with and related to analytical techniques and applications are cited. Chapter 2 is concerned with a brief introduction to Markovian and non-Markovian solutions to stochastic nonlinear differential equations. Exact solutions to the Fokker-Planck-Kolmogorov (FPK) equations are included in Chapter 3. Methods of statistical linearization (SL) with uniqueness and accuracy of solutions are presented in Chapter 4. Some captions and labels of figures in this chapter have been changed to commonly used terminology. Chapter 5 deals with the statistical nonlinearization (SNL) techniques. Section 5.5 is a new addition introducing an improved SNL technique for approximating multi-degree-of-freedom nonlinear systems. Methods of stochastic averaging are presented in Chapter 6. In the present edition, more detailed steps are added and some reorganization of steps is made. Chapter 7 includes truncated hierarchy, perturbation, and functional series techniques. In the present edition, more steps have been incorporated in the Volterra series expansion techniques. An appendix presenting a brief introduction to the basic concepts and theory of probability, random variables, and random processes has been added to the present edition. This new and brief addition is aimed at those readers who need a rapid review of the prerequisite materials.
C.W.S. To Lincoln, Nebraska 2011
Acknowledgements
ACKNOWLEDGEMENTS FOR THE FIRST EDITION
The author began his studies in random vibration during his final year of undergraduate program, between 1972 and 1973, at the University of Southampton, United Kingdom. The six lectures given by Professor B.L. Clarkson served as a stimulating beginning. After two years of master's degree studies at the University of Calgary, Canada, in October 1975 the author returned to the University of Southampton to work as a research fellow in the Institute of Sound and Vibration Research for his doctoral degree under the supervision of Professor Clarkson. The fellowship was sponsored by the Admiralty Surface Weapons Establishment, Ministry of Defence, United Kingdom. During this period of studies, the author was fortunate enough to have attended lectures on random vibration presented by Professor Y.K. Lin who was visiting Professor Clarkson and the Institute in 1976. The year 1976 saw the gathering of many experts and teachers in the field of random vibration at the International Union of Theoretical and Applied Mechanics, Symposium on Stochastic Problems in Dynamics. The author was, thus, influenced and inspired by these experts and teachers.
The conducive atmosphere and the availability of many publications in the libraries at the University of California, Berkeley and the hospitality of emeritus Professors J.L. Sackman, J.M. Kelly, Leo Kanowitz and other friends at Berkeley had made the writing enjoyable, and life of the author and his loved ones memorable.
Case (ii) on page 131 and the excitation processes in almost all the examples in Chapter 6 have been rewritten and changed as a result of comments from one of the reviewers. Section 7.4 has been expanded in response to the suggestion of another reviewer. The author is grateful to them for their interest in reviewing this book.
Thanks are due to the author's two present graduate students, Ms. Guang Chen and Mr. Wei Liu, who prepared all the drawings in this book.
Finally, the author would like to express his gratitude to his friend, Professor Fai Ma for his encouragement, and wishes to thank the Publisher, Mr. Martin Scrivener and his staff for their publishing support.
ACKNOWLEDGEMENTS FOR THE SECOND EDITION
Since the publication of the first edition in 2000 various theoretical developments in the field of nonlinear random vibration have been made. It is therefore appropriate to publish the present edition at this time. The author has taken the opportunity to make a number of corrections.
The appendix on Probability, Random Variables and Random Processes is the result of the suggestion of a reviewer of the proposal for the present edition. The reviewer’s suggestion and comments are highly appreciated.
Finally, the author wishes to thank Mr. Janjaap Blom, Senior Publisher, Ms. Madeline Alper, Customer Service Supervisor, and their staff for their publishing support.
1 Introduction
For safety, reliability and economic reasons, the nonlinearities of many dynamic engineering systems under environmental and other forces that are treated as random disturbances must be taken into account in the design procedures. This and the demand for precision have motivated the research and development in nonlinear random vibration. Loosely speaking, the field of nonlinear random vibration can be subdivided into four categories. The latter include analytical techniques, computational methods, Monte Carlo simulation (MCS), and system identification with experimental techniques. This book is mainly concerned with the first category and therefore the publications quoted henceforth focus on this category. The subject of computational nonlinear random vibration is dealt with in a recently published companion book [1.1].
It is believed that the first comprehensive review on nonlinear random vibration was performed by Caughey [1.2]. Subsequently, other reviews appeared in the literature [1.3-1.15], for example. There are books exclusively concerned with and related to nonlinear random vibration [1.16-1.24]. Many books [1.25-1.39] also contain chapter(s) on nonlinear random vibration.
While it is agreed that there are many techniques available in the literature for the analysis of nonlinear systems under random excitations, the focus of the present book is, however, on those frequently employed by engineers and applied scientists. It also reflects the current interests in the analytical techniques of nonlinear random vibration.
Chapter 2 begins with a brief introduction to Markovian and non-Markovian solutions of stochastic nonlinear differential equations. This serves as a foundation to subsequent chapters in this book.
Chapter 3 presents the exact solutions of the Fokker-Planck-Kolmogorov (FPK) equations. Solution of a general single degree-of-freedom (dof) system and applications to engineering systems are included. Solution of multi-degree-of-freedom (mdof) systems and stochastically excited Hamiltonian systems are also considered.
Chapter 4 deals with the methods of statistical linearization (SL). Solutions to single dof and mdof nonlinear systems with examples of engineering applications are given. Uniqueness and accuracy of solutions by the SL techniques are summarized.
Chapter 5 provides an introduction to and discussion on the statistical nonlinearization (SNL) techniques. Single dof and mdof nonlinear systems are considered. Accuracy of the SNL techniques is addressed.
Chapter 6 treats the methods of stochastic averaging. The classical stochastic averaging (CSA) method, stochastic averaging method of energy envelope (SAMEE), and various other stochastic averaging techniques are introduced and examples given. Accuracy of the stochastic averaging techniques is discussed.
Chapter 7 introduces briefly several other techniques. The latter include truncated hierarchy, perturbation, and functional series techniques. The truncated hierarchy techniques include Gaussian closure schemes and non-Gaussian closure schemes, while the functional series techniques encompass the Volterra series expansion techniques and Wiener-Hermite series expansion techniques.
It is assumed that the readers have had a first course in random vibration or a similar subject. Materials in Chapters 2 and 3 are essential and serve as a foundation for a better understanding of the techniques and applications in subsequent chapters.
An outline of the basic concepts and theory of probability, random variables and random processes is included in the appendix for those who need a rapid review of the essential background materials.
2 Markovian and Non-Markovian Solutions of Stochastic Nonlinear Differential Equations
2.1 Introduction
Within the field of nonlinear random vibration of structural and mechanical systems the statistical complexity of a stochastic process (s.p.) is determined by the properties of its distribution functions. Two types of classifications are important in the analysis. These are classification based on the statistical regularity of a process and classification based on its memory.
In this section the above two types of classifications are introduced in Sub-sections 2.1.1 and 2.1.2. Then in Sub-section 2.1.3 the kinetic equation associated with the s.p. is derived. This provides the basis for distribution and density functions that are important to subsequent analysis. Section 2.2 contains the basic material for Markovian solution of stochastic nonlinear differential equations. Essential features and relevant information for non-Markovian solution of stochastic nonlinear differential equations are included in Section 2.3.
2.1.1 Classification based on regularity
In this type of classification, s.p. are divided into two categories. They are the stationary stochastic processes (s.s.p.) and nonstationary stochastic processes (n.s.p.). Assuming time t is the parameter of the strict-sense or strong s.s.p. X(t), its statistical properties are all independent of time t, or are all independent of the absolute time origin. On the other hand, for a n.s.p. all statistical properties of that process are dependent on time t.
When the absolute value of the expectation of the s.s.p. X(t) is a constant and less than infinity, the expectation of the square of X(t) is less than infinity, and the covariance of X(t) is equal to the correlation function of X(t), the s.p. is called a wide-sense or weak s.s.p. Of course, when such a s.s.p. is Gaussian it is completely specified by its mean and covariance functions.

2.1.2 Classification based on memory
If s.p. are grouped in accordance with the manner in which the present state of a s.p. depends on its past history, then such a classification is called classification based on memory. This classification is centered around the Markov processes.
In accordance with the memory properties, the simplest s.p. is one without memory, or is purely stochastic. This is usually called a zeroth order Markov process. Clearly, a continuous-parameter purely s.p. is physically not realizable since it implies absolute independence between the past and the present regardless of their temporal closeness. The white noise process is a purely s.p. The Markov process to be defined in Sub-section 2.2.1 is usually called a simple Markov process. There are higher order Markov processes that are not applied in this book and therefore are not defined here.
It may be appropriate to note that the memory of a s.p. is not to be confused with the memory of a nonlinear transformation. The latter is said to have memory if it involves inertia.
2.1.3 Kinetic equation of stochastic processes
A technique that can give explicit results of joint distributions of the solution process is introduced in this sub-section. The foundation of the following derivation was presented by Bartlett [2.1] and Pawula [2.2], and subsequently by Soong [2.3].
A s.p. X(t) with its first probability density function denoted by p(x,t) satisfies the relation
(2.1)
where p(x,t+Δt | y,t) is the conditional probability density function of X(t+Δt) given that X(t) = y.
Let R(u,t+Δt | y,t) be the conditional characteristic function of ΔX = X(t+Δt) − X(t) given that X(t) = y,
(2.2)
where the angular brackets denote the mathematical expectation. By taking the inverse Fourier transformation, one has
(2.3)
By expanding the conditional characteristic function R(u,t+Δt | y,t) in a Taylor series about u = 0, Eq. (2.3) becomes
(2.4)
where the expansion coefficients are the conditional expectations of the powers of ΔX. These expectations are known as the incremental moments of X(t). Substituting Eq. (2.4) into (2.1) and integrating, one obtains an equation which, upon dividing by Δt and taking the limit as Δt → 0, leads to
(2.5)
Equation (2.5) is known as the kinetic equation of the s.p. X(t), and the α_k(x,t) are the derivate moments. It is a deterministic parabolic partial differential equation and has important uses in the solution of stochastic differential equations.
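As a point of reference, the kinetic equation and the derivate moments are commonly written in the Kramers-Moyal form sketched below; the notation of Eq. (2.5) in the original display may differ slightly.

\[
\frac{\partial p(x,t)}{\partial t}
=\sum_{k=1}^{\infty}\frac{(-1)^{k}}{k!}\,
\frac{\partial^{k}}{\partial x^{k}}\bigl[\alpha_{k}(x,t)\,p(x,t)\bigr],
\qquad
\alpha_{k}(x,t)=\lim_{\Delta t\to 0}\frac{1}{\Delta t}
\bigl\langle(\Delta X)^{k}\,\big|\,X(t)=x\bigr\rangle .
\]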
2.2 Markovian Solution of Stochastic Nonlinear Differential Equations
There are many physical quantities, such as the response of a nonlinear system under a random excitation that can be represented by a white noise process, that can be described as Markov processes. Rigorous fundamental treatment of the subject was presented by Kolmogorov [2.4]. The solution by the analytical techniques considered in this monograph is generally based on the concepts of Markov processes. Thus, it is essential to introduce these concepts. To this end, Markov and diffusion processes are defined in Sub-section 2.2.1, while the Stratonovich and Itô's integrals are presented in Sub-section 2.2.2. Sub-section 2.2.3 is concerned with the one-dimensional Fokker-Planck forward or Fokker-Planck-Kolmogorov (FPK) equation. To further clarify the use of the Stratonovich and Itô's integrals, a single-degree-of-freedom (sdof) quasi-linear system is included in Sub-section 2.2.4.
2.2.1 Markov and diffusion processes
A stochastic process X(t) on an interval [0,T] is called a Markov process if it has the following property:
(2.6)
where P[.] designates the probability of an event, and the conditional probability of the Markov process X(t), P[X(t) < x | X(t_0) = x_0], is known as the transition probability distribution function. Equation (2.6) means that the process forgets the past if t_{n-1} is regarded as the present.
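In standard notation the Markov property of Eq. (2.6) reads, for ordered instants t_1 < t_2 < ... < t_n (a sketch; the original display may use a slightly different form):

\[
P\bigl[X(t_{n})\le x_{n}\,\big|\,X(t_{n-1})=x_{n-1},\ldots,X(t_{1})=x_{1}\bigr]
=P\bigl[X(t_{n})\le x_{n}\,\big|\,X(t_{n-1})=x_{n-1}\bigr].
\]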
Applying the Markov property (2.6), one can show that
(2.7)
where p(x_i,t_i | x_{i-1},t_{i-1}), i = 2,3, are the transition probability densities. Equation (2.7) describes the flow of transition probability densities from instant t_1 to another instant t_3. It is known as the Smoluchowski-Chapman-Kolmogorov (SCK) equation.
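A sketch of the SCK equation (2.7) in its usual form, with t_1 < t_2 < t_3 and x_2 the state at the intermediate instant:

\[
p(x_{3},t_{3}\,|\,x_{1},t_{1})
=\int_{-\infty}^{\infty}p(x_{3},t_{3}\,|\,x_{2},t_{2})\,
p(x_{2},t_{2}\,|\,x_{1},t_{1})\,\mathrm{d}x_{2}.
\]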
(2.8a)
(2.8b)
(2.8c)
(2.9)
(2.10)
(2.11a)
(2.11b)
A Markov process X(t) is called a diffusion process if its transition probability density satisfies the following two conditions for Δt = t − s and ε > 0,
and
where the drift and diffusion coefficients, f(x,s) and G(x,s), respectively, are independent of the time t when X(t) is stationary because in this case p(y,t | x,s) = p(y,t − s | x) only depends on the time lag Δt.
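The defining conditions are commonly stated as follows (a sketch; ε > 0 is the truncation parameter, and the second incremental moment is identified directly with the diffusion coefficient, as the text above suggests):

\[
\lim_{\Delta t\to 0}\frac{1}{\Delta t}\int_{|y-x|\ge\varepsilon}p(y,t\,|\,x,s)\,\mathrm{d}y=0,
\]
\[
\lim_{\Delta t\to 0}\frac{1}{\Delta t}\int_{|y-x|<\varepsilon}(y-x)\,p(y,t\,|\,x,s)\,\mathrm{d}y=f(x,s),
\qquad
\lim_{\Delta t\to 0}\frac{1}{\Delta t}\int_{|y-x|<\varepsilon}(y-x)^{2}\,p(y,t\,|\,x,s)\,\mathrm{d}y=G(x,s).
\]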
2.2.2 Itô's and Stratonovich integrals
Consider a characteristic function on the interval [a,b] for 0 ≤ a < b ≤ T,
For 0 ≤ a < b ≤ T, one defines
where B(t) is the Brownian motion process which is a martingale because
and for all t_1 < t_2 < ... < t_n and a_1, ..., a_n,
If f(t) is a step function on [0,T] and 0 = t_0 < t_1 < ... < t_m = b, then
(2.12)
(2.13)
(2.14)
(2.15)
where the subscript [t_k, t_{k+1}) denotes the semi-open interval. Now, one can define
The function f(t) can be a random function of B(t) and the value of f(t) is taken at the left end point of the partition interval. When f(t) is a random function of B(t) it is independent of the increments B(t+s) - B(t) for all s > 0. Such a function is called a non-anticipating function.
In the limit when m approaches infinity Eq. (2.13) becomes the Itô's integral. The properties of the stochastic or Itô's integral [2.5] are very much different from those of the Riemann-Stieltjes integral. Applying the Itô's integral for f(t) = B(t), one can show that
Now, another type of stochastic integral, the so-called Stratonovich integral [2.6], is defined for an explicit function of B(t) by
where Δt = t_{k+1} − t_k. If f(B(t),t) = B(t), Eq. (2.15) gives B²(T)/2. This result is very different from that in Eq. (2.14).
The Stratonovich integral satisfies all the formal rules of classical calculus and therefore it is a natural choice for stochastic differential equations on manifolds. However, Stratonovich integrals are not martingales. In contrast, Itô's integrals are martingales and therefore they have an important computational advantage.
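The contrast between the two definitions is summarized by the standard results corresponding to Eqs. (2.14) and (2.15), assuming B(0) = 0:

\[
\text{It\^o:}\quad\int_{0}^{T}B(t)\,\mathrm{d}B(t)=\tfrac{1}{2}B^{2}(T)-\tfrac{1}{2}T,
\qquad
\text{Stratonovich:}\quad\int_{0}^{T}B(t)\circ\mathrm{d}B(t)=\tfrac{1}{2}B^{2}(T).
\]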
(2.16)
(2.17)
(2.18)
2.2.3 One-dimensional Fokker-Planck-Kolmogorov equation
With the kinetic equation for the s.p. X(t) derived in Sub-section 2.1.3 and the diffusion process defined in Sub-section 2.2.1, the transition probability density p(x,t | x_0,t_0), or simply p, for a one-dimensional problem satisfies the following parabolic partial differential equation
in which α_1 and α_2 are the first and second derivate moments,
where D = πS and S is the spectral density of the Gaussian white noise process. Equation (2.16) is known as the Fokker-Planck forward or Fokker-Planck-Kolmogorov (FPK) equation, with the following initial conditions
where δ(·) is the Dirac delta function.
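In the standard notation, the one-dimensional FPK equation (2.16) and its initial condition take the form sketched below, with α_1 and α_2 the first and second derivate moments:

\[
\frac{\partial p}{\partial t}
=-\frac{\partial}{\partial x}\bigl[\alpha_{1}(x,t)\,p\bigr]
+\frac{1}{2}\frac{\partial^{2}}{\partial x^{2}}\bigl[\alpha_{2}(x,t)\,p\bigr],
\qquad
p(x,t_{0}\,|\,x_{0},t_{0})=\delta(x-x_{0}).
\]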
In passing, it is noted that the reduced FPK equation is a boundary value problem, and the classification of boundaries in accordance with that proposed by Feller [2.7] for one-dimensional diffusion processes has been summarized by Lin and Cai [2.8]. A whole set of new classification criteria that is equivalent to, but simpler than, that of Feller has been developed by Zhang [2.9]. For high-dimensional problems the classification of boundaries is still open.
The new criteria for classification of singular boundaries may be applied to the time averaging method in order to investigate the stability problem of nonlinear stochastic differential equations. In this way, it can be applied to decide the existence of the stationary probability density of a nonlinear system that may be reduced to a one-dimensional problem. However, the stability obtained by using the classification of singular boundaries is a weak one. That is, it is stability in probability. Moreover, the time averaging on the differential operator is limited to a specific form.
Finally, the issues of stability and bifurcation of nonlinear systems under stationary random excitations are not pursued here. These will be considered in a separate monograph to be published in due course.
2.2.4 Systems with random parametric excitations
There are many practical engineering systems whose dynamical behaviors can be
described by governing equations of motion containing random parametric excitations. The controversy in this type of systems is the addition of the so-called Wong and Zakai (WZ) or Stratonovich (S) correction term to the Itô's stochastic differential equation. This issue was discussed by Gray and Caughey [2.10], Mortensen [2.11, 2.12], and others [2.13-2.21]. The usual reason given, for instance, in Refs. [2.15, 2.20], is that when the random parametric excitations in the governing equation of motion are independent physical Gaussian white noises, to convert the equation to the corresponding Itô's equation the WZ or S correction term is required. In Ref. [2.21], an example was presented to demonstrate that such a given reason is not adequate.
In the following, the stochastic differential equation, of a quasi-linear system with random parametric excitations, and relevant concluding remarks in Ref. [2.21] are included since they are important in the understanding and solution of many nonlinear systems in subsequent chapters.
(a) Statement of Problem
The stochastic differential equation of interest is given by
(2.19)
where W(t) is a vector of Gaussian white noises and Z(t) is a vector of s.p.; f(Z,t) and G (Z,t) are known vector and matrix quantities and are nonlinear functions of Z(t) and t, in general.
With the arguments omitted, Eq. (2.19) can be written as
(2.20)
where dB = W dt, with B being the vector of Brownian motion or Wiener processes. Equation (2.20) is the so-called Itô's stochastic differential equation. The solution of Eq. (2.20) is as follows:
(2.21)
Note that the second integral on the right-hand side (RHS) of Eq. (2.21) cannot be interpreted as an ordinary Riemann or Lebesgue-Stieltjes integral, since the sample function of a Brownian motion is, with probability 1, of unbounded variation [2.22]. Two interpretations of this second integral have been presented in the literature. The
first [2.22] leads to replacing Eq. (2.20) by the following matrix equation
(2.22)
in which the second term inside the parentheses on the RHS of the above equation involves division by the vector ∂Z which, strictly speaking, is not allowed in matrix operations. Thus, the partial differentiation term ∂G/∂Z should be used in accordance with the rules of matrix operation. More explicitly, the above equation may be written as
(2.23)
This equation is to be solved in the sense of Itô's calculus [2.5] so that the property of the martingale [2.23] is retained. The second term inside the parentheses on the RHS of Eq. (2.23) is known as the WZ or S correction term. The solution of Eq. (2.23) is equal to that of Eq. (2.20), provided that Eq. (2.20) is solved in accordance with the second interpretation, in which the second integral on the RHS of Eq. (2.21) is defined in the sense of Stratonovich calculus [2.6]. With such a definition it can be treated in the same way as the ordinary integrals of smooth functions.
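A sketch of the corrected equation in the commonly used component form: assuming dZ = f dt + G dB is to be interpreted in the Stratonovich sense, with B a vector of independent unit Wiener processes, the equivalent Itô drift acquires the Wong-Zakai/Stratonovich term

\[
\mathrm{d}Z_{i}
=\Bigl[f_{i}(\mathbf{Z},t)
+\tfrac{1}{2}\sum_{j,k}G_{kj}(\mathbf{Z},t)\,
\frac{\partial G_{ij}(\mathbf{Z},t)}{\partial Z_{k}}\Bigr]\mathrm{d}t
+\sum_{j}G_{ij}(\mathbf{Z},t)\,\mathrm{d}B_{j},
\]

which is the structure referred to in Eq. (2.23) above.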
As introduced in Sub-section 2.2.2, the rules governing the Itô calculus and the Stratonovich calculus are entirely different; therefore, the WZ or S correction term is required not as a consequence of converting the physical Gaussian white noises into ideal white noises. It may be appropriate to recall that the white noise process is just a mathematical idealization. In applying a mathematical approach to describe a physical phenomenon, such as the dynamical behavior or response, one has inherently adopted some form of idealization. The following example of a quasi-linear random differential equation with random parametric excitations will illustrate the reason for the addition of the WZ or S term.

(b) An Example
Consider a single degree of freedom (sdof) system disturbed by both parametric and external stationary Gaussian white noise excitations. The governing equation of motion for the system is
or simply as
(I-1)
(I-2)
(I-3)
(I-4)
where x is the stochastic displacement; the over-dot and double over-dot designate first and second derivatives with respect to time t; a and b are constants, while w is the Gaussian white noise excitation. Equation (I-1) can be converted into two first order differential equations by writing z_1 = x, z_2 = ẋ, and Z = (z_1 z_2)^T, such that
or in similar form to Eq. (2.20)
in which the superscript T denotes the transpose. The diffusion coefficient G in Eq. (I-2) is a function of Z. Therefore, according to Refs. [2.10, 2.11, 2.22] the WZ or S correction term is required. Accordingly, Eq. (2.23) becomes
Applying this equation to the system described by Eq. (I-1), one has
The second term inside the square brackets on the RHS of the second equation of (I-4) is the WZ or S correction. This term will not be zero as long as a is not equal to zero or bz_1 + az_2 is not equal to unity.
(c) Remarks
The above example clearly demonstrates that the WZ or S correction term is required regardless of whether the Gaussian white noise excitation w is ideal or physical. Indeed, when the parametric stationary white noise excitation associated
with the next to highest derivative of the governing equation of motion is zero, the solution of the equation by the Itô's calculus rules is identical to that given by using the ordinary or Stratonovich calculus rules, and consequently no WZ or S correction is necessary. In other words, in the above sdof system the WZ or S correction term is required because there is a random parametric excitation associated with the velocity term. If the random parametric excitation associated with the velocity term is zero and the random parametric excitation associated with the restoring force is retained the WZ or S correction term is zero, meaning the solution in this case is identical whether one employs the Stratonovich or Itô's calculus.
2.3 Non-Markovian Solution of Stochastic Nonlinear Differential Equations
While in practice many physical phenomena occurring in structural and mechanical systems can adequately be represented by Markovian processes, there are important cases in other fields that have to be modeled by non-Markovian processes. For example, in the problem of magnetic resonance in a fluctuating magnetic field [2.24, 2.25], nematic liquid crystals [2.26], and the behavior of the intensity of a single mode dye laser [2.27], non-Markovian processes were employed. In fact, the Markovian processes in the foregoing sections are special cases of the non-Markovian processes. Therefore, it may be of interest to present the essential features and relevant information for the non-Markovian solution of stochastic nonlinear differential equations.
2.3.1 One-dimensional problem
Consider a one-dimensional system described by the following stochastic differential equation [2.28]:
(2.24)
where f(q(t)), or simply f(t) or f, and g(q(t)), or simply g(t) or g, are general nonlinear functions of q(t) or simply q, while ξ(t) or simply ξ is the colored noise excitation, which is also known as the Ornstein-Uhlenbeck process. The latter is a Gaussian process with zero mean and correlation function given by
(2.25)
where τ is the finite correlation time and D is the noise parameter of the stochastic disturbance ξ(t). Since any solution of Eq. (2.24) is non-Markovian and nonstationary, Eq. (2.24) defines a class of non-Markovian and nonstationary random (NMNR) processes. The latter differ from each other in the selection of initial conditions. In the limit τ → 0, Eq. (2.24) defines a stationary Markovian process when the distribution of initial conditions is also stationary. As pointed out in Ref. [2.28], in the class of processes defined by Eq. (2.24) the effects of non-Markovian and nonstationary properties cannot be disentangled. This is due to the fact that both properties have the same origin, τ.
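A sketch of Eqs. (2.24) and (2.25) in the form commonly used in the colored noise literature (see, e.g., Ref. [2.28]); the original displays may differ slightly:

\[
\frac{\mathrm{d}q}{\mathrm{d}t}=f(q)+g(q)\,\xi(t),
\qquad
\langle\xi(t)\rangle=0,
\qquad
\langle\xi(t)\,\xi(t')\rangle=\frac{D}{\tau}\exp\!\Bigl(-\frac{|t-t'|}{\tau}\Bigr),
\]

so that in the limit τ → 0 the correlation function tends to 2Dδ(t − t') and the white noise case is recovered.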
In general, an exact solution for moments and correlation functions of the process defined by Eq. (2.24) is not available and therefore an approximate solution, in which the zeroth-order approximation is the Markovian limit τ = 0, is derived.
By expanding in powers of τ, the approximate solution for the first moment <q(t)>, or simply <q>, is obtained following averaging of Eq. (2.24) as [2.28]
(2.26)
in which, by assuming t >> τ, the second term on the RHS of Eq. (2.26) can be shown to be
(2.27)
where
For nonlinear one-dimensional problems, results obtained by applying the above equations can be found in Ref. [2.27].
It may be appropriate to point out that for a linear one-dimensional NMNR problem that has the following relations in Eq. (2.24)
(2.28)
(2.29)
(2.30)
(2.31)
The steady-state relaxation time τ_r is given by
In the limit τ → 0, the steady-state relaxation time τ_r → 1/a, which is the steady-state relaxation time of a Markovian problem.
2.3.2 Multi-dimensional problem
The equations above can be generalized to multi-dimensional problems. Thus, the equations corresponding to (2.24) and (2.25) are, respectively [2.28]
where δ_ij is the Kronecker delta, such that δ_ij = 1 if i = j and δ_ij = 0 otherwise. The differential equations of the first moments are given as
where
While the above first order approximate solutions have been obtained for quasi-linear single and multi-dimensional problems [2.28, 2.29], and nonlinear one-dimensional problems [2.27], the solution of general multi-dimensional nonlinear NMNR problems remains a formidable challenge.
Before leaving this sub-section, a sdof or two-dimensional problem is included here to illustrate application of the foregoing procedure. Consider the system having unit mass such that the equation of motion is
or simply as
where w is the zero mean Gaussian white noise such that <w(t)w(t')> = 2πSδ(λ), with λ = t − t' and S being the spectral density of the white noise process; ξ is the Ornstein-Uhlenbeck process whose correlation function has been defined by Eq. (2.25); and the remaining symbols have their usual meaning.
To proceed further one can express the quantities of interest of the above oscillator as x = q, and dx/dt = p such that the equation of motion can be re-
written as two first order stochastic differential equations
(I-2)
The solution process in Eq. (I-2) is NMNR due to the fact that ξ is not a white noise. By applying Eq. (2.31), and writing q(t) = q and p(t) = p, one obtains the approximate equations for the first moments as
(I-3)
By means of the τ expansion [2.28], one can show that
(I-4)
and the approximate equations, to first order in τ, for the second moments are
(I-5)
Equations (I-3) and (I-5) can be solved in closed form or by some numerical integration algorithm, such as the fourth order Runge-Kutta (RK4) scheme. They depend on τ, which is a measure of the non-Markovian property of the solution process. In the limit when τ approaches 0 the solutions in Eqs. (I-3) and (I-5) are Markovian.
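A numerical integration of this kind can be set up along the following lines. This is a minimal Python sketch: the classical RK4 stepper is generic, but the right-hand side function moment_rhs is only a hypothetical stand-in (a damped linear oscillator for the first moments) and is not the actual system of Eqs. (I-3) and (I-5).

import numpy as np

def rk4_step(f, t, y, h):
    # One classical fourth order Runge-Kutta step for dy/dt = f(t, y).
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Hypothetical stand-in for the first-moment equations (illustration only):
# d<q>/dt = <p>,  d<p>/dt = -beta*<p> - omega2*<q>.
# In the actual Eq. (I-3) a tau-dependent correction would enter the second equation.
beta, omega2, tau = 0.2, 1.0, 0.05

def moment_rhs(t, y):
    q_mean, p_mean = y
    return np.array([p_mean, -beta * p_mean - omega2 * q_mean])

t, h, y = 0.0, 0.01, np.array([1.0, 0.0])   # initial mean displacement and velocity
while t < 10.0:
    y = rk4_step(moment_rhs, t, y, h)
    t += h
print(y)   # approximate first moments at t = 10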
3 Exact Solutions of Fokker-Planck-Kolmogorov Equations
3.1 Introduction
The response of a general nonlinear oscillator under parametric random excitations and external random excitations has been extensively studied in the last three decades. The foundation of the development was laid earlier by Rayleigh [3.1], Fokker [3.2], and Smoluchowski [3.3], for example. In general, no exact solution can be found. When the excitations can be idealized as Gaussian white noises, in which case the response of the system can be represented by a Markovian vector and the probability density function of the response is described by the FPK equation, an exact stationary solution can be obtained. The solution of the FPK equation has been reported in the literature [3.4-3.15]. The following approach is that presented by To and Li [3.15]. It seems that the latter approach gives the broadest class of solvable reduced FPK equations. It is based on the systematic procedure of Lin and associates [3.11-3.14], and the application of the theory of the elementary or integrating factor for first order ordinary differential equations. In Ref. [3.14] the solution of the reduced FPK equation is obtained by applying the theory of generalized stationary potential, which is less restrictive than that employing the concept of detailed balance [3.12]. The latter is similar to that of Graham and Haken [3.16]. The basic idea of the concept of Graham and Haken is to separate each drift coefficient into reversible and irreversible parts.
In this chapter after the introduction of the FPK equation for a vector process, the solution of a general sdof nonlinear system is presented in Section 3.2. Section 3.3 includes solutions to various systems that are frequently encountered in the field of random vibration. Sections 3.4 and 3.5 are concerned with the solution of mdof nonlinear systems.
(3.1)
(3.2)
(3.3)
(3.4a,b)
(3.5)
(3.6)
The one-dimensional FPK equation in the last chapter can be easily extended to the multi-dimensional cases.
Consider the following Itô equation for an n-dof system
where X = (x_1 x_2 ... x_{2n})^T; f(X,t) and G(X,t), or simply f and G, are the drift vector of 2n × 1 and the diffusion matrix of 2n × 2n, respectively. Note that B is the Brownian motion vector process such that B_j are the elements of the vector B. The latter should not be confused with the matrix [B] of second derivate moments. The elements of [B] are B_ij.
The associated FPK equation is
in which p(X,t) is the joint transition probability density function, or simply referred to as the transition probability density, and D is the matrix of excitation intensities whose ij-th element is D_ij = πS_ij, where S_ij are the cross-spectral densities of the white noise processes.
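A sketch of the FPK equation (3.3) in terms of the first and second derivate moments A_i and B_ij introduced below (standard form; the grouping of terms in the original display may differ):

\[
\frac{\partial p(\mathbf{X},t)}{\partial t}
=-\sum_{i=1}^{2n}\frac{\partial}{\partial x_{i}}\bigl[A_{i}(\mathbf{X},t)\,p\bigr]
+\frac{1}{2}\sum_{i=1}^{2n}\sum_{j=1}^{2n}
\frac{\partial^{2}}{\partial x_{i}\,\partial x_{j}}\bigl[B_{ij}(\mathbf{X},t)\,p\bigr].
\]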
In terms of the first and second derivate moments, A_i and B_ij respectively, one has
such that
It is understood that the first and second derivate moments are evaluated at X = x. The initial conditions for the FPK equation are
The FPK equation is invariant under translations in time t. In other words,
(3.7)
(3.8)
(3.9)
such that one can write the backward Kolmogorov or backward FPK equation and forward FPK equation, (3.3), respectively, as
where L* is the adjoint operator to L.
Writing s = t_0, the backward FPK equation becomes
in which the first and second derivate moments are functions of X(s) or X(t_0), and the initial conditions are defined in Eq. (3.5). The backward FPK equation can be applied to derive partial differential equations for the moments of the response of the system, while the forward FPK equation is employed mainly to evaluate the transition probability density.
Finally, the so-called Itô's differential rule for an arbitrary function Y(X,t) or simply Y of a Markov vector process X(t) is important and useful for subsequent application and therefore is included at this stage. Starting from the classical chain rule,
Substituting dX from Eq. (3.1), remembering that the elements of the latter vector are x_i, and adding the WZ or S correction terms to (3.8), one can show that
where the generating differential operator of the Markov process X is defined as
Equation (3.9) is the Itô's differential rule, which is also known as Itô's lemma or Itô's formula.
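For a scalar function Y(X,t) of the diffusion process of Eq. (3.1), Itô's differential rule is commonly written as sketched below, where (GG^T)_{ij} plays the role of the second derivate moment B_ij:

\[
\mathrm{d}Y=\Bigl[\frac{\partial Y}{\partial t}
+\sum_{i}f_{i}\,\frac{\partial Y}{\partial x_{i}}
+\frac{1}{2}\sum_{i,j}\bigl(GG^{T}\bigr)_{ij}
\frac{\partial^{2}Y}{\partial x_{i}\,\partial x_{j}}\Bigr]\mathrm{d}t
+\sum_{i,j}\frac{\partial Y}{\partial x_{i}}\,G_{ij}\,\mathrm{d}B_{j}.
\]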
3.2 Solution of a General Single-Degree-of-Freedom System

Consider the stochastic system
where x_1 = x, x_2 = dx/dt, and the double over-dot denotes the second derivative with respect to time t; h(x_1,x_2), or simply h, and f_i(x_1,x_2), or simply f_i, are generally nonlinear functions of x_1 and x_2; and w_i(t), or simply w_i, are Gaussian white noises with delta type correlation functions
Applying the technique in Section 3.1, the FPK equation for the system described by Eq. (3.10) becomes
In general, an exact solution for the transition probability density function p is not available and only the stationary probability density function p_s can be obtained from the reduced FPK equation
The first derivate moments A_i and the second derivate moments B_ij of Eq. (3.13) are
According to the method described in Ref. [3.14], the first and second derivate moments are divided as follows
(3.14)
(3.16)
(3.15)
(3.17)
(3.18)
(3.19)
(3.20)
Then Eq. (3.13) is solvable if the following equations are satisfied
The stationary probability density function can be shown to be
where N is a function of x_1 and x_2 and C is a normalization constant. It should be noted that Eqs. (3.14) through (3.16) are similar to Eqs. (17) through (19) of Ref. [3.14], except that A_2^(2) here is replaced by (−λ_x/λ_y) in Ref. [3.14].
By the characteristic function method, one has from Eq. (3.16)
From the first two equations of Eq. (3.18), one has
In order to obtain the exact solution of Eq. (3.19) and to incorporate a broader class of nonlinear systems, the integrating factor method [3.15] is applied. Let M(x_1,x_2)
or M be the integrating factor of Eq. (3.19), then
Therefore,
Equation (3.22) gives
where C_1(x_1) is an arbitrary function and M is a general function characteristic of a particular nonlinear system.
Substituting Eq. (3.23) into (3.19), one has
Integrating Eq. (3.24) leads to
The RHS of Eq. (3.25) is a constant. Equation (3.25) is the implicit solution of Eq. (3.19), in which A_2^(2) is given by Eq. (3.23).
By the first and third equations of Eq. (3.18), one has
The above equation gives
(3.26)
(3.27)
(3.28)
(3.29)
(3.30)
Applying Eq. (3.19), Eq. (3.27) reduces to
This gives
where N_0(r) is an arbitrary function. Substituting Eq. (3.28) into Eq. (3.17), one has
Substituting Eqs. (3.28) and (3.23) into (3.15), and re-arranging it leads to
By Eqs. (3.14) and (3.28) one has
where C_2(x_1) is an arbitrary function of x_1. Substituting Eq. (3.31) into the first two terms on the RHS of Eq. (3.30), one can show that
where C_3(x_1) is an arbitrary function of x_1. Substituting the last equation into Eq. (3.30) gives
Note that Eqs. (3.29) and (3.33) constitute the broadest class of solvable reduced FPK equations of nonlinear sdof systems. Previous results in Refs. [3.4-3.14] are included in this class.
If the function M = λ_y(x,y), which is the partial derivative of λ(x,y) with respect to y, where y = (x_2)²/2, x = x_1, and λ(x,y), or simply λ, is an arbitrary function of x and y, then by Eq. (3.25)
Without loss of generality, by setting C_1(x_1) = 0 the above equation becomes
By Eq. (3.28), one has
(3.36)
(3.37)
(I-1)
(I-2)
Substituting Eq. (3.35) into Eq. (3.17), one has
This is Eq. (7) obtained by Cai and Lin in Ref. [3.17]. By Eqs. (3.33) and (3.34),
This is Eq. (21) in Ref. [3.14] and is Eq. (6) in Ref. [3.17]. From the foregoing, it is apparent that Eq.(3.36) is a special case of Eq. (3.29),
while Eq. (3.37) is a special case of Eq. (3.33). The stationary probability density function for an energy dependent nonlinear sdof system provided by Zhu and Yu in Eq. (3) of Ref. [3.18] is also a special case of Eq. (3.29). The equation of motion associated with Eq. (3) of Ref. [3.18] is one in which the coefficients of velocity and random excitation are functions of total energy of the oscillator. This case is included in the following as Example V.
Several mathematical models are included in the following to illustrate the application of the method presented above. These mathematical models have previously been studied. As they are used for illustration, only the probability density function of every case is considered.
Example I. Consider the model in Ref. [3.19]
where w(t) is a Gaussian white noise with spectral density S, g(x) is the nonlinear spring force, ζ(λ) is an arbitrary function, and λ is the total energy
Applying the same symbols as in the method presented above, the two Itô stochastic differential equations for Eq. (I-1) are
(I-3)
(I-4)
(I-5)
(I-6)
(I-7)
(I-8)
(II-1)
(II-2)
where B(t) or B is a unit Wiener process. The corresponding reduced FPK equation becomes
The first and second derivate moments are divided into two parts as in the procedure described above, except that A_2^(2) = −g(x_1) is chosen in accordance with Eq. (3.23), and C_3(x_1) = 0 is imposed in Eq. (3.32) such that Eq. (3.14) is satisfied. Then, Eq. (3.15) for the reduced FPK Eq. (I-5), in which f_1 = 1, becomes
From Eq. (I-2), one has ∂λ = x_2 ∂x_2. Thus, integrating Eq. (I-6) leads to
where C_2(x_1) is an arbitrary constant and therefore may be set to zero without loss of generality. Thus, Eq. (3.17) gives
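For this class of oscillators, in which the coefficient of the velocity is an arbitrary function ζ of the total energy λ, the generalized stationary potential yields a stationary density of the form sketched below; the sketch assumes the white noise has autocorrelation 2πSδ(τ), so that the intensity enters as πS, and the precise notation of Eq. (I-8) may differ:

\[
p_{s}(x_{1},x_{2})
=C\exp\!\Bigl[-\frac{1}{\pi S}\int_{0}^{\lambda}\zeta(u)\,\mathrm{d}u\Bigr],
\qquad
\lambda=\frac{x_{2}^{2}}{2}+\int_{0}^{x_{1}}g(v)\,\mathrm{d}v .
\]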
Example II. Consider the Rayleigh or modified van der Pol oscillator [3.19]
where β is a positive constant. If one writes
(II-3)
(II-4)
(II-5)
(II-6)
(II-7)
(II-8)
(II-9)
(II-10)
then Eq. (II-1) becomes
This equation is of the type described by Eq. (I-1) in Example I above. In the latter, λ is replaced by H here. Therefore,
Substituting Eq. (II-4) into (I-8), it gives
where C is a normalization constant. One can re-write
Substituting for Eq. (II-2) and re-arranging,
By Eq. (II-7), Eq. (II-5) becomes
Equation (II-8) can also be written as
Equation (II-9) can be reduced to
where C_1 is a normalization constant. Thus, the response of the system described by Eq. (II-1) is not Gaussian.
Example III. Consider a nonlinear oscillator with parametric and external excitations
where w_i(t) are independent Gaussian white noises with spectral densities S_ii; α, β and S are constants. This is the example studied by Yong and Lin in Ref. [3.11].
Applying Eq. (3.15) and setting A_2^(2) = −S²x_1 by using Eq. (3.23), as well as imposing C_3(x_1) = 0 in Eq. (3.32), it results in
since f_1 = −S²x_1 and f_2 = 1. Integrating Eq. (III-2) gives
In particular, if S_22/S_11 = α/β, one has
Without loss of generality, one may choose
The stationary probability density function is therefore given as
where C is the normalization constant. As noted by Yong and Lin [3.11], under a suitable combination of Gaussian parametric and external random excitations, the response of the above nonlinear system is Gaussian.
Example IV. The following system is the one considered by Dimentberg [3.8], and Yong and Lin [3.11]
(IV-2)
(IV-3)
(IV-4)
(IV-5)
(IV-6)
(IV-7)
where w_i(t) are independent Gaussian white noises with spectral densities S_ii, S is a constant, and
As the coefficient of velocity in Eq. (IV-1) has a parametric random excitation the WZ correction term [3.20] is required. The resulting Itô equations for Eq. (IV-1) are
The corresponding reduced FPK equation becomes
Applying the procedure described above and specifying A_2^(2) = −S²x_1 with reference to Eq. (3.23), and imposing C_3(x_1) = 0 in Eq. (3.32), Eq. (3.15) then gives
since f_1 = −x_2, f_2 = −S²x_1 and f_3 = 1. If S² = S_11S_22, then from Eq. (IV-6) one can show that
Applying Eq. (3.16) results in C_4(x_1) = constant. Therefore, integrating and substituting the result into Eq. (3.17) gives
(IV-8)
(IV-9)
(V-1)
(V-2)
(V-3)
(V-4)
(V-5)
If one confines the arbitrary function to a linear one, β(·) + α, in which α and β are constants, then after integrating Eq. (IV-8) it leads to
where C is a normalization constant. Equation (IV-9) was independently presented in Refs. [3.8] and [3.11], with different notations.
Example V. The following equation of motion is for the so-called energy dependent system considered by Zhu and Yu [3.18]
where w(t) is a Gaussian white noise with spectral density S, g(x) is the nonlinear spring force, ζ(λ) and f(λ) are arbitrary functions, and λ is the total energy
Note that Eq. (V-1) is similar to Eq. (I-1) above except for the RHS. Applying the same symbols as in the method presented above, the two Itô stochastic differential equations for Eq. (V-1) are
and
where B(t) or written simply as B is a unit Wiener process. The corresponding reduced FPK equation becomes
(V-6)
(V-7)
(V-8)
The first and second derivate moments are divided into two parts as in the procedure described above, except that A_2^(2) = −g(x_1) is chosen in accordance with Eq. (3.23), and C_3(x_1) = 0 is imposed in Eq. (3.32) such that Eq. (3.14) is satisfied. Then, Eq. (3.15) for the reduced FPK Eq. (V-5), in which f_1 = f(λ), becomes
From Eq. (V-2), one has ∂λ = x_2 ∂x_2. Therefore, integrating Eq. (V-6) gives
where C_2(x_1) is an arbitrary constant and therefore may be set to zero without loss of generality. Thus, Eq. (3.17) gives
Equation (V-8) agrees with (3) of Ref. [3.18] except for different notations.
3.3 Applications to Engineering Systems
In this section the extended theory of generalized stationary potential and the associated procedure described in the last section are applied to various sdof systems frequently encountered in engineering. They are grouped into three categories, namely, (a) systems with linear damping and nonlinear stiffness, (b) systems with nonlinear damping and linear stiffness, and (c) systems with both nonlinear damping and nonlinear stiffness.
As far as possible the mean square or variance of response of every system is included in addition to the stationary probability density function. Unless stated otherwise it is assumed that the stationary probability density function exists in the nonlinear system.
3.3.1 Systems with linear damping and nonlinear stiffness
This category includes nonlinear systems with (a) elastic force of polynomial type, (b) elastic force of trigonometric function type, (c) elastic force with acceleration
jumps, (d) double bi-linear restoring force, and (e) in-plane or axial random excitation. The equation of motion for every system will be solved by the direct approach whenever it is possible, and identification of the equations of the method in Section 3.2 will also be made.
Example I. Consider the Duffing oscillator. This is the one with the simplest polynomial type of elastic force. It can be used to model a system with large displacement [3.21], or the so-called system with geometrical nonlinearity. The equation of motion for this oscillator under Gaussian white noise excitation is
where β is the positive damping coefficient, Ω is the natural frequency of the corresponding linear oscillator, and ε is the strength of nonlinearity. The latter is assumed to be positive henceforth.
The two Itô differential equations corresponding to Eq. (I-1) are
where g(x_1) = Ω²x_1 + εx_1³. The reduced FPK equation for Eqs. (I-2) and (I-3) is
In order to solve Eq. (I-4), it may be written as
Equation (I-5) is solvable if it is satisfied by the following two equations
(I-6, 7)
(I-8)
(I-9)
(I-10)
(I-11)
(I-12)
(I-13)
Combining Eqs. (I-8) and (I-9), one has
where C is a normalization constant. Now, before deriving the variance of displacement the method presented in
Section 3.2 is applied. With reference to Eq. (I-8) of Section 3.2, λ is identified as the total energy of the present oscillator
and ζ(u) in Eq. (I-8) of Section 3.2 is β in the present system. Consequently, by applying Eq. (I-8) of Section 3.2 one obtains a result identical to Eq. (I-10) above.
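Written out with the symbols assumed above (β for the damping coefficient, Ω for the natural frequency, ε for the strength of nonlinearity, and S for the spectral density of w(t)), the stationary density of Eq. (I-10) takes the familiar form

\[
p_{s}(x_{1},x_{2})
=C\exp\!\Bigl[-\frac{\beta}{\pi S}
\Bigl(\frac{x_{2}^{2}}{2}+\frac{\Omega^{2}x_{1}^{2}}{2}
+\frac{\varepsilon x_{1}^{4}}{4}\Bigr)\Bigr],
\]

which is the special case, with ζ(u) = β, of the result sketched after Eq. (I-8) of Section 3.2.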
The mean square of displacement is
Before substituting Eq. (I-10) into (I-11), one separates the joint stationary probability density function into two parts as p_s = p_2(x_2) p_1(x_1), where
and
where
Therefore,
By Eqs. (I-13) through (I-16), Eq. (I-11) can be expressed as
Both Eqs. (I-15) and (I-17) can be evaluated by applying the parabolic cylinder and gamma functions. Making use of the following identity
where U(a,z) is the parabolic cylinder function. Writing
(I-19)
(I-20)
(I-21)
(I-22)
(II-1)
so that
Substituting Eq. (I-19) into (I-15) and making use of Eq. (I-18), one has
Simplifying, it gives
Applying Eq. (I-19) and following a procedure similar to that used in the derivation of Eq. (I-21), Eq. (I-17) can be obtained as
If Ω = S = γ = 1.0 and β = 0.1, Eq. (I-22) gives σx² = 3.5343, and other values are plotted in Figure 3.1. With reference to the latter, it is clear that the mean square of displacement decreases with increasing strength of nonlinearity, but it increases with increasing spectral density of the Gaussian white noise excitation.
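This value can be cross-checked by direct quadrature of a stationary marginal density. The density used below is the assumed form sketched after the statement of Eq. (I-1) above, so the result is only indicative of Eq. (I-22):

```python
# Numerical cross-check of the mean square displacement of the Duffing oscillator
# by quadrature of an assumed stationary marginal density
# p(x) ~ exp{-(beta/(pi*S)) [Omega^2 x^2/2 + gamma x^4/4]}.
import numpy as np
from scipy.integrate import quad

def mean_square_displacement(beta, Omega, gamma, S):
    D = np.pi * S / beta  # assumed ratio of excitation intensity to damping
    weight = lambda x: np.exp(-(0.5 * Omega**2 * x**2 + 0.25 * gamma * x**4) / D)
    num, _ = quad(lambda x: x**2 * weight(x), -np.inf, np.inf)
    den, _ = quad(weight, -np.inf, np.inf)
    return num / den

print(mean_square_displacement(0.1, 1.0, 1.0, 1.0))  # about 3.53, close to the quoted 3.5343
print(mean_square_displacement(0.1, 1.0, 5.0, 1.0))  # smaller: stronger nonlinearity
print(mean_square_displacement(0.1, 1.0, 1.0, 2.0))  # larger: stronger excitation
```

The trends printed by the last two lines reproduce the behaviour described with reference to Figure 3.1.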
Example II. An example of a system with elastic force of trigonometric function type is the following
where k0 is the initial spring rate, x0 is the maximum deflection obtainable with
(II-2)
(II-3)
(II-4)
infinite force such that -x0 < x < x0, m is the mass of the system, and the remaining symbols have their usual meaning. This nonlinear elastic force is shown in Figure 3.2. Clearly, this oscillator is similar to that described in Eq. (I-1) above, except that the polynomial elastic force is replaced by a so-called tangent elasticity
characteristic [3.22]. The elastic force described in Eq. (II-1) represents a hardening spring with limiting finite deflection, even when it is subjected to an infinite force. Equation (II-1) was dealt with by Klein [3.23]. The following results are taken from the latter with changes in notation. A possible example of application is in the analysis of a vibration isolator that uses an elastomer, such as neoprene, as the spring element. Isolators of this type are used in protecting electronic equipment from vibration in aircraft and missiles [3.23].
The joint stationary probability density function of Eq. (II-1) can be obtained by replacing g(x1) = Ω²x1 + γx1³ with g(x1) = [2k0x0/(πm)] tan[πx1/(2x0)]
so that
where Ω0² = k0/m and C is the normalization constant.
Writing σ0² = πS/(βΩ0²) and performing the integration in Eq. (II-2),
The above probability density function can be factored by marginal distributions as indicated in the last example due to the solutions given in Eqs. (I-8) and (I-9). In
other words, x1 and x2 are statistically independent. Thus,
where
Figure 3.1 Mean square of displacement of Duffing oscillator.
Figure 3.2 Elastic force of a hardening spring with finite deflection.
(II-5)
(II-6)
(II-7)
(II-8)
(II-9)
The normalization constant C can be evaluated from the following equation
where n = [2x0/(πσ0)]² ≥ 0. Writing y = πx1/(2x0) and with appropriate change of
integration limits, the last equation becomes
Evaluating Eq. (II-7) gives
where Γ(·) is the gamma function, or the Euler integral of the second kind. Substituting C into Eq. (II-5) results in
The mean square of displacement of the oscillator is
(III-1)
In general the integral in Eq. (II-9) cannot be evaluated in closed form. However, when n is a positive integer it can be determined explicitly. Typical results are shown in Figure 3.3.
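For non-integer n the integral in Eq. (II-9) can still be evaluated numerically. The sketch below assumes the factored marginal density p1(x1) ∝ cosⁿ[πx1/(2x0)] on (-x0, x0) implied by Eqs. (II-3) through (II-6), with n = [2x0/(πσ0)]²; the parameter values are hypothetical.

```python
# Mean square displacement for the tangent-elasticity spring of Eq. (II-1),
# assuming the marginal density p1(x) ~ cos(pi*x/(2*x0))**n on (-x0, x0).
import numpy as np
from scipy.integrate import quad

def mean_square_tangent_spring(x0, sigma0):
    n = (2.0 * x0 / (np.pi * sigma0))**2          # exponent n as defined after Eq. (II-6) (assumed)
    weight = lambda x: np.cos(np.pi * x / (2.0 * x0))**n
    num, _ = quad(lambda x: x**2 * weight(x), -x0, x0)
    den, _ = quad(weight, -x0, x0)
    return num / den

# Hypothetical values: limiting deflection x0 = 1, linear rms response sigma0 = 0.5
print(mean_square_tangent_spring(1.0, 0.5))
```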
Figure 3.3 Mean square of displacement.
Example III. Consider a nonlinear oscillator with a set-up spring. Its equation of motion is given by
where sgn x = 1 for x > 0 and sgn x = -1 for x < 0. When the oscillator mass traverses through x = 0 it undergoes a jump in relative acceleration of magnitude
2F0/m, m being the mass of the oscillator, whereas its relative velocity is continuous. This oscillator, shown in Figure 3.4, was analyzed by Crandall [3.24]. Figure 3.5 presents the restoring force as a function of the relative motion x of the oscillator.
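A form of Eq. (III-1) consistent with this description, and with the variance expression quoted after Eq. (III-3) (σ0² = πS/(2ζΩ³), the linear-oscillator result for damping 2ζΩẋ), would presumably be the following; it is a sketch, not a transcription of Eq. (III-1).

```latex
% Assumed form of the set-up spring oscillator of Eq. (III-1)
\[
  \ddot{x} + 2\zeta\Omega\,\dot{x} + \Omega^{2}x
  + \frac{F_{0}}{m}\,\operatorname{sgn}x = w(t),
  \qquad
  \operatorname{sgn}x =
  \begin{cases}
    +1, & x > 0,\\
    -1, & x < 0,
  \end{cases}
\]
```

so that the restoring acceleration indeed jumps by 2F0/m as the mass crosses x = 0, while the velocity remains continuous.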
Figure 3.4 Oscillator with a set-up spring.
Figure 3.5 Restoring force of a nonlinear oscillator with a set-up spring.
(III-2)
(III-3)
(III-4)
(III-5)
(III-6)
The reduced FPK equation for (III-1) is similar to (I-4), therefore
Following a procedure similar to that in Example I above, one obtains the joint probability density function as
where σ0² = πS/(2ζΩ³) and C is the normalization constant, which is defined by
Then, applying the following definitions
the last relation can be found as
Since the double integral is an even function of x1 and x2, one can simply consider the following term
Substituting Eq. (III-5) into (III-4) one can show that
Substituting Eq. (III-6) into (III-3) gives
(III-7)
(III-8)
(III-9)
(IV-1a)
(IV-1b)
(IV-2)
Similarly,
By Eq. (III-7) the mean square of displacement is obtained as
The results in Eqs. (III-7) through (III-9) are identical to those presented by Crandall [3.24], apart from differences in notation.
Example IV. Consider a sdof nonlinear oscillator with a general double bi-linear restoring force, as shown in Figure 3.6. Oscillators of this type can be applied to model materials undergoing elasto-plastic deformation or systems with energy dissipation absorbers. The equation of motion can be expressed as
where Ω² = k/m, ω1² = k1/m, γ = (k - k1)x0/k1, and m is the mass of the system.
Equation (IV-1a) is linear and Eq. (IV-1b) is similar to Eq. (III-1) above. By the procedure in the last example, the probability density functions are
(IV-3)
(IV-4)
(IV-5a)
(IV-5b)
where
(IV-6)
(IV-7)
(IV-8)
(IV-9)
(IV-10)
and there are similar relations for p1 and p2. These continuity conditions will be satisfied if one assumes the following relation
The constant C1 can be evaluated from the normalization condition
Therefore,
where
where
By making use of Eqs. (IV-2), (IV-3), (IV-6), (IV-8), and (IV-9) one has
(IV-11)
(IV-12)
(IV-13)
where
The mean squares of displacement for several special cases can be evaluated by applying Eq. (IV-10).
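Before listing the special cases below, a rough numerical companion may help. The sketch assumes a symmetric bilinear restoring acceleration with slope Ω² for |x| ≤ x0 and ω1² beyond the knee, and a stationary marginal density of exponential-of-potential form with scale D = πS/β; these are stand-in assumptions for Eqs. (IV-1) through (IV-10), not transcriptions of them.

```python
# Quadrature estimate of <x^2> for an assumed symmetric bilinear restoring force
# (a hypothetical stand-in for the special cases of Eqs. (IV-11)-(IV-17)).
import numpy as np
from scipy.integrate import quad

def potential(x, Omega, omega1, x0):
    """Potential energy of the assumed bilinear restoring acceleration."""
    ax = abs(x)
    if ax <= x0:
        return 0.5 * Omega**2 * x**2
    # beyond the knee the slope changes from Omega**2 to omega1**2
    return 0.5 * Omega**2 * x0**2 + Omega**2 * x0 * (ax - x0) + 0.5 * omega1**2 * (ax - x0)**2

def mean_square_bilinear(Omega, omega1, x0, beta, S):
    D = np.pi * S / beta                          # assumed excitation-to-damping scale
    weight = lambda x: np.exp(-potential(x, Omega, omega1, x0) / D)
    num, _ = quad(lambda x: x**2 * weight(x), -np.inf, np.inf)
    den, _ = quad(weight, -np.inf, np.inf)
    return num / den

# Hypothetical data: knee at x0 = 1, second slope one quarter of the first
print(mean_square_bilinear(Omega=1.0, omega1=0.5, x0=1.0, beta=0.5, S=0.1))
```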
Case (i): x0 = 0 and k1 = k
This is a linear system. Thus, the mean square of displacement is
Case (ii): k1 = k (ν = π/4). This case is also a linear system. It is easy to show that the mean square of
displacement is identical to Eq. (IV-11) above.
Case (iii): ν → π/2 (k1 → ∞). From Eq. (IV-6), C2 = 0 and the joint stationary probability density function is
given by Eq. (IV-2). Applying Eqs. (IV-8) and (IV-10) one can show that the mean square of displacement is
Case (iv): ν = 0 (k1 = 0). This case can be employed to model elastic, perfectly plastic materials. For this
case Eq. (IV-1b) becomes
(IV-14)
(IV-15)
(IV-16)
(IV-17)
(V-1)
Applying Eq. (III-3) above or Eq. (IV-3), one can show that the probability density function is given by
and the continuity condition gives
From the normalization condition one has
The mean square of displacement, after some algebraic manipulation, can then be expressed as
Example V. Consider an oscillator with a parametric random excitation as a coefficient of the cubic displacement term. The equation of motion is
Equation (V-1) can be used to model a single-mode vibration of a plate structure under a transverse random excitation and an in-plane random excitation when the second and higher modes of vibration are well beyond the frequency range of interest. Likewise, it can be used to model a single-mode vibration of a beam structure in bending that is simultaneously subjected to an axial random excitation. Of course, the random excitations considered here are Gaussian white noise processes, which, in theory, have an infinite range of frequency and therefore would cover
(V-2)
(V-3)
(V-4)
(V-5)
(V-6)
all the modes in the plate or beam structures. However, in practice, the single mode assumption is acceptable in that the duration of excitation is finite rather than infinite.
Applying a procedure similar to that in Example III of Section 3.2, one can show that
11 1 1 2where " = g S S and in Eq. (3.15) f = - gS x , and f = 1. 2 4 2 3
Integrating Eq. (V-2) leads to
Without loss of generality, one can choose
so that
Strictly speaking, this equation satisfies only the solvability conditions (3.14) and (3.15) in Section 3.2 above. To also satisfy the solvability condition (3.16) one requires
1 2 22 1 2where " = 5S /3 and " = 2S /(3"). When x = 0 then x = 4 such that N = 4.2
This leads to a zero probability density function. By Eq. (V-4), the joint stationary probability density function becomes
where C is the normalization constant. For x2 > Ωx1 the probability density given by Eq. (V-6) is stable but non-Gaussian.
3.3.2 Systems with nonlinear damping and linear stiffness
This class of problems includes self-excited oscillators, such as the van der Pol oscillator and the modified van der Pol or Rayleigh oscillator. For small strength of nonlinearity the latter two oscillators exhibit limit cycles and their responses are essentially sinusoidal. As the strength of nonlinearity increases, the limit cycles become distorted and the responses non-sinusoidal.
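Monte Carlo simulation offers a quick independent check of the stationary densities derived below. A minimal Euler–Maruyama sketch for a Rayleigh-type oscillator of the kind considered in Example I follows; the specific equation of motion, parameter values, and noise convention in the code are assumptions for illustration, not the book's Eq. (I-1).

```python
# Euler-Maruyama simulation of a white-noise-excited Rayleigh-type oscillator.
# The assumed form  x'' + beta*(xdot**3/3 - xdot) + Omega**2 * x = w(t)  is illustrative only.
import numpy as np

rng = np.random.default_rng(0)
beta, Omega, S = 0.2, 1.0, 0.05          # hypothetical parameters
q = 2.0 * np.pi * S                      # assumed intensity, <w(t) w(t+tau)> = q*delta(tau)
dt, nsteps, burn_in = 2e-3, 500_000, 50_000

x, v = 0.0, 0.0
x_samples = []
for k in range(nsteps):
    dW = rng.normal(0.0, np.sqrt(dt))    # Brownian increment
    a = -beta * (v**3 / 3.0 - v) - Omega**2 * x
    x += v * dt
    v += a * dt + np.sqrt(q) * dW
    if k >= burn_in:                     # discard the transient before sampling
        x_samples.append(x)

print("simulated mean square displacement:", np.mean(np.square(x_samples)))
```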
Example I. The Rayleigh or modified van der Pol oscillator excited by Gaussian white noise has been considered by Caughey and Payne [3.7] and To [3.19]. This model can be applied to analyse the flow-induced vibration of a slender cylinder if the excitation is small and only the first mode of vibration of the cylinder is of interest. The equation of motion for such an oscillator is
where β is a positive constant. If one lets
Then Eq. (I-1) becomes
This equation is of the type described by Eq. (I-1) in Section 3.2 above; the λ in the latter is replaced by H here. Therefore,
Substituting Eq. (I-4) into Eq. (I-8) of Section 3.2, it gives
where C is a normalization constant. One can re-write
Substituting from Eq. (I-2) and re-arranging,
(I-7)
(I-8)
(I-9)
(I-10)
(I-11)
(I-12)
Equation (I-8) can also be written as
Equation (I-9) can be reduced to
where C1 is a normalization constant. Thus, the response of the system described by Eq. (I-1) is not Gaussian. Expanding the squared term in the exponential function in Eq. (I-10), it can be expressed as
or
where C2 is the normalization constant. Therefore, the joint stationary probability density function is given by
Equation (I-12) agrees with Eq. (41) of Ref. [3.7].
Similarly, the mean square of velocity can be obtained as
The probability density function in Eq. (I-12) is symmetric in x1 and x2; hence
Introducing the change of variables,
Therefore,
Performing the double integrations, one can show that [3.7]
Example II. Consider a nonlinear oscillator with a parametric excitation. This model can be applied to the simplified response analysis of a rotor blade. The equation of motion is
(II-1)
(II-2)
(II-3)
(II-4)
(II-5)
(I-1)
where w(t) is the Gaussian white noise with spectral density S; α, β and Ω are constants. This is Example III in Section 3.2 above except that the external random excitation is disregarded. Applying Eq. (3.15) and setting A2^(2) = -Ω²x1 by using Eq. (3.23), as well as imposing C3(x1) = 0 in Eq. (3.32), leads to
since f1 = -Ω²x1. Integrating Eq. (II-2) gives
Without loss of generality, one may choose
The stationary probability density function can then be expressed as
where C is the normalization constant. Clearly, the response of the above oscillator is not Gaussian.
3.3.3 Systems with nonlinear damping and nonlinear stiffness
Many practical engineering systems belong to this category. However, an explicit solution is difficult, if not impossible, to obtain. The following example is included to illustrate the derivation rather than to present an analysis of a reasonably practical problem.
Consider a nonlinear oscillator having the following equation of motion
(I-2)
(I-3a,b)
(I-4)
(I-5)
(I-6)
where w(t) or simply w is a Gaussian white noise with a spectral density S and the total energy is
Note that in the foregoing the limits of integration are not identified as the reference level of the potential energy may be chosen arbitrarily. Equation (I-2) can be easily verified if one applies the following co-ordinate transformation
Using Eq. (I-2), Eq. (I-1) can be written as
Equation (I-4) is similar to Eq. (I-1) of Example I in Section 3.2 and therefore the joint stationary probability density function can be expressed as
Performing the integration in Eq. (I-5) and substituting Eq. (I-2) results in
3.4 Solution of Multi-Degree-of-Freedom Systems
Generalization of Eq. (3.10) to lumped-parameter systems with multiple degrees of freedom (mdof) is straightforward, though the amount of algebraic manipulation is substantially increased. For example, the scalar variables x and ẋ in Section 3.2 become vectors. That is, X = (x1 x2 x3 ... xn)^T and Y = (ẋ1 ẋ2 ẋ3 ... ẋn)^T, such that f becomes f_ir and w changes to w_r, where i = 1, 2, ..., n and r = 1, 2, ..., m. Accordingly, the equations of motion for an n-dof system may be written as
(3.38)
(3.39)
This set of equations can be expressed in matrix form as
where Y and h(X;Y) are vector functions of order n × 1. In order to identify the first and second derivate moments with the above
nonlinear mdof system, one writes Z1 = X = (x1 x2 x3 ... xn)^T, Z2 = Y = (ẋ1 ẋ2 ẋ3 ... ẋn)^T, and Z = (Z1 Z2)^T = (z1 z2 z3 ... z2n)^T, such that the state vector equation becomes
where (w), of order m × 1, is the vector of delta-correlated white noise processes. The corresponding Itô equation is
where the WZ (Wong–Zakai) correction term is a vector of order n × 1; (db) = (w) dt, both of order m × 1, in which db_r = w_r(t) dt is a Brownian motion or Wiener process; and
The latter equation may also be expressed as
where the subscripts r,s = 1,2,...,m. The first and second derivate moments of the FPK equation associated with Eq.
(3.39) are
Splitting the first and second derivate moments into
Then, applying Eq. (3.40), following steps similar to those between Eqs. (3.14) and (3.36) in Section 3.2, and supposing now that the elementary or integrating factor is M(x1, x2, x3, ..., xn; y1, y2, y3, ..., yn), with r = λ(x1, x2, x3, ..., xn; y1, y2, y3, ..., yn), y_i = ẋ_i²/2, and M_{y_i} = λ_{y_i}, one can obtain the stationary probability density as
To illustrate the application of the foregoing procedure, it suffices to consider the following two dof system
where f_ir, in general, are functions of x1, x2, ẋ1 and ẋ2. To identify the first and second derivate moments of this two-dof system, one re-writes Eq. (I-1) as the following four first-order differential equations
With reference to the above equations, and recalling the notation, the first and second derivate moments of the corresponding FPK equation can be determined as
(I-3)
(I-4)
The remaining second derivate moments are zero. The above first and second derivate moments are split and, making use of the sdof Eq. (3.33) extended to mdof systems, one can show that
If a consistent function dΦ(λ)/dλ can be found from Eqs. (I-3) and (I-4), then the above problem is of the generalized stationary potential type.
Consider the simple case of Scheurkogel and Elishakoff [3.25] in which the two
(I-5)
(I-6)
(I-7)
(I-8)
(I-9)
(I-10a,b)
dof system with equations of motion similar to Eq. (I-1) above but
and
where H is a non-negative potential function. From Eq. (I-5), and comparing with Eq. (I-3) in the foregoing, one has M = 1,
and
where δ is a constant and, therefore, the system admits a stationary solution
Note that Eq. (I-8) is independent of the choice of δ. This result was obtained by Cai and Lin [3.14], and was also obtained by Scheurkogel and Elishakoff [3.25] by applying a different procedure. It was pointed out by Cai and Lin that the system is in detailed balance when δ = 1/2.
To derive the statistical moments for this particular case, Ref. [3.25] uses
where α1 and α2 are positive, and γ is a small positive parameter. Recall that
Introducing the new variables
and the marginal probability density functions
where C1 is a normalization constant, the probability density function p^s can then be written as
Equation (I-15) implies that the velocities and new displacement variables defined by Eq. (I-10) are pairwise independent. From Eqs. (I-11) through (I-13), one can conclude that the velocities and u are normally distributed with zero mean
Applying the following identity [3.26]
the second moments of velocities and u can be shown to be
With reference to Eqs. (I-10) and (I-16), and the independence of u and v one can obtain
and therefore
To evaluate Eq. (I-20) one requires <v²>. Note that the marginal probability density
function of v is symmetrical about the origin and therefore all its odd moments are zero. Furthermore, applying Eqs. (I-10) and (I-16) one has
(I-21)
(I-22)
(I-23)
(I-24)
(I-25)
(I-26)
(I-27)
By definition the moments of even order of v are
By making use of the following substitution
Eq. (I-22) can be obtained as
where the function Q_m[·] is given by
Setting m = 0 in Eq. (I-24), the normalization constant
Hence, Eq. (I-24) becomes
As γ is small, one can show that the second moment of v is given by [3.25]
(I-28)
(I-29)
(I-30)
(3.42)
Applying Eqs. (I-10), (I-18), (I-19), (I-21) and (I-28), the second moments of x1 and x2 can be shown to be
and
3.5 Stochastically Excited Hamiltonian Systems
Another general technique for dealing with a somewhat larger, but still restricted, class of mdof nonlinear systems in terms of a Hamiltonian formulation has been provided in Refs. [3.27-3.29]. The technique in Refs. [3.27, 3.29] is a generalization of that by Soize [3.28]. The basic steps of the technique are included here.
Consider the mdof nonlinear system governed by
(3.43)
(3.44)
(3.45)
(3.46)
(3.47)
(3.48)
where α(q) is an arbitrary function of q; H is the Hamiltonian with continuous first-order derivatives; w_r(t) are Gaussian white noises; β(H), γ_ir(q;p), and f(H) are twice differentiable; c_ij(q;p) are differentiable; q = (q1 q2 q3 ... qn)^T; p = (p1 p2 p3 ... pn)^T; and q_i and p_i are generalized displacements and momenta, respectively. The system in Eq. (3.42) encompasses both additive and multiplicative random
excitations. Following the procedure in Ref. [3.29], one has
where Φ is the probability potential; <w_i(t) w_j(t + τ)> = 2πS_ij δ(τ); and B_ij is
related to the second derivate moments. Equation (3.43) may be re-written as
which has a general solution
Therefore, the stationary probability density
Suppose β is constant, c_ij and γ_ir depend on q only, and c_ij + c_ji = μB_ij; then one obtains
If β is a function of H, c_ij and γ_ir depend on q only, and c_ij = c_ji = μB_ij/2,
Consider another system whose Hamiltonian is given by [3.29]
where q_i are the generalized displacements, p_i = m_ij q̇_j are the generalized momenta, and m(q) is a symmetric matrix. The system corresponding to the above Hamiltonian has the following governing equations of motion
where x = (x1 x2 x3 ... xn)^T, x_i being the displacement of the i'th dof of the system,
and the remaining symbols have their usual meaning. Thus, the stationary probability density of x and ẋ can be expressed in terms of that for q and p by the following relation
where |J| is the Jacobian and is equal to the determinant of the symmetric matrix m(x) in Eq. (3.51).
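In symbols, this change of variables presumably takes the standard form sketched below, assuming x coincides with q and the momenta are p = m(q)ẋ; this is a hedged paraphrase, not a transcription of the book's relation.

```latex
% Assumed form of the density transformation from (q, p) to (x, xdot)
\[
  p^{s}_{x\dot{x}}(x,\dot{x})
  = p^{s}_{qp}\!\bigl(q(x),\,p(x,\dot{x})\bigr)\,\lvert J\rvert,
  \qquad
  \lvert J\rvert = \det m(x).
\]
```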
4 Methods of Statistical Linearization
4.1 Introduction
The systematic methods developed by Cai and Lin [4.1], and further generalized by To and Li [4.2], give the broadest class of solvable reduced Fokker-Planck-Kolmogorov (FPK) equations, which contains all solvable equations previously obtained and presented in the literature. However, it is difficult to find a real mechanical or structural system that corresponds to a solvable reduced FPK equation other than those already reported in the literature and representatively included in Chapter 3. Consequently, it is necessary to apply approximation methods to deal with other real mechanical or structural systems.
One popular class of methods for approximate solutions of nonlinear systems is that of statistical linearization (SL) or equivalent linearization (EL) techniques. These techniques are popular among structural dynamicists and in the engineering mechanics community. This is partially due to their simplicity and applicability to mdof systems and to systems under various types of random excitation.
The SL technique was independently developed by Booton [4.3,4.4] and Kazakov [4.5,4.6] in the field of control engineering. Further developments in this field were presented and reviewed by Sawaragi et al. [4.7], Kazakov [4.8,4.9], Gelb and Van Der Velde [4.10], Atherton [4.11], Sinitsyn [4.12], and Beaman and Hedrick [4.13]. In the control and electrical engineering communities the SL techniques are also known as methods of describing functions. In the field of structural dynamics Caughey [4.14] independently presented the SL technique as an approximate method for solving nonlinear systems under external random forces. Subsequently, generalization of the SL technique in the field of structural dynamics was made by Foster [4.15], Malhotra and Penzien [4.16], Iwan and Yang [4.17], Atalik and Utku [4.18], Iwan and Mason [4.19], Spanos [4.20], Brückner and Lin
(4.1)
(4.2)
(4.3)
[4.21], and Chang and Young [4.22]. Many applications of the SL technique have been made since its introduction in the mid-1950s and early 1960s. Examples can be found in the survey articles of Sinitsyn [4.12], Spanos [4.23], Socha and Soong [4.24], and the books by Roberts and Spanos [4.25], and Socha [4.26]. The underlying idea of the SL techniques is to replace the nonlinear system by a linear one such that the behaviour of the equivalent linear system approximates that of the original nonlinear oscillator. In essence the techniques are generalizations of the deterministic linearization method of Krylov and Bogoliubov [4.27] in the sense that equivalent natural frequencies are employed.
In this chapter, representative SL techniques in the field of structural dynamics, the issues of existence and uniqueness, accuracy, and various applications are presented and discussed.
4.2 Statistical Linearization for Single-Degree-of-Freedom Nonlinear Systems
In this section, the methods of SL are introduced for sdof nonlinear systems with stationary solutions, sdof systems with nonstationary random response, non-zero mean stationary solutions, the stationary solution of a nonlinear sdof system under narrow-band excitation, and the stationary solution of a sdof system under parametric and external random excitations.
4.2.1 Stationary solutions of single-degree-of-freedom systems under zero mean Gaussian white noise excitations
Consider a sdof nonlinear oscillator described by the equation of motion
where the symbols have their usual meaning. In particular,
in which S is the spectral density of the Gaussian white noise process w(t). The underlying idea of the SL technique is to replace Eq. (4.1) by the following equivalent linear equation of motion
(4.4)
(4.5)
(4.6)
(4.7)
(4.8)
(4.11)
(4.12a,b)
where βe and ke are the equivalent damping and stiffness coefficients that best approximate the original nonlinear equation of motion, Eq. (4.1). To achieve this, one simply adds the equivalent linear damping and restoring force terms to both sides of Eq. (4.1) and re-arranges to give
where D is the deficiency or error term in the approximation. The deficiency
In order to minimize the error, a common criterion is to minimize the mean square value of the error process D. Therefore, the parameters βe and ke have to be
chosen such that
and
For <D²> to be a true minimum, it is required that
(I-1)
By virtue of Eqs. (4.7), (4.8) and (4.12) one has the following pair of algebraic equations
Clearly, the solution for βe and ke requires knowledge of the unknown expectations. There are two possible approximations [4.4,4.14]. The first approach is to replace the joint transition probability density function by the joint stationary probability density function. This, in turn, enables one to replace the time-dependent mean square values of displacement and velocity by their corresponding stationary mean square values. The other approach is to replace the joint stationary probability density function by that of the linearized equation. In this approach the expectations become implicit functions of the equivalent damping and stiffness coefficients. The consequence of this is that Eqs.
(4.13) and (4.14) become nonlinear in βe and ke. It should be noted that the most general formulas for the determination of the
equivalent linear damping and stiffness coefficients, which are applicable to stationary and nonstationary Gaussian approximations of the response, are
Equations (4.15) and (4.16) are obtained from the corresponding relations for mdof systems that were presented by Atalik and Utku [4.18]. This SL technique for mdof systems is included in Section 4.3 and therefore is not dealt with here.
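For reference, in much of the SL literature these most general formulas take the Gaussian-closure form sketched below, with h(x, ẋ) denoting the nonlinear damping-plus-restoring function in Eq. (4.1); this is included only as a hedged reminder of the usual result, not as a transcription of Eqs. (4.15) and (4.16).

```latex
% Usual Gaussian-closure linearization coefficients (assumed form)
\[
  \beta_{e} = E\!\left[\frac{\partial h(x,\dot{x})}{\partial\dot{x}}\right],
  \qquad
  k_{e} = E\!\left[\frac{\partial h(x,\dot{x})}{\partial x}\right],
\]
```

with the expectations evaluated over the (generally nonstationary) Gaussian approximation of the response.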
In the following, several examples are included to illustrate the application of the SL technique presented above. These are systems with (a) nonlinear restoring forces and linear damping, (b) nonlinear damping and linear restoring forces, and (c) nonlinear damping and nonlinear restoring forces.
Example I. Applying the method of SL, determine the stationary variances of displacement and velocity of a Duffing oscillator whose equation of motion is given by Eq. (4.1) in which
(I-2)
(I-3a,b,c)
(I-3d,e)
(I-4a,b)
(I-5)
(I-6)
(I-7a,b)
(I-8)
where Ω is the natural frequency of the associated linear oscillator, that is, when γ = 0 in Eq. (I-1).
The equivalent linear equation is
The approximation adopted here is to assume that x and dx/dt are stationary, independent, and with zero means. Consequently, for the oscillator defined by Eqs. (4.1) and (I-1)
Applying Eqs. (4.13), (4.14) and (I-3) immediately leads to
The variance of x can be determined from the following relation
where the power spectral density of x is given by
in which α(ω) and S_w(ω) are the frequency response function, or receptance, of the equivalent linear system and the power spectral density function of the excitation, respectively. Thus,
The stationary variance of x for the equivalent linear equation is given by
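Equations (I-5) through (I-8) are presumably the standard frequency-domain results for the equivalent linear oscillator ẍ + βeẋ + kex = w(t); a hedged sketch of that chain is

```latex
% Assumed standard frequency-domain relations for the equivalent linear oscillator
\[
  S_{x}(\omega) = \lvert\alpha(\omega)\rvert^{2} S_{w}(\omega),
  \qquad
  \lvert\alpha(\omega)\rvert^{2}
  = \frac{1}{(k_{e}-\omega^{2})^{2} + \beta_{e}^{2}\omega^{2}},
  \qquad
  \sigma_{x}^{2} = \int_{-\infty}^{\infty} S_{x}(\omega)\,d\omega
  = \frac{\pi S}{\beta_{e}k_{e}}.
\]
```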
Writing the stationary variance of x for the linear oscillator, that is, γ = 0,
(I-9)
(I-10)
(II-1)
(II-2)
(II-3)
(II-4)
(II-5)
and solving for the variance of x of the equivalent linear oscillator by making use of Eqs. (I-8) and (I-4b), one has
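As a worked numerical illustration, under the same assumed conventions used for the quadrature check in Section 3.3.1 and the usual Gaussian closure ke = Ω² + 3γ⟨x²⟩ (which Eq. (I-4b) presumably expresses), the resulting quadratic can be solved directly; for these numbers SL underestimates the exact value.

```python
# Statistical linearization estimate for the Duffing oscillator of Example I,
# assuming the Gaussian-closure stiffness k_e = Omega**2 + 3*gamma*<x^2> and the
# linear-oscillator variance <x^2> = pi*S/(beta_e*k_e) with beta_e = beta.
import numpy as np

beta, Omega, gamma, S = 0.1, 1.0, 1.0, 1.0   # same hypothetical data as in Section 3.3.1
c = np.pi * S / beta
# Fixed point of <x^2> = c / (Omega**2 + 3*gamma*<x^2>), i.e. a quadratic in <x^2>
var_sl = (-Omega**2 + np.sqrt(Omega**4 + 12.0 * gamma * c)) / (6.0 * gamma)
print("SL estimate of <x^2>:", var_sl)       # about 3.07, versus about 3.53 from the exact density
```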
Example II. Consider the nonlinear oscillator of Eq. (4.1) where
Let the equivalent linear equation be given by Eq. (I-2) above. Applying Eq. (4.15) one can show that
Similarly, applying Eq. (4.16) gives
From Ref. [4.28]
where the stationary probability density is assumed to be Gaussian since the excitation is Gaussian, that is
Therefore,
(II-6)