Reliability of Structures Chapter 2

Open University, 9–12/2013


  • 2.1. Basic definitions

    2.2. Properties of probability functions (CDF, PDF, PMF)

    2.3. Parameters of a random variable

    2.4. Common random variables

    2.5. Probability paper

    2.6. Interpretation of test data using statistics

    2.7. Conditional probability

    2.8. Random vectors

    2.9. Correlation

    2.10. Bayesian updating

  • 2.1.1 Sample space and Event

    - Consider experiments which test material strength, measure the depth of a beam, or determine the occurrence of a truck on a bridge during a period of time. In these experiments, the outcomes are unpredictable.
    - All possible outcomes of an experiment comprise a sample space.
    - A combination of one or more of the possible outcomes can be defined as an event.

    Example 2.1. Consider an experiment in which some number (n) of standard concrete cylinders is tested to determine their compressive strength, fc, as shown in Figure 2.1.

    Figure 2.1: Concrete cylinder test

  • 2.1.1 Sample space and Event

    - Assume that the test results of compressive strength are $E_1, E_2, E_3, \ldots, E_n$. Each $E_i$ is an event, and the collection of all possible outcomes is the sample space.

    The sample space defined for the concrete cylinder tests is called a continuous sample space.

    Example 2.2. A reinforced concrete beam is tested to determine one of 2 possible modes of failure:
    Mode 1: failure occurs by crushing of concrete.
    Mode 2: failure occurs by yielding of steel.

    This sample space has a finite number of elements, and it is called a discrete sample space. Each mode of failure can be considered an event.

  • 2.1.2 Axioms of probability

    - Let E represent an event, and let S represent a sample space. The notation P(·) is used to denote a probability function defined on events in the sample space.

    Axiom 1: For any event E, $0 \le P(E) \le 1$, where P(E) = probability of event E.

    Axiom 2: $P(S) = 1$.

    Axiom 3: Consider n mutually exclusive events $E_1, E_2, \ldots, E_n$. Then

    $P\left(\bigcup_{i=1}^{n} E_i\right) = \sum_{i=1}^{n} P(E_i)$

  • 2.1.3. Random variables

    - A random variable is defined as a function that maps events onto intervals on the axis of real numbers (Figure 2.2). A random variable is designated by a capital letter.
    - A random variable can be either a continuous or a discrete random variable.

    Continuous random variables

    Discrete random variables

    Figure 2.2: Representation of a random variable as a function

  • 2.1.4. Basic function

    The Probability Mass Function (PMF) is defined for discrete random variables as $p_X(x)$ = probability that a discrete random variable X is equal to a specific value x, where x is a real number. Mathematically,

    $p_X(x) = P(X = x)$

    Figure 2.3 illustrates an example of values of the PMF, in which
    $p_X(1) = P(X=1) = 0.05$
    $p_X(2) = P(X=2) = 0.20$
    $p_X(3) = P(X=3) = 0.65$
    $p_X(4) = P(X=4) = 0.10$

  • 2.1.4. Basic function

    The Cumulative Distribution Function (CDF) is defined for both discrete and continuous random variables as $F_X(x)$ = the total sum (or integral) of all probability functions (continuous and discrete) corresponding to values less than or equal to x. Mathematically,

    $F_X(x) = P(X \le x)$

    Figure 2.4 illustrates the CDF corresponding to the PMF in Figure 2.3, with
    $F_X(1) = P(X \le 1) = 0.05$
    $F_X(2) = P(X \le 2) = 0.25$
    $F_X(3) = P(X \le 3) = 0.90$
    $F_X(4) = P(X \le 4) = 1.0$
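Alongside the Matlab one-liners used in these slides, a quick cross-check is possible in Python: the CDF values above are just running sums of the PMF values from Figure 2.3 (a sketch; only the four quoted probabilities are assumed).

```python
# PMF values quoted from Figure 2.3
pmf = {1: 0.05, 2: 0.20, 3: 0.65, 4: 0.10}

# A valid PMF must sum to 1 over all possible values of X
assert abs(sum(pmf.values()) - 1.0) < 1e-9

# The CDF F_X(x) = P(X <= x) is the running sum of the PMF
cdf, total = {}, 0.0
for x in sorted(pmf):
    total += pmf[x]
    cdf[x] = round(total, 10)

# Matches Figure 2.4: F_X(1)=0.05, F_X(2)=0.25, F_X(3)=0.90, F_X(4)=1.0
assert cdf == {1: 0.05, 2: 0.25, 3: 0.9, 4: 1.0}
```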

  • 2.1.4. Basic function

    The Probability Density Function (PDF) is defined for continuous random variables as the first derivative of the cumulative distribution function. The PDF $f_X(x)$ and the CDF $F_X(x)$ for a continuous random variable are related as

    $f_X(x) = \frac{d}{dx} F_X(x)$ (2.12)

    $F_X(x) = \int_{-\infty}^{x} f_X(\xi)\, d\xi$ (2.13)

  • 2.1.4. Basic function

    To illustrate these relationships, consider a continuous random variable X. The PDF and CDF functions might look like those in Figures 2.5 and 2.6. Eq. 2.13,

    $F_X(x) = \int_{-\infty}^{x} f_X(\xi)\, d\xi$ (2.13)

    represents the shaded area under the PDF, as shown in Figure 2.7 for the case x = a.

  • Some important properties of the CDF are enumerated below. Any function which satisfies these six conditions can be considered a CDF.

    1. The definition of a CDF is the same for both discrete and continuous random variables.

    2. The CDF is a positive, nondecreasing function whose value is between 0 and 1:

    $0 \le F_X(x) \le 1$ (2.14)

    3. If $x_1 < x_2$, then $F_X(x_1) \le F_X(x_2)$.

    4. $F_X(-\infty) = 0$

    5. $F_X(+\infty) = 1$

    6. For continuous random variables,

    $P(a < X \le b) = F_X(b) - F_X(a) = \int_{a}^{b} f_X(\xi)\, d\xi$ (2.15)
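Property 6 (Eq. 2.15) is easy to verify numerically for any convenient PDF. The sketch below is an illustration, not from the text: it uses $f_X(x) = 2x$ on [0, 1], whose CDF is $F_X(x) = x^2$, and compares a numerical integral of the PDF with $F_X(b) - F_X(a)$.

```python
# Illustrative continuous PDF: f(x) = 2x on [0, 1], so F(x) = x**2
def f(x):
    return 2.0 * x

def F(x):
    return x * x

a, b = 0.3, 0.8

# Midpoint-rule approximation of the integral in Eq. 2.15
n = 100_000
h = (b - a) / n
integral = sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Left side of Eq. 2.15: P(a < X <= b) = F(b) - F(a)
prob = F(b) - F(a)

assert abs(integral - prob) < 1e-9
assert abs(prob - 0.55) < 1e-9
```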

  • 2.3.1 Basic parameters

    Consider a random variable X. The basic parameters of X include:

    + The mean value of X is denoted by $\mu_X$.

    Continuous random variables: $\mu_X = \int_{-\infty}^{+\infty} x\, f_X(x)\, dx$ (2.16)

    Discrete random variables: $\mu_X = \sum_{\text{all } x_i} x_i\, p_X(x_i)$ (2.17)

    + The expected value of X is denoted by E(X) and is equal to the mean value of the variable:

    $E(X) = \mu_X$ (2.18)

    + It is also possible to determine the expected value of $X^n$. This expected value is called the nth moment of X and is defined for continuous and discrete variables as

    $E(X^n) = \int_{-\infty}^{+\infty} x^n f_X(x)\, dx$ (2.19)

    $E(X^n) = \sum_{\text{all } x_i} x_i^n\, p_X(x_i)$ (2.20)

  • 2.3.1 Basic parameters

    + The variance (phương sai) of X is denoted by $\sigma_X^2$ and equals the expected value of $(X-\mu_X)^2$:

    Continuous random variables: $\sigma_X^2 = \int_{-\infty}^{+\infty} (x-\mu_X)^2 f_X(x)\, dx$ (2.21)

    Discrete random variables: $\sigma_X^2 = \sum_{\text{all } x_i} (x_i-\mu_X)^2\, p_X(x_i)$

    An important relationship exists among the mean, variance, and second moment:

    $\sigma_X^2 = E(X^2) - \mu_X^2$ (2.22)

    + The standard deviation of X is defined as the positive square root of the variance:

    $\sigma_X = \sqrt{\sigma_X^2}$ (2.23)

    + The nondimensional coefficient of variation, $V_X$, is defined as

    $V_X = \frac{\sigma_X}{\mu_X}$ (2.24)
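These parameter definitions can be exercised on the discrete PMF of Figure 2.3 (a Python sketch; the four probabilities are the quoted ones, everything else is arithmetic).

```python
import math

# PMF values from Figure 2.3
pmf = {1: 0.05, 2: 0.20, 3: 0.65, 4: 0.10}

mu = sum(x * p for x, p in pmf.items())                  # Eq. 2.17: mean
ex2 = sum(x**2 * p for x, p in pmf.items())              # Eq. 2.20: second moment
var_def = sum((x - mu)**2 * p for x, p in pmf.items())   # Eq. 2.21: variance
var_mom = ex2 - mu**2                                    # Eq. 2.22: same value

assert abs(var_def - var_mom) < 1e-9

sigma = math.sqrt(var_def)   # Eq. 2.23: standard deviation
V = sigma / mu               # Eq. 2.24: coefficient of variation
```

For this PMF the numbers come out to $\mu = 2.8$, $\sigma^2 = 0.46$, and $V \approx 0.24$.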

  • 2.3.2 Sample parameters

    In many practical applications, we do not know the true distribution, and we need to estimate parameters using test data. If a set of n observations $\{x_1, x_2, \ldots, x_n\}$ is obtained for a particular random variable X, then the true mean $\mu_X$ can be approximated by the sample mean $\bar{x}$, and the true standard deviation $\sigma_X$ can be approximated by the sample standard deviation $s_X$.

    The sample mean is calculated as

    $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ (2.25)

    In Matlab we use the command mean(data).

    The sample standard deviation is calculated as

    $s_X = \sqrt{\frac{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}{n-1}} = \sqrt{\frac{\sum_{i=1}^{n} x_i^2 - n\,\bar{x}^2}{n-1}}$ (2.26)

    In Matlab we use the command std(data).
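The Matlab commands mean(data) and std(data) have direct counterparts in other environments; the sketch below (Python, with a made-up data vector) also confirms that the two algebraic forms of Eq. 2.26 agree.

```python
import math
import statistics

data = [27.3, 29.1, 25.8, 30.2, 28.4, 26.9]  # hypothetical observations
n = len(data)

xbar = sum(data) / n  # Eq. 2.25: sample mean

# Eq. 2.26, definition form
s_def = math.sqrt(sum((x - xbar)**2 for x in data) / (n - 1))

# Eq. 2.26, computational form using the sum of squares
s_comp = math.sqrt((sum(x**2 for x in data) - n * xbar**2) / (n - 1))

assert abs(s_def - s_comp) < 1e-9
assert abs(xbar - statistics.mean(data)) < 1e-9
assert abs(s_def - statistics.stdev(data)) < 1e-9
```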

  • 2.3.3 Standard form

    Let X be a random variable. The standard form of X, denoted by Z, is defined as

    $Z = \frac{X - \mu_X}{\sigma_X}$ (2.27)

    The mean of Z is calculated as follows. Let g(X) be an arbitrary function of X; then the expected value (mean value) of g(X) is

    $E\left[g(X)\right] = \int_{-\infty}^{+\infty} g(x)\, f_X(x)\, dx$ (2.28)

    Using this definition with Z = g(X), we can show that

    $\mu_Z = E\left[\frac{X - \mu_X}{\sigma_X}\right] = \frac{1}{\sigma_X}\left[E(X) - E(\mu_X)\right] = \frac{1}{\sigma_X}\left(\mu_X - \mu_X\right) = 0$ (2.29)

    $\sigma_Z^2 = E(Z^2) - \mu_Z^2 = E\left[\left(\frac{X - \mu_X}{\sigma_X}\right)^2\right] - 0 = \frac{1}{\sigma_X^2}\, E\left[(X - \mu_X)^2\right] = \frac{\sigma_X^2}{\sigma_X^2} = 1$ (2.30)

    Thus, the mean of the standard form of a random variable is 0 and its variance is 1.
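The result above can also be checked empirically: standardizing a sample with its own sample mean and sample standard deviation yields a sample whose mean is 0 and whose standard deviation is 1 (a sketch with made-up data).

```python
import statistics

data = [12.0, 15.5, 9.8, 14.2, 11.1, 13.6, 10.9, 16.3]  # hypothetical observations

mu = statistics.mean(data)
sigma = statistics.stdev(data)

# Eq. 2.27 applied to each observation
z = [(x - mu) / sigma for x in data]

assert abs(statistics.mean(z)) < 1e-9          # Eq. 2.29: mean of Z is 0
assert abs(statistics.stdev(z) - 1.0) < 1e-9   # Eq. 2.30: std dev of Z is 1
```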

  • 2.4.1 Uniform Random Variables

    PDF: $f_X(x) = \begin{cases} \dfrac{1}{b-a}, & a \le x \le b \\ 0, & \text{otherwise} \end{cases}$ (2.31)

    Figure 2.9 shows the PDF and CDF of a uniform variable. The mean and variance are as follows:

    $\mu_X = \frac{a+b}{2}$ (2.32)

    $\sigma_X^2 = \frac{(b-a)^2}{12}$ (2.33)
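Eqs. 2.32 and 2.33 can be recovered by integrating the uniform PDF of Eq. 2.31 numerically (a sketch; the bounds a = 2, b = 10 are arbitrary).

```python
a, b = 2.0, 10.0

def f(x):
    # Eq. 2.31: uniform PDF
    return 1.0 / (b - a) if a <= x <= b else 0.0

# Midpoint-rule integration for the mean (Eq. 2.16) and variance (Eq. 2.21)
n = 100_000
h = (b - a) / n
xs = [a + (i + 0.5) * h for i in range(n)]
mu = sum(x * f(x) for x in xs) * h
var = sum((x - mu)**2 * f(x) for x in xs) * h

assert abs(mu - (a + b) / 2) < 1e-6        # Eq. 2.32
assert abs(var - (b - a)**2 / 12) < 1e-6   # Eq. 2.33
```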

  • 2.4.2 Normal Random Variables

    $f_X(x) = \frac{1}{\sigma_X \sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{x - \mu_X}{\sigma_X}\right)^2\right]$ (2.34)

    (Here exp denotes the exponential function, $e \approx 2.7183$.)

    Figure 2.10 shows the PDF and CDF of a normal random variable. There is no closed-form solution for the CDF of a normal random variable. However, tables have been developed to provide values of the CDF for the special case in which $\mu_X = 0$ and $\sigma_X = 1$. If we substitute these values in Eq. 2.34, we get the PDF of the standard normal variable z, denoted by $\varphi(z)$:

    $\varphi(z) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2}z^2\right) = f_Z(z)$ (2.35)

  • 2.4.2 Normal Random Variables

    The CDF of the standard normal variable is typically denoted by $\Phi(z)$. Values of $\Phi(z)$ are listed in Appendix B for values of z ranging from 0 to −8.9. Values of $\Phi(z)$ for z > 0 can also be obtained from Appendix B by applying the symmetry property of the normal distribution:

    $\Phi(z) = 1 - \Phi(-z)$ (2.36)

    Figures 2.11 and 2.12 show the shapes of the PDF $\varphi(z)$ and the CDF $\Phi(z)$ of a normal random variable.

  • In Matlab we use the command normcdf(X, sample_mean, standard_deviation).

    Example: normcdf(1.13, 0, 1) will give the result 0.8708.

    Similarly, for the PDF we use normpdf(X, sample_mean, standard_deviation).
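Outside Matlab, $\Phi(z)$ is usually obtained from the error function, since $\Phi(z) = \tfrac{1}{2}\left[1 + \mathrm{erf}(z/\sqrt{2})\right]$ exactly. A Python sketch reproducing the normcdf example above:

```python
import math

def phi(z):
    # Eq. 2.35: standard normal PDF
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    # Standard normal CDF written in terms of the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Same value as the Matlab example normcdf(1.13, 0, 1)
assert round(Phi(1.13), 4) == 0.8708

# Symmetry property, Eq. 2.36
assert abs(Phi(1.13) - (1.0 - Phi(-1.13))) < 1e-12
```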

  • 2.4.2 Normal Random Variables

    The probability information for the standard normal random variable can be used to obtain the CDF and PDF values for an arbitrary normal random variable by performing a simple coordinate transformation. Let X be a general normal random variable, and let Z be the standard form of X. Then, by rearranging Eq. 2.27, we can show that

    $X = \sigma_X Z + \mu_X$ (2.37)

    By the definition of the CDF and PDF, we can write

    $F_X(x) = P(X \le x) = P\left(\sigma_X Z + \mu_X \le x\right) = P\left(Z \le \frac{x - \mu_X}{\sigma_X}\right)$ (2.38)

    or $F_X(x) = \Phi\left(\frac{x - \mu_X}{\sigma_X}\right) = F_Z(z)$ (2.39)

    $f_X(x) = \frac{d}{dx}F_X(x) = \frac{d}{dx}\Phi\left(\frac{x - \mu_X}{\sigma_X}\right) = \frac{1}{\sigma_X}\,\varphi\left(\frac{x - \mu_X}{\sigma_X}\right)$ (2.40)

  • 2.4.2 Normal Random Variables

    Therefore, using the relationships in Eqs. 2.39 and 2.40, one can construct the distribution functions for an arbitrary normal random variable (given $\mu_X$ and $\sigma_X$) using the information provided in Appendix B. Examples of CDFs and PDFs for normal random variables are provided in Figure 2.13.

  • 2.4.2 Normal Random Variables

    The distribution functions for a normal random variable have some important properties, which are summarized as follows:

    1. The PDF $f_X(x)$ is symmetrical about the mean $\mu_X$:

    $f_X(\mu_X + x) = f_X(\mu_X - x)$ (2.41)

    This property is illustrated in Figure 2.14.

    2. Because the normal distribution is symmetrical about its mean value, the sum of $F_X(\mu_X + x)$ and $F_X(\mu_X - x)$ is equal to 1. That is,

    $F_X(\mu_X + x) + F_X(\mu_X - x) = 1$ (2.42)

    This is a generalization of the property expressed for $\Phi(z)$ in Eq. 2.36.

  • 2.4.2 Normal Random Variables. Example 2.3:

    (a) If Z is a standard normal random variable and z = −2.16, what are the PDF and CDF values?
    (b) If z = +1.51, what is $\Phi(1.51)$?
    (c) Given $\Phi(z) = 0.80 \times 10^{-4}$, what is the corresponding value of z?

    Solution:

    (a) From Appendix B, $\Phi(-2.16) = 0.0154$. From Eq. 2.35, $\varphi(-2.16) = 0.0387$.
    In Matlab: phi = 1/sqrt(2*pi)*exp(-1/2*(-2.16)^2)

    (b) From Appendix B, $\Phi(-1.51) = 0.0655$. Using Eq. 2.36, $\Phi(1.51) = 1 - 0.0655 = 0.9345$.

    (c) From Appendix B, we note that this value of $\Phi(z)$ does not correspond to a specific tabulated value of z. Therefore, we must use interpolation. For z = −3.77, $\Phi(-3.77) = 0.816 \times 10^{-4}$, and for z = −3.78, $\Phi(-3.78) = 0.784 \times 10^{-4}$. Interpolating between these values, the desired value of z is about −3.775.
    In Matlab: a = -3.78 + (3.78 - 3.77)*(0.8 - 0.784)/(0.816 - 0.784)
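All three parts of Example 2.3 can be reproduced without Appendix B by using the erf-based $\Phi$ (a sketch; the rounded values match the tabulated ones).

```python
import math

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

# (a) z = -2.16
assert round(Phi(-2.16), 4) == 0.0154
assert round(phi(-2.16), 4) == 0.0387

# (b) Phi(1.51) from the symmetry property, Eq. 2.36
assert round(1.0 - Phi(-1.51), 4) == 0.9345

# (c) the interpolated z = -3.775 indeed gives Phi(z) close to 0.80e-4
assert abs(Phi(-3.775) - 0.80e-4) < 0.02e-4
```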

  • 2.4.2 Normal Random Variables. Example 2.4:

    Assume X is a normal random variable with $\mu_X = 1500$ and $\sigma_X = 200$. The PDF of the variable is shown in Figure 2.15. Calculate the following: (a) $F_X(1300)$; (b) $F_X(1900)$; (c) $F_X(1700)$; (d) $f_X(1300)$; (e) $f_X(1500)$.

    Solution:

    $F_X(1300) = \Phi\left(\frac{1300-1500}{200}\right) = \Phi(-1) = 0.159$

    $F_X(1900) = \Phi\left(\frac{1900-1500}{200}\right) = \Phi(2) = 1 - \Phi(-2) = 1 - 0.0228 = 0.977$

    $F_X(1700) = F_X(1500+200) = 1 - F_X(1500-200) = 1 - 0.159 = 0.841$

    $f_X(1300) = \frac{1}{200}\,\varphi\left(\frac{1300-1500}{200}\right) = \frac{1}{200}\,\varphi(-1) = \frac{0.242}{200} = 0.00121$

    $f_X(1500) = \frac{1}{200}\,\varphi\left(\frac{1500-1500}{200}\right) = \frac{1}{200}\,\varphi(0) = \frac{0.399}{200} = 0.00199$
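Example 2.4 follows directly from Eqs. 2.39 and 2.40; a sketch with the erf-based $\Phi$ standing in for Appendix B confirms the hand computations.

```python
import math

mu, sigma = 1500.0, 200.0

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def F_X(x):
    # Eq. 2.39: CDF via the coordinate transformation
    return Phi((x - mu) / sigma)

def f_X(x):
    # Eq. 2.40: PDF via the coordinate transformation
    return phi((x - mu) / sigma) / sigma

assert round(F_X(1300), 3) == 0.159
assert round(F_X(1900), 3) == 0.977
assert round(F_X(1700), 3) == 0.841
assert round(f_X(1300), 5) == 0.00121
assert round(f_X(1500), 5) == 0.00199
```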

  • 2.4.2 Normal Random Variables

    As we will see later, it is often necessary to calculate the inverse of the CDF of the standard normal distribution function $\Phi(z)$. Although the inverse does not exist in closed form, an approximate formula for the inverse does exist, and it gives reasonable results over a wide range of probability values. Let $p = \Phi(z)$. The inverse problem is to find $z = \Phi^{-1}(p)$. The following formula can be used if p is less than or equal to 0.5:

    $z = \Phi^{-1}(p) \approx -t + \frac{c_0 + c_1 t + c_2 t^2}{1 + d_1 t + d_2 t^2 + d_3 t^3} \quad \text{for } p \le 0.5$ (2.43)

    where $c_0 = 2.515517$; $c_1 = 0.802853$; $c_2 = 0.010328$; $d_1 = 1.432788$; $d_2 = 0.189269$; $d_3 = 0.001308$; and

    $t = \sqrt{\ln\left(\frac{1}{p^2}\right)}$ (2.44)

    For p > 0.5, $\Phi^{-1}$ is calculated for $p^* = 1 - p$, and then we use the relationship

    $z = \Phi^{-1}(p) = -\Phi^{-1}(p^*)$ (2.45)
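Eqs. 2.43 through 2.45 translate almost line by line into code. A sketch using the coefficients above (the approximation is good to a few times $10^{-4}$ over its range):

```python
import math

C = (2.515517, 0.802853, 0.010328)
D = (1.432788, 0.189269, 0.001308)

def inv_Phi(p):
    """Approximate inverse of the standard normal CDF, Eqs. 2.43-2.45."""
    if not 0.0 < p < 1.0:
        raise ValueError("p must be in (0, 1)")
    if p > 0.5:
        return -inv_Phi(1.0 - p)            # Eq. 2.45
    t = math.sqrt(math.log(1.0 / p**2))      # Eq. 2.44
    num = C[0] + C[1] * t + C[2] * t * t
    den = 1.0 + D[0] * t + D[1] * t * t + D[2] * t**3
    return -t + num / den                    # Eq. 2.43

# Checks against values used elsewhere in the chapter
assert abs(inv_Phi(0.0154) - (-2.16)) < 5e-3
assert abs(inv_Phi(0.80e-4) - (-3.775)) < 5e-3
assert abs(inv_Phi(0.5)) < 5e-4
assert abs(inv_Phi(0.9345) - 1.51) < 5e-3
```

For example, inv_Phi(0.0154) returns approximately −2.16, matching part (a) of Example 2.3.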

  • 2.4.3 Lognormal Random Variables

    - A random variable X is a lognormal random variable if Y = ln(X) is normally distributed. A lognormal random variable is defined for positive values only (x > 0). The PDF and CDF can be calculated using the distributions $\varphi(z)$ and $\Phi(z)$ for the standard normal random variable Z, as follows.

    $F_X(x) = P(X \le x) = P(\ln X \le \ln x) = P(Y \le y) = F_Y(y)$ (2.46)

    - Since Y is normally distributed, we can use the standard normal functions as discussed in Section 2.4.2. Specifically,

    $F_X(x) = F_Y(y) = \Phi\left(\frac{y - \mu_Y}{\sigma_Y}\right)$ (2.47)

    where $y = \ln(x)$, $\mu_Y = \mu_{\ln(X)}$ = mean value of ln(X), and $\sigma_Y = \sigma_{\ln(X)}$ = standard deviation of ln(X).

  • 2.4.3 Lognormal Random Variables

    - These quantities can be expressed as functions of $\mu_X$, $\sigma_X$, and $V_X$ using the following formulas:

    $\sigma_{\ln(X)}^2 = \ln\left(V_X^2 + 1\right)$ (2.48)

    $\mu_{\ln(X)} = \ln\left(\mu_X\right) - \frac{1}{2}\sigma_{\ln(X)}^2$ (2.49)

    If $V_X$ is less than 0.2, the following approximations can be used to find $\sigma_{\ln(X)}^2$ and $\mu_{\ln(X)}$:

    $\sigma_{\ln(X)}^2 \approx V_X^2$ (2.50)

    $\mu_{\ln(X)} \approx \ln\left(\mu_X\right)$ (2.51)

    For the PDF function, using Eq. 2.12, we can show that

    $f_X(x) = \frac{d}{dx}F_X(x) = \frac{d}{dx}\Phi\left(\frac{\ln(x) - \mu_{\ln(X)}}{\sigma_{\ln(X)}}\right) = \frac{1}{x\,\sigma_{\ln(X)}}\,\varphi\left(\frac{\ln(x) - \mu_{\ln(X)}}{\sigma_{\ln(X)}}\right)$ (2.52)

  • 2.4.3 Lognormal Random Variables

    The general shape of the PDF function for a lognormal variable is shown in Figure 2.16.

    In Matlab we use the command cdf('logn', x, mu, sigma), and similarly for the PDF, pdf('logn', x, mu, sigma), where mu and sigma are the mean and standard deviation of ln(X).

  • 2.4.3 Lognormal Random Variables

    Example 2.5:

    Let X be a lognormal random variable with a mean value of 250 and a standard deviation of 30. Calculate $F_X(200)$ and $f_X(200)$.

    Solution:

    $V_X = \frac{30}{250} = 0.12 \;; \quad \sigma_{\ln(X)}^2 = \ln\left(V_X^2 + 1\right) = 0.0143 \;; \quad \sigma_{\ln(X)} = 0.1196$

    $\mu_{\ln(X)} = \ln\left(\mu_X\right) - \frac{1}{2}\sigma_{\ln(X)}^2 = \ln(250) - 0.5(0.0143) = 5.51$

    $F_X(200) = \Phi\left(\frac{\ln(200) - 5.51}{0.1196}\right) = \Phi(-1.77) = 0.0384$

    $f_X(200) = \frac{1}{x\,\sigma_{\ln(X)}}\,\varphi\left(\frac{\ln(x) - \mu_{\ln(X)}}{\sigma_{\ln(X)}}\right) = \frac{\varphi(-1.77)}{(200)(0.1196)} = \frac{0.0833}{23.92} = 0.00348$
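A full-precision recomputation of Example 2.5 is sketched below. Note that the hand solution carries the rounded value $\mu_{\ln(X)} \approx 5.51$ into the last step; keeping all digits gives $z \approx -1.81$, $F_X(200) \approx 0.035$, and $f_X(200) \approx 0.0033$, slightly different from the quoted 0.0384 and 0.00348.

```python
import math

mu_X, sigma_X = 250.0, 30.0

V = sigma_X / mu_X                           # coefficient of variation = 0.12
sig_ln = math.sqrt(math.log(V**2 + 1.0))     # Eq. 2.48
mu_ln = math.log(mu_X) - 0.5 * sig_ln**2     # Eq. 2.49

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

x = 200.0
z = (math.log(x) - mu_ln) / sig_ln
F = Phi(z)                  # Eq. 2.47: lognormal CDF at x = 200
f = phi(z) / (x * sig_ln)   # Eq. 2.52: lognormal PDF at x = 200

assert abs(V - 0.12) < 1e-12
assert abs(sig_ln - 0.1196) < 5e-4
assert abs(z - (-1.806)) < 1e-3
assert abs(F - 0.0354) < 5e-4
```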

  • - Probability paper can be used to graphically determine whether a set of experimental data follows a particular probability distribution.

    - For example, the CDF of the normal distribution has an S-shape, as shown in Figure 2.20. The basic idea behind normal probability paper is to redefine the vertical scale so that the normal CDF will be plotted as a straight line. The slope and y-intercept of the graph can be used to determine the mean and standard deviation of the distribution.

  • - Consider a normal random variable X with mean value $\mu_X$ and standard deviation $\sigma_X$. Now imagine a transformation in which the S-shaped CDF is straightened, as shown in Figure 2.20. The transformation is such that each point on the original CDF can only move vertically, up or down.

  • - In commercial normal probability paper, the transformation is accomplished by altering the scale of the vertical axis, as shown in Figure 2.21. Observe that the values on the left vertical axis are not evenly spaced.

    - The standardized form Z of a normal random variable X is

    $Z = \frac{X - \mu_X}{\sigma_X} = \frac{1}{\sigma_X}X - \frac{\mu_X}{\sigma_X}$ (2.75)

  • - For any realization x of the normal random variable X, the corresponding standardized value is

    $z = \frac{x - \mu_X}{\sigma_X} = \frac{1}{\sigma_X}x - \frac{\mu_X}{\sigma_X}$ (2.76)

    The corresponding probability based on the normal CDF would be

    $F_X(x) = \Phi\left(\frac{x - \mu_X}{\sigma_X}\right) = p$ (2.77)

    $\Phi^{-1}(p) = \frac{1}{\sigma_X}x - \frac{\mu_X}{\sigma_X}$ (2.78)

    Eq. 2.78 represents a linear relationship between $z = \Phi^{-1}(p)$ and x, and this provides the rationale behind normal probability paper. The vertical axis on the right side of Figure 2.21 was obtained by transforming the probability values of the left scale. If $\Phi^{-1}(p)$ versus x is plotted on standard graph paper, a straight-line plot will result.

  • Eq. 2.78 is illustrated in Figure 2.22. The uneven probability scale and the corresponding linear scale are both shown on the left side of the plot. Data points from a general normal distribution are plotted, and a straight line is obtained. Observe that the value of x corresponding to $F_X(x) = 0.5$ [or $z = \Phi^{-1}(0.5) = 0$] is the mean value $\mu_X$. From Eq. 2.78, we note that the slope of the straight line is the inverse of the standard deviation. If we move away from the mean value by an amount $n\sigma_X$, where n is an integer, the corresponding value of z is equal to n. This is shown by the dotted lines in Figure 2.22.

  • Now consider the practical application of normal probability paper to evaluate experimental data. Consider an experiment or test in which N values of some random variable X are obtained. These values will be denoted {x}. To be able to use normal probability paper, it is necessary to associate a probability value with each x value. The procedure is as follows:

    1. Arrange the data values in increasing order.
    2. Associate with each $x_i$ a cumulative probability $p_i$ equal to $p_i = i/(N+1)$ (2.79)
    3. If commercial normal probability paper is being used, then plot the points $(x_i, p_i)$ and go to step 6. Otherwise, go to step 4.
    4. For each $p_i$, determine $z_i = \Phi^{-1}(p_i)$. Eq. 2.43 can be useful in this step.
    5. Plot the coordinates $(x_i, z_i)$ on standard linear graph paper.
    6. If the plot appears to follow a straight line, then it is reasonable to conclude that the data can be modeled using a normal distribution. Sketch a best-fit line for the data. The slope of the line will be equal to $1/\sigma_X$, and the value of x at which the probability is 0.5 (or z = 0) will be equal to $\mu_X$.

  • Example 2.7:

    Consider the following set of 9 data points: {x} = {6.5, 5.3, 5.5, 5.9, 6.5, 6.8, 7.2, 5.9, 6.4}. Plot the data on normal probability paper.

    Solution:
    It is convenient to carry out Steps 1 and 2 by setting up a table, as seen in Table 2.1.

  • Example 2.7: (cont)

    The values of $(x_i, p_i)$ or $(x_i, z_i)$ are plotted on probability paper in Figure 2.23. The data plotted in Figure 2.23 appear to follow a straight line, and thus we might conclude that the data follow a normal distribution. For comparison, a reference straight line is plotted based on the sample statistics $\bar{x} = 6.2$ and $s_X = 0.62$.

    In Matlab we use the command normplot(x).

    Other examples of actual data plotted on normal probability paper are seen in Figures 2.24, 2.25, and 2.26.
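The six-step procedure can be scripted end to end for the Example 2.7 data (a Python sketch; $z_i$ is computed with the Eq. 2.43 approximation instead of commercial probability paper).

```python
import math

data = [6.5, 5.3, 5.5, 5.9, 6.5, 6.8, 7.2, 5.9, 6.4]  # Example 2.7

# Step 1: sort the data
xs = sorted(data)
N = len(xs)

# Step 2: plotting positions p_i = i / (N + 1), Eq. 2.79
ps = [i / (N + 1) for i in range(1, N + 1)]

# Step 4: z_i via the Eq. 2.43 rational approximation of the inverse CDF
C = (2.515517, 0.802853, 0.010328)
D = (1.432788, 0.189269, 0.001308)

def inv_Phi(p):
    if p > 0.5:
        return -inv_Phi(1.0 - p)
    t = math.sqrt(math.log(1.0 / p**2))
    return -t + (C[0] + C[1]*t + C[2]*t*t) / (1.0 + D[0]*t + D[1]*t*t + D[2]*t**3)

zs = [inv_Phi(p) for p in ps]

# Steps 5-6: the pairs (x_i, z_i) would be plotted; the reference line
# uses the sample statistics quoted in the example:
xbar = sum(xs) / N
sX = math.sqrt(sum((x - xbar)**2 for x in xs) / (N - 1))

assert round(xbar, 1) == 6.2
assert round(sX, 2) == 0.62
assert abs(zs[4]) < 1e-3   # the median point, p = 0.5, maps to z = 0
```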


  • In Section 2.3.2, we discussed how to calculate the sample mean and sample standard deviation for a set of observed data. In Section 2.5, we discussed how to plot the data on normal probability paper to see if the data follow a normal distribution. Another graphical technique, known as the histogram, is sometimes useful. The basic idea is to count the number of data points that fall into predefined intervals and then make a bar graph. By looking at the bar graph, you can observe trends in the data and visually determine the distribution of the data. This idea is best explained by examples.

  • Example 2.9:

    Suppose we test 100 concrete cylinders and determine the compressive strength of each specimen. We then establish intervals of values and count the number of observed values that fall in each interval. This is shown in Table 2.3. Then, for each interval, we calculate the relative frequency of occurrence (percentage of observations), which is shown in the third column of Table 2.3. Then we add up to get a cumulative frequency value, as shown in the last column of Table 2.3.
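The bookkeeping behind Table 2.3 is a few lines of code. The data and interval edges below are made up for illustration; only the method mirrors Example 2.9.

```python
# Hypothetical compressive-strength data (ksi), standing in for Table 2.3's sample
data = [3.1, 3.4, 3.6, 3.8, 3.9, 4.0, 4.1, 4.1, 4.3, 4.4,
        4.5, 4.6, 4.7, 4.8, 5.0, 5.1, 5.3, 5.6, 5.8, 6.1]
bins = [3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5]  # interval edges

# Count observations per interval [lo, hi)
counts = [sum(1 for x in data if lo <= x < hi) for lo, hi in zip(bins, bins[1:])]

# Relative frequencies (the "column 3" of a table like 2.3)
rel = [c / len(data) for c in counts]

# Cumulative frequencies (the last column)
cum, total = [], 0.0
for r in rel:
    total += r
    cum.append(total)

assert sum(counts) == len(data)
assert abs(cum[-1] - 1.0) < 1e-9
```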

  • Example 2.9: (cont)

    If we plot the values in column 3 of Table 2.3 versus the interval values in column 1, we get a relative frequency histogram plot, as seen in Figure 2.27.

  • Example 2.9: (cont)

    If we plot the values in column 4 of Table 2.3 versus the interval values in column 1, we get a cumulative frequency histogram, as seen in Figure 2.28.

  • Example 2.9: (cont)

    Figure 2.29 shows how the interval size can drastically influence the overall appearance of relative frequency and cumulative frequency histograms.

  • Given two events E1 and E2, the conditional probability of E1 occurring given that E2 has already occurred is defined as

    $P(E_1 \mid E_2) = \frac{P(E_1 \cap E_2)}{P(E_2)}$ (2.80)

    where $E_1 \mid E_2$ is the notation commonly used to denote the case in which event E1 occurs given that event E2 has occurred. The quantity $P(E_1 \cap E_2)$ represents the probability that events E1 and E2 occur simultaneously.

    - If two events are statistically independent, then the occurrence of one event has no effect on the probability of occurrence of the other event. In this case, we have

    $P(E_1 \mid E_2) = P(E_1) \quad \text{and} \quad P(E_2 \mid E_1) = P(E_2)$ (2.81)

    To illustrate the concept of conditional probability, consider the following example:
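Eq. 2.80 can also be illustrated with a simple discrete experiment not taken from the text: one roll of a fair die, with E1 = "the roll is even" and E2 = "the roll is at least 4".

```python
from fractions import Fraction

# Sample space of one roll of a fair die
S = [1, 2, 3, 4, 5, 6]
P = Fraction(1, 6)  # each outcome equally likely

E1 = {x for x in S if x % 2 == 0}   # even roll
E2 = {x for x in S if x >= 4}       # roll of at least 4

P_E1 = len(E1) * P
P_E2 = len(E2) * P
P_E1_and_E2 = len(E1 & E2) * P

# Eq. 2.80: P(E1 | E2) = P(E1 and E2) / P(E2)
P_E1_given_E2 = P_E1_and_E2 / P_E2

assert P_E1_given_E2 == Fraction(2, 3)

# Since P(E1) = 1/2 differs from P(E1 | E2) = 2/3, the events are
# not statistically independent (compare Eq. 2.81).
assert P_E1 != P_E1_given_E2
```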

  • Example 2.11:

    Consider tests of concrete beams. Two parameters are observed: cracking moment and ultimate moment. Let $M_u$ and $M_{cr}$ denote the ultimate bending moment and the cracking moment, respectively. Define event E1 as the case when $M_u \ge 150$ kip-feet (k-ft), and define event E2 as the case when $M_{cr} \ge 100$ k-ft.
    A conditional probability that the ultimate moment will be reached given that the cracking moment has been reached would be written as follows:

    $P(E_1 \mid E_2) = \frac{P(E_1 \cap E_2)}{P(E_2)} = P\left(M_u \ge 150 \text{ given } M_{cr} \ge 100\right) = \frac{P\left(M_u \ge 150 \text{ AND } M_{cr} \ge 100\right)}{P\left(M_{cr} \ge 100\right)}$

  • - A random vector is defined as a vector of random variables $\{X_1, X_2, \ldots, X_n\}$. When we deal with multiple random variables in a random vector, we can define distribution functions and density functions similar to those defined for single random variables. The joint cumulative distribution function is defined as

    $F_{X_1, X_2, \ldots, X_n}(x_1, x_2, \ldots, x_n) = P\left(X_1 \le x_1,\, X_2 \le x_2,\, \ldots,\, X_n \le x_n\right)$ (2.82)

    In Eq. 2.82, the right-hand side of the equation should be read as the probability of the intersection of the events $X_1 \le x_1$ and $X_2 \le x_2$ and ... and $X_n \le x_n$.

    This function is defined for both discrete and continuous random variables. For continuous random variables, the joint probability density function is defined as

    $f_{X_1, X_2, \ldots, X_n}(x_1, x_2, \ldots, x_n) = \frac{\partial^n F}{\partial x_1 \cdots \partial x_n}(x_1, x_2, \ldots, x_n)$ (2.83)

    - For discrete random variables, the joint probability mass function is defined as

    $p_{X_1, X_2, \ldots, X_n}(x_1, x_2, \ldots, x_n) = P\left(X_1 = x_1,\, X_2 = x_2,\, \ldots,\, X_n = x_n\right)$ (2.84)

  • - For continuous random variables, we can define a marginal density function for each $X_i$ as

    $f_{X_i}(x_i) = \int_{-\infty}^{+\infty}\!\!\cdots\!\int_{-\infty}^{+\infty} f_{X_1 X_2 \ldots X_n}(x_1, \ldots, x_n)\, dx_1\, dx_2 \cdots dx_{i-1}\, dx_{i+1} \cdots dx_n$ (2.85)

    In Eq. 2.85, it is important to note that there are n−1 integrations involved. The integrals are formulated for all variables except $X_i$.

  • The preceding formulas are completely general, but they can be confusing. To help illustrate the definitions of the joint cumulative distribution function, joint density function, and marginal density functions, consider the case of two continuous random variables X and Y. The joint cumulative distribution function for X and Y is defined as

    $F_{XY}(x, y) = P(X \le x,\, Y \le y)$ (2.86)

    The joint probability density function is defined as

    $f_{XY}(x, y) = \frac{\partial^2 F_{XY}}{\partial x\, \partial y}(x, y)$ (2.87)

    The marginal density functions are

    $f_X(x) = \int_{-\infty}^{+\infty} f_{XY}(x, y)\, dy$ (2.88)

    $f_Y(y) = \int_{-\infty}^{+\infty} f_{XY}(x, y)\, dx$ (2.89)

  • In Section 2.7, we introduced the concept of conditional probability. This concept can be extended to define a conditional density function for a random vector. Again consider the case of two continuous random variables X and Y. The conditional density function is defined as

    $f_{X|Y}(x \mid y) = \frac{f_{XY}(x, y)}{f_Y(y)} = \frac{\text{joint density}}{\text{marginal density}}$ (2.90)

    - If random variables X and Y are statistically independent, then

    $f_{X|Y}(x \mid y) = f_X(x) \quad \text{and} \quad f_{Y|X}(y \mid x) = f_Y(y)$ (2.91)

    which implies that

    $f_{XY}(x, y) = f_X(x)\, f_Y(y)$ (2.92)

  • Example 2.12:

    Consider a set of tests in which two quantities are measured: modulus of elasticity, X1, and compressive strength, X2. Since the values of these variables vary from test to test, as seen in Table 2.5, it is appropriate to treat them as random variables. Using the concept of histograms discussed in Section 2.6, we can get an idea of the general shape of the probability density function (PDF) for each individual variable, as well as the joint probability density function and joint probability distribution function.

  • Example 2.12: (cont)

    For each individual variable, we define appropriate intervals of values and then count the number of observations within each interval. The resulting relative frequency histogram for each variable is shown in Figure 2.33.

  • Example 2.12: (cont)

    To consider the joint histogram, we need to define two-dimensional intervals. For example, one interval would be for values of X1 (E) between 3.0×10^6 psi and 3.25×10^6 psi, and values of X2 (fc) between 2.5×10^3 psi and 3.0×10^3 psi. Looking at Table 2.5, we see that there are 15 samples that satisfy both requirements simultaneously. Therefore, we have 15 observations in this interval out of 100 total observations, and the relative frequency value is 15/100 = 0.15.

  • Example 2.12: (cont)

    This value 0.15 is indicated as the shaded block in Figure 2.34, the relative frequency histogram.

  • Example 2.12: (cont)

    A cumulative frequency histogram can also be constructed, as shown in Figure 2.35. For example, to find the cumulative value of the number of times that X1 is less than or equal to 3.0×10^6 psi and X2 is less than or equal to 2.5×10^3 psi, we add all the relative frequency values in Figure 2.34 that satisfy this requirement. The result would be 0 + 0.04 + 0.01 + 0 + 0.02 + 0.04 + 0.09 + 0.12 = 0.32.

  • 2.9.1 Basic definitions

    - Let X and Y be two random variables with means $\mu_X$ and $\mu_Y$ and standard deviations $\sigma_X$ and $\sigma_Y$. The covariance (hiệp phương sai) of X and Y is defined as

    $\mathrm{CoV}(X, Y) = E\left[(X - \mu_X)(Y - \mu_Y)\right] = E\left[XY\right] - \mu_X \mu_Y$ (2.93)

    - Note that CoV(X,Y) = CoV(Y,X). If X and Y are continuous random variables, then this formula becomes

    $\mathrm{CoV}(X, Y) = \int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} (x - \mu_X)(y - \mu_Y)\, f_{XY}(x, y)\, dx\, dy$ (2.94)

    - The correlation coefficient (hệ số tương quan) is an important parameter in structural reliability calculations. The correlation coefficient between 2 random variables X and Y is defined as

    $\rho_{XY} = \frac{\mathrm{CoV}(X, Y)}{\sigma_X \sigma_Y} \;; \quad -1 \le \rho_{XY} \le 1$ (2.95)

    In Matlab we use the command corrcoef(X,Y).
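The definitions in Eqs. 2.93 and 2.95 can be checked on a small, made-up joint PMF (a sketch; the Matlab corrcoef command and numpy.corrcoef compute the sample version of the same quantity).

```python
import math

# Illustrative joint PMF of two discrete random variables X and Y
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
assert abs(sum(joint.values()) - 1.0) < 1e-9

mu_X = sum(x * p for (x, y), p in joint.items())
mu_Y = sum(y * p for (x, y), p in joint.items())

# Eq. 2.93: CoV(X, Y) = E[XY] - mu_X * mu_Y
e_XY = sum(x * y * p for (x, y), p in joint.items())
cov = e_XY - mu_X * mu_Y

sd_X = math.sqrt(sum((x - mu_X)**2 * p for (x, y), p in joint.items()))
sd_Y = math.sqrt(sum((y - mu_Y)**2 * p for (x, y), p in joint.items()))

# Eq. 2.95: correlation coefficient
rho = cov / (sd_X * sd_Y)

assert abs(rho - 0.6) < 1e-9
assert -1.0 <= rho <= 1.0
```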

  • 2.9.1 Basic definitions

    - The value of $\rho_{XY}$ indicates the degree of linear dependence between the 2 random variables X and Y.
    - If $|\rho_{XY}|$ is close to 1, then X and Y are linearly correlated.
    - If $|\rho_{XY}|$ is close to zero, then the two variables are not linearly related to each other.
    - When $\rho_{XY}$ is close to zero, there may still be some nonlinear relationship between the two variables. Figure 2.36 illustrates the concept of correlation.

  • 2.9.1 Basic definitions

    - When ρXY = 0, Eq. 2.95 gives Cov(X,Y) = 0, and Eq. 2.93 then gives E[XY] = μX μY.
    - When dealing with a random vector, a covariance matrix is used to describe the correlation between all possible pairs of the random variables in the vector. For a random vector with n random variables, the covariance matrix [C] is defined as

            | Cov(X1,X1)  Cov(X1,X2)  …  Cov(X1,Xn) |
      [C] = | Cov(X2,X1)  Cov(X2,X2)  …  Cov(X2,Xn) |   (2.96)
            |     ⋮           ⋮        ⋱      ⋮     |
            | Cov(Xn,X1)  Cov(Xn,X2)  …  Cov(Xn,Xn) |

    - In some cases it is more convenient to work with a matrix of coefficients of correlation [ρ], defined as

            | ρ11  ρ12  …  ρ1n |
      [ρ] = | ρ21  ρ22  …  ρ2n |   (2.97)
            |  ⋮    ⋮   ⋱   ⋮  |
            | ρn1  ρn2  …  ρnn |

  • 2.9.1 Basic definitions

    - Note two things about the matrices [C] and [ρ]:
    + First, they are symmetric, since Cov(Xi,Xj) = Cov(Xj,Xi) and ρij = ρji.
    + Second, the diagonal terms of [C] are Cov(Xi,Xi) = Var(Xi) = σ²Xi, and the diagonal terms of [ρ] are equal to 1.
    - If all n random variables are uncorrelated, then the off-diagonal terms in Eq. 2.96 equal zero and the covariance matrix becomes a diagonal matrix of the form

            | σ²X1   0    …   0   |
      [C] = |  0   σ²X2   …   0   |   (2.98)
            |  ⋮     ⋮    ⋱   ⋮   |
            |  0     0    …  σ²Xn |

    - The matrix [ρ] in Eq. 2.97 then becomes a diagonal matrix with 1s on the diagonal (the identity matrix).

  • 2.9.2 Statistical estimate of the correlation coefficient

    - In practice, we often do not know the underlying distributions of the variables we are observing, and thus we have to rely on test data and observations to estimate parameters. When we have observed data for two random variables X and Y, we can estimate the correlation coefficient as follows.
    - Assume that there are n observations {x1, x2, …, xn} of variable X and n observations {y1, y2, …, yn} of variable Y. The sample mean and standard deviation for each variable can be calculated using Eqs. 2.25 and 2.26. Once the sample means x̄ and ȳ and sample standard deviations sX and sY are determined, the sample estimate of the correlation coefficient can be calculated using

      ρ̂XY = [ Σ(i=1..n) (xi − x̄)(yi − ȳ) ] / [ (n − 1) sX sY ]
           = [ Σ(i=1..n) xi yi − n x̄ ȳ ] / [ (n − 1) sX sY ]   (2.99)
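A direct implementation of Eq. 2.99 (with made-up observations) agrees with the built-in estimator, mirroring the Matlab corrcoef call mentioned earlier:

```python
import numpy as np

def sample_corrcoef(x, y):
    """Sample correlation coefficient, Eq. 2.99."""
    n = len(x)
    xbar, ybar = x.mean(), y.mean()
    sx = np.sqrt(((x - xbar) ** 2).sum() / (n - 1))  # sample std, Eq. 2.26 form
    sy = np.sqrt(((y - ybar) ** 2).sum() / (n - 1))
    return ((x - xbar) * (y - ybar)).sum() / ((n - 1) * sx * sy)

# Hypothetical paired observations
x = np.array([2.0, 4.0, 5.0, 7.0, 9.0])
y = np.array([1.0, 3.0, 4.0, 5.0, 8.0])

rho_hat = sample_corrcoef(x, y)
print(np.isclose(rho_hat, np.corrcoef(x, y)[0, 1]))  # True
```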

  • 2.10.1 Bayes theorem

    - Consider a set of n events {A1, A2, …, An} which satisfy the conditions

      A1 ∪ A2 ∪ … ∪ An = Ω ;  P(A1 ∪ A2 ∪ … ∪ An) = P(Ω) = 1   (2.100)

    where Ω is the sample space. Now consider another event E defined in the sample space Ω. If E occurs, then one or more of the Ai events must occur also. We can determine the probability of E by using the total probability theorem as follows:

      P(E) = Σ(i=1..n) P(E | Ai) P(Ai)   (2.101)

    According to Bayes' theorem, if event E occurs, the probability of occurrence of Ai can be found using

      P(Ai | E) = P(E | Ai) P(Ai) / Σ(i=1..n) P(E | Ai) P(Ai)   (2.102)

  • 2.10.2 Application of Bayes theorem

    - Bayes' theorem is useful for combining statistical and judgmental information and for updating probabilities based on observed outcomes.
    - For example, consider a discrete random variable A = {a1, a2, …, an} representing n possible strengths of a structural component. The probability of each value ai is estimated to be P(A = ai) = p'i. These probabilities are the prior probabilities and are based on past experience and judgment. Now assume that some field tests are conducted and we wish to update the probabilities in light of this additional information.
    - Define p''i as the posterior or updated probability. Let event E represent a possible test result. The test result must give one of the possible values {a1, a2, …, an}.
    - Using Bayes' theorem, the updated probability can be found by

      p''i = P(A = ai | E = aj) = P(E = aj | A = ai) p'i / Σ(k=1..n) P(E = aj | A = ak) p'k   (2.104)

  • 2.10.2 Application of Bayes theorem

    Example 2.13:

    - Consider a steel beam where some corrosion is observed. An engineer is required to determine the actual shear strength of the beam. For evaluation purposes, it is sufficiently accurate to assume that the strength can take one of the following five values: RV, 0.9RV, 0.8RV, 0.7RV, and 0.6RV. From past experience with the corrosion problem, the engineer estimates that the probabilities of these values are 0, 0.15, 0.30, 0.40, and 0.15, respectively.
    - A field test is conducted, and it is found that the strength is equal to 0.8RV. Table 2.6 is a conditional probability matrix providing an indication of the confidence in the test. Update the probabilities in light of this new information.

  • 2.10.2 Application of Bayes theorem

    Example 2.13: (cont)

    - Each entry in Table 2.6 is a probability P(E = aj | A = ai). Each column can be interpreted as its own sample space in which the actual beam strength is known.
    - The updated posterior probabilities are calculated using Eq. 2.104.
    - We need the conditional probabilities contained in the highlighted row of Table 2.6. Each of the updated probabilities requires the same denominator, which is calculated by

      Σ(i=1..n) P(E = 0.8RV | A = ai) p'i
        = (0.10)(0) + (0.25)(0.15) + (0.70)(0.30) + (0.10)(0.40) + (0.05)(0.15) = 0.295

  • 2.10.2 Application of Bayes theorem

    Example 2.13: (cont)

    - Now the updated probabilities can be calculated as follows:

      P(A = 0.6RV | E = 0.8RV) = P(E = 0.8RV | A = 0.6RV) P(0.6RV) / 0.295
                               = (0.05)(0.15) / 0.295 = 0.025

      P(A = 0.7RV | E = 0.8RV) = P(E = 0.8RV | A = 0.7RV) P(0.7RV) / 0.295
                               = (0.10)(0.40) / 0.295 = 0.136

    - Similar calculations give P(A = 0.8RV | E = 0.8RV), P(A = 0.9RV | E = 0.8RV), and P(A = RV | E = 0.8RV). The prior and posterior probabilities are listed in Table 2.7.
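The arithmetic of Example 2.13 can be reproduced with a short script. The likelihood row below is the 0.8RV row of Table 2.6 as quoted in the denominator calculation above:

```python
# Strength values in order: RV, 0.9RV, 0.8RV, 0.7RV, 0.6RV
prior = [0.0, 0.15, 0.30, 0.40, 0.15]   # prior probabilities p'_i
like  = [0.10, 0.25, 0.70, 0.10, 0.05]  # P(E = 0.8RV | A = a_i), from Table 2.6

# Denominator of Eq. 2.104: total probability of the test result
denom = sum(L * p for L, p in zip(like, prior))

# Posterior probabilities p''_i, Eq. 2.104
posterior = [L * p / denom for L, p in zip(like, prior)]

print(round(denom, 3))                   # 0.295, matching the slide
print([round(p, 3) for p in posterior])  # [0.0, 0.127, 0.712, 0.136, 0.025]
```

Note how the test shifts most of the probability mass onto 0.8RV (posterior 0.712 versus prior 0.30), as Table 2.7 shows.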

  • 2.10.3 Continuous case

    - For the continuous case, the updating is applied to the prior PDF of the random variable.
    - Let f'A represent the prior PDF of the continuous random variable A, and let f''A represent the updated PDF. The conditional probability expression in Eq. 2.102 must be expressed as a continuous function to apply Bayesian updating. Let this continuous function be represented by a likelihood function L(E | a), and let the summation in Eq. 2.102 be replaced by an integral. With these modifications, the expression for Bayesian updating in the continuous case becomes

      f''A(a) = L(E | a) f'A(a) / ∫(−∞..+∞) L(E | a) f'A(a) da   (2.105)
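Eq. 2.105 rarely has a closed form, so in practice the normalizing integral is evaluated numerically on a grid. A minimal sketch, assuming a normal prior for A and a normal likelihood centered on a hypothetical measurement (all parameter values invented):

```python
import numpy as np

a = np.linspace(0.0, 20.0, 2001)  # grid over the random variable A

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

prior = normal_pdf(a, mu=10.0, sigma=2.0)  # f'_A(a): assumed prior
like = normal_pdf(a, mu=8.0, sigma=1.0)    # L(E | a): assumed measurement model

# Eq. 2.105: posterior = likelihood * prior / normalizing integral
posterior = like * prior / np.trapz(like * prior, a)

print(np.isclose(np.trapz(posterior, a), 1.0))  # True: a proper PDF
```

For this normal prior and normal likelihood the posterior is again normal, with its mean pulled from the prior mean 10.0 toward the measurement 8.0, weighted by the two precisions.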

  • Problem 2.1

    - The results of tests to determine the modulus of rupture (MOR) for a set of timber beams are shown in Table P2.1.
    + (a) Plot the relative frequency and cumulative frequency histograms.
    + (b) Calculate the sample mean, standard deviation, and coefficient of variation.
    + (c) Plot the data on normal probability paper.

  • Problem 2.2

    - A set of tests for the load-carrying capacity of a member is shown in Table P2.2.
    + (a) Plot the data on normal probability paper.
    + (b) Plot a normal distribution on the same probability paper. Use the sample mean and standard deviation as estimates of the true mean and standard deviation.
    + (c) Plot a lognormal distribution on the same probability paper. Use the sample mean and standard deviation as estimates of the true mean and standard deviation.
    + (d) Plot the relative frequency and cumulative frequency histograms.

  • Problem 2.3

    - For the data in Table 2.5, calculate the statistical estimate of the correlation coefficient using Eq. 2.99:

      ρ̂XY = [ Σ(i=1..n) (xi − x̄)(yi − ȳ) ] / [ (n − 1) sX sY ]
           = [ Σ(i=1..n) xi yi − n x̄ ȳ ] / [ (n − 1) sX sY ]   (2.99)

  • Problem 2.4

    - A variable X is to be modeled using a uniform distribution. The lower bound value is 5, and the upper bound value is 36.
    (a) Calculate the mean and standard deviation of X.
    (b) What is the probability that the value of X is between 10 and 20?
    (c) What is the probability that the value of X is greater than 31?
    (d) Plot the CDF on normal probability paper.

  • Problem 2.5

    - The dead load D on a structure is to be modeled as a normal random variable with a mean value of 100 and a coefficient of variation of 8 percent.
    (a) Plot the PDF and CDF on standard graph paper.
    (b) Plot the CDF on normal probability paper.
    (c) Determine the probability that D is less than or equal to 95.
    (d) Determine the probability that D is between 95 and 105.

  • Problem 2.6

    - The ground snow load q (in pounds per square foot) is to be modeled as a lognormal random variable. The mean value of the ground snow load is 8.85 psf, and the standard deviation is 5.83 psf.
    (a) Plot the PDF and CDF on standard graph paper.
    (b) Plot the CDF on normal probability paper.
    (c) Determine the probability that q is less than or equal to 7.39 psf.
    (d) Determine the probability that q is between 6 and 8 psf.

  • Problem 2.7

    - The yield stress of A36 steel is to be modeled as a lognormal random variable with a mean value of 36 ksi and a coefficient of variation of 10 percent.
    (a) Plot the PDF and CDF on standard graph paper.
    (b) Plot the CDF on normal probability paper.
    (c) Determine the probability that the yield stress is greater than 40 ksi.
    (d) Determine the probability that the yield stress is between 34 and 38 ksi.