Decision under uncertainty applied to a hydrologic problem.
Item Type Thesis-Reproduction (electronic); text
Authors Dvoranchik, William Michael,1946-
Publisher The University of Arizona.
Rights Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Link to Item http://hdl.handle.net/10150/191543
DECISION UNDER UNCERTAINTY
APPLIED TO A HYDROLOGIC PROBLEM
by
William Michael Dvoranchik
A Thesis Submitted to the Faculty of the
DEPARTMENT OF SYSTEMS ENGINEERING
In Partial Fulfillment of the Requirements
For the Degree of
MASTER OF SCIENCE
In the Graduate College
THE UNIVERSITY OF ARIZONA
1971
STATEMENT BY AUTHOR
This thesis has been submitted in partial fulfillment of requirements for an advanced degree at The University of Arizona and is deposited in the University Library to be made available to borrowers under rules of the Library.
Brief quotations from this thesis are allowable without special permission, provided that accurate acknowledgment of source is made. Requests for permission for extended quotation from or reproduction of this manuscript in whole or in part may be granted by the head of the major department or the Dean of the Graduate College when in his judgment the proposed use of the material is in the interests of scholarship. In all other instances, however, permission must be obtained from the author.
SIGNED: [signature]
APPROVAL BY THESIS DIRECTOR
This thesis has been approved on the date shown below:
Lucien Duckstein
Professor of Systems Engineering
ACKNOWLEDGMENTS
This study could not have been what it is without the assis-
tance and understanding of many people. First, the author is indebted
to his advisor, Dr. Lucien Duckstein, whose assistance, comments, and
understanding were a major motivating force in completing this under-
taking. Second, the author is indebted to Dr. Chester C. Kisiel, who
provided the opportunity for the author to enter the field of Hydrology. His patience, enthusiasm, and support were most helpful in concluding this study. Since this study is a synthesis of the knowledge
acquired during the Master's Degree program, the author wishes to
thank the many faculty members, who contributed their time and effort
to make this program successful.
Also, the author is indebted to Mr. Donald R. Davis, whose
hours of "talk" sessions enlightened and aided the author's under-
standing of the problem. The author is grateful to Professor Samuel
R. Browning for supplying a double integration computer program.
This research was supported in part by research grant B-007-
ARIZONA on the "Efficiency of data collection systems in hydrology and
water resources for prediction and control" from the Office of Water
Resources Research, United States Department of the Interior.
The author would also like to thank Miss Bonnie Jo Barthold
for typing this thesis in so short a time.
TABLE OF CONTENTS
Page
LIST OF TABLES vi
LIST OF ILLUSTRATIONS vii
ABSTRACT viii
CHAPTER
1. INTRODUCTION 1
2. PROBLEM DEFINITION 4
Basic Problem 4
Bayes Risk and Opportunity Loss 6
Expected Opportunity Loss 7
Expected Expected Opportunity Loss 8
Depth-Flow Relationship 9
Probability of Bridge Failure 10
Decision Theoretic Approach Applied to this Problem 12
3. PROGRAMMING METHODOLOGY 14
Search by Golden Section 14
Double Integration Routine 20
Error Function for Probability of Bridge Failure 22
4. SENSITIVITY AND RESULTS 25
Choices of Error Limits for Program Efficiency 25
Results 30
Sensitivity of Objective Function 30
Sample Size, or Selection of Different Years' Data 31
Selection of Different Underlying Distribution 32
Changes in the Bridge and Pier Costs 34
Change of the Depth-Discharge Relationship 35
5. CONCLUSIONS 36
APPENDIX A: LISTING OF VARIABLES 37
TABLE OF CONTENTS--Continued
Page
APPENDIX B: GENERATION OF NORMAL CHI-SQUARE WITH CONCOMITANT UNCERTAINTY DISTRIBUTION 39
APPENDIX C: LISTING OF COMPUTER PROGRAM AND RESULTS 44
REFERENCES 68
LIST OF TABLES
Table Page
1. Comparison of Integration Limits 26
2. Computer Sensitivity Comparison 29
3. Comparison of Different Sample Sizes or Different Selection of Years 32
LIST OF ILLUSTRATIONS
Figure Page
1. Division of line segment into 'golden ratios' 15
2. Jumping if f(pt(2)) < f(pt(1)) 17
3. Jumping if f(pt(2)) > f(pt(1)) 18
4. Deletion of extraneous point 19
5. Cumulative density functions for different underlying distributions for the streamflow data 33
ABSTRACT
This thesis provides a basis to evaluate the worth of addi-
tional information. A decision theoretic framework is employed to
help delineate the components of the decision process.
Design modifications of a specific problem are the variables
that cause a change in the evaluation of the worth of additional infor-
mation, but the design chosen is based on the minimum Bayes risk when
the structure is finally built. The hydrologic problem involves the
annual peak flows for a river and their effect on the design problem.
The process presented takes a great amount of uncertainty from
the question, "Do I have enough data on which to base a good decision?"
Assignment of a dollar value to the worth of an additional data
point tells the decision maker what he can expect to gain by waiting
an extra time period for that data point based on knowledge possessed
at the present time.
CHAPTER 1
INTRODUCTION
Throughout history man has been forced to make decisions
based on the data he has or could get. The question that continually
arises in the decision maker's mind is, "Do I get enough information
from the data I possess or should I collect more data to reduce the un-
certainty of the decision?" Herfindahl (1969) points out that infor-
mation follows economies of scale, such that the more information one
gathers, the less incremental value is gained from this additional
information. It is important to realize that if the addition of new
data cannot alter the action to be taken, then it is senseless to
accumulate new data. In other words, the only time additional information possesses a value is when the action to be taken can be altered with the addition of new information.
Inherent in every decision process is an amount of uncertain-
ty, particularly about the true state of nature. If every parameter
in the decision process were known for sure, there would be no need
for decision makers, since the decision process would consist of
nothing more than calculating the appropriate action based on the true
states of nature.
Of course, it is evident that the true states of nature can
never be known for certain, so a framework for making an intelligent
decision about the problem in question is presented here, along with
a quantitative examination of a bridge and pier design problem to
show the expected losses from insufficient knowledge and expected
gains from additional information in a practically oriented environ-
ment.
The following analysis is done in a decision theoretic frame-
work. A decision theoretic approach is presented to formalize the
thought process and provide a foundation for arriving at an intelligent
decision. It is by no means complete or exhaustive. The approach is
as follows:
1) Define the decision and possible alternatives.
2) Select an appropriate objective or utility function to
be evaluated.
a) Select the state or input variables.
b) Develop the stochastic properties of the state
variables.
c) Establish a time horizon for the decision.
d) Include possible risk aversion.
3) Select a decision based on present knowledge by evaluating
the expected value of the objective function for each al-
ternative and choose the alternative that maximizes this
expected value.
4) Analyze the uncertainty(ies).
a) Determine the expected opportunity loss due to uncer-
tainty.
b) Determine the expected reduction in expected oppor-
tunity loss and the total cost of gathering the infor-
mation that caused this reduction.
A time horizon and risk aversion will be omitted from this
analysis, as are the monetary considerations, such as discount factor
or return on investment; but discussion of these items and their effect
on the decision process is covered by Raiffa (1968). Other aspects of
the decision process that could be pertinent and may contribute to the
decision are discussed subjectively by Myers and Melcher (1969).
The method of solution is Bayesian in nature with a more com-
plete analysis of the theory given in Raiffa (1968) and Raiffa and
Schlaifer (1961).
It is the purpose of this thesis to demonstrate that the theoretic concepts about decision theory can be taken out of the texts and classrooms and applied to a practical situation: in this case, a bridge and pier design problem based on the annual peak flows of a river, in which a dollar value is placed on the worth of additional information, which is defined as the expected reduction of the Expected Opportunity Loss, less the full cost of obtaining this information.
Hopefully, this will provide the decision maker with a valuable tool
for making an intelligent decision with regard to collecting more data.
The last point, which hopefully is brought out in this thesis, concerns the fact that the Systems Engineering discipline enabled the composition of this thesis, which combines varied engineering fields and practices, demonstrating the worth of interdisciplinary studies and the ability to branch out into other fields of research.
CHAPTER 2
PROBLEM DEFINITION
The illustrative problem concerns the worth of additional data
for the design of a bridge over the Rillito Creek, a stream near
Tucson, Arizona.
To avoid a deep analysis into bridge design, the work of
Laursen (1969), concerning bridge design and scour analysis, is adjusted for use in this thesis.
Basic Problem
Laursen assumes a 500-foot bridge is to be built atop flood
dikes that rest on four piers, each containing 25 piles. The cost of
the bridge that may be lost in a flood is $150,000. The cost of sink-
ing piles is approximately $4.00 per foot or $400/ft for the entire
structure.
The problem is to determine how deep to put the piles in order
to minimize the cost due to sinking the piles, plus the expected cost
of replacing the bridge, caused by a loss of the bridge during a flood,
because the piles were undercut by scour. This is the variable cost of
the bridge and is defined as:

V(h, u, σ²) = BC · P25 + h · 400    (2.1)

where h is the pile depth in feet, and u and σ² are the mean and variance of the annual peak flows for the Rillito Creek. The computation of P25 embodies the parameters u and σ² and will be discussed later.
The largest contribution in terms of dollar cost to the above
objective function is caused by the bridge structure. To reduce this
contribution, the probability of bridge failure must be made small,
which causes the depth of piling to be increased, but this contribution
is negligible compared to the cost of the bridge. This tradeoff is the
basis for the minimization problem.
The data needed for this analysis concerns annual peak flood
magnitudes for the Rillito Creek. Ten years of data (1951-1960) were
used for the formal analysis, making a sample size of 10. For the entire data record of 46 years, a two-parameter log normal distribution was hypothesized and a Kolmogorov-Smirnov test did not dispute this hypothesis.
Uncertainty enters the analysis in the form of lack of 'true'
or exact knowledge of the parameters of the log normal distribution
representing peak flows. If the two parameters, the mean and variance,
were known, then the exact probability of bridge failure could be cal-
culated for any depth of piling, which reduces this problem to a simple
numerical minimization. Since the parameters are not known for sure,
uncertainty is encoded into the problem by assigning probability distri-
bution functions to the parameters in the following manner.
The application of a log normal distribution for the data en-
ables the employment of normal theory on the transformed data, where it
is known from classical statistics that the distribution of the sample
mean is normal with mean u and variance σ²/n, and the distribution of
the sum of squares is chi-square with n-1 degrees of freedom. The
probability distribution to encode the joint uncertainty is the joint
density of the normal and chi-square distributions, since the uncertain
parameters are the mean and variance of the log normal distribution. An
appropriate transformation is made from χ² to ns²/σ² in the chi-square portion of this density function and this yields a normal gamma distribution as indicated by Raiffa and Schlaifer (1961), which will be used synonymously with normal chi-square.
Bayes Risk and Opportunity Loss
The uncertainty of the state variables, u and σ², adds risk to
the decision process. The distribution of this joint uncertainty is
known, so minimizing the objective function over the pile depth by tak-
ing the expectation of the objective function with respect to the joint
uncertainty distribution yields the Bayes Risk of the decision process.
This is accomplished by using a 'weighted average.' Each point of the
joint uncertainty distribution has a likelihood associated with that
point. By multiplying the value of the objective function at each pos-
sible point by its likelihood at that point and summing, we get a risk
evaluation for a specific pile depth; the minimum of this process is
called the Bayes Risk and is denoted as:
R(h*, m, s², n-1) = Min_h [ E_{u,σ²} [V(h, u, σ²)] ]    (2.2)

where h* is the value of h that minimizes the Bayes risk.
The concept of opportunity loss is introduced; it represents
a measure of the value of perfect information on the population's parameters. The true parameters of the population are denoted as u_t and σ_t². This information would yield h_t, the depth of piling that yields the minimum variable cost,

V(h_t, u_t, σ_t²) = Min_h [V(h, u_t, σ_t²)]    (2.3)

Having used h* instead of h_t, an opportunity loss has been realized, because the optimal pile depth for the Bayes risk will differ from the optimal pile depth if u and σ² were known, unless the sample and population parameters are one and the same, so V(h*, u_t, σ_t²) ≥ V(h_t, u_t, σ_t²). For u_t and σ_t², h_t is the value which minimizes V(h, u_t, σ_t²); hence any value h*, that is not equal to h_t, yields a V(h*, u_t, σ_t²) that must be greater than the value using h_t. Thus, the opportunity loss is:

OL(h*, u_t, σ_t²) = V(h*, u_t, σ_t²) - V(h_t, u_t, σ_t²)    (2.4)
It is the upper bound for a consulting fee in order to obtain perfect
information about next year's maximum flow.
Expected Opportunity Loss
To solve this problem, the 'weighted' averages for the opportu-
nity losses are calculated, where each opportunity loss is weighted by
the 'probability' or likelihood that it was calculated with the true
values of u and σ². In this manner the expected opportunity loss (XOL)
is arrived at by taking the expectation of the opportunity losses with respect to the normal chi-square using m, s², and n-1 as parameters. Thus,

XOL(h*, m, s², n-1) = E_{u_t,σ_t²} [V(h*, u_t, σ_t²) - V(h_t, u_t, σ_t²)]    (2.5)

where h* is the piling depth chosen on present data and h_t is the piling depth chosen when u and σ² are assumed known.
Expected Expected Opportunity Loss
The expected value of perfect information is known, but what is the value of one more sample, of one more year's data? If someone
could state the exact value of next year's maximum peak flow, an XOL
could be calculated. Include this flow in the augmented data base.
Any reduction in the value of the XOL caused by the additional data point is the value of the sample information, denoted VSI. If this VSI is greater in dollar value than the cost of obtaining the sample, it would be valuable to wait and get next year's peak flow before building the bridge.
Needless to say, no one can accurately state next year's maxi-
mum flow. To circumvent this problem an average of the VSI's is taken
over all possible values for next year's flow, weighting each value by
the 'probability' of obtaining that flow. This is accomplished by
taking an expectation of the expected opportunity loss of next year's
data. The expected expected opportunity loss (XXOL) is:
XXOL(m, s², n) = E_x [XOL(h*_x, m_x, s²_x, n+1)]    (2.6)

where h*_x, m_x, and s²_x are calculated by including the new data point x. The expectation over x is based on the normal chi-square with concomitant uncertainty distribution (see Appendix B for development), which yields:

f(x|s², n-1) = [n^{1/2} · (ns²)^{(n-1)/2} · Γ(n/2)] / [(n+1)^{1/2} · π^{1/2} · ((n+1)s²_x)^{n/2} · Γ((n-1)/2)]    (2.7)
Thus, the expected value of sample information, which is our measure of the worth of additional data, is
EVSI = XOL - XXOL (2.8)
This is a subtraction of expected opportunity losses because EVSI is
the reduction in XOL caused by the addition of a new data point to the
data base. It is the dollar worth of new information.
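As a purely numerical illustration of the EVSI bookkeeping in Eq. 2.8 (all dollar figures below are invented for illustration; they are not results from this analysis):

```python
# Hypothetical figures illustrating Eq. 2.8: EVSI = XOL - XXOL.
XOL = 1200.0          # expected opportunity loss with the present record ($)
XXOL = 950.0          # expected expected opportunity loss after one more year ($)
EVSI = XOL - XXOL     # expected value of next year's data point ($)

cost_of_sample = 200.0          # assumed cost of waiting a year for the data ($)
worth_waiting = EVSI > cost_of_sample  # wait only if the information outvalues its cost
```

Under these assumed figures the extra data point is worth $250, which exceeds the $200 cost of waiting, so delaying construction one year would be justified.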
Based on the EVSI, a design could be made to build the bridge
to withstand a specified flood or wait for more data, in which case,
the money that may have been spent on the bridge could accumulate in-
terest or some type of return, but there would also be a loss of bene-
fits from the bridge.
Depth-Flow Relationship
In order to solve the problem, a relationship between the depth
of scour and flow must be developed. This is done by using a power
function to the 3/5 power, as suggested by Laursen (1969). From the U.S.
Corps of Engineers Plan I (1967) the following is depicted:
d = K · Q^{3/5}    (2.9)

Values for d and Q are 20 ft. and 85,000 cfs, respectively, so that K has a value of 2.2 · 10^{-2}. From Laursen (1969) Fig. 4, for five feet of scour and 15 feet of flow, it is shown that

h = M · d^{1/2}    (2.10)

and therefore, M = 2.58. Laursen (1969) suggests using a multiplying coefficient of 0.9 for semi-circular noses on piers, so Eq. 2.10 becomes

h = (2.58) · d^{1/2} · (0.9)    (2.11)

and substituting in for d yields

h = (2.58) · (0.9) · (0.022 · Q^{3/5})^{1/2}    (2.12)

and simplifying,

h = .344 · Q^{3/10}    (2.13)

which is the empirical relationship used in this problem.
Probability of Bridge Failure
Another relationship needed in the problem is the probability
of the bridge failing at least once in its life span of twenty-five
years. This twenty-five year bridge life is variable and subjective,
but it is meant to account for obsolescence and depreciation. It is
assumed that a new bridge will be built after twenty-five years and
will be based on the costs at that time.
It is possible that during the twenty-five year life span of the
bridge a flow of enough magnitude to destroy the bridge could occur
several times, or not at all. The binomial distribution describes this
relationship as follows, where y is the number of floods in j years and
p is the probability of failure in any one year, which is independent
and constant from year to year,

P(y) = C(j, y) · p^y · (1-p)^{j-y}    (2.14)

Since this is a probability density function, it is known that

Σ_{y=0}^{j} C(j, y) · p^y · (1-p)^{j-y} = 1    (2.15)

Thus,

P(y at least once) = 1 - P(never)    (2.16)

and substituting the values of the problem yields

P(y at least once) = 1 - (1-p)^{25}    (2.17)
This is the probability of bridge failure at least once in twenty-five
years. To determine this quantity, the probability of failure in one
year must be ascertained. This is accomplished by assuming a design
with a certain pile depth. One converts the pile depth into flow that
it protects against, and using the log normal distribution, computes the
probability of getting a flow greater than the flow protected against by
the pile depth.
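The binomial argument behind Eq. 2.17 can be verified directly: summing the binomial probabilities of one or more failures over the bridge life reproduces the complement formula. The annual probability p below is an arbitrary illustrative value, not a fitted result from the thesis:

```python
from math import comb

p = 0.02   # illustrative annual failure probability (assumed value)
j = 25     # bridge life in years

# Eq. 2.14: P(y) = C(j, y) * p**y * (1-p)**(j-y)
def P(y):
    return comb(j, y) * p**y * (1 - p)**(j - y)

# Eq. 2.17 versus the explicit binomial sum of Eqs. 2.14-2.16
at_least_once = 1 - (1 - p)**j
by_summation = sum(P(y) for y in range(1, j + 1))
```

For p = 0.02 both routes give a twenty-five-year failure probability near 0.40, showing how a small annual risk compounds over the structure's life.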
Decision Theoretic Approach Applied to this Problem
The problem in terms of the decision theoretic approach, out-
lined in the introduction, will now be discussed to demonstrate that
adherence to a logical thought process can yield an intelligent deci-
sion.
The decision to be made is to build a bridge to a design speci-
fication based on the depth of the piles or to wait for another year's
data before building the bridge.
The objective function was previously defined as
V(h, u, σ²) = BC · P25 + h · 400    (2.18)
or, in words, the cost of constructing the bridge multiplied by the
probability of bridge failure at least once in twenty-five years, plus
the pile depth times the cost of sinking the piles one foot. This
yields an evaluation in dollar units.
The state variables for the problem consist of the pile depth
and the parameters of the distribution of the peak flows. These vari-
ables have been discussed previously.
A time horizon and risk aversion were mentioned in the intro-
duction.
The evaluation as described by Eq. 2.2 is calculated and this
yields the optimal design based on the present knowledge, that is, the
design which has the minimum expected cost associated with it.
Expected opportunity loss caused by the uncertainty of the mean
and variance of the log normal distribution is computed by Eq. 2.5.
Then uncertainty about next year's maximum flow is encoded into
the problem by the same computations as above for all the possible
points for next year's flow, which yields the expected expected oppor-
tunity loss. The reduction in the XOL from year to year is the dollar
value of the worth of additional information, which leaves only the
decision as to collection of more data or building the bridge now, ac-
cording to the optimal design strategy.
CHAPTER 3
PROGRAMMING METHODOLOGY
Given the preceding theoretical concepts, a CDC 6400 computer
was used to obtain numerical answers, but the program is adaptable to
other computer systems. A brief description of the processes that were employed to achieve the desired results follows, and the entire FORTRAN computer program is presented in Appendix C.
Search by Golden Section
A search routine for the minimum of the Bayes risk and XOL was
used and it took the form of a 'Golden Section' iterative process, as
described by Wilde and Beightler (1967). This method of elimination
takes its basis from a 'golden ratio' and enables the deletion of ex-
terior points by subdividing the present interval under consideration
into preset ratios as the Pythagorean Brotherhood did many centuries
ago. This division of a line segment is shown in Fig. 1 below.
The following relationships hold:

AC/AB = BD/CD = AD/AC = AD/BD = 1.618033989...    (3.1)
This search by golden section is used because it is an efficient
manner of seeking an optimum when the number of trials is unknown. The
process of determining the optimum starts by choosing a good guess for
a starting point, then jumping one unit to the right for the second
point. Since we are searching for a minimum, and not a maximum, we are
Fig. 1. Division of line segment into 'golden ratios'
looking for a 'desired bracket' or interval of three points, where the
function evaluated at the outer two points is greater than the evalu-
ation at the interior point. Hence, if the second point's evaluation is
less than the evaluation of the first point, the next jump is in the
same direction, but it is the previous interval times T, where T is the
golden ratio of 1.6180.... If the second point's evaluation was greater
than the evaluation of the first point, the direction of the jump is
from the first point in the opposite direction and is equal to the inter-
val size times T. Figs. 2 and 3 give a graphic illustration.
This jumping continues until the desired interval bracketing
is found. Since each jump is approximately 1.6180 times the last jump,
this bracketing procedure rapidly covers the minimum, especially with a
good first guess. Once the bracketed interval is determined, it is
subdivided as in Fig. 4 and each point is evaluated. One of the extreme
points is then deleted depending upon the evaluation of the points. The
point that is dropped is the extraneous point outside the desired bracket.
Fig. 4 gives an example.
Point A is deleted because points B, C, and D form a desired bracket. This procedure continues until |f(B) - f(D)| < ε, where ε is a predetermined constant, which is defined in a later section. At this
time, the midpoint of the interval is evaluated and the minimum is taken to be the value of the midpoint or the value of the interior point, whichever is smaller.
There are additional ways to increase the speed and accuracy of
this searching process, but this was deemed to be of sufficient accuracy
Fig. 2. Jumping if f(pt(2)) < f(pt(1))

Fig. 3. Jumping if f(pt(2)) > f(pt(1))

Fig. 4. Deletion of extraneous point
and speed for this problem, so that other techniques were not implemented.
Double Integration Routine
In much of the aforementioned theory, the term expectation was used. To take the expectation over u and σ² entails a double integration procedure. This is accomplished by combining two one-dimensional Simpson's rules, which is an approximate integration procedure. The routine solves an integral of the form:
routine solves an integral of the form:
b h(x)
S f (w, x) dw dx .
a g (x)
The adaption of this routine to this specific problem is
(3.2)
where,
F(u, (5 2 ) • Nx 2 (u, a2) du do-2 (3.3)
2a = ns2 / df • x 99.9 (df),
2h = ns2 / df • x .1 (df),
geM) = m - 3 • s,
h(M) = m + 3 • s,
F(u, e 2) = V(h, u, e 2) ,
21
(u-m) 22
ns"2 ,, (df/2) -12,Nx 2 (u, 0_2 ) _ (ns L) rh,
r(dfr) 7dT/2 (27)112 el--20""
The limits for a and b are set up in this fashion, so that the computer
program is adaptable for other sample sizes ranging from 1 to 44.
These values for a and b are taken from the CRC Handbook of Ta-
bles for Probability and Statistics (1968) for the Chi-square distribution
with nine degrees of freedom, since n is equal to 10, so that the inte-
gral includes 99.8 percent of the area of this distribution. Similarly,
g(m) and h(m) are designed to include over 99.7 percent of the area of
the normal distribution. Nχ²(u, σ²) was derived by combining the normal and chi-square density functions and making an appropriate change in variables from χ² to ns²/σ², so that this function is actually a normal gamma, as discussed by Raiffa and Schlaifer (1961). To facilitate
faster integration, the integral is broken into two segments, from the
lower limit a, to the mode of the chi-square, (ns²)/(df-2), and from
this mode to the upper limit b. This is more efficient because the
slopes of the two segments change less rapidly than the slope of the
entire range of the chi-square distribution.
The manner in which the integration is done is similar to a
rectangular grid network, such that for each point on the horizontal
axis a one dimensional Simpson's rule is performed over the vertical
axis.
When two iterations are within 10 percent of each other, the routine terminates. With this error level, the normal chi-square was found to evaluate to over 99 percent of its combined area.
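The nested Simpson's-rule scheme of Eq. 3.2 can be sketched as two composite one-dimensional rules, the inner one run along the vertical axis for each grid point of the outer one. This is a generic reimplementation, not the Browning routine cited in the acknowledgments:

```python
def simpson(f, a, b, n=100):
    """Composite Simpson's rule on [a, b] with n (forced even) panels."""
    if n % 2:
        n += 1
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

def double_simpson(f, a, b, g, h, n=100):
    """Integral of f(w, x) for x in [a, b] and w in [g(x), h(x)] (Eq. 3.2)."""
    return simpson(lambda x: simpson(lambda w: f(w, x), g(x), h(x), n), a, b, n)

# check against a region with variable inner limits:
# the integral of w over 0 <= w <= x <= 1 equals 1/6
approx = double_simpson(lambda w, x: w, 0.0, 1.0, lambda x: 0.0, lambda x: x)
```

In the thesis the integrand is F(u, σ²) · Nχ²(u, σ²) of Eq. 3.3, with the σ² range split at the mode of the chi-square exactly as described above to speed convergence.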
Similarly, a one-dimensional Simpson's rule is used to take the expectation of next year's possible data points. This is of the form:
∫_c^d f(s²_x, n) ds²_x    (3.4)

where,

c = m - 4 · s_x,
d = m + 4 · s_x,
f(s²_x, n) = [n^{1/2} · (ns²)^{(n-1)/2} · Γ(n/2)] / [(n+1)^{1/2} · π^{1/2} · ((n+1)s²_x)^{n/2} · Γ((n-1)/2)]
The choice of m ± 4 · s_x for the limits of the integration was chosen on the basis that this included over 99 percent of the possible data points for next year. This fact was ascertained from a computer run that computed the cumulative density function of f(s²_x, n).

This one-dimensional routine is used in conjunction with the double integration routine to arrive at a Bayes risk and XOL for each possible point for next year's data. 'Weighting' these values by the likelihood of the occurrence of the point used to calculate them and integrating, as in Eq. 3.4, yields an XXOL for the possible new data points.

Error Function for Probability of Bridge Failure

In order to compute the probability of bridge failure, it is necessary to know the cumulative normal distribution. An integration of the form:
∫_{-∞}^{log Q} [1 / ((2π)^{1/2} · s)] · e^{-(w-u)²/(2s²)} dw    (3.5)
could have been performed, but it was much more efficient from a time
viewpoint, to evaluate that integral using the Error Function Package
(CDC 6000 Series Computer, 1966).
To facilitate the use of this subroutine, a transformation or
standardization of variables was needed. This is accomplished as
follows:
Z = (log Q - u) / (s · 2^{1/2})    (3.6)

where division by 2^{1/2} is the appropriate constant to yield a standard normal transformation for the error function. The log of Q is employed because the log of the flow is needed, since the log normal distribution is the distribution of the annual peak flows.
Z and an error limit are the parameters passed to the subroutine. The error limit of 10^{-5} is needed because the error function is evaluated using a Taylor's Series expansion and by specifying this limit, the subroutine can be terminated when enough accuracy is developed in the routine.
A returning parameter E from the subroutine contains the value
of the error function, which is changed to the cumulative normal pro-
bability by
P = ∫_{-∞}^{log Q} [1 / ((2π)^{1/2} · s)] · e^{-(w-u)²/(2s²)} dw = .5 + .5 · E    (3.7)
This is the probability of bridge failure for one year, which is converted into the probability of bridge failure at least once in twenty-five years by Eq. 2.17, or in terms of the problem,

P25 = 1 - (1 - p)^{25},  where p = .5 + .5 · E    (3.8)
CHAPTER 4
SENSITIVITY AND RESULTS
The task of a Systems Engineer is not only to produce results,
but to know how these results were achieved. Knowledge of the inter-
faces between the different state variables and the effects of the er-
ror limits on program efficiency are of major importance, especially
when faced with budgetary constraints.
Choices of Error Limits for Program Efficiency
Program efficiency is defined to be adequate accuracy for the
results without too large an expenditure for computer time. To achieve
this desired efficiency, a decision had to be made concerning the various limits that follow:
1) Limits for the mean in the double integration,
2) Limits for the variance in the double integration,
3) Relative error limit for iterations of the integrations,
4) Closeness of interval evaluation for Bayes risk,
5) Closeness of interval evaluation for the XOL, and
6) Integration limits for the possible new data.
The limits for the mean and variance of the double integration
were set to evaluate 99 percent of the area under the curve for both the
normal and chi-square distributions.
The relative error limit for the iterations of an integration routine applies to the successive computations of Simpson's rule. When two iterations are within a specified error limit of each other, the routine is terminated. A few combinations of these relative error limits and limits for the mean were calculated and these results are listed in
Table 1. The final decision concerning these limits was m ± 3s for the limits of the mean, and 10^{-1} for the relative iteration limit. This produced sufficiently accurate results, while computer time was kept at a minimum.
TABLE 1

Comparison of Integration Limits

                                Test 1      Test 2      Test 3
Variance limits                 constant    constant    constant
Mean limits                     m ± 2.5s    m ± 3.0s    m ± 3.0s
Iterative integration limits    10^{-2}     10^{-2}     10^{-1}
Evaluation of integration       0.9863      0.9959      1.0003
Central computer processing
  time (in seconds)             3.650       3.735       0.375
The limits for the variance are held constant in each case because they are the most difficult to ascertain and the 99 percent figure was deemed 'desirable.' From Tests 1 and 2, a change in the limits for
the mean from m ± 2.5s to m ± 3.0s units effectively increased the area under the joint probability distribution at a minimum increase in computer time. From Tests 2 and 3, when the iterative integration limit was the varying parameter, the area under the Nχ² distribution is increased to slightly over 1.0 and the time was one-tenth that of Test 2.
This represents a 90 percent decrease in computer time for each double
integration. Since the double integration is performed hundreds of
times, this represents a large amount of computer time. Hence, the
large rounding error in the evaluation of Test 3 is accepted in order
to limit the expenditure on computer time.
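The stopping rule used by the Simpson subroutines in Appendix C (SRUL1 and SRUL2) doubles the number of intervals and quits when two successive estimates agree within the relative limit. A modern Python sketch of the same scheme (the function names are mine, not the listing's):

```python
def simpson_fixed(f, a, b, n):
    """Composite Simpson's rule on [a, b] with n (even) intervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(a + i * h)
    return s * h / 3.0

def simpson_iterative(f, a, b, rel=1e-1, max_doublings=15):
    """Repeat Simpson's rule, doubling the interval count, until two
    successive estimates agree within the relative limit 'rel'."""
    prev = simpson_fixed(f, a, b, 2)
    n = 4
    for _ in range(max_doublings):
        cur = simpson_fixed(f, a, b, n)
        if abs(cur - prev) <= rel * abs(cur):
            return cur
        prev, n = cur, 2 * n
    return prev
```

With rel = 10^-1, as in Test 3, the routine typically stops after very few doublings, which is the source of the 90 percent time savings discussed above.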
In the golden section searches, some type of terminating mecha-
nism had to be built into the subroutines. This was accomplished by
taking the endpoints of the 'desired bracket' and comparing their evalu-
ations for a certain degree of closeness. From trial runs it was as-
certained that the Bayes risk calculation would be in the thousands of
dollars, so a degree of closeness of $10 was chosen as sufficient accu-
racy. Each Bayes risk calculation employs one double integration, so
this is not too small a figure as far as computer time is concerned.
From the accuracy viewpoint a very small value, say $1, might be
preferable; but since the function is known from previous trials and
computer runs to be very flat near the minimum, tighter bracketing was
not justifiable considering the possible repetition of Bayes risk
calculations that may have to be made. Also, some additional accuracy
is achieved by choosing the minimum pile depth to be the midpoint rather
than the interior point of the interval in question, although the
interior point itself yields sufficient accuracy for this problem.
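The terminating mechanism just described can be sketched in Python (a modern re-sketch, not the Appendix C listing; the names are mine). The search shrinks the bracket until the two endpoint evaluations are within the closeness tolerance, then takes the midpoint refinement mentioned above:

```python
GOLD = 0.618034  # golden-section ratio

def golden_min(f, lo, hi, close=10.0):
    """Golden-section search for a minimum of f on [lo, hi].  Stops when
    the evaluations at the bracket endpoints are within 'close' of each
    other (the $10 rule), then tries the midpoint of the final bracket."""
    flo, fhi = f(lo), f(hi)
    x1 = hi - GOLD * (hi - lo)   # interior points
    x2 = lo + GOLD * (hi - lo)
    f1, f2 = f(x1), f(x2)
    while abs(fhi - flo) > close:
        if f1 < f2:              # minimum lies in [lo, x2]
            hi, fhi = x2, f2
            x2, f2 = x1, f1
            x1 = hi - GOLD * (hi - lo)
            f1 = f(x1)
        else:                    # minimum lies in [x1, hi]
            lo, flo = x1, f1
            x1, f1 = x2, f2
            x2 = lo + GOLD * (hi - lo)
            f2 = f(x2)
    mid = 0.5 * (lo + hi)        # midpoint refinement noted in the text
    fm = f(mid)
    if fm < min(f1, f2):
        return mid, fm
    return (x1, f1) if f1 < f2 else (x2, f2)
```

Because each evaluation of f here stands for one double integration, the loose $10 tolerance directly limits the number of double integrations per Bayes risk search.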
The computation of the XOL employs another golden section
search. The terminating mechanism in this case stops the search when
one endpoint's evaluation of the 'desired bracket' is within 1 percent
of the other endpoint's evaluation. This relative measure of closeness
was chosen to correspond to the amount of error tolerated in computing
the Bayes risk, but it was made variable because the value of the XOL
is harder to pinpoint than the Bayes risk.
The limits for the possible new data points were chosen as
m ± 4.0s, which accommodates almost the entire area under the 'predic-
tive' density function. This yields a range of flows between 354 and
86,603 cfs for next year's possible flows, which are far below and
above any momentary maximum flows observed on the Rillito Creek to
date. Actually, a smaller range would suffice, and the increased
coverage costs no additional time; it may, however, be more difficult
to compute the XOL for the extreme points, since only seventeen points
are used in computing the Simpson's rule evaluation.
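With seventeen points over sixteen intervals, the Simpson coefficients follow the familiar 1, 4, 2, ..., 4, 1 pattern. A small Python sketch of how an expectation such as the XXOL can be formed on such a grid (the function names and the toy integrands in the check are mine, not the thesis's):

```python
def simpson_weights(npts=17):
    """Simpson coefficients 1, 4, 2, ..., 4, 1 for an odd point count."""
    assert npts % 2 == 1 and npts >= 3
    w = [4.0 if i % 2 else 2.0 for i in range(npts)]
    w[0] = w[-1] = 1.0
    return w

def expected_over_grid(g, density, lo, hi, npts=17):
    """Approximate the integral of g(x)*density(x) over [lo, hi] with an
    npts-point Simpson rule, as done for the XXOL over new data points."""
    h = (hi - lo) / (npts - 1)
    return sum(w * g(lo + i * h) * density(lo + i * h)
               for i, w in enumerate(simpson_weights(npts))) * h / 3.0
```

In the thesis's setting g would be the XOL evaluated at a possible new data point and density the predictive density over [m - 4s, m + 4s].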
A computer run, made after the original results were completed,
was initiated to ascertain the additional accuracy attained by narrow-
ing several of the error limits. Due to budgetary constraints, several
limits were varied at the same time, so the exact contribution of each
change cannot be quantified. Results and comparisons between this run
and the formal analysis are depicted in Table 2. The amount of
increased accuracy does not compensate for the tremendous increase of
900 percent in computer processing time. These changes may be
self-balancing in nature; nevertheless, the choices made
TABLE 2

Computer Sensitivity Comparison

                                 Formal       Computer
                                 Problem      Sensitivity   Changes Caused
Limits for the mean              m ± 3.0s     m ± 3.3s      Increased accuracy, less
                                                            rounding error, small
                                                            time increment
Limits for the variance          constant     constant      None
Relative iteration
integration limit                10^-1        10^-2         Additional time for the
                                                            increased accuracy
Closeness for Bayes risk         $10          $5            Additional time caused by
                                                            a few more double
                                                            integrations
Closeness for XOL                10^-2        10^-2         None
Limits for new data              m ± 4.0s     m ± 4.5s      Increased accuracy, small
                                                            time increment
Minimum pile depth               16.08 feet   15.63 feet    Increased accuracy
Bayes risk                       $7450        $7451         Increased accuracy
XOL of the original data         $2937        $2943         Increased accuracy
XOL of the new data              $2779        $2774         Increased accuracy
Worth of additional
information                      $158         $169          Increased accuracy
Computer processing time
(in seconds)                     153          1076          903% increase in time
for the formal analysis yield sufficiently accurate results, while main-
taining a high standard of programming efficiency.
Results
Based on the ten years of streamflow data for the Rillito Creek,
starting in 1950, the depth of piling that minimizes the Bayes risk is
16.08 feet. This is equivalent to protecting against a flow of about
370,000 cfs, or a 'many' million year flood. The Bayes risk was
$7450. The XOL of the original data was $2937. An additional year's
data showed an XXOL of $2779. Thus, the expected value of one more
year's data is only $158.
The cost of obtaining one more data point is the actual cost
of getting the data point, plus the loss of benefits from the bridge
for that year. The total cost for building the bridge to the design
specification at present is $156,440.
Sensitivity of Objective Function
A formal sensitivity analysis is meant to show the effect of
changes of pertinent parameters that are embodied in the objective
function. The following variations of these parameters will be dis-
cussed:
1) Changes in the sample size, or selection of different
years' data,
2) Selection of a different underlying distribution for the
data other than the log normal,
3) Changes in the costs for the bridge and piers, and
4) Changes in the depth-discharge relationship.
Sample Size, or Selection of Different Years' Data

The choice of 10 years of data was made arbitrarily. The
latest ten years of Rillito Creek annual peak flow data were employed,
but a change in the results was found to occur when a different ten-
year period was used. The period between 1940-49 possessed a higher
variance than the 1950-59 data, so it is expected to exhibit a higher
Bayes risk and EVSI than the 1950-59 data, because there is more uncer-
tainty caused by the larger variance.
Analyses were also made on different sample sizes, particular-
ly five and 44 years of data. In the case of five years of data,
additional data are worth more than for ten years, which is to be
expected, since a small sample is subject to larger fluctuations when
additional data points are added.
The results for 44 years of data are interesting because they
show both the roundoff error involved at the accuracy level specified
and Herfindahl's (1969) diminishing returns of additional information.
The EVSI can never be negative by the definition of this problem: the
expectation over next year's possible data points is based on the
present data, so the choice of points and their associated likelihoods
yield a value of EVSI greater than or equal to zero. The computer
program yields a result of -$3.00 for the EVSI for 44 years of data,
which is necessarily caused by roundoff error in the program.
A summary of the above results is presented in Table 3.
TABLE 3

Comparison of Different Sample Sizes or Different Selection of Years

             Formal       10 Years     5 Years      44 Years
             1950-59      1940-49      1949-53      1917-60
Mean         3.750        3.597        3.758        3.696
Variance     0.070        0.138        0.032        0.095
Bayes risk   $7450        $10089       $12994       $6064
Minimum      16.1 feet    20.3 feet    13.0 feet    14.0 feet
XOL          $2937        $4542        $6730        $1613
XXOL         $2779        $4241        $5513        $1616
EVSI         $158         $301         $1217        -$3
Selection of Different Underlying Distribution
The selection of the underlying distribution can cause a
large change in the Bayes risk and other calculations. The objec-
tive function V(h, u, σ²) is sensitive to the underlying distribution
in the following manner: P25, the probability of bridge failure at
least once in twenty-five years, is a function of the underlying
distribution. The lower the pile depth necessary to provide a fair
amount of certainty that the bridge will not be destroyed by
failure, the lower the cost of constructing the bridge, due to less
money spent on sinking the piles.
From Fig. 5 it can be seen that, at a given point on the
horizontal axis, the computed probability of bridge failure is
smallest for the exponential, next smallest for the log normal, and
greatest for the uniform distribution. Thus, if these values are in-
serted in the formula for P25 and computed, the value of the objective
function is largest for the uniform, followed by the log normal and
then the exponential distribution.

Fig. 5. Cumulative density functions for different underlying
distributions for the streamflow data.
Therefore, the choice of the distribution for the annual peak
flow data has a definite effect on the cost of constructing the bridge.
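The role of P25 can be made concrete. The program's function R (Appendix C) computes P25 = 1 - (1 - p)^25, where p is the one-year failure probability under the chosen distribution. A one-function Python sketch (the name is mine):

```python
def p25(p_annual):
    """Probability of at least one bridge failure in 25 years, given the
    annual failure probability p_annual (years assumed independent)."""
    return 1.0 - (1.0 - p_annual) ** 25
```

A distribution that assigns a higher annual exceedance probability at the same pile depth therefore raises P25, and with it the expected replacement cost in the objective function.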
Changes in the Bridge and Pier Costs
By careful observation of the relative costs of the objective
function, it becomes obvious that the best strategy to follow in con-
structing the bridge is to sink the piles deep enough to almost guaran-
tee that the bridge will not fail. This incurs a cost of $400/foot for
sinking the piles, plus the bridge cost; but there will be very little
risk of the bridge failing and hence little chance of having to replace
the bridge. This makes the formal analysis used in the thesis somewhat
superfluous in that it really is not needed except to save several hun-
dreds of dollars, which is negligible when compared to the actual con-
struction cost of the bridge.
If the structure and pile costs were closer to equality than in
this thesis problem, the formal analysis would be of great value in in-
hibiting the cost incurred by the conservative who overdesigns or the
'gambler' who underdesigns in order to 'save' money. This could be the
case for an oil rig in the middle of the ocean, where the cost of sink-
ing the piles would be much more in line with the structure they are
supporting. The additional cost of sinking the piles unnecessarily
deep would not be incurred if the approach used in this thesis were
employed.
Change of the Depth-Discharge Relationship
Laursen (1969) has stated that this relationship can be de-
picted using a power function to the 3/5 power. This results in the
empirical relationship shown in Eq. 2.13. A change to a power function
to the 4/5 power causes the exponent in Eq. 2.13 to increase to 4/10,
which means that a larger flow is associated with the same pile depth.
This is then reflected in the cost of constructing the bridge because
the larger the flow protected against by the pile depth, the less pile
depth needed; hence a lower construction cost. The reverse argument
is true for a smaller power function.
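Reading the (partly garbled) Appendix C listing, the program appears to invert Eq. 2.13 as Q = (h/0.344)^(10/3), i.e., h = 0.344 Q^(3/10), with the 3/10 exponent arising from the 3/5 power relationship. A Python sketch under that assumption (the coefficient 0.344 is as read from the listing):

```python
def flow_from_depth(h, coeff=0.344, exponent=10.0 / 3.0):
    """Flow in cfs protected against by pile depth h in feet,
    inverting the assumed relation h = coeff * Q**(3/10)."""
    return (h / coeff) ** exponent

def depth_from_flow(q, coeff=0.344, exponent=10.0 / 3.0):
    """Pile depth in feet required to protect against flow q in cfs."""
    return coeff * q ** (1.0 / exponent)
```

This form is consistent with the Results section: a 16.08-foot depth corresponds to a flow of roughly 370,000 cfs. Raising the exponent of Eq. 2.13 from 3/10 to 4/10 would make the same depth protect against a larger flow, as the text states.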
CONCLUSIONS
The low value for the worth of additional information is in
part due to the relatively flat minimum of the objective function for
the Bayes risk. For pile depths in the range of 14.88 to 16.82
feet, the Bayes risk varies by no more than $100.
Clearly, the bridge should be built now, on the basis of the
present knowledge because very little is gained by waiting for more
data.
The computer program takes only 153 seconds to arrive at an
answer, making this form of analysis very inexpensive at the accuracy
levels shown in Table 2.
Design modifications, other than those discussed here, are easi-
ly handled by this method of analysis. Changing of the objective func-
tion and other variables still yields the same basic design principle
-- choose the design with the lowest Bayes risk, if building on the
basis of present knowledge.
The value of additional information can be expected to change,
based on the variability of the design parameters, especially the number
of years of data that are used in the analysis.
APPENDIX A
LISTING OF VARIABLES
        lower limit for integral
        upper limit for integral
BC      bridge cost in dollars
χ²_α    percentage points of the chi-square over degrees of freedom
        distribution
d       flow depth in feet
df      degrees of freedom
        error function parameter
        constant
EVSI    expected value of sample information
g(m)    lower limit for integral
Γ(x)    gamma function of x
h       pile depth in feet, depth of scour in feet
h*      optimum pile depth for unknown u and σ²
h*      optimum pile depth for known u and σ²
h*      optimum pile depth for known u and σ² with additional data point
h(m)    upper limit for integral
        number of years
        constant
        constant
m       sample mean of peak flow data
m_x     sample mean of augmented peak flow data
n       sample size
Nχ²(z)  convolution of normal and chi-square density functions
OL      opportunity loss
p       probability of bridge failure in one year
P(y)    binomial distribution of y failures
P25     probability of bridge failure at least once in 25 years
q       flow rate in cubic feet per second
R(z)    Bayes risk
s       sample standard deviation of peak flow data
s²      sample variance of peak flow data (biased)
s_x²    sample variance of augmented peak flow data
σ²      population variance of peak flow data
u       population mean of peak flow data
V(z)    objective function or utility function of vector of parameters z
VSI     value of sample information
XOL     expected opportunity loss
XXOL    expected expected opportunity loss
y       number of bridge failures
APPENDIX B
GENERATION OF NORMAL CHI-SQUARE WITH
CONCOMITANT UNCERTAINTY DISTRIBUTION
This distribution falls in the family of distributions labeled
by many statisticians as 'predictive' distributions. The development
of the following approach was accomplished by Davis (1971).
Reference to a normal gamma-2 distribution was made earlier in
this thesis. This distribution is necessary for the development of the
normal chi-square with concomitant uncertainty distribution.
Before proceeding further, a list of additional notation is
furnished for use in this appendix:
L[x]    likelihood of x,
k       proportionality constant = 1/σ²,
n       sample size,
m       sample mean,
v       sample variance,
m_x     sample mean with additional sample data,
s_x²    sample variance with additional sample data,
u       population mean,
σ²      population variance,
df      degrees of freedom,
NK[x]   non-kernel of the distribution of x,
        domain of the function,
f_N(x)  normal density function of x,
f_G2(x) gamma-2 density function of x,
W       vector of uncertain parameters,
ξ(W)    probability density function of uncertain parameters.
To begin, a statement of Bayes theorem is

    ξ(W|x) = ξ(W) f(x|W) / ∫ f(x|W) ξ(W) dW ,        (B1)

where x is the new sample information.

The predictive density function is being sought, and if W were
known, the predictive density would be f(x|W). However, only the pro-
bability density function of W is known. Thus, for any specific x, the
predictive density function evaluated at that x is a 'weighted average'
of the f(x|W)'s, with each f(x|W) being weighted according to the like-
lihood of W. This is accomplished by the following integration,

    ∫ f(x|W) ξ(W) dW ,        (B2)

which is exactly the denominator of Bayes theorem. Thus, rewriting
Bayes theorem yields:

    ∫ f(x|W) ξ(W) dW = ξ(W) f(x|W) / ξ(W|x) .        (B3)

Note that the numerator and denominator are known because of the
conjugate relationship, as shown by Raiffa and Schlaifer (1961). Hence
the kernels, or portions of the likelihood functions containing the
variable parameters of the distribution, of the numerator and
denominator are exactly the same, which reduces the problem to:

    ∫ f(x|W) ξ(W) dW = NK[f(x|W)] NK[ξ(W)] / NK[ξ(W|x)] .        (B4)
The normal gamma-2, as described by Raiffa and Schlaifer
(1961), can be written as:

    f_NG2(u, k | m, v, n, df) = f_N(u | m, kn) · f_G2(k | v, df) .        (B5)

The likelihood of the normal portion of the normal gamma-2 is:

    L[f_N(u | m, kn)] = ((kn)^(1/2) / (2π)^(1/2)) e^(-(1/2) kn (m - u)²) ,        (B6)

so the non-kernel portion, or what is left after the kernel is extracted
from the likelihood function, is:

    NK[f_N(u | m, kn)] = (n/2π)^(1/2) .        (B7)

The likelihood of the gamma-2 portion of the normal gamma-2 is:

    L[f_G2(k | v, df)] = e^(-vk·df/2) (vk·df/2)^(df/2 - 1) (v·df/2) / (df/2 - 1)! ,        (B8)

so the non-kernel portion is:

    NK[f_G2(k | v, df)] = (v·df/2)^(df/2) / (df/2 - 1)! .        (B9)

Combining Eqs. B7 and B9 yields the non-kernel part of the normal
gamma-2 distribution, which is

    NK[f_NG2(n)] = (n/2π)^(1/2) (v·df/2)^(df/2) / (df/2 - 1)! .        (B10)

Since df = n - 1, Eq. B10 can be written as

    NK[f_NG2(n)] = (n/2π)^(1/2) ((n-1)v/2)^((n-1)/2) / ((n-1)/2 - 1)! .        (B11)

After taking a new observation, Eq. B11 becomes

    NK[f_NG2(n+1)] = ((n+1)/2π)^(1/2) ((n+1)s_x²/2)^(n/2) / (n/2 - 1)! .        (B12)

Now Eq. B4 becomes

    ∫ f(x|W) ξ(W) dW = (1/2π)^(1/2) [(n/2π)^(1/2) ((n-1)v/2)^((n-1)/2) / ((n-1)/2 - 1)!]
                       / [((n+1)/2π)^(1/2) ((n+1)s_x²/2)^(n/2) / (n/2 - 1)!] ,        (B13)

where (1/2π)^(1/2) is the non-kernel of the normal likelihood of the
new observation.

Eq. B13 reduces to

    ∫ f(x|W) ξ(W) dW = (1/π)^(1/2) (n/(n+1))^(1/2) ((n-1)v)^((n-1)/2) Γ(n/2)
                       / [((n+1)s_x²)^(n/2) Γ((n-1)/2)] .        (B14)

Finally, substituting ns² = (n-1)v into Eq. B14 yields

    ∫ f(x|W) ξ(W) dW = (1/π)^(1/2) (n/(n+1))^(1/2) (ns²)^((n-1)/2) Γ(n/2)
                       / [((n+1)s_x²)^(n/2) Γ((n-1)/2)] ,        (B15)

which is readily solvable in terms of the problem.
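As a numerical sanity check on Eq. B15 — in the form f(x) = (1/π)^(1/2) (n/(n+1))^(1/2) (ns²)^((n-1)/2) Γ(n/2) / [((n+1)s_x²)^(n/2) Γ((n-1)/2)], with s_x² the biased variance of the sample augmented by x — the expression is a rescaled Student-t density in x and should integrate to one. A Python sketch written here for verification only (not part of the thesis program), using the thesis values n = 10, m = 3.750, s² = 0.070:

```python
import math

def predictive_density(x, n, m, s2):
    """Eq. B15 evaluated at a possible new observation x, given a sample
    of size n with mean m and biased variance s2 (here in log space)."""
    mx = (n * m + x) / (n + 1)                              # augmented mean
    sx2 = (n * (s2 + m * m) + x * x) / (n + 1) - mx * mx    # augmented biased variance
    return (math.sqrt(n / ((n + 1) * math.pi))
            * (n * s2) ** ((n - 1) / 2.0) * math.gamma(n / 2.0)
            / (((n + 1) * sx2) ** (n / 2.0) * math.gamma((n - 1) / 2.0)))

def total_mass(n, m, s2, half_width=12.0, npts=4001):
    """Trapezoidal check that the predictive density integrates to one."""
    s = math.sqrt(s2)
    lo, hi = m - half_width * s, m + half_width * s
    h = (hi - lo) / (npts - 1)
    vals = [predictive_density(lo + i * h, n, m, s2) for i in range(npts)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
```

The total mass comes out to one to within the quadrature tolerance, supporting the closed form above.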
APPENDIX C
LISTING OF COMPUTER PROGRAM AND RESULTS

This appendix consists of the actual listing of the CDC 6400
computer program, followed by the computer results. It was written in
the FORTRAN II and IV programming languages. An explanation of the func-
tion of each routine is supplied via comment cards at the beginning of
each subroutine. Further comment cards attempt to delineate the pro-
cess that is being performed at each section in the subroutines.

The output is for the formal analysis of the problem using the
ten years of streamflow data for the Rillito Creek.
C*************************************************************
C     THE DATA CARDS FOLLOW THE FORMAT BELOW
C     CARD 1 - 2   COL  1 - 80   HEADING FOR OUTPUT
C     CARD 3       COL  1 - 10   CP1   PIER COST
C                  COL 11 - 20   CP2   BRIDGE COST
C                  COL 21 - 30   R     DISCOUNT FACTOR
C     CARD 4       COL  1 -  5   NDPTS NUMBER OF DATA POINTS
C                  COL  6 - 15   FMT   VARIABLE DATA FORMAT
C                  COL 16 - 20   NYR   STARTING DATA YEAR
C     CARD 5 ...   DATA
C*************************************************************
      COMMON // X(100),XLOG(100)/SEAR/H,CP2,CP1,XMIN,DS
      COMMON /SIMP/AMEAN,VAR,NDPTS,IDF,GAM
      COMMON /COUNT/IIII
      DIMENSION HEAD(32),LINE(135)
      DATA AMEAN,VAR/2*0./,LINE/135*1H*/
      IIII = 0
C     READ THE INPUT OR DATA CARDS
      READ 2,HEAD
      READ 3,CP1,CP2,R
      READ 4,NDPTS,FMT,NYR
      READ FMT,(X(I),I=1,NDPTS)
C     WRITE INPUTED DATA
      PRINT 5,HEAD,CP1,CP2,R,NYR,(X(I),I=1,NDPTS)
C     TAKE LOGS OF FLOWS AND STORE IN SAME ARRAY - X
      DO 6 I=1,NDPTS
    6 XLOG(I) = ALOG10(X(I))
      PRINT 7,(XLOG(I),I=1,NDPTS)
C     COMPUTE MEAN AND VARIANCE FOR LOG FLOWS
      DO 8 I=1,NDPTS
    8 AMEAN = AMEAN + XLOG(I)
      AMEAN = AMEAN/NDPTS
      DO 9 I=1,NDPTS
v J V" +( (0( 1-) — ',', )*(.(I) — NICc ..)A / ippl5ST,i)V = F,T(V,F-...LJ l', I PADPT ,.„ 9 ,,if
C f:.,c,APuTE5 GijiHA FU -4CTIiiA- = Nt)PTS - 1III =. :OFTAIDFq?)IF (III.) 4 :,;9.5.943 .
4 , 6i01 = GA(ILJ)6 ..: TO 6C
5.; 6An = G4J-1 I(IDF)o c CONTIuL
C LOTIvE CUnTPOL OF PkOGFn FL0 .6CALL CSEAr:CH(STnEv)CALL SEAPCh(STLIEv ,;TU)C.ALL ,v,PT(T00)DI &f = To _ fooPRIJ“ 219!,HEA:05LIHEILINE9LINE,TO,Tu 0 90IFI:. 4LINE9
A LIE,L1hEC FORMAT STATEHENTS
2 rol.T(16A5/1(DA5)
3 FRnAT(3Fl.c.)
4 F0kAT(15A115)
5 For+AT(1H1,16gb/6xIloA5////1149*FIrt co31 = * 9
AF1).2/119 *BkIOGE 0 (3 5f F *,FP.)0/11A9*DISC 01,1 11 *
A* = *,F1.,;e2///7/6s*DATA E - GINS ,i YEA ,I)////C21X,*--- DATA PO“\TS ---*//// (24X9F10.3))
7 FR1AT(ihi,13A9* LOGARITHmS • *9///(11A9E20.8))
I..; FUR,IAT(////6X,*--- nE/',N Ar40 VAHlAfACEI FOR 5 A0r)LE *
A 0 0 .F SIZE *,15//////16Ao*vEAN616x9*vARIANCE =*,F15,7)
219 FORmAT(1P1f//A91645//6x916A5/////4 ,1 3 5 A 1 /X9 135 A 1 /
A,135A1///X9*FINAL RESULTS OF STuOT*////A,6*EOL OF ORIGIAL OATA =.23.6//x,CoEOL OVER ALL AE,4 RUSSIELE POINTS =*,E11.6//////
- * iciRTri uF r_1) ., E nORE 0A1A POINF =*
EE15.7///X,135AI/X,1 3 541/X91 35 A1)
STOP .Ei4U
      FUNCTION GAM1(I)
C**********************************************************
C     COMPUTES THE GAMMA FUNCTION FOR INTEGERS
C**********************************************************
      GAM1 = 1.0
      K = I - 1
      DO 1 J = 1,K
    1 GAM1 = GAM1*J
      RETURN
      END
      FUNCTION GAMR(I)
C**********************************************************
C     COMPUTES GAMMA FUNCTION FOR NON INTEGERS
C**********************************************************
      IF (I.EQ.1 .OR. I.EQ.3) GO TO 3
      K = I - 2
      GAMR = 1.0
      DO 1 J = 1,K,2
    1 GAMR = GAMR*(J/2.)
      GAMR = GAMR*SQRT(3.1415927)
      GO TO 4
    3 GAMR = SQRT(3.1415927)/2.
    4 RETURN
      END
      SUBROUTINE CSEARCH(STDEV)
C**********************************************************
C     SEARCH BY GOLDEN SECTION, AN ITERATIVE PROCESS.
C     SEARCHES FOR MINIMUM OF FUNCTION.
C     REFERENCE  WILDE AND BEIGHTLER,
C     FOUNDATIONS OF OPTIMIZATION
C     FOR EACH PILE DEPTH CHOSEN THE ROUTINE CALLS
C     SIMPSON, WHICH IS THE DOUBLE INTEGRATION,
C     AND THIS IS THE CALCULATION OF THE BAYES RISK
C     WHICH YIELDS THE MINIMUM.
C**********************************************************
      COMMON /SEAR/H,BC,PC,XMIN,DS
      COMMON /SIMP/XBAR,VAR,ND,ID,GAM
      COMMON /COUNT/IIII
      DIMENSION PT(4),EV(4),LINE(135)
      DATA CLOSE/10./,LINE/135*1H*/
C     DEPTH TO FLOW RELATIONSHIP
      Q(Z) = ( Z / .344 ) ** ( 10. / 3. )
PRINT U:,0t),LikEINITIALIZES POINT .f)NO SIZE OF JumP
=PT(1) = 9.8SS = 1,GJUMP = 1.2*S5
SETS PILE DEPTH, CONv ERTS TO FLOw, Lo6 OF FLO,AND EVALUATES VIA THE DOuBLt-- UITEGRAFION_
OS = PH')= Q ( P 1 (1) )
H = AL061,(H)CALL SIMPSON ( EV(I) )PT(2)=PT(1)+6JUP
C SETS PILE DEPTH, CONVERTS TO FLOW, LUG OF FLOW,AND EVALUATES VIA THE DOUBLE INTEGRATION
DS = PT(?)H = Q ( P 1 (2) )H = ALOGI(H)CALL SIMPSON ( EV(2) .)
CHECKS EVALUATIONS FOR OIRECTION OF NEAT JtJi-YIF (Ev(1).GT.EV(2)) 1,4
SETS PILE DEPTH, CONVERTS TO FLOw, LUD OF FLU,Aû EVALUATES vIA THE 00QmLE IN1..16RA1I0fJ
1 RT(3)=PT(2)+1.b18.,34*(PT(2)-PT(1))-0S = PT(3)H = Q ( P 1 (3) )
H = AL061,(ti)CALL SIMPSUN ( EV(3) )
C! -IECS EVALUATIO:9S 'A)H kE.O ,;,.oEkio OF FOJNisIF (FV(3).GTePv( -, -. )) 1193
3 PT(I)=PT(2)P 1(2)=I- T(3)LV(I)zEv(2)EV(2)=Ev(3)GO LO I
C REORDERS POINTS
4 TA = PT(I)TB = EV(1)PT(1) = PT(2)EV(1) = EV(2)PT(2) = TAEi(2) = T8
8 PT(3)=PT(2)-1.618,:34*(PT(1)-PT(2))CSETS PILE DEPTHi CONVERTS TO FLOeJ, Loo OF pLo ,:J 5,
ANO EVALUATES VIA THE 00L-LE IiNTEOPATIONOS = PT(3)H = Q ( P1(3) ).H = ALOG10(H)CALL SIMPSON ( Ev(3) )
C -CHECKS EVALUATIONS FOR REORoEING OF POINTSIF (EV(3).6T.EV(2)) 22997
REoPDERS POINTS7 PT(1)=PT(2)
EV(1)=Ev(2)PT(2)=PT(3)EV(2)=EV(3)GO TO 8
C CHEcKS TO SEE IF EXTERIOR POINTS ARE ITHIN
C - SPECIFIED LIMIIS11 = AbS(EV(3) E4(l) )
IF (w.LE.CLOSE ) GO TO 2 026 PT(4)=1.616034*(PT(2)-PT(1))+PT(1)
C SE1S PILE LEPTH9 CONVERTS TO FLo'd, LUG OF LOw,
C AND EVALUATES VI, THE D0u6LE INTEGRATIONDS = PT(4) .H = 0 ( PT(4) )H = ALOG1((H)CALL SImPSON ( EV(4) )
C DELETES FATPANEOUS POINT BY RE0kDERIIY,5IF (EV(4).L.T0EV(2)) 1019102
c REORDERS POINTSlui PT(1)=PT(2)
EV(1)=Ev(2)P 1(2)=PT(4)EV(2)=EV(4)GO TO 11
C REORDES POINTS1a• PT(3)=PT(4)
EV(3)=EV(4)GU TO 22,- '
C RFORDEPS POINTS22R T=P1(3)
TT LV(3)RT(3)=PT(I)EV(3)=EV(1)ri (1)TtV(1) = Ti'
C CHECKS TO SEE IF EXTEKION POINTS ARE ITdiNSPECIFIED LIMITS
2Lv i = AGS(F1(3) Ev(1) )IP (‘.•:.LE.,CLOSE ) GO FO 26;_)RT(4)=PT(3)-1.618f)34*(PT(3)-PT(2))
C SE IS PILE DEPTH, CONVERTS TO FLU, Lou OFAND EVALUATES TdE DOUBLE INFL*RAW.4'4_
OS = PT(4)= Q ( PT(4) )=. ALOG1(H)
CALL SI('!PSON ( FV(4) )C DELETES EATRANEOOS POINT bY REORDERING
IF(EV(4).GT,EV(2)) 'C'01,202 •2'21 PT(1)=PT(4)
Ev(1)=EV(4)GO TO 11
C KEORDERs POINTS22 HT(3)=PT(2)
L-_,/(3)=EV(2)FT(2)=PT(4)EV(2)=EV(4)GO TO 22(J
C CONPUTE'S ANIN ENUAL TU THE NrDPO1T OF INTERVALA;II0=P1(1)+ .5*(PT(3)-PT(1))
C SEAS PILE DEPTH, CONVERTS 10 FLO, LOG OF FLOvq,ADO EVALUATES VIA THE DOUBLE INTEGPAIION_
DS = AMINH = 0(XHIN)H = AL061(H)CALL SIHPSON(V1)
C COMPUTES BAYES PASK AND MINIMUM PILE DEPTHVi = AmInl(V19Ev(2))1F (V1.EGI.EV(2)) XMIN = PT (2)
C PRI'rr RESULTS FOR 3AYES RISKPRINT 22,LINE,AmIN,V19LI1'E
22 FORHAT(////X,I35A1// X* RESULTS FOR bi.AYE*A* CALCULATIONS */////6X,*mININLP1BET.8//6A,*BAYES RISK =*,E20,6///A,13541)
lo0(-2 FOHMAT(1h1/// X,*JOuBLE INTEGRATION TO FILD ThE *A*NINIOUM PILE OEPTH*/3A, 0 0vER ihE 0OUDLD INTEur“„ *
B*TION *///X135A1///)RETURr4ENO.
StiL5P.OUTIl'iF SIP3Oi“(01)* *** **-; 1- ***********4************ *********************
COTROL PROGkAm FOP pbuLESETS THE LIMITS FOR TriE (7)0TC.k 1.1E1.,J, URVAkIL:CL n4 THIS CAbE t AU IS PROG1-00 To okL,ANThE Coi UAkt DISTIHUTION OP toJu 1H:L1 SEGHENTsr-oR FASTER INIEGRATION 1.4 0CEEDINGS.
******* * * ** *** * *** * *** ****** **** **** ** *** ******** *********LCflr'Oi. ,; /S1( 1 P/u9S941UF/Cv'tCuMrUH /EAR/h96CsPC.;X1IL ,,DS
/0000T/ 1111L)1HEJ,SIWJ CH11(44)1,CH19(44)
LOAUS ViALOES FOI . CHISJOgRE SO TmATCAN 0 E ESTABLISHED
DATA CH119CHI9/.6s4 , 6G14.fJOis.C2279.429.3b?A.k_.6541.1C719.128190147916679.14b;.2..)134.21(2v6.23 -224.24644,25989.2725281-6,2019.3u(4.31(44C..32739 .3 -3099.346,.35q79.76319. 3 7119.378o9,, 3d.539,U3934,. 44904C7, 9 4134T6,4197,04257,. 43159043711449E..44-73,
ti) ,"'F--:1 5, 4+Q 103; 3 G 71-03.476,i3ei:b56 9 3.u97 6:.,2.95r3o962.8422,2.7424 t 2.65o,2.52,2‘.513192. 4 53292.3 94 ,
: -y..3,5792.306392.265692.226492.1949.1 6 2192.1325912•1046,2.Y789i2.6547q2.0319v2.01049 1 .99:191.9/ 09
01.95 2 7,1,935591.9194 1.9V3 ,4.91b8 3 55, 1.074291.bbCosKI.8476,1,83591.E.2391.811.5,1.60L4,1.7696/TuT =
SETS UNITS FOR A AND o FOF FIRST HALF OF INTE6RAfIONA = (t-i*S) (IuF*C019(IDF))
= (r, * S) / ( IDF 2)CALL SRULl(AOIAREA)
C ADDS INCREHETAL AREAIOT = TOT 4. ii-(EA
SETS LIIiITS FOP A AND b FUN SECOND HALF OF INTEuKATION_A = (t S) / (IDF — 2)6 = (N*S) / (IOF*CH11(IDF))CALL SuL1(A,69APEA)
ADDS P,ICREt;ENTAL VALUE, OUTPUTS RESULTSTOT = TOT 4, AREAIF (IIII.E0.1) RETUR N
1(;C.059 TOTlc° FORMAT(lh,,,X9*PILE DEPTH =*,F1U.24n44
A*PISK CALCuLATION =*9.E16.7)PEToRe,ENV
      FUNCTION PHI(ARG)
C**********************************************************
C     FUNCTION ROUTINE NEEDED TO COMBINE THE TWO
C     ONE-DIMENSIONAL SIMPSONS RULES.  RESETS THE
C     LIMITS ON THE INTEGRAL EACH TIME IT IS CALLED.
C**********************************************************
      A = G(ARG)
      B = HX(ARG)
      CALL SRUL2(A,B,ARG,PHI)
      IF (SENSE SWITCH 1) 111,222
  111 PRINT 77,ARG,PHI
   77 FORMAT(5H ARG= F20.6,10X,4HPHI= F20.6)
  222 RETURN
      END
      FUNCTION HX(X)
C**********************************************************
C     FUNCTION ROUTINE TO SET THE UPPER LIMITS ON THE
C     INNER INTEGRAL FOR EACH POINT OF OUTER INTEGRAL.
C**********************************************************
      COMMON /SIMP/XM,V,ND,ID,GAM
      HX = XM + 3.0 * SQRT(X)
      RETURN
      END

      FUNCTION G(X)
C**********************************************************
C     FUNCTION ROUTINE TO SET LOWER LIMITS ON THE INNER
C     INTEGRAL FOR EACH POINT OF OUTER INTEGRAL.
C**********************************************************
      COMMON /SIMP/XM,V,ND,ID,GAM
      G = XM - 3.0 * SQRT(X)
      RETURN
      END
      SUBROUTINE SRUL2(A,B,ARG,AREA)
C**********************************************************
C     A ONE DIMENSIONAL SIMPSON RULE THAT COMPUTES THE
C     VALUE OF THE INTEGRAL.  NEVER COMPUTES THE SAME
C     POINT MORE THAN ONCE.  INSTEAD CHANGES THE
C     COEFFICIENT OF THAT TERM.
C**********************************************************
C     SETS AND CHECKS LIMITS AND INTERVAL SIZE
      DELH=(B-A)/2.
      IF(DELH)2,1,2
    1 AREA=0.
      GO TO 70
C     CALCULATES VALUES AT EXTREME POINTS
    2 ARIN=R(ARG,A)+R(ARG,B)
C     NEXT POINT AND EVALUATES
      C=A+DELH
      Y1=R(ARG,C)
      PRAR=DELH/3.*(ARIN+4.*Y1)
C     CALCULATES REMAINING POINTS AND CHANGES COEFFICIENTS
C     OF THE PAST POINTS DURING THE COMPUTATION
      DO 40 I=2,15
      K=2**(I-1)
      FK=K
      AREA=ARIN+2.*Y1
      K2M1=2*K-1
      FL=-1.
      Y2=0.
      DO 60 M=K,K2M1
      FL=FL+2.
      C=A+FL*DELH/FK
   60 Y2=Y2+R(ARG,C)
      AREA=(AREA+4.*Y2)*DELH/(3.*FK)
      Y1=Y1+Y2
C     RELATIVE ERROR LIMIT
      EPS=ABSF(AREA*1.E-01)
C     CHECK TO SEE IF TWO ITERATIONS ARE WITHIN 10 PERCENT
      IF(ABSF(AREA-PRAR)-EPS)70,70,40
   40 PRAR=AREA
   70 RETURN
      END
      SUBROUTINE SRUL1(A,B,AREA)
C**********************************************************
C     A ONE DIMENSIONAL SIMPSON RULE THAT COMPUTES THE
C     VALUE OF THE INTEGRAL.  NEVER COMPUTES THE SAME
C     POINT MORE THAN ONCE.  INSTEAD CHANGES THE
C     COEFFICIENT OF THAT TERM.
C**********************************************************
C     SETS AND CHECKS LIMITS AND INTERVAL SIZE
      DELH=(B-A)/2.
      IF(DELH)2,1,2
    1 AREA=0.
      GO TO 70
C     CALCULATES VALUES AT EXTREME POINTS
    2 ARIN=PHI(A)+PHI(B)
C     NEXT POINT AND EVALUATES
      C=A+DELH
      Y1=PHI(C)
      PRAR=DELH/3.*(ARIN+4.*Y1)
C     CALCULATES REMAINING POINTS AND CHANGES COEFFICIENTS
C     OF THE PAST POINTS DURING THE COMPUTATIONS
      DO 40 I=2,15
      K=2**(I-1)
      FK=K
      AREA=ARIN+2.*Y1
      K2M1=2*K-1
      FL=-1.
      Y2=0.
      DO 60 M=K,K2M1
      FL=FL+2.
      C=A+FL*DELH/FK
   60 Y2=Y2+PHI(C)
      AREA=(AREA+4.*Y2)*DELH/(3.*FK)
      Y1=Y1+Y2
C     RELATIVE ERROR LIMIT
      EPS=ABSF(AREA*1.E-01)
C     CHECK TO SEE IF TWO ITERATIONS ARE WITHIN 10 PERCENT
      IF(ABSF(AREA-PRAR)-EPS)70,70,40
   40 PRAR=AREA
   70 RETURN
      END
      FUNCTION R(X,U)
C**********************************************************
C     FUNCTION EVALUATED IN DOUBLE INTEGRATION.  IF IT IS
C     FOR BAYES RISK IT RETURNS AFTER COMPUTATION, AND
C     OTHERWISE CONTINUES PROCESSING USING THE GOLDEN SECTION.
C**********************************************************
      COMMON /PRRR/RR
      COMMON /SIMP/XBAR,S,N,IV,GAM
      COMMON /COUNT/IIII
      COMMON /SEAR/H,BC,PC,XMIN,DS
      COMMON /MINIMU/T
      DIMENSION P(4),EV(4)
      DATA EPS/.0001/
C     PILE DEPTH TO FLOW RELATIONSHIP
      Q(Z) = ( Z / .344 ) ** ( 10. / 3. )
L1KZLIHOOD OF NORMAL OhI-Sc,luARE 0IS1K160TioNRk= ((N * s / yo **((iV-)/ 2.) * H * s / (A *
A X))) / ( GAO * (2. **(Ivi 2)) * ST(5 1,2331b34H * x) * FxP(( * S / (2. * X)) 4- ( ( J - AS) *C - X13)) / (2. * A)))ELL. =IF (IIII.EW.1) GO TO 30
C NO1-61AEIZING FACTOR FOR LOG OF FLU',,4Z = ((H-U) / SORT(X)) / S(_,)RT(2.)IF (Z) 10109162041b20
1 -J10 Y -= AdS(7)C CALLS ERROR FUNCTION TO EVi-LUATE NORNAL PRoBAbILIFY
CALL ERF(Y9EFS.E.Fid.C)cOHPUTE PROBABILITY OF BRIDGE FAILURE Jr ONE YEARRP = .5 + .5 * EIF (PP .GT. 1.0) PP = 1.0GO TO 1°36
lj.i2o CALL ERF(Z9EPS9E4HH9C)RP = .5 - .5 * EIF (kR 0.0) Rp = O.
PROBABILITY OF 6RIDGE FAILURE Ar LEAST ONCE IN 25 yks103 0 P25 -= 1. - (1. - PP) ** 25.
C EVALUATE THE OBJECTIVE FUNCTION WITH THE EIKELlhOu0
• I H = RN * (P25 * BC + PC * US)RETURN •
C SEARC"( BY GOLDEN SECTIONC INITIALIZATIONS OF JUNE'3 n., ST = SORT(X)
GJP = 1.2 * ST
C CU:.IPUTES POINT. THASFORN FLu5i. LOG Or: FLU 5AND EVALUA)ES NSIWG A INrE6Af11,,N
P (1) = TC CO.iPUTES NP5 POINT. TP/ANSFON 1LJ, LO ( , OF FLW -25
AND EVALOAlEe, 1 ;1N(-) A INr ,- (3RATION PrWcEOuKEDS = P(I)IF (OS *Li'''. 60 10 2H = OW(1))EV(1) = HT .VAL(X9U)P (2) = T GJPOS = P(2)IF (DS .LE. Gu TO 2ti = Q(F(2))EV(2) ' -VAL(xu)
CHECKS FOR DIRECTION uf- NEXT 000.)
IF (EV (1) .6T.EV (2) ) 1 6 9 4C CON PUTES ;E P014 T000SF0k0 FLU LU 6 OF FLD.
AND EVALUATES DSIi40 A 1NTE6RATION PKOcEDURE0 (3) = P(2) 4. 1.638334 * (P(2) - 0 (1))DS = P(3)IF (DS .LE. O.) GO TO 2H = O(P(3))EV(3) = HTVAL(A9U)
C CHECKS FDR DESIRED bRACKET OR r4,70 P34 (0 BE AWOL')IF (EV(3).GT.Eçt(2)) 1253
kFORDERS POINTS SO OLv , POIT CAN BE WJDEO3 0 (1) = P(2)
P (2) = 0 (3)EV(1) = FV(2)EV(2) = EV(3)GO TO 10
C REORDERS POINTS $o f4 CAN BE AUDED -
4 TT = P(1)TTT = Ev(1)P(1) = F(2)EV(1) = Ev(2)P R) = TTEV(2) = ITT'
C CoHPUTES NEw POINT , TRANSFORM FL04, LUG OF FLO'ii.040 EVALUATES USING A INTEGRATION PROCEDURE
8 P(3) = P(2) - 1.613034 * (P(1) - P(2))DS = P(3)IF (DS • LE. C.) 30 TO 2m = u(P(3))EV(3) = mTvAL(Xtu)
C CHECKS FOR DESIkED f7,'0ACKET OR NE-i POINT TU BE ALOEIF (EV(3).GT.EV(2)) 22997
REORDERS POINTS SO NEt, POINT CAN SE ADDED7 P(I) = P(2)
P (2) = P(3)Ev(1) = EV(2)
CC
12
11
EV(2) = EV(3)GO TO ROOTIi4oE
CHECKS TO SEE IF EXTEPluk Oi -7.SPECIFIED LIIIS OF 1 PU,T.ET
= A 6 IN1(EV(1) , Ev(3))
'AJTHIN
YY = A6S(EV(1) - EV(3))AX = .01 *IF (YY .LE. XA) GO TO 20:)
C HUT CLOSE - FAOOGH SO hoD ki_A POINTC CO POTES POINT, 1k4HSFot-1 F.L09 Lj(i OF FE09C Afsiu EVALUATES uSING A PATE6r‹,:0- 10H PkOcooL26 = 1,618034 * (P(2) - P(1)) P(1)
JS = P(4)IF (06 .LE. 0.) GO 10 2H = O(P(4))EV(4) = HTVAL(A90)
C OELETES EATRANEOOS POINTIF (Ev(4).LT.EV(2)) 1019102
C .0LoRDERS101 P(1) = P(2)
p(2) = e(4)Ev(1) = ETV(2)EV(2) = F0 (4)GO TO 11
C- C
DELETES EXTRANEOUS POINTriF_ODERS POINTS
1 02 P(3) = P(4)EV(3) = FV(4)GO TO 220
C HEORDERS POINTS229 TT = R(3)
TTT = EV(3)P(3) = P(1)EV(3) = EV(1)P(1) = TTEV(1) = - TTT
C CHECKS TO SEEIF EXTERIOR POINTS ARE vJITHINSPECIFIED LImITS OF 1 PERCENT
220 N = AmIN1(EV(1),EV(3))= ABS(EV(1) - EV(3))
XX = .01 * W1F (YY .LE. XX) GO 10 200
NOT CLOSE ENOUGH, SO ADD NE! POINTLOAPUTES •FW POINT, TkAHSFOPH FLuA, LOG OF FLU,
CAi) EVALUATES USING A INTEGRAflu.4 PRoCEOURLP(4) = P(3) - 1.6161;34 * (P(3) - P(2))
= P(4)IF (DS .LE. U.) GO TO 2o = u(P(4))EV(4) = HTVAL(X,U)
58
IF (Ev(4),GT.E.J(2)) 201,202C 0ELETES EATRANEous 1-)011qC t:EORDEqs P01 MIS201 P(1) = P(4)
EV(1) = EV(4)GO To 11
REORDERS POINTS. 202 P(3) = P(2)
P(2) = e(4)Ev(3) = EV(2)iv(2) = FV(4)GO TO 223
2 LLL = LLL 1IF (LLL ,EQ. 2) GO TO 261T = 15.60 TO 3J
C PILE 0ÉPTH SET AT MINIMUM vALLJE FOR INTEGRATIuN261 HTRUL = .001
GO TO 2888C TRUE PILE DFPTH CALCULATED
HTPuE = P(1) .5 * (0(3) - P(1))C vALuE CO M PUTED 1-OR HTPuE288 H = c)(H -MuE)
uS = HTRHEV2 = HTVAL(X9u)V 2 = AmIN1(V29Ev(2))
C VALUE COMPUTED Fop NINImuig, FROM bAYES RISSUS = 7\HINH = (XmIN)V1 = HTvgL(X9u)
C cO:ATRIuTlom FOP OPPORTUNITY LOSS CALCULATION= kpil- (v1 - v2)
C RE SETS STARTING POINT To LAST FOU NO VALUE FOR HTHuET = HTRUEIF (HTRuE.EQ0.001) T =RETURNÉND
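The listing above is a golden section search over pile depth: it expands in the downhill direction until three points bracket the minimum, then repeatedly places the symmetric interior point and discards the worst end until the two exterior evaluations agree within one percent. A minimal Python sketch of the same scheme (the objective `f` is a stand-in for the HTVAL evaluation, and the stopping test here is on interval width rather than the listing's one-percent function test):

```python
import math

# Golden-section minimization, a compact analogue of the search in the
# listing above.  0.618034 is the reciprocal of the golden ratio that
# the FORTRAN builds from its 1.618034 constant.
def golden_minimize(f, a, b, tol=1e-5):
    inv = 0.618034
    x1 = b - inv * (b - a)
    x2 = a + inv * (b - a)
    f1, f2 = f(x1), f(x2)
    while (b - a) > tol:
        if f1 < f2:                 # minimum lies in [a, x2]
            b = x2
            x2, f2 = x1, f1
            x1 = b - inv * (b - a)
            f1 = f(x1)
        else:                       # minimum lies in [x1, b]
            a = x1
            x1, f1 = x2, f2
            x2 = a + inv * (b - a)
            f2 = f(x2)
    return 0.5 * (a + b)            # midpoint, as in statement 200

# A hypothetical objective with its minimum near the depth the run
# shown later converges to (16.08 ft)
best = golden_minimize(lambda d: (d - 16.08) ** 2 + 7449.8, 10.0, 25.0)
```

Each pass reuses one of the two interior evaluations, so only one new objective evaluation is needed per interval reduction, which matters here because every evaluation involves the error-function and costing work in HTVAL.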
      SUBROUTINE SEARCH(S,XTOT)
C     **********************************************************
C     * CONTROL ROUTINE TO ENABLE COMPUTATION FOR THE          *
C     * VALUE OF PILE DEPTH, ASSUMING THAT MU AND              *
C     * SIGMA SQUARED ARE THE TRUE VALUES.  SETS PARAMETERS    *
C     * TO ENABLE THE DOUBLE INTEGRATION ROUTINE TO PROCEED    *
C     * TO THE GOLDEN SECTION SEARCH WHEN IN THE FUNCTION RR.  *
C     **********************************************************
      COMMON /SIMP/XM,VAR,N,ID,GAM
      COMMON /SEAR/H,BC,PC,XMIN,DS
      COMMON /MINH/T
      COMMON /COUNT/IIII
      DIMENSION LINE(135)
      DATA LINE/135*1H*/
C     SETS PARAMETERS FOR SIMPSON
      IIII = 1
      XTOT = 0.
      T = 14.
      H = (T/.344) ** (16./3.)
      H = ALOG10(H)
C     CALLS DOUBLE INTEGRATION AND PRINTS RESULTS
      CALL SIMPSON(XTOT)
      PRINT 2,LINE,XTOT,LINE
    2 FORMAT(1H1///X,135A1///X,*SEARCH FOR MINIMUM FOR *
     **EACH MU AND SIGMA SQUARED*////X,
     **EXPECTED OPPORTUNITY LOSS =*,E20.8///X,135A1)
      RETURN
      END
      FUNCTION HTVAL(X,U)
C     **********************************************************
C     * ROUTINE THAT RETURNS TO THE SEARCH PROCESS THE         *
C     * VALUE OF THE OBJECTIVE FUNCTION WITHOUT THE            *
C     * LIKELIHOOD MULTIPLIED BY IT.  COMPUTES THE             *
C     * PROBABILITY OF BRIDGE FAILURE BY AN ERROR              *
C     * FUNCTION IN WHICH MR. L. LANE GAVE HIS ASSISTANCE.     *
C     * EVALUATES THE OBJECTIVE FUNCTION AS MENTIONED.         *
C     **********************************************************
      COMMON /SEAR/H,BC,PC,XMIN,DS
      COMMON /PRRR/RR
      DATA EP/.00001/
      H = ALOG10(H)
C     NORMALIZING FACTOR FOR LOG OF FLOW
      Z = ((H-U) / SQRT(X)) / SQRT(2.)
      IF (Z) 2,3,3
    2 Y = ABS(Z)
C     ERROR FUNCTION TO EVALUATE NORMAL PROBABILITY
      CALL ERF(Y,EP,E,HH,C)
C     COMPUTES PROBABILITY OF BRIDGE FAILURE IN ONE YEAR
      RRP = .5 + .5 * E
      IF (RRP .GT. 1.0) RRP = 1.0
      GO TO 4
    3 CALL ERF(Z,EP,E,HH,C)
      RRP = .5 - .5 * E
      IF (RRP .LT. 0.0) RRP = 0.0
C     COMPUTES PROBABILITY BRIDGE FAILS AT LEAST ONCE IN 25 YEARS
    4 P25P = 1. - (1. - RRP) ** 25.
C     EVALUATES THE OBJECTIVE FUNCTION WITHOUT THE LIKELIHOOD
      HTVAL = (P25P * BC + PC * DS)
      RETURN
      END
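HTVAL's expected-cost evaluation can be sketched in a few lines of Python. Two assumptions: the depth-to-flow transformation Q is not shown in the listing, so the `(T/.344)**(16./3.)` relation applied in SUBROUTINE SEARCH is reused here in its place, and `math.erf` stands in for the thesis's ERF subroutine; the cost figures are taken from the run header shown later, purely as illustrative inputs:

```python
import math

def failure_objective(depth, mu, var, bridge_cost, pier_cost):
    """Sketch of FUNCTION HTVAL: price a pile depth as
    P(>=1 failure in 25 yr) * bridge cost + pier cost * depth."""
    # log10 of the design flow; the (T/.344)**(16/3) relation from
    # SUBROUTINE SEARCH is assumed to play the role of Q here
    h = math.log10((depth / 0.344) ** (16.0 / 3.0))
    z = (h - mu) / math.sqrt(2.0 * var)
    p_year = 0.5 - 0.5 * math.erf(z)        # P(failure in one year)
    p_25 = 1.0 - (1.0 - p_year) ** 25       # P(at least one failure in 25 yr)
    return p_25 * bridge_cost + pier_cost * depth

# Illustrative call with the run's inputs (pier cost 400, bridge cost 150000)
cost = failure_objective(16.08, 3.7500272, 0.0696379, 150000.0, 400.0)
```

Because `math.erf` is signed, the single expression `0.5 - 0.5*erf(z)` covers both branches that the FORTRAN handles separately (its ERF apparently accepted only nonnegative arguments, hence the ABS on the negative side).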
      SUBROUTINE XOPT(AREA)
C     **********************************************************
C     * RESETS NECESSARY PARAMETERS TO ENABLE INTEGRATION      *
C     * OVER THE POSSIBLE PEAK FLOWS FOR THE NEXT YEAR.        *
C     * CONTROL IS PASSED TO A ONE DIMENSIONAL SIMPSONS        *
C     * RULE FOR THE ABOVE INTEGRATION.                        *
C     **********************************************************
      COMMON /AG/X(100),XLOG(100)
      COMMON /SIMP/XM,VAR,N,ID,GAM
C     ADDS ONE NEW DATA POINT FOR NEXT YEAR
      PRINT 1
    1 FORMAT(1H1//////////X,*THE FOLLOWING PAGES OF *
     **OUTPUT ARE FOR THE NEW DATA POINTS*)
      N = N + 1
C     COMPUTES NEW GAMMA FUNCTION
      ID = N - 1
      MM = MOD(ID,2)
      IF (MM) 21,20,21
   21 GAM = GAMI(ID)
      GO TO 30
   20 GAM = GAMAR(ID)
   30 ST = SQRT(VAR)
C     SETS LIMITS FOR INTEGRATION
      A = XM - 4.5 * ST
      B = XM + 4.5 * ST
      CALL RUL(A,B,AREA)
      RETURN
      END
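The ±4.5 standard-deviation limits set in XOPT are wide enough that truncating the integral there costs essentially nothing for a normal integrand; a quick check:

```python
import math

# SUBROUTINE XOPT truncates the integral over next year's log-flow at
# XM +/- 4.5 standard deviations.  For a normal integrand the
# probability mass left outside those limits is negligible:
mass = math.erf(4.5 / math.sqrt(2.0))   # P(|Z| <= 4.5), Z standard normal
tail = 1.0 - mass                       # mass outside the limits, < 1e-5
```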
      SUBROUTINE RUL(A,B,AREA)
C     **********************************************************
C     * ONE DIMENSIONAL SIMPSONS RULE.                         *
C     **********************************************************
C     SETS AND LIMITS ON INTERVAL SIZE
      DELH = (B-A)/2.
      IF (DELH) 2,1,2
    1 AREA = 0.
      GO TO 70
C     CALCULATES VALUES AT EXTREME POINTS
    2 ARIN = TAK(A) + TAK(B)
C     COMPUTES NEW POINT AND EVALUATES
      C = A + DELH
      Y1 = TAK(C)
      PRAR = DELH/3. * (ARIN + 4.*Y1)
C     CALCULATES REMAINING POINTS AND CHANGES COEFFICIENTS
C     OF THE PAST POINTS DURING COMPUTATIONS
   30 DO 40 I = 2,15
      K = 2**(I-1)
      FK = K
      AREA = ARIN + 2.*Y1
      K2M1 = 2*K - 1
      FL = -1.
      Y2 = 0.
      DO 60 M = K,K2M1
      FL = FL + 2.
      C = A + FL*DELH/FK
   60 Y2 = Y2 + TAK(C)
      AREA = (AREA + 4.*Y2)*DELH/(3.*FK)
      Y1 = Y1 + Y2
C     SPECIFIED ERROR LIMIT
      EPS = ABSF(AREA*1.E-1)
C     CHECK TO SEE IF TWO ITERATIONS ARE WITHIN 10 PERCENT
      IF (ABSF(AREA-PRAR)-EPS) 70,70,40
   40 PRAR = AREA
   70 RETURN
      END
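RUL is composite Simpson integration with successive interval halving: each pass evaluates the integrand only at the 2**(i-1) new midpoints, reuses all previously computed ordinates (they become the even-index points of the finer grid, sum Y1), and stops when two successive estimates agree within 10 percent. The same scheme in Python:

```python
# Composite Simpson's rule with successive interval halving, as in
# SUBROUTINE RUL: each pass adds 2**(i-1) new midpoints, reuses all old
# interior ordinates, and stops when two successive estimates agree
# within rel_tol (10 percent in the listing, up to 15 halvings).
def simpson_adaptive(f, a, b, rel_tol=0.10, max_halvings=15):
    if b == a:
        return 0.0
    h = (b - a) / 2.0
    ends = f(a) + f(b)
    y1 = f(a + h)                       # running sum of interior ordinates
    prev = h / 3.0 * (ends + 4.0 * y1)
    for i in range(2, max_halvings + 1):
        k = 2 ** (i - 1)                # new midpoints this pass
        y2 = sum(f(a + (2 * m + 1) * h / k) for m in range(k))
        area = (ends + 2.0 * y1 + 4.0 * y2) * h / (3.0 * k)
        if abs(area - prev) <= rel_tol * abs(area):
            return area
        prev = area
        y1 += y2                        # old midpoints become even points
    return prev
```

The reuse trick is why Y1 is only ever added to: after halving, every previously used interior point sits at an even index of the new grid and keeps coefficient 2.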
      FUNCTION TAK(Y)
C     **********************************************************
C     * FUNCTION EVALUATED IN ONE DIMENSIONAL SIMPSONS RULE    *
C     * FOR NEXT YEARS POSSIBLE DATA POINT.  THE MARGINAL      *
C     * NORMAL CHI-SQUARE CONJUGATE DISTRIBUTION IS USED.      *
C     * COMPUTES NECESSARY PARAMETERS, SUCH AS NEW MEAN        *
C     * AND VARIANCE WITH ADDITION OF THE POINT.               *
C     **********************************************************
      COMMON /SIMP/XM,VAR,N,ID,GAM
      COMMON /AG/X(100),XLOG(100)
      F(Z) = (SQRT(N-1.) * (((N-1.) * VAR) ** ((N-2.)/2.)) * XNUM)
     * / (SQRT(3.1415927 * N) * XDEN * (AN * Z) ** ((N-1.)/2.))
C     INITIALIZES AND UPDATES PARAMETERS
      TOT = 0.
      AN = N
      NN = N - 1
      MM = MOD(NN,2)
      IF (MM) 1,1,2
    1 XNUM = GAMI(NN)
      NN = NN - 1
      XDEN = GAMAR(NN)
      GO TO 3
    2 XNUM = GAMAR(NN)
      NN = NN - 1
      XDEN = GAMI(NN)
C     SAVES OLD VALUES OF THE MEAN AND VARIANCE
    3 XU = XM
      XV = VAR
      XLOG(N) = Y
C     COMPUTES NEW MEAN
      XM = (XM * (N-1.) + XLOG(N)) / N
C     COMPUTES NEW VARIANCE
      VAR = 0.
      DO 110 I = 1,N
  110 VAR = VAR + ((XLOG(I) - XM) * (XLOG(I) - XM)) / N
      ST = SQRT(VAR)
      CALL GSEARCH(ST)
      CALL SEARCH(ST,XTOT)
      YY = 10. ** Y
      PRINT 10,YY
   10 FORMAT(////11X,*NEW DATA POINT WAS*,F12.2)
C     RESETS PARAMETERS
      XVR = VAR
      XM = XU
      VAR = XV
      PROB = F(XVR)
      TOT = XTOT * PROB
      TAK = TOT
      RETURN
      END
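Under one reading of the statement function F above, its factors (SQRT(N-1.), SQRT(3.1415927*N), the GAMI/GAMAR gamma ratio, and the old and new sums of squared deviations) match the standard Student-t predictive density of a new observation under a noninformative prior. The weighting TAK applies can then be sketched as follows; this is an interpretation rather than a transcription, and `math.gamma` replaces the thesis's GAMI/GAMAR routines:

```python
import math

def predictive_density(old_pts, y):
    """Marginal density of a candidate next observation y given old_pts,
    under a normal model with the standard noninformative prior -- the
    role played by the statement function F in FUNCTION TAK.
    Requires at least two prior points."""
    n = len(old_pts) + 1                          # count after adding y
    m_old = sum(old_pts) / (n - 1)
    s_old = sum((x - m_old) ** 2 for x in old_pts)        # old sum of squares
    m_new = (m_old * (n - 1) + y) / n                     # updated mean
    s_new = sum((x - m_new) ** 2 for x in old_pts + [y])  # new sum of squares
    return (math.sqrt((n - 1) / (math.pi * n))
            * math.gamma((n - 1) / 2.0) / math.gamma((n - 2) / 2.0)
            * s_old ** ((n - 2) / 2.0) / s_new ** ((n - 1) / 2.0))

p = predictive_density([1.0, 2.0, 3.0], 2.5)
```

Note that the density depends on the candidate point only through the new sum of squares, which is exactly why TAK can evaluate F at the updated variance XVR instead of at the raw observation.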
WORTH OF DATA STUDY --- RILLITO CREEK NEAR TUCSON ARIZONA
MASTERS THESIS PROBLEM - WILLIAM M DVORANCHIK 1970

PIER COST = 400.00
BRIDGE COST = 150000.00
DISCOUNT = 0.00
DATA BEGINS IN YEAR 1950
DATA POINTS    10
9490.000
9500.000
1630.000
5470.000
7680.000
7710.000
8070.000
2050.000
4500.000
8930.000
LOGARITHMS

3.97726621E+00
3.97772361E+00
3.21218760E+00
3.73798733E+00
3.88536122E+00
3.88705438E+00
3.90687353E+00
3.31175386E+00
3.65321251E+00
3.95085146E+00

THE MEAN AND VARIANCE FOR SAMPLE OF SIZE 10

MEAN     =    3.7500272

VARIANCE =    .0696379
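The printed statistics can be checked directly: the logarithms are base 10, and the variance uses divisor N (not N - 1), matching the DO 110 loop in FUNCTION TAK. A quick check:

```python
import math

# Reproducing the printed sample statistics for the Rillito Creek record
flows = [9490., 9500., 1630., 5470., 7680., 7710., 8070., 2050., 4500., 8930.]
logs = [math.log10(q) for q in flows]            # base-10, as listed above
mean = sum(logs) / len(logs)                     # 3.7500272
var = sum((x - mean) ** 2 for x in logs) / len(logs)   # divisor N: .0696379
```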
DOUBLE INTEGRATION TO FIND THE MINIMUM PILE DEPTH
**************************************************************
PILE DEPTH =  9.80     RISK CALCULATION = 1.7935188E+04
PILE DEPTH = 11.00     RISK CALCULATION = 1.1701579E+04
PILE DEPTH = 12.94     RISK CALCULATION = 8.4857637E+03
PILE DEPTH = 16.08     RISK CALCULATION = 7.4498036E+03
PILE DEPTH = 21.17     RISK CALCULATION = 8.7211287E+03
PILE DEPTH = 18.02     RISK CALCULATION = 7.7811083E+03
PILE DEPTH = 14.88     RISK CALCULATION = 7.5496138E+03
PILE DEPTH = 16.82     RISK CALCULATION = 7.5486817E+03
PILE DEPTH = 15.85     RISK CALCULATION = 7.4872021E+03
**************************************************************
RESULTS FOR BAYES CALCULATIONS
MINIMUM        1.60832816E+01

BAYES RISK     7.44980357E+03
**************************************************************

**************************************************************
SEARCH FOR MINIMUM FOR EACH MU AND SIGMA SQUARED
EXPECTED OPPORTUNITY LOSS = 2.93649538E+03
**************************************************************
WORTH OF DATA STUDY --- RILLITO CREEK NEAR TUCSON ARIZONA
MASTERS THESIS PROBLEM - WILLIAM m DVORANCHIK 1970
**************************************************************
**************************************************************
**************************************************************
FINAL RESULTS OF STUDY
EOL OF ORIGINAL DATA = 2.936495E+03
EOL OVER ALL NEW POSSIBLE POINTS = 2.776527E+03
WORTH OF ONE MORE DATA POINT = 1.5796835E+02
**************************************************************
**************************************************************
**************************************************************
REFERENCES
CDC 6000 Series Computer Systems Statistical Subroutines, Control Data Corporation, St. Paul, 1966.
CRC Handbook of Tables for Probability and Statistics, 2nd ed., The Chemical Rubber Company, Cleveland, 1968.
Davis, D. R., Department of Systems Engineering, University of Arizona,Doctoral dissertation in preparation, 1971.
Herfindahl, O., Natural Resources Information for Economic Development, Johns Hopkins Press, Baltimore, 1969.
Laursen, E. M., Bridge Design and Scour Analysis, ASCE Annual Meeting, American Society of Civil Engineers, Louisville, Kentucky, April 1969.
Myers, B. L. and A. J. Melcher, On the Choice of Risk Levels in Managerial Decision Making, Management Science, Vol. 16, No. 2 (October 1969).
Raiffa, H., Decision Analysis, Addison-Wesley Publishing Co., Inc.,Reading, Massachusetts, 1968.
Raiffa, H. and R. Schlaifer, Applied Statistical Decision Theory, Graduate School of Business Administration, Harvard University, Boston, 1961.
The United States Army Corps of Engineers, Preliminary Comparisons of Analysis, Internal Working Paper, Presentation to the Bureau of Standards, Washington, D. C., 1967.
Wilde, D. J. and C. S. Beightler, Foundations of Optimization, Prentice-Hall, Englewood Cliffs, New Jersey, 1967.