
Water for the Future: Hydrology in Perspective (Proceedings of the Rome Symposium, April 1987). IAHS Publ. no. 164, 1987.

Some recent advances in the application of the principle of maximum entropy (POME) in hydrology

VIJAY P. SINGH Department of Civil Engineering and Louisiana Water Resources Research Institute, Louisiana State University, Baton Rouge, Louisiana 70803, USA A.K. RAJAGOPAL Electronics Technology Division, Naval Research Laboratory, Washington, DC 20375-5000, USA

ABSTRACT The principle of maximum entropy (POME), developed in the communication and information sciences, has recently been found to have wide-ranging applications in hydrology and water resources. This paper surveys some of these applications with particular reference to (1) derivation of frequency distributions, (2) parameter estimation, (3) evaluation of data acquisition systems, (4) derivation of functional relationships, and (5) assessment of uncertainty. It is shown that POME offers a unified approach to the derivation of a number of frequency distributions used in hydrological analyses. Because hydrological characteristics can be incorporated in the distributions, the approach can be applied to areas having limited data. The approach also yields a simple but useful method of parameter estimation, which is as good as the method of maximum likelihood estimation. Since POME can be applied to derive functional relationships between two or more variables, it has the advantage of enabling multivariate stochastic analysis. Entropy measures the uncertainty of a mathematical model of the hydrological system, and is thus applicable to choosing between models and to designing hydrological networks for data collection.

RESUME The principle of maximum entropy (POME), developed in the information and communication sciences, has recently proved applicable over a wide range of problems in hydrology and water resources assessment. This article presents some of its applications, concerning in particular: (1) the derivation of frequency distributions; (2) parameter estimation; (3) the evaluation of data acquisition systems; (4) the derivation of functional relationships; (5) the assessment of uncertainty. It is shown that POME offers a unified approach to the derivation of a number of frequency distributions used in hydrological analyses. Since hydrological characteristics can be incorporated in the distributions, this approach can be used in regions with limited data banks. It also provides a simple but effective method of parameter estimation that is as reliable as the method of maximum likelihood estimation. Since POME can be used to derive functional relationships between two or more variables, it has the advantage of permitting multivariate stochastic analysis. Entropy measures the uncertainty of the mathematical model of a hydrological system; it is therefore applicable to choosing between models and to planning hydrological networks for data acquisition.

INTRODUCTION

A multitude of univariate frequency distributions are employed in hydrological analyses. It is, however, not clear whether there is a unified approach for deriving any desired distribution. Such an approach may provide additional insight into the meaning of the distribution parameters, may show connections between different distributions, and may lead to an alternative way of assessing goodness of fit to experimental data. The concept of entropy provides such a unified approach, and it can be extended to the case of multivariate distributions.

Entropy is a measure of the degree of uncertainty of random hydrological processes, and it thus indirectly reflects the information content of space-time measurements of these processes. Consequently, the concept can be used to delineate optimum sampling intervals for data acquisition systems and can be extended to the design of hydrological networks. In a different vein, it can be used to assess the uncertainty associated with different models of a given hydrological system.

Quite often, functional relations between two or more variables are needed. For example, runoff is related to rainfall, sediment yield is related to runoff, water quality is related to runoff, etc. These variables may or may not be deterministically related. If rainfall is stochastic then runoff is stochastic, even though the relation between the two may be deterministic. Entropy provides a means of deriving stochastic relations, taking into account both stochastic and deterministic characteristics.

The objective of this paper is to survey the various applications of entropy in hydrology and suggest other areas for its potential applications.

A SHORT HISTORICAL PERSPECTIVE

In a series of landmark contributions, Shannon (1948a,b) developed a mathematical theory of entropy. Nearly a decade later, Jaynes (1957a,b) formulated the principle of maximum entropy (POME), and later (Jaynes, 1961, 1982) applied it in statistical physics. The works of Shannon and Jaynes opened up this new area of research and provided the major impetus for applications of entropy, and especially POME, to various areas of science and technology.

Leopold & Langbein (1962) were perhaps the first to apply the concept of entropy in geomorphology and landscape evolution; by defining the energy distribution in a river, they derived equations for the longitudinal profiles of its most probable state. Scheidegger (1967) used entropy to develop a thermodynamic analogy for river meandering. Yang (1971) applied it to stream morphology, deriving the laws of average stream fall and of the least rate of energy expenditure that govern the formation of natural streams. Davy & Davies (1979) examined the thermodynamic basis of this concept as applied to fluvial geomorphology, and concluded that the use of entropy in the analysis of stream behaviour and sediment transport was of dubious validity. Combining entropy with Horton's and Scheidegger's stream ordering systems, Sharp (1970) used it to determine optimum sampling methods for pollution in a river. Paulson & Garrison (1973) applied entropy to measure the areal concentration of water-oriented industry in the Tennessee Valley region.

Sonuga (1972, 1976) used the principle of maximum entropy in flood frequency analysis and in rainfall-runoff modelling. Jowitt (1979) discussed the properties and problems associated with this concept in parameter estimation for the extreme value type I distribution. Singh & Jain (1985) and Arora & Singh (1986) compared this method of parameter estimation with six other methods and found it comparable to the method of maximum likelihood estimation and better than the five remaining methods. The same finding held for the gamma and Pearson type III distributions (Singh & Singh, 1985a,b), and the method was also comparable for the two-component extreme value distribution (Fiorentino et al., 1986). Singh et al. (1985, 1986) extended POME to the derivation of a number of frequency distributions, their entropies, and the estimation of their parameters. Krstanovic & Singh (1986) have extended the concept to multivariate frequency analysis.

Amorocho & Espildora (1973) applied entropy to derive an objective criterion, based on marginal entropy, conditional entropy and transinformation, for assessing the uncertainty of the Stanford Watershed Model in simulating streamflow from a basin in California for which historical records were available. Their results showed both the value and the limitations of this concept in assessing model performance. Along similar lines, Harmancioglu (1981, 1984) used entropy to determine the optimum sampling intervals for NH4+ concentrations (in ppm) in the water at Choisy-le-Roi, Paris, and to analyse daily observations of runoff at two stream gauging stations in the Esencay basin, Turkey. Later, Harmancioglu et al. (1985) extended her analysis to measure the transfer of information between variables. Panu & Unny (1977) used entropy in feature extraction and developed a method of pattern recognition in hydrology.

Clearly, the history of entropy in hydrology is rather short. The concept is undergoing a period of rapid development. The results obtained so far are promising. Rajagopal et al. (1986) have presented new perspectives for potential applications of this concept in water resources research.


DEFINITIONS

Entropy, first introduced by Shannon (1948a) and later successfully extended by Jaynes (1957a), is defined as the expectation of information or, conversely, as a measure of uncertainty. If S is a system of events E_1, E_2, ..., E_n, and p(E_k) = p_k is the probability of the k-th event occurring, then the entropy of the system S is

H(S) = -\sum_{k=1}^{n} p_k \ln p_k, \qquad \sum_{k=1}^{n} p_k = 1    (1)
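As a simple numerical illustration (not part of the original paper), the discrete entropy in (1) can be evaluated directly from a probability vector; the following minimal Python sketch assumes probabilities that sum to one.

import numpy as np

def entropy(p):
    """Shannon entropy H(S) = -sum p_k ln p_k of a discrete distribution, eq. (1)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                 # terms with p_k = 0 contribute nothing to the sum
    return -np.sum(p * np.log(p))

print(entropy([0.5, 0.5]))       # ln 2 ~ 0.693, maximum uncertainty for two events
print(entropy([0.9, 0.1]))       # ~0.325, a more certain system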

This discrete representation is extended to the continuous case. If x ∈ (-∞, ∞) and y ∈ (-∞, ∞) are two random variables, then the marginal entropy H(x) and the joint entropy H(x,y) can be defined as

H(x) = -\int_{-\infty}^{\infty} f(x) \ln f(x)\, dx; \qquad \int_{-\infty}^{\infty} f(x)\, dx = 1    (2)

H(x,y) = -\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x,y) \ln f(x,y)\, dx\, dy    (3)

where f(x) is the probability density function (pdf) of the random variable x, and f(x,y) is the joint pdf of x and y. Note that f(x) and f(y) are not, in general, the same function. If y is conditioned on x, then the conditional entropy is defined as

H(y|x) = -\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x,y) \ln \left[ \frac{f(x,y)}{f(x)} \right] dx\, dy    (4a)

H(y|x) = -\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x,y) \ln f(y|x)\, dx\, dy    (4b)

because

f(y|x) = \frac{f(x,y)}{f(x)}

(4) can be related to joint entropy in (3) and marginal entropy in (2) as

H(y|x) = H(x,y) - H(x) (5)

Also

H(y|x) \le H(y)    (6)

If x and y are independent then

H(x,y) = H(x) + H(y) (7)

Therefore, (3) will be bounded by (7),

H(x,y) \le H(x) + H(y)    (8)

In other words, the joint entropy of dependent x and y will be less than the joint entropy of independent x and y. The difference between these entropies defines transinformation T(x,y) as


T(x,y) = H(x) + H(y) - H(x,y) (9a)

or

T(x,y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x,y) \ln \left[ \frac{f(x,y)}{f(x) f(y)} \right] dx\, dy    (9b)

which is none other than the Kullback-Leibler information of f(x,y) with respect to the product of the marginals f(x) and f(y), and is always greater than or equal to zero by virtue of Jensen's inequality.

Transinformation represents the amount of information common to the stochastically dependent x and y. In other words, stochastic dependence reduces the uncertainty by this amount. For independent x and y, T(x,y) = 0. Using (5), (9) can also be expressed as

T(x,y) = H(y) - H(y|x) (10)

Cast differently,

H(y) = H(y|x) + T(x,y) (11)

(11) decomposes the marginal entropy of y into the uncertainty remaining in y when x is known, H(y|x), and the uncertainty removed by that knowledge, T(x,y).
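These identities are easy to verify numerically for a discrete joint distribution; the following Python sketch (an illustration with hypothetical numbers, not from the paper) computes H(x), H(y), H(x,y) and T(x,y) from a joint probability table.

import numpy as np

# Hypothetical 2x2 joint probability table p(x, y); rows index x, columns y.
pxy = np.array([[0.30, 0.10],
                [0.15, 0.45]])

px, py = pxy.sum(axis=1), pxy.sum(axis=0)           # marginal distributions
H = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))  # discrete entropy, eq. (1)

Hx, Hy, Hxy = H(px), H(py), H(pxy.ravel())
T = Hx + Hy - Hxy                                   # transinformation, eq. (9a)
print(Hxy <= Hx + Hy)                               # eq. (8) holds: True
print(np.isclose(Hy - T, Hxy - Hx))                 # eqs. (5) and (10) agree: True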

Another important characteristic of entropy is that it allows assignment of probabilities (derivation of the pdf) for a given system of events based on prior knowledge. Jaynes (1957b) formalized this characteristic into what is termed the principle of maximum entropy (POME). According to POME, the derived pdf is minimally prejudiced and makes maximum use of the given information.

DERIVATION OF UNIVARIATE FREQUENCY DISTRIBUTIONS

Univariate distributions are derived as a direct consequence of POME. Given m linearly independent constraints C_i as

C_i = \int_{-\infty}^{\infty} g_i(x) f(x)\, dx, \qquad i = 1, 2, \ldots, m    (12)

where the g_i(x) are some functions of x whose averages over f(x) are specified, the maximum of H(x) in (2) subject to (12) is given by

f(x) = \exp \left[ -\lambda_0 - \sum_{i=1}^{m} \lambda_i g_i(x) \right]    (13)

where \lambda_i, i = 0, 1, \ldots, m, are the Lagrange multipliers, which are determined in terms of (12). Inserting (13) in the definition of total probability,

\int_{-\infty}^{\infty} \exp \left[ -\lambda_0 - \sum_{i=1}^{m} \lambda_i g_i(x) \right] dx = 1

which leads to

\lambda_0 = \ln \left[ \int_{-\infty}^{\infty} \exp \left\{ -\sum_{i=1}^{m} \lambda_i g_i(x) \right\} dx \right]    (14)


The Lagrange multipliers are related to the C_i as

\frac{\partial \lambda_0}{\partial \lambda_i} = -C_i    (15)

and also

\frac{\partial^2 \lambda_0}{\partial \lambda_i^2} = \mathrm{var}[g_i(x)]; \qquad \frac{\partial^2 \lambda_0}{\partial \lambda_i \partial \lambda_j} = \mathrm{cov}[g_i(x), g_j(x)], \quad i \ne j    (16)

With the Lagrange multipliers estimated from (15) and (16), the pdf in (13) is uniquely defined. Clearly, this procedure can be applied to derive any probability density function for which appropriate constraints can be found. The hydrological meaning of the constraints is, for all but a few distributions, not yet clear and needs further research. The procedure needs modification, however, if the distribution is expressed in inverse form, as is, for example, the Wakeby distribution. Following this procedure, Singh et al. (1985, 1986) have derived a number of distributions used in hydrology.
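As a hedged illustration of this machinery (not part of the original paper), take the single constraint C_1 = E[x] = m on x ∈ (0, ∞) with g_1(x) = x; POME then yields the exponential distribution with \lambda_1 = 1/m. The Python sketch below recovers this numerically from (14) and (15); all names and values are illustrative.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

m = 2.5  # prescribed mean, the constraint C1

def lambda0(lam1):
    # eq. (14): lambda0 = ln of the normalizing integral, with g1(x) = x
    val, _ = quad(lambda x: np.exp(-lam1 * x), 0, np.inf)
    return np.log(val)

def mean_constraint(lam1):
    # eq. (15) in moment form: E[x] under f(x) = exp(-lambda0 - lam1*x), minus m
    z = np.exp(lambda0(lam1))
    num, _ = quad(lambda x: x * np.exp(-lam1 * x), 0, np.inf)
    return num / z - m

lam1 = brentq(mean_constraint, 1e-3, 10.0)  # solve for lambda_1
print(lam1, 1.0 / m)                        # both ~0.4: the exponential pdf (1/m) exp(-x/m)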

DERIVATION OF MULTIVARIATE FREQUENCY DISTRIBUTIONS

Let the random variables be x_i, i = 1, 2, \ldots, n. The problem is to derive the pdf f(x_1, x_2, \ldots, x_n) which maximizes the joint entropy,

H(x_1, x_2, \ldots, x_n) = -\int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f(x_1, \ldots, x_n) \ln f(x_1, \ldots, x_n)\, dx_1 \cdots dx_n    (17)

subject to some specified constraints. Essentially, solutions of the form in (13) are obtained by this procedure when generalized to multivariate distributions. For example, if the constraints are:

\mu_i = \int_{-\infty}^{\infty} x_i f(x_i)\, dx_i    (18a)

C_{ij} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x_i, x_j)\, x_i x_j\, dx_i\, dx_j, \qquad j = i + 1; \; i = 1, 2, \ldots, n - 1    (18b)

Application of POME leads to the multivariate normal distribution,

f(X) = \frac{1}{(2\pi)^{n/2} |C|^{0.5}} \exp \left[ -\frac{1}{2} (X - \mu)\, C^{-1} (X - \mu)^{*} \right]    (19)

where X is the vector of the n variables, C is the covariance matrix, |C| is the determinant of C, * denotes the transpose, and \mu is the vector of means. Likewise, other multivariate distributions can be derived. Substitution of (19) in (17) leads to

H(X) = \frac{n}{2} \ln 2\pi + \frac{1}{2} \ln |C| + \frac{n}{2}    (20)
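Equation (20) says the entropy of a multivariate normal depends only on the dimension n and the determinant of C; the short Python check below (with an illustrative covariance matrix, not from the paper) evaluates it directly.

import numpy as np

# Illustrative positive-definite covariance matrix for n = 3 variables
C = np.array([[2.0, 0.5, 0.3],
              [0.5, 1.0, 0.2],
              [0.3, 0.2, 1.5]])
n = C.shape[0]

# eq. (20): H(X) = (n/2) ln 2*pi + (1/2) ln |C| + n/2
H = 0.5 * n * np.log(2 * np.pi) + 0.5 * np.log(np.linalg.det(C)) + 0.5 * n
print(H)   # a larger |C| (more spread) gives a larger entropy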

In a different vein, many multivariate distributions can be derived with given marginals (Finch & Groblicki, 1984). If x and y are two random variables with pdf's f(x) and g(y), and cumulative distribution functions (cdf's) F(x) and G(y), then the joint pdf p(x,y) can be written as

p(x,y) = q\{F(x), G(y)\} f(x) g(y)    (21)

where q(u,v) is a joint pdf on the unit square, u = F(x), v = G(y), and can be expressed as

q(u,v) = 1 + r(u,v)    (22)

where r is any function on the unit square that is 0-marginal and satisfies r(u,v) \ge -1. It is 0-marginal because

\int_0^1 r(u,v)\, du = 0, \qquad \int_0^1 r(u,v)\, dv = 0    (23)

Thus

p(x,y) = f(x) g(y) [1 + r\{F(x), G(y)\}]    (24)

This formulation can be extended to n \ge 2 random variables x_k, k = 1, 2, \ldots, n, with respective pdf's f_k(x_k) and cdf's F_k(x_k). The multivariate pdf then is

p(x_1, x_2, \ldots, x_n) = f_1(x_1) f_2(x_2) \cdots f_n(x_n) [1 + r\{F_1(x_1), F_2(x_2), \ldots, F_n(x_n)\}]    (25)

where r(u_1, u_2, \ldots, u_n) is 0-marginal and bounded below by -1 over the region 0 \le u_k \le 1, k = 1, 2, \ldots, n.

Since the function r(u_1, u_2, \ldots, u_n) is arbitrary, there can be many multivariate distributions with the same marginals. To choose from amongst these distributions the one that is most appropriate for a given hydrological problem requires an objective criterion. To this end the Kullback-Leibler information provides a clue. Applying (9b) to this construction,

T(x,y) = \int_0^1 \int_0^1 q(u,v) \ln q(u,v)\, du\, dv    (26)

By minimizing T(x,y), the function q(u,v) and consequently r(u,v) can be specified.
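A classical choice of r satisfying these conditions (an illustration, not one given in the paper) is the Farlie-Gumbel-Morgenstern form r(u,v) = a(1 - 2u)(1 - 2v) with |a| \le 1, which is 0-marginal and bounded below by -1. The Python sketch below builds the bivariate pdf (24) with standard normal marginals and checks that it integrates to one.

import numpy as np
from scipy.stats import norm

a = 0.7  # dependence parameter; |a| <= 1 keeps 1 + r nonnegative

def r(u, v):
    # Farlie-Gumbel-Morgenstern perturbation: 0-marginal and r >= -1
    return a * (1 - 2 * u) * (1 - 2 * v)

def p(x, y):
    # eq. (24) with f = g = the standard normal pdf and F = G = its cdf
    return norm.pdf(x) * norm.pdf(y) * (1 + r(norm.cdf(x), norm.cdf(y)))

# crude check on a grid that p(x,y) integrates to ~1
xs = np.linspace(-6, 6, 601)
X, Y = np.meshgrid(xs, xs)
print(p(X, Y).sum() * (xs[1] - xs[0]) ** 2)   # ~1.0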

DERIVATION OF FUNCTIONAL RELATIONSHIPS

POME can be applied to derive probabilistic relationships between random variables. Consider, for simplicity, two random variables x and y. By specifying appropriate constraints on x and y separately and together, H(x,y) can be maximized as is also done for multivariate distributions. Then by applying Bayes's theorem, the conditional pdf of y given x can be derived. For example, if the constraints are

S_x^2 = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x,y)\, x^2\, dx\, dy    (26)

S_y^2 = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x,y)\, y^2\, dx\, dy    (27)

S_{xy} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x,y)\, xy\, dx\, dy    (28)

1 = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x,y)\, dx\, dy    (29)

then maximization of (3) produces

f(x,y) = \frac{1}{2\pi (S_x^2 S_y^2 - S_{xy}^2)^{0.5}} \exp \left[ -\frac{S_y^2 x^2 - 2 S_{xy} xy + S_x^2 y^2}{2 (S_x^2 S_y^2 - S_{xy}^2)} \right]    (30)

and consequently

f(y|x) = S_x \left[ 2\pi (S_x^2 S_y^2 - S_{xy}^2) \right]^{-0.5} \exp \left[ -\frac{S_x^2 y^2 - 2 S_{xy} xy + (S_{xy}^2 / S_x^2)\, x^2}{2 (S_x^2 S_y^2 - S_{xy}^2)} \right]    (31)

Singh & Krstanovic (1986) used this procedure to model sediment yield from upland watersheds, and Sonuga (1976) used it to model the rainfall-runoff relationship. Recently, the authors have extended this analysis to water quality modelling.
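Equation (31) is a normal pdf in y whose conditional mean is (S_xy / S_x^2) x and whose conditional variance is (S_x^2 S_y^2 - S_xy^2) / S_x^2, i.e. a linear stochastic relation between the two variables. The Python sketch below (with illustrative synthetic data, not from the paper) extracts that relation from sample second moments.

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=2000)                        # stand-in "rainfall" signal
y = 0.8 * x + rng.normal(scale=0.5, size=2000)   # stand-in "runoff" response

Sx2, Sy2 = np.mean(x ** 2), np.mean(y ** 2)      # second moments, eqs. (26)-(27)
Sxy = np.mean(x * y)                             # cross moment, eq. (28)

slope = Sxy / Sx2                                # conditional mean of y given x
cond_var = (Sx2 * Sy2 - Sxy ** 2) / Sx2          # scatter about that relation
print(slope, cond_var)                           # ~0.8 and ~0.25, as constructed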

PARAMETER ESTIMATION

The discussion on the derivation of univariate frequency distributions indicates that the Lagrange multipliers are related to the constraints on one hand and to the distribution parameters on the other. These two sets of relations are used to eliminate the Lagrange multipliers and develop, in turn, equations for estimating distribution parameters in terms of constraints. Singh et al. (1985, 1986) have derived these equations for a number of distributions.

It may be instructive to compare this POME method of parameter estimation with the method of maximum likelihood estimation (MLEM). Consider a general pdf f(x;\theta), where \theta represents a family of parameters \theta_i, i = 1, 2, \ldots, k. In the MLEM the likelihood function L, or its logarithm, is constructed,

L = \prod_{i=1}^{N} f(x_i;\theta); \qquad \ln L = \sum_{i=1}^{N} \ln f(x_i;\theta)    (32)

in which N is the sample size. Maximization of L or ln L yields as many equations as there are parameters, which are then solved to produce the desired parameter estimates. Multiplying (32) by -1/N,

-\frac{1}{N} \ln L = -\frac{1}{N} \sum_{i=1}^{N} \ln f(x_i;\theta) = -\overline{\ln f(x;\theta)}    (33)

in which the bar denotes the sample average. From (1) and (2), it is clear that

H(x) = -E[\ln f(x;\theta)]    (34)


Therefore,

H(x) = -\frac{1}{N} \ln L    (35)

Implicit in this equality is that the POME method involves population expectations, whereas MLEM involves sample averages. This result is valid only if f(x;6) belongs to the exponential family.
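The correspondence in (35) is easy to check numerically for a member of the exponential family; the Python sketch below (an illustration under a normality assumption) compares -(1/N) ln L at the fitted parameters with the analytic entropy of the fitted normal.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=5000)    # synthetic sample

mu, sig = x.mean(), x.std()                      # MLEM estimates for the normal family
logf = -0.5 * np.log(2 * np.pi * sig ** 2) - (x - mu) ** 2 / (2 * sig ** 2)

lhs = -logf.mean()                               # -(1/N) ln L, eq. (33)
rhs = 0.5 * np.log(2 * np.pi * np.e * sig ** 2)  # entropy of the fitted normal
print(lhs, rhs)                                  # equal, illustrating eq. (35)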

EVALUATION OF DATA ACQUISITION SYSTEMS

Hydrological data are collected in both space and time. Optimum sampling intervals are required to evaluate the efficiency of data acquisition systems. To this end, the concept of entropy can be used meaningfully. Let us consider a hydrological variable x measured at an interval of Δt, resulting in x_i, i = 0, 1, 2, \ldots, N, with N being the sample size. The marginal entropy of the sample is computed with an assumed or known distribution of x. The sample is then represented by the sub-series x_{i-k}, k = 0, 1, \ldots, m, where k is the lag and m « N. The problem is to determine the uncertainty that remains in x_i when the values of x_{i-k}, k = 1, 2, \ldots, m, are known. This calls for computing the conditional entropy H(x_i | x_{i-1}, x_{i-2}, \ldots, x_{i-k}) using (5). The analysis can be repeated for increasing time intervals. Harmancioglu (1984) applied it to NH4+ concentrations (in ppm) in water observed at 40 minute intervals at Choisy-le-Roi, Paris. The process of NH4+ concentrations was assumed to be lognormally distributed. Entropy values (in napiers) for one of the four samples analysed were given for different time intervals as:

Time interval   Marginal entropy   Conditional entropies H(x_i | x_{i-1}, ..., x_{i-k})
(minutes)       H(x_i)             k = 1    k = 2    k = 3    k = 4    k = 5

 40             0.952              0.000    0.000    0.000    0.000    0.000
 80             0.955              0.000    0.000    0.000    0.000    0.000
120             0.949              0.168    0.000    0.000    0.000    0.000
160             0.962              0.380    0.161    0.158    0.145    0.000

The marginal entropies of the sample are about the same for the different time intervals. For data that are 40 and 80 minutes apart, the uncertainty H(x_i) of the process is reduced completely at the first lag, indicating a strong first order serial dependence. Consequently, the conditional entropies vanish, indicating no uncertainty in the x_i values given the x_{i-1} values. In terms of the sampling interval, this says that measuring NH4+ concentrations every 40 or 80 minutes brings no new information. An increase in the time interval to 120 and 160 minutes increases the conditional entropy. For example, for 120 minutes, H(x_i | x_{i-1}) = 0.168, meaning that approximately 17% of the uncertainty remains in x_i given x_{i-1}, while the amount of repeated information is 83%. Thus, if the observations were made at 120 minute intervals, only a 17% loss of information would be risked, but this rises to 38% for 160 minutes. Optimum sampling intervals can thus be estimated using entropy.
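Under a normality assumption the lag-k conditional entropy has the closed form H(x_i | x_{i-k}) = H(x_i) + (1/2) ln(1 - rho_k^2), where rho_k is the lag-k autocorrelation, so candidate sampling intervals can be screened directly. The Python sketch below does this for a hypothetical AR(1) water-quality series (illustrative only, not the Harmancioglu data).

import numpy as np

rng = np.random.default_rng(2)
n, phi = 5000, 0.95
x = np.empty(n)
x[0] = rng.normal()
for i in range(1, n):                        # hypothetical AR(1) series, 40 min steps
    x[i] = phi * x[i - 1] + rng.normal()

Hmarg = 0.5 * np.log(2 * np.pi * np.e * x.var())   # marginal entropy, normal case
for lag in (1, 2, 3, 4):                           # lag in multiples of 40 minutes
    rho = np.corrcoef(x[:-lag], x[lag:])[0, 1]
    Hcond = Hmarg + 0.5 * np.log(1 - rho ** 2)     # normal-pair conditional entropy, cf. eq. (5)
    print(lag, round(Hcond, 3), f"{Hcond / Hmarg:.0%} uncertainty left")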

ANALYSIS OF UNCERTAINTY

Analogous to the preceding section, the entropy concept can be used to assess the uncertainty in hydrological systems and their models, and consequently to judge the goodness of a model. Let us consider a model for monthly streamflow simulation, with monthly streamflow data available for a sufficiently long period of time. For each month, the observed streamflow is denoted by x and the model-simulated streamflow by y. For assumed or known distributions of x and y, their marginal entropies H(x) and H(y) can be calculated using (2). Likewise, the conditional entropy H(x|y) and the joint entropy H(x,y) are computed using (4) and (3). Then T(x,y) is calculated from (9) or (10). These calculations can be carried out for a number of models, and the best monthly streamflow model may be selected on the basis of T(x,y): the higher the value of T(x,y), the better the model. The conditional entropy, together with the marginal entropy, can be used to assess the reduction in the uncertainty of the model results.
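Under a joint normality assumption, T(x,y) = -(1/2) ln(1 - rho^2) for observed x and simulated y, so ranking models by transinformation reduces to ranking them by correlation. A minimal Python sketch (with hypothetical models, not from the paper) follows.

import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=1200)                    # stand-in "observed" monthly flows

# Two hypothetical models: simulations with different error levels
models = {"model A": x + rng.normal(scale=0.4, size=x.size),
          "model B": x + rng.normal(scale=1.0, size=x.size)}

for name, y in models.items():
    rho = np.corrcoef(x, y)[0, 1]
    T = -0.5 * np.log(1 - rho ** 2)          # transinformation, normal case
    print(name, round(T, 3))                 # higher T(x,y) = better model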

CONCLUDING REMARKS

Entropy and the principle of maximum entropy are useful tools for analysis and synthesis of a wide class of hydrological systems. For the future it is hoped that they will receive greater attention from the water resources community and that more research will be undertaken on understanding their limitations and potential.

ACKNOWLEDGEMENT This study was supported in part by funds provided by the US Department of Interior, Geological Survey, through the Louisiana Water Resources Research Institute under the project, "A Multivariate Stochastic Analysis of Flood Magnitude, Duration and Volume". Dr Rajagopal was supported in part by ONR Contract N000-1486-WR-24-016.

REFERENCES

Amorocho, J. & Espildora, B. (1973) Entropy in the assessment of uncertainty in hydrologic systems and models. Wat. Resour. Res. 9(6), 1511-1522.

Arora, K. & Singh, V.P. (1986) An evaluation of seven methods for estimating parameters of the EV1 distribution. Paper presented at the International Symposium on Flood Frequency & Risk Analysis, Louisiana State Univ., Baton Rouge, Louisiana, USA.

Davy, B.W. & Davies, T.R.H. (1979) Entropy concepts in fluvial geomorphology: a reevaluation. Wat. Resour. Res. 15(1), 103-106.

Finch, P.D. & Groblicki, P. (1984) Bivariate probability densities with given margins. Foundations of Physics 14(6), 549-552.

Fiorentino, M., Singh, V.P. & Arora, K. (1986) On the two-component extreme value distribution and its point and regional estimators. Paper presented at the International Symposium on Flood Frequency & Risk Analysis, Louisiana State Univ., Baton Rouge, Louisiana, USA.

Harmancioglu, N. (1981) Measuring the information content of hydrological processes by the entropy concept. J. Civ. Engng. Faculty, Ege Univ., Special Issue: Centennial of Ataturk's Birth, Izmir, pp. 13-40.

Harmancioglu, N. (1984) Entropy concept as used in determination of optimal sampling intervals. Proc. HYDROSOFT '84 - International Conf. Hydraulic Engrg Software (Portoroz, Yugoslavia), pp 6-99 to 6-110.

Harmancioglu, N., Yevjevich, V. & Obeysekera, J.T.B. (1985) Measures of information transfer between variables. Paper presented at Fourth International Hydrology Symp. (Colorado State Univ., Fort Collins, Colorado).

Jaynes, E.T. (1957a) Information theory and statistical mechanics, I. Phys. Rev. 106, 620-630.

Jaynes, E.T. (1957b) Information theory and statistical mechanics, II. Phys. Rev. 108, 171-190.

Jaynes, E.T. (1961) Probability Theory in Science and Engineering. McGraw-Hill, New York.

Jaynes, E.T. (1982) On the rationale of maximum-entropy methods. Proc. IEEE 70(9), 939-952.

Jowitt, P.W. (1979) The extreme-value type-I distribution and the principle of maximum entropy. J. Hydrol. 42, 23-38.

Krstanovic, P.F. & Singh, V.P. (1986) A multivariate stochastic flood analysis using entropy. Paper presented at the International Symp. on Flood Frequency & Risk Analysis, Louisiana State Univ., Baton Rouge, Louisiana, USA.

Leopold, L.B. & Langbein, W.B. (1962) The concept of entropy in landscape evolution. USGS Prof. Pap. 500-A, A1-A20.

Panu, U.S. & Unny, T.E. (1977) Entropy concept in feature extraction and hydrologic time series analysis. Proc. Third International Hydrol. Symp. (Colorado State Univ., Fort Collins, Colorado).

Paulson, A.S. & Garrison, C.B. (1973) Entropy as a measure of the areal concentration of water-oriented industry. Wat. Resour. Res. 9(2), 263-269.

Rajagopal, A.K., Teitler, S. & Singh, V.P. (1986) Some new perspectives on maximum entropy techniques in water resources research. Paper presented at the International Symposium on Flood Frequency & Risk Analysis, Louisiana State Univ., Baton Rouge, Louisiana, USA.

Scheidegger, A.E. (1967) A thermodynamic analogy for meander systems. Wat. Resour. Res. 3(4), 1041-1046.

Shannon, C.E. (1948a) A mathematical theory of communication, I and II. Bell System Tech. J. 27, 379-423.

Shannon, C.E. (1948b) A mathematical theory of communication, III and IV. Bell System Tech. J. 27, 623-656.

Sharp, W.E. (1970) Stream orders as a measure of sample source uncertainty. Wat. Resour. Res. 6(3), 919-926.


Singh, V.P. & Krstanovic, P.F. (1986) A stochastic model for sediment yield using the principle of maximum entropy. Wat. Resour. Res., under review.

Singh, V.P. & Jain, D. (1985) Comparing methods of parameter estimation for EV1 distribution for flood frequency analysis. Paper presented at the Vth World Congress on Water Resources (Brussels, Belgium).

Singh, V.P., Rajagopal, A.K. & Singh, K. (1986) Derivation of some frequency distributions using the principle of maximum entropy (POME). Adv. Wat. Resour. 9(2), 91-106.

Singh, V.P. & Singh, K. (1985a) Derivation of the gamma distribution by using the principle of maximum entropy. Wat. Resour. Bull. 21(6), 941-962.

Singh, V.P. & Singh, K. (1985b) Derivation of the Pearson type (PT) III distribution by using the principle of maximum entropy (POME). J. Hydrol. 80, 197-214.

Singh, V.P., Singh, K. & Rajagopal, A.K. (1985) Application of the principle of maximum entropy (POME) to hydrologie frequency analysis. Completion Report 06, Louisiana Water Resources Research Institute, Louisiana State Univ., Baton Rouge, Louisiana, USA.

Sonuga, J.O. (1972) Principle of maximum entropy in hydrological frequency analysis. J. Hydrol. 17, 177-191.

Sonuga, J.O. (1976) Entropy principle applied to rainfall-runoff process. J. Hydrol. 30, 81-94.

Yang, C.T. (1971) Potential energy and stream morphology. Wat. Resour. Res. 7(2), 311-322.