Philosophical Basis - Rev1 (Submitted)

download Philosophical Basis - Rev1 (Submitted)

of 16

description

Yup.

Transcript of Philosophical Basis - Rev1 (Submitted)

  • 1

    APhilosophicalBasisforHydrologicalUncertainty1

    1NASA GSFC, Hydrological Sciences Lab; 8800 Greenbelt Rd, Greenbelt, MD 20771; Code 617, Bldg. 33 22Science Systems International Corporation; 10210 Greenbelt Rd, Lanham MD 20706 33University of Maryland, Earth Science Interdisciplinary Center; 5825 University Research Ct #4001, College Park, MD 20704 44University of Arizona, Department of Hydrology and Water Resources; 1133 James E. Rogers Way, Tucson, AZ 85721 55National Center for Atmospheric Research, Research Applications Laboratory; 3450 Mitchell Lane, Boulder, CO 80301 66University of British Columbia, Department of Civil Engineering; 6250 Applied Science Lane, Vancouver, BC V6T 1Z4, Canada 7Keywords: Uncertainty, Bayesian, Epistemic, Aleatory, Information 8__________________________ 9Abstract: Uncertainty is an epistemological concept in the sense that any meaningful understanding of 10uncertainty requires a theory of knowledge. Therefore, uncertainty resulting from scientific endeavors can only 11be properly understood in the context of a well-defined philosophy of science. Our main message here is that 12much of the controversy about uncertainty in hydrology has missed the point somewhat because the discussion 13has lacked grounding in and reference to essential foundational concepts. As an example, we explore the 14current debate about the appropriate role of probability theory for hydrological uncertainty quantification. Our 15main messages are: (1) that apparent limitations of probability theory are not actually consequences of that 16theory at all, but rather of deeper underlying epistemological issues, and (2) that any questions about the 17appropriateness of probability theory are better framed as questions about our preferred philosophy of science 18and/or scientific method. Our purpose here is to discuss how hydrologists might ask more meaningful 19questions about uncertainty. 20__________________________ 211. Motivation 22

    Indeed, concepts such as hazard and uncertainty as well as safety, reliability and credibility 23are not defined until a given scientific method is chosen [] (Schlesinger, 1975). 24

    There is an ongoing discussion in hydrology about the role (e.g., Sivapalan, 2009, Pappenberger and Beven, 252006), nature (e.g., Koutsoyiannis, 2010, Montanari, 2007), and appropriate handling (e.g., Beven et al., 2008, 26Stedinger et al., 2008, Vrugt et al., 2009, Mantovan and Todini, 2006, Beven et al., 2012, Clark et al., 2012) of 27uncertainty. For example, Montanari (2007) argued that the topic of uncertainty assessment in hydrology 28suffers today from the lack of a coherent terminology and a systematic approach. 29In fact, we would be somewhat worried if there was agreement on this issue, as uncertainty is best understood 30as a concept derived from reason (Lindley, 2006; ch2), which itself requires both a definition of and method 31for obtaining knowledge, and there is currently no agreement on the best choice for either in the context of 32science (Laudan, 1983). As such, we propose that our goal should not, at present, be a unified or systematic 33theory of hydrological uncertainty, but rather a robust literature that discusses how to interpret applied 34problems in the context of practical and theoretical limits to different types of scientific inference. 35Here we provide a very simple example of what this type of discussion might look like. Our primary opinion is 36that much of the existing discussion about uncertainty in hydrology has failed to frame the problem in ways 37that encourage the most meaningful types of questions. Our example here reflects on the current debate about 38the appropriate role of probability theory for hydrological uncertainty quantification (see Beven, 2015). We 39

    Grey S. Nearing1,2 1-301-614-5971 [email protected]

    Yudong Tian1,3 1-301-286-2275 [email protected]

    Hoshin V. Gupta4 1-520-626-9712 [email protected]

    Martyn P. Clark5 1-303-497-2732 [email protected]

    Kenneth W. Harrison1,3 1-301-286-7095 [email protected]

    Steven V. Weijs6 1-604-822-6301 [email protected]

  • 2

    see potential to clarify the discussion by being explicit about first principles, and we do this by outlining the 40epistemological context from which probability theory may be derived. Upon doing so we arrive at the 41following conclusions: 421. There are three fundamental issues that preclude absolutely perfect uncertainty accounting: (i) the fact that 43

    we can only conduct a finite number of experiments, (ii) the fact that we can test only a finite number of 44hypotheses, and (iii) the fact that we can only test collections of hypotheses. We argue that these issues 45precede our choice of epistemology, and will discuss several ways that modern efforts in hydrology 46attempt to address these challenges directly. 47

    2. The most straightforward scientific method that supports probability theory does not allow for any 48meaningful concept of model error. This is because any hypothesis that is tested using probability theory 49must be, in reality, either true or false, and so any model that results in nonzero error is false (i.e., we are 50left with a strictly Popperian science that is not useful for practical purposes). This is true even if we 51recognize a concept of observation uncertainty (which is, strictly speaking, a conceptual error) because 52our uncertainty about any physical measurement process must itself be modeled, and so we can are 53restricted to testing either full models of the entire experimental process or we are forced to only make 54inferences about the truth-value of hypotheses that are conditional on (almost certainly incorrect) 55exogenous assumptions. 56

    3. Finally, we discuss how an epistemological understanding of uncertainty allows us to design a vocabulary 57and taxonomy for communicating ideas about uncertainty. Although there are already several conflicting 58taxonomies of uncertainty, using one that is based on explicit epistemological axioms may help facilitate 59less ambiguous discussion both within the scientific community and during science arbitration. We dont 60necessarily encourage people to use the taxonomy that we propose here, but we do encourage efforts to be 61more explicit about the relationship between various conceptualizations of uncertainty and whatever 62scientific method those conceptualizations derive from. 63

    2. A Common Epistemology 64Knowledge: Our first step in building an epistemology is to define what we mean by knowledge. We cannot 65pretend to do this rigorously, however let us take as our basis what is perhaps the most common system of 66logic: that which is based on the three axioms that Russell (2001) arguably incorrectly (e.g., Bueno and 67Colyvan, 2004, Kosko, 1990) described as self-evident: (i) the law of identity (Whatever is, is), (ii) the 68law of non-contradiction (Nothing can both be and not be), and (iii) the law of excluded middle 69(Everything must either be or not be). Under these axioms every well-formed statement is either exclusively 70true or exclusively false. Such well-formed statements are referred to as Aristotelian in reference to the fact 71that the modus ponens, , and modus tollens, ~ ~1 (see Jaynes, 2003; 72p4), which provide a basis for the classical predicate calculus2, were essentially formalized by his syllogisms. 73In this context, knowledge is defined as our ability to distinguish the truth-value of a given Aristotelian 74proposition. 75Inductive Inference: To understand the motivation of science, we must first recognize that there is 76apparently (Davies, 1990) some regularity in relationships between events or phenomena in the universe, or 77at least between certain aspects of our phenomenological experience (Hume, 2011). Science is interested 78primarily in finding the truth-value of statements about this perceived regularity such statements are called 79 1 In propositional logic modus ponens and modus tollens describe rules of inference. The notation P Q states that P implies Q; modus ponens states that if proposition P is true, and given the relationship between P and Q, that proposition Q is also true. By contrast, modus tollens states that, given the relationship between P and Q, if Q is false then P is false. 2 The classical predicate calculus is actually an algebra, not a calculus. We will hereafter refer to this as the propositional algebra to distinguish the fact that the probability calculus is a direct extension of the propositional algebra that includes a variable (a measure of belief) that is capable of dynamic response to relationships with other variables.

  • 3

    explanantia (singular: explanans) (Hempel and Oppenheim, 1948)3 using knowledge about specific events or 80phenomena that arise from this regularity. Leaving aside the question of the sense in which sensory experience 81allows us to acquire knowledge about events (e.g., Berkeley, 1874), it impossible to proceed analytically from 82phenomenological knowledge to knowledge about the regularity of the universe using only the propositional 83algebra. This is because the modus tollens can only falsify hypotheses. To state this clearly, model evaluation 84cannot be a deductive process unless we use a purely falsification-based science (e.g., Popper, 2002). 85Thus, we are required to employ an additional rule where the consequent does not follow from the antecedent: 86 (see Jaynes, 2003; ch1). The operator may be read as weakly implies and we must 87accommodate such a notion in our epistemology. We do this by recognizing a concept of belief or plausibility, 88and although the precise nature of this concept is widely debated (Howson and Urbach, 1989), the most 89important property of a useful concept of belief is that it is subject to change. In the prepositional algebra, our 90state of knowledge might change from no knowledge to perfect knowledge but if applied correctly, the 91propositional algebra does not allow us to ever change the value of our knowledge i.e., from knowledge that 92a proposition is true to knowledge that it is false or vice-versa. Belief, on the other hand, responds dynamically 93to our interactions with the universe. 94Information: Now that we have a dynamic concept of knowledge (called belief), we recognize information as 95the property of a signal that effects a change in our state of belief about some hypothesis (e.g., Schement and 96Ruben, 1993 p8). This concept of information results from, rather than being prescribed to, our axioms. 97Probability: As argued by Jaynes (2003) and Lindley (2006), it seems strange to neglect mathematics as a tool 98for modeling the learning process given its tremendous (but not unlimited; Hamming, 1980) success in helping 99us understand other aspects of the universe (Wigner, 1960). Specifically, now that we have a dynamic 100component in our model (belief), we need a calculus. It turns out that probability theory is the only calculus 101that is consistent with the prepositional algebra when belief is defined such that it can be measured by a scalar 102(Van Horn, 2003). Richard Cox (1946) provided at least the essential groundwork for the uniqueness proof. 103The strong implication here is that if we wish to deal with epistemological questions about the appropriateness 104of probability theory (e.g., Beven, 2015) we must recognize that the probability calculus is a valid quantitative 105epistemology in this well-defined context. The real questions are not about the appropriateness of probability 106theory, but rather about the appropriateness of the Aristotelian epistemology outlined in the previous section. 107We can certainly imagine contexts in which Cox theorem is not relevant, but the question is in what sense 108such contexts are meaningful. Jaynes (2003; p23) argued that any approach that we might take to scientific 109inference should be consistent with the two strong Aristotelian syllogisms in any case where the latter are 110applicable, but certainly this is highly contentious (e.g., Jaynes, 2003 appendix A, Shafer, 1976, Kosko, 1990, 111Dubois and Prade, 2001). 
The rest of this paper will largely leave aside the fact that we dont know whether 112our epistemological axioms are isomorphic with the way either scientists, or the universe itself (e.g., Goyal and 113Knuth, 2011, Knuth, 2004b), processes information. However, while we strongly encourage this discussion as 114it relates to hydrology, it is important to understand that this is the level at which questions about the 115appropriateness of probability theory are relevant and important. 116Uncertainty: Instead of these larger questions, our focus in the rest of this article is on what it means to have 117uncertain knowledge in the context of the above epistemology, which is a rather standard way to interpret the 118scientific method. In this context, uncertainty is due wholly to the fact that we must rely on non-deductive 119reasoning. So in this context uncertainty is the difference between the actual truth-value of some Aristotelian 120proposition and our state of belief about that truth-value. 121 3 Although this essay primarily uses Cartwrights simulacrum account of physical laws, which directly disputes the deductive-nomological account advocated by Hempel & Oppenheim, the difference between the two is the sense in which various types of explanantia can be said to be truth-apt, not the fundamental distinction between explananda and explanantia.

  • 4

    Models: To have a clear picture of scientific uncertainty we must recognize some foundational concepts 122related to the scientific realism. Cartwrights (1983) simulacrum account of the relationship between the 123scientist and the apparent regularity of the universe recognizes a distinction between fundamental laws and 124phenomenological laws, where fundamental laws are essentially explanatory in nature and phenomenological 125laws are predictive4. This distinction centers on the idea that fundamental equations are meant to explain, 126and paradoxically enough the cost of explanatory power is descriptive adequacy so that fundamental 127equations do not govern objects in reality; they govern only objects in models. The explicit consequence of 128this is that because fundamental laws do not actually describe anything they are not truth-apt. Thus, 129phenomenological laws that connect explanatory ideas with predictions of events are the primary objects of 130scientific inquiry under a fundamentally Aristotelian epistemology. 131The simulacrum account works well in hydrology. Phenomenological laws are essentially what hydrologists 132call parameterizations (e.g., Gupta and Nearing, 2014). One or more phenomenological law constitutes a 133model, and even very simple models have a compendium of embedded assumptions, at the very least about 134how fundamental laws relate to actual entities in the models (i.e., Cartwrights bridge principles), and also 135assumptions about how the system under study relates to the rest of the universe. This problem is compounded 136for reductionist accounts of complex systems with multiple interacting components. We must therefore 137recognize that since all models contain multiple explanantia (all of which must be treated as hypothetical), 138there is no alternative to using hydrological models as working hypotheses (e.g., Clark et al., 2011, Beven, 1392012). 1403. The Causes of Uncertainty 141As is often the case, it seems that if we want to control something in this case uncertainty then we should 142seek to understand the causes of that thing. In this section we explore the causes of uncertainty in the context 143of the epistemology outlined above, however we argue strongly that these causes are actually more 144fundamental than that epistemology. We must have the epistemology to define uncertainty that is, to make it 145a coherent concept, but the relationship between any epistemology and its derived concept of uncertainty is 146mediated by a more fundamental aspect of the relationship between scientists and the observable universe 147namely to the necessarily finite nature of our existence. 148The Finite Experiments Problem: The first fundamental problem that gives rise to uncertainty is our ability 149to conduct only limited sets of experiments. This means that general hypotheses can be falsified but never 150proven (Popper, 2002). In other words, it is impossible to determine with certainty the truth-value of the most 151important hypotheses those that are both true and general (Howson and Urbach, 1989 provide a great 152discussion). It is always, in principle, possible to propose an infinite number of different models that allow us 153to correctly predict any finite number of events, and so any set of experiments that we might actually conduct 154will not provide the information necessary to differentiate between members of some large class of un-falsified 155models. 
This means that there will always be some fundamental uncertainty about appropriate models, and to 156the extent that these un-falsified models have conflicting implications about unobserved events we also have 157uncertainty about unobserved phenomena. 158The Finite Hypotheses Problem: A second fundamental source of uncertainty is our ability to test only a 159finite number of models. To understand this, suppose that there exists some concept of all possible models that 160we might propose as descriptions of processes that predict the outcomes of a particular set of experiments. For 1614 We use the term prediction to refer to any phenomenological statement before being tested against observations. This is in direct conflict with the suggestion by Beven and Young (2013), however our use here agrees with the etymology of the word in a way that is meaningful in the context of the scientific method. Jaynes (2003) makes precisely the same argument for using the terms prior and posterior to refer to elements of a Bayesian analysis regardless of whether we actually predict the future.

  • 5

    example, we might interpret the Church-Turing5 thesis (for a formal treatment, see Hutter, 2003 p34) as 162implying that all possible models can be expressed as programs for a universal Turing machine (e.g., 163Rathmanner and Hutter, 2011 p42). It is then possible to place an explicit prior over this complete set 164(Solomonoff, 1964). To test each hypothetical model we must use it to make predictions, which will require at 165least a finite amount of effort (Rathmanner and Hutter, 2011 chapter 9.1). Given finite resources, it is therefore 166only possible to test a finite sample of the infinite number of possible models. 167Uncertainty arises here from the fact that the act of not testing some potentially correct model is 168indistinguishable from assigning to that model zero probability in the inference prior (here the inference prior 169is formally a Bayesian prior since we are working under the probability calculus). This means that the 170probability assigned to that model in our inference posterior will also be zero, which is a mistake. First, notice 171that we run the risk of excluding a true model from the support of our prior and therefore assign a probability 172 = 0 to a true model (this essentially always happens in hydrology since none of our models are true). This is 173a special case of the more general mistake where the hypothetical posterior probability that would be assigned 174by Jaynes robot (a hypothetical machine capable of perfect probabilistic inference) to some excluded model 175might be positive, and therefore assigning to that model a prior probability of zero as a matter of convenience 176results in an incorrect state of belief after performing scientific inference even if that model is false. 177The Duhem-Quine Problem: The Duhem-Quine thesis (Harding, 1976) is the classical statement of the fact 178that any type of inductive reasoning is restricted to acting on models that are necessarily built from a 179compendium of hypothetical explanantia (this predates Cartwright but is supported by her arguments6). This is 180especially but under the simulacrum account certainly not uniquely important in the context of complex 181systems (Weinberg and Weinberg, 1988) like watersheds (Dooge, 1986) where many physical processes 182interact at different temporal and spatial scales. This makes it impossible to test individual explanantia, which 183is a problem when we are looking to discover generalities because all models require (hypothetical) specific 184explanantia to connect (hypothetical) general explanantia with the details of a particular set of experiments. If 185we can only test models that predict integrated responses of complex systems, then we cannot directly test 186hypotheses about the components of those systems, and we cannot arrive at generalities (Clark et al., 2011). 187The Duhem-Quine problem is what gives rise to disinformation. Beven & Westerberg (2011) spoke primarily 188about disinformation in data, however there really is no such thing data are what they are, and the job of a 189scientist is to explain them. Disinformation is introduced during an inference process when we project 190information from a particular data set through an incorrect explanans and onto another explanans. As an 191example, we might want to infer some description of the effective porosity of the unsaturated zone in a 192watershed through parameter estimation, but we convert stage measurements to discharge using a slightly 193incorrect rating curve. 
Disinformation (about various effective porosity explanantia, which are likely a 194component of some phenomenological law that relates precipitation with infiltration) is introduced here 195because information from stage data is corrupted by an incorrect explanantia about the relationship between 196stage and discharge as it is projected onto our beliefs about effective watershed-scale porosity. This holds 197even in extreme cases like where a bit is flipped during electronic transmission of some observation data. Even 198in this extreme case, disinformation is introduced during inference strictly because of another explanans in the 199 5 The Church-Turing thesis states everything that can reasonably be said to be computable by a human using a fixed procedure can be computed by a Turing machine. Rathmanner & Hutter (2011), among many others, interpret this to mean that this class of computable functions or problems is actually large enough to include any environment or problem encountered in science since [a]t a fundamental level every particle interaction can be computed by laws that can be calculated and hence the outcome of any larger system is computable. It is important to remember neither the formal statement of the thesis nor the purported implication for natural science are proven, however even if there is some larger class of functions (than Turing-computable ones) that end up being necessary to describe the physics of the universe, we are still left provably with uncertainty due to the finite hypotheses problem since this problem exists even if we restrict ourselves to Turing-computable models. 6 Incidentally, this fact is also explicit in the deductive-nomological account.

  • 6

    same model specially, the incorrect (albeit implicit) assumption that we have a noiseless communication 200channel. We believe that all cases of disinformation can be reduced to such examples, and we also believe that, 201according to the simulacrum account, disinformation is a fundamental and unavoidable source of uncertainty. 202For a more practical and quantitative discussion of the relationship between disinformation and probabilistic 203inference, see Nearing & Gupta (2015). 204This means that observation error is not a fundamental source of uncertainty. While perhaps a useful 205conceptual tool in some cases for example, when a theorist is handed a data set generated by an 206experimentalist and has no way or desire to build and/or test models of the experimental apparatus, the 207concept of observation error is in reality an example of a mind projection fallacy (Jaynes, 2003; p22). 208Observation error simply doesnt exist. In the case of Eulerian models, like what are typically used in 209hydrology (e.g., Darcy, 1856, Rodriguez-Iturbe, 2000, Reggiani and Schellekens, 2003), this applies to 210observations used to assign values to model parameters, observations of boundary conditions, and observations 211used to test models during inference. This is important to understand because this particular mind projection 212fallacy often at least partially motivates the practice of using error-functions or error-distributions to test 213models, which, as we will argue presently, is not permitted under probability theory. 2144. The Debate about Likelihoods 215Because the probability calculus derives from an Aristotelian epistemology, it is only possible to apply 216probability measures to statements that can in principle be associated with a (generally unknown) Boolean 217truth-value. Given only a small number of observations, almost any deterministic model of a complex system 218will be falsified by the modus tollens, i.e., ~ ~ (see Section 2). Therefore, because of the 219finite hypotheses problem, it is not interesting to perform inference directly on deterministic models (Weijs et 220al., 2010). If we are not comfortable assuming that it is possible in theory (not in practice) to build a true 221model, then we will need to use some generalization of probability theory that does not operate on Aristotelian 222propositions (e.g., Kosko, 1990). 223In practice hydrological models are built as deterministic approximations of physics equations, and since we 224know that these models are essentially certainly wrong, standard practice is to introduce a concept of model 225error by placing ad hoc probability distributions around the predictions made by such models (e.g., Sorooshian 226et al., 1983). Many authors (present company included) have used such distributions as the likelihood 227functions required by Bayes theorem (e.g., Nearing, 2014, McMillan and Clark, 2009, Gupta et al., 1998, 228Harrison et al., 2012). Strictly speaking, it is an error to assign a probability of anything other than = 0 to a 229falsified model (i.e., one that has any non-zero error), and when we use error distributions as likelihood 230functions we can only then apply Bayes theorem to assign probabilities to specific propositions about the 231error of such models, we cannot assign probabilities to the models themselves. 
232To make this more concrete, the posterior of Bayes law as applied to an inference problem will look like 233 ( = | = ) where is a random variable over some class of (deterministic) models, and = is 234some available data. In this case, what is the proposition ? Certainly cannot be a proposition like model 235m is true, since if (deterministic) model m has any nonzero error then it is false. Similarly, consider the 236predictive distribution ( = | = ) where is a random variable over errors or residuals defined as 237differences between model predictions and experimental observations. Here random variables like = refer 238to Aristotelian propositions like our prediction of a particular phenomenon will differ by amount e as 239compared to a (hypothetical or actual) observation. In this case the random variable = simply refers to 240the proposition we use model m so that expressions like = | = are read as given that we use model 241m, the error will be amount e. But this does not help us at all during inference; if we define things this way 242we are fundamentally unable to project any information from statements like = back onto expressions like 243 = . So the pertinent question is about exactly what propositions variables like = refer to. 244One way to address this is to treat so-called error distributions as components of the models that we are 245testing. Only by conceptualizing the situation this way that our models themselves make probabilistic 246predictions can we assign nonzero probabilities to complex models. Only probabilistic models can provide 247

  • 7

    information in any well-defined sense (Nearing and Gupta, 2015, Weijs et al., 2010). But in this case, the 248resulting probabilities are associated with models that contain two types of phenomenological laws: ones that 249take the form of approximate consequences of Newtons laws, and also ones that are purely epistemic in nature 250(relating to information in the model). If we want to assign probabilities to any model, then we must perform 251inference over this whole model, not just over some individual component thereof, and if we choose a single 252error distribution (or a single likelihood function) then this is the same as restricting our inference prior to one 253that assigns non-zero probability only to those hypothetical models that include this one particular component 254 this is an example of the finite hypotheses problem. 255Thus we see that the debate in the hydrology literature that is ostensibly about likelihood functions is really 256about how to choose a degenerate prior over this particular (epistemic) component of our models the debate 257is about choosing which models to test. As an example, using a squared error objective function is equivalent 258to performing inference over a prior that assigns non-zero probability only to models that result in a certain 259type of predictive distributions those that assign probabilities that are proportional to the inverse of a squared 260distance from observations (e.g., Smith et al., 2008). It is often reported that such likelihood functions are 261incoherent with the probability calculus. However, any function with a finite total integral becomes a coherent 262probability distribution after appropriate scaling. So all likelihood functions used by Smith et al. (2008) are 263absolutely formal as long as we interpret them not as likelihood functions but rather as components of 264hydrological models. Similarly, using Stedinger et al.s (2008) apocryphally formal likelihoods is equivalent 265to performing inference over priors that assign non-zero probability only to models that make Gaussian 266predictions. The latter are also formal if and only if we interpret them as an epistemic component of our 267models and not as an error distribution or likelihood function. Of course, under this conceptualization the 268model itself is the likelihood function. 269We can interpret Mantovan & Todinis (2006) comment in this context as well. They point out a specific sense 270in which the squared error models (which they again call likelihood functions) that we discussed above are 271supposedly not consistent with the probability calculus. In particular they take issue with the fact that different 272observations (notated !, which is equivalent to Mantovan & Todinis !) are not independent conditional on 273the model (see the section therein titled Equivalence between batch and sequential learning where they use 274notation to refer to the model). This, of course, is not a requirement of the probability calculus since any 275model may predict conditionally correlated events in the sense that: 276 ! ,! ! ! ! . [1] A simple example is a bivariate Gaussian with 0 ( here denotes the bivariate Gaussian distribution with 277parameters ! , ! ,! ,! , , and ! ,! ): 278 ! ,! = ! + !! ! ! , 1 ! !! , but [2.1] ! = ! ,!! . [2.2] Inference on the parameters of this Gaussian model requires data pairs ! ,! . Mantovan & Todini correctly 279point out that the hydrological models (so-called likelihood functions) discussed by Smith et al. 
(2008) (i.e., 280ones where -variate predictive probabilities are related to a Euclidean distance in -space) predict a 281timeseries with some implicit autocorrelation (i.e., the non-equality in equation [1] is really a non-equality). 282Their argument should be interpreted as noting that these likelihood functions are models of a -point auto-283correlated timeseries, and the consequence is that one cannot perform inference on such models using a 284timeseries of observations at least not without considering the implications of the implied auto-285correlation by deriving !,! ,!,~! . The issue is not about the formality of certain likelihood 286functions, but rather that we must account for the fact that the model predicts auto-correlation. 287

  • 8

    To address these and related criticisms, Beven et al. (2007) claimed that formal Bayesian identification of 288models is a special case [of GLUE] and argued that we might want to use so-called informal likelihood 289functions so as to avoid over-precise predictions. We wholeheartedly agree that it is worth making an effort to 290avoid overconfident models, but this is done by choosing models that have a reduced tendency to be 291overconfident. This is actually what Beven et al. (2007) actually did, although they misinterpreted their actions 292as generalizing a particular epistemic calculus (incidentally, they claimed this without actually proposing any 293generalization of the axioms of that calculus). Further discussion is given by Clark et al. (2012). 294In fact, there have been several arguably (Lindley, 1987) successful attempts to generalize Bayes theorem 295(e.g., Frhwirth-Schnatter, 1993), however such attempts are meaningless unless they are supported by a 296generalization of the epistemological axioms that support the theorem (e.g., Kosko, 1990). It is impossible to 297interpret what it means to identify a model or even to make a prediction without an epistemological 298foundation, and the difference between violating and generalizing a theorem is that the latter requires explicit 299statement of generalized axioms and subsequent derivation of an analogous theorem. 3005. What This All Means in Practice 301Here we will outline several efforts to manage uncertainty in hydrology that are consistent with the 302epistemology outlined in Section 2 and can be understood as directly addressing the fundamental limits 303outlined in Section 3. It is important to understand that there is no possibility that any one of these efforts will 304ever solve any of the fundamental problems, but it seems at least potentially valuable to be aware of the 305relationships between our efforts to advance the science and the true underlying challenges. 306

    5.1. Building Models vs. Inference 307The ultimate objective of any applied science should probably be to build models directly from first principles, 308and since all models must account for what we do and dont know about a certain class of physical systems (as 309argued in Section 4), this means that we should at least desire to build uncertainty models from first principles. 310Notice, however, that even if we were to build models that estimated uncertainty from first principles, these 311models would still almost certainly be incorrect in almost all cases, and we would therefore still need to 312perform some form of inference (as defined above) over classes of such models. So we must make a 313distinction between how we build models and how we test them. 314At least for the foreseeable future, hydrologists will likely continue to build and test models that include 315distinct uncertainty components that are estimated directly from data. That is, we will rely on past experience 316(i.e., frequentist accounting) to tell us what a particular model gets wrong. But it is important to remember that 317building models this way is very different from testing those models once they are built. We can approach 318these tasks in one of two ways: either by (i) estimating a model error distribution directly from data (e.g., 319Sikorska et al., 2014, Wilkinson et al., 2011) and then going on to test such models using an independent data 320source, or (ii) by performing both steps simultaneously (e.g., Brynjarsdttir and OHagan, 2014, Vrugt and 321Sadegh, 2013). 322To state this in a slightly different way, the vast majority of efforts in hydrology toward quantifying 323uncertainty about future predictions rely on the chain rule of probability theory (Liu and Gupta, 2007). For 324example, according to Sikorska et al. (2014), [t]he problem of uncertainty assessment is in principle 325reducible to estimating and integrating the main sources of uncertainty through the modeling chain. But how 326do we know those main sources? Estimating the joint and conditional distributions necessary to apply the 327chain rule in this sense is precisely the purpose of the scientific method. 328

    5.2. Combinatorial Modeling Systems 329The primary impediment to estimating those main sources of uncertainty is due to the Duhem-Quine problem. 330Any error in one component of our model will result in disinformation about any other component, so it is very 331difficult estimate joint and conditional distributions over or place probabilities on each individual modeling 332hypothesis (each phenomenological law in our models). 333

  • 9

    One way to approach this is to use a modular modeling system (e.g., Clark et al., 2015, Niu et al., 2011). Such 334systems attempt to decompose models into their constituent phenomenological laws (or as close to such as 335possible). They allow the scientist to recombine various process parameterizations (different representations of 336various phenomenological laws) to produce many different models that include any particular hypothesized 337phenomenological law or parameterization. Although this does not address the finite hypotheses problem at 338all, it does at least explicitly acknowledge the Duhem-Quine problem by facilitating sensitivity analyses that 339assess the relative contribution of individual phenomenological laws to the total variability in model output. 340Further, by testing a combinatorial number of such models where each individual parameterization is held 341constant and recombined with various other parameterizations to produce different estimates of the integrated 342system response, it should be possible to use Bayes theorem to infer posterior distributions over individual 343components in a way that is explicitly cognizant of the Duhem-Quine problem. Make no mistake: this cannot 344solve the Duhem-Quine problem, but at least the approach addresses the problem head-on. 345It is important to remember that while modular modeling systems might let us infer posterior distributions over 346individual phenomenological laws, they cannot do so if they do not include (hopefully many different) 347representations of the measurement processes related to both input and response data. And they also cannot do 348so unless they contain (hopefully many different) representations of the fact that no model built in such a 349system will have perfect process representations. That is, we still need the models to recognize their own 350uncertainty, such distributions must be included as a component of each model, and the choice of these 351uncertainty distributions should be treated as identical to the choice of any and every other model component. 352These are not really error distributions, but they are distributions over phenomena or events, not over 353phenomenological laws or parameterizations. 354

    5.3. Information Benchmarking 355There also appears to be potential for using an information-centric approach to understanding the relative 356contribution of various model components to predictive uncertainty. Measuring uncertainty is always an 357approximate endeavor because the finite hypotheses problem always requires us to use degenerate priors, and 358thus our estimates of measures of uncertainty are never guaranteed to converge to the values that they would 359have in the limit of an increasing number of observational experiments. There are, however, complete bases 360over certain large classes of functions (e.g., continuous functions) that allow us to estimate functional 361relationships directly from real world data (e.g., Jackson, 1988, Cybenko, 1989), and we can sometimes use 362these to produce estimates of information and certain measures of uncertainty that approach some true values 363that would result in the limit of infinite observation data (Nearing et al., in review). The compromise here is 364that this type of model benchmarking only produces integrated measures of uncertainty, not full probability 365distributions (Nearing and Gupta, 2015), and so this offers little help in building probabilistic models. What it 366does offer though is a way to quantitatively diagnose where uncertainty is coming from. 367

    5.4. Utilitarianism 368We might resolve the problem of understanding error distributions by taking a utilitarian approach. In the 369precedent, we argued that distributions over model error cannot be used as likelihood functions for Bayesian 370inference, which is true if we want to perform inference on absolute Boolean truth-values associated with 371individual hydrological models. In this case, essentially all deterministic models will be falsified but more 372general (probabilistic) models may be assigned some non-zero probability. In practice however, we typically 373recognize that even incorrect models provide useful information. In context of the probability calculus, a 374likelihood ratio contains all of the information that we have about the choice between two models (Edwards, 3751984), and we can integrate the ratio of predictive probabilities supplied by any two models under any 376transformation. The transformation under which we integrate such a ratio implies a utility function that may or 377may not be based on a real and meaningful concept of model error (Nearing and Gupta, 2015). In this way we 378may use a concept of utility to reconcile our intuition that erroneous models can provide useful information 379with the reality of the probability calculus. 380

  • 10

    The concept of utility allows us to perform relative inference over a limited class of models. The utility 381function must be well-defined and should be appropriate for a particular decision process, however it is 382important to understand that any decision process is an act of will rather than an act of rationality (Savage, 3831962). The latter means that a concept of utility does not derive from any existing epistemology, and so if we 384desire such a concept it must be appended to the existing epistemology via the addition of one or more axioms. 385If we are willing to do this we can then use Bayes theorem to assign probabilities to statements like this 386model is the best (in a particular sense) among the limited class that I am testing. Such Aristotelian 387statements can be (and, in fact, sometimes are) true, and so can be assigned probabilities. 388But this does not solve, or even mitigate, the finite hypothesis problem because there is always the possibility 389that some model that is not included in our inference class would dominate in the utilitarian sense if it were 390considered. At best, probabilistic utilitarianism is essentially isomorphic with the interpretation of probability 391theory described in the preceding sections, and so there is no epistemological reason to prefer one over the 392other than that the former allows us to retain, in a formal sense, the familiar concept of model error at the 393expense of a more complex epistemology. 394One very attractive utilitarian philosophy is based on parsimony. This concept is supported directly by the 395neutral ontological perspective described by Davies (1990), who pointed out that [t]he existence of 396regularities [e.g., between cause and effect] may be expressed by saying that the world is algorithmically 397compressible. The quantitative theory that results from this perspective is due to Kolmogorov (1963) and 398Solomonoff (1964) who defined complexity rigorously in the context of the Church-Turing thesis. The 399complexity of data is equal to the length of the shortest program for some universal Turing machine that can 400output that data, plus the length of the description of the universal Turing machine. This theory has been used 401for hydrological model selection (Weijs et al., 2013), however, again in practice this only allows us to rank a 402finite family of models. 4036. A Derived Taxonomy 404Finally, we would like to say a few words about how uncertainty is discussed in science and engineering 405literature. As mentioned in the introduction, there is some ongoing effort among hydrologists to codify the 406concept of uncertainty into a standard taxonomy (e.g., Beven, 2015, Montanari, 2007). We here encourage the 407perspective that since uncertainty is only a meaningful concept within some epistemology, that any taxonomy 408will be largely (but not fully) dependent on the choice of epistemology. 409First, lets deal with the exception to that rule. It was implied (and referenced) above that there is probably no 410self-evident epistemology. Beven (2015) refers to the choice of epistemology, somewhat confusingly, as 411ontological uncertainty, defined as uncertainty associated with different belief systems[for example,] about 412whether formal probability is an appropriate framework. 
However, in the engineering and science literature, 413ontological uncertainty refers to properties of the system under study, not properties of the researchers choice 414of philosophy (e.g., Walker et al., 2003, Lane and Maxfield, 2005, Brugnach et al., 2008). It seems more 415straightforward to acknowledge the choice of epistemology as epistemological uncertainty, and to uncertainty 416about what exists as ontological uncertainty. Epistemological uncertainty (but not ontological uncertainty) 417obviously precedes our choice of epistemology. But here we arent talking about the same type of uncertainty 418that occurs within a well-defined epistemology. Epistemological uncertainty is a fundamentally different thing 419from all other types of uncertainty since uncertainty itself is only well defined in the context of a well-defined 420epistemology. It is somewhat unfortunate to use same word to describe both concepts, and this implies the 421obvious fact that we must have some meta-epistemology to discuss the choice of scientific epistemology. 422Within the context of, presumably, any epistemology that supports any empirical science, there is a 423fundamental distinction between explanantia and events. We therefore immediately recognize two distinct 424types of uncertainty: related to our ability to simulate the universe and related to our knowledge about 425individual phenomena that occurs in the universe. We might call the former simulacrum uncertainty and the 426latter phenomenological uncertainty. 427

  • 11

    The distinction between simulacrum and phenomenological uncertainty is not analogous to the distinction 428between epistemic and aleatory uncertainties defined by either Gong et al. (2013) or Beven (2015), nor is it 429analogous to the distinction between contextual and phenomenological uncertainties proposed by Kreye et al. 430(2011). Aleatory uncertainty is almost always defined with respect to some concept of randomness (Montanari 431et al., 2009, Ang and Tang, 2004, Kiureghian and Ditlevsen, 2009), and at the scales we are concerned with in 432hydrology, randomness is essentially (but perhaps not completely; Bell, 1964) an epistemic concept (Jaynes, 4332003; ch10). Thus aleatory uncertainty is equivalent to a concept of acceptability in the opinion of the 434researcher (Ang and Tang, 2004, Baecher and Christian, 2000, Kiureghian and Ditlevsen, 2009). Similarly, 435and despite his best efforts, Kalman (1994) was only able to define randomness in this ultimately subjective 436context: as nonunique[ness] modulo all relevant regularities (emphasis ours). If our goal were to build 437deterministic models (which, as argued above, should never be the objective of any applied science), we might 438be tempted to think of aleatory uncertainty as referring to uncertainty due to observations and epistemic 439uncertainty as referring to uncertainty about phenomenological laws (e.g., Gong et al., 2013). Of course, this 440distinction fails under the epistemology that supports probability theory (as discussed above), and so we must 441recognize that epistemic and aleatory do not represent a true duality most aleatory uncertainty at the scale of 442watersheds is also epistemic. 443We do agree strongly with Beven (2015) that it is absolutely necessary to recognize a concept of 444disinformation, however we see this as arising from the necessarily compound nature of our models that is, 445projecting information from data through incorrect model components onto uncertainty distributions over other 446model components. As argued above, disinformation cannot be a property of data itself, but rather is a property 447of our ability to decode the data. One may be tempted to conceptualize the measurement process as a (noisy) 448Shannon-type communication channel, however this analogy fails because we must recognize that the universe 449itself is not intentional and so there is no encoder in this analogy. The information contained in our 450observations is not about anything until we decide as such, and this particular act of deciding aboutness is 451a component of our model of the physical universe in particular, a component of our model of the 452measurement process. Thus we do not recognize in our taxonomy a concept of observation error except as a 453(potentially useful but also potentially dangerous) thought experiment. 454More generally, when considering any taxonomy, it is important to recognize that everyone has some a priori 455understanding of the central concepts (e.g., at the time of publication information is the 219th most commonly 456used word in the English language; www.wordcount.org). Therefore, effective science arbitration will probably 457require definition of all of the terms in each individual context. A scientist who understands the core concepts 458will probably be most effective at communicating their knowledge. 
This is why we encourage a bottom-up 459taxonomy: deriving language and communication strategies from the most fundamental principles and 460distinctions seems the best way to avoid ambiguity. 4617. Summary of the Main Points 462Our primary intent here is to show that there is a huge literature and tradition of thought that has worked to lay 463the groundwork for applied efforts, like the ones that we are currently making in hydrology, to understand 464what science can and cannot tell us. Despite perhaps because of the philosophical naivet of the current 465authors (we are all trained hydrologists, not philosophers), we are convinced that much of the current debate 466about uncertainty in hydrology could be resolved or at least redirected in more fruitful directions if we focused 467on connecting practice and discussion with fundamental concepts. 468So when Beven (2015) asked whether probability theory is an appropriate framework for presenting model 469errors it seems that this question can only be answered by asking a series of more basic questions. Are we 470willing to suppose that hypotheses are truth-apt that is, are hypotheses about the general principles of 471watershed in principle falsifiable? Are we willing to admit a concept of belief to represent a concept of 472incomplete information? Are we willing to measure belief as a scalar? If the answers to these questions are 473yes, then not only is probability theory appropriate, but it is required (Cox, 1946). So this paper really boils 474

  • 12

    down a suggestion that understanding the philosophy behind concepts that we might otherwise take for granted 475will help us to ask better questions. 476From a more practical perspective, we wish to reiterate two key messages. First that if we are indeed willing to 477work explicitly under some version of a falsification science, then we cannot use any concept of model error 478to inform distributions over observations. Instead, the model itself must produce a distribution over possible 479experimental outcomes. Admitting any concept of model error precludes us from assigning a probability to the 480model itself, and instead forces us to assign probabilities to statements about models that are interpretable only 481in the context of some a priori utility function. We expect that this, or some closely related perspective, will 482ultimately resolve the ongoing debate in hydrology about likelihood functions, which, from our perspective is 483apparently vacuous. From a practical perspective, this means that we must perform inference on the whole 484probabilistic hydrological model, including both any approximations of physics and also a representation of 485the models own missing information. We cannot presume to know what our uncertainty will look like (i.e., 486we cannot choose even the parametric form of a likelihood function) independent of knowing the rest of the 487model. 488Beven (1987) identified uncertainty research as an emerging paradigm in hydrology, and this avenue of 489investigation has been incredibly productive in the interim. Hydrology is often on the leading edge of both 490understanding and developing methods for applied uncertainty estimation; one of the reasons being that we 491deal with complex systems but mostly ones that are not Chaotic, so we are able to focus our efforts on some of 492the foundational issues without dealing with essentially unmanageable nonlinearities. We would like to see 493this paradigm continue under the understanding that our methods for uncertainty about watersheds are, in fact, 494models in exactly the same way as our models of our knowledge about watersheds. This may be especially 495fruitful if we can figure out how to leverage relationships between physical information and epistemological 496information (e.g., Knuth, 2004a). 497__________________________ 498References: 499Ang, A. H. S. and Tang, W. H. (2004) 'Probability concepts in engineering', Planning, 1(4), pp. 1-3. 500Baecher, G. B. and Christian, J. T. 'Natural variation, limited knowledge, and the nature of uncertainty in risk analysis'. 501

    Risk-Based Decisionmaking inWater Resources IX, Santa Barbara, CA USA. 502Bell, J. S. (1964) 'On the einstein-podolsky-rosen paradox', Physics, 1(3), pp. 195-200. 503Berkeley, G. (1874) A treatise concerning the principles of human knowledge. Philadelphia: JB Lippincott & Company. 504Beven, K. (1987) 'Towards a new paradigm in hydrology', IN: Water for the Future: Hydrology in Perspective. IAHS 505

    Publication, (164). 506Beven, K. (2012) 'Causal models as multiple working hypotheses about environmental processes', Comptes rendus 507

    geoscience, 344(2), pp. 77-88. 508Beven, K., Smith, P. and Freer, J. (2007) 'Comment on Hydrological forecasting uncertainty assessment: Incoherence of 509

    the GLUE methodology by Pietro Mantovan and Ezio Todini', Journal of Hydrology, 338(3), pp. 315-318. 510Beven, K. J. (2015) 'Facets of Uncertainty: Epistemic error, non-stationarity, likelihood, hypothesis testing, and 511

    communication', Hydrological Sciences Journal. 512Beven, K. J., Smith, P., Westerberg, I. and Freer, J. (2012) 'Comment on Pursuing the method of multiple working 513

    hypotheses for hydrological modeling by P. Clark et al', Water Resources Research, 48(11), pp. W11801. 514

  • 13

Beven, K. J., Smith, P. J. and Freer, J. E. (2008) 'So just why would a modeller choose to be incoherent?', Journal of Hydrology, 354(1), pp. 15-32.
Beven, K. J. and Westerberg, I. (2011) 'On red herrings and real herrings: disinformation and information in hydrological inference', Hydrological Processes, 25(10), pp. 1676-1680.
Brugnach, M., Dewulf, A., Pahl-Wostl, C. and Taillieu, T. (2008) 'Toward a relational concept of uncertainty: about knowing too little, knowing too differently, and accepting not to know', Ecology and Society, 13(2), 30.
Brynjarsdóttir, J. and O'Hagan, A. (2014) 'Learning about physical parameters: the importance of model discrepancy', Inverse Problems, 30(11), 114007.
Bueno, O. and Colyvan, M. (2004) 'Logical non-apriorism and the law of non-contradiction', in The Law of Non-Contradiction: New Philosophical Essays, pp. 156-175.
Cartwright, N. (1983) How the Laws of Physics Lie. Oxford: Oxford University Press.
Clark, M. P., Kavetski, D. and Fenicia, F. (2011) 'Pursuing the method of multiple working hypotheses for hydrological modeling', Water Resources Research, 47(9).
Clark, M. P., Kavetski, D. and Fenicia, F. (2012) 'Reply to comment by K. Beven et al. on "Pursuing the method of multiple working hypotheses for hydrological modeling"', Water Resources Research, 48(11).
Clark, M. P., Nijssen, B., Lundquist, J. D., Kavetski, D., Rupp, D. E., Woods, R. A., Freer, J. E., Gutmann, E. D., Wood, A. W., Brekke, L. D., Arnold, J. R., Gochis, D. J. and Rasmussen, R. M. (2015) 'A unified approach for process-based hydrologic modeling: 1. Modeling concept', Water Resources Research, 51(4), pp. 2498-2514.
Cox, R. T. (1946) 'Probability, frequency and reasonable expectation', American Journal of Physics, 14, pp. 1-13.
Cybenko, G. (1989) 'Approximation by superpositions of a sigmoidal function', Mathematics of Control, Signals and Systems, 2(4), pp. 303-314.
Darcy, H. (1856) Les fontaines publiques de la ville de Dijon.
Davies, P. C. W. (1990) 'Why is the physical world so comprehensible?', in Complexity, Entropy and the Physics of Information, pp. 61-70.
Dooge, J. C. I. (1986) 'Looking for hydrologic laws', Water Resources Research, 22(9S), pp. 46S-58S.
Dubois, D. and Prade, H. (2001) 'Possibility theory, probability theory and multiple-valued logics: a clarification', Annals of Mathematics and Artificial Intelligence, 32(1-4), pp. 35-66.
Edwards, A. W. F. (1984) Likelihood. CUP Archive.
Frühwirth-Schnatter, S. (1993) 'On fuzzy Bayesian inference', Fuzzy Sets and Systems, 60(1), pp. 41-58.
Gong, W., Gupta, H. V., Yang, D., Sricharan, K. and Hero, A. O. (2013) 'Estimating epistemic and aleatory uncertainties during hydrologic modeling: an information theoretic approach', Water Resources Research, 49(4), pp. 2253-2273.
Goyal, P. and Knuth, K. H. (2011) 'Quantum theory and probability theory: their relationship and origin in symmetry', Symmetry, 3(2), pp. 171-206.
Gupta, H. V. and Nearing, G. S. (2014) 'Using models and data to learn: a systems theoretic perspective on the future of hydrological science', Water Resources Research, 50(6), pp. 5351-5359.
Gupta, H. V., Sorooshian, S. and Yapo, P. O. (1998) 'Toward improved calibration of hydrologic models: multiple and noncommensurable measures of information', Water Resources Research, 34(4), pp. 751-763.
Hamming, R. W. (1980) 'The unreasonable effectiveness of mathematics', American Mathematical Monthly, pp. 81-90.
Harding, S. (1976) Can Theories be Refuted? Essays on the Duhem-Quine Thesis. Springer Science & Business Media.
Harrison, K. W., Kumar, S. V., Peters-Lidard, C. D. and Santanello, J. A. (2012) 'Quantifying the change in soil moisture modeling uncertainty from remote sensing observations using Bayesian inference techniques', Water Resources Research, 48(11), W11514.
Hempel, C. G. and Oppenheim, P. (1948) 'Studies in the logic of explanation', Philosophy of Science, pp. 135-175.
Howson, C. and Urbach, P. (1989) Scientific Reasoning: The Bayesian Approach. Chicago, IL: Open Court Publishing.
Hume, D. (2011) An Enquiry Concerning Human Understanding. Broadview Press.
Hutter, M. (2003) Universal Algorithmic Intelligence. Technical Report IDSIA-01-03.
Jackson, I. (1988) 'Convergence properties of radial basis functions', Constructive Approximation, 4(1), pp. 243-264.
Jaynes, E. T. (2003) Probability Theory: The Logic of Science. New York, NY: Cambridge University Press.
Kalman, R. E. (1994) 'Randomness reexamined', Modeling, Identification and Control, 15(3), pp. 141-151.
Kiureghian, A. D. and Ditlevsen, O. (2009) 'Aleatory or epistemic? Does it matter?', Structural Safety, 31(2), pp. 105-112.
Knuth, K. H. (2004a) 'Deriving laws from ordering relations', arXiv preprint physics/0403031.
Knuth, K. H. (2004b) 'What is a question?', arXiv preprint physics/0403089.
Kolmogorov, A. N. (1963) 'On tables of random numbers', Sankhyā: The Indian Journal of Statistics, Series A, 25(4), pp. 369-376.
Kosko, B. (1990) 'Fuzziness vs. probability', International Journal of General Systems, 17(2-3), pp. 211-240.
Koutsoyiannis, D. (2010) 'HESS Opinions: "A random walk on water"', Hydrology and Earth System Sciences, 14(3), pp. 585-601.
Kreye, M. E., Goh, Y. M. and Newnes, L. B. 'Manifestation of uncertainty: a classification'. DS 68-6: Proceedings of the 18th International Conference on Engineering Design (ICED 11), Impacting Society through Engineering Design, Vol. 6: Design Information and Knowledge, Lyngby/Copenhagen, Denmark, 15-19 August 2011.
Lane, D. A. and Maxfield, R. R. (2005) 'Ontological uncertainty and innovation', Journal of Evolutionary Economics, 15(1), pp. 3-50.
Laudan, L. (1983) 'The demise of the demarcation problem', in Physics, Philosophy and Psychoanalysis. Springer, pp. 111-127.
Lindley, D. V. (1987) 'The probability approach to approximate and plausible reasoning in artificial intelligence and expert systems', Statistical Science, 2(1), pp. 17-24.
Lindley, D. V. (2006) Understanding Uncertainty. John Wiley & Sons.
Liu, Y. Q. and Gupta, H. V. (2007) 'Uncertainty in hydrologic modeling: toward an integrated data assimilation framework', Water Resources Research, 43(7), W07401, doi:10.1029/2006WR005756.
Mantovan, P. and Todini, E. (2006) 'Hydrological forecasting uncertainty assessment: incoherence of the GLUE methodology', Journal of Hydrology, 330(1), pp. 368-381.
McMillan, H. and Clark, M. (2009) 'Rainfall-runoff model calibration using informal likelihood measures within a Markov chain Monte Carlo sampling scheme', Water Resources Research, 45(4).
Montanari, A. (2007) 'What do we mean by "uncertainty"? The need for a consistent wording about uncertainty assessment in hydrology', Hydrological Processes, 21(6), pp. 841-845.
Montanari, A., Shoemaker, C. A. and van de Giesen, N. (2009) 'Introduction to special section on Uncertainty Assessment in Surface and Subsurface Hydrology: an overview of issues and challenges', Water Resources Research, 45(12), W00B00.
Nearing, G. S. (2014) 'Comment on "A blueprint for process-based modeling of uncertain hydrological systems" by Alberto Montanari and Demetris Koutsoyiannis', Water Resources Research, 50(7), pp. 6260-6263.
Nearing, G. S. and Gupta, H. V. (2015) 'The quantity and quality of information in hydrologic models', Water Resources Research, 51(1), pp. 524-538.
Nearing, G. S., Mocko, D. M., Peters-Lidard, C. D. and Kumar, S. V. (in review) 'Benchmarking NLDAS-2 soil moisture and evapotranspiration to separate uncertainty contributions'.
Niu, G. Y., Yang, Z. L., Mitchell, K. E., Chen, F., Ek, M. B., Barlage, M., Kumar, A., Manning, K., Niyogi, D. and Rosero, E. (2011) 'The community Noah land surface model with multiparameterization options (Noah-MP): 1. Model description and evaluation with local-scale measurements', Journal of Geophysical Research: Atmospheres (1984-2012), 116(D12).
Pappenberger, F. and Beven, K. J. (2006) 'Ignorance is bliss: or seven reasons not to use uncertainty analysis', Water Resources Research, 42(5).
Popper, K. R. (2002) The Logic of Scientific Discovery. New York: Routledge.
Rathmanner, S. and Hutter, M. (2011) 'A philosophical treatise of universal induction', Entropy, 13(6), pp. 1076-1136.
Reggiani, P. and Schellekens, J. (2003) 'Modelling of hydrological responses: the representative elementary watershed approach as an alternative blueprint for watershed modelling', Hydrological Processes, 17(18), pp. 3785-3789.
Rodriguez-Iturbe, I. (2000) 'Ecohydrology: a hydrologic perspective of climate-soil-vegetation dynamics', Water Resources Research, 36(1), pp. 3-9.
Russell, B. (2001) The Problems of Philosophy. Oxford University Press.
Savage, L. J. (1962) The Foundations of Statistical Inference. Methuen.
Schement, J. R. and Ruben, B. D. (1993) Between Communication and Information. New Brunswick, NJ: Transaction Books.
Schlesinger, G. (1975) 'Confirmation and parsimony', Induction, Probability, and Confirmation, 6, p. 324.
Shafer, G. (1976) A Mathematical Theory of Evidence. Princeton, NJ: Princeton University Press.
Sikorska, A. E., Montanari, A. and Koutsoyiannis, D. (2014) 'Estimating the uncertainty of hydrological predictions through data-driven resampling techniques', Journal of Hydrologic Engineering.
Sivapalan, M. (2009) 'The secret to "doing better hydrological science": change the question!', Hydrological Processes, 23(9), pp. 1391-1396.
Smith, P., Beven, K. J. and Tawn, J. A. (2008) 'Informal likelihood measures in model assessment: theoretic development and investigation', Advances in Water Resources, 31(8), pp. 1087-1100.
Solomonoff, R. J. (1964) 'A formal theory of inductive inference. Part I', Information and Control, 7(1), pp. 1-22.
Sorooshian, S., Gupta, V. K. and Fulton, J. L. (1983) 'Evaluation of maximum-likelihood parameter-estimation techniques for conceptual rainfall-runoff models: influence of calibration data variability and length on model credibility', Water Resources Research, 19(1), pp. 251-259.
Stedinger, J. R., Vogel, R. M., Lee, S. U. and Batchelder, R. (2008) 'Appraisal of the generalized likelihood uncertainty estimation (GLUE) method', Water Resources Research, 44(12), W00B06.
Van Horn, K. S. (2003) 'Constructing a logic of plausible inference: a guide to Cox's theorem', International Journal of Approximate Reasoning, 34(1), pp. 3-24.
Vrugt, J. A. and Sadegh, M. (2013) 'Towards diagnostic model calibration and evaluation: approximate Bayesian computation', Water Resources Research.
Vrugt, J. A., Ter Braak, C. J. F., Gupta, H. V. and Robinson, B. A. (2009) 'Equifinality of formal (DREAM) and informal (GLUE) Bayesian approaches in hydrologic modeling?', Stochastic Environmental Research and Risk Assessment, 23(7), pp. 1011-1026.
Walker, W. E., Harremoës, P., Rotmans, J., van der Sluijs, J. P., van Asselt, M. B. A., Janssen, P. and Krayer von Krauss, M. P. (2003) 'Defining uncertainty: a conceptual basis for uncertainty management in model-based decision support', Integrated Assessment, 4(1), pp. 5-17.
Weijs, S., van de Giesen, N. and Parlange, M. (2013) 'HydroZIP: how hydrological knowledge can be used to improve compression of hydrological data', Entropy, 15(4), pp. 1289-1310.
Weijs, S. V., Schoups, G. and van de Giesen, N. (2010) 'Why hydrological predictions should be evaluated using information theory', Hydrology and Earth System Sciences, 14(12), pp. 2545-2558.
Weinberg, G. M. and Weinberg, D. (1988) General Principles of Systems Design. New York: Dorset House.
Wigner, E. P. (1960) 'The unreasonable effectiveness of mathematics in the natural sciences. Richard Courant Lecture in Mathematical Sciences delivered at New York University, May 11, 1959', Communications on Pure and Applied Mathematics, 13(1), pp. 1-14.
Wilkinson, R. D., Vrettas, M., Cornford, D. and Oakley, J. E. (2011) 'Quantifying simulator discrepancy in discrete-time dynamical simulators', Journal of Agricultural, Biological, and Environmental Statistics, 16(4), pp. 554-570.
