
A critical discussion and practical recommendations on some

issues relevant to the non-probabilistic treatment of

uncertainty in engineering risk assessment

Nicola Pedroni1, Enrico Zio2, Alberto Pasanisi3, Mathieu Couplet4

1Corresponding author. Chair “System Science and the Energy challenge”-Fondation Electricité de

France (EdF) at the Laboratoire Genie Industriel (LGI), CentraleSupélec, Université Paris-Saclay,

Grande voie des Vignes, 92290 Chatenay-Malabry, France. E-mail:

[email protected]

2Chair “System Science and the Energy challenge”-Fondation Electricité de France (EdF) at the

Laboratoire Genie Industriel (LGI), CentraleSupélec, Université Paris-Saclay, Grande voie des

Vignes, 92290 Chatenay-Malabry, France. E-mail: [email protected].

Also: Energy Department, Politecnico di Milano, Via Ponzio, 34/3 – 20133 Milano, Italy. E-mail:

[email protected].

3EDF R&D - EIFER, Emmy-Noether-Str. 11. 76131 Karlsruhe, Germany. E-mail address:

[email protected].

4Electricité de France, R&D, 6 Quai Watier, 78400, Chatou, France. E-mail address:

[email protected].


ABSTRACT

Models for the assessment of the risk of complex engineering systems are affected by uncertainties,

due to the randomness of several phenomena involved and the incomplete knowledge about some of

the characteristics of the system. The objective of the present paper is to provide operative

guidelines to handle some conceptual and technical issues related to the treatment of uncertainty in

risk assessment for engineering practice. In particular, the following issues are addressed: (1)

quantitative modeling and representation of uncertainty coherently with the information available

on the system of interest; (2) propagation of the uncertainty from the input(s) to the output(s) of the

system model; (3) (Bayesian) updating as new information on the system becomes available; (4)

modeling and representation of dependences among the input variables and parameters of the

system model. Different approaches and methods are recommended for efficiently tackling each of

the issues (1)‒(4) above; the tools considered are derived from both classical probability theory and alternative, non-fully probabilistic uncertainty representation frameworks (e.g., possibility

theory). The recommendations drawn are supported by the results obtained in illustrative

applications from the literature.

Keywords: uncertainty representation and propagation, Bayesian update, dependences.


1 INTRODUCTION

The aim of this work is to critically address some conceptual and technical issues related to the

treatment of uncertainty in risk assessment for engineering practice, with a particular focus on non-

probabilistic approaches. The motivation is the acknowledgement that the subject of risk nowadays

plays a relevant role in the design, development, operation and management of components,

systems and structures in many types of industry(1, 2). This is particularly true for civil, nuclear,

aerospace and chemical systems that are safety-critical and must thus be designed and operated

within a quantitative risk-informed approach aimed at systematically integrating deterministic and

probabilistic analyses to obtain a rational decision on the utilization of resources for safety. In such

rationalization, explicit consideration is given to the likelihood of accidents (scenarios) and to their

potential consequences(3-12).

In order to describe the physical phenomena that may lead to system failure, mathematical (risk)

models are built that provide a representation of the real system dependent on a number of

hypotheses and parameters. The risk model provides numerical outputs (e.g., relevant safety

parameters) possibly to be compared with predefined numerical safety criteria for further guidance

to risk-informed decision making processes(4, 7, 10, 13, 14). However, in practice, not all the

characteristics of the system under analysis can be fully captured in the risk model, which leads to

uncertainty in both the values of the (input) parameters and the hypotheses supporting the model

structure(3, 5-7): such input uncertainty causes uncertainty in the model outputs and, thus, in the

corresponding risk estimates. This output uncertainty must be estimated for a realistic quantification

of the system behavior and associated risk, for use in decision making(3, 15-17).

In this paper, we focus on four relevant, conceptual and practical, issues on the treatment of

uncertainty in the risk assessment of safety-critical engineering systems:


1. The uncertainties in the model (input) parameters and hypotheses have to be first

systematically identified and classified; then, they have to be quantitatively modeled and

described by rigorous mathematical approaches coherently with the information available on

the system. The key point is to guarantee that uncertainties are taken into account in a way

that the knowledge relevant for the risk assessment process is represented in the most

faithful manner(14, 18-22). For the sake of simplicity, we will not deal here explicitly with uncertainties tainting the system model itself. While this point is the object of much research and engineering work in the computer experiments community, e.g., Refs. 23-26, in engineering practice it is more common to separate the phases of assessing the model's accuracy and propagating uncertainties from input to output variables(27). See also the

interesting and pragmatic viewpoint on this issue in Ref. 28.

2. The uncertainties in the input(s) have to be propagated onto the output(s) of the risk model

(i.e., onto the risk measures), to provide the decision makers with a clearly risk-informed

picture of the problem upon which they can confidently reason and deliberate(14, 21, 22, 29).

3. The quantitative representation of uncertainty needs to be updated, in a Bayesian

framework, when new information/evidence (e.g., data) becomes available(30-32).

4. Possible dependences existing among the input parameters and variables of the system risk

model need to be properly accounted for(33). Actually, it is widely acknowledged that

neglecting such dependences could lead to dramatic underestimations of the risk associated with the functioning of complex, safety-critical engineering systems(34-37) (e.g.,

underestimation of the magnitude of the consequences of an accident scenario).

For more than 30 years, a probabilistic framework has been embraced to address the process of risk

assessment and the treatment of the associated uncertainties. The common term used is Probabilistic

Risk Assessment (PRA, also referred to as Quantitative Risk Assessment-QRA)(4-7). However, the


purely probability-based approaches to risk and uncertainty analysis could be challenged under the

common conditions of limited or poor knowledge, when the information available does not provide

a strong basis for a specific probability assignment(29). In this view, a broader risk description is

sought where all the uncertainties are laid out ‘plain and flat’ with no additional information

inserted in the analysis in the form of assumptions and hypotheses which cannot be proven right or

wrong. This concern has sparked a number of investigations in the field of uncertainty

representation and analysis, which have led to the development of alternative (non-fully

probabilistic) frameworks for describing uncertainties in risk assessments(18-21, 28, 38, 39), e.g., fuzzy

set theory(40), fuzzy probabilities(41), random set theory(42), evidence theory(33, 43-46), possibility

theory(47-50), probability bound analysis using probability-boxes (p-boxes)(37, 51), interval analysis(52,

53) and interval probabilities(54); notice that most of these theories can be included within the general

common framework of imprecise probabilities(55-57).

In this context, the main objective of the present paper is to show in a systematic and

comprehensive framework how some conceptual and technical issues on the treatment of

uncertainty in risk assessment (items 1.‒4. above) can be effectively tackled outside the

probabilistic setting: practically speaking, different approaches and methods will be recommended

for efficiently addressing each of the issues 1.‒4. listed above; classical probability theory tools as

well as alternative, non-probabilistic ones (in particular, possibility theory) are considered. The

recommendations are ‘informed’ by (i) a critical review of the literature approaches to solving the

specific issues and (ii) the research work of the authors on addressing these issues: with respect to

the latter item (ii), some of the considerations are based on results contained in articles previously

published by the authors(58-61); other conclusions are instead drawn from analyses originally

presented in this paper (e.g., part of the work related to the issue of Bayesian updating).


The remainder of the paper is organized as follows. In Section 2, risk assessment and uncertainty

analysis are introduced. In Section 3, the four conceptual and technical issues related to uncertainty

treatment in risk analysis mentioned above are presented and critically analyzed, and the relevant

literature reviewed. In Section 4, techniques are recommended to effectively tackle such issues;

results of the application of the proposed techniques to some case studies of literature are also

shown. Finally, guidelines and recommendations are summarized in the concluding Section.

2 UNCERTAINTY IN RISK ASSESSMENT

The quantitative analyses of the phenomena occurring in many engineering systems and

applications are based on mathematical models, which are translated into numerical computer codes

for quantification(4, 7, 13). In engineering practice, the mathematical models are not capable of

capturing all the characteristics of the system under analysis(3, 15). This leads to uncertainty in both the values of the model input parameters/variables and the hypotheses supporting the model

structure. Such uncertainty propagates within the model and causes uncertainty in its outputs(16).

Formally, we consider a mathematical model fZ(Y), which depends on the input quantities Y = {Y1,

Y2, …, Yj, …, YN} and on the (possibly implicit) function fZ(·). The model is used to evaluate one or

more output quantities Z = {Z1, Z2, …, Zl, …, ZO} of the system under analysis:

Z = { Z1, Z2, …, Zl, …, ZO} = fZ(Y) = fZ(Y1, Y2, …, Yj, …, YN). (1)

By way of example, in the risk-based design of a flood protection dike the output quantity of

interest may be represented by the water level of the river in proximity of a residential area(62). In

what follows, for the sake of simplicity of illustration and without loss of generality we consider

only one (scalar) output Z, i.e., Z = {Z1, Z2, …, Zl, …, ZO} ≡ Z = fZ(Y).

The uncertainty analysis of Z requires an assessment of the uncertainties about Y and their

propagation through the model fZ(·) to produce an assessment of the uncertainties about Z.


In the context of risk assessment, uncertainty is conveniently distinguished into two different types:

‘aleatory’ (also known as ‘objective’, ‘stochastic’ or ‘irreducible’) and ‘epistemic’ (also known as

‘subjective’, ‘state-of-knowledge’ or ‘reducible’)(1-3, 7, 15, 16, 63-65). Aleatory uncertainty is related to

random variations, i.e., to the intrinsically random nature of several of the phenomena occurring

during system operation. It concerns, for instance, the occurrence of the (stochastic) events that

define various possible accident scenarios for a safety-critical system (e.g., a nuclear power plant)(6-10, 66, 67), physical quantities like the maximal water flow of a river during a year, extreme events like

earthquakes or natural processes like erosion and sedimentation(62, 68). Epistemic uncertainty is

instead associated with the lack of knowledge about some properties and conditions of the phenomena

underlying the behavior of the systems. This uncertainty manifests itself in the representation of the

system behavior, in terms of both uncertainty in the model structure fZ(·) and in the hypotheses assumed, and parameter uncertainty in the (fixed but poorly known) values of the internal parameters Y of the model(14, 16, 69). While the latter source of uncertainty (i.e., parameter uncertainty) has been widely investigated and more or less sophisticated methods have been developed to deal with it, research is still ongoing to obtain effective and agreed methods to handle the uncertainty related to the model structure(43, 45). See also

Ref. 28, which distinguishes between model inaccuracies (the differences between Z and fZ(Y)) and model uncertainties due to alternative plausible hypotheses on the phenomena involved(a). In this paper, we are concerned only with the uncertainty in the model parameters Y = {Y1, Y2, …, Yj, …, YN}: an example is represented by the (imprecise) basic event probabilities in a fault tree(6, 7, 34, 70).

(a) Notice that model uncertainty also includes the fact that the model could be too simplified and would therefore neglect some important phenomena affecting the final result. This latter type of uncertainty is sometimes identified independently from model uncertainty and is known as completeness uncertainty(6, 7).

3 SOME ISSUES ON THE PRACTICAL TREATMENT OF UNCERTAINTIES IN ENGINEERING RISK ASSESSMENT: A CRITICAL LITERATURE SURVEY


In Sections 3.1-3.4, four issues relevant to the treatment of uncertainty in engineering risk

assessment are critically discussed, on the basis of the available literature on the subject.

3.1 Quantitative modeling and representation of uncertainty coherently with the information available on the system

Probability models are typically introduced to represent aleatory uncertainty: see, for example, the

Poisson/exponential model for events randomly occurring in time (e.g., random variations of the

operating state of a valve)(67, 68), the binomial model for describing the “failures on demand” of

mechanical safety systems(7, 71) and the Gumbel model for the maximal water level of a river in a

particular year(62). Probability models constitute the basis for the statistical analysis of the data and

information available on a system, and are considered essential for assessing the aleatory

uncertainties and drawing useful insights on its random behavior(22). They also allow the probability values to be updated as new data and information on the system become available.

A probability model presumes some sort of model stability, by the construct of populations of

similar units (in the Bayesian context, formally an infinite set of exchangeable random variables)(30,

72). In this framework, the standard procedure for constructing probability models of random events

and variables is as follows: (i) observe the process of interest over a finite period of time, (ii) collect

data about the phenomenon, (iii) perform statistical analyses to identify the probability model (i.e.,

distribution) that best captures the variability in the available data and (iv) estimate the internal

parameters of the selected probability model(b)(3, 4, 6, 7, 30, 34, 73, 74). However, such 'presumed' model stability is often not fulfilled and the procedure (i)-(iv) above cannot be properly carried out(75).

(b) In a frequentist view, the available data are interpreted as observable random realizations of an underlying, repeatable probabilistic model (e.g., a probability distribution) representing the aleatory phenomenon of interest, which can be approximated with increasing precision by the analyst as the size of the available data set increases(3).

In the practical context of engineering risk assessment, the situations are often unique, because the structures, systems and components are, in the end, uniquely manufactured, operated and


maintained, so that their life realizations are not identical to any others. The collection of repeated random realizations of the related random phenomena of interest (e.g., failure occurrences) then means, in reality, the construction of fictional populations of non-existing similar situations. As a consequence, probability models in general cannot be easily defined; in some cases, they cannot be meaningfully

defined at all. For example, it makes no sense to define the (frequentist) probability of a terrorist

attack(76). In other cases, the conclusion may not be so obvious. For example, the (frequentist)

probability of an explosion scenario in a process plant may be introduced in a risk assessment,

although the underlying population of infinite similar situations is somewhat difficult to describe(21).

In addition, even when probability models with parameters can be established (justified) reflecting

aleatory uncertainty, in many cases the amount of data available is insufficient for performing a

meaningful statistical analysis on the random phenomenon of interest (e.g., because collecting this

data is too difficult or costly); in other cases, the pieces of data themselves may be highly imprecise: in such situations, the internal parameters of the selected probability model cannot be estimated with sufficient accuracy and epistemic (state-of-knowledge) uncertainty is associated with them(49, 77). A full risk description needs to assess the (epistemic) uncertainties about these quantities. This framework of two hierarchical levels of uncertainty is referred to as a "two-level" setting(14, 22, 62, 78).

In the current risk assessment practice, the epistemic uncertainty in the parameters entering the

(probability) models of random events is typically represented by (subjective) probability

distributions within a Bayesian framework: subjective probability distributions capture the degree

of belief of the analyst with respect to the values of the parameters entering the aleatory models,

conditional on his/her background knowledge(1-3, 6, 7, 18, 28, 34, 68, 70, 79-84). However, the probability-

based approach to epistemic uncertainty representation can be challenged by several practical and

conceptual arguments. First of all, representing epistemic uncertainty by probability distributions

(albeit subjective) amounts in practice to representing partial ignorance (imprecision) in the same

way as randomness (variability)(49, 77): then, the resulting distribution of the output can hardly be


properly interpreted: “the part of the resulting variance due to epistemic uncertainty (that could be

reduced) is unclear”(77). Also, the fully probabilistic framework for assessing risk and uncertainties

may be too narrow, as the subjective expert knowledge that the probability distributions are based

on could be poor and/or even based on wrong assumptions, thus leading to conclusions that can

mislead decision making. Actually, in the unique situations of risk assessment, the information

available may not represent a sufficiently strong knowledge basis for a specific probability assignment(c). Furthermore, in practical risk assessment and decision making contexts, "there are

often many stakeholders and they may not be satisfied with a probability-based assessment based on

subjective judgments made by one analysis group”(21): again, a broader risk description is sought.

“It is true that adopting the subjective probability approach, probabilities can always be assigned,

but the information basis supporting the assignments may not be reflected by the numbers

produced. One may for example assess two situations both resulting in subjective probabilities

equal to, e.g., 0.7, but in one case the assignment may be supported by substantial amount of

relevant data, the other by no data at all”(21).

To overcome the above shortcomings of the fully probabilistic representation of uncertainty in risk

assessment, alternative (non-fully probabilistic) approaches for representing and describing

epistemic uncertainties in risk assessment have been suggested(18-21, 37), e.g., fuzzy set theory(40),

fuzzy probabilities(41), random set theory(42), Dempster-Shafer theory of evidence(33, 43, 44, 46, 85),

possibility theory(47-50), interval analysis(52, 53), interval probabilities(54) and probability bound

analyses using p-boxes(37, 51).

(c) Evidently, in those situations where the information is not of the type of a "degree of belief" (in the sense of a subjective probability), one does not have the information needed to assign a specific probability: in those cases, the analyst may accept that and is led to interval probabilities, or may decide to develop such knowledge.

In probability bound analysis, intervals are used for those parameters for which, due to ignorance, the analyst is not able or willing to assign a precise probability: rather, he/she prefers to describe such parameters only 'imprecisely' by means of a range of values, all of which are coherent with the information available and reflect his/her (scarce) background knowledge on the problem; for the

other parameters, traditional probabilistic analysis is carried out. This procedure results in a couple

of extreme limiting Cumulative Distribution Functions (CDFs) (namely, a probability box or p-box)

that bound above and below the “true” CDF of the quantity of interest. However, this way of

proceeding often results in very wide intervals and the approach has been criticised for not

providing the decision-maker with specific analyst and expert judgments about epistemic

uncertainties(18). The other frameworks mentioned above allow for the incorporation and

representation of incomplete information. Their motivation is to be able to treat situations where

there is more information than that supporting just an interval assignment on an uncertain

parameter, but less than that required to assign a single specific probability distribution.

All these theories produce epistemic-based uncertainty descriptions and in particular probability

intervals. In fuzzy set theory membership functions are employed to express the degree of

compatibility of a given numerical value to a fuzzy (i.e., vague, imprecisely defined) set (or

interval). In possibility theory, uncertainty is represented by using a possibility distribution function

that quantifies the degree of possibility of the values of a given uncertain parameter, say, Y.

Formally, an application of possibility theory involves the specification of a pair (U, πY) (called a possibility space), where: (i) U is a set that contains everything that could occur in the particular universe under consideration (e.g., it contains all the values that parameter Y can assume); (ii) πY is the possibility distribution function, defined on U and such that 0 ≤ πY(y) ≤ 1 for y ∈ U and sup{πY(y): y ∈ U} = 1(47-50). Whereas in probability theory a single probability distribution function is introduced to define the probability of any interval (or event) A, in possibility theory one possibility function gives rise to a couple of probability bounds (i.e., upper and lower probabilities) for interval A, referred to as possibility and necessity measures and defined as ΠY(A) = sup{πY(y): y ∈ A} and NY(A) = 1 ‒ sup{πY(y): y ∉ A} = 1 ‒ ΠY(Ac), respectively(47-50). Finally, in evidence theory


uncertainty is described by a so-called body of evidence, i.e., a list of focal sets/elements (e.g., intervals) each of which is assigned a probability (or belief) mass (the so-called Basic Probability Assignment-BPA). Formally, an application of evidence theory involves the specification of a triple (U, S, m) (called an evidence space), where: (i) U is a set that contains everything that could occur in the particular universe under consideration (namely, the sample space or universal set); (ii) S is a countable collection of subsets of U (i.e., the set of the so-called focal elements); (iii) m is a function (i.e., the BPA) defined on subsets of U such that: m(A) > 0, if A ∈ S; m(A) = 0, if A ⊂ U and A ∉ S; and Σ{m(A): A ∈ S} = 1. For a subset A of U, m(A) is a number characterizing the probability (or degree of belief) that can be assigned to A, but without any specification of how this degree of belief might be apportioned over A: thus, it might be associated with any subset of A. In this respect, the function m induces the so-called plausibility and belief measures that bound above and below the probability of a given set A of interest: such measures are defined as Pl(A) = Σ{m(B): B ∩ A ≠ ∅} and Bel(A) = Σ{m(B): B ⊆ A}, respectively. Measure Bel(A) can be viewed as the minimum degree of belief that must be associated with A (i.e., it accounts for the evidence "supporting" A). Similarly, measure Pl(A) can be viewed as the maximum degree of belief that could be associated with A (i.e., it accounts for the evidence "not contradicting" A)(43, 44, 46, 85, 86).
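By way of illustration, the plausibility and belief measures can be computed directly from a body of evidence. The following is a minimal sketch (the focal intervals and masses are hypothetical values chosen for illustration, not taken from the paper):

```python
# Body of evidence: focal intervals with probability (belief) masses (BPA).
# The values below are hypothetical, chosen only for illustration.
focal_sets = [(0.0, 2.0), (1.0, 3.0), (2.5, 4.0)]
masses = [0.5, 0.3, 0.2]  # must sum to 1

def bel_pl(z, focal_sets, masses):
    """Bel and Pl of the set A = (-inf, z]: Bel sums the masses of the focal
    sets entirely contained in A; Pl sums those of the focal sets that
    intersect A (so that Bel(A) <= P(A) <= Pl(A))."""
    bel = sum(m for (lo, hi), m in zip(focal_sets, masses) if hi <= z)
    pl = sum(m for (lo, hi), m in zip(focal_sets, masses) if lo <= z)
    return bel, pl

for z in [1.0, 2.0, 3.0, 5.0]:
    bel, pl = bel_pl(z, focal_sets, masses)
    print(f"z = {z}: Bel = {bel:.2f} <= P(A) <= Pl = {pl:.2f}")
```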

For the sake of completeness and precision, it is worth pointing out that most of the theories

mentioned above are ‘covered’ by the general common framework of imprecise probabilities(55-57).

Actually, as highlighted above, “a key feature of imprecise probabilities is the identification of

bounds on probabilities for events of interest”(87). “The distance between the probability bounds

reflects the indeterminacy in model specifications expressed as imprecision of the models. This

imprecision is the concession for not introducing artificial model assumptions” (56). For further

reflections on this subject, the reader is referred to Refs. 72, 55, 88 and 89.


It is worth admitting that these imprecise probability-based theories have not yet been broadly

accepted for use in the risk assessment community. Till now, the development effort made on these

subjects has mostly had a mathematical orientation, and it seems fair to say that no established

framework presently exists for practical risk assessment based on these alternative theories(21).

Among the alternative approaches mentioned above, that based on possibility theory is considered by many as one of the most attractive for extending the risk assessment framework in practice.

this paper, we focus on this approach for the following reasons: (i) the power it offers for the

coherent representation of uncertainty under poor information (as testified by the large amount of

literature in the field, see above); (ii) its relative mathematical simplicity; (iii) its connection with

fuzzy sets and fuzzy logic, as conceptualized and put forward by Zadeh(90): actually, in his original

view possibility distributions were meant to provide a graded semantics to natural language

statements, which makes them particularly suitable for quantitatively translating (possibly vague,

qualitative and imprecise) expert opinions; finally, (iv) the experience of the authors themselves in

dealing and computing with possibility distributions(58-61). On the other hand, it is worth remembering that possibility theory is only one of the possible "alternatives" for the incorporation of uncertainty into an analysis (see the approaches mentioned above).

3.2 Propagation of uncertainty to the output of the system model

The aim of the uncertainty analysis is the quantification and characterization of the uncertainty in

the output Z of the mathematical model fZ(Y) = fZ(Y1, Y2, …, Yj, …, YN) that derives from

uncertainty in analysis inputs Y = {Y1, Y2, …, Yj, …, YN} (see Section 2)(16). In the light of the

considerations reported in the previous Section 3.1, this requires the joint, hierarchical propagation

of hybrid aleatory and epistemic uncertainties through the model fZ(Y)(67).

When both aleatory and epistemic uncertainties in a two-level framework are represented by

probability distributions, a two-level (or double loop) Monte Carlo (MC) simulation is usually


undertaken to accomplish this task(62, 74, 91): the result is a ‘bundle’ of aleatory probability

distributions, one for each realization of the epistemically-uncertain parameters.
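For illustration, the double-loop scheme can be sketched in a few lines of code (a minimal sketch: the Gumbel aleatory model and the uniform epistemic distribution below are hypothetical choices, and the sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

# Outer loop: sample the epistemically-uncertain parameter gamma;
# inner loop: sample the aleatory variable Y ~ Gumbel(gamma, delta).
N_epistemic, N_aleatory, delta = 50, 10000, 100.0

bundle = []  # one empirical aleatory distribution per epistemic draw
for _ in range(N_epistemic):
    gamma = rng.uniform(900.0, 1300.0)  # epistemic draw (illustrative choice)
    y = rng.gumbel(loc=gamma, scale=delta, size=N_aleatory)  # aleatory draws
    bundle.append(y)

# The 'bundle' of CDFs can be summarized, e.g., by the spread of a quantile:
p95 = [np.percentile(y, 95) for y in bundle]
print(f"95th percentile across the bundle: [{min(p95):.0f}, {max(p95):.0f}]")
```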

Alternatively, when the epistemic uncertainties are represented by possibility distributions, the

hybrid Monte Carlo (MC) and Fuzzy Interval Analysis (FIA) approach(d) is typically considered. In

the hybrid MC-FIA method the MC technique(92) is combined with the extension principle of fuzzy

set theory(93-97), within a “two-level” hierarchical setting(49, 58, 60, 98-100). This is done by: (i) FIA to

process the uncertainty described by possibility distributions: in synthesis, intervals for the

epistemically-uncertain parameters described by possibility distributions are identified by

performing a repeated, level-wise interval analysis; (ii) MC sampling of the random variables to

process aleatory uncertainty(49): given the intervals of the epistemically-uncertain parameters,

families of probability distributions for the random variables are propagated through the model.

Instead, if the epistemic uncertainties are described within the framework of evidence theory, the

Monte Carlo (MC)-based Dempster-Shafer (DS) approach employing Independent Random Sets

(IRSs)(e) is typically undertaken. In the MC-based DS-IRS method the focal sets (i.e., intervals)

representing the epistemically-uncertain parameters are randomly and independently sampled by

MC according to the corresponding probability (or belief) masses(39, 101-103).

In the present paper, particular focus is devoted to the MC-FIA approach: a detailed description of

this technique and an illustrative application are reported in Section 4.2.

3.3 Updating as new information becomes available

In this Section, we address the issue of updating the representation of the epistemically-uncertain

parameters of aleatory models (e.g., probability distributions), as new information/evidence (e.g.,

data) about the system becomes available.

(d) In the following, this method will be referred to as the "hybrid MC-FIA approach" for brevity.

(e) In the following, this method will be referred to as the "MC-based DS-IRS approach" for brevity.


The framework typically adopted is the Bayesian one ‒ based on the well-known Bayes' rule ‒

when epistemic uncertainties are represented by (subjective) probability distributions(30-32, 73, 104-107).

Alternatively, when the representation of epistemic uncertainty is non-probabilistic, other methods

of literature can be undertaken(108). In Ref. 109, a Generalized Bayes Theorem (GBT) has been

proposed within the framework of evidence theory. In Refs. 110 and 111, a modification of Bayes

theorem has been presented to account for the presence of fuzzy data and fuzzy prior Probability

Distribution Functions (PDFs). In Refs. 112 and 113, a purely possibilistic counterpart of the

classical, well-grounded probabilistic Bayes theorem has been proposed to update the possibilistic

representation of the epistemically-uncertain parameters of (aleatory) probability distributions.

Finally, Ref. 114 has introduced a hybrid probabilistic-possibilistic method that relies on the use of

Fuzzy Probability Density Functions (FPDFs), i.e., PDFs with possibilistic (fuzzy) parameters (e.g.,

fuzzy means, fuzzy standard deviations, …): it is based on the combination of: (i) Fuzzy Interval

Analysis (FIA) to process the uncertainty described by possibility distributions and (ii) repeated

Bayesian updating of the uncertainty represented by probability distributions.

In the present paper, the purely possibilistic Bayes’ theorem is taken as reference: a detailed

description of the approach and illustrative applications are reported in Section 4.3.

3.4 Dependences among input variables and parameters

Two types of dependence need to be considered in risk assessment(33). The first type relates to the

(dependent) occurrence of different (random) events (in the following, this kind of dependence will

be referred to as ‘objective’ or ‘aleatory’). An example of this objective (aleatory) dependence may

be represented by the occurrence of multiple failures which result directly from a common or shared

root cause (e.g., extreme environmental conditions, failure of a piece of hardware external to the

system, or a human error): they are termed Common Cause Failures (CCFs) and typically can

concern identical components in redundant trains of a safety system(7, 115); another example is that of


cascading failures, i.e., multiple failures initiated by the failure of one component in the system, as a

sort of chain reaction or domino effect(116, 117).

The second type refers to the dependence possibly existing between the estimates of the

epistemically-uncertain parameters of the aleatory probability models used to describe random

events/variables (in the following, this kind of dependence will be referred to as ‘state-of-

knowledge’ or ‘epistemic’). This state-of-knowledge (epistemic) dependence exists when the

epistemically-uncertain parameters of aleatory models are estimated by resorting to dependent

information sources (e.g., to the same experts/observers or to correlated data sets)(7, 34).

Considerable efforts have been made to address objective and state-of-knowledge dependences in

risk analysis. In Ref. 118, objective dependencies among random events/variables have been treated

by means of alpha factor models within the traditional framework of CCF analysis. In Refs. 33 and

119, the use of Frank copula and Pearson correlation coefficient has been proposed to describe a

wide range of objective dependences among aleatory events/variables. In Ref. 120, (fuzzy)

dependency factors are employed to model dependent events/variables. In Ref. 121 the rank

correlation method has been proposed to characterize dependencies between epistemically uncertain

variables. In Refs. 7 and 34, total (perfect) state-of-knowledge dependence among the failure rates

of mechanical components has been modeled by imposing maximal correlation among the

corresponding (subjective) probability distributions. In Refs. 122 and 123, state-of-knowledge

dependences among the probabilities of the Basic Events (BEs) of a Fault Tree (FT) have been

described by traditional correlation coefficients and propagated by the method of moments. In Ref.

124, statistical epistemic correlations have been modeled by resorting to the Nataf transformation

within a traditional Monte Carlo Simulation (MCS) framework. In Refs. 59 and 91, the Dependency

Bound Convolution (DBC) approach(33, 125) and the Distribution Envelope Determination (DEnv)

method(126-129) have been adopted to account for all kinds of (possibly unknown) objective and

epistemic dependences among correlated events/variables.


In the present paper, particular focus is devoted to the DEnv method: a detailed description of the

technique and an illustrative application to Fault Tree Analysis (FTA) are reported in Section 4.4.

4 RECOMMENDATIONS FOR TACKLING THE CONCEPTUAL AND TECHNICAL ISSUES ON UNCERTAINTY IN ENGINEERING RISK ASSESSMENT

On the basis of the considerations made in Section 3, techniques are here recommended for tackling

the four issues presented before. Guidelines on the recommended use of the techniques in practice

are provided, with illustrative applications to simple risk assessment models.

4.1 Quantitative modeling and representation of uncertainty coherently with the information available on the system

In all generality, we consider an uncertain variable Y, whose (aleatory) uncertainty is described by a probability model, e.g., a PDF pY(y|θ), where θ = {θ1, θ2, …, θm, …, θP} is the vector of the corresponding internal parameters (see Section 3.1). In a two-level framework, the parameters θ are themselves affected by epistemic uncertainty(62, 78). In the present work, we recommend describing these epistemic uncertainties by a (generally joint) possibility distribution πθ(θ). A random variable Y with possibilistic parameters θ is referred to as a Fuzzy Random Variable (FRV) in the literature(49). Details about FRVs are given in the following Section 4.1.1; then, the benefits of using a possibilistic description of epistemic uncertainty (instead of the classical, probabilistic one) are demonstrated by means of an illustrative example in Section 4.1.2.

4.1.1 Recommended approach: Fuzzy Random Variables (FRVs)

By way of example, we consider the uncertain variable Y (e.g., the maximal water level of a river in a given year) described by a Gumbel probability model, i.e., Y ~ pY(y|θ) = pY(y|γ, δ) = (1/δ)·exp[−(y − γ)/δ − exp(−(y − γ)/δ)]. We suppose that parameter δ = θ2 (i.e., the scale parameter) is known with absolute precision, i.e., it is a fixed point value (δ = θ2 = 100), whereas parameter γ = θ1 (i.e., the location parameter) is epistemically-uncertain.(f) We consider, for the sake of the example, that the only information available on the value of the parameter γ = θ1 is that it ranges in the interval [aγ, bγ] = [900, 1300], with most likely value (i.e., mode) cγ = 1100. When the background knowledge on a parameter is partial, as in the present case, the classical procedure for describing its uncertainty is to identify the corresponding maximum entropy PDF. However, this way of proceeding does not eliminate the fact that the information available on γ = θ1 is not sufficient for assigning a single specific PDF to describe the epistemic uncertainty in the parameter. In fact, such scarce information is compatible and consistent with a variety of PDFs (e.g., truncated normal, lognormal, triangular, …) that obviously comprise also the maximum entropy one.

Alternatively, one of the ways of representing the uncertainty on γ = θ1 is offered by the framework of possibility theory(47-50). For the simple numerical example considered above, a triangular possibility distribution πγ(γ) with core (i.e., vertex) cγ = 1100 and support (i.e., base) [aγ, bγ] = [900, 1300] could be used (Figure 1, left)(50): indeed, it can be demonstrated that such a possibility distribution encodes the family of all the probability distributions with mode cγ = 1100 and support [aγ, bγ] = [900, 1300](43, 47, 50) (obviously, this does not mean that the triangular possibility distribution is the only one able to describe such a probability family).

Actually, for a given set S (i.e., a given interval of values of parameter γ) the possibility function πγ(γ) gives rise to probability bounds (i.e., upper and lower probabilities), referred to as possibility and necessity measures {Πγ(S), Nγ(S)} and defined as Πγ(S) = sup{πγ(γ): γ ∈ S} and Nγ(S) = 1 − Πγ(S̄) = 1 − sup{πγ(γ): γ ∉ S}, respectively, where S̄ is the set (interval) complementary to S on the axis of real numbers(43, 47, 50). It can be demonstrated that the probability Pγ(S) of the interval S is bounded above and below by such necessity and possibility values, i.e., Nγ(S) ≤ Pγ(S) ≤ Πγ(S): see Ref. 130 for a formal proof of this statement. Also, from the definitions of {Nγ(S), Πγ(S)} and referring to the particular set S = (‒∞, γ], we can deduce the associated cumulative necessity and possibility measures Nγ((‒∞, γ]) and Πγ((‒∞, γ]), respectively (Figure 1, right). These measures can be interpreted as the limiting lower and upper bounds F̲γ(γ) = Nγ((‒∞, γ]) and F̄γ(γ) = Πγ((‒∞, γ]) to the "true" CDF Fγ(γ) = P((‒∞, γ]) that we can build in coherence with the scarce information available on γ, i.e., only the mode and support. In other words, we can state that the triangular possibility distribution πγ(γ) of Figure 1, left, produces a couple of CDFs (Figure 1, right) that bound the family of all the possible CDFs with mode cγ = 1100 and support [aγ, bγ] = [900, 1300] (see Refs. 43, 47 and 50 for a formal proof).

(f) Obviously, in real risk assessment studies, a situation where one parameter of a given (aleatory) probability model is perfectly known and the other one is affected by significant epistemic uncertainty is quite unlikely. However, notice that this example is introduced here only for the purpose of clearly and simply illustrating the basics of possibility theory.

In order to provide an additional practical interpretation of the possibility distribution πγ(γ) of γ = θ1, we can define its so-called α-cut sets (intervals) Aαγ = {γ: πγ(γ) ≥ α}, with 0 ≤ α ≤ 1. For example, A0.5γ = [1000, 1200] is the set (interval) of γ values for which the possibility function is greater than or equal to 0.5 (dashed segment in Figure 1, left). In the light of the discussion above, the α-cut set Aαγ of parameter γ can be interpreted as the (1 – α)·100% Confidence Interval (CI) for γ, i.e., the interval such that P[γ ∈ Aαγ] ≥ 1 – α: actually, Nγ(Aαγ) ≤ P[γ ∈ Aαγ] ≤ Πγ(Aαγ), which becomes, by definition of the possibility and necessity measures, 1 − sup{πγ(γ): γ ∉ Aαγ} ≤ P[γ ∈ Aαγ] ≤ sup{πγ(γ): γ ∈ Aαγ}, i.e., 1 – α ≤ P[γ ∈ Aαγ] ≤ 1(47) (Figure 1, left, shows three CIs for α = 0, 0.5 and 1).
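These notions can be made concrete with a minimal numerical sketch (illustrative code hard-coding the triangular possibility distribution of Figure 1, left; the grid-based computation of the possibility and necessity measures is a simple approximation):

```python
import numpy as np

a, c, b = 900.0, 1100.0, 1300.0  # support [a, b] and core c of pi(gamma)

def pi_gamma(g):
    """Triangular possibility degree of a value g (1 at the core,
    0 outside the support)."""
    g = np.asarray(g, dtype=float)
    return np.clip(np.minimum((g - a) / (c - a), (b - g) / (b - c)), 0.0, 1.0)

def alpha_cut(alpha):
    """Interval {g : pi(g) >= alpha}; closed-form for a triangle."""
    return a + alpha * (c - a), b - alpha * (b - c)

def poss_nec(lo, hi, grid=np.linspace(a - 50.0, b + 50.0, 20001)):
    """Possibility and necessity of the interval S = [lo, hi]:
    Pi(S) = sup pi on S; N(S) = 1 - sup pi outside S."""
    inside = (grid >= lo) & (grid <= hi)
    Pi = pi_gamma(grid[inside]).max() if inside.any() else 0.0
    N = 1.0 - (pi_gamma(grid[~inside]).max() if (~inside).any() else 0.0)
    return Pi, N

print(alpha_cut(0.5))        # -> (1000.0, 1200.0): the (1 - 0.5)*100% CI
print(poss_nec(1000, 1200))  # -> (1.0, ~0.5): Pi(S) = 1, N(S) >= 1 - alpha
```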

In general, other techniques exist for constructing possibility distributions: for example, in Refs. 58 and 60 methods based on the Chebyshev inequality are used to compute possibility distributions from estimated means and variances; in addition, in Ref. 47 indications are provided to build possibility functions for uncertain parameters with known supports and means and/or quantiles.
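As an illustration of the Chebyshev-based idea mentioned above, a sketch follows; it assumes that the two-sided Chebyshev inequality P(|Y − μ| ≥ kσ) ≤ 1/k² is used to assign the possibility degree of values at distance kσ from the mean (the moment estimates below are hypothetical):

```python
def chebyshev_possibility(y, mu, sigma):
    """Possibility degree built from the two-sided Chebyshev inequality:
    a value at distance k*sigma from the mean receives possibility
    min(1, 1/k^2), so that each interval [mu - k*sigma, mu + k*sigma]
    gets necessity at least 1 - 1/k^2."""
    k = abs(y - mu) / sigma
    return 1.0 if k <= 1.0 else 1.0 / k**2

mu, sigma = 1100.0, 80.0  # hypothetical moment estimates
for y in [1100.0, 1180.0, 1260.0, 1420.0]:
    print(y, round(chebyshev_possibility(y, mu, sigma), 4))
```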

Figure 1. Left: triangular possibility distribution πγ(γ) of parameter γ, with the α-cut sets A0γ, A0.5γ and A1γ. Right: bounding lower and upper CDFs for γ, F̲γ(γ) = Nγ((‒∞, γ]) and F̄γ(γ) = Πγ((‒∞, γ]), respectively

As a result of the possibilistic description of the epistemically-uncertain parameter γ, for each possibility (resp., confidence) level α (resp., 1 – α) in [0, 1], a family of PDFs, namely {pY(y|γ, δ)}α = {pY(y|γ, δ): γ ∈ Aαγ, δ = 100}, can be generated also for the random variable Y, by letting parameter γ range within the corresponding α-cut set Aαγ. By way of example, Figure 2, left, shows four PDFs belonging to the family {pY(y|γ, δ)}α for α = 0 (solid lines) and α = 0.5 (dashed lines). In the same way, a bundle of CDFs for Y, namely {FY(y|γ, δ)}α = {FY(y|γ, δ): γ ∈ Aαγ, δ = 100}, can be constructed, which is bounded above and below by F̄αY(y) = sup{FY(y|γ, δ = 100): γ ∈ Aαγ} and F̲αY(y) = inf{FY(y|γ, δ = 100): γ ∈ Aαγ}, respectively. Since πγ(γ) can be interpreted as a set of nested CIs for parameter γ (see above), it can be argued that the α-cuts of πγ(γ) induce also a set of nested pairs of CDFs {(F̲αY(y), F̄αY(y)): 0 ≤ α ≤ 1} which bound the "true" CDF FY(y) with confidence larger than or equal to (1 – α), i.e., P[F̲αY(y) ≤ FY(y) ≤ F̄αY(y)] ≥ 1 − α, with 0 ≤ α ≤ 1(49, 94, 95). Using the jargon of evidence theory(44-46), F̄αY(y) and F̲αY(y) are often referred to as the plausibility and belief functions (of level α) of the set S = (‒∞, y], i.e.,


F̄αY(y) = PlαY(S) and F̲αY(y) = BelαY(S), respectively (see Section 3.1). For illustration purposes, Figure 2, right, shows PlαY(S) and BelαY(S), S = (‒∞, y], built in correspondence of the α-cuts of level α = 0 (solid lines), 0.5 (dashed lines) and 1 (dot-dashed line) of πγ(γ) (Figure 1, left). Finally, the set of nested pairs of CDFs {(BelαY(S), PlαY(S)): 0 ≤ α ≤ 1}, S = (‒∞, y], can be synthesized into a single pair of plausibility and belief functions as PlY(S) = ∫01 PlαY(S) dα and BelY(S) = ∫01 BelαY(S) dα, respectively (dotted lines in Figure 2, right): in other words, PlY(S) and BelY(S) are obtained by averaging the nested plausibility and belief functions generated at the different possibility levels α ∈ [0, 1]. The plausibility and belief functions PlY(S) and BelY(S) are shown to represent the "best bounds" for the "true" CDF FY(y) of Y(49, 94, 95, 131).
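A minimal sketch of this construction for the Gumbel example follows (illustrative code, not the authors' implementation; it exploits the fact that the Gumbel CDF is monotonically decreasing in the location parameter γ, so that the level-wise bounds over an α-cut are attained at its endpoints):

```python
import numpy as np

a, c, b, delta = 900.0, 1100.0, 1300.0, 100.0

def gumbel_cdf(y, gamma):
    return np.exp(-np.exp(-(y - gamma) / delta))

def bel_pl_level(y, alpha):
    """Level-alpha belief/plausibility of S = (-inf, y]: the Gumbel CDF
    decreases in gamma, so the envelope over the alpha-cut of pi(gamma)
    is attained at the cut endpoints."""
    g_lo, g_hi = a + alpha * (c - a), b - alpha * (b - c)
    return gumbel_cdf(y, g_hi), gumbel_cdf(y, g_lo)  # (Bel, Pl)

def bel_pl(y, n_levels=21):
    """Single synthesized pair: average of the nested level-wise
    functions over alpha in [0, 1]."""
    levels = np.linspace(0.0, 1.0, n_levels)
    pairs = np.array([bel_pl_level(y, al) for al in levels])
    return pairs[:, 0].mean(), pairs[:, 1].mean()

y = 1200.0
print(bel_pl_level(y, 0.0))  # widest bounds (support of pi(gamma))
print(bel_pl(y))             # averaged Bel(S) <= F(y) <= Pl(S)
```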

Figure 2. Example of a Fuzzy Random Variable (FRV). Left: PDFs pY(y|γ, δ = 100) belonging to the families of level α = 0 and α = 0.5. Right: plausibility and belief functions Pl(S) and Bel(S), S = (‒∞, y], for α = 0, 0.5 and 1

4.1.2 Illustrative example

The benefits coming from the use of the hybrid probabilistic and possibilistic FRV approach

illustrated in the previous Section 4.1.1 are here shown by comparison with a traditional, purely

probabilistic two-level framework, where the epistemically-uncertain parameter γ of the Gumbel

probability model is itself described by a single PDF pγ(γ); notice that the results reported

hereafter are presented for the first time in this paper. In order to perform a fair comparison between


the two approaches, a sample from the PDF pγ(γ) here employed is obtained by applying the principle of insufficient reason(132) to the possibility distribution πγ(γ) of Figure 1, left. The procedure for obtaining such a sample is(132, 133): (i) draw a random realization α* for α in [0, 1) and consider the corresponding α-cut Aα*γ = [γ̲α*, γ̄α*] = {γ: πγ(γ) ≥ α*}; (ii) sample a random realization γ* for γ from a uniform probability distribution on Aα*γ. Other techniques for the transformation of possibility distributions into PDFs can be found in Refs. 60, 132, 134 and 135.
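A minimal sketch of this two-step sampling procedure (illustrative code, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
a, c, b = 900.0, 1100.0, 1300.0  # support and mode of pi(gamma), Figure 1

def sample_gamma(n):
    """(i) draw alpha* uniformly in [0, 1); (ii) draw gamma uniformly on
    the alpha*-cut of the triangular possibility distribution."""
    alpha = rng.uniform(0.0, 1.0, size=n)  # step (i)
    lo = a + alpha * (c - a)               # alpha-cut endpoints
    hi = b - alpha * (b - c)
    return rng.uniform(lo, hi)             # step (ii)

g = sample_gamma(100000)
print(g.mean(), g.min(), g.max())  # concentrated around the mode, in [900, 1300]
```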

In order to highlight the effects of different representations of epistemic uncertainty, we analyze the 95-th quantile Y0.95 of the uncertain variable Y. Figure 3 shows the corresponding bounding CDFs F̲Y0.95(y0.95) and F̄Y0.95(y0.95) (y0.95 ∈ ℝ) produced using a possibilistic (solid lines) representation of the epistemically-uncertain parameter γ, together with the single CDF FY0.95(y0.95) (y0.95 ∈ ℝ) obtained by resorting to a probabilistic description of parameter γ (dashed line). In this respect, it is important to remember that in a two-level hierarchical framework of uncertainty representation, the quantiles of an uncertain variable Y are not fixed point values, but rather epistemically-uncertain variables. In particular, if epistemic uncertainty is represented by probability distributions, then the quantiles of Y are correspondingly described by probability distributions. In the same way, when epistemic uncertainty is represented by possibility distributions, then the quantiles are described by possibility distributions(58). In this latter case, a couple of bounding CDFs (i.e., of cumulative possibility and necessity measures) can be associated to the corresponding quantiles using the formulas introduced in Section 4.1.1. The advantage of using a non-probabilistic representation of epistemic uncertainty lies in the possibility of providing bounds on the estimates of the 95-th quantile (in the light of the scarce information available on the variable). For example, let us refer to the quantitative indicator P[Y0.95 > y0.95*], i.e., the probability that Y0.95 exceeds a given safety threshold y0.95* (= 1400 in this case): the point estimate provided by the purely


probabilistic approach is 0.0285, whereas the interval produced by the hybrid FRV approach is [0, 0.3225]. If a faithful description of the (scarce and imprecise) information available on the variable Y leads to P[Y0.95 > y0.95*] = [0, 0.3225], then the analyst cannot exclude that in reality such an exceedance probability is equal to the highest "possible" value obtained (i.e., 0.3225)(g). In this light, it can be seen that the upper bound 0.3225 of the interval [0, 0.3225] (representing a conservative assignment of the exceedance probability "informed" by a faithful representation of the imprecision related to γ) is about 11 times larger than the corresponding point value generated by the purely probabilistic method(h). This means that if we base our analysis on an inappropriately precise assumption (i.e., on the selection of a specific probability distribution even in the presence of scarce information), we may significantly underestimate risk (for example, the probability of a given accident and/or the severity of the corresponding consequences). Instead, using families of probability distributions (within the framework, e.g., of possibility theory) implicitly introduces a sort of "informed conservatism" into the analysis, which should lead the decision maker to be "safely"-coherent with his/her limited and/or imprecise background knowledge.(i)
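For completeness, a sketch of how a possibility distribution for the quantile itself can be obtained follows. It assumes the standard Gumbel quantile form y_p = γ − δ·ln(−ln p), which is monotonically increasing in γ, so that by the extension principle the α-cuts of πγ(γ) map directly onto α-cuts for Y0.95; the printed intervals are purely illustrative and are not meant to reproduce the numerical results quoted above:

```python
import numpy as np

a, c, b, delta, p = 900.0, 1100.0, 1300.0, 100.0, 0.95

def quantile(gamma):
    """Gumbel p-quantile (standard parametrization, assumed here):
    y_p = gamma - delta * ln(-ln p); monotonically increasing in gamma."""
    return gamma - delta * np.log(-np.log(p))

def quantile_alpha_cut(alpha):
    """Extension principle through a monotone map: the alpha-cut of the
    quantile's possibility distribution is the image of the alpha-cut
    of pi(gamma)."""
    g_lo, g_hi = a + alpha * (c - a), b - alpha * (b - c)
    return quantile(g_lo), quantile(g_hi)

for alpha in (0.0, 0.5, 1.0):
    print(alpha, quantile_alpha_cut(alpha))
```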

On the other hand, it has to be acknowledged that: (i) a poorly done possibilistic analysis can be just as misleading as a poorly done probabilistic analysis; (ii) even if possibility theory constitutes a "rigorous" tool for transforming the available information into an uncertainty representation, such an uncertainty description remains, to a certain degree, subjective: for example, "it requires additional judgments in the specification of the possibility function"(21); (iii) as argued in Ref. 18, even if a possibilistic analysis "meets to a large extent the ambition of objectivity by trying to reduce the amount of arbitrary assumptions made in the analyses, it does not provide the decision maker with specific scientific judgments about epistemic uncertainties from qualified analysts and experts". Actually, "expressing epistemic uncertainties means a degree of subjectivity, and decision making normally needs to be supported by qualified judgments"(18). In this respect, sometimes the bounds provided by a possibilistic analysis may be considered rather "non-informative and the decision maker would ask for a more refined assessment: in such cases, it is expected that the analysts are able to give some qualified judgments, so that a more precise assessment can be obtained"(18).

(g) A remark is in order with respect to this sentence and to the existence "in reality" of a "true value" of the probability. For subjective probabilities such "true" values have no meaning. Instead, "true" values can obviously be defined for frequentist probabilities. In the present paper we adopt a two-level, hierarchical framework to model uncertainty (also called the probability-of-frequency approach): thus, a true value of the exceedance probability can be defined.

(h) In this paper, a given (probability or consequence) estimate is considered, in a broad sense, more "conservative" than another one if it implies and leads to a higher level of risk. However, since a discussion of the concept of "conservatism" goes beyond the scope of the present paper, further details are not given here for brevity: the interested reader is referred to, e.g., Ref. 136 and references therein.

(i) In other words, in line with Ref. 18, we may say that such non-probabilistic approaches can be considered "objective" tools for transforming the available information into an uncertainty representation, in the sense that they try to avoid the use of "subjective judgements and assumptions" in the analysis.

[Figure 3: cumulative probability (y-axis) versus the 95th percentile $Y^{0.95}$ of $Y$ (x-axis); curves for the purely probabilistic approach and the FRV approach.]

Figure 3. Epistemic distributions of the 95-th quantile $Y^{0.95}$ of $Y \sim \text{Gum}(\gamma, \delta)$ obtained through a possibilistic (solid lines) and probabilistic (dashed line) representation of parameter $\gamma$

4.2 Propagation of uncertainty to the output of the system model

With reference to the model function $Z = f_Z(\mathbf{Y}) = f_Z(Y_1, Y_2, \ldots, Y_j, \ldots, Y_N)$ (1), we consider $N$ input variables $\mathbf{Y} = \{Y_1, Y_2, \ldots, Y_j, \ldots, Y_N\}$ affected by hybrid aleatory and epistemic uncertainty and represented by PDFs $\{p_{Y_j}(y_j|\boldsymbol{\theta}_j): j = 1, 2, \ldots, N\}$ with epistemically-uncertain parameters $\{\boldsymbol{\theta}_j: j = 1, 2, \ldots, N\}$ described by possibility distributions $\pi^{\boldsymbol{\theta}_j}(\boldsymbol{\theta}_j)$. In order to jointly propagate these mixed uncertainty representations, we recommend the hybrid Monte Carlo (MC) and Fuzzy Interval Analysis (FIA) approach detailed in Section 4.2.1; the effectiveness of the proposed approach is assessed by comparison with other uncertainty propagation techniques in Section 4.2.2.

4.2.1 Recommended approach: Monte Carlo (MC) and Fuzzy Interval Analysis (FIA)


The hybrid MC and FIA approach combines the MC technique(92) with the extension principle of fuzzy set theory(93-97, 135) by means of the following main steps(49, 58, 60, 98-100):

1. set $\alpha = 0$ (outer loop, processing epistemic uncertainty by FIA);
2. select the $\alpha$-cut sets $A_{\alpha}^{\boldsymbol{\theta}_j}$ of the $N$ (joint) possibility distributions $\pi^{\boldsymbol{\theta}_j}(\boldsymbol{\theta}_j)$ of the epistemically-uncertain parameter vectors $\{\boldsymbol{\theta}_j: j = 1, 2, \ldots, N\}$;
3. letting the parameters $\boldsymbol{\theta}_j$ range within the corresponding $\alpha$-cuts $A_{\alpha}^{\boldsymbol{\theta}_j}$, 'generate' the families of CDFs $\{F^{Y_j}(y_j|\boldsymbol{\theta}_j)\}_{\alpha} = \{F^{Y_j}(y_j|\boldsymbol{\theta}_j): \boldsymbol{\theta}_j \in A_{\alpha}^{\boldsymbol{\theta}_j}\}$ for the "hybrid" input variables $Y_j$, $j = 1, \ldots, N$;
4. propagate the (input) families of CDFs $\{F^{Y_j}(y_j|\boldsymbol{\theta}_j)\}_{\alpha}$ through the mathematical model $Z = f_Z(Y_1, Y_2, \ldots, Y_N)$ by standard MC sampling to obtain the (output) family of CDFs $\{F^{Z}(z)\}_{\alpha}$ for $Z$ (inner loop, processing aleatory uncertainty by standard MC simulation);
5. identify the limiting CDFs (that bound above and below the "true" CDF $F^{Z}(z)$ with confidence $(1-\alpha)\cdot 100\%$) as those that 'envelop' the family $\{F^{Z}(z)\}_{\alpha}$, i.e., $\overline{F}_{\alpha}^{Z}(z) = Pl_{\alpha}^{Z}((-\infty, z]) = \max\{F^{Z}(z)\}_{\alpha}$ and $\underline{F}_{\alpha}^{Z}(z) = Bel_{\alpha}^{Z}((-\infty, z]) = \min\{F^{Z}(z)\}_{\alpha}$, respectively;
6. if $\alpha < 1$, then set $\alpha = \alpha + \Delta\alpha$ (e.g., $\Delta\alpha = 0.05$ in this paper) and return to step 2. above; otherwise, stop the algorithm.

The output of the algorithm is thus represented by nested sets of plausibility and belief functions $\{(Bel_{\alpha}^{Z}(A), Pl_{\alpha}^{Z}(A)): 0 \le \alpha \le 1\}$, $A = (-\infty, z]$: these sets of functions can then be synthesized into a single pair of plausibility and belief functions, $Pl^{Z}(A)$ and $Bel^{Z}(A)$, $A = (-\infty, z]$, as described in Section 4.1.1.
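To make the algorithm concrete, the following minimal Python sketch (our own illustration, not code from the cited references) implements the two nested loops for a toy model $Z = f_Z(Y_1, Y_2) = Y_1 + Y_2$, assuming each $Y_j$ is normal with unit standard deviation and an epistemically-uncertain mean $\theta_j$ described by a triangular possibility distribution; since this toy model is monotone in the parameters, the interval analysis on each $\alpha$-cut reduces to evaluating the corners of the parameter hyper-rectangle (in general, an optimization over the $\alpha$-cut is required):

```python
import numpy as np

def alpha_cut(a, c, b, alpha):
    """Alpha-cut [lo, hi] of a triangular possibility distribution
    with support [a, b] and core (most likely value) c."""
    return a + alpha * (c - a), b - alpha * (b - c)

def model(y1, y2):
    """Hypothetical model function f_Z (illustrative only)."""
    return y1 + y2

rng = np.random.default_rng(42)
n_mc = 10_000
z_grid = np.linspace(-5.0, 15.0, 201)
params = [(-1.0, 0.0, 1.0), (3.0, 4.0, 6.0)]    # triangular (a, c, b) per input

# Common standard-normal seeds: the inner (aleatory) loop reuses the same
# samples at every alpha level, so the nesting of the output CDF families
# reflects only the nesting of the alpha-cuts.
u = rng.standard_normal((n_mc, 2))

pl, bel = {}, {}                                # upper/lower CDFs per alpha
for alpha in np.arange(0.0, 1.0 + 1e-9, 0.05):  # outer (epistemic) loop, FIA
    cuts = [alpha_cut(a, c, b, alpha) for (a, c, b) in params]
    # Corner enumeration stands in for the interval analysis on the alpha-cuts
    # (valid here because the toy model is monotone in the parameters).
    corners = [(t1, t2) for t1 in cuts[0] for t2 in cuts[1]]
    cdfs = []
    for t1, t2 in corners:
        z = model(t1 + u[:, 0], t2 + u[:, 1])   # Y_j ~ Normal(theta_j, 1)
        cdfs.append((z[:, None] <= z_grid[None, :]).mean(axis=0))
    pl[round(alpha, 2)] = np.max(cdfs, axis=0)  # Pl_alpha((-inf, z])
    bel[round(alpha, 2)] = np.min(cdfs, axis=0) # Bel_alpha((-inf, z])
```

The nested families pl[α] and bel[α] can then be synthesized into a single pair of plausibility and belief functions as described in Section 4.1.1.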

It is worth noting that performing an interval analysis on α-cuts assumes total dependence among

the epistemically-uncertain parameters. Actually, this procedure implies strong dependence among

the information sources (e.g., the experts or observers) that supply the input possibility


distributions, because the same confidence level (1 – α ) is chosen to build the α-cuts for all the

epistemically-uncertain parameters(48) (see Section 4.4 for further discussions on dependence).

4.2.2 Illustrative examples

In what follows, we report some of the results obtained in a previous work by some of the

authors(58), in which the hybrid MC-FIA approach (Section 4.2.1) is used to propagate mixed

probabilistic and possibilistic uncertainties through mathematical models. The effectiveness of the

proposed technique is shown by means of a comparison with the MC-based Dempster-Shafer (DS)

approach employing Independent Random Sets (IRSs), where the possibility distributions

describing the epistemically-uncertain parameters are discretized into focal sets that are randomly

and independently sampled by MC (see Section 3.2). Such discretization requires the following steps(48): (i) determine $N_e$ (nested) focal sets for the generic possibilistic parameter $\theta$ as the $\alpha$-cuts $A_{\alpha_t} = [\underline{\theta}_{\alpha_t}, \overline{\theta}_{\alpha_t}]$, $t = 1, 2, \ldots, N_e$, with $1 = \alpha_1 > \alpha_2 > \ldots > \alpha_{N_e - 1} > \alpha_{N_e} = 0$; (ii) build the probability mass distribution of the focal sets by assigning $m_{\alpha_t} = \Delta\alpha_t = \alpha_t - \alpha_{t+1}$. In this paper, $N_e = 21$ and $m_{\alpha_t} = 0.05$ are chosen for the sake of comparison with the hybrid MC-FIA approach (see Section 4.2.1).(58, 60)
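For comparison with the previous sketch, the MC-based DS-IRS sampling of the same triangular possibility distributions could be sketched as follows (again our own illustration, under the discretization just described): the focal set of each parameter is drawn independently of the others, a point that becomes relevant in the comparison below.

```python
import numpy as np

def focal_sets(a, c, b, n_e=21):
    """Discretize a triangular possibility distribution (a, c, b) into n_e
    nested focal sets (its alpha-cuts at levels 1, 0.95, ..., 0), each
    carrying equal probability mass (~0.05, cf. Section 4.2.2)."""
    levels = np.linspace(1.0, 0.0, n_e)
    return [(a + al * (c - a), b - al * (b - c)) for al in levels]

rng = np.random.default_rng(0)
sets_1 = focal_sets(-1.0, 0.0, 1.0)   # illustrative parameters, as above
sets_2 = focal_sets(3.0, 4.0, 6.0)

# Independent Random Sets: the focal set of each parameter is sampled
# independently (contrast with MC-FIA, which pairs cuts of the same level).
i1 = rng.integers(0, len(sets_1))
i2 = rng.integers(0, len(sets_2))
interval_1, interval_2 = sets_1[i1], sets_2[i2]
```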

For the exemplification, we consider the mathematical model $Z_1 = f_{Z_1}(Y_1, Y_2, Y_3)$ already presented in Ref. 58: the uncertain inputs $Y_1$, $Y_2$ and $Y_3$ are described by lognormal probability distributions with (triangular) possibilistic parameters. Figure 4 shows the plausibility and belief functions, $Pl^{Z_1}((-\infty, z_1]) = \overline{F}^{Z_1}(z_1)$ and $Bel^{Z_1}((-\infty, z_1]) = \underline{F}^{Z_1}(z_1)$, of the model output $Z_1$ produced by the

hybrid MC-FIA (solid lines) and MC-based DS-IRS (dashed lines) approaches(58). The results are

very similar, i.e., in the present case, the effect of the different uncertainty propagation method is

not evident(58, 60). Then, in order to highlight the effects of the different uncertainty propagation

approaches, the upper and lower CDFs, ( )95.01

95.01 zF Z and ( )95.0

1

95.01 zF Z , respectively, of the 95-th

Page 27: A critical discussion and practical recommendations on ... · A critical discussion and practical recommendations on some ... In this paper, we focus on four relevant, conceptual

27

quantile 95.01Z of Z1 are further analyzed. Figure 5, left, shows the bounding CDFs for 95.0

1Z

produced by the hybrid MC-FIA (solid lines) and the MC-based DS-IRS (dashed lines) approach;

for illustration purposes, Figure 5, right, shows the possibility distributions ( )95.01

95.01 zZπ that

corresponds to the CDFs by means of the relations ( )95.01

95.01 zF Z = ( ){ }95.0

11

95.01

95.01

95.01

sup zZ

zZ

π≤

and

( )95.01

95.01 zF Z = ( ){ }95.0

11

95.01

95.01

95.01

sup1 zZ

zZ

π>

− (48).

[Figure 4: cumulative probability (y-axis) versus model output $Z_1$ (x-axis); legend: MC-based DS-IRS $Bel^{Z_1}((-\infty, z_1])$ and $Pl^{Z_1}((-\infty, z_1])$, hybrid MC-FIA $Bel^{Z_1}((-\infty, z_1])$ and $Pl^{Z_1}((-\infty, z_1])$.]

Figure 4. Plausibility and belief functions $Pl^{Z_1}((-\infty, z_1])$ and $Bel^{Z_1}((-\infty, z_1])$ of model output $Z_1$ obtained by the hybrid MC-FIA (solid lines) and MC-based DS-IRS (dashed lines) approaches(58)

[Figure 5: left panel, cumulative probability (y-axis) versus the 95th percentile $Z_1^{0.95}$ of $Z_1$ (x-axis), with the values 548.2 and 1031.1 marked at cumulative probabilities 0.05 and 0.95; right panel, possibility value $\pi^{Z_1^{0.95}}(z_1^{0.95})$ (y-axis) versus $Z_1^{0.95}$ (x-axis), with the $\alpha$-cut $A_{0.05}^{Z_1^{0.95}}$ spanning [548.2, 1031.1]; legend: hybrid MC-FIA and MC-based DS-IRS.]

Figure 5. Left: upper and lower CDFs $\overline{F}^{Z_1^{0.95}}(z_1^{0.95})$ and $\underline{F}^{Z_1^{0.95}}(z_1^{0.95})$ of $Z_1^{0.95}$ obtained by the hybrid MC-FIA (solid lines) and MC-based DS-IRS (dashed lines) approaches. Right: possibility distributions $\pi^{Z_1^{0.95}}(z_1^{0.95})$ corresponding to $\overline{F}^{Z_1^{0.95}}(z_1^{0.95})$ and $\underline{F}^{Z_1^{0.95}}(z_1^{0.95})$ (58)


It can be seen that the hybrid MC-FIA method produces a larger gap between the upper and lower CDFs $\overline{F}^{Z_1^{0.95}}(z_1^{0.95})$ and $\underline{F}^{Z_1^{0.95}}(z_1^{0.95})$ than the MC-based DS-IRS approach in the regions where the cumulative probabilities are close to "extreme" values, i.e., where $\underline{F}^{Z_1^{0.95}}(z_1^{0.95}) \approx 0$ and $\overline{F}^{Z_1^{0.95}}(z_1^{0.95}) \approx 1$. This is explained as follows. Notice that the values of $Z_1^{0.95}$ for which $\underline{F}^{Z_1^{0.95}}(z_1^{0.95}) \approx 0$ and $\overline{F}^{Z_1^{0.95}}(z_1^{0.95}) \approx 1$ correspond to the lower and upper bounds, respectively, of the $\alpha$-cut of level $\alpha \approx 0$ of the possibility distribution $\pi^{Z_1^{0.95}}(z_1^{0.95})$ (by way of example, the $\alpha$-cut $A_{0.05}^{Z_1^{0.95}}$ of level $\alpha = 0.05$ produced by the hybrid MC-FIA is indicated by arrows in Figure 5, right). All this considered, it should be noticed that the $\alpha$-cut of level $\alpha = 0$ of the possibility distribution $\pi^{Z_1^{0.95}}(z_1^{0.95})$ can be generated only by "combining" and propagating through the model function $Z_1 = f_{Z_1}(Y_1, Y_2, Y_3)$ the $\alpha$-cuts of level $\alpha = 0$ of all the possibilistic parameters of the model inputs $Y_1$, $Y_2$ and $Y_3$. Such a combination of $\alpha$-values, i.e., $\{\alpha_1 = 0, \alpha_2 = 0, \alpha_3 = 0\}$, is always "processed" by fuzzy interval analysis in the hybrid MC-FIA method, due to the underlying assumption of total dependence among the information sources (e.g., the experts or observers) that supply the parameters' possibility distributions: actually, the same possibility (resp., confidence) level $\alpha$ (resp., $1-\alpha$) is chosen to build the $\alpha$-cuts for all the epistemically-uncertain parameters (see Section 4.2.1). On the contrary, such a combination of possibility (resp., confidence) values, i.e., $\{\alpha_1 = 0, \alpha_2 = 0, \alpha_3 = 0\}$ (resp., $\{1-\alpha_1 = 1, 1-\alpha_2 = 1, 1-\alpha_3 = 1\}$), can hardly be obtained (i.e., it occurs only with low probability) under the MC-based DS-IRS approach, which performs a plain random sampling among independent intervals. This is coherent with the real process of expert elicitation, in that it is difficult to find different (independent) experts that provide estimates about different uncertain parameters with the same (and, in this case, maximal) confidence.
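As a minimal worked illustration of this point (our own back-of-the-envelope computation, under the discretization of Section 4.2.2 with $N_e = 21$ focal sets of mass $m_{\alpha_t} = 0.05$ each), the probability that the DS-IRS sampling independently selects the outermost (level $\alpha \approx 0$) focal set for all three parameters simultaneously is

$$P[\alpha_1 \approx 0,\ \alpha_2 \approx 0,\ \alpha_3 \approx 0] = m_{\alpha}^3 = 0.05^3 = 1.25 \times 10^{-4},$$

whereas fuzzy interval analysis in the hybrid MC-FIA method processes this combination deterministically at every run.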

The higher conservatism of the hybrid MC-FIA approach is reflected, e.g., in the values of the exceedance probability $P[Z_1^{0.95} > z_1^{0.95,*}]$ (here $z_1^{0.95,*} = 1000$). For example, it can be seen that $P[Z_1^{0.95} > z_1^{0.95,*}]$ ranges within [0, 0.1500] for the hybrid MC-FIA method, whereas it is 0 for the MC-based DS-IRS approach: thus, the uncertainty propagation performed by random sampling of independent focal sets leads to a dramatic underestimation of the exceedance probability(58).

Some considerations are in order with respect to the results shown. The comparison shows that the

choice of the uncertainty propagation method (and implicitly of the state of dependence among the

epistemically-uncertain parameters of probability models) is not so critical (e.g., in risk-informed

decisions) only when the extreme bounding upper and lower CDFs of the model output are of

interest to the analysis: actually, the curves produced by the hybrid MC-FIA and the MC-based DS-

IRS approaches are almost identical. However, the analysis of other quantitative indicators (e.g., the

distribution of a given quantile of the output) shows that the hybrid MC-FIA method produces a

larger separation between the plausibility and belief functions than the MC-based DS-IRS approach,

giving rise to more conservative results (in particular, in the range of small probabilities that are of

particular interest in the risk assessment of complex, highly reliable systems).

4.3 Updating the uncertainty representation with new information

In all generality, let $\pi^{\boldsymbol{\theta}}(\boldsymbol{\theta})$ be the (joint) prior possibility distribution for the epistemically-uncertain parameters $\boldsymbol{\theta} = [\theta_1, \theta_2, \ldots, \theta_m, \ldots, \theta_P]$ of the (aleatory) PDF $p_Y(y|\boldsymbol{\theta})$ of input variable $Y$ (built on the basis of a priori subjective engineering knowledge and/or data). For example, in the flood risk assessment example used in what follows, $Y$ may represent the yearly maximal water flow of a river described by the Gumbel distribution of Section 4.1: thus, $Y \sim p_Y(y|\boldsymbol{\theta}) = \text{Gum}(\boldsymbol{\theta}) = \text{Gum}(\gamma, \delta) = p_Y(y|\gamma, \delta)$ and $\pi^{\boldsymbol{\theta}}(\boldsymbol{\theta}) = \pi^{\gamma,\delta}(\gamma, \delta)$. Moreover, let $\mathbf{y} = [y_1, y_2, \ldots, y_k, \ldots, y_D]$ be a vector of $D$ observed pieces of data representing the new information/evidence available for the analysis: referring to the example above, $\mathbf{y}$ may represent a vector of $D$ values of the yearly maximal water flow of the river under analysis, collected over a long period of time (e.g., many years).


The newly acquired evidence can be used to update the a priori uncertainty representation $\pi^{\boldsymbol{\theta}}(\boldsymbol{\theta}) = \pi^{\gamma,\delta}(\gamma, \delta)$ of $\boldsymbol{\theta} = [\gamma, \delta]$, i.e., to calculate the posterior possibility distribution $\pi^{\boldsymbol{\theta}}(\boldsymbol{\theta}|\mathbf{y})$ (i.e., $\pi^{\gamma,\delta}(\gamma, \delta|\mathbf{y})$) of $\boldsymbol{\theta}$ after $\mathbf{y}$ is obtained. In Section 4.3.1, a method based on a purely possibilistic counterpart of the classical, probabilistic Bayes theorem is suggested for the updating; in Section 4.3.2, the effectiveness of the recommended approach is assessed, for the first time to the best of the authors' knowledge, by comparison with a hybrid probabilistic-possibilistic technique from the literature.

4.3.1 Recommended approach: purely possibilistic Bayes’ theorem

The purely possibilistic method (hereafter also referred to as 'Approach A' for brevity) is based on a purely possibilistic counterpart of the classical, probabilistic Bayes theorem(112, 113):

$$\pi^{\boldsymbol{\theta}}(\boldsymbol{\theta}|\mathbf{y}) = \frac{\pi_L^{\boldsymbol{\theta}}(\boldsymbol{\theta}|\mathbf{y}) \cdot \pi^{\boldsymbol{\theta}}(\boldsymbol{\theta})}{\sup_{\boldsymbol{\theta}}\{\pi_L^{\boldsymbol{\theta}}(\boldsymbol{\theta}|\mathbf{y}) \cdot \pi^{\boldsymbol{\theta}}(\boldsymbol{\theta})\}}, \quad (2)$$

where $\pi_L^{\boldsymbol{\theta}}(\boldsymbol{\theta}|\mathbf{y})$ is the possibilistic likelihood of the parameter vector $\boldsymbol{\theta}$ given the newly observed data $\mathbf{y}$, and the quantities $\pi^{\boldsymbol{\theta}}(\boldsymbol{\theta}|\mathbf{y})$ and $\pi^{\boldsymbol{\theta}}(\boldsymbol{\theta})$ are defined above. Notice that $\sup_{\boldsymbol{\theta}}\{\pi_L^{\boldsymbol{\theta}}(\boldsymbol{\theta}|\mathbf{y}) \cdot \pi^{\boldsymbol{\theta}}(\boldsymbol{\theta})\}$ is a normalization factor such that $\sup_{\boldsymbol{\theta}}\{\pi^{\boldsymbol{\theta}}(\boldsymbol{\theta}|\mathbf{y})\} = 1$, as required by possibility theory(50).

It is worth mentioning that forms of the possibilistic Bayes theorem alternative to (2) can be

constructed as a result of other definitions of the operation of ‘conditioning’ with possibility

distributions: the reader is referred to Refs. 112 and 113 for technical details. In this paper,

expression (2) is recommended because “it satisfies desirable properties of the revision process and

leads to continuous posterior distributions”(113).

The possibilistic likelihood $\pi_L^{\boldsymbol{\theta}}(\boldsymbol{\theta}|\mathbf{y})$ is here obtained by transforming the classical probabilistic likelihood function $L(\boldsymbol{\theta}|\mathbf{y})$ through normalization, i.e., $\pi_L^{\boldsymbol{\theta}}(\boldsymbol{\theta}|\mathbf{y}) = L(\boldsymbol{\theta}|\mathbf{y}) / \sup_{\boldsymbol{\theta}}\{L(\boldsymbol{\theta}|\mathbf{y})\}$ (137). This choice has been made for the following main reasons:


i. the transformation is simple and can be straightforwardly applied to any distribution(137);

ii. the resulting possibilistic likelihood is closely related to the classical, purely probabilistic

one (which is theoretically well-grounded) by means of the simple and direct operation of

normalization that preserves the “original structure” of the experimental evidence;

iii. it can be easily verified that the resulting possibilistic likelihood keeps the sequential nature

of the updating procedure typical of the standard Bayes theorem.

Notice that other techniques of transformation of probability density functions into possibility

distributions exist: see Refs. 132, 134 and 135. Techniques are also available to construct possibility

distributions (and, thus, possibilistic likelihood functions) directly from experimental data (i.e.,

without resorting to “artificial” transformations): see Refs. 138 and 139 for details.
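As a concrete illustration of Eq. (2) combined with the normalized likelihood above, the following minimal Python sketch (our own, with made-up data and parameter values) performs the possibilistic update on a grid for the location parameter $\gamma$ of a Gumbel PDF, with a fixed, known scale $\delta$ for simplicity:

```python
import numpy as np

# Grid-based possibilistic Bayes update (Eq. (2)) for a single
# epistemically-uncertain parameter: the location gamma of a Gumbel PDF.
delta = 100.0                                   # known scale (illustrative)
gamma_grid = np.linspace(850.0, 1150.0, 601)

def triangular(x, a, c, b):
    """Triangular prior possibility distribution: support [a, b], core c."""
    return np.clip(np.minimum((x - a) / (c - a), (b - x) / (b - c)), 0.0, 1.0)

def gumbel_loglik(y, gamma, delta):
    """Log-likelihood of i.i.d. Gumbel data, for each candidate gamma."""
    u = (y[None, :] - gamma[:, None]) / delta
    return np.sum(-np.log(delta) - u - np.exp(-u), axis=1)

y_obs = np.array([1013.0, 1118.0, 952.0, 1075.0, 998.0])  # hypothetical data

prior = triangular(gamma_grid, 850.0, 1000.0, 1150.0)
loglik = gumbel_loglik(y_obs, gamma_grid, delta)
pi_L = np.exp(loglik - loglik.max())            # normalized likelihood: sup = 1
posterior = pi_L * prior
posterior /= posterior.max()                    # renormalize so sup = 1, Eq. (2)
```

The same grid-based scheme extends to the joint update of $(\gamma, \delta)$, with the supremum in Eq. (2) taken over the two-dimensional grid.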

4.3.2 Illustrative example

For completeness, we report (and extend) some of the results obtained in a previous work by the authors(61), in which the purely possibilistic Bayes' theorem described above is applied for updating the possibilistic parameters of aleatory PDFs in a simple literature case study involving the risk-based design of a flood protection dike(62). In the risk model considered, the maximal water level $Z_c$ of a river (i.e., the output variable $Z$ of the model) is given as a function of several (uncertain) parameters (the inputs $\mathbf{Y}$ to the model), i.e., $Z = Z_c = f_{Z_c}(Y_1, Y_2, Y_3, Y_4) = f_{Z_c}(Q, Z_m, Z_v, K_s)$, with $Y_1 = Q \sim \text{Gum}(\gamma, \delta)$, $Y_2 = Z_m \sim N(\mu_{Z_m}, \sigma_{Z_m})$, $Y_3 = Z_v \sim N(\mu_{Z_v}, \sigma_{Z_v})$ and $Y_4 = K_s \sim N(\mu_{K_s}, \sigma_{K_s})$; the 'priors' of the parameters of the aleatory PDFs of the inputs are represented by triangular possibility distributions: see Ref. 61 for further details.

The benefits coming from the use of the proposed method are here shown by means of a

comparison to the hybrid probabilistic-possibilistic approach proposed in Ref. 114 (hereafter also

called ‘Approach B’ for brevity) and based on the combination of: (i) FIA to process the uncertainty


described by possibility distributions; and (ii) repeated Bayesian updating of the uncertainty

represented by probability distributions.

In order to simplify the notation, in what follows let $\theta$ be one of the uncertain parameters of the PDFs of $Y_1 = Q$, $Y_2 = Z_m$, $Y_3 = Z_v$ and $Y_4 = K_s$, i.e., $\theta = \gamma$, $\delta$, $\mu_{Z_m}$, $\sigma_{Z_m}$, $\mu_{Z_v}$, $\sigma_{Z_v}$, $\mu_{K_s}$ or $\sigma_{K_s}$. By way of example, Figure 6 illustrates the possibility distributions of the epistemically-uncertain parameters $\gamma$ and $\sigma_{K_s}$ of the aleatory PDFs $p_Q(q|\gamma, \delta)$ (left) and $p_{K_s}(k_s|\mu_{K_s}, \sigma_{K_s})$ (right) of $Y_1 = Q$ and $Y_4 = K_s$, respectively: in particular, the prior possibility distributions $\pi^{\theta}(\theta)$ are shown as solid lines, whereas the marginal posterior possibility distributions $\pi^{\theta}(\theta|\mathbf{y})$ obtained by Approaches A (Section 4.3.1) and B(114) using $D_1 = 149$ and $D_4 = 5$ pieces of data are shown as dashed and dot-dashed lines, respectively; the point estimates $\hat{\theta}_{MLE}$ produced by the classical MLE method are also shown for comparison (dots)(61, 62).

[Figure 6: left panel, $\pi^{\gamma}(\gamma)$ (y-axis) versus $\gamma$ (x-axis); right panel, $\pi^{\sigma_{K_s}}(\sigma_{K_s})$ (y-axis) versus $\sigma_{K_s}$ (x-axis); legend: prior, posterior (Approach A), posterior (Approach B), MLE estimate.]

Figure 6. Prior and posterior possibility distributions of the epistemically-uncertain parameters $\gamma$ and $\sigma_{K_s}$ of the aleatory PDFs $p_Q(q|\gamma, \delta)$ (left) and $p_{K_s}(k_s|\mu_{K_s}, \sigma_{K_s})$ (right) of $Y_1 = Q$ and $Y_4 = K_s$. The point estimates obtained by the classical MLE method are shown for comparison

From a mere visual and qualitative inspection of Figure 6, it can be seen that both approaches are suitable for revising the prior possibility distributions by means of empirical data. In particular, it is evident that: (i) the most likely (i.e., preferred) values $\theta_c$ of the epistemically-uncertain parameters (i.e., those values at which the possibility function equals 1) are moved towards the MLE estimates $\hat{\theta}_{MLE}$ in all the cases considered; (ii) the area $S_{\theta}$ underlying the corresponding possibility distributions is significantly reduced: noting that this area is related to the imprecision in the knowledge of the possibilistic parameter (i.e., the larger the area, the higher the imprecision), it can be concluded that both approaches succeed in reducing the epistemic uncertainty (nevertheless, note that the agreement between the results obtained with the two numerical procedures does not necessarily establish the correctness or appropriateness of those procedures).

On the other hand, it is evident that the strength of Approach A in moving the most likely values $\theta_c$ towards the corresponding MLE estimates $\hat{\theta}_{MLE}$ is always higher than that of Approach B. Actually, the distances between the MLE estimates $\hat{\theta}_{MLE}$ and the posterior most likely values produced by Approach A are 1.35-7.94 times lower than those generated by Approach B.

In addition, it is interesting to note that the strength of Approach B in reducing epistemic uncertainty is slightly higher than that of Approach A only when the amount of available data is quite large (i.e., in the revision of the possibility distributions of parameters $\gamma$ and $\delta$ of the PDF of $Y_1 = Q$, by means of $D_1 = 149$ pieces of data): actually, the areas underlying the corresponding possibility distributions are reduced by 25.56-30.49% and 28.74-33.01% by Approaches A and B, respectively. In all the other cases, the power of Approach A in reducing the epistemic uncertainty is higher than that of Approach B, and this difference becomes more and more evident as the size of the data set decreases. This is particularly evident in the estimation of the standard deviation $\sigma_{K_s}$ of $K_s$ (Figure 6, right): on one side, the posterior distribution produced by the hybrid approach (B) seems not to be influenced by the revision process (actually, the most likely value of the parameter, $\sigma_{K_s,c} = 6.72$, and the area underlying the corresponding posterior possibility distribution, $S_{\sigma_{K_s}} = 3.95$, are quite close to those of the prior, i.e., 6.89 and 4.11, respectively); on the other side, the posterior distribution generated by the purely possibilistic approach (A) is almost centered on the point estimate obtained by the MLE method and the corresponding area is reduced by about 9%.


Finally, in addition to the strength of the approaches in revising the (prior) possibilistic description of the uncertain parameters of aleatory variables, the computational cost associated with the methods also has to be taken into account. In this respect, the time $t_{comp}$ required by Approach B is approximately $T \cdot N_{\alpha}$ times larger than that of Approach A, since it entails $T$ repetitions of the purely probabilistic Bayes theorem for each of the $N_{\alpha}$ $\alpha$-cuts analyzed (in this case, $T \cdot N_{\alpha} = 100 \cdot 21 = 2100$): see Ref. 114 for details.

Several considerations are in order with respect to the results obtained. Both methods succeed in

updating the possibilistic description of the epistemically-uncertain parameters of (aleatory)

probability distributions by means of data. In addition, when the Bayesian update is performed

based on a data set of large size (e.g., > 100 in this case), the strength of the two approaches in

reducing the epistemic uncertainty is quite similar. This demonstrates that although the two

methods are conceptually and algorithmically quite different, in the presence of "strong"

experimental evidence they produce “coherent” results (i.e., posterior possibility distributions that

bear the same overall “uncertainty content”): this is a fair outcome since the results provided by the

two methods are expected to be more and more similar (i.e., more and more coherent with the

experimental evidence) as the size of the data set increases. Instead, the strength of the purely

possibilistic approach (A) in reducing epistemic uncertainty is consistently higher than that of the

hybrid one (B) in the presence of medium- and small-sized data sets (e.g., ≈ 5-30 in the present study)

(which is often the case in the risk analysis of complex safety-critical systems). In such cases,

embracing one method instead of the other may significantly change the outcome of a decision

making process in a risk assessment problem involving uncertainties: this is of paramount

importance in systems that are critical from the safety viewpoint, e.g., in the nuclear, aerospace,

chemical and environmental fields.

However, it is important to acknowledge that even if the strength of method A in

reducing epistemic uncertainty is higher than that of method B, this does not necessarily imply that


method A is “better” or “more effective” than method B overall. Actually, if on one side a

consistent reduction in the epistemic uncertainty is in general desirable in decision making

processes related to risk assessment problems (since it significantly increases the analyst confidence

in the decisions), on the other side this reduction must be coherent with the amount of information

available. In this view, an objection may arise in the present case: is the remarkable strength of

Approach A in reducing epistemic uncertainty (with very few pieces of data) fully justified by such

a small amount of data? In other words, is this considerable reduction of epistemic uncertainty

coherent with the strength of the experimental evidence or is it too optimistic? With respect to that,

it has to be admitted that the uncertainty reduction power of the purely possibilistic approach (A) is

strongly dependent on the shape of an artificially constructed possibilistic likelihood that could in

principle bias the analysis. However: (i) in the approach recommended in the present paper, this

possibilistic function is very closely related to the classical, purely probabilistic one (which is

theoretically well-grounded) by a simple and direct operation of normalization that preserves the

“original structure” of the experimental evidence; (ii) in general, a probability-to-possibility

transformation (properly performed according to the rules of possibility theory) always introduces

additional artificial epistemic uncertainty into the analysis, i.e., it does not artificially reduce it

(because it replaces a single probabilistic distribution by a family of distributions)(132, 134, 135). On the

basis of the considerations above, it seems unlikely that the purely possibilistic approach (A) may

produce results that are dangerously over-optimistic with respect to those of the hybrid one (B).

Finally, the computational time required by the hybrid approach (B) is consistently (i.e., hundreds or thousands of times) higher than that associated with the purely possibilistic one (A): this is explained by the necessity of repeatedly applying (e.g., hundreds of times) the purely probabilistic Bayes' theorem for each $\alpha$-cut analyzed.


4.4 Dependence among the input variables and parameters

As discussed in Section 3.4, both objective and state-of-knowledge dependences need to be

considered in risk assessment analyses(7, 33, 34). However, in many practical cases the state of

dependence among the uncertain model parameters and variables is difficult to define precisely. In

such situations, conservatism requires that all kinds of (possibly unknown) dependences be

accounted for. To do this, the Distribution Envelope Determination (DEnv) method is here

recommended(126-129). In Section 4.4.1 the method is outlined in detail; a demonstration of the

approach on a case study concerning Fault Tree Analysis (FTA) is given in Section 4.4.2.

4.4.1 Recommended approach: Distribution Envelope Determination (DEnv) method

The DEnv method allows computing the extreme upper and lower CDFs $\overline{F}_{DEnv}^{Z}(z)$ and $\underline{F}_{DEnv}^{Z}(z)$ on the output $Z$ of the model $f_Z(Y_1, Y_2, \ldots, Y_j, \ldots, Y_N)$ (1) no matter what dependencies exist among the inputs; these bounds are also the "pointwise best possible, which means they could not be any tighter without excluding some possible dependences"(33). Notice that this approach can be applied both at the objective and at the epistemic level(58, 59). The method requires the following steps(126-129):

1. represent the uncertainty on the inputs $Y_1, Y_2, \ldots, Y_j, \ldots, Y_N$ within the framework of the Dempster-Shafer (DS) theory of evidence. The application of evidence theory produces a description of the inputs in terms of so-called DS structures $\{(A_{i_j}^{Y_j}, m(A_{i_j}^{Y_j})): i_j = 1, 2, \ldots, n_j\}$, $j = 1, 2, \ldots, N$: in other words, input $Y_j$ is described by a set of $n_j$ intervals (focal elements) $A_{i_j}^{Y_j} = [\underline{y}_{i_j}^{j}, \overline{y}_{i_j}^{j}]$, each of which is assigned a probability (or belief) mass $m(A_{i_j}^{Y_j})$, $i_j = 1, 2, \ldots, n_j$, $j = 1, 2, \ldots, N$. Notice that the DS structures described above can be transformed into upper and lower Cumulative Distribution Functions (CDFs) $\overline{F}^{Y_1}$ and $\underline{F}^{Y_1}$ (called cumulative plausibility and belief functions, respectively): in particular, $\overline{F}^{Y_1}(y_1) = \overline{P}[Y_1 < y_1] = \sum_{A_{i_1}^{Y_1} \cap (-\infty, y_1) \neq \emptyset} m(A_{i_1}^{Y_1})$ and $\underline{F}^{Y_1}(y_1) = \underline{P}[Y_1 < y_1] = \sum_{A_{i_1}^{Y_1} \subset (-\infty, y_1)} m(A_{i_1}^{Y_1})$. An exemplary DS structure and the corresponding upper and lower CDFs are pictorially shown in Figure 7j (33, 43, 44, 46, 85);

[Figure 7: left panel, probability mass $m(A_{i_1}^{Y_1})$ (y-axis) versus $y_1$ (x-axis), showing the focal elements $A_1^{Y_1}$ and $A_2^{Y_1}$; right panel, $F^{Y_1}(y_1)$ (y-axis) versus $y_1$ (x-axis), showing the corresponding upper and lower CDFs.]

Figure 7. Exemplary DS structure (left) and corresponding upper and lower CDFs (right)

2. propagate the input focal elements through the model $f_Z(Y_1, Y_2, \ldots, Y_N)$ to obtain the output focal elements $A_{i_1 i_2 \ldots i_j \ldots i_N}^{Z} = [\underline{z}_{i_1 i_2 \ldots i_N}, \overline{z}_{i_1 i_2 \ldots i_N}]$ as $\underline{z}_{i_1 i_2 \ldots i_N} = \min\{f_Z(Y_1, Y_2, \ldots, Y_j, \ldots, Y_N): Y_j \in A_{i_j}^{Y_j}, j = 1, 2, \ldots, N\}$ and $\overline{z}_{i_1 i_2 \ldots i_N} = \max\{f_Z(Y_1, Y_2, \ldots, Y_j, \ldots, Y_N): Y_j \in A_{i_j}^{Y_j}, j = 1, 2, \ldots, N\}$, $i_j = 1, 2, \ldots, n_j$, $j = 1, 2, \ldots, N$;

3. properly assign the (joint) probability masses $m(A_{i_1 i_2 \ldots i_N}^{Z})$ to the output focal elements $A_{i_1 i_2 \ldots i_N}^{Z}$ obtained at step 2. above, in such a way that the resulting upper CDF on $Z$ is the maximal possible (i.e., $\overline{F}_{DEnv}^{Z}(z) = \max\{\overline{F}^{Z}(z)\}$) and the resulting lower CDF on $Z$ is the minimal possible (i.e., $\underline{F}_{DEnv}^{Z}(z) = \min\{\underline{F}^{Z}(z)\}$), given a precise set of constraints(126-129):

Find $m(A_{i_1 i_2 \ldots i_N}^{Z})$, $i_j = 1, 2, \ldots, n_j$, $j = 1, 2, \ldots, N$, such that

$$\overline{F}_{DEnv}^{Z}(z) = \max\{\overline{F}^{Z}(z)\} = \max\Bigg[\sum_{A_{i_1 i_2 \ldots i_N}^{Z} \cap (-\infty, z) \neq \emptyset} m(A_{i_1 i_2 \ldots i_N}^{Z})\Bigg], \quad (3)$$

j Notice that representing the uncertainty in the inputs Y1, Y2, …, Yj, …, YN by DS structures does not impair the generality of the description. Actually, any other type of distribution that may be used to describe the uncertainty in Y1, Y2, …, Yj, …, YN can be easily transformed into a DS structure: approaches for transforming probability distributions can be found in Ref. 102, whereas techniques for transforming possibility distributions can be found in Ref. 33.


Find $m(A_{i_1 i_2 \ldots i_N}^{Z})$, $i_j = 1, 2, \ldots, n_j$, $j = 1, 2, \ldots, N$, such that

$$\underline{F}_{DEnv}^{Z}(z) = \min\{\underline{F}^{Z}(z)\} = \min\Bigg[\sum_{A_{i_1 i_2 \ldots i_N}^{Z} \subset (-\infty, z)} m(A_{i_1 i_2 \ldots i_N}^{Z})\Bigg], \quad (4)$$

subject to the constraints that: (i) the probability masses $m(A_{i_j}^{Y_j})$ are conserved (i.e., $\sum_{i_1=1}^{n_1} \cdots \sum_{i_{j-1}=1}^{n_{j-1}} \sum_{i_{j+1}=1}^{n_{j+1}} \cdots \sum_{i_N=1}^{n_N} m(A_{i_1 i_2 \ldots i_N}^{Z}) = m(A_{i_j}^{Y_j})$, $i_j = 1, 2, \ldots, n_j$, $j = 1, 2, \ldots, N$); and (ii) the probability masses $m(A_{i_1 i_2 \ldots i_N}^{Z})$ are larger than or equal to zero.

It is worth noting that, in order to construct the entire CDFs $\overline{F}_{DEnv}^{Z}(z)$ and $\underline{F}_{DEnv}^{Z}(z)$ for $Z$, such optimization problems have to be solved for all the values $z$ of interest.
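Since problems (3)-(4) are linear in the joint masses, they can be solved as small linear programs. The following sketch (our own illustration, with made-up focal elements and the hypothetical model $Z = Y_1 + Y_2$; it is not the implementation of Refs. 126-129) shows one way to set this up with scipy:

```python
import numpy as np
from scipy.optimize import linprog

# DEnv as a linear program (Eqs. (3)-(4)) for two inputs and the
# hypothetical model Z = Y1 + Y2. The joint masses m[i1, i2] are the
# decision variables; focal elements and masses are illustrative only.
focal_1 = [((0.1, 0.3), 0.5), ((0.2, 0.6), 0.5)]  # (interval, mass) for Y1
focal_2 = [((0.4, 0.5), 0.7), ((0.3, 0.9), 0.3)]  # (interval, mass) for Y2
n1, n2 = len(focal_1), len(focal_2)

# Step 2: output focal elements by interval arithmetic (monotone model).
z_lo = np.array([[a1 + a2 for (a2, b2), _ in focal_2] for (a1, b1), _ in focal_1])
z_hi = np.array([[b1 + b2 for (a2, b2), _ in focal_2] for (a1, b1), _ in focal_1])

# Constraint (i): marginal-mass conservation (row and column sums of m fixed).
A_eq, b_eq = [], []
for i1, (_, m1) in enumerate(focal_1):
    row = np.zeros((n1, n2)); row[i1, :] = 1.0
    A_eq.append(row.ravel()); b_eq.append(m1)
for i2, (_, m2) in enumerate(focal_2):
    col = np.zeros((n1, n2)); col[:, i2] = 1.0
    A_eq.append(col.ravel()); b_eq.append(m2)

def denv_bounds(z):
    """Best-possible upper/lower CDF values at z over all dependences."""
    upper_sel = (z_lo < z).ravel()   # output focal element intersects (-inf, z)
    lower_sel = (z_hi < z).ravel()   # output focal element contained in (-inf, z)
    # Eq. (3): maximize the selected mass (linprog minimizes, hence the sign);
    # constraint (ii), m >= 0, is enforced via the variable bounds.
    up = linprog(-upper_sel.astype(float), A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
    # Eq. (4): minimize the selected mass.
    lo = linprog(lower_sel.astype(float), A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
    return -up.fun, lo.fun

print(denv_bounds(1.0))  # bounding CDF values at, e.g., z = 1.0
```

As the text notes, the two linear programs must be re-solved at every value of $z$ on the grid of interest to trace out the full bounding CDFs.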

Finally, notice that an alternative sampling-based approach to (i) the propagation of a DS structure

through a model and (ii) the construction of approximations to the cumulative plausibility and belief

functions can be found in Refs. 39 and 44; however, being a sampling-based strategy, this approach

cannot encompass the treatment of unknown dependences between uncertain variables.

4.4.2 Illustrative example

In what follows, we report some of the results obtained in a previous work by some of the authors(59), in which the effects of objective and epistemic dependences are analyzed with reference to the Top Event (TE) probability $P(X)$ of a Fault Tree (FT) containing $n_{BE} = 6$ Basic Events (BEs) with epistemically-uncertain probabilities $\{P(B_j): j = 1, 2, \ldots, n_{BE} = 6\}$. The order of magnitude of the BE probabilities is around $10^{-3}$: this is reasonable for realistic safety-critical systems, whose components are usually highly reliable. Further details can be found in the cited reference.

Two classes of analyses are performed to analyze the effects of different states of objective (Analysis 1) and epistemic (Analysis 2) dependence among the BEs. In Analysis 1, three configurations (namely, T1-T3) are considered: Configuration T1 represents the reference, baseline case where all the BEs are considered (objectively) independent. At the opposite extreme, Configuration T3 represents the most conservative case, where no assumptions about the states of objective dependence among the BEs are made. Instead, Configuration T2 represents an 'intermediate' case. In particular, in Configuration T2 positive objective dependence is assumed between two of the six BEs (in this case, between two events representing failures of mechanical components): this situation is far from unlikely in real systems and may be due to several causes, e.g., (i) shared pieces of equipment (e.g., components in different systems are fed from the same electrical bus) or (ii) physical interactions (e.g., failures of some components create extreme environmental stresses, which increase the probability of multiple-component failures). Finally, in Analysis 2 only three 'extreme' situations (namely, E1-E3) are considered: in particular, in Configurations E1, E2 and E3 states of independence, total (perfect) dependence and unknown epistemic dependence, respectively, are assumed among the probabilities of all the BEs of the FT.

Figure 8, left, depicts the upper and lower CDFs $\overline{F}^{P(X)}$ and $\underline{F}^{P(X)}$ obtained for $P(X)$ under different assumptions of objective dependence among the BEs (Analysis 1). In order to provide a quantitative evaluation of the effects of such states of objective dependence, the interval $[\underline{p}_X^{0.95}, \overline{p}_X^{0.95}] = [(\overline{F}^{P(X)})^{-1}(0.95), (\underline{F}^{P(X)})^{-1}(0.95)]$ for the 95-th percentile $(P(X))^{0.95}$ of $P(X)$ is computed. Notice that $\overline{p}_X^{0.95} = (\underline{F}^{P(X)})^{-1}(0.95)$ can be interpreted as a conservative assignment of $(P(X))^{0.95}$ (i.e., a conservative estimate of risk). It can be seen that the values of $\overline{p}_X^{0.95}$ are $7.23 \times 10^{-4}$ and $8.98 \times 10^{-3}$ in Configurations T1 and T2, respectively. This means that neglecting a hypothetical state of positive dependence between only one pair of BEs is sufficient to underestimate $\overline{p}_X^{0.95}$ (and, thus, the risk associated with the system) by a factor of 12.42. Finally, Configuration T3 represents the 'extreme' case where unknown objective dependence is assumed among all the BEs of the FT. In this case, the value of $\overline{p}_X^{0.95}$ is $2.59 \times 10^{-2}$, i.e., 35.87 times larger than that obtained under the 'baseline' assumption of objective independence among all the BEs (Configuration T1).


With reference instead to Analysis 2, Figure 8, right, depicts the upper and lower CDFs $\overline{F}^{P(X)}$ and $\underline{F}^{P(X)}$ obtained under different states of epistemic dependence among the probabilities of all the BEs. It is evident that the upper and lower CDFs obtained under the assumption of unknown epistemic dependence (dot-dashed lines) completely envelop all the others (i.e., they obviously represent more conservative estimates of the bounding distributions). In addition, it is worth noting that the lower (resp., upper) CDFs obtained under the assumption of perfect epistemic dependence are very close to those produced by the assumption of unknown epistemic dependence in the region where the cumulative probability is very close to the 'extreme' upper bound 1 (resp., lower bound 0). In other words, the CDFs produced under assumptions of perfect and unknown epistemic dependence are almost identical in the range of extreme probabilities (i.e., extreme quantiles) that are of particular interest in the risk assessment of highly reliable systems. This is confirmed by the analysis of $\overline{p}_X^{0.95}$, whose values are $4.40 \times 10^{-4}$, $6.41 \times 10^{-4}$ and $7.23 \times 10^{-4}$ under the assumptions of independence, total and unknown dependence, respectively.

[Figure 8: two panels of cumulative probability (y-axis) versus the probability $P(X)$ of the Top Event (TE) $X$ (x-axis, logarithmic scale); left panel 'Analysis 1' with upper and lower CDFs for Configurations T1-T3; right panel 'Analysis 2' with upper and lower CDFs for Configurations E1-E3.]

Figure 8. Effect of objective (left) and epistemic (right) dependences on P(X)

Some considerations are in order with respect to the results obtained. With respect to Analysis 1, the

assumption of objective independence among the BEs always leads to a serious underestimation of

the risk associated to the system (here represented by the upper bound of the 95-th quantile of the

TE probability) with respect to the assumptions of perfect and unknown objective dependence:


actually, the corresponding estimates may differ even by orders of magnitude. Moreover, this underestimation is shown to be quite dramatic for small BE probabilities (e.g., around $10^{-3}$, as in the present case): this poses concerns for the risk assessment of systems whose components are highly reliable and, thus, characterized by small failure probabilities.

With respect to Analysis 2, it is shown that: (i) the assumption of epistemic independence among

the probabilities of random events leads to a non-negligible underestimation of the risk associated to

the system (here represented by the upper bound of the 95-th quantile of the TE probability) with

respect to the assumptions of perfect and unknown epistemic dependence (e.g., by about 1.5 times):

this is particularly evident in the estimation of small probabilities and extreme quantiles that are of

paramount importance in the risk assessment of highly reliable systems; (ii) the estimates for the

upper bound of the 95-th quantile of the TE probability produced by the assumptions of perfect and

unknown epistemic dependence are comparable and (iii) the effects of epistemic dependence

among the BE probabilities are quantitatively less relevant and critical than those of objective

dependence among the BEs.

5 CONCLUSIONS AND DISCUSSION

In this paper, the following conceptual and technical issues on the uncertainty treatment in the risk

assessment of engineering systems have been considered: (1) quantitative modeling and

representation of uncertainty, coherently with the information available on the system of interest;

(2) propagation of the uncertainty from the input(s) to the output(s) of the model of the system; (3)

(Bayesian) updating of the uncertainty representation as new information becomes available; (4)

modeling and representation of dependences among the model input variables and parameters.

Different approaches to tackle each of the issues 1-4 listed above, outside a fully probabilistic framework, have been compared. On the basis of the comparisons and of previous research by the authors(58-61), the following guidelines and recommendations have been drawn:


1. for the first issue, the Fuzzy Random Variable (FRV) approach can be one of those

recommended for uncertainty modeling and representation, in particular when the data and

information available on the problem of interest are scarce, vague and/or imprecise. In such

a framework, aleatory uncertainty is represented by probability models (i.e., probability

distributions), whereas epistemic uncertainty in the internal parameters of the aleatory

models is described by possibility distributions. The resulting FRV defines a family of

nested pairs of aleatory probability distributions, each of which bounds the “true”

probability distribution with a given confidence level. In the examples here proposed, the

FRV approach has been shown to provide more conservative results than the classical,

purely probabilistic one in the estimation of important quantities, like the distribution of a

quantile of the model output. On the other hand, this does not mean that possibility

distributions should be always used to represent epistemic uncertainty. Actually: (i) other

non-probabilistic approaches exist for tackling problems characterized by imprecise

information (see, e.g., evidence theory); (ii) in some cases (e.g., in the presence of a relevant

amount of data) also classical probability theory can obviously serve this purpose;

2. for the second issue, in general the hierarchical propagation of hybrid aleatory (probabilistic)

and epistemic (possibilistic) uncertainty should be carried out coherently with the state of

dependence between the epistemically-uncertain parameters, if known. On the other hand, if

the objective of the analyst is that of producing conservative risk estimates, then the MC-

FIA approach should be adopted. Actually, it has been shown to provide more conservative

results than the MC-based DS-IRS approach in the estimation of the distributions of a given

quantile of the model output. In addition, this higher conservatism is particularly evident in

the range of extreme probabilities (i.e., around 0 and 1) that are of paramount importance in

realistic risk assessment applications involving highly reliable engineering systems;


3. for the third issue, the purely possibilistic counterpart of the classical probabilistic Bayes’

theorem is strongly suggested (instead of the hybrid one) for updating, in a Bayesian

framework, the possibilistic representation of the epistemically-uncertain parameters of

(aleatory) probability distributions. This is due to the following reasons: (i) its strength in

reducing epistemic uncertainty is significantly higher, in particular when the amount of

available data is small: this is important in decision making processes since reducing

epistemic uncertainty significantly increases the analyst confidence in the decisions; (ii) the

computational time required is consistently lower. However, it has to be remarked that the

construction of a possibilistic likelihood required by this method still represents an issue to

be tackled from the theoretical and practical viewpoint in order to avoid introducing biases

in the analysis: with respect to that, future research will be devoted to the investigation of

additional methods, either resorting to probability-possibility transformations or (preferably)

building possibilistic functions directly from rough experimental data;

4. for the fourth issue, it has been shown that (i) the treatment of objective dependences among

random events is very critical since they have a dramatic impact on the system risk

measures: in this view, unknown (or, at least, positive) objective dependence should be

assumed among random events, in particular if the corresponding probabilities are small

(e.g., of the order of 10-3–10-2); (ii) the conditions of epistemic dependence should not be

neglected when small probabilities and extreme quantiles have to be estimated: with respect

to that, unknown (or, at least, perfect) epistemic dependences should be assumed in order to

obtain conservative risk estimates; however, if objective dependences are also present, the

effects of epistemic dependence are likely to be overwhelmed by those of objective

dependence. In the light of these results, the DEnv method is strongly recommended to

account for all kinds of (possibly unknown) objective and epistemic dependences.


Tackling these issues is undoubtedly a step forward towards the use of non-probabilistic settings for

risk analysis in engineering and gives encouraging perspectives for future studies. Without taking

any dogmatic position, the proposed methodology could be considered as an alternative to the more

common probabilistic approach.

However, these conclusions should not hide some difficulties in the practical use of these methods, which are currently only partially addressed and should be further investigated in future work. A first limitation comes from the computational cost of these approaches, which can become prohibitive when the system model is a CPU-time-consuming code, as is often the case in complex system safety studies (e.g., finite-element, CFD or mechanical codes, multi-physical coupling of models). This problem can be tackled by distributed computing and/or meta-modeling techniques (the latter adding an additional level of uncertainty).

Another interesting perspective for future work is the definition of point risk indicators (e.g., a probability of failure). This can be done quite easily (at least from a conceptual viewpoint) in the fully probabilistic framework: the Bayesian point estimator is the value minimizing the expectation of a loss function depending on the "distance" of the proposed value from the real (albeit unknown) one, the loss function encoding the conservatism one wants to give to the point risk indicator(107). In some studies, it would be interesting to have at one's disposal a formal theory for choosing a point value of the quantity of interest to be estimated, especially when the interval resulting from the methodology shown hereinbefore is very large. Depending on the context, the counterpart of the conservatism of these intervals is the difficulty of making decisions on their basis.

As a closing general comment, it is worth pointing out that although in some of the

recommendations above we have chosen “informed” conservatism as a possible criterion of

comparison between the probabilistic and possibilistic methods, we do not want to overstate the

benefits of conservatism in itself. Actually, (i) assumptions that are conservative without

justification may cause a misallocation of resources; (ii) excessive conservatism can impair the


results of an entire analysis, because it makes the analysts appear to lack an understanding of the

problem under consideration; (iii) in some cases, an assumption, which is conservative with respect

to some results of a given analysis, may be non-conservative with respect to other results of the

same analysis. Overly conservative assumptions can be as damaging to decision making as overly

optimistic assumptions (i.e., not “anchored” to the available information). In an analysis performed

to support a possibly critical, safety-related decision, the appropriate goal is to be neither overly

optimistic nor overly pessimistic in the assumptions used, but rather to use “objective” and rigorous

tools for transforming the available information into a “faithful” description of the uncertainties that

are present in the analysis and its results.

REFERENCES

1. Aven T. Risk assessment and risk management: Review of recent advances on their

foundation. European Journal of Operational Research 2016; 253(1): 1-13.

2. SRA (Society of Risk Analysis). Glossary of the specialty group on Foundations of Risk

Analysis. http://www.sra.org/news/sra-develops-glossary-risk-related-terms; 2015.

3. Apostolakis GE. The concept of probability in safety assessment of technological systems.

Science 1990; 250: 1359-1364.

4. NASA (National Aeronautics and Space Administration). Risk-Informed Decision Making

Handbook. NASA/SP-2010-576 – Version 1.0. Washington, DC (USA): Office of Safety and

Mission Assurance, NASA Headquarters; April 2010.

5. USNRC. Reactor Safety Study, an Assessment of Accident Risks. WASH 1400, Report

NUREG-75/014. Washington, DC (USA): U.S. Nuclear Regulatory Commission; 1975.

6. USNRC. An approach for using probabilistic risk assessment in risk-informed decisions on

plant-specific changes to the licensing basis. NUREG-1.174 – Revision 1. Washington, DC:

US Nuclear Regulatory Commission; 2002.

7. USNRC. Guidance on the Treatment of Uncertainties Associated with PRAs in Risk-Informed

Decision Making. NUREG-1855. Washington, DC: US Nuclear Regulatory Commission;

2009.


8. USNRC. Severe Accident Risks: An Assessment for Five U.S. Nuclear Power Plants.

NUREG-1150, Vols. 1-3. Washington, DC: U.S. Nuclear Regulatory Commission, Office of

Nuclear Regulatory Research, Division of Systems Research; 1990-1991.

9. Breeding RJ, Helton JC, Gorham ED, Harper FT. Summary Description of the Methods Used

in the Probabilistic Risk Assessments for NUREG-1150. Nucl Eng Des 1992; 135(1): 1-27.

10. Helton JC, Breeding RJ. Calculation of Reactor Accident Safety Goals. Reliability

Engineering and System Safety 1993; 39(2): 129-158.

11. Helton JC, Hansen CW, Swift PN, eds. Special Issue: Performance Assessment for the

Proposed High-Level Radioactive Waste Repository at Yucca Mountain, Nevada. Reliability

Engineering and System Safety 2014; 122: 1-456.

12. NAS/NRC (National Academy of Science/National Research Council). Evaluation of

Quantification of Margins and Uncertainties for Assessing and Certifying the Reliability of the

Nuclear Stockpile. Washington, DC: National Academy Press; 2008.

13. US EPA (Environmental Protection Agency). Guidance on the Development, Evaluation, and

Application of Environmental Models. EPA/100/K-09/003. Washington, DC: U.S.

Environmental Protection Agency, Council for Regulatory Environmental Modeling; 2009.

14. Helton JC, Hansen CW, Sallaberry CJ. Conceptual Structure and Computational Organization

of the 2008 Performance Assessment for the Proposed High-Level Radioactive Waste

Repository at Yucca Mountain, Nevada. Rel. Eng. Syst. Safety 2014; 122: 223-248.

15. Helton JC, Oberkampf W. Alternative representations of epistemic uncertainties. Reliability

Engineering and System Safety 2004; 85(1-3): 1-10.

16. Helton JC, Johnson JD, Sallaberry CJ, Storlie CB. Survey of sampling-based methods for

uncertainty and sensitivity analysis. Rel. Eng. Syst. Saf. 2006; 91: 1175-1209.

17. Helton JC, Davis FJ. Latin Hypercube Sampling and the Propagation of Uncertainty in

Analyses of Complex Systems. Reliability Engineering and System Safety 2003; 81(1): 23-69.

18. Aven T. On the Need for Restricting the Probabilistic Analysis in Risk Assessments to

Variability. Risk Analysis 2010; 30(3): 354-360.

19. Aven T. Interpretations of alternative uncertainty representations in a reliability and risk

analysis context. Reliability Engineering & System Safety 2011; 96(3): 353-360.

20. Aven T, Steen R. The concept of ignorance in a risk assessment and risk management context.

Reliability Engineering and System Safety 2010; 95(11): 1117–1122.


21. Aven T, Zio E. Some considerations on the treatment of uncertainties in risk assessment for

practical decision making. Reliability Engineering and System Safety 2010; 96(1): 64-74.

22. Helton JC, Sallaberry CJ. Uncertainty and Sensitivity Analysis: From Regulatory

Requirements to Conceptual Structure and Computational Implementation. IFIP Advances in

Information and Communication Technology 2012; 377 AICT: 60-76.

23. Kennedy M, O’Hagan A. Bayesian calibration of computer models. Journal of the Royal

Statistical Society, Series B 2001; 63: 425-464.

24. Bayarri MJ, Berger JO, Paulo R, Sacks J, Cafeo JA, Cavendish J, Lin CH, Tu J. A framework

for validation of computer models. Technometrics 2007; 49: 138-154.

25. Oberkampf WL, Trucano TG. Verification and validation benchmarks. Nuclear Engineering

and Design 2008; 238(3): 716-743.

26. Roy CJ, Oberkampf WL. A comprehensive framework for verification, validation, and

uncertainty quantification in scientific computing. Computer Methods in Applied Mechanics

and Engineering 2011; 200(25–28): 2131-2144.

27. Pasanisi A, Dutfoy A. An Industrial Viewpoint on Uncertainty Quantification in Simulation:

Stakes, Methods, Tools, Examples. In: Dienstfrey AM, Boisvert RF, Ed. Uncertainty

Quantification in Scientific Computing. Berlin (Germany): Springer; 2012. p. 27-45.

28. Aven T. Some reflections on uncertainty analysis and management. Reliability Engineering

and System Safety 2010; 95: 195-201.

29. Dubois D, Guyonnet D. Risk-Informed Decision Making in the Presence of Epistemic

Uncertainty. Int. J. General Systems 2011; 40(2): 145-167

30. Bernardo JM, Smith AFM. Bayesian Theory. Chichester (UK): Wiley; 1994.

31. Bedford T, Cooke R. Probabilistic Risk Analysis. Foundations and Methods. Cambridge (UK):

Cambridge University Publishing Ltd; 2001.

32. Kelly DL, Smith CL. Bayesian Inference for Probabilistic Risk Assessment: A Practitioner’s

Guidebook. London (UK): Springer-Verlag; 2011.

33. Ferson S, Nelsen RB, Hajagos J, Berleant DJ, Zhang J, Tucker WT, Ginzburg LR, Oberkampf

WL. Dependence in probabilistic modeling, Dempster-Shafer theory, and probability bounds

analysis. Tech. Report SAND2004-3072. Albuquerque, NM: SANDIA National Labs; 2004.

34. Apostolakis GE, Kaplan S. Pitfalls in risk calculations. Reliability Engineering 1981; 2(2):

135-145.

35. Ferson S. What Monte Carlo methods cannot do. Hum Ecol Risk Assess 1996; 2: 990–1007.


36. Ferson S, Burman MA. Correlation, dependency bounds and extinction risks. Biol. Conserv.

1995; 73: 101–105.

37. Ferson S, Ginzburg LR. Different methods are needed to propagate ignorance and variability.

Reliability Engineering and Systems Safety 1996; 54: 133–144.

38. Helton JC, Johnson JD, Oberkampf WL. An exploration of alternative approaches to the

representation of uncertainty in model predictions. Reliab Eng Syst Safe 2004; 85: 39-72.

39. Helton JC, Johnson JD. Quantification of Margins and Uncertainties: Alternative

Representations of Epistemic Uncertainty. Reliab Eng Syst Safe 2011; 96(9): 1034-1052.

40. Klir GJ, Yuan B. Fuzzy Sets and Fuzzy Logic: Theory and Applications. Upper Saddle River,

NJ: Prentice-Hall; 1995.

41. Beer M. Fuzzy probability theory. In: Meyers RA, Ed. Encyclopedia of Complexity and

Systems Science - Vol.6. New York (NY): Springer; 2009. p. 4047–4059.

42. Molchanov I. Theory of Random Sets. New York (NY): Springer; 2005.

43. Ferson S, Kreinovich V, Ginzburg L, Sentz K, Myers DS. Constructing probability boxes and Dempster-Shafer structures. Technical Report SAND2002-4015. Albuquerque, NM: SANDIA National Laboratories; 2003.

44. Helton JC, Johnson JD, Oberkampf WL, Storlie CB. A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Computer Methods in Applied Mechanics and Engineering 2007; 196: 3980-3998.

45. Le Duy TD, Dieulle L, Vasseur D, Berenguer C, Couplet M. An alternative comprehensive

framework using belief functions for parameter and model uncertainty analysis in nuclear

probabilistic risk assessment applications. Proceedings of the Institution of Mechanical

Engineers, Part O: Journal of Risk and Reliability 2013; 227(5): 471-490.

46. Sentz K, Ferson S. Combination of Evidence in Dempster-Shafer Theory. Technical Report SAND2002-0835. Albuquerque, NM: SANDIA National Laboratories; 2002.

47. Baudrit C, Dubois D. Practical Representations of Incomplete Probabilistic Knowledge.

Computational Statistics & Data Analysis 2006; 51(1): 86-108.

48. Baudrit C, Dubois D, Guyonnet D. Joint Propagation and Exploitation of Probabilistic and Possibilistic Information in Risk Assessment. IEEE Transactions on Fuzzy Systems 2006; 14(5): 593-608.

49. Baudrit C, Dubois D, Perrot N. Representing parametric probabilistic models tainted with imprecision. Fuzzy Sets and Systems 2008; 159(15): 1913-1928.

Page 49: A critical discussion and practical recommendations on ... · A critical discussion and practical recommendations on some ... In this paper, we focus on four relevant, conceptual

49

50. Dubois D. Possibility Theory and Statistical Reasoning. Computational Statistics & Data Analysis 2006; 51: 47-69.

51. Crespo LG, Kenny SP, Giesy DP. Reliability analysis of polynomial systems subject to p-box

uncertainties. Mechanical Systems and Signal Processing 2013; 37(1-2): 121-136.

52. Ferson S, Hajagos JG. Arithmetic with uncertain numbers: rigorous and (often) best possible

answers. Reliability Engineering and System Safety 2004; 85: 135-152.

53. Ferson S, Kreinovich V, Hajagos J, Oberkampf W, Ginzburg L. Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty. Technical Report SAND2007-0939. Albuquerque, NM: SANDIA National Laboratories; 2007.

54. Weichselberger K. The theory of interval-probability as a unifying concept for uncertainty. International Journal of Approximate Reasoning 2000; 24(2-3): 149-170.

55. Beer M, Ferson S. Special issue “Imprecise probabilities - What can they add to engineering

analyses?”. Mechanical Systems and Signal Processing 2013; 37(1-2): 1-3.

56. Beer M, Ferson S, Kreinovich V. Imprecise probabilities in engineering analyses. Mechanical

Systems and Signal Processing 2013; 37(1-2): 4-29.

57. Blockley D. Analysing uncertainties: Towards comparing Bayesian and interval probabilities.

Mechanical Systems and Signal Processing 2013; 37(1-2): 30-42.

58. Pedroni N, Zio E. Empirical comparison of methods for the hierarchical propagation of hybrid

uncertainty in risk assessment, in presence of dependences. International Journal of

Uncertainty, Fuzziness and Knowledge-Based Systems 2012; 20(4): 509-557.

59. Pedroni N, Zio E. Uncertainty analysis in fault tree models with dependent basic events. Risk Analysis 2013; 33(6): 1146-1173.

60. Pedroni N, Zio E, Ferrario E, Pasanisi A, Couplet M. Hierarchical propagation of probabilistic

and non-probabilistic uncertainty in the parameters of a risk model. Computers and Structures

2013; 126: 199–213.

61. Pedroni N, Zio E, Pasanisi A, Couplet M. Empirical Comparison of Two Methods for the

Bayesian Update of the Parameters of Probability Distributions in a Two-Level Hybrid

Probabilistic-Possibilistic Uncertainty Framework for Risk Assessment. ASCE-ASME J. Risk

Uncertainty Eng. Syst., Part A: Civ. Eng. 2016; 2(1): 04015015.

62. Limbourg P, de Rocquigny E. Uncertainty analysis using evidence theory – confronting level-

1 and level-2 approaches with data availability and computational constraints. Reliability

Engineering and System Safety 2010; 95(5): 550-564.

63. Parry GW. The Characterization of Uncertainty in Probabilistic Risk Assessments of Complex

Systems. Reliability Engineering and System Safety 1996; 54(2-3): 119-126.

64. Hoffman FO, Hammonds JS. Propagation of Uncertainty in Risk Assessments: The Need to

Distinguish Between Uncertainty Due to Lack of Knowledge and Uncertainty Due to

Variability. Risk Analysis 1994; 14(5): 707-712.

65. Paté-Cornell ME. Uncertainties in Risk Analysis: Six Levels of Treatment. Reliability

Engineering and System Safety 1996; 54(2-3): 95-111.

66. Helton JC, Davis FJ, Johnson JD. Characterization of Stochastic Uncertainty in the 1996

Performance Assessment for the Waste Isolation Pilot Plant. Reliability Engineering and

System Safety 2000; 69(1-3): 167-189.

67. Helton JC, Gross MB, Hansen CW, Sallaberry CJ, Sevougian SD. Expected Dose for the

Seismic Scenario Classes in the 2008 Performance Assessment for the Proposed High-Level

Radioactive Waste Repository at Yucca Mountain, Nevada. Reliability Engineering and

System Safety 2014; 122: 380-398.

68. US NRC. EPRI/NRC-RES Fire PRA methodology for nuclear power facilities, Volume 2: detailed methodology. Technical Report NUREG/CR-6850. Washington, DC: US Nuclear Regulatory Commission; 2005.

69. Helton JC, Martell M-A, Tierney MS. Characterization of Subjective Uncertainty in the 1996

Performance Assessment for the Waste Isolation Pilot Plant. Reliability Engineering and

System Safety 2000; 69(1-3): 191-204.

70. Huang D, Chen T, Wang MT. A fuzzy set approach for event tree analysis. Fuzzy Sets and

Systems 2001; 118: 153–165.

71. Krzykacz-Hausmann B. An approximate sensitivity analysis of results from complex computer models in the presence of epistemic and aleatory uncertainties. Reliability Engineering and System Safety 2006; 91: 1210-1218.

72. De Finetti B. Theory of Probability. New York, NY: Wiley; 1974.

73. Atwood CL, LaChance JL, Martz HF, Anderson DL, Engelhardt M, Whitehead D, Wheeler T. Handbook of Parameter Estimation for Probabilistic Risk Assessment. Technical Report NUREG/CR-6823. Washington, DC: US Nuclear Regulatory Commission; 2003.

74. Frey HC, Burmaster DE. Methods for Characterizing Variability and Uncertainty: Comparison of Bootstrap Simulation and Likelihood-Based Approaches. Risk Analysis 1999; 19(1): 109-130.

75. Bergman B. Conceptualistic pragmatism: a framework for Bayesian analysis? IIE Transactions

2009; 41: 86-93.

76. Aven T, Heide B. Reliability and validity of risk analysis. Reliability Engineering and System Safety 2009; 94: 1862-1868.

77. Dubois D. Representation, propagation and decision issues in risk analysis under incomplete

probabilistic information. Risk Analysis 2010; 30: 361–368.

78. Helton JC, Pilch M, Sallaberry CJ. Probability of Loss of Assured Safety in Systems with

Multiple Time-Dependent Failure Modes: Representations with Aleatory and Epistemic

Uncertainty. Reliability Engineering and System Safety 2014; 124: 171-200.

79. Singpurwalla N. Reliability and Risk. A Bayesian Perspective. New York (NY): Wiley; 2006.

80. North W. Probability Theory and Consistent Reasoning. Risk Analysis 2010; 30(3): 377-380.

81. Cooke R. Experts in Uncertainty: Opinion and Subjective Probability in Science. Oxford; New

York: Oxford University Press; 1991.

82. Ayyub BM. Elicitation of Expert Opinions for Uncertainty and Risks. Boca Raton, FL: CRC

Press; 2001.

83. Hacking I. The Emergence of Probability: A Philosophical Study of Early Ideas About

Probability, Induction and Statistical Inference. New York: Cambridge University Press; 1975.

84. Bernardo JM. The concept of exchangeability and its applications. Far East Journal of

Mathematical Sciences 1996; 4: 111-121.

85. Shafer G. Perspectives on the theory and practice of belief functions. International Journal of Approximate Reasoning 1990; 4: 323-362.

86. Dempster AP. Upper and Lower Probability Inferences Based on a Sample from a Finite

Univariate Population. Biometrika 1967; 54(2-3): 515-528.

87. Kozine I, Filimonov Y. Imprecise reliabilities: experiences and advances. Reliability Engineering and System Safety 2000; 67: 75-83.

88. Walley P. Statistical Reasoning with Imprecise Probabilities. New York, NY: Chapman and Hall; 1991.

89. Lindley DV. Understanding uncertainty. Hoboken, NJ: Wiley; 2006.

90. Zadeh LA. Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets and Systems 1978; 1: 3-28.

91. Karanki DR, Kushwaha HS, Verma AK, Ajit S. Uncertainty Analysis Based on Probability Bounds Approach in Probabilistic Safety Assessment. Risk Analysis 2009; 29(5): 662-675.

92. Zio E. The Monte Carlo Simulation Method for System Reliability and Risk Analysis.

Springer Series in Reliability Engineering. London, UK: Springer; 2013.

93. Baraldi P, Zio E. A combined Monte Carlo and possibilistic approach to uncertainty

propagation in event tree analysis. Risk Analysis 2008; 28(5): 1309-1325.

94. Baudrit C, Couso I, Dubois D. Joint propagation of probability and possibility in risk analysis: toward a formal framework. International Journal of Approximate Reasoning 2007; 45(1): 82-105.

95. Baudrit C, Guyonnet D, Dubois D. Joint propagation of variability and imprecision in assessing the risk of groundwater contamination. Journal of Contaminant Hydrology 2007; 93: 72-84.

96. Kentel E, Aral MM. Probabilistic-fuzzy health risk modeling. Stochastic Environmental Research and Risk Assessment 2004; 18: 324-338.

97. Zadeh LA. Fuzzy sets. Information and Control 1965; 8: 338–353.

98. Kentel E, Aral MM. 2D Monte Carlo versus 2D Fuzzy Monte Carlo Health Risk Assessment. Stochastic Environmental Research and Risk Assessment 2005; 19: 86-96.

99. Möller B, Beer M. Fuzzy Randomness: Uncertainty in Civil Engineering and Computational

Mechanics. Berlin, Germany: Springer; 2004.

100. Möller B, Beer M, Graf W, Sickert JU. Time-dependent reliability of textile-strengthened RC structures under consideration of fuzzy randomness. Computers and Structures 2006; 84: 585-603.

101. Fetz T, Oberguggenberger M. Propagation of uncertainty through multivariate functions in the framework of sets of probability measures. Reliability Engineering and System Safety 2004; 85: 73-87.

102. Tonon F. Using random set theory to propagate epistemic uncertainty through a mechanical

system. Reliability Engineering and System Safety 2004; 85: 169–181.

103. Helton JC, Oberkampf WL, Johnson JD. Competing Failure Risk Analysis Using Evidence

Theory. Risk Analysis 2005; 25(4): 973-995.

104. Siu N, Kelly D. Bayesian parameter estimation in probabilistic risk assessment. Reliability

Engineering and System Safety 1998; 62: 89–116.

105. Lindley DV. The philosophy of statistics. The Statistician 2000; 49(3): 293-337.

106. Kelly DL, Smith CL. Bayesian inference in probabilistic risk assessment — The current state

of the art. Reliability Engineering and System Safety 2009; 94: 628–643.

107. Pasanisi A, Keller M, Parent E. Estimation of a quantity of interest in uncertainty analysis: Some help from Bayesian decision theory. Reliability Engineering and System Safety 2012; 100: 93-101.

108. Ferson S. Bayesian methods in risk assessment. Technical Report available at:

www.ramas.com/bayes.pdf; 2005.

109. Smets P. Belief Functions: The Disjunctive Rule of Combination and the Generalized

Bayesian Theorem. International Journal of Approximate Reasoning 1993; 9: 1-35.

110. Viertl R. Statistical Methods for Fuzzy Data. Chichester, UK: Wiley; 2011.

111. Viertl R, Hareter D. Generalized Bayes-theorem for non-precise a-priori distribution. Metrika

2004; 59(3): 263–273.

112. Dubois D, Prade H. Bayesian conditioning in possibility theory. Fuzzy Sets and Systems 1997;

92: 223-240.

113. Lapointe S, Bobée B. Revision of possibility distributions: A Bayesian inference pattern. Fuzzy Sets and Systems 2000; 116: 119-140.

114. Stein M, Beer M, Kreinovich V. Bayesian Approach for Inconsistent Information. Information

Sciences 2013; 245: 96–111.

115. US NRC. Common-cause failure database and analysis system: event data collection, classification, and coding. Technical Report NUREG/CR-6268. Washington, DC: US Nuclear Regulatory Commission; 2007.

116. Watts DJ. A Simple Model of Global Cascades on Random Networks. Proceedings of the National Academy of Sciences of the USA 2002; 99: 5766-5771.

117. Zio E, Sansavini G. Component Criticality in Failure Cascade Processes of Network Systems.

Risk Analysis 2011; 31(8): 1196–1210.

118. Vaurio JK. Consistent mapping of common cause failure rates and alpha factors. Reliability

Engineering and System Safety 2007; 92(5): 628–645.

119. Sadiq R, Saint-Martin E, Kleiner Y. Predicting risk of water quality failures in distribution

networks under uncertainties using fault-tree analysis. Urban Water 2008; 5(4): 287-304.

120. Ferdous R, Khan F, Sadiq R, Amyotte P, Veitch B. Fault and Event Tree Analyses for Process Systems Risk Analysis: Uncertainty Handling Formulations. Risk Analysis 2011; 31(1): 86-107.

121. Iman RL, Davenport JM. Rank Correlation Plots for Use with Correlated Input Variables.

Communications in Statistics: Simulation and Computation 1982; B11(3): 335-360.

122. Zhang Q. A method dealing with correlations in uncertainty propagation by using traditional

correlation coefficients. Reliability Engineering and System Safety 1993; 41(2): 107–114.

123. Kafrawy KF, Rushdi AM. Uncertainty analysis of fault tree with statistically correlated failure

data. Microelectronics and Reliability 1990; 30: 157–175.

124. Karanki DR, Jadhav PA, Chandrakar A, Srividya A, Verma AK. Uncertainty analysis in PSA with correlated input parameters. International Journal of System Assurance Engineering and Management 2010; 1: 66-71.

125. Williamson RC, Downs T. Probabilistic arithmetic I: Numerical methods for calculating convolutions and dependency bounds. International Journal of Approximate Reasoning 1990; 4: 89-158.

126. Berleant D, Goodman-Strauss C. Bounding the results of arithmetic operations on random

variables of unknown dependency using intervals. Reliable Computing 1998; 4: 147-165.

127. Berleant D, Zhang J. Representation and problem solving with Distribution Envelope

Determination (DEnv). Reliability Engineering and System Safety 2004; 85: 153-168.

128. Berleant D, Xie L, Zhang J. Statool: a tool for distribution envelope determination, an interval-based algorithm for arithmetic on random variables. Reliable Computing 2003; 9(2): 91-108.

129. Berleant D, Anderson G, Goodman-Strauss C. Arithmetic on Bounded Families of

Distributions: A DEnv Algorithm Tutorial. In: Hu C et al., Eds. Knowledge Processing with

Interval and Soft Computing. London, UK: Springer-Verlag; 2008. p. 183-210.

130. Dubois D, Prade H. When upper probabilities are possibility measures. Fuzzy Sets and

Systems 1992; 49: 65-74.

131. Ralescu D. Average level of a fuzzy set. In: Bertoluzza C, Gil MA, Ralescu DA, eds.

Statistical Modeling, Analysis and Management of Fuzzy Data. Berlin Heidelberg: Springer;

2002. p. 119–126.

132. Dubois D, Prade H, Sandri S. On possibility/probability transformations. In: Lowen R,

Roubens M, eds. Fuzzy Logic: State of the Art. Dordrecht, the Netherlands: Kluwer Academic

Publishers; 1993. p. 103-112.

133. Smets P. Constructing the pignistic probability function in a context of uncertainty. In:

Henrion M, Shachter RD, Kanal LN, Lemmer JF, eds. Uncertainty in Artificial Intelligence 5.

Amsterdam, The Netherlands: North-Holland; 1990. p. 29-39.

134. Dubois D, Foulloy L, Mauris G, Prade H. Probability-Possibility Transformation, Triangular

Fuzzy Sets, and Probabilistic Inequalities. Reliable Computing 2004; 10: 273-297.

135. Flage R, Baraldi P, Zio E, Aven T. Probability and possibility-based representations of uncertainty in fault tree analysis. Risk Analysis 2013; 33(1): 121-133.

136. Aven T. On the use of conservatism in risk assessments. Reliability Engineering and System

Safety 2016; 146: 33–38.

137. Anoop MB, Balaji Rao K, Gopalakrishnan S. Conversion of probabilistic information into

fuzzy sets for engineering decision analysis. Computers and Structures 2006; 84: 141-155.

138. Masson MH, Denoeux T. Inferring a possibility distribution from empirical data. Fuzzy Sets

and Systems 2006; 157: 319–340.

139. Mauris G. Inferring a Possibility Distribution from Very Few Measurements. In: Dubois D et

al., Eds. Soft Methods for Handling Variability and Imprecision, ASC 48. Berlin, Heidelberg:

Springer-Verlag; 2008. p. 92–99.

Acknowledgments

The authors wish to thank the reviewers and the Area Editor, Prof. Seth Guikema, for their constructive comments, which significantly improved the quality of the paper.