The Effects of Complexity


    Nuclear Engineering and Design 204 (2001) 127

The effects of complexity, of simplicity and of scaling in thermal-hydraulics

Novak Zuber

703 New Mark Esplanade, Rockville, MD 20850, USA

    Received 22 May 2000; accepted 22 May 2000

    Abstract

This lecture has a twofold purpose. First, we will assess the state of the art and the trends in thermal-hydraulics (T-H) technology, within the context of replicating and non-replicating information systems. Four T-H examples are used to illustrate that an ever-increasing complexity in formulating and analyzing problems leads to inefficiency, obsolescence and evolutionary failure. By contrast, simplicity, which allows for parsimony, synthesis and clarity of information, ensures efficiency, survival and replication. This comparison (complexity versus simplicity) also provides the requirements and guidance for a success path in T-H development. The second objective of this paper is to demonstrate that scaling provides the means to process information in an efficient manner, as required by competitive (and, thereby, replicating) systems. To this end, the lecture summarizes the essential features of the Fractional Change, Scaling and Analysis approach, which offers a general paradigm for quantifying the effects that an agent of change has on a given information system. The paper will further demonstrate that a single concept and a single method may be used to scale and analyze all transport processes in a given field of interest (fluid mechanics, heat transfer, etc.) and/or across fields and disciplines (mechanics, biology, etc.). Therefore, the paradigm: (1) ensures economy and efficiency in addressing and resolving technical or scientific problems; and (2) enables a cultural cross-pollination between different information systems (disciplines). By means of a simple example in the Appendix, we shall: (1) demonstrate the efficiency to be gained through scaling; and (2) illustrate the inefficiency and wastefulness of computer-based safety studies as presently conducted. © 2001 Elsevier Science B.V. All rights reserved.


    1. Introduction

    1.1. Purpose

    I would like to begin my remarks by thanking

    the Organizing Committee for inviting me to par-

    ticipate. At this stage of my life, I am not certain

how many more opportunities I shall have to comment on the state of the art as concerns thermal-hydraulics (T-H) technology, and to outline avenues that, in my opinion, hold great promise for the improvement, advancement and enrichment (by cultural cross-pollination) of this and other branches of technology and of science.

This paper has, therefore, a twofold purpose. First, I shall assess the developmental trends of


    the T-H technology and register my concerns with

    regard to the dangers that lie ahead if the ap-

    proach to formulating and analyzing T-H prob-

    lems continues to increase in complexity (with no

    greater degree of efficiency, I might add). I am

    well aware that my views on this subject will be ill

    received by some code developers and users, and

    certainly by most of the code jockeys. So be it.

    Were I to give you less than my candid opinion,

    based on 48 years of experience in this technology

    (starting with my first year as a graduate student

    at UCLA), I would be remiss in my responsibili-

    ties as a member of two professional societies, i.e.

    the American Nuclear Society (ANS) and the

    American Society of Mechanical Engineers

    (ASME).

    My second objective is to summarize and

demonstrate the key features of the Fractional Change, Scaling and Analysis method (FCSA),

    which are: simplicity, parsimony, synthesis, effi-

    ciency and versatility.

    To this end, this paper will demonstrate that a

    single concept (simplicity and parsimony) and a

    single methodology (again, simplicity and parsi-

    mony) may be used for the following.

    1. To scale all transfer processes associated with

    particles, waves, diffusion and vorticity (syn-

    thesis) across hierarchical levels ranging from

Kolmogorov's micro scale to a nuclear reactor (synthesis and efficiency).

2. To derive Kolmogorov's scaling relations for:

    the inertial subrange; and

    the micro range (synthesis).

    3. To scale across disciplines; for example, from

fluid mechanics to biology (versatility).

    I shall then conclude the paper with a brief

    discussion of three analogies, which are obtained

    by applying the FCSA method to these subjects,

    and which exhibit the same hyperbolic relation.

1. The analogy between the equations of quantum mechanics and those derived using the

    FCSA method.

    2. The analogies between Mach number (or

    Froude number) scaling in fluid mechanics and

    the scaling in biology of:

    the life span of mammals; and

    the incubation period for avian eggs.

    1.2. Outline

The paper is divided into eight sections. The salient characteristics and requirements of replicating and non-replicating information systems are summarized in Section 2. Four examples from T-H are used in Section 3 to illustrate and discuss the effects of complexity. Section 4 summarizes the key features of the FCSA method. Section 5 describes the hierarchical levels used in the present demonstration of the FCSA method, which is presented in Section 6. The analogies are discussed in Section 7, and the paper concludes with a summary and recommendations in Section 8.

    2. Replicating and non-replicating systems

Consider the information systems shown in Fig. 1: the salient feature of each is replication, whereas the ubiquity of change is the characteristic of its environment. How a system interacts with and responds to these changes determines its evolutionary success or failure.

Replication consists of four information processes or stages: acquisition, storage, retrieval and transmission. These stages are shown in Fig. 1, together with their respective requirements and the means by which the requirements are met. Thus, the acquisition of information must be effective, which is accomplished through simplicity. The storage of information must be optimal, which may be realized by means of parsimony and synthesis. The retrieval of information must be fast and easy, both of which are achieved through efficiency. Finally, the transmission of information must be intelligible, which is arrived at through the clarity of the message. In biology and ecology, the transmission is by means of genes, whereas in the other systems noted in Fig. 1, transmission is, according to Dawkins (1976), Brodie (1996) and Blackmore (1999), via memes.

Information systems may be divided into two broad groups according to their characteristics and their response to the ever-changing environment. One group consists of replicating systems; the other, non-replicating systems.


    Replicating systems are generally flexible and

    adaptable. Thus, they are able to adjust to and

    stay current with the changing environment. They

    are efficient and, consequently, remain competi-

    tive. They survive, replicate, and may be consid-

    ered evolutionarily successful.

Non-replicating systems, on the other hand, are inflexible and maladaptable. They are rigid, accommodating change only with great difficulty. Within the context of an ever-changing environment, they can become obsolete and inefficient, and therefore non-competitive. Eventually, these

    Fig. 1. Characteristics of replicating and non-replicating systems.


    systems collapse and disappear. Consequently,

    they may be considered evolutionary failures.

    I could cite numerous examples of successes

    and/or failures for each of the systems shown in

    Fig. 1. For purposes of this paper, however, I

    shall use only four examples from T-H to illus-

    trate how complex information (complexity) in-

    evitably leads to failure, in as much as it does not

    meet the process requirements noted in Fig. 1.

    After discussing the four T-H examples, I shall:

    1. demonstrate that the FCSA method provides

    the means to meet the requirements of Fig. 1;

    and

    2. summarize and demonstrate the key features

    of the FCSA paradigm for quantifying the

    effect (response) that a given agent of change

    (environment) has on any of the systems

    shown in Fig. 1.

    3. Effects of complexity

    I have selected four examples to illustrate the

    potential effects of complexity on the analysis and

    resolution of T-H problems. The first two exam-

    ples (the modeling of drop evaporation in mist

    flows and of debris dispersal) were chosen to

    demonstrate that studies and results, which do not

meet the requirements in Fig. 1, end up on an evolutionary junk heap. In these instances, the

    results went unused after being generated and

    reported.

    The second set of examples (the Reynolds

    Stress Equation and multi-fluid formulations, as

    currently used in T-H) was selected because they

    exhibit the characteristics of non-replicating infor-

    mation systems. They do not, therefore, appear to

    follow an evolutionary success path.

3.1. Drop evaporation in mist flows

    In the 1970s, the Nuclear Regulatory Commis-

    sion (NRC) sponsored an experimental and ana-

    lytical research program directed at modeling

    evaporation rates of droplets in two-phase mist

    flows. This information was needed for a closure

    equation in computer codes.

The results of that 2-year effort are shown in Fig. 2. It can be seen that the droplet Nusselt number was correlated in terms of two Reynolds numbers (one for liquid and one for vapor) and of a dimensionless temperature. The correlation was expressed as a series consisting of 64 terms, with 17-digit accuracy and with coefficients ranging from 10^9 to 10^36. This is indeed a range of astronomical proportions. At the time, it prompted me to comment that were I a droplet, I could not evaporate, as I would not know how to perform such complex calculations!

Looking at these results, one must ask: What information and/or knowledge were acquired? What can be stored? What can be retrieved? What can be transmitted to meet future needs? The answers to all four questions are identical: nothing. In the context of Fig. 1, this research effort produced an evolutionary failure. It not only wasted funds, but, more importantly, wasted an opportunity to impress upon students the value of the efficient production of meaningful and useful results, as demanded in a competitive technological environment.

3.2. Debris dispersal

Roughly a decade ago, the NRC sponsored an experimental and analytical research program designed to model debris dispersal from a reactor cavity during severe accidents. The results of that effort, shown in Fig. 3, were expressed in terms of 14 dimensionless parameters and correlated in terms of the functions expressed in Fig. 4.

Observing these results, one might legitimately ask: What was learned? How was our understanding of the process improved? How may we explain the results? How useful are the results? Again, the answers to all of these questions are negative. This effort also produced an evolutionary failure. This should come as no surprise, given that the resulting information failed to meet the requirements of Fig. 1. In fact, the results were unintelligible.

    3.3. Reynolds stress equation

I shall now summarize some features of the Reynolds stress equation using information from


    Fig. 2. A computer-generated correlation for drop evaporation in mist flows.

    a highly instructive book by Wilcox (1998). The

    summary will serve to illustrate the problems that

    lie ahead if trends toward ever-increasing com-

    plexity in T-H technology continue unchecked. I

    give my views of those problems in the subsequent

    section.

The closure problem of the Reynolds stress equation is illustrated in Fig. 5, reproduced from Wilcox's book. It can be seen that the averaging procedure generates six new equations (one for each component of the Reynolds Stress Tensor) and 22 new unknowns. To quote Wilcox:


This exercise illustrates the closure problem of turbulence. Because of the nonlinearity of the Navier-Stokes equation, as we take higher and higher moments, we generate additional unknowns at each level. At no point will this procedure balance our unknowns/equations ledger. On physical grounds, this is not a particularly surprising situation. After all, such operations are strictly mathematical in nature, and introduce no additional physical principles. In essence, Reynolds averaging is a brutal simplification that loses much of the information contained in the Navier-Stokes equation. The function of turbulence modeling is to devise approximations for the unknown correlations in terms of flow properties that are known so that a sufficient number of equations exist. In making such approximations, we close the system.

I may add that this brute force approach, characterized by ever-increasing complexity and decreasing information content (given the proliferation of unknowns), does not meet the requirements of the four information processes

    Fig. 3. Proposed similarity parameters.


    Fig. 4. A computer-generated correlation for debris dispersal.

    shown in Fig. 1. Indeed, this and similar analyses

    should be contrasted to the simplicity, clarity,

    informative content and sheer elegance of the

    analyses of Kolmogorov (1941a,b), reported more

    than half a century ago.

    3.4. Multi-fluid formulations

    3.4.1. State of the art

    Most T-H calculations related to nuclear safety

    were performed by computer codes based on the

    two-fluid model. Their development was initiated

    in the mid-1970s to address the Large Break (LB)

    loss of coolant accident (LOCA) issue, and has

    been carried on over the past two decades

    through several versions of RELAP and TRAC.

    The adequacies and shortcomings of these

    codes are well established. Supported by a mas-

    sive validation process, these codes were used tosuccessfully bring closure to the LB LOCA issue

    for conventional NPP. However, time-consuming

    and costly modifications were required to address

    Small Break (SB) LOCA (in the wake of the TMI

    accident), due to the inadequacy of closure equa-

    tions (interfacial package), and flow regime

    maps, and to difficulties related to numerics and

    nodalization. These inadequacies and difficult

    were further augmented in the course of applyi

    these codes to advanced Nuclear Power Pla

    (NPP), Three Mile Island designs. The requis

    code modeling improvements were again ti

    consuming and costly.

The difficulties in modifying codes to accommodate new requirements stem, in large part, from their complexity, i.e. the vast number of closure relations (the constitutive package), together with transition criteria and splines, each introducing a set of coefficients (dials). The latter may be adjusted or tuned to produce an acceptable agreement between code calculations and a specific set of experimental data (say, the Peak Clad Temperature (PCT)). However, this tuning procedure also generates compensating errors, which limit the applicability of the code to a different set of requirements or design. Thus, through complexity, these codes have already become more inflexible and maladaptive to change.

Such complexity, together with the tuning procedure and the exacerbating effect of often inadequate documentation, render the assessment of code modeling capabilities more a matter of faith than of reason.


    Such complexity, together with the numerics

    and nodalization, also make the two-fluid codes

    slow running. For some of the codes, the ratio of

    computing time to real time is about 20:1. This

    drastically limits the applicability (and thereby the

    usefulness) of these codes to advanced NPP de-

    signs, in which transients of interest may last for 2

    weeks or more.

    Two approaches have been used to redress

    these shortcomings. One involves three-fluid for-

    mulations, while the other applies various averag-

    ing techniques to obtain two-fluid equations

containing Reynolds stresses. In my opinion, neither approach appears to be very promising with regard to NPP applications, in as much as each introduces a new set of unknowns. This, in turn, requires additional closure equations that generate additional coefficients (tuning dials). The development is open ended.

As is the case with single-phase flow (see Wilcox, 1998), these two approaches only decrease the information content of a formulation (by increasing the number of unknowns) without any improvement of its physics.

This recourse to ever-increasing complexity in order to reconcile theory with observation reminds me of the increasingly complex calculations in astronomy that were required to reconcile the geocentric concept of the universe with planetary observations. The complex geocentric model was accepted as valid (and politically correct) throughout an entire millennium, only to be discarded (an evolutionary failure) and replaced by the heliocentric model, because of the latter's simplicity, consistency and predictive capability (Bronowski, 1973). Is there not a lesson to be learned from such history that would be of benefit to the research and development (R&D) efforts in T-H?

    3.4.2. Concerns

In my judgment, one of the greatest concerns of any professional in this field should be the indiscriminate use of two- or three-fluid models, which invariably claim a good agreement with experimental data. Yet, some of these formulations and codes are known to be inadequate, flawed and/or incorrect. The good agreement may be explained only in terms of the carefully tuned dials hidden in the code, as noted by Travkin and Catton (2000).

Although good agreement with experimental data may ensure the continuation of project funding, such formulations cannot contribute to the fund of knowledge. Laws of variable coefficients and tuning dials are not yet laws of physics.

Yet these continuous claims to success have institutionalized the art of tuning as an acceptable methodology for addressing and resolving technical issues and/or scientific problems. I could apply to this modern art of tuning, perhaps, the

Fig. 5. Closure problem with the Reynolds stress equation (Wilcox, 1998).


    Fig. 6. Success path.

    notion of tuning spins, in an analogy to the

    acknowledged spins used to address and resolve

    political issues. It would thus appear that spin

    doctors are not limited to political circles!

    Such comments should not be construed as

    opposition on my part to the use of codes such as

    RELAP and TRAC; this is not at all the case. These

    codes are adequate for the purpose for which they

    were originally designed, extensively tested and

    validated. My concerns relate to the use andmodification of these codes to accommodate new

    designs and/or meet new requirements (for speed,

    repetitive calculations, long-lasting transients, etc.)

    brought about by the de-regulation of the power

    industry (a changing environment). The increasing

    complexity inherent in such an approach leads

    inevitably to the evolutionary junk heap.

What is needed now, and will be all the more vital in the future, are flexible, accurate and efficient codes to maintain a competitive edge in a changing environment. I cannot emphasize here enough that speed and accuracy are not incompatible requirements. They may both be realized through modularity and flexible architecture.

For these codes to be evolutionary successes, they must have the characteristics of the replicating information systems illustrated in Fig. 1. Going a step further, the stages and requirements of a success path for such code development efforts are presented in Fig. 6, which reflects and is founded upon the lessons I have learned through long involvement with this technology.

The steps and requirements are so obvious, that one might wonder why Fig. 6 is included in this


    paper. The answer is quite simple. Just 1 year ago,

    I attended a review meeting concerned with a new

    code development program. To my amazement,

    not only were the elements of the program not

    integrated, but fewer than one-third of the steps

    in Fig. 6 had even been considered! As a result of

    that meeting, I felt compelled to prepare Fig. 6

    and attach it to my evaluation memorandum. I

    am including it in this paper with the thought that

    it might perhaps be of some use to a broader

    audience.

    4. FCSA method

    I would hope that the four examples in the

    preceding section suffice to illustrate the effects

that increasing complexity has (or inevitably will have) on T-H analyses.

    In contrast to complexity, simplicity has long

    been recognized as the desideratum for any sci-

    ence. Let me quote a few such pronouncements.

Ockham: Pluralitas non est ponenda sine necessitate; paraphrased variously by Jefferys and Berger (1992): It is vain to do with more what can be done with less. An explanation of the facts should not be more complicated than necessary. Among competing hypotheses, favor the simple.

Gibbs: One of the principal objects of practical research is to find the point of view from which the subject appears in its greatest simplicity.

    Maxwell: The greatest desideratum for any

    science is its reduction to the smallest number

    of dominating principles.

    In what follows, I shall demonstrate that these

    requirements (simplicity, parsimony and synthe-

    sis) are the key features of the FCSA method.

Indeed, it was statements such as those quoted that guided the development of this paradigm,

    which quantifies the effects of a changing environ-

    ment upon an information system. For reasons

    cited in Section 1.1, these same features enable the

    FCSA method to process information with effi-

    ciency and versatility through the stages shown in

    Fig. 1.

In these times of rapid change vis-a-vis technology and economy (engendered by globalization and the age of information), efficiency and versatility are of utmost importance, not only to industrial, but also to educational systems. Today's graduates must be flexible, adaptable and versatile in their professional careers, lest they become obsolete and, perhaps, unemployable.

The FCSA method originated during the course of a program designed to scale severe accidents (Zuber, 1991). A summary of that effort was recently published by Zuber et al. (1998). After 1991, developments were reported by Zuber (1993, 1994, 1995). In this paper, I shall outline the method and summarize some of the recently published results (Zuber, 1999), omitting much of the commentary and detail, to illustrate its potential benefits to T-H.

To emphasize the generality of the methodology, I shall continue to employ words and expressions (such as information system, agent of change, cell, influence length, signal, etc.) that are applicable to many different disciplines. I do this deliberately, as words are often codes for a particular message, and therefore tend to restrict the generality and import of a concept.

However, for purposes of this paper relating to T-H, the information entities of interest are mass, momentum and energy, and the agents of change are fluxes of mass, momentum and energy across system boundaries. In this paper, therefore, the application and demonstration of the FCSA method will be carried out in terms of these three variables and parameters.

    4.1. Spatial and temporal scales

Consider a signal being transferred across an area, A, and being felt (integrated) within a volume, V, which will be referred to as the information containing volume or receiver.

As the volume is a measure of the capacitance, and the transfer area gauges the intensity of the process, the spatial scale that characterizes a transfer process is the transfer area concentration, defined by


1/u = A/V   (1)

which henceforth will be referred to as the influence or characteristic length, u.

    One can look at boundary conditions as agents

    of change or constraints imposed upon the trans-

fer area that induce changes within the information containing volume. Similarly, one can look

    upon these changes as the response of the en-

    closed entity (amount of information) to the con-

    straint-induced perturbations at the boundary.

    Therefore, the rate of change of such an entity is

    determined by the time constant for this internal

    accommodation process. We shall consider sepa-

    rately the effects of constraint and of the

    response.

We define M as the metric for the entity contained in V, and φ as the agent of change. Clearly, both M and φ are problem specific. For applications discussed in this paper, M will denote momentum or energy, whereas φ will represent forces, fluxes or power. The rate of change of M due to the action of φ is then

dM/dt = φ   (2)

To obtain the time constant for the receiver, we define ω as the fractional rate of change (FRC) of M. Thus,

ω = (1/M) dM/dt   (3)

In as much as we consider perturbations from a steady state, M0, it follows from Eqs. (2) and (3) that the FRC can be expressed as:

ω = φ/M0   (4)

which implies a linear approximation of Eq. (3). This approach has four important features.

    First, the kinetic aspect of the problem is ac-

    counted for by Eq. (2).

    Second, the kinematic aspects are reflected in

    Eq. (3), in that the FRC introduces, via the

    concept of action (discussed in Section 4.3), the

    kinematic relation that specifies the process signal.

Therefore, the FRC provides the temporal scale for a particular transfer process in the receiver.

Third, the kinematic and kinetic aspects are combined, via the effect metric Ω (see Section 4.2), to produce the scaling criterion for the transfer process of interest.

Fourth, it is this decomposition of the problem into kinetics and kinematics that allows the use of a single method to analyze and scale different transfer processes.
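To make the linearization of Eqs. (2)-(4) concrete, here is a minimal numerical sketch. It is not part of the original paper: the function name and all numerical values (a receiver energy M0 perturbed by a boundary power φ) are invented for illustration.

```python
# Illustrative sketch (not from the paper): the fractional rate of
# change (FRC) of Eq. (3), linearized about a steady state M0 as in
# Eq. (4).  All numbers are hypothetical.

def frc(phi, M0):
    """omega = phi / M0: linearized fractional rate of change, Eq. (4)."""
    return phi / M0

# Hypothetical receiver: stored energy M0 [J] changed by a power phi [W].
M0 = 5.0e6      # J, stored energy of the receiver (assumed)
phi = 2.5e4     # W, net energy flux across the boundary (assumed)

omega = frc(phi, M0)          # 1/s, intensity of the transfer process
process_time = 1.0 / omega    # s, the process time scale 1/omega

print(omega)         # 0.005
print(process_time)  # 200.0
```

The reciprocal 1/ω is the process time scale referred to in the text: the time constant of the receiver's internal accommodation.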

The second temporal scale depends upon the time, τ, during which the change is being observed, i.e. integrated. We shall denote it as clock time, to differentiate it from the process time scale, 1/ω.

For an open, flow system, the clock time is defined by

1/τ = Q/V   (5)

where Q is the volumetric flow rate. It can be seen that τ is a function of the spatial scale and, therefore, of the hierarchical level at which the change is to be observed. For one-dimensional flows through ducts with constant cross-sectional area, τ becomes a function of length and of the average velocity, v, of the fluid.

In this paper, we consider transfer processes at three hierarchical levels, macro, meso and micro, which are identified by three spatial scales: Ls of the system, u of the cell and uk of the dissipation. We therefore shall use three scales for the clock time τ. Thus, for the system:

τs = Ls/v   (6)

for the cell:

τc = u/v   (7)

and for dissipation:

τk = uk/vk   (8)

where vk is the dissipation velocity discussed in Section 6.3.
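The three clock times of Eqs. (6)-(8) can be sketched numerically; the lengths and velocities below are invented for illustration only and are not data from the paper.

```python
# Illustrative sketch: clock times at the three hierarchical levels,
# Eqs. (6)-(8).  All lengths and velocities are hypothetical.

def clock_time(length, velocity):
    """tau = L / v: time to observe (integrate) a change over `length`."""
    return length / velocity

L_s, v = 4.0, 2.0        # m, m/s -- system scale and average velocity (assumed)
u = 0.02                 # m      -- cell influence length (assumed)
u_k, v_k = 1.0e-4, 0.05  # m, m/s -- dissipation scale and velocity (assumed)

tau_s = clock_time(L_s, v)    # system clock time,      Eq. (6)
tau_c = clock_time(u, v)      # cell clock time,        Eq. (7)
tau_k = clock_time(u_k, v_k)  # dissipation clock time, Eq. (8)

print(tau_s, tau_c, tau_k)  # 2.0 0.01 0.002
```

Note how the clock time shrinks with the spatial scale, which is exactly the hierarchy (macro, meso, micro) the text describes.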


4.2. The effect metric Ω

Having defined the FRC of M by ω, and the clock time by τ, we now designate the metric for change as

Ω = ωτ   (9)

Since τ is the time for observing (integrating) a change, it follows from Eqs. (2) and (4) that

Ω = (φ/M0)τ = (M − M0)/M0   (10)

Consequently, while the FRC ω is the metric for the intensity of a transfer process, the metric Ω denotes the fractional change (%) of the information metric M during a period τ, as a consequence of the information transfer rate φ. For this reason, we shall refer to Ω as the effect metric (the effect of φ on M during τ).

Consider now processes for which the effect metric Ω is a constant, say Ω0. It follows then, from Eq. (9), that

ωτ = Ω0   (11)

which plots a hyperbola as shown in Fig. 7. We note that this hyperbola divides the ω-τ plane into two regions.

In region (1), the hyperbola Ω0 limits the clock time τ during which a given FRC (call it ω1) can be sustained. As ω1 characterizes a particular transfer process φ1, the hyperbola Ω0 delineates the region within which this process can be maintained. In region (2), the metric M can no longer sustain the fractional change at the rate ω1. Therefore, the process either ceases or must change to a completely different kind of process.

    Section 7 will demonstrate the following.

    1. In compressible flow, the effect metric Ω is the inverse of the Mach number. Therefore, the hyperbola Ω_0 = 1 divides the ω–τ plane into two regions: the subsonic (Ω > 1) and the supersonic (Ω < 1).

    2. The life span of mammals scales according to such a hyperbola, so that, indeed, small mammals (e.g. mice) have high metabolic rates and short life spans, while the opposite holds true for large mammals (elephants), which have a low ω but a long residence time τ. Thus, for all mammals, the hyperbola Ω_0 separates the region of life (region (1)) from that of the beyond (region (2)).

    3. The time to hatch eggs also scales as a hyperbola. Thus, for avians, the hyperbola Ω_0 separates the egg time (region (1)) from the flying time (region (2)).

    4.3. Kinematic and kinetic aspects

    We proceed by defining three important parameters:

    Process velocity: v_p = λω    (12)
    Process action: A_p = v_p λ = λ^2 ω    (13)
    Flow action: A_f = λv    (14)

    Given that the FRC ω is the temporal scale for the intensity of a particular transfer process, v_p is the metric for its speed.

    The rationale for using the concept of action as a parameter is discussed elsewhere (Zuber, 1999). Briefly, it is a parameter that characterizes quantum, diffusion and wave phenomena. In the present application, it provides the key to uncoupling kinematics from kinetics.

    The action parameters A_p and A_f are introduced in the formulation via the effect metric Ω. Considering a cell and its clock time τ_c, it follows from Eqs. (7), (9), (13) and (14) that the effect metric Ω can be expressed as:

    Ω = λω/v = A_p/A_f    (15)

    which clearly illustrates the kinematic features of Ω. However, by means of Eq. (10), we can also express Ω in terms of the kinetic parameter b. Thus, we can write:

    A_p/A_f = Ω = (F/M_0)(λ/v)    (16)

    thereby demonstrating the dual interpretation of Ω as both kinematic and kinetic.

    Fig. 7. The ω–τ plane and the effect metric Ω_0 (Zuber, 1994, 1999). Note: high frequency, short life; low frequency, long life.

    4.4. Features and road map

    The road map for applying the FCSA method to an information system is displayed in Fig. 8, which also outlines its key features.

    Fig. 8. The road map to the effect metric and four similarities.

    FCSA considers an entity M, contained in a volume V, acted upon by an agent b, which induces changes in M. The rate of change of M is specified by the signal that propagates within the volume V. The latter depends upon the transfer mode, i.e. whether it is by waves, diffusion, vorticity or a combination thereof.

    The information-containing volume V, defined by the influence length λ, is referred to as a cell at the hierarchical scales considered in this paper. At that level, FCSA considers five parameters (concepts), which include:

    - the fractional rate of change (FRC) ω (step 1 in Fig. 8), which specifies the temporal scale of the change;
    - the cell clock time τ_c, which specifies the duration of the change within the cell (step 2);
    - the effect metric Ω, which quantifies the effect (fractional change) that b had on M during the clock time τ_c (step 3); and
    - two action parameters, A_p and A_f, the first of which (step 4) specifies the process (and thereby the signal), and the second of which accounts for the flow (step 5).

    By means of the two action parameters, the effect metric Ω may be expressed in terms of the kinetic and kinematic aspects of the process (step 6). The first relation accounts for the agent of change (the environment), and the second for the speed of the signal within the cell (the speed of accommodation, of adjustment).

    The relations in Fig. 8 (steps 3 and 6) demonstrate that a transformation (scaling) that preserves the effect metric Ω automatically and simultaneously satisfies and ensures four similarities: kinematic, kinetic, temporal and fractional.

    The concepts of a cell (a quantum of volume) defined by the influence length λ, and of the fractional rate of change ω, impart simplicity and generality to the FCSA method. The concept of two action parameters, A_p and A_f, introduces synthesis and parsimony. When these concepts are combined and expressed as the effect metric Ω, FCSA provides a general, efficient, flexible and adaptable method for quantifying changes in information systems induced by an agent of change.

    In this lecture, these attributes will be demonstrated by applying FCSA to momentum and energy at three hierarchical levels.

    5. Hierarchical levels

    Given that I intend to demonstrate the applicability of the FCSA method to processes and systems in which the spatial scale varies over many orders of magnitude, I shall structure the demonstration and interpret the results within the context of the theory of hierarchical systems. The


    relevance of this theory was recognized and used to develop a hierarchical approach to scaling (Zuber, 1991, 1998), and was adopted in the studies of Reyes and Hochreiter (1998), Ishii et al. (1998) and Peterson et al. (1998).

    One of the key features of hierarchical structures is that the question (information needed) determines the hierarchical level at which the answer is to be found. Thus, lower levels provide more detailed information than those higher up. The three levels shown in Fig. 9 reflect the following three posed questions, and illustrate an application of the FCSA method.

    Taking momentum as the subject of interest, we start from the top, i.e. from the macro level, and address the overall problem with the initial question:

    1. What parameters scale the change of momentum of the entire system?

    Having identified the overall parameters at the macro level, we proceed to the meso level with a more detailed question:

    2. What processes affect the overall parameters, and how are these processes scaled?

    Having identified the processes and the relevant scaling relations at the meso level, we proceed to the micro level, with a still more detailed question:

    3. How are the changes of momentum transformed into an irreversible loss, and what scales the dissipation rate?

    The implementation of this procedure is also shown in Fig. 9.

    At the macro level, the system specifies the spatial scales that are used in Eq. (4) to determine the FRC of momentum ω_m,s for the system. The system also determines the clock time τ_s. These two parameters generate the effect metric for the whole system, Ω_m,s, which both scales the change of momentum and identifies the overall governing parameters.

    The meso level is defined by the influence length λ, which specifies the dimensions of a cell within which the process is to be addressed in more detail. A particular process is specified by the process action A_p, which is used to obtain the FRC of momentum ω_m,p for that process. The cell also determines the clock time τ_c. These two parameters (ω_m,p and τ_c) generate, in turn, the effect metric Ω_p, which scales that particular process.

    At the micro level, the assumption of isotropy specifies the fluid action A_f, which, together with the FRC ω_m,p, generates all the parameters that characterize the dissipative level (i.e. the FRC of momentum ω_k, the spatial scale λ_k, the velocity v_k and the clock time τ_k). These parameters in turn generate the effect metric Ω_k, which turns out to be equal to unity, reflecting a complete dissipation.

    In the following section, it will be demonstrated and confirmed that the same method (FCSA) and the same metric (Ω) may be applied to the hierarchical levels shown in Fig. 9, and may be used to generate the appropriate scaling criteria that reflect the degree of interest and detail of each level.

    Fig. 9. Fractional change scaling at three hierarchical levels (Zuber, 1999).

    6. Demonstration through applications

    6.1. Macro-level scaling

    We consider four examples, which are illustrated in Fig. 10. The first deals with scaling the pressure drop in flows through ducts. The metric


    Fig. 10. Examples of scaling at the macro/system level (Zuber, 1994, 1999).

    M is therefore the momentum (ρVv), and the agent of change b is the stress force at the wall (σ_w A). Inserting these (system) parameters in Eq. (4) yields the FRC of system momentum ω_m,s (expressed in terms of the friction factor f), which, with the system clock time τ_s, generates the system momentum effect metric Ω_m,s. It can be seen that it is identical with the definition of the friction loss factor given in Bird et al. (1960).

    The second subject is the scaling of heat transfer by forced convection. The metric M is now the enthalpy (ρVc_pΔT), and the agent of change b is the heat transfer rate from the wall (hAΔT, where h is the heat transfer coefficient). Following the same procedure as already described, we obtain the FRC of system enthalpy ω_e,s and the system enthalpy effect metric Ω_e,s, which is now a function of the Stanton number St, and identical to relations given in any standard textbook on the subject.
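    For the forced-convection example, the effect metric follows directly from Eq. (10): the driving ΔT cancels between the agent (hAΔT) and the metric (ρVc_pΔT). A sketch; the duct geometry, the choice τ_s = L/v and all numerical values are illustrative assumptions, not the paper's:

    ```python
    import math

    def effect_metric_enthalpy(h, A, rho, V, cp, tau):
        """System enthalpy effect metric Omega_e,s = omega_e,s * tau, with
        the FRC from Eq. (4): omega_e,s = h*A*DT / (rho*V*cp*DT)
        = h*A / (rho*V*cp); the DT factors cancel."""
        return (h * A) / (rho * V * cp) * tau

    # Illustrative water-like duct: for tau = L/v the effect metric
    # collapses to St * 4L/D, i.e. a function of the Stanton number alone.
    h, rho, cp = 500.0, 1000.0, 4180.0    # W/m^2-K, kg/m^3, J/kg-K
    D, L, v = 0.05, 2.0, 1.5              # m, m, m/s
    A, V, tau = math.pi * D * L, math.pi * D**2 / 4 * L, L / v

    St = h / (rho * v * cp)               # Stanton number
    omega_es = effect_metric_enthalpy(h, A, rho, V, cp, tau)
    assert abs(omega_es - St * 4 * L / D) < 1e-12
    ```

    The assertion checks the algebra: hAτ/(ρVc_p) with A = πDL, V = πD²L/4 and τ = L/v reduces identically to St·4L/D, which is why the system-level metric is reported in terms of St.
    
    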

    The third topic deals with the scaling of a loss of coolant accident (LOCA) in nuclear reactors. The metric M is now the enthalpy (ρVc_pΔT) in the primary side, whereas the agent of change b is the reactor power. Thus, we obtain the FRC ω_e,s of the enthalpy in the reactor, and the effect metric Ω_e,s. The system clock time τ_s is given by Eq. (5), where Q is the volumetric flow rate of the fluid through the break. If the fluid properties and the clock time are preserved, then the effect metric Ω_e,s becomes the power-to-volume scaling rule:

    F_1/V_1 = F_2/V_2    (17)

    This scaling rule was used to design and operate test facilities that produced the experimental data necessary to validate large computer codes, which, in turn, were used to address and resolve safety issues relating to LOCA (Boyack et al., 1990; Zuber et al., 1990).
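    Eq. (17) makes test-facility sizing a one-line computation. A sketch; the plant and facility numbers are illustrative, not those of any actual facility:

    ```python
    def scaled_power(full_power, full_volume, model_volume):
        """Power-to-volume scaling rule of Eq. (17): preserving the enthalpy
        effect metric, with fluid properties and clock time fixed, requires
        F1/V1 = F2/V2, so the facility power scales with its volume."""
        return full_power * (model_volume / full_volume)

    # A 1/500-volume integral test facility of a 3400 MW(t) plant must be
    # driven at 1/500 of the plant power (volumes in relative units).
    facility_power = scaled_power(3400.0, 500.0, 1.0)
    assert abs(facility_power - 6.8) < 1e-9   # MW(t)
    ```
    
    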

    The final topic deals with the life span of mammals. It is included here only to demonstrate that the FCSA method may be applied to situations for which differential equations are not available and the processes are not sufficiently well understood to use the Buckingham Π theorem. I shall return to this subject in Section 7.

    We note that the effect metrics Ω scale the pressure drop and the heat transfer at the system level, and identify the overall parameters that must be considered. However, they provide no information as to what processes affect these two parameters. This information must be sought at the meso level.

    The effects of two or more agents acting simultaneously on a system are discussed in Appendix A, in order to illustrate the efficiency that may be realized through scaling.

    6.2. Meso-level scaling

    At this level, we analyze processes that occur within the volume of a cell defined by the influence length λ. Within such a cell, we consider


    and 3) to obtain the effect metric for the momentum Ω_m (step 4).

    We use the kinematic branch of Ω_m to identify three types of flows: those for which (1) A_p is constant, (2) Ω_m is constant, and (3) neither A_p nor Ω_m is constant. These three types characterize three transport processes.

    For diffusion-dominated processes, the transfer is by molecular action. Consequently, the process action A_p is constant and equal to twice the kinematic viscosity.

    For vorticity-dominated (high-velocity) flows, the transfer is independent of molecular effects. The process action A_p must therefore be proportional to A_f, with Ω_m as a constant, say Ω_0. Thus,

    A_p = Ω_0 A_f    (22)

    By introducing the definitions of the action parameters (Eqs. (13) and (14)), we can express Eq. (22) as

    ω_m λ/v = Ω_0    (23)

    indicating a constant-vorticity flow.

    For flows affected by both diffusion and vorticity, we interpolate between these two limits by defining an eddy action A_e in terms of a generalized geometric mean

    A_e = A_d^(1−m) A_v^m    (24)

    which reduces to flows dominated by diffusion for m = 0 and by vorticity for m = 1.

    Expressing the kinematic branch of Ω_m in terms of the three process actions (step 5) leads to three effect metrics Ω_m (steps 6, 7 and 8), i.e. one for each mode of transfer.

    Referring to the results shown in Fig. 11, we observe the following.

    (1) For flows through circular ducts (with a diameter D), the influence length (defined by Eq. (1)) is

    λ = D/4    (25)

    and the cell Reynolds number Re_λ becomes:

    Re_λ = vλ/2ν = vD/8ν = Re/8    (26)

    (2) For diffusion-dominated flows through circular pipes (step 6), the friction factor f becomes

    f = 16/Re    (27)

    which is identical to the relation derived by Hagen and Poiseuille for viscous flows. Note, however, that Eq. (27) was derived here without even writing the differential equation, much less solving it.

    (3) For vorticity-dominated flows over a rough surface, the friction factor f depends only upon the geometry and configuration of the roughness. As

    λω/v = ωτ_c = Ω_0    (28)

    these flows plot as hyperbolae in the ω–τ plane shown in Fig. 2.

    (4) By varying the exponent m in Eq. (24), we obtain different relations between the Reynolds number and the friction factor in step 8. Thus, when m = 0.75, we obtain f ∝ Re_λ^(−0.25), which is the relation proposed by Blasius for turbulent flow, whereas, when we take m = 0.80, we obtain f ∝ Re_λ^(−0.20), which is the relation proposed by Hermann, also for turbulent flow.

    (5) The kinetic branch of Ω_m (step 4) is an energy ratio, which is expressed in terms of the friction factor in steps 6, 7 and 8. This branch does not specify (account for) the type of transfer; the kinematic side does that. This observation should come as no surprise, inasmuch as flow pattern is a metaphor for kinematics (and vice versa).
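    Observation (4) can be checked numerically. The sketch below keeps only what the text states: the diffusion limit f = 16/Re, and a power-law friction factor whose exponent −(1 − m) follows from the eddy action of Eq. (24); the prefactor used for m > 0 is my placeholder, not the paper's constant:

    ```python
    import math

    def friction_factor(Re, m):
        """Power-law friction factor implied by the eddy-action
        interpolation of Eq. (24): f ~ Re**-(1 - m). The prefactor
        16**(1 - m) is an assumed interpolation chosen only so that
        m = 0 recovers f = 16/Re exactly."""
        return 16.0 ** (1.0 - m) * Re ** -(1.0 - m)

    # m = 0: diffusion-dominated (Hagen-Poiseuille), f = 16/Re (Eq. (27)).
    assert math.isclose(friction_factor(1000.0, 0.0), 16.0 / 1000.0)

    # m = 0.75: the Blasius slope f ~ Re**-0.25 for turbulent flow.
    slope = (math.log(friction_factor(1e5, 0.75) / friction_factor(1e4, 0.75))
             / math.log(1e5 / 1e4))
    assert abs(slope + 0.25) < 1e-9
    ```

    Setting m = 0.80 in the same function gives the −0.20 slope quoted for Hermann's correlation; only the exponents, not the prefactors, are claimed here.
    
    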

    6.2.1.2. Waves and vibrations. I shall now outline the application of the FCSA method to waves and vibrations, summarizing the findings of Zuber (1999).

    Once again, we identify the metric M with the momentum (ρVv) and the agent with the perturbation force δF at the boundary, to obtain the FRC ω_m. Thus,

    ω_m = δF/(ρVv)    (29)

    For a particular problem of interest, we relate the perturbing force δF to the specific energy E and to the displacement δu, and express the FRC ω_m in terms of these two parameters. Thus,


    Fig. 12. The effect metric Ω for various types of waves and vibrating systems (Zuber, 1999).

    ω_m = (E/vλ)(δu/λ)    (30)

    Following the procedure illustrated in Fig. 8, we introduce the action parameters A_p and A_f to express the effect metric in terms of the kinematic and kinetic branches:

    A_p/A_f = Ω_m = (E/v^2)(δu/λ)    (31)

    Again, we use the kinematic branch to specify the process. Consequently, for waves, we identify the process action A_p with the wave action defined by

    A = E/ω    (32)

    In discussing wave phenomena, Whitham (1974) and Johnson (1997) note that the wave action, defined by Eq. (32), is a quantity even more fundamental than the energy, inasmuch as it is conserved, while neither wave energy nor frequency is.

    Substituting Eq. (32) into Eq. (31) yields:

    1/ω_m = δu/v    (33)

    which, upon substitution into Eq. (30), generates the FRC ω_m for waves and vibrations:

    ω_m = E^(1/2)/λ    (34)

    whereupon the effect metric for the cell becomes

    Ω_m = ω_m τ_c = E^(1/2)/v    (35)

    Fig. 12 shows the results obtained by applying the FCSA method to various types of wave phenomena and vibrating systems. (Additional examples are not included, given the limitations on the length of this paper.) This figure illustrates the following.

    - For pressure waves (where c is the velocity of sound), Ω_m is identical to the inverse of the Mach number.
    - For surface waves in deep water (where λ is the wavelength), Ω_m is identical to the inverse of the Froude number.
    - For long waves (where p_0 is the depth of the channel), Ω_m is again identical to the inverse of the Froude number.
    - For a vibrating string (where E is the longitudinal elasticity), the FRC ω_m becomes the frequency of vibrations.
    - For a spring–mass cell (where k is the spring constant, λ its length and m the particle mass), the FRC ω_m is the frequency of oscillations.


    - For an array of n spring–mass cells, the FRC ω_m is again the frequency of oscillations.

    I trust that these applications were sufficient to demonstrate that FCSA provides a single methodology to scale and to analyze transfer processes associated with particles, waves, diffusion and vorticity.
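    Two of the Fig. 12 entries can be reproduced with Eqs. (34) and (35). In the sketch below the spring–mass specific energy is written as E = kλ²/m_p, an assumed form chosen so that Eq. (34) returns the familiar frequency √(k/m); all numbers are illustrative:

    ```python
    import math

    def effect_metric_wave(E, v):
        """Effect metric for waves and vibrations (Eq. (35)):
        Omega_m = sqrt(E)/v, where sqrt(E) is the signal (wave) speed."""
        return math.sqrt(E) / v

    # Pressure waves: sqrt(E) is the sound speed c, so Omega_m = 1/Mach.
    c, v = 340.0, 170.0                     # m/s; Mach 0.5
    assert effect_metric_wave(c**2, v) == 2.0

    # Spring-mass cell: with the assumed E = k*lam**2/m_p, the FRC
    # sqrt(E)/lam of Eq. (34) is the oscillation frequency sqrt(k/m_p).
    k, m_p, lam = 100.0, 0.25, 0.1          # N/m, kg, m
    frc = math.sqrt(k * lam**2 / m_p) / lam
    assert abs(frc - math.sqrt(k / m_p)) < 1e-9   # 20 rad/s
    ```
    
    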

    6.2.2. Fractional change of energy due to dissipation

    In this section, I shall outline the application of FCSA to scale and analyze the effect of dissipation on the kinetic energy, summarizing the results of Zuber (1999).

    For this application, the metric M is the kinetic energy (ρVv^2/2), and the agent of change b is the rate of dissipation due to the stress force (F_σ = σA). We let the volume V be in contact with the wall, and evaluate the stress there to obtain the FRC of kinetic energy ω_kE. Thus:

    ω_kE = σ_w A v/(ρV v^2/2) = 2E/(vλ) = 2ω_m    (36)

    The effect metric Ω_kE may therefore be expressed in terms of ω_m or Ω_m to yield the well-established scaling criterion

    Ω_kE = ω_kE τ_c = 2ω_m τ_c = 2Ω_m = f    (37)

    and be evaluated by means of the three relations derived in the preceding section (steps 6, 7 and 8 in Fig. 11).

    We proceed to scale and analyze the rate of dissipation by means of the FCSA method. To this end, we define the specific dissipation rate by:

    ε = F_σ v/(ρV)    (38)

    and express it in terms of the FRC ω_kE from Eq. (36). Thus:

    ε = ω_kE v^2/2 = ω_m v^2    (39)

    We use this relation in the road map (see Fig. 13), which shows two parallel paths: one for ω_m, the other for ε. The parallel paths serve two purposes: they reveal the relationship between the specific energy E and the specific dissipation rate ε, and they demonstrate how each relates to the other parameters.

    Following the FCSA method, we introduce the action parameters A_p and A_f, and obtain the invariant (step 4) and the specific dissipation rate ε in terms of three parameters (step 5).

    Again, we use the process action A_p to specify the transfer processes: one for diffusion-dominated and the other for vorticity-dominated flows (step 6). The third mode (for combined transfer) shown in Fig. 11 has been omitted for the sake of simplicity. These two expressions for A_p lead to two relations for ω_m (steps 7 and 8) and two relations for ε (steps 9 and 10), each valid for the specified transfer process.

    The relations for the vorticity-dominated (step 7) and diffusion-dominated (step 8) processes are identical to the relations shown in Fig. 11 (steps 7 and 6, respectively), albeit expressed in terms of different parameters.

    The dissipation rate for diffusion-dominated flows (step 9) can be expressed in terms of the pipe diameter D and the average velocity v. Thus, from Eqs. (14) and (25), we have

    ε = (64ν/D^2)(v^2/2)    (40)

    valid for Hagen–Poiseuille flow.

    The dissipation rate for vorticity-dominated flows (step 10) is the Kolmogorov −5/3 equation for the inertial subrange (Kolmogorov, 1941a,b). To demonstrate this statement, we shall express the dissipation rate given by

    ε = (1/Ω_0^(1/2)) E^(3/2)/λ    (41)

    in terms of the parameters used by Kolmogorov, i.e. in terms of the wavenumber k,

    k = 1/λ    (42)

    and of the energy spectrum E(k):

    E(k) = E/k = Eλ    (43)

    We can therefore express Eq. (41) in terms of these parameters to obtain:

    E(k) = Ω_0^(1/3) ε^(2/3) k^(−5/3)    (44)


    which is Kolmogorov's equation when the constant Ω_0^(1/3) is identified with Kolmogorov's constant C_k.

    We note, in closing, that Eq. (44) can also be derived (Zuber, 1999) by considering a wave–eddy duality, in analogy to the wave–particle duality in quantum mechanics. The only difference between the two approaches is the exponent of the constant Ω_0: for the wave–eddy duality, the exponent is 4/3 instead of the 1/3 in Eq. (44). The approach described here is preferable, inasmuch as no wave–eddy duality is invoked in the derivation of Eq. (44).
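    The algebra leading from Eq. (41) to Eq. (44) is easily verified numerically. A sketch (the value of Ω_0, chosen so that Ω_0^(1/3) = 1.7 lands near the empirical Kolmogorov constant, and all other numbers are illustrative):

    ```python
    def energy_spectrum(k, eps, Omega0=1.7**3):
        """Inertial-subrange spectrum of Eq. (44):
        E(k) = Omega0**(1/3) * eps**(2/3) * k**(-5/3),
        where Omega0**(1/3) plays the role of Kolmogorov's constant C_k."""
        return Omega0 ** (1 / 3) * eps ** (2 / 3) * k ** (-5 / 3)

    # Consistency check of the derivation: start from Eq. (41),
    # eps = Omega0**(-1/2) * E**(3/2) / lam, invert it for the specific
    # energy E, then form E(k) = E * lam with k = 1/lam (Eqs. (42), (43)).
    Omega0, eps, lam = 1.7**3, 0.01, 0.02
    E = (eps * Omega0**0.5 * lam) ** (2 / 3)      # Eq. (41) solved for E
    assert abs(E * lam - energy_spectrum(1 / lam, eps, Omega0)) < 1e-9
    ```

    The assertion confirms, term by term, that Eqs. (41)–(43) reproduce the −5/3 spectrum of Eq. (44).
    
    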

    6.3. Micro-level scaling

    In this section, we shall apply the FCSA method to scale and analyze the dissipation rate at the micro scale, summarizing the results of Zuber (1999).

    As the spatial scale decreases, diffusion becomes the dominant transfer process (for a discussion, see Zuber, 1999). Consequently, we start with the relations shown in steps 8 and 9 in Fig. 13.

    Consider next the effect of isotropy. For isotropic conditions, the two action parameters must be equal. Consequently, we identify A_f with 2ν, thereby reflecting a random motion (step 11).

    Fig. 13. Road map for scaling the specific dissipation rate at the meso and micro levels.


    We then introduce the relations from step 11 into those of steps 8 and 9, which leads to equations that are valid at the micro level (steps 12, 13 and 14). These equations merit further comment.

    First, the three identities in step 13 are identical (except for the numerical factor of 2) to the relations derived by Kolmogorov, which define the Kolmogorov scale for dissipation. Consequently, we have denoted them by the subscript k. In transforming one identity into another, we made use of Eqs. (13) and (14). Thus:

    A_p/2ν = ω_k λ_k^2/2ν = v_k λ_k/2ν = 1    (45)

    The last identity corresponds to a Reynolds number for dissipation: Re_k = 2.

    Second, given that the specific energy E is determined by the constraint, the three equations in steps 12 and 14 show that the spatial length λ_k, the velocity v_k and the specific dissipation rate ε are also determined by the constraint.

    Third, comparing the third equation in step 13, where ε is expressed in terms of Kolmogorov's dissipation velocity v_k, with Eq. (19) for the friction velocity v*, it can be seen that these two equations are identical. Consequently, these two velocities correspond to each other, and

    v*^2 = v_k^2 = E    (46)

    which, for a given specific energy E, is the parameter for scaling velocities.

    Fourth, the effect metric Ω_k at the micro-scale level is expressed in terms of the FRC ω_k and the micro-level clock time τ_k defined by Eq. (8). Thus,

    Ω_k = ω_k τ_k = ω_k λ_k/v_k    (47)

    which, in view of Eq. (45), becomes

    Ω_k = 1    (48)

    indicating that, at the micro level, the entire energy has been dissipated during the micro-level clock time τ_k.
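    The micro-level identities can be packaged as a small calculator. The sketch assumes the step-13 relations in the form v_k λ_k = 2ν and ε = v_k^4/(2ν), an inferred pairing consistent with Eq. (45) and with Kolmogorov's scales up to the factor of 2 noted above; the fluid properties are illustrative:

    ```python
    def kolmogorov_scales(nu, eps):
        """Micro-level (dissipation) scales in the normalization assumed
        here: v_k*lam_k = 2*nu and eps = v_k**4 / (2*nu). These reproduce
        Kolmogorov's dissipation scales up to factors of 2."""
        v_k = (2.0 * nu * eps) ** 0.25    # dissipation velocity
        lam_k = 2.0 * nu / v_k            # dissipation length
        tau_k = lam_k / v_k               # micro-level clock time (Eq. (8))
        return lam_k, v_k, tau_k

    nu, eps = 1e-6, 0.01                  # water-like viscosity, W/kg
    lam_k, v_k, tau_k = kolmogorov_scales(nu, eps)

    assert abs(v_k * lam_k / nu - 2.0) < 1e-9     # Re_k = 2    (Eq. (45))
    omega_k = 2.0 * nu / lam_k**2                 # FRC at the micro level
    assert abs(omega_k * tau_k - 1.0) < 1e-9      # Omega_k = 1 (Eq. (48))
    ```

    The two assertions check the closing statements of this section: the dissipation Reynolds number is 2, and the effect metric at the micro level is unity.
    
    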

    I hope that the applications presented have both demonstrated and confirmed that the FCSA method provides a single paradigm for scaling and analyzing transfer processes across hierarchical scales ranging from nuclear reactors to Kolmogorov's micro scale.

    7. Analogies

    In this section, I would like to note briefly three analogies: one between the concepts and equations summarized in this paper and those of quantum mechanics, and the other two between fluid dynamics and two biological processes. Given that these analogies deal with topics outside the field of T-H, I shall not discuss them in any detail (these subjects will be covered in separate publications). I am citing them here merely to illustrate the versatility of the FCSA method.

    7.1. Analogy with quantum mechanics

    We have already noted that Eq. (21) is the mathematical analog of de Broglie's equation in quantum mechanics. Fig. 14 shows that this analogy may be extended to other parameters and equations.

    The analogies illustrated are mathematical in nature. However, there is also a conceptual analogy to be drawn, with potentially significant implications.

    Consider de Broglie's equation. It relates the specific energy E to two velocities, one for the wave and the other for the particle. The first characterizes the kinematics of the process, while the second characterizes the kinetics. Thus, de Broglie's equation displays and combines both features (kinematic and kinetic) of the problem.

    Consider now Eq. (21), which is based upon the concept of fractional change and was derived by uncoupling the kinematics from the kinetics. It also has two velocities: one for the process (v_p) and the other for the fluid particle (v).

    By introducing two action parameters, one for the process (A_p) and the other for the flow (A_f), and relating them to particular flow processes, we were able to identify a process velocity v_p for each flow pattern. As a result, Eq. (21) was used to scale and analyze not only wave processes (as does de Broglie's equation), but other processes as well, including diffusion, vorticity and their combined effect, dissipation, and single-particle and multi-particle vibrations.

    Fig. 14. Analogies (Zuber, 1999).

    Therefore, by developing and utilizing the concept of equal fractional change in conjunction with the concept of action, we have extended the relevance and usefulness of de Broglie's equation to diverse processes and numerous applications.

    7.2. Analogies with biological processes

    I shall now comment briefly on the analogy between fluid dynamics and two biological processes; specifically, the life span of mammals and the hatching time of avian eggs.

    Biological studies of mammals have shown that an allometric relation correlates the observed metabolic rates as a function of body weight with an exponent of approximately −0.25, whereas the life span correlates with an exponent of approximately +0.25 (Kleiber, 1932; Stahl, 1967; McMahon, 1984, and others). Therefore, their product yields the hyperbolic relation shown in Figs. 2 and 15a.

    Thus, for mammals, the hyperbola Ω_0 is a dividing line that separates the region of life (region (1)) from that of the beyond (region (2)).

    In a similar manner, one can obtain a hyperbolic relation (Fig. 15a) for the time to hatch avian eggs. In this case, region (1) corresponds to the egg time, whereas region (2) is the flying time for avians.

    The hyperbola Ω_0 in Fig. 15a indicates that time has the same (symmetric) effect on the life cycle of each species. Inasmuch as life is specified by a hyperbola, different stages are specified by hyperbolae Ω < Ω_0. For example, species reach their reproductive stage at Ω = 0.25Ω_0, maturity at Ω = 0.50Ω_0, lose their reproductive capacity at Ω = 0.75Ω_0, and die at Ω = Ω_0. The effect of time is the same for all, i.e. life is time symmetric.

    The effect metric Ω in Fig. 15a translates the energetic relation of a process into a temporal relation. The constancy of Ω_0 implies time symmetry and conserves the energy ratio. This illustrates the theorem of Noether (1918) (Mills, 199[?]; Oliver, 1994, and others), which states that, to every symmetry in nature, there corresponds a conservation law. Thus, the energy conservation law corresponds to time symmetry. We may therefore conclude that life is a manifestation of Noether's theorem.
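    Only the exponents in this paragraph are taken from the cited studies; the sketch below uses unit prefactors (my assumption) to show that the product ωτ is then independent of body mass, i.e. that all mammals lie on one hyperbola Ω_0:

    ```python
    def metabolic_frc(mass, a=1.0):
        """Allometric FRC (mass-specific metabolic rate) ~ mass**-0.25.
        The prefactor a is illustrative; only the exponent is taken from
        the studies cited in the text (Kleiber, Stahl, McMahon)."""
        return a * mass ** -0.25

    def life_span(mass, b=1.0):
        """Allometric life span ~ mass**+0.25 (same caveat on b)."""
        return b * mass ** 0.25

    # The product omega * tau is independent of body mass: mouse and
    # elephant lie on the same hyperbola Omega_0, as in Figs. 2 and 15a.
    mouse, elephant = 0.02, 5000.0        # kg, illustrative
    assert abs(metabolic_frc(mouse) * life_span(mouse)
               - metabolic_frc(elephant) * life_span(elephant)) < 1e-12
    ```

    This is the biological counterpart of Eq. (11): a high-ω, short-τ species and a low-ω, long-τ species experience the same total fractional change over a lifetime.
    
    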


    We have shown in Section 6.2 that, for compressible flow, the effect metric Ω_m is the inverse of the Mach number. Consequently, lines of constant Ω_m plot as hyperbolae in the ω–τ plane (Fig. 15a), and as straight lines in the v–v_p plane (Fig. 15c). Thus, the metric Ω_m = 1 separates the subsonic flow (Ω_m > 1) from the supersonic flow (Ω_m < 1). In this application, the effect metric translates the kinetic relation of the process into a kinematic one.

    These observations indicate that the process of compressible flow and the life of mammals share several analogous features.

    There is an additional interesting observation that may be made with respect to this analogy. It was observed by von Karman that a shock wave divides space into two regions. Ahead of the shock is the region of silence, of no information (see Fig. 15c), since the velocity of the signal (announcing the change) is lower than that of the particle (airplane). Information is available behind the shock wave, given the awareness of its passage.

    While von Karman considered supersonic flow in the context of space and information, the analogy shown in Fig. 15 suggests that life may be considered in the context of time and information. Thus, as we move through time (or as time passes us by), we have no information as to what lies ahead, or of what the future holds. Information is available to us only from the past (or from what is behind us in time).

    Fig. 15. Analogy between the life span of mammals and fluid dynamics.


    8. Summary and recommendation

    This lecture had two objectives: to contrast the

    effects that complexity and simplicity have (and

    will continue to have) on R&D efforts in T-H;

    and the other, to demonstrate that scaling pro-

    vides the means by which to process information

    in an efficient manner.The complexity versus simplicity comparison

    was made in the context of replicating and non-

    replicating information systems, which allowed

    the following.

    1. To illustrate how ever-increasing complexity in

    formulating and analyzing T-H problems in-

    evitably leads to inefficiency, obsolescence and

    evolutionary failure.

    2. To note that simplicity, which allows for parsi-

    mony, synthesis and clarity of information,

    ensures efficiency, survival and evolutionarysuccess.

    3. To identify the requirements (and the means

    of achieving them) that a successful R&D

    effort must have.

    4. To provide a success path for an R&D effort

    to follow.

To meet the second objective, I summarized the key features of the Fractional Change, Scaling and Analysis method (FCSA), which are simplicity, parsimony, synthesis, efficiency and versatility. These features were demonstrated by applying the FCSA paradigm to various processes. It confirmed that a single concept (simplicity and parsimony) and a single methodology (again, simplicity and parsimony) may be used to:

1. scale all transfer processes associated with particles, waves, diffusion and vorticity (synthesis) across hierarchical levels ranging from Kolmogorov's micro scale to a nuclear reactor (synthesis and efficiency);
2. derive Kolmogorov's scaling relations for the inertial subrange and the micro range (synthesis); and
3. scale across disciplines; for example, from fluid mechanics to biology (versatility).

In the context of the fluid dynamics-life span analogy of Fig. 15, I cannot know what lies ahead. However, I can integrate information from the past to conclude that, by pursuing the path of ever-increasing complexity, the T-H technology will follow doggedly in the footsteps of the dodo bird. I sincerely hope that this will not be the case.

    Appendix A. Efficiency through scaling

This appendix has a twofold purpose: first, to illustrate the inefficiency and, therefore, the wastefulness associated with computer code safety analyses as presently conducted; and, second, to demonstrate, through a simple example, the savings (in terms of effort, time and funds) that may be realized through scaling, and the increased efficiency thereby afforded a safety analysis process.

To my knowledge, the results of LB and SB LOCA safety studies (either experimental or computer based) have never been cast in a dimensionless form by means of appropriate scaling relations. Thus, a full synthesis of the results (information in the context of Fig. 1) has not been achieved.

The consequences of this lack of synthesis were (and still are) inefficiency and wastefulness, inasmuch as each power level, each break size, each break location, each reflood rate, etc., required a separate experiment and a separate computer calculation, both of which are time consuming and expensive.

The already onerous task of having to consider separately the effects of varied and numerous parameters (a piecemeal approach, at best) was greatly augmented by the de-regulation of the power industry, which introduced competition into the environment. The attendant quest for efficiency generated, in turn, the need for best estimate (BE) calculations and for quantification of uncertainties, which, to be done properly, require numerous sensitivity calculations.

Although scaling provides the technical rationale and methodology for reducing the number of parameters in an equation (by casting the equation in a non-dimensional form and expressing it in terms of scaling groups), this, as already noted, was not performed in conjunction with BE calculations. Instead, arm-waving arguments are used to justify a reduction in the number of sensitivity


    calculations, for the sole purpose of reducing,

    thereby, the cost of a safety analysis.

It is my considered opinion that such safety analyses are bound to be most detrimental to nuclear power technology. There are many lessons that should have been learned from past experience and history. However, as my views on this subject have been expressed in various documents available to the public, I shall not debate the issue further here. Instead, I shall endeavor to illustrate, by means of a simple example, the savings and efficiency that may be realized through scaling, and to contrast these positive effects to the wastefulness and inefficiency of a piecemeal approach.

Consider a fluid of density ρ and specific heat cp, flowing at a mass flow rate W through a vessel of volume V, heated by two sources. For a well-stirred vessel, it may be assumed that the fluid temperature in the vessel is equal to that at the outlet. Consequently, the energy balance becomes:

    ρ cp V dT/dt = W cp (Tin - T) + h1 A1 (T1 - T) + h2 A2 (T2 - T)    (A1)

where Tin is the temperature of the fluid at the inlet, T1 and T2 are the temperatures of the first and second sources, respectively, and h and A are the heat transfer coefficient and transfer area of a source.

In the context of the FCSA method, the metric M is the enthalpy in the vessel, whereas the agents of change are the two energy sources, each of which generates a fractional rate of change (FRC) of enthalpy, ω. Thus:

    ω1 = h1 A1 / (ρ cp V) = h1 / (ρ cp ℓ1),  with ℓ1 = V/A1    (A2)

    ω2 = h2 A2 / (ρ cp V) = h2 / (ρ cp ℓ2),  with ℓ2 = V/A2    (A3)

By means of these two parameters and the definition of the system (vessel) clock time given by

    1/τs = W/(ρ V)    (A4)

we can transform Eq. (A1) into

    τs dT/dt = (Tin - T) + Ω1 (T1 - T) + Ω2 (T2 - T)    (A5)

Thus, each source (agent) has its own effect metric Ω, where Ωi = ωi τs.
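The FRCs and effect metrics of Eqs. (A2)-(A5) are straightforward to evaluate numerically. A minimal sketch (all parameter values below are assumed, water-like illustration numbers, not taken from the lecture):

```python
# FRCs (omega) and effect metrics (Omega) for the two-source
# stirred vessel of Eqs. (A2)-(A5). All numbers are assumed.
rho, cp = 1000.0, 4186.0   # density [kg/m^3], specific heat [J/(kg K)]
V, W = 0.5, 2.0            # vessel volume [m^3], mass flow rate [kg/s]
h1, A1 = 500.0, 1.0        # source 1: h [W/(m^2 K)], area [m^2]
h2, A2 = 200.0, 0.5        # source 2

tau_s = rho * V / W                  # system clock time, Eq. (A4)
omega1 = h1 * A1 / (rho * cp * V)    # FRC of source 1, Eq. (A2)
omega2 = h2 * A2 / (rho * cp * V)    # FRC of source 2, Eq. (A3)
Omega1 = omega1 * tau_s              # effect metric of source 1
Omega2 = omega2 * tau_s              # effect metric of source 2

print(f"tau_s = {tau_s:.0f} s, Omega1 = {Omega1:.4f}, Omega2 = {Omega2:.4f}")
```

Note that Ωi = ωi τs reduces to hi Ai/(W cp), i.e. the ratio of a source's transfer capacity to the enthalpy flow carried through the vessel.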

Inasmuch as we are interested in changes toward or away from a state of equilibrium, we can use the equilibrium temperature T∞ to express the temperature in a non-dimensional form. Thus,

    T+ = (T - Tin)/(T∞ - Tin)    (A6)

where

    T∞ - Tin = [Ω1 (T1 - Tin) + Ω2 (T2 - Tin)]/(1 + Ω1 + Ω2)    (A7)

is obtained from the steady-state solution of Eq. (A5).

Expressing the temperature T in Eq. (A5) in terms of the dimensionless temperature T+ results in

    τs dT+/dt = (1 + Ω1 + Ω2)(1 - T+)    (A8)

By defining the dimensionless time as

    t+ = (1 + Ω1 + Ω2) t/τs    (A9)

we can transform Eq. (A8) into

    dT+/dt+ = 1 - T+    (A10)

which, with the initial condition

    T+ = 0 at t+ = 0    (A11)

integrates into:

    T+ = 1 - exp(-t+)    (A12)

and, in view of Eqs. (A6) and (A9), can be transformed into:

    (T - Tin)/(T∞ - Tin) = 1 - exp[-(1 + Ω1 + Ω2) t/τs]    (A13)
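The collapse asserted by Eqs. (A12) and (A13) is easy to check numerically: integrating the dimensional balance, Eq. (A1), for any assumed parameter set and replotting in the (t+, T+) coordinates of Eqs. (A9) and (A6) reproduces the single curve 1 - exp(-t+). A minimal sketch (the helper name `integrate_A1` and all numbers are illustrative assumptions):

```python
import math

def integrate_A1(rho, cp, V, W, Tin, sources, dt=0.05, n_clocks=5.0):
    """Explicit-Euler integration of Eq. (A1); sources = [(h, A, Ts), ...].
    Returns samples of (t+, T+) per Eqs. (A9) and (A6)."""
    tau_s = rho * V / W                                # clock time, Eq. (A4)
    Omega = [h * A / (W * cp) for h, A, _ in sources]  # effect metrics
    # Equilibrium temperature from the steady state, Eq. (A7):
    Tinf = (Tin + sum(O * Ts for O, (_, _, Ts) in zip(Omega, sources))) \
           / (1.0 + sum(Omega))
    T, t, samples = Tin, 0.0, []
    while t < n_clocks * tau_s:
        dTdt = (W * cp * (Tin - T)
                + sum(h * A * (Ts - T) for h, A, Ts in sources)) / (rho * cp * V)
        T, t = T + dTdt * dt, t + dt
        samples.append(((1.0 + sum(Omega)) * t / tau_s,   # t+, Eq. (A9)
                        (T - Tin) / (Tinf - Tin)))        # T+, Eq. (A6)
    return samples

# Two quite different (assumed) configurations land on the same curve:
for srcs in ([(500.0, 1.0, 350.0), (200.0, 0.5, 400.0)],
             [(1200.0, 2.0, 330.0)]):
    pts = integrate_A1(1000.0, 4186.0, 0.5, 2.0, 300.0, srcs)
    err = max(abs(Tp - (1.0 - math.exp(-tp))) for tp, Tp in pts)
    print(f"max deviation from 1 - exp(-t+): {err:.2e}")
```

Every dimensional integration, whatever the power level, source temperatures or transfer areas, falls (to within the integration error) on the single dimensionless curve of Eq. (A12).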

Referring to these results, we observe the following.

(1) For one source only (say Ω1), Eq. (A13) reduces to the equation cited in Bird et al. (1960) (p. 490).


(2) The method can be extended to any number of sources and/or sinks.

(3) Given that

    Ω1 + Ω2 = (ω1 + ω2) τs    (A14)

we may define the FRC for the system by

    ωs = Σi ωi    (A15)

and the effect metric for the system by

    Ωs = Σi Ωi    (A16)

Thus:

    1 + Ω1 + Ω2 = 1 + ωs τs = 1 + Ωs    (A17)

indicating that the FRCs ω and the effect metrics Ω are additive.

(4) Consider a system transient induced by a set of transfer processes. Given that, to each process, there corresponds a metric Ω (which quantifies its effect), the attendant set of Ω metrics can be used to establish a hierarchy that ranks the processes according to their impact on the transient. (The larger the effect metric Ω, the more important the process.)

Such a quantitative, hierarchical ranking can be used for both experimental and computer-based studies.

For experiments, the hierarchy provides a quantitative basis for establishing the design and operation of a test facility. Specifically, it identifies which effect metric Ω must be preserved in order to assure that a transfer process will have the same effect in the prototype as that observed in the test facility.

For computer-based analyses, the hierarchy establishes and ranks the modeling capabilities that a code must have in order to assure its applicability to a specific transient in a NPP.
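Such a ranking is trivial to automate once the effect metrics are in hand. A schematic sketch (the process names and Ω values below are invented placeholders, not results from any actual plant analysis):

```python
# Hierarchical ranking of transfer processes by their effect metrics Omega.
# The Omega values below are invented placeholders for illustration only.
processes = {
    "process A": 1.8,
    "process B": 0.9,
    "process C": 0.2,
    "process D": 0.05,
}

# Larger Omega => larger impact on the transient => higher rank.
hierarchy = sorted(processes, key=processes.get, reverse=True)

# Effect metrics are additive (Eq. (A16)), so the system-level metric is:
Omega_system = sum(processes.values())

for rank, name in enumerate(hierarchy, start=1):
    print(f"{rank}. {name}: Omega = {processes[name]}")
print(f"system effect metric Omega_s = {Omega_system}")
```

The resulting hierarchy tells an experimenter which Ω to preserve in a facility, and a code developer which models matter most for the transient at hand.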

(5) Consider now Eq. (A1). It has three design parameters (V, A1, A2), two property parameters (ρ, cp) and six operational parameters (W, Tin, h1, T1, h2, and T2).

In the context of this particular example, the information of interest to a designer or to a plant engineer would be concerned with the effects these parameters (and their variations) can have on the fluid temperature in the vessel.

This information is contained in and provided by a single relation (Eq. (A12)), made possible by scaling and expressing Eq. (A1) in a dimensionless form (Eq. (A10)).

This simple example demonstrates the attributes of scaling in processing information through the four stages represented in Fig. 1. Specifically:
- the acquisition of information was effective, inasmuch as it required a single integration (i.e. of Eq. (A10)) (simplicity);
- the storage of information was optimal, inasmuch as it is represented by a single curve in the T+ versus t+ plane (synthesis and parsimony);
- the retrieval of information is fast and easy, inasmuch as it can be obtained from a single curve (efficiency);
- the transmission of information is intelligible, inasmuch as Eq. (A13) shows the effects of the various parameters (clarity).

(6) Consider now solving Eq. (A1) by means of a computer code. Inasmuch as this equation is cast in a dimensional form, to evaluate the effects of any parameter requires a separate integration (one for each variation).

Such an acquisition of information, one piece at a time, is ineffective, time consuming and expensive. Furthermore, the storage and retrieval of information is most inefficient, given that each integration is displayed by a separate curve. Multiple curves are thereby generated, which vary from facility to facility and from one test condition to another. This not only precludes parsimony and synthesis, but it renders the transmission of information more difficult, if not unintelligible.

I hope that this simple example has served to:

1. demonstrate the efficiency that can be gained through scaling; and
2. illustrate the inefficiency and wastefulness of the repetitive approach taken by computer-based safety studies as presently conducted.


Unfortunately, this piecemeal approach to experiments and to computer-based analyses, in which the effects of various parameters are tested and evaluated separately, one at a time, reflects a cultural attitude that seems to permeate the current T-H technology. Although cultures are not easily changed, I see no reason to promulgate the inane wastefulness of the code jockey attitude.

With regard to the latter, however, I am quite optimistic. I am convinced that only those organizations that stress efficiency in processing information through the four stages of Fig. 1 will be successful and will survive in a competitive environment. Conversely, maladaptive organizations that allow the code jockey culture to prevail will be inevitably relegated to the evolutionary junk heap.

    References

Bird, R.B., Stewart, W.E., Lightfoot, E.N., 1960. Transport Phenomena. Wiley, New York, p. 216.
Blackmore, S., 1999. The Meme Machine. Oxford University Press, Oxford.
Boyack, B.E., Catton, I., Duffey, R.B., Griffith, P., Katsma, K.R., Lellouche, G.B., Levy, S., Rohatgi, U.S., Wilson, G.E., Wulff, W., Zuber, N., 1990. An overview of the code scaling, applicability and uncertainty evaluation methodology. Nucl. Eng. Des. 119, 1-17.

Brodie, R., 1996. Virus of the Mind. Integral Press, Seattle, WA.
Bronowski, J., 1973. The Ascent of Man. Little, Brown and Co., Boston, MA, pp. 155-211.
Dawkins, R., 1976. The Selfish Gene. Oxford University Press, Oxford, pp. 203-215.
Ishii, M., Revankar, T., Leonardi, R., Dowlati, R., Bertodano, M.S., Babelli, I.I., Wang, W., Pokharna, H., Ransom, V.H., Viskanta, R., Han, J.T., 1998. The three-level scaling approach with application to the Purdue University Multi-Dimensional Integral Test Assembly (PUMA). Nucl. Eng. Des. 186, 177-211.
Jefferys, W., Berger, J., 1992. Ockham's razor and Bayesian analysis. Am. Sci. 80, 64-72.
Johnson, R.B., 1997. A Modern Introduction to the Mathematical Theory of Water Waves. Cambridge University Press, Cambridge, p. 98.

Kleiber, M., 1932. Body size and metabolism. Hilgardia 6, 315-353.
Kolmogorov, A.N., 1941a. Dissipation of energy in locally isotropic turbulence. Dokl. Akad. Nauk SSSR 32, 16-18 (in Russian).
Kolmogorov, A.N., 1941b. The local structure of turbulence in incompressible viscous fluids at very large Reynolds numbers. Dokl. Akad. Nauk SSSR 30, 9-13 (in Russian).

McMahon, Th.A., 1984. Muscles, Reflexes and Locomotion. Princeton University Press, Princeton, NJ, pp. 283 ff.
Mills, R.I., 1994. Space, Time and Quanta. W.H. Freeman, New York, pp. 89, 141.
Noether, E., 1918. Invariante Variationsprobleme. Kgl. Ges. Wiss. Nachr. Göttingen, Math.-Phys. Kl. 2, 235-257.
Oliver, O., 1994. The Shaggy Steed of Physics. Springer, New York, pp. 36-70.
Peterson, P.F., Schrock, V.E., Greif, R., 1998. Scaling for integral simulation of mixing in large, stratified volumes. Nucl. Eng. Des. 186, 213-224.

Stahl, W.R., 1967. Scaling of respiratory variables in mammals. J. Appl. Physiol. 22, 453-460.
Travkin, V.S., Catton, I., 2000. Transport phenomena in heterogeneous media based on volume averaging theory. In: Green, G. (Ed.), Advances in Heat Transfer. Academic Press, New York (in press).
Wilcox, D.C., 1998. Turbulence Modeling for CFD. DCW Industries Inc., La Cañada, California, pp. 36-39.
Whitham, G.B., 1974. Linear and Nonlinear Waves. Wiley, New York, p. 245.

Zuber, N., 1991. A hierarchical, two-tiered scaling analysis. In: Technical Program Group, An Integrated Structure and Scaling Methodology for Severe Accident Technical Issue Resolution, Appendix D. NUREG/CR-5809, US Nuclear Regulatory Commission.
Zuber, N., 1993. Two-phase systems in the context of complexity: a hierarchical approach for analysis and experiments. Invited Lecture at the ASME Fluids Engineering Conference, Washington, DC, 20-24 June 1993.
Zuber, N., 1994. Scaling and analysis of complex systems. Invited Lecture at PSA/PAX Severe Accidents '94, ES and EXS, Ljubljana, Slovenia, 17-20 April 1994.
Zuber, N., 1995. A general method for scaling and analyzing transport processes. Invited Lecture at the ASME-JSME Annual Conference and Exhibit, Hilton Head, SC, 13 August 1995.
Zuber, N., 1999. A general method for scaling and analyzing transport processes. In: Lehner, M., Mewes, D., Tauscher, R., Dinglreiter, U. (Eds.), Applied Optical Measurements. Springer Verlag, Berlin.
Zuber, N., Wilson, G.E., Boyack, B.D., Catton, I., Duffey, R.B., Griffith, P., Katsma, K.R., Lellouche, G.E., Levy, S., Rohatgi, U.S., Wulff, W., 1990. Evaluation of scale capabilities of best estimate codes. Nucl. Eng. Des. 119, 97-109.
Zuber, N., Wilson, G., Ishii, M., Wulff, W., Boyack, B., Dukler, A., Griffith, P., Healzer, J., Henry, R., Lehner, Levy, S., Moody, F., Pilch, M., Sehgal, B., Spencer, Theofanous, T., Valente, J., 1998. An integrated structure and scaling methodology for severe accident technical issue resolution. Nucl. Eng. Des. 186, 1-22.