
Large-Scale Simulations in Structural Mechanics: Retrospective and Prospective Views

by Dr. Eugene Sevin

Plenary Talk to the ISIEMS 14
Seattle, WA, 20 October 2011

    ABSTRACT

The development of finite element modeling in structural mechanics is traced from its early beginnings, through the period of aboveground and underground nuclear testing that gave rise to the underlying experimental database, and the Cold War military requirements that led to today's large-scale simulation capabilities. Limitations of that capability are explored by means of a case study seeking to validate ground shock damage modeling of underground tunnels. Required improvements to both geologic modeling of discontinuities and the mechanics of code implementation are identified.

    I INTRODUCTION & HISTORICAL OVERVIEW

Originally I had in mind titling this talk "Weapons and Targets," as it reminded me of the old joke about the difference between Mechanical and Civil engineers: Mechanical Engineers build weapons; Civil Engineers build targets. I suppose at the time Civil Engineers were meant to bear the brunt of the joke, but if so, the tables have turned. The bad guys' response to the accuracy of our precision-guided conventional weapons is to go underground wherever they can. Thus, following the law of unintended consequences, our prowess in weapon accuracy ushered in the era of hard and deeply buried targets, which are difficult-to-impossible to defeat with conventional weapons.

But that's not the subject of my talk. Rather, my remarks will address (1) the origin of critical elements in computational structural modeling, (2) the limits in current large-scale modeling capability as I see them, and (3) my view of what the future may hold for improvements in both code physics and code mechanics.

But to begin at the modern beginning: the ASCE (American Society of Civil Engineers) held its first structural engineering conference devoted to electronic calculation in 1958, which I attended. The late Professor Nathan Newmark, who both inspired and chaired the conference and was then Head of the Civil Engineering department at the University of Illinois, said in his introductory remarks that "the high-speed computer is more than a new tool; its use involves a revolution in methods, concepts and even in education."

As I said, I was at the 1958 ASCE conference and presented a paper on dynamic column buckling. The numerical solutions were done on an IBM 650 computer; I carry more computing power with me today on my iPhone.

Earlier that year (1958), Newmark had published a paper in Civil Engineering magazine, "A Revolution in Design Practice" (still worth reading today), in which he argued that the real impact of computers on the engineering profession would be in the direction of implementing more rigorous analysis, not in simply doing the old things faster. As in so many other things during his lifetime, Professor Newmark got that exactly right.

The 2nd ASCE Conference on Electronic Computation was held two years later, in September 1960. There, Professor Ray Clough presented his seminal paper "The Finite Element Method in Plane Stress Analysis"; while this was not the very first paper on the finite element method, Clough was the first to give it the name Finite Element Method[1], so today we can belatedly celebrate its Golden Anniversary. Thus, by 1960 the essential computational modeling tool to advance the art of computational mechanics was defined (although the name was yet to be coined).

From the perspective of our technical community, which relies on the FEM to predict structural failure, the development of a supporting experimental database is crucial. In fact, this began almost immediately after World War II, largely in the context of U.S. nuclear weapon testing.

The period from 1945 to 1992 saw a mammoth investment by the U.S. in some 1,000 nuclear tests[2], atmospheric and underground. Most of these were for weapon development, but there were 100 atmospheric weapons effects tests and 67 underground tunnel tests during this period. That led to a greatly improved understanding of airblast and ground shock loading of aboveground and shallow buried structures, as well as the ground shock response of deeply buried structures. Many of these tests were well-instrumented scientific experiments that can still serve us well today[3].

[1] R.W. Clough, "Early History of the Finite Element Method from the Viewpoint of a Pioneer," International Journal for Numerical Methods in Engineering, 2004. Other innovators who Clough cites as sharing in the development of the FEM were John H. Argyris, M. J. Turner and O. C. Zienkiewicz.

[2] United States Nuclear Tests, July 1945 through September 1992, DOE/NV209 (Rev. 14), December 1994.

[3] DTRA maintains an online library (STARS) that includes a complete collection of nuclear and conventional weapon test reports, as well as individual guides to the nuclear weapons effects test database. However, much of this material is limited in distribution to U.S. government agencies and U.S. government contractors.


Cold War operational requirements, with their emphasis on nuclear weapons, provided the primary motivation for research in computational simulation of the effects of munitions on structures. At the same time, force protection developments against conventional weapon threats were gaining prominence within the military services, particularly the US Army. These relied heavily on field experimentation, which was practical for conventional high explosive weapons but not so practical for nuclear weapons. The string of US Embassy bombings in the mid-70s, and especially the October 1983 bombing of the Marine Barracks in Beirut, led to increased emphasis within the military on structural protection against terrorist-type HE threats.

In time, terrorist threats, along with continuing targeting concerns for hard and deeply buried targets, came to replace Cold War requirements. While those interests exist today, the focus of research and application has turned almost entirely from nuclear to conventional weapons.

The Defense Nuclear Agency (DNA, remnants of which are now incorporated in the Defense Threat Reduction Agency, DTRA) was responsible for designing and conducting DoD's nuclear weapons effects tests. At that time, DNA also was the principal funding agency for related NWE application research within the Army, Navy and Air Force. Under this general R&D support, the Services extended the experimental database to include conventional weapons effects and a broad range of structural modeling and response investigations.

Well, that's a very brief historical overview of how we got to where we are today. So where are we today, and what do we recognize as shortcomings in today's structural simulation capabilities? I don't claim a global view of present-day capabilities, but I do have a reasonable appreciation of what the DTRA-supported R&D community can do in the hard target area.

    II PRESENT CAPABILITIES & SHORTCOMINGS

We have codes today that are capable of running very large 3D simulations (with millions of finite elements) that are advertised as able to predict fracture and structural failure under dynamic loads. These analyses tend to be major undertakings, requiring the largest supercomputers and involving considerable time and expense to set up, execute and interpret. But what is the basis for estimating the predictive accuracy of such simulations? Verification and validation of these models and software generally is accepted as being essential; however, these processes seldom are well documented, nor is accuracy in the face of modeling and input parameter uncertainty often assessed quantitatively.

These views may sound harsh to you, and there may be exceptions of which I'm unaware, but let me flesh out the argument with a recent case study of underground facilities. I think the shortcomings evidenced in this study are of general applicability, and overcoming them points the way to needed improvements in structural modeling capabilities.

Over the five-year span from 2003 to 2007, DTRA conducted an analytical and experimental study to determine the practicality and predictive accuracy of selected high fidelity (first-principles physics) codes in modeling the ground shock response of deeply buried tunnels. Significant emphasis in the design of this study was placed on software verification and model validation.

As I use these terms, verification is concerned with correct evaluation of the underlying mathematical models, while validation deals with how well the models perform for their intended purposes. Or, more simply put, verification asks whether we're solving the equations correctly, while validation asks whether these are the correct equations.

In the DTRA study, code verification was evaluated on the basis of 20 computational problems of increasing complexity; these problems focused mainly on modeling of joints and other types of geologic discontinuities. Validation typically involves comparison of prediction with experiment. Here, the validation database consisted of two so-called precision tunnel tests in manufactured jointed limestone at laboratory scale and two 3,000 lb high-explosive (HE) field tests in a limestone quarry, one with and one without tunnels. A larger-scale tunnel experiment was planned but did not take place. Instead, a simulation was performed on a portion of the DIABLO HAWK underground nuclear test (UGT) structures test bed, of which more later.

At the outset of the program, DTRA asked me to assemble a Modeling Assessment Group (MAG) composed of independent subject matter experts to (1) review the V&V planning effort, (2) assess the program's success, and (3) recommend confidence levels and limitations in code application.

Five experienced modeling teams participated in the study, each using its own codes. Each code can be described as a general-purpose finite element solver embodying non-linear, large-deformation, explicit time-domain formulations. One of the codes was Eulerian; the others were either fully Lagrangian or Arbitrary Lagrangian-Eulerian (ALE) codes. Most had adaptive mesh refinement. One modeling team used the commercial ABAQUS/Explicit code. All of the codes included a variety of element and interface properties to model the dynamic response of structures in a jointed rock mass.

The five teams operated independently, more competitively than cooperatively, although frequent joint meetings were held with the MAG and with an internal DTRA analysis team to compare interim results. A considerable effort was made to standardize model definitions, input parameters, and code outputs to facilitate comparison of results. Still, the modeling teams largely brought their own expertise to finalizing the simulation models.


I don't have the time to tell you much about the program, which is well-documented[4,5,6] and generally available to most in this audience. The bottom line is that while a number of important lessons were learned, the program fell short of its original validation goals, both because of limited experimental results and because the modeling efforts proved to be more complicated and required more extensive resources than originally envisioned. Thus it was not possible to assess quantitatively the overall accuracy and predictive uncertainty among the high fidelity codes. However, it was possible to rank order the performance of the modeling teams based on the intermediate-scale tunnel test.

The MAG judged the overall accuracy of the best-performing team to be about 30-40 percent in terms of peak velocities and tunnel damage. The primary source of uncertainty was in assigning the physical properties of the rock. Also, the prediction of rock failure by all of the teams was essentially heuristic in nature and an obvious area for improvement.

It also was clear that the skills of the modeling teams and their experience in modeling deep underground structures were integral to overall code performance and accuracy.

    III CASE STUDY: LARGE-SCALE SIMULATION

The large-scale tunnel test was intended as the central validation experiment, and its cancellation was a major setback to realizing the goals of the project. Instead, it was decided to use the results from an existing UGT as a proxy validation data set.

The DIABLO HAWK UGT was conducted in 1978 at the Nevada Test Site (NTS); it was a test of tunnels of various sizes and reinforcement designs in NTS tuff[7], a weak, nearly saturated sedimentary rock with a highly faulted, irregular 3D geology. This test was selected from other tests of tunnels because of its well-documented geology, instrumentation, and range of damage results. While this experiment does not meet the usual conditions for validation, as the experimental results are known in advance, the complexity of the test bed and the different levels of damage sustained by the test structures made it a very useful and revealing exercise, nonetheless.

[4] Final Report of the Tunnel Target Defeat ACTD Modeling Assessment Group, Northrop Grumman Report, September 2007.

[5] Proceedings from the 78th Shock and Vibration Symposium on CD-ROM, Shock and Vibration Information Analysis Center, 2008.

[6] Supplement to the Final Report of the Tunnel Target Defeat ACTD Modeling Assessment Group: The DIABLO HAWK Simulation, Northrop Grumman Report, February 2009.

[7] The NTS is geologically situated in the Basin and Range province, a region characterized by folded and faulted Paleozoic sedimentary rocks overlain by Tertiary volcanic tuffs and lavas. NTS tuff is a weak, nearly saturated rock (unconfined compression strength of about 0.2 kbars). The majority of the faults within the tunnel complex have very high dip angles and are nearly parallel to the pre-Tertiary rock ridge.

The DIABLO HAWK simulation was carried out by three of the five original modeling teams using the same codes and modeling approaches as for the original test simulations. Their results were sobering and served to curb somewhat the enthusiasm for the earlier findings. These simulations point up limitations in the codes' modeling capabilities, in the ability to characterize complex geologic sites, and in the ability to represent massively discontinuous media. They also provide insight into desired future code capabilities.

I am going to show you three charts of the DIABLO HAWK event: (1) a plan layout of the test bed showing the region of the simulation [Chart 1], (2) a cross-section of the geology showing bedding planes and faults as prescribed for the modelers [Chart 2], and (3) post-test photographs showing large block motions in a test tunnel [Chart 3]. Available time doesn't allow me to say more about the test here, but a sense of the complexity of the simulation can be garnered from these charts.

Chart 1  DIABLO HAWK Test Bed Showing Damage Levels


Chart 2  Faults and Beds in the GAMUT Model

Chart 3  DIABLO HAWK Post-Test Block Motions Along Faults (photo callouts: block motion along a fault, dip-slip 2 and strike-slip 10; block motion along a bedding plane, 3 laterally and 5-10 longitudinally)


It was anticipated that the modelers would represent the faults and bedding planes as discrete surfaces capable of transmitting normal and shear tractions according to a Coulomb friction law. In fact, none of the modelers did so; rather, faults and bedding planes were modeled as conventional finite elements with Mohr-Coulomb failure surfaces.
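To make the distinction concrete, here is a minimal statement of the two idealizations in standard notation (my symbols, not any particular code's input). A discrete sliding interface governed by Coulomb friction transmits normal and shear tractions across the surface and permits slip once the shear traction reaches the friction limit,

\[ |\tau| = \mu\,\sigma_n , \]

whereas a conventional continuum element with a Mohr-Coulomb failure surface limits the shear stress on every plane through the material to

\[ |\tau| \le c + \sigma_n \tan\phi , \]

where \(\tau\) is the shear traction or stress, \(\sigma_n\) the normal compressive stress, \(\mu\) the friction coefficient, \(c\) the cohesion, and \(\phi\) the friction angle. The two criteria look similar, but the interface form localizes arbitrarily large relative motion on a surface, while the element form smears failure over the element volume.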

The simulations exhibited features that made it difficult to evaluate overall accuracy in a consistent manner. For example, one team's rock model was much weaker than the DIABLO HAWK tuff due to a combination of material and element properties and mesh design choices. This led to large overestimates of displacements in comparison to both the free-field measurements and high-speed camera data.

The Eulerian model used by another team reduced the weakening effect of faults and bedding planes due to averaging of properties at the element level. Consequently, their rock model was stronger than the DIABLO HAWK tuff and resembled a more homogeneous medium.

The third team's simulation provided the most faithful representation of the specified DIABLO HAWK test bed geology; still, they underestimated damage to the intermediate-range structures and showed other features not evident in the data.

The primary lesson learned from the DIABLO HAWK simulations was that the modelers didn't represent media with extensive discontinuities very well. Discontinuities such as faults and bedding planes, and how they are modeled, can have a pronounced effect on predicted ground shock, and can influence ground shock energy flow by channeling and diversion. Regions with oriented flaws or fractures behave as anisotropic materials that may give rise to highly directional stress-wave propagation, and to date they cannot be accurately modeled as intact isotropic material with equivalent material properties.

The simulation also highlighted the fact that software validation has meaning only within relatively narrow bounds of application. The same code and modeling team that performed best in the intermediate-scale HE tunnel test didn't fare nearly so well in the DIABLO HAWK simulation. This reality needs to be kept in mind when the notion of validated software is bandied about, as it may have limited practical relevance.

    IV NEEDED MODELING IMPROVEMENTS

The DTRA study, rather than validating available software and modeling capabilities, demonstrated the need for improvements in modeling technology, in terms of both physics modeling and code mechanics. Requirements for verification and validation merit separate consideration, as does the manner of quantifying uncertainty.


For the application I've been discussing, improvements in physics modeling would include higher fidelity constitutive models for faults and joints, to include material softening, failure and fracture modeling, scale and rate effects, and history dependence. Because of the impracticality of characterizing real-world geologic sites to the level of detail required by a deterministic model, there is a need to be able to selectively model joint sets interacting inelastically with intact material, either explicitly (as for a few well-defined faults) or implicitly by means of a homogenized representation of massive jointing. Probabilistic modeling also needs to be considered.
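One familiar illustration of what a homogenized (equivalent-continuum) representation of jointing can look like, offered only as an example and not as what the study teams implemented, replaces a single joint set of mean spacing \(s\) and normal stiffness \(k_n\) in rock with intact modulus \(E_r\) by an equivalent modulus normal to the joints:

\[ \frac{1}{E_{eq}} = \frac{1}{E_r} + \frac{1}{k_n\, s} , \]

with an analogous expression for the shear modulus in terms of the joint shear stiffness. The resulting smeared material is softer across the joints than along them, which is consistent with the directional, anisotropic behavior of jointed regions noted earlier.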

DTRA organized a workshop on Ground Shock in Faulted Media in January 2010 that dealt with a number of such modeling and code improvements. The proceedings of the workshop, which involved funded studies, are now available from DTRA.

Looking to the future, I believe we will see a much stronger focus on dealing comprehensively with uncertainties[8] in both deterministic and probabilistic formulations. Ground shock is an obvious example, where faults, joint systems, and other inhomogeneities typically are not known at the level of detail required by the simulation.

The DOE weapon laboratories have been pursuing a probabilistic risk assessment methodology (referred to as Quantification of Margins and Uncertainties, or QMU) under their Science-Based Stockpile Stewardship Program to predict the reliability, safety and security of nuclear weapons. The relevance of this technology to our community for estimating predictive uncertainty remains to be determined.
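In its simplest textbook form, QMU compares a performance margin \(M\) (the distance between the best-estimate response and a failure threshold) with the aggregate uncertainty \(U\) in that estimate, and asks that the confidence ratio

\[ \frac{M}{U} \]

be comfortably greater than one. Whether a single ratio of this kind is a useful summary for ground shock predictions, where the uncertainty is dominated by poorly characterized geology, is part of what remains to be determined.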

A committee of the National Academies currently is examining practices for verification, validation, and uncertainty quantification of large-scale computational simulations in several research communities. Their report, Mathematical Science Verification, Validation, and Uncertainty Quantification, is expected late this year or early next.

A more conventional approach is to characterize predictive uncertainty on the basis of extensive variation-of-parameter studies; that is, by repeating a simulation systematically over a broad range of model assumptions and parameter values in Monte Carlo-like fashion. For large-scale simulations today this probably is unaffordable except in very high-value instances, but it is something I think we need to plan for as a regular practice. Key to making this a practical approach will be the ready availability of greatly increased computing power together with significant improvements in code mechanics.
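As a small, purely illustrative sketch of what such a routine variation-of-parameter study might look like once setup and run times allow it, the fragment below samples a few uncertain rock properties and summarizes the spread of a single response quantity. The property names, ranges, and the stand-in simulation function are hypothetical and not taken from the DTRA study; in a real study the call would launch a full mesh-build and solver run.

    # Illustrative Monte Carlo-style variation-of-parameter study (Python/NumPy).
    # The "simulation" below is a stand-in response surface so the sketch runs
    # end to end; a real study would substitute a full mesh-build and FEM run.
    import numpy as np

    rng = np.random.default_rng(seed=1)

    def run_simulation(friction_angle_deg, cohesion_mpa, joint_stiffness_gpa_per_m):
        """Placeholder for one large-scale run; returns a scalar response
        (think peak tunnel-wall velocity) as a smooth function of the inputs."""
        phi = np.radians(friction_angle_deg)
        return 10.0 / (cohesion_mpa * np.tan(phi) + 0.05 * joint_stiffness_gpa_per_m)

    n_runs = 200
    # Sample the uncertain properties over assumed (illustrative) ranges.
    friction_angle = rng.uniform(25.0, 40.0, n_runs)   # degrees
    cohesion = rng.uniform(2.0, 8.0, n_runs)           # MPa
    joint_stiffness = rng.uniform(5.0, 50.0, n_runs)   # GPa/m

    responses = np.array([
        run_simulation(friction_angle[i], cohesion[i], joint_stiffness[i])
        for i in range(n_runs)
    ])

    # Report a spread of outcomes rather than a single deterministic answer.
    print(f"median response: {np.median(responses):.2f}")
    print(f"5th-95th percentile band: {np.percentile(responses, 5):.2f}"
          f" to {np.percentile(responses, 95):.2f}")

The point of the exercise is the last two lines: the product of such a study is a distribution of outcomes, not one number.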

[8] Comprehensive uncertainty includes uncertainties in domain knowledge, validation referent, modeling, user effects, and interpretation of results.


Improving code mechanics means significantly reducing problem setup time, running time, and the time to interpret results. Constructing a three-dimensional finite element model of a complex system with upward of 2-4 million or more finite elements typically is an extremely labor-intensive, trial-and-error effort, and in the DTRA study it was a bottleneck of the model-building process. For example, in the DIABLO HAWK simulation I've discussed, mesh building absorbed most of the available resources and adversely impacted the modeling teams' ability to fully exploit the simulation (e.g., through even modest sensitivity studies) and interpret its results.

The end goal would be to provide an integrated tool-chain (mesh generation to simulation to post-processing) based on a unified results database that requires no translators or manual intervention to go from one process to the next. For geologic models this should include a capability to generate a mesh conforming to the geometry of a fault and to use a sliding interface algorithm to model arbitrarily large differential tangential motions along the fault surface. Also, XFEM-like technology should be available to model newly generated fractures.

In summary, my view is that while we are capable today of running very large 3D FEM structural response simulations, we do not yet have a solid basis for dealing with uncertainty of problem variables and predictive accuracy. Characterizing failure of geologic and structural materials under dynamic loads needs to progress beyond heuristic criteria. The investment of time and effort to set up, execute, and interpret large-scale simulations needs to be reduced substantially, to where variation-of-parameter-type studies become practical and routine.

The formality of the validation process and the role of experimentation are less clear to me, and I suppose will depend on how high a value is placed on the importance of a particular suite of simulations. However, I do see a need for the development of benchmark-type problems and supporting proof-of-concept experiments that collectively lend credence to the validity of the simulation tools.