High Performance Computing contributions to DoD Mission Success 2002


High Performance Computing contributions to

DoD Mission Success 2002


Report Documentation Page (Form Approved, OMB No. 0704-0188)

Public reporting burden for the collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington, VA 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to a penalty for failing to comply with a collection of information if it does not display a currently valid OMB control number.

1. REPORT DATE MAR 2003 2. REPORT TYPE

3. DATES COVERED 00-00-2003 to 00-00-2003

4. TITLE AND SUBTITLE High Performance Computing contributions to DoD Mission Success 2002

5a. CONTRACT NUMBER

5b. GRANT NUMBER

5c. PROGRAM ELEMENT NUMBER

6. AUTHOR(S) 5d. PROJECT NUMBER

5e. TASK NUMBER

5f. WORK UNIT NUMBER

7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES) Department of Defense, High Performance Computing Modernization Program Office, 1010 North Glebe Road, Suite 510, Arlington, VA 22201

8. PERFORMING ORGANIZATION REPORT NUMBER

9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES) 10. SPONSOR/MONITOR’S ACRONYM(S)

11. SPONSOR/MONITOR’S REPORT NUMBER(S)

12. DISTRIBUTION/AVAILABILITY STATEMENT Approved for public release; distribution unlimited

13. SUPPLEMENTARY NOTES

14. ABSTRACT

15. SUBJECT TERMS

16. SECURITY CLASSIFICATION OF:

17. LIMITATION OF ABSTRACT Same as Report (SAR)

18. NUMBER OF PAGES

194

19a. NAME OF RESPONSIBLE PERSON

a. REPORT unclassified

b. ABSTRACT unclassified

c. THIS PAGE unclassified

Standard Form 298 (Rev. 8-98) Prescribed by ANSI Std Z39-18


Approved for public release; distribution is unlimited.

For more information about the DoD HPC Modernization Program Office and the DoD HPC Modernization Program, visit our Web site at http://www.hpcmo.hpc.mil

Backgrounds (front and back cover):

Forecasting the Weather Under the Sea: NAVO MSRC Scientific Visualization Center Allows Navy to See the Ocean Environment for Undersea Recovery Operations by Pete Gruzinskas (pg. 89). Still image showing Ehime Maru retrieval over real bathymetry data. The animation was designed to conceptualize/visualize what the recovery would look like in the context of the real data collected by the USNS SUMNER.

Front cover captions (clockwise from left to right):

Identification of “Hot-Spots” on Tactical Vehicles at UHF Frequencies by Anders Sullivan and Lawrence Carin (pg. 119). Impulse response mapped on T-72

Novel Energetic Material Development Using Molecular Simulation by John Brennan and Betsy Rice (pg. 103). Molecular configuration snapshot of reacting system at P=28.47 GPa (blue: N2 molecules; red: O2 molecules; green: NO molecules)

Multidisciplinary Applications of Detached-Eddy Simulations of Separated Flows at High Reynolds Numbers by J. Forsythe, S. Morton, R. Cummings, K. Squires, K. Wurtzler, and P. Spalart (FY 03 DoD Challenge Project)

Development of a High-Fidelity Nonlinear Aeroelastic Solver by Raymond Gordnier (pg. 100). Acoustic radiation due to travelling wave flutter of a flexible panel

Simulation and Redesign of the Compact A6 Magnetron by Keith Cartwright (pg. 121). This shows the Titan/PSI A6 magnetron simulated in ICEPIC. This snapshot shows the electric field and particles 50 ns after the start of the applied voltage. The camera is pointing upstream toward the pulse power. The iso-surfaces in this figure are the electric field pointing around the cathode (theta direction). Red is in the clockwise direction when facing upstream, green is in the counterclockwise direction. The power extracted from the magnetron into the wave-guides shows a pi phase advance between the two cavities. The desired mode has a 2pi phase advance for the extractors, so this magnetron is operating in the wrong mode. The phase advance from cavity to cavity is not constant in this design. The phase advance for the extracting cavities and the cavity between is pi/2. The rest of the cavities have a 3pi/4 advance, which makes the average phase advance 2pi/3.

Novel Electron Emitters and Nanoscale Devices by Jerry Bernholc (pg. 159). Electron distribution near a tip of a BN/C nanotube in a field emitter configuration


HIGH PERFORMANCE COMPUTING

contributions to

DOD MISSION SUCCESS 2002

MARCH 2003


OFFICE OF THE UNDER SECRETARY OF DEFENSE

ACQUISITION, TECHNOLOGY

AND LOGISTICS

3000 DEFENSE PENTAGON, WASHINGTON, DC 20301-3000

MAR 18 2003

The Department of Defense (DoD) High Performance Computing Modernization Program (HPCMP) provides world-class, commercial, high-end high performance computing capabilities to the DoD science and technology and test and evaluation communities. The program supports over 600 high performance computing projects. The user base exceeds 4,000 scientists and engineers, who are located at more than 100 DoD laboratories, test centers, contractor, and academic sites. In an era of Homeland Security and the War on Terrorism, the contributions made by the HPCMP community are of increasing importance, providing the answers to tomorrow's questions.

These success stories focus on the contributions made by the HPC community to the transformational goals of the Department. Whether through long-term research or more immediate support to address critical operational issues, our scientists and engineers are now routinely using HPCMP resources in a pervasive environment to assist in the transformation of our military forces. They are assisting in the protection of critical bases of operations by simulating chemical warfare agents and their antidotes, as well as blast protection from explosions. Others are conducting research that will ensure the integrity of our information systems in the face of attack. HPCMP supported research will allow the projection and sustainment of US forces in distant hostile environments through better weapons system design. DoD scientists are using HPCMP resources to improve synchronization within the battlespace environment and to develop the next generation of tools needed to deny enemies sanctuary. Still others are developing new materials to enhance the capability and survivability of space systems, leveraging information technology and innovative concepts to develop joint operational pictures. Enhancing our response to terrorist threats, improving the design of structures and weapons systems, conducting robust testing programs, and comprehensively evaluating the structural design of entire platforms - all are represented in this technical report, and all require high performance computing.

The threat environment of the 21st century requires a defense posture that is flexible, innovative, and fully capable of superior response across the spectrum of conflict. The Department of Defense's scientific, engineering, and testing communities are fulfilling those challenges using HPCMP resources.

Director, Defense Research and Engineering

Charles J. Holland
Deputy Under Secretary of Defense
(Science and Technology)


Contents

SECTION 1 INTRODUCTION 1
Introduction 3
Overview of the High Performance Computing Modernization Program 3
Mission and Vision 3
HPCMP Organization 4
The HPCMP Community 4
HPCMP Support of Transformation and the War on Terrorism 7
Transformation Operational Goals 7

SECTION 2 SUCCESS STORIES 9
Protect Bases of Operation 11
Project and Sustain US Forces 35
Deny Enemies Sanctuary 91
Enhance the Capability and Survivability of Space Systems 131
Leveraging Information Technology 145

SECTION 3 HIGH PERFORMANCE COMPUTING MODERNIZATION PROGRAM 165
Introduction 167
A User-Centered Management Focus 167
Integrated Program Strategy 167
User Requirements 168
World-Class Shared HPC Capability 168
DoD HPC Challenge Projects 168
HPCMP Computational Technology Areas 168
Program Components 172
High Performance Computing Centers 172
Networking 174
Defense Research and Engineering Network 174
Security 175
Software Applications Support 175
Common High Performance Computing Software Support Initiative 175
Programming Environment and Training 176

SECTION 4 ACRONYMS 177


SECTION 1 INTRODUCTION


Introduction

The Department of Defense (DoD) High Performance Computing Modernization Program (HPCMP) was initiated in 1992 by a Congressional directive to modernize the DoD laboratories' high performance computing (HPC) capabilities. The HPCMP operates under the auspices of the Deputy Under Secretary of Defense for Science and Technology. Since its inception, the program has realized great success in its efforts to modernize the Department's laboratories and test centers to levels on par with other world-class organizations.

This technical report, HPCMP Contributions to DoD Mission Success 2002, will provide insights into how the program has evolved and chronicle some of the near term and long term national defense work done by the science and technology (S&T) and test and evaluation (T&E) communities using HPCMP resources. The High Performance Computing Modernization Program is placing critical enabling 21st century technology into the hands of the Department's science, technology, test, and evaluation communities that allows them to realize the Department's transformational goals.

This report is organized into four sections. The first section provides an overview of the program and its accomplishments as well as an explanation of how the stories are grouped. The second section consists of a selection of stories detailing the scientific research and engineering work done by the scientists and engineers in the Department's laboratories and test centers, working with academia and industry in seamless high performance computing environments. The third section provides a detailed discussion of the structure of the HPCMP and the fourth section is a list of acronyms used throughout the publication.

Overview of the High Performance Computing Modernization Program

The Department of Defense High Performance Computing Modernization Program was established as a Joint Program Office in 1993. This mandate was subsequently expanded to include the Department's test and evaluation community as well as the Missile Defense Agency (MDA) [formerly the Ballistic Missile Defense Organization]. In the following years, the Program has taken fragmented, Service-oriented research and engineering efforts and transformed them into a nearly seamless, joint operating environment that leverages the best practices of all of the Services and Agencies as well as the national academic and industrial infrastructure.

MISSION AND VISION

The High Performance Computing Modernization Program’s mission is to

“Deliver world-class, commercial, high-end, high performance computational capability to the DoD's science and technology (S&T) and test and evaluation (T&E) communities, facilitating the rapid application of advanced technology into superior warfighting capabilities.”


The Director of the HPCMP has articulated that mission in the following vision for the Program:

[To create] “A pervasive culture existing among DoD's scientists and engineers where they routinely use advanced computational environments to solve the most demanding problems.”

Today's HPC Modernization Program supports over 4,000 scientists and engineers working through resources located at sites throughout the United States, including Alaska and Hawaii. As Figure 1 illustrates, the Program has provided a phenomenal increase in the computational power available to the community, along with the tools and intellectual expertise needed to fully use the resources effectively. The innovative and groundbreaking work being done using program resources is found throughout the Program's components — of the current 41 DoD Challenge Projects (listed in Section 3), over 90% are directly or indirectly related to one or more transformational objectives (see page 7). The software applications support initiative contributes daily to the expanding software suite available to the scientific and engineering efforts. The Defense Research and Engineering Network (DREN) is a premier departmental example of leveraging information technology as well as providing groundbreaking defensive input/output (I/O) through its work with network intrusion devices and multi-tiered access.

HPCMP ORGANIZATION

The HPCMP has three components, which are detailed in Section 3. These components — HPC Centers, Networking, and Software Applications Support — provide a technologically advanced computational environment to support the ongoing and emerging needs of the Department's laboratories and test centers. The High Performance Computing Centers, shown in Figure 2, provide local and generalized support to the HPCMP community based on Service and Agency-validated requirements and priorities. The centers also support outreach to the communities in which they reside, encouraging students to become involved with engineering and scientific disciplines.

THE HPCMP COMMUNITY

The HPCMP community consists of over 4,000 scientists, engineers, computer specialists, networking specialists, and security experts working throughout the United States. All three Services and several Defense Agencies participate in the program.

Figure 1. DoD S&T and T&E growth in computational capability


As shown in Figure 3, the user base is diverse, drawing from the government workforce, academia, and industry.

The community is organized around 10 computational scientific disciplines, which are known as Computational Technology Areas (CTA). The CTAs are described in Table 1. A nationally recognized expert in that discipline leads each CTA. In addition, there is a subset of projects which focus on the Department's highest priority, most computationally demanding work — the DoD Challenge Projects.

Figure 3. User Demographics

Figure 2. HPC Centers (As of December 2002)


Table 1. Computational Technology Areas

CSM (Computational Structural Mechanics): Covers the high resolution, multi-dimensional modeling of materials and structures subjected to a broad range of loading conditions including static, dynamic, and impulsive. CSM encompasses a wide range of engineering problems in solid mechanics such as linear elastic stress analysis. CSM is used for basic studies in continuum mechanics, stress analysis for engineering design studies, and structural and material response predictions to impulsive loads.

CFD (Computational Fluid Dynamics): Covers high performance computations whose goal is the accurate numerical solution of the equations describing fluid and gas motion. CFD is used for basic studies of fluid dynamics, for engineering design of complex flow configurations, and for predicting the interactions of chemistry with fluid flow for combustion and propulsion.

CCM (Computational Chemistry and Materials Science): Covers the computational research tool used to predict basic properties of new chemical species and materials which may be difficult or impossible to obtain experimentally. Within DoD, quantum chemistry and molecular dynamics methods are used to design new chemical systems and solid state modeling techniques are employed in the development of new high-performance materials.

CEA (Computational Electromagnetics and Acoustics): Covers the high-resolution, multi-dimensional solutions of Maxwell's equations as well as the high-resolution, multi-dimensional solutions of the acoustic wave equations in solids, fluids, and gases. Some DoD applications include calculating the electromagnetic fields about antenna arrays, the electromagnetic signatures of tactical ground, air, sea and space vehicles, the electromagnetic signature of buried munitions, high power microwave performance, as well as the interdisciplinary applications in magnetohydrodynamics and laser systems.

CWO (Climate/Weather/Ocean Modeling and Simulation): Is concerned with the accurate numerical simulation of the earth's climate including the simulation and forecast of atmospheric variability and oceanic variability. CWO includes the development of numerical algorithms and techniques for the assimilation of in-situ and remotely sensed observations into numerical prediction systems.

SIP (Signal/Image Processing): Emphasizes research, evaluation, and test of the latest signal processing concepts directed toward these embedded systems. This will enable the traditional, expensive, military-unique 'black boxes' required to implement high-speed signal/image processing to be replaced by commercial-off-the-shelf HPC-based equipment.

FMS/C4I (Forces Modeling and Simulation): Focuses on force level modeling and simulation for training, analysis, and acquisition. The acquisition domain includes research and development, test and evaluation, and production and logistics. The overarching Simulation Based Acquisition domain is included.

EQM (Environmental Quality Modeling and Simulation): Covers the high-resolution, three-dimensional Navier-Stokes modeling of hydrodynamics and contaminant and multi-constituent fate/transport through the aquatic and terrestrial ecosystem and wetland subsystems, their coupled hydrogeologic pathways, and their interconnections with numerous biological species.

CEN (Computational Electronics and Nanoelectronics): Uses advanced computational methods to model and simulate complex electronics for communications, command, control, electronic warfare, signal intelligence, sensing, and related applications. Use of high performance computing assets enables the DoD electronics community to solve complex problems, explore new concepts, gain insight and improved understanding of the underlying physics, perform virtual prototyping, and test new ideas.

IMT (Integrated Modeling and Test Environments): Addresses the application of integrated models, simulation tools and techniques with live tests and hardware-in-the-loop simulations. IMT also provides proof-of-concept by using technology-based wargaming, modeling and software integration tools for the simulation of weapon component subsystems and systems in a virtual operational context.


Within the DoD Challenge Projects, scientists and engineers are using program resources to work on projects such as the evaluation and retrofit for blast protection in urban terrain; three-dimensional modeling and simulation of bomb effects for obstacle clearance; the active control of fuel injectors in full gas turbine engines; and the seismic signature simulations for tactical ground sensor systems and underground facilities. Others are working on improving naval capabilities through unprecedented modeling of the climate, oceans, and weather conditions of the world as well as investigating littoral environments. Still others directly support the airborne laser and missile defense programs. Program resources helped to develop thermobaric weapons and to provide forensic data for investigation of terrorist bombings and consequence management. Additionally, program scientists and engineers are developing codes to help development of new materials that will enhance our capabilities and survivability in space.

HPCMP Support of Transformation and the War on Terrorism

TRANSFORMATION OPERATIONAL GOALS

The stories included in this Technical Report are grouped to reflect the Department's transformation operational goals. The 2001 Quadrennial Defense Review Report states that “the purpose of transformation is to maintain or improve US military preeminence in the face of potential disproportionate discontinuous changes in the strategic environment.” It also states that “the transformation must therefore be focused on emerging strategic and operational challenges and the opportunities created by these challenges.” The 2001 Quadrennial Defense Review Report lists the six critical operational goals that provide the focus for DoD's transformation efforts:1

1. Protecting critical bases of operations (US homeland, forces abroad, allies, and friends) and defeating chemical, biological, radiological, nuclear, or high-yield explosive weapons and their means of delivery;

2. Assuring information systems in the face of attack and conducting effective information operations;

3. Projecting and sustaining US forces in distant anti-access or area-denial environments and defeating anti-access and area-denial threats;

4. Denying enemies sanctuary by providing persistent surveillance, tracking, and rapid engagement with high-volume precision strike, through a combination of complementary air and ground capabilities, against critical mobile and fixed targets at various ranges and in all weather and terrains;

5. Enhancing the capability and survivability of space systems and supporting infrastructure; and

6. Leveraging information technology and innovative concepts to develop an interoperable, joint Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance architecture and capability that includes a tailorable joint operational picture.

1 2001 Quadrennial Defense Review Report, September 30, 2001, pg. 30; www.defenselink.mil/pubs/qdr2001.pdf


SECTION 2 SUCCESS STORIES

“Our challenge in the 21st century is to defend our cities and our infrastructure from new forms of attack while protecting our forces over long distances to fight new adversaries”

— Defense Secretary Donald Rumsfeld in a speech to students at the National Defense University, January 21, 2002


PROTECT BASES

OF OPERATION

Chemical Defense
Combating Terrorism

Consequence Management

Missile Defense
Biological Defense


References (from top to bottom):

Defense Against Chemical Warfare Agents by Margaret Hurley, Jeffery Wright, William White, and Gerald Lushington. Overlaid crystal structures of Torpedo californica AChE with reversible (red) and irreversible (blue) inhibitors bound to active site residues (green)

Effect of Planar and Grid Tail Fins on Canard Roll Control Effectiveness on a Generic Missile by James DeSpirito. Canard-controlled missile with grid fins

Dispersion of Biological or Chemical Aerosols in Cities by Shahrouz Aliabade, Robert Bafour, Jalal Abedi, and Andrew Johnson. Geometric model of downtown Atlanta, GA, to be used for fine-scale (1 to 5 meter) numerical simulations of biological/chemical dispersion

Modeling Structural Response to Blasts by Joseph Baum. Airblast/fragments impact on a steel plate


Protect Bases of Operations

Since the Cuban missile crisis in 1962, no single event has shocked the nation into the reality of our vulnerability as did the events of September 11, 2001. The need to defend the US homeland and overseas bases has spurred increased attention to developing defenses against chemical warfare agents, cleaning up contaminated soils and groundwater, strengthening the resistance of buildings to blast events, and developing technologies to detect potential terrorist attacks from land or sea. The following success stories share a common theme — the unprecedented challenge of protecting our infrastructure and warfighters by taking advantage of the invaluable HPC resources made available through the High Performance Computing Modernization Program.

The threat of nerve and other chemical warfare agents, a major concern, has been the focus of studies such as those by Margaret Hurley and her associates (“Defense Against Chemical Warfare Agents”). In this investigation, HPC resources are being used to model the interaction of nerve agents and the enzyme acetylcholinesterase (AChE), enabling numerous advancements over previous simulations based on classical molecular dynamics techniques. In another research effort, Robert Bernard et al. (“Pore-Scale Contaminant Transport”) describe improved measurement and modeling systems for the study of chemical and biological agent transport in permeable media. Advances in HPC technologies have made possible the complex computations necessary for these improved systems and subsequent successful applications on military sites.

Contamination of soils and groundwater from explosives, fuels, or other agents can be hazardous to the environment and to human beings, as well as costly to clean up. Computational simulation methods were employed in a project led by Jerzy Leszczynski (“Exploring Cleanup Technologies for Explosives in Soils and Groundwater”) to develop remediation technologies for soils and groundwater contaminated by explosives. Using a quantum-chemical (QC) approach, Dr. Leszczynski's team investigated the breakdown of nitrobenzene through interaction with the surface of an iron oxide (FeO) mineral normally found in soil. Parallel supercomputing resources significantly enhanced the study. In another contamination study, J.L. Nieber and his associates (“Promoting Environmental Stewardship: Understanding Fingering Flow in Unsaturated Porous Media”) focused on the movement of fluids in porous media in order to determine effective remediation processes. Dr. Nieber's team used a new approach to solve the multi-phase fluid problem.

Simulations on high performance computers are invaluable in the development of technologies to protect the nation's infrastructure from blasts or explosives. Joseph Baum (“Modeling Structural Response to Blasts”) describes the use of HPC and a coupled Computational Fluid Dynamics (CFD)/Computational Structural Dynamics (CSD) code to model the processes involved in a blast event, and specifically the interactions of the blast and the structural response. Starting with a “glued-mesh” approach, Dr. Baum and his team of researchers have developed a more effective “embedded-mesh” technique. The computational simulations using this approach illustrate the structural responses to be expected in a car or truck bombing. These findings will enable the design of structures that are more resistant to blast events. Tommy Bevins et al. (“Evaluation of Blast Response to the Pentagon”) provide an in-depth description of the retrofit designs for repair of the Pentagon after the September 11, 2001, attack. A number of factors had to be taken into consideration, including the loads on the components of the structure, the response of these components to the loads, and improved safety for the occupants. The complex computations required millions of equations and thousands of time steps.

Defense measures include protection and surveillance devices that are essential to homeland security. Modeling approaches being developed by John Starkenberg and Toni Dorsi (“Modeling Methods for Assessment of Gun-Propellant Vulnerability”) will be used to analyze the reactive responses of propellant beds to unplanned stimuli. Simulation results will be used in the design of propellant beds


with the goal of reducing the violence of these reactive responses. The modeling approaches developed in this study are being used for the Army's Future Combat System in the design of high-loading-density gun-propellant charges. In another research effort, infrared search and track systems under development offer improved surface and air defense capabilities (“A Highly Parallel, Scalable Environment for Developing Search and Track Algorithms,” Cottel, Douma, and Henderson). These systems require highly computationally intensive spatial, spectral, and temporal processing for real-time applications. Jeffery Dugan and his associates (“Robust Scalable Information Processing for Multistatic Sonars”) describe improved algorithms that increase the performance of acoustic sonar, enhancing the ability to detect submarine threats and reducing false pings. HPC resources were used to test and validate the algorithm for an improved range rate estimator for the Hyperbolic Frequency Modulated (HFM) waveform, thus reducing the percentage of high-speed clutter as well as the algorithmic and operator resources.

The significance of these projects and their reliance on HPC technologies cannot be overstated. The large storage and processing capabilities made possible by HPC resources have enabled the robust testing, accuracy, and time and cost effectiveness that are essential to homeland defense.

Lee Byrne
US Army Corps of Engineers, Engineer Research and Development Center (ERDC), Vicksburg, MS

Dr. Jeffery P. Holland
US Army Corps of Engineers, Engineer Research and Development Center (ERDC), Vicksburg, MS


The threat of nerve agents and other chemical warfare agents has been a constant specter over the last century, and will continue to be so. Despite international conventions banning their use, confirmed or alleged incidents have occurred over the last two decades alone in Afghanistan, Iraq, Iran, and Japan. Defense against and therapy for exposure to chemical warfare agents will remain a cornerstone of protection for the warfighter (as well as civilians) in years to come.

Investigating Nerve Agents

While much information has been obtained on the function of nerve agents and one of their primary targets, the enzyme acetylcholinesterase (AChE), substantial questions remain. Researchers continue to investigate the mechanism of their interaction, as well as the mechanism of related compounds used for therapy and prophylaxis. Experimental studies on these systems are, by their very nature, dangerous and expensive, making this an ideal target for theoretical investigation. Dr. Margaret Hurley and her associates are attempting to understand the key molecular features leading to reversible and irreversible binding in the active site of acetylcholinesterase (and related reactions). In doing so, they will develop a solid basis for devising chemical prophylaxis and therapeutics techniques, as well as developing a reliable technique for prediction of new, possibly dangerous, compounds.

Understanding the Chemistry

Previous simulations of these systems have been performed by using classical molecular dynamics techniques, or by quantum mechanical study of a severely truncated section of the active site residues. Classical techniques are extremely useful tools in understanding gross structural features contributing to the active site chemistry, but cannot provide the detailed energetics necessary to understanding the chemistry involved. Truncated models may leave out surrounding residues and other features that play a key role in determining the mechanism. Mixed quantum/classical (QM/MM) methods have been generally accepted as a reasonable compromise, capable of simultaneous representation of both the subtle reactive (QM) phenomena and the bulk (MM) environment dependence.
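The essential idea behind such hybrid schemes can be summarized with a generic additive energy partition. This is shown only as an illustrative sketch; the article does not state which specific QM/MM coupling scheme the team used:

\[
E_{\mathrm{QM/MM}} \;=\; E_{\mathrm{QM}}(\text{reactive active-site region})
\;+\; E_{\mathrm{MM}}(\text{surrounding protein and solvent})
\;+\; E_{\mathrm{QM\text{-}MM}}^{\mathrm{coupling}},
\]

where the coupling term collects the electrostatic, van der Waals, and boundary (e.g., link-atom) interactions between the quantum region and the classical environment.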

Applying HPC to Higher QM/MM Models

Access to HPC resources (both hardware and software) has allowed the research community to make numerous improvements over previous treatments of AChE systems. They are able to treat substantially larger and/or higher accuracy QM models, and to run larger and/or higher accuracy

Defense Against Chemical Warfare Agents

MARGARET HURLEY

Army Research Laboratory (ARL), Aberdeen Proving Ground, MD

JEFFERY WRIGHT

Soldier and Biological Chemical Command (SBCCOM), Natick, MA

WILLIAM WHITE

Edgewood Chemical and Biological Center (ECBC), Aberdeen Proving Ground, MD

GERALD LUSHINGTON

University of Kansas, Lawrence, KS

“Experimental studies on these systems are, by their very nature, dangerous and expensive, making this an ideal target for theoretical investigation.”


QM/MM models. High performance computing resources supported one of the first systematic comparisons of competing QM/MM methodologies to determine the effect of the algorithm on results. The program also provided sufficient resources to perform a mapping of model details (size, method, level of accuracy, etc.) and their effect on results.

Reproducing the Short Strong Hydrogen Bond

The results to date have demonstrated the effectiveness of these models. The research team has been able to reproduce the Short Strong Hydrogen Bond (SSHB) in the active site residues seen experimentally in the bare AChE enzyme. In addition, the models confirm greatly reduced energy barriers to the phosphylation reaction in enzyme relative to related calculations of phosphonate hydrolysis in aqueous solution. They have also demonstrated the charge transfer between agent and oxyanion hole residues of enzyme, thus showing importance of quantum effects in non-active site residues to the reaction.

Currently, the team is conducting a detailed mapping of the aging reaction (by which the agent becomes irreversibly bound to the enzyme) and the origin of stereospecificity in these systems, as well as implementation of new higher accuracy QM/MM methods.

References

Hurley, M.M., Wright, J.B., Lushington, G.L., and White, W.E., “QM and Mixed QM/MM Simulations of Model Nerve Agents with Acetylcholinesterase,” Theoretical Chemistry Accounts (Special Issue on QM/MM), 2002. [Accepted]

Hurley, M.M., Wright, J.B., Lushington, G.L., and White, W.E., Studies of Acetylcholinesterase and Model Nerve Agent Interactions: a QM/MM Approach. Presented at the 10th Conference on Current Trends in Computational Chemistry, Jackson, MS, November 1–3, 2001.

CTA: Computational Chemistry and Materials Science
Computer Resources: SUN E10000 and SGI Origin 3000 [ARL], IBM SP P3 [ASC], and Cray T3E [ERDC]

Figure 1. Overlaid crystal structures of Torpedo californica AChE with reversible (red) and irreversible (blue) inhibitors bound to active site residues (green)


Modeling gas and liquid flows in permeable media is important in understanding the dispersal and persistence of chemical and biological agents. Until recently, the direct numerical simulation (DNS) of flow and transport in permeable media was regarded as a computationally intractable problem. Resolving flow even in a small sample requires an enormous computational effort due to the complex physical boundaries. However, recent advances in the application of HPC technologies have demonstrated that such computations are not only feasible, but can be performed on a reasonable production schedule, leading to improvements in measurement and modeling systems.

Pore-Scale Simulations

DNS in permeable media is often referred to as pore-scale flow and transport because flow is resolved on the length scale of typical pore spaces, using the three-dimensional Navier-Stokes equations. Species transport is modeled on the same length scale by solving a convection-diffusion equation, either coupled with the flow calculation or, more typically, using a steady-state flow field. As shown in Figure 1, no tunable parameters are needed to solve these equations; only the molecular viscosity and diffusion coefficients need be specified.

Early examples of pore-scale simulation were conducted for civilian projects, including research on oil recovery conducted at Los Alamos National Laboratory (LANL) in the early 1990's. The technology was subsequently applied to DoD projects within the Environmental Quality Modeling and Simulation (EQM) Computational Technology Area, including modeling of subsurface contamination on military sites. These simulations required up to 512 processors, 200 gigabytes main memory, and three days of wall-clock processing time.

Modeling Flow Columns

One application of pore-scale simulation is the modeling of physical experiments in laboratory flow columns. This type of simulation has revealed artifacts inherent in experimental measurements and protocols. Simulations predicted the existence of a damping effect in Nuclear Magnetic Resonance (NMR) velocity measurements caused by molecular diffusion. Simulations also predicted that in flow column experiments, the confining wall enhances experimental measurements of the hydrodynamic dispersion coefficient compared to an unconfined permeable medium (Figure 2).

Pore-Scale Contaminant Transport

ROBERT BERNARD, STACY HOWINGTON, AND JOHN PETERS

Engineer Research and Development Center (ERDC), Vicksburg, MS

ROBERT MAIER

Army High Performance Computing Research Center (AHPCRC), Minneapolis, MN

DANIEL KROLL AND H. TED DAVIS

University of Minnesota, Minneapolis, MN

“Until recently, the direct numerical simulation (DNS) of flow and transport in permeable media was regarded as a computationally intractable problem. Recent advances in the application of HPC technologies have demonstrated that such computations are not only feasible, but can be performed on a reasonable production schedule, leading to improvements in measurement and modeling systems.”

Figure 1. Navier-Stokes (top) and convection-diffusion equations (bottom), where v denotes the fluid velocity, p the pressure, c the species concentration, μ the viscosity, and Dm the coefficient of molecular diffusion
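The equations themselves did not survive extraction of Figure 1. In a standard form consistent with the variables the caption defines (and assuming, as is usual for pore-scale DNS, incompressible flow of a fluid with density ρ and a passive dissolved species), they can be written as

\[
\rho\left(\frac{\partial \mathbf{v}}{\partial t} + \mathbf{v}\cdot\nabla\mathbf{v}\right) = -\nabla p + \mu\,\nabla^2 \mathbf{v},
\qquad \nabla\cdot\mathbf{v} = 0,
\]
\[
\frac{\partial c}{\partial t} + \mathbf{v}\cdot\nabla c = D_m\,\nabla^2 c .
\]

The exact form shown in the original figure may differ in detail (for example, steady rather than unsteady flow), but these are the governing relations the text describes.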


Further applications of the pore-scale simulation have been identified, including the flow of combustion gases in ceramic foams, particle residence time in chromatographic columns and spectrographic sampling chambers, and shear stress calculations in saturated soil microstructures (Figures 3–6). A new challenge is to extend the technology to the flow of colloidal suspensions, including aerosols.

References

R.S. Maier, D.M. Kroll, R.S. Bernard, S.E. Howington, J.F. Peters, and H.T. Davis, “Enhanced dispersion in cylindrical packed beds,” Phil. Trans. R. Soc. Lond. A 360, pp. 497–506, 2002.

R.S. Maier, D.M. Kroll, R.S. Bernard, S.E. Howington, J.F. Peters, and H.T. Davis, “Pore-scale simulation of dispersion,” Physics of Fluids 12 (8), pp. 2065–2079, 2000.

Figure 3. Three-dimensional view of dissolved chemical concentration in a polydisperse packed bed. Flow is from top to bottom, dissolving a thin chemical film from the solid surfaces. The intensity of red color denotes the relative concentration.

Figure 4. Quasi-2D view (thin section) of dissolved chemical concentration in polydisperse bed

Figure 5. Contours of dissolved chemical concentration in cross section of a three-dimensional monodisperse packed bed. Flow of water is from top to bottom, washing a thin chemical film from solid surfaces. Darker contours denote higher concentration.

Figure 6. Residual chemical film on solid surfaces after washing is highlighted in black

Figure 2. An example of permeable media: thin section of a cylinder (r = 25d) packed with monodisperse spheres of diameter d (full cylinder includes over one million spheres)

CTA: Environmental Quality Modeling and Simulation
Computer Resources: Cray T3E [AHPCRC, NAVO] and IBM SP [ERDC]


Computational simulation methods have become powerful research tools for developing effective, passive, in situ clean-up technologies for explosives in soils and groundwater within the US Army environmental restoration mission. Among these computational methods, the quantum-chemical (QC) approach is one of the fastest growing and most powerful. The status of QC models permits the determination of numerous properties of molecules and molecular complexes involving hundreds of atoms. It is important to note that QC predictions for small- and medium-sized molecules and molecular complexes (tens of atoms) approaches the same accuracy as that achieved from high-quality experimental data. This suggests that computational studies can be more feasible and more cost efficient than experimental investigations.

QC Simulations of the Decomposition of Nitrobenzene

A team of AHPCRC/Jackson State University researchers led by Dr. Jerzy Leszczynski performed a computer simulation of the decomposition of the nitroaromatic compound, nitrobenzene, through interaction with the surface of an iron oxide (FeO) mineral typically found in soil. This process included the formation of the adsorbed state, the decomposition of nitrobenzene to nitrosobenzene, and the formation of the adsorbed complex between the nitrosobenzene and the surface of FeO mineral. The simulation has been performed at the density functional level of theory. The parallel version of the Gaussian-98 package, updated for the Cray T3E, was employed. The computer time necessary for completion of this project was an estimated 50,000 node hours using up to 64 nodes per job on the Cray T3E.

It was concluded that the adsorbed molecule undergoes destruction by transferring one of its oxygen atoms to a surface iron atom in FeO. This transfer is possible because the surface iron atoms are not fully coordinated. The probability of such a reaction pathway was determined by calculating a value of the activation energy for the separation of an oxygen atom from nitrobenzene and subsequent formation of a new FeO chemical bond. It was predicted that the transition of an oxygen atom to the surface of iron oxide is accompanied by the rotation of the aromatic ring to form the most energetically favorable orientation. This orientation stabilizes the structure by means of the formation of a new nitrogen-iron (N…Fe) bond. The calculated activation energy is 22.4 kcal/mol. The relatively small size of the activation barrier suggests that the proposed mechanism for the decomposition of nitrobenzene is realistic.
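To give a feel for what a 22.4 kcal/mol barrier implies kinetically, the short script below applies the standard Eyring (transition-state theory) expression to convert a barrier into an approximate first-order rate constant. This is purely an illustrative back-of-the-envelope estimate added here, not a calculation reported in the article, and it treats the reported electronic barrier as if it were an activation free energy.

```python
import math

# Illustrative Eyring (transition-state theory) estimate of a first-order
# rate constant from an activation barrier.  Assumption: the 22.4 kcal/mol
# electronic barrier is used as an approximate activation free energy.
KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 1.987204e-3      # gas constant, kcal/(mol*K)

def eyring_rate(barrier_kcal_per_mol: float, temp_k: float = 298.15) -> float:
    """Approximate first-order rate constant (1/s) for the given barrier."""
    prefactor = KB * temp_k / H  # ~6.2e12 1/s at room temperature
    return prefactor * math.exp(-barrier_kcal_per_mol / (R * temp_k))

k = eyring_rate(22.4)
print(f"k ~ {k:.1e} 1/s (half-life ~ {math.log(2) / k / 3600:.1f} h)")
```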

The final step of this project involved simulation of the product of nitrobenzene decomposition: nitrosobenzene adsorbed onto iron oxide. The adsorbed nitrosobenzene interacts with the iron oxide surface through its oxygen and nitrogen atoms, forming an adsorption complex with an adsorption energy of 12.5 kcal/mol. This is almost one half of that calculated for the adsorption of nitrobenzene. This result suggests that nitrosobenzene will be desorbed from the surface of iron oxide more easily than nitrobenzene after the chemical reaction of

Exploring Cleanup Technologies for Explosives in Soils and Groundwater

JERZY LESZCZYNSKI

Jackson State University, Jackson, MS

Sponsor: Army High Performance Computing Research Center (AHPCRC), Minneapolis, MN

“It is important to note that QC predictions for small- and medium-sized molecules and molecular complexes (tens of atoms) approaches the same accuracy as that achieved from high quality experimental data. This suggests that computational studies can be more feasible and more cost efficient than experimental investigations.”


decomposition has occurred. The total scheme of the described transformation is presented in Figure 1.

Applying QC methods allows computational chemists to successfully investigate not only simple molecules but also to extend their studies to problems of industrial and environmental importance. The power of QC investigations is increased dramatically when modern parallel supercomputers are used. In particular, these simulations allow iron oxide-containing minerals to be considered as possible catalysts for the degradation of nitroaromatic compounds.

Figure 1. Reduction of nitrobenzene by (001) face of iron oxide

References

“Parallel Computing. Fundamentals & Applications,” Proceedings of the International Conference ParCo99, Imperial College Press, 2000.

L. Radom, J.A. Pople, and P.V.R. Schleyer. Ab Initio Molecular Orbital Theory. John Wiley & Sons, 1986.

A.D. Becke, Phys. Rev. A 38, p. 3098, 1988.

C. Lee, W. Yang, and R.G. Parr. Phys. Rev. B, 37, p. 3098, 1988.

Y. Joseph, W. Ranke, and W. Weiss. J. Phys. Chem. B, 104, p. 3224, 2000.

J. Sauer. Chem. Rev. 89, p. 199, 1989.

A. Pelmenschikov and J. Leszczynski. J. Phys. Chem. B, 103, p. 6886, 2000.

L. Gorb, J. Gu, D. Leszczynska, and J. Leszczynski. Phys. Chem. Chem. Phys. 2, p. 5007, 2000.

CTA: Environmental Quality Modeling and Simulation
Computer Resources: Cray T3E, IBM SP2, and Cray SV1 [AHPCRC]


In 1998, the DoD cleanup budget already exceeded one billion dollars1. While promoting good stewardship, this also diverts scarce resources from other Defense priorities.

Research that lowers cleanup costs, and anticipates potential hazards, will have a long ranging impact on both the Department and the welfare of US citizens.

Many US military sites have been contaminated by fuels, cleaning fluids, and munitions. Researchers working for the ARL are seeking to understand the movement of fluids through the soil and the deep unsaturated zone; this is important to the remediation of contaminated sites, and to predict the behavior of fluids in any new contamination events. Modeling fluid movement in soils and deep unsaturated zones involves solving coupled, nonlinear mass balance equations that quantify the movement of individual fluids in multi-phase fluid conditions.

Solving the Multi-Phase Fluid Problem

Conventional approaches to solving the multi-phase fluid problem have used mass balance equations and assumed equilibrium capillary pressure-saturation relationships. The typical mass balance equation is represented by

where p is the fluid pressure, K(S) is the hydraulic conductivity, and S is the fluid saturation. In the conventional method, the fluid pressure is related directly to the equilibrium capillary pressure-saturation relation by

p=P(S)

where P is the equilibrium capillary pressure.

In the new approach, the ARL team of research scientists kept the mass balance equation but replaced the equilibrium relation with the non-equilibrium relation

where τ is a relaxation parameter.

The solution of the unconventional equations leads to a completely new realm of flow characteristics. The conventional equations yield solutions that are unconditionally stable (in the physical sense). The new equations lead to flows that can become unstable for a wide range of conditions. Researchers concluded that the conditions leading to stability are a special case of the more general case where non-equilibrium conditions prevail.
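The display equations referred to above did not survive extraction. A plausible reconstruction, using only the quantities the text defines (p, K(S), S, the equilibrium capillary pressure P(S), and the relaxation parameter τ) and assuming a Richards-type formulation with z as the vertical coordinate, is

\[
\frac{\partial S}{\partial t} \;=\; \nabla\cdot\big[\,K(S)\,\nabla(p+z)\,\big],
\qquad\text{with}\qquad
p \;=\; P(S) \;-\; \tau\,\frac{\partial S}{\partial t}.
\]

The conventional method corresponds to τ = 0, which recovers p = P(S); the dynamic (non-equilibrium) term is what admits the unstable, fingered solutions discussed in this article. The exact form used by the ARL team may differ in sign convention or in the inclusion of porosity and gravity terms.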

Unstable flow of fluids in soils and the deep unsaturated zone is important because the unstable flows lead to inefficient remediation of contaminated sites and faster movement of contaminants.

High performance computing resources are important to the study of fluid flow processes such as unstable flows in porous media. For two-dimensional domains, it is sufficient to use scalar processor computations, but for three-dimensional domains, it

Promoting Environmental Stewardship: Understanding Fingering Flow in Unsaturated Porous Media

J.L. NIEBER AND A.Y. SHESHUKOV

University of Minnesota, St. Paul, MN

1 Guide to the DoD Environmental Security Budget, October 1998, DUSD(I&E)

“Unstable flow of fluids in soils and the deep unsaturated zone is important because the unstable flows lead to inefficient remediation of contaminated sites and faster movement of contaminants.”


is necessary to perform computations on parallel computer platforms. At present, the ARL team has investigated the physical process for only the two-dimensional case, and for homogeneous porous media. In the future, the ARL research team will continue to use HPCMP resources to consider heterogeneous media, and three-dimensional domains.

Figure 1. Distribution of fluid saturation in a two-phase situation showing the invasion of a wetting fluid into a porous medium originally saturated with non-wetting fluid. The invading fluid moves in a fingered flow pattern, representing a condition of unstable flow. The fluid flow solution was derived using a conventional mass balance equation for two-dimensional flow with a non-equilibrium model for the capillary pressure-saturation relationship.

References

Egorov, A.G., R.Z. Dautov, J.L. Nieber, and A.Y. Sheshukov, Stability analysis of traveling wave solution for gravity-driven flow, Proceedings of the 14th International Conference on Computational Methods in Water Resources (in press), 2002.

Dautov, R.Z., A.G. Egorov, J.L. Nieber, and A.Y. Sheshukov, Simulation of two-dimensional gravity-driven unstable flow, Proceedings of the 14th International Conference on Computational Methods in Water Resources (in press), 2002.

CTA: Environmental Quality Modeling and Simulation
Computer Resources: Cray T3E [AHPCRC]


Modeling Structural Response to Blasts

JOSEPH BAUM

Defense Threat Reduction Agency (DTRA)/SAIC, Alexandria, VA

The White House, the Capitol, and the Pentagon are three of the most prestigious, recognizable, and powerful symbols in the United States. After the events of September 11, 2001, these buildings have also become prime targets for a terrorist attack in the form of a car or a truck bomb. Structural response of these structures to large-size charges is currently being simulated on high performance computers.

Dr. Joseph Baum and a group of research associates, sponsored by the Defense Threat Reduction Agency (DTRA), are using high performance computers and a coupled CFD and Computational Structural Dynamics (CSD) code to model all the relevant physical processes controlling these scenarios. These processes include: 1) detonation wave initiation and propagation within the explosive; 2) CSD modeling of the car/truck/case fragmentation; 3) blast wave expansion through the breaking case, diffracting about the flying fragments; 4) blast and fragment propagation and impact on the target; 5) target structural response to the blast and fragment loading; and 6) the resulting new blast and fragment environment.

Using High Performance Computers to Accurately Model Fluid-Structure Interactions

This HPCMP DoD Challenge Project is addressing an inability to accurately model fluid-structure interactions that require simultaneous modeling of such events as a blast near a building and the resulting structural response, or blast near a ship, and the resulting holing and blast penetration. Researchers gain a better understanding of fluid-structure interactions by coupling CFD and CSD codes. For a typical blast problem, the CFD portion models the detonation process within the explosive using the appropriate equation-of-state for the specific explosive, and at the same time, models the air blast that has started propagating. If this is a

cased explosive (i.e., an explosive in some steel casing—such as a pipe), the CSD part will model the breakup of the case. The CFD will then model the propagation in the air of both the expanding shock wave and the fragments until they impact an object (a wall of a structure, a car, etc.). At that point, the CSD code will model the response of the impacted structure to this air blast and fragment loading. If, for instance, the CSD model is a ship, the CSD code will model the hull holing and fragmentation and then the air blast going through the breaking/fragmenting structure into the next compartment.
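The hand-off between the two solvers can be pictured as a partitioned (loosely coupled) time loop in which the flow solver supplies surface loads and the structural solver returns updated geometry. The sketch below is illustrative pseudocode only; the solver objects and method names (cfd, csd, advance, current_surface, and so on) are hypothetical placeholders and do not represent the actual interfaces of the codes used in this project.

```python
# Illustrative partitioned CFD/CSD blast-coupling loop (hypothetical interfaces).
def run_coupled_blast(cfd, csd, t_end, dt):
    """Advance a loosely coupled fluid-structure blast simulation to t_end."""
    t = 0.0
    while t < t_end:
        # 1) Fluid step: detonation products and air blast on the current
        #    geometry, returning pressure/fragment loads on the wetted surface.
        loads = cfd.advance(dt, wetted_surface=csd.current_surface())

        # 2) Structure step: apply blast and fragment-impact loads, then update
        #    deformation, cracking, and any element erosion.
        csd.apply_loads(loads)
        csd.advance(dt)

        # 3) Geometry update: with an embedded-mesh approach the (possibly
        #    fragmented) structure floats through the fluid grid, so only the
        #    surface description is passed back; no global remeshing is needed.
        cfd.update_embedded_surface(csd.current_surface())

        t += dt
```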

The “Glued-Mesh” Approach

The initial coupled CFD/CSD algorithm used the so-called “glued-mesh” approach, where the CFD and CSD faces match identically. Failure of this approach to model severe structural deformation, as well as crack propagation in steel and concrete, led the team over the last year to develop the so-called “embedded-mesh” approach. Here, the CSD objects float through the CFD domain. While each approach has its own advantages, limitations, and deficiencies, the embedded approach was proven to be superior for the class of problems modeled here. The embedded approach enables easier modeling of the response of civilian structures, which include a variety of structural materials, such as brick, stone, steel, aluminum, concrete, and glass.

Comparison to the Embedded Approach

Figure 1 shows a comparison between the glued approach (Figure 1a) and the embedded approach (Figures 1b to 1e).



The first approach did not allow for the application of a physically correct crack propagation model, as each crack had to be modeled with at least 4–6 fluid elements across the narrow gap, at a prohibitive computational cost. The fragments had to be pre-defined, hence precluding the modeling of weapons for which there are no arena data. In addition, the break-up of each fragment was a topology-changing event, requiring global remeshing at a fairly high computational cost, even when performed in parallel using a parallelized grid generation scheme. In comparison, the embedded approach allows both the treatment of crack propagation and the formation of small fragments. Thus, the cracking model automatically determines the size of the fragments. The results shown in Figures 1c–e demonstrate: 1) the randomness of the size of the fragments; 2) the narrow cracks between the fragments, some of which are very small; 3) the large number of fragments modeled, a simulation that would have been computationally prohibitive with the glued approach; 4) mesh adaptation to both the structure (smaller elements within the cracks will yield more accurate pressure relief and detonation products expansion) and density gradients; and 5) most fragments reaching about the same velocity (within a range of about 15%), in agreement with available experimental data. After being applied to the study of blast and fragment interaction with reinforced concrete (military) structures, the new methodology is being applied to survivability studies of important US Government centers as well as major transportation links and infrastructure centers.

A second simulation modeled fragment/airblast interaction with a steel wall chamber. As in the weapon-case breakup simulation, three levels of mesh refinement were used in the CSD model. The standard DYNA3D element erosion model was used to eliminate failed CSD elements.

The numerical predictions show that the impacting weapon fragments arrive ahead of the airblast, punching holes through the plate (Figure 2a). Next, the pressure blast from the detonation tears the weakened plate apart (Figure 2b). The eroded plate elements were converted into particles that can interact with the rest of the structure. Contact conditions were enforced between all entities to avoid fragment interpenetrations and, therefore, a failure in meshing.

The results will give researchers a better understanding of the structural response of a variety of materials to a car or truck bomb. This will enable engineers to design buildings which are more resistant to these types of blasts.

Figure 1. Comparison of glued and embedded CSD schemes applied to the weapon fragmentation study



Figure 2. Airblast/fragments impact on a steel plate

(a)

(b)

References

J.D. Baum, et al., "Recent Developments of a Coupled CFD/CSD Methodology," AIAA 2001-2618, Proc. 15th AIAA CFD Conf., Anaheim, CA, 2001.

J.D. Baum, et al., "Coupled CFD/CSD Methodology: New Developments and Applications," Fifth World Congress on Computational Mechanics, Vienna, Austria, Eds.: H.A. Mang, F.G. Rammerstorfer, and J. Eberhardsteiner, July 7–12, 2002.

R. Lohner, et al., "Advances in FEFLO," AIAA-2002-1024, AIAA 40th Aerospace Sciences Meeting, Reno, NV, 2002.

CTA: Computational Fluid Dynamics
Computer Resources: SGI Origin 3000 [ERDC]


Evaluation of Blast Response of the Pentagon
TOMMY BEVINS, BYRON ARMSTRONG, JAMES BAYLOT, AND JAMES O'DANIEL

US Army Corps of Engineers, Engineer Research and Development Center (ERDC), Vicksburg, MS

Protecting our military personnel from terrorist attacks is critical to DoD mission success. Real-life scenarios rarely consist of simply an explosion, the propagation of the blast load over flat terrain, and its impact on a structure. The blast environment is affected by the presence of other structures, as well as the natural topography. A complete understanding of this complex blast environment, including the effects of adjacent structures on the blast loads on a structure, is needed to protect building occupants in an urban environment. These effects were studied in a combined experimental/analytical study conducted at the Engineer Research and Development Center (ERDC). The analytical study used high-priority HPC processing hours provided under the HPCMP DoD Challenge Project "Evaluation and Retrofit for Blast Protection in Urban Terrain." Simulations of several different experiment geometries and combinations of structures have been performed as part of this DoD Challenge Project. The analyses have compared very well with experimental results, giving credibility to these analysis methods for use in further blast propagation studies.

The Defense Department placed a high priority on the rapid repair of the damage to the Pentagon resulting from the September 11, 2001, terrorist attack. The US Army Corps of Engineers had been tasked with providing possible retrofit designs for the Pentagon in order to improve its response to a range of terrorist threats. Because of the desire to rapidly repair the Pentagon, the Corps of Engineers recommendations for improvements had to be finalized and presented to the Pentagon Renovation Office (PenRen) within eight weeks. This effort involved determining the loads on the components of the Pentagon and the response of these components to the loads. It also involved developing, analyzing, and evaluating retrofit concepts to improve the safety of building occupants.

Prediction of Loads and Response of the Pentagon

The Pentagon is a complex structure and in some respects behaves like a group of structures rather than one. There are five concentric pentagonal rings separated by a lightwell (air space) over the top three floors of adjacent rings. The lightwell walls contain hundreds of windows. If a terrorist bomb is detonated outside the Pentagon, the blast pressures will propagate over the top of the Pentagon and into the lightwells, thus loading the windows. The propagation of blast into the lightwells is similar to the propagation of blast over, around, and between structures. An accurate determination of the loads on these lightwell windows will allow the Department to improve the structure and, consequently, protect its occupants. Figure 1 shows that airblast propagation into the lightwells could be a hazard to occupants of rooms near the lightwells.

The Pentagon is a concrete frame structure with nonstructural exterior in-fill walls. The in-fill walls of the structure are two-layer brick walls clad with large limestone blocks. Many large windows are present on the first four floors of the outer wall of the Pentagon. Multiple scenarios were evaluated, including the existing Pentagon exterior walls, the current Pentagon retrofit walls, and the enhanced retrofit designs.

Research conducted under the challenge project has demonstrated that the HPC simulations match the experiments well. Further simulations will be performed beyond those for which experiments will be conducted. The results will be used to develop engineering models for blast in urban terrain.

HPC simulations were used to evaluate the safety of the Pentagon occupants against a range of terrorist threats. The retrofits currently installed in portions of the Pentagon, as well as other improved retrofits, were evaluated. Figure 2 shows that the bricks of the exterior wall will be a hazard to occupants for the threat analyzed. The brick walls of the retrofitted Pentagon will fail as shown in Figure 3, but will be captured by the geotextile fabric and will not be a hazard. The results of the study were presented to PenRen, and PenRen has asked the ERDC to participate in a study to enhance the safety of Pentagon occupants.

This research is important to DoD because it transitions technology needed to improve the survivability of the Pentagon to those responsible for its blast retrofit. The research also decreases costs by helping to determine the critical areas of the Pentagon so that resources are expended in those locations where they are most needed. This research promotes the understanding of experiments and the phenomenology of the blast environment, which will in turn lead to better engineering models for predicting blast loads.

Engineering Models Too Simple

Blast loadings on structures have typically been evaluated using empirical equations that relate blast pressure characteristics (incident pressure, incident impulse, reflected pressure, reflected impulse, shock front velocity, time of arrival, and dynamic pressure) to known physical parameters (explosive charge weight, standoff, and the angle between the shock front propagation direction and the normal to the structure). These relationships have been determined for both hemispherical surface bursts and for spherical bursts in free air. These equations assume that there are no obstacles between the explosive source and the target. The simplest approach to determine the loads on a structure behind an obstacle is to assume that the obstacle has no effect on the pressure on the structure behind it. A second approach adjusts the standoff distance to account for the distance that must be traveled to go around the obstacle. Neither of these methods provides reliable or precise estimates of the pressure time-history on a shielded target. One problem is that the relationship between the incident pressure, the shock front velocity, and the reflected pressure is not the same for a clear path to a target as it is for an obstructed path to a target. Recent experiments have demonstrated that even if the incident pressure at a target point is essentially the same with and without an obstruction, the shock front velocity at the target may be significantly different, leading to a significantly different reflected pressure.
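
The article does not reproduce these empirical equations, but they are conventionally written in terms of a cube-root (Hopkinson-Cranz) scaled distance, with the blast parameters read from curve fits to test data. The sketch below shows only that structure; the curve-fit functions themselves are left as caller-supplied assumptions.

# Sketch of the classical free-field approach, assuming cube-root
# (Hopkinson-Cranz) scaling; the empirical curve fits are not reproduced
# here and must be supplied by the caller as functions of Z.
def scaled_distance(standoff_m, charge_kg):
    """Z = R / W**(1/3), e.g. in m/kg**(1/3)."""
    return standoff_m / charge_kg ** (1.0 / 3.0)

def free_field_loads(standoff_m, charge_kg, incident_curve, reflected_curve):
    """Return (incident pressure, reflected pressure) from empirical fits."""
    z = scaled_distance(standoff_m, charge_kg)
    return incident_curve(z), reflected_curve(z)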

The response of masonry walls to blast is typically modeled using single-degree-of-freedom (SDOF) representations of the wall to determine a pressure-impulse (P-I) diagram. The applied pressure and impulse can then be plotted on the diagram. If that pair is above and to the right of the curve, then the response exceeds the damage level associated with the P-I curve. This type of analysis is valid and easily applicable to simple masonry walls such as a single-layer brick wall. For complicated structures where a brick wall interacts with another wall, these methods can only be used as a guideline, unless validated by some other means.
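
The P-I screening described here reduces to checking which side of the damage curve a load point falls on. A common idealization, used below purely for illustration (it is not the ERDC model), represents the curve as a hyperbola with pressure and impulse asymptotes.

# Illustrative pressure-impulse (P-I) screening, assuming a hyperbolic
# damage curve (p - p_asym)*(i - i_asym) = c; parameter values are
# placeholders chosen by the analyst, not data from this study.
def exceeds_damage_level(p, i, p_asym, i_asym, c):
    if p <= p_asym or i <= i_asym:
        return False            # below an asymptote: damage level not reached
    return (p - p_asym) * (i - i_asym) >= c

# Example call (illustrative numbers only):
# exceeds_damage_level(p=50.0, i=400.0, p_asym=7.0, i_asym=100.0, c=2000.0)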

Figure 1. Blast propagation into lightwells

Figure 2. High-hazard failure of original Pentagon wall

Figure 3. Response of successfully retrofitted Pentagon wall


Explicit Time-Stepping Method Models Blast Response

With the availability of HPC, it is possible to calculate the problem of shock propagating through the air by dividing the large volume (including the structure, the air, and the explosives) into a large number of small cells. The propagation of shock is modeled with an explicit time-stepping method. The method conserves mass and momentum and represents the correct equations of state for the air and the explosives. Determining the solution requires the computation of millions of equations for each of thousands of time steps. HPC resources allow these calculations to be performed in a finite length of real time for significant amounts of simulation time. The advantage of this method over the usual method for obtaining loads is that the incident pressure, shock front velocity, dynamic pressure, and reflected pressure are all computed based on the physics of the problem rather than on a combination of empirical equations that are not actually valid for the problem being solved. Historically, how to computationally solve the governing equations was known, but there was not enough computer power to solve significant problems in a practical length of actual time.
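
The update structure the paragraph describes can be illustrated with a deliberately simple one-dimensional example: each cell is advanced from the fluxes at its faces, and the time step is limited by a stability (CFL) condition. This is a sketch of the structure only; the production codes solve the full three-dimensional equations with real equations of state.

import numpy as np

# Explicit, conservative time stepping in 1-D (linear advection with an
# upwind flux and periodic boundaries), shown only to illustrate the
# cell/flux update structure used by the blast codes.
def advance(u, a, dx, cfl, t_end):
    t = 0.0
    while t < t_end:
        dt = cfl * dx / abs(a)                         # CFL-limited time step
        flux = a * np.where(a > 0, np.roll(u, 1), u)   # flux at each cell's left face
        u = u - dt / dx * (np.roll(flux, -1) - flux)   # conservative update
        t += dt
    return u

# Example: advect a Gaussian pulse on a periodic grid
# x = np.linspace(0.0, 1.0, 200, endpoint=False)
# u = advance(np.exp(-200 * (x - 0.3) ** 2), a=1.0, dx=x[1] - x[0], cfl=0.9, t_end=0.25)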

Simplified methods provided an overall answer to the response of the structure, whereas these methods provide a solution with detailed information about the response. HPC makes it possible to explicitly model the interaction of individual bricks of a brick wall. Using procedures that have been developed at ERDC, the model of the complex wall system of the Pentagon was developed. The model included the two-layer brick wall, the concrete frame, and the limestone façade. Models were also developed for several retrofit designs. Approximate P-I diagrams for several wall configurations were validated using HPC simulations. Scientific visualization has been used extensively to assist in understanding the phenomenology of blast interacting with structures and the response of those structures to blast.

Initial Blast Load Evaluations Completed

The availability of high-priority HPC time has allowed ERDC to complete initial evaluations of the methods being used to study blast loads on structures in urban terrain. These initial evaluations have been done in time to impact the PenRen Program. The evaluations are also very encouraging and give credibility to the analyses to determine the survivability of the Pentagon in a terrorist attack. The PenRen asked the ERDC to assist in overcoming some of the deficiencies identified by the HPC analyses performed in the original study. Experimental results are very useful for studying blast loads on structures, but are not sufficient to completely understand the phenomenology of blast passing around an obstacle and then loading a structure. HPC results were used to fill in gaps in the data and to visualize how the physical process evolves.

Continued expansion of HPC resources, including the number of processors available and the compute speed of those processors, allowed the size of the models to expand. By allowing the use of a larger number of cells or elements, more detail could be inserted into the models and fewer simplifications needed to be made. All of these models were approximations, and the capability to model the subject more explicitly allowed more accurate solutions to be obtained.

References

Hall, R.L., et al., "Pentagon Rebuild Retrofit Studies," US Army Engineer Research and Development Center Special Report, Nov 2, 2001.

Hall, R.L., et al., "Terrorist Threats Against the Pentagon," Proceedings, 34th Joint Panel Meeting on Wind and Seismic Effects, Gaithersburg, MD, May 13–18, 2002.


CTA: Computational Structural Mechanics
Computer Resources: SGI Origin 3000 and IBM SMP [ERDC] and IBM SMP [ASC]


Modeling Methods for Assessment of Gun-Propellant Vulnerability
JOHN STARKENBERG AND TONI DORSEY

Army Research Laboratory (ARL), Aberdeen Proving Ground, MD

In order to increase the muzzle energy in medium- and large-caliber guns (and consequently their lethality), approaches involving increasing the available mass of gun propellant are being pursued by scientists at the ARL. This generally means packing more propellant into the available volume, which may adversely affect the vulnerability of associated weapons systems by allowing the propellant bed to support detonation.

Starkenberg and Dorsey are developing modeling approaches for assessing the violence of the reactive responses of propellant beds to unplanned stimuli, emphasizing the tendency for detonation to spread from one part of a bed to another. Their results will be applied to emerging propellant technologies to determine how to design beds to minimize reactive response violence.

Studying Detonation Propagation Models

Traditionally, gun-propellant vulnerability has been assessed through experiments, not simulations. Now explosive initiation models such as those included in the Eulerian finite-volume continuum mechanics solver CTH may also be used to model detonation propagation and failure. A detailed computational study of the use of these models for detonation propagation was undertaken. Based on the understanding gained in that study, large three-dimensional simulations of the spread of detonation between propellant grains and between parts of more advanced charge designs were undertaken using CTH's History Variable Reactive Burn (HVRB) model.

The fundamental study showed that extremely fine computational resolution and special calibrations of the HVRB model are required to accurately determine the smallest propellant grain diameter that allows a detonation to propagate. This is referred to as the failure or critical diameter. Eight different propagation and contact modes for propagation between pairs of cylindrical propellant grains in a bed were identified and simulated. Figure 1 shows the results of a simulation of grains in side-to-side point contact with the detonation spreading directly from one grain to the other. The results show that diameters larger than the critical diameter are required for grain-to-grain propagation and explain experimental observations.


Figure 1. Sequence of plots showing propagation of detonation between propellant grains


This indicates that detonation can be prevented from spreading throughout a bed even when the grains are larger than the critical diameter. Results from simulations of more advanced propellant bed designs also show how the beds can be designed to minimize the violence of the reactive response.

Modeling Used to Design Propellant Charges

The modeling approaches developed in this program are being used for the Army's Future Combat System to help design high-loading-density gun-propellant charges that provide maximum performance with minimum impact on system survivability. The approach is suitable for evaluating special propellant configurations even when samples for testing are unavailable. This helps in understanding the system vulnerability implications of using specific propellant configurations on proposed weapons platforms.

References

Starkenberg, J., "Modeling Detonation Propagation and Failure Using Explosive Initiation Models in a Conventional Hydrocode," Proceedings of the 12th International Detonation Symposium, San Diego, CA, August 2002.

Starkenberg, J. and T.M. Dorsey, "Simulations of Detonation Propagation Between Propellant Grains," Proceedings of the JANNAF Propulsion Systems Hazards Subcommittee Meeting, Destin, FL, April 2002.

Starkenberg, J. and T.M. Dorsey, "Detonation Propagation and Failure in Advanced Propellant Configurations," Proceedings of the JANNAF Propulsion Systems Hazards Subcommittee Meeting, Monterey, CA, November 2000.

Starkenberg, J. and T.M. Dorsey, "Further Studies of Detonation Failure in Layered Propellant Configurations," Proceedings of the JANNAF Propulsion Systems Hazards Subcommittee Meeting, Cocoa Beach, FL, October 1999.

Starkenberg, J. and T.M. Dorsey, "Detonation Failure in Layered Propellant Configurations," Proceedings of the JANNAF Propulsion Systems Hazards Subcommittee Meeting, Tucson, AZ, December 1998.

Starkenberg, J. and T.M. Dorsey, "An Assessment of the Performance of the History Variable Reactive Burn Explosive Initiation Model in the CTH Code," Proceedings of the 11th International Detonation Symposium, Snowmass, CO, September 1998.

CTA: Computational Structural Mechanics
Computer Resources: SGI Origin 2000, SGI Origin 3000, and IBM SP Power3 [ARL]


Robust Scalable Information Processing for Multistatic Sonars
JEFFERY DUGAN, CAROL NELEPOVITZ, AND DANIEL STERNLICHT

Office of Naval Research (ONR), Arlington, VA

Acoustic sonar performance for various Anti-Submarine Warfare programs benefits from improved algorithms that increase detection of the submarine threat, as well as reduce the false alarms caused by misclassified clutter detections. Current algorithm development achieves robustness by analyzing data from a large number of at-sea tests, which represent different environments, target dynamics, and transmitted signals (Figure 1).

Active sonar systems benefit from the ability to estimate target range. Complex pings can be used to improve the target state estimation by providing a range-rate estimate. One algorithm tested and validated using the HPC resources is an improved range-rate estimator for one of the mainstay waveforms in active sonar, the Hyperbolic Frequency Modulated (HFM) waveform. Improved range-rate estimation reduces the percentage of high-speed clutter, thus reducing both the algorithmic and operator resources required.

Correlation-Based Algorithms

Traditionally, range-rate estimates for HFM waveforms have been calculated using a correlation-based algorithm. The range-rate estimates from the correlation-based algorithm tended to incorrectly estimate the range rate of bottom-induced clutter. In the system, there are multiple channels. The correlation-based algorithm uses pairwise correlation of the HFM echo returns to update the time delay of the HFMs. The correlation information is reduced to two time delays. The two time delays are then used in a line fit, and the slope of the line is used to estimate the range rate. The existing method loses the shape information of the echoed HFM, and spurious noise spikes can cause the line fit to produce an inappropriate speed estimate.
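
The line-fit step can be written compactly. The sketch below is illustrative only: the measured delays and sweep times are assumed inputs, and the waveform-dependent factor that converts the fitted slope into a range rate is left as a caller-supplied assumption.

import numpy as np

# Fit a line to correlation-derived time delays; the slope, scaled by a
# waveform-dependent factor (assumed, supplied by the caller), gives the
# range-rate estimate. Spurious delay outliers will corrupt the fit,
# which is the weakness noted above.
def estimate_range_rate(sweep_times, time_delays, slope_to_range_rate):
    slope, _ = np.polyfit(sweep_times, time_delays, 1)
    return slope * slope_to_range_rate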

Improved Algorithm Analysis

The HPC resources are used to improve algorithm analysis, with a goal of transitioning the algorithms into fleet use. The Space and Naval Warfare (SPAWAR) System Center (SSC) Signal Processor and the ORINCON Information Processor (IP), hosted on the HPC HP V2500 multiprocessor computer, are flexible and can be configured for processing data from various acoustic arrays (Figure 2).

The HPC resources enable efficient and robust algorithm evaluation due to the amount of data that can be stored and processed.

Figure 2. Signal and Information Processing Streams

Figure 1. Surveillance Platforms


The Scalable Programming Environment (SPE) available on the V2500 provides a well-defined interface, portability, and reduced development time. The SPE layer allows for modularization, and adding a module to the system is relatively easy. Algorithms that are being compared may be run in parallel, and their classifier Receiver Operating Characteristic (ROC) curves can be analyzed quickly.
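
Comparing candidate classifiers by their ROC curves amounts to sweeping a threshold over the classifier scores and tabulating detection versus false-alarm rates. A minimal sketch, assuming labeled target/clutter scores, is shown below.

import numpy as np

# Minimal ROC sketch: scores are classifier outputs, labels are 1 for
# target and 0 for clutter. Returns probability of false alarm and
# probability of detection as the decision threshold is swept.
def roc_curve(scores, labels):
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(-scores)            # descending score order
    labels = labels[order]
    tp = np.cumsum(labels == 1)
    fp = np.cumsum(labels == 0)
    pd = tp / max(tp[-1], 1)
    pfa = fp / max(fp[-1], 1)
    return pfa, pd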

HPC allowed this research project to robustly test and evaluate the improvement in active sonar detection and classification when applying ORINCON's improved range-rate algorithm.

The SPAWAR System Center, San Diego (SSCSD) Advanced Visualization for Intelligence, Surveillance, and Reconnaissance (ADVISR) visualization tool has been used in some of the work, as has MATLAB. MATLAB has been an invaluable tool for data analysis and prototyping. The ADVISR simulations allowed for viewing the evolution of the situation over time and provided improved viewer comprehension.

Target Detections Improved

Processing within the Scalable Signal Processor and Active Information Processor consists of several steps. The echoes for the HFM transmissions are beamformed, matched filtered, noise normalized, and thresholded. The IP receives the threshold crossings and attempts to group together crossings that are from the same echo source. This is the clustering portion of the IP. The next step is to combine the multiple HFM waveforms that were transmitted into detections. Detections use an M-out-of-N criterion, where M is the minimum number of the multiple HFM sweeps and N is the number of HFM sweeps transmitted. These HFM detections can be combined with other waveform types, such as continuous waveform, to form wave train detections (Figure 3). The wave train detections are sent to the tracker for linking over time. At all stages, classification calculations are performed and target or clutter likeness is determined.
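
The M-out-of-N rule itself is simple to state in code; the sketch below is generic and not tied to the actual IP implementation.

# M-out-of-N detection criterion: declare a detection only if at least
# M of the N transmitted HFM sweeps produced an associated threshold
# crossing for the cluster under test.
def m_out_of_n(sweep_hits, m):
    """sweep_hits: sequence of booleans, one per transmitted sweep (length N)."""
    return sum(bool(h) for h in sweep_hits) >= m

# Example: a 3-out-of-4 criterion
# m_out_of_n([True, False, True, True], m=3)   # -> True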

For the data analyzed, detections of surface and sub-surface targets were achieved at tactical ranges. The improved range-rate parameter is calculated in the detection portion of the IP. Analysis of the improved range-rate parameter has shown a marked reduction in the range rate of high-speed clutter detections while preserving the range rate of targets of interest. As shown in the results plot (Figure 4), the algorithm provides a minimum 61 percent reduction in "high-speed" clutter detections. At a lower range rate X1, which is half of X2, over 50 percent of the clutter detections are removed. This improvement reduces the single-ping false alarm rate when alerting the operator to detections that are thought to be from moving objects. The importance of this project is the capability to robustly test candidate algorithms. The availability of the HPC resources allowed use of the SPE. The SPE allowed for the design of the processing components in a modular, scalable fashion. This architecture provided a test bed for active signal processing that allows easy replacement of various candidate classification algorithms. The large storage and computing capabilities allowed very large quantities of data to be processed, allowing robust testing.

Estimating Target Range Rates

In this environment and architecture, Navy researchers have tested a novel approach to estimating target range rate with HFM waveforms that are not typically associated with providing range rate. In addition to estimating the range rate of a detection, they also computed a new classification clue, the "t-ratio," which is the speed estimate divided by the standard deviation of the speed estimate.


Figure 3. HFM Wavetrain

Figure 4. Range Rate Results


The t-ratio computation represents the confidence of the range-rate estimate. Adding the t-ratio to the classification clue set resulted in a very significant reduction in the number of clutter detections that would be classified as targets and sent to an operator, thus reducing the operator's workload and allowing the operator to focus on other detections, while preserving the range rates of targets of interest.
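
Because the t-ratio is just the speed estimate divided by its standard deviation, it can be formed directly from the same line fit used for the range-rate estimate; the sketch below (illustrative only) takes the uncertainty from the fit covariance.

import numpy as np

# t-ratio clue: fitted slope divided by its standard deviation. Any
# constant slope-to-speed conversion factor cancels in the ratio.
def t_ratio(sweep_times, time_delays):
    coeffs, cov = np.polyfit(sweep_times, time_delays, 1, cov=True)
    return coeffs[0] / np.sqrt(cov[0, 0])   # large magnitude => confident estimate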

References

Alan Thompson, Richard Van Dorn, and Jeffery Dugan, "Improved HFM Range Rate Estimation in the SURTASS/LFA Information Processor," ORINCON, submitted to the Feb. 1999 NSUA conference, October 1, 1998.

"Full Spectrum Processing Algorithm Description Document for Improved Range Rate Estimation 1.0," Contract No. N66001-98-D-5007, ORINCON, OCR 98-4319-S-0359, CDRL A003BA, July 28, 1999.

CTA: Signal/Image Processing
Computer Resources: HP V2500 [SSCSD]


PROJECT AND SUSTAIN US FORCES

Anti-Access Capabilities
Minimize Logistics Footprint


References (from left to right):

Detached-Eddy Simulation of Massively Separated Flows over Aircraft by Kyle Squires, James Forsythe, Scott Morton, Kenneth Wurtzler, William Strang, Robert Tomaro, and Philippe Spalart
Flow over the delta wing at 27° angle of attack. Isosurface of vorticity

Airdrop Systems Modeling: Fluid-Structure Interaction Applications by Richard Benney, Keith Stein, Tayfun Tezduyar, Michael Accorsi, and Hamid Johari
This figure shows the structural response of the upper canopy as it deforms and interacts with the surrounding flow field. The colors on the canopy correspond to the differential pressures, which are a part of the fluid dynamics solution.

Absolute Scalability of Space/Time Domain Decomposition in Computational Structural Dynamics for Virtual Modeling and Testing Applications by R. Kanapady and K. K. Tamma
Solid modeling of keel beam

Accurate Modeling of Lossy Frequency Selective Surfaces for Real-Time Design by John Meloling, Wendy Massey, and David Hurdsman
Smith chart of boundary admittance using past approximations (green) and more accurate solutions (blue)


Project and Sustain US Forces

"All warfare is based on deception. Hence, when able to attack, we must seem unable; when using our forces, we must seem inactive; when we are near, we must make the enemy believe we are far away; when far away, we must make him believe we are near."1 The following success stories describe some important efforts to use HPC for the design and analysis of military platforms and the battlespace environment to help project and sustain US forces.

High performance computing is critical to the research, design, acquisition, and testing of military platforms with superior warfighting capabilities. One of the primary goals of these warfighting platforms is to project US forces into hostile situations and sustain them in those situations.

High performance computing contributes significantly to the design of these weapons platforms in a variety of ways, as chronicled by the first group of seventeen success stories. What is particularly impressive is the breadth of computational work represented in terms of length scales. They range from the nanoscale, as detailed in the Jarvis and Carter accomplishment of using quantum mechanics to investigate thermal barrier coatings on jet engine turbine blades, to the size of large vehicles, as discussed in the Gorski, Hyman, Taylor, and Wilson article on hydrodynamics of entire ships. Component-level simulations of armor and composites are discussed in articles by Holmquist and Johnson, and Shires, Henz, and Mohan. Analysis of components such as turbomachines and full systems such as modern airplanes is performed with Computational Fluid Dynamics (CFD) simulations in articles by Moin, You, Wand, and Mittal; by Jurkovich, Rittinger, and Weyer; by Polsky; by Rizzetta; by Green and Ott; and by Forsythe, Morton, Squires, Wurtzler, Strang, Tomaro, and Spalart. Pushing the computational frontiers downward in size to the nanoscale to model specific surfaces atomistically, or upward in size to model aerodynamic flow in a time-dependent fashion around whole aircraft, represents an expansion of the computational capability envelope that is now just becoming possible with present-day systems.

Along another dimension, this set of success stories represents several of the program's major computational areas. In addition to CFD, there are also stories in computational chemistry and materials science, computational structural mechanics, and computational electromagnetics and acoustics. The stories by Hill, Navale, and Sancer, and by Mark, Clarke, Namburu, Le, and Nehrbass employ the techniques of computational electromagnetics (CEM) to compute the electromagnetic signatures of electromagnetically large combat vehicles. Of particular interest is interdisciplinary computational work, as presented in the Soo, Feiz, and Menon contribution on incorporation of finite-rate chemistry into computational fluid dynamic studies of fuel injectors for propulsion systems, and in the coupled fluid/structure investigations by Hinrichsen of aircraft wing response to artillery damage. The interaction of structures with the surrounding land, air, and sea environments is investigated in the article by Prakash on testing armor, in the article by Benney, Stein, Tezduyar, Accorsi, and Johari on airdrop systems, and in the article on the effects of mines by Landsberg.

All in all, these seventeen stories make impressive contributions to solving real DoD problems. Every single one of them addresses a real mission need, ranging from a specific, short-term problem on a current weapon system through basic research and design work on future weapons systems.

Synchronization and coordination within the battlespace to project and sustain US Forces are key transformational military goals. The 2001 Quadrennial Defense Review states that DoD will "… develop an interoperable, joint C4ISR architecture and capability that includes a tailorable joint operational battlespace."

1 Sun Tzu, The Art of War


This battlespace is defined by "layers" of information where some layers are static (e.g., land, buildings, space) and others are dynamic (e.g., weather, ocean, foliage, weapons locations). The battlespace environment itself is the subject of the next group of five stories, which discuss specifically how the Common High Performance Computing Software Support Initiative (CHSSI) and DoD HPC efforts help build the dynamic weather/ocean layers.

Characterizing the environmental battlespace is the most important goal in building dynamic weather/ocean layers. But what does "characterizing the environmental battlespace" really mean? It is the ability to take real-time weather/ocean observations, combine the observations with numerical model fields, and create a volumetric dataset where weather/ocean layers are defined in space and time. Allard et al. provide an overview of how their CHSSI portfolio efforts are building the ocean, tide, wave, estuarine, and riverine layers for littoral regions. Through new techniques in coupling different models to create a synthesized environment, the littoral battlespace layers will constantly provide the warfighters with the appropriate readiness to conduct their missions.

The Fleet requires wave and surf hindcasts and short-term forecasts for accurate mission planning. Successful amphibious assaults are enabled by accurate wave modeling along the beach. Smith describes how HPC resources were critical to building the "wave" layer within the littoral battlespace portfolio. The integration effort described by Allard et al. involves the coupling of the wave layer to other littoral layers, and the fusing of the layers (in time and space) to create a volumetric dataset. Burnett focuses on the atmospheric layer and describes a successful story of how HPC resources allowed the operational community to continue providing operational weather layers to the warfighter while the Production Center converted from vector to scalable architectures. The story by Hughes, Wegiel, McAtte, Michalakes, Barker, and LaCroix discusses integration of data from real-time weather observations with HPC forecast models.

The final critical part in developing the environmental battlespace is the visualization and understanding of the various layers. Gruzinskas describes how HPC efforts in visualizing the battlespace environment will ensure that planners and decision-makers have an in-depth understanding of the constantly changing environmental conditions. He describes a particular case where the Navy supported the Ehime Maru salvage operations using virtual environments and on-scene data. The effort actually went a step further and integrated real-time observations with the virtual environment.

Characterizing the battlespace environment is an on-going effort. These success stories show that the Meteorology and Oceanography Community is only a third of the way toward reaching its goal of creating a volumetric battlespace. Initial steps in building the littoral environmental battlespace, and crucial DoD HPC support through CHSSI, Programming Environment and Training (PET), and the MSRCs, will allow the community to reach this goal within the next five years.

Dr. William Burnett
Naval Meteorology and Oceanography Command, Stennis Space Center, MS

Dr. Larry P. Davis
HPCMP Office, Arlington, VA

Dr. Robert E. Peterkin, Jr.
HPCMP Office, Arlington, VA


How Quantum Mechanics Can Impact Jet Engines: Improving Aircraft Design at the Nanoscale
EMILY A. JARVIS AND EMILY A. CARTER

University of California, Department of Chemistry and Biochemistry, Los Angeles, CA
Sponsor: Air Force Office of Scientific Research (AFOSR)

Thermal barrier coatings (TBCs) of jet engine turbines afford thermal and oxidative/corrosive protection of the underlying metal superalloy that is crucial to aircraft performance. This coating must withstand temperature cycling over a range exceeding 1000 °C as the engine is taken from rest to maximum operating capacity. The highest temperatures accessed within the combustion chamber are even beyond the melting point of the superalloy substrate. Nevertheless, such high temperature combustion is required to attain optimal engine power and maximum fuel efficiency.

For years, the TBC of choice has been zirconia with ~7–8% yttria (yttria-stabilized zirconia, YSZ) added to stabilize the tetragonal/cubic phases of zirconia over the relevant temperature cycling range. This Y2O3-ZrO2 coating affords significant thermal protection, exhibiting very low thermal conductivity. Unfortunately, it is not well suited to oxidative/corrosive protection. Furthermore, its materials properties are not optimally matched to the single crystal nickel alloy substrate. As a result, an intermediate protective layer, commonly called the "bond coat" and typically consisting of a NiCrAlY alloy, is deposited on the nickel superalloy prior to deposition of the YSZ. This layer provides a thermal expansion coefficient intermediate between those of the superalloy and the YSZ. It also is designed to promote YSZ adhesion, and it must supply a source of oxidative/corrosive protection from the harsh combustion gases. The Al content in the bond coat is responsible for this latter task. The oxidation product of aluminum, Al2O3, forms a nearly continuous layer that serves as a slow-growing, effective barrier against further oxidation of the underlying metal alloy.

TBC Spallation

Unfortunately, a process known as spallation, whereby the TBC peels off the underlying substrate, limits a TBC's operational lifetime. TBC spallation is catastrophic because: 1) the superalloy composing the turbine blade is left unprotected; and 2) large spalled coating pieces may pose additional danger during engine operation. Turbine blades with failed TBCs must be recoated or replaced. To help minimize spallation, a nickel alloy bond coat is deposited on the superalloy substrate before the TBC is applied. This bond coat can promote adhesion, serve as a graded thermal expansion layer, and limit hot oxidation/corrosion resulting from high temperature oxygen diffusion through zirconia. As a result of the TBC's transparency to high temperature oxygen diffusion, most bond coat alloys in current production form a thermally grown oxide layer of primarily Al2O3. The thickening of the thermally grown oxide with repeated thermal cycling is accompanied by increased spallation. Research, sponsored by AFOSR, is being conducted to suggest chemical modifications to the bond coat alloy that will improve adhesion at the ceramic/metal interface. It will inhibit oxide formation or form oxide products more ideally suited to the imposed interface and thermal cycling requirements. In addition to jet engine turbine applications, improved TBCs might permit more efficient running of stationary power plants, providing benefits by decreasing both pollution and cost.

Trial and Error Methods

The physical and chemical complexity of TBCs has required "trial and error" empirical engineering methods for most material advances.




From a theoretical standpoint, the rich physics underlying the fundamental nature of metal/oxide interfaces is only recently being explored. Before this, the strong attractive force at nonreactive metal/ceramic interfaces had been attributed to classical electrostatic effects. Although it is likely these electrostatic forces are important, a theory emphasizing these contributions to the exclusion of chemical bonding mechanisms provides an incomplete picture of interface adhesion. More importantly, the logical conclusions that can be inferred from such a theory result in counter-productive materials design predictions.

Exploring Metal-Ceramic Interfaces

Only with the recent advent of high performance computing capabilities is it feasible to explore the detailed interactions at metal-ceramic interfaces. In many instances, quantum mechanical effects may dominate the interface adhesion. Treating the quantum mechanical problem in metal/ceramic interface studies is very computationally challenging, requiring extensive computing power and memory. Neglecting to treat the quantum mechanical problem, or treating these effects at a very approximate level, is far less computationally expensive but excludes the effects necessary for crucial insights into materials advancements.

Researchers at the University of California, Los Angeles, employ density functional theory (using planewave bases and pseudopotentials within the Vienna Ab Initio Simulation Package) to investigate atomic-level interactions at ideal ceramic/metal interfaces designed for TBC applications. These calculations are computationally intensive but can be run efficiently on parallel platforms. Although the system size is limited to a few hundred atoms due to computational size constraints, the three-dimensional (3-D) periodic boundary conditions simulate an effective "infinite" interface, neglecting long-ranged relaxation effects. They have performed both static calculations, where the ionic coordinates are relaxed to a local minimum energy configuration using a conjugate-gradient algorithm, and molecular dynamics simulations of high-temperature annealing to better approximate the extreme environment in jet engine combustion chambers.
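
The adhesion values discussed below are typically expressed as an ideal work of adhesion formed from DFT total energies of the isolated slabs and of the combined interface cell. The article does not give the formula; the standard definition is sketched here, with the energies and interface area as assumed inputs.

# Ideal work of adhesion from DFT total energies (standard definition):
# W_ad = (E_metal_slab + E_ceramic_slab - E_interface) / A
EV_PER_A2_TO_J_PER_M2 = 16.0218          # 1 eV/Angstrom^2 = 16.0218 J/m^2

def work_of_adhesion(e_metal_slab, e_ceramic_slab, e_interface, area_a2):
    """Energies in eV, interface area in Angstrom^2; returns J/m^2."""
    return (e_metal_slab + e_ceramic_slab - e_interface) / area_a2 * EV_PER_A2_TO_J_PER_M2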

Previously, the calculations provided insight into atomic-level contributions to the observed spallation of TBCs. They found the ideal interface between Al2O3 and the bond coat alloy, as well as the Al2O3 interface with the zirconia topcoat, is unfavorable from the standpoint of interfacial adhesion and chemical bonding. In particular, the ceramic/metal interface adhesion dramatically decreases with increasing oxide thickness. Their calculations indicated a fundamental weakness of such interfaces could be linked to the high ionicity, and thus closed-shell-like behavior, of the Al2O3 interface with an alloy primarily composed of Ni, an element with a mostly filled d-shell. Such electronic structure results in effective "closed-shell repulsions" at the interface instead of strong bonding interactions.

Investigating Chemical Modifications

Accordingly, the team investigated chemical modifications to the metal alloy bond coat designed to promote a strong interface by permitting bonding interactions at the ceramic/metal interface rather than closed-shell repulsions. They explored the atomic-level impact of early transition metal dopants at the ceramic/metal interface currently relevant in TBCs, Al2O3/Ni (Figure 1). Their initial results indicated that early transition metal doping of the bond coat alloy effects a dramatic increase in adhesion. Their calculations showed this bond coat alloy "doping" resulted in a ~45% increase in adhesion, relative to nickel, at the ZrO2/Ni interface and a dramatic 100% increase in adhesion at the Al2O3/Ni interface.

Over the past year, the team has performed detailed studies exploring how the electron density is modified as a result of early transition metal interface doping. They also investigated the effects of modifying the interface electronic structure through altering the oxygen states of the ceramic.

They calculated the ideal adhesion at the SiO2/Ni and ZrO2/SiO2 interfaces and found it to be much greater than the adhesion at the corresponding interfaces with Al2O3 instead of SiO2. Similar to early transition metal interface dopants, the dramatically increased adhesion is a result of quantum mechanical interactions at the interface. The Si-O bond in SiO2 is less ionic than the Al-O bond in Al2O3; thus the oxygen in SiO2 does not contribute local interface repulsions like those seen in the Al2O3/Ni couple. Local bonding interactions are essential.

Figure 1. This figure displays the doped metal-ceramic interface structure from atomic-level calculations. Although the actual number of atoms in the simulation is fairly small, periodic boundary conditions allow the team to simulate an "infinite" interface.


Using only electrostatic considerations as the means to predict strong metal-ceramic interfaces for TBC applications results in the wrong physics. Their calculations indicate chemical bonding effects are responsible for strong heterogeneous interface adhesion.

Predicting Materials Improvements

Previous studies have concentrated on achieving a detailed understanding of atomic-level weaknesses at idealized interfaces related to those found in TBCs and predicting materials improvements to these interfaces. The team's predictions that quantum mechanical effects play the dominant role in strong interface adhesion may allow new materials design paradigms that permit informed engineering modifications to real TBCs. Predictions for improving materials performance should significantly decrease the "trial and error" phase of materials design for these applications.

Although current results may provide fruitful avenues for materials improvements, additional challenges remain. In the future, the team plans to study the effect of diffusion and segregation of harmful dopants to the metal alloy/oxide interface. A detailed understanding of how interface adhesion is reduced by undesirable segregants, such as sulfur, may suggest means by which the detrimental impact of such elements could be limited or eliminated.

Figure 2. Contour plot of the electron density difference between the sum of the isolated metal substrate and ceramic coating densities and the metal-ceramic interface (Al2O3/Ti-Ni) electron density. The color scale runs from red to violet, with red showing increased density in the interface and violet indicating decreased density. The yellow "circles" in the horizontal region indicated by the blue line display a region of metal-metal bonding as a result of interface formation.

References

E.A.A. Jarvis, A. Christensen, and E.A. Carter, "Weak Bonding of Alumina Coatings on Ni(111)," Surface Science 487, pp. 55–76, 2001.

E.A.A. Jarvis and E.A. Carter, "The Role of Reactive Elements in Thermal Barrier Coatings," Computing in Science and Engineering 4, pp. 33–41, 2002.

E.A. Jarvis and E.A. Carter, "Importance of Open-Shell Effects in Adhesion at Metal-Ceramic Interfaces," Phys. Rev. B 66, pp. 100–103, 2002.

CTA: Computational Chemistry and Materials Science
Computer Resources: IBM SP P3 [MHPCC]


Modeling "Smart" Fuel Injector Designs
JIN HOU SOO, HAMAYOON FEIZ, AND SURESH MENON

Georgia Institute of Technology, Atlanta, GA
Sponsor: Army Research Office (ARO)

The Army needs lightweight, compact, fuel-efficient propulsion systems for helicopters and tanks for the Army After Next. This will require significant improvement in the performance of the next generation of gas turbine engines. Reduction in engine size and weight will also reduce the logistics burden for rapidly deployable forces. Reduced fuel consumption is especially critical since fuel is a major logistics burden. Under the leadership of the Army Research Office (ARO), scientists are developing advanced fuel-efficient systems to reduce fuel consumption without sacrificing performance. However, experimental investigations of such systems are complicated by the difficulty in obtaining non-intrusive access to the hot, turbulent reacting region in the combustor. Numerical tools that can predict the unsteady combustion processes may be able to provide a new, previously unavailable capability to investigate the mixing enhancement process. This, in turn, will advance the design process.

Revolutionizing the Design of Fuel Injectors

The department's engineers and scientific researchers must revolutionize the design of fuel injectors to increase fuel efficiency and reduce the environmental impact of combustion-powered vehicles. Fuel injectors are designed to rapidly and thoroughly mix fuel with the surrounding air (oxidizer) before the mixture is burnt. Ineffective or incomplete mixing produces sooty flames and heat loss, which causes a substantial performance drop. Traditional injectors are designed with circular or round jets since they generally deliver a symmetric (though not necessarily uniform) fuel mixture. Geometries such as square jets (Figure 1), however, may offer increased mixing rates. Fuel injectors that employ active control of fuel dispersion may offer even more enhanced mixing efficiency. For example, synthetic microjet forcing of fuel injectors has shown a remarkable ability to increase and control mixing. Synthetic jets do not add additional fuel to the mixture but, instead, work with fluid already injected into the mixing environment. Synthetic jets can be used to increase fuel/air mixing rates and/or to actively control the primary jet flow or boundary layers by periodically sucking and blowing fluid into and out of their micro-scale storage chambers, as shown in Figure 2. Innovative techniques based on micro-electro-mechanical systems (MEMS) technology are currently being studied where the fuel jet is actively forced using embedded synthetic micro-jets.



Figure 1. Simulation of square jet using the Lattice-Boltzmann method. Square vortices, depicted by the vorticity magnitude isosurface, are propelled downstream (upwards) with the flow. The streamlines, drawn from outside the jet core, reveal the high mass entrainment rates characteristic of square jets.


Significant enhancement of fuel-air mixing appears to be occurring due to these MEMS devices; however, detailed observation of the flow field generated by these micro-scale actuators is difficult, if not impossible.

High Performance Modeling of Synthetic Jets

Modeling of fuel injectors is a challenging task due to their small size. The small dimensions restrict the maximum allowable time-step that may be taken in a traditional CFD simulation. Time-accurate (unsteady) simulations are required to capture the dynamics of the unsteady nature of jet motion and fuel/air mixing. The restrictive time-step effect is even more pronounced for micro-scale synthetic jets, which have to be small enough to actually fit inside the primary fuel jets. A new modeling technique, the Lattice-Boltzmann Equation (LBE) method, is used to simulate the small-scale flow in one synthetic jet and other novel fuel injector designs. Unlike traditional CFD, which represents the working fluid as a continuous system (macroscopic view), LBE treats the fluid as a collection of particles governed by a single statistical distribution equation (microscopic view). This technique is advantageous when simulating small-scale flows since its time-step limitations are not restrictive. Additionally, LBE is easily parallelized and is highly scalable on a parallel high performance computing system.
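
To show the stream-and-collide structure the paragraph describes, a minimal single-relaxation-time (BGK) lattice-Boltzmann sketch on the common two-dimensional D2Q9 lattice is given below. It is an illustration only; the injector simulations in this work are three-dimensional and include forcing and boundary conditions not shown here.

import numpy as np

# Minimal D2Q9 BGK lattice-Boltzmann step (periodic domain), illustrating
# the collide-and-stream structure of the LBE method.
W = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def equilibrium(rho, ux, uy):
    cu = C[:, 0, None, None] * ux + C[:, 1, None, None] * uy
    return W[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))

def step(f, tau):
    rho = f.sum(axis=0)
    ux = (f * C[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * C[:, 1, None, None]).sum(axis=0) / rho
    f = f + (equilibrium(rho, ux, uy) - f) / tau          # BGK collision
    for i in range(9):                                    # streaming
        f[i] = np.roll(np.roll(f[i], C[i, 0], axis=0), C[i, 1], axis=1)
    return f

# Example: start from a uniform fluid at rest on a 64 x 64 grid
# f = equilibrium(np.ones((64, 64)), np.zeros((64, 64)), np.zeros((64, 64)))
# f = step(f, tau=0.6)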

In the current studies, various 3-D non-circular and synthetic jets are simulated using the LBE method. The objective of this study is to investigate the turbulent dynamics of these jets and to evaluate the effectiveness and accuracy of the 3-D, parallel LBE method. Typically, 2–3 million grid points are used to resolve the resulting unsteady flow. This complexity requires 600 CPU hours and nearly 2 gigabytes of memory on a distributed shared-memory parallel machine. The extreme efficiency of the LBE algorithm, coupled with the high performance computing resources available at the DoD MSRCs, allows high-resolution parametric jet simulations with rapid turn-around times.

Excellent results have been obtained from the LBE simulations. Simulations of free and forced square jets have been able to capture phenomena such as axis switching (vortices inverting as a result of the non-uniform flow) and increased mass entrainment. Similarly, direct comparisons with experiments on a series of synthetic jets have also been conducted. The simulations show good agreement with the experiments, adding credibility to the numerical method. Finally, the interaction of a square jet with a steady cross-flow (as would be used for boundary-layer control or combustor wall cooling) has been investigated. Complex flow structures such as "horse-shoe" and "hairpin" vortices have been observed.

Several extensions to the LBE method are currently underway. These include extending this promising method to combustion (which requires the modeling of chemical reactions and heat release) and coupling the LBE simulations to Large-Eddy Simulations (LES). LES is an effective tool for simulating unsteady, complex flows such as those found inside gas turbine combustors. The coupled LBE/LES method would allow the simulation of both the fuel injector mixing regions and the combustor at an affordable computational cost. The combined LBE/LES approach provides a unique simulation capability that can be used to investigate the performance of these fuel injectors under realistic operating conditions. In the future, many other issues of practical interest to DoD, such as high-pressure combustion, supercritical fuels, and combustion instability, can be addressed with these codes.

ReferencesMenon, S. and Wand, H., “Fuel-air Mixing EnhancementUsing Synthetic Microjets,” AIAA Journal, Vol. 39, No. 12,2001.

Figure 2. Synthetic microjetsare used to increase fuel/airmixing and boundary layercontrol. Outside air is suckedinto the jet cavity and thenrapidly expelled outwards intothe surround mixture. Shownhere, is the simulation of ahigh-aspect-ratio (high widthto length ratio) synthetic jet.

CTA: Computational Fluid DynamicsComputer Resources: IBM P3 SMP, Cray T3E, andMass Storage [NAVO]


Computational Modeling of Ceramic Armor
TIMOTHY HOLMQUIST AND GORDON JOHNSON

Army High Performance Computing Research Center (AHPCRC)/NetworkCS, Minneapolis, MN

The United States Army is developing advanced lightweight armor to increase vehicle protection and reduce vehicle weight. One class of materials that has been considered for armor applications in recent years is ceramics. Ceramic materials are very strong in compression, relatively weak in tension, and tend to be lightweight when compared to more traditional armor materials such as metals. Certain ceramics have also exhibited a phenomenon called “ceramic dwell.” Ceramic dwell occurs when a high-velocity projectile impacts a ceramic target and then dwells on the surface of the ceramic with no significant penetration. These characteristics make ceramics well suited for armor applications, but they also present a significant design challenge when incorporated into armor systems.

Ballistic Testing Used to Develop Ceramic Armor

Historically, ceramic armor has been developed through rigorous ballistic testing. Although much advancement has been made in ceramic armor design primarily using ballistic testing, it is a time-consuming and very expensive process. A better approach is to use ballistic testing along with computations. Effective armor design can be enhanced, cost can be reduced, and the behavior better understood by the use of HPC resources if the computational models adequately represent the material behavior. Furthermore, the HPC systems have the memory and speed required to perform these computations in a timely and efficient manner.

Recent advancements in the development of ceramic material models and numerical algorithms at AHPCRC allow for accurate representation of ceramic material behavior. Specifically, the ability to model the phenomenon of ceramic dwell has been developed, and computations have been performed to investigate ceramic dwell and how it is affected by target geometry.

Figure 1 presents the computational results for three target configurations. All three targets are comprised of a ceramic core surrounded by metal. The baseline geometry is a configuration that was investigated experimentally, and the computational results accurately represent the behavior exhibited in the experiment. The other two target configurations, slight modifications of the baseline geometry, were computed to investigate the effects of diameter and cover-plate attachment. The penetration results are presented as a function of time at the bottom of Figure 1. The results indicate that the best performing design is the large-diameter configuration (least ceramic penetration at 60 microseconds) and the worst performing design is the detached cover (most ceramic penetration at 60 microseconds). Computations such as these can be performed to provide insight into the complex behavior of ceramic armor and to aid the armor designer in developing improved armor systems.

The ability to accurately simulate the ceramic dwell phenomenon allows Army researchers to design ceramic armor in an efficient, cost-effective manner. This new capability is currently being used by ARL and the Tank Automotive Research, Development and Engineering Center (TARDEC) to help design ceramic armor for the Future Combat System (FCS).



Figure 1. Computational results for a baseline ceramic target geometry, a large-diameter geometry, and a detached-cover geometry. For all three geometries, the ceramic material is encapsulated by metal. Also shown are the ceramic penetration time histories.

CTA: Computational Structural Mechanics
Computer Resources: SGI Onyx2 [TARDEC]

References
Holmquist, T.J. and Johnson, G.R., “Response of Silicon Carbide to High Velocity Impact,” Journal of Applied Physics, Vol. 91, No. 9, May 1, 2002.

Holmquist, T.J., Templeton, D.W., and Bishnoi, K.D., “Constitutive Modeling of Aluminum Nitride for Large Strain, High-Strain Rate, and High Pressure Applications,” International Journal of Impact Engineering, Vol. 25, No. 3, 2001.


Parallel Computing Enables Streamlined Acquisition for Composite Materials
DALE SHIRES AND BRIAN HENZ

Army Research Laboratory (ARL), Aberdeen Proving Ground, MD

RAM MOHAN

University of New Orleans, New Orleans, LA

As composite materials are increasingly used in weapon and combat systems (both retrofitted and new components), it is increasingly important to fully understand the complexities associated with their manufacture. Structural materials used in the past, such as metals, have well-established design, manufacturing, and testing protocols. The same cannot be said of composites. Even after years of use, these structures remain risky in terms of cost and the difficulties associated with their manufacture. Traditionally, experienced process engineers have manufactured these parts using “best guess” approaches. However, the complexities of the new designs being developed to support FCS significantly highlight the limits of that approach. Composites with embedded structures, for example, significantly increase the level of difficulty in producing a viable part on the first manufacturing attempt.

Modeling Composites and the Manufacturing Process

The best solution to overcome these problems is a physics-based modeling and simulation approach. Liquid injection molding of composites usually involves a fiberglass or organic-matrix fiber structure being placed into a mold and injected with a thermosetting resin. Darcy's law governs the flow of resin through a porous medium: it relates the velocity field to the pressure gradient through the fluid viscosity and the fiber preform permeability. Darcy's law is defined as

u = -(K/µ) ∇P

where u is the velocity field, µ is the fluid viscosity, and P is the pressure. The permeability K is a measure of the resistance experienced by a fluid as it flows through a porous medium and is an important material characteristic in the flow simulations. Complex structures are discretized using finite element meshes to model the geometric complexities. Material regions in the model are defined based on the permeability characteristics of the fiber preforms.
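For illustration, Darcy velocities can be evaluated directly from a pressure field on a regular grid, as sketched below. This is a simple finite-difference sketch under assumed units; COMPOSE itself uses the finite element discretizations described above.

```python
import numpy as np

def darcy_velocity(pressure, permeability, viscosity, dx, dy):
    """Darcy's law, u = -(K / mu) * grad(P), evaluated on a regular grid.

    pressure:     2-D array of pressures P [Pa]
    permeability: preform permeability K (scalar or 2-D array) [m^2]
    viscosity:    resin viscosity mu [Pa*s]
    dx, dy:       grid spacing in x and y [m]
    """
    dpdx, dpdy = np.gradient(pressure, dx, dy)   # pressure gradient components
    ux = -(permeability / viscosity) * dpdx
    uy = -(permeability / viscosity) * dpdy
    return ux, uy
```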

Traditional numerical approaches to model the manufacturing phenomena, such as the finite element-control volume method, were considered and found inadequate. The stability conditions associated with the solution required computer times exceeding weeks of single-processor or parallel computer time. However, the newer methodologies found in the COmposite Manufacturing PrOcess Simulation Environment (COMPOSE) suite of tools were developed with a focus on parallelism and fundamental algorithmic changes. Now, large-scale process simulations of even the most detailed and complex parts are possible, with results available in hours rather than weeks or even months. Process engineers can perform parametric studies to identify potential manufacturing problems and optimize the manufacturing process before capital investment in mold tooling is made. COMPOSE developers are working to expand these capabilities by adding sensitivity analysis and thermal and cure modeling to the parallel solutions. Recent additions, such as a cross-platform Java-based user interface, are putting the technology in the hands of process engineers and manufacturers even faster.




COMPOSE has been used to model numerous geometries found in rotorcraft applications. These parts include fuselage components, stabilizers, and interior components. Recent work with Sikorsky United Technologies was directed toward manufacturing improvements for composite seat fittings found in the RAH-66 Comanche helicopter (shown in Figure 1). This part, shown in Figure 2, is thin-shelled, made from a graphite woven fiber, and injected with a polymer resin. By using computer modeling, it was possible to verify the injection scheme anticipated in production and to suggest improvements. Process simulations were conducted using various numbers of processors (Figure 3 shows a 16-processor partition). Vent locations, optimally placed near areas that are last to fill, were also determined from the model. Detailed parametric studies and numerous white papers addressing the manufacturing practices of this part were possible thanks to the quick turn-around time provided by the parallel COMPOSE suite of tools. One example showing resin flow progression through the part, based on injection into a channel along the bottom edge, is shown in Figure 4.

Transitioning to DoD Industrial Partners

This work is part of a larger effort directed toward facilitating and enhancing the use of composite materials in military systems. ARL and the Universities of New Orleans and Minnesota are conducting the basic research behind process modeling.

Figure 1. RAH-66 Comanche heavily features composite materials (image courtesy of Sikorsky)

Figure 2. Material regions for co-pilot’s seat

Figure 3. 16-processor partition of co-pilot’s seat

Transition to DoD industrial partners such as Boeing and Sikorsky is being conducted through the Aviation Applied Technology Directorate (AATD), the Weapons and Materials Research Directorate-ARL, and the US Army Aviation and Missile Command. These process models are one of the significant manufacturing technology advancements found in the Rotary Wing Structures Technology Demonstration project managed by AATD.

References
D. Shires, R. Mohan, and B. Henz, “Scalable Performance and Application of CHSSI Software COMPOSE for Composite Material Processing.” Proceedings of the 2002 DoD HPCMO User’s Group Conference, June 2002.

R. Mohan, D. Shires, A. Mark, W. Roy, S. Walsh, and J. Sen, “The Application of Process Simulation Tools to Reduce the Risks in Liquid Molding of Composites.” Proceedings of the American Helicopter Society 57th Annual Forum and Technology Display, Washington, DC, May 2001.

R. Mohan, D. Shires, and A. Mark, “Scalable Large-Scale Process Modeling and Simulations in Liquid Composite Molding.” Proceedings of the 2001 International Conference on Computational Science, San Francisco, CA, May 2001.

Figure 4. Plot of resin impregnation times (blue is last to fill)

CTA: Integrated Modeling and Testing
Computer Resources: SGI Origin 2000, SGI Origin 3800, and IBM NH-2 [ARL, AHPCRC]


Large-Eddy Simulation of Tip-Clearance Flow in Turbomachines
PARVIZ MOIN, DONGHYUN YOU, AND MENG WANG

Stanford University, Stanford, CA

RAJAT MITTAL

The George Washington University, Washington, DC

Cavitation is “the formation of partial vacuums in a liquid by a swiftly moving solid body (as a propeller) or by high-intensity sound waves.”1 In liquid handling systems like turbomachinery pumps and propellers, low-pressure fluctuations downstream of the rotor can induce cavitation, leading to acoustic noise, loss of performance, and structural damage. This complex, highly unsteady flow phenomenon is poorly understood and cannot be predicted by traditional CFD methods. In the present work, sponsored by the Navy, researchers employ LES to study the flow past a stator (stationary)-rotor combination with tip leakage, with the objective of understanding, predicting, and eventually controlling cavitation. The LES code developed for this project has the potential to be a useful research and design tool for other future weapons systems design efforts.

Understanding the Underlying Causes of Cavitation

Traditionally, scientists have used experiments to study tip-leakage flow in turbomachines. There is evidence suggesting that low-pressure events and cavitation are strongly related to the turbulence associated with the tip-leakage vortex near the endwall (Figure 1). However, knowledge of the detailed mechanisms is still quite limited. Computational studies of this flow have customarily relied upon Reynolds-Averaged Navier-Stokes (RANS) models, which cannot describe the highly unsteady flow and pressure characteristics. In addition, the tip-gap region has presented CFD practitioners with considerable challenges in terms of grid topology and resolution.

Introduction of the Large-Eddy Simulation

The HPCMP resources allow scientists to use LES, which is computationally much more intensive than RANS calculations. As shown in Figure 2, the present LES research attempts to resolve a wide range of flow scales associated with the stator wake, the rotor tip-leakage vortex, and the turbulent boundary layers. The computational domain is quite long since the region of interest extends several chord lengths into the wake. The flow is three-dimensional without a statistically homogeneous direction, requiring extremely long sampling times to obtain converged flow statistics. A full simulation requires more than 25 million grid points and 100,000 time steps. A typical production run requires over 50,000 single-node CPU hours and 10 gigabytes (GB) of memory on the SGI Origin 3800, and generates over 50 GB of data, rendering these simulations feasible only on large, high performance, parallel platforms. Large-scale simulations of this magnitude would have been impossible without HPCMP resources.


1. Merriam-Webster’s Collegiate® Dictionary


Figure 1. Example of a ducted propeller with tip clearance



A large-eddy simulation code, which combines an immersed-boundary technique with a generalized curvilinear structured grid, has been employed to carry out the simulations. From the preliminary results, mean velocities, Reynolds stresses, and energy spectra show reasonable agreement with experimental measurements (Figure 3). The flowfield exhibits strong circular motions associated with the tip-leakage and tip-separation vortices. These swirling structures convect downstream, expand in size, and generate intense turbulent fluctuations in the endwall region. Strong pressure fluctuations that may be responsible for the cavitation and acoustic noise have also been observed. Analysis of the results indicates that further improvement may be obtained by increasing the streamwise resolution in the region where the tip-leakage and trailing-edge vortices interact with each other.

Expanding Naval Understanding

The simulations conducted to date have shown encouraging results in terms of predicting both the qualitative and quantitative features of the rotor tip-leakage flow as observed experimentally. Grid-refinement studies are underway, and the solutions will be fully validated. Once this is done, Professor Moin’s team will focus on a detailed analysis of the data, with emphasis on understanding the turbulence characteristics, vortex dynamics, and pressure fluctuations in and downstream of the tip-gap region. In addition, stator-rotor interaction and rotational effects will be included to simulate the flow in a more realistic configuration. The parallel performance of the code will be further enhanced by employing domain decomposition strategies coupled with explicit Message Passing Interface (MPI) calls, instead of the loop-level directives (OpenMP) currently in use.
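The kind of slab decomposition and halo exchange that such an explicit MPI strategy implies can be sketched as below. This is an illustrative mpi4py example under assumed array sizes and a one-dimensional rank layout; it is not the project's solver code.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# 1-D slab decomposition of the flow domain across ranks, with one ghost
# plane on each side of the local block for the halo exchange.
local_nx, ny = 128, 64
u = np.zeros((local_nx + 2, ny))               # rows 0 and -1 are ghost planes

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

def exchange_halos(u):
    """Swap boundary planes with neighboring ranks before each time step."""
    # send my last interior plane to the right neighbor, fill my left ghost plane
    comm.Sendrecv(sendbuf=u[-2].copy(), dest=right, recvbuf=u[0], source=left)
    # send my first interior plane to the left neighbor, fill my right ghost plane
    comm.Sendrecv(sendbuf=u[1].copy(), dest=left, recvbuf=u[-1], source=right)

exchange_halos(u)   # typically called once per time step, before the interior update
```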

The results of this research will eventually lead to a new generation of quieter weapons systems capable of greater performance at reduced cost.

References
You, D., Mittal, R., Wang, M., and Moin, P., “Large-Eddy Simulation of a Rotor Tip-Clearance Flow,” AIAA Paper No. 2002-0981, 40th AIAA Aerospace Sciences Meeting & Exhibit, Reno, NV, January 14–17, 2002.

You, D., Mittal, R., Wang, M., and Moin, P., “Large-Eddy Simulation of a Rotor Tip-Clearance Flow,” Bulletin of the American Physical Society, Vol. 46, No. 10, p. 106, 54th Annual Meeting of the Division of Fluid Dynamics, the American Physical Society, San Diego, CA, 2001.

Figure 2. Instantaneous streamwise velocity iso-surfaces revealing the inflow, the tip-leakage flow, and the rotor-blade wake

Figure 3. Energy spectra of each velocity component as a function of frequency at a location in the tip-leakage region

CTA: Computational Fluid Dynamics
Computer Resources: SGI Origin 3800 [ARL] and Compaq GS320 [ASC]


Spoil the Strikes
M.S. JURKOVICH, T.E. RITTINGER, AND R.M. WEYER

Aeronautical Systems Center (ASC), Wright-Patterson Air Force Base, OH

The Air Force has recently undergone a second round of flight tests to address problems identified on a new refueling pod for the KC-135R aircraft. It is a hose-and-drogue system called the Multi-Point Refueling System (MPRS) (Figure 1). When the hose and drogue on the KC-135R are retracted, the drogue often strikes and occasionally damages the wing. This can lead to unacceptable downtime to repair the wing trailing edge and tip.

Conventional Testing Fails to Yield Solutions

Prior to flight test, a team from the Engineering Directorate of the Aeronautical Systems Center (ASC) was called on to perform CFD modeling of the system. Previous wind tunnel and flight tests had failed to identify the source of the problem; the wind tunnel measurements were not taken close enough to the wing to discover the flow phenomena responsible for causing the drogue to strike the wing.

High Performance Computing Solutions

The ASC research team used HPCMP computational resources and CFD to solve multiple flight conditions in a reasonable amount of time. The investigating team used the CHSSI-developed code, Cobalt60, for the CFD solutions, and coupled it with an in-house, steady-state catenary solver. The computational solutions provided the entire flowfield, allowing engineers to identify sources of the problem. Solution visualization using Fieldview identified a large upwash in the flowfield just behind the MPRS refueling pod. As Figure 2 illustrates, and as calculated by the catenary solver, the drogue was being pushed up into the wing by the flowfield in proximity to the pod. The design of the back end of the pod allows the flow to remain attached and turn upward. This flow behind the pod contributes to the general upwash flowfield created by the wing tip vortex. HPCMP resources allowed ASC to compute many variations on the geometry, including the baseline geometry, the French Air Force variant, the wind tunnel geometry, rotated fins, inboard fins, three spoilers, and the entire aircraft in sideslip.


Figure 1. Aircraft view showing installation of MPRS

Figure 2. Baseline geometry resulting in local upwash behind pod



Computational grids were generated with the National Aeronautics and Space Administration Langley and Air Force Research Laboratory codes Gridtool/Vgrid and Blacksmith. The largest grid was 14 million tetrahedral and prismatic cells.

The largest spoiler geometry provided the best flowfield analytically (Figure 3) and was manufactured for flight testing. The flight test showed the spoiler to be partially successful in reducing the frequency of wing strikes. Follow-on studies are already in progress to identify means of further reducing the impact of the wing-tip-generated upwash on the retracting drogue. The use of MSRCs is saving program dollars by helping to identify causes of, and potential solutions to, a current operational problem that were not identifiable by other methods at hand.

Figure 3. Spoiler geometry that removed local upwash behind pod

CTA: Computational Fluid Dynamics
Computer Resources: SGI Origin 3000 [ARL, ERDC], IBM SP3 [MHPCC, ASC], and SGI Origin 2000 [ASC]


CFD Modeling of Antenna Structures Using Cobalt Internal Boundary Conditions
SUSAN POLSKY

Naval Air Warfare Center, Aircraft Division (NAWCAD), Patuxent River, MD

The objective of the Ship Aircraft Airwake Analysis for Enhanced Dynamic Interface (SAFEDI) Program is to develop computational fluid dynamics methods to predict the time-accurate airwake produced by Navy ships for application in manned and unmanned flight simulation. Wind tunnel tests have shown that “small” geometric details on ships can significantly affect the character of the airwake turbulence that aircraft must operate in. These “small” geometric features include the antenna arrays that are placed atop the ship’s superstructure. These arrays are geometrically very complex and present an extremely difficult gridding problem for both grid construction and required grid density.

Difficulties of Modeling Antenna Structures

Traditionally, geometric details of the scale of the antennas have simply been ignored because of the difficulties in modeling the true geometric detail of the structures. In this work, internal boundary conditions have been developed to model the antenna arrays on a sub-grid scale. The initial approach taken was to modify the momentum equations using a source term to retard the flow by removing momentum, while still allowing for a permeable surface. The “porosity” of the structures was specified as a drag coefficient in an input deck. However, it was found that determining the correct drag coefficient to use was more of an art than a science. Therefore, the current approach is to tag all the cells within the structures as solid walls. The areas where the antennas exist are specified in an input deck along with the general shape (i.e., cylinder, sphere, etc.).
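A minimal sketch of that cell-tagging idea is shown below; the input-deck entries, coordinates, and function name are hypothetical stand-ins for illustration, not Cobalt's actual implementation.

```python
import numpy as np

# Hypothetical "input deck" entries: each antenna is a primitive shape placed
# above the superstructure (positions and sizes are illustrative only).
antennas = [
    {"shape": "cylinder", "center": (105.0, 12.0), "radius": 0.4, "z": (30.0, 38.0)},
    {"shape": "sphere", "center": (110.0, -8.0, 33.0), "radius": 1.2},
]

def tag_solid_cells(cell_centers, antennas):
    """Mark every cell whose center falls inside any antenna primitive.

    cell_centers: (n_cells, 3) array of x, y, z coordinates.
    The returned boolean mask would then be handed to the flow solver, which
    treats the tagged cells as solid walls.
    """
    x, y, z = cell_centers.T
    solid = np.zeros(len(cell_centers), dtype=bool)
    for a in antennas:
        if a["shape"] == "cylinder":               # vertical-axis cylinder
            cx, cy = a["center"]
            z0, z1 = a["z"]
            inside = ((x - cx)**2 + (y - cy)**2 <= a["radius"]**2) & (z >= z0) & (z <= z1)
        else:                                      # sphere
            cx, cy, cz = a["center"]
            inside = (x - cx)**2 + (y - cy)**2 + (z - cz)**2 <= a["radius"]**2
        solid |= inside
    return solid
```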

Modeling Ship Air Wakes

Ship air wakes are inherently unsteady and must be modeled as such. In addition, the ships are very large, while details on the order of thousandths of inches must be modeled to properly resolve boundary layers. This requires large amounts of memory and large amounts of computer time.

Visualization of the resulting three-dimensional flow fields is an essential part of the development and analysis tasks related to ship air wake predictions. State-of-the-art visualization software is heavily relied on for all aspects of the program. In particular, animation of the time-varying solutions has provided insight into never-before-appreciated features of the air wake flow field.

Access to HPC resources has enabled the development of a new method of modeling fine geometric details for the prediction of ship air wakes.

Improving Manned and Unmanned Flight Simulation

The primary impact is significantly improving manned and unmanned flight simulation by providing realistic airwake models. Improved simulations can reduce operating costs by providing an alternative method for pilot training and dynamic interface envelope development. In addition, this work provides a unique opportunity to improve the general understanding of ship airwake aerodynamics.



Finally, the analysis techniques developed here can be used to aid in the design of the flight decks and islands of new ships, thereby addressing flight operations at the very initial stages of ship design.

An internal boundary condition to simulate the first-order effects of an antenna array on the flow field has been developed, implemented, and tested. The method imposes a solid-wall boundary condition within the antenna array. This approach avoids the need to determine the porosity (i.e., drag coefficient) of varied geometric shapes.

The several simple test cases that were run prove that the cells within the antenna array box are being found correctly, that the orientation of the antenna is the requested orientation, and that the momentum is being reduced by the presence of the antenna array boundary condition. The remaining challenges are to use the method for an actual antenna geometry (Figure 1) and to validate it against both CFD and, if possible, experimental data. For validation against CFD data, the actual antenna geometry will be gridded and run using the same CFD code, Cobalt. This process has already begun, and some resulting flow fields are shown in Figures 2 and 3. The method itself is still under development and may be modified based on results from the validation effort.

References
Polsky, S., “A Computational Study of Unsteady Ship Airwake,” AIAA Paper 2002-1022, January 2002.

Figure 3. Two planar cuts showing effects of internal boundary condition

Figure 2. Supporting grid structure and application of subscale boundary condition (colored by speed)

Figure 1. Actual antenna geometry

CTA: Computational Fluid Dynamics
Computer Resources: IBM SP3 [ASC]


Flow Control for Aircraft Weapons Bay Enhanced Acoustic Environment
DONALD RIZZETTA

Air Force Research Laboratory (AFRL), Wright-Patterson AFB, OH

High-speed flows over open cavities produce complex unsteady interactions, which are characterized by a severe acoustic environment. At flight conditions, such flowfields contain both broadband, small-scale fluctuations typical of turbulent shear layers and discrete resonances whose frequency and amplitude depend upon the cavity geometry and external flow conditions. While these phenomena are of fundamental physical interest, they also represent a number of significant concerns for aerospace applications.

Researchers at AFRL are studying these phenomena, hoping to improve aircraft weapons bay environments (as shown in Figure 1), where aerodynamic performance or stability may be adversely affected, structural loading may become excessive, and sensitive instrumentation may be damaged. Acoustic resonance can also pose a threat to the safe release and accurate delivery of weapons systems stored within the cavity.

Shortfalls of the Traditional Approach

Traditionally, solving the RANS equations has allowed simulation of the flow about aircraft weapons bay cavities. This approach precluded a precise description of the turbulent cavity flowfield, which consists of fine-scale fluid structures. The models that accounted for the effects of turbulence were unreliable and incapable of representing the high-frequency fluctuations of the fluid structures.

HPC resources, particularly large-scale computing platforms, are allowing the investigator to represent, with more accuracy, the fundamental properties of supersonic turbulent aircraft weapons bay cavities (Figures 2 and 3). In particular, the engineers can use large-eddy simulations to overcome the deficiencies of the RANS approach and the failure of traditional turbulence models.

Using Large-Eddy Simulations to Improve Acoustic Pressure Levels

Large-eddy simulations numerically generated the turbulent flowfields about weapons bay cavities. The ability of high-frequency forced mass injection to suppress cavity resonant acoustic modes was investigated. Scientific visualization was used extensively to clarify the physical characteristics of unsteady, three-dimensional turbulent cavity flowfields. The construction of videos made it possible to understand the basic mechanisms contributing to the suppression of resonant acoustic modes through forced mass injection.


Figure 1. Weapons bay cavity flowfield



It was found that the use of flow control appreciably reduced the amplitudes of acoustic pressure levels within the cavity.

Using HPC resources, the engineers numerically reproduced supersonic turbulent flow about an aircraft weapons bay both with and without flow control. Large-eddy simulations have made it possible to correctly describe the small-scale fluid structures that characterize turbulence. Pulsed mass injection was found to be an effective flow control mechanism for mitigating undesirable resonant acoustic modes. This considerably enhanced the acoustic environment within the weapons bay, thereby reducing the loading and potential damage to weapons systems, surrounding structure, and instrumentation.

References
Rizzetta, D.P. and Visbal, M.R., “Large-Eddy Simulation of Supersonic Cavity Flowfields,” AIAA Paper 2002-2853, 32nd AIAA Fluid Dynamics Conference, St. Louis, MO, June 24–26, 2002.

Figure 2. Instantaneous Mach number contours

Figure 3. Instantaneous vorticity contours

CTA: Computational Fluid Dynamics
Computer Resources: SGI Origin 3000 and Cray SV1 [NAVO] and IBM SP3 [ASC]


Using Strongly Coupled Fluid/Structure Interaction Code to Predict Fighter Aircraft Wing Response to Anti-Aircraft Artillery (AAA) Damage
RONALD HINRICHSEN

National Center for Supercomputing Applications, University of Illinois, Champaign-Urbana, IL
Sponsor: Joint Technical Coordinating Group for Aircraft Survivability


It is becoming clear that current static loading techniques for live-fire testing fail to accurately replicate the loads on aircraft that are damaged in flight. Vulnerability assessments based on such loading techniques may also, in turn, fall short of providing accurate and complete results. The good news is that advances in a proposed technique for dynamic live-fire ground testing may remedy the shortcomings of current static ground testing. High performance computers are being used to develop a methodology to predict the in-flight response of manned and unmanned air vehicles to hostile threats they may encounter during wartime operations. The methodology gives DoD a physics-based, high-fidelity tool for use in new aircraft design as well as in pretest prediction and dynamic test planning of ongoing and future live-fire test and evaluation.

Shortfalls of Static Ground Testing

In the past, only static predictions could be made using engineering-level tools, which neglected the influence of structural damage on the airflow over the aircraft. The main shortfall of static ground testing is that quasi-static techniques do not account for the changes in structural stiffness and mass that result from damage. As such, current loading methodologies fail to reconfigure correctly to represent in-flight loads. Ground loading methodologies also fail to consider damage-induced changes to the flutter envelope that can lead to premature failure.

Boundary Element Method Models Flow Fields

The methodology of this research was to use HPC resources to model a current fighter aircraft structure from the Lagrangian perspective, coupled with a boundary element method for modeling the flow field around the aircraft. The boundary element method used in this work is consistent with several “paneling methods” that are used for inviscid, incompressible flows. The wing model was initially loaded with the airloads and g-loads representing a given flight condition (Figures 1 and 2). Ten seconds into the simulation, a section of structure was instantaneously removed to simulate the damage resulting from a threat detonation or impact. The simulation was allowed to continue for another 10 seconds, and the resulting stresses and wing displacements were observed (Figure 3).

Simulations of this type require stepping in time with steps on the order of 0.1 microseconds; a 20-second simulation therefore requires 200 million steps. Without HPC resources, each simulation would have taken days to perform. With the multi-processor capability that HPC brings to the table, each simulation took hours to complete.

The new methodology has led to a new way of thinking about aircraft response to in-flight damage. By moving the simulated damage from one place to another, certain “vulnerable spots” were identified that were not previously anticipated.


For instance, at some flight conditions and store loadings, damage near the wingtip was seen to be more catastrophic than the same damage near the wing root.

Figure 1. Fighter aircraft model before damage

Figure 2. Von Mises stress contours before damage

New Methods Improve Predicted Structural Response

This research is significant because it points the way to an effective and efficient method for predicting aircraft response resulting from the damage caused by detonation or impact of a missile or an AAA threat. These results, in turn, will help provide a pathway for test engineers to determine the most appropriate test method, whether static, dynamic, or a combination of the two. Subsequent concept designs of dynamic ground tests may be based on improved predicted structural response. This work will be used in the future to assist live-fire test planners in designing and controlling dynamic ground tests to more realistically reproduce what actually happens in flight.

References
Monty A. Moshier, Ronald L. Hinrichsen, Gregory J. Czarnecki, David J. Barrett, and Michael R. Weisenbach, “Dynamic Loading Methodologies,” AIAA 2002-1492, 43rd

CTA: Computational Structural Mechanics
Computer Resources: Compaq ES40 and Compaq GS320 [ASC]

Figure 3. Von Mises stress contours immediately after damage


Modification of the F/A-18C Wing to Determine Effect of Wing Parameters on Abrupt Wing Stall
BRADFORD GREEN

Naval Air Systems Command (NAVAIR), Patuxent River, MD

JAMES OTT

Combustion Research and Flow Technology, Inc. (CRAFT), Dublin, PA

During the engineering, manufacturing, and development phase of the F/A-18E/F program, it was discovered that the F/A-18E (Figure 1) was susceptible to abrupt wing stall (AWS). Abrupt wing stall occurs when an aircraft undergoes a rapid, severe upper-surface flow separation. If this occurs asymmetrically, large rolling moments can be produced, which result in uncommanded lateral motion. To eliminate AWS on the F/A-18E, the flap schedule was modified and a porous door was added. However, the cause of the F/A-18E’s initial encounter with AWS is not well understood. As a result, the AWS program was formed to investigate the causes of AWS and to develop a set of design guidelines to help designers avoid AWS on future aircraft.

Abrupt wing stall has traditionally been addressed through expensive flight testing, relatively late in an aircraft development program. With advances in CFD, AWS can now be approached by using CFD to solve the RANS equations. However, solving these equations for this class of flow requires large memory resources and long computational times.

During the redesign of the F/A-18E from the F/A-18C, the wing camber, thickness, twist, leading-edge radius, and leading-edge flap-chord ratio were modified. In addition, a leading-edge snag was added to the wing. Since the F/A-18C does not experience AWS, these wing parameters are likely contributing to the AWS experienced by the F/A-18E. Using CFD, these wing parameters were studied independently and simultaneously to determine their effect on AWS and to develop a set of design guidelines.

The effects of these six wing parameters on AWS were analyzed using eleven different configurations. These included configurations in which one or more of the wing parameters were modified. For example, to determine the effect of twist on AWS, the twist was removed from the F/A-18C wing to form one of the configurations. In some cases, two wing parameters were modified simultaneously; for example, one configuration was obtained by adding a snag to the F/A-18C wing while removing the camber.

The eleven configurations were analyzed at two transonic Mach numbers and six different angles of attack (AoA), resulting in 132 different cases. As each case required approximately 6,000 hours of CPU time on an SGI Origin, extensive CPU time and computer resources were necessary to complete this task.
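The size of the resulting case matrix, and the CPU-hour total it implies, can be checked with a short enumeration; the Mach numbers and angles below are placeholders, not the study's actual flight conditions.

```python
import itertools

configurations = [f"config_{i}" for i in range(1, 12)]   # 11 wing configurations
mach_numbers = [0.8, 0.9]                                 # 2 transonic Mach numbers (placeholders)
angles_of_attack = [6, 7, 8, 9, 10, 11]                   # 6 AoA values (placeholders)

cases = list(itertools.product(configurations, mach_numbers, angles_of_attack))
print(len(cases))            # 11 * 2 * 6 = 132 cases
print(len(cases) * 6000)     # ~792,000 CPU hours at roughly 6,000 hours per case
```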


Figure 1. The F/A-18E Super Hornet, the Navy’s strike-fighter aircraft (DoD photo by Vernon Pugh)



Using the resources of the HPCMP, it was possible to analyze all 132 cases.

Although this research is ongoing, meaningful results have been obtained. The current results indicate that the addition of the snag and the reduction of the leading-edge flap-chord ratio could be contributing significantly to AWS on the F/A-18E. In Figure 2, the wing-root bending-moment coefficient is plotted as a function of AoA for the F/A-18C, the F/A-18C with a snag, the F/A-18C with a snag and reduced leading-edge flap-chord ratio, and the F/A-18E. AWS can occur when the slope of this curve changes sign. It can be seen from the figure that the addition of the snag and the reduction of the leading-edge flap-chord ratio on the F/A-18C move the wing-root bending-moment curve toward that of the F/A-18E.
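The slope-sign criterion described above lends itself to a simple screen over computed (or measured) bending-moment curves. The sketch below is purely illustrative, with hypothetical function and variable names rather than anything from the AWS program.

```python
import numpy as np

def aws_onset_candidates(aoa_deg, cbm_root):
    """Flag AoA intervals where the wing-root bending-moment slope changes sign.

    aoa_deg:  angles of attack, monotonically increasing
    cbm_root: wing-root bending-moment coefficient at each angle
    Returns (AoA_low, AoA_high) pairs bracketing each slope sign change,
    i.e., the intervals where abrupt wing stall could occur.
    """
    slopes = np.diff(cbm_root) / np.diff(aoa_deg)
    flips = np.where(np.sign(slopes[:-1]) != np.sign(slopes[1:]))[0]
    return [(aoa_deg[i], aoa_deg[i + 2]) for i in flips]
```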

The results of this research also indicate that the camber may be contributing to AWS on the F/A-18E, while the twist, thickness, and leading-edge radius do not appear to be impacting AWS. Although this is ongoing research, the team expects to finish the remaining cases expeditiously so that they can confirm their current observations and contribute to the elimination of AWS on the F/A-18E and other aircraft.

CTA: Computational Fluid Dynamics
Computer Resources: SGI Origin 2000/3000/3800 [ASC]

Figure 2. Wing-root bending-moment coefficient as a function of AoA for the F/A-18C, the F/A-18C with a snag, the F/A-18C with a snag and reduced leading-edge flap-chord ratio, and the F/A-18E


Detached-Eddy Simulation of Massively Separated Flows Over Aircraft
JAMES FORSYTHE AND SCOTT MORTON

United States Air Force Academy (USAFA), Colorado Springs, CO

KYLE SQUIRES

Arizona State University, Tempe, AZ

KENNETH WURTZLER, WILLIAM STRANG, AND ROBERT TOMARO

Cobalt Solutions, LLC, Dayton, OH

PHILIPPE SPALART

Boeing Commercial Airplanes, Seattle, WA

Currently, fighter development is under close scrutiny due to ever-increasing acquisition costs. The Joint Strike Fighter program made low cost a primary requirement in its acquisition strategy. Unfortunately, there is little confidence that CFD modeling for full aircraft is accurate under massive separation, limiting its usefulness.

Reynolds-Averaged Navier-Stokes Deficiencies

While advances have taken place in areas such as grid generation and fast algorithms for the solution of systems of equations, CFD has remained limited as a reliable tool for predicting inherently unsteady flows at flight Reynolds numbers. Current engineering approaches to predicting unsteady flows are based on solution of the RANS equations. The turbulence models used in RANS methods model the entire spectrum of turbulent motions. While often adequate in steady flows with no regions of reversed flow, or possibly exhibiting shallow separations, RANS turbulence models appear unable to accurately predict the phenomena dominating flows characterized by massive separations. Unsteady, massively separated flows are characterized by geometry-dependent, three-dimensional turbulent eddies. These eddies, arguably, are what defeat RANS turbulence models of any complexity.

To overcome the deficiencies of RANS models for predicting massively separated flows, Detached-Eddy Simulation (DES) was proposed with the objective of developing a numerically feasible and accurate approach combining the most favorable elements of RANS models and LES. The primary advantage of DES is that it can be applied at high Reynolds numbers, as can Reynolds-averaged techniques, but it also resolves geometry-dependent, unsteady three-dimensional turbulent motions, as in LES.
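The switch between the two behaviors comes down to the model's length scale. A minimal sketch of that idea, following the original Spalart-Allmaras-based DES formulation (the constant value and function name below are illustrative, not taken from this project's code), is:

```python
C_DES = 0.65  # calibration constant of the original S-A-based DES formulation

def des_length_scale(wall_distance, dx, dy, dz):
    """DES length scale: RANS behavior near walls, LES-like behavior away from them.

    Close to a wall, wall_distance is smaller than C_DES * (local grid size) and
    the standard RANS length scale is retained; far from walls the grid spacing
    takes over and the model acts as a subgrid-scale model, as in LES.
    """
    delta = max(dx, dy, dz)              # largest local cell dimension
    return min(wall_distance, C_DES * delta)
```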

Testing Detached-Eddy Simulation

Two geometrically simple cases were examined to test the method and build confidence for full aircraft simulation: a delta wing at 27° angle of attack exhibiting vortex breakdown, and a forebody at 90° angle of attack. Although the geometries are simple, the resulting flows are quite complex. In fact, accurately predicting delta wing vortex breakdown at high Reynolds number has challenged researchers for decades. These applications were then used to build confidence in the full aircraft simulations (F-16, F-15E) that have been performed. The computations were performed using the unstructured-grid solver Cobalt.




Cobalt is the commercial version of the Common High Performance Computing Software Support Initiative code, Cobalt60. The grids used in the calculations of all the geometries summarized below are unstructured, comprised of a combination of prisms, pyramids, and tetrahedra.

The forebody case models the flow at 90° angle of attack around a rectangular ogive whose main cross-section is a round-cornered square, as shown in Figure 1. The Reynolds number based on the width of the body is 2.1 x 10^6. Grid sizes for the computations performed to date have ranged from 4.2 x 10^6 to 5.2 x 10^6 cells. Visualizations of the pressure distribution on the surface and the instantaneous vorticity at eight stations along the body are shown in Figure 1. Predictions obtained using both DES and RANS (using the Spalart-Allmaras one-equation model) are shown. Evident in the surface pressure prediction from the RANS computation is that, along the forebody, the separated structure is more coherent than that obtained in the DES, with a low-pressure footprint evident along most of the forebody. The DES result is characterized by a more chaotic and three-dimensional structure over most of the body, resulting in a reasonably constant pressure region along the top (aft) surface (c.f., Figure 2). Time-averaged pressure distributions were measured in experiments at eight axial stations, corresponding to the eight locations at which vorticity contours are shown in Figure 1. Compared in Figure 2 is the pressure coefficient at 11% of the forebody length. The angle θ = 0° is in the symmetry plane on the windward side, with θ = 180° in the leeward-side symmetry plane. As shown in the figure, the strong coherent vortices predicted in the RANS solution give rise to a large variation in pressure on the leeward side that differs markedly from the experimental measurements. The largest differences in the RANS predictions occur closest to the tip of the forebody, resulting in large errors in moment predictions. The DES prediction of the pressure coefficient, on the other hand, is in excellent agreement with the measurements, a result of the more accurate resolution of the unsteady shedding that yields a flat pressure profile on the leeward side.

The 70° delta wing at 27° angle of attack has also been calculated. A comprehensive grid-refinement study was performed on a half-span model with the number of cells ranging from 1.2 x 10^6 to 10.5 x 10^6. The finest-grid results are shown in Figure 3. Numerous flow features are clearly resolved, including shear-layer instabilities prior to vortex breakdown, reverse flow near vortex breakdown, post-breakdown windings, and vortex shedding off the blunt trailing edge. Resolved turbulent kinetic energy along a line through the vortex core is plotted in Figure 4. DES possesses the same characteristic as LES, in that a wider range of turbulent length scales is resolved as the grid density is increased. The finest grid matches the peak turbulent kinetic energy measured experimentally. Other parameters, such as the velocity in the core of the vortex and the breakdown location, were in excellent agreement with experiments. Accurately predicting turbulent kinetic energy levels provides increased confidence that DES will be able to predict the unsteady loads on an aircraft due to vortex breakdown (e.g., the F-18 with tail buffet).

Figure 2. Pressure distribution around the rectangular-ogive forebody. Comparison of DES and RANS to experimental measurements at station 3 (c.f., Figure 1).

Figure 1. RANS (left) and DES (right) predictions of the flow around the rectangular-ogive forebody. The freestream flow is at 90° angle of attack (from top to bottom in the visualizations). Surface colored by pressure, with vorticity contours at eight stations. The locations of the eight stations are the same as those at which experimental measurements of the pressure distribution were available.


Key Advances in DES

Work to date has provided key advances in the application, assessment, and improvement of DES. Computation of building-block flows such as two- and three-dimensional forebodies and delta wings has established a foundation for the resolution of several issues important to using DES to predict the flow field around full aircraft, helping to provide guidance on grid-generation aspects, turbulence treatments, and numerical parameters. The subsequent application of DES to the F-16 and F-15E has further enhanced the confidence level, with lift and drag predictions that are accurate compared to flight test data. These developments will soon provide aircraft designers with a powerful tool for the prediction of massively separated flows over complex configurations at flight conditions. The availability of high performance computing continues to accelerate the development of this powerful technique. DES applied today using HPC resources is making more “routine” the full-configuration prediction of complex geometries at flight conditions. Accuracy in lift, drag, and moment predictions around the F-15E, for example, is within 5% of flight test data. While already relatively accurate, increases in HPC capacity will enable efficient simulation using substantially larger grids. DES flow fields will offer richer detail on denser grids, with associated increases anticipated in overall accuracy. Increases in HPC capacity, coupled with the emergence of more efficient and accurate algorithms, will soon enable prediction of additional physical effects such as fluid-structure interactions.

Figure 3. Flow over the delta wing at 27° angle of attack. Isosurface of vorticity.

References
Squires, K.D., Forsythe, J.R., and Spalart, P.R., “Detached-Eddy Simulation of the Separated Flow Around a Forebody Cross-Section,” Direct and Large-Eddy Simulation IV, ERCOFTAC Series, Volume 8, B.J. Geurts, R. Friedrich, and O. Metais, editors, Kluwer Academic Press, pp. 481–500.

Forsythe, J.R., Hoffmann, K.A., and Squires, K.D., “Detached-Eddy Simulation with Compressibility Corrections Applied to a Supersonic Axisymmetric Base Flow,” AIAA 02-0586, January 2002.

Forsythe, J.R., Squires, K.D., Wurtzler, K.E., and Spalart, P.R., “Detached-Eddy Simulation of Fighter Aircraft at High Alpha,” AIAA 02-0591, January 2002.

Squires, K.D., Forsythe, J.R., Morton, S.A., Strang, W.Z., Wurtzler, K.E., Tomaro, R.F., Grismer, M.J., and Spalart, P.R., “Progress on Detached-Eddy Simulation of Massively Separated Flows,” AIAA 02-1021, January 2002.

Morton, S.A., Forsythe, J.R., Mitchell, A., and Hajek, D., “DES and RANS Simulations of Delta Wing Vortical Flows,” AIAA 02-0587, January 2002.

Figure 4. Resolved turbulent kinetic energy in the core of the vortex on the delta wing

CTA: Computational Fluid Dynamics
Computer Resources: Cray T3E [AHPCRC]


Radar Signature Database for Low Observable Engine Duct Designs
KUEICHIEN HILL

Air Force Research Laboratory (AFRL), Wright-Patterson AFB, OH

SUNIL NAVALE AND MAURICE SANCER

Northrop Grumman Corporation, Wright-Patterson AFB, OH

Modern and future aircraft, such as the F-117 (Figure 1), B-2, F-22, and Joint Strike Fighter (JSF), all require low observability (LO) to gain air superiority and increase survivability. This significantly increases the development costs of advanced aircraft. Fortunately, the use of HPC-enabled Computational Electromagnetics (CEM) tools is reducing LO vehicle design costs.

A team of scientists and engineers led by Dr. Kueichien Hill of AFRL, other DoD agencies, and Drs. Navale and Sancer of Northrop Grumman Integrated Systems is developing new methods to compute accurate radar signatures for a large class of inlet and exhaust ducts with LO features. Engine inlet and exhaust ducts are contributors to the overall air vehicle signature. In addition to real-world geometries for explicit aircraft, ducts with generic material treatments and shapes were used to generate a radar signature database.

Lowering the Costs of Modeling and Testing

Until very recently, the design of low radar cross-section vehicles relied almost exclusively on costly experiments, with some support from computational electromagnetics. While first-principle computer codes satisfy all LO requirements, they involve the solution of large linear systems of equations, the order of which increases with the cube of the radar frequency being studied and with the complexity of the modeled target. The increasing complexity of modern LO weapon systems, coupled with the higher frequencies of interest, has meant that such simulations are too computationally intensive for even the most advanced desktop or serial computers. Massively parallel supercomputers such as those provided by the HPCMP must be used to accurately predict the radar cross section of real targets at higher frequencies. With the use of HPC resources, several large test models (many of which cost about 10 million dollars) could be eliminated. Not only could model cost be reduced, but the associated test cost (on the order of $15,000 per measurement shift) could also be reduced.
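As a rough, illustrative check of that cubic growth (not a calculation from the project itself): a volume discretization needs a fixed number of cells per wavelength in each of three dimensions, so the number of unknowns scales roughly with the cube of frequency.

```python
def relative_problem_size(frequency_ratio):
    """Approximate growth in the number of unknowns when the radar frequency
    is increased by frequency_ratio, assuming a fixed number of cells per
    wavelength in each of three spatial dimensions."""
    return frequency_ratio ** 3

print(relative_problem_size(2.0))    # doubling the frequency -> ~8x the unknowns
print(relative_problem_size(10.0))   # 10x the frequency -> ~1,000x the unknowns
```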


Figure 1. US Air Force ground crew members do a final inspection of an F-117A Nighthawk aircraft before it departs for Kuwait from Holloman Air Force Base, NM. (DoD photo by Senior Airman Roberta McDonald, US Air Force)



Figure 2. Cascade procedure for duct of length 25.8λ

CEM Tools ConsiderationsIn order to be useful for LO designs, CEM tools haveto satisfy three LO imposed CEM requirements:accuracy, material modeling, and scalability. ManyCEM codes do not satisfy all three LO requirements.CEM codes, based on approximate asymptotic high

frequency techniques such as the XPATCH code(developed by the Automatic Target Recognition[ATR] community) cannot satisfy the LO accuracyand material modeling requirements. The presenceof LO material treatments and the close coupling ofscattering features violate the fundamentalassumptions of the asymptotic techniques. CEMcodes based on the finite volume/difference timedomain techniques also cannot satisfy the LOaccuracy requirement. They employ an approximateradiation boundary condition, which causes a non-physical reflection that contaminates the solution.

Developing a Novel ApproachTo satisfy all three LO requirements, NorthropGrumman developed a first principle-based (solvingMaxwell’s equations exactly without making limitingassumptions) hybrid finite element/method ofmoments (FEM/MoM) code. The finite elementportion of the code allows flexibility in materialmodeling and the moment method portion of thecode generates accurate solutions using an exactradiation boundary condition. Under the support ofthe HPCMP’s CHSSI CEA-3 project (Maturation of aLO Component Design Code), this code evolved intoa robust, portable, and scalable code that was usedto generate the radar signature database for engineinlet and exhaust ducts.

A new modular duct cascading technique wasemployed to efficiently generate the radar signature

database, especially for higher frequency bands.This technique allowed the doubling of the ductlength without repeated computations. This level ofproblem size and accuracy had never beendemonstrated before and was only possible becauseof HPC resources. Simulations were performed withand without the newly discovered technique tocorroborate the accuracy of the new technique andto demonstrate the several orders of magnitudesavings in run time andmemory requirement.

Influencing the Design of Engine Ducts
Dr. Hill's team's work has influenced the design of engine ducts in three different areas.

First, the modular duct cascading technique reduced the simulation time by several orders of magnitude and produced a more accurate solution by reducing the size of the finite element system. Such a reduction is of paramount importance when modeling very large targets, as the finite element technique is susceptible to accumulation of numerical errors. Figures 1, 2, and 3 illustrate the impact of the modular cascading technique on duct design. Figure 1 shows the radar cross section computed using the cascading technique for a duct of 13 wavelengths. The excellent agreement between the cascading technique and the standard FEM/MoM technique in Figure 1 is lost in Figure 2, which illustrates the same computations but for a longer duct of 25.8 wavelengths.

Figure 1. Cascade procedure for duct of length 13λ


Figure 3. Reduced computational requirement of the innovative "cascading" technique. Computational requirement comparison of standard FEM/MoM and cascading techniques.

This disagreement is due to the numerical errors that accumulate in the standard FEM/MoM, which are resolved in the cascading technique by modeling only a small section of the duct, performing computations on the small section, and cascading those computations to produce the actual target. Such a cascading procedure yields greater accuracy and saves computation time, as seen in Figure 3.

The second major impact was in the generation of radar images for duct configurations. Such radar images are crucial in understanding the phenomenology of radar scattering from engine and exhaust ducts. Figure 4 shows typical radar images that were generated on the ARL's Origin 3000.

Thirdly, the work influenced the simulation of future air vehicles, including the engine and exhaust ducts. Radar cross-section computations for the Defense Advanced Research Projects Agency's (DARPA) future strike aircraft were accomplished in 3.5 days on 121 processors of the Origin 3000 at ARL. Such a computation would have been impossible without the use of HPCMP resources.

References
"Accurate Large-Scale CEM Modeling Using Hybrid FEM/MoM Technique," Sunil Navale, Yan-Chow Ma, and Maurice Sancer, Northrop Grumman Corporation, and Kueichien Hill, Air Force Research Laboratory, Proceedings of the 2001 IEEE Antennas and Propagation Society International Symposium and USNC/URSI National Radio Science Meeting, Boston, MA, ISBN 0-7803-7072-4, July 8–13, 2001.

"Accurate Large-Scale CEM Modeling Using Hybrid FEM/MoM Technique," Kueichien Hill, Air Force Research Laboratory, and Sunil Navale, Yan-Chow Ma, and Maurice Sancer, Northrop Grumman Corporation, 2001 HPC User Group Conference, Biloxi, MS, June 21, 2001.

Figure 4. Downrange and cross-range image of a generic duct at very high frequencies (center frequency 0.15 GHz)

CTA: Computational Electromagnetics and Acoustics
Computer Resources: SGI Origin 3000 [ARL] and IBM SP P3 [ASC]

Figure 4 annotations: low-resolution downrange image (left) and high-resolution image (above); the duct, with the "pug" indicated in outline, was imaged over a ±48 degree sector in 5 degree steps. Generation of this image (no coating) at 0.15 GHz took 40 hours on the ARL Origin 3000 (32 400-MHz processors); generation of a SAR image for a realistic duct at X-band will require an order of magnitude more resources.


Scalable Pre- and Post-Processing Approaches for Large Scale Computational Electromagnetic Simulations
ERIC MARK, JERRY CLARKE, RAJU NAMBURU, AND CALVIN LE

Army Research Laboratory (ARL), Aberdeen Proving Ground, MD

JOHN NEHERBASS

Ohio State University, Columbus, OH

The overall objective of this research is to develop and demonstrate a scalable computational electromagnetic software environment to address accurate prediction of radar cross sections (RCS) for full range armored vehicles with realistic material treatments and complex geometric configurations. A software environment consisting of scalable pre-processing, post-processing, and accurate finite-difference time-domain software was developed, which demonstrated a significant reduction in overall simulation time for large-scale military applications. Computational electromagnetic simulations of full range military vehicles play a critical role in enhancing the design and performance of combat systems, including the Army's FCS. This high fidelity software environment can also address wide band communications applications for DoD.

Developing Scalable Software Tools
Traditional serial pre- and post-processing approaches cannot generate the desired meshes. They are also inefficient when used to visualize very large-scale, high fidelity electromagnetic simulations. To overcome this problem, a team of scientists at the ARL developed a suite of scalable software tools using the Interdisciplinary Computing Environment (ICE). ICE was developed at ARL for generating desirable meshes from the available computer aided design (CAD) geometries and/or surface models. For computational approaches such as Method of Moments (MoM) and Finite Difference Time Domain (FDTD) codes, simple surface model information is not sufficient. These approaches require volumetric information or at least appropriate thickness. In addition to generating desirable meshes on scalable systems, this technique automatically decomposes the domain for efficient load balancing. In addition, these software tools provide run time visualization of very large-scale data from multiple processors.

Applicability of the approach was demonstrated by simulating a full-scale ground vehicle in free space at X-band using scalable finite difference time domain software. Simulations at X-band using a finite difference time domain approach require 2.56 billion cells and 480 processors of the IBM SP3. Numerical simulations show good qualitative agreement with experimental results. Figure 1 illustrates the capability of modeling a very large-scale application.
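A rough sizing sketch shows why X-band FDTD of a full vehicle reaches billions of cells and how that load might be spread across processors. The vehicle dimensions, padding, and cells-per-wavelength below are illustrative assumptions, not values from the project, so the totals only approximate the figures quoted above.

```python
# Back-of-envelope FDTD sizing; all geometry and resolution values are assumed.
C0 = 3.0e8                                  # speed of light, m/s

def fdtd_cell_count(freq_hz, dims_m, cells_per_wavelength=10):
    """Estimate the number of uniform cubic FDTD cells needed to grid a box
    of size dims_m (including free-space padding) at the given frequency."""
    dx = (C0 / freq_hz) / cells_per_wavelength
    nx, ny, nz = (int(d / dx) + 1 for d in dims_m)
    return nx * ny * nz

# Assumed ~10 m x 4 m x 3 m vehicle plus padding, at 8 GHz (low X-band)
cells = fdtd_cell_count(8.0e9, (10.0, 4.0, 3.0))
processors = 480
print(f"~{cells/1e9:.2f} billion cells, "
      f"~{cells/processors/1e6:.1f} million cells per processor")
```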


Figure 1. ZSU 23-4 combat vehicle subjected to an electromagnetic pulse at X-band


Software tools developed under this research, along with different scalable applications software, create a virtual design tool for addressing large-scale practical applications. This technology will assist not only in optimizing systems performance but also in performing trade-off studies for the competing requirements in the design of FCS.

References
Eric Mark, Raju Namburu, and Jerry Clarke, "Computational Environment for Large-Scale Grid Based Computational Electromagnetic Simulations," Ground Vehicle Survivability Symposium, sponsored by US Army TARDEC, April 9–11, 2002.

Jerry Clarke and Raju Namburu, "The eXtensible Data Model and Format: a High Performance Data Hub for Connecting Parallel Codes and Tools," ARL-TR-2552, July 2001.

Raju Namburu, Eric Mark, and Jerry Clarke, "Computational Electromagnetic Methodologies for Combat System Applications," 22nd Army Science Conference, Orlando, FL, December 2–5, 2002.

CTA: Computational Electromagnetics and Acoustics
Computer Resources: IBM SP3 and SGI Origin 3000 [ARL]


Unsteady RANS Simulation for Surface Ship Maneuvering and Seakeeping
JOSEPH GORSKI

Naval Surface Warfare Center, Carderock Division (NSWCCD), West Bethesda, MD

MARK HYMAN

Naval Surface Warfare Center Coastal Systems Station (CSS), Panama City, FL

LAFE TAYLOR

Mississippi State University, Starkville, MS

ROBERT WILSON

The University of Iowa, Iowa City, IA

The Chief of Naval Operations noted in his Top Five Priorities: Future Readiness:

"The DD(X) program literally outlines what the ship-borne Navy is going to be like in the next 30 years. It's not just a destroyer that's going to support the United States Marine Corps with a precision gun that is going to fire a round from 75 to 100 miles… It is about spiraling that technology off into the missile defense cruiser of the future. And it is about creating what I believe is one of our most pressing requirements, a littoral combatant ship that is going to operate in the near-land areas and address real threats in the future in near-land anti-submarine warfare and in mine warfare…"

As today's Navy continues its transformation, the Naval ships of the future will be fundamentally different from those currently in the fleet in order to meet evolving missions and related operational requirements and to accommodate emerging technologies such as electric drive as the main propulsion system. Such missions include increased littoral operations that require unprecedented ship signature reduction. Because these ships will be radically different from existing designs, there is no historical database to benchmark new designs against. Additionally, the seakeeping and maneuvering behavior of these new ships is significantly different from that of conventional naval ships. The US Navy needs a way to analyze these new hull forms, their flow physics, and associated operating behavior to properly design the next generation of naval combatants.

Surface ship design has traditionally been developed by the build-and-test approach. Stealth has not been part of the major performance requirements. Maneuvering and seakeeping have largely been addressed using extensive experiments. Historically, simple analysis methods are used to evaluate a given ship under a large range of operating conditions, but these existing codes rely heavily on experimental empiricism. Consequently, the simple analysis methods often used are suspect when applied to an unconventional shape without an existing experimental database. This limits the applicability of such techniques for future designs.

Through HPC resources it is becoming possible to predict flow phenomena that are related to hydrodynamic seakeeping and maneuvering analysis by solving the RANS equations rather than by simple strip theory approaches. For example, the computation of Model 5415, a conventional surface combatant, with rudders, propeller shafts, support struts, and rotating propellers at 436 revolutions per minute


(RPM), required over 5.7 million nodes, 7.7 million prisms, and 9.6 million tetrahedra for its unstructured mesh. This calculation took 48 hours on 75 IBM SP3 processors with 512 MB of memory each (3,600 processor-hours) for each propeller revolution. Using HPC environments, engineers and scientists can model more realistic physics as part of the computation, as shown in Figure 1. The Navy can now use a computational-based approach to address new designs. It is only with the large parallel HPC resources available in the last several years that such computations could be done with sufficient accuracy that a computational-based approach is feasible.

Enabling the First Nonlinear Free Surface Simulation for a Fully-Appended Combatant Hull Form
HPC resources enabled the team to perform the first nonlinear free surface simulation with a surface tracking approach for a fully-appended conventional combatant hull form. The hull form was similar to DDG-51, with rudders, propeller shafts, support struts, and rotating propellers. The simulation was performed at model scale for steady ahead motion in calm water. Such calculations allow the whole flow field to be studied, unlike traditional experimental data, which supplies limited amounts of information. This can lead to better understanding of the flow physics affecting maneuvering and seakeeping. Additionally, because simpler empirical-based methods will continue to be needed for a considerable time, the community is evaluating using the more complicated physics-based calculations, where new hull forms can be analyzed with a limited number of highly complex computations. This can provide the necessary empirical database, such as for roll damping models, for the simpler codes. This, in turn, will reduce the need for building and testing new models.

The Navy team has had promising successes using the physics-based calculations in their research. Computations have been done for increasingly complex bodies. Bilge-keel forces on a three-dimensional rolling cylinder were accurately predicted. Detailed flow and force characteristics were calculated for an unappended naval combatant hull in prescribed roll motions. Initial calculations for a fully-appended combatant hull gave good qualitative predictions for surface pressure and free-surface elevation (Figure 2). Calculations for a rudder-induced turn led to evaluation of improved methods for representing a free surface.

Future Challenges to Increase Insight
Future work in this area, while challenging, will vastly expand the Navy's insight into the transformational technologies Admiral Clark envisioned. A surface-capturing technique appears very promising in terms of robustness. A rudder-induced maneuver with the surface-capturing method will be computed for the fully appended Model 5415. Integration of the computed viscous stresses and pressure distribution on this configuration will provide the hydrodynamic forces and moments acting on the ship. Integration of the 6-Degrees of Freedom (DOF) equations using these forces and moments will yield the time evolution of the ship's velocity and rotation rate. The sensitivity of the computed trajectory with respect to the free surface will also be examined by comparing this maneuvering solution to the one obtained at Fr=0. Future efforts will also include unsteady simulations of Model 5415 in incident waves with both forced and free roll motions. Initial free roll simulations will include prediction of roll decay after the model is released from some initial angular displacement at t=0 and will allow for comparison with experimental measurements.

Figure 1. Wave elevation and surface pressure distribution on Naval surface combatant Model 5415

Figure 2. Time sequence of axial vorticity contours for surface combatant simulations with prescribed roll motion using CFDSHIP-IOWA


Planned simulations for Model 5415 with forced and free roll motions will be considerably more complex and computationally demanding. The most difficult challenge to address is computing the complexity of the air/water interface that is necessary for accurately predicting ship motions. This will require the ability to handle breaking waves and other complexities such as water on deck.
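The 6-DOF integration described above amounts to updating the ship's velocity and rotation rate from the forces and moments returned by the flow solver at each time step. The sketch below shows that bookkeeping under simple assumptions (an explicit-Euler update and caller-supplied forces); it is illustrative only, not the project's actual time-integration scheme or data.

```python
import numpy as np

def step_6dof(v, omega, force, moment, mass, inertia, dt):
    """One explicit-Euler update of translational velocity v and body-frame
    rotation rate omega, given the hydrodynamic force and moment integrated
    from the flow solution. inertia is the 3x3 inertia tensor."""
    v_new = v + dt * force / mass
    # Euler's equations: I * d(omega)/dt = M - omega x (I * omega)
    omega_dot = np.linalg.solve(inertia, moment - np.cross(omega, inertia @ omega))
    return v_new, omega + dt * omega_dot

# Illustrative values only (not Model 5415 data)
v, omega = np.zeros(3), np.zeros(3)
mass, inertia = 550.0, np.diag([40.0, 1900.0, 1900.0])
for _ in range(100):
    force = np.array([12.0, 0.0, 0.0])    # would come from the RANS solver
    moment = np.array([0.0, 0.0, 3.0])
    v, omega = step_6dof(v, omega, force, moment, mass, inertia, dt=0.01)
print(v, omega)
```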

References
Kim, K-H., "Unsteady RANS Simulation for Surface Ship Dynamics," presented at the DoD HPCMP Users Group Conference, Biloxi, MS, June 2001.

Kim, K-H., "Simulation of Surface Ship Dynamics Using Unsteady RANS Codes," presented at the NATO RTO Advanced Vehicle Technology Panel Spring 2002 Meeting, Paris, France, April 2002.

CTA: Computational Fluid Dynamics
Computer Resources: Cray T3E [ARSC, NAVO], IBM SP3 [MHPCC], and SGI Origin 3000 [ARL, NAVO]


Computational Structural Mechanics Provides a New Way to "Test" Behind-Armor Debris
ANAND PRAKASH

Army Research Laboratory (ARL), Aberdeen Proving Ground, MD

The Survivability/Lethality Analysis Directorate of the ARL notes, "In survivability analysis, the ability to prevent or mitigate personnel casualties and component or system destruction is directly related to the ability to understand and analyze the process of how damage occurs.

Behind-armor debris (BAD) effects are the result of a combination of stochastic events: fragment masses, shapes, and velocities, as well as the shape and dispersion of the debris cone, depend upon:

(1) armor composition
(2) munition type
(3) attack angle (obliquity)

Significant reductions in debris amount and effects will increase crew survivability chances and preserve the system and its subcomponents. By doing this, we better the chance of system survivability and afford it the opportunity to complete its intended mission."1

BAD plays a critical role in the evaluation of both the survivability of crew and components in combat vehicles and the lethality of such weapon systems. The US ARL generates BAD data at its experimental research facilities to determine the debris characteristics for input into survivability analysis codes. Thus, BAD data is a critical input to an analysis that plays a central role in every major Army acquisition program, including the FCS, the Interim Armored Vehicle, and the "Stryker" (Figure 1), a highly deployable wheeled armored vehicle which will be the combat vehicle of choice for the Army's Interim Brigade Combat Teams (IBCTs).

Using Physics Based Modeling to Collect BAD Data
Supplementing this experimental capability for armor testing is an important goal. Dr. Anand Prakash of the US Army Research Laboratory is developing a new capability to obtain BAD characteristics by using physics-based modeling to conduct 3-D computer simulations. The expected advantage of this approach is threefold. First, it will reduce costs by reducing the number of experiments. Second, it will provide a means to conduct survivability analyses involving new armors before they are actually manufactured. Third, it is capable of providing physical details of BAD (e.g., velocity vectors of fragments and the ability to distinguish the trajectories of penetrator debris from armor debris) that are difficult to obtain experimentally. Three-dimensional simulations of BAD were conducted with the CTH code on the supercomputers at the ARL MSRC. CTH is an Eulerian wave code that solves the equations of continuum mechanics over a spatial mesh in small time steps. The material properties are provided as input. Dr. Prakash used the Johnson-Cook model of plasticity and obtained BAD patterns for kinetic energy (KE) long rods at normal impact on rolled homogeneous armor (RHA) of finite thickness. Wave codes like CTH are suitable for obtaining the net result of complex wave propagation, reflection, and interference phenomena. However, the high resolution necessary to keep track of behind-armor debris in three dimensions required a mesh with 6 million cells and 96 nodes, with 200 CPU hours per node for a complete simulation. Without the modern HPC resources of the MSRC, it would not have been feasible to conduct such a simulation. These simulations have provided new information on the spatial and velocity distribution of BAD fragments.
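The Johnson-Cook plasticity model mentioned above expresses a material's flow stress as a function of plastic strain, strain rate, and temperature. A minimal sketch of the relation is below; the constants are generic steel-like values assumed for illustration, not the calibration used in the study, and the actual CTH input format is not shown.

```python
import math

def johnson_cook_flow_stress(strain, strain_rate, temp_K,
                             A, B, n, C, m,
                             ref_rate=1.0, T_room=293.0, T_melt=1793.0):
    """Johnson-Cook flow stress (Pa):
    sigma = (A + B*eps^n) * (1 + C*ln(eps_dot/eps_dot0)) * (1 - T*^m),
    where T* is the homologous temperature."""
    t_star = min(max((temp_K - T_room) / (T_melt - T_room), 0.0), 1.0)
    return ((A + B * strain ** n)
            * (1.0 + C * math.log(max(strain_rate / ref_rate, 1e-12)))
            * (1.0 - t_star ** m))

# Assumed, illustrative constants for a high-strength steel
sigma = johnson_cook_flow_stress(strain=0.2, strain_rate=1.0e5, temp_K=600.0,
                                 A=792e6, B=510e6, n=0.26, C=0.014, m=1.03)
print(f"flow stress ~ {sigma / 1e6:.0f} MPa")
```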

Figure 1. A Stryker Infantry Carrier Vehicle rolls off a C-130 at Fort Irwin, CA, after being transported there for a National Training Center exercise (Photo from US Army)


1 http://www.arl.army.mil/slad/Services/BAD.html


Capturing the Results
Figure 2 shows the cross-sectional view of an early stage of a 3-D simulation in which a rod 12.45 cm in length and 0.4 cm in diameter strikes a steel armor plate 11.45 cm thick with a velocity of 1.6 km/s normal to the plate. A thin witness plate is placed 11.45 cm behind the armor plate. After the rod perforates the armor plate, the behind-armor debris strikes the witness plate. Figure 3 shows the BAD distribution on the witness plate obtained by simulation. The spatial and time distributions of various physical quantities associated with BAD were also obtained. For example, Figure 4 shows the distribution of fragment velocities as a function of radial distance from the point of intersection of the line of impact with the back surface of the target plate. This is an important new result, as there is at present no practical way to obtain the velocity distribution of fragments by direct experimental measurement. Until now, survivability analyses simply made assumptions regarding this distribution.

Figure 2. The simulated impact of a long rod on a thick armor plate to determine the characteristics of behind-armor debris (BAD)

Impacting the Future
The team is now performing a validation study, comparing the results of this simulation approach to actual experimental data. When this is completed, together with application of the approach to further weapon/target conditions, the Department will have a powerful and flexible new capability to improve the timeliness, cost, and accuracy of vulnerability/lethality analysis products for system developers and the Army's independent evaluators and decision makers.

References
Prakash, A., in Proceedings of the 12th Annual US Army Ground Vehicle Survivability Symposium, Monterey, CA, March 26–29, 2001.

Prakash, A., in Proceedings of the 13th Annual US Army Ground Vehicle Survivability Symposium, Monterey, CA, April 8–11, 2002.

Prakash, A., in 20th International Ballistics Symposium, Orlando, FL, Vol. 2, pp. 1066–1070, September 23–27, 2002.

Figure 3. Pattern of behind-armor debris captured on a thin witness plate placed behind the target plate, as obtained by a 3-D simulation

Figure 4. Fragment velocity (component in the direction of initial rod velocity) as a function of radial distance of origination from the impact axis

CTA: Computational Fluid Dynamics
Computer Resources: SGI Origin 3000 [ARL]


Airdrop Systems Modeling: Fluid-Structure Interaction Applications
RICHARD BENNEY

US Army Soldier and Biological Chemical Command (SBCCOM), Natick, MA

KEITH STEIN

Bethel College, St Paul, MN

TAYFUN TEZDUYAR

Rice University, Houston, TX

MICHAEL ACCORSI

University of Connecticut, Storrs, CT

HAMID JOHARI

Worcester Polytechnic Institute, Worcester, MA

Airdrop technology is a vital DoD capability for the rapid deployment of warfighters, ammunition, equipment, and supplies. In addition, the demand for airdrop of food, medical supplies, and shelters for humanitarian relief efforts has increased significantly in recent years. Airdrop systems development, which traditionally relies heavily on time-consuming and costly full-scale testing, is leaning increasingly on modeling and simulation. Government and academic researchers, who have teamed to make computational modeling a valuable tool for a broad range of airdrop applications, are addressing a variety of challenges. HPC capabilities are being used to numerically model parachutes and airdrop system performance. These capabilities are reducing the time and cost of full-scale testing, minimizing the life cycle costs of airdrop systems, assisting in the optimization of new airdrop capabilities, and providing an airdrop virtual proving ground environment.

Modeling Fluid Structure Interaction Behaviors
HPC modeling methods are being developed that can be applied to many airdrop systems problems including aerodynamics, structural dynamics, and fluid-structure interaction (FSI) behaviors. These methods are based on the Deforming-Spatial-Domain/Stabilized Space-Time (DSD/SST) method for the fluid and a semi-discrete finite element formulation based on the principle of virtual work for the cable-membrane parachute structure. An iterative coupling procedure couples the fluid and structure along the parachute canopy surface. Recent applications simulations have focused on the aerodynamic interaction between two separate parachutes and between different canopies in a parachute cluster, FSI between two separate parachutes and four-parachute clusters, FSI for steerable round parachutes with riser control inputs, and the FSI experienced by parachute soft-landing payload retraction systems. In addition to these application simulations, FSI validation simulations are being carried out and numerical results are being compared with water tunnel experiments for scale model parachutes. In one set of simulations, the dynamics of a small-scale round parachute canopy


are compared against the experimental data from a similarly scaled canopy for validation purposes. The experiments, which were carried out under ARO sponsorship and using AHPCRC and White Sands Missile Range (WSMR) resources, have sufficient spatial and temporal resolution to permit detailed comparison with the simulation results. In Figure 1, the three snapshots show the vorticity field surrounding two parachutes predicted by the FSI software. In this simulation, the lower canopy is kept rigid while the upper parachute deforms due to its interaction with the surrounding flow field. Deformation of the upper canopy is evident as it is drawn into the wake of the lower canopy and approaches canopy collapse. The frames correspond to physical times of 0.00, 1.75, and 3.50 seconds.

Figure 2 shows the structural response of the upper canopy as it deforms and interacts with the surrounding flow field. The colors on the canopy correspond to the differential pressures, which are a part of the fluid dynamics solution.
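The iterative coupling procedure described earlier alternates between the fluid and structural solvers until the canopy surface state stops changing within a step. A minimal block-iterative (Gauss-Seidel style) sketch of that loop is below; the solver stubs, relaxation factor, and toy data are assumptions for illustration, not the DSD/SST implementation used by the team.

```python
import numpy as np

def coupled_fsi_step(solve_fluid, solve_structure, displacement,
                     relax=0.5, tol=1e-6, max_iters=50):
    """One time step of a block-iterative fluid-structure coupling loop:
    solve the fluid on the current canopy shape, feed the resulting surface
    loads to the structural solve, under-relax the new displacements, and
    repeat until the interface converges."""
    for _ in range(max_iters):
        loads = solve_fluid(displacement)        # pressures on the canopy surface
        new_disp = solve_structure(loads)        # canopy response to those loads
        update = displacement + relax * (new_disp - displacement)
        if np.linalg.norm(update - displacement) < tol:
            return update
        displacement = update
    return displacement

# Toy stand-ins: a "spring" canopy pushed by loads proportional to a shape error
target = np.array([1.0, 2.0, 1.5])
disp = coupled_fsi_step(lambda d: 10.0 * (target - d),   # fake fluid loads
                        lambda f: f / 10.0,               # fake structural response
                        np.zeros(3))
print(disp)
```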

Summary
New computational technologies based on state-of-the-art methods are being developed to advance airdrop systems modeling capabilities. These technologies are necessary to address a variety of challenges associated with airdrop systems modeling. With these new capabilities, numerical simulations are being carried out for the fluid dynamics, structural dynamics, and FSI interactions experienced by airdrop systems. It is expected that further development of this capability will result in significant reduction of life-cycle costs for future airdrop systems.

Figure 1. The three snapshots show the vorticity field surrounding two parachutes predicted by the FSI software. In this simulation, the lower canopy is kept rigid while the upper parachute deforms due to its interaction with the surrounding flow field. Deformation of the upper canopy is evident as it is drawn into the wake of the lower canopy and approaches canopy collapse. The frames correspond to physical times of 0.00, 1.75, and 3.50 seconds.

Figure 2. This figure shows the structural response of the upper canopy as it deforms and interacts with the surrounding flow field. The colors on the canopy correspond to the differential pressures, which are a part of the fluid dynamics solution.

CTA: Computational Fluid Dynamics
Computer Resources: Cray T3E [AHPCRC] and SGI Origin 2000 [WSMR]


3-D Bomb Effects Simulations for Obstacle Clearance
ALEXANDRA LANDSBERG

Naval Surface Warfare Center (NSWC), Indian Head, MD

The Navy needs a system capable of simultaneously breaching obstacles and clearing mines during an amphibious assault. The rapid creation of transit lanes through shoreline defenses allows landing craft to deposit troops and equipment directly onto and beyond the beaches. The Navy and Marine Corps need the capability to conduct an efficient amphibious assault, minimizing the number of sorties required to conduct mine and obstacle clearance and minimizing amphibious lift requirements. In the past few years, PMS-490 and the ONR have sponsored brainstorming sessions and Broad Area Announcements to generate concepts for obstacle clearance. In response to the obstacle-breaching concern, the ONR started the Bomb Effects for Obstacle Clearance Program, along with several other alternative mine/obstacle-breaching programs. The Bomb Effects Program is a potentially effective means of defeating obstacles and mines in the surf zone and beach zone.

This is a highly visible, high priority problem that supports the Navy's investment in organic mine countermeasures (MCM) and is part of the Navy's Future Naval Capabilities (FNC) investments in organic mine countermeasures. This weapon system is part of the Analysis of Alternatives (AoA) for rapid, organic, standoff technologies for breaching surf zone and beach zone mines and obstacles. This effort will provide important input into that analysis for comparing Bomb Effects to other alternatives.

The Bomb Effects Program
The Bomb Effects Program is based on using standard general-purpose bombs as an existing, rapidly deployable building block for developing an effective system against obstacles. The program has two components: a testing component and an analytic component. The goal of the testing component is to study, identify, and verify the damage mechanisms of obstacles, both on land and in water, subjected to multiple bomb detonations. However, only limited experimental and full-scale testing of bomb effects has been and can be performed. The goal of the analytic component is to generate semi-analytic/empirical models that, when incorporated in system effectiveness studies, will determine the lethality of general-purpose bombs against obstacles in the beach zone and surf zone. In order to generate these analytic models, experimental data or hydrocode data is required. Prior modeling and simulation techniques generated a flow field due to a bomb detonation and then applied the loads to determine the response of the obstacle. This decoupled method was the only feasible method since the applicable hydrocodes were either not well advanced or were not parallelized at the time.

This 3-D modeling and simulation effort complements both the experimental and analytic efforts in providing data that would otherwise be difficult to obtain experimentally. These "live-fire" simulations are effectively an operational test of the real-life scenario of multiple bombs detonated in the beach and surf zone.

Applying High Performance Computing Resources to Live-Fire Simulations
The parallel hydrocode Alegra is being used for these simulations. In Alegra, a problem is divided into blocks; each block can then be run as Eulerian, Lagrangian, or Arbitrary Lagrangian Eulerian (ALE).


The flexibility to blend Eulerian, Lagrangian, and ALE methods is appealing for modeling these types of surf zone simulations, since an Eulerian solver can accurately capture the shocks in water and/or air, the Lagrangian elements most accurately model the tetrahedrons, and the Eulerian or ALE solver can be used to model the sand. The complex shock interactions of multiple bombs in water and sand strongly influence the loads experienced by these obstacles. The goal is to model the full-scale tests to determine the impact of bomb detonations on tetrahedron obstacles as a function of time. Simulations include modeling subsets of the full-scale tests as well as the full domain.
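The block-by-block choice of formulation described above can be pictured as a small bookkeeping structure that records which solver each region of the problem uses. The sketch below is purely illustrative and is not Alegra's input syntax; the block names and material labels are assumptions.

```python
# Illustrative only: recording which formulation each block of a blocked
# hydrocode problem uses. This is NOT Alegra's actual input format.
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    formulation: str   # "eulerian", "lagrangian", or "ale"
    material: str

problem = [
    Block("air_and_water",  "eulerian",   "air/water"),  # captures shocks in the fluid
    Block("steel_obstacle", "lagrangian", "steel"),      # tracks the tetrahedron legs
    Block("sand_bottom",    "ale",        "sand"),       # large deformation without tangling
]

for b in problem:
    print(f"{b.name:15s} -> {b.formulation:10s} ({b.material})")
```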

Due to advances in parallel hydrocodes and the availability of HPC resources, this effort is now able to model scenarios that cannot be tested experimentally, solved analytically, or modeled using earlier computational methods. Specifically, the ability to model a single bomb detonation impacting multiple obstacles was lacking. This project has demonstrated a single bomb detonation impacting multiple objects for both a detonation in air and a detonation in water. More complex scenarios, including multiple bombs detonated simultaneously or a bomb sequentially impacting multiple obstacles, can now be performed.

A general methodology was developed for mesh generation, flow solver execution, and post-processing on DoD HPC platforms. This methodology included linking Patran© to multiple Sandia National Laboratory software packages and then linking to post-processing software including Ensight© and CTHPLT. The development of a general, efficient methodology allows for rapid model generation, on the order of hours rather than days.

Simulations were typically run to several milliseconds (1–6 ms). These times were based on either time history data from full-scale tests or the computational domain size. Simulations performed included: (1) Mk 82 and Mk 83 bombs detonated in air and in water on a sand surface; (2) two Mk 83 bombs 8 feet apart detonated simultaneously in air on a sand bottom; (3) five Mk 83 bombs detonated simultaneously in air on a sand bottom, 32 feet apart (typical separation distance for air detonations) and 8 feet apart (typical separation distance for underwater detonations); (4) an Mk 83 bomb detonated in air near a "leg" of a tetrahedron on a sand bottom (smaller physical domain, simulation used for debugging and analysis); (5) an Mk 83 bomb detonated in air near a tetrahedron on a sand bottom at 4-foot and 8-foot stand-off (separation distance for full-scale tests in air); and (6) an Mk 82 bomb detonated in air and in water near 4 tetrahedrons on a sand bottom with 8-foot, 12-foot, 16-foot, and 20-foot standoffs.

The following is a highlight resulting from an Mk 83 bomb detonated in air near a single tetrahedron. Modeling and simulation of the impact of a bomb detonation near a full tetrahedron is difficult due to the large mesh size required to accurately mesh the geometry of the tetrahedron as well as the large physical domain surrounding the bomb and tetrahedron. Each leg of the tetrahedron is a 5-foot-long by 4-inch-wide angle iron of 5/8-inch-thick steel. A tabular Sesame EOS with an elastic-plastic model was used to model the steel tetrahedron leg. Figure 1, in which the tetrahedron is 4 feet from the bomb, shows an isocontour of the high explosive (HE) gas as it impinges on the tetrahedron at t = 275 ms. The isocontour and a centerline slice are colored by pressure. The pressure on the forward leg of the tetrahedron is increasing and is beginning to deform the tetrahedron.

In addition, simulation of an Mk 82 bomb detonated in air and in water near four tetrahedrons was performed. One of the successes of this effort has been the modeling and simulation of a bomb detonation impacting multiple tetrahedrons in air and in water. Figure 2 is the detonation in air at t = 1 ms. This figure shows an isocontour of HE gas colored by velocity magnitude. These simulations would not be possible without HPC resources.

Figure 2. Mk 82 detonated in air near four tetrahedrons. Isocontour of HE gases colored by velocity magnitude.

Figure 1. Mk 83 detonated in air near a single tetrahedron. Isocontour of HE gases colored by pressure.


Future Efforts
To date, the simulations have been run in pure Eulerian mode. One of the key remaining challenges is to run these simulations with Eulerian, Lagrangian, and ALE blocks. The interfaces between these blocks have been problematic. These issues are being addressed both in-house and through consulting with Sandia National Laboratories, the developers of Alegra, and the staff of the ARL, who also use the Alegra hydrocode.

CTA: Computational Structural Mechanics
Computer Resources: SGI Origin 3800 [ARL, SMDC]


Providing the Warfighter Information Superiority in Littoral Waters
RICHARD ALLARD, ALAN WALLCRAFT, CHERYL ANN BLAIN, MARK COBB, LUCY SMEDSTAD, PATRICK HOGAN, AND TIMOTHY KEEN

Naval Research Laboratory (NRL), Stennis Space Center, MS

STACY HOWINGTON, CHARLIE BERGER, AND JANE MCKEE SMITH

Engineer Research and Development Center (ERDC), Vicksburg, MS

MATT BETTENCOURT

Center for Higher Learning/University of Southern Mississippi, Stennis Space Center, MS

The littorals of the world are areas of great strategic importance. Approximately 95% of the world's population lives within 600 miles of the sea, 80% of all countries border the coast, and 80% of the world's capitals lie within 300 nautical miles of shore. With the end of the cold war, DoD's focus has shifted from a large land/sea battle scenario with a monolithic global adversary to dealing with low-intensity conflicts in the near-coastal or littoral regions. In wartime, forces that dominate the littoral, including the undersea environment, operate with impunity in the face of area denial threats while taking initial action to defeat those threats and prepare the battlespace for follow-on forces.

Information superiority—understanding and exploitation of the natural environment—is critical to the safe and effective operation of joint forces engaged in littoral expeditionary warfare. The tactical advantage will depend on who can most fully exploit the natural advantages gained by a thorough understanding of the physical environment. Every aspect of the littoral environment is critical to conducting military operations, a huge challenge to both the warfighters and the Meteorology and Oceanography (METOC) community that supports them.

Characterizing the Battlespace Environment
Characterization of the battlespace environment to ensure optimal employment of personnel, systems, and platforms depends on continuous improvement in science and technology. Key and essential elements in that characterization are effective, relocatable, high-resolution, dynamically linked mesoscale meteorological, coastal oceanographic, surface water/groundwater, riverine/estuarine, and sediment transport models. The linkages between these models, coupled with the complexity and the scale of the environment, require the implementation of focused sets of these models on DoD HPC architectures in support of real time operations, scenario (operational and/or training) planning, and course of action analysis.

The High Fidelity Simulations of Littoral Environments (HFSOLE) Portfolio is funded by CHSSI. The HFSOLE effort encompasses four projects: 1) the System Integration and Simulation Framework Project is developing the interface algorithms for the atmospheric, riverine, estuarine, wave, tidal, and ocean models; 2) the Near-shore and Estuarine Environments Project is coupling and enhancing nearshore process models to efficiently simulate interactions between winds, waves, currents, water levels, and sediment transport; 3) the Surface Water/Groundwater, Riverine


and Tidal Environments Project is enhancing the capabilities of the Adaptive Hydraulics (ADH) model. This project will provide DoD the tools to generate fast and accurate estimates of water depths and velocity distributions for use in ingress/egress assessments for rivers and streams. They will also support flow rate and sediment loading as input for coastal simulation, and ground-surface inundation and moisture content predictions for use in local atmospheric simulations. The fourth project, the System Applications Project, is improving the fidelity and execution speed of DoD maritime operation simulation systems dependent on data from ocean surface behavior models. Such simulations include amphibious assault, mine clearing, and seakeeping operations. This project provides for the development, implementation, and testing of an environmental server to support Modeling and Simulation (M&S).

Modeling the Littoral Waters
Previously, these models ran independently, without any coupling or linkages, and typically in serial fashion on one processor. The HFSOLE Portfolio includes eight numerical ocean, wave, and riverine models. They use HPC resources to solve a host of equations including the shallow-water equations, primitive continuity and momentum equations, and the spectral action balance equation. For example, the evolution of the wave spectrum in the Simulating WAves Nearshore (SWAN) (Booij et al. 1999) wave model is described by the spectral action balance equation, which for Cartesian coordinates is:

$$\frac{\partial}{\partial t}N + \frac{\partial}{\partial x}(c_x N) + \frac{\partial}{\partial y}(c_y N) + \frac{\partial}{\partial \sigma}(c_\sigma N) + \frac{\partial}{\partial \theta}(c_\theta N) = \frac{S}{\sigma} \qquad (1)$$

The first term on the left-hand side of this equation represents the local rate of change of action density in time. The second and third terms represent propagation of action in geographical space (with propagation velocities c_x and c_y in x- and y-space, respectively). The fourth term represents shifting of the relative frequency due to variations in depths and currents (with propagation velocity c_σ in σ-space). The fifth term represents depth-induced and current-induced refraction (with propagation velocity c_θ in θ-space). The expressions for these propagation speeds are taken from linear wave theory. The term S (= S(σ,θ)) on the right-hand side of the action balance equation is the source term in terms of energy density, representing the effects of generation, dissipation, and nonlinear wave-wave interactions.
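To make the propagation terms of Eq. (1) concrete, the sketch below advects wave action along a single spatial coordinate with a first-order upwind scheme. It is a minimal illustration only; SWAN's actual propagation schemes, grids, and source terms are more sophisticated, and the grid and wave speed here are assumed values.

```python
import numpy as np

def upwind_advect(N, c, dx, dt):
    """First-order upwind update for one propagation term of the action
    balance, dN/dt + d(c*N)/dx = 0, assuming c > 0 everywhere."""
    flux = c * N
    N_new = N.copy()
    N_new[1:] -= dt / dx * (flux[1:] - flux[:-1])
    return N_new

# Toy grid: a patch of wave action propagating shoreward at 5 m/s
x = np.linspace(0.0, 1000.0, 201)          # 5 m spacing
N = np.exp(-((x - 200.0) / 50.0) ** 2)
c = np.full_like(x, 5.0)
for _ in range(200):                       # 200 steps of 0.4 s (CFL = 0.4)
    N = upwind_advect(N, c, dx=5.0, dt=0.4)
print(f"peak action now near x = {x[np.argmax(N)]:.0f} m")
```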

The HFSOLE models have been shown in the literature to provide a realistic depiction of the complex physical processes that occur in the littoral battlespace, from rivers and estuaries to oceanic waters. CHSSI has given DoD an opportunity to make these large numerical codes scalable on a host of HPC computational platforms and greatly reduce the wallclock time needed to perform a simulation, hindcast, or forecast. Additionally, the Model Coupling Environmental Library (MCEL) being developed in HFSOLE allows for the coupling of these numerical models running on distributed HPC platforms. The distributed model-coupling infrastructure uses a data flow approach where coupling information is stored in a centralized server and flows through processing routines or filters to the numerical models. The resulting software is expected to provide substantial improvements over existing technologies by including inter-model interactions. The models are used by a DoD Challenge Project that allows for more experimentation and the ability to perform simulations at higher resolutions.

Japan/East Sea Circulation Simulations
Using HPC resources, HFSOLE researchers are performing complex computations in a timely manner. This will greatly impact providing the warfighter with a tactical advantage. An example of this time savings is illustrated with the coastal ocean ADvanced CIRCulation Model (ADCIRC), which was applied to the simulation of rip currents generated by breaking waves. As shown in Figure 1, barred beaches with channels often produce rip currents (at the channels) when incident waves break in the vicinity of the bars. The laboratory wave-driven experiments of Haller and Dalrymple (1999) were simulated. The dimensions of the beach are 20 m by 18.2 m, with a water depth variation from 0.37 m

Figure 1. ADCIRC model computed circulation of the steady state sea surface height and current velocity over a laboratory barred beach. Note the presence of rip currents in the two channels.


offshore to -0.01 m at the shoreline. The computational domain is discretized to a spacing of 0.1 m using 40,040 elements and 20,313 nodes. The simulation extends 0.6 days to a steady state, with computations made every 0.025 seconds, producing a rip current in the upper channel. Computational times dropped from 123 hours using one CPU to 2 hours using 128 CPUs.
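A quick sanity check of the timings above, using the standard strong-scaling metrics:

```python
def speedup_and_efficiency(serial_hours, parallel_hours, n_cpus):
    """Classic strong-scaling metrics: speedup S = T1/Tp and efficiency S/p."""
    s = serial_hours / parallel_hours
    return s, s / n_cpus

s, e = speedup_and_efficiency(123.0, 2.0, 128)
print(f"speedup ~{s:.0f}x, parallel efficiency ~{e:.0%}")   # ~62x, ~48%
```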

A series of ocean circulation simulations in the Japan/East Sea has been performed on the NAVO Cray T3E. The model used is the Hybrid Coordinate Ocean Model (HYCOM), a next generation circulation model under development at the Naval Research Laboratory (NRL), the University of Miami, and Los Alamos National Laboratory. Ideally, an ocean general circulation model should (a) retain its water mass characteristics for centuries (a characteristic of isopycnal coordinates), (b) have high vertical resolution in the surface mixed layer (a characteristic of z-level coordinates), and (c) have high vertical resolution in surface and bottom boundary layers in coastal regions (a characteristic of terrain-following coordinates).

The Japan/East Sea has been used to test the robustness of HYCOM because it contains many circulation features found in the global ocean, such as a western boundary current, subpolar gyre, and rich eddy field, and it is small enough to allow simulations with very high grid resolution. Under this HPC DoD Challenge Project, several simulations with 3.5 km resolution were performed for several years each. Scalability was via MPI/SHared MEMory (SHMEM), or OpenMP, or both, although SHMEM was used exclusively for the simulations performed on the T3E. The mesh size for the high-resolution configuration is 394x618x15, and a one-year simulation requires about 12,000 CPU hrs/model year. The HYCOM simulations have shown (Figure 2) that high horizontal grid resolutions are needed to adequately resolve the mesoscale features as well as the coastline and bottom topography, especially over the shelf.
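The size and cost figures quoted above imply a sizeable but tractable grid; a back-of-envelope check is below (the four-year experiment length is an assumption standing in for the "several years" mentioned above).

```python
# Back-of-envelope size of the high-resolution Japan/East Sea configuration
nx, ny, nz = 394, 618, 15
grid_points = nx * ny * nz
cpu_hours_per_model_year = 12_000

years = 4                                         # assumed experiment length
print(f"{grid_points / 1e6:.1f} million grid points")            # ~3.7 M
print(f"~{years * cpu_hours_per_model_year:,} CPU hours for a {years}-year run")
```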

Software Testing to Begin
Ongoing work in HFSOLE includes preparation for individual Alpha and Beta software tests for each of the models included in the portfolio. A portfolio-wide Alpha test using the MCEL software took place in January 2003. Inputs to these models include topographically controlled mesoscale atmospheric forcing such as wind stress, sensible and latent heat fluxes, incoming solar radiation and sea level pressure, high-resolution bathymetry, and tidal constituents. The final results of this effort will be a scalable suite of dynamic models for real-time operational applications and high-fidelity simulations for modeling and simulation scenarios.

References
Booij, N., R.C. Ris, and L.H. Holthuijsen, "A Third-Generation Wave Model for Coastal Regions 1. Model Description and Validation," J. Geophys. Res., Vol. 104, No. C4, pp. 7649–7666, April 1999.

Haller, Merrick C., and Robert A. Dalrymple, "Rip Current Dynamics and Nearshore Circulation," Res. Report CACR-99-05, Center for Applied Coastal Research, Univ. of Delaware, 1999.

CTA: Climate/Weather/Ocean Modeling and Simulation and Environmental Quality Modeling and Simulation
Computer Resources: Cray T3E [ERDC, NAVO], SGI Origin 3000 [ERDC], and IBM SP, Cray SV1, and SUN E10000 [NAVO]

Figure 2. Impact of progressively increasing the horizontal grid resolution using HYCOM. At 14 km resolution, unrealistic overshoot of the East Korea Warm Current (130°E, 39°N) is evident. This overshoot is diminished at 7.5 km, and at 3.5 km the current separates from the coast at the observed latitude. The subpolar gyre strengthens as the horizontal resolution is increased.


Getting the Waves Right
JANE MCKEE SMITH

US Army Corps of Engineers, Engineer Research and Development Center (ERDC), Vicksburg, MS

Ocean waves near the coast have a huge impact on military activities aimed at getting soldiers, sailors, marines, and equipment on the shore. Although waves are generated in the open ocean and propagate over long distances, the last kilometer or less near the shore is the critical region for landing and docking. Nearshore wave transformation is solved in the model STeady-state WAVE (STWAVE) using the wave action balance equation. The model includes the processes of shoaling and refraction (increases in wave height and changes in wave direction in shallow water), wave-current interaction (changes in wave properties caused by tidal or nearshore currents), wave growth due to the wind, and wave breaking. The model is based on linear wave theory and does not include reflection or bottom friction. By linking the nearshore wave transformation model STWAVE with deepwater wave forecasts or hindcasts (at large spatial scales), nearshore waves can be estimated on beaches and at harbor and inlet entrances. Forecasts are used for short-term planning of operations, and hindcasts are used to garner climatological information for longer-term planning (typical and extreme conditions, seasonal variation, storm probability).

Using STWAVE to Model Nearshore Waves
In the past, nearshore wave transformation modeling has been run on HPC platforms to make forecasts to support warfighters. To meet stringent time requirements for producing the forecasts, the model was run with very coarse spatial resolution and less frequently in time than is optimal. Under the CHSSI High Fidelity Simulations of Littoral Environments project, STWAVE was parallelized to exploit HPC capabilities. Tests earlier this year demonstrated that the parallel version of STWAVE was over 90 percent efficient using 32 processors on both the Cray T3E and SGI Origin 3000. This translates into increasing the model throughput by a factor of 28. The highly efficient parallelization allows improved resolution of the nearshore domain (which improves model accuracy) and more frequent model runs (more resolution in time), while still meeting or beating required forecast time constraints. Improved nearshore modeling times also allow more time for other modeling components and for decision making. Figure 1 shows verification of the parallel version of STWAVE at Duck, NC, for the year 1997. Full-year simulations were not run prior to the parallel capability.

Continuing challenges include expanding to more HPC platforms and a larger number of processors, implementing additional improvements to model efficiency, and upgrading model physics. The CHSSI work will focus on interfacing STWAVE with circulation and sediment transport models to provide a greater understanding of the physical processes that impact directly on operations, equipment, and training.

Figure 1. Parallel version of STWAVE at Duck, NC, 1997


References
Smith, J.M., "Breaking in a Spectral Wave Model," Proceedings, Waves 2001, ASCE, pp. 1022–1031, 2001.

Smith, J.M., Sherlock, A.R., and Resio, D.T., STWAVE: Steady-State Spectral Wave Model User's Manual for STWAVE Version 3.0, ERDC/CHL SR-01-1, US Army Engineer Research and Development Center, Vicksburg, MS, 2001.

CTA: Climate/Weather/Ocean Modeling and Simulation
Computer Resources: Cray T3E and SGI Origin 3000 [ERDC]

http://chl.wes.army.mil/research/wave/wavesprg/numeric/wtrasformation/downld/erdc-chl-sr-01-11.pdfl


Rapid Transition to Operations: Numerical Weather Modeling
WILLIAM BURNETT

Naval Meteorology and Oceanography Command (NAVMETOCCOM), Stennis Space Center, MS


The Primary Oceanographic Prediction System (POPS) produces and provides critical, classified and unclassified atmospheric and oceanographic guidance to Navy and DoD activities worldwide on a fixed schedule, 24 hours a day. POPS covers the entire Fleet Numerical Meteorology and Oceanography Center (FNMOC) enterprise including the supercomputing, communications (including receipt of hundreds of thousands of observations and transmission of model products), databases, data assimilation and distribution, and systems control/monitoring. It is the engine that operates all the Navy's global/regional/tactical atmospheric, oceanographic, wave, ice, and tropical cyclone models, and DoD's only coupled air/ocean model. POPS is the only national system that assimilates classified and unclassified data and produces and disseminates classified and unclassified global/regional atmospheric guidance.

Meteorological Data Assimilation
Keeping operational numerical weather models "state-of-the-art" while transitioning to future, scalable computing architectures is a difficult but necessary task. The computing industry is evolving into a new state in which the preferred mode of large-scale computers is a hybrid system composed of hundreds of nodes, each using tens of processors. Memory is distributed among the nodes, meaning that explicit communications are needed for synchronization of data during computations across the nodes. The industry standard for performing this communication is known as the Message Passing Interface (MPI). Within each node, the individual processors share memory in a manner similar to today's multi-processor, shared memory computers, such as the Cray C90. Communication among the individual processors is accomplished through the emerging standard OpenMP. However, the shared memory in the newer systems is cache-based with a relatively small bandwidth, and the processors are scalar, rather than vector, processors. These differences, and the use of distributed memory across the nodes, require a significantly different programming system than what has been used in the past in order to realize the full potential of this newer architecture.
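The explicit synchronization that distributed memory requires is essentially a halo exchange between neighboring subdomains. The sketch below shows the idea with mpi4py as a convenient stand-in; the operational forecast codes are not written this way, and the grid and neighbor pattern here are illustrative assumptions.

```python
# Minimal halo-exchange sketch of the explicit communication distributed
# memory requires (run with: mpiexec -n 4 python halo.py). Illustrative only.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns a strip of grid columns plus one "halo" column on each side.
local = np.full(10, float(rank))
left = (rank - 1) % size
right = (rank + 1) % size

# Exchange edge values with neighbors so each rank can compute its stencil.
halo_from_left = comm.sendrecv(local[-1], dest=right, source=left)
halo_from_right = comm.sendrecv(local[0], dest=left, source=right)
print(f"rank {rank}: halos = ({halo_from_left}, {halo_from_right})")
```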

The Navy Operational Global Atmospheric Prediction System (NOGAPS) and the Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS) are complete numerical weather prediction (NWP) systems. Each receives raw meteorological observation data from myriad sources around the world and, with a variety of data processing and computational steps, produces numerical, gridded "products" to satisfy DoD requirements for meteorological support. This process is known as meteorological data assimilation and is typically performed by marching forward in 6-hour steps known as "update cycles."
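Schematically, each update cycle advances the previous analysis with a short forecast (the background) and then blends in new observations. The toy below uses a single scalar state and a simple nudging weight; it illustrates the cycling concept only and is not the analysis scheme used operationally.

```python
def update_cycle(analysis, observations, forecast_6h, obs_weight=0.3):
    """One schematic 6-hour update cycle: advance the previous analysis with
    a 6-hour forecast (the 'background'), then nudge toward new observations.
    The scalar state and obs_weight are illustrative assumptions."""
    background = forecast_6h(analysis)
    innovation = observations - background
    return background + obs_weight * innovation

# Toy scalar "atmosphere": the forecast model relaxes the state toward 10.0
state = 0.0
for cycle, obs in enumerate([9.2, 9.8, 10.1, 9.9]):
    state = update_cycle(state, obs, forecast_6h=lambda x: 0.8 * x + 2.0)
    print(f"cycle {cycle}: analysis = {state:.2f}")
```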

HPC Resources Crucial
The Oceanographer of the Navy, NAVMETOCCOM, and FNMOC agreed to a very aggressive schedule to completely upgrade the Navy’s and DoD’s POPS supercomputer from vector to scalable architectures in 1997. This schedule could not have been successfully accomplished without DoD’s HPC resources. MSRC, CHSSI, and PET are invaluable assets. They ensured that the distributed memory systems purchased by the Oceanographer of the Navy in 1999 were efficient and that the modeling codes were optimized for the new architecture. CHSSI funds were leveraged with the Oceanographer’s research and development (R&D) funds to rapidly upgrade and rewrite the NOGAPS and COAMPS codes. These funds were crucial to ensuring that the updated codes were ready for the new systems. The CHSSI code conversion effort ensured that the Naval Meteorology and Oceanography community could provide higher-resolution and more accurate numerical weather products, in less time and at lower cost. Second, MSRC resources (both DoD Challenge Projects and CHSSI) were used to test and benchmark the codes on various machines to help determine which machines could efficiently and effectively operate the code. This was crucial to ensuring that the codes were ready for rapid transitions to operations. Third, PET resources were used to train both the researchers and operators on the new types of HPC architectures, and help them understand how to develop and operate scalable systems.

Visualization of atmospheric data (both global and regional) is crucial to the scientist and the warfighter. Both NOGAPS and COAMPS outputs contain millions of bits of information. Visualization techniques help the user understand the information and the impact of the environment on exercises/operations.

The HPC resources allowed the operational community to continue to provide seamless operational weather products and information to the warfighters during a crucial time. The conversion from the Cray C90 supercomputers to the SGI Origin 3800s was an enormous and highly successful undertaking. During this extraordinarily complex effort, HPC resources ensured that FNMOC continued to provide exceptional service to the Fleet and other DoD customers.

New Codes Work Heterogeneously
The new NOGAPS and COAMPS codes work in a heterogeneous computing environment that is a significant departure from previous generations. Historically, these systems have run in a homogeneous computing environment with all modules running on the same platform, e.g., a Cray C90. This was possible because supercomputers such as the C90 delivered considerable computational power for codes ranging from serial to parallel vector applications. The new generation of scalable architecture supercomputers, with distributed memories and cache-based processors, are not general-purpose machines and require code that works in a heterogeneous environment.

References
Hogan, T.F. and T.E. Rosmond, 1991: The description of the US Navy Operational Global Atmospheric Prediction System’s spectral forecast model, Mon. Wea. Rev., 119, pp. 1786–1815.

Rosmond, T.E., 2000: A scalable version of the Navy Operational Global Atmospheric Prediction System spectral forecast model. Scientific Programming, 8, pp. 31–38.

Rosmond, T.E., J. Teixeira, M. Peng, T.F. Hogan, and R. Pauley, 2002: Navy Operational Global Atmospheric Prediction System (NOGAPS): forcing of ocean models, Oceanography, 15, pp. 99–108.

Figure 1. Navy Operational Global Atmospheric Prediction System (NOGAPS) previous 12-hr forecast of precipitation rate [mm/12hr filled areas in color], 540 and 564 thickness lines [dm - blue contours], and sea level pressure [hPa - orange contours] valid 14 May 2002 at 00 Zulu

Figure 2. Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS) previous 12-hr forecast of precipitation rate [mm/12hr filled areas in color], 540 and 564 thickness lines [dm - blue contours], and sea level pressure [hPa - orange contours] valid 14 May 2002 at 00 Zulu


Hodur, R.M., 1997: The Naval Research Laboratory’s Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS), Mon. Wea. Rev., 125, pp. 1414–1430.

Mirin, A.A., G.A. Sugiyama, S. Chen, R.M. Hodur, T.R. Holt, and J.M. Schmidt, 2001: Development and Performance of a Scalable Version of a Nonhydrostatic Atmospheric Model, HPC User’s Group Conference 2001, Beau Rivage Resort, Biloxi, Mississippi, June 18–21, 2001.

CTA: Climate/Weather/Ocean Modeling and Simulation
Computer Resources: IBM SP3, SGI Origin 2000, SGI Origin 3000, and Cray T3E [NAVO]

Hodur, R.M., J. Pullen, J. Cummings, P. Martin, and M.A. Rennick, 2002: The Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS), Oceanography, 15, pp. 88–98.


3DVAR — AFWA’s New Forecast Model Ties Research to Operations
PAIGE HUGHES

Air Force Weather Agency (AFWA), Offutt AFB, NE

The Air Force Weather Agency (AFWA) implemented an advanced observation integration method September 26, 2002, that significantly improves weather forecast model accuracy.

The three-dimensional variational data assimilation (3DVAR) method processes nearly four times as many weather observations as the previous method. Additionally, 3DVAR ingests a wider spectrum of observations, as many as 21 different data types. Developers contend that all of the forecasts produced using 3DVAR show significantly enhanced resolution.

3DVAR is capable of running on 45-, 15-, and 5-kilometer grids, further enhancing the analysis of initial conditions. Real-time verification of the data shows improved cloud and precipitation locations at the initial forecast time of the model.
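The article does not spell out the underlying mathematics, but a variational analysis of this kind is conventionally posed as the minimization of a cost function that balances the departure of the analysis state x from the background (prior forecast) x_b against its departure from the observations y; in standard notation, with B and R the background- and observation-error covariance matrices and H the observation operator,

J(\mathbf{x}) = \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b) + \tfrac{1}{2}\,\bigl(H(\mathbf{x})-\mathbf{y}\bigr)^{\mathrm{T}}\mathbf{R}^{-1}\bigl(H(\mathbf{x})-\mathbf{y}\bigr).

The 4DVAR concept mentioned later in this article extends the same cost function over a time window, so that the observations are compared against an evolving model trajectory rather than a single analysis time.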

“It’s a more sophisticated way of determining the initial conditions, ultimately leading to a better forecast,” said Dr. Jerry Wegiel, chief of the Fine Scale Models team at AFWA and the principal investigator for 3DVAR. “This is a significant milestone for the operational and research community,” he added.

AFWA is the first in the country to use this method operationally, and with the increased capability, AFWA is producing weather forecast products with greater accuracy for the warfighter. AFWA maximizes the nation’s aerospace and ground combat effectiveness by providing accurate, relevant and timely air and space weather information to the DoD, coalition, and national users.

3DVAR is one facet of a larger initiative to replace the current fine scale modeling process used operationally. As part of this initiative, developers are working on the Weather Research and Forecast (WRF) model, slated to replace the Mesoscale Model Five (MM5).

WRF is the object of major interest for meteorologists because it provides a common framework for research and operational modeling. WRF incorporates a portable modeling infrastructure with the latest atmospheric science, allowing researchers to develop new numerical methods while incorporating real-time operations. The WRF initiative teams the nation’s top weather scientists from AFWA, the National Centers for Environmental Prediction, the National Center for Atmospheric Research, and various universities; in fact, more than 700 scientists are involved in this effort.

AFWA has long been the DoD’s center of excellence for cloud analysis using high-resolution satellite data, and now, with the implementation of WRF-3DVAR, AFWA is being touted as a leader in forecast model R&D.

Funding for the WRF-3DVAR initiative comes from the HPCMP. The HPCMP provides the supercomputer services, high-speed network communications, and computational science expertise that enable the Defense laboratories and test centers to conduct a wide range of focused research, development, and test activities. This partnership puts advanced technology in the hands of US forces more quickly, less expensively, and with greater certainty of success.

AFWA’s Fine Scale Models team is looking farther down the research road and has unveiled a concept called four-dimensional variational data assimilation (4DVAR). With funding, the 4DVAR program could be implemented operationally as early as 2006.


“AFWA is the first in the country to use this method operationally and with the increased capability, AFWA is producing weather forecast products with greater accuracy for the warfighter. AFWA maximizes the nation’s aerospace and ground combat effectiveness by providing accurate, relevant and timely air and space weather information to the Department of Defense, coalition, and national users.”


CTA: Climate/Weather/Ocean Modeling and Simulation
Computer Resources: IBM SP3 [ASC, NCAR, AFWA], SGI Origin 3000 [ERDC], and IBM SP PWR PC [AFWA]


Figure 1. The first panel shows cloud top heights from an operational MM5 run using WRF-3DVAR as the data assimilation scheme. The bottom panel is verifying MB enhanced IR imagery. (JAAWIN)


Forecasting the Weather Under the Sea: NAVO MSRC Scientific Visualization Center Allows Navy to See the Ocean Environment for Undersea Recovery Operations
PETE GRUZINSKAS

Naval Oceanographic Office (NAVO), Scientific Visualization Center, Stennis Space Center, MS

The ability to accurately predict environmental conditions is paramount to operational consideration within the Navy and the DoD. The recent Navy salvage operation (SALVOPS) of the Japanese fishing trawler Ehime Maru provides an excellent example of the need for these accurate predictions. Through the application of state-of-the-art software analysis tools, designed for the analysis of R&D ocean circulation models, to the SALVOPS, the Navy was able to successfully complete its mission.

On February 9, 2001, a collision between a US Navy submarine and the Ehime Maru sent the fishing vessel to the bottom of the Pacific Ocean about 10 miles south of Diamond Head, Oahu, Hawaii. The decision was made to retrieve the vessel, which lay in approximately 1800 feet of water. A Crisis Action Team was formed, and the NAVO MSRC Visualization Center was asked to provide visualization support for the Ehime Maru retrieval operation. The first challenge was to visualize the high-resolution model output generated by the Shallow-Water Analysis and Forecast System (SWAFS), running on the MSRC’s IBM-SP. SWAFS, as shown in Figure 1, is a 3-D ocean circulation model that produces time-series data containing ocean current speed and direction, temperature, and salinity fields. Initially, a 2-kilometer grid that covered the entire Hawaiian Islands was generated and used to provide input to a higher resolution (500-meter) nest, which bounded the operation area.

Development of Analytical Software Environments
Part of the NAVO MSRC Visualization Center’s mission is the development of analytical software environments for the DoD research community. Some of these tools, designed to analyze ocean model output, were modified to analyze the Navy’s operational model output for the Ehime Maru SALVOPS. These software tools provided significant diagnostic capability, which assisted in the validation of the model output in a very complex environment.

While virtual environments may not provide all of the answers to data analysts, the fact of the matter is the real world environment is 3-D. Features within the environment have a 3-D structure that can be difficult, if not impossible, to realize in two-dimensional (2-D) space. The same technology that was built to help research and development modelers scrutinize their model output was applied to this operational scenario where the speed and direction of the ocean currents were critical factors.

“With the Navy’s ever evolving 3-D ocean modeling capabilities and the matching HPCMP hardware advances, it is now possible to place model results on the desktop of the DoD user and, in the case of the Ehime Maru SALVOPS, on the laptop of analysts in the field.”

Figure 1. SWAFS surface currents in the operational area

An additional feature added to the application, unique to the Ehime Maru retrieval operation, was the display of an Acoustic Doppler Current Profiler (ADCP) buoy. The display of the ADCP data provided analysts a direct comparison between model output and in-situ current measurements in near real-time. Another significant feature of the virtual environments built for ocean model analysis is portability — the ability to run on a variety of hardware architectures, including laptop computers — which proved to be critical in this operation.

Portability is accomplished by savvy application of graphics techniques, as well as smart Input/Output and memory management. This ability was critical to the provision of an analysis environment and data to forward-deployed personnel on-scene at the Ehime Maru recovery site. Other support products were developed to render the recovery area in 3-D from high-resolution bathymetry (water depth) data to delineate critical areas (Figure 2). Conceptual animations were built to demonstrate the mechanics of an extremely difficult recovery operation.

Using Virtual Environments to Analyze Results
3-D ocean circulation models, like those supported by the HPCMP, represent 3-D grids with output that varies over time (four-dimensional). To analyze the output of these models, the computational space is reconstructed into a virtual environment where a variety of analytical techniques can be applied and subsequently visualized.

The NAVO MSRC Visualization staff was approached before this recovery operation to assist in the analysis of the current fields in the vicinity of the Ehime Maru, which lay at a depth of approximately 1800 feet. The work accomplished in the R&D analysis arena was directly applied to this naval operation. Techniques, such as colorized vectors and advected particles, were employed to ensure that critical thresholds of current speed were not exceeded in and around the salvage area. On scene (in situ) sensors provided near real-time measurements, which were displayed along with the model output. This was all packaged and automated to provide daily results to analysts on-scene in Oahu.
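As a rough illustration of the advected-particle technique mentioned above (the actual NAVO MSRC visualization software is not shown in this report), particles can be traced through a gridded current field with a simple forward-Euler step, and each position sample colorized by the local current speed. Everything in the sketch, from the grid size to the made-up circulation pattern, is a placeholder.

/* Advect tracer particles through a steady 2-D current field sampled on a
 * regular grid. A real tool would use 3-D, time-varying model output,
 * proper interpolation, and a higher-order integrator. */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define NX 64
#define NY 64
#define DX 500.0                      /* grid spacing in meters (e.g., a 500-m nest) */

static double u[NX][NY], v[NX][NY];   /* eastward/northward current components, m/s  */

static void sample(double x, double y, double *us, double *vs)
{
    int i = (int)(x / DX), j = (int)(y / DX);      /* nearest-cell lookup */
    if (i < 0) i = 0; if (i >= NX) i = NX - 1;
    if (j < 0) j = 0; if (j >= NY) j = NY - 1;
    *us = u[i][j];
    *vs = v[i][j];
}

int main(void)
{
    /* Made-up circulation pattern, used only so the sketch runs. */
    for (int i = 0; i < NX; i++)
        for (int j = 0; j < NY; j++) {
            u[i][j] =  0.2 * sin(2.0 * M_PI * j / NY);
            v[i][j] = -0.2 * sin(2.0 * M_PI * i / NX);
        }

    double x = NX * DX / 2.0, y = NY * DX / 2.0;   /* particle release point */
    double dt = 60.0;                              /* seconds per step       */

    for (int step = 0; step <= 120; step++) {      /* two simulated hours    */
        double us, vs;
        sample(x, y, &us, &vs);
        x += us * dt;                              /* forward-Euler advection    */
        y += vs * dt;
        double speed = sqrt(us * us + vs * vs);    /* would drive the color map  */
        if (step % 30 == 0)
            printf("t=%5.0f s  x=%8.1f m  y=%8.1f m  speed=%.2f m/s\n",
                   step * dt, x, y, speed);
    }
    return 0;
}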

A level of confidence was established, based on model output and in situ measurements, that the ocean currents in and around the salvage area would not represent a threat to the salvage operation. The virtual environments created by the staff at the NAVO MSRC Visualization Center provided a critical look into the environment around this operation, with regard to the undersea terrain (bathymetry), ocean current speed, direction, and temperature. This was a successful SALVOPS, and the tools brought to bear before and during this operation will serve both the R&D and operational ocean modeling communities.

Future 3-D Ocean Models
Advances in hardware and software technology have helped close the gap between computational technology and the technology required to manipulate and display the results of the 3-D ocean model output. With the Navy’s ever evolving 3-D ocean modeling capabilities and the matching HPCMP hardware advances, it is now possible to place model results on the desktop of the DoD user and, in the case of the Ehime Maru SALVOPS, on the laptop of analysts in the field.

The operation was successful, and all mission objectives were accomplished. This recovery operation demonstrates the excellent synergy that has developed between the operational Navy and the DoD research infrastructure built by the HPCMP.

References
Navigator, Spring 2002, NAVO MSRC, pp. 7–9.

CTA: Climate/Weather/Ocean Modeling and Simulation
Computer Resources: RS/6000 SP3 [NAVO]

Figure 2. Still image showing Ehime Maru retrieval over real bathymetry data. The animation was designed to conceptualize/visualize what the recovery would look like in the context of the real data collected by the USNS SUMNER.


Cover image labels: Body with Payload Cavity, Bourrelet, “C”-Shaped Armature, Tungsten Nose

DENY ENEMIES SANCTUARY

Remote Sensing
Unmanned Aerial Vehicles

Long-Range Precision Strike

Small-Diameter Munitions
Deep Mobile Target Attack in Denied Areas

Defeating Hard and Deeply Buried Targets


References (from top to bottom):

Accuracy Modeling for Electro-Magnetic Launched Anti-Armor Projectile by James Newill
Typical electromagnetic projectile

Time-Accurate Aerodynamics Modeling of Synthetic Jets for Projectile Control by Jubaraj Sahu
Computed time-coded particle traces show the interaction of the unsteady synthetic jet with the projectile wake flow at Mach = 0.11 and zero degree angle of attack

Development of a High-Fidelity Nonlinear Aeroelastic Solver by Raymond Gordnier
Three-dimensional panel flutter due to interaction with a transitional boundary layer

Identification of “Hot-Spots” on Tactical Vehicles at UHF Frequencies by Anders Sullivan
Impulse response mapped on T-72


Deny Enemies Sanctuary

“Deny enemies sanctuary — so they know no corner of the world is remote enough, no mountain high enough, no cave or bunker deep enough, no SUV fast enough, to protect them from our reach… To achieve this objective, we must have the capability to locate, track and attack both mobile and fixed targets — anywhere, any time, and under all weather conditions… To achieve this, we must develop new data links for connecting ground forces with air support; new long-range precision strike capabilities; new, long-range, deep penetrating weapons that can reach our adversaries in the caves and hardened bunkers where they hide…” — from testimony of US Secretary of Defense, Donald Rumsfeld, prepared for the House Armed Services Committee 2003 Defense Budget Request, February 6, 2002.

These words ring true. Recent events in our nation only underscore their importance. As scientists and engineers, the message is clear. All our efforts must be directed towards satisfying the Secretary’s transformational goal. The following success stories highlight the outstanding accomplishments of DoD scientists and engineers in striving towards this goal. The transformational goal is actually a continuously evolving process that requires constant innovation by the scientific community. The ability to solve very complex problems from quantum physics to high-speed fluid flow is a hallmark of these success stories. By applying state-of-the-art algorithms, coupled with high performance computing resources, problems that could not be solved as little as 5 years ago can now be solved routinely. A selection of these stories illustrates the practical application of modeling and simulation in the design process — with the goal of producing better and cheaper weapon systems for the warfighter. Other stories reflect basic research efforts in materials modeling and device development. Overall, the stories span many of the HPCMP computational technology areas, including computational fluid dynamics (CFD), computational structural mechanics (CSM), computational chemistry and materials science (CCM), and computational electromagnetics and acoustics (CEA).

The first group of stories in this section highlights the use of high performance computing in the design process for air, sea, and ground weapon systems applications. These stories include using high-fidelity simulations for the design of an advanced submarine sail, end-to-end simulation capabilities for projectile-target interactions, and advanced modeling and simulation of revolutionary unmanned air vehicles (UAV). These stories are examples of the paradigm shift taking place within the DoD as part of the transformation, that is, developing cost effective weapons in a “virtual” design environment. The next group of stories discusses materials and chemistry modeling. The first in this set is a story on atomistic modeling techniques for the development of new energetic materials. This is followed by a story on materials modeling for device development. Next is a story on quantum mechanical molecular modeling for the design of advanced resins used in gun testing. Finally, there is a related story on nanoparticles used in the design of high energy density materials. In the next group, there are two stories on modeling and simulation of weapons of mass destruction. In both stories, the authors study the effects of the non-ideal airblast on the environment. The next group of stories illustrates the application of modeling and simulation in advanced sensor technology and advanced radio frequency (RF) weapon technology. The first in this group discusses modeling the performance of unattended ground sensors in the modern complex battlefield environment. It is shown that high-fidelity simulations are essential to assess the performance limitations of next-generation sensor technology. The next story is on modeling and simulation of foliage penetrating radar and how electromagnetic models can be used in the development of advanced detection algorithms. The last story in this group discusses the use of electromagnetic particle-in-cell methods to design high power microwave devices. These devices are designed to defeat enemy battlefield electronics using non-lethal means. This capability will be very important in future conflicts where over-matched enemy forces may seclude themselves in the civilian population. The last group in this section shows examples of the outstanding progress that has been made in reactive flow modeling. The first in this set discusses multi-phase CFD simulations used for next generation gun propulsion systems. The next is a related story on the interaction between high temperature gas and solid propellant in tank and artillery gun rounds. The final story in this group, and in this section, discusses the production of nanoenergetic particles in turbulent reacting flows, which are expected to be employed in next-generation Army gun systems. In summary, each of these stories can relate to the Secretary’s transformational goal on denying enemies sanctuary — whether it is in unmanned surveillance and tracking technology (sensors, UAV) or improved lethality technology (weapon platforms, materials).

As the DoD leadership continues to transform military training and doctrine, it is incumbent upon the scientific community to continue to develop and deliver the best technologies to the warfighter. In the end, these stories are just a small sampling of the outstanding contributions made by scientists and engineers across the country in government service, academia, and industry. The common thread running throughout this national fabric is high performance computing. In large measure, the work represented by these stories could not have been accomplished without the significant investments made in high performance computing hardware, software, and networking. These stories are a tribute to, and in some sense a validation of, this investment.

Dr. Anders Sullivan
Army Research Laboratory, Sensors and Electron Devices Directorate, Adelphi, MD


Design of an Advanced Submarine Sail
JOSEPH GORSKI

Naval Surface Warfare Center, Carderock Division (NSWCCD), West Bethesda, MD

With submarine operations shifting to littoral missions, there is an increased desire for new systems that improve littoral warfare effectiveness, and designs that will allow flexibility for mission-specific reconfiguration and rapid insertion of new technologies. Candidate systems under consideration include countermeasure enhancements, high data rate antennas, unmanned underwater vehicles, littoral warfare weapons, advanced buoyant cable antennas, and Special Operating Forces stowage, among others. Conventional sails with their low drag and low volumes cannot accommodate these systems. A new sail shape, with up to four times the volume, is necessary to enclose such new systems. This new sail, which has a dual curved canopy-like shape, has already been tested on the US Navy’s Large Scale Vehicle (Figure 1) with plans for eventual insertion on the Virginia Class submarine.

Limited Experimental Data
This sail shape is a radical new design for the US Navy. There is no historical database to use as a reference for its design. Traditionally, designing such a new shape requires a whole series of tests. Large numbers of variants are built, tested, and evaluated, encompassing both large and small changes. A variety of concepts are also explored. Additionally, design decisions are frequently based on limited amounts of experimental data of a large number of geometries.

Impacting Design Decisions
HPC environments bring a computational-based approach to addressing such designs. Many shapes can be evaluated computationally so that only a few candidate shapes are actually built and tested, saving money. Computational studies can be conducted in a timely manner, saving time in the design cycle over the traditional build and test procedure. It is only with the large, parallel, HPC resources available in the last several years that such computations can be done to sufficient accuracy and speed to impact design decisions in both the preliminary and final stages.

Unlike traditional experimental data, which supplies limited amounts of data and information, the whole flow field can be studied using HPC resources and a computational-based approach. This analysis showed that large vortical flow structures were being developed from the early sail designs, causing a large impact on the propulsor inflow. It was possible to modify the sail shapes to reduce this impact on the propulsor inflow once the vortical structure phenomena were better understood.



Figure 1. Advanced sail installed on the US Navy’s large scale vehicle

“High performance computing allows many shapes to be evaluated computationally so that only a few candidate shapes are actually built and tested, saving money. Another aspect of this is the timely manner in which it is now possible to do computational studies, saving time in the design cycle over the traditional build and test procedure.”


Scientific visualization has been used to help understand the vortical flow created by the large sails. The visualization was used to help identify how the vortices are formed around the sail and then flow downstream into the propulsor. A better understanding of how these vortices are formed has led to better ideas of how to minimize them in the redesign effort.

Better Designs in Shorter Time
Over the last decade, there has been an increased emphasis on the use of computational tools to evaluate submarine flows and guide their design. With the advent of parallel computational capabilities, viscous simulations have seen a larger role in predicting these flow fields. The shift to a more computational based design and analysis approach leads to the possibility of better designs in a shorter amount of time. Such computations allow for the rank ordering of designs as well as providing the entire flow field, which can lead to better understanding of the flow physics. In the current effort, modifications to candidate sail shapes were evaluated computationally, providing drag differences as well as propulsor inflow. This allowed for the development of a new sail shape, which was built and tested on the US Navy’s Large Scale Vehicle — with plans for future implementation on the Virginia class of submarines. Such computations provide a cost effective way to evaluate various shapes that may improve a design. In this way, computations are reducing many design-oriented tests. Future efforts need to include full propulsor modeling as part of the calculations. As requirements become stricter, future design efforts will be more sophisticated, involving maneuvering as well as more complicated operating scenarios. This will require large computer resources, as it will be extremely difficult if not impossible to perform such evaluations experimentally.

References
Stout, M.C. and Dozier, D.F., “Advanced Submarine Sail,” Carderock Division, NSWC - Technical Digest, pp. 58–62, December 2001.

Gorski, J.J. and Coleman, R.M., “Use of RANS Calculations in the Design of a Submarine Sail,” Proceedings of the NATO RTO Applied Vehicle Technology Panel Spring 2002 Meeting, Paris, France, April 2002.

CTA: Computational Fluid Dynamics
Computer Resources: Cray T3E [ARSC]


Penetration into Deeply Buried Structures: A Coupled Computational Methodology
RAJU NAMBURU AND JERRY CLARKE

Army Research Laboratory (ARL), Aberdeen Proving Ground, MD

ARL scientists, working through the CHSSI, are developing an end-to-end simulation capability to simulate complex projectile-target interaction applications. This work will accelerate the fielding of new weapon systems, improve armor technologies, and enhance protective structures technology. Simulation of penetration into deeply buried structures is the primary targeted application area.

Solving Individual Discipline Areas
Typically, penetration into deeply buried structures involves solving two coupled physics applications: one application is associated with addressing the modeling and simulation of the penetrator; the other application is associated with the modeling and simulation of deeply buried structures. Most of the earlier efforts were confined to solving the individual discipline areas because of the application’s complexity and computer resource limitations. Simulating the complex projectile-target interaction in a seamless, coupled, or interdisciplinary environment requires increased use of HPC resources.

Fully Coupling the Discipline Areas
Penetration into deeply buried structures addresses development of a scalable, coupled Eulerian-Lagrangian approach for a wide class of penetration-target applications in the areas of weapon performance, weapons effects, and performance and defeat of targets. This class is represented by problems involving high-pressure, high-strain, and/or high-strain-rate material interactions, where one or more of the material regions undergoes relatively small deformations, while the other materials in the problem undergo arbitrarily large deformations. The penetration into earth problem is a good example of this, where the penetrator remains essentially elastic, while the target can undergo very large strains and deformations in the region of interaction with the penetrator (Figure 1). Nevertheless, deformation of the penetrator can often have a strong influence on the loading exerted on it by the target. Thus, a fully coupled treatment of the penetrator/target interactions and deformations is required for these applications.
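The coupled treatment implies a time-stepping cycle in which the Eulerian shock-physics solver advances the target material, the Lagrangian finite element solver advances the penetrator, and the two exchange surface loads and updated boundary positions each step. The C sketch below shows only that control flow; the routines and numbers are invented placeholders, not the ZAPOTEC coupling or the ARL software cited in the references.

/* Schematic coupled Eulerian-Lagrangian cycle. The two "solvers" below are
 * trivial stand-ins for the shock-physics (target) and finite element
 * (penetrator) codes and for their interface data exchange. */
#include <stdio.h>

typedef struct { double load; double boundary; } Interface;

static void eulerian_step(Interface *ifc, double dt)     /* target: soil, concrete */
{ ifc->load = 1.0e6 * ifc->boundary * dt; }               /* pressure on penetrator */

static void lagrangian_step(Interface *ifc, double dt)   /* penetrator: elastic FE */
{ ifc->boundary += 1.0e-9 * ifc->load * dt; }             /* updated wetted surface */

int main(void)
{
    Interface ifc = { 0.0, 1.0 };
    double dt = 1.0e-6;                       /* microsecond-scale time step */

    for (int step = 0; step < 5; step++) {
        eulerian_step(&ifc, dt);              /* advance target, compute interface loads */
        lagrangian_step(&ifc, dt);            /* advance penetrator under those loads    */
        /* In a scalable code, the exchange above is an explicit (MPI) communication. */
        printf("step %d: load = %.3e, boundary = %.6f\n", step, ifc.load, ifc.boundary);
    }
    return 0;
}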

Simulations were performed of the large deformation response of a complex buried structure to the penetrator. A buried structure using two different layers of clay and concrete was modeled using shock physics software. The penetrator was modeled with all the details using large deformation finite element structural analysis software. The coupled simulations showed good scalability, and the results showed good qualitative agreement. These preliminary results and coupled computational environment show promise for applicability to a wide class of penetration-target applications and practical applications for fielding new weapon systems, improving armor technologies, and enhancing protective structures technologies.

Figure 1. Penetrator traveling through two different layers of soil material towards a deeply buried structure

“These preliminary results and coupled computational environment show promise for applicability to a wide class of penetration-target applications and practical applications for fielding new weapon systems, improving armor technologies, and enhancing protective structures technologies.”

References
Jerry Clarke and Raju Namburu, “The eXtensible Data Model and Format: a High Performance Data Hub for Connecting Parallel Codes and Tools”, ARL-TR-2552, July 2001.

Stephen Schraml, Kent Kimsey, and Jerry Clarke, “High-Performance Computing Applications for Survivability-Lethality Technologies”, IEEE Computing in Science & Engineering, Vol. 4, No. 2, pp. 16–22, March/April 2002.

R.R. Namburu and J.A. Clarke, Interdisciplinary computing environment for coupled Eulerian-Lagrangian applications, presented at the Fifth World Congress for Computational Mechanics, Vienna, Austria, July 7–12, 2002. Sponsored by: International Association for Computational Mechanics.

P. Yarrington, “Coupled Eulerian-Lagrangian code ZAPOTEC for transient dynamic analysis”, presented at the Fifth World Congress for Computational Mechanics, Vienna, Austria, July 7–12, 2002. Sponsored by: International Association for Computational Mechanics.

CTA: Computational Structural Mechanics
Computer Resources: SGI Origin 3000 [ARL]


Development of a High-Fidelity Nonlinear Aeroelastic Solver
RAYMOND GORDNIER

Air Force Research Laboratory (AFRL), Wright-Patterson AFB, OH

Correct prediction of the aeroelastic behavior of current and future air vehicles remains a challenge for the Air Force. The revolutionary concepts being proposed for a new UAV that will be required to operate under extreme flight conditions magnify the problem. These new systems may also incorporate new lightweight composites or have wing designs that are routinely subjected to large deflections during their operation. A high-fidelity simulation capability is required to accurately model the complex, nonlinear physical phenomena involved in these types of aircraft. New generations of multidisciplinary computational tools that couple highly accurate CFD solvers with computational structural dynamics modules are required. The incorporation of these types of computational techniques into DoD weapons programs promises cost and development time savings as well as improved operational performance.

Linear Aerodynamic Techniques
Traditional aeroelastic analysis has coupled linear aerodynamic techniques such as vortex lattice and panel methods with linear modal techniques for the structural model. These techniques have been preferred due to their ease in implementation and rapid solution times. While these simplified analysis tools have been used extensively, they often fail to predict detrimental aeroelastic phenomena since they ignore important nonlinear physical features of the problem. The simplified aerodynamic models cannot address such critical issues as transition, turbulence, separation, buffet, and transonic shocks. The linear modal structural models are limited to small amplitude responses and linear material properties.
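For reference, and in standard textbook form rather than anything quoted from this article, the linear modal methods described above reduce the structure to a small set of generalized coordinates q and solve

\mathbf{M}\,\ddot{\mathbf{q}} + \mathbf{C}\,\dot{\mathbf{q}} + \mathbf{K}\,\mathbf{q} = \mathbf{Q}_{\mathrm{aero}}(t),

where M, C, and K are constant generalized mass, damping, and stiffness matrices and the generalized aerodynamic forces Q_aero come from a linear vortex-lattice or panel model. The nonlinear solver described next replaces both sides of this equation: the right-hand side with Navier-Stokes aerodynamics and the left-hand side with nonlinear finite difference and finite element structural models.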

New Nonlinear Aeroelastic Solver
The computational power available through the HPCMP now allows higher fidelity modeling to be brought to bear on the problem of simulating aeroelastic response. Specifically, CFD solvers for the nonlinear Euler and Navier-Stokes equations may replace linear aerodynamic models. Nonlinear structural models employing finite difference and finite element techniques may be used to represent the structural response. Coupling of these enhanced techniques provides a powerful new nonlinear, aeroelastic simulation tool that can address challenging fluid-structure interaction problems such as limit-cycle oscillations, tail buffet, and transition delay and drag reduction via compliant surfaces.

By exploiting the computer power available through the HPCMP, the present work seeks to overcome the deficiencies in current aeroelastic methods by developing a new, fully nonlinear aeroelastic solver. In this new procedure, a higher order (2 to 3 times more accurate than many existing CFD codes), unsteady, 3-D Navier-Stokes code is used to simulate the aerodynamics of the problem. The structural dynamics are simulated using nonlinear finite difference and finite element models. These independent solvers are coupled in a synchronized fashion to provide a powerful new nonlinear aeroelastic tool, which has been applied to several problems that could not previously be simulated with classical linear techniques.

One problem considered is the direct numerical simulation of the interaction of a transitional boundary layer flow with a flexible panel for both subsonic and transonic flow speeds (Figure 1). At the higher flow speed, a traveling wave flutter phenomenon occurs that originates from the coupling of the transitional Tollmien-Schlichting waves in the boundary layer with higher-mode flexural waves in the elastic panel. This results in significant acoustic radiation above the fluttering panel (Figure 2). The highly refined meshes (up to 15 million grid points) and long time histories (500,000 time steps) required to accurately compute this problem demand the use of the most powerful computing platforms available to successfully complete this simulation.

“The revolutionary concepts being proposed for a new UAV that will be required to operate under extreme flight conditions magnify the problem.”

This unique nonlinear aeroelastic tool has also successfully captured the proper limit cycle behavior of a delta wing. Previous computations using linear structural models overpredicted the amplitude of the experimentally measured limit cycle response of the delta wing by four to five times. The inability of the linear structural models to simulate the actual nonlinear structural mechanisms in the problem was identified as the cause of this discrepancy. The present nonlinear structural techniques accurately capture the correct amplitude and frequency of the delta wing limit-cycle oscillation (Figure 3).

A novel, fully nonlinear, aeroelastic solver has been developed, which exploits the extensive computing capacity of the HPCMP. The ability of this new tool to address fluid-structure interaction problems that cannot be computed with current linear aeroelastic techniques has been demonstrated. This high-fidelity simulation capability may now be exploited to enhance the development of current and future manned and unmanned air vehicles.

Detecting and Explaining Relevant Flow Features
The ability to effectively visualize the computed results for the complex physical problems considered in this work is essential in order to develop a complete understanding of the problem being simulated. The massive amounts of information produced by these high-fidelity computations may not be exploited without resorting extensively to scientific visualization. Three-dimensional visualizations of the resulting flowfields have been used to detect and explain relevant flow features critical to the analysis of the physical phenomena being investigated. Furthermore, the dynamic nature of these problems could not be discerned without the production of animations of the unsteady computed results. Without the ability to produce these types of scientific visualizations, a complete interpretation of these costly simulations would not be possible and much of the computational effort would be wasted.

Developing a New Nonlinear Aeroelastic Computational Capability
Present computational capabilities employed to analyze and predict the aeroelastic behavior of current and future Air Force vehicles rely almost exclusively on linear aerodynamic and structural models. These simplified models fail to predict many detrimental aeroelastic phenomena since they ignore important nonlinear physical features inherent to the problem. The present research focuses on developing a new, nonlinear aeroelastic computational capability to overcome these deficiencies. Exploitation of this new computational capability earlier in vehicle design and development will provide a means to predict adverse fluid/structure interaction problems before the flight test and manufacture phase, when redesign of the vehicle is costly. This can provide significant cost and development-time savings to an air vehicle program. This type of computational tool will also allow for a more rational design of future UAVs, which will be required to operate in extreme flight conditions where nonlinear aeroelastic phenomena will be prevalent.

Figure 3. Stress developed in a delta wing at the point of maximum deflection in the limit cycle response

Figure 2. Acoustic radiation due to traveling wave flutter of a flexible panel

Figure 1. 3-D panel flutter due to interaction with a transitional boundary layer

References
Gordnier, Raymond E., “Computation of Limit Cycle Oscillations of a Delta Wing,” AIAA-2002-1411, 43rd AIAA Structures, Structural Dynamics, and Materials Conference, Denver, CO, April 2002.

Visbal, Miguel R. and Gordnier, Raymond E., “Direct Numerical Simulation of the Interaction of a Boundary Layer with a Flexible Panel,” AIAA-2001-2721, 31st Fluid Dynamics Conference, Anaheim, CA, June 2001.

CTA: Computational Fluid Dynamics
Computer Resources: Cray SV1 [ARSC, NAVO]


Novel Energetic Material Development Using Molecular Simulation
JOHN BRENNAN AND BETSY RICE

Army Research Laboratory (ARL), Aberdeen Proving Ground, MD

Advances in defense weaponry often require the development of new energetic materials. These technological advances often outpace the energetic material development process, however, not allowing for the complete utilization of the new weapons platforms. It is critical that the process of developing high-performance energetic materials be optimized to meet the needs of these innovative applications. Modeling and simulation must be used in the development process where possible, eliminating the traditional costly and time-consuming approaches of formulation and experimentation. With this goal in mind, researchers at ARL have recently developed a computational tool for assessing the detonation performance of new energetic materials.

Early Models Lacked Predictive Capabilities
Historically, theoretical models parameterized to experimental data were used in simulations to aid in analyses of experiments. The models were heavily dependent on measurements before high performance computing. For such models, predictive capabilities were extremely limited. They lacked the means of discerning whether discrepancies between the theoretical predictions and the experimental measurements were a result of an inadequate model or an inaccurate theory.

The Role of the New Computational Tool
The role of this method in the development process is two-fold. First, the method can be used to provide the fundamental understanding of energetic materials necessary for exploitation in the weapons system. Second, the method can be used to screen candidate notional materials immediately, substantially impacting time and financial considerations. This computational tool can also be used to meet the demands of advancing weapons technology that requires nano-structured energetic materials. As defense weapons technology and engineering continues to grow past the current understanding of the microscopic-level phenomena, the implementation of this tool will continue to fill in existing knowledge gaps.

To date, the computational tool, exercised using the resources of the ARL MSRC, has allowed successful prediction of the detonation properties for several simple systems. With the advances in modern theoretical chemical modeling methods, highly scalable software, and parallel computer architectures, more elaborate, accurate, and predictive atomistic simulations are being performed that provide information about energetic materials that is not amenable to direct experimental observation.

“This computational tool can also be used to meet the demands of advancing weapons technology that requires nano-structured energetic materials. As defense weapons technology and engineering continues to grow past the current understanding of the microscopic-level phenomena, the implementation of this tool will continue to fill in existing knowledge gaps.”

More Realistic and Complex Systems Simulated
Future research objectives are to simulate more realistic and complex systems, which can only be implemented through the use of resources supplied through the HPCMP. The calculation of detonation properties requires a multitude of simulations at a variety of temperatures and pressures, with each simulation requiring millions of computations for systems containing tens of thousands of atoms. As a result, the calculation of the detonation properties of a single energetic material might require several CPU weeks. The resources available through the HPCMP have enabled the evaluation of detonation properties using the atomistic modeling technique.
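For context, the detonation properties referred to above (detonation pressure, specific volume, and velocity) are conventionally tied together by the Rankine-Hugoniot jump conditions; these standard relations are quoted here for orientation and are not given in the article itself. With p, v, and e the pressure, specific volume, and specific internal energy of the reaction products and the subscript 0 denoting the unreacted material,

e - e_0 = \tfrac{1}{2}\,(p + p_0)\,(v_0 - v), \qquad D = v_0\,\sqrt{\frac{p - p_0}{v_0 - v}},

where D is the detonation velocity and the Chapman-Jouguet state is the point at which the Rayleigh line is tangent to the product Hugoniot. The atomistic simulations supply the product equation of state needed to evaluate these relations at each temperature and pressure.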

References
J.K. Brennan and B.M. Rice, “Molecular Simulation of Shocked Materials Using Reactive Monte Carlo”, proceedings for the Shock Wave Processes in Condensed Matter Workshop, Edinburgh, U.K., May 19–24, 2002.

J.K. Brennan and B.M. Rice, “Molecular Simulation of Shocked Materials Using Reactive Monte Carlo”, Phys. Rev. E, 66(2), p. 021105, 2002.

Figure 1. Application of the Reactive Monte Carlo (RxMC) method to the detonation of nitric oxide (2NO <=> N2 + O2): (a) Detonation properties of NO; (b) Molecular configuration snapshot of reacting system at P = 28.47 GPa (blue: N2 molecules; red: O2 molecules; green: NO molecules)

C.H. Turner, J.K. Brennan, J.K. Johnson, and K.E. Gubbins, “Effect of Confinement by Porous Materials on Chemical Reaction Kinetics”, J. Chem. Phys., 116, p. 2138, 2002.

C.H. Turner, J.K. Brennan, J. Pikunic, and K.E. Gubbins, “Simulation of Chemical Reaction Equilibria and Kinetics in Heterogeneous Carbon Micropores”, Appl. Surf. Sci., 196, p. 366, 2002.

CTA: Computational Chemistry and Materials Science
Computer Resources: SGI Origin 2000, SGI Origin 3000, and IBM SP3 [ARL]



Relationship Between Local Structure and Phase Transitions of a Disordered Solid Solution
ANDREW M. RAPPE

University of Pennsylvania, Philadelphia, PA
Sponsor: Office of Naval Research (ONR), Arlington, VA

Ferroelectric (FE) materials have a wide range of applications, from the nonvolatile memory modules in computers to piezoelectric transducers. In particular, lead zirconate titanate (PZT) exhibits a large piezoelectric effect, making PZT the primary active material in naval Sound Navigation and Ranging (SONAR) devices, medical ultrasound systems, and many other important applications. PZT is piezoelectric — this means that it converts tiny underwater pressure waves to electrical signals, or vice versa. The Navy needs to understand PZT so that it can develop new materials with even better properties. The next-generation piezoelectric materials need to be more sensitive to small signals, more durable, and lighter.
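The conversion between pressure waves and electrical signals described above is captured by the standard strain-charge form of the piezoelectric constitutive relations, quoted here for context rather than taken from the article:

S_{ij} = s^{E}_{ijkl}\,T_{kl} + d_{kij}\,E_k, \qquad D_i = d_{ikl}\,T_{kl} + \varepsilon^{T}_{ik}\,E_k,

where T and S are the stress and strain, E and D are the electric field and electric displacement, s^E is the elastic compliance at constant field, ε^T is the permittivity at constant stress, and the piezoelectric coefficients d couple the mechanical and electrical responses. It is the unusually large d coefficients of PZT that make it so effective for SONAR transduction.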

Due to its technological importance, PZT has been extensively studied both theoretically, with quantum mechanical studies of small supercells, and experimentally, with methods such as neutron scattering and X-ray diffraction. PZT assumes the ABO3 perovskite structure (Figure 1). However, the disorder in the B-cation arrangement makes small, density functional theory (DFT)-accessible supercells only an approximate model of the real material. Experimental methods only probe the ensemble average properties of the material. All phases of PZT have extremely similar experimental pair distribution functions (PDFs), indicating similar local structure despite different macroscopic properties.

Thus the relationship between the local structure of the material and its superior properties is not clear.

Using Supercell Calculations to Study Chemical Composition
The focus of the work has been the use of large, many-atom supercell calculations to study the relationships between chemical composition, disorder, and piezoelectricity in complex oxide Pb(Zr1-xTix)O3 (PZT) solid solutions not accessed with previous small quantum mechanical calculations. Supercells of PZT that are four to eight times larger than previous calculations were investigated. The large size and the variety of the supercells studied allowed probing of the effect of various atomic arrangements on the structure of the material, something impossible to do with the ordered cells used in previous studies.

Although there are differences in the macroscopic phases and polarization of various compositions of PZT, there are many similarities within the local structure of each composition. The direction of lead atom distortions in PZT governs the orientation of the polarization and therefore the high piezoelectric response, which gives PZT its excellent SONAR properties.

In the calculations, lead atoms mostly make tetragonal (aligned with the x-axis) and orthorhombic (with non-zero components along x and y, or x and z, axes) displacements. The distortions are toward titanium (Ti) neighbors and away from zirconium (Zr) neighbors, conforming to the overall polarization as much as possible. Both preferences are understandable. Zr is a larger ion than Ti, which means that the purely repulsive interaction between Pb and B cations will be stronger for Pb-Zr than for Pb-Ti. The conformity to the overall direction of polarization is due to simple electrostatics, which make dipole alignment favorable. In the case of Pb atoms 1 and 2 in Figure 2, both driving forces can be satisfied. However, in the case of Pb atoms 3 and 4, the local preference to move toward Ti conflicts with the desire to align with the overall polarization. This competition results in a compromise, with distortions predominantly along the x-direction for these Pb atoms.

“The properties which make PZT effective also make it complicated to model. The extensive resources of the HPCMP enable us to model large enough regions of the PZT so that we can understand how disorder affects the SONAR capabilities of PZT.”

Figure 1. ABO3 perovskite structure

The dependence of Pb distortion direction on the local environment and overall polarization is particularly important, as it is responsible for the compositional transitions in PZT. In the Ti-rich phase, due to the abundance of Ti ions, a tetragonal displacement satisfies the local repulsion energy preferences for the majority of Pb atoms. This can be seen from the angular distribution of the Pb distortions (Figure 3). The Ti-rich composition of PZT has no Pb distortions with more than a 15-degree deviation away from the tetragonal ferroelectric direction. The small scatter in the direction of Pb off-centering gives rise to a fairly ordered phase with large overall polarization along the x-axis. On the other hand, in the Zr-rich phase, Ti ion scarcity means that no single direction can satisfy the local energy preferences of all or even most of the Pb atoms. Consequently, the Pb displacement angles are more broadly distributed between 30 and 80 degrees away from the x-axis.

As a result of the disorder in the direction of Pb displacements, the magnitude of the overall polarization is smaller for the rhombohedral phase than that of the tetragonal phase. The monoclinic phase is the bridge between the rhombohedral and the tetragonal phases. While many Pb ions can satisfy their local energy preferences by making a tetragonal distortion, other Pb ions cannot. The displacement of the latter Pb ions will therefore contain significant y-axis and z-axis components, producing the overall polarization of the monoclinic phase.

Development of a Model That Will Allow Study of Disordered Systems
Crystal chemistry (where atoms are regarded as possessing properties transferable between various environments) provides an intuitive understanding of how the atomic composition affects the structure and the material behavior, closing the gap between the small unit cells available to first-principles theorists and real disordered materials with complex B-cation arrangements. The results of the DFT calculations firmly establish the validity and the accuracy of the crystal chemistry approach by revealing the distinction between intrinsic and environment-dependent behavioral motifs in the material and correlating the structural motifs with the local environment for each of the constituent atoms.

The existence of intrinsic atomic behaviors makes it possible to extract a simple intuitive framework, which explains both the DFT results and the macroscopic compositional phase transitions. This framework can then be made quantitative by encapsulating the chemical interactions in a phenomenological, crystal chemistry type model using well-known formalisms to characterize electrostatic and bonding interactions. This model would contain only physically intuitive interactions, could be applied to any perovskite material, and would lend itself naturally to materials design.
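As an illustration of the kind of well-known formalism such a crystal-chemistry model might employ (the article does not commit to a specific functional form), the interaction energy is often written as a point-charge electrostatic sum plus a short-range Buckingham-type repulsion/dispersion term:

E = \sum_{i<j} \left[ \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}} + A_{ij}\,e^{-r_{ij}/\rho_{ij}} - \frac{C_{ij}}{r_{ij}^{6}} \right],

where the charges q and the parameters A, ρ, and C would be fit, for example, to the DFT supercell results so that the model reproduces the intrinsic and environment-dependent distortions identified above.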

Figure 2. Projection of the 4 × 2 × 1 50/50 supercell DFT PZT structure on the xy plane. The oxygen octahedra are depicted by diamonds, and distortions from the ideal cubic perovskite positions are shown by arrows. Black for O, blue for Pb, green for Zr, and red for Ti. Pb atoms are 1/2 unit cell above the plane and apical O atoms are left out.

Figure 3. Distribution of Pb distortion angle away from the tetragonal direction for the rhombohedral, monoclinic, and tetragonal phases of PZT


Such first-principles based, crystal-chemical modeling will enable the design of materials with desired macroscopic properties by relating the variations in atomic composition of the material to its macroscopic behavior.

The properties that make PZT effective also make it complicated to model. PZT is Pb(Zr1-xTix)O3, and the disordered positions of the Ti and Zr atoms are directly related to its piezoelectricity. The extensive resources of the HPCMP enable researchers to model regions of PZT large enough to understand how disorder affects the SONAR capabilities of PZT. The plan is to extend this development to the design of new piezoelectric materials for future Naval SONAR needs.

References
“Local Structure and Phase Transitions of PZT,” Fundamental Physics of Ferroelectrics Conference, February 3–6, 2002.

CTA: Computational Chemistry and Materials Science
Computer Resources: SGI Origin 3000 [ERDC]


Polymer Modeling for Smart Munitions Electronics
J.A. MORRILL, R. JENSEN, C.F. CHABALOWSKI, AND S. MCKNIGHT

Army Research Laboratory (ARL), Aberdeen Proving Ground, MD

Novel materials to aid and promote the adhesion and survivability of aerodynamic measurement apparatus in the extreme operating conditions of gun-launched munitions would be beneficial for improved munitions performance and, ultimately, improved weapon lethality. The goal of this project was to employ quantum mechanical molecular modeling and quantitative structure-property relationship methods to design and guide the synthesis of novel epoxy resins with enhanced shock absorption properties for use as adhesives in smart munitions electronics.

Predicting Bulk Polymer Properties

Quantitative Structure-Property Relationships (QSPRs), based on molecular properties calculated using the AM1 semiempirical quantum mechanical method, have been developed to predict bulk polymer properties without the need for costly and time-consuming experimentation. A QSPR model was generated for complex cross-linked copolymers using the regression analysis program COmprehensive DEscriptors for Structural and Statistical Analysis (CODESSA). By applying an ad hoc treatment based on elementary probability theory to the QSPR analysis, the ARL scientists developed a method for computing bulk polymer glass transition temperatures for stoichiometric and non-stoichiometric monomeric formulations. A model polymer, representing potentially ideal properties achievable experimentally, was synthesized and found to validate the specific initial model predictions and, in general, the computational approach.
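A QSPR of this kind is, at bottom, a multiple linear regression of measured glass transition temperatures against a handful of computed molecular descriptors. A minimal sketch of such a fit is shown below; the descriptor values and temperatures are hypothetical stand-ins, not the actual CODESSA descriptors or ARL data.

    import numpy as np

    # Hypothetical training set: four AM1-derived descriptors per polymer
    # (e.g., measures of intermolecular attraction and backbone stiffness)
    X = np.array([
        [0.42, 11.3, 0.18, 0.91],
        [0.37, 12.1, 0.22, 0.85],
        [0.55, 10.4, 0.15, 1.02],
        [0.48, 11.9, 0.20, 0.88],
        [0.60,  9.8, 0.12, 1.10],
    ])
    tg_exp = np.array([405.0, 392.0, 431.0, 410.0, 447.0])   # experimental Tg (K), made up

    # Least-squares fit of Tg = b0 + b1*x1 + ... + b4*x4
    A = np.column_stack([np.ones(len(X)), X])
    coeffs, *_ = np.linalg.lstsq(A, tg_exp, rcond=None)

    tg_pred = A @ coeffs
    ss_res = np.sum((tg_exp - tg_pred) ** 2)
    ss_tot = np.sum((tg_exp - tg_exp.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot     # ARL reports R^2 = 0.9977 for 13 polymers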

Glass Transition Temperature for Polymers

The model was used to predict the glass transition temperature for a series of polymers composed of the widely used epoxide, diglycidyl ether of bisphenol A, cured with an aliphatic or aromatic amine curing agent, all having a stoichiometric formulation of reactive functional groups. The experimental data for the glass transition temperature QSPR analysis were obtained from published literature sources. The experimental data used to validate the non-stoichiometric glass transition temperature predictions were obtained at ARL. For 13 polymers, a designer correlation equation having a coefficient of determination (R2) of 0.9977 was obtained using four computed molecular properties. The molecular properties describe the propensity for intermolecular attractions and the stiffness of backbone atoms of the model polymer systems. Fundamentally, it is known that stoichiometric formulations of amine-cured epoxy resins give maximal glass transition temperature values, and that for non-stoichiometric formulations the glass transition temperature decreases in a manner dependent on the extent of cross-linking in the polymer, which, in turn, is dependent on the stoichiometric ratio of amine and epoxide functional groups. By applying a treatment based on elementary probability theory to molecular models, each representing a different level of cross-linking, the trend in experimental glass transition temperatures for non-stoichiometric formulations was accurately predicted (Figure 1).
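The non-stoichiometric extension amounts to weighting the Tg predicted for each discrete cross-link model by the probability of that cross-link state occurring at a given amine/epoxide ratio. The sketch below illustrates the idea with a deliberately crude two-state weighting; the probability expression and the temperature values are illustrative placeholders, not the ARL formulation or the Macosko-Miller statistics cited in the references.

    def expected_tg(stoich_ratio, tg_full_crosslink, tg_dangling):
        """Probability-weighted Tg for a non-stoichiometric formulation.

        stoich_ratio: amine/epoxide functional-group ratio (1.0 = stoichiometric).
        The chance that an epoxide sits in a fully cross-linked environment is
        modeled here, purely for illustration, as min(ratio, 1/ratio).
        """
        p_crosslinked = min(stoich_ratio, 1.0 / stoich_ratio)
        return p_crosslinked * tg_full_crosslink + (1.0 - p_crosslinked) * tg_dangling

    # Tg falls off on either side of the stoichiometric point (values in K, made up)
    for r in (0.7, 0.85, 1.0, 1.2):
        print(r, round(expected_tg(r, tg_full_crosslink=430.0, tg_dangling=360.0), 1))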

Benefits of the Quantitative Structure-Property Relationships

Two principal benefits are envisioned. The first is a revolutionary tool to rapidly design and develop polymer materials using robust computational methods, keeping pace with the timeline requirements dictated by the FCS and the Objective Force Warrior by avoiding the need for extensive synthesis and testing.




The second benefit is significantly enhanced survivability and reliability of electronic components for smart munitions, enabled by the newly developed polymer packaging materials.

References
Dewar, M.J.S., Zoebisch, E.G., Healy, E.F., and Stewart, J.J.P., J. Am. Chem. Soc., 107, pp. 3902–3909, 1985.

Figure 1. a) Sample polymer structure used in prediction of glass transition temperatures for non-stoichiometric formulations. b) Plot of experimental and predicted glass transition temperatures vs. stoichiometric ratio for non-stoichiometric polymer formulations.


Katritzky, A.R., Lobanov, V.S., Karelson, M., Murugan, R., Grendze, M.P., and Toomey, J.E., Rev. Roum. Chim., 41, pp. 851–867, 1996.

Macosko, C.W. and Miller, D.R., Macromolecules, 9(2), pp. 199–206, 1976.

CTA: Computational Chemistry and Materials Science
Computer Resources: SGI Origin 2000 and SGI Origin 3000 [ARL]


Dynamics of Oxidation of Metallic Nanoparticles
RAJIV K. KALIA, AIICHIRO NAKANO, AND PRIYA VASHISHTA

Air Force Office of Scientific Research (AFOSR), Arlington, VA

DoD researchers are reducing the size of reactant particles such as aluminum (Al) to accelerate the burn rates of propellants and explosives. Microscopic understanding of the structure and mechanical properties of encapsulated Al nanoparticles will have significant impacts on the design of high energy density materials.

In recent years, there has been a great deal of interest in the synthesis of nanostructured composites consisting of metallic nanoparticles coated with a protective layer. Such nanocomposites have mixed metallic and ceramic properties, such as high electric conductivity and high-temperature stability. The objective of this research is to investigate the oxidation, structure, thermodynamics, and mechanical properties of these encapsulated nanoparticles at high temperatures under high pressures.

Studying the Chemical Bonding between Dissimilar Materials

Interfaces between soft/ductile materials (metals or polymers) and hard surfaces (metal oxides or ceramics) dominate many properties of composite materials and coatings. The present variable-charge molecular dynamics (VCMD) study of the oxidation of metallic nanoparticles paves the way to atomistic investigations of how chemical bonding between dissimilar materials at the atomic level determines macroscopic properties such as structure, adhesion, friction, stiffness, and fracture toughness.

New Algorithm Developed for VCMD Scheme

To describe chemical reactions such as oxidation, the VCMD scheme developed by Streitz and Mintmire was employed, in which atomic charges are determined at every time step to equalize electronegativity. This N-variable charge problem amounts to solving a dense linear system, and its solution typically requires O(N³) operations.

To perform multimillion-atom VCMD simulations on massively parallel computers, an O(N) algorithm was developed in which the fast multipole method of Greengard and Rokhlin is used to perform the matrix-vector multiplications in O(N) time, and temporal locality is exploited by initializing the iterative solution with the previous time step's charges. To further accelerate convergence, a multilevel preconditioning scheme was developed by splitting the Coulomb interaction matrix into short- and long-range components and using the sparse short-range matrix as a preconditioner. This algorithm was incorporated into a suite of scalable parallel molecular dynamics algorithms.
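The structure of that solver can be sketched as a preconditioned conjugate-gradient iteration in which the dense Coulomb matrix is never stored: a fast multipole routine supplies the matrix-vector product, the previous step's charges seed the iteration, and the sparse short-range matrix acts as the preconditioner. The sketch below is schematic; fmm_matvec and short_range_solve are placeholder callables rather than the actual VCMD implementation, and the charge-neutrality constraint is omitted.

    import numpy as np

    def solve_charges(fmm_matvec, short_range_solve, chi, q_prev, tol=1e-8, max_iter=200):
        """Preconditioned conjugate gradient for the charge-equilibration system
        A q = -chi, with A q evaluated in O(N) by a fast multipole routine and the
        sparse short-range matrix supplying the preconditioner (both placeholders).
        Note: the global charge-neutrality constraint is not enforced here."""
        b = -chi
        q = q_prev.copy()                       # temporal locality: warm start
        r = b - fmm_matvec(q)
        z = short_range_solve(r)
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = fmm_matvec(p)
            alpha = rz / (p @ Ap)
            q += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            z = short_range_solve(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return q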

VCMD Provides Detailed Picture

The VCMD scheme was successfully used to study the oxidation dynamics of an aluminum nanoparticle (diameter 200 angstroms). The VCMD simulations provide a detailed picture of the rapid evolution and culmination of the surface oxide thickness, local stresses, and atomic diffusivities. The VCMD simulations reveal that an aluminum nanoparticle in an oxygen-rich environment goes through a rapid three-step process culminating in a stable oxide scale: i) dissociation of oxygen molecules and diffusion of oxygen atoms into octahedral and subsequently into tetrahedral sites in the nanoparticle; ii) radial diffusion of aluminum and oxygen atoms and the formation of isolated clusters of corner-sharing and edge-sharing OAl4 tetrahedra; and iii) coalescence of OAl4 clusters to form a neutral, percolating tetrahedral network that impedes further intrusion of oxygen atoms into, and of Al atoms out of, the nanoparticle. Structural analysis reveals a 40-angstrom-thick amorphous oxide scale on the Al nanoparticle (see Figure 1).



Figure 1. Snapshot of an Al nanoparticle after 466 ps of simulation time. (A quarter of the system is cut out to show the aluminum/aluminum-oxide interface.) The larger spheres correspond to oxygen and the smaller spheres to aluminum; color represents the sign and magnitude of the charge on an atom.


Recent advances in parallel computing have made it possible for scientists to perform atomistic simulations of materials involving billions of atoms. However, rendering such large datasets at interactive speed is a major challenge. To solve this problem, a visualization system incorporating parallel and distributed computing paradigms has been developed. This system, named Atomsviewer, uses a parallelized, fast visibility-culling algorithm based on the octree data structure to reduce the number of atoms sent to the graphics pipeline.
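Octree-based visibility culling of the kind Atomsviewer uses can be illustrated in a few lines: atoms are bucketed into octree nodes, and whole subtrees are skipped when their bounding boxes fall outside the view volume. This is a generic sketch, not the Atomsviewer implementation; the view-frustum test is reduced here to a simple axis-aligned box-overlap check.

    from dataclasses import dataclass, field

    @dataclass
    class OctreeNode:
        lo: tuple                                     # min corner of bounding box
        hi: tuple                                     # max corner of bounding box
        atoms: list = field(default_factory=list)     # atom indices (leaf nodes)
        children: list = field(default_factory=list)

    def boxes_overlap(lo_a, hi_a, lo_b, hi_b):
        """Axis-aligned overlap test standing in for a full frustum test."""
        return all(lo_a[i] <= hi_b[i] and lo_b[i] <= hi_a[i] for i in range(3))

    def visible_atoms(node, view_lo, view_hi, out):
        """Collect atoms only from octree nodes that intersect the view volume,
        so fully hidden subtrees never reach the graphics pipeline."""
        if not boxes_overlap(node.lo, node.hi, view_lo, view_hi):
            return                                    # whole subtree culled
        if not node.children:                         # leaf: emit its atoms
            out.extend(node.atoms)
            return
        for child in node.children:
            visible_atoms(child, view_lo, view_hi, out)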

References
T.J. Campbell, R.K. Kalia, A. Nakano, P. Vashishta, S. Ogata, and S. Rodgers, “Dynamics of oxidation of aluminum nanoclusters using variable charge molecular-dynamics simulations on parallel computers,” Physical Review Letters, 82, pp. 4866–4869, 1999.

A. Nakano, R.K. Kalia, P. Vashishta, T.J. Campbell, S. Ogata, F. Shimojo, and S. Saini, “Scalable atomistic simulation algorithms for materials research,” in Proceedings of Supercomputing 2001, ACM, New York, NY, 2001.

CTA: Computational Chemistry and Materials Science
Computer Resources: IBM SP3 and Cray T3E [NAVO]


SHAMRC Environment and Vehicle Loads Calculations in Non-Ideal Flow
JOE CREPEAU AND JOHN PERRY

US Army Corps of Engineers, Engineer Research and Development Center (ERDC), Vicksburg, MS

High performance computing resources have enabled scientists and engineers to perform high fidelity computational fluid dynamics calculations in support of the DTRA Non-Ideal Air Blast (NIAB) program. The NIAB program seeks to quantify the effects of NIAB on military vehicles over a wide range of geographical areas and scenarios. Of particular interest is the ability to simulate computationally the environment and vehicle loads from the exit jet produced by the DTRA Large Blast and Thermal Simulator (LB/TS).

The goal of this project is the simulation of environment and vehicle loads caused by non-ideal airblasts. The ERDC scientific visualization team was presented with the challenge of providing a visualization methodology and toolset. The visualization needs of this project had outstripped the capabilities of the commercial off-the-shelf tools originally used as the size and sophistication of the computational simulations grew. The Scientific Visualization Center (SVC) team studied the alternatives in view of the growing requirements and developed a new approach that leveraged the research team's existing investment in a previously developed OpenGL-based visualization application. After collaborating with the research team on enhancements to the visualization code, the SVC team was able to deliver a tool that enabled analysis of the team's data in terms of vehicle and environment loads and the dust entrainment resulting from a blast.

The Code

The Second-order Hydrodynamic Automatic Mesh Refinement Code (SHAMRC), pronounced "shamrock," is a two- and three-dimensional, finite difference, hydrodynamic computer code. SHAMRC is a descendant of SHARC (Second-order Hydrodynamic Advanced Research Code). It is used to solve a variety of airblast-related problems, which include high explosive (HE) detonations, nuclear explosive (NE) detonations, structure loading, thermal effects on airblast, cloud rise, conventional munitions blast and fragmentation, shock tube phenomenology, dust and debris dispersion, and atmospheric shock propagation.

SHAMRC's capabilities and attributes include multiple geometries, non-responsive structures, non-interactive and interactive particles, several atmosphere models, multi-materials, a large material library, HE detonations, a K-ε turbulence model, and water and dust vaporization. SHAMRC is second-order accurate in both space and time and is fully conservative of mass, momentum, and energy. It is fast because it employs a structured Eulerian grid, and it is efficient due to the use of the pre-processor source library. SHAMRC is a production research code that currently has the capability to run in single-grid mode or with adaptive mesh refinement (AMR), both of which have been parallelized to take advantage of rapidly developing hardware improvements and to ensure the viability of the code. The AMR capability of SHAMRC allows the efficient calculation of certain classes of problems that are otherwise intractable using the conventional single-grid method.

SHAMRC has been parallelized to run on large scalable parallel platforms. It uses the Message Passing Interface (MPI) library to achieve parallelism. Initial versions of the code were parallelized in 3-D only, and the Eulerian grid was decomposed in only one dimension.



The calculations used this parallel model. Since that time, the code has been modified to perform automatic domain decomposition of the Eulerian grid in up to three dimensions for both the 2-D and 3-D versions of the code. The parallel version of SHAMRC has been tested on several parallel platforms.
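The move from one-dimensional to automatic three-dimensional decomposition is essentially the difference between slicing the Eulerian grid into slabs and letting MPI factor the processor count across all three axes. A minimal sketch of the latter, using mpi4py as a stand-in for SHAMRC's own MPI implementation (the global zone counts are hypothetical):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    nprocs = comm.Get_size()

    # Let MPI factor the processor count over three axes (e.g., 64 -> 4 x 4 x 4),
    # then build a Cartesian communicator over the Eulerian grid.
    dims = MPI.Compute_dims(nprocs, [0, 0, 0])
    cart = comm.Create_cart(dims, periods=[False, False, False], reorder=True)
    coords = cart.Get_coords(cart.Get_rank())

    # Split a (hypothetical) global zone count along each axis for this rank.
    global_zones = (600, 400, 300)
    local_extent = []
    for axis in range(3):
        n, p, c = global_zones[axis], dims[axis], coords[axis]
        base, rem = divmod(n, p)
        lo = c * base + min(c, rem)
        hi = lo + base + (1 if c < rem else 0)
        local_extent.append((lo, hi))

    # Neighbor ranks for halo exchange (MPI.PROC_NULL at domain edges).
    neighbors = {axis: cart.Shift(axis, 1) for axis in range(3)}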

There are several examples of the effects of using these new codes. Figure 1 shows airblast loads on an M110-A2 self-propelled howitzer located in the test section exterior to the LB/TS. The surface is colored by the pressure from the blast wave. Figure 2 is a close-up of the howitzer. Figure 3 is a wider view of the howitzer on the test platform at the same calculational time. Both images show the interaction of the dust with the vehicle. The presence of the dust significantly modifies the airblast environment and the loads on the vehicle.

Figure 1. Isosurface of dust density interacting with the M110-A2 self-propelled howitzer

Figure 2. Isosurface of energy interacting with the M110-A2 self-propelled howitzer

CTA: Computational Fluid Dynamics
Computer Resources: SGI Origin 3000 [ERDC]

Figure 3. Wide-angle view of isosurface of dust density interacting with the M110-A2 self-propelled howitzer


Modeling Explosions in Buried Chambers and Tunnel Systems
JOHN BELL, MARC DAY, ALLEN KUHL, AND CHARLES RENDLEMAN

Defense Threat Reduction Agency (DTRA), Alexandria, VA

Countering the proliferation of weapons of mass destruction (i.e., chemical, biological, or nuclear weapons) is a critical mission area for DTRA. Assessment of explosion effects related to military scenarios provides important inputs to military strategy. For example, if an underground bunker filled with chemical/biological (C/B) weapons is attacked, DTRA must predict the consequences. Scientists must address issues like the evolution in space and time of the energy released during the explosion, the risk that the energy released will destroy the structure, the likelihood of C/B weapons breaching containment, and the possibility of destroying the agent by the high-temperature environment of the explosion. Other questions, such as how much of the agent will be ejected from the bunker and how the agent is dispersed in the atmosphere (and at what risk to civilians), must also be addressed.

Scientists at DTRA are developing the computational tools required for detailed assessment of such explosions. By using the advanced computer architectures available in the DoD HPCMP, combined with sophisticated adaptive numerical methodology, it is now possible to perform detailed, end-to-end numerical simulations of the 3-D time-dependent explosion field.

Physical Modeling Is Time Consuming

Traditionally, physical modeling, either in reduced-scale laboratory settings or in full-scale field experiments, has been used to answer these questions. Such experiments are time consuming and expensive compared to the results that can be obtained using numerical simulations. Furthermore, experiments are difficult to instrument, so high quality data are hard to obtain.

Computational Models Overcome Traditional Limitations

HPC is an enabling technology for the simulations that have been undertaken. Chemically reacting compressible flows are basically intractable in three dimensions without the capabilities provided by HPC systems. Typical models considered in the calculations involved 6 chemical species with 10–20 million computational zones.

HPC overcomes the traditional limitations by permitting the scientist to make predictions on the fate of C/B agents using computational models rather than the more costly physical model experiments. The use of HPC allows DTRA researchers to assess different scenarios based on physics-based predictive modeling rather than relying on heuristic scaling laws and phenomenological models.

HPC, as applied with high-order numerically accurate AMR simulations, has made it possible to provide predictions of the rate of combustion of C/B agents in a closed chamber (Figure 1). The results of the numerical simulations have been validated against physical scale-model experiments. It is possible for the weapons effects specialist to evaluate numerically the influence of the size of the explosive charge, explosive placement, and C/B agent configuration.






Scientific visualization is an important part of the work. The simulations result in large data sets containing the details of chemical and explosive reactions in complex, turbulent flow fields. The only effective way to understand and extract information from these data is to use visualization tools.

Extending the Adaptive Mesh Refinement Modeling Software

The computations performed to date provide a compelling qualitative picture to explain the observations obtained during field experiments. Based on the successes achieved with the AMR modeling of confined explosions and the initial work with the simulation of thermobaric explosives, the DTRA scientists intend to extend the capabilities supported by the software. The extended version of the software will provide end-to-end simulation capabilities for the modeling of thermobaric explosive devices in chambers and tunnel complexes by adding AMR solid-mechanics and low-Mach-number reactive Navier-Stokes modeling capability to the present reactive Navier-Stokes software.

Future enhancements to the modeling package are intended to remove some of the limitations in the physical assumptions made in the simulations, for example, by including the effects of radiation in the coupled equations of motion. Another improvement will provide for venting of the combustion and explosion products into the background environment in order to track and quantify the release of C/B materials. Finally, DTRA scientists will improve the simulations by examining other explosive agents, such as the new classes of thermobaric explosives.

References
J.B. Bell, M.S. Day, V.E. Beckner, A.L. Kuhl, P. Neuwald, and H. Reichenbach, “Simulations of Shock-Induced Mixing and Combustion of an Acetylene Cloud in a Chamber,” Proceedings of the 18th International Colloquium on the Dynamics of Explosions and Reactive Systems, Seattle, WA, ISBN 0-9711740-0-8, July 29–August 3, 2001.

J.B. Bell, M.S. Day, V.E. Beckner, A.L. Kuhl, P. Neuwald, and H. Reichenbach, “Numerical Simulations of Shock-Induced Mixing and Combustion,” presented at the Ninth International Conference on Numerical Combustion, Sorrento, Italy, April 7–10, 2002.

Figure 1. Simulated fuel consumption and temperature profiles for a series of small-scale (1:75) experiments used to model a single bay of the Dipole Jewel Test Field. In the experiments, a small explosive charge detonated inside a closed container induces turbulent mixing and shock-induced combustion of a surrogate for a biological agent. The resulting combustion characteristics depend strongly on the size of the explosive charge and its proximity to the combustible material. The figure depicts typical time histories for three cases of varying charge size (0.2–0.5 g) and separation distance (10–17 mm). To the right, the temperature field is shown on the vertical midplane for the side of the vessel containing the combustible, at the instant that reflected blast waves interact with the fuel. When the detonation is far from the initial fuel, very little combustion is observed, even over long times. If the charge is closer, the incident blast wave induces sufficient turbulent mixing of the fuel and air, and the waves reflected from the rear wall ignite the mixture. For stronger explosions, the incident shock initiates more vigorous volumetric combustion.

CTA: Computational Fluid Dynamics
Computer Resources: IBM SP3 [ARL] and Compaq Cluster [ERDC]


Seismic Propagation Modeling for Army Sensor Networks
MARK MORAN, STEPHEN KETCHAM, THOMAS ANDERSON, AND JIM LACOMBE

Cold Regions Research and Engineering Laboratory (CRREL), Hanover, NH

ROY GREENFIELD

Greenfield Associates, State College, PA

Comprehensive, reliable, situational information is imperative for the success of light-armor, maneuver-dominated Future Combat System operations. Tactically significant terrain includes large-scale physiographic features such as forests, hills, passes, narrow valleys, or rivers. These complex battlefield environments are extremely difficult sensor settings. Opposing forces will adapt their counter-operations to exploit poor sensor coverage. In these complex settings, passive seismic and acoustic Unattended Ground Sensors (UGSs) will provide unique non-line-of-sight (NLOS) and beyond-line-of-sight (BLOS) information under conditions that are poorly covered by air-breathing or space-based sensor platforms. The NLOS attribute is a result of signal wavefronts readily "bending" as they propagate through geologic and atmospheric media. Unfortunately, the inherent variability of terrain and meteorology also leads to large fluctuations in signal characteristics and, consequently, severe degradations in information accuracy and reliability. These environmental and terrain-induced information degradations can be mitigated by deploying high-population sensor networks, by calibrating each UGS node to its specific setting, and by fusing diverse sensor information with optimized algorithms.

In recent field trials, seismic sensors have demonstrated their applicability to target bearing, range, and classification problems at target ranges well over 1000 m. However, seismic sensors have historically not been heavily relied upon in practical tactical systems. This is largely due to the strong effects of geology on the character of seismic data and the highly variable, unknown geological characteristics of each deployment setting. Before seismic sensors can reach their full potential in fielded systems, methods must be developed that address the geologic variability issues. Geologic adaptation is the central demonstration of this project. The team also addressed the more general problem of how target-track data fusion for a large number (greater than 6) of spatially distributed UGS sensors can be done, and the consequent improvement in tracking accuracy. Field demonstrations with comparable numbers of physical hardware are not anticipated for several more years; as a consequence, optimized UGS data-fusion methods have not been previously given. It is only through the use of high fidelity simulations that the team is able to develop these new adaptation and tracking methods.

Current Focus

The current focus of the modeling effort includes (1) simulations of armored tracked vehicles moving over open terrain with realistic topography and subsurface geology, (2) the effects of topography and geology on the vehicle signatures, (3) high-fidelity modeling of tracked-vehicle dynamics and their track-pad forces so that these forces can be used as inputs to seismic simulations, and (4) accuracy validation by comparisons with field test data. Because tracked vehicles generate target-specific seismic signatures at ranges of roughly 1 km, models on the scale of 1 km² in surface area were simulated.


The resulting synthetic signatures allow development of sensor system algorithms that fully exploit seismic signals. Furthermore, this capability supports the advanced sensor network systems envisioned for the Objective Force by demonstrating the target tracking and classification capabilities of seismic sensor networks for NLOS and BLOS situational awareness and by providing developers of tactical systems with a database of seismic signatures for a wide variety of geologic conditions.

The ability to simulate high fidelity, time-varying seismic wavefields that include realistic signal complexity is central to the team's demonstration of seismic sensor geologic adaptation and to the development of optimized track-location information fusion methods. It is particularly important that the signal complexity include the dynamic harmonic energy shifts characteristic of moving targets and the effects of strong geologic contrasts (large variations in amplitude, coherence, and wavefront curvature).

Geologic Variations

Geology can be highly variable across geographic regions and within a specific location. These variations have first-order effects on the character of seismic signals and must be compensated for before seismic sensors can be employed in battlefield systems. For basic seismic system operation, only two geologic parameters are required: the propagation speed of the incoming seismic surface waves and the rate of decay of the surface waves.

Figure 1 (a–d). Results from a seismic simulation of a tank moving over complex open terrain; map annotations mark an outcrop and a trench.


Using simulated data, the team shows that these basic properties are easily derived from simple calibration methods. It further demonstrates that target tracking information can be corrected, allowing compensation for very complex signal effects resulting from strong geologic contrasts. This demonstrates that target tracks to moving vehicles are substantively improved by application of the correction functions and geology parameters derived from the calibration events.
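Those two calibration parameters can be estimated directly from a handful of calibration events with known source positions: surface-wave speed from arrival time versus distance, and the decay rate from a log-log fit of peak amplitude versus distance. A minimal sketch under those assumptions (the sample numbers are hypothetical):

    import numpy as np

    # Hypothetical calibration events: distance to sensor (m), first-arrival
    # time (s), and peak surface-wave amplitude (arbitrary units).
    dist = np.array([100.0, 250.0, 400.0, 600.0, 800.0])
    t_arrival = np.array([0.36, 0.89, 1.43, 2.12, 2.86])
    amplitude = np.array([1.00, 0.38, 0.22, 0.14, 0.10])

    # Surface-wave speed: least-squares slope of distance vs. arrival time.
    speed = float(dist @ t_arrival / (t_arrival @ t_arrival))        # m/s

    # Amplitude decay: fit A(r) ~ A0 * r**(-n) by regression in log space.
    slope, log_a0 = np.polyfit(np.log(dist), np.log(amplitude), 1)
    n_decay = -slope                                                 # decay exponent

    print(f"speed ~ {speed:.0f} m/s, decay exponent ~ {n_decay:.2f}")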

Figure 1 (a–d) shows an example resulting from a simulation of a tank moving over complex open terrain. Seismic sources were taken from a vehicle-dynamics simulation of the tank. Figures 1a and 1b show snapshots of ground vibration amplitudes as the tank moves toward and away from a wide trench. Figure 1c shows the speed of the tank, and Figure 1d shows a synthetic seismic signature spectrogram of the ground vibration near the middle of the model surface. The signature contains realistic features apparent in field measurements, such as multiple harmonics and evolution of the spectral content with vehicle speed. The series of overlaid black lines shows the expected harmonics from tank wheels rolling over track blocks.

Fusing Network Nodes

The team fuses line-of-bearing (LOB) and range information from each node in the network with two methods. The simplest is an outlier rejection method based on the mean and standard deviation of each individual node's location estimate. The second approach uses an optimum non-linear, weighted, least-squares error minimization (WLS) with weights determined from the information (LOB and range) variance. Both approaches are shown to give comparable performance. The WLS approach is expected to be more appropriate with higher network node populations and when considering more diverse sensor inputs.
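The variance-weighted fusion step reduces to a weighted least-squares estimate of the target position from each node's measurements. The sketch below keeps only the range part of the problem and solves it with a Gauss-Newton iteration; the node coordinates, range estimates, and variances are hypothetical, and the full method would also fold in the bearing information.

    import numpy as np

    def wls_locate(nodes, ranges, variances, x0, iters=20):
        """Weighted least-squares target position from per-node range estimates.

        nodes:     (N, 2) sensor positions (m)
        ranges:    (N,)   range estimates to the target (m)
        variances: (N,)   variance of each range estimate (m^2); weights = 1/var
        x0:        (2,)   initial position guess
        """
        x = np.asarray(x0, dtype=float)
        w = 1.0 / np.asarray(variances, dtype=float)
        for _ in range(iters):
            diff = x - nodes                        # (N, 2)
            pred = np.linalg.norm(diff, axis=1)     # predicted ranges
            J = diff / pred[:, None]                # Jacobian of range w.r.t. position
            r = ranges - pred                       # residuals
            A = (J * w[:, None]).T @ J
            b = (J * w[:, None]).T @ r
            step = np.linalg.solve(A, b)
            x = x + step
            if np.linalg.norm(step) < 1e-6:
                break
        return x

    # Hypothetical four-node network
    nodes = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [500.0, 500.0]])
    ranges = np.array([320.0, 290.0, 365.0, 340.0])
    variances = np.array([25.0, 16.0, 100.0, 36.0])
    print(wls_locate(nodes, ranges, variances, x0=[250.0, 250.0]))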

Seismic UGS networks can be adapted to a specific geologic context by using a sparse sequence of calibration events. All that is required in the suggested calibration method is a consistent source excitation mechanism and meter-scale source position accuracy. These simple criteria can be easily met in a wide variety of ways, including monitoring the seismic signals generated by the network deployment vehicle. The geologic adaptation functions, derived from the calibration data, are then applied to the moving-target LOB and range estimates for each UGS node in the network. The results show that the adapted and fused moving-vehicle network track results smoothly converge to errors as small as 2 m. The surprising correlation between the location calibration errors and the vehicle tracking error indicates that the network performance might be quantifiable at the time of deployment. This would have broad practical utility in designating target engagement points and weapons systems.

CTA: Computational Electromagnetics and Acoustics
Computer Resources: Cray T3E [AHPCRC, ERDC] and SGI Origin 3000 [ERDC]


Identification of “Hot-Spots” on Tactical Vehicles at UHF Frequencies
ANDERS SULLIVAN

Army Research Laboratory (ARL), Aberdeen Proving Ground, MD

LAWRENCE CARIN

Department of Electrical and Computer Engineering, Duke University, Durham, NC

Within the DoD, there is considerable interest in evaluating the use of low-frequency, ultra-wideband (UWB) imaging radar to detect tactical vehicles concealed by foliage and targets buried in the ground. At low frequencies, the tree canopy is essentially transparent. Likewise, low-frequency radar can propagate efficiently through the ground for subsurface target detection. Despite these advantages, the development of robust target detection algorithms has been hampered by the complexity of the scattering physics due, in part, to the large number of physical parameters involved and the frequency diversity inherent in UWB systems. The objective of the CHSSI CEA-9 project is to develop high performance software for predicting electromagnetic scattering from general 3-D surface and subsurface targets. In particular, attention was directed toward radar-based, wide-area surveillance of buried mines, unexploded ordnance (UXO), bunkers, tunnels, and large ground vehicles. Validated electromagnetic models will permit the analysis of many target scenarios at a fraction of the cost of a measurements program.

Multi-level Fast Multipole Algorithm Developed

The software developed in this project is based on the fast multipole method (FMM). The FMM constitutes a recently developed extension to the method of moments (MoM) technique. In the MoM, the unknown constituents in an integral equation (e.g., surface currents) are represented in terms of a set of known expansion functions with unknown coefficients. The MoM results in the matrix equation Zi = v, where Z is an N x N "impedance" matrix, v is an N x 1 vector representing the incident fields, and i is an N x 1 vector representing the N unknown expansion-function coefficients. While the MoM is a very powerful and general technique, it has limitations that constrain its use. In particular, one must store the matrix Z, and therefore computer memory limits the number N of unknowns that can be considered. Consequently, the need to store the N x N matrix Z translates to a limitation on the electrical size of the target that can be considered with a given computer memory. To mitigate this problem, the ARL team has developed a fully scalable, multi-level fast multipole algorithm (MLFMA), which can be applied to surface (tanks, trucks, etc.) or subsurface (mines, bunkers, etc.) targets. The MLFMA is a technique in which the expansion functions are grouped into M clusters and the interactions are treated globally, cluster by cluster. The MLFMA generally requires O(N log N) runtime and memory, while the MoM requires O(N²) runtime and memory. To model these large targets, where the number of unknowns may approach one million, high-performance computing resources are required. The parallel MLFMA software was developed to run on the MSRC SGI and IBM HPC platforms.
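The practical difference between the MoM and the MLFMA shows up in how the system Zi = v is applied and solved: the dense O(N²) matrix is replaced by a fast operator that returns Z·i without ever forming Z. A schematic sketch of that idea is below, with a placeholder fast_matvec standing in for the actual MLFMA cluster-by-cluster evaluation; it is not the ARL implementation.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    N = 100_000                       # number of expansion-function unknowns

    def fast_matvec(i_vec):
        """Placeholder for the MLFMA evaluation of Z @ i in O(N log N):
        near-field cluster interactions applied sparsely, far-field interactions
        applied via multipole translations. Not implemented here."""
        raise NotImplementedError

    # Matrix-free operator: Z is never stored, only applied.
    Z_op = LinearOperator((N, N), matvec=fast_matvec, dtype=complex)

    # v holds the tested incident field; i (the surface-current coefficients)
    # would then be obtained iteratively, e.g. with GMRES:
    v = np.zeros(N, dtype=complex)    # would be filled from the incident wave
    # i_coeffs, info = gmres(Z_op, v)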

Validating Electromagnetic Models

The ARL has been developing low-frequency UWB radar technology for detection of foliage-concealed targets for many years. Based on recent field test data, researchers know that the UWB radar has the potential to "see" foliage-concealed targets.




The question remains, however, whether researchers can develop an understanding of the underlying signature and whether it is robust and reliable enough to be militarily significant. Rigorous electromagnetic models such as the MLFMA can help answer this question. As an example, the ARL team computed the vertical transmit-vertical receive (VV) envelope response from a T-72 for nose-on incidence (θ = 55°, ϕ = 0°) using a UHF impulse. The UHF pulse spectrum had a bandwidth of 246–434 MHz. This bandwidth corresponded to a range resolution of 0.8 m. As shown in Figure 1, the time-domain response was then transformed into the space domain and mapped onto the surface of the tank. This was done in order to correlate the peak returns in the time-waveform impulse response with specific physical features on the target: in this case, the front fender area, cupola, and rear gas tank. These highlighted (red) areas are the local scattering centers (or "hot spots") on the T-72 for this particular angle of incidence. This analysis shows that the response from the target is directly linked to the shape and orientation of the vehicle. This aspect of target scattering may be useful for ATR algorithm development. In the end, it may be the key to discriminating targets of interest, such as tanks, trucks, and missile launchers, from the large clutter sources (trees).
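The two conversions used here are simple: range resolution follows from the pulse bandwidth as c/(2B), and each two-way time delay in the impulse response maps to a down-range distance of c·t/2. A small sketch of that arithmetic (the peak delays are hypothetical):

    C = 299_792_458.0                     # speed of light, m/s

    bandwidth_hz = 434e6 - 246e6          # UHF pulse bandwidth from the text
    range_res_m = C / (2.0 * bandwidth_hz)
    print(f"range resolution ~ {range_res_m:.2f} m")   # ~0.80 m, matching the report

    # Mapping time-domain peaks (two-way delay) to down-range positions on the
    # target, relative to an arbitrary reference delay (values are hypothetical).
    peak_delays_ns = [0.0, 14.2, 27.5]
    positions_m = [C * (t * 1e-9) / 2.0 for t in peak_delays_ns]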

References
N. Geng, A. Sullivan, and L. Carin, “Fast multipole algorithm for scattering from an arbitrary PEC target above or buried in a lossy half-space,” IEEE Trans. Antennas and Propagation, Vol. 49, pp. 740–748, May 2001.

A. Sullivan, “Modeling targets in a forest at VHF and UHF frequencies,” Proceedings of SPIE, Radar Sensor Technology VII, April 2002.

CTA: Computational Electromagnetics and Acoustics
Computer Resources: SGI Origin 2000 [ARL]

Figure 1. Impulse response mapped on T-72


Simulation and Redesign of the Compact A6 Magnetron
KEITH CARTWRIGHT

Air Force Research Laboratory (AFRL), Albuquerque, NM

Radio frequency (RF) weapons will be designed to work against hostile battlefield electronics. Given the importance of electronics on the modern battlefield, the ability to target these electronics will give the US a distinct advantage against technically advanced adversaries. The effect of RF radiation on the electronics can vary from transient disruption to destruction of hardware, depending on the power and frequency of the radiation. Furthermore, since only electronic equipment is affected, RF radiation can be applied in a non-lethal manner, giving the warfighter a broader range of options for dealing with opponents of varying sophistication and strength. This generation of electromagnetic radiation is critical to the DoD's advanced RF weapon effort.

Predicting High Power Microwave Performance Using ICEPIC

Improved Concurrent Electromagnetic Particle-in-Cell (ICEPIC) predicts the performance of several high power microwave (HPM) devices in current development for the US Air Force. With an accurate simulation tool, devices such as the relativistic klystron oscillator (RKO), the magnetically insulated line oscillator (MILO), the relativistic magnetron, and high-frequency gyrotrons can be improved. ICEPIC simulates the electrodynamics (Maxwell's equations) and charged-particle dynamics (the relativistic Lorentz force law) with second-order time and space accuracy in HPM sources. These simulations accurately predict RF production and antenna gain compared with experimental testing. In general, ICEPIC development has been driven by the desire to achieve high power in a compact RF device. This implies that the device is operated at high current and high space charge. Consequently, these devices are highly non-linear and have complex three-dimensional geometries with small features. ICEPIC may be used both for fine-tuning well-understood devices and for testing new designs.
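The particle half of a particle-in-cell cycle, advancing relativistic momenta under the Lorentz force with second-order (leapfrog/Boris) accuracy, can be sketched compactly. This is a generic Boris-rotation push, not ICEPIC's implementation; the field-gather and current-deposit steps of a full PIC code are omitted, and the field values are hypothetical.

    import numpy as np

    Q = -1.602176634e-19      # electron charge (C)
    M = 9.1093837015e-31      # electron mass (kg)
    C = 299_792_458.0         # speed of light (m/s)

    def boris_push(u, E, B, dt):
        """Advance u = gamma*v (momentum per unit mass) one step under the
        Lorentz force using the second-order Boris rotation scheme."""
        qmdt2 = Q * dt / (2.0 * M)
        u_minus = u + qmdt2 * E                       # first half electric kick
        gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus) / C**2)
        t = qmdt2 * B / gamma                         # magnetic rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        u_prime = u_minus + np.cross(u_minus, t)
        u_plus = u_minus + np.cross(u_prime, s)       # rotation about B
        return u_plus + qmdt2 * E                     # second half electric kick

    # One hypothetical electron in crossed fields (magnetron-like geometry)
    u = np.zeros(3)
    E = np.array([1.0e7, 0.0, 0.0])                   # V/m
    B = np.array([0.0, 0.0, 0.5])                     # T
    dt = 1.0e-12                                      # s
    for _ in range(100):
        u = boris_push(u, E, B, dt)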

Virtual Prototyping with ICEPIC

Researchers at the AFRL are performing virtual prototyping of RF systems, from pulse power drivers to beam sources through the antennas, with the ICEPIC HPC software. They have developed ICEPIC over the past several years with AFOSR funding. This code simulates the electrodynamics (Maxwell's equations) and charged-particle dynamics (the relativistic Lorentz force law) from first principles. Such simulations require major computational resources and are scalable to hundreds of nodes.

The team successfully applied ICEPIC to simulate the University of Michigan's relativistic magnetron, reproducing the RF power level, growth rate, and operating frequency. ICEPIC predicts 300 MW (megawatts) of total extracted instantaneous power, 150 MW RMS (root mean square), compared with 200 MW RMS measured in the experiment. ICEPIC also predicts the experimental 100-nanosecond (ns) delay in the onset of RF after the diode voltage reaches its peak. The operating frequency observed in the ICEPIC results agrees to within a few percent of the 1.03 GHz measured in the experiment. The ICEPIC results also show that the University of Michigan's A6 does not operate in the desired pi mode (a pi phase shift in the electric field from cavity to cavity). In addition, the ICEPIC results show a large upstream and downstream DC loss current, similar to experimental observation.



In addition to simulating the University of Michigan device, ICEPIC is applied to the Titan/PSI A6 magnetron designed for AFRL (Figure 1). The results verify the 1-GW (gigawatt) capability of the design but find severe mode competition that hampers the axial extraction concept needed to meet the compact size goal. The current program provides for an evolutionary step by extracting power radially through two waveguides. The simulations show roughly 1 GW of instantaneous power extracted into each arm (for a total of 1-GW RMS power) when the entire cathode region under the anode block is allowed to emit. However, there is severe mode competition between the pi and 2pi/3 modes (Figure 2). The phase advance from cavity to cavity is not constant in this design. The phase advance for the extracting cavities and the cavity between is pi/2. The rest of the cavities have a 3pi/4 advance, which makes the average phase advance 2pi/3. Clearly, the extractors are having a profound effect on the overall electrodynamics of the device. This shows the importance of using 3-D simulation tools and including all the geometry of the source. This issue will not be a problem for the initial proof-of-principle radial extraction experiments, but it limits the output power for the compact axial extraction concepts to about half of the required value. In addition, the peak electric field for these modes is not in the center of the anode block as expected, but downstream of the center (Figure 3). Details of the field and particle structure inside the magnetron are difficult to obtain from experiment because of the high electric field stresses and electron bombardment. Thus, the simulation results are extremely insightful.

Developing Compact, Multi-Gigawatt Microwave Sources

The task is to develop compact, multi-gigawatt microwave sources. The next step is now being taken in AFRL's evolution toward true virtual prototyping: the relativistic magnetron is being simulated before it is built at the lab. The details of the device that will eventually be built, including the geometric structure and the externally generated magnetic field distribution, will be based on these simulations. Scientific visualizations are used for three purposes. First, they are used to communicate complex physics. Second, they are useful for debugging the ICEPIC software as well as the input files. Lastly, they can be used to uncover new and unexpected emergent physics.

Figure 1. The Titan/PSI A6 magnetron simulated in ICEPIC. This snapshot shows the electric field and particles 50 ns after the start of the applied voltage. The iso-surfaces are the electric field pointing around the cathode (theta direction): red is in the clockwise direction when facing upstream (left in this graphic); green is in the counterclockwise direction. The power extracted from the magnetron into the waveguides shows a pi phase advance between the two cavities. The desired mode has a 2pi phase advance for the extractors; this magnetron is operating in the wrong mode.

Figure 2. The Titan/PSI A6 magnetron simulated in ICEPIC. This snapshot shows the electric field and particles 50 ns after the start of the applied voltage. The camera is pointing upstream toward the pulse power. The iso-surfaces are the electric field pointing around the cathode (theta direction): red is in the clockwise direction when facing upstream; green is in the counterclockwise direction. The power extracted from the magnetron into the waveguides shows a pi phase advance between the two cavities. The desired mode has a 2pi phase advance for the extractors; this magnetron is operating in the wrong mode. The phase advance from cavity to cavity is not constant in this design. The phase advance for the extracting cavities and the cavity between is pi/2. The rest of the cavities have a 3pi/4 advance, which makes the average phase advance 2pi/3.


Figure 3. The Titan/PSI A6 magnetron simulated in ICEPIC. This snapshot shows the electric field and particles 50 ns after the start of the applied voltage. The peak electric field for these modes is not in the center of the anode block as expected, but downstream (left) of the center (vertical) yellow line.

The Titan/PSI A6 magnetron design is improved through extensive ICEPIC simulations. The simulations allow studies of the consequences of changing the vane depth and extraction slots to force the tube into the pi mode. Using simulation results, the parasitic mode is reduced in amplitude to ten percent of the desired pi mode, and cathode end caps are designed to reduce the upstream and downstream loss current. The reduced loss current increases the efficiency of the magnetron. Plans are to build the improved design in late 2002 and to compare experimental results to simulation results from ICEPIC.

References
M.D. Haworth, K.L. Cartwright, J.W. Luginsland, D.A. Shiffler, and R.J. Umstattd, “Improved Electrostatic Design for MILO Cathodes,” IEEE Trans. Plasma Sci., 30, June 2002.

J.W. Luginsland, Y.Y. Lau, R.J. Umstattd, and J.J. Watrous, “Beyond the Child-Langmuir law: A review of recent results on multidimensional space-charge-limited flow,” Phys. Plasmas, 9, 2371, May 2002.

R.E. Peterkin, Jr. and J.W. Luginsland, “Theory and design of a virtual prototyping environment for directed energy concepts,” Computing in Science and Engineering, March–April 2002.

CTA: Computational Electromagnetics and Acoustics
Computer Resources: IBM SMP [ARL, ERDC, MHPCC], IBM Netfinity [MHPCC], and Compaq SC40 [ERDC]


Multiphase CFD Simulations of Solid Propellant Combustion in Modular Charges and Electrothermal-Chemical (ETC) Guns
MICHAEL NUSCA AND PAUL CONROY

Army Research Laboratory (ARL), Aberdeen Proving Ground, MD

The Army is exploring a variety of gun propulsion options for indirect- and direct-fire weapons for the legacy force and the Future Combat System. As it transforms, the Army has identified requirements for hypervelocity projectile launch for strategic Army missions. Among these gun systems are those that use solid propellant (granular form loaded in modules for indirect fire, or disk/strip form for high-loading-density direct-fire cartridges) along with electrothermal-chemical (ETC) augmentation. Advanced indirect-fire gun systems with complex grain and container geometries are being investigated for the Modular Artillery Charge System (MACS) that has been developed for all fielded 155 mm cannons. The gun propulsion-modeling environment has been one in which separate codes (some 1-D, some 2-D) are used, with no single multi-dimensional code able to address the truly 3-D details of all of these systems. This situation renders comparison of ballistic performance cumbersome and inconclusive. In contrast, the multiphase continuum equations that represent the physics of gun propulsion comprise a set of general equations applicable to all of these systems. Thermodynamic state equations and constitutive laws, as well as boundary conditions, represent the relevant differences in system details.

NGEN – The Army's Next Generation Interior Ballistics Model

The Army's next generation interior ballistics model, NGEN, is a multi-dimensional, multi-phase CFD code that has been under development at ARL for a number of years. The NGEN code incorporates 3-D continuum equations along with auxiliary relations into a single code structure that is both modular and readily transportable between scalable, high performance computer architectures. Since interior ballistics involves flowfield components of both a continuous and a discrete nature, an Eulerian/Lagrangian approach is used. The components of the flow are represented by the balance equations for a multi-component reacting mixture. These describe the conservation of mass, momentum, and energy on a sufficiently small scale of resolution in both space and time. A macroscopic representation of the flow is adopted using these equations, derived by a formal averaging technique applied to the microscopic flow. The equations require a number of constitutive laws for closure, including molecular transport terms, state equations, chemical reaction rates, intergranular stresses, and interphase transfer. The numerical representation of these equations, as well as the numerical solution, is based on a finite-volume discretization and high-order accurate, conservative numerical solution schemes.
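The finite-volume, conservative character of such a scheme is easiest to see in one dimension: cell averages of the conserved variables change only through fluxes at cell faces, so mass, momentum, and energy are conserved by construction. The sketch below is a deliberately simplified single-phase, first-order 1-D Euler update with a Rusanov flux, standing in for NGEN's multiphase, high-order formulation; the gas model and boundary treatment are placeholders.

    import numpy as np

    GAMMA = 1.4   # ratio of specific heats for a placeholder ideal-gas model

    def euler_flux(U):
        """Physical flux of the 1-D Euler equations; U rows are [rho, rho*u, E]."""
        rho, mom, E = U
        u = mom / rho
        p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
        return np.array([mom, mom * u + p, (E + p) * u])

    def rusanov_step(U, dx, dt):
        """One conservative finite-volume step: each cell average changes only by
        the difference of the numerical fluxes at its two faces (Rusanov flux).
        Boundary cells are left unchanged for simplicity."""
        rho, mom, E = U
        u = mom / rho
        p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
        smax = np.abs(u) + np.sqrt(GAMMA * p / rho)          # local wave-speed bound
        F = euler_flux(U)
        s_face = np.maximum(smax[:-1], smax[1:])
        F_face = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * s_face * (U[:, 1:] - U[:, :-1])
        U_new = U.copy()
        U_new[:, 1:-1] -= (dt / dx) * (F_face[:, 1:] - F_face[:, :-1])
        return U_new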

This modeling effort has yielded a code that stands as a singular computational simulation tool, providing the Army with the unique capability to simulate current and emerging gun propulsion systems. These simulations serve both to streamline testing and to aid in the optimization of interior ballistics performance. High performance computing resources and scientific visualization tools are essential to using these numerical simulations efficiently, because the NGEN code solves the governing equations in a high-resolution 2-D and 3-D mode.



In one particularly significant application, the NGEN code was used to simulate the MACS propelling charge, which is being developed at the Armament Research, Development and Engineering Center for all fielded 155 mm cannons and for the XM297 Crusader cannon (developed by PM-Crusader). Figure 1 displays a scientific visualization of the MACS using NGEN. The NGEN code simulations for the M231 low-zone MACS charge aided in the successful type classification of the charge. For the high-zone M232 charge, NGEN code simulations were used to uncover previously unknown physical phenomena that occur only during charge fallback in an elevated gun tube. This is a particularly acute concern at top zone, since bag charges positioned adjacent to the breech lead to large-amplitude pressure waves that can destroy the weapon. NGEN simulations of these modular charges proved that the prevailing gas dynamic process (when the charges are positioned adjacent to the breech face) causes the modules to be displaced forward. This mitigates pressure wave formation, a fact confirmed by actual gun tests performed a year later.

Encouraged by these successes, the scientists at ARL used the NGEN code to investigate the loading of M232 charges at ambient temperature along with modules that had been initially temperature-conditioned. This combination emulates taking modules from different loading pallets and assembling a single propelling charge. The important questions were whether such a charge would lead to high-amplitude pressure waves in the gun chamber and what level of performance gain or loss could be expected. Figure 2 shows a scientific visualization of the case of three hot modules loaded in the forward position along with two ambient modules comprising the final charge. In this situation (considered a worst-case scenario), hot propellant impacts the projectile base, as predicted by the NGEN code. This leads to a noticeable amplification of pressure waves in the chamber, but the waves remain non-threatening to the weapon's structural integrity. The predicted projectile velocity was shown to be slightly enhanced by this type of charge.

Figure 1. Scientific visualization of the MACS charge (five modules) showing the location of propellant particles (colored by temperature) and the formation of gas dynamic vortices in the gun chamber

Figure 2. Scientific visualization of the MACS charge (five modules, with the three forward modules at initially elevated temperature) showing the location of propellant modules (colored by temperature) at eight times during the flamespreading process. Note the impact of hot propellant on the projectile base (E, F).


Direct-Fire Gun Simulation

In another application, this time to direct-fire gun systems, the ARL researchers used the NGEN code to simulate solid propellant charges ignited using an ETC igniter. A simulation of a high-loading-density (HLD) M829A1 120 mm tank charge composed of disk and granular solid propellant media demonstrated the ability of the NGEN code to accommodate ignition and flamespreading through a novel charge. This simulation included the effects of projectile afterbody intrusion and movement, as well as compression and the resulting ignition delay in the propellant. Figure 3 is a scientific visualization of this gun charge. These simulations revealed, for the first time, that the igniter, positioned at the gun breech, could forcefully compress the disk charge. This compression closed critical flow paths in the charge, preventing effective flamespreading and even ignition. The simulations showed that the low-molecular-weight plasma generated by an ETC igniter could circumvent this problem. The plasma establishes a convectively driven flame that propagates faster than the material compression wave in the disk propellant, permitting an even ignition of the charge. A basic design tenet for using high-loading-density charges was established: pressure waves in the gun chamber generated by the conventional igniter, when paired with the disk propellant charge, were avoided by using an ETC igniter.

Figure 3. Scientific visualization of a HLD charge (disk propellant) showing gas pressure (colored by pressure, up to red at about 100 MPa). Note the higher pressure forward on the projectile afterbody.

References
Nusca, M.J. and Conroy, P.J., “Multiphase CFD Simulations of Solid Propellant Combustion in Gun Systems,” Proc. of the 40th Aerospace Sciences Conference, AIAA, Paper No. 2002-1091, January 2002 (also Journal of Propulsion and Power, November 2002).

Conroy, P.J. and Nusca, M.J., “Progress in the Modeling of High-Loading Density Direct-Fire Charges Using the NGEN Multiphase CFD Code,” Proc. of the 38th JANNAF Combustion Subcommittee Meeting, CPIA, April 2002.

CTA: Computational Fluid Dynamics
Computer Resources: SGI Origin 2000 [SMDC] and Cray SV1 [NAVO]


The Effects of Propellant Deformation During a Gun Firing

STEPHEN RAY

Army High Performance Computing Research Center (AHPCRC), Minneapolis, MN

The US Army is studying the use of increased amounts of propellant in tank and artillery gun rounds to increase the range and lethality of the guns. One of the technical challenges posed by larger amounts of propellant is ensuring that all of the propellant burns rapidly enough while keeping the pressure in the gun below certain levels. Usually, higher pressure is an effect of faster propellant burning. If the projectile leaves the gun before the propellant has finished burning, the energy released by combustion after the projectile exits is wasted. On the other hand, if the pressure in the gun is too high, the gun can be damaged.

Army researchers want better control of the tradeoff between the speed at which the propellant burns and the pressure in the gun. In order to achieve that control, they first need a better understanding of some of the detailed phenomena that occur while the gun is being fired. For instance, propellant disks can be placed in the gun (Figure 1), and they start to burn because hot gas from the igniter reaches them and raises their surface temperature. The hot gas will also cause the propellant disks to move, deform, and possibly break. This interaction between the gas and the propellant is not yet well understood, but knowing how it works will help researchers control the burning of the propellant.

Numerical Study of Gas and Propellant Interaction Underway

Studying the interaction between the gas and the propellant is difficult to do experimentally. An effort to study this numerically is underway using HPC resources available at AHPCRC. In order to study this problem in detail, the motion, pressure, and temperature of the gas have to be computed at each time step using the conservation of mass, momentum, and energy equations. The motion, stresses, and strains of the propellant are computed by using the equations of motion complemented by an appropriate strength model and an equation of state. This all has to be done on a very fine grid, with the distance between grid points on the order of 0.008 inches. The HPC systems have the memory and computing speed needed to study this problem.
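For orientation, the gas-phase portion of such a calculation amounts to advancing the familiar conservation laws; the following is a generic single-phase sketch with illustrative source terms, not the specific formulation used in this study:

\[
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{u}) = \dot{m}_s, \qquad
\frac{\partial (\rho\mathbf{u})}{\partial t} + \nabla\cdot(\rho\mathbf{u}\mathbf{u}) + \nabla p = \mathbf{f}_s, \qquad
\frac{\partial (\rho E)}{\partial t} + \nabla\cdot\bigl[(\rho E + p)\mathbf{u}\bigr] = \dot{q}_s,
\]

where ρ, u, p, and E are the gas density, velocity, pressure, and total energy, and the right-hand-side terms stand for the mass, momentum, and energy exchanged with the burning and deforming solid propellant.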

Scientific visualization, specifically the FV2 and PRESTO visualizers developed at the AHPCRC, has played an important role in understanding the data generated by the numerical model. In the study of gas interacting with the propellant, the formation and motion of pressure waves in the gas is a very important phenomenon. Creating animations of the pressure in the gas allows researchers to see how it happens, which aids in understanding it.


Figure 1. Sketch of disks of solid propellant in a gun round


Numerical Study Gives New Insight

The numerical study of the interaction between the gas and the propellant on the HPC systems has given new insights into some of the effects of the interaction. For instance, many researchers agree that the propellant disks will break if they are involved in a high-speed collision. But the HPC study shows that the gas can bend the disks around the neighboring propellant and break them even though the disks are moving at very low speeds (Figure 2). The inclusion of combustion and fracture models in the simulation of the propellant will allow the code to generate new details about the interactions. Such insights will give researchers more information as they develop a fuller understanding of the gun firing, which in turn will provide the control needed to safely increase the amount of propellant in tank and artillery gun rounds.

Figure 2. Gas temperature in the barrel end of the gun. Notice how one disk is bending over its neighbor; this disk is fracturing.

References

S. Ray, “The Modeling of Combustion in a Solid Propellant Gun”, presented at the 38th Joint Army-Navy-NASA-Air Force (JANNAF) Combustion Subcommittee Meeting, sponsored by the Chemical Propulsion Information Agency, Destin, FL, April 2002.

S. Ray, “A Study of Fluid-Structure Interaction Phenomena in Interior Ballistics Applications”, presented at the 31st American Institute of Aeronautics and Astronautics (AIAA) Fluid Dynamics Conference, sponsored by the AIAA, Anaheim, CA, June 2001.

CTA: Computational Fluid Dynamics
Computer Resources: Cray T3E [AHPCRC]


Modeling and Simulation of Nanoparticle Formation and Growth in Turbulent Reacting Flows

SEAN GARRICK

Army High Performance Computing Research Center (AHPCRC), Minneapolis, MN

Simulations can be used in the production and characterization of nanoscale energetic particles and composites. There are several technologies that can be employed in the manufacture of nanoscale materials. Vapor-phase methodologies are by far the most favored because of chemical purity and cost considerations. When particles are produced from gas-phase precursors, several phenomena need to be addressed: vapor-phase chemistry, particle formation, and growth (coagulation, coalescence, condensation, etc.), among others.

Researchers at AHPCRC are developing tools to simulate the production of nanoenergetic particles — propellants or explosives. Energetic nanoparticles have been targeted as one of the propellants for the Army's FCS because they exhibit very high reaction rates. However, the Army currently has no source of these materials. The manufacture of nanoparticles is difficult because all of the relevant mechanisms and dynamics are not completely understood. To mitigate this, tools are being developed to determine the effects of chemistry, flow/temperature dynamics, and particle-particle interactions on nanoparticle formation and growth in realistic flow systems, such as those employed in cost-effective processes like vapor-phase synthesis. In this way, the Army can have particles in a specified size range, chemical composition, and physical structure.

Simulating Nanoscale Particle Production

The numerical simulation of nanoscale particle production through gas-phase synthesis involves the solution of a large number of coupled, non-linear integro-differential and partial differential equations. These models represent turbulent reacting flows with length scales spanning seven orders of magnitude. With the addition of nucleation, coagulation, and other particle-particle interactions, a rather diverse range of phenomena is under investigation. The primary objective is to improve our physical understanding of particle formation and growth in a turbulent reacting environment.
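The particle-field portion of such models is typically built around a population balance of the Smoluchowski type; the generic form below is shown only for orientation and is not necessarily the exact model used in this work:

\[
\frac{\partial n(v,t)}{\partial t} \;=\; \frac{1}{2}\int_0^{v}\beta(v-\tilde v,\tilde v)\,n(v-\tilde v,t)\,n(\tilde v,t)\,d\tilde v \;-\; n(v,t)\int_0^{\infty}\beta(v,\tilde v)\,n(\tilde v,t)\,d\tilde v,
\]

where n(v, t) is the number density of particles of volume v and β is the coagulation kernel. Solving this balance together with convection, diffusion, nucleation, and chemistry in a turbulent flow is what produces the coupled integro-differential and partial differential system described above.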

To illustrate the underlying phenomena, simulations of several turbulent flows were performed. A turbulent planar jet was simulated with a domain comprised of 24,500,000 grid points. In this calculation, the time steps are on the order of 10⁻⁹ seconds, and the dynamics of particles between 1 and 10 nanometers were captured. The concentration of 4-nanometer-diameter particles is shown in Figure 1(a). From this image, one can see that the particles are engulfed in the vortical structures and are effectively segregated from the rest of the flow. The characteristic flow time of the particles, the length of time the particles remain in specific regions of the flow, and the temperature of these regions are key areas of investigation. These are just a few of the issues which can be answered with the capability facilitated by advanced simulation.

High performance computing resources were required to run the direct numerical simulation of the turbulent jet. The exact solution required 96,265 CPU hours. Other solution methodologies had to be explored for the more complex flow geometries. Large-eddy simulations can provide reasonably accurate simulations in a fraction of the time required by direct numerical simulation. However, the effect of turbulence on both the flow and particle fields must then be modeled.



The effect of turbulence on the particle growth rate is shown in Figure 1(b). The figure reveals that the effect is more pronounced as the flow develops and is as high as 20%. This work resulted in the development of the first large-eddy simulation of nanoparticle growth in turbulent flows, which represents the particle field in a manner that allows for the capture of the underlying physics and chemistry of particle coagulation.

Figure 1. Nanoparticle coagulation in a turbulent jet: (a) Concentration of 4 nm diameter particles; (b) Turbulent growth rate

Figure 2. Nanoparticle growth in a temporal mixing layer: (a) Direct numerical simulation (450 CPU hours); (b) Large-eddy simulation (1 CPU hour)

Large-Eddy Simulation versus Direct Numerical Simulation

To test the methodology, both large-eddy simulations and direct numerical simulations of nanoparticle coagulation in temporal mixing layers, which are subsections of the turbulent planar jet, were performed. The results are shown in Figure 2. The concentration of 4-nanometer-diameter particles obtained through direct numerical simulation is shown in Figure 2(a), and the concentration obtained through large-eddy simulation is shown in Figure 2(b). The large-eddy simulation yields very good results while reducing the CPU time by a factor of more than 400. The use of large-eddy simulation will greatly affect the development of advanced nanostructured materials such as composites, which are being considered as the next generation of propellants because their large surface-to-volume ratio makes them highly reactive.

CTA: Computational Fluid Dynamics
Computer Resources: Cray T3E [AHPCRC]


ENHANCE THE CAPABILITY AND SURVIVABILITY OF SPACE SYSTEMS


Space Surveillance
Space Access
Sub-Orbital Space Vehicle


References (from top to bottom):

Simulations of Energetic Materials for Rocket Propulsion: Getting More Bang for the Buck by Jerry Boatz
Final configurations of an HMX molecule on an aluminum (111) surface. In the final configuration, one of the oxygen atoms is bonded to two surface aluminum atoms and is completely dissociated from the initial NO2 group. The other oxygen atom also forms a bond with aluminum but remains bonded to the nitrogen atom in the original NO2 group.

Scramjet Shock-on-Cowl-Lip Load Mitigation with Magnetogasdynamics by Datta Gaitonde
Schematic of the shock-on-cowl-lip problem

New Polynitrogen Molecules – Energetic “Air” as a Next-Generation Propellant? by Jerry Boatz
MP2/6-31G(d) optimized structure of the triphenylmethyldiazonium cation, [C19N2H15]+. The predicted C-N internuclear distance of 3.21 angstroms indicates that this cation is unstable with respect to dissociation to the triphenylmethyl cation and N2.


Enhance the Capability and Survivability of Space Systems

The overarching strategic goal for the DoD is to maintain US global superiority in space while making space access and operations easily affordable. From space, a whole range of critical information collection and distribution functions becomes possible with both robust global reach and little forward-based infrastructure. Information provided to US military personnel by space-based systems includes weather, force location and movement, environmental monitoring, transportation routes, advance warning of weapons deployment, and weapons targeting. Future space systems could allow the application of space-based force against ballistic missiles and other threats. The number of space-faring nations and their capabilities are increasing and, with that, the possibility of threat to US forces. Space systems based on current technology are highly expensive to acquire and operate. This high cost threatens US dominance of space.

One of the ways to reduce the costs of future space systems is to reduce the weight of the vehicle. A common area for advancement in reducing the weight of launch vehicles is the propellants. Propellants comprise 70–90% of a launch vehicle's weight and account for 40–60% of the cost of the vehicle. Added to this are the manpower required to load the propellants into the space vehicle and the costly safety procedures and apparel intrinsic to that task, so research into more efficient and safer propellants is ongoing.

Another focus is the development of rocket propulsion engines and motors with improved performance for transition into existing or new systems. Boost and orbit transfer propulsion systems will demonstrate improvements in specific impulse, mass fraction, thrust-to-weight ratio, reliability, reusability, and cost. Spacecraft propulsion systems will demonstrate improvements in specific impulse, thruster efficiency, and mass fraction.

Both of the above areas have one commonality — designing the item prior to developing the item. High performance computing is the key to these design efforts. The articles by Boatz, Sorescu, and Thompson; by McQuaid; and another article by Boatz describe quantum mechanical calculations to determine whether a proposed propellant will provide sufficient energy, whether there is a feasible path to produce it, and whether it is less toxic. All three of these characteristics must be considered prior to the development of new propellants. In the article by Gaitonde, the design of facets of supersonic engines is studied to alleviate problems with sustained hypersonic flight.

We believe that using HPC in the early stages of developing new space technologies helps ensure continued US superiority in space.

Dr. Leslie Perkins
High Performance Computing Modernization Program Office, Arlington, VA


Simulations of Energetic Materials for Rocket Propulsion: Getting More Bang for the Buck

JERRY BOATZ

Air Force Research Laboratory (AFRL), Edwards Air Force Base, CA

DAN C. SORESCU

University of Pittsburgh, Pittsburgh, PA

DONALD L. THOMPSON

Oklahoma State University, Stillwater, OK

Powdered aluminum has long been used as an energetic ingredient in rocket propellant formulations, comprising approximately 15–20% of some conventional ammonium perchlorate solid propellants. However, the performance of aluminum is reduced because of the rapid formation of an aluminum oxide overcoat on aluminum particles before combustion, which inhibits efficient burning. Furthermore, formation of the oxide overcoat severely reduces the potential advantages of using high surface-to-volume-ratio ultrafine aluminum particles, which have highly desirable properties such as enhanced burn rates.

Inhibiting Formation of Oxide Overcoat

In order to inhibit the rapid formation of an oxide overcoat on the ultrafine aluminum particles without simultaneously degrading performance, coating the aluminum particles with an energetic material such as 1,3,5,7-tetranitro-1,3,5,7-tetraazacyclooctane (HMX) has been proposed. One of the objectives of this challenge project is to characterize the interactions between energetic compounds such as HMX and aluminum surfaces in order to understand, at an atomic level, the chemistry at the metal-molecule interface. Specifically, ab initio quantum chemical calculations are used to determine whether HMX and/or other classes of energetic materials will effectively inhibit the formation of an oxide overcoat on the surface of aluminum.

Using Quantum Chemical Calculations

Conventional techniques for determining the feasibility of new ideas such as this have relied heavily on costly and time-consuming experimental synthesis and testing. However, the availability of HPC resources has significantly lessened the need for these empirical approaches by enabling the application of reliable, high-level quantum chemical calculations to address complex issues such as the nature of the interactions between energetic materials and metallic surfaces. A key advantage of using HPC is the ability to efficiently screen a variety of energetic materials as potential metallic coatings and focus subsequent experimental efforts on only the most promising candidates.

Spin-polarized generalized gradient approximation (GGA) density functional theory, using ultrasoft Vanderbilt-type pseudopotentials and the PW91 exchange-correlation functional in a plane-wave basis, has been used to characterize the interactions between the energetic molecules nitromethane (NM), HMX, and 1,1-diamino-2,2-dinitroethylene (FOX-7) and the aluminum (111) surface (Figure 1). Two different models have been used to approximate the (111) surface of aluminum: a (√7×√7)R19.1° slab model with four layers (28 aluminum atoms) used for NM, and a four-layer (3x3) slab (36 atoms) for the larger molecules (HMX and FOX-7). All calculations were performed using the Vienna Ab initio Simulation Package (VASP) code.
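For readers unfamiliar with periodic slab models, the sketch below shows how a comparable four-layer Al(111) cell can be set up with the open-source ASE toolkit; it is purely illustrative (the study itself used VASP), and the adsorbate, site, height, and vacuum spacing are assumptions chosen only to make the example self-contained:

from ase.build import fcc111, add_adsorbate

# Four-layer (3x3) Al(111) slab (36 atoms), the size of the model used for the
# larger molecules (HMX, FOX-7), with 10 Angstroms of vacuum separating
# periodic images along the surface normal.
slab = fcc111("Al", size=(3, 3, 4), vacuum=10.0)

# Place a single oxygen atom (a stand-in for a nitro-group oxygen) 2.0 Angstroms
# above an fcc hollow site; a real study would place the full molecule and then
# relax the geometry with a plane-wave DFT code.
add_adsorbate(slab, "O", height=2.0, position="fcc")

print(len(slab), "atoms in the periodic cell")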

Preliminary predictions of the interactions between the energetic molecules NM, HMX, and FOX-7 have indicated a common propensity for oxidation of the aluminum surface by the oxygen-rich nitro groups (NO2) present in these molecules. Dissociation of the nitro groups and formation of strong Al-O bonds appears to be a common mechanism for these molecules. These results suggest that nitro-containing energetic compounds are not likely to be effective coating materials for preventing rapid oxidation of aluminum.

Scientific visualization is used for the construction of the initial atomic configurations for crystals, surfaces, and gas-solid systems, using the Crystal and Surface Builder modules available in the Cerius2 package.

Interactions of Several Ionic Systems

Future work will extend this set of investigations to first-principles density functional theory calculations of the interactions of several ionic systems, such as ammonium nitrate or ammonium dinitramide, with an Al surface. In these investigations, key reaction pathways of these energetic salts on an aluminum surface will be computed. Researchers will also attempt to understand the type of chemical processes that can take place at the interface of aluminum hydrides with different classes of energetic materials. In this case, they will focus on ammonium nitrate and RDX crystals, whose detonation properties are significantly affected by the interaction with aluminum hydrides.

References

J.A. Boatz, D.C. Sorescu, and D.L. Thompson, “Computational Studies of Advanced Materials”, Arctic Region Supercomputing Center Technology Panel, University of Alaska-Fairbanks, March 12–14, 2002.

D.C. Sorescu, J.A. Boatz, and D.L. Thompson, “Classical and Quantum Mechanical Studies of Crystalline FOX-7 (1,1-diamino-2,2-dinitroethylene)”, J. Phys. Chem. A, 105, pp. 5010–5021, 2001.

Figure 1. Initial (left) and final (right) configurations of an HMX molecule on an aluminum (111) surface. In the final configuration, one of the oxygen atoms is bonded to two surface aluminum atoms and is completely dissociated from the initial NO2 group. The other oxygen atom also forms a bond with aluminum but remains bonded to the nitrogen atom in the original NO2 group.

CTA: Computational Chemistry and Materials Science
Computer Resources: SGI Origin 2000 [ARL], Cray T3E [NAVO], and IBM SP P3 [ASC]


A New Design Approach to Hydrazine-Alternative Hypergols for Missile Propulsion

MICHAEL MCQUAID

Army Research Laboratory (ARL), Aberdeen Proving Ground, MD

Providing an extremely reliable basis on which to design intermittent and/or variable thrust propulsion systems, hypergolic liquid (or gel) fuel-oxidizer combinations—i.e., those that react spontaneously upon mixing—are candidates for propelling beyond-line-of-sight weapons for the Army's FCS, an example being the Loitering Attack Missile being developed under the NetFires technology demonstration program.

Finding Viable Alternatives to Toxic and Carcinogenic Fuels

One of the drawbacks to “standard” hypergolic fuel-oxidizer combinations is that the fuels are derived from hydrazine, monomethylhydrazine (MMH), and/or unsymmetrical dimethylhydrazine (UDMH)—all of which are acutely toxic and suspected carcinogens. Fielding weapons systems with them necessitates the implementation of burdensome and costly handling procedures. Moreover, the only US producer of hydrazines is in danger of closing. Searching for alternatives to hydrazine-based fuels, the Army, in collaboration with the Air Force and the Navy, is investigating the use of secondary and tertiary amine azides in various propulsion applications. One of the formulations within this class of compounds receiving considerable attention is 2-azido-N,N-dimethylethanamine [(CH3)2NCH2CH2N3]. Also referred to as DMAZ, this fuel has shown great promise as a hypergolic fuel, performing competitively with Aerozine 50 (a 50/50 mixture of hydrazine and UDMH) in inhibited red fuming nitric acid (IRFNA) oxidized systems. DMAZ-IRFNA systems do not, however, meet the “ignition delay” standards set by MMH-IRFNA systems. This shortcoming negatively impacts engine design and may preclude the fielding of DMAZ.

Gaining Insights Through Computational Chemistry in a High Performance Computing Environment

Based on traditional chemical insights, proton transfer to the amine nitrogen has been hypothesized as an important initial step in the ignition of DMAZ-IRFNA systems. Therefore, attempts to address the ignition delay issue have been based on synthesizing 2-azidoethanamines with amine substituents other than the methyl groups found in DMAZ, with the hope being to increase the nucleophilic nature of the amine lone pair electrons. However, a computational quantum chemistry study of DMAZ identified that in its lowest energy configuration (Figure 1), the amine nitrogen is shielded from proton attack by the azide group, which therefore may be a barrier to the ignition process. Moreover, it is not possible to prevent azide group shielding of the amine lone pair in alternately substituted 2-azidoethanamines. The finding stimulated a search for molecular designs that prevented shielding but did not otherwise significantly compromise DMAZ's features and thermophysical properties. Proposing to achieve this goal by introducing a cyclic structure between the amine and azide groups, the ballistic performance-relevant properties of the cis and trans isomers of a notional compound—2-azido-N,N-dimethylcyclopropanamine (ADMCPA)—were computationally evaluated. (The trans form of ADMCPA prevents direct amine-azide interaction.)



Molecular stability and gas-phase heats of formation were established through computational quantum chemistry, and liquid densities and heats of vaporization were estimated via molecular dynamics simulations. The results were then used as input to thermochemical calculations of ballistic potential. The ADMCPA isomers were found to merit further investigation as hypergols, and their synthesis is now being pursued by the Army Aviation and Missile Research, Development, and Engineering Center as an option for its Impinging Stream Vortex Engine—a top launch package candidate for the Loitering Attack Missile.
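For orientation, the “ballistic potential” of a notional fuel-oxidizer pair is typically gauged by an ideal-performance figure such as specific impulse; one standard ideal-expansion expression (shown here only as background, not as the specific procedure used at ARL) is:

\[
I_{sp} \;=\; \frac{1}{g_0}\sqrt{\frac{2\gamma}{\gamma-1}\,\frac{R_u T_c}{\bar{M}}\left[1-\left(\frac{p_e}{p_c}\right)^{(\gamma-1)/\gamma}\right]},
\]

where T_c and p_c are the chamber temperature and pressure, p_e the nozzle exit pressure, \(\bar{M}\) and γ the mean molecular weight and specific-heat ratio of the combustion products, R_u the universal gas constant, and g_0 standard gravity. The computed heats of formation and estimated liquid densities enter through the thermochemical equilibrium calculation that determines T_c, \(\bar{M}\), and γ.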

Without the insight provided by computational quantum chemistry in a high performance computing environment, the new design approach could not have been identified. Moreover, the ability to computationally screen the properties and predict the performance of notional hypergols through this environment has provided firm justification for further investment in this high-risk development effort.

References

M.J. McQuaid, K.L. McNesby, B.M. Rice, and C.F. Chabalowski, “Density Functional Theory Characterization of the Structure and Gas-Phase Mid-Infrared Absorption Spectrum of 2-azido-N,N-dimethylethanamine (DMAZ),” J. Mol. Struct. (THEOCHEM), Vol. 587, pp. 199–218, 2002.

M.J. McQuaid, “Computational Characterization of 2-azidocycloalkanamines: Notional Variations on the Hypergol 2-azido-N,N-dimethylethanamine (DMAZ),” ARL-TR-2806, 2002.

Figure 1. Molecular structure of the lowest energy DMAZ structure (left) and its electrostatic potential on an electron isodensity surface (right). Dark blue regions of the surface indicate sites to which nitric acid will transfer a proton, the amine site being screened by the azide group.

CTA: Computational Chemistry and Materials Science
Computer Resources: SGI Origin 2000 and SGI Origin 3000 [ARL]


New Polynitrogen Molecules – Energetic “Air” as a Next-Generation Propellant?

JERRY BOATZ

Air Force Research Laboratory (AFRL), Edwards Air Force Base, CA

The identification, development, and formulation of new energetic materials for advanced rocket propulsion applications are areas of long-standing interest to the Air Force. The performance limits of currently used propellants have been reached, so new energetic compounds are required to significantly improve the ability of the warfighter to access and control space.

Polynitrogen species such as the recently discovered N5+ cation (a positively charged ion) are of interest as potential energetic ingredients in new propellant formulations. The recent successful synthesis of N5+ in macroscopic quantities has prompted the search for additional polynitrogen compounds. Computational chemistry plays a central role in determining the stabilities, potential synthetic pathways, and key spectroscopic “fingerprints” of new polynitrogen species.

Characterizing New Materials

Conventional techniques for characterizing new materials such as polynitrogens have relied heavily on costly and time-consuming experimental synthesis and characterization. However, the availability of HPC resources has significantly lessened the need for these more empirical approaches by enabling the application of high-level quantum chemical calculations to reliably predict important properties such as heats of formation, stabilities, mechanisms of formation and decomposition, and spectroscopic constants. A key advantage of using HPC in this regard is the ability to efficiently “screen” a large number of potential polynitrogen molecules and focus subsequent experimental efforts on the subset of promising candidates.

Computing Chemical Characteristics Using Ab Initio Methods

The structures, stabilities, vibrational frequencies, and infrared intensities of several potential synthetic precursors to new polynitrogen species have been computed using ab initio electronic structure theory at the second-order perturbation theory level (MP2, also known as MBPT(2)), using the 6-31G(d) valence double-zeta polarized basis set. Shown in Figure 1 is the predicted structure of the triphenylmethyldiazonium cation, also known as trityldiazonium, which is a possible precursor to new polynitrogen compounds such as pentazole, a 5-membered ring system.
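As a purely illustrative sketch of this kind of calculation (not the code used at AFRL), the open-source PySCF library can run an MP2/6-31G(d) single point on a small stand-in molecule such as N2; the geometry below is an assumption chosen only to make the example self-contained:

from pyscf import gto, scf, mp

# Small stand-in molecule (N2) in the 6-31G(d) basis; the actual study treated the
# much larger trityldiazonium cation, [C19N2H15]+, at the same MP2/6-31G(d) level.
mol = gto.M(atom="N 0 0 0; N 0 0 1.10", basis="6-31g*", unit="Angstrom")

# Hartree-Fock reference followed by second-order Moller-Plesset perturbation theory.
mf = scf.RHF(mol).run()
pt = mp.MP2(mf).run()

print("MP2 correlation energy (Hartree):", pt.e_corr)
print("MP2 total energy (Hartree):", mf.e_tot + pt.e_corr)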


Figure 1. MP2/6-31G(d) optimized structure of the triphenylmethyldiazonium cation, [C19N2H15]+. The predicted C-N internuclear distance of 3.21 angstroms indicates that this cation is unstable with respect to dissociation to the triphenylmethyl cation and N2.



The calculated structure shows that this cation is unstable with respect to dissociation to the triphenylmethyl cation and N2. Therefore, these calculations predict that this cation is not a viable polynitrogen precursor.

Although this is a negative result in the sense that it indicates that trityldiazonium is not a stable precursor, it is nonetheless highly useful in multiple ways. First, it saves significant time and effort by eliminating from consideration subsequent attempts at synthesis of a compound that is not likely to be stable. Furthermore, it suggests ways in which the trityldiazonium cation may be chemically modified in order to overcome its instability (e.g., by judicious placement of electron-withdrawing groups on the trityl moiety).

Derivatives of Polynitrogen Precursors

Future work in this area will examine such derivatives of the trityldiazonium cation, as well as other classes of promising polynitrogen precursors.

References

K.O. Christe, W.W. Wilson, J.A. Sheehy, and J.A. Boatz, “A Novel Homoleptic Polynitrogen Ion as a High Energy Density Material”, Angew. Chem. Int. Ed., 38, pp. 2004–2009, 1999.

CTA: Computational Chemistry and Materials Science
Computer Resources: IBM SP P3 [ASC], SGI Origin 3800 [AFFTC]


Scramjet Shock-on-Cowl-Lip Load Mitigation with Magnetogasdynamics

DATTA GAITONDE

Air Force Research Laboratory (AFRL), Wright-Patterson AFB, OH

Advances in sustained hypersonic flight (exceeding five times the speed of sound) will impact future strike capability, ease access to space, and influence military strategy. Developing vehicles to survive the harsh high-speed aerodynamic environment for lengthy periods poses major challenges. Key limitations are imposed by the poor efficiency of current air-breathing propulsion technologies and by excessive heat transfer and drag loads, which can affect design viability and cause mission failure. Supersonic combustion ramjets, or scramjets, will play a critical role by overcoming many of these constraints. Effective design will require informed compromise, coupled to revolutionary flow control techniques. The problem considered here stems from a mass-capture standpoint: scramjets function best when external compression shock waves are focused on cowl lips (Figure 1). Figure 2 shows Mach contours of the flow pattern resulting near the cowl lip without control. The crucial feature of this so-called Edney Type IV interaction is the nearly wall-normal supersonic jet, which impacts the surface. The stagnation process yields extremely high, unsteady heat loads, which can exceed forty times nominal values and lead to catastrophic failure.

Exploring Revolutionary Flow Control Techniques

Traditional approaches to flow control and heat transfer mitigation have not proven successful at high speeds. This has motivated exploration of more revolutionary techniques. Since hypersonic flow is, or can artificially be made to be, electrically conducting, an innovative method calls for a properly established electromagnetic field to provide surface shielding. The advantages of magnetogasdynamics (MGD) include the absence of moving parts, short control response times, and the possibility of leveraging the massive scale and advanced technological state of electrical engineering. Ground testing difficulties dictate that the development of these techniques will be led by modeling, with simulations providing insight into the flow physics as well as guidance on experimental test-bed configuration.

The problem is fundamentally multidisciplinary since it requires integration of the Navier-Stokes and Maxwell equations, together with models of thermo-chemical, non-equilibrium phenomena. This yields a large non-linear equation set and requires intelligent simplifications for computer modeling. The current effort employs elements of first-principles and phenomenological approaches for the MGD formulation. Thus, the Lorentz forces and energetic interactions are treated as source terms; the electric field is obtained by solving the Poisson equation arising from the current continuity constraint, and the current is obtained from the generalized Ohm's law.
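In outline, such a phenomenological coupling adds Lorentz-force and Joule-heating source terms to the gas dynamic equations and closes the system with current continuity and a generalized Ohm's law; a simplified statement (omitting Hall and ion-slip effects, and not necessarily the exact form used in this work) is:

\[
\mathbf{F} = \mathbf{J}\times\mathbf{B}, \qquad
\dot{Q} = \mathbf{J}\cdot\mathbf{E}, \qquad
\mathbf{J} = \sigma\left(-\nabla\phi + \mathbf{u}\times\mathbf{B}\right), \qquad
\nabla\cdot\mathbf{J} = 0,
\]

where F and Q̇ are the momentum and energy source terms, J the current density, σ the electrical conductivity, B the imposed magnetic field, u the gas velocity, and φ the electric potential; substituting Ohm's law into the current continuity constraint yields the Poisson-type equation for φ mentioned above.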




Figure 1. Schematic of shock-on-cowl-lip problem


Alleviating High Heat Transfer Rates

Massive computational resources are mandated by the mathematical complexity of the governing equations coupled with the need for fine spatio-temporal discretizations to assure solutions of high fidelity. Despite the use of a unique algorithm, which blends high-resolution upwind, high-order compact-difference/filtering, and implicit time-integration schemes, these problems require months to solve on stand-alone workstations but can be solved in days on the high-speed HPC systems. This greatly facilitates numerical exploration of the parameter space to characterize effectiveness and identify promising candidates for further study in ground-test facilities.

To alleviate the high heat transfer rates, a circumferential, radially decaying magnetic field is imposed on the flow, consistent with an axially directed current. Various electrical conductivity distributions are enforced in an electrodeless setup to reduce the strength of the jet and to modify the fluid dynamic bifurcation. Figure 3 shows the effect of one of the more promising control configurations, in which the primary triple point is modified with an ionized hot-spot, reducing the peak heat transfer rate by over 20%. Surface pressures also diminish to a similar extent. Furthermore, parametric studies demonstrate that these benefits increase as the second power of magnetic field strength and the first power of electrical conductivity. Post-processing yields invaluable insight into the mechanics of flow control. Figure 4 shows that the magnetic field has modified the pattern from Type IV to that classified as Type III, and the supersonic jet does not form at all. Energy balance analyses indicate a highly efficient and reversible flow field energy extraction process, which raises the possibility of even more effective control through realization of successive flow bifurcations.
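The observed scaling with the square of the field strength and the first power of the conductivity is consistent with the standard magnetogasdynamic interaction parameter (Stuart number), which measures the Lorentz force relative to the flow inertia; this is background context rather than a result reported in the article:

\[
Q \;=\; \frac{\sigma B^{2} L}{\rho\, u},
\]

where σ is the electrical conductivity, B the magnetic field strength, L a characteristic length, ρ the gas density, and u the flow speed; larger Q implies greater electromagnetic control authority for a given flow condition.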

The analysis of advanced flow control techniques is greatly facilitated by visualizing the results. Although the primary goal is often to influence quantities on the surface itself, many promising possibilities leverage the large gains obtained by modifying the overall flow structure. Scientific visualization aids in understanding the fluid dynamics and control response and fosters out-of-the-box thinking.

Figure 2. Flow field without control

Figure 3. Effect of MGD-control on heat transfer rates

Figure 4. Effect of MGD-control on flow field


Future of Scramjets

This work clearly shows the ability of advanced electromagnetic force-based methods to alleviate or eliminate problems encountered in sustained hypersonic flight.

Significant challenges remain both in enhancing the theoretical model and in comprehensively evaluating and exploiting the potential of these revolutionary proposals. Phenomenological aspects of the formulation are presently being replaced with first-principles approaches, particularly in the modeling of high-temperature effects. The use of wall electrodes and non-equilibrium ionization techniques opens exciting new possibilities in managing the global energy budgets of scramjet operation. MGD-assisted diffusers, nozzles, and combustors promise efficiency enhancements unthinkable with conventional methods, and may provide the key breakthrough to make sustained hypersonic flight feasible.

References

Gaitonde, D.V. and Poggie, J., “Elements of a Numerical Procedure for 3-D MGD Flow Control Analysis,” AIAA Paper 2002-0198, January 2002.

Poggie, J. and Gaitonde, D.V., “Magnetic Control of Flow Past a Blunt Body: Numerical Validation and Exploration,” Phys. Fluids, Vol. 14, No. 5, pp. 1720–1731, May 2002.

CTA: Computational Electromagnetics and Acoustics
Computer Resources: Cray SV1 [NAVO] and IBM SP3 [ASC]


LEVERAGING INFORMATION TECHNOLOGY

End-to-End Interoperable Communications
Survivable Strategic Communications


References (from left to right):

Mixed Electromagnetic — Circuit Modeling and Parallelization for Rigorous Characterization of Cosite Interference in Wireless Communication Channels by Costas Sarris
Electric field magnitude iso-surfaces on a military vehicle staircase model at 50 MHz

“Presto”: Parallel Scientific Visualization of Remote Data Sets by Andrew Johnson
View of the Presto interface showing the results of a CSM simulation. Damage and material stress are shown for a projectile-armor penetration

Novel Electron Emitters and Nanoscale Devices by Jerry Bernholc
Electron distribution near the tip of a BN/C nanotube in a field emitter configuration


Leveraging Information Technology

A Native American Indian chief once remarked that the words written on the medallions worn on the necklaces of his tribesmen are but a visual reminder of the words written on their minds and souls. The words on the Air Force Research Laboratory coin from General Hap Arnold read, “The First Essential of Airpower is Pre-Eminence in Research.” The mind and soul of America's transformational Department of Defense exist in a body fundamentally dependent on pre-eminence in research: a body fed by HPC technology. The following seven articles exemplify the importance of HPC in the transformational defense power the US DoD is becoming.

To support soldiers in the field and scientists in the laboratory, an effort from the Army High Performance Computing Research Center provides a persistent data repository for data producers and consumers. The Virtual Data Grid described in the article by Weissman provides just-in-time data production and delivery for data-intensive HPC applications. This data management system assimilates and fuses data for real-time and as-fast-as-possible computational systems, pumping valuable information into the 21st century battlespace. The Virtual Data Grid may enable modeling the evolution of toxic agents in public places, providing support to homeland defense organizations. Leveraging Virtual Data Grid information technology enables US and allied forces to project power into remote environments with integrated multi-source data access.

The technical capability discussed by Flemming, Perlman, Sarris, Thiel, Czarnul, and Tomko in Mixed EM-Circuit Modeling and Parallelization characterizes cosite interference problems relevant to military communication ad-hoc network architectures. This modeling tool mixes electromagnetic wave propagation and circuit modeling equations to simulate the effects of collocating electronic equipment on the same vehicle, station, or base. Specific communications impact to US forces projected anywhere in the world can be realized by leveraging this advanced, communication-focused information technology.

While the Virtual Data Grid technology provides US and allied forces access to data on a wide area network structure and the cosite electromagnetic interference technology models the communication channels connecting the troops to that data and its derivatives, the third effort, Parallel Scientific Visualization of Remote Data by A. Johnson, dramatically reduces the network traffic and client machine requirements needed to get the data to the user by providing remote, parallelized visualization of large data sets. This client-server approach to visualization of massive data puts visual data not only in the scientists' hands, but projects information into the deployed soldiers' battlefield. These first three articles describe rapid advances in information technology that connect the battlespace to large amounts of real-time and near-real-time data. Since this third technology reduces network traffic, an alternative application might enhance space operations by enabling data visualization across limited communication channels.

The article by Samios and Welday, Connecting the Navy Research Lab to the Department of the Navy's Chief Information Officer, describes a successful effort that physically connects the two locations with asynchronous, real-time, variable-bit-rate transfer capability. This advanced information technology employment provides a permanent virtual circuits upgrade throughout the core fabric of the Defense Research and Engineering Network wide area network (DREN WAN). In addition to a VTC link, this connection enables real-time experiments, with secondary beneficiaries including modeling and simulation, test and evaluation, and research and development users on the DREN WAN. This article details a terrific example of leveraging rapid advances in commercially available network technology to improve US communication connectivity.

Included in this section is Novel Electron Emitters and Nanoscale Devices by Bernholc, which describes an exciting application of the nation's HPC resources. The authors were able to accomplish computationally intensive nanoscale device analysis to investigate and predict the properties of new materials and technologies. The impacts of this particular analysis include flat panel displays and electron emitters. This study from the Office of Naval Research provides an excellent example of how HPC plays a vital role in investigating revolutionary concepts in a field cited in the Quadrennial Defense Review as having a potentially large impact on US military capability.

The final two articles describe evolutionary enhancements to two battlefield simulation systems. An Integrated Missile Defense Approach by Johnson and Robertson improves the Missile Defense Agency's (MDA) wargaming capability by using DoD HPC resources in conjunction with the Missile Defense Wargame and Analysis Resource. Under this effort, the Joint National Integration Center (JNIC) improved MDA's homeland defense simulation realism by increasing the simulation size, including fog-of-war error simulation, adding a target assignment decision aid, and enabling data collection in parallel without performance degradation. The final article, entitled Successfully Operating WG2000 Simulation by Fuller, describes a JNIC-developed simulation tool, Wargame 2000 (WG2000), that leverages HPC capability and high-speed, high-volume communication network technology. WG2000 provides simulated battles for military planners around the world to exercise command and control human-in-control analysis. When coupled with the Virtual Data Grid technology, these evolutions in parallelization of wargame simulation tools provide the next generation of homeland defense simulation modeling.

These seven examples comprise a subset of HPC projects with current and future impact on the joint warfighting team. Secretary of Defense Rumsfeld has stated: “It is not only about changing the capabilities at our disposal, but changing how we think about war. Imagine for a moment that you could go back in time and give a knight in King Arthur's court an M-16. If he takes that weapon, gets back on his horse, and uses the stock to knock his opponent's head, it's not transformational. Transformation occurs when he gets behind a tree and starts shooting.” Our nation's High Performance Computing resources are an essential tool in a battlespace demanding information superiority and advanced technology. When used in the transformational sense Mr. Rumsfeld describes, this computational and algorithmic power enables US forces to employ an arsenal of leading-edge technologies in transformational ways.

1st Lt John Gulick, USAF
Air Force Research Laboratory, Sensors Directorate, WPAFB, OH

Dr. Kueichien Hill
Air Force Research Laboratory, Sensors Directorate, WPAFB, OH


The Virtual Data Grid: Just-in-time Data Delivery … and Data Production for HPC Applications

JON WEISSMAN

Army High Performance Computing Research Center (AHPCRC), Minneapolis, MN

A persistent data repository for data producers and consumers is needed to support soldiers in the field and scientists in the laboratory. Increasingly, data from a large variety of sources must be available quickly and reliably. Earlier solutions did not attempt to provide a single coherent infrastructure for data management. HPC is needed to produce data on the fly, such as in battlefield simulation scenarios. HPC applications will also critically rely on data products (e.g., battlefield visualization).

Virtual Data Grid

This project supports and extends HPC technology to a wide-area Grid. The digitally enhanced battlefield of the future requires a distributed data and computation infrastructure. As shown in Figure 1, this infrastructure must provide for the storage and delivery of distributed, heterogeneous data sources, including geographic information systems data, weather, sensor grids (e.g., the Information Grid, NSOFT), and a variety of national assets. Oftentimes, the data of interest is large and distributed across different remote storage locations and data delivery networks. In addition, connection to remote data networks may be intermittent (e.g., sensor networks). A reliable, efficient, persistent data network called the Virtual Data Grid (VDG) is being developed for this purpose. The VDG allows data producers and consumers to see a consistent infrastructure for data exchange. The VDG also allows data produced from high performance simulations to be stored and made available by launching such simulations on the fly to produce needed data on demand.

Figure 1. Virtual Data Grid



A publish-subscribe applications programming interface (API) for data producers and consumers is provided.

The VDG leverages existing metacomputing Grid technologies and solutions to meet its requirements. For example, it uses the Globus toolkit to connect a grid of VDG servers that manages a set of persistent storage depots. Internally, the VDG implements data scheduling algorithms for performance, including intelligent caching, staging, and prefetching of data based on application patterns of access (an API for expressing access patterns has been developed). Data scheduling considers the demand for a particular data object by concurrent users. Novel data scheduling algorithms have also been developed. Data is moved close to the consumer before the data is actually required. Popular data objects are moved to a location near the consumer base. In some cases, popular data objects are replicated. Large data objects are also striped across several VDG server depots for increased performance. Fault-tolerant delivery of data objects is achieved by intelligent replication schemes.
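The article does not reproduce the publish-subscribe interface itself; the sketch below is purely illustrative of what such an API could look like for data producers and consumers, and every name in it is hypothetical rather than an actual VDG call:

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class VirtualDataGridStub:
    """Toy in-memory stand-in for a publish-subscribe data grid."""
    subscribers: Dict[str, List[Callable[[bytes], None]]] = field(default_factory=dict)
    store: Dict[str, bytes] = field(default_factory=dict)

    def publish(self, name: str, payload: bytes) -> None:
        # A real VDG would stage, replicate, or stripe the object across storage
        # depots; here we simply cache it and notify any waiting consumers.
        self.store[name] = payload
        for callback in self.subscribers.get(name, []):
            callback(payload)

    def subscribe(self, name: str, callback: Callable[[bytes], None]) -> None:
        # Register interest in a named data object; deliver immediately if cached.
        self.subscribers.setdefault(name, []).append(callback)
        if name in self.store:
            callback(self.store[name])

# Example usage: a simulation "produces" a field snapshot and a consumer receives it.
grid = VirtualDataGridStub()
grid.subscribe("weather/sector7", lambda data: print("received", len(data), "bytes"))
grid.publish("weather/sector7", b"\x00" * 1024)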

Currently, the VDG represents core infrastructure in support of the AHPCRC's efforts in Virtual Computing Environments. One application is predicting and simulating the evolution of toxic agents in large public places (e.g., stadiums). Data from many sources must be integrated and simulations must be launched quickly to solve this problem.

References

Jon B. Weissman, “Adapting Parallel Applications to the Grid,” Journal of Parallel and Distributed Computing, 2002.

Jon B. Weissman and Colin Gan, “The Virtual Data Grid: An Infrastructure To Support Distributed Data-centric Applications”, DoD High Performance Computing Modernization Program's User Group Conference, June 2002.

CTA: Forces Modeling and Simulation/C4I
Computer Resources: Cray T3E and IBM SP2 [AHPCRC]


Mixed Electromagnetic — Circuit Modeling and Parallelization for Rigorous Characterization of Cosite Interference in Wireless Communication Channels

T. FLEMMING AND B. PERLMAN

Communication-Electronics Command (CECOM), Fort Monmouth, NJ

C. SARRIS, W. THIEL, AND P. CZARNUL

University of Michigan, Ann Arbor, MI

K. TOMKO

University of Cincinnati, Cincinnati, OH

The objective of this research is to characterize cosite (collocation of electronic equipment on the same vehicle, station, or base) interference problems relevant to military communication ad-hoc network architectures, in complex, arbitrary environments, through scaleable computational techniques for the numerical modeling of electromagnetic wave propagation (Maxwell's equations) and transmitter-receiver geometries (circuit equations).

The numerical tools developed in this research provide, for the first time, the opportunity to perform end-to-end characterization of military radio network performance under cosite interference conditions, including realistic vehicle models and advanced receiver architectures, in environments as complex as a forest. Two applications evolved from this work. The first application was a physics-based optimization of the system design (such as the choice of coding scheme and power levels). The second application was an educated, simulation-based planning of exercises in tactical communication scenarios.

Relying on Measurement-Based Models

Despite the high interest that cosite interference problems present from both a research and an application point of view, few efforts have been made towards their rigorous theoretical characterization. Straightforward calculation of interfering power levels is not possible because of the close proximity of interferers and the presence of complex scatterers and antenna platforms. Therefore, some previous studies have relied on measurement-based models that omit the influence of system-specific parameters on communication link performance. On the other hand, time-domain methods for the solution of Maxwell's equations hold the promise of fully accounting not only for the effect of platforms on the radiation properties of vehicular antennas, but also for the operation of front-end electronics, through a state-equation description of the latter.

The computational cost of time-domain techniques grows large, however, when more complex operational environments are considered for cosite interference scenarios. An example of particular interest is the case where communication between a receiver and a transmitter, under cosite interference conditions, is attempted within a forest environment at very high frequency (VHF) band military radio frequencies.




Using a Method of Moments Approach

A hybrid solution has been implemented for modeling a multi-antenna, mobile communication system with a time-domain technique and its interaction with its forest environment by means of the method of moments (MoM) approach. In addition, advanced circuit models are included through a recently developed circuit simulator (TRANSIM) that allows for fast time-domain analysis of linear and nonlinear components.

Time-domain electromagnetic modeling is based on the Haar wavelet based Multiresolution Time-Domain (MRTD) technique, which allows for dynamically adaptive gridding throughout the computational domain. Using a novel, efficient thresholding algorithm, wavelet operations are not carried out in regions of the domain where slow field variations are observed. That effectively amounts to establishing a multigrid with a cell aspect ratio of 1:8 (since one wavelet level is used in all three dimensions). In addition, wavelet operations dictate an adaptive, global repartitioning algorithm for periodic load balancing, which improves the performance of the parallel code.
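The thresholding idea can be illustrated in a few lines; the sketch below is an assumption about the general approach rather than the actual MRTD implementation, and it simply zeroes Haar detail coefficients whose magnitude falls below a fraction of the field maximum so that wavelet-level work can be skipped where the field varies slowly:

import numpy as np

def haar_threshold(field: np.ndarray, rel_tol: float = 1e-3) -> np.ndarray:
    """One-level 1-D Haar decomposition with hard thresholding of detail terms."""
    approx = (field[0::2] + field[1::2]) / 2.0   # scaling (coarse-grid) coefficients
    detail = (field[0::2] - field[1::2]) / 2.0   # wavelet (detail) coefficients
    cutoff = rel_tol * np.max(np.abs(field))
    detail[np.abs(detail) < cutoff] = 0.0        # skip wavelet work where variation is slow
    # Reconstruct; cells whose detail terms were zeroed behave as coarse-grid cells only.
    out = np.empty_like(field)
    out[0::2] = approx + detail
    out[1::2] = approx - detail
    return out

# Example: a smooth field plus a sharp pulse; only the pulse region keeps detail terms.
x = np.linspace(0.0, 1.0, 128)
signal = np.sin(2 * np.pi * x) + (np.abs(x - 0.5) < 0.02)
compressed = haar_threshold(signal)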

Using Communication Scenario Simulations

Communication scenarios have been simulated involving the interaction of neighboring transmit-receive modules, communicating with a remote peer, within a forest environment. The simulations showed the effects of platform, multipath, and forest-induced attenuation of electromagnetic waves on network performance (Figure 1). Realistic models of vehicular platforms have been incorporated, and several of their radiation properties have been brought to light (Figure 2).

US Army Communication-Electronics Command scientists have predicted interfering power levels at multi-antenna vehicular systems and compared architectures, showing the effect of vehicle positioning on the performance of the radio network. The results have highlighted such issues as the dependence of parasitic interference between neighboring platforms on the shape of the vehicle, as well as geometry-specific scattering effects and resonances.

Scientific visualization was used to gain insight into the physical phenomena of interest for this research effort. Inspection of the visualization tool output led to several important conclusions about the effect of vehicle positioning on radio network performance.

Implementing Efficient, Scalable Computational Techniques

Efficient, scalable computational techniques for end-to-end characterization of cosite interference in military radio networks have been proposed and implemented in this project. These techniques allow for the accurate, physics-based evaluation of system performance through mixed electromagnetic-circuit simulations and provide the tactical communication analyst with a reliable simulation tool that takes into account critical electromagnetic interactions within complex, real-life environments.

References

C. Sarris and L. Katehi, "Coupling Front-Tracking and Wavelet Techniques for Fast Time Domain Simulations," Proceedings of the 2002 IEEE International Microwave Symposium, Seattle, WA, June 2002.

C. Sarris, W. Thiel, I.-S. Koh, K. Sarabandi, L. Katehi, and B. Perlman, "Hybrid Time-Domain Performance Analysis of Multi-Antenna Systems on Vehicular Systems," Proceedings of the 2002 European Microwave Conference, Milan, Italy, September 2002.

Figure 2. Electric field magnitude iso-surfaces on a military vehicle staircase model at 50 MHz

CTA: Computational Electronics and Nanoelectronics
Computer Resources: SGI Origin 2000 and IBM SP3 [ASC]

Figure 1. Interference pattern for three vehicular platforms (shown in lily) at 50 MHz, at the plane of the gap of the monopole antennas mounted on them. The thick arrow indicates the position of a transmitting antenna and the thin arrows the positions of receiving antennas.


"Presto": Parallel Scientific Visualization of Remote Data Sets
ANDREW JOHNSON

Army High Performance Computing Research Center (AHPCRC), Minneapolis, MN

Presto is an interactive data visualization tool developed at AHPCRC that gives researchers, working from their desktop workstations, the capabilities of a large and expensive visualization laboratory for visualizing large numerical simulation data sets. These data sets are computed on, and reside at, remote HPC systems. This is accomplished through a closely coupled client-server paradigm that leverages the parallel computing technology of remote HPC systems. The client (i.e., the user's desktop system) and the server (i.e., the remote HPC resource) need only be linked through a typical Internet or other dedicated network connection.

Limitations of Current Computing Models

A new visualization tool such as Presto is required because of various limitations in current computing and data processing models. Traditionally, a researcher would compute a large numerical simulation on an HPC resource physically located at a remote, central computing center such as an MSRC or DC, and then download the entire completed data set to a local workstation for post-processing and visualization. Today, researchers are computing data sets of between 1 and 200 gigabytes in file size, or larger. Transferring this amount of data across a network is impractical. Researchers also typically lack local computing resources powerful enough to load, process, and visualize the entire data set effectively. These bottlenecks have limited the size and scope of the numerical simulations that researchers can create and process effectively.

Presto Solves These Limitations

Presto solves both of these problems through its client-server implementation and parallel computing capabilities. The client-server implementation means that Presto actually consists of two separate components. The "Presto Client" runs on the user's desktop workstation, either Windows- or UNIX-based, and handles all user interaction and displays all visualization geometry sent to it by the server. Visualization is accomplished through traditional boundary shadings, cross-sections, iso-surfaces, streamlines, and volume rendering. The "Presto Server" runs on the remote HPC resource and is written using the distributed-memory parallel programming model based on MPI, which is portable to almost all HPC systems, such as the Cray T3E, Cray X1, IBM SP, SGI Origin servers, and PC clusters. The server is responsible for loading and processing the large 3-D data set and for generating and/or extracting any visualization geometry requested by the client component. The parallel implementation is fully scalable, so the size of the data set that can be visualized from the desktop is limited only by the size of the remote HPC resource.
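The client-server split described above can be sketched in a few lines of Python using mpi4py. The port number, request format, and the placeholder geometry-extraction routine are assumptions for illustration only and do not reflect Presto's actual protocol or API.

import socket
from mpi4py import MPI

def extract_isosurface(local_block, level):
    # Stand-in for per-rank geometry extraction on one mesh partition.
    return [("triangles for level", level, len(local_block))]

def serve(local_block, port=9999):
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    while True:
        request = None
        if rank == 0:
            # Rank 0 talks to the desktop client over an ordinary TCP socket.
            with socket.create_server(("", port)) as srv:
                conn, _ = srv.accept()
                request = conn.recv(1024).decode()       # e.g. "iso 0.5"
        request = comm.bcast(request, root=0)             # share the request
        level = float(request.split()[1])
        pieces = comm.gather(extract_isosurface(local_block, level), root=0)
        if rank == 0:
            conn.sendall(repr(pieces).encode())            # geometry back to client
            conn.close()

Because each rank holds only its own partition of the mesh, memory scales with the number of server processes, which is the property the article credits for visualizing billion-element data sets.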

Presto has been built to visualize 3-D unstructured mesh data sets composed of either scalar or vector variables, and it is typically used to visualize CFD (Figure 1) and CSM (Figure 2) data sets. Unstructured mesh data sets containing between 1 and 40 million tetrahedral elements are typically visualized using up to 32 processors of a Cray T3E. Data sets containing over 1 billion tetrahedral elements have been visualized with Presto using 1,024 processors. Presto is currently used in a production environment, and several future updates are being planned, including a tighter client-server security model as well as geometry compression techniques to reduce Presto's network bandwidth requirements.




Advantages of Presto

Presto provides three advantages to the researcher:

1) Visualizes remote data sets: The researcher simply leaves the data set on the large work disks of the HPC system that computed the results. The "server" part of Presto runs on that HPC system and is responsible for loading and processing the numerical simulation results. The user never needs to transfer the data set off of the HPC system that generated it. Data sets located on any HPC system can be visualized from any remote location as long as an Internet connection (TCP/IP) can be created between the user's local workstation and the remote HPC system.

2) Scalability: The Presto visualization "server" is fully parallel, based on the MPI library. This means that the server runs on any parallel architecture and is fully scalable in terms of memory. When visualizing a small data set, the user can use only a few processors to load the data set; when visualizing a large data set, the user may have to use more. Typically, around 8 to 16 processors are used, but Presto has been tested with as many as 1,024 processors on a Cray T3E. That many processors were required to visualize one of the largest data sets ever, an unstructured mesh containing 1 billion tetrahedral elements.

3) Easy to use: The "client" part of Presto handles all of the GUI user interaction and displays all of the graphics components sent to it by the server over the Internet. The client runs on Windows or any UNIX flavor (Linux, SGI, etc.). Once the connection between the client and the remote server is made, the application behaves like any traditional desktop application.

Presto saves time and money (users can use their traditional workstations rather than an expensive visualization system) and makes effective use of HPC parallel resources for post-processing and visualization. A user on almost any type of desktop system can visualize data sets of almost any size quickly and efficiently. This makes the use of numerical simulation by researchers more effective, efficient, and easier, especially for large data sets.

References

Johnson, A., "Automatic Mesh Generation, Geometric Modeling, and Visualization Tool Development Efforts at the Army HPC Research Center/NetworkCS," Proceedings of the DoD HPCMP Users Group Conference, Albuquerque, NM, 2000.

Johnson, A., "New AHPCRC Tools for Interactive Data Visualization," AHPCRC Bulletin, Spring 2001, Vol. 11, No. 1, 2001.

Figure 2. View of the Presto interface showing the results of a CSM simulation. Damage and material stress are shown for a projectile-armor penetration.

Figure 1. View of the Presto interface showing the results of a CFD simulation. Velocity vectors are shown, as well as a volume-rendered image showing velocity magnitude.

CTA: Computational Fluid Dynamics and Computational Structural Mechanics
Computer Resources: Cray T3E, IBM SP, and Cray X1 [AHPCRC] and SGI Origin 2000 [ARL]


Connecting the Naval Research Laboratory to the Department of the Navy's Chief Information Officer
JOHN SAMIOS AND ALAN WELDAY

High Performance Computing Modernization Program (HPCMP), Arlington, VA

Just prior to September 11, 2001, the Department of the Navy (DON) Chief Information Officer (CIO) approached the Network Infrastructure Services Agency (NISA) to support a high-speed permanent virtual path (PVP) between their offices and the Defense Research and Engineering Network (DREN) node in the Pentagon (NISA supports both locations and the cross-connection). After September 11, 2001, available bandwidth on some of the links was reduced, and the original request could no longer be easily accommodated. As a result, the request for a 40-Mbps PVP was reduced to 6 Mbps. The Naval Research Laboratory (NRL) looked to DREN to provide the link.

The technical problem centered on how to connect multiple DoD sites using real-time variable bit rate virtual circuits, which have very low latency and jitter. Resolution of the problem will have a wider impact on the DoD Science and Technology, Modeling and Simulation, and Test and Evaluation communities within the Navy.

Early Issues

Several early issues were identified: physical connectivity from the closest NISA Asynchronous Transfer Mode (ATM) switch to the DREN ATM switch at the Pentagon, and the availability, cost, and characteristics of the PVP had to be determined. Security also had to be considered; the existing security on the DON CIO workstations could not be compromised. Any traffic between these workstations and the NISA-supported systems would leave PT-1, tunnel through to the DREN wide area network (WAN), and enter the NISA enclave and existing NISA security mechanisms through the same path that Unclassified but Sensitive Internet Protocol Router Network and Internet traffic follows, without interacting with the NISA infrastructure.

In the past, procuring many independent unclassified and classified connections between each site would have solved this problem. But separate circuit solutions are very difficult to manage, very expensive, and usually result in severe limits on bandwidth availability.

Connection Established

In September 2002, a solution was completed that connects several TeraMedia VTC sessions between NRL and DON CIO spaces. This was done using real-time variable bit rate (rt-VBR) setups over DREN. Figure 1 shows the connection, using ATM-connected Macintosh workstations and a Sun server at DON CIO (PT-1).
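A back-of-the-envelope capacity check shows why several VTC sessions can share the reduced 6-Mbps path. The per-session rate and overhead figures below are assumed for illustration and are not values from the project.

pvp_capacity_mbps = 6.0        # capacity of the reduced PVP (from the text)
per_session_mbps = 1.5         # assumed sustained rate of one VTC session
overhead_fraction = 0.10       # assumed ATM cell and protocol overhead

usable_mbps = pvp_capacity_mbps * (1.0 - overhead_fraction)
sessions = int(usable_mbps // per_session_mbps)
print(f"about {sessions} concurrent sessions fit in a {pvp_capacity_mbps:.0f}-Mbps PVP")
# With these assumed values, roughly three concurrent sessions fit.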

This project brought to the fore other ATM requirements in the HPCMP community for rt-VBR permanent virtual circuits (PVCs). As a result, DREN WAN switches throughout the core fabric were upgraded to support rt-VBR PVCs as well as switched virtual circuits.

The Future of ATM and the Use of PVCs With DREN

Even as long-haul WAN providers move away from ATM-based networks to increasingly sophisticated IP-based networks, there will be ATM-oriented Quality of Service requirements that will be difficult to meet with the new architectures for some time to come. At the start of this project, DREN could not handle the low-jitter requirements that drove an rt-PVC solution; today it can. The beneficiaries of this innovation are not only the original testers but a much wider group of communities that includes modeling and simulation, test and evaluation, and research and development. Today, "hardware-in-the-middle" or "man-in-the-loop" simulations and tests can be done in real time. Previously, such experiments could not be done remotely, and researchers had to make assumptions to simulate the "real-time" effect on their test or experiment.





Figure 1. The box on the left shows an ATM switch in the PT-1 wiring closet under NISN control. From the PT-1 ATM switch, there is a 6-Mbps rt-VBR PVP through the NISN cloud to the DREN switch in the Pentagon. The PT-1 switch signals with the DREN switch (and uses a subset of its ATM NSAP address space). The DREN/PT-1 switches fully signal with DREN, so direct end-to-end ATM connections are possible. The LECS, LES, and BUS run on the DREN switch, and an ELAN was created for this project.



Novel Electron Emitters and Nanoscale Devices
JERRY BERNHOLC

North Carolina State University, Department of Physics, Raleigh, NC

Sponsor: Office of Naval Research (ONR), Arlington, VA

Nanotubes are one of the most interesting new materials to emerge in the past decade. Several layered materials form tubular structures, but the most prevalent are carbon (C) and boron nitride (BN) nanotubes. The discovery of carbon nanotubes as a material with outstanding mechanical and electrical properties has led to a quest for other novel graphene-based structures with technologically desirable properties. The closely related boron nitride (BN) nanotubes and mixed BN/C systems, which are now being produced in gram quantities, have electronic properties that are complementary to those of pure carbon nanotubes and could thus be useful in a variety of electronic devices.

The focus of this work was to use Challenge-level HPC resources to investigate and predict properties of advanced new materials and technologies critical to the DoD's needs. This research has shown that BN/C nanotube structures can be used for effective band-offset engineering in nanoscale devices and for robust field emitters with efficiencies up to two orders of magnitude higher than those built from carbon nanotubes.

Using Sophisticated ab initio Techniques

Until very recently, the design and prediction of advanced materials was an empirical undertaking. The advent of HPC enabled the use of sophisticated ab initio techniques, which can predict the properties of complex materials. The simulations were carried out using a novel, real-space multigrid method developed especially for very large-scale quantum calculations. Plane-wave-based methods have also been used for independent cross-checking of the key results. The use of multigrid techniques permits efficient calculations on ill-conditioned systems with multiple length scales and/or high energy cutoffs. The multigrid codes have been very well optimized for massively parallel computers, and their execution speed scales linearly with the number of processors. The calculations required up to 128 processors of the Cray T3E supercomputer and were carried out in a supercell geometry, using up to 6 layers of boron nitride and carbon. In a periodic superlattice of polar and non-polar materials (such as BN and C), any spontaneous polarization field manifests itself as a periodic oscillation in the electrostatic potential of the system, due to periodic boundary conditions. The full potential then displays a sawtooth behavior that is the signature of a spontaneous polarization field superimposed onto a periodic crystalline potential. To obtain the value of this field, the planar average of the electrostatic potential along the nanotube axis has been evaluated. This average, taken over the plane perpendicular to the tube, displays strong oscillations due to the varying strength of the ionic potentials. To remove this effect, the corresponding "bulk" averages have been subtracted, yielding the one-dimensional macroscopic average of the electrostatic potential of the system. Since nanotube calculations include a sizable vacuum region surrounding the nanotube, the band offsets could be computed directly by referring the highest occupied levels to the vacuum level.
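The planar-plus-macroscopic averaging step described above can be illustrated with a short numerical sketch. The grid size, lattice period, and synthetic potential are illustrative assumptions, not the project's ab initio data; only the averaging procedure itself is the point.

import numpy as np

# Synthetic 1-D stand-in for the planar-averaged potential along the tube
# axis: a periodic "ionic" oscillation plus a weak linear ramp (the field).
nz, period_cells = 600, 20                 # assumed grid size and lattice period
z = np.arange(nz)
planar_avg = np.cos(2 * np.pi * z / period_cells) + 1e-3 * z

# Macroscopic average: a running mean over one lattice period removes the
# crystalline oscillation and leaves the slowly varying ramp.
kernel = np.ones(period_cells) / period_cells
macro_avg = np.convolve(planar_avg, kernel, mode="valid")

# The slope of the macroscopic average gives the internal field (arbitrary units).
slope = np.polyfit(np.arange(macro_avg.size), macro_avg, 1)[0]
print(f"recovered field (slope): {slope:.2e} per cell (input ramp was 1.0e-03)")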

Symmetry of Nanotube Important

It is clear that the symmetry of the nanotube plays an important role in determining the magnitude of the spontaneous polarization field. The strongest effects will be observed in the (k, 0) zigzag nanotubes, since this geometry maximizes the dipole moment of the BN bond. This study has primarily concentrated on zigzag nanotubes with diameters of up to 12 Å.




The value of the polarization field obtained from the slope of the macroscopic average potential for zigzag tubes is shown in Figure 1. In contrast, the (k, k) armchair nanotubes are not expected to display any spontaneous polarization field, because any individual nanotube ring must be charge neutral, so that no fields are possible. This has been directly verified for the case of a (5,5) BN/C superlattice.

Chiral nanotubes will have fields between the zigzag and armchair values.

From the calculated band offsets, it is clear that the valence state of the BN/C superlattices is always spatially localized in the carbon region. This state plays an important role in determining the response to the macroscopic electric field induced by the BN section. In particular, the oscillatory behavior observed as a function of the helicity index can be understood from the symmetry property of the valence state, displayed in Figure 2 for two zigzag nanotubes. Figure 2 directly shows that the valence state is localized within the carbon section. For small-diameter nanotubes, it displays either a longitudinal or a transverse symmetry, as exemplified by the (7,0) and (8,0) tubes, respectively. Specifically, for (k = 3p+1, 0) nanotubes such as the (7,0) tube, the valence state always assumes a longitudinal character. The axial distribution of valence electrons will therefore induce a depolarization field that is opposite to the one intrinsic to the BN section. This field is quite effective in reducing the polarization in small-diameter nanotubes, so that for the (7,0) nanotube the screening is almost total and the observed macroscopic field is close to zero. For larger-diameter nanotubes, the symmetry of the valence state gradually loses its peculiar axial or longitudinal character, and the macroscopic field asymptotically approaches the value obtained for a flat BN sheet in the "zigzag" direction.

Assessing the Effectiveness of BN/C Systems

Despite the screening by the valence electrons distributed over the carbon portion of the BN/C superlattices, a net polarization field is built up along any zigzag structure. The existence of an intrinsic macroscopic field will clearly influence the extraction of electrons from BN/C systems when these are used as field-emitting devices. Qualitatively, a good electron emitter is characterized by a large geometrical field enhancement factor, due to the needle-like shape of nanotubes, and by a small work function, which is an intrinsic property of the emitter material (Figure 3). Quantitatively, the current at the emitter tip is calculated from the so-called Fowler-Nordheim relationship. To assess the effectiveness of BN/C systems as emitters, the field enhancement factor was computed for tubes of length 15.62 Å and an applied field of 0.11 V/Å. As expected, the applied field is very well screened inside the system, and the local field enhancement factors were similar for different BN/C systems. Since the field enhancement factor increases linearly with the size of the nanotubes, large enhancement factors are possible for long nanotubes.

Figure 2. Electron distribution in the top valence state for (7,0) and (8,0) BN/C nanotubes. Note that the symmetries of the two states are quite different.

Figure 1. The structure of a BN/C nanotube superlattice (top), the average electrostatic potential (blue), and its macroscopic average (red). The slope in the macroscopic potential leads to an electric field inside the nanotube.


However, the polarization field can clearly be used to further enhance the field-emitting properties. To examine this effect quantitatively, the researchers built a set of finite-sized (6,0) zigzag structures using B, N, and C in various combinations. The work functions were then computed as the difference between the vacuum level and the Fermi energy of the system. The former was obtained by the potential-averaging procedure discussed previously, while the latter was simply given by the highest occupied eigenstate of the system. With this method, we calculated a work function of 5.01 eV for a (10,10) carbon nanotube, in close agreement with previously published values. For the smaller-diameter (6,0) carbon nanotube, we found a considerably larger work function of 6.44 eV, primarily due to curvature effects.

Turning to the heterostructures, the researchers considered a finite NB/C (6,0) system consisting of four alternating N and B layers followed by 8 C layers, giving a total of 96 atoms. Due to the net polarization field experienced by the electron, the work function is reduced to 5.04 eV at the C tip and increased to 7.52 eV at the N tip. The same trend is observed for the BN/C system, for which the work function at the B tip is 5.00 eV while it takes a value of 6.45 eV at the C tip. Therefore, the work function decreases by a significant 1.40 eV compared to the pure carbon system. This is large enough to lead to significant macroscopic effects. According to the Fowler-Nordheim relationship, the current density depends exponentially on the work function. It follows that the insertion of BN segments in C nanotubes will increase the current density by up to two orders of magnitude compared to pure carbon nanotube systems.
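Because the Fowler-Nordheim current density depends exponentially on the work function, the roughly 1.4-eV reduction quoted above translates directly into a large current gain. A minimal sketch using the standard elementary Fowler-Nordheim form follows; the local tip field is an assumed, illustrative value, not a number from the study.

import math

# Standard elementary Fowler-Nordheim form:
#   J = (A * F**2 / phi) * exp(-B * phi**1.5 / F)
# with phi in eV and the local field F in V/nm; A and B are the usual constants.
A = 1.541434e-6
B = 6.830890

def fn_current_density(phi_eV, field_V_per_nm):
    return (A * field_V_per_nm**2 / phi_eV) * math.exp(
        -B * phi_eV**1.5 / field_V_per_nm)

phi_pure_C = 6.44       # eV, (6,0) carbon nanotube (from the text)
phi_NBC_tip = 5.04      # eV, C tip of the NB/C superlattice (from the text)
local_field = 7.0       # V/nm, assumed enhanced field at the tip (illustrative)

gain = fn_current_density(phi_NBC_tip, local_field) / \
       fn_current_density(phi_pure_C, local_field)
print(f"emission current gain at the same local field: ~{gain:.0f}x")
# With this assumed field the gain is on the order of a hundred, i.e., roughly
# two orders of magnitude, consistent with the result quoted above.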

Figure 3. Electron distribution near the tip of a BN/C nanotube in a field emitter configuration

CTA: Computational Electronics and Nanoelectronics
Computer Resources: Cray T3E [NAVO]


Future Uses for Nanotubes

Nanotubes are prime candidates for novel electron emitters to be used in ultrahigh-resolution flat panel displays and cold-cathode-based microwave amplifiers. The hundred-fold increase in emission current density would have a major effect on the use and efficiency of electron emitters on the battlefield and in support systems. The variability of the band offsets in BN/C systems as a function of their radius offers substantial flexibility when designing novel nanoscale electronic devices.

References

V. Meunier, C. Roland, J. Bernholc, and M. Buongiorno Nardelli, "Electronic and field emission properties of BN/C nanotube superlattices," Appl. Phys. Lett., (in press 2002).

J. Bernholc, M. Buongiorno Nardelli, V. Meunier, S. Nakhmanson, D. Orlikowski, C. Roland, and Q. Zhao, "Mechanical Transformations, Strength, Pyroelectricity, and Electron Transport in Nanotubes," invited presentation at the 2002 International Conference on Computational Nanoscience and Nanotechnology, San Juan, Puerto Rico, April 2002.


An Integrated Missile Defense Approach
MILT JOHNSON AND MICHAEL ROBERTSON

Joint National Integration Center (JNIC), Schriever AFB, CO

The Missile Defense Agency (MDA) uses a series of operator-in-the-loop wargames to examine Ballistic Missile Defense System (BMDS) issues for the near term (2004–2006) and far term (2012). The primary means of examining these issues is to use DoD HPC resources in conjunction with the Missile Defense Wargame and Analysis Resource (MDWAR) and other federated simulations. To achieve optimal realism within its wargames, MDWAR explicitly models threats, interceptors, sensors, and communication systems (including passing tactical message contents in near-native formats). This approach permits MDWAR to enhance field exercises and other HWILT by providing high-level integration between simulated future missile defense system elements and today's real battle management, command, control, communications, computers, and intelligence equipment. Planners can evaluate the practical effectiveness of adding additional family-of-systems interoperability features by extending the tactical message set within MDWAR's joint data network communications model.

MDWAR Innovations

MDWAR's design includes extensive data collection and information retrieval capabilities that provide insight into tactical messages and engagement event outcomes. The data collected during a wargame are available worldwide using the Secret Internet Protocol Router Network (SIPRNET) and any commercially available Internet browser. Multiple innovations are a direct result of the HPC contributions. One innovation is that MDWAR is the first successful Optimistic, Parallel, Discrete Event Simulation (O-PDES) framework. In achieving MDWAR's vision, the designers committed to the use of an O-PDES framework as the basic simulation engine. The decision to use O-PDES as the foundation for a real-time, interactive simulator destined to participate in, or even lead, Distributed Interactive Simulation and High Level Architecture federations is not without controversy. The Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES) is the most significant advancement in state-of-the-art discrete event simulation in the last five years and was the only mature framework. Its primary use is in as-fast-as-possible (AFAP) academic applications, but it is available as a government off-the-shelf product.
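The optimistic (Time Warp-style) synchronization that distinguishes O-PDES from conservative approaches can be summarized in a short sketch. The class below is a deliberately simplified illustration, not SPEEDES or MDWAR code, and it omits anti-messages and global virtual time management.

import copy

class OptimisticLP:
    """One logical process that executes events speculatively, saves state
    snapshots, and rolls back and re-executes when a straggler (an event
    earlier than its local clock) arrives."""

    def __init__(self, state):
        self.init_state = copy.deepcopy(state)
        self.state, self.clock = state, 0.0
        self.snapshots = []                  # (event time, state after event)
        self.history = []                    # (timestamp, event_fn) executed

    def receive(self, timestamp, event_fn):
        redo = []
        if timestamp < self.clock:           # straggler: undo optimistic work
            redo = self._rollback(timestamp)
        self._execute(timestamp, event_fn)
        for ts, ev in redo:                  # replay rolled-back events in order
            self._execute(ts, ev)

    def _execute(self, timestamp, event_fn):
        event_fn(self.state)
        self.clock = timestamp
        self.history.append((timestamp, event_fn))
        self.snapshots.append((timestamp, copy.deepcopy(self.state)))

    def _rollback(self, timestamp):
        # Discard snapshots taken at or after the straggler's time and
        # restore the most recent state that is still valid.
        while self.snapshots and self.snapshots[-1][0] >= timestamp:
            self.snapshots.pop()
        self.clock, saved = self.snapshots[-1] if self.snapshots else (0.0, self.init_state)
        self.state = copy.deepcopy(saved)
        redo = sorted((e for e in self.history if e[0] >= timestamp),
                      key=lambda e: e[0])
        self.history = [e for e in self.history if e[0] < timestamp]
        return redo

The speculation is what allows many processors to work ahead in parallel instead of waiting on a global event ordering, which is the property MDWAR exploits for real-time execution.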

MDWAR expands the scalability envelope. The primary reason for replacing the previous wargaming tool was to achieve the performance needed to simulate more objects at higher fidelity. In the past, problem size and fidelity were reduced to achieve real-time execution. MDWAR's goal is to maintain the desired problem size and fidelity and "dial up" the computing power needed to achieve real-time performance through the use of distributed processing technologies.

Figure 1. BMDS System


The first National Missile Defense wargame in 1999 ran on twelve CPUs and involved six radars, a simple point-to-point communications model, tens of threat objects, tens of interceptors, and 25 non-interactive player positions. Seven months after its implementation, MDWAR was simulating 56 radars, 3,000 interceptors, 50 threat objects, 59 fully independent battle managers, and a theater-wide Joint Tactical Information Distribution System (JTIDS) communication system operating in an imperfect space-track environment. It supports 32 fully interactive player positions at three command and control levels and runs on 25 CPUs. Two months later, MDWAR was handling an approximately 50 percent increase in threat load through the modeling of duplicate tracks in an imperfect Single Integrated Air Picture. Test problems are being run on 88 CPUs in preparation for even larger problems. The practical demonstration of this kind of scalability is another important advance in the state of the art.

O-PDES Techniques Eliminate Simulation Constraints

MDWAR changes the fundamental nature of the capability-versus-performance tradeoff in force-on-force-level, real-time simulators. With previous missile defense simulators, it was often necessary to simplify the simulation to achieve real-time execution by reducing modeling detail and resolution or scenario complexity. MDWAR pioneers the application of O-PDES techniques to harness the processing capability of today's high-performance, multiprocessor computers. MDWAR's advanced technology all but eliminates the need to constrain the simulated scenario definition for real-time execution. MDWAR maintains real-time performance, simulates more defense elements at higher fidelity, and operates in more complex and realistic environments than was previously possible with other wargaming tools. Its high-end capabilities are impressive, yet it retains its predecessor's variable fidelity and portability.

MDWAR overcomes the customary constraint of modeling only the perfect-information situation. In a Joint Theater and Missile Defense Organization wargame, the JNIC tested the introduction of fog-of-war contributors to evaluate joint command and control modes and firing doctrine in the presence of imperfect information. Real-world location and alignment biases (errors) were intentionally introduced into MDWAR sensor models; the perceived tracks were passed to a simulated JTIDS communications network, where explicit correlation algorithms produced realistic track "drops," "swaps," and "duals," as well as "Reporting Responsibility dueling" behaviors.1 The sensor biases and correlation control parameters were tuned to achieve the target space-track picture quality necessary to satisfy the game objectives. This represents an advance in the state of the art because MDWAR is the only real-time, interactive simulator with both the fidelity and the capacity to examine theater-wide ballistic missile defense operational behaviors in the presence of uncertain information.

Target Assignment Decision Aid Developed

MDWAR also provides a test environment within which a variety of test articles, concepts of operations (CONOPS), tactics, techniques, and procedures, algorithms, decision aids, real-world systems, and prototypes are embedded and their human-in-control aspects analyzed. In an early application, MDWAR developers were asked to automate the specified joint firing doctrine in a way that allows the operators to control theater ballistic missile engagements using a management-by-exception approach. By designing the Automated Weapon Target Assignment to be fully compatible with MDWAR's mission (e.g., operating in a fog-of-war environment), the developers produced a prototype of what is believed to be the first theater-wide automated weapon target assignment decision aid. This first-generation prototype was successfully exercised in an imperfect-information environment in its very first exposure.

MDWAR has a heavy data collection concept that eliminates performance impacts. The primary purpose of any simulator is to generate data, and MDWAR is no exception.

1 R2 dueling refers to reporting rules for radar data; conflicts arise in incoming radar data and must be deconflicted.



It is common practice on many simulators, even non-real-time ones, to turn off data collection in an effort to get acceptable runtime performance. MDWAR's goal is to collect data with little or no impact on runtime performance. MDWAR's architecture, which combines shared-memory buffering with a separate, dedicated data collection application, satisfies this goal. The contents of every tactical message and data on every simulated event (typically one to two million events) produce one to two gigabytes of data that are collected in real time with no measurable negative impact on runtime performance.
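The "buffer now, write later" idea behind that architecture can be sketched briefly. In this sketch a bounded inter-process queue stands in for MDWAR's shared-memory buffer, and the record format, sizes, and file name are illustrative assumptions.

import json
import multiprocessing as mp

def collector(queue, path="wargame_events.jsonl"):
    """Runs as its own process; drains records to disk until the sentinel."""
    with open(path, "w") as out:
        for record in iter(queue.get, None):
            out.write(json.dumps(record) + "\n")

def simulate(queue, n_events=1000):
    """Stand-in simulation loop: generate events and hand them off cheaply,
    so file I/O never blocks the simulation itself."""
    for i in range(n_events):
        queue.put({"t": i * 0.1, "type": "track_update", "id": i % 50})
    queue.put(None)                        # sentinel: no more events

if __name__ == "__main__":
    q = mp.Queue(maxsize=10_000)           # bounded buffer between processes
    writer = mp.Process(target=collector, args=(q,))
    writer.start()
    simulate(q)
    writer.join()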

Technical Enhancements Spark Improvements

Through the use of HPC and state-of-the-art advances in simulation, MDA and the HPCMP have taken wargaming to a new level. MDWAR's design goal is to examine and develop CONOPS, doctrine, tactics, techniques, and procedures through operator-in-the-loop experiments that include real-world fog-of-war challenges. The combination of advanced simulation technologies, a performance-oriented architecture, state-of-the-art HPC computers, and unique gaming facilities puts this goal well within the reach of the air and missile defense communities. These technical enhancements translate into a number of improvements in the support provided to MDA wargaming customers: a dramatic increase in the number of runs during game times; significant improvement in quantitative data collection, leading to better analysis and understanding of emergent behaviors; and a rapid prototyping laboratory for analyzing operational concepts, battle management, mission function automation, and operator displays. The end result is a flexible tool that can be readily configured to represent desired modes of command and control, system architecture, and threat.

References

Mai, Michael, Captain Mance Harmon, USAF, and Andrea Mai, "Interoperability in the Joint Theater Using a Shot Opportunity Message," presented at the 2001 National Fire Control Symposium, August 21–26, 2001.

Commander Hill, USN, "JTBMD Wargame Results," presented at the 2002 Multinational BMD Conference, June 3–6, 2002.

Danberg, Bob, "How We Fought the TBMD 2010 Family of Systems in Operator-in-the-Loop Wargames," presented at the 2002 Multinational BMD Conference, June 3–6, 2002.

CTA: Forces Modeling and Simulation (FMS)/C4I
Computer Resources: SGI Origin 2000 [JNIC]


Successfully Operating the Wargame 2000 Simulation
DAVID FULLER

Joint National Integration Center (JNIC), Schriever AFB, CO

1 http://www.jntf.osd.mil/Programs/Modeling/wg2k.asp
2 Approximately 10,000 simulation executions occurred in 2000 and 19,144 occurred in 2001, a performance level enhanced by improved communication and a server upgrade. Up to 30,000 may be achieved in 2003.

The JNIC provides the nation a center of excellence for joint missile defense interoperability testing, wargaming, exercise simulation, modeling, and analysis. JNIC uses a locally developed simulation, Wargame 2000 (WG2000). WG2000 is intended to provide a simulated combat environment that allows warfighting commanders, their staffs, and the acquisition community to examine missile and air defense CONOPS.1 In the AFAP (non-real-time) mode, WG2000 has a significant economic advantage in executing test simulations in parallel on the Origin 2000 servers instead of the serial execution seen on most real-time programs. The Air and Missile Defense Simulation, or AMDSim, is the core computational software component of WG2000. AMDSim contains the code for all functional simulation elements, including battle management, sensors, communications nodes, missile threat and defense elements, and graphics displays. Using WG2000, military planners from around the world exercise command authority in simulated battles for command and control (C2) human-in-control analysis.

SPEEDES Provides the Wargame 2000 Framework

Jet Propulsion Laboratory's state-of-the-art SPEEDES framework forms the underlying basis for WG2000's high-performance, object-oriented, discrete-event, real-time simulation support. With its forte in time and multi-CPU management, SPEEDES supports both conservative and optimistic synchronization strategies. WG2000 improves upon the previous-generation MDA/JNIC Advanced Real-time Gaming Universal Simulation in pursuit of higher fidelity, greater realism, and configuration flexibility, with significant parallel CPU execution supported by meaningful graphics. JNIC's pioneering human-in-control C2 wargame efforts remain a challenging undertaking.


A WG2000 simulation execution consumes 3 to 25 CPUs for 35 to 60 minutes in either dedicated or shared mode. Most executions use 22–25 CPUs in shared mode. In shared mode, up to six complete copies of the simulation may execute simultaneously per server. WG2000 simulation executions most frequently occur in the AFAP mode on the shared servers. WG2000 does not require the use of special hardware, such as aircraft cockpits or communication devices, often seen in other types of simulators.

WG2000 server utilization nearly doubled in 2001.2 Such a huge execution success on HPC servers cannot occur with a traditional real-time simulation product, due to serial operation dependencies. Consequently, with this economic model, the MDA program realizes extraordinary execution productivity that enhances the quality of the delivered software.
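The economics of running many AFAP copies in parallel can be seen with a rough throughput estimate. The number of servers, the usable hours per day, and the assumption that every run takes the worst-case hour are illustrative, not program figures.

copies_per_server = 6          # shared mode, from the text
run_minutes = 60               # worst case of the 35 to 60 minute range
servers = 2                    # assumed number of shared servers
hours_per_day = 20             # assumed usable hours per day

runs_per_day = servers * copies_per_server * (hours_per_day * 60 // run_minutes)
print(f"about {runs_per_day} AFAP executions per day, "
      f"about {runs_per_day * 365} per year")
# With these assumptions: roughly 240 runs/day; even a fraction of that
# capacity is consistent with the 19,144 executions reported for 2001.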

Parallelization Offers Time Savings

Large-scale servers, with anywhere from 256 to 512 CPUs at the MSRCs and up to 128 CPUs at the JNIC DC, offer their respective sites the capacity to scale their products with the use of more CPUs applied in parallel to the problem space.



In effect, a problem that started as a serial sequence of events becomes divided into parallel tasks that execute on either dynamically or statically assigned CPUs. The parallel architecture model should produce more work in the same period of time than the serial model. However, the launch, detection, and defense against a threat missile obviously present a serial time sequence of events, so the composition of parallel tasks demands much from the software designers. In 2001, WG2000 achieved success at the eleven-CPU scaling level. Today, scaling to 22 to 25 CPUs represents more than a 100 percent scaling improvement.

The combination of the WG2000 execution model and large-scale servers provides MDA with a successful, low-cost-per-execution model through the unique application of large-server technology. WG2000 is expected to move to even faster large-scale servers to produce even more product during its life cycle.

References

SPEEDES User's Guide, # SO24, Rev. 3, 4, Solana Beach, CA: Metron Inc., October 2001.

CTA: Forces Modeling and Simulation/C4I
Computer Resources: SGI Origin 2000 [JNIC]


SECTION 3: HIGH PERFORMANCE COMPUTING MODERNIZATION PROGRAM


Introduction

The DoD HPCMP is a strategic national defense initiative that exists to enable over 4,000 scientists and engineers to address the engineering challenges of the S&T and T&E communities. The program provides an unparalleled collaborative environment, linking users at over 100 DoD laboratories, test centers, universities, and industrial sites. Because the HPCMP is managed as a DoD-wide corporate asset, expert teams are guaranteed access to a world-class, commercial HPC capability based on mission priority rather than on the size or budget of their organizations.

A USER-CENTERED MANAGEMENT FOCUS

Traditional HPC programs are built around a single central facility that acts as the "center of gravity" for all activities. The HPCMP, in contrast, provides a world-class HPC capability accessible over a single network service, the DREN. All the program's MSRCs and DCs are available to HPCMP users, with the allocation of resources determined independently by the Services and Defense Agencies. Additionally, users have a direct role in overall policy and implementation through the HPC User Advocacy Group (UAG) and through their CTA leaders, who unify the HPCMP community.

INTEGRATED PROGRAM STRATEGY

The program has evolved beyond an initial phase of procurement and program development to field, in FY 1996, a world-class HPC infrastructure for the DoD S&T and T&E communities. Figure 1 depicts the HPCMP integrated program strategy. The infrastructure comprises three components: HPC Centers, Networking, and Software Applications Support. Each of these components is discussed in detail in this section.

The HPCMP continually acquires the best commercially available, high-end, high performance computers. The program also acquires and develops common-use software tools and programming environments, expands and trains the DoD HPC user base, and exploits the best ideas, algorithms, and software tools emerging from the national HPC infrastructure.

Figure 1. Integrated Program Strategy of the Department of Defense High Performance Computing Modernization Program


USER REQUIREMENTS

The requirements process, developed and employed by the program, includes an annual survey of the DoD S&T and T&E communities, coupled with site visits to discuss requirements and provide information to user groups. The program obtains additional feedback through user satisfaction surveys, annual user group conferences, and advisory panels that establish policies and procedures at the HPC centers.

HPC systems performance metrics are tracked throughout the year at the computational project level to ensure that projects receiving allocations are able to use the resources effectively and efficiently. This process provides feedback to the Services and Agencies before resources are allocated for the following fiscal year. The overall integrated process furnishes accurate and timely data on which to base program decisions.

WORLD-CLASS SHARED HPC CAPABILITY

The HPCMP provides almost 27 teraflops of peak computing capability, an integrated computing environment, and a full complement of specialized and general user support. The program provides a variety of architectures and classes of systems, ranging from the most powerful commercial configurations available from major US HPC vendors to more specialized systems configured for specific types of mission applications. HPC capability is upgraded regularly to ensure that DoD users are among the first to use new generations of HPC capabilities. During FY 2002, the HPCMP completed its third technology upgrade at the MSRCs and continued to fund special capabilities at selected DCs. The program will continue to make major upgrades to capabilities at the MSRCs in FY 2003.

DOD HPC CHALLENGE PROJECTS

The HPCMP allocates approximately 25 percent of its HPC resources each fiscal year to competitively selected DoD Challenge Projects. These high-priority, computationally intensive projects develop and use pathfinding applications that meet mission-critical warfighting and science and technology requirements. Forty-one projects were selected for implementation in fiscal year 2002. Many of the projects use multiple hardware platforms and, in some cases, multiple shared resource centers. Table 2 lists the fiscal year 2002 DoD HPC Challenge Projects.

HPCMP COMPUTATIONAL TECHNOLOGY AREAS

The DoD HPCMP user community is organized around ten broad CTAs. Each CTA has a designated leader, a prominent DoD scientist or engineer working in the discipline. Users of HPCMP resources address challenges in the following computational areas:

Computational Structural Mechanics, Mr. Michael Giltrud, Defense Threat Reduction Agency, Alexandria, VA

Computational Fluid Dynamics, Dr. Robert Meakin, Army Aviation and Missile Command, Moffett Field, CA

Computational Chemistry and Materials Science, Dr. Ruth Pachter, Air Force Research Laboratory, Materials/Manufacturing Directorate, Wright-Patterson AFB, OH


Table 2. DoD High Performance Computing Challenge Projects (Fiscal Year 2002)

Project; Project Leader(s); Organization

Computational Structural Mechanics

Evaluation and Retrofit for Blast Protection in Urban Terrain; James Baylot, Tommy Bevins, James O'Daniel, Young Shon, David Littlefield, and Christopher Eamon; Engineer Research and Development Center, Vicksburg, MS

Modeling Complex Projectile Target Interactions; Kent Kimsey and David Kleponis; Army Research Laboratory, Aberdeen Proving Ground, MD

Three-Dimensional Modeling and Simulation of Bomb Effects for Obstacle Clearance; Alexandra Landsberg; Naval Surface Warfare Center, Indian Head Division, Indian Head, MD

Computational Fluid Dynamics

3-D Computational Fluid Dynamics Modeling of the Chemical Oxygen-Iodine Laser (COIL); Timothy Madden; Air Force Research Laboratory, Kirtland AFB, NM

Active Control of Fuel Injectors in Full Gas Turbine Engines; Suresh Menon; Army Research Office, Research Triangle Park, NC

Airdrop System Modeling for the 21st Century Airborne Warrior; Keith Stein, Richard Benney, Tayfun Tezduyar, James Barbour, Hamid Johari, and Sonya Smith; Army Soldier & Biological Chemical Command, Natick, MA

Analysis of Full Aircraft with Massive Separation Using Detached-Eddy Simulation; Kenneth Wurtzler, Robert Tomaro, William Strang, Matthew Grismer, Capt James Forsythe, Kyle Squires, and Philippe Spalart; Air Force Research Laboratory, Wright-Patterson AFB, OH

Applied CFD in Support of Aircraft-Store Compatibility and Weapons Integration; John Martel; Air Force SEEK EAGLE Office, Eglin AFB, FL

High-Fidelity Analysis of UAVs Using Nonlinear Fluid/Structure Simulations; Reid Melville and Miguel Visbal; Air Force Research Laboratory, Wright-Patterson AFB, OH

High-Resolution Adaptive Mesh Refinement (AMR) Simulation of Confined Explosions; John Bell, Marcus Day, Charles Rendleman, Mike Lijewski, Vincent Beckner, and Allen Kuhl; Defense Threat Reduction Agency, Alexandria, VA

Hybrid Particle Simulations of High Altitude Nuclear Explosions in 3-D; Stephen Brecht; Defense Threat Reduction Agency, Alexandria, VA

Large-Eddy Simulation of Steep Breaking Waves and Thin Spray Sheets Around a Ship: The Last Frontier in Computational Ship Hydrodynamics; Ki-Han Kim, Dick Yue, and Douglas Dommermuth; Naval Surface Warfare Center, Carderock Division, Bethesda, MD

Large-Eddy Simulation of Tip-Clearance Flow in Stator-Rotor Combinations; Parviz Moin, Meng Wang, Donghyun You, and Rajat Mittal; Office of Naval Research, Arlington, VA

Physics Modeling of Microbubble Drag Reduction; Parney Albright and James Uhlman; Defense Advanced Research Projects Agency, Arlington, VA

Multiphase CFD Simulations of Solid Propellant Combustion in Modular Charges and Electrothermal-Chemical (ETC) Guns; Michael Nusca and Paul Conroy; Army Research Laboratory, Aberdeen Proving Ground, MD


Computational Fluid Dynamics (continued)

Numerical Modeling of Wake Turbulence for Naval Applications: Vortex Dynamics and Late-Wake Turbulence in Stratification and Shear; Michael Gourlay, Donald Delisi, David Fritts, Maria-Pascale LeLong, James Riley, Robert Robins, and Joe Werne; Office of Naval Research, Arlington, VA

Parallel Simulations of Weapons/Target Interactions Using a Coupled CFD/Computational Structural Dynamics (CSD) Methodology; Joseph Baum; Defense Threat Reduction Agency, Alexandria, VA

Three-Dimensional, Unsteady, Multi-Phase CFD Analysis of Maneuvering High Speed Supercavitating Vehicles; Robert Kunz, Jules Lindau, and Howard Gibeling; Office of Naval Research, Arlington, VA

Time-Accurate Aerodynamics Modeling of Synthetic Jets for Projectile Control; Jubaraj Sahu, Sukumar Chakravarthy, and Sally Viken; Army Research Laboratory, Aberdeen Proving Ground, MD

Time Accurate Computational Simulations of Ship Airwake for DI, Simulation and Design Applications; Susan Polsky; Naval Air Warfare Center Aircraft Division, Patuxent River, MD

Unsteady RANS Simulation for Surface Ship Maneuvering and Seakeeping; Ki-Han Kim, Joseph Gorski, Fred Stern, Lafayette Taylor, and Mark Hyman; Naval Surface Warfare Center, Carderock Division, Bethesda, MD

Computational Chemistry and Materials Science

Characterization of DoD Relevant Materials and Interfaces; Emily Carter; Air Force Office of Scientific Research, Bolling AFB, DC

Computational Chemistry Models Leading to Mediation of Gun Tube Erosion; Cary Chabalowski, Margaret Hurley, Dan Sorescu, Roger Ellis, Donald Thompson, Betsy Rice, and Gerald Lushington; Army Research Laboratory, Aberdeen Proving Ground, MD

First Principles Studies of Technologically Important Smart Materials; Andrew M. Rappe; Office of Naval Research, Arlington, VA

Interactions of Chemical Warfare Agents with Acetylcholinesterase; Jeffery Wright, Gerald Lushington, and William White; Edgewood Chemical and Biological Center, Aberdeen Proving Ground, MD

Multiscale Simulations of High Temperature Ceramic Materials; Rajiv Kalia, Aiichiro Nakano, and Priya Vashishta; Air Force Office of Scientific Research, Bolling AFB, DC

Multiscale Simulation of Nanotubes and Quantum Structures; Jerry Bernholc; Office of Naval Research, Arlington, VA

Multi-Scale Simulations of High Energy Density Materials; Jerry Boatz; Air Force Research Laboratory, Edwards AFB, CA

New Materials Design; Jerry Boatz, Ruth Pachter, Mark Gordon, Gregory Voth, and Sharon Hammes-Schiffer; Air Force Office of Scientific Research, Bolling AFB, DC


Computational Electromagnetics and Acoustics

Airborne Laser Challenge Project II; Wilbur Brown, Jeremy Winick, and Robert Beland; Air Force Research Laboratory, Kirtland AFB, NM

Directed High Power RF Energy: Foundation of Next Generation Air Force Weapons; Keith Cartwright; Air Force Research Laboratory, Kirtland AFB, NM

Radar Signature Database for Low Observable Engine Duct Design; Kueichien Hill; Air Force Research Laboratory, Wright-Patterson AFB, OH

Seismic Signature Simulations for Tactical Ground Sensor Systems and Underground Facilities; Mark Moran, Stephen Ketcham, George McMechan, Stig Hestholm, Roy Greenfield, Thomas Anderson, James Lacombe, Michael Shatz, Robert Haupt, Richard Lacoss, and Robert Greaves; Cold Regions Research and Engineering Laboratory, Hanover, NH

Signature Modeling for Future Combat Systems; Raju Namburu, Theresa Gonda, Peter Chung, Margaret Hurley, Steve Bunte, William Coburn, Christopher Kenyon, Calvin Lee, John Escarsega, and William Spurgeon; Army Research Laboratory, Aberdeen Proving Ground, MD

Climate, Weather, and Ocean Modeling and Simulation

1/32 Degree Global Ocean Modeling and Prediction; Alan Wallcraft, Harley Hurlburt, Robert Rhodes, and Jay Shriver; Naval Research Laboratory, John C. Stennis Space Center, MS

Basin-scale Prediction with the HYbrid Coordinate Ocean Model; Eric Chassignet; Office of Naval Research, Arlington, VA

Coupled Environmental Model Prediction (CEMP); Wieslaw Maslowski, Julie McClean, Albert Semtner, Robin Tokmakian, Yuxia Zhang, Ruth Preller, and Steve Piacsek; Naval Postgraduate School and Naval Research Laboratory, Monterey, CA

Coupled Mesoscale Modeling of the Atmosphere and Ocean; Richard Hodur; Naval Research Laboratory, Monterey, CA

High Fidelity Simulation of Littoral Environments; Richard Allard and Jeffery Holland; Naval Research Laboratory, John C. Stennis Space Center, MS, and Engineer Research and Development Center, Vicksburg, MS

Submerged Wakes in Littoral Regions; Patrick Purtell, Roger Briley, Joseph Gorski, and Douglas Dommermuth; Office of Naval Research, Arlington, VA

Signal/Image Processing

Automatic Target Recognition Performance Evaluation; Timothy Ross; Air Force Research Laboratory, Wright-Patterson AFB, OH


Computational Electromagnetics and Acoustics, Dr. Kueichien Hill, Air Force Research Laboratory, Sensors Directorate, Wright-Patterson AFB, OH

Climate/Weather/Ocean Modeling and Simulation, Dr. William Burnett, Naval Oceanographic Office, John C. Stennis Space Center, MS

Signal/Image Processing, Dr. Anders Sullivan, Army Research Laboratory, Aberdeen Proving Ground, MD

Forces Modeling and Simulation/C4I, Dr. Larry Peterson, Space and Naval Warfare Systems Center, San Diego, CA

Environmental Quality Modeling and Simulation, Dr. Jeffrey Holland, US Army Corps of Engineers, Engineer Research and Development Center, Vicksburg, MS

Computational Electronics and Nanoelectronics, Dr. Barry Perlman, Army Communications-Electronics Command Research, Development and Engineering Center, Ft. Monmouth, NJ

Integrated Modeling and Test Environments, Mr. Jere Matty, Arnold Engineering Development Center, Arnold AFB, TN

Program Components

The program is organized into three components to achieve its goals: HPCMP HPC centers, networking, and software applications support. The components allow the program to focus on the most efficient means of supporting the S&T and T&E communities' requirements. The components align to the program goals as follows:

HIGH PERFORMANCE COMPUTING CENTERS

The High Performance Computing Modernization Program currently supports four MSRCs and eleven DCs. The large MSRCs provide a wide range of high performance computing (HPC) capabilities designed to support the ongoing needs of the DoD HPC community.

Each MSRC supports complete centralized systems and services. The MSRCs also provide vector machines, scalable parallel systems, clustered workstations, scientific visualization resources, and access to training. Storage capabilities include distributed file and robotic archival systems. The four MSRCs are:

Air Force Aeronautical Systems Center (ASC), Wright-Patterson AFB, OH

Army Research Laboratory (ARL), Aberdeen Proving Ground, MD

Engineer Research and Development Center (ERDC), Vicksburg, MS

Naval Oceanographic Office (NAVO), John C. Stennis Space Center, MS

The DCs provide specialized HPC capability to users. HPC systems have been established at DCs where there is a significant advantage to having local HPC systems and where there is a potential for advancing DoD applications using investments in HPC capabilities and resources (e.g., a test activity with real-time processing requirements).


The DCs are divided into two categories: “Allocated” and “Dedicated”. Allocated Distributed Centers receive resources through the annual service/agency project allocation and/or challenge project allocation process. Dedicated Distributed Centers are established for the specific dedicated project or projects for which each center was awarded. The DCs have the following goals:

Apply commercial HPC hardware and software as rapidly as it becomes available.

Complement, balance, and supplement the major shared resource centers by enabling local expertise to be developed and leveraged by the larger DoD community.

Support small and medium-size HPC applications, leveraging the larger MSRCs that support large applications.

Foster reuse of software tools and application software components as well as appropriate use of communication standards, interface standards, and graphics visualization standards across DoD.

Support local HPC requirements at selected sites where there is potential for advancing DoD applications.

Promote the development of new software tools and application area specific software.

Leverage HPC expertise and assets located in industry, academia, and other federal laboratories in addition to DoD facilities.

The DCs are:

Dedicated Distributed Centers

Air Force Research Laboratory, Information Directorate (AFRL/IF), Rome, NY

Arnold Engineering Development Center (AEDC), Arnold AFB, TN

Fleet Numerical Meteorology and Oceanography Center (FNMOC), Monterey, CA

Naval Air Warfare Center Aircraft Division (NAWCAD), Patuxent River, MD

Naval Research Laboratory (NRL-DC), Washington, DC

Simulation and Analysis Facility (SIMAF), Wright-Patterson AFB, OH

White Sands Missile Range (WSMR), White Sands Missile Range, NM

Allocated Distributed Centers

Arctic Region Supercomputing Center (ARSC), Fairbanks, AK

Army High Performance Computing Research Center (AHPCRC), Minneapolis, MN

Maui High Performance Computing Center (MHPCC), Kihei, HI

Space and Missile Defense Command (SMDC), Huntsville, AL

The MSRCs and DCs provide state-of-the-art HPC capabilities operating in several classification environments, ensuring that the DoD science and technology and test and evaluation user communities have the computational tools needed to retain the US warfighter’s technological advantage.


NETWORKING

Defense Research and Engineering Network

The DREN is DoD’s recognized research and engineering network. The DREN is a robust, high-bandwidth, low-latency network that provides connectivity between the High Performance Computing Program’s geographically dispersed user sites and high performance computing centers. (See Figure 2)

DREN networking services are provided as a Virtual Private Network over the public infrastructure. DREN currently supports both cell-based and packet-based traffic over core infrastructures such as Multiprotocol Label Switching (MPLS) and ATM. DREN Service Delivery Points are specified in terms of WAN bandwidth access, network protocols, and local interfaces for connectivity. Minimal access to a site is DS-3 (45 Mbps), with potential high-end access of OC-768 (40 Gbps) over the next 10 years. Current site connectivity to the DREN ranges from DS-3 at some user sites to OC-12 (622 Mbps) at MSRCs, selected DCs, and Internet Network Access Points (NAPs).
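To put these access tiers in perspective, the short sketch below is a hypothetical illustration (not part of the program documentation): it estimates the idealized time to move a nominal 100 GB dataset at each line rate quoted above. The dataset size and the helper names are assumptions chosen only for illustration, and no protocol overhead is modeled.

# Hypothetical illustration: idealized transfer-time estimates for the DREN
# access tiers quoted in the text. Line rates are the nominal figures above;
# the 100 GB dataset size is an assumed example.

LINE_RATES_MBPS = {
    "DS-3": 45,        # minimal site access
    "OC-3": 155,       # SDREN classified transport
    "OC-12": 622,      # MSRCs and selected DCs
    "OC-768": 40_000,  # potential future high-end access
}

def transfer_hours(dataset_gigabytes: float, rate_mbps: float) -> float:
    """Idealized transfer time in hours: size in bits divided by rate in bits per second."""
    bits = dataset_gigabytes * 8e9
    return bits / (rate_mbps * 1e6) / 3600.0

if __name__ == "__main__":
    for name, rate in LINE_RATES_MBPS.items():
        print(f"{name:>7}: {transfer_hours(100, rate):6.2f} h for a 100 GB dataset")

At these rates the same 100 GB transfer drops from roughly five hours over DS-3 to well under a minute over OC-768, which is the practical motivation for the higher-bandwidth service delivery points.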

Since its inception, DREN has been very active in transferring leading-edge network and security technologies across DoD and other Federal agencies. Our leading-edge, defense-in-depth security model is now viewed by experts as the best way to protect resources both internally and externally. Layered defense extends from access control lists at the NAPs through sophisticated, high-bandwidth intrusion detection monitored by our HPC Computer Emergency Response Team.

Figure 2. DREN Connections, High Performance Computing Centers, and Other Network Access Points


Hardware pre-authentication, strong configuration management, and periodic physical security assessments all contribute to hardened resources that remain accessible to DoD research scientists from wherever they need to work. Well-designed WAN security, coupled with strong server and user security, allows DREN to deploy leading-edge, defense-in-depth protection of DoD resources.

DREN services are currently provided to sites throughout the continental United States, Alaska, and Hawaii, and can be extended overseas where necessary. Incorporating the best operational capabilities of both the DoD and the commercial telecommunications infrastructure, DREN provides DoD a secure, world-class, long-haul research network in support of DoD’s S&T and T&E communities. DREN also extends services to DoD’s Modeling & Simulation community and the MDA.

Security

The HPCMP has also designed and deployed a Secret DREN (SDREN) using a common Secret system-high key with National Security Agency-certified Type-1 encryptors. SDREN can transport classified traffic at OC-3 (155 Mbps).

The Defense Information Systems Agency Information Security Program Management Office has certified the DREN for sensitive but unclassified information and encrypted classified traffic. Access to high performance computing systems is controlled by enhanced security technology and user procedures.

SOFTWARE APPLICATIONS SUPPORT

“Software Applications Support” is new terminology that captures the evolutionary nature of the Program’s efforts to acquire and develop joint-need HPC applications, software tools, and programming environments, and to educate and train DoD’s scientists and engineers to use advanced computational environments effectively. This component contains two areas: CHSSI, which has been producing efficient applications software codes for the last several years, and PET, which has been realigned from the HPC MSRCs. As the program has evolved, the potential synergy of refocusing PET activities caused the HPCMP to realign PET from a shared resource center-centric orientation to a broader program-wide focus. This realignment offers significant opportunities for leveraging and collaborating with CHSSI.

Common High Performance Computing Software Support Initiative

CHSSI consists of a set of focused software development activities. These activities are designed to provide recognized communities of DoD computational scientists and engineers with common HPC technical codes that exploit scalable computer architectures. Each CHSSI application is selected based on DoD needs and technical quality. The products have application across a significant portion of the DoD S&T and T&E HPC user community.

CHSSI provides efficient, scalable, portable software codes, algorithms, tools, models, and simulations that run on a variety of HPC platforms for use by a large number of S&T and T&E scientists and engineers. CHSSI, which is organized around ten CTAs and eight portfolios, involves many scientists and engineers working in close collaboration across government, industry, and academia. CHSSI teams include algorithm developers, applications specialists, computational scientists, computer scientists, and engineers.
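As a minimal sketch of the scalable, portable programming pattern such codes rely on (an illustration only, not an actual CHSSI product), the example below uses the Message Passing Interface through the mpi4py package, assuming it is available, to spread a simple reduction across however many processors a run is given.

# Minimal, hypothetical sketch of the scalable, portable pattern CHSSI codes
# target; this is an illustration, not a CHSSI product. Requires mpi4py and
# an MPI launcher such as mpirun.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # index of this process
size = comm.Get_size()   # total number of processes in the run

# Each process sums its own interleaved slice of a (hypothetical) global problem...
local_sum = sum(float(i) for i in range(rank, 1_000_000, size))

# ...and a collective reduction combines the partial results on rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"global sum computed on {size} processes: {total:.1f}")

Launched, for example, with "mpirun -np 4 python sketch.py", the same script runs unchanged on anything from a clustered workstation to a large MSRC system, which is the portability property described above.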

Currently, the CHSSI program invests $20 million per fiscal year in 55 software projects, a mix of single- and multi-disciplinary efforts that provide state-of-the-art or advanced computer modeling capabilities. Prominent DoD scientists and engineers lead the projects, and the development efforts leverage the considerable expertise and assets of industry, academia, and other federal agencies. CHSSI software products are rigorously tested, documented, and reviewed as part of a structured development process.

Programming Environment and Training

PET activity focuses on improving DoD HPC productivity by creating an information conduit between DoD HPC users and top academic and industry experts. PET is responsible for gathering and deploying the best ideas, algorithms, and software tools emerging from the national high performance computing infrastructure into the DoD user community. PET acquires world-class technical support, develops strategic partnerships, and focuses on HPC tool development and deployment.

PET is divided into four components, and each component executes a program that supports the designated functional areas, as shown in Table 3.

The functional areas are grouped to encourage synergy among related CTAs, to promote collaboration and interaction between CTA and cross-community functions, and to balance the workload across the four components.

PET activities consist of short- and long-term strategic efforts. Short-term efforts, lasting less than three years, include short collaborative projects with users. Two of the long-term efforts focus on distance learning and metacomputing. PET teams anticipate and respond to the needs of DoD users for training and assistance. The teams have a strong on-site presence at HPCMP major shared resource centers.

Table 3. CTA Component Functional Areas

Component 1: Climate/Weather/Ocean Modeling and Simulation; Environmental Quality Modeling and Simulation; Computational Environment

Component 2: Forces Modeling and Simulation/C4I; Integrated Modeling and Test Environments; Signal/Image Processing; Enabling Technologies

Component 3: Computational Fluid Dynamics; Computational Structural Mechanics; PET Online Knowledge Center; Education, Outreach, and Training Coordination

Component 4: Computational Chemistry and Materials Science; Computational Electromagnetics and Acoustics; Computational Electronics and Nanoelectronics; Collaborative and Distance Learning Technologies


SECTION 4: ACRONYMS


Acronyms

1-D one-dimension (dimensional)

2-D two-dimensions (dimensional)

3-D three-dimensions (dimensional)

3DVAR three-dimensional variational data assimilation

AAA Anti-Aircraft Artillery

AATD Aviation Applied Technology Directorate

AChE acetylcholinesterase

ADCIRC ADvanced CIRCulation Model

ADCP Acoustic Doppler Current Profiler

ADH ADvanced Hydraulics

ADMCPA 2-azido-N,N-dimethylcyclopropanamine

ADVISR Advanced Visualization for Intelligence, Surveillance, and Reconnaissance

AFAP as-fast-as-possible

AFB Air Force Base

AFOSR Air Force Office of Scientific Research

AFRL Air Force Research Laboratory

AFWA Air Force Weather Agency

AHPCRC Army High Performance Computing Research Center

AIRMS Airborne Infrared Measurement System

Al aluminum

ALE Arbitrary Lagrangian Eulerian

AMDSim Air and Missile Defense Simulation

AMR Adaptive Mesh Refinement or Automatic Mesh Refinement

AoA Analysis of Alternatives or angle of attack

API applications programming interface

ARL Army Research Laboratory

ARO Army Research Office

ASC Aeronautical Systems Center

ATD Advanced Technology Demonstrator

ATM Asynchronous Transfer Mode

ATR Automatic Target Recognition

AWS Abrupt Wing Stall

BAD Behind-armor debris

BLOS beyond-line-of-sight

BMDS Ballistic Missile Defense System

BN boron nitride

C carbon

CAD computer aided design


C/B chemical/biological

CCM Computational Chemistry and Materials Science

CEA Computational Electromagnetics and Acoustics

CEM Computational Electromagnetics

CEN Computational Electronics and Nanoelectronics

CFD Computational Fluid Dynamics

CHSSI Common High Performance Computing Software Support Initiative

CIO Chief Information Officer

COAMPS Coupled Ocean Atmosphere Mesoscale Prediction System

CODESSA COmprehensive DEscriptors for Structural and Statistical Analysis

COMPOSE COmposite Manufacturing PrOcess Simulation Environment

CONOPS concept of operations

CSD Computational Structural Dynamics

CSM Computational Structural Mechanics

CTA Computational Technology Area

CWO Climate/Weather/Ocean Modeling and Simulation

DARPA Defense Advanced Research Projects Agency

DC Distributed Center

DES Detached-Eddy Simulation

DFT density functional theory

DMAZ 2-azido-N,N-dimethylethanamine

DNS direct numerical simulation

DoD Department of Defense

DOF Degrees of Freedom

DON Department of the Navy

DREN Defense Research and Engineering Network

DSD Deforming-Spatial-Domain

DTRA Defense Threat Reduction Agency

EQM Environmental Quality Modeling

ERDC US Army Corps of Engineers, Engineer Research and Development Center

ETC electrothermal-chemical

FCS Future Combat Systems

FDTD Finite Difference Time Domain

FE ferroelectric

FEM Finite Element Method

FeO iron oxide

FMM fast multipole method

FMS Forces Modeling and Simulation

FNC Future Naval Capabilities

FNMOC Fleet Numerical Meteorology and Oceanography Center

FOX-7 1,1-diamino-2,2-dinitroethylene


FSI fluid-structure interaction

FY fiscal year

GB gigabyte

GW giga-watt

HE high explosives

HFM Hyperbolic Frequency Modulated

HFSOLE High Fidelity Simulations of Littoral Environments

HLD high-loading-density

HMX 1,3,5,7-tetranitro-1,3,5,7-tetraazacyclooctane

HPC high performance computing or high performance computer

HPCMP High Performance Computing Modernization Program

HPCMPO High Performance Computing Modernization Program Office

HPM high power microwave

HVRB history variable reactive burn

HWILT hardware-in-the-loop test

HYCOM HYbrid Coordinate Ocean Model

IBCTs Interim Brigade Combat Teams

ICE Interdisciplinary Computing Environment

ICEPIC Improved Concurrent Electromagnetic Particle-in-Cell

IMT Integrated Modeling and Test Environment

I/O input/output

IP Information Processor or Internet protocol

IRFNA inhibited red fuming nitric acid

JNIC Joint National Integration Center

JSF Joint Strike Fighter

JTIDS Joint Tactical Information Distribution System

KE Kinetic Energy

LANL Los Alamos National Laboratory

LB/TS Large Blast and Thermal Simulator

LBE Lattice-Boltzmann Equation

LES Large-Eddy Simulations

LO Low observables/observability

LOB line-of-bearing

M&S Modeling and Simulation

MACS Modular Artillery Charge System

MBPT(2) Many-body perturbation theory (second order)

MCEL Model Coupling Environmental Library

MCM mine countermeasures

MDA Missile Defense Agency

MDWAR Missile Defense Wargame and Analysis Resource

MEMS micro-electro-mechanical systems


METOC Meteorology and Oceanography Community

MGD magnetogasdynamics

MILO magnetically insulated line oscillator

MLFMA multi-level fast multipole algorithm

MMH monomethylhydrazine

MoM method of moments

MP2 Møller-Plesset perturbation theory (second order)

MPI Message Passing Interface

MPRS Multi-Point Refueling System

MRTD Multiresolution-Time Domain

MSRC Major Shared Resource Center

MW mega-watt

NAP Network Access Point

NAVAIR Naval Air Systems Command

NAVMETOCCOM Naval Meteorology and Oceanography Command

NAVO Naval Oceanographic Office

NAWCAD Naval Air Warfare Center Aircraft Division

NE nuclear explosive

NGEN The Army’s Next Generation Interior Ballistics Model

NIAB Non-Ideal Air Blast

NISA Network Infrastructure Services Agency

NLOS non-line-of-sight

NM nitromethane

NMR Nuclear Magnetic Resonance

NOGAPS Navy Operational Global Atmospheric Prediction System

NRL Naval Research Laboratory

ns nanosecond

NWP Numerical Weather Prediction

ONR Office of Naval Research

O-PDES Optimistic, Parallel, Discrete Event Simulation

PDF pair distribution function

PenRen Pentagon Renovation Office

PET Programming Environment and Training

P-I pressure-impulse

POPS Primary Oceanographic Prediction System

PVC permanent virtual circuit

PVP permanent virtual path

PZT lead zirconate titanate

QC quantum-chemical

QM/MM quantum mechanics/molecular mechanics

QSPR Quantitative Structure-Property Relationships


R&D research and development

RANS Reynolds Averaged Navier-Stokes

RCS radar cross section

RDT&E Research, Development, Test, and Evaluation

RF radio frequency

RHA rolled homogeneous armor

RMS Root Mean Square

RPM rate per minute

ROC Receiver Operating Characteristic

rt-PVC real-time permanent virtual circuit

rt-PVP real-time permanent virtual path

rt-VBR(s) real-time variable bit rate(s)

SAFEDI Ship Aircraft Airwake Analysis for Enhanced Dynamic Interface

SALVOPS Salvage Operation(s)

SBCCOM Soldier Biological and Chemical Command

SDOF single-degree-of-freedom

SDREN Secret Defense Research and Engineering Network

SGI Silicon Graphics Inc.

SHAMRC Second-order Hydrodynamic Automatic Mesh Refinement Code

SHARC Second-order Hydrodynamic Advanced Research Code

SHMEM SHared MEMory

SIP Signal/Image Processing

SIPRNET Secret Internet Protocol Router Network

SONAR SOund Navigation and Ranging

SPAWAR Space and Naval Warfare Systems Command

SPE Scalable Processing Environment

SPEEDES Synchronous Parallel Environment for Emulation and Discrete-Event Simulation

SSC SPAWAR Systems Center

SSCSD SPAWAR Systems Center, San Diego

SSHB Short Strong Hydrogen Bond

SST Stabilized Space-Time

S&T science and technology

STWAVE STeady-state WAVE

SVC Scientific Visualization Center

SWAFS Shallow Water Analysis and Forecast System

SWAN Simulating WAves Nearshore

TARDEC Tank Automotive Research, Development and Engineering Center

TBC Thermal Barrier Coating

T&E test and evaluation

Ti titanium

UAG User Advocacy Group


UAV Unmanned Air Vehicles

UDMH unsymmetrical dimethylhydrazine

UGS Unattended Ground Sensor

UHF Ultra High Frequency

UNCLE Unsteady Computations of Field Equations

U.S. United States

UWB ultra-wideband

UXO Unexploded Ordnance

VASP Vienna Ab initio Simulation Package

VCMD variable-charge molecular dynamics

VDG Virtual Data Grid

VHF very high frequency

VV Vertical Transmit-Vertical Receive

WAN wide area network

WG2000 Wargame 2000

WLS weighted, least-squares

WRF Weather Research and Forecast

WSMR White Sands Missile Range

YSZ yttria-stabilized zirconia

Zr zirconium


Department of Defense
High Performance Computing Modernization Program Office

1010 North Glebe Road, Suite 510
Arlington, VA 22201

Phone: (703) 812-8205
Facsimile: (703) 812-9701

http://www.hpcmo.hpc.mil