
International Committee for Future Accelerators (ICFA) Standing Committee on Inter-Regional Connectivity (SCIC)

Chairperson: Professor Harvey Newman, Caltech

ICFA SCIC Report: Networking for High Energy and Nuclear Physics

On behalf of ICFA SCIC: Harvey B. Newman [email protected]

February 2004 (A Revision of the 2003 SCIC Report)


SCIC Members List:

Name / e-mail / Organisation / Country

Alberto Santoro [email protected] UERJ (Brazil)
Alexandre Sztajnberg [email protected] UERJ (Brazil)
Arshad Ali [email protected] NIIT (Pakistan)
Daniel Davids [email protected] CERN (CH)
David Foster [email protected] CERN (CH)
David O. Williams [email protected] CERN (CH)
Dean Karlen [email protected] Carleton (Canada)
Denis Linglin [email protected] IN2P3 Lyon (France)
Dongchul Son [email protected] KNU (Korea)
Federico Ruggieri [email protected] INFN (Italy)
Fukuko Yuasa [email protected] KEK (Japan)
Hafeez Hoorani [email protected] Pakistan
Harvey B. Newman [email protected] Caltech (USA)
Heidi Alvarez [email protected] Florida International University (USA)
HwanBae Park [email protected] Korea University (Korea)
Julio Ibarra [email protected] Florida International University (USA)
Les Cottrell [email protected] SLAC (USA)
Marcel Kunze [email protected] FZK (Germany)
Michael Ernst [email protected] DESY (Germany)
Olivier H. Martin [email protected] CERN (CH)
Richard Hughes-Jones [email protected] University of Manchester (UK)
Richard Mount [email protected] SLAC (USA)
Rongsheng Xu [email protected] IHEP (China)
Sergei Berezhnev [email protected] RUHEP (RU)
Sergio F. Novaes [email protected] State University of Sao Paulo (Brazil)
Shawn McKee [email protected] University of Michigan (USA)
Slava Ilyin [email protected] SINP MSU (RU)
Sunanda Banerjee [email protected] India
Syed M. H. Zaidi [email protected] NUST (Pakistan)
Sylvain Ravot [email protected] Caltech (USA)
Vicky White [email protected] FNAL (USA)
Vladimir Korenkov [email protected] JINR, Dubna (RU)
Volker Guelzow [email protected] DESY (Germany)
Yukio Karita [email protected] KEK (Japan)


ICFA SCIC Monitoring Working Group:

Chair: Les Cottrell [email protected] SLAC (USA)

Daniel Davids [email protected] CERN (CH)
Fukuko Yuasa [email protected] KEK (Japan)
Richard Hughes-Jones [email protected] University of Manchester (UK)
Sergei Berezhnev [email protected] RUHEP (RU)
Sergio Novaes [email protected] Sao Paulo (Brazil)
Shawn McKee [email protected] University of Michigan (USA)
Sylvain Ravot [email protected] Caltech (USA)

ICFA SCIC Digital Divide Working Group:

Chair: Alberto Santoro [email protected] UERJ (Brazil)

David O. Williams [email protected] CERN (CH)
Dongchul Son [email protected] KNU (Korea)
Hafeez Hoorani [email protected] Pakistan
Harvey Newman [email protected] Caltech (USA)
Heidi Alvarez [email protected] Florida International University (USA)
Julio Ibarra [email protected] Florida International University (USA)
Sunanda Banerjee [email protected] India
Syed M. H. Zaidi [email protected] NUST (Pakistan)
Vicky White [email protected] FNAL (USA)
Yukio Karita [email protected] KEK (Japan)

ICFA SCIC Advanced Technologies Working Group:

Chair: R. Hughes-Jones [email protected] University of Manchester (UK)

Harvey Newman [email protected] Caltech (USA)
Olivier H. Martin [email protected] CERN (CH)
Sylvain Ravot [email protected] Caltech (USA)
Vladimir Korenkov [email protected] JINR, Dubna (RU)


Sections 1 to 7: done

Section 8: add a section about networking in Russia

Section 9: to be completed using the DoE Roadmap; motivate the requirements with examples taken from the roadmap. Also add the table of IHEP Beijing requirements.

Section 10: to be improved.

Sections 11-16: done

General:

- Add cross-references to the Appendices
- Check references
- Format text
- Report the TERENA conclusions in the report


Report of the Standing Committee on Inter-Regional Connectivity (SCIC)

February 2004

Networking for High Energy and Nuclear Physics

On behalf of the SCIC:

Harvey B. Newman
California Institute of Technology

Pasadena, CA 91125, [email protected]

1. Introduction: HENP Networking Challenges
2. ICFA SCIC in 2002-3
3. General Conclusions
4. Recommendations
5. The Digital Divide and ICFA SCIC
   5.1. Digital divide illustrated by network infrastructures
   5.2. Digital divide illustrated by network performance
   5.3. Digital divide illustrated by network costs
   5.4. A new "culture of worldwide collaboration"
6. HENP Network Status: Major Backbones and International Links
   6.1. Europe
   6.2. North America
   6.3. Korea and Japan
   6.4. Intercontinental links
7. Advanced Optical Networking Projects and Infrastructures
   7.1. Advanced Optical Networking Infrastructures
   7.2. Advanced Optical Networking Projects and Initiatives
8. HENP Network Status: "Remote Regions"
   8.1. East-Europe
   8.2. Russia and the Republics of the former Soviet Union
   8.3. Asia and Pacific
   8.4. South America
9. The Growth of Network Requirements in 2003
10. Growth of HENP Network Usage in 2001-2004
11. HEP Challenges in Information Technology
12. Progress in Network R&D
13. Upcoming Advances in Network Technologies
14. Meeting the challenge: HENP Networks in 2005-2010; Petabyte-Scale Grids with Terabyte Transactions
15. Coordination with Other Network Groups and Activities
16. Broader Implications: HENP and the World Summit on the Information Society
17. Relevance of Meeting These Challenges for Future Networks and Society


1. Introduction: HENP Networking Challenges

Wide area networking is a fundamental and mission-critical requirement for High Energy and Nuclear Physics. Moreover, HENP’s dependence on high performance networks is increasing rapidly. National and international networks of sufficient (and rapidly increasing) bandwidth and end-to-end performance are now essential for each part and every phase of our physics programs, including:

- Data analysis involving physicists from all world regions
- Detector development and construction on a global scale
- The daily conduct of collaborative work in small and large groups, in both experiment and theory
- The formation and successful operation of worldwide collaborations
- The successful and ubiquitous use of current and next generation distributed collaborative tools
- The conception, design and implementation of next generation facilities as "global networks"[1]

Referring to the largest experiments, and to the global collaborations of 500 to 2000 physicists from up to 40 countries and up to 160 institutions, a well-known physicist[2] summed it up by saying:

“Collaborations on this scale would never have been attempted, if they could not rely on excellent networks.”

In an era of global collaborations, and data intensive Grids, advanced networks are required to interconnect the physics groups seamlessly, enabling them to collaborate throughout the lifecycle of their work. For the major experiments, networks that operate seamlessly, with quantifiable high performance and known characteristics are required to create data Grids capable of processing and sharing massive physics datasets, rising from the Petabyte (10^15 bytes) to the Exabyte (10^18 bytes) scale within the next decade.
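To put these data volumes in perspective, the following back-of-the-envelope sketch (an editorial illustration, not part of the original analysis; the link speeds and the assumed 70% sustained utilization are illustrative) estimates how long a single Petabyte occupies a wide area link:

# Rough, illustrative estimate of wide-area transfer times for a Petabyte-scale
# dataset. The link speeds and the 70% sustained-utilization factor are
# assumptions for illustration, not values taken from this report.

PETABYTE_BITS = 1e15 * 8          # 10^15 bytes expressed in bits
UTILIZATION = 0.7                 # assumed fraction of the nominal link speed actually sustained

for gbps in (0.622, 2.5, 10.0):   # typical backbone speeds discussed in this report
    seconds = PETABYTE_BITS / (gbps * 1e9 * UTILIZATION)
    print(f"1 PB over a {gbps:g} Gbps link: about {seconds / 86400:.0f} days")

Even at 10 Gbps and high efficiency, a single Petabyte occupies a link for roughly two weeks, which is one reason why both raw capacity and end-to-end efficiency figure so prominently in the rest of this report.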

The need for global network-based systems that support our science has made the HENP community a leading early-adopter, and more recently a key co-developer of leading edge wide area networks. Over the past few years, several groups of physicists and engineers in our field, in the US, Europe and Asia, have worked with computer scientists to make significant advances in the development and optimization of network protocols, and methods of data transfer. During 2003 these developments, together with the availability of 2.5 and 10 Gigabit/sec wide area links and advances in data servers and their network interfaces (notably 10 Gigabit Ethernet) have made it possible for the first time to utilize networks with relatively high efficiency in the 1 to 10 Gigabit/sec (Gbps) speed range over continental and transoceanic distances.

These developments have been paralleled by upgrades in the national and continental core network infrastructures, as well as the key transoceanic links used for research and education, to typical bandwidths of 2.5 to 10 Gbps in the US, western Europe, Japan and Korea.

1 Such as the Global Accelerator Network (GAN); see http://www.desy.de/~dvsem/dvdoc/WS0203/willeke-20021104.pdf.
2 Larry Price, Argonne National Laboratory, in the TransAtlantic Network (TAN) Working Group Report, October 2001; see http://gate.hep.anl.gov/lprice/TAN.


This is documented in a series of brief Appendices covering some of the major national and international networks and network R&D projects. The transition to the use of "wavelength division multiplexing" to support multiple optical links on a single fiber has made these links increasingly affordable, and this has resulted in a substantially increased number of these links coming into service during 2003. In 2004 we expect this trend to continue and spread to other regions, notably including 10 Gbps links across the Atlantic and Pacific linking Australia and the US, and Russia, China and the US through the "GLORIAD" optical ring project.

In some cases high energy physics laboratories or computer centers have been able to acquire leased “dark fiber” to their site, where they are able to connect to the principal wide area networks they use with one or more wavelengths. In 2003-4 we are seeing the emergence of some privately owned or leased wide area fiber infrastructures, managed by non-profit consortia of universities and regional network providers, to be used on behalf of research and education. This includes “National Lambda Rail” covering much of the US, accompanied by initiatives in several states (notably Illinois, California, and Florida), and similar initiatives are being considered in European countries (notably in Germany and the Netherlands).

These trends have also led to a forward-looking vision of much higher capacity networks based on many wavelengths in the future, where statically provisioned shared network links are complemented by dynamically provisioned optical paths to form “Lambda Grids” for the most demanding applications. The visions of advanced networks and Grid systems are beginning to converge, where future Grids will include end-to-end monitoring and tracking of networks as well as computing and storage resources, forming an integrated information system supporting data analysis, and more broadly research in many fields, on a global scale.

The rapid progress and the advancing vision of the future in the “most economically favored regions” of the world during 2003 also has brought into focus the problem of the Digital Divide that has been a main activity of the SCIC over the last three years. As we advance, there is an increasing danger that the groups in the less favored regions, including Southeast Europe, Latin America, Africa, and much of Asia, will get left behind. This problem needs concerted action on our part, if our increasingly global physics collaborations are to succeed in enabling scientists from all regions of the world to take their rightful place as full partners in the process of scientific discovery.

2. ICFA SCIC in 2002-3

The intensive activities of the SCIC continued in 2003 (as foreseen), and we expect this level of activity to continue in 2004. The committee met 3 times in the last 12 months, and continued to carry out its charge to:

- Track network progress and plans, and connections to the major HENP institutes and universities in countries around the world;
- Monitor network traffic, and end-to-end performance in different world regions;
- Keep track of network technology developments and trends, and use these to "enlighten" network planning for our field;
- Focus on major problems related to networking in the HENP community; determine ways to mitigate or overcome these problems; and bring these problems to the attention of ICFA, particularly in areas where ICFA can help.


While there were fewer meetings than in 2002, SCIC members took an active and often central role in a number of conferences, workshops and major events related to networking and the Digital Divide. These events tended to focus attention on the needs of the HENP community, as well as its key role as an application driver, and also a developer of future networks.

- SWITCH-CC (Coordination Committee) meeting, January 23, University of Bern, "DataTAG project Update"
- The AMPATH Workshop on Fostering Collaborations and Next-Generation Infrastructure, Miami, January 2003[3]
- Meetings of TERENA, the Trans-European Research and Education Networking Association[4]. One of the key activities of TERENA was SERENATE[5], "a series of strategic studies into the future of research and education networking in Europe, addressing the local (campus networks), national (national research & education networks), European and intercontinental levels", covering technical, policy, pricing and Digital Divide issues.
- Members' Meetings of Internet2[6], including the I2 HENP Working Group, the Applications Strategy Council, the End-to-end Performance Initiative and the VidMid Initiative on collaborative middleware.
- Meetings of APAN[7], the Asia Pacific Advanced Network (January and August 2003)
- GEANT APM (Access Point Manager) meeting, February 3, CESCA, Barcelona (ES), "DataTAG project Update"
- PFLDnet, February 3-4, CERN, Geneva (CH), "GridDT (Data Transport)"
- ON-Vector Photonics workshop, February 4, San Diego (USA), "Optical Networking Experiences @ iGrid2002"
- OptIPuter workshop, February 7, San Diego (USA), "IRTF-AAAARCH research group"
- First European Across Grids Conference, February 14, Santiago de Compostela (ES), "TCP behaviour on Trans Atlantic Lambda's"
- MB-NG workshop, February, University College London (UCL) (UK), "Generic AAA-Based Bandwidth on Demand"
- CHEP'2003, March 24-28, La Jolla/San Diego (USA), Olivier Martin (CERN), "DataTAG project Update"
- JAIST[8] (Japan Advanced Institute of Science & Technology) Seminar, 24 February, Ishikawa (Japan), "Efficient Network Protocols for Data-Intensive Worldwide Grids"
- NTT, Tokyo (Japan), March 3, "Optical Networking Experiences @ iGrid2002"
- GGF7, Tokyo (Japan), March 4, "Working and research group chairs training"
- DataGrid conference, May 2003, Barcelona (ES), "DataTAG project update"
- RIPE-45 meeting (European Operators Forum), May 2003, Barcelona (ES), "Internet data transfer records between CERN and California"
- RIPE-46 meeting, September 2003, "PingER: a lightweight active end-to-end network monitoring tool/project"
- TERENA Networking Conference, May 2003, Zagreb (HR), "High-Performance Data Transport for Grid Applications"

3 http://ampath.fiu.edu/miami03_agenda.htm
4 http://www.terena.nl. The TERENA compendium contains detailed information of interest on the status and evolution of research and education networks in Europe.
5 The SERENATE website at http://www.serenate.org includes a number of public documents of interest.
6 http://www.internet2.edu
7 http://www.apan.net
8 http://www.jaist.ac.jp/


- After-C5 & LCG meetings, June 2003, CERN (CH), "CERN's external networking update"
- US DoE workshop, June 2003, Reston, Virginia (USA), "Non-US networks"
- Grid Concertation meeting, June 2003, Brussels (BE), "DataTAG contributions to advanced networking, Grid monitoring, interoperability and security"
- GGF8, June 2003, Seattle, Washington State (USA), "Conceptual Grid Authorization Framework"
- ISOC members' meeting (ledenvergadering), Amsterdam (NL), June 2003, "High speed networking for Grid Applications"
- SURFnet expertise seminar, Amsterdam (NL), June 2003, "High speed networking for Grid Applications"
- ASCI conference, Heijen (NL), June 2003, "High speed networking for Grid Applications"
- GGF GRID school, July 2003, Vico Equense, Italy, "Lecture on Glue Schema"
- EU-US Optical "lambda" workshop appended to the 21st NORDUnet Network Conference, August 2003, Reykjavik, Iceland, "The case for dynamic on-demand 'lambda' Grids"
- NORDUnet 2003 Network Conference, August 2003, Reykjavik, Iceland, "High-Performance Transport for Data-Intensive World-Wide Grids"
- 9th Open European Summer School and IFIP Workshop on Next Generation Networks (EUNICE 2003), September 2003, Budapest–Balatonfüred, Hungary, "Benchmarking QoS on Router Interfaces of Gigabit Speeds and Beyond"
- NEC'2003 conference, September 2003, Varna (Bulgaria), "DataTAG project update"
- University of Michigan & MERIT, October 2003, Ann Arbor (MI/USA), "The Lambda Grid"
- Cracow Grid Workshop (CGW'03), October 2003, Cracow, Poland, "DataTAG project update & recent results"
- The Open Round Table "Developing Countries Access to Scientific Knowledge; Quantifying the Digital Divide", ICTP Trieste, October 2003
- Japan's Internet Research Institute, October 2003, CERN (Switzerland), "DataTAG project update & recent results"
- Telecom World 2003, Geneva, October 2003. This conference, held every 3-4 years, brings together the telecommunications industry and key network developers and researchers. Caltech and CERN collaborated on a stand at the conference, on a series of advanced network demonstrations, and on a joint session with the Fall Internet2 meeting.
- UKLight Open Meeting, November 2003, Manchester (UK), "International Perspective: Facilities supporting research and development with LightPaths"
- The SuperComputing 2003 conference, November 15-21, Phoenix, Arizona (USA). Scientists from Caltech, SLAC, LANL and CERN joined forces for their demonstration of "Distributed Particle Physics Analysis Using Ultra-High Speed TCP on the Grid". H. Newman was invited to speak at several panels.


- Bandwidth ESTimation 2003 workshop, organized by DoE/CAIDA, December 2003, San Diego, CA (USA), "A method for measuring the hop-by-hop capacity of a path"
- The World Summit on the Information Society[9] (Geneva, December 2003) and the associated CERN event on the Role of Science in the Information Society (RSIS)
- Preparations for and launch of GLORIAD[10], the US-Russia-China Optical Ring

The working groups formed in the Spring of 2002 continued their work:
- Monitoring (chaired by Les Cottrell of SLAC)
- Advanced Technologies (chaired by Richard Hughes-Jones of Manchester)
- The Digital Divide (chaired by Alberto Santoro of UERJ, Brazil)[11]

The working group membership was strengthened throughout 2003 through the participation of several technical experts and members of the community with relevant experience in networks, network monitoring, and other relevant technologies. The SCIC web site, hosted by CERN (http://cern.ch/icfa-scic) and set up in the Summer of 2002, is kept up to date with detailed information on the membership, meetings, minutes, presentations and reports. An extensive set of reports used to write this report is available at the Web site.

The conclusion from the SCIC meetings throughout 2002-3, setting the tone for 2004, is that the scale and capability of networks, their pervasiveness and range of applications in everyday life, and dependence of our field on networks for its research in North America, Europe and Japan are all increasing rapidly. One recent development accelerating this trend is the worldwide development and deployment of data-intensive Grids, especially as physicists begin to develop ways to do data analysis, and to collaborate in a “Grid-enabled environment12”.

However, as the pace of network advances continues to accelerate, the gap between the technologically "favored" regions and the rest of the world is, if anything, in danger of widening. Since networks of sufficient capacity and capability in all regions are essential for the health of our major scientific programs, as well as our global collaborations, we must encourage the development and effective use of advanced networks in all world regions. We therefore agreed to make the committee's work on closing the Digital Divide[13] a prime focus for 2002-4. The SCIC also continued and expanded upon its work on monitoring network traffic and performance in many countries around the world through the Monitoring Working Group. We continued to track key network developments through the Advanced Technologies Working Group.

Reports from each of the three Working Groups, providing detailed results and recommendations, are attached to this report.

9 http://www.itu.int/WORLD2003/
10 http://www.gloriad.org and http://www.nsf.gov/od/lpa/news/03/pr03151_video.htm. Information on the Launch Ceremony (January 12-13, 2004) can be found at http://www.china.org.cn/english/international/84572.htm
11 The Chair of this working group passed from Manuel Delfino of Barcelona to Santoro in mid-2002.
12 See http://pcbunn.cithep.caltech.edu/GAE/GAE.htm
13 This is a term for the gap in network capability, and the associated gap in access to communications and Web-based information, e-learning and e-commerce, that separates the wealthy regions of the world from the poorer regions.


3. General Conclusions

The bandwidth of the major national and international networks used by the HENP community, as well as the transoceanic links is progressing rapidly and has reached the 2.5 – 10 Gbps range. This is encouraged by the continued rapid fall of prices per unit bandwidth for wide area networks, as well as the widespread and increasing affordability of Gigabit Ethernet.

A key issue for our field is to close the Digital Divide in HENP, so that scientists from all regions of the world have access to high performance networks and associated technologies that will allow them to collaborate as full partners: in experiment, theory and accelerator development. This is discussed in the following sections of this report, and in more depth in the accompanying report of the Digital Divide Working Group.

The rate of progress in the major networks has been faster than foreseen (even 1 to 2 years ago). The current generation of network backbones, representing a typical upgrade by a factor of four in speed, arrived in the last 15 months in the US, Europe and Japan. This rate of improvement is 2 to 3 times Moore’s Law14. This rapid rate of progress, confined mostly to the US, Europe and Japan, threatens to open the Digital Divide further, unless we take action.
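The comparison with Moore's Law can be made explicit with a short calculation (an editorial sketch, using only the factor-of-four-in-15-months figure quoted above and the 18-month doubling time cited in the footnote):

import math

# Backbone capacity: a factor of 4 increase over roughly 15 months (from the text above).
backbone_doubling_months = 15 / math.log2(4)   # 7.5 months per doubling

# Moore's Law: a factor of 2 at the same cost every 18 months (footnote 14).
moore_doubling_months = 18.0

ratio = moore_doubling_months / backbone_doubling_months
print(f"Backbone capacity doubles every {backbone_doubling_months:.1f} months, "
      f"about {ratio:.1f} times the pace of Moore's Law")

This gives a factor of about 2.4, consistent with the "2 to 3 times Moore's Law" statement above.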

Reliable high End-to-end Performance of networked applications such as large file transfers and Data Grids is required. Achieving this requires:

o End-to-end monitoring extending to all regions serving our community. A coherent approach to monitoring that allows physicists throughout our community to extract clear, unambiguous and inclusive information is a prerequisite for this.

o Developing and deploying high performance (TCP) toolkits in a form that is suitable for widespread use, and training the community to use these tools well and wisely (a brief numerical illustration of the TCP tuning involved is sketched after this list).

o Removing local, last mile, and national and international bottlenecks end-to-end, whether the bottlenecks are technical or political in origin.
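As a concrete illustration of why such TCP toolkits and training matter, the sketch below computes the bandwidth-delay product, that is, the amount of data a single TCP stream must keep in flight to fill a long-distance path. The example link speeds and round-trip times are illustrative assumptions, not measurements from this report.

# Bandwidth-delay product (BDP): the TCP window needed to keep a path full.
# The example paths and round-trip times below are illustrative assumptions.

def bdp_megabytes(gbps: float, rtt_ms: float) -> float:
    """Return the bandwidth-delay product in megabytes."""
    bits_in_flight = gbps * 1e9 * (rtt_ms / 1000.0)
    return bits_in_flight / 8 / 1e6

examples = [
    ("continental path, 1 Gbps, 50 ms RTT", 1.0, 50.0),
    ("transatlantic path, 2.5 Gbps, 120 ms RTT", 2.5, 120.0),
    ("transpacific path, 10 Gbps, 180 ms RTT", 10.0, 180.0),
]

for label, gbps, rtt in examples:
    print(f"{label}: a window of about {bdp_megabytes(gbps, rtt):.0f} MB is needed")

Default TCP configurations of this era typically allow windows well below 1 MB, which is why untuned transfers over transoceanic paths reach only a small fraction of the available bandwidth, and why tuned stacks and trained users make such a large difference.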

4. Recommendations

ICFA should work vigorously locally, nationally and internationally, to ensure that networks with sufficient raw capacity and end-to-end capability are available throughout the world. This is now a vital requirement for the success of our field and the health of our global collaborations.

The SCIC, and where appropriate other members of ICFA, should work in concert with other cognizant organizations, as well as funding agencies, on problems of global networking for HENP and other fields of research and education. These organizations include, in particular, Internet2, TERENA, AMPATH, DataTAG, the Grid projects and the Global Grid Forum.

14 Usually quoted as a factor of 2 improvement in performance at the same cost every 18 months.


HENP and its worldwide collaborations could be a model for other scientific disciplines, and for new modes of information sharing and communication in society at large. The provision of adequate networks and the success of our Collaborations in the Information Age thus have broader implications that extend beyond the bounds of scientific research.

The world community will only reap the benefits of global collaborations in research and education, and of the development of advanced network and Grid systems, if we are able to close the Digital Divide that separates the economically and technologically most-favored from the less-favored regions of the world. It is imperative that ICFA members work with and advise the SCIC on the most effective means to close this Divide, country by country and region by region.

Recommendations concerning approaches to close the Divide, where ICFA and the HENP Laboratory Directors can help, include:

Identify and work on specific problems, country by country and region by region, to enable groups in all regions to be full partners in the process of search and discovery in science.

As detailed in the Digital Divide working group’s report, networks with adequate bandwidth tend to be too costly or otherwise hard to obtain in the economically poorest regions. Particular attention to China, India, Pakistan, Southeast Asia, Southeast Europe, South America and Africa is required.

Performance on existing national, metropolitan and local network infrastructures also may be limited, due to last mile problems, political problems, or a lack of coordination (or peering arrangements) among different network organizations15.

Create and encourage inter-regional programs to solve specific regional problems. Leading examples include the Virtual Silk Highway project[16] (http://www.nato.int/science/e/silk.htm) led by DESY, the support for links in Asia by the KEK High Energy Accelerator Research Organization in Japan (http://www.kek.jp), and the support of network connections for research and education in South America by the AMPATH "Pathway to the Americas" (http://www.ampath.fiu.edu) at Florida International University.

Make direct contacts, and help educate government officials on the needs and benefits to society of the development and deployment of advanced infrastructure and applications: for research, education, industry, commerce, and society as a whole.

Use (lightweight; non-disruptive) network monitoring to identify and track problems, and keep the research community (and the world community) informed on the evolving state of the Digital Divide. One leading example in the HEP community is the Internet End-to-end Performance Monitoring (IEPM) initiative (http://www-iepm.slac.stanford.edu ) at SLAC.

15 These problems tend to be most prevalent in the poorer regions, but examples of poor performance on existing network infrastructures due to lack of coordination and policy may be found in all regions.
16 Members of SCIC noted that the performance of satellite links is no longer competitive with terrestrial links based on optical fibers in terms of their achievable bandwidth or round trip time. But such links offer the only practical solution for remote regions that lack an optical fiber infrastructure.


It is vital that support for the IEPM activity in particular, which covers 100 countries with 87% of the world's population, be continued and strengthened, so that we can monitor and track progress in network performance in more countries, and at more sites within countries, around the globe. This is as important for the general mission of the SCIC in our community as it is for our work on the Digital Divide.

Share and systematize information on the Digital Divide. The SCIC is gathering information on these problems and developing a Web site on the Digital Divide problems of research groups, universities and laboratories throughout its worldwide community. This will be coupled to general information on link bandwidths, quality, utilization and pricing. Monitoring results from the IEPM will be used to track and highlight ongoing and emerging problems.

This Web site will promote our community’s awareness and understanding of the nature of the problems: from lack of backbone bandwidth, to last mile connectivity problems, to policy and pricing issues.

Specific aspects of information sharing that will help develop a general approach to solving the problem globally include:

o Sharing examples of how the Divide can be bridged, or has been bridged successfully in a city, country or region. One class of solutions is the installation of short-distance optical fibers leased or owned by a university or laboratory, to reach the “point of presence” of a network provider. Another is the activation of existing national or metropolitan fiber-optic infrastructures (typically owned by electric or gas utilities, or railroads) that have remained unused. A third class is the resolution of technical problems involving antiquated network equipment, or equipment-configuration, or network software settings, etc.

o Making comparative pricing information available. Since international network prices are falling rapidly along the major Transatlantic and Transpacific routes, sharing this information should help us set lower pricing targets in the economically poorer regions, by pressuring multinational network vendors to lower their prices in the region, to bring them in line with their prices in larger markets.

o Identifying common themes in the nature of the problem, whether technical, political or financial, and the corresponding methods of solution.

Create a “new culture of collaboration” in the major experiments and at the HENP laboratories, as described in the following section of this report.

Work with the Internet Educational Equal Access Foundation (IEEAF) (http://www.ieeaf.org), and other organizations that aim to arrange for favorable network prices or outright bandwidth donations17, where possible.

17 The IEEAF successfully arranged a bandwidth donation of a 10 Gbps research link and a 622 Mbps production service in September 2002. It is expected to announce a donation between California and the Asia Pacific region early in 2003.


Prepare for and take part in the World Summit on the Information Society (WSIS; http://www.itu.int/wsis/). The WSIS is held in two phases. The first phase of WSIS took place in Geneva in December 2003. It addressed the broad range of themes concerning the Information Society and adopted a Declaration of Principles and Plan of Action18. The second phase will take place in Tunis, in November 2005. The WSIS process aims to develop a society where

“highly-developed… networks, equitable and ubiquitous access to information, appropriate content in accessible formats and effective communication can help people achieve their potential…”.

These aims are clearly synergistic with the aims of our field, and its need to provide worldwide access to information and effective communications in particular.

HENP has been recognized as having relevant experience in effective methods of initiating and promoting international collaboration, and in harnessing or developing new technologies and applications to achieve these aims. HENP has been involved in WSIS preparatory and regional meetings in Bucharest in November 2002 and in Tokyo in January 2003. It has been invited to run a session on The Role of New Technologies in the Development of an Information Society[19], and has been invited[20] to take part in the planning process for the Summit itself. On behalf of the world's scientific community, in December 2003, CERN organized the Role of Science in the Information Society[21] (RSIS) conference, a Summit event of WSIS. RSIS reviewed the prospects that present developments in science and technology offer for the future of the Information Society, especially in education, environment, health, and economic development. More details about the involvement of the HENP community in WSIS and in RSIS are available in section 15 and appendix xxx.

Formulate or encourage bi-lateral proposals22, through appropriate funding agency programs. Examples of programs are the US National Science Foundation’s ITR and International programs, the European Union’s Sixth Framework and @LIS programs, and NATO’s Science for Peace program.

Help start and support workshops on networks, Grids, and the associated advanced applications. These workshops could be associated with helping to solve Digital Divide problems in a particular country or region, where the workshop will be hosted. One outcome of such a workshop is to leave behind a better network, and/or better conditions for the acquisition, development and deployment of networks.

The SCIC is planning the first such workshop in Rio de Janeiro in February 2004, and will request ICFA approval for this workshop. An organizational meeting was held in July 2003. ICFA members are requested to participate in these meetings and in this process.

18 http://www.itu.int/wsis/documents/doc_multi.asp?lang=en&id=1161|1160
19 At the WSIS Pan-European Ministerial meeting in Bucharest in November 2002. See http://cil.cern.ch:8080/WSIS and the US State Department site http://www.state.gov/e/eb/cip/wsis/
20 By the WSIS Preparatory Committee and the US State Department.
21 http://rsis.web.cern.ch/rsis/01About/AboutRSIS.html
22 Recent examples include the CLARA project to link Argentina, Brazil, Chile and Mexico to Europe. Another is a proposal to the US NSF from Florida International University, AMPATH, other universities in Florida, Caltech and UERJ in Brazil, for a center for HEP Research, Education and Outreach, which includes partial funding for a network link between North and South America for HENP.

Help form regional support and training groups for network and Grid system development, operations, monitoring and troubleshooting[23].

5. The Digital Divide and ICFA SCIC

The success of our major scientific programs, and the health of our global collaborations, depend on physicists from all world regions being full partners in the scientific enterprise. This means that they must have access to affordable networks of sufficient bandwidth, with an overall scale of performance that advances rapidly over time to meet the growing needs.

While the performance of networks has advanced substantially in most or all world regions, by a factor of 10 roughly every 4 years during each of the last two decades, the gulf that separates the best- and least-well provisioned regions has remained remarkably constant. This separation can be expressed in terms of the achievable throughput at a given time, or the time difference between the moment when a certain performance level is first reached in one region and when the same performance is reached in another region. This is illustrated in Figure 1 below[24], where we see a log plot of the maximum throughput achievable in each region or across major networks (e.g. ESnet) versus time. The figure shows explicitly that China, Russia and Southeast Europe are several years behind North America and (Western) Europe. The fact that the lines in the plot for the various regions are nearly parallel means that the time lag (and hence the "Digital Divide") has been maintained for the last few years, and there is no indication that it will be closed unless ICFA, the SCIC and the HENP community take action.
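The "time lag" reading of Figure 1 can be made explicit with a small calculation. The growth rate (a factor of 10 roughly every 4 years) is taken from the text above; the example throughput ratio is purely illustrative.

import math

# If achievable throughput grows by a factor of 10 roughly every 4 years, a region
# whose throughput is a factor R below the leaders lags by 4 * log10(R) years.

def lag_in_years(throughput_ratio: float, years_per_factor_of_10: float = 4.0) -> float:
    """Years of lag implied by a given throughput ratio."""
    return years_per_factor_of_10 * math.log10(throughput_ratio)

# Illustrative example: a region achieving 30 times less throughput than the leading regions.
print(f"A 30x throughput gap corresponds to a lag of about {lag_in_years(30):.1f} years")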

Rapid advances in network technologies and applications are underway, and further advances and possibly breakthroughs are expected in the near future. While these developments will have important beneficial effects on our field, the initial benefits tend to be confined, for the most part, to the most economically and technologically advanced regions of the world (the US, Japan, and parts of western Europe). As each new generation of technology is deployed, it therefore brings with it the threat of further opening the Digital Divide that separates the economically most-favored from the less-favored regions of the world, notably Southeast Europe, Russia, China, India, Pakistan, Central Asia, South America, parts of Southeast Asia, and Africa. Closing this Divide, in an era of global collaborations, is of the highest importance for the present and future health of our field.

23 One example is the Internet2 HENP Working Group in the US. See http://www.internet2.edu/henp
24 This figure is taken from the SLAC Internet End-to-end Performance Monitoring Project (IEPM); see http://www-iepm.slac.stanford.edu/. Note that the standard TCP stack was used; new TCP stacks can achieve much higher throughput.


Figure 1 Maximum throughput for TCP streams versus time, in several regions of the world, seen from SLAC


5.1. Digital divide illustrated by network infrastructures

Another stark illustration of the Digital Divide is shown in Figure 2, taken from the TERENA[25] 2003 Network Compendium. Figure 2 shows the core network size of NRENs in Europe, on a logarithmic scale. The total size of each network is estimated by multiplying the length of the various links in the backbone by the capacity of those links in Mb/s; the resulting unit of network size is Mb/s * km. The figure shows that a number of countries have made impressive advances in their networks. However, except for Poland, Slovakia and the Czech Republic, East-European countries are still far behind West-European countries, especially if the core network size is divided by the population of the country.

Figure 2 Core network size of European NRENs (in Mb/s*km)

25 The TransEuropean Research and Education Network Association (http://www.terena.nl) . The analytical part of the 2002 Compendium is at http://www.terena.nl/compendium/2002/index.html
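As an aside, the "core network size" metric used in the Compendium is simple to reproduce: it is the sum over backbone links of link length times link capacity. The sketch below illustrates the calculation with invented link data, not figures from the Compendium.

# Core network size as defined in the TERENA Compendium: the sum over backbone
# links of (link length in km) x (link capacity in Mb/s), in units of Mb/s * km.
# The links below are invented placeholders for illustration only.

backbone_links = [
    # (length_km, capacity_mbps)
    (450, 10_000),   # hypothetical 10 Gbps link
    (300, 2_500),    # hypothetical 2.5 Gbps link
    (600, 622),      # hypothetical 622 Mbps link
]

core_network_size = sum(length * capacity for length, capacity in backbone_links)
print(f"Core network size: {core_network_size:,} Mb/s * km")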


The observation from Figure 2 is confirmed by Figure 3, which gives the total external bandwidth of the NRENs for 2002 and 2003 (note that the scale is logarithmic). Many countries upgraded their links in 2002 and did not upgrade again in 2003; the increases come in leaps rather than growing gradually.

Figure 3 External bandwidth of European NRENs

5.2. Digital divide illustrated by network performance

Packet loss and Round Trip Time are also two relevant metrics in the evaluation of the digital divide. Since 1995, the IEPM working group at SLAC has monitored over 850 remote hosts at 560 sites in over 100 countries, covering over 78% of the world's population and over 99% of the online users of the Internet. Figure 4 shows the fraction of the world's population with measured loss performance as seen from the US. It can be seen that in 2001, less than 20% of the population lived in countries with good to acceptable packet loss.

Figure 4 Fraction of the world's population in countries with measured loss performance in 2001 (Poor, very poor and bad mean that remote collaboration is not possible).
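The measurements behind Figures 4 to 6 amount to periodically recording packet loss and round-trip time from monitoring hosts to a large set of monitored hosts. The sketch below is not the IEPM/PingER code; it is a minimal stand-in that shells out to the system ping utility (assuming a Linux-style iputils ping and its usual summary output) and extracts the two metrics.

import re
import subprocess

def probe(host: str, count: int = 4) -> dict:
    """Measure packet loss (%) and average RTT (ms) to `host` with the system ping.

    Assumes a Linux-style iputils ping; the summary formats parsed here are not
    guaranteed on every platform.
    """
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    loss = re.search(r"(\d+(?:\.\d+)?)% packet loss", out)
    rtt = re.search(r"= [\d.]+/([\d.]+)/", out)   # the min/avg/max summary line
    return {
        "host": host,
        "loss_percent": float(loss.group(1)) if loss else None,
        "avg_rtt_ms": float(rtt.group(1)) if rtt else None,
    }

if __name__ == "__main__":
    # Hypothetical monitored host; a real deployment would loop over many sites and archive the results.
    print(probe("www.cern.ch"))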


Figure 5 Average monthly RTT measured from U.S to various countries of the world for January 2000 and August 2002 (in the Jan 2000 map countries shaded green are not measured).


Figure 6 shows the throughput seen between monitoring and monitored hosts in the major regions of the world. Each column is for monitoring hosts in a given region, and each row is for monitored hosts in a given region.

Figure 6 Derived throughputs in kbits/s from monitoring hosts to monitored hosts by region of the world for August 2003

5.3. Digital divide illustrated by network costs

Another way to measure the digital divide is cost. Figure 7 shows the relative cost of international connectivity to countries in the GÉANT network, plotted against the number of suppliers offering connectivity to that country in 2004. Obviously, there are different speeds of connectivity in the network. There are also general economies of scale in telecommunications, whereby faster circuits represent relatively better value for money than slower circuits. Adjustments have therefore been made to the international connectivity numbers, so that prices are compared after adjusting for differences in capacity. The basis for this adjustment is a good knowledge of the relative cost of different speeds of connectivity across Europe: thus, typically, 622 Mbps is roughly half the cost of 155 Mbps, etc. This approach has been extended so that it deals with both DWDM capacities and SDH capacities. The crossover from SDH to DWDM is not as precise as comparisons within SDH or within DWDM; in practice, the approach used probably overstates the cost of DWDM capacity relative to SDH capacity, perhaps by as much as 50%, so the digital divide is probably somewhat larger than is reflected by the numbers shown here. Figure 7 shows that, measured as the ratio between the cheapest and the most expensive connectivity in the GÉANT network, the digital divide is a factor of 114. Excluding Malta and Turkey, this number is 39.4.


Figure 7 Digital divide illustrated by access costs to the GEANT’s backbone.
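A sketch of the kind of capacity adjustment described above: circuit prices are rescaled to a common reference capacity before the most-expensive to cheapest ratio is formed. The scaling exponent and all of the price figures below are invented assumptions for illustration; they are not the GÉANT 2004 data behind Figure 7.

# Normalize connectivity prices to a common reference capacity before comparing
# countries. Both the cost-versus-capacity exponent and the price list are
# illustrative assumptions, not the actual GEANT 2004 data.

REFERENCE_MBPS = 622
SCALING_EXPONENT = 0.5   # assumed: roughly 4x more capacity for about 2x the price

def normalized_price(price: float, capacity_mbps: float) -> float:
    """Rescale a circuit price to what an equivalent REFERENCE_MBPS circuit would cost."""
    return price * (REFERENCE_MBPS / capacity_mbps) ** SCALING_EXPONENT

# (country, annual price in arbitrary units, circuit capacity in Mbps) -- invented numbers
offers = [
    ("Country A", 100, 2500),
    ("Country B", 120, 622),
    ("Country C", 300, 155),
    ("Country D", 900, 34),
]

normalized = {country: normalized_price(price, mbps) for country, price, mbps in offers}
divide = max(normalized.values()) / min(normalized.values())
print({country: round(value, 1) for country, value in normalized.items()})
print(f"Ratio of the most to the least expensive (normalized): {divide:.1f}")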

5.4. A new "culture of worldwide collaboration"

It is also important to note that once networks of sufficient performance and reliability, and tools for remote collaboration are provided, our community will have to strive to change its culture, such that physicists remote from the experiment, especially the younger physicists and students who cannot travel often (or ever) to the laboratory site of the experiment, are able to participate fully in the analysis, and the physics. A new “culture of worldwide collaboration” would need to be propagated throughout our field if this is to succeed. The Collaborations would have to adopt a new mode of operation, where care is taken to share the most interesting and current developments in the analysis, and the discussions of the latest and most important issues in analysis and physics, with groups spread around the world on a daily basis. The major HENP laboratories would also need to create rooms, and new “collaborative working environments” able to support this kind of sharing, with the backing of laboratory management. The management of the experiments would need to strongly support, if not require the group leaders and other physicists at the laboratory to participate in, if not lead, the collaborative activity on an ongoing day-to-day basis.

While these may appear to be lofty goals, the network and Grid computing infrastructure, and cost-effective collaborative tools26 are becoming available to support this activity. Physicists are turning to the development and use of “Grid-enabled Analysis Environments” and “Grid-enabled Collaboratories”, which aim to make daily collaborative sharing of data and work on analysis, supported by distributed computing resources and Grid software, the norm. This will strengthen our field by integrating young university-based students in the process of search and discovery.

26 See for example www.vrvs.org

It is noteworthy that these goals are entirely consistent with, if not encompassed by, the visionary ICFA Statement[27] on Communications in International HEP Collaborations of October 17, 1996:

“ICFA urges that all countries and institutions wishing to participate even more effectively and fully in international HEP Collaborations should:

Review their operating methods to ensure they are fully adapted to remote participation

Strive to provide the necessary communications facilities and adequate international bandwidth”

We therefore call upon the management of the HENP laboratories, and the members of ICFA, to assume a leadership role and help create the conditions at the labs and some of the major universities and research centers, to fulfill ICFA’s visionary statement. The SCIC is ready to assist in this work.

6. HENP Network Status: Major Backbones and International Links

Following the requirements report by the ICFA Network Task Force (ICFA-NTF) in 1998[28], a Transatlantic Network Working Group in the US studied, in 2001, the network requirements of several of the major HEP experimental programs in which the US is involved. The results of this study[29] generally confirmed the estimates of the ICFA-NTF reports, but found the requirements for several major HEP links to be somewhat larger. That report showed that the major links used by HENP would need to reach the Gbps range to the US HEP and CERN laboratories by 2002-3, and the 10 Gbps range by roughly 2004-7 (depending on the lab). Transatlantic bandwidth requirements were foreseen to rise from 3 Gbps in 2002 to more than 20 Gbps by 2006.

As discussed later in this report, the requirements estimates are tending to increase as bandwidth in the “leading” regions becomes more affordable, as future network technology developments are appearing on the horizon, and as the potential and requirements for a new generation of Grid systems becomes clearer.

As discussed further in the report of the Advanced Technologies working group, the prices per unit bandwidth have continued to fall dramatically, allowing the speed of the principal wide area network backbones and transoceanic links used by our field to increase rapidly in Europe, North America and Japan. Speeds in these regions rose from the 1.5 to 45 Megabits/sec (Mbps) range in 1996-1997, to the 622 Mbps to 10 Gbps range today. The outlook is for the continued evolution of these links to meet the needs of HENP’s major programs now underway and in preparation at BNL, CERN, DESY, FNAL, JLAB, KEK, DESY, SLAC and other laboratories in a cost effective way. This will require substantial ongoing investment.

The affordability of these links is driven, in part, by the explosion in the data transmission capacity of a single optical fiber, currently reaching more than 1 Terabit/sec. This is achieved by using dense wavelength division multiplexing (DWDM), where many wavelengths of light, each modulated to carry 10 Gbps, are carried on one fiber. The affordable end-to-end capacity in practice is, however, much more modest, and is limited by the cost of the fiber installation and the equipment for transmitting/receiving, routing and switching the data in the network, as well as the relatively limited speed and capacity of computers to send, receive, process and store the data.

27 See http://www.fnal.gov/directorate/icfa/icfa_communicaes.html
28 See http://davidw.home.cern.ch/davidw/icfa/icfa-ntf.htm and the Requirements Report at http://l3www.cern.ch/~newman/icfareq98.html.
29 See http://gate.hep.anl.gov/lprice/TAN.
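The fiber-capacity arithmetic mentioned above is simply the number of wavelengths times the per-wavelength rate; the channel count used below is an assumption chosen to be consistent with the "more than 1 Terabit/sec" figure in the text.

# Aggregate DWDM fiber capacity = number of wavelengths x rate per wavelength.
wavelengths = 128          # assumed channel count for a 2003-era DWDM system
gbps_per_wavelength = 10   # per-wavelength rate quoted in the text
total_tbps = wavelengths * gbps_per_wavelength / 1000
print(f"{wavelengths} wavelengths x {gbps_per_wavelength} Gbps = {total_tbps:.2f} Tbps per fiber")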

Another limitation is the market price for relatively short-distance connections. This most severely affects universities and laboratories in third world countries, due to the relatively scarce supply of bandwidth and/or the lack of competition, but it also affects HEP groups in all regions of the world where connections to national or regional academic and research networks are not available at low cost. There is also the "last mile" problem that persists in the US and many European countries, where prices for relatively short connections at 1.5 Mbps – 1 Gbps speeds often remain high, as a result of heavy demand versus limited supply by (very) few vendors. In addition, vendors are often reluctant to deploy services based on new technologies (such as Gigabit Ethernet). This is a result of the fact that deployment of the new services and underlying technologies requires significant investments by the vendor, while at the same time reducing the revenue stream compared to the older products[30].

Examples of rapid progress in the capacity of the network backbones and the main links used by HENP are given below. In many cases the bandwidth actually available in practice for HEP, on shared academic and research networks serving a whole country, is much lower[31]. There is therefore an important continued role for links dedicated to, or largely available to, HEP, especially on the most heavily used routes.

While these capacity increases on major links during the past year have led to generally improved network performance, in the countries mentioned and between them across the Atlantic and Pacific, meeting the HEP-specific needs (mentioned above) is going to require continued concerted effort. This includes sharing network monitoring results, developing and promulgating guidelines for best practices, tracking technology developments and costs, and dealing with end-to-end performance problems as they arise.

30 These factors are the root cause of the fact that most of the world, including the most technologically advanced regions, still uses modems at typical speeds of 40 kbps. The transition to "broadband" services such as DSL or cable modems is well underway in the wealthier regions of the world, but the transition is proceeding at a rate that will take several years to complete.
31 For example, the typical reserved bandwidth to the national labs in France was often in the 2-34 Mbps range, until the advent of RENATER3 in the latter half of 2002.


6.1. Europe

The GEANT pan-European backbone[32] now interconnects 32 countries, and its core network includes 9 links at 10 Gbps and 11 at 2.5 Gbps (see Figure 8). Individual countries are connected at speeds in the range of 155 Mbps to 10 Gbps.

Figure 8 The GEANT Pan-European Backbone Network, showing the major links

32 Also see http://www.dante.net/server/show/nav.007


Figure 9 gives an idea of the evolution of the NRENs' backbone capacity in Western Europe from 2001 to 2003. In 2001, the highest capacity was 2.5 Gbps; in 2003 the highest is 10 Gbps. Typically, the core capacity goes up in leaps, involving the change from one type of technology to another. Except for Greece and Ireland, all backbone capacities are larger than 1 Gbps.

Figure 9 Core capacity of Western European NRENs

NORDUnet is the Nordic Internet highway to research and education networks in Denmark, Finland, Iceland, Norway and Sweden, and provides the Nordic backbone to the Global Information Society. As shown in Figure 10, part of the backbone has already been upgraded to 10 Gbps, and other links are being upgraded.

Figure 10 The NORDUnet network

SURFnet5 is the Dutch Research and Education Network. It is a fully optical 10 Gbit/s dual-stack IP network. Today 65% of the SURFnet customer base is connected to SURFnet5 via Gigabit Ethernet. The current topology of SURFnet5 is shown in Figure 11. In early 2003 SURFnet presented its plans for SURFnet6. As part of the GigaPort Next Generation Network project, SURFnet6 is designed as a hybrid optical network. It will be based on dark fiber and aims to provide the SURFnet customers with seamless Lambda, Ethernet and IP network connectivity. In addition, SURFnet pioneered Lambda networking and developed NetherLight33, which has become a major hub in GLIF34, the Global Lambda Integrated Facility for Research and Education.

Figure 11 SURFnet 10 Gbps backbone

RENATER335 (Figure 12), based on 2.5 Gbps links between the major cities in France, was inaugurated in October 2002. It is connected to GEANT at 2.5 Gbps. The link between CERN and the IN2P3 computer center was upgraded from 155 Mbps to 1 Gbps at the end of 2002, and Lyon is investigating a dark fiber connection to CERN. The start of RENATER3 has greatly improved the connectivity of the French institutes involved in HEP, which previously had dedicated bandwidth typically in the 2 Mbps to 34 Mbps range.

Figure 12 The Renater3 network in France

33 Refer to
34
35 See http://www.renater.fr/Reseau/index.htm

The G-WiN German academic and research network36 is the core of the “Intranet for the science community” in Germany. It is configured around 27 core nodes, primarily located at scientific institutions; DFN-Verein is responsible for the operation of G-WiN. There are 55 links interconnecting the core nodes. As shown in Figure 13, they are operated at rates from 2.5 Gbps up to 10 Gbps, with some of them using transparent wavelength technology.

Figure 13 Current G-WIN topology

The SuperJANET4 network (Figure 14) in the UK is composed of a 10 Gbps core and many 2.5 Gbps links from each of the academic metropolitan area networks (MANs)37. The core upgrade from 2.5 Gbps to 10 Gbps was completed in July 2002. The UK academic and research community is also deploying a next generation optical research network called “UKLight”38. The UKLight project will provide links of 10 Gbps to Amsterdam and to Starlight in Chicago.

Figure 14 Schematic view of the SuperJanet4 network in the United Kingdom.

36 See http://www.dfn.de/win/ and http://www.noc.dfn.de/
37 See http://www.superjanet4.net and http://www.ja.net .
38 See http://www.ja.net/development/UKLight/UKLightindex.html

The GARR-B network39 (Figure 15), in operation in Italy since late 1999, is based on a backbone with links in the range of 155 Mbps to 2.5 Gbps. International connections include a 2.5 Gbps link from the backbone to GEANT, and 2.5 Gbps to the commercial Internet provided by Global Crossing. Links from the backbone to other major cities and institutes are typically in the range of 34 to 155 Mbps. The next generation GARR-G40 network will be based on point-to-point “lambdas” (wavelengths) with link speeds of at least 2.5 Gbps, and will offer advanced services such as IPv6 and QoS. Metropolitan area networks (MANs) will be connected to the backbone, allowing more widespread high speed access, including secondary and then primary schools. A pilot network “GARR-G Pilot” (http://pilota.garr.it/ ), based on 2.5 Gbps wavelengths, has been in operation since early 2001.

Figure 15 The GARR-B network in Italy

39 See http://www.garr.it/garr-b-home-hgarrb-engl.shtml
40 See http://www.garr.it/garr-gp/garr-gp-ilprogetto-engl.shtml

Over the last two years CESnet41, the Czech NREN, has designed and implemented a new network topology based on two essential requirements: redundant connections and a low number of hops for all major PoPs. As shown in Figure 16, the network core is based on Packet Over SONET (POS) technology, with all core lines operating at 2.5 Gbps. The network has a 1.2 Gbps line to GÉANT used for academic traffic, a 622 Mbps line to Telia used for commodity traffic, and a 2.5 Gbps line to NetherLight for experimental traffic.

Figure 16 CESnet Network

The SANET42 network infrastructure (Figure 17) is based on leased dark fibres. The network is configured as two rings providing full redundancy, with a maximum delay of 5 ms. In the near future SANET is planning to connect other Slovak towns to the optical infrastructure and upgrade the backbone speed to 10 Gbps. SANET is currently in the process of establishing a direct optical connection from Bratislava to Brno in the Czech Republic over leased dark fibre.

Figure 17 SANET backbone

41 See http://www.ces.net/
42 See http://www.sanet.sk/en/siet_topologia.shtm

The early availability of fibers allowed the backbone transmission speed of the Polish NREN PIONIER43 to be improved dramatically. Since June 2003, 16 MANs situated along the installed fibers have been connected by an advanced 10 Gigabit Ethernet (10GE) network, as shown in Figure 18. This network was built using native 10GE transport (the 10GE LAN standard) over DWDM equipment installed on a pair of PIONIER fibers. Using DWDM on the fibers allows for future, cost-effective network expansion and makes it possible to build testbeds for next generation networks supporting advanced network services. The current 10GE network is regarded as an intermediate solution, to be used until a true multi-lambda optical network is implemented and made available to the research community.

Figure 18 PIONIER network

43 See http://www.pionier.gov.pl/str_glowna.html

6.2. North America

The “Abilene” Network of Internet2 in the US was designed and deployed during 1998 as a high-performance IP backbone to meet the needs of the Internet2 community. The initial OC-48 instantiation, based on Cisco routers and Qwest SONET-based circuits, became operational in January 1999. The upgrade to the current OC-192 network, based on Juniper routers and adding Qwest DWDM-based circuits, was completed in December 2003. The current topology is shown in Figure 19.

Figure 19 Abilene 10Gbps backbone

Connecting to Abilene are 48 direct connectors that, in turn, provide connectivity to more than 220 participants, primarily research universities and several research laboratories. The speeds of the connections range from a diminishing number of OC-3 circuits to an increasing number of OC-48 (currently six) and 10 Gigabit Ethernet (now two) circuits. Abilene connectors are usually gigaPoPs, consortia of Internet2 members that cover a geographically compact area of the country and connect the research universities and their affiliated laboratories in that area to the Abilene backbone. The three-level infrastructure of backbone, gigaPoP, and campus network is capable of providing scalable, sustainable, high-speed networking to the faculty, staff, and students on our campuses. The actual performance achieved depends on the capacity and quality of the connectivity from departmental LAN to campus LAN to gigaPoP to Abilene.

In addition, Abilene places high priority on connectivity to international and federal peer research networks, including ESnet, CA*net, GEANT, and APAN. Currently, Abilene-ESnet peering includes two OC-48 SONET connections and will soon grow to three OC-48c SONET and one 10 Gigabit Ethernet connections.

Similar multi-OC48 and above peerings are in place with CA*net and GEANT. To make these peerings scalable, we emphasize the use of 10 Gigabit Ethernet switch-based exchange points; thus, Abilene has two 10-Gb/s connections to StarLight (Chicago), one to Pacific Wave (Seattle), one to MAN LAN (New York), and similar planned upgrades to the NGIX-East (near Washington DC) and the planned Pacific Wave presence in Los Angeles. The recent demonstration by the DataTAG collaboration of a single TCP flow of more than 5 Gb/s between Geneva and Los Angeles was conducted in part over the Abilene Network (between Chicago and Los Angeles). In cases where the end-to-end connection from the hosts on campuses to Abilene is provisioned at or above 1 Gb/s, we are seeing increasing evidence of a networking environment where single TCP flows of more than 500-900 Mb/s can be routinely supported. An increasingly capable performance measurement infrastructure permits the performance of flows within Abilene and from Abilene to key edge sites to be instrumented. This instrumentation is one component of the Abilene Observatory, a general facility for making Abilene measurements available to the network research and advanced engineering community.
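As a rough illustration of what sustaining such single-stream rates implies for the end hosts, the sketch below computes the bandwidth-delay product, i.e. the amount of data TCP must keep in flight (and hence roughly the socket buffer or window size required). The round-trip times used are illustrative assumptions, not measurements quoted in this report.

# Back-of-envelope sketch: the TCP window needed to sustain a given rate over
# a long-haul path is roughly the bandwidth-delay product (rate x RTT).
# The RTT values below are assumed for illustration only.

def bdp_megabytes(rate_gbps: float, rtt_ms: float) -> float:
    """Bandwidth-delay product in megabytes for a given rate and RTT."""
    bytes_per_second = rate_gbps * 1e9 / 8.0
    return bytes_per_second * (rtt_ms / 1000.0) / 1e6

for label, rate_gbps, rtt_ms in [
    ("Geneva - Los Angeles, 5 Gb/s single flow", 5.0, 170.0),  # assumed RTT
    ("US cross-country, 1 Gb/s campus host", 1.0, 70.0),       # assumed RTT
]:
    print(f"{label}: window of about {bdp_megabytes(rate_gbps, rtt_ms):.0f} MB needed")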

In addition to its high-performance IPv4 service, Abilene also provides native IPv6 connectivity to its members, with performance identical to that provided for IPv4. A key strength of IPv6 is its support for global addressability for very large numbers of nodes, such as may be needed for large arrays of detectors or other distributed sensors.

In sum, this Abilene-based shared IP network provides excellent performance in the current environment dominated by Gigabit Ethernet LANs and host interfaces. As we face the future, however, we need to address the October 2006 completion of the current Abilene transport arrangement as well as the beginning, during 2007, of LHC operations. Both will call for new forms of network infrastructure to support the advanced research needs of our members.

The ESnet backbone44 is currently adding OC192 (10 Gbps) links across the northern tier and OC48 (2.5 Gbps) links across the southern tier of the US, as shown in Figure 20. Site access bandwidth is slowly moving from OC12 to OC48. The link to the Starlight45 optical peering point for international links has been upgraded to 2.5 Gbps.

The bandwidth required by DOE’s large-scale science projects over the next 5 years is characterized in the June 2003 Roadmap46 shown in section 9. Programs with currently defined requirements for high bandwidth include High Energy Physics, Climate (data and computations), NanoScience at the Spallation Neutron Source47, Fusion Energy, Astrophysics, and Genomics (data and computations). A new ESnet architecture and implementation strategy is being developed in order to increase the bandwidth, services, flexibility and cost effectiveness of the network.

The elements of the architecture include independent, multi-lambda national backbones that independently connect to Metropolitan Area Network rings, together with independent paths to Europe and Japan. The MAN rings are intended to provide redundant paths and on-demand, high bandwidth point-to-point circuits to DOE Labs. The alternate paths can be used for provisioned circuits except in the probably rare circumstance when they are needed to replace production circuits that have failed. The multiple backbones would connect to the MAN rings in different locations to ensure that the failure of a backbone node could not isolate the MAN.

Another aspect of the new architecture is high-speed peering with the US university community, and the goal is to have multiple 2.5-10 Gb/s cross-connects with Internet2/Abilene to provide seamless, high-speed access between the university community and the DOE Labs. The long-term ESnet connectivity goals are shown in Figure 21.

The implementation strategy involves building the network by taking advantage of the evolution of the telecom milieu – that is, using non-traditional sources of fiber, collaborations with existing R&D institution network confederations for lower cost transport, and vendor neutral interconnect points for more easily achieving competition in local loops / tail circuits.

Replacing local loops with MAN optical rings should provide for continued high quality production IP service, at least one backup path from sites to ESnet hub, scalable bandwidth options from sites to ESnet backbone, and point-to-point provisioned high-speed circuits as an ESnet service.

With endpoint authentication, the point-to-point paths are private and intrusion resistant circuits, so they should be able to bypass site firewalls if the endpoints (sites) trust each other.

A clear mandate from the Roadmap Workshop was that ESnet should be more closely involved with the network R&D community, both to assist that community and to more rapidly transition new technology into ESnet. To facilitate this, the new implementation strategy includes interconnection points with National Lambda Rail and UltraNet – DOE’s network R&D testbed.

44 See http://www.es.net and http://www1.es.net/pub/maps/current.jpg
45 See http://www.startap.net/starlight
46 “DOE Science Networking Challenge: Roadmap to 2008.” Report of the June 2003 DOE Science Networking Workshop. Both Workshop reports are available at http://www.es.net/#research.
47 The Spallation Neutron Source (SNS) is an accelerator-based neutron source being built in Oak Ridge, Tennessee, by the U.S. Department of Energy. The SNS will provide the most intense pulsed neutron beams in the world for scientific research and industrial development.

Figure 20 The ESnet backbone in the US, in 2003.

Figure 21 Long-term ESnet Connectivity Goal.

The CA*net448 research and education network in Canada connects regional research and education networks using wavelengths at typical speeds of 10 Gbps each. The underlying architecture of CA*net 4 is two 10 Gbps lambdas from Halifax to Vancouver, as shown in Figure 22. A third national lambda is planned to be deployed later this year. Rather than a single homogeneous IP routed network, CA*net 4 can better be described as a set of independent parallel IP networks, each associated with a specific application or discipline. There are connections to the US at Seattle, Chicago, and New York.

Figure 22 CA*net4 (Canada) map

48 See http://www.canarie.ca/canet4/connected/canet4_map.html

6.3. Korea and Japan

SuperSINET49 in Japan has been operating since January 4, 2002. Its backbone connects research institutes at 10 Gbps, and the leading research facilities within those institutes are directly connected at 1 Gbps, as shown in Figure 23. For example, KEK is connected at 10 Gbps to the core network. A 2 x 2.5 Gbps link to New York was put in place at the end of 2002, and peering of this link with ESnet was established in February 2003.

Figure 23 SuperSINET (Japan) map, October 2003.

49 See http://www.sinet.ad.jp/english/index.html

Two major backbone networks for advanced network research and applications exist in Korea: KREONET (Korea Research Environment Open NETwork)50, connected to over 230 organizations including major universities and research institutions, and KOREN (KOrea Advanced Research Network)51, connected to 47 research institutions. Both networks were significantly upgraded in 2003.

o A major upgrade was made to the KREONET/KREONet2 backbone (Figure 24), raising the speed to 2.5-5 Gbps on the set of links interconnecting 11 regional centers, in order to support Grid R&D and supercomputing applications. The network also includes a Grid-based high-performance research network called SuperSIReN, with a speed of 10 Gbps, centered on major universities and research institutes in Daedeok Science Valley.

Figure 24 The KREONET Infrastructure, showing the upgraded core network interconnecting 11 regional centers, and the main national and international connections to other research and education networks

50 KREONET is a national R&D network, run by KISTI (Korea Institute of Science and Technology Information) and supported by the Korean government, in particular MOST (the Ministry Of Science & Technology) since 1988. For science and technology information exchange and supercomputing related collaboration, KREONET provides high-performance network services for the Korean research and development community. Currently KREONET member institutions are over 230 organizations, which include 50 government research institutes, 72 universities, 15 industrial research laboratories, etc. (http://www.kreonet.re.kr)

51 KOREN (KOrea Advanced Research Network) was founded for the purpose of expanding the technological basis of Korea and for providing a research environment for the development of high speed telecommunications equipment and application services. Established in 1995, KOREN is a not-for-profit research network that seeks to provide universities, laboratories and industrial institutes with a research and development environment for 6T related technology and application services based on a subsidy from the Ministry of Information and Communications. (http://www.koren21.net)

o The speed of KOREN, shown in Figure 25, was also upgraded, to 10 Gbps between Seoul and Daejeon, and a 2.5 Gbps ring configuration was installed connecting four cities (Daejeon – Daegu – Busan – Gwangju – Daejeon). Initially 5 user sites were connected at 1 Gbps, and the number of sites will be increased soon.

Figure 25 The KOREN infrastructure, showing the 2.5 Gbps ring interconnecting four major cities.

6.4. Intercontinental links

The US-CERN connectivity between Starlight and CERN, jointly funded by the US (DOE and NSF) and Europe (CERN and the EU through the DataTAG52 project), has been consolidated into a single 10 Gbps link53. Today, strict MPLS-based policing protects the production traffic from the research traffic. Peering at 10 Gbps with Abilene and TeraGrid has been set up at Chicago, and an upgrade of the bandwidth to GEANT to 10 Gbps is scheduled this year at Geneva. In parallel, LHCnet is being extended to the US west coast via NLR waves made available to the HEP community through HOPI and UltraNet. Caltech is deploying a 10 Gbps local loop, dedicated to research and development, to the NLR PoP in downtown Los Angeles. A general view of LHCnet is given in Figure 26.

The optical triangle interconnecting Geneva, Amsterdam and Chicago with 10 Gbps wavelengths from SurfNet will be extended to the UK, forming an optical quadrangle, once the “UKLight” project begins operations.

Figure 26 LHCNet Peering and Lambda triangle

52 See http://datatag.web.cern.ch/datatag/ .
53 At little or no extra cost for the bandwidth upgrade.

StarLight54 is a research-support facility and a major peering point for US national and international networks, based in Chicago. Designed by researchers, for researchers, it anchors a host of regional, national and international wavelength-rich LambdaGrids, with switching and routing at the highest experimental levels. It is also a persistent testbed for conducting research and experimentation with “lambda” signaling, provisioning and management, developing new data-transfer protocols, and designing real-time distributed data mining and visualization applications. Since Summer 2001, StarLight has been serving as a 1 GigE (Gigabit Ethernet) and 10 GigE electronic switching and routing facility for the national and international Research and Education networks. The international Lambdas connected to StarLight are shown in Figure 27.

TransLight is a global partnership among institutions, organizations, consortia and country National Research Networks (NRNs) that wish to make their lambdas available for scheduled, experimental use. This one-year global-scale experimental networking initiative aims to advance cyberinfrastructure through the collaborative development of optical networking tools and techniques and advanced LambdaGrid middleware services among a worldwide community of researchers. TransLight consists of many provisioned 1-10 Gigabit Ethernet (GigE) lambdas among North America, Europe and Asia via StarLight in Chicago. As shown in Figure 27, the members are CANARIE/CA*net4, CERN/Caltech, SURFnet/NetherLight, UIC/Euro-Link, TransPAC/APAN (Asia), NORDUnet, NorthernLight, CESNET/CzechLight and AARnet.

Figure 27 TransLight

54 See http://www.startap.net/starlight/

The Global Ring Network for Advanced Application Development (GLORIAD), shown in Figure 28, is the first round-the-world high-speed network, jointly established by China, the United States and Russia. The multi-national GLORIAD program will actively encourage and coordinate applications across multiple disciplines and provide for sharing such scientific resources as databases, instrumentation, computational services, software, etc. In addition to supporting active scientific exchange with network services, the program will provide a test bed for advancing the state of the art in collaborative and network technologies, including Grid-based applications, optical network switching, an IPv6 backbone, network traffic engineering and network security.

Figure 28 The Global Ring Network for Advanced Application Development (GLORIAD)

In December 2003, the Trans-Pacific Optical Research Testbed (SXTransPORT55) announced the deployment of dual 10 gigabit per second circuits connecting Australia's Academic and Research Network (AARNet) to networks in North America, as part of a bundle of services for approved non-commercial scientific, research and educational use. This gigantic leap, shown in Figure 29, increases the Australia-US trans-Pacific bandwidth by a factor of 64!
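The quoted factor follows from the link speeds cited in this report (310 Mbps previously, 2 x 10 Gbps now); the one-line check below is purely illustrative.

# Illustrative check of the quoted improvement in Australia-US research
# bandwidth: from 310 Mbps to dual 10 Gb/s circuits (figures from this report).
old_mbps = 310
new_mbps = 2 * 10_000
print(f"Improvement factor: about {new_mbps / old_mbps:.1f}x")  # ~64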

Figure 29 Australia-US bandwidth for research and education

7. Advanced Optical Networking Projects and Infrastructures

7.1. Advanced Optical Networking Infrastructures

Most conventional carriers, a growing number of utilities and municipalities, and a number of new-entrant carriers have installed fiber-optic cabling on their rights of way that well exceeds their current needs, and so remains "unlit", or "dark". Lighting these fibers can now be done using very inexpensive technology that is identical in many respects to that used on local area networks, and so is essentially a "commodity", if not a "consumer", item. Building networks based on a combination of this new technology and on gaining access to either pre-existing dark fiber, or fiber that has been newly installed for this purpose, is increasingly being seen as a new way to build very high capacity networks at very low cost, while gaining a degree of control over the network that had always rested with the carrier.

55 http://www.aarnet.edu.au/news/sxtransport.html

In 2003-4 we are seeing the emergence of privately owned or leased wide-area fiber infrastructures, managed by non-profit consortia of universities and regional network providers, to be used on behalf of research and education. This marks an important change: from an era of managed bandwidth services, to one where the research and education community itself owns and shares the cost of operating the network infrastructure. The abundantly available dark fibers and lambdas will cause a paradigm shift in networking. In the new scheme, the costs of adding additional wavelengths to the infrastructure, while still significant, are much lower than was previously thought possible. The most advanced initiatives are listed below.

7.1.1. AARNet

In a deal finalized in December 2003, AARNet (Australia's Academic and Research Network) acquired dark fibres across Australia for 15 years. Initially these fibres will provide 10 Gbps across the country, but the AARNet3 design will be capable of driving speeds of 40 Gbps and beyond.

7.1.2. CESNET

CESNET (the Czech Academic and Research Network) has been leasing fibres since 1999. The current national fibre footprint, realized or contracted, is 17 lines with an overall length of 2513 km. Most of the CESNET backbone links rely on these leased fibres. The advantages are broad independence from carriers, better control of the network, and significant savings at higher transmission rates and for more lambdas. The case study shown in Table 1 compares the expenses for lambda leasing and for fibre leasing, based on offers for the year 2003. It includes 4-year depreciation of equipment, academic discounts and equipment service fees.

Table 1 Case study in Central Europe: buying lambdas vs. leasing fibre (EURO/month, 2003 offers)

Capacity   Route                                     Leased lambda(s)   Leased fibre with own equipment
1 x 2.5G   about 150 km (e.g. Ústí n.L. - Liberec)        7,000          5,000  (2 x booster 18 dBm)
1 x 2.5G   about 300 km (e.g. Praha - Brno)               8,000          8,000  (2 x booster 27 dBm + 2 x preamplifier + 6 x DCF)
4 x 2.5G   about 150 km                                  14,000          8,000  (2 x booster 24 dBm, DWDM 2.5G)
4 x 2.5G   about 300 km                                  23,000         11,000  (2 x (booster + in-line + preamplifier), 6 x DCF, DWDM 2.5G)
1 x 10G    about 150 km                                  14,000          5,000  (2 x booster 21 dBm, 2 x DCF)
1 x 10G    about 300 km                                  16,000          7,000  (2 x (booster 21 dBm + in-line + preamplifier) + 6 x DCF)
4 x 10G    about 150 km                                  29,000         12,000  (2 x booster 24 dBm, 2 x DCF, DWDM 10G)
4 x 10G    about 300 km                                  47,000         14,000  (2 x (booster + in-line + preamplifier), 6 x DCF, DWDM 10G)
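To illustrate how a "leased fibre with own equipment" monthly figure of the kind shown in Table 1 is typically assembled (fibre lease, plus the equipment purchase spread over the 4-year depreciation period mentioned above, plus service fees), here is a minimal sketch; all of the input prices are hypothetical and are not taken from the CESNET offers.

# Minimal sketch with purely hypothetical prices (not taken from Table 1):
# monthly cost of a leased-fibre solution = fibre lease
#   + straight-line depreciation of the equipment over 4 years
#   + monthly equipment service fee.

def monthly_cost_eur(fibre_lease_per_month: float,
                     equipment_capex: float,
                     service_fee_per_month: float,
                     depreciation_years: int = 4) -> float:
    """Monthly cost in EUR, with the equipment depreciated over N years."""
    depreciation = equipment_capex / (depreciation_years * 12)
    return fibre_lease_per_month + depreciation + service_fee_per_month

# Hypothetical ~300 km route: EUR 3,000/month fibre lease, EUR 150,000 of
# amplifiers/DCF/DWDM equipment (after academic discount), EUR 500/month service.
print(f"about EUR {monthly_cost_eur(3_000, 150_000, 500):,.0f} per month")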

7.1.3. PIONIER

The PIONIER56 (Polish NREN) network deployment started in 2001 with the fiber acquisition process. As the availability and quality of the existing fibers were not satisfactory for the current and future demands of optical networking, the decision was taken to build new fibers in cooperation with telecommunication carriers, using a cost-sharing model. By June 2003, 2650 km of fiber lines had been laid, connecting 16 MANs. The complete fiber network is to connect 21 MANs with 5200 km of fiber by 2005, as shown in Figure 30.

Figure 30 Pionier Fiber network

7.1.4. SURFnet6

The deployment of the next generation Dutch research and education network SURFnet6 will be based on dark fibers. As shown in Figure 31, over 3000 km of managed dark fiber pairs are already available to SURFnet today.

Figure 31 Managed dark fiber pairs for SURFnet6.

56 See http://www.pionier.gov.pl/str_glowna.html

7.1.5. X-WiN

DFN in Germany has started the process of upgrading from G-WiN to X-WiN. All links between the core nodes will be upgraded to 10 Gbps, and a flexible reconfiguration scheme with lead times below 7 days will allow for dynamic reconfiguration of the core links in case of changing data flow requirements. One major addition to the existing standard services will be bandwidth on demand, whose technical and economic feasibility will be explored.

o In terms of the base technology, diverse approaches are possible. Options include:

 SDH/Ethernet as a basic platform
 Managed lambdas
 Managed dark fiber and DFN’s own WDM

The market in Germany for dark fiber offers interesting possibilities. For example, as shown in Figure 32, GasLine, a national natural gas provider, has installed optical fibers along its gas pipelines. The geographical coverage is interesting not only for the core infrastructure but also for the many institutions found in the proximity of the links. The technical characteristics of the fibers and the economic aspects look very promising. The roadmap for the migration to X-WiN includes the installation and operation of an optical testbed, called Viola. Network technology tests in the (real) user environment will provide important input to the design of the next generation NREN. A feasibility study will be completed in early 2004 and the concept will be worked out by Q3/04. The actual migration from G-WiN to X-WiN is expected to take place in Q4/05.

Figure 32 GasLine dark fiber network

7.1.6. FiberCo

FiberCo57 in the US helps to provide inter-city dark fiber to regional optical networks, with the benefit of a national-scale contract and aggregate price levels. The responsibility for lighting this fiber will rest with the regional networks. A secondary objective is to ensure that the U.S. research university collective maintains access to a strategic fiber acquisition capability on the national scale for future initiatives. FiberCo has executed two agreements with Level 3 Communications that 1) provide it with an initial allocation of over 2,600 route-miles of dark fiber anywhere on Level 3's national footprint (see Figure 33) and 2) set the ongoing costs for fiber maintenance and equipment co-location.

Figure 33 FiberCo Available fiber topology

7.1.7. National LambdaRail

National LambdaRail58 (NLR) is not a single network but rather a unique and rich set of facilities, capabilities and services that will support a set of multiple, distinct, experimental and production networks for the U.S. research community. On NLR, these different networks will exist side by side in the same fiber-optic cable pair, but will be physically independent of each other, as each will be supported by its own lightwave or lambda. The principal objectives of NLR are to:

o Bridge the gap between leading-edge optical network research and state-of-the-art applications research;

o Push beyond the technical and performance limitations of today’s Internet backbones;

o Provide the growing set of major computationally intensive science (e-Science) projects, initiatives and experiments with the dedicated bandwidth, deterministic performance characteristics, and/or other advanced network capabilities needed; and

o Enable the potential for highly creative, out-of-the-box experimentation and innovation that characterized facilities-based network research during the early years of the Internet.

57 See www.FiberCo.org
58 See http://www.nationallambdarail.org/

A crucial characteristic of NLR is the capability to support both experimental and production networks at the same time, with 50 percent of its resources allocated to network research. The Portland - Seattle and Chicago - Pittsburgh links are up now. The West Coast segment (San Diego - Seattle) will be up in March-April. The entire first phase of NLR (San Diego to Los Angeles to Sunnyvale to Seattle to Denver to Chicago to Pittsburgh to Washington to Raleigh to Atlanta to Jacksonville) will be operational by the end of August 2004. Planning is being finalized for the second phase, the remainder of the nationwide backbone. The NLR infrastructure is shown in Figure 34.

Figure 34 National LambdaRail: Planned layout of the optical fibre route (from Level(3)) and Cisco optical multiplexers.

Without the availability of dark fibers, the NLR infrastructure would probably never have been deployed. As illustrated in Figure 35, the role of dark fiber is vital in linking the NLR optical infrastructure to campuses and laboratories.

Figure 35 NLR’s ‘Virtuous Circles’ and the Vital Role of Dark Fiber

7.2. Advanced Optical Networking Projects and Initiatives

The DataTAG59 project has deployed a large-scale intercontinental Grid testbed involving the European DataGrid project, several national projects in Europe, and related Grid projects in the USA. The transatlantic DataTAG testbed is one of the largest 10 GigE testbeds ever demonstrated, in addition to being the first transatlantic testbed with native 10 GigE access capabilities. The project explores forefront research topics such as the design and implementation of advanced network services for guaranteed traffic delivery, transport protocol optimization, efficiency and reliability of network resource utilization, user-perceived application performance, and middleware interoperability in multi-domain scenarios. One of the major achievements of the project is the establishment of a new Internet Land Speed Record, with a single TCP stream of 5.64 Gigabits/sec between Geneva and Los Angeles sustained for more than one hour.
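A quick back-of-envelope figure helps put the record in perspective: at 5.64 Gb/s sustained, the volume moved in a single hour amounts to a few terabytes. The sketch below takes one hour as a lower bound for the quoted duration.

# Data volume implied by the DataTAG record: a single TCP stream at
# 5.64 Gb/s (from the text) sustained for at least one hour.
rate_gbps = 5.64
duration_s = 3600  # "more than one hour"; one hour taken as a lower bound
terabytes = rate_gbps * 1e9 / 8 * duration_s / 1e12
print(f"At least {terabytes:.1f} TB transferred in one hour")  # about 2.5 TB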

NetherLight60 is an advanced optical infrastructure in the Netherlands, and a proving ground for network services optimized for high-performance applications. Operational since January 2002, NetherLight is a multiple Gigabit Ethernet switching facility for high-performance access to participating networks, and will ultimately become a pure wavelength switching facility for wavelength circuits as optical technologies and their control planes mature. NetherLight has become a major hub in GLIF, the Global Lambda Integrated Facility for Research and Education, shown in Figure 36. GLIF is a world-scale Lambda-based laboratory for application and middleware development on the emerging “LambdaGrid”, where Grid applications ride on dynamically configured networks based on optical wavelengths. The GLIF community shares the vision of building a new network paradigm, which uses the Lambda network to support data transport for the most demanding e-Science applications, concurrent with the normal aggregated best-effort Internet for commodity traffic.

Figure 36 GLIF - Global Lambda Integrated Facility, 1Q2004

59 See http://www.datatag.org
60 See http://www.surfnet.nl/innovatie/netherlight/

UltraNet61 is an experimental research testbed (Figure 37) funded by the DOE Office of Science to develop networks with unprecedented capabilities, to support distributed large-scale science applications that will drive extreme networking, in terms of sheer throughput as well as other capabilities.

Figure 37 Ultranet Testbed

UKLight62 will enable the UK to join several other leading networks in the world creating an international experimental testbed for optical networking. UKLight will bring together leading-edge applications, Internet engineering for the future, and optical communications engineering, and enable UK researchers to join the growing international consortium which currently spans Europe and North America. These include STARLIGHT in the USA, SURFnet in the Netherlands (NetherLIGHT), CANARIE (Canadian academic network), CERN in Geneva, and NorthernLIGHT bringing the Nordic countries onboard. UKLight will connect UK national research backbone JANET to the testbed and also provide access for UK researchers to the Internet2 facilities in the USA via StarLIGHT.

Two important research programs called GRANDE and GARDEN are being submitted to the European Commission. Both projects are at the proposal stage, and have very strong support from leading equipment manufacturers, leading service providers and many national research networks in Europe.

o GRANDE (Grid Aware Network Development in Europe) is a project that proposes to enrich the next generation research infrastructures (GEANT successors) with grid concepts, services and usage scenarios. The proposal’s goal is to create a pan-European heterogeneous testbed combining traditional IP networks with advanced circuit-switched networks (gigabit switched or lambda switched).

o GARDEN (GRIDs and Advanced Research Development Environment and Network) proposes to build an intercontinental IP-controlled optical network testbed, also based on future research infrastructures. The project’s goal is to develop new protocols, architectures and AAA models, along with new GRID developments.

61 See http://www.csm.ornl.gov/ultranet/
62 See http://www.ja.net/development/UKLight/

The UltraLight63 concept is the first of a new class of integrated information systems that will support the decades-long research program at the Large Hadron Collider (LHC) and other next generation sciences. Physicists at the LHC face unprecedented challenges: (1) massive, globally distributed datasets growing to the 100 petabyte level by 2010; (2) petaflops of distributed computing; (3) collaborative data analysis by global communities of thousands of scientists. In response to these challenges, the Grid-based infrastructures developed by the LHC collaborations provide massive computing and storage resources, but are limited by their treatment of the network as an external, passive, and largely unmanaged resource. UltraLight will overcome these limitations by monitoring, managing and optimizing the use of the network in real-time, using a distributed set of intelligent global services.
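To relate such dataset sizes to the link capacities discussed in this report, the sketch below estimates ideal (protocol-overhead-free) transfer times for a few sample sizes and line rates; the sample sizes are illustrative and are not taken from a specific LHC computing model.

# Illustrative only: ideal transfer times for moving physics data samples
# over links of various speeds (no protocol or disk overheads included).

def transfer_hours(size_terabytes: float, rate_gbps: float) -> float:
    """Hours needed to move size_terabytes at a sustained rate_gbps."""
    seconds = size_terabytes * 1e12 * 8 / (rate_gbps * 1e9)
    return seconds / 3600.0

for size_tb in (0.1, 10.0, 1000.0):        # a 100 GB sample, 10 TB, 1 PB
    for rate_gbps in (0.622, 2.5, 10.0):   # OC12, OC48, OC192 / 10 GigE
        print(f"{size_tb:7.1f} TB at {rate_gbps:6.3f} Gb/s: "
              f"{transfer_hours(size_tb, rate_gbps):8.1f} hours")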

Figure 38 Initial Planned Ultralight Implementation

63 See http://ultralight.caltech.edu/.

TeraGrid64 is a multi-year effort to build and deploy the world's largest, most comprehensive, distributed infrastructure for open scientific research (See Figure 39). By 2004, the TeraGrid will include 20 teraflops of computing power distributed at five sites, facilities capable of managing and storing nearly 1 petabyte of data, high-resolution visualization environments, and toolkits for grid computing. Four new TeraGrid sites, announced in September 2003, will add more scientific instruments, large datasets, and additional computing power and storage capacity to the system. All the components will be tightly integrated and connected through a network that operates at 40 gigabits per second.

Figure 39 The TeraGrid. Five centers at NCSA, Argonne, the San Diego and Pittsburgh Supercomputer Centers and Caltech are interconnected by a network of three to four 10 Gbps wavelengths.

The Hybrid Optical/Packet Infrastructure (HOPI) initiative is led by Internet2 and brings together a variety of people from the high speed network community. The initiative will examine new infrastructures for the future and can be viewed as a prelude to the process for the 3rd generation Internet2 network architecture. The design team will focus on both architecture and implementation. It will examine a hybrid of shared IP packet switching and dynamically provisioned optical lambdas. The eventual hybrid will require a rich set of wide-area lambdas and the appropriate switching mechanisms to support high capacity and dynamic provisioning. The immediate goals are the creation and implementation of a testbed within the next year, in coordination with other similar projects. The project will rely on the Abilene MPLS capabilities and dedicated waves from NLR.

In planning the next generation Internet2 networking infrastructure, we anticipate the design and deployment of a new type of hybrid network, one combining a high-performance, shared packet-based infrastructure with dynamically provisioned optical lambdas and other ‘circuits’ offering more deterministic performance characteristics. We use the term HOPI (for hybrid optical and packet infrastructure) to denote both the effort to plan this future hybrid and the set of testbed facilities that we will deploy collaboratively with our members to test various aspects of candidate hybrid designs.

64 See http://www.teragrid.org/

The eventual hybrid environment will require a rich set of wide-area lambdas connecting both IP routers and lambda switches capable of very high capacity and dynamic provisioning, all at the national backbone level. Similarly, we are working now to facilitate the creation of regional optical networks (RONs) through the acquisition of dark fiber assets and the deployment of optronics to deliver lambda-based capabilities. Finally, we anticipate that the planned hybrid infrastructure will require new classes of campus networks capable of delivering the various options to high-performance desktops and computational clusters.

To enable the testing of various hybrid approaches, we are planning the initial HOPI testbed, making use of resources from Abilene, the emerging set of RONs, and the new National LambdaRail (NLR) infrastructure. A HOPI design team, composed of engineers from Internet2 member universities and laboratories, is now at work.

As our ideas, testing, and infrastructure planning evolve, we will work closely with the high energy physics community to ensure that the most demanding needs of our members (e.g., LHC) are met. We expect that the resulting hybrid packet and optical infrastructure will play a key role in a scalable and sustainable solution to the future needs of this community.

8. HENP Network Status: “Remote Regions”

Outside of North America, Western Europe, Australia, Korea and Japan, network connections are slower65, and often much slower. Link prices in many countries have remained high, and affordable bandwidths are correspondingly low. This is caused by one or more of the following reasons:

 Lack of competition
 Lack of local or regional infrastructure
 Government policies restricting competition or the installation of new facilities, or fixing price structures.

Notable examples of countries in need of improvement include China, India, Romania and Pakistan, as well as some other countries where HEP groups are planning “Tier2” Regional Centers. Brazil (UERJ, Rio) is planning a Tier1 center for LHC (CMS) and other programs. The IHEP Beijing link to KEK (and hence to the US) remains at 128 kbps, for example. These are clear areas where ICFA-SCIC and ICFA as a whole can help.

8.1. Eastern Europe

The network infrastructure for education and research in Romania is provided by the Romanian Higher Education Data Network (RoEduNet). Important progress has been made over the last two years. As shown in Figure 40, the backbone has two 155 Mbps links and access to GEANT at 622 Mbps. The current plan is to connect three of the four local centers at 2.5 Gbps and then deploy a 10 Gbps network infrastructure that may be based on dedicated dark fibers.

Figure 40 RoEduNet network

65 Also see the Digital Divide Working Group report, and the ICFA-SCIC meeting notes at http://cern.ch/icfa-scic

8.2. Russia and the Republics of the former Soviet Union

In Russia there is significant progress66 in the areas of the main scientific centers (Moscow, St. Petersburg and Novosibirsk), where a number of 1 Gbps links have been installed within the metropolitan areas. There is a Russian backbone of 100 Mbps that is being upgraded in Moscow, and links of 155 Mbps to St. Petersburg and 45 Mbps to Novosibirsk. External connections include a 155 Mbps link to Nordunet, and a 155 Mbps link to Starlight (the “FASTNet” project). There is also a 155 Mbps link to Stockholm to provide connectivity to the commodity Internet. A Russia-GEANT link (155 Mbps) is being implemented, to start in February 2003. Within this link, connectivity will be provided for Grid and other planned programmatic use, including 50-70 Mbps planned to connect to CERN and European HEP centers.

Outside of these main scientific areas in Russia, network capabilities remain modest. The network bandwidth in the other republics of the former Soviet Union, and the speed of their international connections, also remain low. This statement also applies to international connections to Novosibirsk: the link from KEK to the Budker Institute (BINP) was recently upgraded, but only from 128 to 512 kbps.

DESY is assisting the Newly Independent States of the Southern Caucasus (Armenia, Azerbaijan and Georgia) and Central Asia (Kazakhstan, the Kyrgyz Republic, Tajikistan, Turkmenistan and Uzbekistan) with satellite connections. These countries, shown in Figure 41, are located on the fringe of the European Internet arena and will not be within reach of affordable optical fibre connections within the next few years. The project, called Silk67, provides connectivity to the GEANT backbone via satellite links. The project started with a transmit-plus-receive bandwidth of 2 Mbps in September 2002 and increased it to 10 Mbps in December 2003. From January 2004, the bandwidth will increase linearly by 500 kbps/month until June 2005. This will lead to a maximum transmit-plus-receive bandwidth of about 24 Mbps by the end of the period.

66 Further details on the status, progress and Digital Divide problems in Russia are given in the report “Digital Divide and Connectivity for HEP in Russia”, by V.A. Ilyin et al., accompanying this report.
67 See www.silkproject.org

Figure 41 Countries participating in the Silk project

8.3. Asia and Pacific

There has been some progress in Asia, thanks to the links of the Asia Pacific Academic Network (APAN), shown in Figure 42, and the help of KEK. The bandwidths of these links are summarized in Table 2. Most of the links within Southeast Asia are in the range from 0.5 to 155 Mbps. A notable upgrade of the link between Japan and Korea, from 8 Mbps to 1 Gbps, was completed in January 2003. The main example of progress in this region for 2003 is the upgrade of the Australia-US bandwidth from 310 Mbps to 20 Gbps.

Figure 42 APAN links in Southeast Asia, January 2003

Table 2 Bandwidth of APAN links in Mbps (December 2003).

8.4. South America

The Brazilian Research and Education network RNP2 is one of the most advanced R&E networks in South America. It has two international links. One of them, of 155 Mbps, is used for Internet production traffic. The other is a 45 Mbps link that is connected to Internet2 through the AMPATH68 International Exchange Point in Miami, Florida, and is used only for interconnection and cooperation among academic networks. Soon, the backbone will interconnect all Federal Institutions of Higher Education and Research Units in the Ministry of Science and Technology (MCT). In parallel, a new project, called the Giga project, has been started. It aims to deploy a national optical network by 2007, in which data will flow via the IP protocol directly over DWDM systems at Gbps speeds.

CLARA (Cooperación Latino-Americana de Redes Avanzadas) is the recently created association of Latin American NRENs. The objective of CLARA is to promote co-operation among the Latin American NRENs to foster scientific and technological development. Its tasks will include promotion and project dissemination in Latin America, to ensure the long-term sustainability of the Latin American research network and its interconnection to the US and Europe. The proposed topology for the CLARA backbone is shown in Figure 43.

Figure 43 Proposed topology for Clara’s backbone

68 AMPATH, “Pathway of the Americas”, see http://www.ampath.fiu.edu/

The mission of AMPATH is to serve as the pathway for Research and Education Networking in the Americas and to the World. Active since 2001, the AMPATH project allows participating countries to contribute to the research and development of applications for the advancement of Internet technologies. In January 2003, the connection to Internet2's Abilene network was upgraded to an OC12c (622 Mbps). The AMPATH network is shown in Figure 44.

Figure 44 AMPATH “Pathway of the Americas”, showing the links between the US and Latin America.

The ALICE69 project was set up in 2003 to develop an IP research network infrastructure within the Latin American region and towards Europe. It addresses the infrastructure objectives of the European Commission’s @LIS program, which aims to promote the Information Society and fight the digital divide throughout Latin America. In Latin America, intra-regional connectivity is currently not developed. There is also no organised connectivity between the pan-European research network, GÉANT, and the National Research and Education Networks in Latin America. ALICE seeks to address these limitations. It also aims to foster research and education collaborations, both within Latin America and between Latin America and Europe.

69 See http://www.dante.net/server/show/nav.009

9. The Growth of Network Requirements in 2003

The estimates of future HENP domestic and transatlantic network requirements have increased rapidly over the last three years. This is documented, for example, in the October 2001 report of the DOE/NSF Transatlantic Network Working Group (TAN WG)70. The increased requirements are driven by the rapid advance of affordable network technology and networked applications, and especially by the emergence of “Data Grids”71, which are foreseen to meet the needs of worldwide HENP collaborations. The LHC “Data Grid hierarchy” example (shown in Figure 13) illustrates that the requirements for each LHC experiment are expected to reach 2.5 Gbps by approximately 2005 at the national Tier1 centers at FNAL and BNL, and 2.5 Gbps at the regional Tier2 centers. Taken together with other programmatic needs for links to DESY, IN2P3 and INFN, this estimate corresponded to an aggregate transatlantic bandwidth requirement rising from 3 Gbps in 2002 to 23 Gbps in 2006.

As discussed in the following sections, it was understood in 2002-3 that the network bandwidths shown in Figure 13 correspond to a conservative “baseline” estimate of the needs, formulated using an evolutionary view of network technologies and a bottom-up, static and hence conservative view of the Computing Model needed to support the LHC experiments.

Figure 13 The LHC Data Grid Hierarchy. The diagram (not fully reproducible in text form) shows the tiered model: the experiment delivers data to the online system at ~PByte/sec rates, and the online system writes to the Tier0+1 center at CERN (700k SI95, ~1 PB disk, tape robot) at ~100-400 MBytes/sec; Tier1 national centers such as FNAL (200k SI95, 600 TB), IN2P3, INFN and RAL are connected at ~2.5 Gbps; Tier2 regional centers are also connected at ~2.5 Gbps; Tier3 institute servers (~0.25 TIPS) and their physics data caches connect at 0.1-1 Gbps and serve Tier4 workstations, with each institute having ~10 physicists working on one or more analysis “channels”. The CERN/Outside resource ratio is ~1:2, and Tier0/(Tier1)/(Tier2) resources are ~1:1:1.
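
To make the tiered architecture concrete, the short sketch below (a minimal illustration in Python; the tier names and nominal link speeds are taken from Figure 13 and the text above, while the data structure and the aggregation helper are hypothetical) encodes the hierarchy in code.

# Illustrative encoding of the LHC Data Grid hierarchy of Figure 13.
# Nominal per-link bandwidths (Gbps) are those quoted in the figure and text.
TIERS = [
    {"tier": "Tier0+1", "site": "CERN (700k SI95, ~1 PB disk, tape robot)",
     "input": "~100-400 MBytes/sec from the online system"},
    {"tier": "Tier1", "sites": ["FNAL (200k SI95, 600 TB)", "IN2P3", "INFN", "RAL"],
     "link_to_parent_gbps": 2.5},
    {"tier": "Tier2", "sites": ["regional centers"], "link_to_parent_gbps": 2.5},
    {"tier": "Tier3", "sites": ["institute servers (~0.25 TIPS)"],
     "link_to_parent_gbps": (0.1, 1.0)},
    {"tier": "Tier4", "sites": ["physicists' workstations"]},
]

# Hypothetical helper: aggregate CERN egress if four Tier1 links run at 2.5 Gbps.
def cern_tier1_aggregate_gbps(n_links: int = 4, per_link_gbps: float = 2.5) -> float:
    return n_links * per_link_gbps

print(f"Nominal Tier0 -> Tier1 aggregate: {cern_tier1_aggregate_gbps()} Gbps")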

70 The report of this committee, commissioned by the US DOE and NSF and co-chaired by H. Newman (Caltech) and L. Price (Argonne Nat’l Lab) may be found at http://gate.hep.anl.gov/lprice/TAN. For comparison, the May 1998 ICFA Network Task Force Requirements report may be found at http://l3www.cern.ch/~newman/icfareq98.html. 71 Data Grids for high energy and astrophysics are currently under development by the Particle Physics Data Grid (PPDG; see http://ppdg.net), Grid Physics Network (GriPhyN; see http://www.griphyn.org), iVDGL (www.ivdgl.org) and the EU Data Grid (see http://www.eu-datagrid.org/ ) Projects, as well as several national Grid projects in Europe and Japan.


One of the surprising results of the TAN WG report, shown in Table 2, is that the present-generation experiments (BaBar, D0 and CDF) have transatlantic network bandwidth needs that equal or exceed the levels presently estimated by the LHC experiments CMS and ATLAS. This is ascribed to the fact that the experiments now in operation are distributing (BaBar), or plan to distribute (D0; CDF in Run 2b), substantial portions of their event data to regional centers overseas, while the LHC experiments’ plans through 2001 foresaw only limited data distribution.

                             2001     2002     2003     2004     2005     2006
CMS                           100      200      300      600      800     2500
ATLAS                          50      100      300      600      800     2500
BABAR                         300      600     1100     1600     2300     3000
CDF                           100      300      400     2000     3000     6000
Dzero                         400     1600     2400     3200     6400     8000
BTeV                           20       40      100      200      300      500
DESY                          100      180      210      240      270      300
Total BW Required            1070     3020     4810     8440    13870    22800
US-CERN BW Installed
  or Planned              155-310      622     1250     2500     5000    10000

Table 2. Installed Transatlantic Bandwidth Requirements (Mbps)
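
As a cross-check of the aggregate figures, the short sketch below (Python, purely illustrative) sums the per-experiment rows of Table 2 for each year; the column sums reproduce the “Total BW Required” row, rising from about 1 Gbps in 2001 to roughly 23 Gbps in 2006.

# Per-experiment transatlantic bandwidth requirements in Mbps, from Table 2.
requirements = {
    "CMS":   [100, 200, 300, 600, 800, 2500],
    "ATLAS": [50, 100, 300, 600, 800, 2500],
    "BaBar": [300, 600, 1100, 1600, 2300, 3000],
    "CDF":   [100, 300, 400, 2000, 3000, 6000],
    "D0":    [400, 1600, 2400, 3200, 6400, 8000],
    "BTeV":  [20, 40, 100, 200, 300, 500],
    "DESY":  [100, 180, 210, 240, 270, 300],
}
years = [2001, 2002, 2003, 2004, 2005, 2006]

# Sum the columns: this reproduces the "Total BW Required" row of Table 2
# (1070, 3020, 4810, 8440, 13870, 22800 Mbps).
totals = [sum(column) for column in zip(*requirements.values())]
for year, total in zip(years, totals):
    print(f"{year}: {total} Mbps (~{total/1000:.1f} Gbps)")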

The corresponding bandwidth requirements at the US HEP labs and on the principal links across the Atlantic are summarized72 in Table 3.

            2001     2002     2003     2004     2005     2006
SLAC         622     1244     1244     2500     2500     5000
BNL          622     1244     1244     2500     2500     5000
FNAL         622     2500     5000    10000    10000    20000
US-CERN      310      622     1244     2500     5000    10000
US-DESY      155      310      310      310      310      622

Table 3. Summary of Bandwidth Requirements at HEP Labs and on Main Transoceanic Links (Mbps)

The estimates above for the LHC experiments are now known to be very conservative, and need to be updated. They do not accommodate some later, larger estimates of data volumes and/or data acquisition rates (e.g. for ATLAS), nor do they account for the more pervasive and persistent use of high resolution/high frame rate videoconferencing and other collaborative tools expected in the future. They also ignore the needs of individuals and small groups working with institute-based workgroup servers (Tier3) and desktops (Tier4) for rapid turnaround when extracting and transporting small (up to ~100 GByte) data samples on demand. The bandwidth estimates also do not accommodate the new network requirements arising from the deployment of future dynamic Grid systems that include caching, co-scheduling of data and compute resources, and “Virtual Data” operations (see www.ivdgl.org) that can lead to significant automated data movements.

72 The entries in the table correspond to standard commercial bandwidth offerings. OC3 = 155 Mbps, OC12 = 622 Mbps, OC48 = 2.5 Gbps and OC192 = 10 Gbps.

In June 2003, the U.S. Department of Energy (DOE) established a roadmap for the networks and collaborative tools that the Science Networking and Services environment requires for DOE science fields, including astronomy/astrophysics, chemistry, climate, environmental and molecular sciences, fusion, materials science, nuclear physics, and particle physics. This roadmap, shown in Table 3, responds to the fact that it has become increasingly clear that the networks provided for DOE science in the past will not be adequate to keep that science competitive in the future. If implemented and followed during the next five years, the roadmap will solve that problem.


Table 3 DOE Science Networking Roadmap: “DOE Science Networking Challenge: Roadmap to 2008” (the detailed roadmap table is not reproduced here in text form).

10. Growth of HENP Network Usage in 2001-2004

The levels of usage on some of the main links have been increasing rapidly, tracking the increased bandwidth on the main network backbones and transoceanic links.

In 1999-2000, the speeds of the largest long-distance data transfers for HENP were, with very few exceptions, limited to just a few Mbps, because of the link bandwidths and/or TCP protocol issues. In 2001, following link upgrades and tuning of the network protocol, large-scale data transfers on the US-CERN link in the range of 20-100 Mbps were made possible for the first time, and became increasingly common for BaBar, CMS and ATLAS. Data transfer volumes of 1 Terabyte per day, equivalent to roughly 100 Mbps used around the clock, were observed by the Fall of 2001 for BaBar. These high speed transfers were made possible by the quality of the links, which in many cases are nearly free of packet loss, combined with the modification of the default parameter settings of TCP and the use of parallel data streams73.
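
To illustrate the kind of TCP tuning referred to above, the sketch below (Python; the 622 Mbps rate and 120 ms round-trip time are illustrative example values, not measurements from this report) computes the bandwidth-delay product that the TCP window must cover on a long path, shows why N parallel streams each need only about 1/N of that window, and reproduces the rate-to-volume conversion used in the text (roughly 100 Mbps sustained around the clock moves about 1 TByte per day).

# Back-of-the-envelope TCP tuning arithmetic for long, high-bandwidth paths.
# All link parameters below are illustrative examples, not measurements.

def bandwidth_delay_product_bytes(rate_bps: float, rtt_s: float) -> float:
    """Bytes 'in flight' needed to keep a path of the given rate and RTT full;
    the TCP window (socket buffer) must be at least this large."""
    return rate_bps * rtt_s / 8.0

def daily_volume_tbytes(rate_mbps: float) -> float:
    """TBytes per day moved by a transfer sustained around the clock."""
    return rate_mbps * 1e6 * 86400 / 8 / 1e12

# Example: a 622 Mbps transatlantic path with a 120 ms round-trip time.
rate_bps, rtt = 622e6, 0.120
bdp = bandwidth_delay_product_bytes(rate_bps, rtt)
print(f"Required window: {bdp/1e6:.1f} MB (vs. the traditional 64 KB TCP window limit)")

# With N parallel streams, each stream only needs ~BDP/N of window, which is
# one reason parallel streams helped before large windows were routinely tuned.
for n in (1, 4, 16):
    print(f"  {n:2d} streams -> {bdp/n/1e6:.2f} MB per stream")

print(f"100 Mbps around the clock ~ {daily_volume_tbytes(100):.2f} TB/day")
print(f"400 Mbps around the clock ~ {daily_volume_tbytes(400):.2f} TB/day")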

In 2002, with many of the national and continental network backbones for research and education, and the major transoceanic links used by our field, reaching the 2.5-10 Gbps range74, data volumes transferred were frequently 1 TB per day and higher for BaBar, and at a similar level for CMS during “data challenges”. The recent upgrades of the ESNet (www.es.net) links to SLAC and FNAL to OC12, together with the backbone and transatlantic link upgrades, have led to transfers of several hundred Mbps being increasingly common at the time of this report.

The growth of network traffic is illustrated in Figure 14, where SLAC’s network traffic for 2002 over Internet2 (dominated by outbound traffic for BaBar to IN2P3, RAL, and INFN) is shown. The current bandwidth used by BaBar, including ESNet traffic, is typically in the 400 Mbps range (i.e. ~4 TB/day equivalent) and is expected to rise as ESNet upgrades are put into service early in 200375. The long term trends, and future projections for network traffic, associated with distributed production processing of events for BaBar, are shown in Figure 15.

73 See http://www-iepm.slac.stanford.edu/monitoring/bulk/ and http://www.datatag.org
74 Able to carry a theoretical maximum, at 100% efficiency, of roughly 25-100 TB/day.
75 See http://www1.es.net/pub/maps/current.jpg for the current ESNet map.


Figure 14 The growth of SLAC network traffic in 2002

Figure 15 Long term trends and projections for SLAC’s network requirements for offsite production traffic


11. HEP Challenges in Information Technology

The growth in HENP bandwidth requirements and network usage, and the associated need for advanced network R&D in our field, are driven by the fact that HENP’s current generation of major experiments at SLAC, KEK, BNL and Fermilab, and the next generation of LHC experiments, face unprecedented challenges in data access, processing and distribution, and in collaboration across national and international networks. These challenges include:

Providing rapid access to data subsets drawn from massive data stores, rising from Petabytes in 2003 to ~100 Petabytes by 2007, and ~1 Exabyte (10^18 bytes) by approximately 2012 to 2015.

Providing secure, efficient and transparent managed access to heterogeneous worldwide-distributed computing and data handling resources, across an ensemble of networks of varying capability and reliability

Providing the collaborative infrastructure and tools that will make it possible for physicists in all world regions to contribute effectively to the analysis and the physics results, including from their home institutions. Once the infrastructure is in place, a new “culture of collaboration”, strongly supported by the managements of the HENP laboratories and the major experiments, will be required to make it possible to take part in the principal lines of analysis from locations remote from the site of the experiment.

Integrating all of the above infrastructures to produce the first Grid-based, managed distributed systems serving “virtual organizations” on a global scale.

12. Progress in Network R&D

In order to help meet its present and future needs for reliable, high performance networks, our community has engaged in network R&D over the last few years. In 2003, we made substantial progress in the development and use of networks in the multi-Gbps speed range, and in the production use of data transfers at speeds approaching 1 Gbps.

Extensive tests of the maximum attainable throughput have continued in the IEPM “Bandwidth to the World” project76 at SLAC. HENP has also been involved in recent modifications and fundamental developments of the basic Transmission Control Protocol (TCP) that is used for 90% of the traffic on the Internet. This was made possible through the use of advanced network testbeds across the US and Canada77, across the Atlantic and Pacific, and across Europe78. Transfers at 5.5 Gbps79 were demonstrated over distances of up to 10,000 km in 2003. Progress made over the past year is summarized in Table 4, which shows the history of the Internet2 Land Speed Record (LSR)80 in the single TCP stream class. The LSR award honors the highest TCP throughput over the longest distance (the product of the throughput with the terrestrial distance between the two end-hosts) achieved with a single TCP stream.

76 See http://www-iepm.slac.stanford.edu/bw
77 Many demonstrations of advanced network and Grid capabilities took place, for example, at the SuperComputing 2002 Conference, notable for the increase in the scale of HENP participation and prominence, relative to 2001. See http://www.sc-conference.org/sc2002/
78 See http://www.datatag.org
79 HENP currently holds the “Internet2 Land Speed Record”, see http://lsr.internet2.edu/history.html
80 See http://lsr.internet2.edu/


We expect transfers on this scale to be in production use between Fermilab and CERN later this year, taking advantage of the 10 Gbps bandwidth that will be available by mid-2004.
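
For reference, the sketch below (Python, illustrative; the 5.5 Gbps and ~10,000 km figures are taken from the text above, and the distance is treated as the terrestrial end-to-end distance) evaluates the LSR metric, i.e. the throughput-distance product, in the terabit-meters/second units used in Table 4.

# Internet2 Land Speed Record metric: single-stream TCP throughput multiplied
# by the terrestrial distance between the end hosts, in terabit-meters/second.
# The example numbers (5.5 Gbps over ~10,000 km) are taken from the text above.

def lsr_metric_tbm_per_s(throughput_gbps: float, distance_km: float) -> float:
    """Throughput x distance, converted to terabit-meters per second."""
    bits_per_second = throughput_gbps * 1e9
    meters = distance_km * 1e3
    return bits_per_second * meters / 1e12  # -> terabit-meters/second

print(f"{lsr_metric_tbm_per_s(5.5, 10_000):,.0f} terabit-meters/second")
# ~55,000 terabit-meters/second, consistent with the scale of the late-2003
# entries in the record history chart (Table 4).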

Table 4 Internet2 Land Speed Record history (in terabit-meters/second). The chart (not reproducible in text form) plots the IPv4 and IPv6 single-stream records set in Mar-00, Apr-02, Sep-02, Oct-02, Nov-02, Feb-03, May-03, Oct-03 and Nov-03, rising from near zero in 2000 to several tens of thousands of terabit-meters/second (the vertical scale extends to 70,000) by late 2003.

In November 2003, a team of computer scientists and network engineers from Caltech, SLAC, LANL and CERN joined forces at the SuperComputing 2003 Bandwidth Challenge and captured the “Sustained Bandwidth Award” for the demonstration of “Distributed Particle Physics Analysis Using Ultra-High Speed TCP on the Grid”, achieving a record bandwidth of 23.2 Gigabits/sec (23.2 billion bits per second). The data, generated on the SC2003 show floor in Phoenix, was sent to sites in four countries (USA, Switzerland, the Netherlands and Japan) on three continents. The demonstration served to preview future Grid systems on a global scale, in which communities of hundreds to thousands of scientists around the world would be able to access, process and analyze data samples of up to Terabytes, drawn from data stores thousands of times larger.

13. Upcoming Advances in Network Technologies

In recent years, optical component technology has rapidly evolved to support the multiplexing and amplification of ever increasing digital modulation rates, including 10 Gbps modulation. Therefore, even though 10 Gigabit Ethernet (10 GbE) is a relatively new technology, it is no surprise that existing optical devices and 10 Gigabit Ethernet optical interfaces can be successfully married to support 10 Gbps transmission speeds over Long Haul or Extended Long Haul optical connections. It is now possible to transmit 10 GbE traffic over 2000 km without any O-E-O81 regeneration. It is therefore possible to implement a backbone optical network carrying 10 GbE traffic without the need for an expensive SONET/SDH infrastructure.

81 Optical – Electrical – Optical regeneration


The NLR infrastructure described in Section 7.1.7 relies on the new Cisco 15808 DWDM long haul multiplexers, which can multiplex up to 80 wavelengths (lambdas) onto a single fiber. For very long haul connections, typically trans-oceanic connections, SONET technology will remain in place for several years, because the cost of replacing the current equipment is not justified. However, the way we use SONET infrastructures may change. The new 10 GbE WAN-PHY standard is SONET-friendly and defines how to carry Ethernet frames across a SONET network. In the future, the use of 10 GbE may be generalized to all parts of the backbones, replacing the expensive Packet over SONET (POS) technology:

o 10 GbE WAN-PHY for “very” long haul connections (with O-E-O regeneration)
o 10 GbE LAN-PHY for regional, metro and local area networks (only O-O amplifiers).

In addition to the generalization of the 10 GE technology in wide and local area networks, there are now strong prospects for breakthrough advances in a wide range of network technologies within the next one to five years, including:

Optical fiber infrastructure: switches, routers and optical multiplexers supporting multiple 10 Gigabit/sec (Gbps) and 40 Gbps wavelengths, and possibly higher speeds; a greater number of wavelengths on a fiber; and possibly dynamic path building82

New versions of the basic Transmission Control Protocol (TCP), and/or other protocols, that provide stable and efficient data transport at speeds at and above 10 Gbps. Interesting developments in this area are FAST TCP83, GridDT84, UDT85, HSTCP86, Bic-TCP87, H-TCP88 and TCP-Westwood89 (see the sketch following this list for the standard-TCP recovery-time arithmetic that motivates this work)

Mobile handheld devices with the I/O, wireless network speed, and computing capability to support persistent, ubiquitous data access and collaborative work

Generalization of 10 Gbps Ethernet network interfaces on servers90 and eventually PCs; Ethernet at 40 Gbps or 100 Gbps
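
As background to the protocol work listed above, the sketch below (Python; the 10 Gbps rate, 100 ms round-trip time and 1500-byte segment size are illustrative assumptions, not measurements from this report) estimates how long standard additive-increase TCP needs to recover its full rate after a single packet loss on a long high-speed path; the result, on the order of an hour, is the basic motivation for protocols such as those listed above.

# Why standard TCP struggles at 10 Gbps on long paths: after a loss, the
# congestion window is halved and then grows by about one segment per RTT,
# so recovering to full speed can take tens of minutes.
# Example parameters (illustrative): 10 Gbps path, 100 ms RTT, 1500-byte MSS.

def recovery_time_s(rate_bps: float, rtt_s: float, mss_bytes: int = 1500) -> float:
    """Approximate time for standard (AIMD) TCP to regain full rate after one loss."""
    window_segments = rate_bps * rtt_s / (mss_bytes * 8)  # segments in flight at full rate
    rtts_to_recover = window_segments / 2                 # +1 segment per RTT, from W/2 back to W
    return rtts_to_recover * rtt_s

t = recovery_time_s(10e9, 0.100)
print(f"~{t:.0f} s ({t/60:.0f} minutes) to recover after a single packet loss")
# Roughly an hour at 10 Gbps with a 100 ms RTT, which is why the protocols
# listed above change the congestion-control response.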

An abundance of new projects now being started, such as HOPI, CLIF, NetherLight, UKLight, Ultranet and Ultralight, should design and propose, within the next few years, new mechanisms to support and manage high-capacity shared IP packet switching and dynamically provisioned optical lambdas.

14. Meeting the challenge: HENP Networks in 2005-2010; Petabyte-Scale Grids with Terabyte Transactions

Given the continued rapid decline of network prices per unit bandwidth, and the technology developments summarized above, a shift to a more “dynamic” view of the role of networks began to emerge during the last year, triggered in part by planning for and initial work on a “Grid-enabled Analysis Environment91”.

82 This would allow a combination of the circuit-switched and packet-switched network paradigms, in ways yet to be investigated and developed. See for example “Optical BGP Networks” by W. St. Arnaud et al., http://www.canarie.ca/canet4/library/c4design/opticalbgpnetworks.pdf
83 http://netlab.caltech.edu/FAST/
84 http://sravot.home.cern.ch/sravot/GridDT/GridDT.htm
85 UDT is a UDP-based Data Transport Protocol. See http://www.rgrossman.com/sabul.htm.
86 See http://www.icir.org/floyd/hstcp.html.
87 http://www.csc.ncsu.edu/faculty/rhee/export/bitcp/index.htm.
88 http://icfamon.dl.ac.uk/papers/DataTAG-WP2/reports/task1/20031125-Leith.pdf.
89 http://www-ictserv.poliba.it/mascolo/tcp%20westwoood.htm.
90 Currently available beta-test units, in very limited supply, have recently been provided to a Los Alamos group by Intel.


This has led to a more dynamic “system” view of the network (and the Grid system built on top of it), in which physicists at remote locations could conceivably, a few years from now, extract Terabyte-sized subsets of the data (drawn from multi-Petabyte data stores) on demand, and if needed rapidly deliver this data to their home sites.

If it becomes economically feasible to deliver this data in a short “transaction” lasting minutes, rather than hours, this would enable remote computing resources to be used more effectively, while making physics groups remote from the experiment better able to carry out competitive data analyses. Completing these data-intensive transactions in just minutes would increase the likelihood of the transaction being completed successfully, and it would substantially increase the physics groups’ working efficiency. Such short transactions are also necessary to avoid the bottlenecks and fragility of the Grid system that would result if hundreds to thousands of such requests were left pending for long periods, or if a large backlog of requests were permitted to build up over time.

It is important to note that transactions on this scale, while still representing very small fractions of the data, correspond to throughputs across networks of 10 Gbps and up. A 1000 second-long transaction shipping 1 TByte of data corresponds to 8 Gbps of net throughput. Larger transactions, such as shipping 100 TBytes between Tier1 centers in 1000 seconds, would require 0.8 Terabits/sec (comparable to the capacity of a fully instrumented fiber today).
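
The arithmetic behind these figures is summarized in the sketch below (Python, illustrative); the fully-instrumented-fiber comparison assumes the 80 wavelengths of 10 Gbps each quoted for the DWDM equipment in Section 13.

# Throughput needed to complete a data "transaction" of a given size in a
# given time, compared with the capacity of a fully instrumented DWDM fiber
# (80 wavelengths x 10 Gbps each, as quoted for the Cisco 15808 in Section 13).

def required_throughput_gbps(size_tbytes: float, seconds: float) -> float:
    """Sustained network throughput (Gbps) needed to move size_tbytes in the given time."""
    return size_tbytes * 1e12 * 8 / seconds / 1e9

tier2_pull = required_throughput_gbps(1, 1000)     # 1 TByte in 1000 s
tier1_copy = required_throughput_gbps(100, 1000)   # 100 TBytes in 1000 s
fiber_capacity_gbps = 80 * 10                      # 80 lambdas x 10 Gbps

print(f"1 TByte in 1000 s    -> {tier2_pull:.0f} Gbps")        # 8 Gbps
print(f"100 TBytes in 1000 s -> {tier1_copy/1000:.1f} Tbps")   # 0.8 Tbps
print(f"Fully instrumented fiber: ~{fiber_capacity_gbps/1000:.1f} Tbps")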

These considerations, along with the realization that network vendors and academic and research organizations are planning a rapid transition to optical networks with higher network speeds and much higher aggregate link capacities, led to a roadmap for HENP networks in the coming decade, shown in Table 4 below. Using the US-CERN production and research network links92 as an example of the possible evolution of major network links in our field, the roadmap93 shows progressive upgrades every 2-3 years, going from the present 2.5-10 Gbps range to the Tbps range within approximately the next 10 years. The column on the right shows the progression from static bandwidth provisioning (up to today) to the future use of multiple wavelengths on an optical fiber, and the increasingly dynamic provisioning of end-to-end network paths through optical circuit switching.

91 See for example http://pcbunn.cacr.caltech.edu/GAE/GAE.htm and http://www.crossgrid.org
92 Jointly funded by the US DOE and NSF, CERN and the European Union.
93 Source: H. Newman. Also see “Computing and Data Analysis for Future HEP Experiments”, presented by M. Kasemann at the ICHEP02 Conference, Amsterdam (7/02). See http://www.ichep02.nl/index-new.html


Year   Production               Experimental              Remarks
2001   0.155                    0.622-2.5                 SONET/SDH
2002   0.622                    2.5                       SONET/SDH; DWDM; GigE Integ.
2003   2.5                      10                        DWDM; 1 + 10 GigE Integration
2005   10                       2-4 X 10                  Switch; Provisioning
2007   2-4 X 10                 ~10 X 10; 40 Gbps         1st Gen. Grids
2009   ~10 X 10 or 1-2 X 40     ~5 X 40 or ~20-50 X 10    40 Gbps Switching
2011   ~5 X 40 or ~20 X 10      ~25 X 40 or ~100 X 10     2nd Gen Grids; Terabit Networks
2013   ~Terabit                 ~MultiTbps                ~Fill One Fiber

Table 4 A roadmap for major links in HENP networks over the next ten years (bandwidths in Gbps). Future projections follow the trend of bandwidth pricing improvements of the last decade: a factor of ~500 to 1000 in performance at equal cost every 10 years.
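
The caption’s pricing trend can be checked against the table itself: the sketch below (Python, illustrative) applies the stated factor of ~500 to 1000 per decade to the 2003 production figure of 2.5 Gbps and lands in the ~Terabit range quoted for 2013.

# Consistency check of the roadmap caption: a factor of ~500-1000 improvement
# in bandwidth at equal cost every 10 years, applied to the 2003 production
# figure of 2.5 Gbps, reaches the terabit/sec range by about 2013.

start_year, start_gbps = 2003, 2.5
for factor_per_decade in (500, 1000):
    projected_2013_gbps = start_gbps * factor_per_decade
    print(f"x{factor_per_decade}/decade: {projected_2013_gbps/1000:.2f} Tbps by 2013")
    # 1.25 - 2.5 Tbps, consistent with the "~Terabit" entry in Table 4.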

15. Coordination with Other Network Groups and Activities

In addition to the IEPM project mentioned above, there are a number of other groups sharing experience and developing guidelines for best practices aimed at high-performance network use. These include:

The DataTAG project (http://www.datatag.org)
The CHEPREO project
The Internet2 End-to-End Initiative (http://www.internet2.edu/e2e)
The Internet2 HENP Working Group94 (see http://www.internet2.edu/henp)
The Internet2 HOPI Initiative

ICFA-SCIC should therefore coordinate with these activities to achieve synergy and avoid duplication of efforts.

Grid projects such as GriPhyN/iVDGL, PPDG, the EU Data Grid and the LHC Grid Computing Project95 rely heavily on the quality of our networks, and on the availability of reliable, high-performance network service to support the execution of some of the Grid operations. An international Grid Operations Center is planned at Indiana University (see http://igoc.iu.edu/igoc/index.html). There is a Grid High Performance Networking Research Group in the Global Grid Forum (http://www.epm.ornl.gov/ghpn/GHPNHome.html).

94 Chaired by S. McKee (Michigan) and H. Newman (Caltech).
95 See http://ppdg.net, http://www.griphyn.org, http://www.ivdgl.org, http://www.eu-datagrid.org/ and http://lhcgrid.web.cern.ch/LHCgrid/


ICFA-SCIC, through its inter-regional character and the involvement of its members in various Grid projects, has a potentially important role to play in achieving a consistent set of guidelines and methodologies for network usage in support of Grids. Contacts have thus begun with the HEP InterGrid Coordination Board (HICB96), and network issues and the work of ICFA-SCIC are among the subjects discussed97 at HICB meetings.

16. Broader Implications: HENP and the World Summit on the Information Society

HENP’s network requirements, and its R&D on networks and Grid systems, have put it in the spotlight as a leading scientific discipline and application area for the use of current and future state-of-the-art networks, as well as a leading field in the development of new technologies that support worldwide information distribution, sharing and collaboration. In the past year these developments, and work on the Digital Divide (including some of the work by the ICFA SCIC) have been recognized by the world’s governments and international organizations as being vital for the formation of a worldwide “Information Society”98.

We were invited, on behalf of HENP and the Grid projects, to organize a technical session on “The Role of New Technologies in the Formation of the Information Society”99 at the WSIS Pan-European Ministerial Meeting in Bucharest in November 2002 (http://www.wsis-romania.ro/). We then took an active part in the organization of the conference on the Role of Science in the Information Society (RSIS), organized by CERN as a Summit Event of the World Summit on the Information Society (WSIS), which was held in Geneva on 10-12 December 2003. The goal of RSIS was to illuminate science’s continuing role in driving the future of information and communication technologies. We held a Science and Information Society forum during WSIS, with demonstrations showing how advanced networking technology (used daily by the particle physics community) can bring benefits in a variety of fields, including medical diagnostics and imaging, e-learning, distribution of video material, teaching lectures, and distributed conferences and discussions spanning the worldwide community.

The timeline and key documents relevant to the WSIS may be found at the US State Department site http://www.state.gov/e/eb/cip/wsis/. As an example, the “Tokyo Declaration”, issued after the January 2003 WSIS Asia-Pacific Regional Conference, defines a “Shared Vision of the Information Society” that has remarkable synergy with the qualitative needs of our field, as follows:

“The concept of an Information Society is one in which highly-developed ICT [Information and Communication Technology] networks, equitable and ubiquitous

96 Chaired by L. Price (Argonne Nat’l Lab).
97 To be presented by H. Newman.
98 The formation of an Information Society has been a central theme in government agency and diplomatic circles throughout 2002, leading up to the World Summit on the Information Society (WSIS; see http://www.itu.int/wsis/) in Geneva in December 2003 and in Tunis in 2005. The timeline and key documents may be found at the US State Department site http://www.state.gov/e/eb/cip/wsis/. CERN is planning a scientific event shortly before the Geneva meeting.
99 The presentations, opening and concluding remarks from the New Technologies session, as well as the General Report from the Bucharest conference, may be found at http://cil.cern.ch:8080/WSIS


access to information, appropriate content in accessible formats and effective communication can help people achieve their potential…”

It continues with the broader economic and social goals, as follows:

[to] Promote sustainable economic and social development, improve quality of life for all, alleviate poverty and hunger, and facilitate participatory decision processes.

17. Relevance of Meeting These Challenges for Future Networks and Society

Successful construction of network and Grid systems able to serve the global HENP and other scientific communities with data-intensive needs could have wide-ranging effects on research, industrial and commercial operations. Resilient, self-aware systems developed by the HENP community, able to support a large volume of robust Terabyte and larger transactions, and to adapt to a changing workload, could provide a strong foundation for distributed data-intensive research in many fields, as well as for the most demanding business processes of the multinational corporations of the future.

Development of the new generation of systems of this kind, and especially the recent ideas and initial work in the HENP community on “Dynamic Workspaces” and “Grid Enabled Collaboratories”100, could also lead to new modes of interaction between people and “persistent information” in their daily lives. Learning to provide, efficiently manage and absorb this information in a persistent, collaborative environment would have a profound effect on our society and culture.

Providing the high-performance global networks required by our field, as discussed in this report, would enable us to build the needed Grid environments and carry out our scientific mission. But it could also be one of the key factors triggering a widespread transformation to the next generation of global information systems in everyday life.

100 These new terms refer to some of the main concepts in current proposals by US CMS and US ATLAS to the NSF Information Technology Research program in 2003. They refer to integrated collaborative working environments, aimed at effective worldwide data analysis and knowledge sharing, that fully exploit Grid technologies and networks.
