
National Institute of Advanced Industrial Science and Technology

ApGrid: Current Status and Future Direction

Yoshio Tanaka (AIST)

ApGrid: Asia Pacific Partnership for Grid Computing

[World map: Asia, North America, Europe]

• International Collaboration
• Standardization

Possible Applications on the Grid
• Bioinformatics (Rice Genome, etc.)
• Earth Science (weather forecast, fluid prediction, earthquake prediction, etc.)

ApGrid Testbed: an international Grid testbed across the Asia Pacific countries

ApGrid focuses on
• Sharing resources, knowledge, and technologies
• Developing Grid technologies
• Helping others use our technologies to create new applications
• Collaborating on each other's work

http://www.pragma-grid.net

PRAGMA: Pacific Rim Application and Grid Middleware Assembly

History and Future Plan

2000–2002
• Kick-off meeting, Yokohama, Japan (2000)
• Presentation @ GF5, Boston, USA
• Demo @ HPCAsia, Gold Coast, Australia
• Presentation @ SC2001, SC Global Event
• 1st ApGrid Workshop, Tokyo, Japan
• 1st Core Meeting, Phuket, Thailand
• 2nd ApGrid Workshop / Core Meeting, Taipei, Taiwan
• ApGrid/PRAGMA presentation @ APAN, Shanghai, China
• 1st PRAGMA Workshop, San Diego, USA
• 2nd PRAGMA Workshop, Seoul, Korea
• Demo @ SC2002, Baltimore, USA (50 CPUs)

History and Future Plan (cont'd)

2002–2004
• Demo @ iGrid2002, Amsterdam, Netherlands
• 3rd PRAGMA Workshop, Fukuoka, Japan
• Demo @ CCGrid, Tokyo, Japan (100 CPUs)
• 4th PRAGMA Workshop, Melbourne, Australia (200 CPUs)
• Demo & ApGrid informal meeting @ APAC'03, Gold Coast, Australia (250 CPUs)
• 5th PRAGMA Workshop, Hsinchu, Taiwan (300 CPUs)
• Demo @ SC2003, joint demo with TeraGrid, Phoenix, USA (853 CPUs)
• Presentation @ APAN, Hawaii, USA
• 6th PRAGMA Workshop, Beijing, China
• Asia Grid Workshop (HPC Asia), Omiya, Japan
• 7th PRAGMA Workshop, San Diego, USA
• Demo @ SC2004, Pittsburgh, USA

ApGrid/PRAGMA Testbed

Architecture, technology
• Based on GT2
• Allows multiple CAs
• Builds an MDS tree

Grid middleware/tools from the Asia Pacific
• Ninf-G (GridRPC programming; see the client sketch below)
• Nimrod-G (parametric modeling system)
• SCMSWeb (resource monitoring)
• Grid Data Farm (Grid file system), etc.

Status
• 26 organizations (10 countries)
• 27 clusters (889 CPUs)
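To illustrate the programming model behind Ninf-G, here is a minimal sketch of a client written against the standard GridRPC C API, which Ninf-G implements. The configuration file name, server name, remote function name, and argument list are hypothetical placeholders and are not taken from the slides.

```c
/* Minimal GridRPC client sketch.  "client.conf", "example.apgrid.org",
 * "sim/calc", and the argument list are hypothetical placeholders. */
#include <stdio.h>
#include "grpc.h"

int main(void)
{
    grpc_function_handle_t handle;
    double input = 1.0, result = 0.0;

    /* Read the client configuration (server addresses, protocol settings, ...). */
    if (grpc_initialize("client.conf") != GRPC_NO_ERROR) {
        fprintf(stderr, "grpc_initialize failed\n");
        return 1;
    }

    /* Bind a handle to a remote executable registered on a server. */
    if (grpc_function_handle_init(&handle, "example.apgrid.org", "sim/calc")
            != GRPC_NO_ERROR) {
        fprintf(stderr, "grpc_function_handle_init failed\n");
        grpc_finalize();
        return 1;
    }

    /* Synchronous remote procedure call: blocks until the result returns. */
    if (grpc_call(&handle, input, &result) != GRPC_NO_ERROR)
        fprintf(stderr, "grpc_call failed\n");
    else
        printf("result = %f\n", result);

    grpc_function_handle_destruct(&handle);
    grpc_finalize();
    return 0;
}
```

The point of the sketch is the call pattern (initialize, bind a function handle to a remote server, call, clean up); the actual remote functions and servers used on the testbed are application-specific.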

Users, Applications and Experiences

Users
• Participants of ApGrid and/or PRAGMA

Applications
• Scientific computing: Quantum Chemistry, Molecular Energy Calculations, Astronomy, Climate Simulation, Molecular Biology, Structural Biology, Ecology and Environment, SARS Grid, Neuroscience, Telescience, …

Experiences
• Successful resource sharing between more than 10 sites at the application level.

Lessons learned
• Initiation takes a lot of effort: installation of GT2/JobManager, CA setup, firewall configuration, etc.
• Difficulties caused by the bottom-up approach: resources are not dedicated; incompatibilities between different versions of software.
• Performance problems (MDS, etc.)
• Instability of resources; the key issue is sociological rather than technical.

Behavior of the System

[Diagram: a Ninf-G client at AIST calls servers on the AIST cluster (50 CPUs), Titech cluster (200 CPUs), KISTI cluster (25 CPUs), and NCSA cluster (225 CPUs).]

Preliminary Evaluation

Testbed: 500 CPUs
• TeraGrid: 225 CPUs (NCSA)
• ApGrid: 275 CPUs (AIST, Titech, KISTI)

Ran 1000 simulations
• 1 simulation = 20 seconds
• 1000 simulations = 20,000 seconds ≈ 5.5 hours (if run on a single PC)

Results: 150 seconds = 2.5 minutes
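As a rough check, assuming the 150 seconds is the total wall-clock time for all 1000 simulations on the 500-CPU testbed (an interpretation; the slide does not state this explicitly), the implied numbers are:

\[
T_{\mathrm{1\,CPU}} = 1000 \times 20\ \mathrm{s} = 20000\ \mathrm{s},\qquad
T_{\mathrm{ideal,\,500\,CPU}} = \frac{20000\ \mathrm{s}}{500} = 40\ \mathrm{s},\qquad
\mathrm{speedup} \approx \frac{20000\ \mathrm{s}}{150\ \mathrm{s}} \approx 133.
\]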

Insights
• Ninf-G2 works efficiently on a large-scale cluster of clusters.
• Ninf-G2 provides good performance for fine-grained task-parallel applications on a large-scale Grid, as sketched below.
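The fine-grained task-parallel pattern referred to above can be sketched with asynchronous GridRPC calls. This is only an illustrative sketch: the server names, remote function name, and argument passing are hypothetical, and a production Ninf-G run would typically create one function handle per remote process (e.g., via Ninf-G's handle-array helpers) rather than reusing a handful of handles.

```c
/* Hedged sketch of task-parallel farming with asynchronous GridRPC calls.
 * Server names, "climate/simulate", and the arguments are placeholders;
 * error checking is mostly omitted for brevity. */
#include <stdio.h>
#include "grpc.h"

#define N_SERVERS 4
#define N_TASKS   1000

int main(void)
{
    /* Hypothetical cluster front ends standing in for the testbed sites. */
    char *servers[N_SERVERS] = {
        "cluster1.example.org", "cluster2.example.org",
        "cluster3.example.org", "cluster4.example.org"
    };
    grpc_function_handle_t handles[N_SERVERS];
    grpc_sessionid_t sessions[N_TASKS];
    double results[N_TASKS];
    int i;

    if (grpc_initialize("client.conf") != GRPC_NO_ERROR)
        return 1;

    for (i = 0; i < N_SERVERS; i++)
        grpc_function_handle_init(&handles[i], servers[i], "climate/simulate");

    /* Fire all simulations asynchronously, round-robin over the servers;
     * each call returns immediately with a session id.
     * NOTE: calls sharing a handle may be serialized, so a real run would
     * create one handle per remote CPU rather than one per cluster. */
    for (i = 0; i < N_TASKS; i++)
        grpc_call_async(&handles[i % N_SERVERS], &sessions[i],
                        (double)i, &results[i]);

    /* Block until every outstanding session has completed. */
    grpc_wait_all();

    for (i = 0; i < N_SERVERS; i++)
        grpc_function_handle_destruct(&handles[i]);
    grpc_finalize();

    printf("completed %d simulations\n", N_TASKS);
    return 0;
}
```

The asynchronous-call-plus-wait-all structure is what lets many short (20-second) tasks be spread across hundreds of CPUs at once, which is the usage style the insight above refers to.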

Observations

Still a "grass roots" organization
• Less administrative formality (cf. PRAGMA, APAN, APEC/TEL, etc.)
• Difficulty in establishing collaboration with others
• Unclear membership rules: join/leave procedures, membership levels, rights/obligations
• Vague mission, but it has already gathered a (potentially) large pool of computing resources

Observations (cont’d)

Duplication of effort on "similar" activities

Organization-wise
• APAN: participation by country
• PRAGMA: most member organizations overlap with ApGrid

Operation-wise
• ApGrid testbed vs. PRAGMA resources: may cause confusion, although technically both take the same approach (multi-grid federation)

Network-wise
• Primarily APAN / TransPAC
• Skillful engineering team

Summary of current status

Difficulties are caused not by technical problems but by sociological/political problems
• Each site has its own policy: account management, firewalls, trusted CAs, …
• Differences in interests: applications, middleware, networking, etc.
• Differences in culture, language, etc.: human interaction is very important

Summary of current status (cont’d)

Activities at the GGF
• Production Grid Management RG: drafting a case study document (ApGrid Testbed)
• Groups in the Security Area
  - Policy Management Authority RG (not yet approved): discussions with representatives from DOE Science Grid, NASA IPG, EUDG, etc.
  - Federation/publishing of CAs (will kick off); I will be one of the co-chairs

Summary of current status (cont’d)

What has been done?
• Resource sharing between more than 10 sites (853 CPUs used by a Ninf-G application)
• Use of GT2 as the common software

What hasn't?
• Formalizing "how to use the Grid testbed": I could use it, but it is difficult for others; I was given an account at each site through personal communication
• Providing documentation
• Keeping the testbed stable
• Developing management tools (browsing information, CA/certificate management)

Future Direction (proposal)

Draft an “Asia Pacific Grid Middleware Deployment Guide”, a recommendation document for the deployment of Grid middleware
• Minimum requirements
• Configuration

Draft an “Instruction of Grid Operation in the Asia Pacific Region”, which describes how to run a Grid Operation Center that supports management of a stable Grid testbed
• Needs support from APAN

Ask APAN to approve the documents as recommendations and encourage member countries to follow them when deploying Grid middleware.

Other issues (technical)

• Should think about a GT3/GT4-based Grid testbed
• Each CA must provide a CP/CPS
• International collaboration: TeraGrid, UK e-Science, EUDG, etc.
• Run more applications to evaluate the feasibility of the Grid
  - large-scale cluster + fat link vs. many small clusters + thin links