
Co-funded by the European Union

Fenix Research Infrastructure
Overview, status update and access

Outline

Time           Title                                                      Speaker(s)
17:35 - 17:45  Overview of the Fenix Infrastructure and the ICEI project  Dirk Pleiter (JUELICH-JSC)
17:45 - 17:55  Implementation status update                               Colin McMurtrie (ETHZ-CSCS)
17:55 - 18:25  Use case presentations (5 min each):
               • Cerebellar simulation (SP1)                              Egidio D'Angelo (UNIPV)
               • Data acquisition and analysis in the context of
                 human brain atlasing (SP2/SP5)                           Timo Dickscheid (JUELICH-INM1)
               • Hippocampus simulation (SP6)                             Michele Migliore (CNR)
               • Massive open online courses (MOOC) (SP6)                 Jean-Denis Courcol (EPFL)
               • Fenix and the Neuromorphic Computing Platform (SP9)      Andrew Davison (CNRS) / Eric Müller (UHEI)
               • Neurorobotics PaaS (SP10)                                Hossain Mahmud (FORTISS) / Susie Murphy (EPFL)
18:25 - 18:35  Access to Fenix resources                                  Giuseppe Fiameni (CINECA)
18:35 - 18:45  Q&A                                                        All


ICEI Overview

Fenix Goals

• Establish HPC and data infrastructure services for multiple research communities
  • Encourage communities to build community-specific platforms
  • Delegate resource allocation to communities
• Develop and deploy services that facilitate federation
  • Based on European and national resources
• Science-community-driven approach
  • Infrastructure realisation and enhancements based on a co-design approach
  • Science communities providing resources to realise the infrastructure
    → HBP SGA Interactive Computing E-Infrastructure (ICEI)
  • Resource allocation managed by the community

Fenix Partners

• Currently involved centres
  • BSC (ES)
  • CEA (FR)
  • CINECA (IT)
  • CSCS (CH)
  • JSC (DE)
• Foreseen extensibility
  • Open for more partners and stakeholders

Infrastructure vs. Platform Services

[Diagram: user communities run community-specific platform services on top of shared infrastructure services.]

ICEI Infrastructure Services

• Computing services
  • Interactive Computing Services
  • (Elastic) Scalable Computing Services
  • VM Services
• Data services
  • Active Data Repositories
  • Archival Data Repositories
  • Data Mover Services, Data Location and Transport Services
• Service federation
  • Cope with the variety of data sources and make data available to the wider community
  • Provide access to a diversity of computing capabilities at different sites

ICEI Science and Use Cases for Co-Design

#   Working Title                                                                    PI
1   Data-driven cellular models of brain regions, Olfactory Bulb                     Migliore
3   Learning-to-learn (LTL) in a complex spiking network on HPC and
    Neuromorphic hardware interacting with the NRP                                   Maass, Meier
5   Large scale simulations of models: Cerebellum                                    D'Angelo
6   Large scale simulations of models: Hippocampus                                   Migliore
7   Elephant big data processing                                                     Grün, Denker
8   Mouse Brain Atlas                                                                Pavone
9   Towards a novel decoder of brain cytoarchitecture using large scale simulations  Poupon, Axer
10  Multi-scale co-simulation: Connecting Arbor/Neuron, NEST and TVB to
    simulate the brain                                                               Morrison, Destexhe, Diesmann, Jirsa
11  Neurorobotics platform, large-scale brain simulations                            von Arnim, Cruz
12  BBP columnar simulation                                                          Schürmann
13  Ilastik as a service on the HBP Collaboratory                                    Kreshuk
14  Online visualization of multi-resolution reference atlases                       Amunts
15  Data management and big data analytics for high throughput microscopy            Dickscheid
16  Multi-area macaque NEST simulation with live visualization and interaction       v. Albada, Diesmann
17  Data management and big data analytics for large cohort neuroimaging             Caspers, Eickhoff

Resource Allocation Model

• Actors
  • Fenix Resource Providers
  • Fenix Communities
  • Fenix Users
• Role of Fenix Resource Providers
  • Provide a fixed amount of resources for a given period to Fenix Communities
  • Define rules for resource allocation (e.g., a peer-review process)
• Fenix Users
  • Submit proposals for resources to the relevant Fenix Community
• Fenix Communities
  • Review proposals and award available resources to Fenix Users


ICEI - Implementation Status
Bringing ICEI Infrastructure Services and Resources to the HBP

Fenix/ICEI provides the base infrastructure for the HPAC (High Performance Analytics and Computing) Platform.

ICEI Resources for HBP

• ICEI resources have already been made available to the HBP and PRACE by CSCS
• There are currently 4 HBP projects with compute allocations at CSCS
• Usage of the Swift Object Storage is growing by >1% per month
• More resources are available than are being consumed, so HBP users are encouraged to apply for a compute allocation

How do I use ICEI Resources? (1)

Swift Object Storage:
The Swift object storage can be accessed directly from your personal computer:
• GUI clients, e.g. CyberDuck
• SP5 Python library: https://hbp-archive.readthedocs.io/en/latest/
  • Better for management of the ACLs and object buckets

It is also reachable from inside the Collaboratory:
• Get/Put from Jupyter notebooks
• More capabilities coming soon
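For illustration, a minimal sketch of direct access from a personal computer using the generic python-swiftclient package (the hbp-archive library linked above wraps this and is the more convenient route for ACLs and buckets). The Keystone endpoint, project name and credentials below are placeholders, not the actual Fenix values:

```python
# Minimal sketch of talking to Swift object storage with the generic
# python-swiftclient package (pip install python-swiftclient). Endpoint,
# project and credentials are placeholders, not the actual Fenix values.
from swiftclient.client import Connection

conn = Connection(
    authurl="https://object-store.example.org:13000/v3",  # placeholder endpoint
    user="my_username",
    key="my_password",
    auth_version="3",
    os_options={
        "project_name": "my_hbp_project",       # placeholder project
        "user_domain_name": "Default",
        "project_domain_name": "Default",
    },
)

# Create a container (bucket) and upload a file into it.
conn.put_container("my_container")
with open("results.csv", "rb") as f:
    conn.put_object("my_container", "simulations/results.csv", contents=f)

# List the container's objects and download one of them again.
_, objects = conn.get_container("my_container")
for obj in objects:
    print(obj["name"], obj["bytes"])

_, body = conn.get_object("my_container", "simulations/results.csv")
with open("results_copy.csv", "wb") as f:
    f.write(body)
```

The same calls work from a Jupyter notebook inside the Collaboratory, which is what the Get/Put workflow above amounts to.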

Active Data Repositories:
• Come with the compute allocation (= $SCRATCH)
• Low-latency storage tier

Archival Data Repositories:
• Available either as part of a computing request or separately (proposal needed)

How do I use ICEI Resources? (2)

Pollux OpenStack IaaS:
The Pollux OpenStack IaaS is available to host your platform VMs:
• Accessible globally via the Horizon GUI
• RESTful API can be used for automation
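For the automation route, a minimal sketch using the openstacksdk package; the auth endpoint, project, image, flavor and network names below are placeholders rather than the actual Pollux values:

```python
# Minimal sketch of automating Pollux with the OpenStack SDK
# (pip install openstacksdk). The auth endpoint, project, image,
# flavor and network names are placeholders, not the real Pollux values.
import openstack

conn = openstack.connect(
    auth_url="https://pollux.example.org:13000/v3",  # placeholder endpoint
    project_name="my_hbp_project",
    username="my_username",
    password="my_password",
    user_domain_name="Default",
    project_domain_name="Default",
)

# Look up a boot image, a flavor and a project network by name.
image = conn.compute.find_image("ubuntu-18.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("my_project_net")

# Boot the platform VM and wait until it reaches ACTIVE.
server = conn.compute.create_server(
    name="my-platform-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)
```

Anything that can be clicked in the Horizon dashboard can be scripted this way, which is what makes the IaaS usable as a platform back end.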

How do I use ICEI Resources? (3)

Scalable Compute Resources:
The Piz Daint system is available as a state-of-the-art scalable compute resource for HBP users:
• Accessible globally via the command-line interface
• Via the UNICORE GUI
• RESTful API offered via UNICORE for platforms
• Use of service accounts for platforms is also acceptable at some sites (e.g. CSCS)
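As an illustration of the RESTful route, a minimal sketch of submitting a job through the UNICORE REST API with plain HTTP; the core endpoint URL and the token are placeholders (the real URL is published in the HPAC Platform documentation, and the token comes from an HBP OIDC login):

```python
# Minimal sketch of job submission via the UNICORE REST API using plain
# HTTP (pip install requests). The endpoint and token are placeholders.
import requests

BASE = "https://unicore.example.org:8080/DAINT-CSCS/rest/core"  # placeholder
TOKEN = "eyJ..."  # placeholder: a valid HBP OIDC access token

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/json",
    "Accept": "application/json",
}

# A minimal UNICORE job description: run `hostname` on one node.
job = {
    "Executable": "/bin/hostname",
    "Resources": {"Nodes": "1"},
}

# Submission returns 201 Created with the new job's URL in the
# Location header.
r = requests.post(f"{BASE}/jobs", json=job, headers=headers)
r.raise_for_status()
job_url = r.headers["Location"]

# The job resource can then be polled for its status
# (QUEUED, RUNNING, SUCCESSFUL, ...).
status = requests.get(job_url, headers=headers).json()["status"]
print(job_url, status)
```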

How to get Help

General contact for the HPAC Platform:
• https://collab.humanbrainproject.eu/#/collab/264/nav/2378

How to apply for resources:
• Send your proposal to: icei-coord@fz-juelich.de

Getting help:
• Send email to: hpac-support@humanbrainproject.eu

Examples of use case presentations


Hippocampus
Michele Migliore (CNR)

Motivation/Introduction

[Diagram: multilevel modelling of the hippocampus, combining bottom-up and top-down approaches, simplified networks running in real time on neuromorphic hardware, and links to experiments (EXP), clinical work, imaging and pharma.]

Join us for an EITN workshop on 28-29 January.

We would like to assemble a number of multidisciplinary HBP partners interested in working together on implementing a multilevel model of hippocampal functions and dysfunctions.

Requirements / ICEI resources / current problems

• 450,000 neurons; ~90·10⁶ membrane segments; 20 ODEs/segment plus synapses; ~2·10⁹ ODEs in total
• 1 s of simulated time takes 3-9 hours on Piz Daint (CSCS) or JURECA (JSC) using 10,000 cores
• ~2 TB of input, up to ~1-4 TB of output
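As a rough consistency check on these figures: ~90·10⁶ membrane segments across 450,000 neurons is about 200 segments per neuron, and 90·10⁶ segments × 20 ODEs per segment gives ≈1.8·10⁹ ODEs, in line with the quoted ~2·10⁹ once the synaptic state variables are added.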

Current use of ICEI resources

• 50,000-90,000 node hours every 3 months, 20 TB of storage

Planned use of ICEI resources

• 60M node hours per year, 100 TB of storage for simulation and analysis
• Offline interactive visualization
• User-friendly access from the Collaboratory (currently a blocking issue)


Neurorobotics Platform
PaaS Use Case for SP7

Susie Murphy (EPFL)
Hossain Mahmud (fortiss)

HBP Summit, Maastricht, 16.10.2018


What is the Neurorobotics Platform (NRP)?

● The purpose of the Neurorobotics Platform (NRP) is to allow researchers to perform experiments on virtual robots using brain models

● To deal with huge simulations involving brains with millions of neurons, we need access to supercomputers

● To achieve this, we are working with CSCS to deploy the NRP on the Piz Daint supercomputer


NRP on Piz Daint: Plan

[Diagram: frontend and UNICORE outside the machine; backend container, NEST containers and shared storage on Piz Daint.]

0. An HBP user has been allocated resource time on Piz Daint.
1. Using the user's OIDC token, a request to start job(s) is submitted via UNICORE.
2. The job is allocated and started.
3. Using a tunnel, the backend sends data back to the frontend and vice versa.
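Step 3 relies on a tunnel because compute nodes cannot accept inbound connections through the site firewall. A minimal sketch of the reverse-tunnel idea, assuming passwordless SSH from the job to a reachable gateway host; the host names, ports and key path are placeholders, and the NRP's actual tunnelling setup at CSCS is site-specific:

```python
# Minimal sketch of a reverse SSH tunnel from a compute node (backend)
# out to a frontend-side gateway, so the frontend can reach a service
# listening on the node. All names, ports and paths are placeholders.
import subprocess

BACKEND_PORT = 8080               # port the backend serves on the node
FRONTEND_PORT = 9000              # port to expose on the gateway side
GATEWAY = "frontend.example.org"  # placeholder host reachable from the node

# `ssh -N -R` makes the remote host listen on FRONTEND_PORT and forward
# connections back to localhost:BACKEND_PORT on this node.
tunnel = subprocess.Popen([
    "ssh", "-N",
    "-i", "/path/to/private_key",  # placeholder key for passwordless login
    "-R", f"{FRONTEND_PORT}:localhost:{BACKEND_PORT}",
    GATEWAY,
])

# ... run the backend while the tunnel is up, then shut it down:
# tunnel.terminate()
```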


NRP on Piz Daint: Roadmap (SP7-SP10)

NRP releases: 2.1 October '18, 2.2 March '19, 2.3 October '19; target: running ~10⁵ neurons in real time.

• Step 1: run the whole backend in a single container on Daint
• Step 2: run the whole backend AND multi-process NEST in a single container on Daint
• Step 3: run the whole backend and spawn multi-process NEST on other nodes
• Step 4: run multiple backends that spawn multi-process NEST in production


NRP on Piz Daint: Accomplishments

● Backend container running on Piz Daint
  ○ The Docker image had to be importable into Shifter
● OpenGL and GPU context available from Docker containers
● Bidirectional data flow established to bypass the firewall
  ○ A secure, easy and restricted method had to be set up
● NRP running on Piz Daint in a prototype setup


NRP on Piz Daint: Challenges

● Authenticating HBP users at CSCS
  ○ Right now, HBP accounts and CSCS accounts are linked manually
● Procedure for updating Docker images for users
  ○ Currently, images go to user space and need to be updated manually
  ○ A centralised repository for platform providers is needed
● Storing user data
  ○ A centralised data store is needed that can be accessed from within and outside Piz Daint
  ○ A mechanism is needed to allow users to share data with each other
  ○ How to transport huge files?


NRP on Piz Daint: Current Status

[Diagram: the planned architecture (frontend, UNICORE, backend container, NEST containers, shared storage on Piz Daint), annotated with status.]

● UNICORE: Piz Daint jobs can be started using OIDC; only dummy jobs have been tested so far
● Backend/frontend data flow: a manual set-up was successful
● NEST containers: the end solution is not figured out yet, but is not needed until steps 3/4 of the roadmap; at the moment NEST runs inside the backend container, and separate containers are planned for step 3


NRP on Piz Daint: Roadmap status

(The same four-step roadmap as above; current progress per component is summarised in the status slide.)


http://neurorobotics.net/

/HBPNeurorobotic

@HBPNeurorobotic

Thanks for listening!

Access to Fenix resources
Giuseppe Fiameni | Debora Testi (CINECA)

Agenda
■ Access mechanisms
■ Access mechanism - HBP
■ Access mechanism - PRACE
■ Available resources in 2018
■ Available resources in 2019

Fenix: access mechanism

■ Fenix Resource Providers vs. Fenix Communities
  ■ Fenix Resource Providers provide and operate resources
  ■ Fenix Communities decide on resource allocation

Fenix: access mechanism via HBP

■ Current procedure:
  ■ Send a 2-3 page proposal to icei-coord@fz-juelich.de
  ■ ICEI performs a technical review
  ■ The HBP Directorate decides on awarding the resources
  ■ ICEI makes the resources available
■ Non-HBP members can apply for resources via PRACE
■ First groups have started to use the infrastructure
  ■ HBP projects have been awarded allocations at CSCS and are already using them

[Diagram: allocation workflow between the applicant, the ICEI PMO, the ICEI Experts, the HBP Directorate (HBP DIR) and the ICEI sites:]

(1) Proposal: applicant → ICEI PMO
(2) Review request: ICEI PMO → ICEI Experts
(3) Review results: ICEI Experts → ICEI PMO
(4) Decision proposal: ICEI PMO → HBP DIR
(5) Decision result: HBP DIR → ICEI PMO
(6) Decision result: ICEI PMO → applicant
(7) Information about allocated resources: ICEI PMO → ICEI Sites
(8) Support: ICEI Sites → users

Fenix: access mechanism via PRACE

■ Pilot phase in place
  ■ Optional ICEI resources are available in call #18 for PRACE Tier-0 resources
  ■ The call is now open until 30 October
  ■ PRACE performs the scientific review
  ■ ICEI provides technical feedback on the Fenix aspects
■ If successful, ICEI resources will be included in the 2019 Tier-0 calls
■ A combined PRACE-ICEI DECI call will be opened in 2019

Fenix: available resources in 2018

In Q4 2018, resources are available at CSCS:

Resource                  Total ICEI   HBP (25%)   PRACE (15%)   Unit
Scalable computing           186'204     116'344        69'860   node hours
Interactive computing        297'840     186'150       111'690   node hours
VM services                      784         490           294   # VMs
Archival data repository       1'600       1'000           600   TB
Active data repository            32          20            12   TB·day

Fenix: available resources in 2019

Provisional numbers (subject to change):

Resource                                 Total ICEI   HBP (25%)   PRACE (15%)   Unit
Scalable computing                        1'800'000     465'000       279'000   node hours
Interactive computing                     1'311'000     327'000       196'000   node hours
VM services                                   9'000       2'270         1'362   # VMs
Archival data repository                     22'000       5'500         3'300   TB
Active data repository (every quarter)          600         160           100   TB·day
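As a quick check against the column headings: for 2019 the HBP and PRACE columns correspond to roughly 25% and 15% of the ICEI totals (e.g. 465'000 and 279'000 out of 1'800'000 scalable-computing node hours).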

Fenix Collab

https://collab.humanbrainproject.eu/#/collab/28520/nav/200129

Q&A
