Page 1

Duke Systems

Virtualizing, Sharing, Interconnecting

Part 2: servers and pipes

Jeff Chase, Dept. of Computer Science

Duke University

NSF CIO Meeting, Boston, July 12, 2012

Page 2

A CIO’s view?

[Diagram: nested circles showing GENI as a small part of research computing, which is itself a small part of everything else. Drawing is not to scale.]

Page 3

A broader view of GENI

• GENI is:
  – a constituency demanding attention;
  – a process that can help to meet the needs of other constituencies;
  – a bundle of technologies coming to campus. Some are already there.

• GENI engages key technologies that can help give campus users what they want:
  – virtualizing, sharing, and interconnecting
  – infrastructure as a distributed service

Page 4

GENI Portal Home Page

[Screenshot, July 10, 2012. Sponsored by the National Science Foundation.]

Page 5

Constructing “slices”

• I like to use TinkerToys as a metaphor for creating a slice in GENI.

• The parts are virtual infrastructure resources: compute, networking, storage, etc.

• Parts come in many types, shapes, and sizes.

• Parts interconnect in various ways.

• We combine them to create useful built-to-order assemblies.

• Some parts are programmable.

• Where do the parts come from?
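The TinkerToy metaphor lends itself to a code sketch. Everything below is illustrative: the `Part`/`Slice` classes and their attributes are invented for this example, not the real GENI RSpec format or aggregate manager API.

```python
# Hypothetical sketch of slice assembly: parts are virtual resources,
# and a slice is a built-to-order assembly of interconnected parts.
from dataclasses import dataclass, field

@dataclass
class Part:
    name: str                 # e.g., "vm1"
    kind: str                 # compute | network | storage
    attrs: dict = field(default_factory=dict)

@dataclass
class Slice:
    name: str
    parts: list = field(default_factory=list)
    links: list = field(default_factory=list)  # pairs of part names

    def add(self, part):
        self.parts.append(part)
        return part

    def connect(self, a, b):
        self.links.append((a.name, b.name))

# Build a tiny two-node slice: two VMs joined by a virtual link.
s = Slice("demo-slice")
vm1 = s.add(Part("vm1", "compute", {"image": "ubuntu", "cores": 2}))
vm2 = s.add(Part("vm2", "compute", {"image": "ubuntu", "cores": 2}))
s.connect(vm1, vm2)
print(len(s.parts), len(s.links))  # prints: 2 1
```

In a real control framework the assembled request would be serialized and submitted to the providers that own the parts.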

Page 6

ExoGENI Racks

• Packaged infrastructure pod
  – servers, network, storage
  – off-the-shelf cloud software (“exo”)
  – GENI-enabled “special sauce”

• Cookie-cutter deployment
  – funded @ 14 campuses
  – linked for sharing, “plug and play”
  – multiple use

Open Resource Control Architecture

Page 7

ExoGENI software structure

Page 8

Executive summary

[Layer diagram, top to bottom: GENI API and other APIs; Orchestration Service (“stuff we build”: automation); Cloud Service and Virtualization Layer (“standard stuff you need anyway”); Physical: metal and glass.]

Page 9

Competing rack “brands”

• InstaGENI
  – Emulab/PlanetLab
  – lightweight VMs (vservers)
  – bare-metal provisioning
  – HP is an engaged sponsor

• ExoGENI
  – ORCA
  – off-the-shelf cloud software
  – hypervisor VMs (KVM) or bare metal
  – IBM is an engaged vendor

Page 10

EC2: the canonical public cloud

[Diagram: a virtual appliance instantiated from an image.]

Page 11

Infrastructure as a Service (IaaS)

“Consumers of IaaS have access to virtual computers, network-accessible storage, network infrastructure components, and other fundamental computing resources…and are billed according to the amount or duration of the resources consumed.”
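The billing model in that definition is easy to make concrete. The sketch below is illustrative only: the instance types and rates are invented, not any real provider’s price list.

```python
# Minimal sketch of usage-based IaaS billing: consumers are billed
# according to the amount or duration of the resources consumed.
HOURLY_RATES = {"small-vm": 0.05, "large-vm": 0.20}  # $/hour, hypothetical
STORAGE_RATE = 0.10                                   # $/GB-month, hypothetical

def bill(instances, storage_gb_months):
    """instances: list of (instance_type, hours_used) pairs."""
    compute = sum(HOURLY_RATES[t] * hours for t, hours in instances)
    storage = STORAGE_RATE * storage_gb_months
    return round(compute + storage, 2)

# One small VM for 100 hours plus 50 GB-months of storage.
print(bill([("small-vm", 100)], 50))  # 5.00 compute + 5.00 storage = 10.0
```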

Page 12

Cloud > server-based computing

[Diagram: Client ↔ Server(s).]

• Client/server model (1980s – )

• Now called Software-as-a-Service (SaaS)

Page 13

Host/guest model

[Diagram: Client ↔ Guest Service, hosted on a Cloud Provider (the Host).]

• Service is hosted by a third party.
  – flexible programming model
  – cloud APIs for the service to allocate/link resources
  – on-demand: pay as you grow

Page 14

IaaS: infrastructure services

[Diagram: Client ↔ Service; the service runs on a Platform/OS over a VMM on physical hardware.]

• Deployment of private clouds is growing rapidly with open IaaS cloud software.

• Hosting performance and isolation are determined by the virtualization layer.

• Virtual machines: VMware, KVM, etc.

• A cloud solution for your campus?

Page 15

PaaS: platform services

[Diagram: Client ↔ Service; the service runs on a Platform/OS over an optional VMM on physical hardware.]

• PaaS cloud services define the high-level programming models, e.g., for clusters or specific application classes.

• Hadoop, grids, batch job services, etc. can also be viewed as part of the PaaS category.

• Note: these can be deployed over IaaS.

Page 16

OpenStack, the Cloud Operating System: a management layer that adds automation & control.

[Anthony Young @ Rackspace]

Page 17

Managing images

• “Let a thousand flowers bloom.”

• Curated image collections are needed!

• University IT can help.

• “Virtual appliance marketplace”

Page 18

Connectivity: the missing link

[Diagram: Cloud Providers supply virtual compute and storage infrastructure via cloud APIs (Amazon EC2, …); Transport Network Providers supply virtual network infrastructure via dynamic circuit APIs (NLR Sherpa, DOE OSCARS, I2 ION, OGF NSI, …).]

Page 19

Linking clouds with L2 circuits

[Diagram: cloud sites A and B joined by a logical pipe (path) through circuit providers c1 and c2, with a cross-domain link between them.]

Campus clouds can serve as on-ramps to national fabrics.

Page 20

Circuit stitching

[Diagram: nodes at each cloud site produce/consume a VLAN tag.]

• Extends OpenStack and Eucalyptus to configure virtual NICs and attach VLANs.

• Adjacent circuits connect at network exchanges (e.g., StarLight).

• The last-hop provider to the cloud site is your campus or RON.
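The produce/consume idea can be sketched as a small negotiation: the two providers meeting at an exchange must agree on a VLAN tag both can carry on the shared port. The provider names and tag ranges below are invented for illustration; real stitching engines negotiate this through the circuit APIs.

```python
# Sketch of VLAN-tag stitching between two adjacent circuit segments
# that meet at a network exchange (e.g., StarLight).
class Segment:
    def __init__(self, name, available_tags):
        self.name = name
        self.available = set(available_tags)  # tags this provider can carry
        self.tag = None

def stitch(producer, consumer):
    """The producer offers a tag that both providers can carry on the
    shared exchange port; the consumer attaches its circuit to it."""
    common = producer.available & consumer.available
    if not common:
        raise ValueError("no common VLAN tag at the exchange")
    tag = min(common)
    producer.tag = consumer.tag = tag
    return tag

campus = Segment("campus-RON", range(100, 200))
backbone = Segment("backbone", range(150, 4000))
print(stitch(campus, backbone))  # prints: 150
```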

Page 21

SC11 Demo: Solar Fuels Workflow

Page 22

ExoGENI: recap

• ExoGENI is a network of standard OpenStack cloud sites deployed (or deploying) at campuses.
  – Initial sites are centrally managed from RENCI; other providers may join and advertise portions of their resources.

• Layered orchestration software (ORCA) manages multi-cloud slices and integrates with GENI.
  – Proxies GENI APIs; checks identity/authorization.

• Circuit backplane for L2 network connectivity.
  – By agreement with circuit providers....

• Configurable/flexible L3 connectivity.
  – “Easy button” to configure an IP network within a slice.
  – Host campuses may offer L3 connectivity to slices.

Page 23

Credentials: who has access?

[Diagram: cloud-based credential store. (1) The IdP registers the user and issues user credentials. (2) The PA creates project x and issues project x credentials. (3) The SA creates slice s in x and issues slice s credentials. (4) Credentials may be delegated. (5) The user creates a sliver in s.]
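The chain on this slide can be sketched as data: each credential points at the one it depends on, and creating a sliver requires walking the whole chain back to the user. Everything here is illustrative; real GENI credentials are signed certificates, and the field names below are invented.

```python
# Sketch of the credential chain: IdP -> user cred, PA -> project cred,
# SA -> slice cred; sliver creation checks the full chain.
def issue(issuer, subject, kind, parent=None):
    """A toy unsigned credential; real ones carry signatures."""
    return {"issuer": issuer, "subject": subject, "kind": kind, "parent": parent}

def may_create_sliver(slice_cred, user):
    """Walk the chain slice -> project -> user and check each link."""
    kinds, cred = [], slice_cred
    while cred:
        kinds.append(cred["kind"])
        cred = cred["parent"]
    chain_ok = kinds == ["slice", "project", "user"]
    return chain_ok and slice_cred["parent"]["parent"]["subject"] == user

user_cred = issue("IdP", "alice", "user")
proj_cred = issue("PA", "project-x", "project", parent=user_cred)
slice_cred = issue("SA", "slice-s", "slice", parent=proj_cred)
print(may_create_sliver(slice_cred, "alice"))  # prints: True
```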

Page 24

GENI uses Shibboleth IdPs

[Diagram: the IdP registers users and issues user credentials. Users have roles, e.g., student, faculty. Example credentials for users T and D:
  IdP.geniUser ← T; IdP.student ← T; IdP.enrolled(CS-114) ← T
  IdP.geniUser ← D; IdP.faculty ← D]

• An IdP asserts facts about users.

• User attributes may include InCommon attributes harvested through indirect delegation to Shibboleth IdPs.

• These attributes may have parameters with simple values (strings or numbers).

Page 25

Please work with InCommon and release IdP attributes to GENI!

Page 26

Trust management: generalizing PKI

• An entity A delegates trust to another by endorsing its public key for possession of an attribute or role.

• The delegation is a fact written as a logic statement and issued in a credential signed by A.

• Other entities reason from these facts according to their own policy rules, which are declared in logic.

• Policy rules may also be signed and transmitted.

[Diagram: A trusts B, expressed as the statement “A.trusts ← B” carried in a certificate with the issuer’s name (or key), a term of validity, the payload (the statement), and a signature.]
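Reasoning from facts and rules is ordinary logic inference. The sketch below shows the idea with a toy forward-chainer; the entity and role names are invented, and unlike a real system it admits facts without verifying signatures.

```python
# Sketch of role-logic trust inference: facts are role memberships
# (entity.role <- member), rules derive one role from another.
facts = {("IdP", "geniUser", "alice")}  # IdP.geniUser <- alice (signed by IdP)

# Policy rule declared by a relying party: Portal.access <- IdP.geniUser
rules = [(("Portal", "access"), ("IdP", "geniUser"))]

def derive(facts, rules):
    """Forward-chain until no new role memberships can be derived."""
    facts, changed = set(facts), True
    while changed:
        changed = False
        for (head_ent, head_role), (body_ent, body_role) in rules:
            for ent, role, member in list(facts):
                new = (head_ent, head_role, member)
                if (ent, role) == (body_ent, body_role) and new not in facts:
                    facts.add(new)
                    changed = True
    return facts

closure = derive(facts, rules)
print(("Portal", "access", "alice") in closure)  # prints: True
```

The point is that each party applies its own rules to facts signed by others, rather than trusting a single global authority.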

Page 27

Summary

• GENI is about incorporating new foundational infrastructure into campuses.
  – general-purpose, multi-use
  – best-of-breed, off-the-shelf

• GENI is automation for this infrastructure.
  – ease of use, power, flexibility, safety
  – Many others are working on these problems.
  – GENI is the focal point in the academic space.

• Rolling deployment, ramping up....

• Best practices @ central IT will help!