Clusters Massive Cluster Gigabit Ethernet System Architecture for Extreme Devices David Culler http://www.cs.berkeley.edu/~culler U.C. Berkeley DARPA Meeting 9/21/1999

Page 1:

Clusters

Massive Cluster

Gigabit Ethernet

System Architecture for Extreme Devices

David Culler

http://www.cs.berkeley.edu/~culler

U.C. Berkeley

DARPA Meeting

9/21/1999

Page 2:

9/21/99 Endeavour Sys. Arch 2

Recap: Convergence at the Extremes

• Arbitrarily Powerful Services on “Small” Devices

– massive computing and storage in the infrastructure

– active adaptation of form and content “along the way”

• Extremes more alike than either is to the middle

– More specialized in function

– Communication centric design

» wide range of networking options

– Federated System of Many Many Systems

– Hands-off operation, mgmt, development

– Scalability, High Reliability, Availability

– Power and space limited

=> simplicity

• Each extends the other

Page 3:

State-of-the-Art: Very Large Systems

• Scalable Clusters Established

– high-speed user-level networking + single system image

– naming, authentication, resources, remote exec., storage, policy

• Meta-system glue over full OS and Institutional structure

– GLUnix (UCB), Globus (ANL), Legion (UVA), IPG (NASA), Harness, NetSolve, SNIPE (UTK), ...

– uniform, multiprotocol communication & access mechanism

– personal virtual machine spanning potentially diverse resources

» constructed and managed “by hand”

• Key challenges

– Automatic Composition, Management, and Availability

– Scalability to global scale

– Ease of development for global-scale services

Page 4:

State-of-the-Art: in the small...

• Unix-like support in a small form factor + real time seasoning

– microkernels dominate

» Commercial: PSION, GeoWorks, WinCE, Inferno, QNX, VxWorks, JavaOS, ChorusOS

» Academic: Exokernel, OSKit, uClinux, ELKS

– plus PalmOS, BeOS

– Components and mobile objects: Jini, CORBA, DCOM, ...

=> tracks the 80386

– when it becomes a ~1990 PC, Unix will run on it

– ability to remove components (modularity) + fault boundaries more important than performance

– legacy applications less dominant

• ad-hoc networking for connectivity

Page 5:

Design Issues for “Small Device OS”

• Current: managing address spaces, thread scheduling, IP stack, windowing system, device drivers, file system, applications programming interface, power management

• Challenge: How can operating systems for tiny devices be made radically simpler, manageable, and automatically composable?

Page 6:

Emerging Devices

• RF COTS mote

– Atmel microprocessor

– RF Monolithics transceiver

» 916 MHz, ~20 m range, 4800 bps

– lifetime: 1 week fully active, 2 years at 1% duty cycle

– sensors: 2-axis magnetometer (N/S/E/W), 2-axis accelerometer, light intensity, humidity, pressure, temperature

• Laser mote

– 650 nm laser pointer

– 2-day life at full duty cycle

• CCR mote

– 4 corner-cube reflectors

– 40% of hemisphere covered

Page 7:

Micro Mote - First Attempt

Page 8:

Structured Communication-Centric System Architecture

• Active Proxies

– connected to the infrastructure

– soft-state, bootstrap protocol

– transcoding

• Ubiquitous Devices

– billions

– net + sensors / actuators

– net + UI

=> flow devices

• Scalable Info. Utility Base

– highly available

– persistent state (safe)

– databases, agents

– service programming environment

• Service Paths

– aggregate flows (rivers)

– transcoding operators
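The soft-state idea behind the Active Proxies can be pictured in miniature (an illustrative sketch only — the class and method names here are hypothetical, not the Endeavour/Ninja code): a proxy keeps per-device entries that expire unless refreshed, so bootstrap and keepalive are the same announcement, and stale state vanishes on its own after a device dies or moves away.

```python
import time

class SoftStateProxy:
    """Toy soft-state table: device entries expire unless refreshed."""

    def __init__(self, ttl_seconds=3.0, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable clock, handy for testing
        self.entries = {}           # device_id -> (metadata, expiry time)

    def refresh(self, device_id, metadata=None):
        # A device (re-)announces itself; under soft state, bootstrap
        # and keepalive are the same operation.
        self.entries[device_id] = (metadata, self.clock() + self.ttl)

    def live_devices(self):
        now = self.clock()
        # Expired entries are dropped lazily: no explicit teardown
        # protocol is needed when a device disappears.
        self.entries = {d: (m, exp) for d, (m, exp) in self.entries.items()
                        if exp > now}
        return sorted(self.entries)

# Demo with a fake clock: only the device that keeps announcing survives.
t = [0.0]
proxy = SoftStateProxy(ttl_seconds=3.0, clock=lambda: t[0])
proxy.refresh("mote-1"); proxy.refresh("pda-7")
t[0] = 2.0; proxy.refresh("mote-1")   # mote-1 keeps announcing, pda-7 goes silent
t[0] = 4.0
proxy.live_devices()                  # -> ["mote-1"]  (pda-7 expired silently)
```

The design choice this illustrates: failure handling falls out of timeouts rather than being a separate protocol, which is what makes the proxies "hands-off" to manage.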

Page 9:

The Large: Info Utility Platform

• Not just storage and processing, but distributed innovation of scalable, available services

• Base program extends the Ninja service platform

[Figure: Scalable, Available Service Platform — complex nodes running a service execution environment over ULN; Registry, PDDS (non-blocking), and Discovery; automated smart-client fail-over & load balancing; transcoding & soft state via Active Proxies. Traditional OS functions are offered as services; the platform itself is built from push services as well.]

• Utility requirement => path connects device to clustered service through soft-state Active Proxies, with graceful fail-over within the service via non-blocking PDDS and RMI*
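The "automated smart-client fail-over" box can be sketched as follows (a minimal sketch under assumed interfaces, not the actual Ninja smart-client code — `smart_call` and the exception name are inventions for illustration): the client holds a list of replica endpoints and transparently retries the next replica when a call fails, so graceful fail-over lives in the client stub rather than in the application.

```python
class ReplicaUnavailable(Exception):
    """Raised by a stub when its service node is down or unreachable."""

def smart_call(replicas, request):
    """Try each replica in turn; fail over transparently.
    `replicas` is a list of callables standing in for RMI stubs."""
    last_error = None
    for replica in replicas:
        try:
            return replica(request)
        except ReplicaUnavailable as e:
            last_error = e          # this node is down; try the next one
    raise RuntimeError("all replicas failed") from last_error

# Demo: the first replica is dead, the second answers.
def dead(req):
    raise ReplicaUnavailable("node down")

def alive(req):
    return f"ok:{req}"

smart_call([dead, alive], "ping")   # -> "ok:ping"
```

Load balancing fits the same shape: rotate or weight the replica list before calling, and the fail-over path is unchanged.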

Page 10:

Key Utility Requirements

• Utility service spreads itself over multiple infrastructure service providers

– persistent state becomes decoupled from service (Oceanic)

» preserve security model

– contractual relationship between service and platform

– SDS, QoS, LB => negotiation, monitoring, adaptation

– effective incentive-compatible economic mechanisms

• Services composed from utility services of other providers

– negotiation arch. generalizes path formation

– fail-over across competing services, not homogeneous operations

» self-checking, transactional service API

– economic mechanisms permeate services

• Massive information flows

– via huge data stores and via vast sensor nets (Rivers)

– service-wide auto-scheduling of flows
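One concrete example of an "incentive-compatible economic mechanism" of the kind the slide gestures at is the second-price (Vickrey) auction, in which truthful bidding is a dominant strategy. The sketch below is a generic textbook illustration, not anything specified in the talk; the provider names are invented.

```python
def vickrey_winner(bids):
    """Second-price sealed-bid auction: the highest bidder wins but
    pays the second-highest bid, so no bidder gains by misreporting
    its true valuation. `bids` maps provider -> bid; needs >= 2 bidders."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1]            # winner pays the runner-up's bid
    return winner, price

# Demo: three infrastructure providers bid for a service placement.
vickrey_winner({"provider-A": 10, "provider-B": 7, "provider-C": 4})
# -> ("provider-A", 7)
```

The relevance to the slide: if providers bid truthfully because the mechanism makes that optimal, the negotiation/monitoring layer can trust the bids it sees instead of second-guessing them.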

Page 11:

The Small: radically simple OS for management and composition

• Communication is fundamental

– treated as part of the hardware, not “the system”

• Push the path concept clear into the device

– device fundamentally depends on infrastructure

– devices typically have well-connected proxies

• Focus on scheduling discrete chunks of data movement, not general thread scheduling and unlimited memory management

– there may be a bounded amount of work per chunk to transform or check data

– easy to get very predictable scheduling

[Figure: flow devices — device ↔ network pipelines built from sensor (S) and actuator (A) stages, and device ↔ network ↔ UI.]
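"Scheduling discrete chunks of data movement" with bounded work per chunk can be rendered as a toy (an illustrative sketch; the queue discipline, handler shape, and names are assumptions, not the proposed OS): each chunk carries a handler whose work is capped, so worst-case latency is simply the bound times the queue depth — which is what makes the scheduling so predictable.

```python
from collections import deque

def run_chunks(queue, max_ops_per_chunk):
    """Process queued (handler, data) chunks in FIFO order. Each handler
    reports how much work it did, and the scheduler enforces the
    bounded-work-per-chunk contract."""
    results = []
    while queue:
        handler, data = queue.popleft()
        out, ops = handler(data)
        # The bound is what makes per-chunk latency predictable.
        assert ops <= max_ops_per_chunk, "handler exceeded its work bound"
        results.append(out)
    return results

def checksum_chunk(data):
    # Example bounded transform: checksum a small fixed-size chunk,
    # reporting one unit of work per byte touched.
    return sum(data) & 0xFF, len(data)

q = deque([(checksum_chunk, bytes([1, 2, 3])),
           (checksum_chunk, bytes([250, 10]))])
run_chunks(q, max_ops_per_chunk=4)   # -> [6, 4]
```

A general thread, by contrast, offers no such per-step bound, which is why the slide proposes compiling threads down to sequences of bounded atomic transactions.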

Page 12:

Precursors to the next generation

• Operating systems that are not called “operating systems”

• e.g., the modern disk controller

– an event scheduler handling a stream of commands from a network link, controlling a complex array of sensors and actuators, performing sophisticated calculations to decide what and when (scheduling and caching), as well as transforming data on the fly

– automatic connection, enumeration, configuration

– but several simplifying assumptions must be removed

[Figure: disk controller — a complex array of sensors and actuators behind a network link (EIDE, SCSI, FCAL, SSA, USB, 1394, ???).]

Page 13:

OS as little more than FSM

• Commands are an event stream merged with sensor/actuator (or UI) events

• Discrete flows to/from network

• General thread must be compiled to sequence of bounded atomic transactions

– spaghetti part of an application is configuring the flows

– steady-state is straightforward event processing + signaling unusual events

• continuous self-checking and telemetry

– rely on the infrastructure for hard mgmt stuff

• push very simple flow apps into devices

• correct-by-construction techniques for cooperating FSMs as basis for automated configuration and mgmt
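"Commands are an event stream merged with sensor/actuator events" might look like this in miniature (a hypothetical sketch, not mote firmware — states, events, and the transition table are invented): one merged stream drives a finite-state machine whose every transition is a small, atomic, bounded step, and events with no matching transition are simply signaled rather than handled locally.

```python
# Minimal FSM "OS": command events and sensor events share one merged
# stream; each transition is a bounded atomic step.

TRANSITIONS = {
    # (state, event) -> next state
    ("idle",     "cmd_start"):   "sampling",
    ("sampling", "sensor_tick"): "sampling",
    ("sampling", "cmd_stop"):    "idle",
}

def run_fsm(events, state="idle"):
    """Feed a merged event stream through the FSM, returning the
    sequence of states visited (useful for telemetry/self-checking)."""
    trace = [state]
    for event in events:
        # Unmatched (state, event) pairs leave the state alone here;
        # a real device would signal them to the infrastructure, which
        # handles the hard management cases.
        state = TRANSITIONS.get((state, event), state)
        trace.append(state)
    return trace

run_fsm(["cmd_start", "sensor_tick", "cmd_stop"])
# -> ["idle", "sampling", "sampling", "idle"]
```

Because the transition table is finite data rather than arbitrary code, it is exactly the kind of object that correct-by-construction techniques for cooperating FSMs can check and compose automatically.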

Page 14:

UCB Testbed

• 1x300-proc + 10x20-proc SAN clusters across departments

• integrated through multiple Gigabit Ethernet links

• extended out through 100s of desktops, RF laptops, IrDA PDAs, cell phones, pagers, and numerous motes

[Figure: testbed diagram — cell phones, PDAs, and future devices connect wirelessly through desktop PCs and servers to clusters and a massive cluster on Gigabit Ethernet.]

Page 15:

Plan

• Year 1 (Base)

– Large: deploy Ninja service platform on cluster-of-clusters

– Small: prototype over PalmOS + WinCE + uClinux

• Year 2 (Options 1 & 4)

– Automated service composition architecture

– FSM-OS and negotiation/mgmt architecture

– Broad simulation environment

• Year 3

– Deploy widespread services, devices, and feeds

– Evaluate against high-speed decision making applications