The 21st Century Internet: A Planetary-Scale Grid
Powered by Intel Processors
Invited Talk in Intel’s Forum and Seminar Series
Hillsboro, OR
February 19, 2002
Larry Smarr
Department of Computer Science and Engineering, Jacobs School of Engineering, UCSD
Director, California Institute for Telecommunications and Information Technology
The 21st Century Internet: A Planetary-Scale Grid Powered by Intel Processors
After twenty years, the "S-curve" of building out the wired internet, with hundreds of millions of PCs as its end points, is flattening out. At the same time, several new "S-curves" are reaching their steep slope as ubiquitous computing begins to sweep the planet. As a result, there will be a vast expansion in heterogeneous end points to a new wireless internet, moving IP throughout the physical world. Billions of internet-connected cell phones, embedded processors, handheld devices, sensors, and actuators will lead to radical new applications. The resulting vast increase in data streams, augmented by the advent of mass-market broadband to homes and businesses, will drive the backbone of the internet to an optical lambda-switched network of tremendous capacity. Powering this global grid will be Intel processors, arranged in "lumps" of various sizes. At the high end will be very large, tightly coupled IA-64 clusters, exemplified by the new NSF TeraGrid. The next level will be optically connected IA-32 PC clusters, which I have termed OptIPuters. Forming the floor of the pyramid of power will be peer-to-peer computing and storage, which will increasingly turn the individual Intel PC "dark matter" of the Grid into a vast universal power source for this emergent planetary computer. More speculative are possible peer-to-peer wireless links among handheld and embedded processors, such as the Intel StrongARM in Pocket PCs. I will describe how the newly formed Cal-(IT)2 institute is organizing research in each of these areas. Large-scale "Laboratories for Living in the Future" are being designed, some of which provide opportunities for collaboration with Intel researchers.
Governor Davis Has Initiated Four New Institutes for Science and Innovation
UCSB, UCLA: California NanoSystems Institute
UCSF, UCB: California Institute for Bioengineering, Biotechnology, and Quantitative Biomedical Research
UCI, UCSD: California Institute for Telecommunications and Information Technology
UCSC, UCD, UCM: Center for Information Technology Research in the Interest of Society
www.ucop.edu/california-institutes
Cal-(IT)2 Has Over Sixty Industrial Sponsors From a Broad Range of Industries
Akamai Technologies Inc.
AMCC
Ampersand Ventures
Arch Venture Partners
The Boeing Company
Broadcom Corporation
Conexant Systems, Inc.
Connexion by Boeing
Cox Communications
DaimlerChrysler
Diamondhead Ventures
Dupont iTechnologies
Emulex Corporation
Enosys Markets
Enterprise Partners VC
Entropia, Inc.
Ericsson Wireless Comm.
ESRI
Extreme Networks
Global Photon Systems
Graviton
IBM
Interactive Vis. Systems
IdeaEdge Ventures
The Irvine Company
Intersil Corporation
Industry Sectors: Computers, Communications, Software, Sensors, Biomedical, Automotive, Startups, Venture Capital
Oracle
Orincon Industries
Panoram Technologies
Polexis
Printronix
QUALCOMM Incorporated
R.W. Johnson Pharma. R.I.
SAIC
Samueli, Henry (Broadcom)
SBC Communications
San Diego Telecom Council
SciFrame, Inc.
Seagate Storage Products
SGI
Silicon Wave
Sony
STMicroelectronics, Inc.
Sun Microsystems
TeraBurst Networks
Texas Instruments
Time Domain
Toyota
UCSD Healthcare
The Unwired Fund
Volkswagen
WebEx
Irvine Sensors Corporation
JMI, Inc.
Leap Wireless International
Link, William J. (Versant Ventures)
Litton Industries, Inc.
MedExpert International
Merck
Microsoft Corporation
Mindspeed Technologies
Mission Ventures
NCR
Newport Corporation
Nissan Motors
$140 M Match From Industry
Cal-(IT)2 -- An Integrated Approach to Research on the Future of the Internet
www.calit2.net
220 UCSD & UCI Faculty Working in Multidisciplinary Teams
With Students, Industry, and the Community
State Gives $100 M Capital for New Buildings and Labs
Experimental Chip Design with Industrial Partner Support
Source: Ian Galton, UCSD ECE, CWC
A Multiple-Crystal Interface Phase-Locked Loop (PLL) for a Bluetooth Transceiver, with Voltage-Controlled Oscillator (VCO) Realignment to Reduce Noise
Clean Room Will House Microanalysis and Nanofabrication Labs
Superconducting Flux Pinning by Magnetic Dots-Nickel Nanoarray on Niobium Thin Film
M. I. Montero, O. M. Stoll, I. K. Schuller, UCSD; M. Bachman, G.-P. Li, UCI
UCSD Used Electron Beam Lithography
To Create Ni Nanodots With a Spacing of ~500 nm
UCI Used Photolithography To Link Device to Macro World
Applications: Increases in Current Carrying Capability of Superconducting Tapes And Reduction of Noise in Ultra-Sensitive Magnetic Field Detectors
“Commensurability” Effects From the Matching of the Nanoarray and the Superconductor Vortex Lattice
Cal-(IT)2 “Living-in-the-Future” Laboratories
• Technology Driven: Ubiquitous Connectivity; SensorNets; Knowledge and Data Systems; LambdaGrid
• Application Driven: Ecological Observatory; AutoNet; National Repository for Biomedical Data
• Culturally Driven: Interactive Technology and Popular Culture
Simon Penny, UCI
Robert Nideffer, UCI
Antoinette LaFarge, UCI
Celia Pearce, UCI
Dan Frost, UCI
Larry Carter, UCSD
Geoff Voelker, UCSD
Mike Bailey, UCSD
Edo Stern, UCSD
Sheldon Brown, UCSD
Adriene Jenik, UCSD
Lev Manovich, UCSD
Amy Alexander, UCSD
Miller Puckette, UCSD
Peter Otto, UCSD
The Convergence of Computing, Media and New Art Forms Is Creating a New Cultural Landscape for the 21st Century
Cal-(IT)2 Is Bringing Together Interdisciplinary Researchers from UC San Diego and UC Irvine to Develop the Modalities, Methodologies, Vocabularies, and Technologies of This Emerging Landscape
Sheldon Brown, UCSD
The ingredients for cultural transformation
Computer Gaming As the Primary Media Realm for a New Generation's Development of Media/Social Literacy/Proficiency
Computer Gaming is a Major Focus of the New Media Arts Layer
• Networked Computing Environment
• Computing As Vehicle for Media Delivery
• Computing As Social Space
• Ubiquity of High Resolution Graphics and Audio
• Gaming As the Domain Where All of These Elements Are Brought Together
Sheldon Brown, UCSD
PS2 vs. PC
PS2: Dynamic Data Stream, Static Instruction Set; an architecture optimized for real-time processing of multimedia data
PC: Static Data Stream, Dynamic Instruction Set; an architecture optimized for document processing
PC Architecture Development Provoked by Computer Gaming
Sheldon Brown, UCSD
• Wireless Access, Anywhere, Anytime: Broadband Speeds; "Always Best Connected"
• Billions of New Wireless Internet End Points: Information Appliances; Sensors and Actuators; Embedded Processors
• Emergence of a Distributed Planetary Grid: Broadband Becomes a Mass Market; Internet Develops a Parallel Lambda Backbone; Scalable Distributed Computing Power; Storage of Data Everywhere
The Next S-Curves of Internet Growth: A Mobile Internet Powered by a Planetary Grid
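The "S-curve" metaphor used throughout this talk is the logistic growth curve. A small sketch of that curve, with purely illustrative parameters (the capacity, midpoint, and rate below are my own examples, not figures from the talk):

```python
import math

def logistic(t, capacity, midpoint, rate):
    """Logistic (S-curve) adoption model: slow start, steep middle, flat saturation."""
    return capacity / (1.0 + math.exp(-rate * (t - midpoint)))

# Illustrative only: a market saturating near 1,000M subscribers,
# with the steepest growth around year 10.
for year in (0, 5, 10, 15, 20):
    print(year, round(logistic(year, capacity=1000, midpoint=10, rate=0.8), 1))
```

At the midpoint the curve is exactly half saturated and growing fastest; well past it, growth flattens out, which is the sense in which the wired-internet S-curve is "flattening."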
A Planetary Scale Grid Powered by Intel Processors
• High-Perf. PC Cluster: 1000s of processors per lump; 4 national-scale lumps; Intel IA-64; 10-100 Gbps WAN connection; example: TeraGrid
• OptIPuter PC Cluster: 10s-100s of processors per lump; 1000s of national-scale lumps; Intel IA-32; 1 Gbps WAN connection; example: Dedicated Cluster
• PC: 1 processor per lump; millions of national-scale lumps; Intel IA-32; 1-100 Mbps WAN connection; example: Entropia
• Embedded Processors (Pocket PCs, Cell Phones): 1 processor per lump; hundreds of millions of lumps; Intel StrongARM; 100 Kbps-10 Mbps WAN connection; example: AutoNet
The Grid is "Lumpy"
Sources: Smarr Talks, January 1998 and 2000
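One way to read the "lumpy" grid levels is to multiply processors per lump by the number of lumps. A toy capacity model; the processor counts echo the levels above, but the GFLOPS-per-processor figures are my own rough assumptions, not numbers from the talk:

```python
# Rough capacity model of the four "lump" tiers.
# GFLOPS-per-processor values are illustrative guesses only.
tiers = [
    # (tier, processors per lump, number of lumps, assumed GFLOPS per processor)
    ("IA-64 cluster (TeraGrid)",   2000,           4, 3.0),
    ("OptIPuter IA-32 cluster",     100,        1000, 1.0),
    ("Individual PC (Entropia)",      1,   5_000_000, 1.0),
    ("Embedded StrongARM device",     1, 200_000_000, 0.1),
]

aggregate = {name: n * lumps * gf / 1000 for name, n, lumps, gf in tiers}
for name, tflops in aggregate.items():
    print(f"{name}: ~{tflops:,.0f} TFLOPS aggregate")
```

Under these assumptions the "floor of the pyramid" (PCs and embedded devices) dwarfs the high-end clusters in aggregate capacity, which is the point of the pyramid picture.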
The NSF TeraGrid: Lambda Connected Linux PC Clusters
NCSA: 8 TF, 4 TB Memory, 240 TB Disk
Caltech: 0.5 TF, 0.4 TB Memory, 86 TB Disk
Argonne: 1 TF, 0.25 TB Memory, 25 TB Disk
SDSC: 4.1 TF, 2 TB Memory, 250 TB Disk
TeraGrid Backbone (40 Gbps)
This will Become the National Backbone to Support Multiple Large Scale Science and Engineering Projects
Data, Compute, Visualization, Applications
Note: Weakly Optically Coupled Compared to Cluster I/O
Intel, IBM, Qwest
www.intel.com/eBusiness/casestudies/snapshots/ncsa.htm
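Summing the per-site figures on the TeraGrid slide gives the aggregate capacity of the four-site grid:

```python
# Site capacities as listed on the TeraGrid slide: (TFLOPS, TB memory, TB disk).
sites = {
    "NCSA":    (8.0, 4.0, 240),
    "Caltech": (0.5, 0.4, 86),
    "Argonne": (1.0, 0.25, 25),
    "SDSC":    (4.1, 2.0, 250),
}

total_tf  = sum(tf   for tf, _, _ in sites.values())
total_mem = sum(mem  for _, mem, _ in sites.values())
total_dsk = sum(disk for _, _, disk in sites.values())
print(f"TeraGrid total: {total_tf:.1f} TF, {total_mem:.2f} TB memory, {total_dsk} TB disk")
```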
Large Data Challenges in Medicine and Earth Sciences
• Challenges: Each Data Object is 3D and Gigabytes in Size; Data are Generated and Stored in Distributed Archives; Research is Carried Out on a Federated Repository
• Requirements: Computing (PC Clusters); Communications (Dedicated Lambdas); Data (Large-Object WAN Database Operations); Visualization (Collaborative Volume Algorithms)
• Response: OptIPuter Research Project; UCSD, UCI, USC, UIC, NW Large ITR Proposal; Potential Industrial Partners: IBM, HP, Intel, Microsoft, Nortel, Ciena, Velocita, SBC
NIH is Funding a National-Scale Grid Which is an OptIPuter Application Driver
National Partnership for Advanced Computational Infrastructure
Part of the UCSD CRBS Center for Research on Biological Structure
Biomedical Informatics Research Network (BIRN)
NIH Plans to Expand to Other Organs and Many Laboratories
Star Light International Wavelength Switching Hub
Seattle
Portland
Caltech
SDSC
NYC
SURFnet, CERN
CANARIE
Asia-Pacific
Asia-Pacific
AMPATH
TeraGrid
*ANL, UIC, NU, UC, IIT, MREN
AMPATH
Source: Tom DeFanti, Maxine Brown
UIC-StarLight Metro OptIPuter
Int'l GE, 10GE; Nat'l GE, 10GE; Metro GE, 10GE
16x1 GE, 16x10 GE
16-Processor McKinley at UIC
16-Processor Montecito/Shavano at StarLight
10x1 GE + 1x10 GE
Nationals: Illinois, California, Wisconsin, Indiana, Washington…
Internationals: Canada, Holland, CERN, Tokyo…
Metro Lambda Grid Optical Data Analysis “Living Laboratory”
• High-Resolution Visualization Facilities: Data Analysis; Crisis Management
• Distributed Collaboration: Optically Linked; Integrated Access Grid
• Data and Compute: PC Clusters; AI Data Mining
• Driven by Data-Intensive Applications: Civil Infrastructure; Environmental Systems; Medical Facilities
SDSC, SIO, UCSD
Linking Control Rooms
Cox, Panoram, SAIC, SBC, SGI, IBM, TeraBurst Networks
UCSD Healthcare, SD Telecom Council
Some Research Topics in Metro OptIPuters
• Enhance Security Mechanisms:
– End-to-End Integrity Checks of Data Streams
– Access to Multiple Locations With Trusted Authentication Mechanisms
– Use Grid Middleware for Authentication, Authorization, Validation, Encryption, and Forensic Analysis Across Multiple Systems and Administrative Domains
• Distribute Storage While Optimizing Storewidth:
– Distribute Massive Pools of Physical RAM (Network Memory)
– Develop Visual TeraMining Techniques to Mine Petabytes of Data
– Enable Ultrafast Image Rendering
– Create Optical Storage Area Networks (OSANs): Analysis and Modeling Tools; OSAN Control and Data Management Protocols; Buffering Strategies and Memory Hierarchies for WDM Networks
UCSD, UCI, USC, UIC, & NW
A Planetary Scale Grid Powered by Intel Processors
• High-Perf. PC Cluster: 1000s of processors per lump; 4 national-scale lumps; Intel IA-64; 10-100 Gbps WAN connection; example: TeraGrid
• OptIPuter PC Cluster: 10s-100s of processors per lump; 1000s of national-scale lumps; Intel IA-32; 1 Gbps WAN connection; example: Dedicated Cluster
• PC: 1 processor per lump; millions of national-scale lumps; Intel IA-32; 1-100 Mbps WAN connection; example: Entropia
• Embedded Processors (Pocket PCs, Cell Phones): 1 processor per lump; hundreds of millions of lumps; Intel StrongARM; 100 Kbps-10 Mbps WAN connection; example: AutoNet
The Grid is "Lumpy"
Sources: Smarr Talks, 1997-1999
Cal-(IT)2's Latest Dedicated Linux Intel IA-32 Cluster
• World's Most Powerful Dedicated Oceanographic Computer: 512 Intel Processors; Dedicated December 2001; Simulates Global Climate Change
• IBM: Cal-(IT)2 Industrial Partner
• NSF and NRO Federal Funds
• Scripps Institution of Oceanography: Center for Observations, Modeling and Prediction; Director Detlef Stammer
A Planetary Scale Grid Powered by Intel Processors
• High-Perf. PC Cluster: 1000s of processors per lump; 4 national-scale lumps; Intel IA-64; 10-100 Gbps WAN connection; example: TeraGrid
• OptIPuter PC Cluster: 10s-100s of processors per lump; 1000s of national-scale lumps; Intel IA-32; 1 Gbps WAN connection; example: Dedicated Cluster
• PC: 1 processor per lump; millions of national-scale lumps; Intel IA-32; 1-100 Mbps WAN connection; example: Entropia
• Embedded Processors (Pocket PCs, Cell Phones): 1 processor per lump; hundreds of millions of lumps; Intel StrongARM; 100 Kbps-10 Mbps WAN connection; example: AutoNet
The Grid is "Lumpy"
Source: Smarr Talk 1997
Early Peer-to-Peer NT/Intel Systems: NCSA Mosaic (1994), NCSA Symbio (1997), Microsoft (1998)
Entropia's Planetary Computer Grew to a Teraflop in Only Two Years
• Deployed in Over 80 Countries
• The Great Internet Mersenne Prime (2^p - 1) Search (GIMPS) Found the First Million-Digit Prime
• Eight 1000-Processor IBM Blue Horizons
www.entropia.com
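GIMPS clients spend their cycles running the Lucas-Lehmer test on Mersenne numbers 2^p - 1. A minimal version of that test (the real client uses FFT-based multiplication to handle million-digit candidates, but the recurrence is the same):

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: for an odd prime p, M_p = 2**p - 1 is prime
    iff s_(p-2) == 0, where s_0 = 4 and s_k = s_(k-1)**2 - 2 (mod M_p)."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# 2**7 - 1 = 127 is prime; 2**11 - 1 = 2047 = 23 * 89 is not.
print(lucas_lehmer(7), lucas_lehmer(11))   # True False
```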
Peer-to-Peer Computing and Storage Is a Transformational Technology
"The emergence of Peer-to-Peer computing signifies a revolution in connectivity that will be as profound to the Internet of the future as Mosaic was to the Web of the past."
-- Patrick Gelsinger, VP and CTO, Intel Corp., March 2001
Bio-Pharma is the P2P Killer Application
Enterprise P2P, PC Clusters, and Internet Computing
Forbes 11.27.00
Evolution of Peer-to-Peer Distributed Computing
• Three Successive Technology Phases, with Different Application Integration Models:
I. Source Code Integration
II. Binary Code Integration
III. Binary Code with Open Scheduling System (No Integration)
• These Models Enable Increasing Numbers of Applications
Entropia DCGrid™ 5.0
Entropia DCGrid™ Enterprise System Architecture and Elements
• Job Management: Manage Applications and Ensembles of Subjobs; Application Management
• Scheduling: Match Subjobs to Appropriate Resources and Execute; User Account Management
• Resource Management: Manage and Condition Underlying Desktop and Network Resources
Components: DCGrid™ Manager 5.0 (Job Management of Jobs and Subjobs); DCGrid™ Scheduler 5.0 (Scheduling and Resource Scheduling); Resource Management layer
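In the spirit of the scheduling element described above (matching subjobs to appropriate desktop resources), here is a toy greedy matcher. This is a hypothetical sketch with invented names, not Entropia's actual scheduler:

```python
from dataclasses import dataclass

@dataclass
class Subjob:
    job_id: str
    mem_mb: int          # memory the subjob needs

@dataclass
class Client:
    name: str
    free_mem_mb: int     # free memory on the desktop
    idle: bool

def schedule(subjobs, clients):
    """Greedy matcher: give each subjob to the first idle client
    with enough free memory; a client becomes busy once assigned."""
    assignments = {}
    for sj in subjobs:
        for c in clients:
            if c.idle and c.free_mem_mb >= sj.mem_mb:
                assignments[sj.job_id] = c.name
                c.idle = False
                break
    return assignments

clients = [Client("desk-01", 256, True), Client("desk-02", 512, True)]
result = schedule([Subjob("dock-1", 400), Subjob("dock-2", 128)], clients)
print(result)   # {'dock-1': 'desk-02', 'dock-2': 'desk-01'}
```

A production scheduler would also handle client disappearance and re-dispatch, since desktop "dark matter" machines come and go unpredictably.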
[Charts: DCGrid Performance Scales Linearly. Throughput vs. number of clients for four applications: HMMER (sequences per hour, with Entropia scaling linearly while single-CPU SGI and Sun machines stay flat), GOLD and AUTODOCK (packets and compounds per hour), and DOCK (compounds per hour, scaling to roughly 500 clients).]
Distributed Computing
Adding Brilliance to Mobile Clients with Internet Computing
• Napster Meets SETI@home: Distributed Computing and Storage
• Assume Ten Million PCs in Five Years: Average Speed Ten Gigaflops; Average Free Storage 100 GB
• Planetary Computer Capacity: 100,000 TeraFLOPS Speed; 1 Million Terabytes of Storage
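The planetary-computer capacity figures follow directly from the stated assumptions:

```python
pcs = 10_000_000        # ten million PCs in five years
gflops_per_pc = 10      # average speed: ten gigaflops
free_gb_per_pc = 100    # average free storage: 100 GB

total_tflops = pcs * gflops_per_pc // 1000    # GFLOPS -> TFLOPS
total_tbytes = pcs * free_gb_per_pc // 1000   # GB -> TB
print(f"{total_tflops:,} TFLOPS and {total_tbytes:,} TB of storage")
# 100,000 TFLOPS and 1,000,000 TB of storage
```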
A Mobile Internet Powered by a Planetary Scale Computer
A Planetary Scale Grid Powered by Intel Processors
• High-Perf. PC Cluster: 1000s of processors per lump; 4 national-scale lumps; Intel IA-64; 10-100 Gbps WAN connection; example: TeraGrid
• OptIPuter PC Cluster: 10s-100s of processors per lump; 1000s of national-scale lumps; Intel IA-32; 1 Gbps WAN connection; example: Dedicated Cluster
• PC: 1 processor per lump; millions of national-scale lumps; Intel IA-32; 1-100 Mbps WAN connection; example: Entropia
• Embedded Processors (Pocket PCs, Cell Phones): 1 processor per lump; hundreds of millions of lumps; Intel StrongARM; 100 Kbps-10 Mbps WAN connection; example: AutoNet
The Grid is "Lumpy"
We Are About to Transition to a Mobile Internet
[Chart: Fixed vs. Mobile Internet subscribers (millions), 1999-2005, on a scale of 0 to 2,000.]
Third Generation Cellular Systems Will Add Internet, QoS, and High Speeds
Source: Ericsson
Two Dozen ECE and CSE Faculty
• Low-Powered Circuitry: RF; Mixed A/D; ASIC; Materials
• Antennas and Propagation: Smart Antennas; Adaptive Arrays
• Communication Theory: Modulation; Channel Coding; Multiple Access
• Communication Networks: Architecture; Media Access; Scheduling; End-to-End QoS; Hand-Off
• Multimedia Applications: Compression; Changing Environment; Protocols; Multi-Resolution
Center for Wireless Communications
Source: UCSD CWC
Future Wireless Technologies Are a Strong Academic Research Discipline
Operating System Services for Power / Performance Management
• Management of Power and Performance: An Efficient Way to Exchange Energy/Power-Related Information Among Hardware, OS, and Applications; a Power-Aware API
Layered stack: Application; Power-Aware API; Power-Aware Middleware; POSIX / PA-OSL; Operating System with Modified OS Services; Hardware Abstraction Layer (PA-HAL); Hardware
Rajesh Gupta UCI, Cal-(IT)2
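A power-aware API in such a layered scheme might let applications pass hints down the stack so the OS can pick a policy. Everything below is a hypothetical sketch with invented names and policies, not the actual Cal-(IT)2 interface:

```python
class PowerAwareAPI:
    """Hypothetical sketch of a power-aware API layer: applications declare
    latency sensitivity, the layer combines that with battery state and
    chooses a CPU policy. Names and thresholds are illustrative only."""

    LOW_BATTERY_PCT = 20

    def __init__(self):
        self.policy = "balanced"

    def request(self, latency_sensitive: bool, battery_pct: int) -> str:
        if battery_pct < self.LOW_BATTERY_PCT:
            self.policy = "powersave"      # battery survival overrides speed
        elif latency_sensitive:
            self.policy = "performance"
        else:
            self.policy = "balanced"
        return self.policy

api = PowerAwareAPI()
print(api.request(latency_sensitive=True,  battery_pct=80))   # performance
print(api.request(latency_sensitive=True,  battery_pct=10))   # powersave
print(api.request(latency_sensitive=False, battery_pct=80))   # balanced
```

The design point, as on the slide, is that hardware, OS, and application each contribute information no single layer has on its own.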
Using Students to Invent the Future of Widespread Use of Wireless PDAs
• Makes Campus "Transparent": See Into Departments, Labs, and Libraries
• Year-Long "Living Laboratory" Experiment, 2001-02: 500+ Wireless-Enabled HP PocketPC PDAs; Wireless Cards from Symbol, Chips from Intersil; Incoming Freshmen in Computer Science and Engineering
• Software Developed: ActiveClass (Student-Teacher Interactions); ActiveCampus (Geolocation and Resource Discovery); Extensible Software Infrastructure for Others to Build On
• Deploy to New UCSD Undergrad College, Fall 2002: Sixth College Will Be "Born Wireless"; Theme: Culture, Art, and Technology; Study Adoption and Discover New Services
Cal-(IT)2 Team: Bill Griswold, Gabriele Wienhausen
ActiveCampus Explorer: PDA Interface
Source: Bill Griswold, UCSD CSE
The Cal-(IT)2 Grid Model for Wireless Services Middleware
Applications sit on a Wireless Services Interface providing Real-Time Services, Mobile Code, Location Awareness, Power Control, Security, and Data Management, running over the UCI and UCSD Wireless Infrastructures.
J. Pasquale, UCSD
Wireless Internet Puts the Global Grid in Your Hand
802.11b Wireless
Interactive Access to: State of Computer; Job Status; Application Codes
Cellular Internet Is Already Here at Experimental Sites
• UCSD Has Been the First Beta Test Site for Qualcomm's 1xEV Cellular Internet
• Optimized for Packet Data Services: Uses a 1.25 MHz Channel; 2.4 Mbps Peak Forward Rate; Part of the CDMA2000 Technology Family; Can Be Used Stand-Alone
• Chipsets in Development Support: PacketVideo's PVPlayer™ MPEG-4; gpsOne™ Global Positioning System; Bluetooth; MP3; MIDI; BREW
Rooftop HDR Access Point
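The quoted channel width and peak rate imply a peak spectral efficiency:

```python
channel_mhz = 1.25   # 1xEV channel bandwidth (MHz)
peak_mbps = 2.4      # peak forward-link data rate (Mbps)

efficiency = peak_mbps / channel_mhz   # bits per second per hertz
print(f"Peak spectral efficiency: {efficiency:.2f} bit/s/Hz")
# Peak spectral efficiency: 1.92 bit/s/Hz
```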
Automobiles Will Become SensorNet Platforms
• AutoNet Concept: Make Cars Mobile, Ad Hoc, Wireless, Peer-to-Peer Platforms; Distributed Sensing, Computation, and Control; Autonomous Distributed Traffic Control; Mobile Autonomous Software Agents; Decentralized Databases
Urban Mobility: Congestion-Free Flow; Clean Limited-Range Mobility; Rigid Line-Haul Performance
Will Recker, UCI and Mohan Trivedi, UCSD, Cal-(IT)2
• ZEVNET Partners: UCI Institute for Transportation Studies Testbed; UCSD Computer Vision and Robotics Research Lab (CVRRL)
[Diagram: the REACT! and TRACER applications reach an internet website through a CDPD wireless modem, a service provider, and an ISP. The REACT! on-line survey consists of an initial interview, pre-travel planning, post-travel updating, and activity-diary tracing records.]
Source: Will Recker, UCI, Cal-(IT)2
ZEVNet: Current Implementation
Extensible Data Collection Unit with GPS and multiple sensors
Currently 50 Toyotas
Embedded and Networked Intelligence
• On-Campus Navigation Enabled: Web Service and Seamless WLAN Connectivity; 50 Compaq Pocket PCs
• Virtual Device / Instrument Control Over Bluetooth Links
• Energy-Aware Application Programming
• Battery-Aware Communication Links
Source: Rajesh Gupta, UCI, Cal-(IT)2