Dr. Vijay Raghavan
Defense Advanced Research Projects Agency
Information Exploitation Office
Network Embedded Systems Technology (NEST)
December 17, 2003
Extreme Scaling Program Plan
Topics
• Extreme Scaling Overview
• Workshop Action Items
• Project Plan
Extreme Scaling Overview
• [Insert background/overview information here…]
Workshop Action Items
1. Concept of Operations
2. Experiment Site Selection
3. System Design and Tier II Architecture
4. Xtreme Scaling Mote (XSM) Design
5. Xtreme Scaling Mote Sensor Board Design
6. Super Node Design
7. Application Level Fusion
8. Remote Programming
9. Localization
10. Power Management
11. Group Formation
12. Simulation
13. Support Infrastructure
1. Concept of Operations: Primary Effort
Surveillance of Long, Linear, Static Structures (Pipeline)
Problem: Too vast an area for limited personnel resources (mobile guards)
Hostile actions:
• Destruction (explosives)
• Damage to pumps and transformers
• Stripping of copper power lines (for pumps)
Operational Need: Reliable automated surveillance to detect movement in the security zone
FY04 Experiment: Sense movement of personnel and/or vehicles toward the pipeline; track the movement and the stop/start of movement
[Photo: damage in Iraq]
1. Concept of Operations (cont.): Primary Effort
[Diagram: a 1 km x 20 km security zone along the pipeline, with a pump station; detection and tracking of personnel, detection and tracking of vehicles, and detection of unknowns alert the guard force (mobile patrol and reaction force)]
1. Concept of Operations (cont.): Related Efforts
Similar Long, Linear, Static Structures
Surveillance of Supply Routes: Detect potential ambush sites:
• Personnel w/ shoulder-fired weapons
• Improvised Explosive Devices (IEDs)
FY04 Experiment: Sense movement of personnel/vehicles toward the supply route and then:
• They remain near a point
• They remain for a while and then leave
• Sense suspicious movement on the road
[Diagram: an IED beside the supply route, with a wire to an enemy observation point (OP)]
More Related Efforts: Border patrol; surveillance around an air base/ammo point
2. Experiment Site Selection: Characteristics
• Relatively flat, open area
– Easier to survey/mark off a 1 km x 20 km site
– Easier to deploy/recover sensors
– Easier for observers to see a large section of the experiment site
• No forests
• No large physical obstructions (e.g., buildings) to line-of-sight communications
– Small obstructions (e.g., small rocks) okay
• Relatively good weather (little rain, light winds, etc.)
– Sensors can stay out for days
• Military base
– Site can be guarded
– Sensors deployed on day 1 and remain in place until the end of the experiment (days later)
– Potential for personnel to support deployment/recovery of sensors
2. Experiment Site Selection (cont.): Primary Candidate Site
Naval Air Weapons Facility, China Lake
• Being used for the DARPA SensIT effort (Feb 2004)
• 150 miles NE of Los Angeles
• Encompasses 1.1 million acres of land in California's upper Mojave Desert, ranging in altitude from 2,100 to 8,900 feet
• Varies from flat dry lake beds to rugged piñon-pine-covered mountains
• Weather should be consistent; summer will be hot
2. Experiment Site Selection (cont.): Other Candidate Sites
• Fort Bliss, TX (near El Paso, TX)
• Nellis AFB (near Las Vegas, NV)
• NAS Fallon (near Reno, NV)
• Marine Corps Air Ground Combat Center, 29 Palms, CA
• Eglin AFB (near Pensacola, FL)
3. System Design and Tier II Architecture
[Diagram: Tier II software stack: routing; matched filter; group management (for multilateration); localization; clustering; time sync; power management]
3. System Design and Tier II Architecture (cont.)
• Localization, time sync, and multihop reprogramming design/testing to be joint for Tiers I & II
– e.g., localization design to include how to associate XSMs with super nodes, how to maintain clusters in a stable way, etc.
• Reliable communication needed for exfil unicasts and for localization/multihop reprogramming broadcasts
– because hops are many, comms are almost always off, and the latency requirement is tight
• Testing 802.11 indoors is problematic for some environments
4. Xtreme Scaling Mote Design: Design Concept #1
[Diagram: a ball similar to pet toys, the size of a grapefruit; threaded halves enclose the batteries, antenna, PIR sensor, and microphone]
4. Xtreme Scaling Mote Design (cont.): Design Concept #2
[Diagram: a stake in the ground with the sensor on top; antenna, batteries, PIR sensor, and microphone]
4. Xtreme Scaling Mote Design (cont.): Design Concept #3
[Diagram: a soda can with the sensor on top; microphone, PIR sensor, and antenna]
4. Xtreme Scaling Mote Design (cont.): Proposed Changes
• Keep daylight and temperature sensors
• New mag circuit: 2-axis, with amplifier, filter, and set/reset circuit using the HMR1052 or HMR1022
• Anti-alias filter on the microphone; no tone detector
• 1 PIR sensor with as big a FOV as possible, from either PerkinElmer or Kube
• No ADXL202, but the pads will be there so a few could be populated
• Loud buzzer
– Needs research on size and voltage requirements
4. Xtreme Scaling Mote Design (cont.): Known Issues
• Loud (> 90 dB) sounders
– Voltage requirements: 9-12 V
– Size: 1" x 1"
– What frequency: 2 or 4 kHz?
– Tone detection?
• PIR field of view
– Daylight detection circuit
• Standardize battery selection
– Will improve battery voltage accuracy
• Watchdog timer / remote programming
– Needs significant testing
– Preload the mote with stable TinyOS + watchdog + XNP
4. Xtreme Scaling Mote Design (cont.): Proposed Phase 1
• Build 20-30 new sensor boards and distribute them to the group for use with the existing MICA2
– Late January
• In parallel, review the package design
5. Xtreme Scaling Mote Sensor Board Design
• Candidate sensor suite at Tier I:
– Primary: magnetometer, PIR, acoustic, buzzer, LEDs
– Secondary: temperature, seismic, humidity, barometer, infrared LED
• Issues
– What analog electronics to include to reduce sampling rates (e.g., tunable LPF, integrator)
– A-to-D lines
– Packaging to address wind noise/eddies and actuator visibility
– Early API design for sensors and their TinyOS (driver) support
– Early testing of sensor interference issues
– Sensors at Tier II: GPS and ?
6. Super Node Design
• Candidates (Crossbow is doing a fine-grain comparison)
– Stargate
– iPAQ
– Intrinsyc CerfCube μPDA
– Applied Data Bitsy, BitsyX
– InHand Fingertip3
– Medusa MK-2
• Evaluation criteria
– 802.11 wireless range (need several hundred meters)
– Networking with motes
– Development environment
– Programming methodology support, simulation tool support
– Availability of network/middleware services
– Platform familiarity within NEST
• Issues
– PDA wakeup times longer?
7. Application Level Fusion
• Features to include in application data from Tier I to Tier II:
– Energy content
– Signal duration
– Signal amplitude and signal max/min ratio
– Angular velocity
– Angle of arrival
• Issues
– Tradeoffs:
• Tier I: XSM density of active nodes
• Tier II: detection accuracy (to minimize communication power requirement)
• Tier III: detection latency
– Early validation of environment noise and intruder models
– Early validation of the influence field statistic w/ acoustic sensors and PIR
– Might need CFAR in space and time
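As an illustration only, the scalar features listed above (energy content, signal duration, amplitude, max/min ratio) might be computed per detection window roughly as follows. The function name, threshold, and sample interval are hypothetical, not from the program plan; angular velocity and angle of arrival are omitted since they need multi-sensor data.

```python
def signal_features(samples, threshold=0.1, dt=1.0):
    """Sketch of per-window feature extraction for Tier I -> Tier II
    reporting. Assumes at least one nonzero sample; threshold and
    sample interval dt are illustrative values."""
    energy = sum(s * s for s in samples) * dt                  # energy content
    duration = sum(dt for s in samples if abs(s) > threshold)  # time above threshold
    amplitude = max(abs(s) for s in samples)                   # peak amplitude
    floor = min(abs(s) for s in samples if s != 0)             # smallest nonzero magnitude
    return {"energy": energy, "duration": duration,
            "amplitude": amplitude, "max_min_ratio": amplitude / floor}
```

Reporting a handful of such scalars instead of raw samples is what keeps the Tier I to Tier II communication cost low.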
8. Remote Programming
• The many levels of reprogramming, in order of increasing cost and decreasing frequency:
• Re-configuration
– Highly parameterized modules were a big win for the midterm demo
• Scripting
– Good for rapid prototyping of top-level algorithmic approaches
• Page-level diffs for small changes to binary code
– Pages are the unit of loss, recovery, and change; acks for reliability
– Many possible design choices for the repair / loss recovery protocol
– Space-efficient diffs will require some thought and compiler support
• Loading a whole image of binary code
– Optimizations: pipelining is a big win; but beware: many optimizations that sound good don't help as much in practice as you might think (see, e.g., Deluge measurements)
• Claim: All levels should use epidemic, anti-entropy mechanisms
– Good for extreme reliability: deals with partitions, new nodes, version sync
– Good for extreme scalability: avoids the need for global state
• Tradeoff: flexibility of reprogramming vs. reliability of reprogramming
– Want a minimal fail-safe bootloader that speaks the reprogramming protocol
– Good for reliability: if you're really sick, blow away everything but the bootloader
– Discussion topic: How much do we hard-code in the bootloader?
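To make the page-level idea concrete, here is a minimal sketch (not the program's actual protocol) of computing which fixed-size pages differ between two binary images; only those pages would need to be disseminated to nodes already running the old image. The 256-byte page size is an assumption.

```python
PAGE_SIZE = 256  # assumed flash page size; real platforms vary

def changed_pages(old: bytes, new: bytes):
    """Return (index, contents) for each page of `new` that differs
    from `old`. Pages are the unit of loss, recovery, and change, so
    these are the only pages a node running `old` must fetch."""
    diffs = []
    for i in range(0, len(new), PAGE_SIZE):
        page = new[i:i + PAGE_SIZE]
        if old[i:i + PAGE_SIZE] != page:
            diffs.append((i // PAGE_SIZE, page))
    return diffs
```

An anti-entropy protocol would then gossip (version, page-index) summaries so that nodes can pull exactly the pages they are missing.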
9. Localization
Distance Estimation with the Custom Sounder
[Photo: motes with the UIUC customized sounder]
• Distance estimates based on Time Difference of Arrival
• Sound and radio signals used
• Median value of repeated measurements used to eliminate random errors
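The ranging scheme above, where the radio signal marks the emission time, the sound arrival gives the delay, and the median of repeated trials rejects outliers, can be sketched as follows. The speed-of-sound constant is an assumed nominal value, not a figure from the slides.

```python
import statistics

SPEED_OF_SOUND = 343.0  # m/s, nominal value near 20 degrees C (an assumption)

def tdoa_distance(arrival_deltas):
    """Distance estimate from repeated sound-minus-radio arrival-time
    differences (in seconds). The radio signal is treated as arriving
    instantaneously; the median suppresses occasional echo outliers."""
    return SPEED_OF_SOUND * statistics.median(arrival_deltas)
```

Note how a single wildly wrong measurement (e.g., an echo) leaves the median, and hence the distance estimate, unaffected.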
9. Localization (cont.): Experimental Validation
[Photos: demo sensor network deployment; Ft. Benning localization experiments]
• Localization based on trilateration
• Use more than three anchors for correction of systematic errors
• Pick the largest consistent cluster of different anchors' distance intersections
• Minimize the sum of least squares; gradient descent search used
[Figure: Fort Benning localization results, actual location versus estimated location]
Results
• Error correction is effective
• Mean location errors of 30 cm (median error lower)
• Computations can be done entirely on the motes
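The least-squares gradient-descent step the slide describes might look like the following 2-D, pure-Python sketch. The learning rate, iteration count, and centroid starting point are illustrative choices, not the deployed parameters.

```python
def multilaterate(anchors, distances, steps=3000, lr=0.01):
    """Gradient descent on the sum of squared range residuals
    sum_i (|p - a_i| - d_i)^2 over anchor positions a_i and measured
    distances d_i. Lightweight enough, in spirit, to run on-node."""
    x = sum(a[0] for a in anchors) / len(anchors)  # start at the centroid
    y = sum(a[1] for a in anchors) / len(anchors)
    for _ in range(steps):
        gx = gy = 0.0
        for (ax, ay), d in zip(anchors, distances):
            r = ((x - ax) ** 2 + (y - ay) ** 2) ** 0.5
            if r == 0.0:
                continue                  # gradient undefined at an anchor
            e = r - d                     # range residual
            gx += 2.0 * e * (x - ax) / r  # d/dx of e^2
            gy += 2.0 * e * (y - ay) / r  # d/dy of e^2
        x -= lr * gx
        y -= lr * gy
    return x, y
```

Using more than three anchors overdetermines the fit, which is what lets the residual minimization average out systematic ranging errors.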
9. Localization (cont.): Plans for Extreme Scaling
Problem
• Localize nodes in a 100 m x 100 m area
• UIUC customized motes reliably measure only up to 20 m using acoustic ranging
• Proposed solution: multi-hop ranging
Algorithm
• Measure the distances between nodes
• Find a relative coordinate system for each node covering the nodes within acoustic range
• Find transformations between coordinate systems
• Find the distance to an anchor node, or find the position in the anchor's coordinate system
Simulation Results
• Error accumulates slowly with more transformations (hop count = number of transformations)
• 100 nodes in a 100 m x 100 m area
• Acoustic signal range: 18 m
• Zero-mean normal error with 3σ = 2 m
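The "find transformations between coordinate systems" step amounts to fitting a rotation and translation between two neighboring local frames using the nodes they share. A minimal 2-D sketch (a Procrustes-style fit; the function names are ours, not the program's):

```python
import math

def rigid_transform_2d(src, dst):
    """Fit rotation theta and translation (tx, ty) mapping shared-node
    coordinates `src` in one local frame onto `dst` in a neighboring
    frame. Assumes both frames have the same handedness."""
    n = len(src)
    sx = sum(p[0] for p in src) / n; sy = sum(p[1] for p in src) / n
    dx = sum(p[0] for p in dst) / n; dy = sum(p[1] for p in dst) / n
    a = b = 0.0  # cross-covariance of the centered point sets
    for (x1, y1), (x2, y2) in zip(src, dst):
        x1 -= sx; y1 -= sy; x2 -= dx; y2 -= dy
        a += x1 * x2 + y1 * y2  # cos component
        b += x1 * y2 - y1 * x2  # sin component
    theta = math.atan2(b, a)
    tx = dx - (sx * math.cos(theta) - sy * math.sin(theta))
    ty = dy - (sx * math.sin(theta) + sy * math.cos(theta))
    return theta, tx, ty

def apply_transform(p, theta, tx, ty):
    """Map a point through the fitted rotation and translation."""
    x, y = p
    return (x * math.cos(theta) - y * math.sin(theta) + tx,
            x * math.sin(theta) + y * math.cos(theta) + ty)
```

Chaining one such transform per hop carries a node's position into the anchor's frame, which is why the simulation tracks error against the number of transformations.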
10. Power Management
• Super Node Design
– Power management is at least as important at Tier II as at Tier I
– Key evaluation criterion for device selection
• Tier II power management needs
– Exploit mote-to-PDA interrupt wakeup
– Low Pfa in detection traffic from super nodes to support almost-always-off communication
– TDMA?
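If TDMA is adopted, its simplest form gives each node one slot per frame and keeps the radio off otherwise. A toy sketch of the slot arithmetic (slot assignment by node id is our assumption; a real schedule also needs time sync and spatial reuse):

```python
def tdma_window(node_id, num_slots, frame_start, slot_len):
    """Return the (wake, sleep) times of this node's slot in the frame
    beginning at frame_start. Outside this window the radio stays off,
    supporting the almost-always-off goal; the radio duty cycle is
    roughly 1 / num_slots."""
    slot = node_id % num_slots
    wake = frame_start + slot * slot_len
    return wake, wake + slot_len
```

The tradeoff is latency: a detection report may wait up to one full frame before its sender's slot comes around.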
11. Group Formation
• Service(s) to support
– Multilateration with gradient descent for distributed tracking and classification (at Tier II)
– Reliable broadcast of information from Super Nodes to Xtreme Scaling Motes
– Power-managed, persistent(?) hierarchical routing (at Tier II)
• Issues
– Stability of persistent clusters
• e.g., in mapping XSM motes to super nodes, use unison/hysteresis
– Stabilization of clusters
• Tolerance to failures, displacement, layout non-uniformity
12. Simulation: EmStar Simulation Environment
UCLA (GALORE Project)
• Software for StarGate and other Linux-based microserver nodes for hierarchical networks
– EmStar: seamless simulation to deployment; EmView: extensible visualizer
• http://cvs.cens.ucla.edu/viewcvs/viewcvs.cgi/emstar/
– CVS repository of the Linux and bootloader code base for StarGate
• http://cvs.cens.ucla.edu/viewcvs/viewcvs.cgi/stargate/
– StarGate users mailing list
• http://www.cens.ucla.edu/mailman/listinfo/stargate-users
12. Simulation (cont.): Programming Microservers: EmStar
• What is it?
– Application development framework for microserver nodes
– Defines a standard set of interfaces
– Simulation, emulation, and deployment with the same code
– Reusable modules, configurable wiring
– Event-driven reactive model
– Support for robustness, visibility for debugging, network visualization
– Supported on StarGate, iPAQs, Linux PCs, and pretty much anything that runs a Linux 2.4.x kernel
• Where are we using it?
– NEST GALORE system: sensing hierarchy
– CENS seismic network: time distribution, s/w upgrade
– NIMS robotics application
– Acoustic sensing using StarGate + acoustic hardware
• Note: EmStar co-funded by NSF CENS; main architect Jeremy Elson
12. Simulation (cont.)From {Sim,Em}ulation to Deployment
• EmStar code runs transparently at many degrees of “reality”: high visibility debugging before low-visibility deployment
[Figure: scalability vs. reality; pure simulation, data replay, ceiling array, portable array, deployment]
12. Simulation (cont.): Real System
Each node is autonomous; they communicate via the real environment.
[Diagram: Real Node 1 through Real Node n, each running the full stack: a collaborative sensor processing application over topology discovery, neighbor discovery, reliable unicast, leader election, 3D multilateration, time sync, acoustic ranging, and state sync, on top of the radio, sensors, and audio hardware]
12. Simulation (cont.): Simulated System
The real software runs in a synthetic environment (radio, sensors, acoustics).
[Diagram: Simulated Node 1 through Simulated Node n running the same stack as the real system, connected to an emulator/simulator with a very simple radio channel model and a very simple acoustic channel model]
12. Simulation (cont.): Hybrid System
Real software runs centrally, interfaced to hardware distributed in the real world.
[Diagram: Simulated Node 1 through Simulated Node n running the same stack inside the emulator/simulator, with their radio interfaces connected to real radios deployed in the field]
12. Simulation (cont.): Interacting with EmStar
• Text/binary on the same device file
– Text mode enables interaction from the shell and scripts
– Binary mode enables easy programmatic access to data as C structures, etc.
• EmStar device patterns support multiple concurrent clients
– IPC channels used internally can be viewed concurrently for debugging
– "Live" state can be viewed in the shell ("echocat -w") or using EmView
13. Support Infrastructure
• Important techniques for monitoring, fault detection, and recovery:
– System monitoring: the big antenna was invaluable during the midterm demo
– Network health monitoring: e.g., min and max transmission rates
– Node health monitoring: e.g., ping; query version, battery voltage, sensor failures; reset/sleep commands
– Program integrity checks: e.g., stack overflow
– Watchdog timer: e.g., tests timers, task queues, basic system liveness
– Graceful handling of partial faults: e.g., flash/EEPROM low-voltage conditions
– Log everything: use the Matchbox flash filesystem + high-speed log extraction
– Simulation at scale: it is tractable to simulate 1000s of nodes; use it!
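The node-health bullets suggest a compact health record that nodes report and the base station triages. The record fields and the low-battery threshold below are illustrative, not from the plan.

```python
from dataclasses import dataclass

@dataclass
class HealthReport:
    """Per-node health beacon: code version, battery voltage, sensor status.
    Field names are hypothetical."""
    node_id: int
    code_version: int
    battery_mv: int
    sensors_ok: bool

def needs_attention(r: HealthReport, current_version: int,
                    low_battery_mv: int = 2400) -> bool:
    """Flag nodes running stale code, low on battery, or with failed
    sensors, so operators can target reset/sleep/reprogram commands."""
    return (r.code_version != current_version
            or r.battery_mv < low_battery_mv
            or not r.sensors_ok)
```

Collecting these beacons over the exfil path gives the base station a live inventory of which nodes need reprogramming or replacement.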
13. Support Infrastructure (cont.)
• A possible network architecture:
– Claim: the key to extreme scaling is hierarchy: 100 networks of 100 motes (+ a network of 100 Stargates?), not a network of 10,000 motes
– "Everything runs TinyOS": enables simulation of all levels of the hierarchy
– Consider adding a high-speed backchannel (e.g., 802.11) to a subset of nodes for debugging, monitoring, and log extraction
• Topics for discussion:
– What is the role of end-to-end fault recovery? (e.g., watchdog timers)
– What can we learn from theory? (e.g., Byzantine fault tolerance, self-stabilization)
– Logging and replay mechanisms for after-the-fact debugging?
– Quantity vs. quality tradeoff? (focus on making individual nodes more reliable, vs. adding more nodes for redundancy)
Project Plan
• [Insert Project Plan slides here…]
BACKUP / MISCELLANEOUS SLIDES
Preliminary Program Plan: Roles and Responsibilities
Technology Development
• Middleware Services: Clock Sync (UCLA, OSU); Group Formation (OSU, UCB); Localization (UIUC); Remote Programming (UCB); Routing (OSU, UCB); Sensor Fusion (OSU); Power Management (UCB); Relay Node Services (UCLA)
• Display Unit: Ohio State
• Application Layer: Ohio State, UC Berkeley
• MAC Layer: UC Berkeley
• Operating System: UC Berkeley
• Xtreme Scaling Mote: Crossbow Technology
• Relay Node: Crossbow Technology
• Sensors: Crossbow
• Application Tools: Ohio State, UCLA, UC Berkeley
• GUI: Ohio State
• Systems Integration: Ohio State
• Simulation tools: UCB, UCLA, Vanderbilt, OSU
Auxiliary Services
• Testing (OSU, MITRE, CNS Technologies)
• Monitoring, logging, and testing infrastructure (UCB, OSU)
• Evaluation (MITRE)
• Logistics, site planning (CNS Technologies, OSU)
• ConOps development (Puritan Research, CNS Technologies, SouthCom, US Customs & Border Protection, MITRE, OSU)
Transition Partners
• USSOUTHCOM, U.S. Customs & Border Protection, USSOCOM, AFRL