Programming Vast Networks of Tiny Devices
Transcript of Programming Vast Networks of Tiny Devices
David Culler, University of California, Berkeley
Intel Research Berkeley
http://webs.cs.berkeley.edu
11/14/2002 NEC Intro
Programmable network fabric
• Architectural approach
  – new code image pushed through the network as packets
  – assembled and verified in local flash
  – a second watchdog processor reprograms the main controller
• Viral code approach (Phil Levis)
  – each node runs a tiny virtual machine interpreter
  – captures the high-level behavior of the application domain as individual instructions
  – packets are “capsules”: sequences of high-level instructions
  – capsules can forward capsules
• Rich challenges
  – security
  – energy trade-offs
  – denial of service (DoS)
pushc 1   # Light is sensor 1
sense     # Push light reading
pushm     # Push message buffer
clear     # Clear message buffer
add       # Append val to buffer
send      # Send message using AHR
forw      # Forward capsule
halt      #
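The capsule above can be mimicked with a toy stack-machine interpreter. This is an illustrative sketch, not the real Maté VM; the `sensors` dictionary and the returned outbox are stand-ins for the mote's hardware and radio.

```python
# Toy stack-machine interpreter in the spirit of a Mate-style capsule.
# Illustrative sketch only; `sensors` and `outbox` stand in for hardware.

def run_capsule(capsule, sensors):
    """Execute a capsule (list of instruction tuples); return (sent, forwarded)."""
    stack, outbox, forwarded = [], [], False
    for instr in capsule:
        op, arg = instr[0], (instr[1] if len(instr) > 1 else None)
        if op == "pushc":       # push a constant (here: a sensor id)
            stack.append(arg)
        elif op == "sense":     # pop sensor id, push its reading
            stack.append(sensors[stack.pop()])
        elif op == "pushm":     # push an (empty) message buffer
            stack.append([])
        elif op == "clear":     # clear the buffer on top of the stack
            stack[-1].clear()
        elif op == "add":       # append the value below the buffer into it
            buf = stack.pop()
            buf.append(stack.pop())
            stack.append(buf)
        elif op == "send":      # pop the buffer and hand it to the radio
            outbox.append(stack.pop())
        elif op == "forw":      # mark the capsule for re-broadcast
            forwarded = True
        elif op == "halt":
            break
    return outbox, forwarded

light_capsule = [("pushc", 1), ("sense",), ("pushm",), ("clear",),
                 ("add",), ("send",), ("forw",), ("halt",)]
msgs, fwd = run_capsule(light_capsule, sensors={1: 742})
# msgs == [[742]], fwd == True: the light reading is sent, capsule forwarded
```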
Maté Tiny Virtual Machine
• Communication-centric stack machine
  – 7286 bytes code, 603 bytes RAM
  – dynamically typed
• Four context types:
  – send, receive, clock, subroutine (4)
  – each holds 24 instructions
• Fits in a single TinyOS AM packet
  – installation is atomic
  – self-propagating
• Version information
[Figure: Maté context architecture: four contexts (0–3: Subroutines, Clock, Send, Receive) triggered by events; each MateContext has a PC, an operand stack, and a return stack, with shared code and gets/sets]
Network Programming Rate

[Figure: percent of network programmed vs. time, 0–240 seconds]
Case Study: GDI
• Great Duck Island application
• Simple sense-and-send loop
• Runs every 8 seconds (low duty cycle)
• 19 Maté instructions, 8K binary code
• Energy tradeoff: if you run the GDI application for less than 6 days, Maté saves energy
Higher-level Programming?
• Ideally, would specify the desired global behavior
• Compilers would translate this into local operations
• High-Performance Fortran (HPF) analog
  – program is a sequence of parallel operations on large matrices
  – each matrix is spread over many processors on a parallel machine
  – compiler translates from the global view to the local view
    » local operations + message passing
  – highly structured and regular
• We need a much richer suite of operations on unstructured aggregates over irregular, changing networks
Sensor Databases – a start
• Relational databases: rich queries expressed declaratively over tables of data
  – select, join, count, sum, ...
  – user dictates what should be computed
  – query optimizer determines how
  – assumes data presented in complete, tabular form
• First step: database operations over streams of data
  – incremental query processing
• Big step: process the query in the sensor net
  – query processing == content-based routing?
  – energy savings, bandwidth, reliability
[Figure: an application issues a query or trigger (e.g. SELECT AVG(light) GROUP BY roomNo) to TinyDB, which runs it over the sensor network and streams data back]
Motivation: Sensor Nets and In-Network Query Processing
• Many sensor network applications are data oriented
• Queries are a natural and efficient data processing mechanism
  – easy (unlike embedded C code)
  – enable optimizations through abstraction
• Aggregates are the common case
  – e.g., which rooms are in use?
• In-network processing is a must
  – sensor networks are power and bandwidth constrained
  – communication dominates power cost
  – not subject to Moore’s law!
SQL Primer
• SQL is an established declarative language; we are not wedded to it
  – some extensions are clearly necessary, e.g. for sample rates
• We adopt a basic subset
• The ‘sensors’ relation (table) has
  – one column for each reading type, or attribute
  – one row for each externalized value
    » may represent an aggregation of several individual readings
SELECT {agg_n(attr_n), attrs}
FROM sensors
WHERE {selPreds}
GROUP BY {attrs}
HAVING {havingPreds}
EPOCH DURATION s

Example:

SELECT AVG(light)
FROM sensors
WHERE sound < 100
GROUP BY roomNo
HAVING AVG(light) < 50
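As a sanity check on the semantics, the example query can be evaluated over an in-memory table. This is a plain-Python sketch of the declarative meaning (the readings and room numbers are invented), not TinyDB's execution strategy.

```python
# Evaluate: SELECT AVG(light) FROM sensors WHERE sound < 100
#           GROUP BY roomNo HAVING AVG(light) < 50
# over an in-memory 'sensors' table. Readings are invented for illustration.
from collections import defaultdict

def run_query(rows):
    groups = defaultdict(list)
    for r in rows:
        if r["sound"] < 100:                        # WHERE sound < 100
            groups[r["roomNo"]].append(r["light"])  # GROUP BY roomNo
    result = {}
    for room, lights in groups.items():
        avg = sum(lights) / len(lights)             # AVG(light)
        if avg < 50:                                # HAVING AVG(light) < 50
            result[room] = avg
    return result

rows = [
    {"roomNo": 101, "light": 20, "sound": 40},
    {"roomNo": 101, "light": 40, "sound": 90},
    {"roomNo": 102, "light": 80, "sound": 30},   # bright room: cut by HAVING
    {"roomNo": 103, "light": 10, "sound": 200},  # loud reading: cut by WHERE
]
print(run_query(rows))   # {101: 30.0}
```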
TinyDB Demo (Sam Madden)
Joe Hellerstein, Sam Madden, Wei Hong, Michael Franklin
[Figure: Messages / Epoch vs. Network Diameter (10–50), comparing No Guess, Guess = 50, Guess = 90, and Snooping]

[Figure: Avg. Bytes / Epoch vs. Network Diameter (10–50) for the COUNT, MAX, AVERAGE, MEDIAN, EXTERNAL, and DISTINCT aggregates]
Tiny Aggregation (TAG) Approach
• Push declarative queries into the network
  – impose a hierarchical routing tree onto the network
• Divide time into epochs
• Every epoch, sensors evaluate the query over local sensor data and data from children
  – aggregate local and child data
  – each node transmits just once per epoch
  – pipelined approach increases throughput
• Depending on the aggregate function, various optimizations can be applied
  – e.g., hypothesis testing
Aggregation Functions
• Standard SQL supports “the basic 5”:
  – MIN, MAX, SUM, AVERAGE, and COUNT
• We support any function conforming to:

  Agg_n = {f_merge, f_init, f_evaluate}
  f_merge{<a1>, <a2>} → <a12>
  f_init{a0} → <a0>
  f_evaluate{<a1>} → aggregate value
  (merge must be associative and commutative!)

• Example: AVERAGE, with partial aggregate <S, C>
  AVG_merge{<S1, C1>, <S2, C2>} → <S1 + S2, C1 + C2>
  AVG_init{v} → <v, 1>
  AVG_evaluate{<S, C>} → S / C
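The AVERAGE decomposition is easy to express directly. A minimal Python rendering of the three functions (names chosen to mirror the slide's notation):

```python
# AVERAGE as a decomposable aggregate: the partial state is <sum, count>.

def avg_init(v):
    return (v, 1)                       # a single reading v becomes <v, 1>

def avg_merge(a, b):                    # associative and commutative
    return (a[0] + b[0], a[1] + b[1])   # <S1 + S2, C1 + C2>

def avg_evaluate(a):
    return a[0] / a[1]                  # S / C

# Merging partials gives the same answer as averaging all readings at once:
readings = [10, 20, 30, 40]
partial = avg_init(readings[0])
for v in readings[1:]:
    partial = avg_merge(partial, avg_init(v))
print(avg_evaluate(partial))   # 25.0
```

Because merge is associative and commutative, partials can be combined in any order along any routing tree, which is exactly what in-network aggregation needs.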
Query Propagation
• TAG is propagation agnostic
  – any algorithm will do that can:
    » deliver the query to all sensors
    » provide all sensors with one or more duplicate-free routes to some root
• Simple flooding approach
  – query introduced at a root; rebroadcast by all sensors until it reaches the leaves
  – sensors pick a parent and level when they hear the query
  – reselect parent after k silent epochs

[Figure: query flooding from the root; each node records its parent P and level L, e.g. node 1: P:0, L:1; nodes 2 and 3: P:1, L:2; then P:3, L:3; P:2, L:3; and P:4, L:4]
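The flooding scheme amounts to a breadth-first broadcast: each node adopts the first sender it hears as its parent and records level = parent's level + 1. An illustrative loss-free simulation (the link topology below is a hypothetical one resembling the figure, with the root as node 0):

```python
# Simulate query flooding: the root rebroadcasts the query; each node picks
# a parent (the first node it hears from) and a level. Loss-free sketch.
from collections import deque

def flood(adjacency, root):
    """Return {node: (parent, level)} for a loss-free flood from root."""
    state = {root: (None, 0)}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in state:                   # first time v hears the query
                state[v] = (u, state[u][1] + 1)  # adopt u as parent
                queue.append(v)
    return state

# Hypothetical topology loosely matching the slide's figure.
links = {0: [1], 1: [0, 2, 3], 2: [1, 5], 3: [1, 4],
         4: [3, 6], 5: [2], 6: [4]}
print(flood(links, 0))
# node 1 gets P:0, L:1; nodes 2 and 3 get P:1, L:2; node 6 gets P:4, L:4
```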
Illustration: Pipelined Aggregation (animated over epochs 1–5)

SELECT COUNT(*) FROM sensors

[Figure: five-node routing tree of depth d; node 1 is the root with children 2 and 3, node 4 reports to node 3, and node 5 to node 4]

Partial COUNT transmitted by each sensor in each epoch:

Epoch | Sensor 1 | Sensor 2 | Sensor 3 | Sensor 4 | Sensor 5
  1   |    1     |    1     |    1     |    1     |    1
  2   |    3     |    1     |    2     |    2     |    1
  3   |    4     |    1     |    3     |    2     |    1
  4   |    5     |    1     |    3     |    2     |    1
  5   |    5     |    1     |    3     |    2     |    1

(The root's value converges to the complete count, 5.)
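The epoch-by-epoch progression above can be reproduced by simulating the pipeline: each epoch, every node transmits 1 (for itself) plus whatever its children transmitted in the previous epoch. A sketch using the slide's five-node tree:

```python
# Pipelined in-network COUNT: each epoch a node sends 1 (itself) plus the
# values its children sent in the PREVIOUS epoch. Tree from the slide:
# node 1 is the root; 2 and 3 report to 1; 4 reports to 3; 5 reports to 4.
children = {1: [2, 3], 2: [], 3: [4], 4: [5], 5: []}

def simulate(children, epochs):
    sent = {n: 0 for n in children}   # nothing heard before epoch 1
    history = []
    for _ in range(epochs):
        new = {n: 1 + sum(sent[c] for c in children[n]) for n in children}
        history.append(new)
        sent = new
    return history

for epoch, row in enumerate(simulate(children, 5), start=1):
    print(epoch, [row[n] for n in sorted(row)])
# epoch 1: [1, 1, 1, 1, 1]
# epoch 2: [3, 1, 2, 2, 1]
# epoch 3: [4, 1, 3, 2, 1]
# epoch 4: [5, 1, 3, 2, 1]   <- root now reports the complete count
# epoch 5: [5, 1, 3, 2, 1]
```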
Discussion
• Result is a stream of values
  – ideal for monitoring scenarios
• One communication / node / epoch
  – symmetric power consumption, even at the root
• New value on every epoch
  – after d-1 epochs, complete aggregation
• Given a single loss, the network will recover after at most d-1 epochs
• With time synchronization, nodes can sleep between epochs, except during a small communication window
• Note: values from different epochs are combined
  – can be fixed via a small cache of past values at each node
  – cache size: at most one reading per child x depth of tree
Testbench & Matlab Integration
• Positioned mica array for controlled studies
  – in situ programming
  – localization (RF, TOF)
  – distributed algorithms
  – distributed control
  – auto calibration
• Out-of-band “squid” instrumentation network
• Integrated with Matlab
  – packets -> Matlab events
  – data processing
  – filtering & control
Acoustic Time-of-Flight Ranging
• Sounder / tone-detector pair
• Emit sounder pulse and RF message
• Receiver uses the message to arm the tone detector
• Key challenges
  – noisy environment
  – calibration
• On-mote noise filter
• Calibration is fundamental to the “many cheap” regime
  » variations in tone frequency and amplitude, detector sensitivity
• Collect many pairs
  – 4-parameter model for each pair
  – T(A->B, x) = O_A + O_B + (L_A + L_B) x
  – O_A, L_A travel in the message; O_B, L_B are local
• Results: no calibration, 76% error; joint calibration, 10.1% error
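The four-parameter model can be checked numerically. In this sketch the offset and slope values are invented for illustration; the point is that a pair's time-of-flight reading T(A->B, x) combines per-mote offsets O and slopes L, so it can be inverted for distance once the mote parameters are known from a joint fit.

```python
# Four-parameter acoustic ranging model from the slide:
#   T(A->B, x) = O_A + O_B + (L_A + L_B) * x
# O_* are per-mote offsets, L_* per-mote slopes, x the true distance.
# Parameter values below are invented for illustration.

def predicted_tof(o_a, l_a, o_b, l_b, distance):
    """Forward model: time-of-flight reading for a sounder/detector pair."""
    return o_a + o_b + (l_a + l_b) * distance

def estimate_distance(tof, o_a, l_a, o_b, l_b):
    """Invert the model: recover distance from a measured time-of-flight."""
    return (tof - o_a - o_b) / (l_a + l_b)

# Mote A's (O_A, L_A) travel in the RF message; B applies its own locally.
o_a, l_a = 2.0, 0.98   # sender offset and slope (made-up units)
o_b, l_b = 1.5, 1.03   # receiver offset and slope
tof = predicted_tof(o_a, l_a, o_b, l_b, distance=10.0)
print(round(estimate_distance(tof, o_a, l_a, o_b, l_b), 6))   # 10.0
```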