Prediction-based Monitoring in Sensor Networks: Taking
Lessons from MPEG
Samir Goel and Tomasz Imielinski
Department of Computer Science
Rutgers, The State University Of New Jersey
ACM Computer Communication Review
Vol. 31, No. 5, October 2001
Outline
– Background
– Model
– PREMON
– Experiment
– Conclusion
Background
The compression techniques in MPEG-2:
– Spatial compression
– Temporal compression
Model
– Large, non-deterministic topology
– Cluster-based
– Limited energy
– Access points
– Location-aware
PREdiction Based MONitoring
– Update-mode
– Centralized approach: a base station maintains a database of the current readings of all sensors in the sensor field.
Classes of Prediction Models
– Spatial: the reading at sensor X in time slot t is the same as the reading at sensor Y during the same time slot.
– Temporal: the reading at sensor X in time slot t is 2 greater than its reading in the previous time slot.
– Spatio-temporal: the reading at sensor X in time slot t is the same as the reading at sensor Y in the previous time slot.
– Absolute: the readings at sensor X in time slots t, t+1, and t+2 will be 32, 34, and 35, respectively.
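The four model classes above can be sketched as simple predictor functions. A minimal illustration, with names and data layout chosen for clarity (they are not from the paper):

```python
# Sketch of the four prediction-model classes as predictor functions.
# `readings` maps sensor id -> {time slot -> reading}; illustrative only.

def spatial(readings, x, y, t):
    """Reading at sensor x in slot t equals reading at sensor y in slot t."""
    return readings[y][t]

def temporal(readings, x, t, delta=2):
    """Reading at sensor x in slot t is delta greater than its previous reading."""
    return readings[x][t - 1] + delta

def spatio_temporal(readings, x, y, t):
    """Reading at sensor x in slot t equals reading at sensor y in slot t-1."""
    return readings[y][t - 1]

def absolute(schedule, t):
    """Readings are stated explicitly per slot, e.g. {t: 32, t+1: 34, t+2: 35}."""
    return schedule[t]
```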
Key Characteristics of PREMON
Trades computation for communication
– Cost(computation) << Cost(communication)
Works well if one can tolerate:
– a “small” amount of error in predictions
– “some” latency in generating prediction models
Applicable whenever correlation (temporal, spatial, or spatio-temporal) exists
The Framework
Spatial-Temporal Assumption
– All sensors within the <spatial-region> fall within one cluster.
– All sensors in the <spatial-region> operate in update-mode.
Visualization
Monitoring may be seen as watching a video of the sensed values: snapshot images on a continuous time scale.
Prediction Operation
Monitoring operation:
– Initially, all sensors transmit their current reading to the base station.
– Subsequently, sensors transmit only when their readings change.
In the visualization:
– Initially, the full image is transmitted.
– Subsequently, only the diffs from the previous image are transmitted.
This is analogous to how MPEG encodes a video!
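This full-snapshot-then-diffs scheme can be sketched in a few lines. The encoding below is a hypothetical illustration of the analogy, not the paper's wire format:

```python
# Sketch of the I-frame / P-frame analogy for sensor updates:
# the first snapshot is sent in full ("I"), afterwards only the
# readings that changed since the previous snapshot ("P").

def encode_stream(frames):
    """frames: list of dicts {sensor_id: reading}. Yields (kind, payload)."""
    prev = None
    for frame in frames:
        if prev is None:
            yield ("I", dict(frame))   # full snapshot
        else:
            # only sensors whose reading differs from the previous frame
            diff = {s: r for s, r in frame.items() if prev.get(s) != r}
            yield ("P", diff)
        prev = frame
```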
PREMON
– Apply a block-matching algorithm to compute motion vectors
– Translate motion vectors into motion predictions
[Figure: motion vector <2, 0> computed between Frame#1 and Frame#2; is <2, 0> still valid for Frame#3?]
Translating Motion-vectors into Prediction Models
– No-motion case (motion vector <0, 0>): generate a constant-value prediction
– General case (motion vector <dx, dy>): generate a movement prediction
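A minimal sketch of this translation, assuming a movement prediction means that a cell's next reading arrives from the cell offset by the vector (representation and names are illustrative, not the paper's):

```python
# Translating a macro-block's motion vector into a prediction model:
# <0,0> -> constant-value prediction; <dx,dy> -> movement prediction.

def vector_to_model(dx, dy):
    if (dx, dy) == (0, 0):
        return ("constant-value", None)
    return ("movement", (dx, dy))

def apply_model(model, grid, i, j):
    """Predict the next reading at grid cell (i, j) under the model."""
    kind, vec = model
    if kind == "constant-value":
        return grid[i][j]            # value stays the same
    dx, dy = vec
    return grid[i - dx][j - dy]      # value moves in along the vector
```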
MPEG Analogy
[Figure: the base station collects update frames from sensors in the sensor field and sends predictions back. Initially all sensors send data (the I-frame); afterwards, sensors send updates only when their value differs from the predicted one (P-frames).]
Differences between MPEG and PREMON
– Hard real-time requirements for MPEG; soft real-time requirements for sensor nets
– Limited energy in sensor networks
– The number of sensors is small compared to the number of pixels
– The MPEG frame rate is an order of magnitude higher than PREMON's
– Non-uniform placement
Architecture
Processing at the base station:
– Collect updates from the sensors
– Generate a prediction model
– Send the update
– Send a set of prediction models (if the previous model resulted in fewer updates)
When low on power, the base station may divide its cluster into spatial blocks and send only the average reading of each block to the access point.
Architecture
Processing at the sensor:
– Update-mode by default: send an update whenever the reading changes
– Receive a prediction model
– After the prediction model expires, revert to update-mode
Prediction Model
Gridding:
– Interpolate or extrapolate the readings at grid points
– Assign the closest sensor to a grid point, or the average of the closest sensors
– Transparent grid points
Prediction Model
– Divide the image into macro-blocks
– Block-match to find motion vectors
– A transparent pixel matches any other pixel
– Only when the percentage of transparent pixels in a macro-block is above the threshold
Prediction Model
The base station:
– With the 4 most recent frames, apply block-matching to frames 1 and 2 to get motion vectors (MVs).
– For each MV, check frames 2 and 3, and frames 3 and 4.
Prediction Model
If a motion vector “holds”, generate an absolute model based on it; otherwise, discard it.
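This validation step can be sketched as follows, simplified to one dimension (as in the experiment). The exhaustive-search matcher and its search range are assumptions for illustration, not the paper's block-matching algorithm:

```python
# Sketch of motion-vector validation: a shift d found between frames
# 1 and 2 is kept only if the same shift also explains frames 2-3 and
# 3-4; otherwise it is discarded.

def find_shift(a, b, max_shift=3):
    """Exhaustively find d with b[i] == a[i - d] wherever both are defined.
    Assumes |d| < len(a) so every candidate shift has some overlap."""
    n = len(a)
    for d in range(-max_shift, max_shift + 1):
        if all(b[i] == a[i - d] for i in range(n) if 0 <= i - d < n):
            return d
    return None

def vector_holds(frames, d):
    """Check the shift d against every consecutive pair of frames."""
    return all(find_shift(frames[k], frames[k + 1]) == d
               for k in range(len(frames) - 1))
```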
Sensor data is encoded more efficiently depending on the type of sensor:
– The magnetometers output binary values: LOW/HIGH.
– Only the coordinates of the largest rectangle of 1s are sent, and the prediction model is valid only within this range.
– While there is no motion, a single flag is sent to indicate it.
Prediction Model
Type:
– Absolute
– Spatial
– Temporal
– Spatio-temporal
Model:
– Tuples (<time, reading>) or a function
Destination:
– A broadcast address, a sensor id, or a spatial polygon
TTL:
– Valid time
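The fields listed above could be carried in a message like the following. This container and its field names are a hypothetical sketch, not the paper's wire format:

```python
# Hypothetical container for a prediction-model message: type, model,
# destination, and TTL, mirroring the fields listed on the slide.
from dataclasses import dataclass

@dataclass
class PredictionModel:
    type: str          # "absolute" | "spatial" | "temporal" | "spatio-temporal"
    model: object      # tuples of (time, reading), or a function
    destination: object  # broadcast address, sensor id, or spatial polygon
    ttl: int           # time until the model expires
```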
Experiments
– 4 MHz processor
– Radio: 10 kbps
– 8 KB program memory
– 512 bytes data memory
– Light sensor
Experimental Setup
[Figure: floor plan showing rooms 335, 333, 331 and 317, 315, 313, 311; BS marks the location of the base-station mote; sensor motes and a focused light source are shown.]
Experimental Setup
– One-dimensional version of the problem
– Base-station code fully resides in a mote
Cases considered:
– Case#1: Default mode. Sensors send their sensed values once every second.
– Case#2: Constant-value predictions only. The BS makes a constant-value prediction if the value of a sensor doesn't change for 2 consecutive frames; it doesn't transmit movement predictions.
– Case#3: Constant and movement predictions. The BS issues both constant-value and movement predictions; it makes a movement prediction based on two correlated motion-sensor readings.
Constants
– Cost of transmission = 1 µJ/bit
– Cost of reception = 0.5 µJ/bit
– Cost of computing = 0.8 µJ per 100 instructions
– Update size = 11 bytes (Tu = 88 µJ, Ru = 44 µJ)
– Prediction size:
– Movement prediction = 8 bytes (Tp = 64 µJ, Rp = 32 µJ)
– Constant-value prediction = 5 bytes (Tp = 40 µJ, Rp = 20 µJ)
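A quick back-of-the-envelope check using these constants shows why predictions pay off. All figures come straight from the slide; only the arithmetic is added:

```python
# Per-message energy cost from the slide's constants (microjoules).
TX_PER_BIT, RX_PER_BIT = 1.0, 0.5

def tx(nbytes): return nbytes * 8 * TX_PER_BIT
def rx(nbytes): return nbytes * 8 * RX_PER_BIT

UPDATE        = tx(11) + rx(11)   # 88 + 44 = 132 uJ per update
MOVEMENT_PRED = tx(8)  + rx(8)    # 64 + 32 =  96 uJ per movement prediction
CONSTANT_PRED = tx(5)  + rx(5)    # 40 + 20 =  60 uJ per constant-value prediction

# Either prediction costs less than a single update, so a model that
# suppresses even one update already saves energy.
```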
Results
[Graph: energy consumed (mJ) by Sensor#4, Sensor#8, Sensor#12, Sensor#14, the base-station mote, and in total, for Case#1 (default mode), Case#2 (constant-value predictions only), and Case#3 (constant and movement predictions). Case#1 consumed 950 mJ in total.]
Summary of results:
– Case#3 performs 5 times better than Case#1
– Case#3 performs 28% better than Case#2
Conclusion
– The prediction-based monitoring paradigm can significantly increase energy efficiency.
– Monitoring of sensor data may be visualized as watching a “video”, and MPEG-2 algorithms may be adapted to generate predictions.