Smart Dust Sensor Mote Characterization, Validation, Fusion and Actuation
TABLE OF CONTENTS
1 Introduction
1.1 Intelligent commercial daylighting control
1.2 Smart dust motes
1.3 Mote sensor network
1.4 Sensor validation and fusion
1.5 BESTnet
2 Smart dust motes
2.1 Overview of wireless sensor network architecture
2.2 MICA processor and radio platform
2.3 MICA sensor board and prototype board
2.4 Interface and programming board
2.5 TinyOS overview
3 MICA mote sensor characterization
3.1 Characterization objectives
3.2 Illuminance characterization
3.2.1 Testbed hardware
3.2.2 Characterization procedure
3.2.3 Characterization results
3.2.4 Hysteresis test
3.3 Temperature characterization
3.3.1 Hardware testbed
3.3.2 Characterization procedure
3.3.3 Results & comparison to manufacturer's mapping equations
3.3.4 Light-temperature interference
3.4 Accelerometer evaluation
3.4.1 Test environment
3.4.2 Evaluation procedure
3.4.3 Results & discussion
4 BESTnet
4.1 Mote sensor network
4.1.1 Centralized configuration
4.1.2 Decentralized configuration
4.2 Hardware and configuration of BESTnet
4.2.1 Message packet type
4.2.2 BESTnet node function
4.2.3 Base station function
4.2.4 BESTnet v. 1.0 hardware
4.2.5 BESTnet v. 1.1 hardware
4.3 BESTnet construction: communication challenges
5 Sensor validation and fusion
5.1 Sensor validation: concept and methodology
5.2 Sensor fusion: concept and methodology
5.3 The fuzzy sensor validation and fusion (FUSVAF) algorithm
5.4 Application of FUSVAF to Cory Hall data
5.5 Application of FUSVAF to BESTnet
5.6 BESTnet failure patterns
6 Extended fuzzy sensor validation and fusion algorithm: mote-FVF
6.1 Analysis of problems in applying FUSVAF to BESTnet
6.1.1 Near-zero failures
6.1.2 Failure under sudden changes in environment
6.1.3 Precise initial guesses requirements
6.2 Mote-FVF
6.2.1 Median value estimation
6.2.2 Gaussian correlation estimation
6.2.3 Fuzzy dynamic-mean Gaussian validation curve
6.3 Performance evaluation
6.3.1 Tuning mote-FVF parameters
6.3.2 Real-time implementation and simulation: mote-FVF
6.4 Synchronization challenges
6.5 Future development
7 Mote-based actuation
7.1 Analysis on the ability of mote actuation
7.2 Prototype fluorescent lighting actuation
7.3 Implementation of mote-based actuation
7.4 Challenges in mote-based actuation
8 Conclusions
9 Future research
10 References
Appendix
A.1 Mote photoconductor mapping equations
A.2 Mote thermistor mapping equations
INDEX OF FIGURES
Fig.1 MICA processor and radio platform
Fig.2 (a) Standard MICA sensor board
Fig.2 (b) Prototyping sensor board
Fig.3 MICA interface/programming board
Fig.4 Illuminance calibration testbed
Fig.5 (a) Illuminance calibration curve for mote01
Fig.5 (b) Illuminance calibration curves for all motes
Fig.6 Hysteresis results - mote01
Fig.7 Temperature calibration testbed
Fig.8 (a) Temperature calibration curve for mote01
Fig.8 (b) Temperature calibration curves for all motes
Fig.8 (c) Temperature calibration curves for MICA sensor boards
Fig.8 (d) Temperature calibration curves for prototyping boards
Fig.9 Illuminance-temperature interference
Fig.10 Accelerometer testbed and hardware
Fig.11 Accelerometer responses
Fig.12 Accelerometer x and y axes
Fig.13 (a) AM message structure
Fig.13 (b) BESTnet payload data field
Fig.14 BESTnet version 1.1
Fig.15 Sensor validation layers
Fig.16 FUSVAF architecture
Fig.17 Adaptive parameter membership functions
Fig.18 Cory Hall sensing motes
Fig.19 Application of FUSVAF to Cory Hall data
Fig.20 Application of FUSVAF to BESTnet
Fig.21 FUSVAF failure: (a) large changes
Fig.21 FUSVAF failure: (b) poor initial guesses
Fig.22 Failure pattern - packet loss
Fig.23 Failure pattern - receiving failure
Fig.24 Failure pattern - noise
Fig.25 FUSVAF validation curve
Fig.26 Gaussian correlation curve
Fig.27 Membership functions for defining the center of the validation curve
Fig.28 Dynamic fuzzy validation curve
Fig.29 Real-time application of mote-FVF
Fig.30 (a) Raw data and reference illuminance
Fig.30 (b) Mote-FVF with median value approach
Fig.30 (c) Mote-FVF with Gaussian correlation approach
Fig.30 (d) Gaussian majority voting
Fig.30 (e) FUSVAF
Fig.31 Initial actuation architecture
Fig.32 Revised actuation architecture
1 INTRODUCTION
1.1 Intelligent Commercial Daylighting Control This ‘fuzzy validation and fusion for wireless sensor network’ research is part of a
larger project concerning the development of an intelligent commercial daylighting control system. This system aims to balance the conflicting preferences of occupants sharing a common lighting source while maximizing energy conservation through demand-responsive control [1, 2]. A previous benchmarking test in the Berkeley Expert System Technology (BEST) Laboratory at the University of California, Berkeley, a shared workspace containing individual workstations and a conference area without windows, showed that an average saving of 44.5% could be achieved by applying such an intelligent daylighting control system. Under certain assumptions this result scales to general commercial buildings, indicating savings of at least 50% on annual electricity consumption [3]. In addition, MEMS (micro-electro-mechanical systems) 'smart dust mote' technology makes the sensing and actuation required by the control system practical without extensive retrofitting of existing wiring.
1.2 Smart Dust Motes “Smart Dust” is proposed as a futuristic dust-sized sensing and communication unit
based on MEMS technology. Millimeter-scale "motes" are available today as prototypes and can be configured with a variety of sensors in high-density distributed sensor networks. Several types of smart motes have been developed by different research organizations, typically consisting of a microcontroller, a communication unit, and onboard sensors or integrable sensor board modules [4]. The operating system must be compact enough to fit in the motes' very limited flash memory while remaining programmable to perform customized tasks. TinyOS, developed at the University of California, Berkeley [5], is one such operating system and is installed on all the motes used in this research.
While mote sensors are miniature in volume, their power, communication range, and memory are limited, and their reliability and accuracy vary. Moreover, the sensors integrated on mote sensor boards are not calibrated. Characterizing the motes to establish the connection between sensor readings and physical phenomena is therefore essential before pursuing
them for commercial daylighting and other targeted applications.
1.3 Mote Sensor Network To compensate for the limitations on power, reliability, communication range and
sensor fidelity, it is necessary to deploy a large number of motes throughout the target environment, leveraging their tiny volume to form a sensor network. A sensor network may consist of redundant sensors monitoring the same parameter, or of disparate sensors observing the same target through different parameters. Because motes carry several sensors and a wireless communication module, they are an excellent choice for distributed sensor networks, and their onboard computational capability enables more sophisticated network behavior. Compared to a single expensive high-fidelity sensor, a network of mote sensors running appropriate algorithms is superior at providing local information as well as global knowledge, and is much more robust against failure.
1.4 Sensor Validation and Fusion Sensor validation and fusion is critical to the success of wireless sensor
networks, yet it is not a new procedure. A wide body of both military and non-military research focuses on the validation and fusion of several redundant or disparate sensors for target tracking, automated target/threat recognition, manufacturing process monitoring, and robotics [6]. However, work concerning clusters of sensors remains limited, since networked sensing is a young technology growing with the maturation of MEMS technology.
Because these networks comprise densely deployed mote sensors, they generate a mass of data at each time stamp. Although the networked sensors usually measure the same physical phenomenon, inconsistencies among the sensed data arise from sensor degradation or failure, external or internal interference, calibration error, and variation in sensor position. Validating and fusing the sensed data permits efficient extraction of pertinent information from the mass of readings, while isolating false data and distinguishing system failures from sensor failures.
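To make the idea concrete, the toy sketch below validates redundant readings against their median and fuses the survivors by averaging. This is only an illustration of the validate-then-fuse pattern, not the FUSVAF algorithm developed in chapter 5; the fixed fractional tolerance is an assumption for the example.

```python
from statistics import mean, median

def validate_and_fuse(readings, tolerance=0.2):
    """Reject readings that deviate more than `tolerance` (as a
    fraction of the median) from the median, then fuse the
    surviving readings by averaging them."""
    m = median(readings)
    valid = [r for r in readings if abs(r - m) <= tolerance * m]
    return mean(valid), valid

# Five redundant illuminance readings; one sensor has failed low.
fused, valid = validate_and_fuse([510, 495, 505, 120, 500])
# The outlier 120 is excluded and the fused value is near 500 lux.
```

A real algorithm must also handle the cases this toy version ignores, such as near-zero values and sudden environmental changes, which is precisely what motivates the fuzzy validation curves discussed later.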
1.5 BESTnet BESTnet is a small-scale experimental network of mote sensors built in the BEST
laboratory for implementing and evaluating the sensor validation and fusion algorithm that forms the foundation of this research. BESTnet data consisted of illuminance readings from the photoconductor on each sensor node. The sensor validation and fusion algorithm was first applied off-line to recorded data sets in order to fine-tune its parameters, and was ultimately applied in real time, together with the sensor characterization findings, to arrive at a set of calibrated readings. The routing protocol and configuration of BESTnet were improved to reduce communication failures and to increase the efficiency and accuracy of the validation and fusion algorithm. In addition, several failure modes were identified, including sensor failure, data packet loss, and interference-corrupted data.
2 SMART DUST MOTES
2.1 Overview of Wireless Sensor Network Architecture Several companies and organizations are devoted to the development of micro-sized
wireless sensor networks, including sensor/platform hardware, networking protocols and operating system software.
MICA, MICA2, and MICA2DOT motes by Crossbow [7], Dust Motes by Dust Networks [8], i-Bean Endpoints, Routers and Gateways by Millennial Net [9], and Evaluation Modules and Developer Kits by Ember [10] are examples of the most competitive commercialized sensor/platform hardware products. These platforms generally consist of a microprocessor, a communication unit, and analog and/or digital sensor interfaces for onboard sensors or integrable sensor board modules. Among the communication options of an RF transceiver, a laser module, or a corner cube reflector, radio frequency communication dominates. Standardized or customized sensor modules that measure a number of physical or chemical stimuli, such as illuminance, temperature, humidity, acceleration, magnetism, or pressure, can easily be integrated onto the platform via I/O interfaces [4].
Dust Networks’ DustCloud [8], Millennial Net’s network protocol [9], Ember’s EmberNet protocol [10], and MeshNetworks’ MEA (MeshNetworks Enabled Architecture), powered by the MSR (MeshNetworks Scalable Routing) protocol and the ATP (Adaptive Transmission Protocol) service, exemplify the RF chips and software libraries being developed for self-organizing, self-healing wireless networks. TinyOS, developed at UC Berkeley, is one of the most versatile operating systems exclusively intended for wireless sensor networks, providing flexibility for customized routing, sensing, and processing tasks [11].
This research used Crossbow’s MICA processor/radio platform and sensor boards and UC Berkeley’s TinyOS operating system, which are described in detail in the following.
2.2 MICA1 Processor and Radio Platform UC Berkeley’s Smart Dust project began with the development of COTS
(commercial off-the-shelf) Dust [12]. The COTS Dust family included the RF mote, Laser mote, CCR mote, Mini mote, MALT (motorized active laser transceiver) mote, weC mote, and IrDA mote [4]. The family shared a similar design architecture with variations in the embedded communication units and processors. The first commercialized generation of the mote platform, named the Rene mote, inherited the weC design and was manufactured by Crossbow [13]. The next generation of Rene was dubbed MICA because its electronic implementation resembles its silicate relative, which separates into thin mineral leaves [14]. The third generation, MICA2, features a higher CPU clock and better radio communication than MICA, and is presently the most common of the MICA family. MICAz is the next-generation 2.4 GHz, IEEE 802.15.4/ZigBee-compliant platform, featuring several new capabilities that enhance the functionality of MICA motes.
The MICA platform, as shown in Fig.1, consists of a micro-processor, radio transceiver, external module connector, and battery. The processor is an Atmel ATmega 128L low-power microcontroller with a 4 MHz CPU clock and 4 KB of SRAM, running TinyOS from its 128 KB flash memory. The communication unit comprises a basic 916 MHz band transceiver, antenna, and discrete components that configure the physical-layer characteristics, operating in on-off keying mode at speeds up to 50 Kbps. The radio transmission strength is adjustable through a programmable potentiometer via TinyOS. The 51-pin connector, which exposes an 8-channel, 10-bit A/D (analog-to-digital) converter, a serial UART port, and an I2C serial port, serves as the interface for external modules and can be connected to peripherals such as sensor boards and interface boards [15]. Two AA batteries power the motes, setting a lower limit on their volume.
1 MICA is sometimes referred to as MICA1 since MICA2 has been released.
Fig.1 MICA processor and radio platform
2.3 MICA Sensor Board and Prototype Board Figure 2 shows the two sensor boards used in this research project - the standard
MICA sensor board, and the prototyping sensor board. The standard MICA sensor board contains a light sensor, temperature sensor, microphone, sound buzzer, magnetometer, and accelerometer. The light sensor is a Clairex CL94L CdSe photoconductor, which is most sensitive at a wavelength of 690 nm [16]. The temperature sensor is a Panasonic ERT-J1VR103J thermistor operating from -40°C to 125°C with a zero-power resistance of 10 kΩ [17]. A Panasonic WM-62A omni-directional back electret condenser microphone cartridge is used for acoustic ranging with the sounder, or for general acoustic recording and measurement [18]. The sound buzzer is a 4 kHz fixed-frequency piezoelectric resonator. A Honeywell HMC1002 2-axis magnetic sensor [19] serves as the magnetometer, and the accelerometer is a MEMS surface-micromachined 2-axis ±2 g device manufactured by Analog Devices [20].
(a) (b)Fig.2 (a) Standard MICA sensor board, (b) Prototyping sensor board
The prototyping sensor board contains only a light sensor and a temperature sensor
on it, leaving the remaining power and A/D converter ports for customized third-party sensors. The light sensor is of the same type as that on the standard MICA sensor board, and the temperature sensor is a high-fidelity YSI 44006 thermistor that can achieve an accuracy of 0.2°C with proper calibration [21]. The design of the prototyping sensor board also allows motes to be used for actuation via the powered ports, as detailed in section 7. Other off-the-shelf MICA sensor boards are available from Crossbow, such as the GPS/weather sensor board with a GPS module, an ambient light sensor, humidity sensor, accelerometer, and barometer.
In the intelligent commercial daylighting project, only the light sensor was fully characterized, so the sensor validation and fusion algorithm was tested with light sensor data only. Nevertheless, both the photoconductor and the thermistor were calibrated, and the accelerometer was assessed to evaluate its potential as an occupancy sensor.
2.4 Interface and Programming Board The MICA interface and programming board is shown in Fig.3. Using this interface
board, customized TinyOS code is installed into a mote's flash memory through the parallel port of a personal computer (PC). A MICA mote can also be used in conjunction with a PC and the interface board to form a base station that aggregates data from the sensor network and injects commands into it [15].
Fig.3 MICA interface/programming board
In this research a base mote is used with the interface board to collect data into Matlab. This data is processed using the sensor validation and fusion algorithm. In addition, the base mote is used to send commands from Matlab back into the sensor
network.
2.5 TinyOS Overview TinyOS, or Tiny Microthreading Operating System, is an open-source operating
system designed for wireless embedded sensor networks, originally authored by Jason Hill of UC Berkeley. This event-driven system enables fine-grained power management and flexible scheduling in order to accommodate the unpredictable aspects of wireless communication and interaction with the physical world. TinyOS systems, libraries, and applications are written in nesC, a programming language with C-like syntax. The nesC language allows users to specify a mote's behavior by wiring together appropriate components, and even to customize components for new devices [5, 11, 14, 22].
3 MICA MOTE SENSOR CHARACTERIZATION
3.1 Characterization Objectives Mote sensors output digital values from 0 to 1023, obtained by transforming the
analog reactions resulting from variations in physical phenomena. These transformations are performed by the on-board A/D converter. Therefore it is necessary to derive a mapping from the digital readings to appropriate physical units so that the sensors can be used in real-world applications. Another purpose of mote sensor characterization is to identify and recommend replacement of sensors that are nonlinear within the working range of the application.
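As a sketch of this mapping chain, the code below converts a raw 10-bit reading to a voltage and then to illuminance. The ADC reference voltage, the exponential form, and the coefficients are all placeholders for illustration; the actual per-mote mapping equations are listed in Appendix A.1.

```python
import math

VREF = 3.0  # assumed ADC reference voltage, for illustration only

def counts_to_volts(counts):
    """Convert a 10-bit ADC reading (0-1023) to a voltage."""
    if not 0 <= counts <= 1023:
        raise ValueError("reading outside 10-bit range")
    return counts / 1023 * VREF

def counts_to_lux(counts, a=0.05, b=0.0118):
    """Map a raw reading to illuminance with a hypothetical fitted
    curve lux = a * exp(b * counts); real per-mote coefficients come
    from the calibration procedure of section 3.2."""
    return a * math.exp(b * counts)

v = counts_to_volts(512)   # roughly 1.5 V at mid-scale
lux = counts_to_lux(750)   # illustrative value only
```

Because each photoconductor responds differently, a separate coefficient pair would be stored per mote rather than a single global mapping.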
The photoconductors on the MICA sensor board were calibrated first, since they are the primary sensors used in daylighting control. The motes' thermistors were also calibrated to determine the feasibility of using them for future heating, ventilation, and air conditioning (HVAC) control. As MICA mote sensor boards do not contain occupancy sensors, the accelerometers were also evaluated for occupancy sensing.
3.2 Illuminance Characterization 3.2.1 Testbed hardware
The testbed for calibrating the photoconductor consisted of the following:
- Two double-tube fluorescent light fixtures with four GE light tubes of color temperature 4100 K
- Two Advance Mark VII electronic fluorescent dimming ballasts
- Watt Stopper ISOLé IRC-1000 remote lighting control system
- Minolta T-10 illuminance meter
The remote control system controls the ballasts and varies the illuminance between 60 lux and 1000 lux in 18 steps, each roughly 50 lux above or below its neighbors.
Twelve MICA motes with standard MICA sensor boards were arranged in a 3-by-4 matrix in the testbed to form a centralized sensor network. The base station was placed in the center of the matrix, and was connected to a laptop running Matlab to gather data from the network. Each mote was programmed to acquire a reading from the photoconductor every second and send the data back to the base station. The high fidelity
color temperature and cosine corrected Minolta meter was also placed in the center of the matrix and used as the illuminance reference. Fig.4 illustrates the calibration environment.
Fig.4 Illuminance calibration testbed
3.2.2 Characterization procedure Calibration was performed by setting the light to the lowest setting (resulting in an
illuminance of 60 lux), and increasing the illuminance one step (approximately 50 lux) every minute until reaching the highest setting (1000 lux). Next, the light was dimmed step-by-step, for one minute per setting, back down to the lowest setting. The reference illuminance at each level was recorded manually from the Minolta meter. By the end of the process, each light setting was tested twice except the highest level, which was tested once.
In order to avoid the effects of photoconductor transience, sensor data was not considered until ten seconds after each change in illuminance level. The mean sensor reading for each mote at each illuminance was mapped to the associated reference illuminance using a curve fitting program. The curve fitting was performed under the following assumptions: that the illuminance over the testbed was evenly distributed, and that photoconductor tilt and orientation had negligible effects on sensed values.
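The per-mote curve fitting step can be sketched as follows. This pure-Python least-squares fit of an exponential form, lux = a·exp(b·reading), is an illustration on synthetic data; it is neither the actual fitting program nor the equations of Appendix A.1, where each mote's fitted form may differ.

```python
import math

def fit_exponential(readings, lux_values):
    """Least-squares fit of lux = a * exp(b * reading), linearized
    as ln(lux) = ln(a) + b * reading. One simple choice of fitting
    form for a monotone nonlinear photoconductor response."""
    n = len(readings)
    ys = [math.log(l) for l in lux_values]
    sx, sy = sum(readings), sum(ys)
    sxx = sum(x * x for x in readings)
    sxy = sum(x * y for x, y in zip(readings, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    ln_a = (sy - b * sx) / n
    return math.exp(ln_a), b

# Synthetic (reading, lux) pairs standing in for one mote's mean
# readings at each light setting -- not measured data.
readings = [650, 700, 750, 800, 850]
lux = [100, 180, 320, 580, 1050]
a, b = fit_exponential(readings, lux)
predicted = a * math.exp(b * 750)  # close to the 320 lux data point
```

Fitting in log space keeps the computation to a closed-form linear regression, though it implicitly weights low-illuminance points more heavily than a direct nonlinear fit would.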
3.2.3 Characterization results The curve mapping sensed values to illuminance and the corresponding equation for
mote01 are shown in Fig.5 (a). The mapping curves for all motes tested are shown in Fig.5 (b), with Table 1 of Appendix A.1 listing all twelve mapping equations. As seen in the plots, the photoconductors do not have a linear response. In fact, some of the sensors (mote03, mote05, and mote08) saturate, becoming insensitive to changes above a certain illuminance. It is difficult to find a suitable fitting curve for the motes that saturate, and for mote05 no fitting curve matches the data. Because of the significant spread in sensor response, it is not possible to derive a global fitting curve for the entire set of photoconductors tested. However, each sensor reading can be mapped using its individual equation. Alternatively, probabilistic methods might be applied to compensate for the disparity between sensors. Ultimately, we would like to identify linear light sensors for integration onto mote sensor boards.
Considering the individual mapping curves for each mote, the aforementioned curve fitting assumptions and the testing procedure are potential sources of error. For example, the illuminance was not perfectly uniform over the testbed; shifting the light meter to different locations caused an illuminance variation of approximately 20 lux. In addition, the photoconductors are not soldered onto the sensor boards at precise angles and orientations, and the curve fitting itself can introduce error. Furthermore, as indicated in the specification sheet, the photoconductors are most sensitive to light of 690 nm (red) wavelength [16]. We therefore conclude that the fitting curves for each mote are valid only for light sources with a color temperature of 4100 K, unless further corrected for color temperature.
[Plots of illuminance (lux) versus digital reading, showing data points and fitting curves.]
Fig.5 (a) Illuminance calibration curve for mote01; (b) Illuminance calibration curves for all motes
3.2.4 Hysteresis test A hysteresis test was conducted by exposing two motes to illuminances above,
below, and at 500 lux. This procedure was intended to determine whether the photoconductor exhibits a memory effect. The illuminance was varied according to the following sequence (in lux): 500 → 700 → 500 → 300 → 400 → 500. Fig.6 shows that
the mote did not exhibit significant hysteresis, as the sensor readings at 500 lux were relatively constant whether approaching 500 lux from above or from below. The differences in sensed illuminance at 500 lux that did exist were likely caused by mapping errors and the slight disparity between the illuminance at the reference and at the mote itself.
[Figure: illuminance (lux) vs. reading number, showing the mapped sensor readings, their mean value, and the illuminance meter readings]
Fig.6 Hysteresis results - mote01
3.3 Temperature Characterization
3.3.1 Hardware testbed
The thermistor calibration testbed consisted of the following:
Raytek Raynger infrared thermometer
Tripod
Mini-freezer
Hair dryer
Due to the lack of an adjustable constant-temperature testing environment, it was impossible to calibrate the thermistor at several steady temperature levels as in the illuminance characterization. Instead, the motes were placed in a mini-freezer to obtain a temperature as low as 0°C, and were heated with a hair dryer to obtain a high temperature of approximately 90°C. The high fidelity infrared thermometer, which measures the surface temperature of its target, was mounted on the tripod and aimed at the mote thermistor as shown in Fig.7. One mote was calibrated at a time.
Fig.7 Temperature calibration testbed
Each mote was programmed to collect a reading from its thermistor every second and to immediately send the data to a base station running Matlab. The clocks on the infrared thermometer and the computer were synchronized to avoid inaccuracies, and the readings of the infrared thermometer were recorded manually. Thirteen motes were tested; eleven were connected to standard MICA sensor boards, while the other two were mounted with prototyping sensor boards, which embed more precise thermistors.
3.3.2 Characterization procedure
The temperature range tested was 0°C to 85°C. Each mote was initially placed in the freezer to cool to 0°C. It was then removed and allowed to warm to room temperature on the testbed, and was then heated under the hair dryer until reaching 85°C. To track the rate of change in thermistor temperature, the time and the temperature of the thermistor were recorded every ten seconds when the motes were first exposed to room temperature. As the rate of change decreased, the recording interval was increased to thirty and then sixty seconds. When heating the motes under the hair dryer, time and temperature were again recorded every ten seconds.
The digital sensor readings corresponding to the times of the thermometer readings were picked out of the data set of sensor readings acquired every second, and were plotted against the meter readings. The following assumptions were made: the temperature distribution over the thermometer's detection area was uniform, the surface temperature of the thermistor dominated the variation in output resistance, and the
thermistors were not sensitive to the humidity in the freezer or the airflow of the hair dryer. A curve fitting was performed to identify an equation mapping sensor output to temperature, for each thermistor.
3.3.3 Results & comparison to manufacturer's mapping equations
The mapping curve and the corresponding equation for mote01 are plotted in Fig.8(a); the curves for the remaining motes appear in Figs.8(b) to (d). Table.2 of Appendix A.2 shows the mapping curve equations for all thermistors, as well as that provided in Crossbow's specification sheets. Note that there are two clusters of curves in Fig.8(b). The group containing eleven curves reflects the behavior of the thermistors embedded on the standard MICA sensor boards. They behave similarly and are nearly linear. The differences between curves are likely due to the tolerance of the thermistors and to errors introduced by the testing procedure and environment. The slight nonlinearity appearing on every curve at nearly the same temperature is likely a characteristic of the thermistors themselves. A global mapping curve is shown in Fig.8(c), represented by the following equation:
Temperature(°C) = −4.904×10^−10 x^4 + 9.146×10^−7 x^3 − 6.353×10^−4 x^2 + 0.3037x − 14.26,

where x is the digital reading from the A/D converter of the motes. For an HVAC application that does not involve temperature variations as large as the calibration range, the thermistors are sufficiently reliable, if calibrated in a more highly controlled test environment.
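As an illustrative sketch (the original processing was done in Matlab; this Python rendering is an assumption for presentation only), the global mapping polynomial can be evaluated directly:

```python
# Coefficients of the global temperature mapping polynomial above,
# highest-order term first.
GLOBAL_MAP = [-4.904e-10, 9.146e-7, -6.353e-4, 0.3037, -14.26]

def adc_to_celsius(x):
    """Map a raw A/D converter reading x to degrees Celsius using
    Horner's rule on the fitted fourth-order polynomial."""
    t = 0.0
    for c in GLOBAL_MAP:
        t = t * x + c
    return t
```

For example, a digital reading of 500 maps to roughly 62.4°C, which lies inside the 0°C to 85°C calibration range.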
[Figure: temperature (°C) and temperature residuals (°C) vs. sensor readings, showing data points, fitted curves for motes 01 through 11, 13 and 14, the global mapping curve, and Crossbow's curve]
Fig.8 (a) Temperature calibration curve for mote01; (b) Temperature calibration curves for all motes; (c) Temperature calibration curves for MICA sensor boards; (d) Temperature calibration curves for prototyping boards
The cluster of two coincident linear curves in Fig.8 (c) shows the response of the thermistors on the prototyping sensor boards, as opposed to the standard MICA sensor boards. Their consistency and linearity indicate a much higher fidelity and precision compared to those on the MICA sensor boards. The higher quality of the thermistors on the prototyping sensor boards is confirmed in the manufacturer's user manual, validating the experimental results [21].
Figures 8(c) and 8(d) show Crossbow's calibration curve compared to the experimentally derived mapping curves, for the two types of thermistors. There is a clear offset between the experimental results and those of the manufacturer. Since the manufacturer does not provide the details of its characterization procedure, but encourages users to rely on its results, it is difficult to determine the reason underlying these calibration offsets. While some of the difference may be due to the experimental testing environment, another contributor might be light-temperature interference. This interference was observed during the photoconductor calibration, and is discussed in the following section.
3.3.4 Light-temperature interference
A simple experiment was conducted using the temperature calibration testbed to
quantify the effect of light-temperature interference. Using the same hardware, setup, and sensing rate as detailed in section 3.3.1, an incandescent desk lamp was placed above the mote, and the Minolta illuminance meter was placed beside the mote. At room temperature, the illuminance was varied over a range from 256 lux to 900 lux, remaining at each dimming level for one minute.
Fig.9 illustrates the output of the thermistor on mote01 in response to the illuminance variation. Although the surface temperature of the thermistors could be somewhat affected by the heat of the lamp, the thermistor output changes unreasonably with illuminance. The sensor readings decrease with increasing light intensity if started at low illuminance, but also decrease with decreasing light intensity if started at high illuminance. Without further testing, this interference cannot be fully characterized.
Increasing illuminance:
illum.(lx) | sensor readings | mapped temp.(°C) | true temp.(°C) | start time | end time
256 (start) | 200.09 | 24.9 | 24.4 | 01:55:50 | 01:56:15
300 | 184.75 | 22.5 | 24.4 | 01:56:30 | 01:57:00
400 | 167.59 | 19.6 | 24.6 | 01:57:30 | 01:58:00
500 | 159.32 | 18.2 | 24.8 | 01:58:50 | 01:59:20
600 | 153.49 | 17.2 | 25.0 | 02:00:05 | 02:00:35
700 | 147.4 | 16.2 | 25.2 | 02:01:00 | 02:01:30
800 | 142.51 | 15.3 | 25.4 | 02:01:55 | 02:02:25
900 | 139.61 | 14.8 | 25.6 | 02:03:00 | 02:03:30
219 | 217.79 | 27.6 | 25.4 | 02:04:00 | 02:04:30
900 (end) | 139.7 | 14.8 | 25.6 | 02:04:40 | 02:05:10

Decreasing illuminance:
illum.(lx) | sensor readings | mapped temp.(°C) | true temp.(°C) | start time | end time
219 (start) | 174.71 | 23.9 | 25.1 | 02:06:25 | 02:06:55
900 | 185.79 | 25.7 | 25.2 | 02:07:10 | 02:07:40
800 | 174.46 | 23.8 | 25.4 | 02:08:10 | 02:08:35
700 | 156.47 | 20.7 | 25.4 | 02:08:50 | 02:09:20
600 | 147.91 | 19.1 | 25.5 | 02:09:35 | 02:10:05
500 | 140.02 | 17.6 | 25.5 | 02:10:35 | 02:11:05
400 | 135.14 | 16.7 | 25.4 | 02:12:20 | 02:12:50
300 | 131.76 | 16.0 | 25.3 | 02:13:30 | 02:14:00
261 | 127.34 | 15.1 | 25.2 | 02:14:10 | 02:14:40
300 (end) | 192.66 | 26.7 | 25.2 | 02:14:50 | 02:15:20
Fig.9 Illuminance-temperature interference
According to the user's manual, the photoconductor and the thermistor on the standard MICA sensor board share the same A/D converter channel, possibly explaining the light-temperature interference. The manufacturer also warns that, in order to get a meaningful reading, only one of the two sensors can be used at a time. It is recommended that the unused sensor be turned off, allowing at least 10 ms for the capacitor to discharge after being turned off [21]. During this test, however, the photoconductor was never turned on, yet the interference still arose. This suggests that a careful investigation of the sensor board's hardware design should be
performed if both the photoconductor and the thermistor are going to be used in future applications.
3.4 Accelerometer Evaluation
Occupancy sensors are necessary so that the intelligent daylighting control system
can make decisions based on the presence and preferences of specific users. Although there is no occupancy sensor on the standard MICA sensor board, the included accelerometer is useful for detecting tilt, movement, vibration, and seismic activity [21]. Therefore, the feasibility of using the accelerometer as an occupancy sensor was evaluated.
3.4.1 Test environment
Two motes were attached to the underside of the seat and to the back of an office
chair as shown in Fig.10. The motes were programmed to acquire data from the accelerometer every second and to send the data to the base station. The base station was also attached to the underside of the cushion and was connected to a computer running Matlab.
Fig.10 Accelerometer testbed and hardware
3.4.2 Evaluation procedure
At the start of the experiment the chair was kept empty and stationary. A person was
then asked to sit in the chair and to perform reading and writing tasks for a period of time. During the next phase of the experiment, the occupant was asked to rock the chair hard enough to elicit the maximum response of the accelerometer, and then to leave the chair.
3.4.3 Results & discussion
Figure 11 illustrates the response of the accelerometer to the occupant's presence and
motion. The x and y directions of the accelerometer are defined as in Fig.12. As expected, the accelerometer signals fluctuated significantly when the occupant vigorously rocked the chair. There was also distinguishable variation in the signals when the occupant sat in and left the chair.
[Figure: accelerometer sensor readings (mote13 and mote14, x and y axes) vs. time, annotated: chair empty; start sitting in the chair; sitting steady; swinging the chair while sitting on it; leaving the chair]
Fig.11 Accelerometer response
[Figure: orientation of the accelerometer x and y axes on the chair]
Fig.12 Accelerometer x and y axes
Overall, the accelerometers do not appear sensitive to the presence of an occupant, showing a difference of only ±15 in digital output even under the extreme condition. Therefore, while a single accelerometer does not appear to serve as a very reliable occupancy sensor, it could be useful as a trigger indicating that someone is using the chair. In addition, information regarding occupancy might be extracted if the accelerometer were fused with other types of sensors, such as a swivel sensor.
4 BESTNET
4.1 Mote Sensor Network
4.1.1 Centralized configuration
In a centralized sensor network, all nodes send data directly or indirectly to the base station, without processing. The base station collects data, processes it, makes decisions, and takes appropriate actions, or injects commands back to the network. The sensor networks implemented in this report are all built using a centralized configuration.
Although a centralized sensor network is easy to construct, some of its inherent characteristics make implementation of a large-scale network impractical. For a sensor network containing thousands of nodes, the overwhelming volume of data sent to the base station causes severe collision and loss of data packets. In addition, the volume of data requires a processing unit of such power that it would be cost prohibitive, or might not yet exist. Moreover, routing a data packet all the way from the farthest node to the base station could introduce unacceptable delays if real-time processing is critical.
4.1.2 Decentralized configuration
In contrast to a centralized sensor network, a decentralized configuration allows
each node to process data, make decisions, and take action, greatly relieving the limitations and shortcomings of centralized sensor networks. A large-scale sensor network with a decentralized configuration can be divided into clusters, where each cluster has a selected ‘head’ to process data and to communicate with other clusters and/or the base station. Thus, it is possible to obtain local information from each cluster or to determine global conditions by processing the data from the clusters at the base station. Equipped with storage and computational capabilities, Smart Dust motes are extremely suitable for decentralized sensor networks. However, designing efficient, smart algorithms and protocols for decentralized sensor networks is a challenge.
4.2 Hardware and Configuration of BESTnet
4.2.1 Message packet type
The messages transmitted in the Smart Dust mote sensor network are defined as
Active Message (AM). The overall AM format contains five fields: destination address, AM handler ID, Group ID, message length and payload. The payload field can be defined according to the application [5]. In the BESTnet, the payload contains source mote ID, last sample number, ADC channel and ADC data array. Fig.13 (a) illustrates the five-field structure of a standard AM message, and Fig.13 (b) shows the payload format designed for use in BESTnet.
(a) AM message structure:
Destination address: 2 bytes
Active Message handler ID: 1 byte
Group ID: 1 byte
Message Length: 1 byte
Payload: up to 29 bytes

(b) BESTnet payload data field:
Source mote ID: 2 bytes
Last sample no.: 2 bytes
ADC channel: 2 bytes
Data array: 10×2 bytes

Fig.13 (a) AM message structure; (b) BESTnet payload data field
The “source mote ID” and “last sample number” fields are used by intermediate messaging motes to determine whether an incoming packet is new, to avoid repeat forwarding of data. The “ADC channel” is used to define the sensor from which the data originates. The “data array” carries a set of sensor readings. Each sensor reading occupies 2 bytes, so that the data array can accommodate up to 10 sensor readings.
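This payload layout lends itself to a fixed-size binary encoding. Below is a minimal host-side sketch of packing and unpacking the 26-byte payload (the little-endian byte order and this particular encoding are assumptions for illustration, not taken from the TinyOS implementation):

```python
import struct

# Payload layout from Fig.13(b): source mote ID, last sample number,
# and ADC channel (2 bytes each), then ten 2-byte sensor readings.
# Little-endian byte order is assumed for illustration.
PAYLOAD_FMT = "<HHH10H"   # 26 bytes, within the 29-byte AM payload limit

def pack_payload(mote_id, last_sample, adc_channel, readings):
    """Serialize one BESTnet payload; readings must hold 10 values."""
    assert len(readings) == 10
    return struct.pack(PAYLOAD_FMT, mote_id, last_sample, adc_channel, *readings)

def unpack_payload(payload):
    """Deserialize a BESTnet payload into its named fields."""
    fields = struct.unpack(PAYLOAD_FMT, payload)
    return {"mote_id": fields[0], "last_sample": fields[1],
            "adc_channel": fields[2], "readings": list(fields[3:])}
```

A round trip through pack_payload and unpack_payload recovers the original field values.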
4.2.2 BESTnet node function
Each node in the network functions as both a sensor and a data forwarder. Motes
activate the photo sensor every 5 seconds, acquiring 10 readings at a rate of one reading per 50 ms. The data packet is sent to the base station once the data array is filled. Motes also listen to the network for incoming packets, examine the last sample number of each received message to avoid repetitive forwarding, and then broadcast the message.
4.2.3 Base station function
The mote in the base station acts as a receiver, listening to the network for incoming
messages, and transmits received messages through the serial port to the computer, via the interface board. The computer side of the base station runs Matlab with a Java I/O interface to gather the incoming data packets from the base mote and unwrap each received packet. The mean value of the 10 readings in the data array is stored in a matrix
associated with the sensing mote. Finally, Matlab is used to run a data validation and fusion algorithm.
4.2.4 BESTnet v. 1.0 hardware
BESTnet version 1.0 comprised six mote nodes, and was constructed in order to test
the fuzzy sensor validation and fusion (FUSVAF) algorithm. The testbed was roughly 4ft by 3ft in area, containing six motes arranged in a 3-by-2 matrix with the base mote placed in the center. The network was centrally configured so that data packets could be directly received by the base mote. The operating environment consisted of two double-tube dimmable fluorescent light fixtures, hanging approximately 1.5 meters above the test bed. This design permitted the light to be dimmed from 1000 lux to 50 lux in 18 lux intervals.
4.2.5 BESTnet v. 1.1 hardware
BESTnet version 1.1 contained ten mote nodes, four of which served only as
forwarders in order to relay the data packets to the base station. This version of BESTnet was intended to serve as a prototype mote network for sensing office lighting conditions. Two mote sensors were placed on each of three desks. The placement of mote sensors on the desktop was dictated by radio signal strength and worker preference. Fig.14 illustrates the layout of BESTnet version 1.1.
[Figure: network layout showing the sensor motes, relay motes, and base station]
Fig.14 BESTnet version 1.1
4.3 BESTnet Construction: Communication Challenges
There were several obstacles in the testing area, such as partition walls, tall file
cabinets, and books on the desktop. Such obstacles certainly attenuate and, in some cases, even block the radio frequency communications between motes. In addition, signals
from wireless routers, cordless phones, computers, and personal cell phones contribute noise that may interfere with the radio communications.
The communication problems were especially apparent in BESTnet version 1.1 since the motes were spread across desks that were separated by an intermediate desk containing no sensing motes. With an eye toward network simplicity, dedicated relay motes were programmed solely to forward data packets in order to link the sensors into a network.
5 SENSOR VALIDATION AND FUSION
5.1 Sensor Validation: Concept and Methodology
Validation of sensor data prevents the system from wasting resources in processing
faulty data, and also tries to tag questionable data that critically affects the performance of the system. Noise rejection and fault detection are the main functions of sensor data validation, providing a more reliable data set for fusion.
In BESTnet, signals from wireless equipment and electromagnetic emissions from machines such as computers introduce noise. Sensor failure, process failure, and system failure are three classic faults common in BESTnet. In order to minimize the cost and size of motes, the sensors on the motes are not of high fidelity, and might sometimes give incorrect values, causing sensor failure. This is especially true of the mote photoconductors. Process failure usually occurs when the sensing action is interrupted by other concurrent tasks. Since the mote operating system (TinyOS) is event-driven, the parameter values of internal functions can be altered accidentally when other tasks are performed in the middle of the sensing task. Moreover, due to this characteristic of event-based operating systems, the base mote cannot receive two data packets simultaneously. In other words, the base mote will ignore or discard incoming packets while receiving and processing the current packet. This packet loss is defined as system failure.
There are five layers in the methodology of sensor data validation: signal check, absolute limits check, system performance limits check, expected behavior check, and empirical correlation check. These layers are depicted in Fig.15. At the signal check level, the algorithm verifies that there is a signal output from the sensor. At the absolute limits level, sensed values are examined to determine whether they fall inside the range which the sensor can possibly output. Next, values corresponding to conditions outside the system performance limits are filtered out. The expected behavior check determines whether the sensed values reasonably reflect normal system behavior, based on the prediction of the validation and fusion algorithm itself. Finally, the empirical correlation check compares the correlation of the data with that from redundant sensors [23].
[Figure: validation flow: raw sensed data passes through the signal output check, absolute limits check, performance limits check, expected behavior check, and correlation check (drawing on sensor features and previous values) before the fusion procedure]
Fig.15 Sensor validation layers
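The five layers can be sketched as a cascade of checks. In the sketch below, every numeric limit and threshold is an illustrative placeholder, not a value from this work:

```python
def validate(reading, predicted, redundant_readings,
             absolute_limits=(0, 1023),        # 10-bit ADC range (assumed)
             performance_limits=(50, 1000)):   # plausible operating band (assumed)
    """Cascade of the five validation layers; returns True if the
    reading survives all checks. Thresholds are placeholders."""
    if reading is None:                        # 1. signal output check
        return False
    lo, hi = absolute_limits                   # 2. absolute limits check
    if not lo <= reading <= hi:
        return False
    lo, hi = performance_limits                # 3. performance limits check
    if not lo <= reading <= hi:
        return False
    if abs(reading - predicted) > 200:         # 4. expected behavior check (placeholder gate)
        return False
    # 5. empirical correlation check against the redundant sensors' median
    median = sorted(redundant_readings)[len(redundant_readings) // 2]
    return abs(reading - median) <= 150        # placeholder correlation bound
```

Readings rejected at any layer would be excluded from (or down-weighted in) the subsequent fusion step.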
5.2 Sensor Fusion: Concept and Methodology
Sensor fusion efficiently extracts the most useful information from sensor readings,
and returns a pertinent value which reflects the state of the sensed condition accurately, to a certain degree. That is, sensor fusion leverages multiple sensors and sensor validation to mitigate the impact of inaccuracy from one or several of the sensors. The data used for fusion could be from quasi-redundant or disparate sensors. Quasi-redundant sensor fusion makes use of data from the same type of sensors measuring the same parameter. The term ‘quasi’ is meant to emphasize that, rigorously speaking, no matter how close together the sensors are installed, or how uniform the environment is, the physical phenomena measured are very well correlated but not precisely redundant [24]. Sensors used in disparate sensor fusion are of different types measuring the same parameter; they may even measure different parameters that help reveal the state of the system or the value of the desired parameter. Since only the mote light sensors were used in BESTnet, the algorithm developed performs mainly quasi-redundant sensor fusion.
There are several different approaches to sensor fusion, such as fuzzy logic, Kalman filtering, Bayesian networks, and neural networks. The sensor fusion algorithm used in this research is based on the fuzzy logic approach.
5.3 The Fuzzy Sensor Validation and Fusion (FUSVAF) Algorithm
The development of the sensor validation and fusion algorithm for BESTnet began with the Fuzzy Sensor Validation and Fusion (FUSVAF) algorithm developed by Dr. Kai Goebel [25, 26].
The FUSVAF algorithm makes use of a Fuzzy Exponential Weighted Moving Average (FEWMA) time series predictor, dynamic validation curves determined by sensor characteristics, and a fusion scheme which uses confidence values for the measurements, the predicted value, and the system state. The architecture of FUSVAF is shown in Fig.16, and the algorithm works in the following manner: incoming sensor readings are validated using a validation gate and the previous fused value. This fused value is then used to assess the state of the system, expressed by α. It is also used for prediction, which in turn is necessary to perform the validation in the next time step. There are three parts to the FUSVAF algorithm: validation, fusion and prediction.
[Figure: block diagram: raw measurements or processed data feed sensor validation and intra-network sensor fusion (data fusion), which in turn serve the machine level controller and the supervisory controller]
Fig.16 FUSVAF Architecture
The sensor validation portion verifies the fidelity of each incoming reading by assigning it a confidence value, which is determined by a validation curve. The validation curve is a Gaussian curve generated from the specific sensor characteristics, the predicted value and the physical limitations on the sensor value. The assignment of the confidence value takes place utilizing a validation gate. If sensor readings show a change larger than the bounds of the gate, the readings are flagged as erroneous and are assigned a confidence value of ‘0’. A maximum confidence value of ‘1’ is assigned to readings that coincide with the center of the gate. The center of the gate is determined by the predicted value. The curve between the maximum and the two minima is dependent upon the sensor characteristics. Generally, this is a non-symmetric curve that is wider around the maximum value if the sensor has small variance, and narrower if the sensor exhibits noisy behavior and large variance. The curves change dynamically with the operating conditions to capture the change in behavior of the sensor over its operating range.
The fusion of several sensor values is performed by taking the average of the measurements weighted by their confidence values, plus the predicted value weighted by α, an adaptive parameter representing the system state, and a constant scaling factor ω. The equation is

x_f = [ Σ_{i=1..n} σ(z_i) z_i + (α/ω) x̂ ] / [ Σ_{i=1..n} σ(z_i) + α/ω ],

where
x_f: fused value
z_i: measurements
σ: confidence values
α: adaptive parameter representing the system state
ω: constant scaling factor
x̂: predicted value
The scaling factor ω is introduced to use a fraction of the predicted value to prevent the system from becoming unstable when no valid readings remain after the validation procedure, and to maintain robustness in case of a temporary sensor failure. Since the purpose of the term containing ω is to manage sensor failure, ω is typically large, preventing the predicted value from dominating the fused value. ω must be tuned to match the particular system under consideration. The adaptive parameter α carries information about the state of the system and is used in both the fusion and prediction portions of the algorithm. If the system is in a steady state, α is set to a large value to heavily weight the past history, since the variation in measurements is very likely caused by noise. On the other hand, if the system is in a transient state, α is set to a small value, weighting the predicted value less heavily so as to reduce the lag induced by past history. A mechanism that distinguishes transient from steady state operations, i.e., one that can adjust α dynamically according to the system state, is given by the set of fuzzy rules below:
IF change of readings is small THEN α is large,
IF change of readings is medium THEN α is medium,
IF change of readings is large THEN α is small.
The membership functions are designed using triangular shapes with maximum overlap, such that only two parameters have to be specified: m_e for fuzzification and m_α for defuzzification. The membership functions are shown in Fig.17.
[Figure: triangular membership functions µ_error over the normalized error (small, medium, large; parameter m_e) and µ_α over α (small, medium, large; parameter m_α)]
Fig.17 Adaptive parameter membership functions
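A sketch of the rule evaluation under these definitions, with triangular memberships parameterized only by m_e and m_α; the weighted-rule-output defuzzification and the default parameter values are assumptions for illustration:

```python
def triangular_alpha(error, m_e=0.3, m_alpha=0.5):
    """Map a normalized change of readings (error in [0, 1]) to the
    adaptive parameter alpha via the three fuzzy rules above.
    Triangular memberships with maximum overlap leave only m_e
    (fuzzification) and m_alpha (defuzzification) as free parameters;
    the default values are illustrative, not from the thesis."""
    e = min(max(error, 0.0), 1.0)
    # Membership degrees of the error in {small, medium, large};
    # with maximum overlap the three degrees always sum to 1.
    if e <= m_e:
        mu_small, mu_medium, mu_large = 1 - e / m_e, e / m_e, 0.0
    else:
        mu_small = 0.0
        mu_medium = (1 - e) / (1 - m_e)
        mu_large = (e - m_e) / (1 - m_e)
    # Rule outputs: small error -> alpha large (1.0),
    # medium error -> alpha = m_alpha, large error -> alpha small (0.0)
    return mu_small * 1.0 + mu_medium * m_alpha + mu_large * 0.0
```

A zero error yields α = 1 (steady state, heavy weight on history), while the maximum error yields α = 0 (transient, history discounted), matching the rules above.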
The predicted value for the next time step is generated by a time series predictor with the adaptive parameter α tuned to optimize the trade-offs between responsiveness, smoothness, stability, and lag of the predictor. The standard exponential weighted moving average predictor is combined with the fused value to predict the next state with the following equation: x̂(k+1) = α x̂(k) + (1−α) x_f(k), where x_f(k) is the current updated fused value.
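The fusion and prediction steps can be sketched as follows; the default value of ω is an illustrative placeholder, since ω must be tuned per system:

```python
def fuse(measurements, confidences, predicted, alpha, omega=10.0):
    """One FUSVAF fusion step: confidence-weighted average of the
    measurements plus the predicted value weighted by alpha/omega.
    The default omega is an illustrative choice, not a tuned value."""
    num = sum(s * z for s, z in zip(confidences, measurements)) + (alpha / omega) * predicted
    den = sum(confidences) + alpha / omega
    return num / den

def predict_next(predicted, fused, alpha):
    """FEWMA predictor: x_hat(k+1) = alpha * x_hat(k) + (1 - alpha) * x_f(k)."""
    return alpha * predicted + (1 - alpha) * fused
```

With a large ω, the α/ω term contributes little while valid readings exist, but it keeps the fused value defined (equal to the predicted value) when every confidence is zero.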
5.4 Application of FUSVAF to Cory Hall Data
Before obtaining the motes for the construction of BESTnet, a database of
illuminance and temperature data was accessed. The data was collected and compiled as part of a long-term project conducted by Professor David Culler's research group. The experiment was conducted from May 17th, 2001 to July 17th, 2001. Forty-four motes were deployed in the classrooms, offices, laboratories, and hallways of Cory Hall. Fig.18 shows one of the motes installed in the hallway. Since the experiment was intended for monitoring rather than for prototyping a mote sensor network, only one mote was placed in each room, and not all the motes were active at the same time. The exceptions were motes numbered 6190 and 6191, both installed in room 490, a large student study room.
By carefully selecting readings from the two motes that shared a common sensing time stamp, and by tuning the relevant parameters in the algorithm, FUSVAF was successfully applied directly to the two sets of data. This application is plotted in Fig.19. The raw data was comprised of uncalibrated readings from the two motes during an 11-hour period on May 17th. The algorithm not only showed its capability of pertinently fusing mote sensor data, but also encouraged construction of a mote sensor network and
development of a fuzzy validation and fusion algorithm for the network.
Fig.18 Cory Hall sensing motes
[Figure: light readings vs. time (hr), showing sensor reading 1, sensor reading 2, and the fused data]
Fig.19 Application of FUSVAF to Cory Hall data
5.5 Application of FUSVAF to BESTnet
By slightly adjusting the parameters in the FUSVAF algorithm that are specific to
sensor characteristics, the algorithm was applied to BESTnet version 1.0. This was intended as a preliminary test to inform further development of validation and fusion algorithms for mote sensor networks.
As indicated in Fig.20, FUSVAF was able to track the set of readings from the six sensors quite well, provided that the light was changed slowly enough that the algorithm could treat the change as a continuous variation in illuminance. However, FUSVAF failed if the light was suddenly dimmed from the highest level to the lowest, or raised from the lowest level to the highest. This shows that FUSVAF cannot handle situations where the change in the sensed parameter is not approximately continuous with respect to the sensing rate. Moreover, the algorithm sometimes fails upon starting. Another interesting observation was that FUSVAF behaves anomalously if operated in a dark environment. This implies that the algorithm cannot deal with sensor readings close to zero. Fig.21 demonstrates the situations in which FUSVAF fails.
[Figure: sensor readings vs. event number for motes 03, 04, 05, 06, 09 and 10, with the fused data]
Fig.20 Application of FUSVAF to BESTnet
[Figure: sensor readings vs. event number for motes 03, 04, 05, 06, 09 and 10, with the fused data, in two failure cases]
Fig.21 FUSVAF failure: (a) large changes; (b) poor initial guesses
5.6 BESTnet Failure Patterns
FUSVAF does not perform particularly well in BESTnet, yet several failure patterns were identified by examining the content of data packets. These failure patterns should be helpful for further development of validation and fusion algorithms.
Packet loss:
As shown in Fig.22, a discontinuity in the “last sample number” in a sequence of data received from the same sensor reveals the loss of data packets. This failure is usually caused by packet collision. Since the motes' event-based operating system can perform only one task at a time, the base mote ignores or discards incoming packets unless it is idle. Another contributor to packet loss might be that the radio signal attenuates completely before reaching the base station.
Date & Time | Sample No. | Mote No. | Channel | Readings
03-Jun-2003 18:32:17 | 165 | 1 | 1 | 1011
03-Jun-2003 18:32:17 | 166 | 1 | 1 | 1011
03-Jun-2003 18:32:18 | 168 | 1 | 1 | 1009
03-Jun-2003 18:32:19 | 169 | 1 | 1 | 1009
03-Jun-2003 18:32:19 | 170 | 1 | 1 | 1013
Fig.22 Failure pattern - packet loss
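Detecting this failure pattern on the base station amounts to scanning the received sequence of sample numbers for gaps; a minimal sketch:

```python
def find_lost_packets(sample_numbers):
    """Return the sample numbers missing from a received sequence;
    each gap between consecutive numbers reveals lost packets
    (as in Fig.22, where sample 167 never arrives)."""
    lost = []
    for prev, cur in zip(sample_numbers, sample_numbers[1:]):
        lost.extend(range(prev + 1, cur))
    return lost
```

Applied to the sequence in Fig.22 (165, 166, 168, 169, 170), this reports sample 167 as lost.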
Receiving failure:
Fig.23 shows a mysterious failure that occurred only once in all of the experiments conducted. No reasonable hypothesis regarding the cause could be proposed. According to the content of the data packets received by the base station, the readings from two sensors were somehow simultaneously changed to zero over a period of time.
[Figure: sensor readings vs. sampling number]
Fig.23 Failure pattern - receiving failure
Noise:
It is clear from Fig.24 that some values in the fields of a data packet are altered or duplicated. Some values change to ridiculous numbers and can easily be rejected in the
validation portion of the FUSVAF algorithm. However, some of the values are altered only slightly, or are duplicated without change, and are not recognized by FUSVAF. These unrecognized faults can cause confusion or adversely affect the fused value.
Date & Time | Sample No. | Mote No. | Channel | Readings
21-Oct-2003 14:16:43 | 390 | 3 | 1 | 857.3
21-Oct-2003 14:16:43 | 17798 | 3 | 69 | 857.3
21-Oct-2003 14:17:02 | 420 | 3 | 1 | 720.8
21-Oct-2003 14:17:08 | 430 | 3 | 1 | 705
21-Oct-2003 14:17:09 | 17838 | 3 | 69 | 705
21-Oct-2003 14:17:21 | 450 | 3 | 1 | 704.4
Fig.24 Failure pattern - noise
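Some of these corrupted packets could be caught with simple field-consistency checks, sketched below; the expected channel value and the sample-number jump threshold are illustrative assumptions, not part of the original implementation:

```python
def is_suspect(sample_no, channel, prev_sample_no,
               expected_channel=1, max_jump=100):
    """Flag a packet showing the corruption patterns of Fig.24: an
    unexpected ADC channel value (e.g. 69 instead of 1) or a sample
    number jumping far from the previous one (e.g. 17798 after 390).
    Both thresholds are illustrative placeholders."""
    if channel != expected_channel:
        return True
    return abs(sample_no - prev_sample_no) > max_jump
```

Such checks would let the base station drop obviously corrupted packets before they reach the validation and fusion stages.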
6 EXTENDED FUZZY SENSOR VALIDATION AND FUSION ALGORITHM: MOTE-FVF
6.1 Analysis of Problems in Applying FUSVAF to BESTnet

Based on observation of the data generated from applying FUSVAF to BESTnet, three weaknesses of FUSVAF have been identified.
6.1.1 Near-zero failures

The FUSVAF algorithm was originally developed for operating conditions in which a zero value implied a faulty sensor. When FUSVAF was applied to BESTnet under common operating conditions of zero illuminance, the algorithm failed.
6.1.2 Failure under sudden changes in environment

FUSVAF also failed to track sensor readings if a gap appeared in all sensor readings between two time stamps; that is, it fails if the target environment changes at a rate faster than the sensing rate of the mote sensors. This situation is illustrated in Fig.21(a). This type of failure is driven by the fact that the algorithm validates each sensor reading independently, according to the predicted sensor reading; it does not consider the correlation among all sensor readings. The Gaussian-shaped validation curve used in FUSVAF is centered at the last predicted value, filtering out readings that deviate significantly from the predicted value. Accordingly, the algorithm cannot follow relatively large changes between consecutive readings.
Fig.25 shows an example of the Gaussian function used as the validation curve. The mean of this function is the predicted reading, and both the predicted reading and the validation gate are determined directly from the adaptive parameter (α), which is generated with the fuzzy rules in the FUSVAF algorithm. The algorithm begins by independently examining each sensor reading to determine whether it lies within the validation gate; readings that fall outside the limits of the gate are not fused. Therefore, if a large gap appears between consecutive sets of sensor readings, the readings from all of the sensors will fall outside the boundaries of the validation gate and be assigned zero confidence values. In this case the only value the fusion function can rely on is the previous predicted reading; that is, no information about the change is conveyed for fusion or for future prediction steps.
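The behavior described above can be sketched as follows. The function name and the exact form of the confidence curve are illustrative, not FUSVAF's published equations, but they capture how a gate centered on the predicted value assigns zero confidence to every reading after a large jump.

```python
import math

def confidence(reading, predicted, alpha, gate_halfwidth):
    """Confidence assigned by a Gaussian curve centered at the
    predicted reading; readings outside the fuzzy validation gate
    receive zero confidence and are excluded from fusion."""
    if abs(reading - predicted) > gate_halfwidth:
        return 0.0                      # outside the validation gate
    return math.exp(-alpha * (reading - predicted) ** 2)

# A reading near the prediction keeps high confidence...
print(confidence(505.0, 500.0, 1e-4, 50.0))
# ...but after a large jump every sensor is rejected at once.
print(confidence(950.0, 500.0, 1e-4, 50.0))  # prints 0.0
```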
[Gaussian curve: sensor confidence (σ) from 0 to 1 versus measurement, with the fuzzy validation gate centered at the predicted reading]
Fig.25 FUSVAF validation curve
In the case of large changes between consecutive readings, the adaptive parameter (α), which is calculated based only on the previous predicted reading, and hence the boundaries of the next validation gate, remain unchanged. Furthermore, the gate would still not be wide enough to accept any of the readings following the jump in sensed values. The same situation would continue until the readings approach the value they held before the jump.
One intuitive means of addressing this problem is to increase the sensing frequency, reducing the size of the detected change between readings so that they fall within the limits of the validation gate. Another naïve approach might be to adjust the algorithm’s parameters such that the validation gate is always wide enough to accept large changes in consecutive readings. However, increasing the sensing frequency compromises energy conservation and efficiency, and widening the validation gate is only feasible for small jumps in readings. Consider, for example, the situation in which the lights of the target area are switched from off to the maximum illuminance level. In the BESTnet testbed the difference between maximum and minimum illuminance is approximately 700 lux, which maps to 950 in terms of the sensors’ digital readings. According to the fuzzy rules, a validation gate with a width of 950 requires an extremely small adaptive parameter (α), which may result in the acceptance of false readings. Moreover, there is no way for FUSVAF to generate such a small α in a limited number of iterations, and the small magnitude of the adaptive parameter would make the algorithm much more sensitive to noise and faulty readings.
6.1.3 Precise initial guess requirements

Similar to other estimation filters, FUSVAF requires an initial guess regarding the system state, and attempts to converge to the true state over the next several iterations. Specifically, FUSVAF guesses the predicted readings in the first two time stamps in order to set the adaptive parameter (α) and validation gate before the algorithm can run. However, if not provided with a sufficiently accurate set of initial guesses, the algorithm cannot continue. This situation is shown in Fig.21(b).

FUSVAF was originally developed for a system in which it was not difficult to accurately guess the initial system state; the operating state was quite limited, and held to a narrow range. In contrast, for applications such as illuminance detection in BESTnet, the initial state can range from complete darkness to very high illuminance. Therefore, the problem with the width of the validation gate under large changes in consecutive readings, discussed in the previous section, will cause FUSVAF to fail.
6.2 Mote-FVF

An extension of the FUSVAF algorithm, named mote fuzzy validation and fusion (mote-FVF), was developed to preserve FUSVAF’s high performance and ease of implementation for sensor networks, while overcoming the three major weaknesses reviewed in Section 6.1. The analysis therein revealed that the validation gate, the center of which is determined exclusively by the predicted value, contributes to the majority of failures. Rather than setting the center of the validation curve at the predicted value and validating each reading independently, the enhanced mote-FVF algorithm moves the validation curve according to the correlation, or consistency, between all of the sensor readings.
The idea motivating this alteration is that if the mean of the Gaussian validation curve can be dynamically shifted from the predicted reading based on the correlation among all of the sensor readings, the problem can be solved. As this shifting does not greatly affect the adaptive parameter (α), the algorithm retains its ability to identify unreasonable readings, and robustness is preserved.
Two methods of quantifying the correlation among sensor readings are introduced: the median value approach and the Gaussian correlation approach. The median value approach is an approximate method for evaluating the correlation among sensor readings and is relatively easy to implement, while the Gaussian correlation approach is more robust but more complex in terms of time and space requirements. The remainder of this section describes the two approaches and demonstrates how the fuzzy dynamic-mean Gaussian validation curve functions.
Two reasonable assumptions must be made for either approach to succeed. First, since a cluster contains tens to hundreds of sensors, it is assumed that there is no chance of all sensors failing at once; this means that the only time every sensor reads zero is when the value of the sensed state parameter is truly zero. Second, it is assumed that at least half of the readings that pass the absolute limits check (see Section 5.1) correctly reflect the true state of the sensed variable. This assumption does depend on the reliability of the sensor network; however, according to the reported experiments and experience in using and analyzing mote sensor networks, it has always held.
6.2.1 Median value estimation

In the first step of the median value approach to analyzing the behavior of a cluster of sensor readings, readings that are obviously false based on the physical limitations of the sensors are filtered out; for mote sensors, for example, readings outside the range 0~1023 are discarded. Next, the median of the remaining readings is calculated. The median is used rather than the mean in order to prevent bias from outlier readings that are not filtered out in the first step. In general, the median provides a reasonable estimate of the majority of sensor readings.
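Under the assumptions above, the median value approach reduces to a few lines. The helper below is a sketch, with the 0~1023 range taken from the mote sensors' physical limits.

```python
from statistics import median

def median_estimate(readings, lo=0, hi=1023):
    """Majority estimate for a cluster: drop physically impossible
    readings, then take the median of the survivors, which resists
    bias from outliers that pass the absolute limits check."""
    valid = [r for r in readings if lo <= r <= hi]
    if not valid:
        raise ValueError("no readings passed the absolute limits check")
    return median(valid)

# 1500 is filtered as impossible; the outlier 90 barely moves the estimate.
print(median_estimate([612, 603, 1500, 90, 608, 615, 610]))  # prints 609.0
```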
6.2.2 Gaussian correlation estimation

The Gaussian correlation approach is motivated by the paper “A Methodology for Fusion of Redundant Sensors” by Mohamed Abdelrahman et al. [27]. Again, the first step is to filter out readings that are obviously false given the physical limitations of the sensors. Next, for each of the remaining readings, a Gaussian function centered on the reading is generated, with its standard deviation fine-tuned to fit each sensor. The resulting function is designated PDFn(x). The reading corresponding to the maximum value of the normalized summation of all Gaussian functions is taken as the voted majority, as shown in Fig.26. The normalized summation of all Gaussian functions is calculated as:
(1/n) Σ_{k=1}^{n} PDF_k(x),    x = 0, ..., maximum sensor output,
where n is the total number of readings remaining after filtering out readings initially determined to be false.
[Plot over sensor readings 0-1000: Gaussian functions for readings 1-6 and their normalized summation]
Fig.26 Gaussian correlation curve
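A direct, if brute-force, sketch of the voted-majority computation follows. A single shared standard deviation is used here for simplicity; per the text, σ would be fine-tuned to fit each sensor, and the exhaustive search over 0~1023 stands in for whatever maximization the real implementation uses.

```python
import math

def gaussian_vote(readings, sigma=25.0, lo=0, hi=1023):
    """Voted majority: each surviving reading contributes a Gaussian
    PDF_k(x) centered on itself, and the x maximizing the normalized
    summation of all Gaussians is returned."""
    valid = [r for r in readings if lo <= r <= hi]
    n = len(valid)

    def summed(x):
        return sum(math.exp(-((x - r) ** 2) / (2.0 * sigma ** 2))
                   for r in valid) / n

    return max(range(lo, hi + 1), key=summed)

# Four consistent readings outvote the single faulty one at 200.
print(gaussian_vote([610, 615, 605, 200, 612]))
```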
6.2.3 Fuzzy dynamic-mean Gaussian validation curve

Denoting the estimated majority from either method as Cor(x), an additional set of fuzzy rules is applied to the validation gate such that its mean, denoted x̄, moves toward the majority starting from the predicted reading. The fuzzy rules are defined as follows:

IF Var(x) is small THEN move x̄ toward Cor(x) a small amount,
IF Var(x) is medium THEN move x̄ toward Cor(x) a medium amount,
IF Var(x) is large THEN move x̄ toward Cor(x) a large amount.
The fuzzy membership functions are designed using standard triangular-shaped functions with maximum overlap, as shown in Fig.27 [28]. There are two parameters to be tuned: mvar for fuzzification and mmov for defuzzification. Fig.28 depicts the manner in which the center of the validation curve shifts between the voted majority and the predicted reading. Note that the offset from the predicted reading takes into account the relationship among sensor readings.
[Triangular membership functions with maximum overlap: μVar over Var(x) with parameter mvar, and μshift over shift(x) with parameter mmov; each partitioned into small, medium, and large]
Fig.27 Membership functions for defining the center of the validation curve
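A sketch of the rule base, assuming triangular memberships with maximum overlap on a normalized [0, 1] universe and simple weighted-average defuzzification; the consequent fractions (0.1, 0.5, 0.9) and the parameter defaults are illustrative stand-ins for the tuned mvar and mmov.

```python
def memberships(v, m_var=1.0):
    """Degrees of small/medium/large for Var(x), using standard
    triangular functions with maximum overlap on [0, m_var]."""
    t = min(max(v / m_var, 0.0), 1.0)
    small = max(1.0 - 2.0 * t, 0.0)
    medium = max(1.0 - abs(2.0 * t - 1.0), 0.0)
    large = max(2.0 * t - 1.0, 0.0)
    return small, medium, large

def shifted_mean(predicted, cor, var, m_var=1.0, m_mov=1.0):
    """Move the validation-curve mean from the predicted reading
    toward the voted majority Cor(x) by a fuzzy-weighted fraction
    of the offset (weighted-average defuzzification, scaled by m_mov)."""
    small, medium, large = memberships(var, m_var)
    fractions = (0.1, 0.5, 0.9)          # small/medium/large shifts
    shift = m_mov * (small * fractions[0] + medium * fractions[1]
                     + large * fractions[2]) / (small + medium + large)
    return predicted + shift * (cor - predicted)

# High variance among readings pulls the mean strongly toward Cor(x);
# low variance leaves it near the predicted reading.
print(shifted_mean(300.0, 800.0, var=0.95))
print(shifted_mean(300.0, 800.0, var=0.05))
```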
[Gaussian curve: sensor confidence (σ) versus measurement, with the fuzzy validation gate’s center shifted from the predicted reading toward Cor(x)]
Fig.28 Dynamic fuzzy validation curve
6.3 Performance Evaluation

6.3.1 Tuning mote-FVF parameters

The parameters in mote-FVF that require tuning are aleft and aright for the validation curve, the fuzzification parameter mvar and the defuzzification parameter mmov for shifting the center of the validation curve, the constant scaling factor ω, and the fuzzification parameter me and defuzzification parameter mα for generating the adaptive parameter α. Without loss of generality, aleft and aright were set to equal values to ensure a symmetric validation curve; symmetric validation curves have produced successful results in prior experiments [25, 26]. The a values for each mote sensor were also set equal, as all of the sensors are of the same type and are expected to have similar characteristics. mvar and mmov were adjusted so that the validation curve could shift enough to capture at least some information about the largest possible change in a single time step, without becoming overly sensitive to failures. The parameters me and mα were tuned to values that optimized the response of the algorithm to the lighting environment. me, mα, mvar and mmov were tuned separately for the median value and Gaussian correlation approaches. The constant scaling factor ω was chosen to be large in order to avoid an obvious lag in the fused value caused by weighting the previous predicted value too heavily.
6.3.2 Real-time implementation and simulation: mote-FVF

The mote-FVF algorithm was implemented on BESTnet v. 1.0, using both the median value and Gaussian correlation approaches. A high fidelity illuminance meter was placed in the center of the network to provide a reference illuminance for evaluating the accuracy of the fused value [29]. The illuminance was held constant for at least one minute so that each level could be distinguished from the previous one. The sensed digital readings were first converted to units of lux using the calibration results from Section 3.2, and the resulting illuminances were fed into mote-FVF.
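The conversion step can be sketched as below. The cubic coefficients are invented for illustration only; the actual per-mote fits from Section 3.2 are the higher-order polynomials listed in Appendix A.1.

```python
# Illustrative per-mote calibration coefficients (a3, a2, a1, a0);
# NOT the real fits from Section 3.2 / Appendix A.1.
CAL = {1: (2.1e-6, -1.4e-3, 0.95, 3.2)}

def digital_to_lux(mote_id, reading):
    """Map a raw 10-bit digital reading to lux with the mote's fitted
    polynomial, evaluated via Horner's rule."""
    a3, a2, a1, a0 = CAL[mote_id]
    return ((a3 * reading + a2) * reading + a1) * reading + a0

print(digital_to_lux(1, 512))
```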
Figure 29 shows the real-time implementation of the mote-FVF algorithm using the median value majority voting scheme. The ’x’ symbols represent the raw sensed data from each mote, the dashed line marks the reference illuminance measured with the illuminance meter, and the solid line indicates the fused value. The plot shows that the algorithm was able to track the sensed data accurately, and that the fused illuminance matched the reference illuminance with a maximum error of 3.36%, aside from the lag in the transient mode. Similarly, the maximum error using the Gaussian correlation approach (not shown) was 4.01%. These errors include calibration errors from converting raw digital readings into units of lux, as well as the uneven distribution of illuminance over the surface of the testbed. Physically, the 4.01% error represents approximately a 30 lux difference in illuminance, to which a human being is insensitive.
Fig.29 Real-time application of mote-FVF
A comparison of several variations of the sensor validation and fusion algorithm is shown in Fig.30. Part of the data set gathered during real-time testing was used to run off-line simulations of four variations, with additional data points introduced to imitate sensor failures. Plot 30(a) shows the sensed data points as ’x’ symbols, and the reference illuminance with a dashed line. In plots 30(b) through (e), the red dashed line indicates the reference illuminance while the solid black line shows how the fusion algorithms track it. Plots 30(b) and (c) show the performance of the mote-FVF algorithm using the median value and the Gaussian correlation methods, respectively; both methods perform well even when sensor failures are introduced. Plot 30(d) shows that an algorithm without validation and prediction, that is, the Gaussian correlation approach on its own, is much more sensitive to sensor failures than the mote-FVF algorithm. Plot 30(e) shows the performance of the original FUSVAF algorithm, which fixes the center of the validation curve at the predicted value without considering the correlation between the entire group of sensor readings. The plot clearly indicates that FUSVAF fails under large, discontinuous changes in the state of the sensed variable.
Fig.30 (a) Raw data and reference illuminance; (b) Mote-FVF with median value approach; (c) Mote-FVF with Gaussian correlation approach; (d) Gaussian majority voting; (e) FUSVAF.
6.4 Synchronization Challenges

The biggest challenge in applying the mote-FVF algorithm to the BESTnet data concerns synchronization. Due to the lack of a global clock, each mote starts its timer when booting. Furthermore, even if all the timers could be synchronized, data packets could still be delayed in traveling through the network. Determining when to perform the mote-FVF algorithm is not straightforward: the fused value should be calculated as soon as possible, yet the loss of delayed data should be minimized. The synchronization process currently used is to perform mote-FVF after the base station receives the packet from the mote with the largest number of stored data, which arrives two time stamps ahead of the mote with the least. The algorithm then assumes that packets have been lost for the motes with the least number of stored data, assigns zeros to their previous time stamp, and runs mote-FVF on the data from the previous time stamp.
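The trigger logic just described can be sketched as follows. The flat per-mote buffers, the names, and the zero-fill convention are illustrative, but the rule is the one from the text: fuse the previous time stamp once the fastest mote is two time stamps ahead of the slowest.

```python
def sync_step(stored_counts, buffers, gap=2):
    """Return (time_stamp, readings) for the previous time stamp once
    the mote with the most stored data is 'gap' stamps ahead of the
    mote with the least; missing readings are declared lost and set
    to zero. Returns None while the base station should keep waiting."""
    most, least = max(stored_counts.values()), min(stored_counts.values())
    if most - least < gap:
        return None
    t = most - gap                       # previous time stamp to fuse
    readings = [buf[t] if t < len(buf) else 0.0 for buf in buffers.values()]
    return t, readings

counts = {1: 5, 2: 5, 3: 3}              # mote 3 is lagging
bufs = {1: [610, 612, 615, 611, 609],
        2: [605, 607, 606, 608, 604],
        3: [600, 602, 603]}
print(sync_step(counts, bufs))           # prints (3, [611, 608, 0.0])
```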
The above method of synchronization is weak for the following reason: if a packet is received by the base station after it has been determined to be lost, there is no way to determine to which time stamp the packet belongs. This drawback could cause unforeseen problems if the sensing frequency is very high. Therefore, an efficient synchronization algorithm is the most critical improvement to be made to the mote-FVF algorithm.
6.5 Future Development

The mote-FVF algorithm has proven to perform well when applied to a centralized wireless sensor network. However, a more rigorous synchronization process remains to be developed to enhance the robustness of the algorithm in environments that require a high sensing frequency. In addition, all the sensor-related parameters in the mote-FVF algorithm are currently set equal, since the sensors are of the same type. The mote-FVF algorithm could provide more accurate results if the reliability and importance of each reading were distinguished according to the unique characteristics or location of the sensors.
This work leveraged the motes’ wireless communications, yet did not make use of their computational and storage capabilities. Therefore, the next stages of development will focus on embedding the mote-FVF code into the motes (as opposed to the base station), and on designing a decentralized network capable of performing mote-FVF within each cluster. In theory the mote-FVF algorithm should succeed when performed on either raw sensor data or pre-processed intra-network data. Nevertheless, optimal tuning of the algorithm’s parameters and collaboration of motes within and between clusters are expected to be non-trivial.
7 MOTE-BASED ACTUATION
7.1 Analysis of Mote Actuation Ability

As the motes were originally designed for sensing, computation, and communication, there is no official documentation regarding control of the motes’ actuating capabilities. One can imagine that a network of mote sensors would be much more versatile if it could react to sensed data by directly controlling the system actuation. Mote platforms and sensor boards can be used to perform actuation tasks as follows. There are six 3V DC power supplies on the mote platform; if connected with the prototyping sensor board (as opposed to the standard sensor board), four of the six power supplies can be used to encode actuation signals, while the remaining two are dedicated to the onboard photoconductor and thermistor. By properly programming TinyOS, the four available power ports can be used to output 4-bit binary digital actuation signals. A 4-bit D/A (digital-to-analog) converter can then decode the digital actuation signals into analog signals that are sent to the actuator’s driver.
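The 4-bit encoding amounts to driving each of the four ports high or low according to one bit of the state. A minimal sketch of the arithmetic follows; the actual pin manipulation would live in the TinyOS application, and the function name is illustrative.

```python
def state_to_port_bits(state):
    """Encode an actuation state (0-15) as on/off levels for the four
    3V power ports on the prototyping board, most significant bit
    first; True means the port is driven high."""
    if not 0 <= state <= 15:
        raise ValueError("only 16 states fit in 4 bits")
    return tuple(bool(state & (1 << b)) for b in (3, 2, 1, 0))

print(state_to_port_bits(9))             # prints (True, False, False, True)
```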
7.2 Prototype Fluorescent Lighting Actuation

Figure 31 illustrates the initial architecture developed for actuating a dimmable fluorescent light source using Smart Dust motes. Since four binary power ports on the mote are available for actuating, 16 distinct illuminance states can be coded into the motes’ operating system. The digital actuation signals are sent to a 4-bit D/A converter, and the analog outputs from the converter are sent to an operational amplifier (op-amp) in current-to-voltage configuration. Output from the op-amp consists of analog signals ranging from 0 to 10V DC. The light fixture used in previous experiments was modified by replacing the ISOLé remote control system with the mote actuation circuit; i.e., the dimmable ballast is controlled by the 0~10V output signals from the op-amp according to the actuating command from the mote. In addition to the components for mote-based actuation mentioned above, some supplementary components are also necessary: a +5V and a -15V DC power supply are required for the D/A converter, and a ±15V DC power supply circuit for the op-amp. To provide the required voltages for this initial architecture, an adjustable DC power supply was used in combination with a regulator. Fig.32 shows a revised version of this architecture that replaces the DC power supply with a transformer and rectifier, so that the system can be powered directly from a 120V power line and integrated into existing lighting systems.
[Block diagram: actuating command (state) → mote radio receiver with prototype board → 4-bit digital-to-analog converter → operational amplifier → 0~10V → dimmable ballast; a power supply and regulator provide the +5V, -15V, and +15V rails]
Fig.31 Initial Actuation Architecture
[Block diagram as in Fig.31, with the power supply replaced by a transformer, rectifier, and regulators deriving the +5V, -15V, and +15V rails from the 120V power line]
Fig.32 Revised Actuation Architecture
7.3 Implementation of Mote-based Actuation

Two demonstrations were conducted to verify the functionality of the mote-based actuation architecture. In the first experiment, the actuating mote was programmed to output an actuation signal corresponding to the lowest setting, or the lowest illuminance level. The actuation state was made to increase step-by-step every five seconds. Upon reaching the highest illuminance (the 16th state), the state was decreased step-by-step until reaching the lowest setting. The actuation architecture that was implemented was successful in changing the luminance of the lamp. In addition, an interesting observation was made: the response of the dimmable ballast to the control voltage is nonlinear. Consequently, the change in illuminance between two consecutive states was not always noticeable. Further calibration of the dimmable ballast is necessary to accurately define the illuminance associated with each actuation state.
In the second demonstration an actuating mote responded to commands from a sensing mote. The sensing mote, equipped with a standard MICA sensor board, sensed the illuminance every 300 milliseconds. The command signal sent to the actuating mote was to increase/decrease the actuation state if the sensed illuminance was under/over 500 lux. In this way, an approximately constant illuminance was maintained at the work surface. As intended, the illuminance increased when the photoconductor was deliberately shaded, and decreased when it was illuminated with a second light source.
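The control rule from this demonstration is a simple bang-bang step, sketched below; the function name and the clamping are illustrative, while the 500-lux target and the 16 states come from the text.

```python
def next_state(state, lux, target=500.0, n_states=16):
    """One control step: raise the actuation state when sensed
    illuminance is under target, lower it when over, clamped to the
    available states."""
    if lux < target:
        state += 1
    elif lux > target:
        state -= 1
    return min(max(state, 0), n_states - 1)

print(next_state(7, 420.0))   # under target: step up, prints 8
print(next_state(0, 650.0))   # over target at the floor: stays 0
```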
7.4 Challenges in Mote-Based Actuation

Several issues were raised when evaluating the feasibility of Smart Dust motes for actuation. The number of power ports available for actuation places constraints on the number of actuation states that can be encoded, and thus on the resolution of actuation. Four power ports are available with the prototyping sensor board, and thus only 16 states can be defined; if all six power ports were dedicated to actuation, 64 states could be defined. For lighting control, 16 states are sufficient given the sensitivity of the human eye. However, 64 states might not suffice for applications in which precise control is crucial.
Another challenge in mote-based actuation arises when the mote must provide a continuous voltage to maintain the actuating state. Given the limitations on battery life, a mote will drain its batteries quite quickly in such a situation. A potential solution is to introduce another regulator circuit in order to power the mote with the 120V line electricity that powers the lamps.
Platform reliability also poses challenges to the implementation of mote-based actuation. In Section 5.6 it was noted that packet values are likely to be changed due to interference from several sources common to office environments. If a packet containing an actuation command is altered during communication, unintended behavior may result, causing serious damage. Accordingly, a method for verifying the integrity of the actuating command should be investigated.
8 CONCLUSIONS
“Smart lighting” is a promising energy-saving approach to the control of commercial buildings, an application with challenging properties such as large operating ranges and uneven distributions in the sensed environment. Smart Dust motes, equipped with various sensors, were proven suitable for implementing a sensor network for monitoring as well as actuating a daylighting system. Smart Dust motes are additionally useful for daylighting systems because they eliminate the need for extensive rewiring in the case of retrofits.
Characterization of the photoconductors on standard MICA sensor boards revealed their exponential responses to fluorescent light. Encoding an exponential equation to map sensor output to physical units such as lux consumes significant computational resources, and should be avoided. The large variability between photoconductors also suggests that a more robust, linear light sensor would greatly improve the accuracy and efficiency of the mote-FVF fused value. Alternative types of sensors are recommended for use in combination with the accelerometer to provide occupancy information. The linearity of, and small variability between, mote thermistors indicate that Smart Dust motes might be appropriate for HVAC control in addition to lighting. An environment with better-controlled temperature would be helpful for accurately characterizing the thermistors, while the effect of illuminance-temperature interference requires more careful investigation.
The proposed mote-FVF algorithm inherits the strengths of the FUSVAF algorithm while overcoming three of its major drawbacks: failure at near-zero operating conditions, failure under large changes in consecutive sensed values, and the need for highly accurate initial guesses to begin the algorithm. The mote-FVF algorithm has proven successful in effectively extracting pertinent illuminance information, and is consistently able to reject readings that match the failure patterns identified in mote sensor networks used for commercial lighting applications.
9 FUTURE RESEARCH
As pointed out in Section 6.4, the development of an efficient synchronization algorithm is critical to making mote-FVF more powerful. One method of distinguishing packet loss from packet delay is to determine the optimal time between iterations of mote-FVF, where the optimal time lapse balances the system response against the volume of data gathered in one time stamp.
In this research, sensor calibration and sensor fusion were separate endeavors. However, recent papers discussing self-calibration algorithms, e.g., Whitehouse et al., have framed calibration as a parameter estimation problem for sensors measuring distances [30]. Feng et al. applied a point-source model and treated photosensor on-line calibration as a nonlinear optimization problem [31]. The appropriate self-calibrating method depends on the characteristics of the sensor and of the physical phenomenon that it senses. The existing research provides a useful starting point for future development of a self-calibrating algorithm for Smart Dust motes. If a linear photosensor can be integrated onto mote sensor boards, the mote-FVF algorithm would be more powerful and able to provide more accurate results.
Smart Dust motes, equipped with computational and memory capabilities, allow more sophisticated configurations of validation and fusion algorithms for future research. The design of the mote-FVF algorithm does not require a specific data type or a specific type of system; it needs only to be tuned to fit the system. This implies that the algorithm has potential for fusion of intra-network sensor-cluster data, and even of different data types from disparate sensors. One possibility would be to distribute the mote-FVF calculations among clusters of motes, using local illuminance information; the fused cluster data would then be transmitted to the base station to arrive at a global calculation. This architecture lends itself to distributed actuation and control as well.
Methods for sensor validation and fusion based on fuzzy logic are unique in that they do not require a mathematical model of the system. Several competing approaches have also demonstrated powerful capabilities in sensor validation and fusion for diverse applications, with the exception of sensor networks. Admittedly it is challenging to derive a model for most massively distributed sensor networks. However, model-based approaches for the validation and fusion of data from sensor networks should be pursued,
allowing for a complete comparison of fuzzy and crisp methods. In particular, stochastic methods such as Kalman filtering and probabilistic data association filtering (PDAF) are of interest. For example, Dr. Alag developed a vector-based dynamic Bayesian network for sensor validation and fusion [32]. Such an algorithm provides a compelling foundation for development of a crisp algorithm that could be compared to the fuzzy mote-FVF algorithm.
10 REFERENCES
[1] Agogino, A. M., Granderson, J. and Qiu, S., 2002, “Sensor Validation and Fusion with Distributed ‘Smart Dust’ Motes for Monitoring and Enabling Efficient Energy Use,” Proc., AAAI 2002 Spring Symposium, Stanford, CA, pp. 51-58.
[2] Agogino, A., 2004, “MEMS ‘Smart Dust Motes’ for Designing, Monitoring and Enabling Efficient Lighting,” MICRO Report 03-001, Berkeley, CA.
[3] Yozell-Epstein, R., 2003, “Intelligent Lighting System Benchmarking,” Masters Report, Department of Mechanical Engineering, UC Berkeley.
[4] UC Berkeley, “Cots Dust: Large Scale Models for Smart Dust,” Online Documentation, http://www-bsac.eecs.berkeley.edu/archive/users/hollar-seth/macro_motes/macromotes.html.
[5] UC Berkeley, 2003, “TinyOS Documentation,” Online Documentation, http://www.tinyos.net/tinyos-1.x/doc/index.html.
[6] Hall, D. L. and Llinas, J., 1997, “An Introduction to Multisensor Data Fusion,” Proceedings of the IEEE, 85, No. 1, pp. 6-23.
[7] Crossbow, www.xbow.com.
[8] Dust Networks, www.dustnetworks.com.
[9] Millennial Net, www.millennial.net.
[10] Ember, www.ember.com.
[11] TinyOS, www.tinyos.net.
[12] Hollar, S., 2000, “COTS Dust,” Master’s Thesis, University of California at Berkeley, Berkeley, CA.
[13] Maurer, W., 2003, “The Scientist and Engineer’s Guide to TinyOS Programming,” Online Documentation, http://ttdp.org/tpg/html.
[14] Horton, M., Culler, D., Pister, K., Hill, J., Szewczyk, R. and Woo, A., 2002, “MICA: The Commercialization of Microsensor Motes,” Sensors Online, http://www.sensorsmag.com/articles/0402/40/main.shtml.
[15] Crossbow Technology, 2003, “MPR - Mote Processor Radio Board, MIB - Mote Interface/Programming Board User’s Manual,” Rev. A, San Jose, CA.
[16] Clairex Technology, 2001, “CL9P Epoxy-Encapsulated Photoconductors,” Data Sheet, Plano, TX.
[17] Panasonic, “Multilayer Chip NTC Thermistors,” Data Sheet.
[18] Panasonic, “Omnidirectional Back Electret Condenser Microphone Cartridge Series WM-62A/62C/62CC/62K/62B,” Data Sheet.
[19] Honeywell, “1- and 2-Axis Magnetic Sensors HMC1001/1002 HMC1021/1022,” Data Sheet, Plymouth, MN.
[20] Analog Devices Inc., 2000, “Low-Cost +/-2g Dual-Axis Accelerometer with Duty Cycle Output ADXL202E,” Data Sheet, Norwood, MA.
[21] Crossbow Technology, 2003, “MTS/MDA Sensor and Data Acquisition Boards User’s Manual,” Rev. B, San Jose, CA.
[22] Gay, D., Levis, P., Behren, R., Welsh, M., Brewer, E. and Culler, D., 2003, “The nesC Language: A Holistic Approach to Networked Embedded Systems,” Proc., Conference on Programming Language Design and Implementation (PLDI) 2003, San Diego, CA, pp. 1-11.
[23] Agogino, A., Naassan, K., and Tseng, M., 1992, “Intelligent Sensor Validation for Process Monitoring and Control,” MICRO Report 90-003, Berkeley, CA.
[24] Frolik, G. and Abdelrahman, M., 2000, “Synthesis of Quasi Redundant Sensor Data: A Probabilistic Approach”, Proc., American Control Conference, Chicago, IL, pp. 2917-2921.
[25] Goebel, K. and Agogino, A. M., 1996, “An Architecture for Fuzzy Sensor Validation and Fusion for Vehicle Following in Automated Highways,” Proc., 29th International Symposium on Automotive Technology and Automation (ISATA), Dedicated Conference on Fuzzy Systems/Soft Computing in the Automotive and Transportation Industry, Florence, Italy, pp. 203-209.
[26] Goebel, K. and Agogino, A.M., 1999, “Fuzzy Sensor Fusion for Gas Turbine Power Plants,” Proc., SPIE Conference on Sensor Fusion: Architectures, Algorithms, and Applications III, Orlando, FL, Vol. 3719, pp. 52-61.
[27] Abdelrahman, M., Kandasamy, P. and Frolik, J., 2000, “A Methodology of Fusion for Redundant Sensors,” Proc., 2000 American Control Conference, Chicago, IL, Vol. 4, pp. 2922-2966.
[28] Harris, J., 2000, An introduction to fuzzy logic applications, Kluwer AcademicPublishers, Dordrecht.
[29] Minolta, 1999, "Illuminance Meter T-10/T-10M," Instruction Manual.
[30] Whitehouse, K. and Culler, D., 2002, "Calibration as Parameter Estimation in Sensor Networks," Workshop on Wireless Sensor Networks and Applications (WSNA), Atlanta, GA, pp. 59-67.
[31] Feng, J., Megerian, S. and Potkonjak, M., 2003, "Model-Based Calibration for Sensor Networks," Proc., IEEE International Conference on Sensors, Toronto, Canada, pp. 737-742.
[32] Alag, S., 1996, "A Bayesian Decision-Theoretic Framework for Real-Time Monitoring and Diagnosis of Complex Systems: Theory and Application," Doctoral Dissertation, Department of Mechanical Engineering, UC Berkeley.
APPENDIX
A.1 Mote Photoconductor Mapping Equations

Mote 1 (R-square = 0.9998):
y = 3.3806×10^−14 x^7 − 1.8591×10^−10 x^6 + 4.3697×10^−7 x^5 − 5.6894×10^−4 x^4 + 0.44312 x^3 − 2.0644×10^2 x^2 + 5.3263×10^4 x − 5.8709×10^6

Mote 2 (R-square = 0.9998):
y = 3.3141×10^−14 x^7 − 1.7935×10^−10 x^6 + 4.1526×10^−7 x^5 − 5.3316×10^−4 x^4 + 4.0989×10^−1 x^3 − 1.8867×10^2 x^2 + 4.8137×10^4 x − 5.2514×10^6

Mote 3 (R-square = 0.9981):
y = 2.4297×10^−3 x^4 − 8.9705 x^3 − 1.2421×10^4 x^2 − 7.6449×10^6 x + 1.7647×10^9

Mote 4 (R-square = 0.9999):
y = 3.5953×10^−13 x^7 − 2.169×10^−9 x^6 + 5.6014×10^−6 x^5 − 8.0264×10^−3 x^4 + 6.8919 x^3 − 3.546×10^3 x^2 + 1.0122×10^6 x − 1.2366×10^8

Mote 6 (R-square = 0.9999):
y = 8.2457×10^−14 x^7 − 4.6826×10^−10 x^6 + 1.1371×10^−6 x^5 − 1.5305×10^−3 x^4 + 1.2327 x^3 − 5.8412×10^2 x^2 + 1.5861×10^5 x − 1.809×10^7

Mote 7 (R-square = 0.9592):
y = 8.7449×10^−13 x^8 − 6.5201×10^−9 x^7 + 2.1256×10^−5 x^6 − 3.9574×10^−2 x^5 + 4.602×10^1 x^4 − 3.4231×10^4 x^3 + 1.5904×10^7 x^2 − 4.2197×10^9 x + 4.8954×10^11

Mote 8 (R-square = 0.9977):
y = 3.995×10^−2 x^3 − 1.1421×10^2 x^2 + 1.0884×10^5 x − 3.4574×10^7

Mote 9 (R-square = 0.9997):
y = 1.784×10^−13 x^7 − 1.0371×10^−9 x^6 + 2.5798×10^−6 x^5 − 3.5592×10^−3 x^4 + 2.9411 x^3 − 1.4556×10^3 x^2 + 3.9951×10^5 x − 4.6903×10^7

Mote 10 (R-square = 0.9999):
y = 1.815×10^−13 x^7 − 1.0734×10^−9 x^6 + 2.7175×10^−6 x^5 − 3.818×10^−3 x^4 + 3.2147 x^3 − 1.622×10^3 x^2 + 4.5411×10^5 x − 5.4416×10^7

Mote 11 (R-square = 0.9999):
y = 4.0498×10^−14 x^7 − 2.2557×10^−10 x^6 + 5.3727×10^−7 x^5 − 7.093×10^−4 x^4 + 5.6046×10^−1 x^3 − 2.6502×10^2 x^2 + 6.9437×10^4 x − 7.7754×10^6

Mote 12 (R-square = 0.9996):
y = 1.3341×10^−13 x^7 − 7.8239×10^−10 x^6 + 1.9635×10^−6 x^5 − 2.7332×10^−3 x^4 + 2.2789 x^3 − 1.1381×10^3 x^2 + 3.1519×10^5 x − 3.7341×10^7

Table 1. Individual equations mapping mote output to illuminance
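For reference, a mapping polynomial such as the Mote 1 fit above can be evaluated with Horner's rule, which avoids forming the very large intermediate powers of x explicitly. The sketch below is illustrative only: the function name is my own, and the Mote 1 coefficients are transcribed from the layout-damaged table, so treat them as a transcription rather than authoritative values.

```python
# Coefficients of the Mote 1 illuminance mapping polynomial (7th order),
# listed from the highest power of x down to the constant term.
# Transcribed from Table 1; treat as illustrative, not authoritative.
MOTE1_COEFFS = [
    3.3806e-14, -1.8591e-10, 4.3697e-7, -5.6894e-4,
    0.44312, -2.0644e2, 5.3263e4, -5.8709e6,
]

def map_output_to_lux(x, coeffs=MOTE1_COEFFS):
    """Map a raw mote ADC reading x to illuminance via the fitted polynomial."""
    y = 0.0
    for c in coeffs:
        y = y * x + c  # Horner's rule: one multiply-add per coefficient
    return y
```

With a 10-bit ADC, x ranges over 0-1023; Horner's rule keeps the evaluation numerically stable despite coefficients spanning some 20 orders of magnitude.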
A.2 Mote Thermistor Mapping Equations

Mote 1 (R-square = 0.9942):
y = 1.6623×10^−7 x^3 − 2.3694×10^−4 x^2 + 2.2271×10^−1 x − 12.088

Mote 2 (R-square = 0.9963):
y = −8.1042×10^−10 x^4 + 1.5015×10^−6 x^3 − 1.0021×10^−3 x^2 + 3.8607×10^−1 x − 24.157

Mote 3 (R-square = 0.9901):
y = 4.0762×10^−7 x^3 − 5.3635×10^−4 x^2 + 3.3127×10^−1 x − 21.114

Mote 4 (R-square = 0.9963):
y = −1.0732×10^−9 x^4 + 1.8451×10^−6 x^3 − 1.1653×10^−3 x^2 + 4.2471×10^−1 x − 23.853

Mote 5 (R-square = 0.9885):
y = −1.0836×10^−9 x^4 + 1.8808×10^−6 x^3 − 1.2094×10^−3 x^2 + 4.4604×10^−1 x − 17.88

Mote 6 (R-square = 0.9929):
y = 3.3935×10^−7 x^3 − 4.5239×10^−4 x^2 + 3.0349×10^−1 x − 18.318

Mote 7 (R-square = 0.9917):
y = 5.1641×10^−7 x^3 − 6.1934×10^−4 x^2 + 3.4878×10^−1 x − 17.414

Mote 8 (R-square = 0.9836):
y = 2.5468×10^−7 x^3 − 3.632×10^−4 x^2 + 2.7336×10^−1 x − 10.745

Mote 9 (R-square = 0.9952):
y = 1.9512×10^−7 x^3 − 2.8539×10^−4 x^2 + 2.4727×10^−1 x − 14.699

Mote 10 (R-square = 0.9956):
y = −7.4289×10^−10 x^4 + 1.3751×10^−6 x^3 − 9.2326×10^−4 x^2 + 3.7934×10^−1 x − 20.882

Mote 11 (R-square = 0.9966):
y = 3.8435×10^−7 x^3 − 4.8728×10^−4 x^2 + 3.0644×10^−1 x − 20.5

Crossbow's conversion formula for the MICA sensor board:
y = 1 / (1.30705×10^−3 + 2.14381×10^−4 × ln(Rthr) + 9.3×10^−8 × [ln(Rthr)]^3) − 273,
where Rthr = 10^4 (1023 − x) / x

Mote 13 (R-square = 0.9988):
y = 5.3088×10^−8 x^3 − 8.9729×10^−5 x^2 + 1.8766×10^−1 x − 67.581

Mote 14 (R-square = 0.9994):
y = 1.2522×10^−7 x^3 − 2.1621×10^−4 x^2 + 2.6029×10^−1 x − 81.24

Crossbow's conversion formula for the prototyping board:
y = 1 / (1.010024×10^−3 + 2.42127×10^−4 × ln(Rthr) + 1.46×10^−7 × [ln(Rthr)]^3) − 273,
where Rthr = 10^4 (1023 − x) / x

Table 2. Temperature mapping equations for each mote
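Crossbow's conversion is the Steinhart-Hart thermistor equation applied to the resistance recovered from the 10 kΩ voltage divider. A minimal sketch of the MICA-board version follows; the function name is my own, and the divider expression Rthr = 10^4(1023 − x)/x is an assumption reconstructed from the damaged layout of Table 2.

```python
import math

def mica_adc_to_celsius(x,
                        a=1.30705e-3,   # Steinhart-Hart coefficients for the
                        b=2.14381e-4,   # MICA sensor board, as transcribed
                        c=9.3e-8):      # from Table 2
    """Convert a 10-bit thermistor ADC reading x to degrees Celsius."""
    # Thermistor resistance from the 10 kOhm divider
    # (assumed form of the Rthr expression in Table 2).
    r_thr = 1e4 * (1023.0 - x) / x
    ln_r = math.log(r_thr)
    kelvin = 1.0 / (a + b * ln_r + c * ln_r ** 3)  # Steinhart-Hart equation
    return kelvin - 273.0
```

As a sanity check on the transcription, a mid-scale reading of x = 511.5 makes the divider report Rthr = 10 kΩ, and the formula returns roughly 25 °C, the nominal resistance point of a 10 kΩ NTC thermistor at room temperature.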