Distributed Adaptive Estimation and Tracking using Ad Hoc WSNs
Gonzalo Mateos
ECE Department, University of Minnesota
Acknowledgment: ARL/CTA grant no. DAAD19-01-2-0011
USDoD ARO grant no. W911NF-05-1-0283
Minneapolis, MN, July 29, 2009

Wireless Sensor Networks (WSNs)
- Large number of wireless sensors: randomly deployed, inexpensive, resource constrained
- Unique feature: cooperative effort of sensors
- Promising technology for crucial applications: environmental monitoring, fault diagnosis in the process industry, protection of critical infrastructure, surveillance systems
- Renewed interest in distributed computing

Two Prevailing Topologies
- [Figure: FC-based WSN vs. ad hoc WSN]
- Why ad hoc WSNs? Less power consumption as the WSN scales (geographically), plus improved robustness to sensor failures, which together yield an increased life expectancy of the WSN

Motivation
- Estimation using ad hoc WSNs raises exciting challenges: communication constraints; limited power budget; lack of hierarchy and in-network processing; consensus
- Unique features: the environment is constantly changing (e.g., the WSN topology); lack of, or variations in, statistical information at the sensor level
- Bottom line: estimation algorithms must be resource efficient, simple and flexible, and adaptive and robust to changes
- [Figure: ad hoc WSN with single-hop communications]

Subject of the Thesis
- Distributed estimation/tracking algorithms using ad hoc WSNs: in-network processing of sensor observations; stability/convergence analysis; quantifiable MSE (tracking) performance
- Distributed (D-) least mean-square (LMS) and recursive least-squares (RLS): affordable complexity; do not require a data model to be applicable; online data enriches the estimation process; can track slowly time-varying processes
- Explore the complexity vs. performance tradeoff

This Work in Context
- Single-shot distributed estimation algorithms: consensus averaging [Xiao-Boyd '05], [Tsitsiklis-Bertsekas '86, '97]; incremental strategies [Rabbat-Nowak et al. '05]; deterministic and random parameter estimation [Schizas et al. '06]
- Consensus-based Kalman tracking using ad hoc WSNs: MSE-optimal filtering and smoothing [Schizas et al. '07]; suboptimal approaches [Olfati-Saber '05], [Spanos et al. '05]
- Distributed adaptive estimation and filtering: LMS and RLS learning rules [Lopes-Cattivelli-Sayed '06-'08]
- Optimization tools in distributed estimation: incremental strategies; primal-dual approaches; the alternating-direction method of multipliers (AD-MoM)

Outline
- Part I: the D-LMS algorithm: algorithm construction and operation; stability results; tracking performance analysis
- Part II: the D-RLS algorithm: reduced complexity variants; stability and steady-state MSE performance analysis
- Concluding remarks and future research directions

Problem Statement
- Ad hoc WSN with J sensors; single-hop communications only
- Sensor j's neighborhood N_j; connectivity information captured in the network graph
- Zero-mean additive (e.g., Rx) noise on the inter-sensor links
- Each sensor j, at time instant t, acquires a regressor h_j(t) and a scalar observation x_j(t), both zero-mean and spatially uncorrelated
- Goal: estimate a signal vector s_0
- Least mean-squares (LMS) estimation problem of interest (formulated below)
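A minimal rendering of the network-wide LMS cost, assuming the standard notation of the cited D-LMS papers (regressors h_j(t), observations x_j(t)):

\[
\hat{\mathbf{s}} \;=\; \arg\min_{\mathbf{s}} \; \sum_{j=1}^{J} \mathbb{E}\!\left[\big(x_j(t) - \mathbf{h}_j^T(t)\,\mathbf{s}\big)^2\right].
\]
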
Power Spectrum Estimation
- Find spectral peaks of a narrowband (e.g., seismic) source
- The source obeys an AR model; source-sensor multi-path channels are modeled as FIR filters with unknown orders and tap coefficients
- Each sensor observes a filtered, noisy version of the source (a model sketch follows)
- Challenges: the data model is not completely known; channels fade at the frequencies occupied by the source
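One plausible reconstruction of the signal model, assuming (as in the cited D-LMS journal paper's spectrum-estimation example) an AR source observed through FIR channels; the symbols alpha_tau, c_{j,l}, and eta_j(t) are assumptions of this sketch:

\[
s(t) = \sum_{\tau=1}^{p} \alpha_\tau\, s(t-\tau) + w(t), \qquad
x_j(t) = \sum_{l=0}^{L_j} c_{j,l}\, s(t-l) + \eta_j(t).
\]

Each x_j(t) is then an ARMA process, which can be approximated by a (higher-order) AR model; defining h_j(t) := [x_j(t-1), ..., x_j(t-p)]^T recovers the linear regression form, and the estimated AR coefficients in s_0 reveal the spectral peaks.
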
Centralized Approaches
- If the observations and regressors are jointly stationary with known (cross-)covariances, the Wiener solution applies
- If the (cross-)covariance matrices are available, steepest descent converges while avoiding the matrix inversion
- If the (cross-)covariance information is not available or is time-varying, low complexity suggests (centralized, C-) LMS adaptation (in symbols below)
- Goal: develop a distributed (D-) LMS algorithm for ad hoc WSNs
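In symbols (a sketch, assuming aggregate covariance notation R_h := sum_j E[h_j(t) h_j^T(t)] and r_hx := sum_j E[h_j(t) x_j(t)]):

\[
\mathbf{s}_W = R_h^{-1}\,\mathbf{r}_{hx} \qquad \text{(Wiener solution)},
\]
\[
\mathbf{s}(t+1) = \mathbf{s}(t) + \mu \sum_{j=1}^{J} \mathbf{h}_j(t)\big[x_j(t) - \mathbf{h}_j^T(t)\,\mathbf{s}(t)\big] \qquad \text{(C-LMS)}.
\]
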
Algorithmic Construction
- Consider the convex, constrained optimization with per-sensor copies of the parameter and single-hop consensus constraints; for a connected WSN it is equivalent to the centralized problem (stated below)
- Two key steps in deriving D-LMS:
- 1) Resort to the AD-MoM [Glowinski '75] to gain the desired degree of parallelization
- 2) Apply stochastic approximation ideas to cope with the unavailability of statistical information
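A plausible rendering of the constrained problem, with per-sensor copies s_j of the common parameter:

\[
\min_{\{\mathbf{s}_j\}} \; \sum_{j=1}^{J} \mathbb{E}\!\left[\big(x_j(t) - \mathbf{h}_j^T(t)\,\mathbf{s}_j\big)^2\right]
\quad \text{s.t.} \quad \mathbf{s}_j = \mathbf{s}_{j'}, \;\; j' \in \mathcal{N}_j, \;\; j = 1,\ldots,J.
\]

For a connected WSN the single-hop constraints enforce network-wide agreement, so this is equivalent to the centralized LMS problem.
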
D-LMS Recursions and Operation
- In the presence of communication noise, each sensor j at time t alternates two steps: Step 1 updates the local multipliers, Step 2 updates the local estimate (a sketch of both steps follows)
- Reduced communications are possible with `bridge' sensors
- [Figure: per-step message exchanges; Step 1: receive/transmit to form the multiplier updates; Step 2: receive/transmit to form the local estimates]
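A minimal per-iteration sketch of the two steps under ideal links, assuming recursions of the general form in the cited D-LMS journal paper; the exact scaling constants and time indexing are assumptions of this sketch, not a transcription of the slide:

import numpy as np

def d_lms_step(s, v, h, x, neighbors, mu=0.01, c=1.0):
    """One D-LMS iteration across all sensors (ideal links).

    s         : dict j -> local estimate, numpy array of shape (p,)
    v         : dict (j, k) -> multiplier vector of shape (p,)
    h, x      : dict j -> current regressor (p,) and scalar observation
    neighbors : dict j -> list of single-hop neighbors of sensor j
    """
    # Step 1: multiplier updates (the PI/consensus loop), using the
    # current local estimates received from one-hop neighbors.
    for j, nbrs in neighbors.items():
        for k in nbrs:
            v[(j, k)] = v[(j, k)] + 0.5 * c * (s[j] - s[k])
    # Step 2: LMS-type local estimate updates, combining a stochastic
    # gradient step with multiplier and consensus correction terms.
    s_new = {}
    for j, nbrs in neighbors.items():
        err = x[j] - h[j] @ s[j]                       # local a priori error
        grad = 2.0 * h[j] * err                        # instantaneous gradient
        penalty = sum(v[(j, k)] - v[(k, j)] for k in nbrs)
        consensus = c * sum(s[j] - s[k] for k in nbrs)
        s_new[j] = s[j] + mu * (grad - penalty - consensus)
    return s_new, v

# Example wiring (hypothetical 3-sensor line network):
# neighbors = {0: [1], 1: [0, 2], 2: [1]}
# s = {j: np.zeros(2) for j in neighbors}
# v = {(j, k): np.zeros(2) for j in neighbors for k in neighbors[j]}
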
Consensus Controller Interpretation
- Consensus error at sensor j: deviation of the local estimate from its neighbors' estimates
- [Block diagram: sensor j's local LMS algorithm in feedback with a PI regulator closing the consensus loop]
- Superposition of two learning mechanisms: a purely local LMS-type adaptation, and a PI consensus loop that tracks the consensus reference

D-LMS in Action
- Test case: WSN with i.i.d. regressors and observations generated by the linear data model
- [Figure: per-sensor D-LMS estimates tracking the true time-varying weight]

Error-form D-LMS
- Study the dynamics of the local errors
- Define the local estimation errors and the local sums of multipliers per sensor
- (a1) Sensor observations obey x_j(t) = h_j^T(t) s_0(t) + eps_j(t), where the zero-mean white noise eps_j(t) has variance sigma^2_{eps,j}
- Lemma: Under (a1), the stacked error quantities obey a noise-driven linear (time-varying) recursion whose transition matrix consists of blocks determined by the instantaneous regressors, the step-size, and the WSN topology

Performance Metrics
- Local (per-sensor) and global (network-wide) metrics of interest
- (a2) h_j(t) is white Gaussian with covariance matrix R_{h,j}
- (a3) h_j(t) and eps_j(t) are independent
- Define the local error s~_j(t) := s_0(t) - s_j(t)
- Customary figures of merit:
  - Local: MSD_j(t) := E[ ||s~_j(t)||^2 ], EMSE_j(t) := E[ (h_j^T(t) s~_j(t-1))^2 ]
  - Global: MSD(t) and EMSE(t), obtained by summing the local metrics over all sensors

Tracking Performance
- (a4) Random-walk model: s_0(t+1) = s_0(t) + zeta(t), where zeta(t) is zero-mean white with covariance R_zeta, independent of the regressors and the observation noise
- A convenient change of variables (c.v.) yields an augmented error vector
- Proposition: Under (a2)-(a4), the covariance matrix Sigma(t) of the augmented error vector obeys a linear matrix recursion; equivalently, after vectorization, a linear recursion with a time-invariant transition matrix (see below)
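In symbols, with Phi and b denoting the (assumed) time-invariant transition matrix and driving term:

\[
\mathrm{vec}\big(\Sigma(t+1)\big) \;=\; \Phi\,\mathrm{vec}\big(\Sigma(t)\big) + \mathbf{b},
\qquad
\mathrm{vec}\big(\Sigma_\infty\big) \;=\; (I-\Phi)^{-1}\mathbf{b},
\]

where the fixed point Sigma_infty exists whenever the spectral radius of Phi is less than one, the MSE stability condition invoked on the next slide.
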
Stability and Steady-State Performance
- Proposition: Under (a1)-(a4), the D-LMS algorithm achieves consensus in the mean, i.e., the mean estimates at all sensors converge to the true parameter, provided the step-size is sufficiently small
- Proposition: Under (a1)-(a4), the D-LMS algorithm is MSE stable for sufficiently small step-sizes
- MSE stability follows, though explicit bounds on the step-size are intractable to obtain
- From stability, the covariance recursion has bounded entries; its fixed point is the steady-state (s.s.) covariance, which enables evaluation of all figures of merit in s.s.

Step-size Optimization
- If the parameter vector is time-varying, there is an optimum step-size minimizing the s.s. EMSE
- Not surprising: excessive adaptation inflates the MSE, while a vanishing step-size means the tracking ability is lost
- Hard to obtain the optimum in closed form, but easy numerically (a 1-D search; a sketch follows)
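A sketch of the 1-D numerical search, assuming the s.s. EMSE is evaluated through the fixed point of the vectorized covariance recursion; Phi_of_mu, b_of_mu, and the weight vector w are hypothetical stand-ins for the analysis-derived quantities:

import numpy as np
from scipy.optimize import minimize_scalar

def ss_emse(mu, Phi_of_mu, b_of_mu, w):
    """Steady-state EMSE as a function of the step-size mu.

    Uses the fixed point vec(Sigma_inf) = (I - Phi)^{-1} b of the
    vectorized covariance recursion; the EMSE is modeled as a weighted
    sum of the steady-state covariance entries.
    """
    Phi, b = Phi_of_mu(mu), b_of_mu(mu)
    s_inf = np.linalg.solve(np.eye(Phi.shape[0]) - Phi, b)
    return float(w @ s_inf)

# 1-D bounded search over step-sizes in the MSE-stable range (bounds assumed):
# res = minimize_scalar(lambda mu: ss_emse(mu, Phi_of_mu, b_of_mu, w),
#                       bounds=(1e-4, 1e-1), method="bounded")
# mu_star = res.x
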
Available Extensions
- Results hold when communication noise is present
- Tracking an AR(1) signal vector
- Time-correlated, stationary ergodic regressors: estimation errors are weakly stochastic bounded [Solo '97]; almost sure exponential stability in the absence of noise; MSE performance analysis via stochastic averaging

Simulated Tests
- Setup: WSN with Rx AWGN
- Scenarios: random-walk model and time-invariant parameter
- Regressors: i.i.d.
- Observations: linear data model with WGN
- [Figure: D-LMS simulated performance for both scenarios]

Distributed RLS Estimation
- Motivation: fast convergence, and the increased complexity is affordable
- Second-order approach: the exponentially-weighted LS (EWLS) estimator (see below)
- The `forgetting' factor discounts past data, enabling tracking; a regularization matrix (scaled by a small constant) ensures well-posedness early on
- Equivalent reformulation for a connected ad hoc WSN; solve via the AD-MoM
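A plausible rendering of the EWLS estimator, with forgetting factor lambda and regularization matrix Phi_0 (symbols assumed):

\[
\hat{\mathbf{s}}(t) \;=\; \arg\min_{\mathbf{s}} \;
\sum_{\tau=0}^{t} \lambda^{t-\tau} \sum_{j=1}^{J} \big[x_j(\tau) - \mathbf{h}_j^T(\tau)\,\mathbf{s}\big]^2
\;+\; \lambda^{t}\,\mathbf{s}^T \Phi_0\, \mathbf{s}.
\]

Choosing lambda < 1 exponentially discounts past data, enabling tracking; lambda = 1 recovers the ordinary LS estimator.
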
D-RLS Algorithm
- In the presence of communication noise, each sensor j at time t alternates multiplier updates (Step 1) and local estimate updates (Step 2)
- Recursively compute the local (inverse) data covariance matrices
- When lambda = 1, the required inverse is updated recursively in O(p^2) operations via the matrix inversion lemma (a sketch follows)
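A sketch of the rank-one inverse update alluded to above: the standard matrix-inversion-lemma step of RLS, shown here for a generic forgetting factor (how it meshes with the D-RLS penalty terms follows the paper, not this sketch):

import numpy as np

def rls_inverse_update(P, h, lam=1.0):
    """Update the inverse (exponentially weighted) data covariance after
    receiving a new regressor h, in O(p^2) operations and without any
    explicit matrix inversion.

    P   : current inverse covariance, shape (p, p)
    h   : new regressor, shape (p,)
    lam : forgetting factor in (0, 1]
    """
    Ph = P @ h                                   # O(p^2)
    denom = lam + h @ Ph                         # scalar gain denominator
    return (P - np.outer(Ph, Ph) / denom) / lam  # rank-one correction
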
Remarks
- Communication exchanges and cost identical to D-LMS: cost is O(p) per exchange and no matrices are exchanged; raw data are not exchanged, hence resilience to communication noise
- Provides its own regularization, so the regularization matrix can even be set to zero
- Multiplier updates identical to D-LMS
- Increased cost in updating local estimates: O(p) for D-LMS vs. O(p^3) for D-RLS (O(p^2) when lambda = 1)
- D-LMS/D-RLS do not require a Hamiltonian cycle

D-RLS in Action
- Test case: WSN with i.i.d. regressors and observations generated by the linear data model
- Benchmarks: centralized RLS and diffusion RLS with Metropolis weights
- Schemes compared under ideal links and with communication noise (15 dB)
- [Figure: learning curves, global MSE(t) evolution, and normalized estimation error, global MSD(t) evolution, vs. time index t, for centralized RLS, D-RLS, and diffusion RLS under ideal links and 15 dB communication noise]

Spectrum Estimation Task
- WSN test case; the source is AR(4)
- FIR channels; sensors 3, 7, 15 and 27 have a channel zero at a frequency occupied by the source
- [Figure: D-LMS estimates (sensor 15) and global MSE(t) evolution]

D-RLS with Ideal Links
- Recall the noisy-links recursions; with ideal links (and suitable initialization) the local estimate updates simplify
- Introducing an aggregate of the local multipliers, per-neighbor multipliers need not be maintained separately
- Savings: multipliers are not exchanged
- Steps 1 and 2 reduce to simplified recursions

Alternating Minimization Algorithm
- Consider a convex separable problem, with its Lagrangian and augmented Lagrangian functions
- AD-MoM [Glowinski '75]: cycles through steps [S1]-[S3], minimizing the augmented Lagrangian over each primal variable block before updating the multipliers
- AMA [Tseng '91]: identical except that [S1] minimizes the ordinary (non-augmented) Lagrangian (spelled out below)
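For the generic problem min_{x,z} f(x) + g(z) s.t. Ax + Bz = d, with multiplier y and penalty coefficient c > 0, the two schemes differ only in [S1]:

\[
\text{[S1]}\;\; x^{k+1} = \arg\min_x \; L(x, z^k, y^k)
\quad \text{(AMA: ordinary Lagrangian; AD-MoM: augmented } L_c\text{)},
\]
\[
\text{[S2]}\;\; z^{k+1} = \arg\min_z \; L_c(x^{k+1}, z, y^k),
\qquad
\text{[S3]}\;\; y^{k+1} = y^k + c\,\big(A x^{k+1} + B z^{k+1} - d\big).
\]

Dropping the quadratic penalty in [S1] removes the extra coupling terms from that minimization, at the price of requiring strict convexity of f.
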
AMA-based D-RLS
- Goal: reduce the complexity of updating the local estimates
- Apply AMA (the EWLSE cost is strictly convex): the quadratic penalty drops out of the local estimate update
- Savings: for all lambda, the complexity of the estimate update is O(p^2)
- Steps 1 and 2 take the same form as before, with the simplified estimate update

MSE Analysis Preliminaries
- Analysis is challenging: finding the distribution of the sample (inverse) covariance matrices is typically intractable
- Resort to simplifying assumptions:
- (a1) Sensor observations obey x_j(t) = h_j^T(t) s_0 + eps_j(t), where the zero-mean white noise eps_j(t) has variance sigma^2_{eps,j}
- (a2) h_j(t) is white with covariance matrix R_{h,j}
- (a3) h_j(t), eps_j(t), and the communication noise are independent
- In addition, approximations for the expected (inverse) sample covariances are needed
- Approach: form an `averaged' error-form D-RLS system

Overview of Results
- As for D-LMS, a closed-form recursion for the error covariance; the approximation is only valid for large t
- The vectorized recursion yields a sufficient condition for MSE stability
- Solving a fixed-point equation gives the s.s. covariance, enabling evaluation of all figures of merit in s.s.
- Results account for communication noise
- Proposition: Under (a1)-(a3), the D-RLS algorithm achieves consensus in the mean, i.e., the mean estimates at all sensors converge to the true parameter, under an explicit condition on the algorithm parameters

Simulated Tests
- Setup: WSN with Rx AWGN
- Regressors: i.i.d.
- Observations: linear data model with WGN
- Both D-LMS and D-RLS are run for comparison

Concluding Summary
- Developed D-LMS/D-RLS algorithms for general ad hoc WSNs
- Estimators expressed as separable minimization problems
- Detailed stability and MSE performance analysis for D-LMS: stationary setup with a time-invariant parameter vector; tracking a random-walk / stable AR(1) process
- D-RLS: complexity vs. performance tradeoff; reduced-complexity variants; local and network-wide figures of merit in s.s.
- Ongoing research: tracking s.s. performance analysis for D-RLS; distributed Lasso for estimation of sparse signals

Related Publications

Journal publications:
- I. D. Schizas, G. Mateos, and G. B. Giannakis, "Distributed LMS for Consensus-Based In-Network Adaptive Processing," IEEE Transactions on Signal Processing, vol. 57, no. 6, pp. 2365-2381, June 2009.
- G. Mateos, I. D. Schizas, and G. B. Giannakis, "Distributed Recursive Least-Squares for Consensus-Based In-Network Adaptive Estimation," IEEE Transactions on Signal Processing, 2009 (to appear).
- G. Mateos, I. D. Schizas, and G. B. Giannakis, "Performance Analysis of the Consensus-Based Distributed LMS Algorithm," EURASIP Journal on Advances in Signal Processing, submitted May 2009.

Conference papers:
- G. Mateos, I. D. Schizas, and G. B. Giannakis, "Distributed Least-Mean Square Algorithm Using Wireless Ad Hoc Networks," Proc. of 45th Allerton Conf., Univ. of Illinois at U-C, Monticello, IL, Sept. 26-28, 2007.
- I. D. Schizas, G. Mateos, and G. B. Giannakis, "Distributed Recursive Least-Squares Using Wireless Ad Hoc Sensor Networks," Proc. of 41st Asilomar Conf. on Signals, Systems, and Computers, Pacific Grove, CA, Nov. 4-7, 2007.
- I. D. Schizas, G. Mateos, and G. B. Giannakis, "Stability Analysis of the Consensus-Based Distributed LMS Algorithm," Proc. of Intl. Conf. on Acoustics, Speech and Signal Processing, Las Vegas, NV, March 30-April 4, 2008.
- G. Mateos, I. D. Schizas, and G. B. Giannakis, "Closed-Form MSE Performance of the Distributed LMS Algorithm," Proc. of DSP Workshop, Marco Island, FL, January 4-7, 2009.

Deriving D-LMS
- Write the consensus constraints in a separable form and build the augmented Lagrangian
- AD-MoM cycles through steps [S1]-[S3]: alternating minimizations over the two primal variable blocks, followed by a multiplier (dual ascent) update

Deriving D-LMS (cont.)
- [S1]-[S3] boil down to simpler per-sensor recursions (one set of variables is redundant)
- The local estimate update follows from a first-order optimality condition
- Obtain the online recursion via a Robbins-Monro iteration (see below)
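A sketch of the Robbins-Monro substitution, under the notation used earlier: the gradient of the local cost involves (cross-)covariances that are unavailable, so it is replaced by its instantaneous approximation,

\[
\nabla_{\mathbf{s}_j}\, \mathbb{E}\!\left[\big(x_j(t) - \mathbf{h}_j^T(t)\,\mathbf{s}_j\big)^2\right]
= -2\big(\mathbf{r}_{hx,j} - R_{h,j}\,\mathbf{s}_j\big)
\;\;\longrightarrow\;\;
-2\,\mathbf{h}_j(t)\big(x_j(t) - \mathbf{h}_j^T(t)\,\mathbf{s}_j\big),
\]

yielding the online, LMS-type local update.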