
1376 IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, VOL. 64, NO. 4, APRIL 2015

X-Ray Vision With Only WiFi Power Measurements Using Rytov Wave Models

Saandeep Depatla, Lucas Buckland, and Yasamin Mostofi, Senior Member, IEEE

Abstract—In this paper, unmanned vehicles are tasked with seeing a completely unknown area behind thick walls based on only wireless power measurements using wireless local area network (WLAN) cards. We show that a proper modeling of wave propagation that considers scattering and other propagation phenomena can result in a considerable improvement in see-through imaging. More specifically, we develop a theoretical and experimental framework for this problem based on Rytov wave models and integrate it with sparse signal processing and robotic path planning. Our experimental results show high-resolution imaging of three different areas, validating the proposed framework. Moreover, they show considerable performance improvement over the state of the art that only considers the line-of-sight (LOS) path, allowing us to image more complex areas not possible before. Finally, we show the impact of robot positioning and antenna alignment errors on our see-through imaging framework.

Index Terms—Inverse scattering, radio-frequency (RF) imaging, robot sensing systems, see-through wall imaging, x-ray vision.

I. INTRODUCTION

PASSIVE device-free localization and mapping of objects in an environment has recently received considerable attention. There are several potential applications for such approaches, from location-aware services, to search and rescue, and robotic networks.

A survey of the related literature indicates that localization and mapping have been investigated by three different communities. More specifically, in the networking community, both device-based and device-free localization based on RF signals have been explored, typically in the context of tracking human motion [1]–[7]. However, in most of these setups, either the object of interest is not occluded, or the information of the first layer of the occluder is assumed known. Furthermore, most of the focus has been on motion tracking and not on high-resolution imaging. In robotics, localization and mapping of objects is crucial to proper navigation. As such, several approaches, such as Simultaneous Localization and Mapping (SLAM), have been developed for mapping based on laser scanner measurements [8]–[11].

Manuscript received May 16, 2014; revised October 16, 2014; accepted December 19, 2014. Date of publication February 2, 2015; date of current version April 14, 2015. The review of this paper was coordinated by Prof. D. Dardari.

The authors are with the Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106 USA (e-mail: [email protected]; [email protected]; ymostofi@ece.ucsb.edu).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TVT.2015.2397446

However, in these approaches, mapping of occluded objects is not possible. For instance, in [12], some information of the occluded objects is first obtained with radar and then utilized as part of robotic SLAM.

In the electromagnetic community, there has been interest in solving an inverse scattering problem [13], i.e., deducing information about objects in an environment based on their impact on a transmitted electromagnetic wave [14]–[16]. For instance, remote sensing to detect oil reserves beneath the surface of the earth is one example [17]. Traditional medical imaging based on X-ray also falls into this category [13]. There has also been a body of work on using very general wave propagation models for inverse scattering, such as the distorted Born iterative method [14], the contrast source inversion method [18], and stochastic methods [19]. However, the computational complexity of these approaches makes them prohibitive for high-resolution imaging of an area of a reasonable size. Furthermore, most such approaches utilize bulky equipment, which limits their applicability.

In this paper, we are interested in high-resolution see-through imaging of a completely unknown area, based on only WiFi measurements, and its automation with unmanned vehicles. With the prevalence of WiFi signals everywhere, this sensing approach would make our framework applicable to several indoor or outdoor scenarios. The use of robotic platforms further allows for autonomous imaging. However, the overall problem of developing an autonomous system that can see everything through walls is considerably challenging due to three main factors: 1) proper wave modeling is crucial but challenging; 2) the resulting problem is severely underdetermined, i.e., the number of measurements typically amounts to only a few percent of the number of unknowns; and 3) robot positioning is prone to error, adding an additional source of uncertainty to the imaging. In our past work [20], [21], we have shown that seeing through walls and imaging a completely unknown area is possible with only WiFi signals. However, we only considered the line-of-sight (LOS) path and the impact of the objects along this path when modeling the receptions. A transmitted WiFi signal will experience several other propagation phenomena, such as scattering, that an LOS model cannot capture. This can then result in a significant gap between the true receptions (RF sensed values) and the way they are modeled (see Fig. 7, for instance) and is one of the main bottlenecks of the state of the art in RF sensing.

Thus, the first contribution of this paper is to address this bottleneck and enable see-through imaging of more complex areas not possible before. To do so, we have to come up with a way of better modeling the receptions to achieve a leap in the imaging results while maintaining a similar computational complexity.



In the communication/networking literature, the term multipath is typically used to describe propagation phenomena beyond the LOS path. However, the modeling of multipath in this literature, which is done via probabilistic approaches or ray tracing, is not suitable for high-resolution detailed imaging through walls. In this paper, we instead tap into the wave propagation literature to model the induced electric field over the whole area of interest and include the impact of objects that are not directly on the LOS path. While Maxwell's equations can accurately model the RF sensed values, it is simply not feasible to start with that level of modeling. A further look into the wave literature then shows several possible approximations to Maxwell's equations. In this paper, we show that modeling the receptions based on the Rytov wave approximation can make a significant improvement in the see-through imaging results. The Rytov approximation is a linearizing wave approximation that also includes scattering effects [13]. While it has been discussed in the electromagnetic literature in the context of inverse scattering [13], [22], [23], there are no experimental results that show its performance for see-through imaging, particularly at WiFi frequencies. In this paper, it is therefore our goal to build on our previous work and significantly extend our sensing model based on the Rytov wave approximation.

The second contribution of this paper is on achieving a higher level of automation. More specifically, in [24], the two robots had to constantly coordinate their positioning and antenna alignment, and their positioning errors were manually corrected several times in a route (e.g., every 1 m). In this paper, each robot travels a route (see the definition in Section IV) autonomously and without any coordination with the other robot or positioning error correction. It is therefore feasible to collect measurements much faster, reducing the experiment time from several hours to a few minutes. However, this comes at the cost of nonnegligible errors in robot positioning and antenna alignment, the impact of which we discuss and experimentally validate in Section V.

Finally, the last contribution of this paper is to show that two robots can see through walls and image an area with a higher level of complexity, which was not possible before. More specifically, we integrate modeling of the receptions, based on the Rytov approximation, with sparse signal processing (utilized in our past work for RF imaging) and robotic path planning and show how a considerable improvement in imaging quality can be achieved. We experimentally validate this by successfully imaging three different areas, one of which cannot be imaged with the state of the art, while the other two show a considerable improvement in imaging quality.

The rest of this paper is organized as follows. In Section II, we mathematically formulate our imaging problem based on the Rytov wave approximation. In Section III, we pose the resulting optimization problems and discuss how to solve them based on total variation (TV) minimization. In Section IV, we introduce the hardware and software structures of our current experimental robotic platform, which allows for more autonomy in WiFi measurement collection. In Section V, we then present our imaging results for three different areas and show the impact of robot positioning and antenna alignment errors. We conclude in Section VI.

Fig. 1. Two robots are tasked with imaging the unknown area D that is marked with the red superimposed volume, which involves seeing through walls, based on only a small number of WiFi measurements. Note that this figure is generated for illustrative purposes. For a true snapshot of the robots in operation, see Fig. 9.


II. PROBLEM FORMULATION

Consider a completely unknown workspace D ⊂ ℝ³. Let D_out be the complement of D, i.e., D_out = ℝ³ \ D. We consider a scenario where a group of robots in D_out is tasked with imaging the area D by using only WiFi. In other words, the goal is to reconstruct D, i.e., to determine the shapes and locations of the objects in D based on only a small number of WiFi measurements. We are furthermore interested in see-through imaging, i.e., the area of interest can have several occluded parts, such as parts completely behind concrete walls and thus invisible to any node outside. Fig. 1 shows an example of our considered scenario. The red superimposed volume marks the area that the two unmanned vehicles are interested in imaging but that is completely unknown to them. The area has several occluded parts, such as the parts blocked by the outer concrete wall, which is highly attenuating. Note that both the empty and full spaces inside the red volume, as well as its outer surfaces, are all unknown to the robots and need to be imaged. The robots only have WiFi for imaging. As the robots move outside of D, one robot measures the received signal power from the transmissions of the other robot. The unknown area D then interacts with each transmission, as dictated by the locations and properties of its objects, leaving its impact on each reception. The robots then need to image the structure based on all the receptions.

Here, we start with the volume integral wave equation and discuss how it can be linearized and solved under certain assumptions, developing the system models that we shall utilize later for our imaging. See [13] and [25] for more details on wave propagation modeling.

A. Volume Integral Equations [13]

Let E(r) be the electric field, J(r) be the current density, ε(r) be the electric permittivity, and μ(r) be the magnetic permeability at r ∈ ℝ³, where r is the position vector in the spherical coordinates.¹


Then, we have the following volume integral equation relating the electric field to the current source and the objects in D [13]:

$$\mathbf{E}(\mathbf{r}) = j\omega\mu_0 \iiint_{\mathbb{R}^3} \overline{\mathbf{G}}(\mathbf{r},\mathbf{r}') \cdot \mathbf{J}(\mathbf{r}')\,dv' + \iiint_{\mathbb{R}^3} \overline{\mathbf{G}}(\mathbf{r},\mathbf{r}') \cdot \big(O(\mathbf{r}')\,\mathbf{E}(\mathbf{r}')\big)\,dv' \tag{1}$$

where $\overline{\mathbf{G}}(\mathbf{r},\mathbf{r}')$ is the dyadic Green's function given by

$$\overline{\mathbf{G}}(\mathbf{r},\mathbf{r}') = \left(\overline{\mathbf{I}} + \frac{\nabla\nabla}{k_0^2}\right) g(\mathbf{r},\mathbf{r}') \tag{2}$$

$$g(\mathbf{r},\mathbf{r}') = \frac{e^{jk_0|\mathbf{r}-\mathbf{r}'|}}{4\pi|\mathbf{r}-\mathbf{r}'|}. \tag{3}$$

O(r) = k²(r) − k₀² denotes the material property of the object at position r; k₀² = ω²ε₀μ₀ is the squared wavenumber of free space²; k²(r) = ω²μ₀ε(r) is the squared wavenumber of the medium at r; ε₀ and μ₀ are the permittivity and permeability of free space, respectively; ω is the angular frequency; and · denotes the vector dot product. The robots are then interested in learning O(r), for r ∈ D, as it carries the information of the location/material property of the objects in the workspace. Note that (1) is valid for any inhomogeneous, isotropic, and nonmagnetic medium. In addition, O(r)E(r) is the equivalent current induced in the object at r. This induced current in turn produces an electric field. The total field is then the sum of the electric field due to the current in the transmit antenna [the first term on the right-hand side (RHS) of (1)] and the electric field due to the induced currents in the objects [the second term on the RHS of (1)].
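As a concrete reference point, the scalar Green's function of (3) is straightforward to evaluate numerically. Below is a minimal Python sketch, assuming the paper's 2.4-GHz operating frequency; the sample points are made up for illustration.

```python
import numpy as np

f = 2.4e9                                # operating frequency (Hz), per the paper
c0 = 3e8                                 # speed of light (m/s)
k0 = 2 * np.pi * f / c0                  # free-space wavenumber

def g(r, r_prime):
    """Scalar free-space Green's function of Eq. (3): e^{jk0|r-r'|}/(4*pi*|r-r'|)."""
    d = np.linalg.norm(np.asarray(r, float) - np.asarray(r_prime, float))
    return np.exp(1j * k0 * d) / (4 * np.pi * d)

# Made-up observation/source points (meters), for illustration only.
print(g([0.0, 0.0, 0.0], [1.0, 2.0, 0.5]))
```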

First, we start by assuming free space in D_out. Then, ε(r) = ε₀ for r ∈ D_out, resulting in k²(r) = k₀² and O(r) ≡ 0 for r ∈ D_out. When there are no objects in D, we have k²(r) = k₀² and O(r) ≡ 0 for all r ∈ ℝ³, and the second term on the RHS of (1) vanishes. This means that the first term is the incident field when there are no objects in D, and the second term is the result of scattering from the objects in D. By denoting the first term on the RHS of (1) by E_inc(r), we then get

$$\mathbf{E}(\mathbf{r}) = \mathbf{E}_{\mathrm{inc}}(\mathbf{r}) + \iiint_{D} \overline{\mathbf{G}}(\mathbf{r},\mathbf{r}') \cdot \big(O(\mathbf{r}')\,\mathbf{E}(\mathbf{r}')\big)\,dv' \tag{4}$$

where $\overline{\mathbf{G}}(\mathbf{r},\mathbf{r}')$ is a second-order tensor and can be represented as the following 3 × 3 matrix in Cartesian coordinates:

$$\overline{\mathbf{G}}(\mathbf{r},\mathbf{r}') = \begin{bmatrix} G_{xx}(\mathbf{r},\mathbf{r}') & G_{xy}(\mathbf{r},\mathbf{r}') & G_{xz}(\mathbf{r},\mathbf{r}') \\ G_{yx}(\mathbf{r},\mathbf{r}') & G_{yy}(\mathbf{r},\mathbf{r}') & G_{yz}(\mathbf{r},\mathbf{r}') \\ G_{zx}(\mathbf{r},\mathbf{r}') & G_{zy}(\mathbf{r},\mathbf{r}') & G_{zz}(\mathbf{r},\mathbf{r}') \end{bmatrix}.$$

¹Throughout this paper, single-frequency operation is assumed, and all materials are considered isotropic and nonmagnetic, i.e., μ(r) = μ₀ for all r ∈ ℝ³, where μ₀ is the permeability of free space.

²In this paper, free space refers to the case where there is no object.

In reality, there will be objects in D_out. Then, E_inc denotes the field when there are no objects in D.³ Without loss of generality, we assume that the transceiver antennas are linearly polarized in the z-direction. This means that we only need to calculate the z-component of the electric field, which depends on the last row of $\overline{\mathbf{G}}(\mathbf{r},\mathbf{r}')$. Let $\mathbf{J}_{\mathrm{eq}}(\mathbf{r}) = [J^x_{\mathrm{eq}}\ J^y_{\mathrm{eq}}\ J^z_{\mathrm{eq}}]^T = O(\mathbf{r})\mathbf{E}(\mathbf{r})$. We further assume near-zero cross-polarized components $J^x_{\mathrm{eq}}$ and $J^y_{\mathrm{eq}}$ and take $\mathbf{J}_{\mathrm{eq}}(\mathbf{r}) \simeq [0\ 0\ J^z_{\mathrm{eq}}]^T$. This approximation is reported to have a negligible effect [26]. By using this approximation in (4) and only taking the z-component, we get the following scalar equation:

$$E^z(\mathbf{r}) = E^z_{\mathrm{inc}}(\mathbf{r}) + \iiint_{D} G_{zz}(\mathbf{r},\mathbf{r}')\,O(\mathbf{r}')\,E^z(\mathbf{r}')\,dv' \tag{5}$$

where $E^z(\mathbf{r})$ and $E^z_{\mathrm{inc}}(\mathbf{r})$ are the z-components of E(r) and E_inc(r), respectively.

B. Linearizing Approximations

In (5), the received electric field $E^z(\mathbf{r})$ is a nonlinear function of the object function O(r), since $E^z(\mathbf{r}')$ inside the integral also depends on O(r). This nonlinearity is due to the multiple-scattering effect in the object region [25]. Thus, we next use approximations that make (5) linear and easy to solve under the setting of sparse signal processing.

1) LOS-Based Modeling [13], [20]: A simple way of modeling the receptions is to only consider the LOS path from the transmitter to the receiver and the impact of the objects on this path. This model has been heavily utilized in the literature due to its simplicity [20]. However, it results in a considerable modeling gap for see-through imaging since it does not include important propagation phenomena such as scattering. In this part, we summarize the LOS model in the context of wave equations.

At very high frequencies, such as in X-ray, the wave can be assumed to travel in straight lines with negligible reflections and diffractions along its path [13]. Then, the solution to (5) is given as follows by using the Wentzel–Kramers–Brillouin (WKB) approximation⁴:

$$E(\mathbf{r}) = \frac{c}{\sqrt{\alpha(\mathbf{r})}}\, e^{\,j\omega \int_{L_{T\to R}} \alpha(\mathbf{r}')\,d\mathbf{r}'}, \quad \text{WKB approximation} \tag{6}$$

where α(r) is a complex number that represents the slowness of the medium at r and is related to k(r), $\int_{L_{T\to R}}$ denotes a line integral along the line joining the positions of the transmitter and the receiver, and c is a constant that depends on the transmitted signal power.

It can be seen that the loss incurred by the ray is linearly related to the objects along that path, resulting in a linear relationship between the received power and the objects, as we shall see.

³In our experiments, we will not have access to the exact incident field when there is nothing in D. Thus, the two robots make a few measurements in D_out, where there are no objects in between them, to estimate and remove the impact of E_inc. If the robots have already imaged parts of D_out, that knowledge can be easily incorporated to improve the performance.

⁴Here, the field is along the z-direction, as explained before. From this point on, the superscript z is dropped for notational convenience.


This approximation is the basis for X-ray tomography [27]. However, the underlying assumption of this method is not valid at lower frequencies, such as microwave frequencies, due to nonnegligible diffraction effects [28]. In [20] and [21], we proposed a see-through-wall RF-based imaging framework based on this approximation. In this paper, our goal is to use a considerably more comprehensive modeling of the receptions (which has been a bottleneck in see-through imaging) by tapping into the wave literature. We show that by addressing the modeling of the receptions through the Rytov wave approximation, we can image areas not possible before.

2) Rytov Approximation [13]: In general, the field inside any inhomogeneous medium can be expressed as

$$E(\mathbf{r}) = e^{j\psi(\mathbf{r})} \tag{7}$$

and satisfies

$$\left[\nabla^2 + k^2(\mathbf{r})\right] E(\mathbf{r}) = 0 \tag{8}$$

where ψ(r) is a complex phase term. It can then be shown that the solution to (8) can be approximated as follows:

$$E(\mathbf{r}) = E_{\mathrm{inc}}(\mathbf{r})\, e^{j\phi(\mathbf{r})}, \quad \text{Rytov approximation} \tag{9}$$

where

$$\phi(\mathbf{r}) = \frac{-j}{E_{\mathrm{inc}}(\mathbf{r})} \iiint_{D} g(\mathbf{r},\mathbf{r}')\,O(\mathbf{r}')\,E_{\mathrm{inc}}(\mathbf{r}')\,dv'. \tag{10}$$

The validity of the Rytov approximation is established by dimensional analysis in [13]; it is accurate at high frequencies⁵ if

$$\delta_\varepsilon(\mathbf{r}) \triangleq \frac{\varepsilon(\mathbf{r})}{\varepsilon_0} - 1 \ll 1$$

where $\delta_\varepsilon(\mathbf{r})$ is the normalized deviation of the electric permittivity from that of free space. At lower frequencies, the condition for validity of the Rytov approximation becomes

$$(k_0 L)^2\, \delta_\varepsilon(\mathbf{r}) \ll 1$$

where L is on the order of the dimension of the objects. In our case, with a frequency of 2.4 GHz and L on the order of 1 m, we satisfy the high-frequency condition, except at the boundaries of the objects, where there are abrupt changes in the material.

For the sake of completeness, a more commonly used linearizing approximation, known as the Born approximation, is summarized in the Appendix. The Rytov approximation is reported to be more relaxed than the Born approximation at higher frequencies [13]. In addition, the Rytov approximation lends itself to a simple linear form when we only know the magnitude of the received electric field, as described next. Thus, in this paper, we focus on Rytov wave modeling.

⁵Throughout this paper, high frequency refers to frequencies at which the size of the inhomogeneity of the objects is much larger than the wavelength.

C. Intensity-Only Rytov Approximation

In the aforementioned equations, both the magnitude and the phase of the received field are needed. In this paper, however, we are interested in imaging based on only the received signal power.

Then, by conjugating (9), we get

$$E^*(\mathbf{r}) = E^*_{\mathrm{inc}}(\mathbf{r})\, e^{-j\phi^*(\mathbf{r})}. \tag{11}$$

From (9) and (11), we then have

$$|E(\mathbf{r})|^2 = |E_{\mathrm{inc}}(\mathbf{r})|^2\, e^{-2\,\mathrm{Imag}(\phi(\mathbf{r}))} \tag{12}$$

where Imag(.) and |.| denote the imaginary part and the magnitude of the argument, respectively. Since the received power⁶ is proportional to the square of the magnitude of the received field, we have the following equation by taking logarithms on both sides of (12):

$$P_r(\mathbf{r})\,(\mathrm{dBm}) = P_{\mathrm{inc}}(\mathbf{r})\,(\mathrm{dBm}) + 10\log_{10}(e^{-2})\,\mathrm{Imag}\big(\phi(\mathbf{r})\big) \tag{13}$$

where $P_r(\mathbf{r})\,(\mathrm{dBm}) = 10\log_{10}\big(|E(\mathbf{r})|^2/(120\pi \times 10^{-3})\big)$ is the received power in dBm at r, and $P_{\mathrm{inc}}(\mathbf{r})\,(\mathrm{dBm}) = 10\log_{10}\big(|E_{\mathrm{inc}}(\mathbf{r})|^2/(120\pi \times 10^{-3})\big)$ is the incident power in dBm at r when there are no objects.
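As a quick numeric check of (13), the following Python sketch converts field magnitudes to dBm using the normalization above and recovers Imag(φ) from the two power readings; the field values are made up for illustration.

```python
import numpy as np

def field_to_dbm(E):
    # P(dBm) = 10*log10(|E|^2 / (120*pi*1e-3)), the normalization used in Eq. (13)
    return 10 * np.log10(np.abs(E) ** 2 / (120 * np.pi * 1e-3))

E_total, E_inc = 0.05 + 0.02j, 0.08       # made-up field values (V/m)
# Rearranging Eq. (13): Imag(phi) = (P_r - P_inc) / (10*log10(e^-2))
phi_imag = (field_to_dbm(E_total) - field_to_dbm(E_inc)) / (10 * np.log10(np.e ** -2))
print(phi_imag)
```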

To solve (12) for the object function O(r), we discretize D into N equal-volume cubic cells. The position of each cell is represented by its center position vector $\mathbf{r}_n$, for n ∈ {1, 2, …, N}. The electric field and the object properties are assumed to be constant within each cell. We then have

$$O(\mathbf{r}) = \sum_{n=1}^{N} O(\mathbf{r}_n)\, C_n \tag{14}$$

$$E_{\mathrm{inc}}(\mathbf{r}) = \sum_{n=1}^{N} E_{\mathrm{inc}}(\mathbf{r}_n)\, C_n \tag{15}$$

where $\mathbf{r}, \mathbf{r}_n \in D$, and $C_n$ is a pulse basis function, which is one inside cell n and zero outside. By substituting (14) and (15) into (10), we get

$$\phi(\mathbf{r}) = \frac{-j}{E_{\mathrm{inc}}(\mathbf{r})} \sum_{n=1}^{N} O(\mathbf{r}_n)\, E_{\mathrm{inc}}(\mathbf{r}_n) \iiint_{V_n} g(\mathbf{r},\mathbf{r}')\,dv' \simeq \frac{-j}{E_{\mathrm{inc}}(\mathbf{r})} \sum_{n=1}^{N} g(\mathbf{r},\mathbf{r}_n)\, O(\mathbf{r}_n)\, E_{\mathrm{inc}}(\mathbf{r}_n)\, \Delta V \tag{16}$$

where

$$\iiint_{V_n} g(\mathbf{r},\mathbf{r}')\,dv' \simeq g(\mathbf{r},\mathbf{r}_n)\, \Delta V \tag{17}$$

$V_n$ is the nth cell, and ΔV is the volume of each cell. Note that $C_n$ is not included in (16) since we are evaluating the integral inside cell n, where $C_n$ is one.

⁶This is the power received by an isotropic antenna. For a directional antenna, it should be multiplied by the gain of the antenna.


Let $(\mathbf{p}_i, \mathbf{q}_i)$, for $\mathbf{p}_i, \mathbf{q}_i \in D_{\mathrm{out}}$, denote the transmitter and receiver position pair at which the ith measurement is taken. In addition, let $\Phi = [\phi_{\mathbf{p}_1}(\mathbf{q}_1)\ \phi_{\mathbf{p}_2}(\mathbf{q}_2)\ \cdots\ \phi_{\mathbf{p}_M}(\mathbf{q}_M)]^T$, where M is the number of measurements, $\phi_{\mathbf{p}_i}(\mathbf{q}_i) = \big(-j/E_{\mathrm{inc},\mathbf{p}_i}(\mathbf{q}_i)\big) \sum_{n=1}^{N} g(\mathbf{q}_i,\mathbf{r}_n)\, O(\mathbf{r}_n)\, E_{\mathrm{inc},\mathbf{p}_i}(\mathbf{r}_n)\, \Delta V$, and $E_{\mathrm{inc},\mathbf{p}_i}(\mathbf{r}_n)$ is the incident field at $\mathbf{r}_n$ when the transmitter is at $\mathbf{p}_i$. Then, we have

$$\Phi = -j\,F\,O \tag{18}$$

where F is an M × N matrix with $F_{i,j} = g(\mathbf{q}_i,\mathbf{r}_j)\, E_{\mathrm{inc},\mathbf{p}_i}(\mathbf{r}_j)\, \Delta V / E_{\mathrm{inc},\mathbf{p}_i}(\mathbf{q}_i)$, and $O = [O(\mathbf{r}_1)\ O(\mathbf{r}_2)\ \cdots\ O(\mathbf{r}_N)]^T$. Using (13) for each measurement and stacking them together, we get

$$P_{\mathrm{ryt}} = \mathrm{Imag}(\Phi) \tag{19}$$

where $P_{\mathrm{ryt}} = \big(P_r(\mathrm{dBm}) - P_{\mathrm{inc}}(\mathrm{dBm})\big)/\big(10\log_{10}(e^{-2})\big)$, $P_r(\mathrm{dBm}) = [P_{r,\mathbf{p}_1}(\mathbf{q}_1)\,(\mathrm{dBm})\ P_{r,\mathbf{p}_2}(\mathbf{q}_2)\,(\mathrm{dBm})\ \cdots\ P_{r,\mathbf{p}_M}(\mathbf{q}_M)\,(\mathrm{dBm})]^T$, $P_{\mathrm{inc}}(\mathrm{dBm}) = [P_{\mathrm{inc},\mathbf{p}_1}(\mathbf{q}_1)\,(\mathrm{dBm})\ P_{\mathrm{inc},\mathbf{p}_2}(\mathbf{q}_2)\,(\mathrm{dBm})\ \cdots\ P_{\mathrm{inc},\mathbf{p}_M}(\mathbf{q}_M)\,(\mathrm{dBm})]^T$, and $P_{r,\mathbf{p}_i}(\mathbf{q}_i)\,(\mathrm{dBm})$ and $P_{\mathrm{inc},\mathbf{p}_i}(\mathbf{q}_i)\,(\mathrm{dBm})$ are the received power and the incident power corresponding to the transmitter and receiver pair $(\mathbf{p}_i, \mathbf{q}_i)$, respectively. Using (18) and (19), we get

$$P_{\mathrm{ryt}} = \mathrm{Real}(FO) = F_R O_R - F_I O_I \tag{20}$$

where Real(.) is the real part of the argument, and $F_R$, $F_I$, $O_R$, and $O_I$ are the real part of F, the imaginary part of F, the real part of O, and the imaginary part of O, respectively. This can be further simplified by noting that $F_R O_R \gg F_I O_I$ [29]. Therefore, the preceding equation becomes

$$P_{\mathrm{ryt}} \simeq F_R O_R \tag{21}$$

which is what we shall use for our RF-based robotic imaging.
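To make the construction of (18)–(21) concrete, the sketch below assembles the F matrix row by row in Python. The point-source incident field E_inc,p(r) = g(p, r) is an assumption made here for illustration (any constant scaling cancels in the ratio); the paper only requires an estimate of the incident field.

```python
import numpy as np

f, c0 = 2.4e9, 3e8
k0 = 2 * np.pi * f / c0

def g(a, b):
    """Scalar free-space Green's function of Eq. (3)."""
    d = np.linalg.norm(a - b)
    return np.exp(1j * k0 * d) / (4 * np.pi * d)

def rytov_matrix(tx, rx, cells, dV):
    """F[i, j] = g(q_i, r_j) * E_inc(r_j) * dV / E_inc(q_i), per Eq. (18).

    tx, rx: (M, 3) transmitter/receiver positions; cells: (N, 3) cell centers.
    A point-source incident field E_inc(r) = g(p_i, r) is assumed here.
    """
    M, N = tx.shape[0], cells.shape[0]
    F = np.zeros((M, N), dtype=complex)
    for i in range(M):
        e_inc_rx = g(tx[i], rx[i])            # incident field at the receiver q_i
        for j in range(N):
            F[i, j] = g(rx[i], cells[j]) * g(tx[i], cells[j]) * dV / e_inc_rx
    return F

# P_ryt ~= F.real @ O_R then gives the linear model of Eq. (21).
```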

D. Intensity-Only LOS Approximation

Starting from (6) and following steps similar to those of the intensity-only Rytov approximation, we get

$$P_r(\mathbf{r})\,(\mathrm{dBm}) = P_{\mathrm{inc}}(\mathbf{r})\,(\mathrm{dBm}) - 10\log_{10}(e^{-2})\,\omega \int_{L_{T\to R}} \mathrm{Imag}\big(\alpha(\mathbf{r}')\big)\,d\mathbf{r}' \tag{22}$$

where the integration is the line integral along the line joining the positions of the transmitter and the receiver, and r is the position of the receiver. Denoting $P_{\mathrm{LOS}} = \big(P_r(\mathrm{dBm}) - P_{\mathrm{inc}}(\mathrm{dBm})\big)/\big(10\log_{10}(e^{-2})\big)$ and stacking M measurements together, we have

$$P_{\mathrm{LOS}} = A\,\Gamma \tag{23}$$

where A is a matrix of size M × N with entries $A_{i,j} = 1$ if the jth cell is along the line joining the transmitter and the receiver of the ith measurement and $A_{i,j} = 0$ otherwise, $\Gamma = [\alpha_I(\mathbf{r}_1)\ \alpha_I(\mathbf{r}_2)\ \cdots\ \alpha_I(\mathbf{r}_N)]^T$, and $\alpha_I(.) = \mathrm{Imag}(\alpha(.))$.

Equation (23) is what we then utilize in our setup when showing the performance of the state of the art.
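For reference, a minimal Python sketch of building the binary matrix A of (23) on a 2-D grid is given below; the dense sampling of the transmitter-receiver segment is a simple stand-in (an assumption for illustration) for an exact ray-grid intersection test.

```python
import numpy as np

def los_matrix(tx, rx, grid_min, cell_size, grid_shape, n_samples=500):
    """A[i, j] = 1 if grid cell j lies on the segment joining tx[i] and rx[i].

    tx, rx: (M, 2) endpoint arrays; grid_min: lower-left corner of the grid;
    cell_size: cell edge length; grid_shape: (rows, cols) of the pixel grid.
    """
    M = tx.shape[0]
    A = np.zeros((M, int(np.prod(grid_shape))))
    for i in range(M):
        for t in np.linspace(0.0, 1.0, n_samples):   # sample points on the segment
            p = (1 - t) * tx[i] + t * rx[i]
            idx = np.floor((p - grid_min) / cell_size).astype(int)
            if np.all(idx >= 0) and np.all(idx < np.array(grid_shape)):
                A[i, np.ravel_multi_index(tuple(idx), grid_shape)] = 1
    return A
```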

III. BRIEF OVERVIEW OF SPARSE SIGNAL PROCESSING [30]

In the formulations of the Rytov and LOS approximations in Section II, we have a system of linear equations to solve for each approach. However, the system is severely underdetermined, as the number of wireless measurements typically amounts to a small percentage of the number of unknowns. More specifically, let x ∈ ℝᴺ be a general unknown signal, y ∈ ℝᴹ be the measurement vector, and y = Bx be the observation model, where B is an M × N observation matrix. We consider the case where N ≫ M, i.e., the number of unknowns is much larger than the number of measurements. Thus, it is a severely underdetermined problem that cannot be solved uniquely for x given y. Here, we briefly summarize how sparse signal processing can be utilized to solve this problem.

Suppose x can be represented as a sparse vector in another domain as follows: x = ΘX, where Θ is an invertible matrix and X is S-sparse, i.e., card(supp(X)) = S ≪ N, where card(.) denotes the cardinality of the argument, and supp(.) denotes the set of indices of the nonzero elements of the argument. Then, we have y = KX, where K = BΘ, and X has a much smaller number of nonzero elements than x. In general, the solution to the aforementioned problem is obtained by solving the following nonconvex combinatorial problem:

$$\text{minimize } \|X\|_0, \quad \text{subject to } y = KX. \tag{24}$$

Since solving (24) is computationally intensive and impractical, considerable research has been devoted to developing approximate solutions for (24).

In our case, we are interested in imaging and localization of the objects in an area. The spatial variations of the objects in a given area are typically sparse. We thus take advantage of the sparsity of the spatial variations to solve our underdetermined system.⁷

More specifically, let R = [Ri,j ] denote an m× n matrix thatrepresents the unknown space. Since we are interested in thespatial variations of R, let

Dh,i,j =

{Ri+1,j −Ri,j , if 1 ≤ i < m

Ri,j −R1,j , if i = m

Dv,i,j =

{Ri,j+1 −Ri,j , if 1 ≤ j < n

Ri,j −Ri,1, if j = n.

Then, the TV of R is defined as

$$\mathrm{TV}(R) = \sum_{i,j} \|D_{i,j}(R)\| \tag{25}$$

where Di,j(R) = [Dh,i,j Dv,i,j ], and ‖.‖ can represent either l1or l2 norm. TV minimization then solves the following convexoptimization problem:

$$\text{minimize } \mathrm{TV}(R), \quad \text{subject to } y = KX. \tag{26}$$

⁷It is also possible to solve an l1 convex relaxation of (24). However, our past analysis has indicated a better performance with spatial variation minimization [20].
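For concreteness, the following Python sketch evaluates the difference operators and the resulting TV of (25) (with the l1 norm) exactly as defined above; the test matrix is made up for illustration.

```python
import numpy as np

def spatial_differences(R):
    """Horizontal/vertical differences D_h and D_v as defined above."""
    Dh = np.empty_like(R)
    Dv = np.empty_like(R)
    Dh[:-1, :] = R[1:, :] - R[:-1, :]     # R_{i+1,j} - R_{i,j}, 1 <= i < m
    Dh[-1, :] = R[-1, :] - R[0, :]        # R_{m,j} - R_{1,j},  i = m
    Dv[:, :-1] = R[:, 1:] - R[:, :-1]     # R_{i,j+1} - R_{i,j}, 1 <= j < n
    Dv[:, -1] = R[:, -1] - R[:, 0]        # R_{i,n} - R_{i,1},  j = n
    return Dh, Dv

R = np.arange(12.0).reshape(3, 4)          # made-up 3 x 4 "image"
Dh, Dv = spatial_differences(R)
tv_l1 = np.sum(np.abs(Dh) + np.abs(Dv))    # TV(R) of Eq. (25) with the l1 norm
print(tv_l1)
```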


In the context of the current problem formulation, X represents the object map D, R represents the spatial variations of X, K represents the observation model, i.e., $K = F_R$ for the Rytov approach and $K = A$ for the LOS approach, and y represents the received power (after removing the path loss). In solving (26), the l1 or l2 norm results in a similar reconstruction [31]. Thus, unless otherwise stated, all results of this paper are based on the l1 norm.

To solve the general compressive sensing problem of (26) robustly and efficiently, TV Minimization by Augmented Lagrangian and Alternating Direction Algorithms (TVAL3) was proposed in [32]. TVAL3 is a MATLAB-based solver that solves (26) by minimizing the augmented Lagrangian function using an alternating minimization scheme [31]. The augmented Lagrangian function includes coefficients that determine the relative importance of the terms TV(R) and ‖y − KX‖ in (26). See [32] for more details on TVAL3. We use TVAL3 for all the experimental results of this paper.
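Although the paper uses the MATLAB solver TVAL3, the same optimization is easy to prototype with a generic convex solver. Below is a minimal Python sketch using cvxpy, with a made-up problem size and Gaussian observation matrix; the weight lam plays the role of the augmented-Lagrangian coefficients in trading off TV(R) against the data fit.

```python
import numpy as np
import cvxpy as cp

m, n, M = 16, 16, 60                            # made-up grid and measurement count
rng = np.random.default_rng(0)
K = rng.standard_normal((M, m * n))             # stand-in observation matrix
x_true = np.zeros((m, n))
x_true[4:9, 5:12] = 1.0                         # a piecewise-constant "object"
y = K @ x_true.flatten(order="F")               # column-major, to match cp.vec

X = cp.Variable((m, n))
lam = 10.0                                      # data-fit weight (tuning knob)
prob = cp.Problem(cp.Minimize(cp.tv(X) + lam * cp.sum_squares(K @ cp.vec(X) - y)))
prob.solve()
print(np.round(X.value, 2))                     # TV-regularized reconstruction
```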

IV. EXPERIMENT SETUP

Here, we briefly describe our enabling experimental testbed. As compared with our past testbed (whose results with an LOS modeling of the receptions were reported in the literature), in our current setup, the robots can take channel measurements over a given route autonomously and with no coordination between themselves or stopping. More specifically, in [24], the two robots had to constantly stop and coordinate their positioning and antenna alignment, and their positioning errors were manually corrected a few times in a route (e.g., every 1 m). In this paper, each robot travels a route autonomously and without any coordination with the other robot or positioning error correction. It is therefore feasible to collect measurements much faster, reducing the experiment time from several hours to a few minutes. However, this comes at the cost of nonnegligible errors in robot positioning and antenna alignment, as we will discuss later in this paper. In the rest of this section, we describe the software and hardware aspects of our testbed in more detail, emphasizing the main differences from our previously reported testbed [33].

In our setup, we use two Pioneer 3-AT (P3-AT) mobile robots from MobileRobots, Inc. [34], each equipped with an onboard PC and an IEEE 802.11g (WLAN) card. Each robot can simultaneously follow a given path and take the corresponding received signal strength indicator (RSSI) measurements as it moves. The data are then stored and transferred back to a laptop at the end of the operation.

A. Hardware Architecture

P3-AT mobile robots [34] are designed for indoor, outdoor, and rough-terrain use. They feature an onboard PC104 and a Renesas SH7144-based microcontroller platform for control of the motors, actuators, and sensors. By utilizing a C/C++ application programming interface library provided by MobileRobots, users are able to program and control the robot via the microcontroller platform. Fig. 2 shows the P3-AT robot. We have furthermore utilized directional antennas for better imaging results.

Fig. 2. Pioneer 3-AT robot with the additionally mounted servomechanism and a directional antenna.

To hold the directional antennas, we have built an additional electromechanical fixture, as shown in Fig. 2. The antenna is rotated and positioned via a Hitec HA-7955TG digital servo mounted on the antenna fixture. Via a serial port, pulsewidth-modulation (PWM) values are passed from the onboard PC104 to a Digilent Cerebot II microcontroller on the side of the antenna frame. These PWM waveforms are then output to the Hitec servo, specifying an angle in the range of 0°–180°. We use a GD24-15 2.4-GHz parabolic grid antenna from Laird Technologies [35]. This model has a 15-dBi gain with a 21° horizontal and 17° vertical beamwidth and is suitable for IEEE 802.11b/g applications.

One of the robots has a D-Link WBR-1310 wireless router attached to its antenna. It constantly transmits a wireless signal whose strength the other robot measures. The overall operation is overseen by a remote PC, which is in charge of passing the initial plan to the robots to execute and collecting the final signal strength readings at the end of the operation. A block diagram of the hardware architecture of the robots is shown in Fig. 3 and is similar to what we have used in [33].

B. Software Architecture

The overall software architecture of our system is shown in Fig. 4. The software system is composed of two application layers, one running on a remote PC to control the experiment and one running on the robots themselves. The programs are developed in C++ using the ARIA library developed by MobileRobots. They communicate via a Transmission Control Protocol/Internet Protocol (TCP/IP) connection between the robot-side application, which acts as the server, and the PC-side application, which acts as the client. The remote PC is in charge of overseeing the whole operation and giving the initial control commands and route information to the robots. The user can specify the route information, involving the direction of movement, the length of the route, and the coordinates of the starting positions of the robots.


Fig. 3. Block diagram of the hardware architecture of one of the robots.

Fig. 4. Software architecture of the robot platform.

While the overall software structure is similar to that of our previous work [24], significant changes are made in programming each block in Fig. 4. These changes enable a different level of autonomy as compared with our previous work. Next, we explain these software changes in more detail.

To synchronize all the operations (robot movement, antenna alignment, and signal strength measurement), the robot execution is divided into four separate in-software threads: the antenna control thread, the signal strength thread, the motor control thread, and the main thread, which, respectively, control the antenna rotation, manage the reading of the wireless signal strength, operate the motors (such as in driving forward), and send the overall commands. The main thread initializes/finalizes the other threads and communicates with the remote PC. Before a route begins, the main thread first creates the threads needed to run the other operations and freezes their operations using a mutex. It then receives the path information of both robots from the remote PC. This information is passed to the antenna control and signal strength threads, where it is used to calculate when to read the signal strength and how to rotate the antenna over the route to keep the antennas on both robots aligned. Once the threads are properly initialized, the path information is passed to the motor control thread to begin the operation.

The measurements gathered by one robot are stored on its PC and transferred back to the remote PC at the end of the operation. This is because any kind of TCP communication introduces unnecessary delays in the code during the measurements. It is necessary, however, to be able to control the robot movement and operation at all times from the remote PC in case of emergency. Therefore, the code is designed to maximize the autonomy and precision of the operation through threading, while being able to shut down via remote control at any time. This is achieved with the main thread utilizing a polling approach.

C. Robot Positioning

Accurate positioning is considerably important, as the robots need to constantly put a position stamp on the locations where they collect the channel measurements and further align their antennas based on the position estimates. In our setup, the robots utilize onboard gyroscopes and wheel encoders to constantly estimate their positions. Since we use a dead-reckoning approach to localize our robots, timing is very important to the accuracy of the position estimation. We thus employ precise timers in software to help each robot determine its own position as well as the position of the other robot based on the given speed. More specifically, when the motor control thread begins its operation, timers are simultaneously initiated in all the threads, allowing them to keep track of when and where they are in their operations. In addition, the threads' mutexes are released, allowing the robots to move and take measurements.

It is important to note that once the robots start a route, there is no communication or coordination between them. Each robot constantly uses the set speed and the timer information to estimate its own location as well as the location of the other robot for antenna alignment and measurement collection. Thus, all the measurements and alignments are naturally prone to positioning errors. Currently, we use speeds of up to 10 cm/s. A sample route is shown in Fig. 5 (see the 0° angle route, for instance). Our current localization error is less than 2.5 cm per 1 m of straight-line movement, and our currently considered routes typically span 10–20 m. Additionally, the robot also experiences a drift from the given path. These robot positioning errors will also result in antenna alignment errors. In Section V, we discuss the impact of both errors on our imaging results in detail.
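A minimal Python sketch of this timer-based dead reckoning is given below: each robot integrates its set speed along the heading reported by the gyroscope and stamps each RSSI sample with the resulting position estimate. The 10 cm/s speed and 0.2-s sampling period are from the paper; the heading samples are made up.

```python
import numpy as np

speed = 0.10                                   # set robot speed (m/s), per the paper
dt = 0.2                                       # timer period between RSSI samples (s)
headings = np.zeros(100)                       # gyroscope headings (rad); straight route

pos = np.zeros(2)
position_stamps = []
for theta in headings:
    # Dead reckoning: advance by speed*dt along the current heading.
    pos = pos + speed * dt * np.array([np.cos(theta), np.sin(theta)])
    position_stamps.append(pos.copy())         # position stamp for this RSSI reading

print(position_stamps[-1])                     # ~2 m traveled along the x-axis
```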

D. Robot Paths

So far, we have explained the hardware and software aspects of the experimental testbed. Next, we briefly explain the routes that the robots take. More specifically, the transmitting and receiving robots move outside of D, similar to how a CT scan is done, in parallel, along lines that make an angle θ with the x-axis. This means that the line connecting the transmitter and the receiver would ideally (in the absence of positioning errors) stay orthogonal to the line with angle θ. Sample routes along the 0° and 45° angles are shown in Fig. 5. Both robots move with the same velocity of 10 cm/s and take measurements every 0.2 s (i.e., measurements are taken with a 2-cm resolution).


Fig. 5. Sample routes for measurement collection are shown for the 0° and 45° angles.

As explained earlier, there is no coordination between the robots when traveling a route. To speed up the operation, we currently manually move the robots from the end of one route to the beginning of another route. This part can also be automated as part of future work. Additionally, random wireless measurements, a term we introduced in [24], where the transmitter is stationary and the receiver moves along a given line, can also be used. In the next section, we only consider the case of parallel routes, as shown in Fig. 5. Readers are referred to [33] for more details and a tradeoff analysis on using parallel or random routes in the context of LOS reception modeling.
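For reference, a minimal Python sketch of generating such a parallel route at angle θ is shown below; the TX and RX waypoint lists keep the TX-RX segment orthogonal to the route direction. The 2-cm waypoint spacing follows from the 10 cm/s speed and 0.2-s sampling above, while the route length and robot separation are made-up values.

```python
import numpy as np

theta = np.deg2rad(45.0)                              # route angle with the x-axis
u = np.array([np.cos(theta), np.sin(theta)])          # direction of movement
n_vec = np.array([-np.sin(theta), np.cos(theta)])     # orthogonal TX-RX direction

route_len, spacing, separation = 10.0, 0.02, 6.0      # meters (length/separation made up)
s = np.arange(0.0, route_len, spacing)

tx_path = s[:, None] * u                              # transmitter waypoints
rx_path = s[:, None] * u + separation * n_vec         # receiver waypoints, kept parallel
```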

V. EXPERIMENTAL RESULTS AND DISCUSSIONS

Here, we show the results of see-through wall imaging with the Rytov wave approximation and further compare them with the state-of-the-art results based on LOS modeling. We consider three different areas, as shown in Figs. 8–10 (top left). We name these cases as follows for the purpose of referencing in the rest of this paper: T-shape, occluded cylinder, and occluded two columns. Two robots move outside of the area of interest and record the signal strength. These measurements are then used to image the corresponding unknown regions using both the Rytov and LOS approaches, as described in Section II. Figs. 8–10 further show the horizontal cuts of these areas. In this paper, we only consider 2-D imaging, i.e., imaging a horizontal cut of the structure.

Fig. 6 shows a sample of the real measurements along the 0° line for the T-shape, with the distance-dependent path-loss component removed. As stated previously, the distance-dependent path-loss component does not contain any information about the objects. Thus, by making a few measurements in the same environment when there are no objects between the robots, it is estimated and removed.

Fig. 6. Real received signal power along the 0° line for the T-shape, with the distance-dependent path-loss component removed.

Fig. 7. Comparisons of the Rytov and LOS approximations for the route along the 0° angle for the occluded cylinder. As shown, the Rytov approximation matches the real measurement considerably better than the LOS modeling through the WKB approximation.

To compare how well the WKB (LOS modeling) and Rytov approximations match the real measurements, the simulated received signal loss under each approximation is plotted in Fig. 7 for the route along the 0° angle for the occluded cylinder structure. As shown, the Rytov approximation matches the real measurement considerably better than the LOS modeling.

Our imaging results for the T-shape, the occluded cylinder, and the occluded two columns are shown in Figs. 8–10, respectively. For the T-shape and the occluded cylinder, we have measurements along the four angles of 0°, 90°, 45°, and 135°. For the occluded two columns, we have measurements along the five angles of 0°, 90°, 80°, −10°, and 10°. The total measurements thus amount to only 20.12%, 4.7%, and 2.6% of the number of unknowns for the T-shape, the occluded cylinder, and the occluded two columns, respectively.


Fig. 8. (Left) T-shape structure of interest that is completely unknown and needs to be imaged, as well as its horizontal cut (its dimension is 0.64 m × 1.82 m). The white areas in the true image indicate that there is an object, whereas the black areas denote that there is nothing in those spots. Imaging results based on 20.12% measurements are shown for both the Rytov and LOS approaches. Sample dimensions of the original and the reconstructed images are also shown. It can be seen that Rytov provides a considerably better imaging result.

Fig. 9. (Left) Occluded cylinder structure of interest that is completely unknown and needs to be imaged, as well as its horizontal cut (its dimension is 2.98 m × 2.98 m). The white areas in the true image indicate that there is an object, whereas the black areas denote that there is nothing in those spots. Imaging results based on 4.7% measurements are shown for both the Rytov and LOS approaches. It can be seen that Rytov provides a considerably better imaging result.

The left side of Fig. 8 shows the T-shape structure with its horizontal cut marked. This horizontal cut, which is the true original image that the robots need to reconstruct, is an area of 0.64 m × 1.82 m, which results in 2912 unknowns to be estimated. Fig. 8 further shows the imaging results with both Rytov and LOS for this structure. As shown, Rytov provides a considerably better imaging quality, particularly around the edges.

Fig. 10. (Top) Occluded two columns structure of interest that is completely unknown and needs to be imaged, as well as its horizontal cut (its dimension is 4.56 m × 5.74 m). The white areas in the true image indicate that there is an object, whereas the black areas denote that there is nothing in those spots. Imaging results based on 2.6% measurements are shown in the bottom figures for both the Rytov and LOS approaches. Sample dimensions are also shown. It can be seen that the LOS approach fails to properly image the occluded objects, whereas Rytov performs significantly better.

The reconstructions after thresholding are also shown in Fig. 8, which uses the fact that we are interested in a black/white image that indicates the absence/presence of objects (more details on this will follow shortly).

Fig. 9 shows the imaging of the occluded cylinder. This area of interest is 2.98 m × 2.98 m, amounting to 22 201 unknowns to be estimated based on only 4.7% measurements. This structure is more challenging to reconstruct than the T-shape because 1) it is fully behind thick brick walls, and 2) it consists of materials with different properties (metal and brick).


Similarly, we can see that Rytov provides a better imaging result for this structure as well, with the details reconstructed more accurately. Thresholded images are also shown.

Fig. 10 shows the imaging of the occluded two columns. This area of interest is 4.56 m × 5.74 m (amounting to 65 436 unknowns) and is estimated with only 2.6% WiFi measurements. This structure is more challenging to image than both the T-shape and the occluded cylinder since 1) there are two columns close to each other, which results in more multipath and other propagation phenomena, and 2) a smaller percentage of measurements is available for imaging (half of that used for the occluded cylinder). The figure shows the thresholded imaging results as well. More specifically, any value above 40% or below 20% of the maximum value is thresholded to the 40% or 20% level, respectively (the same thresholding approach is used for the previous two areas). As shown in Fig. 10, the LOS approach fails to image this more complex structure, whereas Rytov can image it. From Figs. 8 and 9, it can be seen that imaging based on LOS modeling can only vaguely capture the details. However, for more complex areas, such as that of Fig. 10, its performance becomes unacceptable, whereas Rytov can locate the objects fairly accurately. This signifies the importance of properly modeling the receptions.
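The two-sided thresholding described above amounts to a simple clipping of the reconstructed image, as in the Python sketch below.

```python
import numpy as np

def threshold_image(img, lo=0.2, hi=0.4):
    """Clip values above hi*max and below lo*max to those levels, per the text."""
    peak = img.max()
    return np.clip(img, lo * peak, hi * peak)

img = np.random.rand(8, 8)        # made-up reconstructed image
print(threshold_image(img))
```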

In general, the computational complexity of our imaging approach depends on the size of the unknown area and the number of gathered measurements. Furthermore, the utilized solver typically converges faster if the model better matches the real measurements. Hence, we expect the Rytov approach to run faster than the LOS approach because of its better match with the real measurements. We verify this on a desktop equipped with a 3.7-GHz CPU. For the T-shape with 4096 unknowns and 586 measurements, the Rytov approach takes 3.6 s, whereas the LOS approach takes 5.74 s. For the occluded cylinder with 22 201 unknowns and 1036 measurements, the Rytov approach takes 17.01 s, whereas the LOS approach takes 27.14 s. For the occluded two columns with 65 436 unknowns and 1699 measurements, the Rytov approach takes 54.8 s, whereas the LOS approach takes 64 s. However, it should be noted that Rytov also requires an offline calculation of the F_R matrix for a given set of routes. This takes 9 min for the occluded cylinder structure, for example. Once this matrix is calculated, it can be used for any setup that uses the same routes.

Finally, we note that the measurements of the T-shape were collected with our past experimental setup, since we no longer have access to this site. However, we expect similar results with our new experimental setup for this site, for the reasons explained in Section V-A.

A. Effect of Robot Positioning and Antenna Alignment Errors

As each robot travels a route autonomously and without coordination with the other robot or positioning error correction, there will be nonnegligible positioning and antenna alignment errors accumulated throughout the route. We next show the impact of these errors on our see-through imaging performance.

Figs. 11 and 12 show the impact of localization and antenna alignment errors on the Rytov and LOS approaches, respectively. More specifically, each figure compares the experimental imaging results of three cases with different levels of localization/antenna alignment errors.

Fig. 11. Effect of robot positioning and antenna alignment errors on imaging based on the Rytov approximation. It can be seen that they have a negligible impact. (a)–(c) Length of the route = 4.88 m.

Fig. 12. Effect of robot positioning and antenna alignment errors on imaging based on LOS modeling. It can be seen that they have a negligible impact. (a)–(c) Length of the route = 4.88 m.

The most accurate localization case was generated with our old setup, where positioning errors were corrected every 1 m. The middle and right cases are both automated, but the robot moves at different speeds, which results in different positioning accuracies. In each case, the positioning error leads to a nonnegligible antenna alignment error, the value of which is reported (as a percentage of the antenna beamwidth). However, we can see that the combination of antenna alignment and positioning errors, which are not negligible, has a negligible impact on the imaging result. This is due to the fact that the main current bottleneck in see-through imaging is the modeling of the receptions, which is the main motivation for this paper. For instance, as shown in Fig. 7, the gap between the state-of-the-art modeling (LOS) and the true receptions is huge; we have reduced it considerably by a proper modeling of the receptions in this paper. However, the remaining gap is still nonnegligible as compared with other sources of error, such as robot positioning and antenna alignment errors, as confirmed in Figs. 11 and 12. Needless to say, if these errors become more considerable, they will inevitably start impacting the results. Thus, Figs. 11 and 12 imply that, with our current setup and the size of the areas imaged in this paper, the impact of robot positioning and antenna alignment errors on our results was negligible.

VI. CONCLUSION AND FUTURE WORK

In this paper, we have considered the problem of high-resolution imaging through walls, with only WiFi signals, and its automation with unmanned vehicles. We have developed a theoretical framework for this problem based on Rytov wave models, sparse signal processing, and robotic path planning.


We have furthermore validated the proposed approach on our experimental robotic testbed. More specifically, our experimental results have shown high-resolution imaging of three different areas based on only a small number of WiFi measurements (20.12%, 4.7%, and 2.6% of the number of unknowns). Moreover, they showed a considerable performance improvement over the state of the art that only considers the LOS path, allowing us to image more complex areas not possible before. Finally, we showed the impact of robot positioning and antenna alignment errors on our see-through imaging framework. Overall, this paper addresses one of the main bottlenecks of see-through imaging, which is the proper modeling of the receptions. Further improvement of the modeling, while maintaining a similar computational complexity, is among the future directions of this work.

APPENDIX

BORN APPROXIMATION

Consider the case of weak scatterers, where the electric properties of the objects in D are close to those of free space, i.e., $\varepsilon(\mathbf{r})$ is close to $\varepsilon_0$. In the Born approximation, this assumption is used to approximate the electric field inside the integral of (5) with $E_z^{\mathrm{inc}}(\mathbf{r})$, resulting in the following approximation:

$$E_z(\mathbf{r}) = E_z^{\mathrm{inc}}(\mathbf{r}) + \iiint_{D} G_{zz}(\mathbf{r},\mathbf{r}')\, O(\mathbf{r}')\, E_z^{\mathrm{inc}}(\mathbf{r}')\, dv' \qquad (27)$$

The validity of the Born approximation can be established by dimensional analysis of (5); the approximation is accurate at high frequencies only if

$$k_0 L\, \delta\varepsilon(\mathbf{r}) \ll 1, \quad \text{for all } \mathbf{r} \in D$$

where $k_0$ is the free-space wavenumber, $L$ is a characteristic dimension of the scatterers, and $\delta\varepsilon(\mathbf{r})$ denotes the relative deviation of $\varepsilon(\mathbf{r})$ from $\varepsilon_0$. The Born approximation is thus a single-scattering theory, wherein the multiple scattering due to object inhomogeneities is neglected.
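To make (27) concrete, the following is a minimal 2-D scalar sketch, not the paper's implementation: it discretizes a small domain D into cells, forms the object function $O(\mathbf{r}') = k_0^2\,\delta\varepsilon(\mathbf{r}')$, and sums the single-scattering contributions at a line of receivers. The frequency, grid size, scatterer contrast, and receiver placement are all illustrative assumptions, and the scalar 2-D Green's function replaces the dyadic $G_{zz}$ of the 3-D formulation.

import numpy as np
from scipy.special import hankel2

# Minimal 2-D scalar sketch of the Born approximation in (27); an
# illustration under assumed parameters, not the paper's implementation.
# The 2-D Green's function G = -(j/4) H0^(2)(k0 |r - r'|) (for the
# exp(j*omega*t) convention) stands in for the dyadic G_zz.

c0 = 3e8
f = 2.4e9                       # WiFi carrier frequency
k0 = 2 * np.pi * f / c0         # free-space wavenumber

# Discretize a square domain D (side L = 1 m) into N x N cells.
N, L = 20, 1.0
xs = np.linspace(-L / 2, L / 2, N)
X, Y = np.meshgrid(xs, xs)
cells = np.column_stack([X.ravel(), Y.ravel()])
dA = (L / (N - 1)) ** 2

# Weak square scatterer: O(r') = k0^2 * delta_eps(r').
delta_eps = np.zeros(N * N)
inside = (np.abs(cells[:, 0]) < 0.2) & (np.abs(cells[:, 1]) < 0.2)
delta_eps[inside] = 1e-3
O = k0 ** 2 * delta_eps

# Incident plane wave traveling along +x.
E_inc = np.exp(-1j * k0 * cells[:, 0])

# Receivers on a vertical line outside D.
rx = np.column_stack([np.full(15, 2.0), np.linspace(-1.0, 1.0, 15)])
R = np.linalg.norm(rx[:, None, :] - cells[None, :, :], axis=2)
G = -0.25j * hankel2(0, k0 * R)

# Born-approximated total field at the receivers, i.e., (27) discretized:
# E = E_inc + sum over cells of G * O * E_inc * dA (single scattering).
E_total = np.exp(-1j * k0 * rx[:, 0]) + (G * dA) @ (O * E_inc)

# Validity check from the Appendix: k0 * L * delta_eps << 1 over D.
print("k0 * L * max(delta_eps) =", k0 * L * delta_eps.max())

Raising delta_eps toward unity quickly violates the $k_0 L\,\delta\varepsilon \ll 1$ condition, which is exactly the regime in which the neglected multiple scattering becomes significant.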

ACKNOWLEDGMENT

The authors would like to thank Y. Yan for proofreading this paper and H. Cai and Z. Zhao for their help in running the experiments.

REFERENCES

[1] X. Chen et al., "Sequential Monte Carlo for simultaneous passive device-free tracking and sensor localization using received signal strength measurements," in Proc. 10th Int. Conf. IPSN, 2011, pp. 342–353.

[2] Y. Chen, D. Lymberopoulos, J. Liu, and B. Priyantha, "FM-based indoor localization," in Proc. 10th Int. Conf. Mobile Syst., Appl., Serv., 2012, pp. 169–182.

[3] A. Kosba, A. Saeed, and M. Youssef, "RASID: A robust WLAN device-free passive motion detection system," in Proc. IEEE Int. Conf. PerCom, 2012, pp. 180–189.

[4] M. Moussa and M. Youssef, "Smart devices for smart environments: Device-free passive detection in real environments," in Proc. IEEE Int. Conf. PerCom, 2009, pp. 1–6.

[5] R. Nandakumar, K. Chintalapudi, and V. Padmanabhan, "Centaur: Locating devices in an office environment," in Proc. 18th Annu. Int. Conf. Mobile Comput. Netw., 2012, pp. 281–292.

[6] H. Schmitzberger and W. Narzt, "Leveraging WLAN infrastructure for large-scale indoor tracking," in Proc. 6th ICWMC, 2010, pp. 250–255.

[7] M. Bocca et al., "RF-based device-free localization and tracking for ambient assisted living," in Proc. EvAAL Workshop, Sep. 2012, pp. 1–4.

[8] T. Bailey and H. Durrant-Whyte, "Simultaneous localization and mapping (SLAM): Part I—The essential algorithms," IEEE Robot. Autom. Mag., vol. 13, no. 2, pp. 99–110, Jun. 2006.

[9] T. Bailey and H. Durrant-Whyte, "Simultaneous localization and mapping (SLAM): Part II," IEEE Robot. Autom. Mag., vol. 13, no. 3, pp. 108–117, Sep. 2006.

[10] S. Thrun, W. Burgard, and D. Fox, "A probabilistic approach to concurrent mapping and localization for mobile robots," Auton. Robots, vol. 31, no. 1–3, pp. 29–53, 1998.

[11] F. Dellaert, F. Alegre, and E. Martinson, "Intrinsic localization and mapping with 2 applications: Diffusion mapping and Marco Polo localization," in Proc. Int. Conf. Robot. Autom., 2003, pp. 2344–2349.

[12] E. Jose and M. D. Adams, "An augmented state SLAM formulation for multiple line-of-sight features with millimetre wave radar," in Proc. IEEE/RSJ Int. Conf. IROS, 2005, pp. 3087–3092.

[13] W. Chew, Waves and Fields in Inhomogeneous Media. New York, NY, USA: IEEE Press, 1995, vol. 522.

[14] W. Chew and Y. Wang, "Reconstruction of two-dimensional permittivity distribution using the distorted Born iterative method," IEEE Trans. Med. Imag., vol. MI-9, no. 2, pp. 218–225, Jun. 1990.

[15] L.-P. Song, C. Yu, and Q. Liu, "Through-wall imaging (TWI) by radar: 2-D tomographic results and analyses," IEEE Trans. Geosci. Remote Sens., vol. 43, no. 12, pp. 2793–2798, Dec. 2005.

[16] Q. Liu et al., "Active microwave imaging. I. 2-D forward and inverse scattering methods," IEEE Trans. Microw. Theory Tech., vol. 50, no. 1, pp. 123–133, Jan. 2002.

[17] Y.-H. Chen and M. Oristaglio, "A modeling study of borehole radar for oil-field applications," Geophys., vol. 67, no. 5, pp. 1486–1494, 2002.

[18] P. Van Den Berg, A. Van Broekhoven, and A. Abubakar, "Extended contrast source inversion," Inverse Problems, vol. 15, no. 5, p. 1325, Oct. 1999.

[19] M. Pastorino, "Stochastic optimization methods applied to microwave imaging: A review," IEEE Trans. Antennas Propag., vol. 55, no. 3, pp. 538–548, Mar. 2007.

[20] Y. Mostofi, "Cooperative wireless-based obstacle/object mapping and see-through capabilities in robotic networks," IEEE Trans. Mobile Comput., vol. 12, no. 5, pp. 817–829, May 2013.

[21] Y. Mostofi and P. Sen, "Compressive cooperative mapping in mobile networks," in Proc. 28th ACC, St. Louis, MO, USA, Jun. 2009, pp. 3397–3404.

[22] G. Oliveri, L. Poli, P. Rocca, and A. Massa, "Bayesian compressive optical imaging within the Rytov approximation," Opt. Lett., vol. 37, no. 10, pp. 1760–1762, May 2012.

[23] G. Tsihrintzis and A. Devaney, "Higher order (nonlinear) diffraction tomography: Inversion of the Rytov series," IEEE Trans. Inf. Theory, vol. 46, no. 5, pp. 1748–1761, Aug. 2000.

[24] Y. Mostofi, "Compressive cooperative sensing and mapping in mobile networks," IEEE Trans. Mobile Comput., vol. 10, no. 12, pp. 1769–1784, Dec. 2011.

[25] L. Tsang, J. Kong, and K.-H. Ding, Scattering of Electromagnetic Waves: Theories and Applications. Hoboken, NJ, USA: Wiley, 2004, vol. 27.

[26] J. Shea, P. Kosmas, S. Hagness, and B. Van Veen, "Three-dimensional microwave imaging of realistic numerical breast phantoms via a multiple-frequency inverse scattering technique," Med. Phys., vol. 37, no. 8, pp. 4210–4226, Aug. 2010.

[27] A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging. Philadelphia, PA, USA: SIAM, 2001.

[28] A. Devaney, "A filtered backpropagation algorithm for diffraction tomography," Ultrason. Imag., vol. 4, no. 4, pp. 336–350, Oct. 1982.

[29] A. Devaney, "Diffraction tomographic reconstruction from intensity data," IEEE Trans. Image Process., vol. 1, no. 2, pp. 221–228, Apr. 1992.

[30] E. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006.

[31] Y. Wang, J. Yang, W. Yin, and Y. Zhang, "A new alternating minimization algorithm for total variation image reconstruction," SIAM J. Imaging Sci., vol. 1, no. 3, pp. 248–272, 2008.

[32] C. Li, "An efficient algorithm for total variation regularization with applications to the single pixel camera and compressive sensing," M.S. thesis, Rice Univ., Houston, TX, USA, 2009.

[33] A. Gonzalez-Ruiz, A. Ghaffarkhah, and Y. Mostofi, "An integrated framework for obstacle mapping with see-through capabilities using laser and wireless channel measurements," IEEE Sensors J., vol. 14, no. 1, pp. 25–38, Jan. 2014.

[34] MobileRobots Inc. [Online]. Available: http://www.mobilerobots.com

[35] Laird Technologies. [Online]. Available: http://www.lairdtech.com/Products/Antennas-and-Reception-Solutions/



Saandeep Depatla received the B.S. degree in electronics and communication engineering from the National Institute of Technology, Warangal, India, in 2010 and the M.S. degree in electrical and computer engineering in 2014 from the University of California, Santa Barbara (UCSB), CA, USA, where he is currently working toward the Ph.D. degree in electrical and computer engineering.

From 2010 to 2012, he worked on developing antennas for radars with the Electronics and Radar Development Establishment, India. His research interests include signal processing, wireless communications, and electromagnetics.

Lucas Buckland received the B.S. and M.S. degrees in electrical and computer engineering from the University of California, Santa Barbara (UCSB), CA, USA, in 2013 and 2014, respectively.

He is currently a Research Assistant in Prof. Mostofi's laboratory at UCSB. His research interests include autonomous and robotic systems.

Yasamin Mostofi (SM'14) received the B.S. degree in electrical engineering from Sharif University of Technology, Tehran, Iran, in 1997 and the M.S. and Ph.D. degrees in the area of wireless communications from Stanford University, Stanford, CA, USA, in 1999 and 2004, respectively.

She is currently an Associate Professor with the Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA. Prior to that, she was with the faculty of the Department of Electrical and Computer Engineering, University of New Mexico, Albuquerque, NM, USA, from 2006 to 2012. She was a Postdoctoral Scholar in control and dynamical systems with the California Institute of Technology, Pasadena, CA, from 2004 to 2006. Her research is on mobile sensor networks. Her current research interests include radio-frequency (RF) sensing, see-through imaging with WiFi, X-ray vision for robots, communication-aware robotics, and robotic networks. Her research has appeared in several news outlets, such as the BBC and Engadget.

Dr. Mostofi served on the IEEE Control Systems Society Conference Editorial Board during 2008–2013. She is currently an Associate Editor of the IEEE TRANSACTIONS ON CONTROL OF NETWORK SYSTEMS. She received the Presidential Early Career Award for Scientists and Engineers, the National Science Foundation CAREER Award, and the IEEE 2012 Outstanding Engineer Award of Region 6 (more than ten western states). She also received the Bellcore Fellow-Advisor Award from the Stanford Center for Telecommunications in 1999 and the 2008–2009 Electrical and Computer Engineering Distinguished Researcher Award from the University of New Mexico.