
Acoustic Tracker Demo

Description

This demo uses sound to identify the location of an object. It is a simplified version of what is used in military applications, in which an array of microphones “listens” to a battlefield to identify and locate threats. In our demo, the “threat” is modeled as a loudspeaker making bad guy noises. The listening array is two microphones, which allow us to estimate the position along the straight line between the microphones. Location estimation is based solely on the rms value of the noise measured at the two microphones. Note that the algorithm in a real application would be much more complicated, relying heavily on phase information to identify source location and on spectral analysis for source characterization.
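The core idea can be sketched in a few lines. This is not the demo’s MATLAB code; it is a minimal Python illustration of the rms-ratio idea under an assumed ideal 1/r decay (the demo itself calibrates a curve fit rather than assuming the decay law), and the function names are invented for the example:

```python
import math

def rms(samples):
    """Root-mean-square of a sample sequence."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def estimate_position(left_samples, right_samples, spacing=1.0):
    """Estimate source position along the line between two microphones.

    Assumes a simple 1/r amplitude decay, so the distance from each
    microphone is inversely proportional to the rms level it measures.
    Returns the distance from the LEFT microphone, in units of spacing.
    """
    l, r = rms(left_samples), rms(right_samples)
    # 1/r decay: r_left * l == r_right * r, with r_left + r_right == spacing
    return spacing * r / (l + r)
```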

What’s so great about this demo? It shows the true power of integrating tools for analysis and acquisition. I am developing my algorithm with live data, so that I get immediate feedback on its performance. In the dark ages, I would have tested my algorithm offline, with data stored from previous tests. This demo also hits on the infinite analytical capabilities that MATLAB can bring to a Test & Measurement Application. Not to mention that it pushes 5 or 6 products!

Author of this document, and this version of the demo: Scott Hirsch.
Author of the original acoustic tracker demo: Loren Dean.

Table of Contents

Description
Table of Contents
Requirements
    Software
    Hardware
Setup
    Software
        MATLAB
        Windows
    Hardware
Running the demo
Extending the demo
    Curve Fitting Toolbox
    Signal Processing Toolbox
    Filter Design Toolbox
Howzit work?


Requirements

Software

*MATLAB
*Data Acquisition Toolbox – Generate source tones, acquire microphone data
*Curve Fitting Toolbox – Fit acoustic decay equation to calibration data. This gives us a look-up function to guess source location.
Signal Processing Toolbox – Filtering measured data to eliminate extraneous noise sources
Filter Design Toolbox – Optional extension to the story for designing a realizable filter
Virtual Reality Toolbox – Optional visualization

*Required (though could work around use of Curve Fitting Toolbox).

Hardware

External speaker. 2 speakers are required for the filter design aspect of the demo.
Two external microphones, preferably powered stand microphones.
A Y-adapter to allow both microphones to be plugged into one jack.

WARNING: For reasons that only Bob Bemis can explain, a simple single mini to double mini adapter will not work. Use a single mini to double RCA, followed by two RCA to mini adapters. The microphones will plug into the two mini to RCA adapters, and the single mini end will plug into your sound card.

Setup

Software

MATLAB

Make sure that all of the files are on your path.

File List:

Acoustic Tracker.doc – This file
tracker.m – Main Application GUI
configurewhereisit.m – Calibrates microphones
whereisit.m – Main Application
makesound.m – Makes a sound (used by configurewhereisit and whereisit)
tracker.wrl – Virtual Reality World
decaycurvefit.cfit – CFTool session used for prototyping curve fit
decaycurvefit.m – M-file generated by CFTool from decaycurvefit.cfit
narrowpassfilter.fda – Filter Design Tool session with band pass filter specified

Windows

Configure Windows to use Line-In for recording: Open the Volume Control panel. One way to do this is to double-click the speaker icon in the system tray of the taskbar. It should look like this:


From the Options menu, select Properties. You’ll see the following dialog:

Select Adjust volume for Recording. This brings up the recording control panel:

Select Line In, as shown above. Make sure that the balance is centered, and that the volume is reasonably high.


Hardware

Connect the speaker (or speakers) to the headphone jack of the sound card. Connect the microphones (via adapters) to the LINE-IN jack of the sound card. Note that the microphone jack is MONO, and will not work! Be sure that the left microphone is in channel 1. With at least one adapter, the white RCA input corresponds to channel 1 and the red input corresponds to channel 2. If this doesn’t work for you, use daqscope to identify which microphone goes into which channel.

If you are using an IBM T20, T21, or T22, you may find it difficult to squeeze in all of the plugs. It is helpful to prop up the side of the laptop, or set up the laptop at the edge of a table, so that there is sufficient room.

I find that performance is best when the microphones are separated by at least 1 meter. It also helps to set the microphones at the same height as the speaker.

If you have two speakers, use the right (as opposed to left, not as opposed to wrong) speaker as the source we are trying to track. The left speaker will be used later to add environmental noise. Use daqfcngen to ensure that you are using the RIGHT (channel 2) loudspeaker as your “threat”.

Running the demo

This section details the process of running the demo. After introducing the notation used in this document, it walks through the three main steps: getting started, configuring (calibrating), and actually running the main demo. Finally, suggested ways of extending the demo are presented.

[Figure: tracking geometry – the source sits at distance rL from the left microphone and rR from the right microphone; the levels Lrms and Rrms measured at each microphone decay with distance.]


Notation:

>> foo – Type foo at the command prompt
guiname: <Action> – Click the button (checkbox, …) labeled “Action” on the GUI referred to as “guiname”
guiname: VarName = Val – Specify the value Val for the variable named VarName (typically via edit box or pulldown menu)

Get Started:

>> tracker

The demo is controlled by the TRACKER GUI (tracker.m), shown below.

The “Source” frame allows us to specify what comes out of the loudspeaker. A single tone is the easiest for our algorithm, but a .wav file could be much more interesting. I’ve included many .wav files of helicopters which can be used. Any recording which is reasonably stationary (the more so, the better), and which can loop without a big hiccup will do. The Shhh button just stops the sound from playing, without stopping the demo (useful for explaining the demo without having a pure tone piercing your brain).

After a source has been selected, the algorithm needs to be calibrated (configured). Click Tracker:<Configure> to perform the calibration (configurewhereisit.m). This will bring up the Configure window. Follow the directions regarding speaker placement, and you should be rewarded with a figure which looks something like the one shown in the next section.

Configure:


The top half of the figure shows the raw data recorded at the microphones for the three speaker positions. As you could probably guess, you’d like to see that the left microphone is loudest when the speaker is to the left, and that the right microphone is loudest when the speaker is to the right.

The blue dots in the bottom half of the figure show the rms values of the measured data (rms is a reasonably good measure of how loud something is). The red lines are the results of a curve fit to these data points. The fit equation (a rational fit with a constant numerator and linear denominator) is rms(x) = p1 / (x + q1).
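Because the fit is a constant over a linear polynomial, it can also be recovered by a simple linearization: 1/rms is linear in x. A hypothetical Python sketch of that idea (the demo itself uses the Curve Fitting Toolbox’s fit; the function name here is invented):

```python
def fit_inverse_decay(xs, levels):
    """Fit rms(x) = p1 / (x + q1) to calibration data.

    Linearize: 1/rms = x/p1 + q1/p1, then ordinary least squares on
    (x, 1/rms) recovers slope = 1/p1 and intercept = q1/p1.
    """
    n = len(xs)
    ys = [1.0 / v for v in levels]
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    p1 = 1.0 / slope
    q1 = intercept * p1
    return p1, q1
```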

This figure shows a typical problem, particularly when recorded data is used instead of a steady tone. Notice that the measurements at the left microphone are almost exactly the same for x=.5 and x=1. This will make the algorithm performance seem much more erratic. If nothing else, this highlights the power of our tools. I get immediate feedback that my assumption of 1/r decay might not be accurate enough for this system, and could develop a more accurate model.


Calibration should be performed any time the source changes or the microphones are moved. If the RUN button is disabled, you should probably re-run the configuration. The results of the configuration are stored in whereisitconfiguration.mat. Close the Configure window when you are done.

Run:

Tracker: <Run>

Time to show off and RUN the demo. Select the VR check box to use the VR toolbox to enhance the display of the estimated source position. Either way, when you click Run, you should see the Acoustic Tracker window (whereisit.m):

The top of the figure shows the trace and FFT of the raw data. The bottom shows the estimated position (the result of our algorithm). The red and blue dots are the position estimates provided by the right and left microphones, respectively. Each of these is the result of backing out the distance to the source from the curve fit we created during calibration. The equation is found by solving the fit equation for r: r = p1 / rms - q1.

The green dot is the final estimate – simply the average of the two microphone estimates.
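The inversion-and-average step can be sketched as follows. This is an illustrative Python version, not the code in whereisit.m, and the function names are invented for the example:

```python
def invert_fit(rms_value, p1, q1):
    """Solve rms = p1 / (r + q1) for the distance r."""
    return p1 / rms_value - q1

def final_estimate(left_rms, right_rms, left_fit, right_fit, spacing=1.0):
    """Combine the per-microphone estimates (the red and blue dots)
    into the final position (the green dot) by simple averaging.
    Position is measured from the left microphone."""
    x_from_left = invert_fit(left_rms, *left_fit)
    # The right microphone measures distance from ITS end of the line.
    x_from_right = spacing - invert_fit(right_rms, *right_fit)
    return 0.5 * (x_from_left + x_from_right)
```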


If you’ve been brave enough to select the VR option, you should (eventually) see something like the following window open up in your internet browser.

The helicopter position is the average of the two microphone estimates (i.e., the green dot). The trees indicate the listener locations.

Extending the demo

Curve Fitting Toolbox

The Curve Fitting Toolbox is used behind the scenes in this demo. We use fit in configurewhereisit to generate the 1/r fit. To highlight this toolbox, we can show the process of designing the original fit. This process shows the simplicity of the toolbox, but it does not touch on its power.

Extract the data (cftool needs vectors):

>> x = [0 .5 1]';
>> y1 = data([1 3 2],1);
>> y2 = flipud(data([1 3 2],2));
>> cftool

cftool: <Data>
cftool: Data: XData=x, YData=y1, Data set name=LeftMic
cftool: Data: XData=x, YData=y2, Data set name=RightMic
(don’t forget to click <Apply> for each data set)


cftool: <Fitting>
cftool: Fitting: FitName=LeftFit
cftool: Fitting: DataSet=LeftMic
cftool: Fitting: Type of fit=Rational
cftool: Fitting: Numerator=Constant
cftool: Fitting: Denominator=Linear Polynomial
cftool: Fitting: <Apply>
cftool: Fitting: <Copy Fit>
cftool: Fitting: Update for right microphone, <Apply>

A similar session is saved as decaycurvefit.cfit.

Now, what can we do? We can save the fits to the workspace. A useful thing to do would be to save the fit to an M-file. This is a must-show feature of the toolbox, as it lets customers know that they can have the best of both worlds – the simplicity and speed of developing in an interactive, visual environment, and the power of scripts to perform repetitive or automated tasks. The M-file will give us the code we need to reproduce this fit in our program. This M-file is a function which will reproduce the two fits for any x, y1, and y2. We really are looking for two lines of code in this file:

>> ft_ = fittype('rat01');
>> cf_ = fit(x, y2, ft_, 'Startpoint', st_);

Signal Processing Toolbox

We can make the demo one step more “realistic” by adding additional environmental noise. We need to improve the performance of our algorithm so that it can ignore the extraneous noise. We will design a narrow bandpass filter to allow only the threat noise to pass through to the algorithm. To present this part of the demo, you need a second loudspeaker. Place the second speaker very close to one of the microphones. We will send a chirp signal through it, which should make the algorithm go nuts.

Tracker: <Fcn Generator>

Fcn Generator:
Chirp
Initial Frequency: 500 Hz
Target Frequency: 600 Hz

Be sure to pick frequencies that don’t straddle your source frequency. If you do, our filter won’t attenuate the noise!
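For reference, a linear chirp like the interfering signal here can be sketched as follows. This is a generic Python illustration of a chirp sweep, not daqfcngen’s implementation:

```python
import math

def linear_chirp(f0, f1, duration, fs):
    """Linear chirp sweeping f0 -> f1 Hz over `duration` seconds at
    sample rate fs. The instantaneous frequency is
    f(t) = f0 + (f1 - f0) * t / duration, so the phase is its integral:
    2*pi*(f0*t + (f1 - f0)*t**2 / (2*duration))."""
    n = int(duration * fs)
    k = (f1 - f0) / duration  # sweep rate in Hz per second
    return [math.sin(2 * math.pi * (f0 * t + 0.5 * k * t * t))
            for t in (i / fs for i in range(n))]
```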

When you start the function generator, you should find that the estimated source location jumps all over the place. Let’s design the filter:

Acoustic Tracker: Filter <Design>. This opens fdatool.


fdatool:
a) Import session: narrowpassfilter.fda, OR do it yourself …
b) Set these parameters:
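If you want to show what the band-pass design is doing, an equivalent FIR can be sketched by hand. This windowed-sinc design is a generic stand-in, not the filter stored in narrowpassfilter.fda, and the helper names are invented:

```python
import math

def bandpass_fir(fo, bw, fs, numtaps=401):
    """Windowed-sinc band-pass FIR centered on fo (Hz) with width bw (Hz),
    sample rate fs. Use an odd numtaps so the center tap lands on t = 0."""
    lo, hi = (fo - bw / 2) / fs, (fo + bw / 2) / fs  # normalized edges
    m = (numtaps - 1) / 2
    h = []
    for n in range(numtaps):
        t = n - m
        if t == 0:
            v = 2 * (hi - lo)  # limit of the sinc difference at t = 0
        else:
            v = (math.sin(2 * math.pi * hi * t)
                 - math.sin(2 * math.pi * lo * t)) / (math.pi * t)
        # Hamming window to control stop-band ripple
        v *= 0.54 - 0.46 * math.cos(2 * math.pi * n / (numtaps - 1))
        h.append(v)
    return h

def gain_at(h, f, fs):
    """Magnitude response of FIR h at frequency f (Hz)."""
    re = sum(c * math.cos(2 * math.pi * f / fs * n) for n, c in enumerate(h))
    im = sum(c * math.sin(2 * math.pi * f / fs * n) for n, c in enumerate(h))
    return math.hypot(re, im)
```

With a few hundred taps the gain is near unity at the center frequency and tiny in the chirp’s band, which is exactly the behavior the demo relies on.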

Notice that fo is already defined in your base workspace (it was put there by the tracker GUI). For a single-tone source, fo is the frequency. For a recorded source, fo is the peak frequency. This is calculated at the beginning of configurewhereisit by computing the psd of the complete record. A small correction is used to help guard against large DC offsets.
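That fo calculation amounts to picking the peak of the magnitude spectrum while ignoring the lowest bins. A Python sketch of the idea (the real code uses psd in configurewhereisit; the min_freq cutoff here is an assumed stand-in for the demo’s DC-offset correction):

```python
import math

def peak_frequency(samples, fs, min_freq=20.0):
    """Estimate the dominant frequency of a recording from its magnitude
    spectrum (a direct DFT), skipping bins below min_freq Hz as a simple
    guard against a large DC offset."""
    n = len(samples)
    best_k, best_mag = None, -1.0
    for k in range(1, n // 2):
        if k * fs / n < min_freq:
            continue  # low-frequency guard
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n
```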

Export the filter to the workspace. Use the default names (Num and Den), and select to overwrite workspace variables.

Acoustic Tracker: Filter <Update>. This grabs the Num and Den values from the workspace. You can use this while the filter is running to update the filter, too.

Acoustic Tracker: Filter <Use>. This turns the filter on. There is an extra bit of processing required here. Even though the filter has (nearly) unity gain across the pass band, the rms value of the signal will inevitably be reduced by the filter. This is because there is likely an appreciable level of noise away from the main frequency (such as harmonics from a distorted single tone). Remember that our calibration was based on rms value of the entire signal. The signal is recalibrated by the ratio of the filtered rms value to the unfiltered rms value.

I have not tested the filter with helicopter noise. This will probably take some playing around, as the filter will be really, really long (over 2000 taps).

Filter Design Toolbox

I haven’t worked out this part yet. I’ve tried just quantizing the filter, with [8 7] word length for the coefficients. This preserves the pass band performance, but boosts the noise floor significantly. I tried to improve the filter with second-order sections, but MATLAB hangs once I select <Convert Filter Structure> – the filter is apparently too big. My biggest concern with this whole step is that you’d have to be nuts to try to implement a filter this big (order 1000 for 500 Hz, order 2000 for 200 Hz).

We can reduce the filter order by relaxing the stop band attenuation specs. Mine are really high.
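The order-versus-attenuation trade-off is easy to quantify with the standard Kaiser window order estimate (the same formula kaiserord is based on). A Python sketch, with illustrative numbers rather than this demo’s actual specs:

```python
import math

def kaiser_order(atten_db, transition_hz, fs):
    """Kaiser window estimate of FIR order for a given stop-band
    attenuation (dB) and transition width (Hz):
    N ~= (A - 7.95) / (2.285 * delta_omega).
    Shows why relaxing the attenuation spec shrinks the filter."""
    dw = 2 * math.pi * transition_hz / fs  # transition width, rad/sample
    return math.ceil((atten_db - 7.95) / (2.285 * dw))
```

Halving the stop-band attenuation spec roughly halves the filter order, which is the lever suggested above.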

Future Work

There’s one small thing that I’d like to add. It is great to show that changes you make in a daqcallback are reflected immediately (while things are running). It would be nice to show that the algorithm doesn’t work very well, change something in the code, and show that it then works great. The parameter which seems to have the greatest impact on performance is the length of the moving average used to smooth the position measurement. The problem right now is that this average is defined in the initialization routine, so we would have to stop and restart to show the effect. Any ideas?
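The smoothing in question is an ordinary moving average; the window length sets the trade-off between a steady estimate and quick response when the source moves. An illustrative Python sketch (not the demo’s code):

```python
def moving_average(values, window):
    """Causal moving average used to smooth a stream of position
    estimates. A longer window gives a steadier estimate but responds
    more slowly when the source actually moves."""
    out = []
    for i in range(len(values)):
        lo = max(0, i - window + 1)  # shorter average until the window fills
        out.append(sum(values[lo:i + 1]) / (i + 1 - lo))
    return out
```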