Modulation characterization using the wavelet transform
DigitalCommons@Robert W. Woodruff Library, Atlanta University Center
ETD Collection for AUC Robert W. Woodruff Library
5-1-1997
Modulation characterization using the wavelet transform
Lanier A. Watkins, Clark Atlanta University
Follow this and additional works at: http://digitalcommons.auctr.edu/dissertations
Part of the Physics Commons
This Thesis is brought to you for free and open access by DigitalCommons@Robert W. Woodruff Library, Atlanta University Center. It has been accepted for inclusion in ETD Collection for AUC Robert W. Woodruff Library by an authorized administrator of DigitalCommons@Robert W. Woodruff Library, Atlanta University Center. For more information, please contact [email protected].
Recommended Citation
Watkins, Lanier A., "Modulation characterization using the wavelet transform" (1997). ETD Collection for AUC Robert W. Woodruff Library. Paper 640.
THESIS TRANSMITTAL FORM
Name of Student: Lanier A. Watkins
Title of Thesis: Modulation Characterization Using the Wavelet Transform
We the undersigned members of the Committee advising this
thesis have ascertained that in every respect it acceptably fulfills the final requirement for
the degree of M.S. in Physics.

Computer Science Department                              Date

Physics Department                                       Date
As Chair of the Department of Physics,
I have verified that this manuscript meets the Department's standards of form and
content governing theses for the degree sought.
Date
As Dean of the School of Arts and Sciences, I have verified that this
manuscript meets the School's regulations governing the content and form of theses.
Date
As Dean of Graduate Studies I have verified that this manuscript meets the
University's regulations governing the content and form of theses.
Dean of Graduate Studies                                 Date
ABSTRACT
PHYSICS
WATKINS, LANIER A.    B.S., CLARK ATLANTA UNIVERSITY, 1996
MODULATION CHARACTERIZATION USING THE WAVELET TRANSFORM
Advisor: Dr. Kenneth Perry, Department of Computer Science
Thesis Dated: May, 1997
The focus of this research is to establish an Automatic Modulation Identifier
(AMI) using the Continuous Wavelet Transform (CWT) and several different classifiers.
A Modulation Identifier is of particular interest to the military, because it has the potential
to quickly discriminate between different communication waveforms. The CWT is used to
extract characterizing information from the signal, and an artificial Neural Network is
trained to identify the modulation type.
Various analyzing wavelets and various classifiers were used to assess comparative
performance. The analyzing wavelets used were the Mexican Hat Wavelet, the Morlet
Wavelet, and the Haar Wavelet. The variety of classifiers used were the Multi-Layer
Perceptron, the K-Nearest Neighbor and the Fuzzy Artmap. The CWT served as a
preprocessor, and the classifiers served as an identifier for Binary Phase Shift Keying
(BPSK), Binary Frequency Shift Keying (BFSK), Binary Amplitude Shift Keying (BASK),
Quadrature Phase Shift Keying (QPSK), Eight Phase Shift Keying (8PSK), and Quadrature
Amplitude Modulation (QAM) signals. Separation of BASK, BFSK and BPSK was
performed in part one of the research project, and separation of BPSK, QPSK, 8PSK,
BFSK, and QAM comprised the second part of the project. Each experiment was
performed for waveforms corrupted with Additive White Gaussian Noise ranging from 20
dB to 0 dB carrier-to-noise ratio (CNR). To test the robustness of the technique, part one
of the research project was tested upon the carrier frequencies ω/2 and ω/3, which
were different from the carrier frequency ω that the classifiers were trained upon. In the
separation of BASK, BFSK and BPSK, the AMI worked extremely well (100% correct
classification) down to 5 dB CNR tested at carrier frequency ω, and it worked well (80%
correct classification) down to 5 dB CNR tested at carrier frequencies ω/2 and ω/3. In
the separation of BPSK, QPSK, 8PSK, BFSK, and QAM, the AMI performed very well at
10 dB CNR (98.8% correct classification). Also, a hardware design in the Hewlett Packard
Visual Engineering Environment (HP-VEE) for implementation of the AMI algorithm was
constructed and is included for future expansion of the project.
CLARK ATLANTA UNIVERSITY THESIS
DEPOSITED IN THE ROBERT W. WOODRUFF LIBRARY
STATEMENT OF UNDERSTANDING
In presenting this thesis as a partial fulfillment of the requirements for an advanced degree
from Clark Atlanta University, I agree that the Robert W. Woodruff library shall make it
available for inspection and circulation in accordance with its regulations governing
materials of this type. I agree that permission to quote from, to copy from, or to publish
this thesis may be granted by the author or, in his absence, the Dean of the School of Arts
and Sciences at Clark Atlanta University. Such quoting, copying, or publication must be
solely for scholarly purposes and must not involve potential financial gain. It is understood
that any copying from or publication of this thesis which involves potential financial gain
will not be allowed without written permission of the author.
Signature of Author                                      Date
NOTICE TO BORROWERS
All dissertations and theses deposited in the Robert W. Woodruff Library must be used
only in accordance with the stipulations prescribed by the author in the preceding
statement.
The author of this thesis is:
Name: Lanier A. Watkins
Street Address: P.O. Box 356
City, State and Zip: Marshallville, GA 30314
The directors of this thesis are:
Professors: Dr. K. Perry / Dr. L. Lewis
Department: Physics
School: Arts and Sciences
Clark Atlanta University
Office Telephone: 880-8797
Users of this thesis not regularly enrolled as students of the Atlanta University Center are
required to attest acceptance of the preceding stipulations by signing below. Libraries
borrowing this thesis for use of patrons are required to see that each user records here the
information requested.
NAME OF USER ADDRESS DATE TYPE OF USE
MODULATION CHARACTERIZATION USING THE WAVELET TRANSFORM
A THESIS
SUBMITTED TO THE FACULTY OF CLARK ATLANTA UNIVERSITY IN
PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE
BY
LANIER A. WATKINS
DEPARTMENT OF PHYSICS
ATLANTA, GEORGIA
MAY 1997
©1997
LANIER A. WATKINS
All Rights Reserved
ACKNOWLEDGMENTS
I would like to thank Dr. Kenneth Perry, my advisor, for giving me a chance to
work with him, sparking my interest in wavelets/neural network technology, and for a
monthly stipend. Also I would like to thank the members of my committee: Dr. Lonzy
Lewis, Dr. Romain Murenzi and Dr. Denise Stephenson-Hawk for taking time to review
my work. I would like to thank Dr. Lance Kaplan, Dr. John Hurley and Dr. Raymond
Brown for serving as my "last minute saviors." A special thanks goes to Dr. Dan
Dudgeon, Dr. Richard Molnar, and Dr. Robert Baxter all of M.I.T Lincoln Laboratory for
advising me on most of this work during my summer internship. An even bigger thanks
goes to Prism-D for funding me during my five years at Clark Atlanta University; thanks
goes also to the CTSP for allowing me to use their facilities at my leisure. I would like to
thank Alpha Phi Alpha Fraternity, Inc. for making me realize earlier in my life that, "I am
the master of my fate, I am the captain of my soul." Last but not least, I would like to
thank "The Most High" for my very existence.
TABLE OF CONTENTS
ACKNOWLEDGMENTS ii
LIST OF TABLES v
LIST OF FIGURES vi
LIST OF ABBREVIATIONS ix
Chapter
1. INTRODUCTION 1
Communication Signals 5
Wavelet Theory 10
Neural Network Theory 17
Matlab Implementation of the CWT 21
2. RESEARCH METHODOLOGY AND DESIGN 23
Design Issues 23
Approach For Resolving Design Issues 24
Additive White Gaussian Noise and Varied Carrier Frequency 25
Feature Extraction 26
Classifiers 33
3. IMPLEMENTATION 37
Automatic Modulation Identification Algorithm 37
4. RESULTS 41
Results From Separation of BPSK/BFSK/BASK 41
Results From Separation of BPSK/QPSK/8PSK/BFSK/QAM. 48
5. SUMMARY/CONCLUSION 50
Future Work 52
APPENDIX
I. COMMUNICATION SIGNALS 54
II. MATLAB PROGRAMS 56
III. SIMULATION OF GAUSSIAN NOISE CHANNEL 60
IV. DECISION BOUNDARIES FOR NEURAL NETWORKS 62
BIBLIOGRAPHY 64
LIST OF TABLES
Table Page
1. Comparison of Algorithms 3
2. Mexican Hat Wavelet Modulation Classifier Results 42
3. Morlet Wavelet Modulation Classifier Results 43
4. Haar Wavelet Modulation Classifier Results 45
5. Morlet Wavelet Modulation Classifier Results for ω/2 46
6. Morlet Wavelet Modulation Classifier Results for ω/3 48
7. Morlet Wavelet Modulation Classifier Results for Experiment #2.... 49
LIST OF FIGURES
Figure Page
1. Binary Phase Shift Keying Signal From C computer program 6
2. Binary Frequency Shift Keying Signal 6
3. Binary Amplitude Shift Keying Signal 7
4. Quadrature Phase Shift Keying Signal 8
5. 8 Phase Shift Keying Signal 8
6. 16-Quadrature Amplitude Modulation Signal 9
7. Binary Phase Shift Keying Signal from internet on left, and Binary Phase
Shift Keying Signal from C computer program on right 10
8. 1-D Morlet Wavelet and FT of 1-D Morlet Wavelet 11
9. 1-D Mexican Hat 13
10. 1-D Haar Wavelet 13
11. Real part of 1-D Morlet Wavelet at ω = 5.5 14
12. Artificial Neuron 18
13. Network Containing 1 Hidden Node 19
14. Haar Feature Vector For BPSK 27
15. Haar Features For BFSK 28
16. Haar Features For BASK 28
17. Mexican Hat Features For BFSK ... 29
18. Mexican Hat Features For BPSK 29
19. Mexican Hat Features for BASK 30
20. Morlet Features For BASK 30
21. Morlet Features For BFSK 31
22. Morlet Features For BPSK From The Internet 31
23. Morlet Features For BPSK From C Program 32
24. Morlet Features For PSK8 32
25. Morlet Features For 16-QAM 33
26. Morlet Features For QPSK 33
27. Diagram Of Nearest Neighbor Classifier 34
28. Diagram Of Multi-Layer Perceptron Classifier 35
29. Diagram Of Fuzzy Artmap Classifier 36
30. Flowchart Of Modulation Characterization Algorithm 37
31. Results From Mexican Hat Modulation Classifier For Constant ω 42
32. Results From Morlet Modulation Classifier For Constant ω 43
33. Results From Haar Modulation Classifier For Constant ω 44
34. Results From Morlet Modulation Classifier For ω/2 46
35. Results From Morlet Modulation Classifier For ω/3 47
36. Results From Second Experiment For Morlet Modulation Classifier... 49
37. HP-VEE Program For Wavelet Transform 53
38. Communication Signals From Experiment #1 54
39. Communication Signals From Experiment #2 55
40. Simulation of Gaussian Noise Channel 60
41. Feature Extraction From Noisy Signal 61
42. Example of Decision Boundaries for Neural Networks 63
LIST OF ABBREVIATIONS
AMI Automatic Modulation Identifier
AWGN Additive White Gaussian Noise
CNR Carrier-To-Noise Ratio
CWT Continuous Wavelet Transform
BASK Binary Amplitude Shift Keying
BFSK Binary Frequency Shift Keying
BPSK Binary Phase Shift Keying
8PSK Eight Phase Shift Keying
FT Fourier Transform
FFT Fast Fourier Transform
HMC Haar Modulation Classifier
HP-VEE Hewlett Packard Visual Engineering Environment
IFFT Inverse Fast Fourier Transform
KNN K-Nearest Neighbor
LNK Richard Lippman, Dave Nation and Linda Kukolich
MHMC Mexican Hat Modulation Classifier
M.I.T. Massachusetts Institute of Technology
MLP Multi-Layer Perceptron
MMC Morlet Modulation Classifier
QAM Quadrature Amplitude Modulation
QPSK Quadrature Phase Shift Keying
CHAPTER 1
INTRODUCTION
This thesis deals with the problem of modulation characterization in the presence
of varying noise and varying carrier frequency. The results show that when the Automatic
Modulation Identifier (AMI) is trained and tested upon a constant carrier frequency, an
inverse relationship between increasing noise level and percent of correct classification is
found. This response is quite logical, because at higher noise levels the signals that the
classifiers are tested upon become distorted. Some of these distorted signals can no
longer be identified by the classifiers, thus the signals are misclassified.
The Wavelet Transform is very suitable for transient signal analysis.1 A transient is
an abrupt change in the signal. Each modulation pattern contains transients at the points
of phase, frequency or amplitude change. These transients are the main points of interest,
because they correspond to binary information that is imposed upon the carrier signal. In
the Wavelet Transform of the signal there exist local maxima at the points where transients
are detected, thus the Wavelet Transform is used to extract a signature pattern or a feature
vector from the modulation pattern. The classifiers are trained using the noise free feature
vectors taken from each modulation type, and the classifiers are tested upon noisy feature
vectors.
The approach for presenting the research will be as follows: Modulation
Characterization via algorithms other than the AMI will be discussed first, along with
background information on the Wavelet Transform and on Neural Network technology;
the Research Methodology/System Design, Implementation, and Results will be
discussed in Chapters 2, 3, and 4, respectively. Finally, in Chapter 5 the research will be
summarized and conclusions will be presented.
Modulation Characterization has been accomplished by K.C. Ho, W. Prokopiw,
and Y.T. Chan.2 Their principal objective was to distinguish between Binary Frequency
Shift Keying (BFSK) and Binary Phase Shift Keying (BPSK) signals. The investigators
used Haar Wavelet analysis for feature extraction and statistical procedures to produce a
modulation identification scheme. Taking the desired features, the researchers used
median filtering and the variance to separate the two signals. Their system was found to
possess the Rician distribution, and it directly follows that this particular probability density
function reduces to the Rayleigh distribution at low Carrier-to-Noise Ratio (CNR) and
Gaussian at high CNR. Once the correct probability function was determined, a median
filter was used and the variance was calculated. They noticed that the variance would
have a Chi-squared distribution since the median filter output was Gaussian. The above
information combined with a probability of misclassification of BPSK allowed them to
determine the threshold for BPSK and BFSK separation. Their results were 100% correct
classification of BFSK patterns and 96.33% correct classification of BPSK patterns at 13
dB CNR.
Another modulation identification scheme was proposed by S.Z. Hsue and Samir
S. Soliman.3 Their approach to the problem was based on zero crossings. The algorithm starts with
sampling the signal with a zero crossing sampler. The zero crossing points are recorded
and information about the phase and frequency of the signal is recorded. Next the
probability density functions of the zero crossing points are generated. At high carrier to
noise ratio the probability density function is Gaussian, and because of this the probability
density of the zero crossing points is approximated by the Gaussian density. Their
results show that the method is accurate from 6.5 dB CNR to 17 dB CNR. The next step would be
to calculate either the phase or frequency histogram, depending upon variance of the
probability. This histogram information is used as a feature vector for a parallel
processing scheme that is programmed to characterize the information from the histogram.
They report results at 15 dB CNR and above. All of the algorithms mentioned in this
thesis are revisited in Table 1.
Table 1. Comparison of Algorithms

INVESTIGATOR        CNR      % CORRECT    METHOD
HSUE                15 dB    97%          ZERO-CROSS
K.C. HO             13 dB    96.16%       HAAR
WATKINS (PART 1)    5 dB     97%          MORLET
WATKINS (PART 2)    10 dB    98.8%        MORLET
In this thesis, a Modulation Characterization algorithm is presented that can
completely separate (100% correctly) the following signals at 10 dB CNR (At a constant
carrier frequency): Binary Phase Shift Keying (BPSK), Binary Frequency Shift Keying
(BFSK), and Binary Amplitude Shift Keying (BASK). At varied carrier frequency (ω/3),
results are reported at 88.67% correct classification. In the second experiment the same
algorithm is used to separate the following signals: Quadrature Phase Shift Keying
(QPSK), Eight Phase Shift Keying (8PSK), Quadrature Amplitude Modulation (QAM),
Binary Phase Shift Keying (BPSK), and Binary Frequency Shift Keying (BFSK). The
algorithm is capable of achieving 98.8% correct classification for the separation of any of
the 5 signals at 10 dB CNR. The experiments are separated into two sections because of
the levels of difficulty. The first experiment is nontrivial; however, the second experiment
is even more difficult. This is because the signals in the first section are more dissimilar
than the signals in the second section, thus the first set of signals should be easier to
separate.
The three forms of modulation used in the first classification experiment were
created using C computer programs. These C programs alter the amplitude (A), the
frequency (ω), or the phase (φ) of the carrier wave, A cos(ωt + φ), such that the following
modulation patterns are produced: Binary Phase Shift Keying (BPSK), Binary Frequency
Shift Keying (BFSK), and Binary Amplitude Shift Keying (BASK). There are five forms
of modulation used in the second classification experiment: Quadrature Phase Shift Keying
(QPSK), Eight Phase Shift Keying (8PSK), Quadrature Amplitude Modulation (QAM),
Binary Phase Shift Keying (BPSK), and Binary Frequency Shift Keying (BFSK). All of these signals except BFSK were
downloaded via the internet from the website
www.ee.byu.edu/ee/class/ee492.44/ee492.44.html. The BPSK signal used in the first
experiment was created using a C computer program, and the BPSK signal used in the
second part was downloaded from the internet. Each signal in the first experiment was
created using the same binary information, and the binary digits were sent every T seconds
where T is the period of the cosine carrier. The binary information is only four digits long
(1010). The cosine carrier is discretized to 2^7, or 128, points. This means that there are
128 points in each file that corresponds to each modulation type. In the first experiment
the main objective was to prove that the AMI could detect the transients in the signals,
and to show that these transients could characterize each signal. The signals in the second
experiment that were downloaded from the internet were created using the same binary
information. The carrier frequency was 2 Hz, the symbol rate was 1 symbol/second, and the
sampling rate was 16 samples/second. Later, in the discussion of the results from
experiment one and experiment two it will be shown that even though these signals were
created using different methods the features extracted using the Morlet Wavelet are very
similar.
Communication Signals
Binary modulation is a method of altering a carrier waveform so that binary data
can be sent through an analog channel.4 Each modulation type has its own characteristics,
and the CWT exploits this fact to allow the classifier to discriminate between the
modulation types.
BPSK is the result of varying the phase (φ) of a carrier wave in a manner that
corresponds to either a binary 1 or 0.4 It has the advantage of being less susceptible to
noise than the other two modulation types. BPSK is very desirable for applications such
as microwave radio (see Figure 1).
Fig. 1. Binary Phase Shift Keying Signal From C computer program.
BFSK is the second modulation pattern of interest. It is accomplished by switching
the frequency ω of a carrier wave to send either a binary 1 or 0.4 This method is more
susceptible to noise than BPSK, but less susceptible to noise than BASK. BFSK is very
useful for multiplexing audio frequencies onto telephone channels for teletype or data (See
Figure 2).
Fig. 2. Binary Frequency Shift Keying Signal.
BASK is very similar to Morse Code. The amplitude A of a carrier wave is varied in
order to send a binary 1 or 0.4 This type of modulation is very vulnerable to noise;
therefore it is less popular than BFSK or BPSK. Because it is such a simple method,
BASK is used primarily for optical fiber transmission (See Figure 3).
Fig. 3. Binary Amplitude Shift Keying Signal.
The QPSK and 8PSK signals are very similar to BPSK. Instead of sending binary
1 or 0, binary 00, 01, 10, and 11 can be sent for QPSK.4 Each set of binary digits
corresponds to a 0, π/2, π, or 3π/2 phase change, respectively (See Figure 4). Since
BPSK, QPSK and 8PSK signals are related by the fact that all three use phase changes to
transmit binary information, then their separation should be more difficult than signals that
utilize amplitude or frequency changes. This is the main reason that the second
experiment is more difficult than the first.
Fig. 4. Quadrature Phase Shift Keying Signal.
Similarly, the sets of binary digits 000, 001, 010, 011, 100, 101, 110 and 111 can be sent
for 8PSK.4 Each set of binary digits corresponds to a 0, π/4, π/2, 3π/4, π, 5π/4, 3π/2, or
7π/4 phase change, respectively (See Figure 5).
Fig. 5. 8 Phase Shift Keying Signal.
The signal 16-QAM is a combination of amplitude shift keying and phase shift keying;
therefore the sixteen four-bit sets 0000 through 1111 can be sent.4 Each set of binary digits
corresponds to a 0, π/8, π/4, 3π/8, π/2, 5π/8, 3π/4, 7π/8, π, 9π/8, 5π/4, 11π/8, 3π/2,
13π/8, 7π/4, or 15π/8 phase change, respectively. Each set of four phase changes is displayed at an
amplitude of 1, 2, 3, or 4, respectively (See Figure 6). See Appendix I for more
information about the amplitude, phase or frequency changes used in producing the
signals.
Fig. 6. 16-Quadrature Amplitude Modulation Signal.
Earlier it was mentioned that part one and two used different BPSK signals. In Figure 7
the two signals are placed side by side for comparison. Visual inspection reveals little
difference, and it will be shown in Chapter 3 that the extracted features are similar as well.
Fig. 7. Binary Phase Shift Keying Signal from internet on left, and Binary Phase Shift
Keying Signal from C computer program on right.
Wavelet Theory
Wavelet analysis is quickly becoming a standard for a variety of applications. It
has proven to be very useful because of its ability to analyze signals at different scales. In
this procedure the signal is decomposed as a linear superposition of the analyzing wavelet
at a variety of scales.5 This gives the user the flexibility to analyze any particular feature
of the signal desired. Wavelet analysis can be thought of as an advanced filtering device.
The reader will be spared the mathematical derivations and proofs of wavelets; instead the
Wavelet Transform will be defined and some of its properties will be explored.
The Fourier Transform (FT) is another tool that has been used for analyzing
signals, but for feature extraction the Fourier Transform provides less information about
the signals than the Wavelet Transform.6 This can be explained by considering the
procedure that the Fourier Transform uses to analyze a signal. The Fourier Transform
decomposes a signal as a superposition of sine and cosine functions.5 This transformed
signal then exists in what is called the frequency domain. Therefore the Fourier Transform
analyzes a signal for its frequency content. This is very useful when only the frequency
content of a signal is considered, but if transients are of interest then the Fourier
Transform is not effective. This is one of the main differences between the Fourier
Transform and the Wavelet Transform.
Each analyzing wavelet is localized in the time domain and in the frequency
domain. This means that all of the energy associated with the mother wavelet is
concentrated within a finite interval in the time domain and also in the frequency domain.
Figure 8 displays a plot of the one-dimensional Morlet Wavelet and its Fourier
Transform.7 The idea of localization is evident in the plot.
Fig. 8. 1-D Morlet Wavelet and FT of 1-D Morlet Wavelet
Localization is one property of wavelets. Before discussing other properties of wavelets,
the definition of a wavelet must be given. There are several requirements that must be met
before a function can be classified as a wavelet. These requirements can be reduced to the
following statements:8

1) The function ψ(t) and its Fourier Transform Ψ(Ω) must be square
integrable.

2) ψ(t) must meet the admissibility requirement

Cψ/2π = ∫ |Ψ(Ω)|² dΩ/|Ω| < ∞   (1.1)

Equation (1.1) implies that

Ψ(0) = 0,   (1.2)

and that ψ(t) is of zero mean as stated in Equation (1.3).

∫ ψ(t) dt = 0   (1.3)

In Equation (1.1), Cψ is a constant that results from the admissibility condition. The above
statements are necessary requirements for a function to be called a wavelet, but there are
other restrictions that can be placed upon the wavelet to achieve better results. For
example, the wavelet can be required to have a certain number of vanishing moments.
This requirement improves the ability of ψ(t) to detect singularities in the signal.
This research deals with three different types of one-dimensional analyzing
wavelets, the Mexican Hat Wavelet (See Figure 9), the Haar Wavelet (See Figure 10), and
the Morlet Wavelet (See Figure 11).
The Mexican Hat Wavelet is defined as:
ψ(t) = (1 - t²) exp(-t²/2)   (1.4)
Fig. 9. 1-D Mexican Hat Wavelet.
The Haar Wavelet is defined as:

ψ(t) = 1,  for t in [-0.5a, 0)   (1.5)
ψ(t) = -1,  for t in [0, 0.5a)
ψ(t) = 0,  elsewhere

Here "a" denotes the chosen scale of the wavelet.
Fig. 10. 1-D Haar Wavelet
The Morlet Wavelet is defined as:

ψ(t) = exp(iωt) exp(-t²/2) + A   (1.6)

Fig. 11. Real part of 1-D Morlet Wavelet at ω = 5.5.
A is called the correction term. It allows the Morlet Wavelet to meet the admissibility
requirement; however, if ω is chosen greater than 5 then the correction term is not needed.
The Wavelet Transform can be thought of as an advanced filtering device. Before the
Wavelet Transform is possible, the analyzing wavelet must be scaled and translated. The
result is the family of wavelets which is very controllable.
The wavelet ψ(t) is referred to as the analyzing wavelet or the mother wavelet.
Scaling and translating this mother wavelet produces the baby wavelet ψab(t):

ψab(t) = |a|^(-1/2) ψ((t - b)/a)   (1.7)
The Continuous Wavelet Transform (CWT) is defined as the inner product of the baby
wavelet and the signal to be analyzed:

CWTf(a,b) = <ψab(t) | s(t)>   (1.8)

= ∫ ψab(t)* s(t) dt   (1.9)
It is obvious that the result of the CWT is a function of the scale 'a' and the shift 'b'. If
ψ(t) is assumed to be a one-dimensional function, then the CWT will be a two-dimensional
function. This means that the CWT maps a one-dimensional function into a two-
dimensional function. This two-dimensional function exists in what is called Wavelet
Space. Wavelet Space exists because of several properties of the CWT as seen below:8
1) Linearity: The CWT is linear because of the linearity of the inner product.
2) Shift Property: If a function f(t) is shifted such that f'(t) = f(t - b'), then

CWTf'(a,b) = CWTf(a, b - b')   (1.10)
= a^(-1/2) ∫ ψ*((t - b)/a) f(t - b') dt   (1.11)
= a^(-1/2) ∫ ψ*((t' + b' - b)/a) f(t') dt'   (1.12)
= CWTf(a, b - b')   (1.13)

Property 2 simply states that a shift in the function f(t) yields a shift in the CWT,
CWTf(a,b).
3) Scaling Property: If the function f(t) is scaled such that f'(t) = f(t/s), then

CWTf'(a,b) = (as)^(-1/2) ∫ ψ*((t - b)/a) f(t/s) dt   (1.14)
= s(as)^(-1/2) ∫ ψ*((st' - b)/a) f(t') dt'   (1.15)
= s^(1/2) a^(-1/2) ∫ ψ*((st' - b)/a) f(t') dt'   (1.16)
= CWTf(a/s, b/s)   (1.17)

Property 3 states that energy is spread by a factor of s in both dimensions of
CWTf(a,b), or simply that scaling the function f(t) also scales the CWT of f(t).
4) Localization Property: Consider a Dirac pulse at time t0, δ(t - t0):

CWTδ(a,b) = a^(-1/2) ∫ ψ*((t - b)/a) δ(t - t0) dt   (1.18)
= a^(-1/2) ψ*((t0 - b)/a)   (1.19)

The CWT has sharp time localization at high frequencies; therefore at small scales the
CWT of the Dirac pulse is not constant in b but is a function localized at t0.
5) Energy Conservation: Consider a function f(t) and its CWTf(a,b):

∫ |f(t)|² dt = (1/Cψ) ∫∫ |CWTf(a,b)|² (da db)/a²   (1.20)

Here Cψ is a constant that results from the admissibility condition. Property 5 is similar to
Parseval's formula of the Fourier Transform. It simply states that the energy in the time
domain is equivalent to the energy in the wavelet domain, or that the wavelet transform
conserves energy. Also, partial energies can be computed by considering |CWTf(a,b)|²,
which is the energy density of the signal in position and scale. The partial integration of
this density in one variable gives the energy density in the remaining variable as seen
below:

E(b) = ∫ da/a² |CWTf(a,b)|²   (1.21)

E(a) = ∫ db |CWTf(a,b)|²   (1.22)
Equation (1.21) is the energy density as a function of shift, and equation (1.22) is the
energy density as a function of scale. The energy density as a function of shift expresses
the energy of the signal over all possible scales. Similarly the energy density as a function
of scale expresses the energy of the signal at all possible positions. The Wavelet
Transform and all of its properties affect the modulation patterns in such a way that a
Neural Network is capable of recognizing these signals. An introduction to Neural
Network technology is appropriate for better understanding of its ability to classify
modulation patterns.
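The energy densities of Equations (1.21) and (1.22) can likewise be approximated numerically from a CWT sampled on a grid of scales and shifts. A sketch under assumed conventions (scales a_i = amin + i*da, uniform shift step db; the grid layout and names are mine):

```c
/* Numerical sketch of the energy densities of Equations (1.21)
 * and (1.22) from a CWT sampled on an na-by-nb grid. */

/* E(b_j) ~ sum over scales of |CWT(a_i, b_j)|^2 * da / a_i^2 */
double energy_vs_shift(int na, int nb, double cwt[na][nb],
                       double amin, double da, int j)
{
    double e = 0.0;
    for (int i = 0; i < na; i++) {
        double a = amin + i * da;
        e += cwt[i][j] * cwt[i][j] * da / (a * a);
    }
    return e;
}

/* E(a_i) ~ sum over shifts of |CWT(a_i, b_j)|^2 * db */
double energy_vs_scale(int na, int nb, double cwt[na][nb],
                       double db, int i)
{
    double e = 0.0;
    for (int j = 0; j < nb; j++)
        e += cwt[i][j] * cwt[i][j] * db;
    return e;
}
```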
Neural Network Theory
An artificial Neural Network is an information system that processes data or
stimuli in the same manner as biological Neural Networks. Once data or stimuli are
encountered, the neuron makes a decision to respond or not to respond. In an artificial
Neural Network a neuron is nothing more than a decision maker. This decision making
occurs in biological neurons as well as artificial neurons. Neural Networks are
mathematical models based on the assumptions that:9
1). Information processing occurs at many simple elements called neurons.
2). Signals are passed between neurons over connection links.
3). Each connection link has an associated weight, which in a typical Neural
Net multiplies the signal transmitted.
4). Each neuron applies an activation function (usually nonlinear) to its net
input to determine its output signal.
Neural Networks are characterized by:
1). Pattern of connection between the neurons (architecture).
2). Method of determining the weights on the connection (training).
3). Activation function.
Consider the neuron in Figure 12; it has three input nodes xi and one output node Y.
Each input node is directly connected to the output node, and each connection has a
weight wi associated with it. This particular single-layer net is feed forward, which
means that information flows from the input units to the output units in a forward
direction; however the flow of information is not necessarily limited to the forward
direction. In a Recurrent Net information may flow from node A to node B then back to
node A again. The flow of information has a magnitude associated with it, which is called
a weight. The weights are very important, because if a large value is given for w1, but
very small values are given for w2 and w3, then the output of the activation function is
going to be mostly dependent upon node x1. This means that the final decision of the
neuron will be based almost entirely upon node x1. This process of assigning values for
the weights is called training.
Fig. 12. Artificial Neuron.
The artificial neuron depicted in Figure 12 is capable of solving some simple problems.
The equations below model the behavior of this neuron.
Yin = w1x1 + w2x2 + w3x3   (1.23)

Yout = f(Yin), where f(x) = 1/(1 + e^(-x))   (1.24)
Here f(x) is the activation function of the Neural Network, which is called a sigmoid
function. This function is one of the most important concepts of Neural Network
technology. The type of activation function used limits the types of
problems the network can solve.10
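Equations (1.23) and (1.24) can be sketched directly in code. The function name and example weights below are illustrative assumptions, not part of the thesis:

```python
import math

def neuron_output(x, w):
    """Single artificial neuron per equations (1.23)-(1.24):
    a weighted sum of the inputs passed through a sigmoid."""
    y_in = sum(wi * xi for wi, xi in zip(w, x))   # Y_in = w1*x1 + w2*x2 + w3*x3
    return 1.0 / (1.0 + math.exp(-y_in))          # f(x) = 1/(1 + e^(-x))
```

With weights such as w = (10, 0.1, 0.1) the output is dominated by the first input, which is exactly the situation described above for w1 much larger than w2 and w3.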
Fig. 13. Network Containing 1 Hidden Node.
The network in Figure 13 contains a node in between the input and output nodes and a
nonlinear activation function f(x). This node denoted by Y is called a hidden node. A
hidden node allows the neural network to solve more problems than a network without a
hidden node such as the one illustrated in Figure 12. One of the unfavorable features of a
Neural Network with a hidden node is that it is harder to train than a network without a
hidden node. Neural Networks can train the weights associated with each node using
various methods. Two methods of training are supervised training and unsupervised
training. In supervised training, a sequence of training vectors is presented to the
classifier, then the weights are adjusted according to a learning algorithm. From this
example, the neural network would be trained to classify an input vector as being in or
not being in the same class as one of the training vectors. Unsupervised training occurs
when a sequence of input vectors is presented to the network, but no training data is
specified. The net then sets its weights such that similar input vectors are classified
together. Then the net produces an exemplar vector which is representative of the entire
class or cluster. In this section the general theory of Neural Network technology is given.
Almost any network can be produced by taking the networks from Figure 12 and Figure
13 and making multiple connections. An understanding of how each neuron works gives
an understanding of the network as a whole. Two of the networks used in this research
are feed forward, fully connected networks (The Multi-Layer Perceptron and the K-
Nearest Neighbor), and the other is a recurrent, fully connected network (the Fuzzy
Artmap). All three of the networks in this study used supervised training. These three
classifiers were chosen because their methods of classification are very dissimilar. The K-
Nearest Neighbor (KNN) classifier uses Euclidean distances from the training set to the
test set to determine the correct classification. The Multi-Layer Perceptron (MLP)
classifier uses the back propagation algorithm to train its weights to achieve optimal
classification. Finally the Fuzzy Artmap classifier uses match tracking to govern the
performance of the net. The diversity of these classifiers should prove advantageous for
comparing and contrasting the results. More details about each individual network will be
given later in Chapter 2.
Matlab Implementation of the CWT
The CWT was used to analyze the six modulation patterns mentioned in Chapter 1.
The CWT is well suited for analyzing these six modulation types, because it is sensitive to
any change in phase, frequency or amplitude of the signal.
Matlab programs were written to calculate the CWT (See Appendix II). The logic
used in constructing the algorithm is given here. The received signal, r(t) is the sum of
the original signal and the noise.
r(t) = s(t) + n(t) (1.25)
ψ_ab(t) = a^(-1/2) ψ((t - b)/a) (1.26)
Here ψ_ab is the scaled and translated mother wavelet. The Continuous Wavelet Transform
is the inner product of the received signal and the scaled, translated mother wavelet:
CWT = <ψ_ab(t) | r(t)> (1.27)
CWT = ∫ a^(-1/2) ψ*((t - b)/a) r(t) dt (1.28)
For this research the CWT is a function of shift only, therefore the scale a is taken out of
the integral and held constant:
CWT = a^(-1/2) ∫ ψ*((t - b)/a) r(t) dt (1.29)
CWT = a^(1/2) ∫ Ψ*(aω) R(ω) e^(jωb) dω (1.30)
Equation (1.30) is obtained by taking the Fourier Transform of equation (1.29)
(justification follows from Parseval's Theorem). To simplify the calculation even further
the following substitution is made: P(aω) = Ψ*(aω) R(ω).
CWT = a^(1/2) ∫ P(aω) e^(jωb) dω (1.31)
Equation (1.31) reduces to the inverse Fourier Transform:
CWT = p(a, b) (1.32)
Taking the inverse Fourier Transform of equation (1.31) yields the CWT as a function of
scale (a) and shift (b); however, for this research only one particular scale of the CWT was
considered. Matlab was used to calculate the FFT (Fast Fourier Transform) of the Mother
Wavelet and the signal. This process yields a fast calculation of the Wavelet Transform
using the power of the FFT and the IFFT (Inverse Fast Fourier Transform).
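The FFT-based procedure above can be sketched in Python with NumPy standing in for the original Matlab code. The function names and the real-valued Morlet-style wavelet below are illustrative assumptions, not the thesis's programs:

```python
import numpy as np

def morlet(t, w0=5.0):
    """Real-valued Morlet-style wavelet (an illustrative choice)."""
    return np.cos(w0 * t) * np.exp(-t**2 / 2.0)

def cwt_fixed_scale(r, wavelet, a):
    """CWT at one fixed scale a, computed via the FFT and IFFT as in
    equations (1.29)-(1.32): multiply the spectra of the scaled wavelet
    and the received signal, then inverse-transform. The result is a
    function of shift b only."""
    n = len(r)
    t = np.arange(n) - n // 2              # centered time axis (assumed)
    psi = wavelet(t / a) / np.sqrt(a)      # a^(-1/2) * psi(t/a)
    # Circular correlation via the convolution theorem
    coeffs = np.fft.ifft(np.conj(np.fft.fft(psi)) * np.fft.fft(r))
    return np.abs(coeffs)                  # magnitude used as the feature vector
```

Choosing the scale a then amounts to picking the value for which the output shows narrow peaks at the phase, frequency, or amplitude transitions.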
By definition the CWT is a convolution of the analyzing wavelet and the signal.
This means that the wavelet transform will be the decomposition of the signal as a linear
superposition of the analyzing wavelet. This makes the scale factor "a" very important. If
"a" is chosen large, then the scale at which the wavelet analyzes the signal will be large;
however, if "a" is chosen small, then the analyzing wavelet will analyze the signal on a small
scale. The scale value is chosen manually such that narrow peaks arise at the points of
phase change, frequency change or amplitude change. Scale analysis is the main reason
that the CWT was used for feature extraction. Each modulation pattern in question has
been purposely altered to transmit data, and the CWT directly zooms in on these
alterations.
CHAPTER 2
RESEARCH METHODOLOGY AND DESIGN
This thesis presents a procedure for establishing an Automatic Modulation
Identification (AMI) Algorithm. This algorithm is capable of simulating a simple surveillance
system or a simple wireless modem. The scenario for the surveillance system could be a
ground unit that is capable of receiving signals from the air, and the purpose of this
ground unit could be to determine the modulation type of the received signal for
comparison to a database of known hostile modulation types. The scenario for the
wireless modem could be two parties that are interested in one-way communication. A
modem could be used to send the binary data through the air to another modem. Once the
modulated data gets transmitted through the channel to the receiver, the data must be
demodulated, but before the data can be demodulated the method of modulation must be
known. These are just two of many applications that the AMI can simulate.
Design Issues
The first design issue was to gather a database of noiseless signals. Initially there
was no data available to begin the research. This was a problem, because without data
there was no way of generating results for the AMI algorithm.
The second design issue was modeling the noise channel. In each application,
communication through the air is mentioned, therefore the simulated channel was chosen
to be Additive White Gaussian Noise (AWGN). AWGN best models the random
processes that take place in the air. Also AWGN can be easily simulated in Matlab.
The third design issue was choosing the best analyzing wavelet for the AMI.
There exist many different types of analyzing wavelets, each with its own properties.
Examples would be, the Haar Wavelet, the Morlet Wavelet, the Mexican Hat Wavelet,
and the Daubechies Wavelets. Even though there were many analyzing wavelets to
choose from, several main ideas had to be considered: (1) the analyzing wavelet had to be
well localized in time and frequency, (2) the analyzing wavelet had to produce desired
results, and (3) the analyzing wavelet had to have an algorithm associated with it that was
not computationally intense.11
The final design issue dealt with selecting the types of artificial classifiers to use in
the AMI. This problem was similar to the dilemma with the analyzing wavelets. There are
many different types of artificial classifiers, each with its own properties. Whichever
classifier was used had to have the ability to properly classify the corrupted signals, and it had
to be able to accept one-dimensional signals.
Approach For Resolving Design Issues
To resolve the design issues an intense literature search was done in the areas of
signals/systems, noise channels, wavelets, and artificial classifiers. To solve the problem
associated with the signals, a decision was made to manually produce general modulation
signals, and to continue to search for more signals. The C programming language was
chosen to simulate three general modulation types: Binary Phase Shift Keying, Binary
Amplitude Shift Keying, and Binary Frequency Shift Keying.
The issue corresponding to the wavelets was resolved by choosing three wavelets
for analyzing the signals: the Haar Wavelet, the Morlet Wavelet, and the Mexican Hat
Wavelet. Each of these wavelets has good localization properties and a fast algorithm
(meaning that the number of mathematical operations is not of the order n^2, where n is
the number of samples in the signal) associated with it.
The classifier problem was resolved by choosing three classifiers for identification
of the signals: the Multi-Layer Perceptron, the K-Nearest Neighbor, and the Fuzzy
Artmap. Each of these classifiers is compatible with one-dimensional data and very
capable of classifying the signals.
The system design was as follows: (1) obtain a noiseless signal for each
modulation pattern, (2) construct signals that reflected corruption by Additive White
Gaussian Noise, (3) extract characterizing features from each signal (corrupted signal and
uncorrupted signal) using the CWT, (4) use features taken from uncorrupted signals to
train artificial classifiers, (5) use features taken from corrupted signals to test the artificial
classifiers, and (6) use the confusion matrix taken from the artificial classifiers to display results in the
form of graphs.
Additive White Gaussian Noise And Varied Carrier Frequency
Additive White Gaussian Noise (AWGN) was used to corrupt each modulated pattern.
Each classifier was trained with noiseless feature vectors, and each test set contained equal
amounts of AWGN. The noise model was assumed to have a zero mean and a finite
power. AWGN was simulated by using the random number function in Matlab (This
function produces random numbers that are normally distributed, of zero mean and of unit
variance.). A Matlab program was written that produces the desired amount of noise by
changing the variance of the randomly distributed numbers. As a safety feature the
program tests the random numbers to see if they indeed possess the desired variance.
The Carrier-to-Noise Ratio (CNR) is defined as follows:12
CNR = 10 log10(Pc/Pn) (2.1)
In this equation Pc is the average power of the carrier and Pn is the average power of the
noise. The average power of the carrier is computed from its power spectrum, and the
average power of the noise is computed from its variance.12 In essence the power of the
noise is chosen such that a desired Carrier to Noise ratio (CNR) is produced. Once the
desired noise level is obtained the characterizing features are extracted from each signal.
See Appendix III for a demonstration of the noise channel.
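The CNR bookkeeping of equation (2.1) can be sketched as follows, with NumPy in place of Matlab; the function name is a hypothetical stand-in for the thesis's program:

```python
import numpy as np

def cnr_db(carrier, noise):
    """CNR = 10 log10(Pc/Pn), per equation (2.1): carrier power taken as
    the mean squared sample value, noise power as the sample variance."""
    pc = np.mean(np.asarray(carrier) ** 2)
    pn = np.var(np.asarray(noise))
    return 10.0 * np.log10(pc / pn)
```

A unit-amplitude carrier has Pc = 0.5, so noise of variance 0.05 gives roughly a 10 dB CNR; checking the achieved noise variance this way mirrors the "safety feature" described above.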
Also, as a subset of experiment one, the carrier frequencies of the BPSK, BASK and
BFSK test patterns are altered to see if the classifiers will detect the change. This means
the classifiers are trained on signals with a carrier frequency ω and tested upon signals
with carrier frequencies ω/2 and ω/3. The results from this experiment are discussed in
Chapter 4.
Feature Extraction
The different modulation types were characterized by extracted features. These
features were used by the classifier to identify the modulation pattern. The CWT at a
particular scale a0 was performed on each signal. The scale was chosen such that very
distinct peaks arose from each signal. These peaks directly corresponded to either a
change in frequency, amplitude or phase of the carrier. This method is not the traditional
CWT, because for a 1-dimensional CWT the result would be a function of two variables
(scale and shift); however, these peaks are a function of shift only. The CWT at a
particular scale a0 is simply a linear filtering operation. This particular scale a0
characterizes each modulation pattern. The use of the CWT for feature extraction was
motivated by the research done by K.C. Ho, W. Prokopiw and Y.T. Chan; however the
methodology of this project is unique, because of the use of Neural Networks and various
wavelets.
Three mother wavelets were used to extract information from the different
modulation patterns. Haar Wavelet analysis will be discussed first, next Mexican Hat
Wavelet analysis will be discussed, then Morlet Wavelet analysis will be discussed. The
Haar Wavelet analysis of BPSK resulted in double peaks at the points where phase
changes occurred (See Figure 14).
Fig. 14. Haar Feature Vector For BPSK.
Haar Wavelet analysis of BFSK resulted in single peaks at the points where frequency
changes occurred (See Figure 15). Close visual inspection shows the Wavelet Transform
of the carrier wave, which is represented by the smaller peaks.
Fig. 15. Haar Features For BFSK.
Haar Wavelet analysis of BASK resulted in two broad peaks separated into four smaller
peaks at the points where the amplitude was nonzero, and the CWT yielded a value of
zero where the signal's amplitude was zero (See Figure 16).
Fig. 16. Haar Features For BASK.
Mexican Hat Wavelet analysis of BFSK (See Figure 17) yielded double peaks and
Mexican Hat Wavelet analysis of BPSK (See Figure 18) yielded single peaks.
Fig. 17. Mexican Hat Features For BFSK.
Fig. 18. Mexican Hat Features For BPSK.
Mexican Hat wavelet analysis of BASK produced two broad peaks that were separated
into three smaller peaks at the nonzero amplitude points, and zero at the zero amplitude
points of the signal (See Figure 19). The features extracted from the signals using the Haar
Wavelet, and the features extracted from the signals using the Mexican Hat Wavelet are
somewhat visually dissimilar.
Fig. 19. Mexican Hat Features for BASK.
Morlet Wavelet analysis produced results that were similar to Mexican Hat analysis for
BASK (See Figure 20).
Fig. 20. Morlet Features For BASK.
For BFSK (See Figure 21) and BPSK (See Figure 22) the Morlet Wavelet sensed the
frequency and phase changes by double peaks. Note that Figure 22 is the Wavelet
Transform of the signal downloaded from the internet and Figure 23 is the Wavelet
Transform of the signal created by the C program mentioned earlier. Both graphs display
double peaks that mark phase changes; however Figure 22 has three peaks and Figure 23
has four. This difference comes from the fact that the carrier frequencies of the two signals
are not the same.
Fig. 21. Morlet Features For BFSK.
Fig. 22. Morlet Features For BPSK From The Internet.
Fig. 23. Morlet Features For BPSK From C Program.
The chosen feature vectors are distinct for each modulation pattern; therefore it is possible
to visually classify each pattern according to its feature vector; however, visual comparison
is not the method proposed for this thesis. Morlet Wavelet Analysis for the other signals
that were obtained from the internet is also illustrated. Morlet Wavelet Analysis of
PSK8 is given in Figure 24. Morlet Wavelet analysis of 16-QAM is given in Figure 25.
Finally, Morlet Wavelet analysis of QPSK is given in Figure 26.
Fig. 24. Morlet Features For PSK8.
Fig. 25. Morlet Features For 16-QAM.
Fig. 26. Morlet Features For QPSK.
Classifiers
Two of the classifiers used in this research were part of a classification software
packet called LNKnet.13 The other classifier, the Fuzzy Artmap was supplied by Dr.
Robert Baxter of M.I.T. Lincoln Laboratory. LNKnet is a collection of twenty classifiers
that have a graphical user interface, which makes them easy to use. The acronym LNK is
attributed to the three principal programmers, Richard Lippman, Dave Nation and Linda
Kukolich, who are all employees at M.I.T. Lincoln Laboratory. The Multi-Layer
Perceptron (MLP), the K Nearest Neighbor (KNN), and the Fuzzy Artmap were used to
identify the feature vectors.
The K-Nearest Neighbor is a simple classifier that stores all training data that is
given to it. Then it measures the Euclidean distance from the K stored patterns to the test
data closest to it. The KNN then takes a vote among the K neighbors and the class that
occurs most is assigned to the test pattern. The KNN is a fully connected feed forward
network that uses a threshold as the activation function for each neuron (See Figure 27).
[Figure: feed-forward network with 128 inputs, 3 hidden nodes, and 3 outputs]
Fig. 27. Diagram Of Nearest Neighbor Classifier.
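The distance-and-vote rule just described can be sketched as follows; this is an illustrative implementation, not LNKnet's code, and the function name is an assumption:

```python
import numpy as np
from collections import Counter

def knn_classify(train_patterns, train_labels, test_pattern, k=1):
    """Euclidean distances from the stored training patterns to the test
    pattern, then a majority vote among the k nearest neighbors."""
    d = np.linalg.norm(np.asarray(train_patterns) - np.asarray(test_pattern),
                       axis=1)
    nearest = np.argsort(d)[:k]
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]   # class that occurs most often
```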
The Multi-Layer Perceptron is a feed forward net with one or more layers of nodes
between the input and output nodes. Until recently this type of net was not used for lack
of efficient training algorithms. This network can be used to solve complex problems, but
it cannot be proven that this network will converge to optimal weight values. The MLP is
a fully connected network that uses the sigmoid activation function (See Figure 28). The
training mechanism of this network is the back propagation algorithm. To optimize the
weights it performs a gradient descent algorithm which minimizes the error of the output
according to some cost function.
[Figure: feed-forward network with 128 inputs, 25 hidden nodes, and 3 outputs]
Fig. 28. Diagram Of Multi-Layer Perceptron Classifier.
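One back-propagation update for a single-hidden-layer network can be sketched as below. This is a generic gradient-descent step with sigmoid activations and a squared-error cost, offered as an assumed sketch rather than LNKnet's actual implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, target, w1, w2, lr=0.5):
    """One gradient-descent weight update via back propagation for a
    single training example (squared-error cost, sigmoid activations)."""
    h = sigmoid(w1 @ x)                        # hidden-layer activations
    y = sigmoid(w2 @ h)                        # output-layer activations
    delta_out = (y - target) * y * (1 - y)     # error gradient at the output
    delta_hid = (w2.T @ delta_out) * h * (1 - h)
    w2 = w2 - lr * np.outer(delta_out, h)      # descend the cost gradient
    w1 = w1 - lr * np.outer(delta_hid, x)
    return w1, w2
```

Repeating such steps over the training vectors drives the output error down, although, as noted above, convergence to optimal weight values cannot be guaranteed.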
The Fuzzy Artmap is a fully connected net that performs match tracking. Match
tracking allows continuous communication between the hidden layer and the output layer.
In a sense, the hidden layer takes feedback from the output layers to determine if the data
are being correctly classified. If the data are not being correctly classified then the weights
are changed to produce the desired classification (See Figure 29).
[Figure: network with 3 hidden nodes and 3 outputs]
Fig. 29. Diagram Of Fuzzy Artmap Classifier.
These three classifiers were chosen because their methods of classification are very
dissimilar. The KNN uses distances from the training set to the test set to determine the
correct classification. The MLP uses the back propagation algorithm to train its weights
to achieve optimal classification. Finally the Fuzzy Artmap uses match tracking to govern
the performance of the net. The diversity of these classifiers should prove advantageous
for comparing and contrasting the results. Appendix IV has some additional information
concerning the method that the Neural Networks use to classify signals.
CHAPTER 3
IMPLEMENTATION
The first stage in the implementation process was to link the subunits described in
Chapter 2 into one unit. Each subunit models a particular part of a simple
communications system, and linking the subunits completes the software implementation.
Automatic Modulation Identification Algorithm
The goal of this research is to introduce an Automatic Modulation Identification
Algorithm that is capable of identifying BPSK, BFSK, QPSK, 16-QAM, 8PSK and BASK
patterns in the presence of high levels of noise and varying carrier frequency (See Figure
30).
ALGORITHM FLOWCHART
[Flowchart: received signal r(t) = s(t) + n(t) → feature vector (CWT) → MLP/KNN/Artmap → identification; example noiseless training vectors for BFSK, BASK, QAM, PSK8]
Fig. 30. Flowchart Of Modulation Characterization Algorithm.
The first step in the algorithm is to simulate a received signal, which contains the
modulated carrier { A cos(ωt + φ) } and noise { n(t) }. For BPSK, the parameter φ is
switched from 0° to 180°, corresponding to either a binary 0 or 1, respectively. In BFSK,
ω is switched between π/8 and π/16, corresponding to a binary 0 or 1, respectively. To
accomplish BASK, A is switched between 0 and 1 to denote a binary 1 or 0, respectively.
The descriptions of QPSK, 8PSK, and QAM are given in Chapter 1. The AWGN is
simulated by a Matlab program that uses the normally distributed random number
generator. The program inputs a desired amount of noise in decibels. Then the program
calculates what the variance of the random numbers would have to be to produce the
desired amount of noise. The square root of this variance (standard deviation) is
multiplied by the random number generator to give it the desired variance. The calculation
is provided below:
10 log10(Pc/Pn) = #dB (3.1)
In this derivation Pn is replaced by x, because this is the quantity of interest.
log10(Pc/x) = #dB/10 (3.2)
log10(Pc) - log10(x) = #dB/10 (3.3)
log10(x) = -#dB/10 + log10(Pc) (3.4)
x = 10^(-#dB/10 + log10(Pc)) (3.5)
x = Pc · 10^(-#dB/10) (3.6)
σ² = x (3.7)
The value of x is the variance that will produce the desired carrier to noise ratio. The
square root of x is multiplied by the random number generator to change the unit
variance to the desired variance, thus giving the user control over the levels of noise added
to each carrier. Once the desired CNR has been obtained, the program creates fifty sets
of each modulation pattern, and each pattern is corrupted with the same amount of
AWGN. A file is created for these resulting patterns. The motivation for doing this is to
produce a test set that is representative of the noise model.
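The variance calculation of equations (3.1)-(3.7) can be sketched as follows, again with NumPy in place of Matlab and hypothetical function names:

```python
import numpy as np

def noise_variance_for_cnr(pc, cnr_db):
    """Solve 10*log10(Pc/x) = #dB for x, per equations (3.1)-(3.7):
    the variance x = Pc * 10^(-#dB/10) yields the desired CNR."""
    return pc * 10.0 ** (-cnr_db / 10.0)

def make_awgn(n, variance, rng):
    """Scale unit-variance normal samples by the standard deviation
    sqrt(x) to obtain AWGN of the desired variance."""
    return np.sqrt(variance) * rng.standard_normal(n)
```

For a unit-amplitude carrier (Pc = 0.5) and a 10 dB target, this gives a noise variance of 0.05, matching the worked CNR example earlier in the chapter.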
In the third step, the CWT at a particular scale is taken for each pattern in each set.
This corresponds to the inner product of the scaled-shifted mother wavelet and the
received signal. The results are CWT coefficients taken at a particular scale a0 as a
function of shift. These CWT coefficients are distinct for each modulation pattern;
therefore these coefficients are used as the feature vectors. The program writes the
coefficients to a file for later use by the classifier.
In the fourth step, the classifiers are trained with clean feature vectors, and tested
upon noisy test vectors of various CNRs. The performance of each classifier is recorded,
and their results are compared. For example, in the first part the training vectors consist
of three sets with one pattern in each set. Each pattern corresponds to BPSK, BFSK and
BASK. The test vectors consists of three sets with 50 patterns in each set. Each of these
patterns contain the same CNR, and it is the task of the classifier to place these patterns in
the correct class. There are four tests done corresponding to 20, 10, 5, or 0 dB CNR,
respectively. Each of these four tests is done ten times with random presentation orders, so
the results will be the average performance of the classifier. The classifiers are trained
with only three patterns to try to achieve maximum classification with the least amount of
training data. The same procedure is done for the second part of the experiment, except
the MLP is provided with fifty training vectors to increase performance.
In the final step the classifier computes the confusion matrix and assigns each
pattern in each set to a particular class. Then the errors that occurred with class confusion
are given. The percent of correct classification is recorded so comparisons can be made
among the different classifiers.
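The confusion-matrix bookkeeping of this final step can be sketched as below; the function names are illustrative, not taken from the classifier software:

```python
import numpy as np

def confusion_matrix(true_labels, predicted, classes):
    """Rows are true classes, columns are assigned classes; the diagonal
    holds the correctly classified patterns."""
    idx = {c: i for i, c in enumerate(classes)}
    m = np.zeros((len(classes), len(classes)), dtype=int)
    for t, p in zip(true_labels, predicted):
        m[idx[t], idx[p]] += 1
    return m

def percent_correct(m):
    """Percent of correct classification: diagonal total over all patterns."""
    return 100.0 * np.trace(m) / m.sum()
```

Off-diagonal entries show exactly which modulation types were confused with which, which is the comparison made among the classifiers below.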
The entire algorithm has been simulated in the Hewlett-Packard Visual
Engineering Environment (HP-VEE). This HP-VEE implementation is a working
demonstration of a surveillance system. The program reads a noisy modulation pattern in
from a file. Then the Wavelet Transform at a particular scale is taken. HP-VEE displays
both the signal and the Wavelet Transform of the signal. Then the square of the wavelet
coefficients or the Energy Density is applied to a threshold. This threshold serves as a
classifier for the modulation types. This classification technique is different from the
classification technique presented in Chapter 2. For this reason the HP-VEE
implementation is still under development, and a more detailed discussion will be held until
Chapter 5.
CHAPTER 4
RESULTS
An Automatic Modulation Identification (AMI) algorithm has been created using
the wavelet transform and various artificial classifiers. The AMI models a simple
communications system. All of the logic and mathematical derivation have been
presented. In theory the AMI should be able to correctly classify any modulation type
(assuming it is trained upon the signal prior to testing) at low levels of noise. It is
desirable for the AMI to work well in high levels of noise as well, otherwise the previously
mentioned algorithms would be more favorable. Indeed the results prove that the AMI
works well in high noise regions.
Results From Separation of BPSK/BFSK/BASK
Features extracted using the Mexican Hat coefficients were the first data sets fed
into the classifiers. The results for the Mexican Hat are displayed in Table 2 and Figure
31.
Fig. 31. Results From Mexican Hat Modulation Classifier For Constant ω.
Table 2. Mexican Hat Wavelet Modulation Classifier Results

CNR (dB)    MLP      KNN      ARTMAP
20          100      100      100
10          100      100      100
5           86.67    94       91.33
0           45.5     36.67    34.67
At a CNR of 20 dB all three classifiers identified the patterns 100% correctly. As the noise
level increased to 10 dB the classifiers remained constant in their performance. At 5 dB
CNR the MLP classified 86.7% of the patterns correctly, the KNN classified 94% of the
patterns correctly, and the Artmap classified 91.3% of the patterns correctly. The final
test for the Mexican Hat data was at 0 dB CNR; at this point there is just as much noise as
carrier power, and the classifiers' performance immediately dropped. The MLP was down
to 45.5% correct classification of the modulation patterns, the KNN was down to 36.7%
correct classification, and the Artmap was down to 34.7% correct classification. The
overall performance of the Mexican Hat Modulation Classifier (MHMC) was pleasing. It
performed poorest at 0 dB CNR, but this is exactly what is expected at such high levels of
noise (See Figure 31).
The next coefficients used for feature extraction were those of the Morlet Wavelet.
The results for the Morlet Wavelet are displayed in Table 3.
Fig. 32. Results From Morlet Modulation Classifier For Constant ω.
Table 3. Morlet Wavelet Modulation Classifier Results

CNR (dB)    MLP      KNN      ARTMAP
20          86.6     100      100
10          84.9     100      100
5           77.1     96.67    97.33
0           60.47    71.33    69.33
At a CNR of 20dB the MLP was able to recognize 86.6% of the modulation patterns
correctly. The KNN and the Artmap classified 100% of the modulation patterns correctly.
Once the noise level increased in the 10 dB case the MLP was down to about 84.9%
correct classification, and the KNN and the Artmap were still performing at 100%. With a
steady increase in noise at 5dB CNR the MLP was getting about 77.1% of the patterns
correct while the KNN and Artmap were sensing 96.7% correctly and 97.3% correctly
respectively. At 0 dB the MLP classified 60.5% of the patterns correctly, the KNN
classified 71.3%, and the Artmap classified 69.3% of the patterns correctly. The Morlet
Modulation Classifier (MMC) shows an inverse relationship between percent of correct
classification and increased noise level (See Figure 32). The MMC's performance
decreased with increasing noise level in the same manner as the Mexican Hat Modulation
Classifier, however the MMC was able to classify more patterns correctly at high noise
levels (97.33% correct at 5 dB CNR) than the MHMC.
The Haar Wavelet coefficients were used next for feature extraction. The results
for Haar Wavelet are displayed in Table 4.
Fig. 33. Results From Haar Modulation Classifier For Constant ω.
Table 4. Haar Wavelet Modulation Classifier Results

CNR (dB)    MLP      KNN      ARTMAP
20          75.67    100      66
10          61.93    65.33    64.67
5           49.47    52.67    62
0           47       50       41
The 20 dB case reflected 75.7% correct classification for the MLP and 100% correct
classification for the KNN. The Artmap was totally confused; it could only classify 66%
of the patterns correctly in the low noise region. At 10 dB the MLP could only pick out
61.9% of the patterns correctly while the KNN recognized 65.3% correct, and the Artmap
was down to 64.7% correct classification. A gradual increase in noise level at 5dB
dropped the MLP down to 49.5% correct classification, 52.7% correct classification for
the KNN, and 62.0% correct classification for the Artmap. At 0 dB the three classifiers
were performing poorly. The MLP recognized 47.0% of the patterns correctly while the
KNN recognized 50% of the patterns correctly, and the Artmap could only see 41.0% of
the patterns correctly.
From the results of the test, the Haar Wavelet would probably not make a good
analyzing wavelet for modulation characterization. At the low noise level (10 dB CNR)
the best that the Haar coefficients could be classified was 65.3%. The Haar Wavelet has
a blocky shape, and it probably would not be able to extract all of the curved features of
the modulation patterns. This could be one of the reasons that the Haar Modulation
Classifier (HMC) performed so poorly as compared to the other classifiers (See Figure
33).
The results began to differ a little more in the presence of varying noise level and
varying carrier frequency. The results for the Morlet Wavelet at carrier frequency ω/2 are
displayed in Table 5. Since the Morlet Wavelet outperformed the other wavelets at
constant carrier frequency, only the performance of the Morlet Wavelet at varied carrier
frequency was considered.
Fig. 34. Results From Modulation Classifier For ω/2.
Table 5. Morlet Wavelet Modulation Classifier Results for Carrier Frequency ω/2

CNR (dB)    MLP      KNN      ARTMAP
20          71.09    33.33    80
10          67.4     51.33    84
5           40.06    61.33    88
0           55       56       69.33
There was an immediate decrease in performance due to the change in carrier frequency
(The discussion will be limited to the performance of the Artmap, because it was the only
classifier that showed consistent performance during this part of the experiment). At 20
dB CNR and a carrier frequency change of ω/2 the Artmap was able to classify 80% of the
patterns correctly. As the noise level increased in the 10 dB CNR case, the Artmap
recognized 84.0% of the patterns correctly. At 5 dB, the Artmap was up to 88.0%
correct classification, and at 0 dB the percent of correct classification dropped to 69.3%
(See Figure 34). At 20 dB CNR and a carrier frequency change of ω/3 the Artmap
classified 80.7% of the pattern correctly. In the 10 dB case, the percent of correct
classification was 88.7%, and at 5 dB the percent of correct classification was down to
75.3%. At 0 dB CNR the Artmap was only seeing about 33.3% of the patterns correctly
(See Figure 35).
[Figure: Morlet Wavelet Classifier at Varied Carrier ω/3 — percent of correct classification versus Carrier to Noise Ratio (dB) for the MLP, KNN, and ARTMAP classifiers.]
Fig. 35. Results From Morlet Modulation Classifier for ω/3.
Table 6. Morlet Wavelet Modulation Classifier Results for ω/3
Morlet Wavelet (carrier frequency ω/3)

  dB     MLP      KNN      ARTMAP
  20     68.4     76.67    80.67
  10     63.67    92       88.67
   5     54.53    79.33    75.33
   0     38.47    35.33    33.33
Given that the Artmap remained at least 80% accurate at varied carrier frequency for the
higher CNRs, it can be concluded that the Artmap is capable of recognizing modulation
patterns in the presence of varying noise level and carrier frequency.
Results From Separation of BPSK/QPSK/8PSK/BFSK/QAM
The Morlet Modulation Identification Algorithm was extended to include more
signals. The signals used were Binary Phase Shift Keying, Quadrature Phase Shift Keying,
8 Phase Shift Keying, Binary Frequency Shift Keying and Quadrature Amplitude
Modulation. They were downloaded from an internet web site. The same message was
embedded in each of these signals. The carrier frequency is 2 Hz, the symbol rate is 1
symbol/second, and the sampling rate is 16 samples/second. The Matlab programs and
Neural Networks were modified to accommodate 5 signals instead of 3 signals. The same
Neural Networks mentioned in Chapter 2 were used to classify these signals, using the
Morlet Wavelet coefficients. The Wavelet Transform proved to be just as effective in the
second experiment as it was in the first at detecting transients in each signal. The
performance of the Morlet Modulation Classifier is given in Figure 36, and the
numerical results are displayed in Table 7.
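As an illustration of these signal parameters, the sketch below generates a BPSK waveform at a 2 Hz carrier, 1 symbol/second, and 16 samples/second. It is a Python reconstruction for illustration only (the thesis signals were downloaded files), and the bit sequence is an arbitrary stand-in for the embedded message.

```python
import numpy as np

def make_bpsk(bits, fc=2.0, symbol_rate=1.0, fs=16.0):
    """Generate a BPSK waveform: bit 0 -> phase 0, bit 1 -> phase pi."""
    samples_per_symbol = int(fs / symbol_rate)          # 16 samples per symbol
    t = np.arange(len(bits) * samples_per_symbol) / fs  # time axis in seconds
    # Hold each bit for one symbol interval and map {0, 1} -> {0, pi}
    phase = np.pi * np.repeat(np.asarray(bits), samples_per_symbol)
    return np.cos(2 * np.pi * fc * t + phase)

bits = [0, 1, 1, 0]   # arbitrary stand-in message
s = make_bpsk(bits)   # 64 samples: 4 symbols x 16 samples/symbol
```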
[Figure: Morlet Wavelet Modulation Classifier, Experiment #2 — percent of correct classification versus Carrier to Noise Ratio (dB) for the MLP, KNN, and ARTMAP classifiers.]
Fig. 36. Results From Second Experiment For Morlet Modulation Classifier.
Table 7. Morlet Modulation Classifier Results for Experiment #2
Experiment #2, Morlet Wavelet

  dB     MLP      KNN      ARTMAP
  20     100      100      100
  10     96.8     98.8     93.6
   5     55.6     60.8     56.8
   0     28.4     22       22.4
The results show that even though the second experiment is more difficult than the first,
the percent of correct classification at low CNR (that is, in the higher-noise region) is still
better than that of the previously mentioned algorithms.
CHAPTER 5
SUMMARY/CONCLUSIONS
This paper takes the concepts of wavelet analysis, signal processing and pattern
classification and intermingles them into a well-defined project. The theory of each
concept was introduced and discussed. Then an effective algorithm was devised to solve
the problem of Modulation Characterization. This algorithm was applied, and data was
collected. The data was analyzed and compared to other research projects of a similar
nature.
The CWT was used with a fixed scale in this research project. The 1-Dimensional
CWT maps a one-variable function into a two-variable function. This creates a large
amount of redundant data. This redundant data can be very helpful in trying to
characterize the behavior of a particular signal, but for some applications large amounts of
data are not desired. The CWT decomposes the signal at different scales, and depending
upon the application, the scale can be large or small. In this project the focus is upon the
small scales. At these scales the analyzing wavelet is more responsive to phase changes,
frequency changes and amplitude changes, and this is exactly what is needed. These CWT
coefficients taken at a small scale become feature vectors for direct input into a classifier.
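The fixed-scale feature extraction described here can be summarized in a few lines. The sketch below is a Python analogue of the FFT-based Matlab routine in Appendix II; the scale value and test pattern are illustrative, not the thesis data.

```python
import numpy as np

def morlet_cwt_fixed_scale(x, a, w0=5.6):
    """CWT of x at one scale a via FFT-based circular correlation with
    the real part of the Morlet wavelet exp(1j*w0*t)*exp(-t**2/2)."""
    n = len(x)
    t = np.linspace(-10, 10, n)
    psi = np.real(np.exp(1j * w0 * t / a) * np.exp(-(t / a) ** 2 / 2)) / np.sqrt(a)
    # Correlation in the frequency domain: conj(FFT(wavelet)) * FFT(signal)
    wt = np.real(np.fft.ifft(np.conj(np.fft.fft(psi)) * np.fft.fft(x)))
    return np.abs(np.fft.fftshift(wt))  # centered magnitude = one feature vector

x = np.cos(np.pi * np.linspace(-10, 10, 128))  # illustrative test pattern
features = morlet_cwt_fixed_scale(x, a=0.25)   # 128-point feature vector
```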
In today's technological society, artificial intelligence has proven to be very useful.
There are many hazardous tasks that human operators once had to perform. Now
machines do these jobs for less money and with no loss to human life. This has produced
a desire to automate just about anything that is harmful or tedious. Classifiers are a result
of the need to automate society. They are the nucleus of any type of pattern recognition
application. Human operators are normally used for discriminating between modulation
types either by using moment techniques, zero crossing techniques or some other manual
procedure. In electronic warfare there is a need to accurately distinguish between
waveforms in free space. If it is possible to teach a machine to do this job accurately then
the human operator could be replaced. The classifiers used in this project are trained with
only the minimum amount of information, and still manage to produce results that are
comparable to other methods.
This research exclusively deals with the problem of modulation characterization in
the presence of varying noise and varying carrier frequency. It has been shown that this
system at constant carrier frequency possesses an inverse relationship between increasing
noise level and percent of correct classification. This is quite logical, because at higher
noise levels the features that the classifiers are trained upon become distorted. Also, the
chosen wavelet scale is small; therefore, in the frequency domain the wavelet acts as a
bandpass filter that only allows certain frequencies to pass.14
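This bandpass behavior is easy to confirm numerically: the spectrum of the Morlet wavelet is a bump centered near ω0/a, so halving the scale roughly doubles the center frequency of the band that is passed. A small Python check (illustrative parameters, not the thesis code):

```python
import numpy as np

def morlet_peak_bin(a, n=1024, w0=5.6):
    """FFT bin at which the spectrum of a real Morlet wavelet at scale a peaks."""
    t = np.linspace(-10, 10, n)
    psi = np.real(np.exp(1j * w0 * t / a) * np.exp(-(t / a) ** 2 / 2)) / np.sqrt(a)
    return int(np.argmax(np.abs(np.fft.rfft(psi))))

# Halving the scale roughly doubles the passband's center frequency.
p_wide = morlet_peak_bin(1.0)
p_narrow = morlet_peak_bin(0.5)
```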
The second variable in this project was changing carrier frequency. Taking the
Artmap for example (mainly because it outperformed the other classifiers), even though
the carrier frequency was altered, the Artmap continued to do a good job of classifying the
signals. This can be explained by understanding how the classifiers operate. As noise is
added to a signal, the features extracted by the CWT keep a constant horizontal position,
but are distorted vertically. The power of the technique comes from the extraction of
peaks of different magnitudes and shapes, but during the addition of noise all of the peaks
become distorted; therefore, the only features that the classifiers can truly depend on are
the horizontal positions of the peaks. In varying the carrier frequency in the presence of
noise, the extracted peaks
become shifted and sometimes even broadened. This broadening of the peaks sometimes
makes the features more prominent. There will definitely be a reduction in percent of
correct classification from the fact that the peaks are being shifted (from the change in
carrier frequency); however, as seen in the results, it is not drastic (in the Morlet case
there was a change from 100% correct to 80% correct at 20 dB CNR). At the point
where the broadening peaks become unrecognizable, the percent of correct classification
begins to drop again. Taking all of this information into account, the system remains well
behaved even in the presence of varying noise level and carrier frequency (where the
carrier frequency does not deviate more than 1/3 of the original carrier frequency).
FUTURE WORK
This paper reflects the initial progress of an Automatic Modulation Identifier
centered around the use of pattern recognition and wavelet theory. This paper integrates
two topics in today's technology into one project. In a system such as this there are many
variables, and it is not sensible to change all possible variables at once. This work is
evidence that wavelet theory and pattern recognition can be combined to produce a
working system that is comparable to other models. At present the only variables that
were explored were increasing noise level and varying carrier frequency. The Wavelet
Transform has been implemented in HP-VEE (See Figure 37).
Fig. 37. HP-VEE Program For Wavelet Transform.
This program allows any algorithm to be expressed in the form of modules. This
language is lower-level than Matlab, and it will be beneficial to others in my research
group who want to implement the Wavelet Transform on an Altera Field Programmable
Gate Array computer board. The HP-VEE implementation is still being studied, which is
why it is placed in this section. Also, plans are underway for a Neural Network that can
accept imaginary input.
APPENDIX I
COMMUNICATION SIGNALS
The purpose of this section is to reiterate the procedure used to produce each signal.
Figure 38 displays the signals used in the first experiment. These signals were created
using C programs. The values of the phase, frequency and amplitude changes are given in
the figure. From close observation of the figure, the procedure used to create each signal
should be evident.
[Figure: Communication Signals, Experiment #1. Carrier: A cos(ωt + φ), with φ = π or φ = 0, ω = π/8 or ω = π/16, and A = 1 or A = 0.]
Fig. 38. Communication Signals From Experiment #1.
All of the signals used in the second experiment are Phase Shift Keying except for BFSK,
which was described in Figure 38. Phase Shift Keying signals can be described by "Signal
Constellations."12 A Signal Constellation is a representation of the Cartesian Graphing
Plane (See Figure 39). The axes are labeled with the angle of the phase change for
each signal; thus a binary digit or a set of binary digits corresponds to a particular phase
change.
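This mapping can be written out directly. The sketch below (Python, for illustration) encodes the QPSK angles listed in Figure 39 as points on the unit circle:

```python
import numpy as np

# QPSK: each pair of bits selects one phase angle from the constellation
QPSK_PHASES = {'00': 0.0, '01': np.pi / 2, '10': np.pi, '11': 3 * np.pi / 2}

def qpsk_symbols(bitstream):
    """Map a bit string to unit-amplitude constellation points exp(1j*phase)."""
    pairs = [bitstream[i:i + 2] for i in range(0, len(bitstream), 2)]
    return np.exp(1j * np.array([QPSK_PHASES[p] for p in pairs]))

syms = qpsk_symbols('00011011')   # four symbols: phases 0, pi/2, pi, 3*pi/2
```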
[Figure: Communication Signals, Experiment #2 — signal constellations for BPSK, QPSK, 8PSK, and 16-QAM, plus BFSK with ω = π/8 or ω = π/16. QPSK mapping: 00 = 0, 01 = π/2, 10 = π, 11 = 3π/2. The 8PSK/16-QAM panel lists amplitudes A = 1 through A = 4 and bit groups mapped to phase changes in multiples of π/8 (e.g. 0001 = π/8, 0011 = π/4, 1101 = 5π/8, 0010 = 7π/8).]
Fig. 39. Communication Signals From Experiment #2.
APPENDIX II
MATLAB PROGRAMS
This program produces the CWT of the 50 sets of the 3 (experiment 1) or 5
(experiment 2) signals.
Cwt5.m
function do_cwt(fni,fno,a,wltype,display)
% do_cwt(fni,fno,a,wltype,display)
% Reads noisy raw patterns and converts them to wavelet patterns
% Inputs: fni = noisy raw pattern file name
%         fno = output (wavelet pattern) file name
%         a = wavelet scale factors for each class
%         wltype = wavelet type ('Morlet', 'Mexican', or 'Haar')
%         display = if display > 0, the display-th patterns are plotted
if nargin < 4, error('Requires 4 or 5 arguments'), end
if nargin == 4, display = 0; end
ndim = 128;
npat = 250;
nclass = 5;
% Create wavelet filter
t1 = -10:20/(ndim-1):10;
y = zeros(nclass,ndim);
for c = 1:nclass
  t2 = t1/a(c);
  if strcmp(wltype,'Morlet')
    y(c,:) = real(1/sqrt(a(c)) * exp(1i.*5.6.*t2) .* exp(-t2.^2/2));
  elseif strcmp(wltype,'Mexican')
    y(c,:) = 1/sqrt(a(c)) .* (1-t2.^2) .* exp((-t2.^2)*.5);
  elseif strcmp(wltype,'Haar')
    for r = 1:ndim
      if t2(r) >= -.5*a(c) & t2(r) < 0
        y(c,r) = 1/sqrt(a(c));
      elseif t2(r) >= 0 & t2(r) < .5*a(c)
        y(c,r) = -1/sqrt(a(c));
      else
        y(c,r) = 0;
      end
    end
  else
    error('wltype not recognized');
  end
end
% Read the raw noisy patterns
fid = fopen(fni,'r');
M = fscanf(fid,'%g',[(ndim+1) npat]);
M = M';
status = fclose(fid);
C = M(:,1);          % C contains the class information for each input vector
x = M(:,2:ndim+1);   % x contains the input vectors
clear M;
% Compute and store the wavelet patterns
fid = fopen(fno,'w');
for g = 1:npat
  c = round(C(g))+1;
  multiply = conj(fft(y(c,:))) .* fft(x(g,:));
  wt = real(ifft(multiply));
  CWTstoring = g;   % progress marker: index of the pattern being processed
  mag = sqrt(fftshift(wt).^2);
  if display == g
    subplot(2,1,1), plot(t1,x(g,:));
    title('Modulation Pattern');
    subplot(2,1,2), plot(t1,mag);
    title('CWT of Modulation Pattern');
  end
  fprintf(fid,'%d ',c-1);
  for k = 1:ndim
    fprintf(fid,'%5.3f ',mag(k));
  end
  fprintf(fid,'\n');
end
fclose(fid);
This program produces the 50 sets of 3 or 5 signals that contain AWGN.
Noisemak5.m
function CNR_actual = do_cnr(CNR_desired,fno,seed)
% CNR_actual = do_cnr(CNR_desired,fno)
% Generates a pattern file for the LNKnet classifier
% from wavelet routines
% Inputs: CNR_desired = desired CNR
%         fno = output file name
if nargin < 2, error('Requires 2 or 3 arguments'), end
if nargin == 3, randn('seed',seed); end
global noise;
global variable;
% Loop over modtypes
noise = CNR_desired;
fnn = fno;
fpn = fopen(fnn,'w');
for modtype = 1:5
  if modtype == 1
    fnm = '8psk.dat';
  elseif modtype == 2
    fnm = 'qam16.dat';
  elseif modtype == 3
    fnm = 'qpsk.dat';
  elseif modtype == 4
    fnm = 'bpsk.dat';
  elseif modtype == 5
    fnm = 'bfsk.dat';
  end
  fpm = fopen(fnm,'r');
  x = fscanf(fpm,'%f'); status = fclose(fpm);
  for v = 1:128
    h(v) = x(v);
  end
  x = h;
  t = -10:20/127:10;
  for c = 1:50
    noisewriting = c
    A = max(x);
    % Noise variance from the desired CNR, with carrier power A^2/2:
    % CNR = 10*log10((A^2/2)/B)  =>  B = (A^2/2)/10^(CNR/10)
    B = (A^2/2) / 10^(noise/10);
    n = sqrt(B)*randn(1,128);
    s = x + n;
    fprintf(fpn,'%d ',modtype-1);
    for b = 1:128
      fprintf(fpn,'%5.3f ',s(b));
    end
    fprintf(fpn,'\n');
  end
end
fclose(fpn);
CNR_actual = -999;
This program performs multi-resolutional analysis of each modulation pattern to
help determine the scale that should be used to extract features.
Scales.m
% This program takes the wavelet transform of the signal for various values of a
clear
energy = 0;
k = input('Please enter the file name of your signal? ','s')
p = input('Press 1 to print, anything else not to. ')
fid = fopen(k,'r');
x = fscanf(fid,'%f');
status = fclose(fid);
for v = 1:128
  f(v) = x(v);
end
x = f; b = size(x)
l = b(1,2)
t1 = -10:20/(l-1):10;
t = -10:20/(l-1):10;
for q = 1:10
  a = 2^(-q+1)   % dyadic scales: 1, 1/2, 1/4, ...
  t2 = t1./a;
  y = real(1/sqrt(a).*exp(1i.*5.6.*t2).*exp(-t2.^2/2));
  multiply = conj(fft(y)).*fft(x);
  inverse = real(ifft(multiply));
  A(q,:) = inverse(:,:);
  mag = abs(A(q,:));
  figure(q), subplot(311), plot(t,x)
  title('Modulation Pattern')
  figure(q), subplot(312), plot(t,fftshift(mag.^2/a^3))
  title(['Energy Density at scale = ',num2str(a)]);
  figure(q), subplot(313), plot(t,fftshift(mag))
  title(['Magnitude of CWT at scale = ',num2str(a)]);
  if p == 1
    print -dps slotps
    !lpr -Php4 slotps
  end
end
B = real(fftshift(A));
APPENDIX III
SIMULATION OF GAUSSIAN NOISE CHANNEL
The noise channel simulation is a straightforward process. As seen in Figure 40, the
noiseless modulation pattern is added to the noise signal. This noise signal has a specific
CNR that was created using the method outlined in Chapter 3. The addition of the noise
signal and the noiseless modulation pattern yields the received signal. The received signal
is a corrupted version of the noiseless modulation pattern, and this is exactly what happens
when a signal is transmitted through a channel. Therefore the procedure used in Figure 40
is indeed a noise channel simulation.
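In code, the simulation amounts to scaling white Gaussian noise to the target CNR and adding it to the clean pattern. A Python sketch of this step (using the carrier-power convention A²/2 from Noisemak5.m in Appendix II; the test pattern is illustrative):

```python
import numpy as np

def add_awgn(x, cnr_db, rng=np.random.default_rng(0)):
    """Add white Gaussian noise at a given carrier-to-noise ratio, where
    CNR = 10*log10(carrier power / noise power) and carrier power = A^2/2."""
    A = np.max(np.abs(x))
    noise_var = (A ** 2 / 2) / 10 ** (cnr_db / 10)  # solve the CNR for variance
    return x + np.sqrt(noise_var) * rng.standard_normal(len(x))

t = np.linspace(-10, 10, 128)
clean = np.cos(np.pi * t)        # illustrative noiseless modulation pattern
received = add_awgn(clean, 20)   # received signal at 20 dB CNR
```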
Fig. 40. Simulation of Gaussian Noise Channel.
To go a step further, the Wavelet Transform of the received signal is provided. Note that
the Wavelet Transform reflects the fact that noise was added to the modulation pattern;
however, it is still possible to see the maxima which correspond to frequency changes.
This is the whole idea behind the research. The Neural Networks can identify these
maxima even though the signal is noisy, because the noise only corresponds to shifts in the
original signal.
[Figure: Wavelet Transform of (BFSK + 20 dB AWGN).]
Fig. 41. Feature Extraction From Noisy Signal.
APPENDIX IV
DECISION BOUNDARIES FOR NEURAL NETWORKS
Figure 42 is an example of a Decision Region that a Neural Network could
possibly form.7 The concept behind Neural Network Technology is to train the Neural
Network on a particular pattern, then test the Neural Network on corrupted versions of
the training pattern. As a result, the Neural Network will be able to classify all of the
patterns, some of the patterns or none of the patterns. The procedure that the Neural
Networks use to decide which patterns are in what class is outlined in Figure 42. The
Neural Network uses the training data to form boundaries in what is called a Decision
Region. Then the Neural Networks place the test patterns in that Decision Region. The
test patterns either fall within the boundary of a particular class, on the boundary of a
particular class or near the boundary of a particular class. Given this information, the
Neural Networks make the final decision about the class of the test pattern via their
training algorithms.
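The nearest-neighbor classifier makes this picture concrete: a test pattern is assigned the class of the closest training pattern, so the boundary between two decision regions is the set of points equidistant from the two classes. A Python sketch with made-up 2-D points (the thesis classifiers operate on 128-point wavelet feature vectors):

```python
import numpy as np

# Made-up 2-D training data forming two decision regions
train_x = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
train_y = np.array([0, 0, 1, 1])

def nn_classify(p):
    """1-NN decision rule: assign p the class of its nearest training point."""
    distances = np.linalg.norm(train_x - np.asarray(p), axis=1)
    return int(train_y[np.argmin(distances)])

inside_region_0 = nn_classify([0.1, 0.0])   # lands well inside class 0's region
inside_region_1 = nn_classify([1.1, 0.9])   # lands well inside class 1's region
```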
Fig. 42. Example of Decision Boundaries for Neural Networks.
BIBLIOGRAPHY
1. Ta, Nhi P. "A Wavelet Packet Approach To Radio Signal Modulation Classification."
Singapore ICCS (1994) 210-214.
2. Ho, K.C., W. Prokopiw and Y.T. Chan. "Modulation Identification By the Wavelet
Transform". IEEE. (July 1995) 886-890.
3. Hsue, S.Z., and S. S. Soliman. "Automatic Modulation Classification Using Zero
Crossing." IEEE Proceedings, vol. 137, No. 6, (Dec 1990) 459-464.
4. Pearson, John, Basic Communication Theory, pp 198-214; Prentice Hall; New Jersey
1982.
5. Cohen, Albert and Jelena Kovacevic. "Wavelets: The mathematical Background."
Proceedings of the IEEE vol. 84: No.4, (April 1996) 514-522.
6. Szu, Harold, Yingping Zang, Sun Mingui, and Ching-Chung Li. "Neural Network
Adaptive Digital Image Screen Halftoning (DISH) based on Wavelet Transform
Preprocessing." SPIE Wavelet Applications vol. 2242 (1994) 963-966.
7. Mallat, Stephane G. "Multiresolution Approximations and Wavelet Orthonormal Bases
of L2(R)." Transactions of The American Mathematical Society vol. 315: No. 1
(September 1989) 69-87.
8. Smith, Mark J.T., and Ali N. Akansu. Subband and Wavelet Transform Design and
Applications. Academic Publishers: 1996.
9. Fausett, Laurene. Fundamentals of Neural Networks. Englewood Cliffs, N.J.: Prentice-
Hall, 1994.
10. Lippman, Richard P. "An Introduction To Computing With Neural Nets." IEEE
ASSP Magazine (April 1987): 4-22.
11. Sweldens, Wim. "Wavelets: What Next?" Proceedings of the IEEE vol. 84: No. 4,
(April 1996) 680-685.
12. Wilson, Stephen G. Digital Modulation and Coding, pp 62-65; Prentice Hall; New
Jersey 1996.
13. L. Kukolich, R. Lippman, D. Nation, LNKnet User's Guide. pp45-57; M.I.T. Lincoln
Laboratory; 1994.
14. Guillemain, Philippe, and Richard Kronland-Martinet. "Characterization of Acoustic
Signals Through Continuous Linear Time-Frequency Representations."
Proceedings of the IEEE vol. 84: No. 4, (April 1996) 561-585.