Research Article
Information Entropy- and Average-Based High-Resolution Digital Storage Oscilloscope

Jun Jiang, Lianping Guo, Kuojun Yang, and Huiqing Pan

School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China

Correspondence should be addressed to Jun Jiang; jiangjun@uestc.edu.cn

Received 21 June 2014; Accepted 27 August 2014; Published 25 September 2014

Academic Editor: Guangming Xie

Copyright © 2014 Jun Jiang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Vertical resolution is an essential indicator of a digital storage oscilloscope (DSO), and the key to improving resolution is to increase the digitizing bits and lower the noise. Averaging is a typical method to improve the signal-to-noise ratio (SNR) and the effective number of bits (ENOB). Existing averaging algorithms are apt to be restricted by the repetitiveness of the signal and influenced by gross error in quantization, and therefore their effect on restricting noise and improving resolution is limited. An information entropy-based data fusion and average-based decimation filtering algorithm, which proceeds from improving the averaging algorithm and combines relevant theories of information entropy, is proposed in this paper to improve the resolution of the oscilloscope. For a single acquired signal, resolution is improved by eliminating gross error in quantization through the maximum entropy of the sample data, with further noise filtering via average-based decimation after data fusion of the effective sample data under the premise of oversampling. No subjective assumptions or constraints are added to the signal under test in the whole process, and there is no impact on the analog bandwidth of the oscilloscope at the actual sampling rate.

1. Introduction

Bandwidth, sampling rate, and storage depth are three core indicators used to evaluate the performance of a digital storage oscilloscope (DSO). In addition, there is another indicator of great significance that is often ignored, namely, vertical resolution [1] (hereinafter referred to as resolution). Higher resolution means a more refined waveform display and more precise signal measurement. The resolution of a DSO depends on the digitizing bits of its analog-digital converter (ADC) and on the noise and distortion level of the oscilloscope itself. Common oscilloscopes generally adopt an 8-bit or 12-bit ADC, and the key to achieving higher resolution is to lower the noise for the given digitizing bits of the ADC.

One method to reduce the effect of ADC-related system noise is to combine multiple ADCs in a parallel array. In such a system, the same analog signal is applied to all M ADCs and the outputs are digitally summed. The maximum signal-to-noise ratio (SNR) increases because the signal is correlated from channel to channel while the noise is not. In a parallel array, the SNR therefore increases by a factor of M, assuming the noise is uncorrelated from channel to channel [2, 3]. Furthermore, decorrelation techniques are proposed in [4] to reduce the effect of correlated sampling noise introduced by clock jitter in all parallel ADCs.

In [5, 6], another method, named the stacked ADC, is proposed to enhance the resolution of an ADC system. It uses multiple ADCs, each connected to the same radar IF through amplifier chains with different gain factors. After digital amplitude and phase equalization, the obtained SNR is much greater than that of an individual ADC. The two aforementioned methods consume more ADCs and result in a high cost.

Another method to enhance SNR is averaging, which can increase the resolution of a measurement without resorting to the cost and complexity of using multiple expensive ADCs [7–10]. There are two common averaging modes for a DSO, namely, successive capture averaging and successive sample averaging. The former averages the corresponding sampling points of the multiple waveforms acquired repeatedly, one by one, and the latter averages the multiple adjacent sampling points in a single waveform acquired once, one by one.



Both of them have certain limitations on restricting noise and improving resolution. Since successive capture averaging is based on repetitive acquisition of signals, it is limited by the repetitiveness of the signals under test. Even though successive sample averaging is based on a single acquisition of the signal under test, it sacrifices analog bandwidth while filtering noise. In addition, gross error in quantization, such as irrelevant noise and quantization error caused by data mismatch, is always included in the sampling data of an oscilloscope due to the influence of environmental disturbance, clock jitter, transmission delay, and so forth. If not handled first, the result of directly averaging the acquired sample data with gross error will deviate drastically from the actual signal.

Data fusion (also called information fusion) is a sample data processing method that is currently widely applied. In previous studies, a data fusion algorithm based on the estimation theory in batches from statistics was proposed in [11] and further improved in [12]. However, both algorithms somewhat subjectively assume that the sample data follow a normal distribution. The concept of entropy originates from physics, where it describes the disordered state of a thermodynamic system. Entropy reflects the statistical property of a system and has been introduced into numerous research fields. In 1948, the American mathematician Shannon introduced the entropy of thermodynamics into information theory and proposed information entropy [13] to measure the uncertainty degree of information. Information entropy provides a new approach for data fusion [14].

An information entropy-based data fusion and average-based decimation filtering algorithm, proceeding from improving the averaging algorithm and combining relevant theories of entropy, is proposed in this paper to improve the resolution of the oscilloscope effectively. Additional horizontal sampling information is used to achieve higher vertical resolution under the premise of oversampling. Firstly, compared with the traditional averaging algorithms, this algorithm works on a signal sample acquired once, and therefore it is subject to no restriction on the repetitiveness of the signal. Secondly, the resolution of the oscilloscope is improved by eliminating gross error in quantization, caused by noise and quantization error, through the maximum entropy of the sample data, with further noise filtering via average-based decimation after data fusion of the effective sample data before averaging. No subjective assumptions or constraints are added to the signal under test in the whole process, and there is no impact on the analog bandwidth of the oscilloscope at the actual sampling rate.

2. Common Averaging Theory

2.1. Successive Capture Averaging. Successive capture averaging is a basic denoising signal processing technology for the acquisition systems of most DSOs, which depends on repetitive triggering and acquisition of repetitive signals. Successive capture averaging averages the corresponding sampling points of the repetitively acquired waveforms one by one to form a single capture result, that is, a single output waveform.

Figure 1 shows the schematic diagram of averaging over N successive acquisitions.

The direct calculation method of successive capture averaging is to sum the corresponding sampling points in all acquisitions and then divide them by the number of acquisitions. The expression is given by

$$A_N(n+i) = \frac{1}{N}\sum_{j=1}^{N} x_j(n+i), \quad (1)$$

where i = 0, ±1, ±2, …; A_N(n+i) is the averaging result; N is the number of acquisitions to average; and x_j(n+i) represents the corresponding sampling point at moment n+i in the jth acquisition. Obviously, the average value cannot be obtained by this algorithm until all N acquisitions are completed. If N is too large, the throughput rate of the system is affected remarkably: for users, the delay caused by averaging is unacceptable, and for the oscilloscope, the huge amount of sample data exhausts the memory capacity rapidly.
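To make the procedure concrete, here is a minimal sketch (ours, not the authors' implementation) of direct successive capture averaging as in (1); it assumes the N triggered acquisitions are already aligned and stored row-wise in a NumPy array:

```python
import numpy as np

def successive_capture_average(acquisitions):
    """Direct successive capture averaging, cf. (1).

    acquisitions: 2-D array of shape (N, L), one triggered acquisition of
    L samples per row. Returns the point-by-point average over the N rows.
    """
    acquisitions = np.asarray(acquisitions, dtype=float)
    return acquisitions.mean(axis=0)

# Toy usage: a noisy repetitive ramp averaged over N = 64 captures.
rng = np.random.default_rng(0)
clean = np.linspace(0.0, 1.0, 100)
captures = clean + rng.normal(0.0, 0.05, size=(64, 100))
averaged = successive_capture_average(captures)
```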

Consequently, an improved exponential averaging algorithm is widely applied in successive capture averaging. The exponential averaging algorithm is shown in (2), which creates a new averaging result A_j(n+i) by utilizing a new sampling point x_j(n+i) at moment n+i and the last averaging result A_{j-1}(n+i):

$$A_j(n+i) = \frac{x_j(n+i) + (p-1)A_{j-1}(n+i)}{p} = \frac{x_j(n+i)}{p} + \frac{p-1}{p}A_{j-1}(n+i), \quad (2)$$

where i = 0, ±1, ±2, …. In (2), j represents the current number of acquisitions, A_j(n+i) is the new averaging result, A_{j-1}(n+i) is the last averaging result, x_j(n+i) is the new sampling point, and p is the weighting coefficient. Assuming that N is the total number of acquisitions to average, if j < N then p = j; otherwise p = N. Obviously, higher efficiency is achieved with the exponential averaging algorithm when calculating and storing the acquired and averaged waveforms. The exponential averaging algorithm can not only update the averaged result immediately after each acquisition and obtain the same final waveform as the direct averaging algorithm but can also lower the requirements on memory capacity significantly.
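A corresponding sketch of the exponential averaging recursion (2) (again our own illustration with hypothetical names); the weighting coefficient p tracks the acquisition index j until it saturates at N, so only a single waveform buffer has to be kept:

```python
import numpy as np

def exponential_average(prev_avg, new_capture, j, N):
    """One update of exponential capture averaging, cf. (2).

    prev_avg:    A_{j-1}, the running average after j-1 acquisitions
    new_capture: x_j, the newly acquired waveform
    j:           index of the current acquisition (1-based)
    N:           total number of acquisitions to average
    """
    p = j if j < N else N          # weighting coefficient as described in the text
    return new_capture / p + (p - 1) / p * prev_avg

# Usage: feed captures one at a time; memory holds only one averaged waveform.
# running = None
# for j, cap in enumerate(capture_stream, start=1):
#     running = cap if j == 1 else exponential_average(running, cap, j, N=256)
```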

No matter which algorithm is adopted, successive capture averaging can improve the vertical resolution of the signal. This improvement is measured in bits, which is a function of N (the number of acquisitions to average) [1]:

$$R_C = 0.5\log_2 N. \quad (3)$$

In (3), R_C is the improved resolution. Since the averaging algorithm is implemented with fixed-point mathematics in numerous oscilloscopes, and the maximum number of acquisitions to average generally does not exceed 8192 once real-time performance and memory capacity are taken into account, the maximum total resolution is limited to 14.5 bits. In fact, fixed-point mathematics, noise, and dithering error can lower the maximum resolution to a certain extent.
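As a quick arithmetic check of (3) with the figures quoted above (our own computation, not spelled out in this form in the paper): for an 8-bit ADC and the typical maximum of N = 8192 averaged acquisitions,

$$8 + R_C = 8 + 0.5\log_2 8192 = 8 + 6.5 = 14.5\ \text{bits},$$

which is where the 14.5-bit ceiling comes from.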


Figure 1: Averaging of N successive acquisitions.

Successive capture averaging can improve SNR, eliminate the noise unrelated to triggering, and improve vertical resolution. Meanwhile, successive capture averaging will not limit the waveform bandwidth under ideal circumstances, which is an obvious advantage compared with other signal processing technologies. However, being based on numerous triggerings and repetitive quantization of the signal, successive capture averaging is limited by the repetitiveness of the signal under test and consequently is only applicable to observing repetitive signals.

2.2. Successive Sample Averaging. Successive sample averaging, also known as boxcar filtering or moving average filtering, is another averaging algorithm widely applied in DSO acquisition systems. Since it is based on a single acquisition of the signal under test, successive sample averaging is not influenced by the repetitiveness of the signal itself. In this averaging process, each output sampling point represents the average value of N successive input sampling points [15], which is shown as follows:

$$A_N(n+i) = \frac{1}{N}\sum_{j=0}^{N-1} x_N(n+i+j), \quad (4)$$

where i = 0, ±1, ±2, … and N is the number of sampling points to average.

Figure 2 shows the averaging principle of 3 successive sampling points.

For successive sample averaging, the sampling rate before and after averaging is equal. It eliminates noise and improves the vertical resolution of the signal by reducing the bandwidth of the DSO. This improvement is measured in bits, which is a function of N (the number of sampling points to average) [1]:

$$R_S = 0.5\log_2 N, \quad (5)$$

where R_S is the improved resolution. In essence, successive sample averaging is a low-pass filter function, and its 3 dB bandwidth is deduced in [9]:

$$B_S = \frac{0.44\,S_R}{N}, \quad (6)$$

where B_S is the bandwidth and S_R is the sampling rate. This type of filter has an extremely sharp cut-off frequency and nulls signals whose period is an integral multiple of N/S_R. Noise elimination is almost in direct proportion to the square root of the number of sampling points to average; for instance, an average of 25 sampling points reduces the magnitude of high-frequency noise to 1/5 of its original value. For a DSO, successive sample averaging is often used to implement a variable-bandwidth function.

It can easily be seen from (6) that even though successive sample averaging is based on a single acquisition of the signal under test, it lowers the analog bandwidth at the actual sampling rate while filtering noise, and therefore its practicability is poor.
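For illustration, here is a minimal sketch (ours, not the paper's) of successive sample averaging as in (4) together with the 3 dB bandwidth estimate of (6); the output keeps the input sampling rate, which is exactly why the bandwidth shrinks by the factor N:

```python
import numpy as np

def successive_sample_average(x, N):
    """Boxcar / moving-average filter over N adjacent samples, cf. (4)."""
    kernel = np.ones(N) / N
    # 'same' keeps one output per input sample, i.e. the sampling rate is unchanged
    return np.convolve(x, kernel, mode="same")

def boxcar_bandwidth(sample_rate, N):
    """Approximate -3 dB bandwidth of the N-point boxcar filter, cf. (6)."""
    return 0.44 * sample_rate / N

# Example: N = 10 at 100 MSa/s leaves only about 4.4 MHz of bandwidth.
# print(boxcar_bandwidth(100e6, 10) / 1e6, "MHz")
```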

3. Resolution Improving Algorithm Based on Information Entropy- and Average-Based Decimation

3.1. Data Fusion Based on Information Entropy. The information entropy-based data fusion researched in this paper aims to eliminate gross error in quantization and then obtain precise measuring results under the condition of oversampling. Firstly, the acquisition system of the oscilloscope utilizes the maximum entropy method (MEM) to estimate the probability distribution of the discrete sample data acquired under oversampling and then calculates the measuring uncertainty of the sample from this probability distribution to determine a confidence interval. The acquisition system then discriminates gross error based on the confidence interval and finally determines the weight coefficients of fusion according to information entropy, achieving data fusion and obtaining a precise measured value of the signal under test, without any subjective assumptions or restrictions being added.


Figure 2: Averaging principle of 3 successive sampling points.

3.1.1. Distribution Estimation of Maximum Entropy. Entropy is an essential concept in thermodynamics. For isolated systems, entropy grows constantly, and the maximum entropy determines the steady state of the system. Similar conclusions can be found in information theory. In 1957, Jaynes proposed the maximum entropy principle, proceeding from the maximum information entropy: when deducing an unknown distribution from only part of the known information, we should select the probability distribution with the maximum entropy that conforms to the restriction conditions; any other selection would mean adding other restrictions or changing the original assumptions [16]. In other words, when only the sample under test is available and there is no sufficient reason to select a particular analytic distribution function, we can determine the form of the least tendentious measurand distribution through the maximum entropy [14].

Assume that oversampling is implemented by the acquisition system of the DSO at a high sampling rate N times the actual sampling rate, and N discrete sample data are obtained at the high sampling rate, that is, x_1, x_2, …, x_N. The sample sequence after eliminating repetitive sample data is x_1, x_2, …, x_{N′}, with corresponding probabilities of occurrence p(x_1), p(x_2), …, p(x_{N′}), and the probability distribution p(x_i) of the sample can be estimated through the maximum discrete entropy. Based on the information entropy defined by Shannon [13], the information entropy of the discrete random variable x is as follows [17]:

$$H(x) = -k\sum_{i=1}^{N'} p(x_i)\ln p(x_i), \quad (7)$$

where p(x_i) denotes the probability distribution of sample x_i to be estimated and meets the restriction conditions below:

$$\text{s.t.}\quad \sum_{i=1}^{N'} p(x_i) = 1,\qquad p(x_i) \ge 0\ \ (i = 1, 2, \ldots, N'),$$
$$\sum_{i=1}^{N'} p(x_i)\,g_j(x_i) = E(g_j)\ \ (j = 1, 2, \ldots, M). \quad (8)$$

In (8), g_j(x_i) (j = 1, 2, …, M) is the statistical moment function of order j, and E(g_j) is the expected value of g_j(x_i). The Lagrange multiplier method can be used to solve this problem. Since k is a positive constant, take k = 1 for convenience and constitute the Lagrangian function L(x, λ), as shown in

$$L(x,\lambda) = -\sum_{i=1}^{N'} p(x_i)\ln p(x_i) + (\lambda_0 + 1)\left[\sum_{i=1}^{N'} p(x_i) - 1\right] + \sum_{j=1}^{M}\lambda_j\left[\sum_{i=1}^{N'} p(x_i)\,g_j(x_i) - E(g_j)\right], \quad (9)$$

where λ_j (j = 0, 1, …, M) are the Lagrangian coefficients. Taking partial derivatives with respect to p(x_i) and λ_j, respectively, gives the equation set

$$\frac{\partial L(x_i,\lambda_j)}{\partial p(x_i)} = 0,\quad i = 1, 2, \ldots, N';\qquad \frac{\partial L(x_i,\lambda_j)}{\partial \lambda_j} = 0,\quad j = 0, 1, 2, \ldots, M. \quad (10)$$


Solving this equation set, the maximum entropy probability distribution function is

$$p(x_i) = \exp\left[\lambda_0 + \sum_{j=1}^{M}\lambda_j g_j(x_i)\right], \quad (11)$$

and the corresponding maximum entropy can be given by

$$H(x)_{\max} = -\lambda_0 - \sum_{j=1}^{M}\lambda_j E(g_j). \quad (12)$$

For given sample data, the expectation and variance of the sample sequence can be chosen as the moment functions; therefore, distribution estimation based on maximum entropy estimates the probability distribution according to the entropy of the discrete random variable and achieves the maximum entropy H(x)_max by adjusting the probability distribution model p(x_i) while preserving the statistical properties of the sample [16].
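One possible numerical realization of this estimation step is sketched below (our illustration, not the authors' code): the constraint conditions behind (10)–(12) are handed to a generic root finder, with the sample mean and variance used as the two moment constraints g_1 and g_2:

```python
import numpy as np
from scipy.optimize import fsolve

def max_entropy_distribution(samples):
    """Maximum-entropy estimate of the discrete distribution p(x_i), cf. (7)-(12).

    The moment constraints are the sample mean and variance, so the model has
    the form of (11): p(x_i) = exp(l0 + l1*x_i + l2*(x_i - mu)**2).
    Returns the distinct values x, their probabilities p, the Lagrangian
    coefficients lam = (l0, l1, l2), and the maximum entropy H_max.
    """
    samples = np.asarray(samples, dtype=float)
    mu = samples.mean()
    var = samples.var(ddof=1)            # sample variance, as used later in (22)
    x = np.unique(samples)               # distinct sample values x_1 ... x_N'
    g1, g2 = x, (x - mu) ** 2            # moment functions g_1 and g_2

    def residuals(lam):
        p = np.exp(lam[0] + lam[1] * g1 + lam[2] * g2)
        return [p.sum() - 1.0,           # normalization constraint of (8)
                (p * g1).sum() - mu,     # first-moment constraint
                (p * g2).sum() - var]    # second-moment constraint

    lam = fsolve(residuals, x0=[-np.log(len(x)), 0.0, 0.0])
    p = np.exp(lam[0] + lam[1] * g1 + lam[2] * g2)
    h_max = -np.sum(p * np.log(p))       # maximum entropy, cf. (7)/(12)
    return x, p, lam, h_max
```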

3.1.2. Gross Error Discrimination. Traditional criteria for gross error discrimination (e.g., the 3σ criterion, Grubbs criterion, and Dixon criterion) are based on mathematical statistics. The probability distribution of the sample data needs to be known before these algorithms can be applied to the sample data. However, the probability distribution is rarely known in advance in actual measurement. Statistical requirements cannot be satisfied if only a few sets of sample data are obtained during measurement, and therefore the precision of gross error processing is affected [18]. A new algorithm for gross error discrimination is proposed in this paper, which calculates the measuring uncertainty of the sample sequence through the maximum-discrete-entropy probability distribution and then determines a confidence interval based on the uncertainty to discriminate gross error.

In [14], for a continuous random variable x, if $\bar{x}$ is the expectation and the probability density function estimated by MEM is f(x), then the measurement uncertainty is expressed by

$$u = \sqrt{\int_a^b (x - \bar{x})^2 f(x)\,dx}. \quad (13)$$

The measurement uncertainty calculation for a discrete random variable x can be deduced therefrom. If $\bar{x}$ is the expectation and the probability distribution estimated by MEM is p(x_i), then the uncertainty of the sample should be calculated after eliminating repetitive sample data and can be given by

$$u = \sqrt{\sum_{i=1}^{N'} (x_i - \bar{x})^2\,p(x_i)}. \quad (14)$$

The confidence interval is [x̄ − u, x̄ + u], and whether gross errors are contained in the sample is judged based on this confidence interval. Data outside the confidence interval are considered to carry gross error and should be eliminated from the sample sequence, with a new sample sequence being constituted to fulfill data fusion.
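A short sketch (ours) of this screening step: given the distinct sample values and their maximum-entropy probabilities, compute the uncertainty of (14) and keep only the values inside the confidence interval:

```python
import numpy as np

def screen_gross_errors(x, p, mean):
    """Entropy-based gross-error discrimination, cf. (13)-(14).

    x: distinct sample values, p: their maximum-entropy probabilities,
    mean: the expectation of the sample sequence.
    """
    u = np.sqrt(np.sum((x - mean) ** 2 * p))   # uncertainty u of (14)
    lo, hi = mean - u, mean + u                # confidence interval [mean - u, mean + u]
    keep = (x >= lo) & (x <= hi)               # values outside carry gross error
    return x[keep], p[keep], u, (lo, hi)
```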

3.1.3. Effective Data Fusion. For a DSO, sampling aims at obtaining the information related to the signal under test. As a measure of information quantity, information entropy quantifies the level of uncertainty, and therefore it can be used to fuse the acquired sample data. To reduce the uncertainty of the fusion result, a small weight coefficient should be assigned to a sample with large uncertainty, while a large weight coefficient should be assigned to a sample with small uncertainty.

As mentioned above, oversampling is implemented by the acquisition system of the DSO at a high sampling rate N times the actual sampling rate, and N discrete sample data are obtained at the high sampling rate; after eliminating repetitive sample data these become x_1, x_2, …, x_{N′}, and N″ samples x_1, x_2, …, x_{N″} remain after eliminating gross errors. The information quantity provided by each sample x_i is denoted by the self-information quantity I(x_i). Information entropy is the average uncertainty of the samples, and therefore the ratio of self-information quantity to information entropy can be used to measure the uncertainty of each sample among all the samples. The weight coefficient of fusion is in inverse proportion to the self-information quantity. The detailed algorithm is as follows.

(1) Utilize MEM to estimate the maximum entropy distribution and the maximum entropy of the sample data, and then compute the self-information quantity of each sample, defined by

$$\omega_i = \frac{I(x_i)}{H(x)_{\max}},\quad i = 1, 2, \ldots, N'', \quad (15)$$

where

$$I(x_i) = -\ln p(x_i). \quad (16)$$

(2) Define the weight coefficient of fusion with normalization processing:

$$q_i = \frac{1/\omega_i}{\sum_{i=1}^{N''} 1/\omega_i},\quad i = 1, 2, \ldots, N''. \quad (17)$$

(3) Fuse the data:

$$x_f = \sum_{i=1}^{N''} q_i x_i, \quad (18)$$

where the number of data used in data fusion, that is, N″, equals the number of samples remaining after eliminating repetitive data and gross errors.

(4) Replace the gross errors with the data fusion result x_f to constitute a new sample sequence: if k gross errors are eliminated from the N data, add k copies of x_f to the sample sequence, thus obtaining N new samples x′_1, x′_2, …, x′_N without gross error at the high sampling rate.
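These four steps translate directly into a few lines of code; the following is an illustrative sketch under the same notation (x_eff and p_eff denote the effective sample values and their maximum-entropy probabilities, h_max the maximum entropy), not the authors' implementation:

```python
import numpy as np

def entropy_weighted_fusion(x_eff, p_eff, h_max):
    """Entropy-weighted fusion of the effective samples, cf. (15)-(18)."""
    info = -np.log(p_eff)                      # self-information I(x_i), cf. (16)
    omega = info / h_max                       # relative uncertainty, cf. (15)
    q = (1.0 / omega) / np.sum(1.0 / omega)    # normalized fusion weights, cf. (17)
    return float(np.sum(q * x_eff))            # fused value x_f, cf. (18)

def rebuild_sequence(samples, gross_errors, x_f):
    """Step (4): replace every gross error in the N original samples with x_f."""
    out = np.asarray(samples, dtype=float).copy()
    out[np.isin(out, gross_errors)] = x_f
    return out
```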


3.2. Average-Based Decimation Filtering. The maximum sampling rate of the ADC in a DSO is generally much higher than the sampling rate actually required by the measured signal spectrum. Thus, oversampling offers the opportunity to filter the digital signal, improving the effective resolution of the displayed waveform and reducing undesired noise. Therefore, under the premise of oversampling, the vertical resolution at the actual sampling rate can be increased by adopting an average-based decimation filtering algorithm. To be specific, the DSO carries out oversampling at a high sampling rate, N times the actual sampling rate corresponding to the time base selected by the user, applies the information entropy-based data fusion algorithm of the former section to the N sampling points at the high sampling rate to exclude gross errors and fuse the data of the effective samples, averages after creating the new sample sequence, and finally decimates the sampling points to the actual sampling rate. Average-based decimation under N-times oversampling is given by

$$A_N(n+i) = \frac{1}{N}\sum_{j=0}^{N-1} x_N(n + iN + j), \quad (19)$$

where i = 0, ±1, ±2, …. Figure 3 shows the average-based decimation principle when N = 3.
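A compact sketch (ours) of (19): the oversampled record is grouped into blocks of N points and one average is emitted per block, which is what brings the rate from S_M down to S_R:

```python
import numpy as np

def average_based_decimation(x, N):
    """Average-based decimation under N-times oversampling, cf. (19).

    Each output point is the mean of one block of N consecutive oversampled
    points, so the output rate is the oversampled rate divided by N.
    """
    x = np.asarray(x, dtype=float)
    usable = (len(x) // N) * N          # drop a tail that does not fill a block
    return x[:usable].reshape(-1, N).mean(axis=1)

# Example: a 1 GSa/s record decimated with N = 10 yields 100 MSa/s output samples.
```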

The resolution improved by average-based decimation filtering is measured in bits, which is a function of N (the number of samples to average, or the oversampling factor):

$$R_H = 0.5\log_2 N = 0.5\log_2\left(\frac{S_M}{S_R}\right). \quad (20)$$

In (20), R_H is the improved resolution, S_M is the high sampling rate, and S_R is the actual sampling rate. The −3 dB bandwidth after averaging is

$$B_H = 0.44\,S_R, \quad (21)$$

where B_H denotes the bandwidth and S_R represents the actual sampling rate. It can be seen that the improved vertical resolution and analog bandwidth vary with the maximum sampling rate and the actual sampling rate of the oscilloscope.

Table 1 lists the ideal values of the improved resolution and analog bandwidth of an oscilloscope with a maximum sampling rate of 1 GSa/s and an 8-bit ADC, adopting the average-based decimation algorithm under oversampling.

The values in Columns 3 and 4 of Table 1 are ideal, and the improvement of resolution grows with N: when N increases by a factor of 4, the resolution improves by 1 bit. In reality, the maximum N falls into the range of 10000, since it is limited by real-time performance and memory capacity. Moreover, fixed-point mathematics and noise also lower the highest resolution to some extent. Therefore, one should not expect the resolution to be improved by more than 4 to 6 bits. It should also be noted that the improvement of resolution depends on the signal being dynamic as well. For signals whose conversion results always deviate between different codes, the resolution can always be improved; for steady-state signals, the improvement of resolution is obvious only when the noise amplitude exceeds 1 or 2 ADC LSBs. Fortunately, real-world signals are almost always in this case.

Generally speaking, when the measured signals are single-shot or repeat at low speed, conventional successive capture averaging cannot be adopted, and average-based decimation under oversampling can be used as an alternative. To be specific, average-based decimation under oversampling is especially applicable in the following two situations.

Firstly, if the noise in the signal is obviously high (and, moreover, the noise itself does not need to be measured), average-based decimation under oversampling can be adopted to "clear" the noise.

Secondly, average-based decimation can be adopted to improve the measurement resolution when high-precision measurement of the waveform is required, even if the noise in the signal is not large.

Comparing (6) and (21), it can easily be seen that in the conventional successive sample averaging algorithm the bandwidth is directly proportional to the actual sampling rate and inversely proportional to the number of sample points to average. When the actual sampling rate is given, the bandwidth decreases dramatically as the number of sample points to average increases. However, when average-based decimation under oversampling is adopted, the bandwidth is only directly proportional to the actual sampling rate and has nothing to do with the number of sample points to average (the oversampling factor). When the actual sampling rate is given, the bandwidth is determined accordingly without additional loss. Besides, the Nyquist frequency increases by N times when N-times oversampling is adopted; therefore, another advantage of average-based decimation is reduced aliasing.

3.3. Processing Example. A group of sample data acquired by the DSO is used as an example to illustrate the processing procedure of the algorithm proposed in this paper. The oscilloscope works at the time base of 500 ns/div, and the corresponding actual sampling rate is 100 MSa/s. The oscilloscope carries out 10-times oversampling at 1 GSa/s to obtain 10 original discrete sample data at the high sampling rate, that is, x_1, x_2, …, x_10, which are shown in Table 2. In the samples there are obvious glitches, or gross errors, caused by ADC quantization errors.

The expectation and variance of the 10 sample data are

$$\mu = \bar{x} = \frac{1}{10}\sum_{i=1}^{10} x_i = 113.6,\qquad \sigma^2 = \frac{1}{9}\sum_{i=1}^{10} (x_i - \bar{x})^2 = 530.4889. \quad (22)$$


Figure 3: Average-based decimation principle of 3 times oversampling.

Table 1: Ideal values of improved vertical resolution and bandwidth based on average-based decimation under oversampling.

Sampling rate   Oversampling factor   Total resolution   −3 dB bandwidth
1 GSa/s         1                     8.0 bits           440 MHz
500 MSa/s       2                     8.5 bits           220 MHz
250 MSa/s       4                     9.0 bits           110 MHz
100 MSa/s       10                    9.7 bits           44 MHz
50 MSa/s        20                    10.2 bits          22 MHz
25 MSa/s        40                    10.7 bits          11 MHz
10 MSa/s        100                   11.3 bits          4.4 MHz
1 MSa/s         1000                  13.0 bits          440 kHz
100 kSa/s       10000                 14.6 bits          44 kHz
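The entries of Table 1 follow directly from (20) and (21); a small check (our own) for an 8-bit ADC with a 1 GSa/s maximum sampling rate:

```python
import math

def oversampled_figures(adc_bits, max_rate, actual_rate):
    """Ideal figures behind Table 1: added bits from (20), bandwidth from (21)."""
    n = max_rate / actual_rate                    # oversampling factor N = S_M / S_R
    total_bits = adc_bits + 0.5 * math.log2(n)    # ADC bits plus R_H
    bandwidth = 0.44 * actual_rate                # -3 dB bandwidth of (21)
    return n, total_bits, bandwidth

# e.g. oversampled_figures(8, 1e9, 100e6) -> (10.0, ~9.66, 44e6),
# i.e. the 100 MSa/s row of Table 1.
```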

Excluding the one repeated datum (121), the constraint conditions met by the sample probability distribution are

$$\sum_{i=1}^{N'} p(x_i) = \sum_{i=1}^{9} p(x_i) = 1,$$
$$\sum_{i=1}^{N'} x_i\,p(x_i) = \sum_{i=1}^{9} x_i\,p(x_i) = \mu = 113.6,$$
$$\sum_{i=1}^{N'} (x_i - \mu)^2 p(x_i) = \sum_{i=1}^{9} (x_i - \mu)^2 p(x_i) = \sigma^2 = 530.4889. \quad (23)$$

According to MEM and the Lagrangian function, the calculated Lagrangian coefficients are λ_0 = −4.3009, λ_1 = 0.01648, and λ_2 = 0.000452, respectively. The expressions of the estimated maximum entropy probability distribution, the maximum entropy, and the self-information quantity are given by

$$p(x_i) = \exp\left[\lambda_0 + \sum_{j=1}^{M}\lambda_j g_j(x_i)\right] = \exp\left[-4.3009 + 0.01648\,x_i + 0.000452\,(x_i - 113.6)^2\right],$$
$$H(x)_{\max} = -\lambda_0 - \sum_{j=1}^{M}\lambda_j E(g_j) = 2.1893,$$
$$I(x_i) = -\ln p(x_i) = 4.3009 - 0.01648\,x_i - 0.000452\,(x_i - 113.6)^2. \quad (24)$$

The corresponding probability and self-information quantity of each sample datum are shown in Table 3.


Table 2: 10 sets of original sample data obtained by oversampling.

Number       x1   x2   x3   x4   x5   x6   x7  x8  x9   x10
Sample data  120  121  121  123  124  125  63  79  129  131

Table 3: Probability and self-information quantity of each set of sample data.

Number  Sample data  Probability  Self-information quantity
x1      120          0.0998       2.3048
x2      121          0.1021       2.2821
x3      123          0.1071       2.2339
x4      124          0.1099       2.2085
x5      125          0.1128       2.1822
x6      63           0.1217       2.1054
x7      79           0.0856       2.4579
x8      129          0.1265       2.0678
x9      131          0.1346       2.0052

According to the maximum entropy distribution, the uncertainty of measurement is

$$u = \sqrt{\sum_{i=1}^{N'} (x_i - \bar{x})^2\,p(x_i)} = \sqrt{\sum_{i=1}^{9} (x_i - 113.6)^2\,p(x_i)} = 23.033, \quad (25)$$

and then the confidence interval is

$$[\bar{x} - u,\ \bar{x} + u] = [90.567,\ 136.633]. \quad (26)$$

It can be judged that, in Table 2, x_7 and x_8 are gross errors in quantization, so they should be excluded from the sample sequence. At the same time, the repeated sample x_3 in Table 2 should also be excluded, so the remaining 7 sets of effective sample data form a new sample sequence (i.e., x_1, x_2, …, x_7) for data fusion, and the calculation results are shown in Table 4.

According to (15)–(18), the result of data fusion is

$$x_f = 124.893. \quad (27)$$

Replace the sample data x_7 and x_8 in Table 2 with the data fusion result to form a new sample sequence containing 10 new sample data without gross errors at the high sampling rate, that is, x′_1, x′_2, …, x′_10, as shown in Table 5.

Finally, average the 10 new sample data at the high sampling rate (1 GSa/s) to obtain 1 sampling point at the actual sampling rate (100 MSa/s); that is,

$$A = \frac{1}{10}\sum_{i=1}^{10} x'_i = 124.3786. \quad (28)$$
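The whole numerical example can be reproduced in a few lines; the following verification sketch (ours) reuses the Lagrangian coefficients reported above rather than re-solving for them:

```python
import numpy as np

# Verification sketch of the worked example, using the reported coefficients
# lambda_0 = -4.3009, lambda_1 = 0.01648, lambda_2 = 0.000452.
samples = np.array([120, 121, 121, 123, 124, 125, 63, 79, 129, 131], float)
mu = samples.mean()                                  # 113.6, cf. (22)
x = np.unique(samples)                               # the 9 distinct values
p = np.exp(-4.3009 + 0.01648 * x + 0.000452 * (x - mu) ** 2)   # cf. (24)

u = np.sqrt(np.sum((x - mu) ** 2 * p))               # ~23.03, cf. (25)
keep = (x >= mu - u) & (x <= mu + u)                 # drops the gross errors 63 and 79
x_eff, p_eff = x[keep], p[keep]

h_max = -np.sum(p * np.log(p))                       # ~2.189, cf. (24)
omega = -np.log(p_eff) / h_max                       # cf. (15)-(16)
q = (1 / omega) / np.sum(1 / omega)                  # cf. (17)
x_f = np.sum(q * x_eff)                              # ~124.89, cf. (27)

new_seq = np.where((samples == 63) | (samples == 79), x_f, samples)   # Table 5
print(new_seq.mean())                                # ~124.38, cf. (28)
```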

Figure 4: Time-domain waveform of originally acquired signals at 100 MSa/s.

When directly averaging the 10 sets of original sample data (x_1, x_2, …, x_10) obtained through oversampling, the average value is

$$A' = \frac{1}{10}\sum_{i=1}^{10} x_i = 113.6. \quad (29)$$

Alternatively, when averaging the remaining 8 sets of sample data after excluding the gross errors x_7 and x_8 from the 10 sets (x_1, x_2, …, x_10) obtained through oversampling, the average value is

$$A'' = \frac{1}{8}\left(x_1 + x_2 + x_3 + x_4 + x_5 + x_6 + x_9 + x_{10}\right) = 124.25. \quad (30)$$

According to the maximum entropy theory, the result 124.3786 obtained by the algorithm proposed in this paper is the precise measurement of the unknown signal obtained from the sample data, without any subjective hypotheses or constraints.

Similarly, based on information entropy theory, for each group of original sample data (x_1, x_2, …, x_10) obtained through oversampling (1 GSa/s), gross error exclusion, data fusion, and average-based decimation can be applied to obtain precise measurement data for a complete waveform at the actual sampling rate (100 MSa/s).

4. Experiment and Result Analysis

In order to verify the effectiveness and superiority of the vertical resolution improvement achieved by the algorithm described in this paper, we utilize an 8-bit ADC model provided by Analog Devices to establish the acquisition system of an oscilloscope, conduct simulation experiments with the conventional averaging algorithms, the direct decimation algorithm, and the algorithm proposed in this paper, and finally estimate and compare the performances of all algorithms.


Table 4: Fusion weight coefficients of the 7 sets of effective sample data.

Number  Sample data  Self-information quantity  Ratio of information quantity  Weight coefficient of fusion
x1      120          2.3048                     1.0528                         0.1350
x2      121          2.2821                     1.0424                         0.1364
x3      123          2.2339                     1.0204                         0.1393
x4      124          2.2085                     1.0088                         0.1409
x5      125          2.1822                     0.9968                         0.1426
x6      129          2.0678                     0.9445                         0.1505
x7      131          2.0051                     0.9159                         0.1552

Table 5: 10 new sample data containing the data fusion result.

Number       x′1  x′2  x′3  x′4  x′5  x′6  x′7      x′8      x′9  x′10
Sample data  120  121  121  123  124  125  124.893  124.893  129  131

Figure 5: Spectrum of originally acquired signals at 100 MSa/s; SNR = 28.0776 dB.

The oscilloscope works at the time base of 500 ns/div, and the corresponding actual sampling rate is 100 MSa/s. The frequency of the input sine wave is f_i = 1 MHz. In order to simulate quantization errors caused by noise interference and clock jitter, data-mismatched samples are randomly added to the ideal ADC sampling model, so the acquired sample sequence includes gross errors in quantization.

Experiment 1. Sampling rate f_s = 100 MSa/s. The time-domain waveform and signal spectrum obtained from the sampling results without any processing are shown in Figures 4 and 5, respectively.

Figure 6: Signal spectrum of successive capture averaging (N = 10); SNR = 43.0485 dB.

After calculation, in Figure 5, SNR = 28.0776 dB and ENOB = 4.3717 bits.
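The ENOB figures quoted in these experiments are consistent with the standard SNR-to-ENOB conversion (a reminder added here, not stated explicitly in the paper):

$$\mathrm{ENOB} = \frac{\mathrm{SNR} - 1.76\ \mathrm{dB}}{6.02\ \mathrm{dB/bit}},\qquad \text{e.g.}\ \frac{28.0776 - 1.76}{6.02} \approx 4.37\ \text{bits}.$$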

Experiment 2. Sampling rate f_s = 100 MSa/s. The signal spectrum obtained by applying successive capture averaging to the sampling results with N = 10 is shown in Figure 6.

After calculation, in Figure 6, SNR = 43.0485 dB and ENOB = 6.8585 bits.

Experiment 3. Sampling rate f_s = 100 MSa/s. The signal spectrum obtained by applying successive sample averaging to the sampling results with N = 10 is shown in Figure 7.

After calculation, in Figure 7, SNR = 44.2538 dB and ENOB = 7.0588 bits.


Figure 7: Signal spectrum of successive sample averaging (N = 10); SNR = 44.2538 dB.

Figure 8: Signal spectrum of 10 times direct decimation; SNR = 28.8353 dB.

Experiment 4. Oversampling is adopted with the sampling rate f_s = 1 GSa/s. The signal spectrum obtained by directly applying 10 times decimation to the sampling results is shown in Figure 8.

After calculation, in Figure 8, SNR = 28.8353 dB and ENOB = 4.4976 bits.

Experiment 5. Oversampling is adopted with the sampling rate f_s = 1 GSa/s. The signal spectrum obtained by conducting 10 times average-based decimation on the sampling results is shown in Figure 9.

Figure 9: Signal spectrum of 10 times average-based decimation; SNR = 46.6242 dB.

Figure 10: Time-domain waveform obtained with data fusion and 10 times average-based decimation.

After calculation, in Figure 9, SNR = 46.6242 dB and ENOB = 7.4525 bits.

Experiment 6. Oversampling is adopted with the sampling rate f_s = 1 GSa/s. The time-domain waveform and signal spectrum obtained by applying the information entropy-based algorithm proposed in this paper to the sampling results (excluding gross errors, fusing data, creating a new sample sequence, and then conducting 10 times average-based decimation) are shown in Figures 10 and 11, respectively.

After calculation, in Figure 11, SNR = 59.6071 dB and ENOB = 9.6092 bits.


Table 6: Comparison of experiment results.

Experiment number  Method                                                                                        SNR (dB)  ENOB (bits)  −3 dB bandwidth (MHz)
1                  Nonoversampling, original signal                                                              28.0776   4.3717       43
2                  Nonoversampling, successive capture averaging                                                 43.0485   6.8585       43
3                  Nonoversampling, successive sample averaging                                                  44.2538   7.0588       4.3
4                  Oversampling, 10 times direct decimation                                                      28.8353   4.4976       43
5                  Oversampling, 10 times average-based decimation                                               46.6242   7.4525       43
6                  Oversampling, information entropy-based data fusion and 10 times average-based decimation     59.6071   9.6092       43

Figure 11: Signal spectrum obtained with data fusion and 10 times average-based decimation; SNR = 59.6071 dB.

Table 6 compares the experimental results of the six methods mentioned above.

According to Table 6, at the sampling rate of 100 MSa/s, the conventional successive capture averaging algorithm and successive sample averaging algorithm increase the ENOB by about 2.49 and 2.69 bits, respectively, when processing the sinusoidal samples including quantization errors. However, the successive sample averaging algorithm also causes a severe decrease of the bandwidth. At the sampling rate of 1 GSa/s, the ENOB provided by the average-based decimation algorithm is about 2.95 bits higher than that provided by the direct decimation algorithm. On this basis, the algorithm of information entropy-based data fusion and average-based decimation proposed in this paper can further increase the ENOB by about 2.16 bits, achieving a total ENOB of 9.61 bits. Compared with the theoretical digitizing bits of the 8-bit ADC, the actual ENOB (resolution) has increased by about 1.61 bits in total, which is very close to the theoretically improved result R_H = 0.5 log_2 10 ≈ 1.66 bits in (20), and at the same time no loss of analog bandwidth at the actual sampling rate is caused.

5. Conclusion

This paper proposes a decimation filtering algorithm based on information entropy and averaging to raise the vertical resolution of a DSO. Under oversampling and for a single acquired signal, the maximum entropy of the sample data is utilized to eliminate gross error in quantization, the remaining effective sample data are fused, and average-based decimation is conducted to further filter the noise; the DSO resolution can thereby be improved. In order to verify the effectiveness and superiority of the algorithm, comparison experiments are conducted using different algorithms. The results show that the resolution improvement of the proposed algorithm is nearly identical with the theoretical deduction. Moreover, no subjective hypotheses or constraints on the detected signals are added during the whole processing, and no impact on the analog bandwidth of the DSO at the actual sampling rate is exerted.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Nos. 61301263 and 61301264), the Specialized Research Fund for the Doctoral Program of Higher Education of China (No. 20120185130002), and the Fundamental Research Fund for the Central University of China (A03007023801217 and A03008023801080).


References

[1] Tektronix, "Improve the vertical resolution of digital phosphor oscilloscopes," China Electronic Market, vol. 6, no. 1, pp. 76–84, 2012.

[2] R. Reeder, M. Looney, and J. Hand, "Pushing the state of the art with multichannel A/D converters," Analog Dialogue, vol. 39, no. 2, pp. 7–10, 2005.

[3] E. Seifert and A. Nauda, "Enhancing the dynamic range of analog-to-digital converters by reducing excess noise," in Proceedings of the IEEE Pacific RIM Conference on Communications, Computers and Signal Processing, pp. 574–576, Victoria, Canada, June 1989.

[4] K. C. Lauritzen, S. H. Talisa, and M. Peckerar, "Impact of decorrelation techniques on sampling noise in radio-frequency applications," IEEE Transactions on Instrumentation and Measurement, vol. 59, no. 9, pp. 2272–2279, 2010.

[5] V. Gregers-Hansen, S. M. Brockett, and P. E. Cahill, "A stacked A-to-D converter for increased radar signal processor dynamic range," in Proceedings of the IEEE International Radar Conference, pp. 169–174, May 2001.

[6] S. R. Duncan, V. Gregers-Hansen, and J. P. McConnell, "A stacked analog-to-digital converter providing 100 dB of dynamic range," in Proceedings of the IEEE International Radar Conference, pp. 31–36, 2005.

[7] Silicon Laboratories, "Improving ADC resolution by oversampling and averaging," Application Note AN118, 2013, http://www.silabs.com/Support%20Documents/TechnicalDocs/an118.pdf.

[8] Y. Lembeye, J. Pierre Keradec, and G. Cauffet, "Improvement in the linearity of fast digital oscilloscopes used in averaging mode," IEEE Transactions on Instrumentation and Measurement, vol. 43, no. 6, pp. 922–928, 1994.

[9] C. Bishop and C. Kung, "Effects of averaging to reject unwanted signals in digital sampling oscilloscopes," in Proceedings of the 45 Years of Support Innovation - Moving Forward at the Speed of Light (AUTOTESTCON '10), pp. 1–4, IEEE, Orlando, Fla, USA, September 2010.

[10] C. Fager and K. Andersson, "Improvement of oscilloscope based RF measurements by statistical averaging techniques," in Proceedings of the IEEE MTT-S International Microwave Symposium Digest, pp. 1460–1463, San Francisco, Calif, USA, June 2006.

[11] Z. H. L. Luo, Q. J. Zhang, and Q. C. H. Fange, "Method of data fusion applied in intelligent instruments," Instrument Technique and Sensor, vol. 3, no. 1, pp. 45–46, 2002.

[12] F.-N. Cai and Q.-X. Liu, "Single sensor data fusion and analysis of effectiveness," Journal of Transducer Technology, vol. 24, no. 2, pp. 73–74, 2005.

[13] C. E. Shannon, "A mathematical theory of communication," The Bell System Technical Journal, vol. 27, no. 3, pp. 379–423, 1948.

[14] M. J. Zhu and B. J. Guo, "Study on evaluation of measurement result and uncertainty based on maximum entropy method," Electrical Measurement & Instrument, vol. 42, no. 8, pp. 5–8, 2005.

[15] Y. Tan, A. Chu, M. Lu, and B. T. Cunningham, "Distributed feedback laser biosensor noise reduction," IEEE Sensors Journal, vol. 13, no. 5, pp. 1972–1978, 2013.

[16] E. T. Jaynes, "Information theory and statistical mechanics," The Physical Review, vol. 106, no. 4, pp. 620–630, 1957.

[17] T. M. Cover and J. A. Thomas, Elements of Information Theory, John Wiley & Sons, New York, NY, USA, 2nd edition, 2006.

[18] D. H. Li and Z. H. Li, "Processing of gross errors in small samples based on measurement information theory," Mechanical Engineering & Automation, vol. 6, no. 1, pp. 115–117, 2009.

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 2: Research Article Information Entropy- and Average-Based ...

2 Mathematical Problems in Engineering

have certain limitations on restricting noise and improvingresolution Since successive capture averaging is based onrepetitive acquisition of signals it is limited by the repet-itiveness of the signals under test Even though successivesample averaging is based on single acquisition of the signalunder test it sacrifices analog bandwidthwhile filtering noiseIn addition gross error in quantization such as irrelevantnoise and quantization error caused by data mismatch isalways included in the sampling data of oscilloscope dueto the influence of environmental disturbance clock jittertransmission delay and so forth If not being handled first theresults of direct averaging of the acquired sample data withgross error will deviate from actual signal drastically

Data fusion (also called information fusion) is a sampledata processing method which is widely applied currentlyIn previous studies a data fusion algorithm based on theestimation theory in batches of statistics theory is proposed inthe [11] and is further improved in [12] However it is kind ofsubjective that both algorithms assume that sample data arecharacterized by normal distributionThe concept of entropyoriginates from physics to describe the disordered state ofthermodynamic system Entropy reflects the statistical prop-erty of systemand is introduced into numerous research fieldssuccessively In 1948 an American mathematician Shannonintroduced the entropy in thermodynamics into informationtheory and proposed information entropy [13] to measurethe uncertainty degree of information Information entropyprovided a new approach for data fusion [14]

An information entropy-based data fusion and average-based decimation filtering algorithm proceeding fromimproving average algorithm and in combination with rel-evant theories of entropy are proposed in this paper toimprove the resolution of oscilloscope effectively Additionalhorizontal sampling information is used to achieve highervertical resolution under the premise of oversampling Firstlycomparing with traditional averaging algorithm this algo-rithm aims at the signal sample acquired once and thereforeit is subject to no restrictions of the repetitiveness of signalSecondly the resolution of oscilloscope is improved througheliminating gross error in quantization due to noise andquantization error by utilizing the maximum entropy ofsample data with further noise filtering via average-baseddecimation after data fusion of efficient sample data beforeaveraging No subjective assumptions and constraints areadded to the signal under test in the whole process withoutany impact on the analog bandwidth of oscilloscope underactual sampling rate

2 Common Averaging Theory

21 Successive Capture Averaging Successive capture aver-aging is a basic denoising signal processing technologyfor acquisition systems of most DSOs which depends onrepetitive triggering and acquisition of repetitive signalsSuccessive capture averaging averages the correspondingsampling points in these waveforms one by one by usingthe multiwaveforms acquisition repetitively to form a singlecapture result after averaging that is output single waveform

Figure 1 shows the schematic diagram of averaging of 119873successive acquisitions

The direct calculation method of successive captureaveraging is to sum the corresponding sampling points inall acquisitions and then divide them by the number ofacquisitions The expression is given by

119860119873 (119899 + 119894) =

1

119873

119873

sum

119895=1

119909119895 (119899 + 119894) (1)

where 119894 = 0 plusmn1 plusmn2 119860119873(119899 + 119894) is the averaging result

119873 is the number of acquisitions to average and 119909119895(119899 + 119894)

represents the corresponding sampling point at the moment119899 + 119894 in the 119895th acquisition Obviously average value cannotbe achieved in this algorithm until all 119873 acquisitions arecompleted If 119873 is too large the throughput rate of systemwill be affected remarkably For users the delay caused byaveraging is unacceptable For oscilloscope the huge sampledata will use out memory capacity rapidly

Consequently, an improved exponential averaging algorithm is widely applied in successive capture averaging. The exponential averaging algorithm, shown in (2), creates a new averaging result A_j(n + i) by utilizing a new sampling point x_j(n + i) at moment n + i and the last averaging result A_{j-1}(n + i):

\[ A_j(n+i) = \frac{x_j(n+i) + (p-1)\,A_{j-1}(n+i)}{p} = \frac{x_j(n+i)}{p} + \frac{p-1}{p}\,A_{j-1}(n+i), \quad (2) \]

where i = 0, ±1, ±2, .... In (2), j represents the current number of acquisitions, A_j(n + i) is the new averaging result, A_{j-1}(n + i) is the last averaging result, x_j(n + i) is the new sampling point, and p is the weighting coefficient. Assuming that N is the total number of acquisitions to average, p = j if j < N and p = N otherwise. The exponential averaging algorithm is clearly more efficient when calculating and storing the acquired and averaged waveforms: it updates the averaged result immediately after each acquisition, yields the same final waveform as the direct averaging algorithm, and significantly lowers the memory requirements.
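The two forms of successive capture averaging can be summarized in a short sketch of equations (1) and (2); this is a minimal NumPy illustration assuming the N acquisitions are stored as rows of a 2-D array, and the function and variable names are illustrative rather than taken from the paper:

```python
import numpy as np

def direct_average(acquisitions):
    """Direct successive capture averaging, equation (1): average the
    corresponding sampling points of all N acquisitions."""
    return np.mean(acquisitions, axis=0)

def exponential_average(acquisitions, n_max=None):
    """Exponential (running) averaging, equation (2): the result is updated
    after every acquisition, so no full acquisition buffer is required."""
    n_max = n_max or len(acquisitions)       # total number N of acquisitions to average
    avg = np.zeros_like(acquisitions[0], dtype=float)
    for j, x_j in enumerate(acquisitions, start=1):
        p = min(j, n_max)                    # weighting coefficient: p = j while j < N, else N
        avg = x_j / p + (p - 1) / p * avg
    return avg

# For j <= N both algorithms produce the same waveform.
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * np.arange(1000) / 100)
acqs = signal + 0.1 * rng.standard_normal((64, 1000))   # 64 noisy acquisitions
assert np.allclose(direct_average(acqs), exponential_average(acqs))
```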

No matter which algorithm is adopted, successive capture averaging can improve the vertical resolution of the signal. This improvement is measured in bits and is a function of N (the number of acquisitions to average) [1]:

\[ R_C = 0.5 \log_2 N. \quad (3) \]

In (3), R_C is the improved resolution. Since the averaging algorithm is implemented with fixed-point mathematics in numerous oscilloscopes, and the maximum number of acquisitions to average generally does not exceed 8192 once real-time performance and memory capacity are taken into account, the total resolution is limited to about 14.5 bits. In practice, fixed-point arithmetic, noise, and dithering error can lower the maximum resolution further to a certain extent.

[Figure 1: Averaging of N successive acquisitions. The corresponding input samples of the N acquisitions at each moment are summed and divided by N to form each output sample A_N(n + i).]

Successive capture averaging can improve SNR, eliminate the noise unrelated to triggering, and improve vertical resolution. Meanwhile, it does not limit the waveform bandwidth under ideal circumstances, which is an obvious advantage compared with other signal processing technologies. However, because it relies on numerous triggers and repetitive quantization of the signal, successive capture averaging is limited by the repetitiveness of the signal under test and consequently is only applicable to observing repetitive signals.

2.2. Successive Sample Averaging. Successive sample averaging, also known as boxcar filtering or moving average filtering, is another averaging algorithm widely applied in DSO acquisition systems. Since it is based on a single acquisition of the signal under test, successive sample averaging is not influenced by the repetitiveness of the signal itself. In this averaging process, each output sampling point represents the average value of N successive input sampling points [15], as shown below:

\[ A_N(n+i) = \frac{1}{N} \sum_{j=0}^{N-1} x_N(n+i+j), \quad (4) \]

where i = 0, ±1, ±2, ... and N is the number of sampling points to average.

Figure 2 shows the averaging principle for 3 successive sampling points.

For successive sample averaging, the sampling rate before and after averaging is equal. It eliminates noise and improves the vertical resolution of the signal at the cost of reducing the bandwidth of the DSO. This improvement is measured in bits and is a function of N (the number of sampling points to average) [1]:

\[ R_S = 0.5 \log_2 N, \quad (5) \]

where R_S is the improved resolution. In essence, successive sample averaging is a low-pass filter, and its 3 dB bandwidth is derived in [9]:

\[ B_S = \frac{0.44\,S_R}{N}, \quad (6) \]

where B_S is the bandwidth and S_R is the sampling rate. This type of filter has an extremely sharp cut-off and completely rejects signals for which the averaging window N/S_R contains an integer number of signal periods. Noise reduction is almost in direct proportion to the square root of the number of sampling points averaged; for instance, averaging 25 sampling points reduces the magnitude of high-frequency noise to 1/5 of its original value. In DSOs, successive sample averaging is often used to implement a variable-bandwidth function.

It can easily be seen from (6) that even though successive sample averaging is based on a single acquisition of the signal under test, it lowers the analog bandwidth at the actual sampling rate while filtering noise, and therefore its practicability is poor.
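Before moving on, the successive sample averaging of (4) and its figures of merit (5) and (6) can be summarized in a brief sketch; NumPy is assumed, the 100 MSa/s rate is only an example value, and the function name is illustrative:

```python
import numpy as np

def successive_sample_average(x, n):
    """Boxcar / moving-average filter, equation (4): each output point is
    the mean of n successive input points; the sampling rate is unchanged."""
    kernel = np.ones(n) / n
    return np.convolve(x, kernel, mode="valid")

# Resolution gain (5) and -3 dB bandwidth (6) for an assumed 100 MSa/s rate.
n, sample_rate = 25, 100e6
extra_bits = 0.5 * np.log2(n)          # ~2.3 bits for n = 25
bandwidth = 0.44 * sample_rate / n     # ~1.76 MHz: the price paid in bandwidth
```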

3. Resolution Improving Algorithm Based on Information Entropy- and Average-Based Decimation

3.1. Data Fusion Based on Information Entropy. The information entropy-based data fusion investigated in this paper aims to eliminate gross error in quantization and thereby obtain precise measuring results under the condition of oversampling. Firstly, the acquisition system of the oscilloscope uses the maximum entropy method (MEM) to estimate the probability distribution of the discrete sample data acquired under oversampling, and then calculates the measuring uncertainty of the sample from this probability distribution to determine a confidence interval. The acquisition system then discriminates gross error based on the confidence interval, and finally determines the weight coefficients of fusion according to information entropy to fuse the data and obtain a precise measured value of the signal under test, without any subjective assumptions or restrictions being added.

[Figure 2: Averaging principle of 3 successive sampling points. Each output sample A_3(n + i) is the sum of 3 successive input samples divided by 3.]

3.1.1. Distribution Estimation of Maximum Entropy. Entropy is an essential concept in thermodynamics: the entropy of an isolated system grows constantly, and the maximum entropy determines the steady state of the system. Similar conclusions hold in information theory. In 1957, Jaynes proposed the maximum entropy principle, proceeding from maximizing the information entropy: when deducing an unknown distribution from only partial information, one should select the probability distribution that has the maximum entropy while conforming to the known constraints; any other selection would amount to adding restrictions or assumptions not supported by the available information [16]. In other words, when only the sample under test is available and there is no sufficient reason to select a particular analytic distribution function, the least tendentious distribution of the measurand can be determined through the maximum entropy [14].

Assume that oversampling is implemented by the acquisition system of the DSO at a high sampling rate N times the actual sampling rate, so that N discrete sample data x_1, x_2, ..., x_N are obtained at the high sampling rate. The sample sequence after eliminating repetitive sample data is x_1, x_2, ..., x_{N'}, with corresponding probabilities of occurrence p(x_1), p(x_2), ..., p(x_{N'}), and the probability distribution p(x_i) of the sample can be estimated through the maximum discrete entropy. Based on the information entropy defined by Shannon [13], the information entropy of the discrete random variable x is [17]

\[ H(x) = -k \sum_{i=1}^{N'} p(x_i) \ln p(x_i), \quad (7) \]

where p(x_i) denotes the probability distribution of sample x_i to be estimated, which satisfies the constraints below:

\[ \text{s.t.} \quad \sum_{i=1}^{N'} p(x_i) = 1, \quad p(x_i) \ge 0 \ (i = 1, 2, \ldots, N'), \quad \sum_{i=1}^{N'} p(x_i)\,g_j(x_i) = E(g_j) \ (j = 1, 2, \ldots, M). \quad (8) \]

In (8), g_j(x_i) (j = 1, 2, ..., M) is the statistical moment function of order j, and E(g_j) is the expected value of g_j(x_i). The Lagrange multiplier method can be used to solve this problem. Since k is a positive constant, take k = 1 for convenience and form the Lagrangian function L(x, λ) shown in (9):

\[ L(x,\lambda) = -\sum_{i=1}^{N'} p(x_i) \ln p(x_i) + (\lambda_0 + 1)\left[\sum_{i=1}^{N'} p(x_i) - 1\right] + \sum_{j=1}^{M} \lambda_j \left[\sum_{i=1}^{N'} p(x_i)\,g_j(x_i) - E(g_j)\right], \quad (9) \]

where λ_j (j = 0, 1, ..., M) are the Lagrangian coefficients. Taking the partial derivatives with respect to p(x_i) and λ_j and setting them to zero gives the equation set

\[ \frac{\partial L(x_i,\lambda_j)}{\partial p(x_i)} = 0 \ (i = 1, 2, \ldots, N'), \qquad \frac{\partial L(x_i,\lambda_j)}{\partial \lambda_j} = 0 \ (j = 0, 1, 2, \ldots, M). \quad (10) \]

Solving the equation set (10), the maximum entropy probability distribution function is

\[ p(x_i) = \exp\left[\lambda_0 + \sum_{j=1}^{M} \lambda_j\,g_j(x_i)\right], \quad (11) \]

and the corresponding maximum entropy can be given by

\[ H(x)_{\max} = -\lambda_0 - \sum_{j=1}^{M} \lambda_j\,E(g_j). \quad (12) \]

For given sample data, the expectation and variance of the sample sequence can be chosen as the moment constraints. The distribution estimation based on maximum entropy therefore estimates the probability distribution from the entropy of the discrete random variable, reaching the maximum entropy H(x)_max by adjusting the probability distribution model p(x_i) while preserving the statistical properties of the sample [16].
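As a concrete illustration of (7)–(12), the following sketch estimates the maximum entropy distribution numerically by matching the first two moments. SciPy is assumed to be available, and the function name and the dual-style solution via fsolve are implementation choices for this sketch, not a method prescribed by the paper:

```python
import numpy as np
from scipy.optimize import fsolve

def max_entropy_distribution(x, mu, var):
    """Estimate the maximum-entropy distribution (11) on the distinct sample
    values x, subject to constraints (8) with moment functions
    g1(x) = x and g2(x) = (x - mu)^2, as used later in Section 3.3."""
    x = np.asarray(x, dtype=float)
    g = np.vstack([x, (x - mu) ** 2])            # g_j(x_i), j = 1, 2
    target = np.array([mu, var])                 # E(g_1), E(g_2)

    def unnormalized(lam):
        return np.exp(lam[0] * g[0] + lam[1] * g[1])

    def moment_error(lam):
        w = unnormalized(lam)
        p = w / w.sum()                          # normalization determines lambda_0
        return g @ p - target                    # both moment constraints must vanish

    lam12 = fsolve(moment_error, x0=np.zeros(2))
    w = unnormalized(lam12)
    lam0 = -np.log(w.sum())                      # from sum_i p(x_i) = 1
    p = np.exp(lam0) * w                         # equation (11)
    h_max = -np.sum(p * np.log(p))               # maximum entropy (12)
    return p, (lam0, *lam12), h_max
```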

3.1.2. Gross Error Discrimination. Traditional criteria for gross error discrimination (e.g., the 3σ criterion, the Grubbs criterion, and the Dixon criterion) are based on mathematical statistics. The probability distribution of the sample data must be known before these algorithms can be applied, yet the probability distribution is rarely known in advance in actual measurement. Moreover, the required statistical properties cannot be satisfied when only a few sets of sample data are obtained during measurement, so the precision of gross error handling suffers [18]. A new gross error discrimination algorithm is proposed in this paper: it calculates the measuring uncertainty of the sample sequence from the maximum-discrete-entropy probability distribution and then determines a confidence interval based on this uncertainty to discriminate gross error.

In [14], for a continuous random variable x with expectation x̄ and probability density function f(x) estimated by MEM, the measurement uncertainty is expressed by

\[ u = \sqrt{\int_{a}^{b} (x - \bar{x})^2 f(x)\,dx}. \quad (13) \]

The measurement uncertainty of a discrete random variable x can be deduced analogously. If x̄ is the expectation and p(x_i) is the probability distribution estimated by MEM, then the uncertainty of the sample, calculated after eliminating repetitive sample data, is given by

\[ u = \sqrt{\sum_{i=1}^{N'} (x_i - \bar{x})^2\,p(x_i)}. \quad (14) \]

The confidence interval is [x̄ − u, x̄ + u], and whether gross errors are contained in the sample is judged against this interval. Data outside the confidence interval are regarded as containing gross error and are eliminated from the sample sequence, and a new sample sequence is constituted for data fusion.
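A minimal sketch of this discrimination step, assuming the MEM probabilities p(x_i) of the de-duplicated samples are already available (the function name is illustrative):

```python
import numpy as np

def discriminate_gross_errors(x, p, mean):
    """Uncertainty (14) and confidence interval of Section 3.1.2: samples
    outside [mean - u, mean + u] are treated as gross errors."""
    x, p = np.asarray(x, dtype=float), np.asarray(p, dtype=float)
    u = np.sqrt(np.sum((x - mean) ** 2 * p))
    keep = (x >= mean - u) & (x <= mean + u)
    return x[keep], x[~keep], (mean - u, mean + u)
```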

3.1.3. Effective Data Fusion. For a DSO, sampling aims at obtaining information about the signal under test. As a measure of information quantity, information entropy quantifies the level of uncertainty and can therefore be used to fuse the acquired sample data. To reduce the uncertainty of the fusion result, a small weight coefficient should be assigned to samples with large uncertainty and a large weight coefficient to samples with small uncertainty.

As mentioned above, oversampling is implemented by the acquisition system of the DSO at a high sampling rate N times the actual sampling rate; N discrete sample data are obtained at the high sampling rate, x_1, x_2, ..., x_{N'} remain after eliminating repetitive sample data, and N'' samples x_1, x_2, ..., x_{N''} remain after eliminating gross errors. The information quantity provided by each sample x_i is denoted by its self-information I(x_i). Information entropy is the average uncertainty of the samples, so the ratio of self-information to information entropy measures the uncertainty of each sample relative to all samples, and the weight coefficient of fusion is inversely proportional to the self-information. The detailed steps are as follows.

(1) Use MEM to estimate the maximum entropy distribution and the maximum entropy of the sample data, and then compute for each sample the ratio of its self-information quantity to the maximum entropy, defined by

\[ \omega_i = \frac{I(x_i)}{H(x)_{\max}}, \quad i = 1, 2, \ldots, N'', \quad (15) \]

where
\[ I(x_i) = -\ln p(x_i). \quad (16) \]

(2) Define the weight coefficients of fusion with normalization processing:

\[ q_i = \frac{1/\omega_i}{\sum_{i=1}^{N''} 1/\omega_i}, \quad i = 1, 2, \ldots, N''. \quad (17) \]

(3) Fuse the data:

\[ x_f = \sum_{i=1}^{N''} q_i\,x_i, \quad (18) \]

where the number of data used in data fusion, N'', equals the number of samples remaining after eliminating repetitive data and gross errors.

(4) Replace the gross errors with the data fusion result x_f to constitute a new sample sequence: if k gross errors are eliminated from the N data, add k copies of x_f to the sample sequence, thus obtaining N new samples x'_1, x'_2, ..., x'_N without gross error at the high sampling rate. A code sketch of these four steps is given below.
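The sketch assumes the effective samples, their MEM probabilities, and the maximum entropy have already been computed as in the previous subsections; the function names are illustrative:

```python
import numpy as np

def entropy_weighted_fusion(x_eff, p_eff, h_max):
    """Steps (1)-(3): self-information (16), its ratio to the maximum
    entropy (15), normalized weight coefficients (17), fused value (18)."""
    info = -np.log(np.asarray(p_eff, dtype=float))   # I(x_i)
    omega = info / h_max                             # ratio of information quantity
    q = (1.0 / omega) / np.sum(1.0 / omega)          # weight coefficients of fusion
    return np.sum(q * np.asarray(x_eff, dtype=float))

def rebuild_sequence(x_raw, gross_mask, x_fused):
    """Step (4): replace every gross-error position of the original
    N-point sequence with the fused value x_f."""
    x_new = np.asarray(x_raw, dtype=float).copy()
    x_new[gross_mask] = x_fused
    return x_new
```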


3.2. Average-Based Decimation Filtering. The maximum sampling rate of the ADC in a DSO is generally much higher than the sampling rate actually required by the spectrum of the measured signal. Oversampling therefore offers an opportunity for digital filtering that improves the effective resolution of the displayed waveform and reduces undesired noise. Under the premise of oversampling, the vertical resolution at the actual sampling rate can be increased by adopting an average-based decimation filtering algorithm. Specifically, the DSO carries out oversampling at a high sampling rate N times the actual sampling rate corresponding to the time base selected by the user, and then applies the information entropy-based data fusion algorithm of the previous section to the N

sampling points at the high sampling rate to exclude gross errors, fuse the effective samples, average the newly created sample sequence, and finally decimate to sampling points at the actual sampling rate. Average-based decimation under N times oversampling is given by

\[ A_N(n+i) = \frac{1}{N} \sum_{j=0}^{N-1} x_N(n + iN + j), \quad (19) \]

where i = 0, ±1, ±2, .... Figure 3 shows the average-based decimation principle when N = 3. The resolution improvement provided by average-based decimation filtering is measured in bits and is a function of N (the number of samples to average, i.e., the oversampling factor):

\[ R_H = 0.5 \log_2 N = 0.5 \log_2\left(\frac{S_M}{S_R}\right). \quad (20) \]

In (20), R_H is the improved resolution, S_M is the high sampling rate, and S_R is the actual sampling rate. The −3 dB bandwidth after averaging is

\[ B_H = 0.44\,S_R, \quad (21) \]

where B_H denotes the bandwidth and S_R represents the actual sampling rate. The improved vertical resolution and the analog bandwidth thus vary with the maximum sampling rate and the actual sampling rate of the oscilloscope.
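A minimal sketch of average-based decimation (19) together with the ideal figures of merit (20) and (21); NumPy is assumed, the 1 GSa/s-to-100 MSa/s rates mirror the example used later, and the names are illustrative:

```python
import numpy as np

def average_decimate(x, n):
    """Average-based decimation, equation (19): average every block of n
    oversampled points into one output point at the actual sampling rate."""
    x = np.asarray(x, dtype=float)
    x = x[: len(x) // n * n]              # drop the incomplete tail block, if any
    return x.reshape(-1, n).mean(axis=1)

# Ideal figures of merit for 1 GSa/s oversampling decimated to 100 MSa/s,
# matching (20), (21) and the 100 MSa/s row of Table 1.
s_m, s_r = 1e9, 100e6
n = int(s_m / s_r)                        # oversampling factor N = 10
r_h = 0.5 * np.log2(n)                    # ~1.66 extra bits -> ~9.7 bits total for an 8-bit ADC
b_h = 0.44 * s_r                          # 44 MHz, independent of N
```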

Table 1 lists the ideal values of improved resolution and analog bandwidth for an oscilloscope with a maximum sampling rate of 1 GSa/s and an 8-bit ADC when the average-based decimation algorithm is adopted under oversampling.

The values in Columns 3 and 4 of Table 1 are ideal, and the improvement of resolution is proportional to log_2 N; that is, every fourfold increase of N improves the resolution by 1 bit. In reality, the maximum N falls in the range of 10000, limited by real-time performance and memory capacity. Moreover, fixed-point mathematics and noise also lower the achievable resolution to some extent, so an improvement of more than 4 to 6 bits should not be expected. It should also be noted that the improvement of resolution depends on the dynamics of the signal: for signals whose conversion results keep toggling between different codes, the resolution can always be improved, whereas for steady-state signals the improvement is obvious only when the noise amplitude exceeds 1 or 2 ADC LSBs. Fortunately, real-world signals are almost always of the former kind.

Generally speaking, when the measured signal is a single-shot event or repeats only at low speed, conventional successive capture averaging cannot be adopted, and average-based decimation under oversampling can be used as an alternative. It is especially applicable in the following two situations.

Firstly, if the noise in the signal is obviously high (and measuring the noise itself is not required), average-based decimation under oversampling can be adopted to "clear" the noise.

Secondly, average-based decimation can be adopted to improve the measurement resolution when high-precision measurement of the waveform is required, even if the noise in the signal is not strong.

Comparing (6) and (20), it can easily be seen that in the conventional successive sample averaging algorithm the bandwidth is directly proportional to the actual sampling rate and inversely proportional to the number of sample points averaged: for a given actual sampling rate, the bandwidth drops dramatically as the number of averaged points increases. When average-based decimation under oversampling is adopted, however, the bandwidth is proportional only to the actual sampling rate and is independent of the number of points averaged (the oversampling factor); once the actual sampling rate is chosen, the bandwidth is determined with no additional loss. Besides, the Nyquist frequency increases by a factor of N under N times oversampling, so another advantage of average-based decimation is reduced aliasing.

3.3. Processing Example. A group of sample data acquired by the DSO is used as an example to illustrate the processing procedure of the algorithm proposed in this paper. The oscilloscope works at a time base of 500 ns/div, for which the corresponding actual sampling rate is 100 MSa/s. The oscilloscope carries out 10-times oversampling at 1 GSa/s to obtain 10 original discrete sample data at the high sampling rate, x_1, x_2, ..., x_10, which are shown in Table 2. The samples contain obvious glitches, that is, gross errors caused by ADC quantization errors.

The expectation and variance of the 10 sample data are

\[ \mu = \bar{x} = \frac{1}{10} \sum_{i=1}^{10} x_i = 113.6, \qquad \sigma^2 = \frac{1}{9} \sum_{i=1}^{10} (x_i - \bar{x})^2 = 530.4889. \quad (22) \]


[Figure 3: Average-based decimation principle for 3 times oversampling: every 3 successive input samples are summed and divided by 3 to produce one output sample at the actual sampling rate.]

Table 1: Ideal values of improved vertical resolution and bandwidth based on average-based decimation under oversampling.

Sampling rate   Oversampling factor   Total resolution   −3 dB bandwidth
1 GSa/s         1                     8.0 bits           440 MHz
500 MSa/s       2                     8.5 bits           220 MHz
250 MSa/s       4                     9.0 bits           110 MHz
100 MSa/s       10                    9.7 bits           44 MHz
50 MSa/s        20                    10.2 bits          22 MHz
25 MSa/s        40                    10.7 bits          11 MHz
10 MSa/s        100                   11.3 bits          4.4 MHz
1 MSa/s         1000                  13.0 bits          440 kHz
100 kSa/s       10000                 14.6 bits          44 kHz

Exclude the one repeated datum, 121; the constraint conditions met by the sample probability distribution are then

\[ \sum_{i=1}^{N'} p(x_i) = \sum_{i=1}^{9} p(x_i) = 1, \qquad \sum_{i=1}^{N'} x_i\,p(x_i) = \sum_{i=1}^{9} x_i\,p(x_i) = \mu = 113.6, \]
\[ \sum_{i=1}^{N'} (x_i - \mu)^2 p(x_i) = \sum_{i=1}^{9} (x_i - \mu)^2 p(x_i) = \sigma^2 = 530.4889. \quad (23) \]

According to MEM and the Lagrangian function, the calculated Lagrangian coefficients are λ_0 = −4.3009, λ_1 = 0.01648, and λ_2 = 0.000452, respectively. The estimated maximum entropy probability distribution, the maximum entropy, and the self-information quantity are then

maximum entropy probability distribution the maximumentropy and self-information quantity are given by

119901 (119909119894)

= exp[

[

1205820+

119898

sum

119895=1

120582119895119892119895(119909119894)]

]

= exp [minus43009 + 001648119909119894+ 0000452(119909

119894minus 1136)

2]

119867(119909)max = minus1205820 minus119898

sum

119895=1

120582119895119864 (119892119895) = 21893

119868 (119909119894) = minus ln119901 (119909

119894) = 43009 minus 001648119909

119894

minus 0000452(119909119894minus 1136)

2

(24)

The corresponding probability and self-information quantity of each sample datum are shown in Table 3.


Table 2: 10 sets of original sample data obtained by oversampling.

Number        x_1   x_2   x_3   x_4   x_5   x_6   x_7   x_8   x_9   x_10
Sample data   120   121   121   123   124   125   63    79    129   131

Table 3: Probability and self-information quantity of each set of sample data.

Number   Sample data   Probability   Self-information quantity
x_1      120           0.0998        2.3048
x_2      121           0.1021        2.2821
x_3      123           0.1071        2.2339
x_4      124           0.1099        2.2085
x_5      125           0.1128        2.1822
x_6      63            0.1217        2.1054
x_7      79            0.0856        2.4579
x_8      129           0.1265        2.0678
x_9      131           0.1346        2.0052

According to the maximum entropy distribution, the uncertainty of measurement is

\[ u = \sqrt{\sum_{i=1}^{N'} (x_i - \bar{x})^2\,p(x_i)} = \sqrt{\sum_{i=1}^{9} (x_i - 113.6)^2\,p(x_i)} = 23.033, \quad (25) \]

and the confidence interval is

\[ [\bar{x} - u, \bar{x} + u] = [90.567, 136.633]. \quad (26) \]

It can be judged that, in Table 2, x_7 and x_8 are gross errors in quantization, so they are excluded from the sample sequence. The repeated sample x_3 in Table 2 is also excluded, so the remaining 7 effective sample data form a new sample sequence (x_1, x_2, ..., x_7) for data fusion, with the calculation results shown in Table 4.

According to (15)–(18), the result of data fusion is

\[ x_f = 124.893. \quad (27) \]

Replacing the sample data x_7 and x_8 in Table 2 with the data fusion result forms a new sample sequence of 10 new sample data without gross errors at the high sampling rate, x'_1, x'_2, ..., x'_10, as shown in Table 5.

Finally, averaging the 10 new sample data at the high sampling rate (1 GSa/s) yields 1 sampling point at the actual sampling rate (100 MSa/s), that is,

\[ A = \frac{1}{10} \sum_{i=1}^{10} x'_i = 124.3786. \quad (28) \]

[Figure 4: Time-domain waveform of the originally acquired signal at 100 MSa/s (digital output code versus sample index).]

When the 10 original sample data (x_1, x_2, ..., x_10) obtained through oversampling are averaged directly, the average value is

\[ A' = \frac{1}{10} \sum_{i=1}^{10} x_i = 113.6. \quad (29) \]

Alternatively, when the remaining 8 sample data, excluding the gross errors x_7 and x_8, are averaged, the average value is

\[ A'' = \frac{1}{8}\,(x_1 + x_2 + x_3 + x_4 + x_5 + x_6 + x_9 + x_{10}) = 124.25. \quad (30) \]

According to the maximum entropy theory, the result 124.3786 obtained by the algorithm proposed in this paper is a precise measurement of the unknown signal obtained from the sample data without any subjective hypotheses or constraints.

Similarly, based on information entropy theory, for each group of original sample data (x_1, x_2, ..., x_10) obtained through oversampling at 1 GSa/s, excluding gross errors, fusing data, and average-based decimating can be applied to obtain precise measurement data of a complete waveform at the actual sampling rate (100 MSa/s). A sketch reproducing this numerical example is given below.
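The sketch takes the Lagrangian coefficients directly from (24) instead of re-solving the maximum entropy problem, so small rounding differences relative to the tabulated values are expected; NumPy is assumed:

```python
import numpy as np

# Worked example of Section 3.3, assuming the Lagrangian coefficients reported in (24).
x_raw = np.array([120, 121, 121, 123, 124, 125, 63, 79, 129, 131], dtype=float)
mu, lam0, lam1, lam2 = 113.6, -4.3009, 0.01648, 0.000452

x_uni = np.unique(x_raw)                                    # repeated 121 removed
p = np.exp(lam0 + lam1 * x_uni + lam2 * (x_uni - mu) ** 2)  # (24), cf. Table 3
h_max = -np.sum(p * np.log(p))                              # ~2.1893, cf. (24)

u = np.sqrt(np.sum((x_uni - mu) ** 2 * p))                  # ~23.03, equation (25)
keep = (x_uni >= mu - u) & (x_uni <= mu + u)                # 63 and 79 fall outside (26)

info = -np.log(p[keep])                                     # self-information (16)
q = (h_max / info) / np.sum(h_max / info)                   # weights (15) and (17)
x_f = np.sum(q * x_uni[keep])                               # ~124.89, equation (27)

x_new = np.where((x_raw < mu - u) | (x_raw > mu + u), x_f, x_raw)   # Table 5
print(x_f, x_new.mean())                                    # ~124.89 and ~124.38, cf. (28)
```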

4. Experiment and Result Analysis

In order to verify the effectiveness and superiority of the vertical resolution improvement of the algorithm proposed in this paper, we use an 8-bit ADC model provided by Analog Devices to establish the acquisition system of an oscilloscope, conduct simulation experiments with the conventional averaging algorithms, the direct decimation algorithm, and the algorithm proposed in this paper, and finally estimate and compare the performance of all algorithms.


Table 4: Fusion weight coefficients of the 7 sets of effective sample data.

Number   Sample data   Self-information quantity   Ratio of information quantity   Weight coefficient of fusion
x_1      120           2.3048                      1.0528                          0.1350
x_2      121           2.2821                      1.0424                          0.1364
x_3      123           2.2339                      1.0204                          0.1393
x_4      124           2.2085                      1.0088                          0.1409
x_5      125           2.1822                      0.9968                          0.1426
x_6      129           2.0678                      0.9445                          0.1505
x_7      131           2.0051                      0.9159                          0.1552

Table 5: 10 new sample data containing the data fusion result.

Number        x'_1   x'_2   x'_3   x'_4   x'_5   x'_6   x'_7      x'_8      x'_9   x'_10
Sample data   120    121    121    123    124    125    124.893   124.893   129    131

[Figure 5: Spectrum (FFT plot) of the originally acquired signal at 100 MSa/s; SNR = 28.0776 dB.]

The oscilloscope works at a time base of 500 ns/div, and the corresponding actual sampling rate is 100 MSa/s. The frequency of the input sine wave is f_i = 1 MHz. In order to simulate quantization errors caused by noise interference and clock jitter, data-mismatched samples are randomly added to the ideal ADC sampling model, so the acquired sample sequence includes gross errors in quantization.

Experiment 1. Sampling rate f_s = 100 MSa/s. The time-domain waveform and signal spectrum obtained from the sampling results without any processing are shown in Figures 4 and 5, respectively.

[Figure 6: Signal spectrum (FFT plot) of successive capture averaging (N = 10); SNR = 43.0485 dB.]

After calculation, in Figure 5, SNR = 28.0776 dB and ENOB = 4.3717 bits.
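The ENOB figures quoted throughout this section are consistent with the standard conversion ENOB = (SNR − 1.76 dB)/6.02 dB; this relation is assumed here (the paper does not state it explicitly) and reproduces the reported values:

```python
def enob_from_snr(snr_db):
    """Standard SNR-to-ENOB conversion: ENOB = (SNR - 1.76 dB) / 6.02 dB."""
    return (snr_db - 1.76) / 6.02

print(enob_from_snr(28.0776))   # ~4.37 bits (Experiment 1)
print(enob_from_snr(59.6071))   # ~9.61 bits (Experiment 6)
```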

Experiment 2. Sampling rate f_s = 100 MSa/s. The signal spectrum obtained by applying successive capture averaging to the sampling results with N = 10 is shown in Figure 6. After calculation, in Figure 6, SNR = 43.0485 dB and ENOB = 6.8585 bits.

Experiment 3. Sampling rate f_s = 100 MSa/s. The signal spectrum obtained by applying successive sample averaging to the sampling results with N = 10 is shown in Figure 7. After calculation, in Figure 7, SNR = 44.2538 dB and ENOB = 7.0588 bits.


[Figure 7: Signal spectrum (FFT plot) of successive sample averaging (N = 10); SNR = 44.2538 dB.]

[Figure 8: Signal spectrum (FFT plot) of 10 times direct decimation; SNR = 28.8353 dB.]

Experiment 4. Oversampling is adopted with sampling rate f_s = 1 GSa/s. The signal spectrum obtained by directly applying 10 times decimation to the sampling results is shown in Figure 8. After calculation, in Figure 8, SNR = 28.8353 dB and ENOB = 4.4976 bits.

Experiment 5. Oversampling is adopted with sampling rate f_s = 1 GSa/s. The signal spectrum obtained by conducting 10 times average-based decimation on the sampling results is shown in Figure 9.

[Figure 9: Signal spectrum (FFT plot) of 10 times average-based decimation; SNR = 46.6242 dB.]

[Figure 10: Time-domain waveform obtained with data fusion and 10 times average-based decimation (digital output code versus sample index).]

After calculation, in Figure 9, SNR = 46.6242 dB and ENOB = 7.4525 bits.

Experiment 6. Oversampling is adopted with sampling rate f_s = 1 GSa/s. The time-domain waveform and signal spectrum obtained by applying the information entropy-based algorithm proposed in this paper to the sampling results, that is, excluding gross errors, fusing data, creating a new sample sequence, and then conducting 10 times average-based decimation, are shown in Figures 10 and 11, respectively. After calculation, in Figure 11, SNR = 59.6071 dB and ENOB = 9.6092 bits.


Table 6: Comparisons of experiment results.

No.   Method                                                                       SNR (dB)   ENOB (bits)   −3 dB bandwidth (MHz)
1     Nonoversampling, original signal                                             28.0776    4.3717        43
2     Nonoversampling, successive capture averaging                                43.0485    6.8585        43
3     Nonoversampling, successive sample averaging                                 44.2538    7.0588        4.3
4     Oversampling, 10 times direct decimation                                     28.8353    4.4976        43
5     Oversampling, 10 times average-based decimation                              46.6242    7.4525        43
6     Oversampling, information entropy-based data fusion and 10 times
      average-based decimation                                                     59.6071    9.6092        43

[Figure 11: Signal spectrum (FFT plot) obtained with data fusion and 10 times average-based decimation; SNR = 59.6071 dB.]

Table 6 compares the experiment results of the six methods above.

According to Table 6, at the sampling rate of 100 MSa/s, the conventional successive capture averaging and successive sample averaging algorithms increase the ENOB by about 2.49 and 2.69 bits, respectively, when processing the sinusoidal samples containing quantization errors; however, successive sample averaging also reduces the bandwidth severely. At the sampling rate of 1 GSa/s, the ENOB provided by the average-based decimation algorithm is about 2.95 bits higher than that provided by the direct decimation algorithm. On this basis, the information entropy-based data fusion and average-based decimation algorithm proposed in this paper further increases the ENOB by about 2.16 bits, reaching a total ENOB of 9.61 bits. Compared with the theoretical 8 digitalizing bits of the ADC, the actual ENOB (resolution) has increased by about 1.61 bits in total, which is very close to the theoretical improvement R_H = 0.5 log_2 10 ≈ 1.66 bits from (20), and at the same time no analog bandwidth is lost at the actual sampling rate.
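The resolution gains quoted above follow directly from the ENOB column of Table 6; a short arithmetic check (values taken from Table 6, theoretical gain from (20)):

```python
import math

# ENOB values from Table 6, keyed by experiment number.
enob = {1: 4.3717, 2: 6.8585, 3: 7.0588, 4: 4.4976, 5: 7.4525, 6: 9.6092}
print(round(enob[2] - enob[1], 2))   # 2.49 bits: successive capture averaging
print(round(enob[3] - enob[1], 2))   # 2.69 bits: successive sample averaging
print(round(enob[5] - enob[4], 2))   # 2.95 bits: average-based vs. direct decimation
print(round(enob[6] - enob[5], 2))   # 2.16 bits: entropy-based fusion on top of averaging
print(round(enob[6] - 8, 2), round(0.5 * math.log2(10), 2))  # 1.61 achieved vs. 1.66 ideal
```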

5. Conclusion

This paper proposes a decimation filtering algorithm based on information entropy and averaging to raise the vertical resolution of a DSO. Under oversampling, and for a single acquired signal, the maximum entropy of the sample data is used to eliminate gross error in quantization, the remaining effective sample data are fused, and average-based decimation is conducted to further filter the noise, thereby improving the DSO resolution. In order to verify the effectiveness and superiority of the algorithm, comparison experiments are conducted with different algorithms. The results show that the resolution improvement of the proposed algorithm is nearly identical to the theoretical deduction. Moreover, no subjective hypotheses or constraints are imposed on the detected signal during the whole processing, and the analog bandwidth of the DSO at the actual sampling rate is not affected.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (nos. 61301263 and 61301264), the Specialized Research Fund for the Doctoral Program of Higher Education of China (no. 20120185130002), and the Fundamental Research Fund for the Central University of China (A03007023801217 and A03008023801080).


References

[1] Tektronix, "Improve the vertical resolution of digital phosphor oscilloscopes," China Electronic Market, vol. 6, no. 1, pp. 76–84, 2012.
[2] R. Reeder, M. Looney, and J. Hand, "Pushing the state of the art with multichannel A/D converters," Analog Dialogue, vol. 39, no. 2, pp. 7–10, 2005.
[3] E. Seifert and A. Nauda, "Enhancing the dynamic range of analog-to-digital converters by reducing excess noise," in Proceedings of the IEEE Pacific RIM Conference on Communications, Computers and Signal Processing, pp. 574–576, Victoria, Canada, June 1989.
[4] K. C. Lauritzen, S. H. Talisa, and M. Peckerar, "Impact of decorrelation techniques on sampling noise in radio-frequency applications," IEEE Transactions on Instrumentation and Measurement, vol. 59, no. 9, pp. 2272–2279, 2010.
[5] V. Gregers-Hansen, S. M. Brockett, and P. E. Cahill, "A stacked A-to-D converter for increased radar signal processor dynamic range," in Proceedings of the IEEE International Radar Conference, pp. 169–174, May 2001.
[6] S. R. Duncan, V. Gregers-Hansen, and J. P. McConnell, "A stacked analog-to-digital converter providing 100 dB of dynamic range," in Proceedings of the IEEE International Radar Conference, pp. 31–36, 2005.
[7] Silicon Laboratories, "Improving ADC resolution by oversampling and averaging," Application Note AN118, 2013, http://www.silabs.com/Support%20Documents/TechnicalDocs/an118.pdf.
[8] Y. Lembeye, J. Pierre Keradec, and G. Cauffet, "Improvement in the linearity of fast digital oscilloscopes used in averaging mode," IEEE Transactions on Instrumentation and Measurement, vol. 43, no. 6, pp. 922–928, 1994.
[9] C. Bishop and C. Kung, "Effects of averaging to reject unwanted signals in digital sampling oscilloscopes," in Proceedings of the 45 Years of Support Innovation—Moving Forward at the Speed of Light (AUTOTESTCON '10), pp. 1–4, IEEE, Orlando, Fla, USA, September 2010.
[10] C. Fager and K. Andersson, "Improvement of oscilloscope based RF measurements by statistical averaging techniques," in Proceedings of the IEEE MTT-S International Microwave Symposium Digest, pp. 1460–1463, San Francisco, Calif, USA, June 2006.
[11] Z. H. L. Luo, Q. J. Zhang, and Q. C. H. Fange, "Method of data fusion applied in intelligent instruments," Instrument Technique and Sensor, vol. 3, no. 1, pp. 45–46, 2002.
[12] F.-N. Cai and Q.-X. Liu, "Single sensor data fusion and analysis of effectiveness," Journal of Transducer Technology, vol. 24, no. 2, pp. 73–74, 2005.
[13] C. E. Shannon, "A mathematical theory of communication," The Bell System Technical Journal, vol. 27, no. 3, pp. 379–423, 1948.
[14] M. J. Zhu and B. J. Guo, "Study on evaluation of measurement result and uncertainty based on maximum entropy method," Electrical Measurement & Instrument, vol. 42, no. 8, pp. 5–8, 2005.
[15] Y. Tan, A. Chu, M. Lu, and B. T. Cunningham, "Distributed feedback laser biosensor noise reduction," IEEE Sensors Journal, vol. 13, no. 5, pp. 1972–1978, 2013.
[16] E. T. Jaynes, "Information theory and statistical mechanics," The Physical Review, vol. 106, no. 4, pp. 620–630, 1957.
[17] T. M. Cover and J. A. Thomas, Elements of Information Theory, John Wiley & Sons, New York, NY, USA, 2nd edition, 2006.
[18] D. H. Li and Z. H. Li, "Processing of gross errors in small samples based on measurement information theory," Mechanical Engineering & Automation, vol. 6, no. 1, pp. 115–117, 2009.

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 3: Research Article Information Entropy- and Average-Based ...

Mathematical Problems in Engineering 3

AN(n minus 2) AN(n minus 1) AN(n)AN(n + 1) AN(n + 2)

x1(n minus 2) (n minus 1) (n) (n + 1) (n + 2)

divideNdivideN divideN divideN divideN

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

+ + +++

Output samples

Input samples

x1 x1 x1x1

Figure 1 Averaging of119873 successive acquisitions

and dithering error can lower the maximum resolution to acertain extent

Successive capture averaging can improve SNR eliminatethe noise unrelated to triggering and improve vertical reso-lutionMeanwhile successive capture averaging will not limitwaveform bandwidth under ideal circumstances which is anobvious advantage compared with other signal processingtechnologies However based on numerous triggering andrepetitive quantization of signal successive capture averagingis limited by the repetitiveness of the signal itself under testand consequently is only applicable to observing repetitivesignal

22 Successive Sample Averaging Successive sample aver-aging also known as boxcar filtering or moving averagefiltering is another average algorithmwidely applied inDSOrsquosacquisition system Since it is based on the single acquisitionof the signal under test successive sample averaging is notinfluenced by the repetitiveness of the signal itself In thisaveraging process each output sampling point represents theaverage value of 119873 successive output sampling points [15]which is shown as follows

119860119873 (119899 + 119894) =

1

119873

119873minus1

sum

119895=0

119909119873(119899 + 119894 + 119895) (4)

where 119894 = 0 plusmn1 plusmn2 and 119873 is the number of samplingpoints to average

Figure 2 shows the averaging principle of 3 successivesampling points

For successive sample averaging the sampling rate beforeand after averaging is equal It eliminates noise and improvesthe vertical resolution of signal by reducing the bandwidthof DSO This improvement is measured in bits which is afunction of119873 (number of sampling points to average) [1]

119877119878= 05log

2119873 (5)

where 119877119878is the improved resolution In essence successive

sample averaging is a low-pass filter function and the 3 dBbandwidth is deducted in [9]

119861119878=044119878119877

119873 (6)

where 119861119878is the bandwidth and 119878

119877is the sampling rate This

type of filter is with extremely sharp cut-off frequency and isconsistent with the signal whose period is integral multiplesof 119873119878

119877 Noise elimination is almost in direct proportion to

the square root of number of sampling points to average Forinstance an average of 25 sampling points will reduce themagnitude of high-frequency noise to 15 of its original valueFor DSO successive sample averaging is often used to achievevariable bandwidth function

It can be easily seen from (6) that even though successivesample averaging is based on single acquisition of the signalunder test it lowers the analog bandwidth under actualsampling rate while filtering noise and therefore it is withpoor practicability

3 Resolution Improving AlgorithmBased on Information Entropy- andAverage-Based Decimation

31 Data Fusion Based on Information Entropy The infor-mation entropy-based data fusion researched in this paperaims to eliminate gross error in quantization and thenobtain precise measuring results under the condition ofoversampling Firstly the acquisition system of oscilloscopeutilizes the maximum entropy method (MEM) to estimatethe probability distribution of discrete sample data acquiredunder oversampling and then calculates the measuringuncertainty of sample according to its probability distributionto determine a confidence interval Then the acquisitionsystemdiscriminates gross error based on confidence intervaland finally determines weight coefficient of fusion accordingto information entropy to achieve data fusion and obtain

4 Mathematical Problems in Engineering

A3(n minus 2) A3(n minus 1) A3(n) A3(n + 1) A3(n + 2)

x(n minus 1) x(n) x(n + 1) x(n + 2) x(n + 3)

divide3divide3 divide3 divide3 divide3

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

+ + +++

Output samples

Input samples

Figure 2 Averaging principle of 3 successive sampling points

a precise measured value of the signal under test without anysubjective assumptions and restrictions being added

311 Distribution Estimation of Maximum Entropy Entropyis an essential concept in thermodynamics For isolated sys-tems entropy is growing constantly The maximum entropycan determine the steady state of system Similar conclusionscan be discovered in information theory In 1957 Jaynesproposed the maximum entropy theory proceeding fromthe maximum information entropy that is when we arededucing an unknown distribution pattern with only partof known information we should select the probabilitydistribution with the maximum entropy and in conformitywith restriction conditions and any other selection maymean the addition of other restrictions or changes to theoriginal assumption conditions [16] In other words forcircumstances with only the sample under test but lackof sufficient reasons to select some analysis distributionfunction we can determine the form of the least tendentiousmeasurand distribution through the maximum entropy [14]

Assuming that oversampling is implemented by theacquisition system of DSO at a high sampling rate 119873 timesof actual sampling rate and 119873 discrete sample data areobtained at the high sampling rate that is 119909

1 1199092 119909119873 the

sample sequence after eliminating repetitive sample data is1199091 1199092 1199091198731015840 with corresponding probability of occurrence

of 119901(1199091) 119901(1199092) 119901(119909

1198731015840) and the probability distribution

119901(119909119894) of sample can be estimated through the maximum

discrete entropy Based on the information entropy definedby Shannon [13] the information entropy of discrete randomvariable 119909 is as follows [17]

119867(119909) = minus119896

1198731015840

sum

119894=1

119901 (119909119894) ln119901 (119909

119894) (7)

where 119901(119909119894) denotes the probability distribution to be esti-

mated of sample 119909119894and meets the restriction conditions

below

st1198731015840

sum

119894=1

119901 (119909119894) = 1

119901 (119909119894) ge 0 (119894 = 1 2 119873

1015840)

st1198731015840

sum

119894=1

119901 (119909119894) 119892119895(119909119894) = 119864 (119892

119895) (119895 = 1 2 119872)

(8)

In (8) 119892119895(119909119894) (119895 = 1 2 119872) is the statistical moment

function with order 119895 and 119864(119892119895) is the desired value of

119892119895(119909119894) Lagrange multiplier methods can be used to solve

this problem Since 119896 is a positive constant take 119896 = 1 forconvenience to constitute the Lagrangian function 119871(119909 120582) asis shown in

119871 (119909 120582) = minus

1198731015840

sum

119894=1

119901 (119909119894) ln119901 (119909

119894) + (120582

0+ 1)[

[

1198731015840

sum

119894=1

119901 (119909119894) minus 1]

]

+

119872

sum

119895=1

120582119895[

[

1198731015840

sum

119894=1

119901 (119909119894) 119892119895(119909119894) minus 119864 (119892

119895)]

]

(9)

where 120582119895(119895 = 0 1 119872) is the Lagrangian coefficient

Partial derivative should be obtained for 119901(119909119894) and 120582

119895

respectively and the equation set is

120597119871 (119909119894 120582119895)

120597119901 (119909119894)

= 0 119894 = 1 2 1198731015840

120597119871 (119909119894 120582119895)

120597120582119895

= 0 119895 = 0 1 2 119872

(10)

Mathematical Problems in Engineering 5

probability distribution function is

119901 (119909119894) = exp[

[

1205820+

119872

sum

119895=1

120582119895119892119895(119909119894)]

]

(11)

and the corresponding maximum entropy can be given by

119867(119909)max = minus1205820 minus119872

sum

119895=1

120582119895119864 (119892119895) (12)

For given sample data the expectation and variance ofsample sequence can be chosen as expectation functiontherefore the distribution estimation based on maximumentropy is to estimate its probability distribution accordingto the entropy of discrete random variable and to achievethe maximum entropy (119867(119909)max) by adjusting probabilitydistribution model (119901(119909

119894)) under the condition of ensuring

the statistical property of sample [16]

312 Gross Error Discrimination Traditional criteria ongross error discrimination (eg 3120590 criterion Grubbs cri-terion and Dixon criterion) are based on mathematicalstatistics The probability distribution of sample data needsto be known in case these algorithms are applied to dealwith sample data However probability distribution is rarelyknown in advance in actualmeasurement Statistical propertycannot be satisfied if few sets of sample data are obtainedduring measurement and therefore the precision for dealingwith gross error will be influenced [18] A new algorithm ongross error discrimination is proposed in this paper whichcalculates the measuring uncertainty of sample sequencethrough the probability distribution of themaximumdiscreteentropy and then determines confidence interval based onuncertainty to discriminate gross error

In [14] for continuous random variable 119909 if 119909 is expecta-tion and the estimated probability density function based onMEM is119891(119909) then themeasurement uncertainty is expressedby

119906 = radicint

119887

119886

(119909 minus 119909)2119891 (119909) 119889119909 (13)

The measurement uncertainty calculation method ofdiscrete random variable 119909 can be deduced thereout If 119909is expectation and the estimated probability distributionestimated by MEM is 119901(119909

119894) then the uncertainty of sample

should be calculated after eliminating repetitive sample dataand can be given by

119906 = radic

1198731015840

sum

119894=1

(119909119894minus 119909)2119901 (119909119894) (14)

The confidence interval is [119909 minus 119906 119909 + 119906] and then judgewhether gross errors are contained in the sample basedon confidence interval The data outside this confidenceinterval is considered as that with gross error and shouldbe eliminated from sample sequence with a new samplesequence being constituted to fulfill data fusion

313 Effective Data Fusion For DSO sampling aims atobtaining the information related to the signal under testAs the measurement of information quantity informationentropy is used to determine the level of uncertainty andtherefore it can be used to fuse the sample data acquiredTo reduce the uncertainty of fusion results small weightcoefficient should be distributed to the sample with largeuncertainty while large weight coefficient should be dis-tributed to the sample with small uncertainty

As mentioned above oversampling is implemented bythe acquisition system of DSO at a high sampling rate 119873times of actual sampling rate and 119873 discrete sample dataare obtained under a high sampling rate after eliminatingrepetitive sample data that is 119909

1 1199092 1199091198731015840 and11987310158401015840 samples

1199091 1199092 11990911987310158401015840 are obtained after eliminating gross errorsThe

information quantity provided by each sample 119909119894is denoted

by self-information quantity 119868(119909119894) Information entropy is the

average uncertainty of the samples and therefore the ratio ofself-information quantity to information entropy can be usedto measure the uncertainty of each sample in all the samplesThe weight coefficient of fusion is in inverse proportionto self-information quantity The detailed algorithms are asfollows

(1) Utilize MEM to estimate the maximum entropy dis-tribution and the maximum entropy of sample dataand then figure out the self-information quantity ofeach sample defined by

120596119894=

119868 (119909119894)

119867(119909)max 119894 = 1 2 119873

10158401015840 (15)

where

119868 (119909119894) = minus ln119901 (119909

119894) (16)

(2) Define the weight coefficient of fusion with normal-ization processing

119902119894=

1120596119894

sum11987310158401015840

119894=11120596119894

119894 = 1 2 11987310158401015840 (17)

(3) Fuse data

119909119891=

11987310158401015840

sum

119894=1

119902119894119909119894 (18)

where the number of data used in data fusion thatis 11987310158401015840 equals the number of samples remained aftereliminating repetitive data and gross errors

(4) Replace gross error with data fusion results 119909119891

to constitute a new sample sequence if 119896 grosserrors are eliminated from 119873 data add 119896 119909

119891in the

sample sequence and thus obtain 119873 new samples1199091015840

1 1199091015840

2 1199091015840

119873without gross error under a high sam-

pling rate

6 Mathematical Problems in Engineering

32 Average-BasedDecimation Filtering Themaximum sam-pling rate of ADC in DSO is generally much higher than theactually required sampling rate of measured signal spectraThus oversampling has an advantage of filtering digital signalto improve the effective resolution of displayed waveformsand reduce the undesired noiseTherefore under the premiseof oversampling the vertical resolution at the actual samplingrate can be increased by adopting average-based decimationfiltering algorithm To be specific the DSO can carry outoversampling at high sampling rate that is 119873 times of theactual sampling rate corresponding to the time base selectedby users and then apply the information entropy-based datafusion algorithm mentioned in the former section to 119873

sampling points at high sampling rate to exclude gross errorsfuse data of effective samples average after creating newsample sequences and finally decimate sampling points atthe actual sampling rate Average-based decimation under119873times oversampling is given by

119860119873 (119899 + 119894) =

1

119873

119873minus1

sum

119895=0

119909119873(119899 + 119894119873 + 119895) (19)

where 119894 = 0 plusmn1 plusmn2 Figure 3 shows the average-based decimation principle

when119873 = 3The resolution improved by average-based decimation

filtering is measured in bits which is the function of119873 (the number of samples to average or oversamplingfactor)

119877119867= 05log

2119873 = 05log

2(119878119872

119878119877

) (20)

In (20) 119877119867

is the improved resolution 119878119872

is the highsampling rate and 119878

119877is the actual sampling rate The minus3 dB

bandwidth after average is

119861119867= 044119878

119877 (21)

where119861119867denotes the bandwidth and 119878

119877represents the actual

sampling rate It can be seen that improved vertical resolutionand analog bandwidth vary with themaximum sampling rateand actual sampling rate of oscilloscopes

Table 1 lists the ideal values of improved resolutionsand analog bandwidth of the oscilloscope with maximumsampling rate of 1 GSas and 8-bit ADC adopting average-based decimation algorithm under oversampling

Values in Columns 3 and 4 of Table 1 are ideal and theimprovement of resolution is directly proportional to 119873that is to say when 119873 increases by 4 times the resolutioncan be improved by 1 bit In reality the maximum 119873 fallsinto the range of 10000 since it is limited by real-timeperformance and the memory capacity Moreover the fixedpoint mathematics and noise will also lower the highestresolution to some extent Therefore it would better notbe expected that resolution can be improved by over 4to 6 bits It should be also noted that the improvement ofresolution depends on dynamic signals as well For thosesignals whose conversion results always deviate between

different codes resolution can be always improved Forsteady-state signals only when the noise amplitude is morethan 1 or 2 ADC LSBs the improvement of resolution can beobvious Fortunately signals in actual world are always in thecase

Generally speaking when measured signals are charac-terized by single pass or repeat at low speeds conventionalsuccessive capture averaging cannot be adopted and thusaverage-based decimation under oversampling can be usedas an alternative To be specific average-based decimationunder oversampling is especially applicable in the followingtwo situations

Firstly if noise in signals is obviously high (what ismore it is not required to measure noise) average-baseddecimation under oversampling can be adopted to ldquoclearrdquonoise

Secondly average-based decimation can be adopted toimprove the measurement resolution when high-precisionmeasurement of waveforms is required even if the noise insignals is not loud

According to the comparison between (6) and (20) it canbe easily seen that in the conventional successive samplingpoint averaging algorithm the bandwidth is directly propor-tional to actual sampling rate and inversely proportional tothe number of sample points to average When the actualsampling rate is given the bandwidth dramatically decreasesas the number of sample points to average increases How-ever when average-based decimation under oversamplingis adopted bandwidth is only directly proportional to theactual sampling rate but has nothing to do with the numberof sample points to average (oversampling factor) Whenthe actual sampling rate is given the bandwidth is deter-mined accordingly without other additional loss BesidesNyquist frequency increases by119873 times by adopting119873 timesoversamplingTherefore another advantage of average-baseddecimation is reducing aliasing

3.3. Processing Example. A group of sample data acquired by the DSO is used as an example to illustrate the processing procedure of the algorithm proposed in this paper. The oscilloscope works at the time base of 500 ns/div, and the corresponding actual sampling rate is 100 MSa/s. The oscilloscope carries out 10-times oversampling at 1 GSa/s to obtain 10 original discrete sample data at the high sampling rate, that is, x1, x2, ..., x10, which are shown in Table 2. The samples contain obvious glitches, that is, gross errors caused by ADC quantization errors.

The expectation and variance of the 10 sample data are

$$\mu = \bar{x} = \frac{1}{10}\sum_{i=1}^{10} x_i = 113.6, \qquad \sigma^2 = \frac{1}{9}\sum_{i=1}^{10} (x_i - \bar{x})^2 = 530.4889.$$  (22)


Figure 3: Average-based decimation principle for 3-times oversampling (consecutive input samples x(n), x(n+1), x(n+2) are summed, divided by 3, and output as A3(n)).

Table 1: Ideal values of improved vertical resolution and bandwidth based on average-based decimation under oversampling.

Sampling rate   Oversampling factor   Total resolution   −3 dB bandwidth
1 GSa/s         1                     8.0 bits           440 MHz
500 MSa/s       2                     8.5 bits           220 MHz
250 MSa/s       4                     9.0 bits           110 MHz
100 MSa/s       10                    9.7 bits           44 MHz
50 MSa/s        20                    10.2 bits          22 MHz
25 MSa/s        40                    10.7 bits          11 MHz
10 MSa/s        100                   11.3 bits          4.4 MHz
1 MSa/s         1000                  13.0 bits          440 kHz
100 kSa/s       10000                 14.6 bits          44 kHz

After excluding the one repeated datum (121), the constraint conditions met by the sample probability distribution are

$$\sum_{i=1}^{N'} p(x_i) = \sum_{i=1}^{9} p(x_i) = 1,$$

$$\sum_{i=1}^{N'} x_i\, p(x_i) = \sum_{i=1}^{9} x_i\, p(x_i) = \mu = 113.6,$$

$$\sum_{i=1}^{N'} (x_i - \mu)^2 p(x_i) = \sum_{i=1}^{9} (x_i - \mu)^2 p(x_i) = \sigma^2 = 530.4889.$$  (23)
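The Lagrangian coefficients reported next can be obtained by solving these three moment constraints numerically. The sketch below is one possible way to do so; the use of scipy's fsolve is an assumption about available tooling rather than the authors' method, and convergence depends on the initial guess.

import numpy as np
from scipy.optimize import fsolve

# Deduplicated sample values (N' = 9) and the moments from (22)
x = np.array([120, 121, 123, 124, 125, 63, 79, 129, 131], dtype=float)
mu, var = 113.6, 530.4889

def residuals(lam):
    """Moment constraints (23) for p(x_i) = exp(lam0 + lam1*x_i + lam2*(x_i - mu)^2)."""
    lam0, lam1, lam2 = lam
    p = np.exp(lam0 + lam1 * x + lam2 * (x - mu) ** 2)
    return [p.sum() - 1.0,
            (x * p).sum() - mu,
            ((x - mu) ** 2 * p).sum() - var]

lam = fsolve(residuals, x0=[-np.log(x.size), 0.0, 0.0])
print(lam)   # expected to be close to the reported (-4.3009, 0.01648, 0.000452)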

According to the MEM and the Lagrangian function, the calculated Lagrangian coefficients are λ0 = −4.3009, λ1 = 0.01648, and λ2 = 0.000452, respectively. The estimated maximum entropy probability distribution, the maximum entropy, and the self-information quantity are given by

$$p(x_i) = \exp\Big[\lambda_0 + \sum_{j=1}^{M}\lambda_j g_j(x_i)\Big] = \exp\big[-4.3009 + 0.01648\,x_i + 0.000452\,(x_i - 113.6)^2\big],$$

$$H(x)_{\max} = -\lambda_0 - \sum_{j=1}^{M}\lambda_j E(g_j) = 2.1893,$$

$$I(x_i) = -\ln p(x_i) = 4.3009 - 0.01648\,x_i - 0.000452\,(x_i - 113.6)^2.$$  (24)
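Evaluating these expressions for the nine deduplicated sample values reproduces the probabilities and self-information quantities listed in Table 3. The brief check below is our own illustration with the reported coefficients hard-coded.

import numpy as np

x = np.array([120, 121, 123, 124, 125, 63, 79, 129, 131], dtype=float)
lam0, lam1, lam2, mu, var = -4.3009, 0.01648, 0.000452, 113.6, 530.4889

p = np.exp(lam0 + lam1 * x + lam2 * (x - mu) ** 2)   # maximum-entropy probabilities
info = -np.log(p)                                    # self-information I(x_i), per (24)
h_max = -lam0 - lam1 * mu - lam2 * var               # H(x)_max per (24); about 2.189

for xi, pi, ii in zip(x, p, info):
    print(f"x = {xi:3.0f}   p = {pi:.4f}   I = {ii:.4f}")   # matches Table 3
print(f"H_max = {h_max:.4f}")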

The corresponding probability and self-information quantity of each sample datum are shown in Table 3.


Table 2: The 10 original sample data obtained by oversampling.

Number        x1    x2    x3    x4    x5    x6    x7    x8    x9    x10
Sample data   120   121   121   123   124   125   63    79    129   131

Table 3: Probability and self-information quantity of each sample datum.

Number   Sample data   Probability   Self-information quantity
x1       120           0.0998        2.3048
x2       121           0.1021        2.2821
x3       123           0.1071        2.2339
x4       124           0.1099        2.2085
x5       125           0.1128        2.1822
x6       63            0.1217        2.1054
x7       79            0.0856        2.4579
x8       129           0.1265        2.0678
x9       131           0.1346        2.0052

(The numbering in this table refers to the nine deduplicated samples, not to the numbering of Table 2.)

According to the maximum entropy distribution, the uncertainty of measurement is

$$u = \sqrt{\sum_{i=1}^{N'} (x_i - \bar{x})^2 p(x_i)} = \sqrt{\sum_{i=1}^{9} (x_i - 113.6)^2 p(x_i)} = 23.033,$$  (25)

and the confidence interval is therefore

$$[\bar{x} - u,\; \bar{x} + u] = [90.567,\; 136.633].$$  (26)
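A small sketch of this gross-error check, using the Table 3 probabilities, follows; it is our own illustration of (25) and (26), not part of the original implementation.

import numpy as np

x = np.array([120, 121, 123, 124, 125, 63, 79, 129, 131], dtype=float)
p = np.array([0.0998, 0.1021, 0.1071, 0.1099, 0.1128,
              0.1217, 0.0856, 0.1265, 0.1346])      # probabilities from Table 3
mean = 113.6

u = np.sqrt(((x - mean) ** 2 * p).sum())            # uncertainty per (25), about 23.03
lo, hi = mean - u, mean + u                         # confidence interval per (26)
gross = x[(x < lo) | (x > hi)]                      # samples flagged as gross errors
print(f"u = {u:.3f}, interval = [{lo:.3f}, {hi:.3f}], gross errors: {gross}")  # 63 and 79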

It can thus be judged that, in Table 2, x7 and x8 are gross errors in quantization, so they should be excluded from the sample sequence. At the same time, the repeated sample x3 in Table 2 should also be excluded, so the remaining 7 effective sample data form a new sample sequence (i.e., x1, x2, ..., x7) for data fusion; the calculation results are shown in Table 4.

According to (15)–(18), the result of data fusion is

$$x_f = 124.893.$$  (27)

Replacing the sample data x7 and x8 in Table 2 with the data fusion result forms a new sample sequence containing 10 new sample data without gross errors at the high sampling rate, that is, x'1, x'2, ..., x'10, as shown in Table 5.

Finally, the 10 new sample data at the high sampling rate (1 GSa/s) are averaged to obtain 1 sampling point at the actual sampling rate (100 MSa/s), that is,

$$A = \frac{1}{10}\sum_{i=1}^{10} x'_i = 124.3786.$$  (28)

Figure 4: Time-domain waveform of the originally acquired signal at 100 MSa/s (digital output code versus sample index).

If the 10 original sample data (x1, x2, ..., x10) obtained through oversampling are averaged directly, the average value is

$$A' = \frac{1}{10}\sum_{i=1}^{10} x_i = 113.6.$$  (29)

Alternatively, if the remaining 8 sample data are averaged after excluding the gross errors x7 and x8 from the 10 sample data (x1, x2, ..., x10) obtained through oversampling, the average value is

$$A'' = \frac{1}{8}(x_1 + x_2 + x_3 + x_4 + x_5 + x_6 + x_9 + x_{10}) = 124.25.$$  (30)

According to the maximum entropy theory, the result 124.3786 obtained by the algorithm proposed in this paper is the precise measurement of the unknown signal derived from the sample data without any subjective hypotheses or constraints.

Similarly, based on information entropy theory, for each group of original sample data (x1, x2, ..., x10) obtained through oversampling (1 GSa/s), gross-error exclusion, data fusion, and average-based decimation can be applied to obtain precise measurement data of a complete waveform at the actual sampling rate (100 MSa/s). A compact sketch of this per-block processing is given below.
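The following Python sketch strings the steps of this example together per (15)–(18) and (28), starting from the self-information quantities of the seven effective samples (Table 4). It is an illustration with hard-coded values from the example, not the authors' implementation.

import numpy as np

# Seven effective samples after removing the repeated value and the gross errors
x_eff = np.array([120, 121, 123, 124, 125, 129, 131], dtype=float)
info  = np.array([2.3048, 2.2821, 2.2339, 2.2085, 2.1822, 2.0678, 2.0051])
h_max = 2.1893

omega = info / h_max                           # ratio of information quantity, (15)
q = (1.0 / omega) / (1.0 / omega).sum()        # normalized fusion weights, (17)
x_f = (q * x_eff).sum()                        # fused value, (18); about 124.893

# Replace the two gross errors in the original 10-sample block and average, (28)
block = np.array([120, 121, 121, 123, 124, 125, x_f, x_f, 129, 131])
print(f"x_f = {x_f:.3f}, A = {block.mean():.4f}")   # about 124.893 and 124.3786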

4. Experiment and Result Analysis

In order to verify the effectiveness and superiority of the vertical resolution improvement achieved by the algorithm proposed in this paper, we utilize an 8-bit ADC model provided by Analog Devices to establish the acquisition system of an oscilloscope, conduct simulation experiments with the conventional averaging algorithms, the direct decimation algorithm, and the proposed algorithm, and finally evaluate and compare the performance of all the algorithms.


Table 4: Fusion weight coefficients of the 7 effective sample data.

Number   Sample data   Self-information quantity   Ratio of information quantity   Weight coefficient of fusion
x1       120           2.3048                      1.0528                          0.1350
x2       121           2.2821                      1.0424                          0.1364
x3       123           2.2339                      1.0204                          0.1393
x4       124           2.2085                      1.0088                          0.1409
x5       125           2.1822                      0.9968                          0.1426
x6       129           2.0678                      0.9445                          0.1505
x7       131           2.0051                      0.9159                          0.1552

Table 5: The 10 new sample data containing the data fusion result.

Number        x'1   x'2   x'3   x'4   x'5   x'6   x'7       x'8       x'9   x'10
Sample data   120   121   121   123   124   125   124.893   124.893   129   131

Figure 5: Spectrum (FFT plot) of the originally acquired signal at 100 MSa/s; SNR = 28.0776 dB.

The oscilloscope works at the time base of 500 ns/div, and the corresponding actual sampling rate is 100 MSa/s. The frequency of the input sine wave is f_i = 1 MHz. In order to simulate quantization errors caused by noise interference and clock-jitter-induced data mismatch, mismatched samples are randomly added to the ideal ADC sampling model, so the acquired sample sequence includes gross errors in quantization.

Experiment 1. Sampling rate f_s = 100 MSa/s. The time-domain waveform and signal spectrum obtained from the sampling results without any processing are shown in Figures 4 and 5, respectively.

Figure 6: Signal spectrum (FFT plot) of successive capture averaging (N = 10); SNR = 43.0485 dB.

After calculation, Figure 5 yields SNR = 28.0776 dB and ENOB = 4.3717 bits.

Experiment 2. Sampling rate f_s = 100 MSa/s. The signal spectrum obtained by applying successive capture averaging (N = 10) to the sampling results is shown in Figure 6.

After calculation, Figure 6 yields SNR = 43.0485 dB and ENOB = 6.8585 bits.

Experiment 3. Sampling rate f_s = 100 MSa/s. The signal spectrum obtained by applying successive sample averaging (N = 10) to the sampling results is shown in Figure 7.

After calculation, Figure 7 yields SNR = 44.2538 dB and ENOB = 7.0588 bits.


Figure 7: Signal spectrum (FFT plot) of successive sample averaging (N = 10); SNR = 44.2538 dB.

Figure 8: Signal spectrum (FFT plot) of 10-times direct decimation; SNR = 28.8353 dB.

Experiment 4. Oversampling is adopted with the sampling rate f_s = 1 GSa/s. The signal spectrum obtained by directly applying 10-times decimation to the sampling results is shown in Figure 8.

After calculation, Figure 8 yields SNR = 28.8353 dB and ENOB = 4.4976 bits.

Experiment 5. Oversampling is adopted with the sampling rate f_s = 1 GSa/s. The signal spectrum obtained by conducting 10-times average-based decimation on the sampling results is shown in Figure 9.

Figure 9: Signal spectrum (FFT plot) of 10-times average-based decimation; SNR = 46.6242 dB.

Figure 10: Time-domain waveform obtained with data fusion and 10-times average-based decimation (digital output code versus sample index).

After calculation, Figure 9 yields SNR = 46.6242 dB and ENOB = 7.4525 bits.

Experiment 6. Oversampling is adopted with the sampling rate f_s = 1 GSa/s. The time-domain waveform and signal spectrum obtained by applying the information entropy-based algorithm proposed in this paper to the sampling results, that is, excluding gross errors, fusing data, creating new sample sequences, and then conducting 10-times average-based decimation, are shown in Figures 10 and 11, respectively.

After calculation, Figure 11 yields SNR = 59.6071 dB and ENOB = 9.6092 bits.


Table 6: Comparison of experiment results.

Experiment   Method                                                                                        SNR (dB)   ENOB (bits)   −3 dB bandwidth (MHz)
1            Nonoversampling, original signal                                                              28.0776    4.3717        43
2            Nonoversampling, successive capture averaging                                                 43.0485    6.8585        43
3            Nonoversampling, successive sample averaging                                                  44.2538    7.0588        4.3
4            Oversampling, 10-times direct decimation                                                      28.8353    4.4976        43
5            Oversampling, 10-times average-based decimation                                               46.6242    7.4525        43
6            Oversampling, information entropy-based data fusion and 10-times average-based decimation     59.6071    9.6092        43

Figure 11: Signal spectrum (FFT plot) obtained with data fusion and 10-times average-based decimation; SNR = 59.6071 dB.

Table 6 compares the experimental results of the six methods described above.

According to Table 6, at the sampling rate of 100 MSa/s, the conventional successive capture averaging algorithm and the successive sample averaging algorithm increase the ENOB by about 2.49 and 2.69 bits, respectively, when processing sinusoidal samples containing quantization errors. However, the successive sample averaging algorithm also reduces the bandwidth severely. At the sampling rate of 1 GSa/s, the ENOB provided by the average-based decimation algorithm is about 2.95 bits higher than that provided by the direct decimation algorithm. On this basis, the information entropy-based data fusion and average-based decimation algorithm proposed in this paper further increases the ENOB by about 2.16 bits, achieving a total ENOB of 9.61 bits. Compared with the nominal 8 digitalizing bits of the ADC, the actual ENOB (resolution) has increased by about 1.61 bits in total, which is very close to the theoretical improvement R_H = 0.5 log2 10 ≈ 1.66 bits from (20); at the same time, no loss of analog bandwidth at the actual sampling rate is incurred.
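The reported ENOB figures are consistent with the standard sinusoidal-input relation ENOB = (SNR − 1.76 dB)/6.02, a textbook relation that is not stated explicitly in the paper; a quick check in Python:

def enob_from_snr(snr_db):
    """Standard sinusoidal-input relation: ENOB = (SNR - 1.76) / 6.02."""
    return (snr_db - 1.76) / 6.02

# SNR values reported for Experiments 1-6 reproduce the reported ENOB values
for snr in (28.0776, 43.0485, 44.2538, 28.8353, 46.6242, 59.6071):
    print(f"SNR = {snr:7.4f} dB -> ENOB = {enob_from_snr(snr):.4f} bits")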

5. Conclusion

This paper proposes a decimation filtering algorithm based on information entropy and averaging to raise the vertical resolution of a DSO. Under oversampling, and for a single acquired signal, the maximum entropy of the sample data is used to eliminate gross errors in quantization, the remaining effective sample data are fused, and average-based decimation is conducted to further filter the noise, thereby improving the DSO resolution. In order to verify the effectiveness and superiority of the algorithm, comparison experiments are conducted using different algorithms. The results show that the resolution improvement of the proposed algorithm is nearly identical to the theoretical deduction. Moreover, no subjective hypotheses or constraints on the detected signals are introduced during the whole process, and no impact on the analog bandwidth of the DSO at the actual sampling rate is incurred.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (nos. 61301263 and 61301264), the Specialized Research Fund for the Doctoral Program of Higher Education of China (no. 20120185130002), and the Fundamental Research Fund for the Central Universities of China (A03007023801217 and A03008023801080).


References

[1] Tektronix, "Improve the vertical resolution of digital phosphor oscilloscopes," China Electronic Market, vol. 6, no. 1, pp. 76-84, 2012.
[2] R. Reeder, M. Looney, and J. Hand, "Pushing the state of the art with multichannel A/D converters," Analog Dialogue, vol. 39, no. 2, pp. 7-10, 2005.
[3] E. Seifert and A. Nauda, "Enhancing the dynamic range of analog-to-digital converters by reducing excess noise," in Proceedings of the IEEE Pacific RIM Conference on Communications, Computers and Signal Processing, pp. 574-576, Victoria, Canada, June 1989.
[4] K. C. Lauritzen, S. H. Talisa, and M. Peckerar, "Impact of decorrelation techniques on sampling noise in radio-frequency applications," IEEE Transactions on Instrumentation and Measurement, vol. 59, no. 9, pp. 2272-2279, 2010.
[5] V. Gregers-Hansen, S. M. Brockett, and P. E. Cahill, "A stacked A-to-D converter for increased radar signal processor dynamic range," in Proceedings of the IEEE International Radar Conference, pp. 169-174, May 2001.
[6] S. R. Duncan, V. Gregers-Hansen, and J. P. McConnell, "A stacked analog-to-digital converter providing 100 dB of dynamic range," in Proceedings of the IEEE International Radar Conference, pp. 31-36, 2005.
[7] Silicon Laboratories, "Improving ADC resolution by oversampling and averaging," Application Note AN118, 2013, http://www.silabs.com/Support%20Documents/TechnicalDocs/an118.pdf.
[8] Y. Lembeye, J. Pierre Keradec, and G. Cauffet, "Improvement in the linearity of fast digital oscilloscopes used in averaging mode," IEEE Transactions on Instrumentation and Measurement, vol. 43, no. 6, pp. 922-928, 1994.
[9] C. Bishop and C. Kung, "Effects of averaging to reject unwanted signals in digital sampling oscilloscopes," in Proceedings of the IEEE AUTOTESTCON '10 (45 Years of Support Innovation - Moving Forward at the Speed of Light), pp. 1-4, Orlando, Fla, USA, September 2010.
[10] C. Fager and K. Andersson, "Improvement of oscilloscope based RF measurements by statistical averaging techniques," in Proceedings of the IEEE MTT-S International Microwave Symposium Digest, pp. 1460-1463, San Francisco, Calif, USA, June 2006.
[11] Z. H. L. Luo, Q. J. Zhang, and Q. C. H. Fange, "Method of data fusion applied in intelligent instruments," Instrument Technique and Sensor, vol. 3, no. 1, pp. 45-46, 2002.
[12] F.-N. Cai and Q.-X. Liu, "Single sensor data fusion and analysis of effectiveness," Journal of Transducer Technology, vol. 24, no. 2, pp. 73-74, 2005.
[13] C. E. Shannon, "A mathematical theory of communication," The Bell System Technical Journal, vol. 27, no. 3, pp. 379-423, 1948.
[14] M. J. Zhu and B. J. Guo, "Study on evaluation of measurement result and uncertainty based on maximum entropy method," Electrical Measurement & Instrument, vol. 42, no. 8, pp. 5-8, 2005.
[15] Y. Tan, A. Chu, M. Lu, and B. T. Cunningham, "Distributed feedback laser biosensor noise reduction," IEEE Sensors Journal, vol. 13, no. 5, pp. 1972-1978, 2013.
[16] E. T. Jaynes, "Information theory and statistical mechanics," The Physical Review, vol. 106, no. 4, pp. 620-630, 1957.
[17] T. M. Cover and J. A. Thomas, Elements of Information Theory, John Wiley & Sons, New York, NY, USA, 2nd edition, 2006.
[18] D. H. Li and Z. H. Li, "Processing of gross error in small samples based on measurement information theory," Mechanical Engineering & Automation, vol. 6, no. 1, pp. 115-117, 2009.

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 4: Research Article Information Entropy- and Average-Based ...

4 Mathematical Problems in Engineering

A3(n minus 2) A3(n minus 1) A3(n) A3(n + 1) A3(n + 2)

x(n minus 1) x(n) x(n + 1) x(n + 2) x(n + 3)

divide3divide3 divide3 divide3 divide3

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

+ + +++

Output samples

Input samples

Figure 2 Averaging principle of 3 successive sampling points

a precise measured value of the signal under test without anysubjective assumptions and restrictions being added

311 Distribution Estimation of Maximum Entropy Entropyis an essential concept in thermodynamics For isolated sys-tems entropy is growing constantly The maximum entropycan determine the steady state of system Similar conclusionscan be discovered in information theory In 1957 Jaynesproposed the maximum entropy theory proceeding fromthe maximum information entropy that is when we arededucing an unknown distribution pattern with only partof known information we should select the probabilitydistribution with the maximum entropy and in conformitywith restriction conditions and any other selection maymean the addition of other restrictions or changes to theoriginal assumption conditions [16] In other words forcircumstances with only the sample under test but lackof sufficient reasons to select some analysis distributionfunction we can determine the form of the least tendentiousmeasurand distribution through the maximum entropy [14]

Assuming that oversampling is implemented by theacquisition system of DSO at a high sampling rate 119873 timesof actual sampling rate and 119873 discrete sample data areobtained at the high sampling rate that is 119909

1 1199092 119909119873 the

sample sequence after eliminating repetitive sample data is1199091 1199092 1199091198731015840 with corresponding probability of occurrence

of 119901(1199091) 119901(1199092) 119901(119909

1198731015840) and the probability distribution

119901(119909119894) of sample can be estimated through the maximum

discrete entropy Based on the information entropy definedby Shannon [13] the information entropy of discrete randomvariable 119909 is as follows [17]

119867(119909) = minus119896

1198731015840

sum

119894=1

119901 (119909119894) ln119901 (119909

119894) (7)

where 119901(119909119894) denotes the probability distribution to be esti-

mated of sample 119909119894and meets the restriction conditions

below

st1198731015840

sum

119894=1

119901 (119909119894) = 1

119901 (119909119894) ge 0 (119894 = 1 2 119873

1015840)

st1198731015840

sum

119894=1

119901 (119909119894) 119892119895(119909119894) = 119864 (119892

119895) (119895 = 1 2 119872)

(8)

In (8) 119892119895(119909119894) (119895 = 1 2 119872) is the statistical moment

function with order 119895 and 119864(119892119895) is the desired value of

119892119895(119909119894) Lagrange multiplier methods can be used to solve

this problem Since 119896 is a positive constant take 119896 = 1 forconvenience to constitute the Lagrangian function 119871(119909 120582) asis shown in

119871 (119909 120582) = minus

1198731015840

sum

119894=1

119901 (119909119894) ln119901 (119909

119894) + (120582

0+ 1)[

[

1198731015840

sum

119894=1

119901 (119909119894) minus 1]

]

+

119872

sum

119895=1

120582119895[

[

1198731015840

sum

119894=1

119901 (119909119894) 119892119895(119909119894) minus 119864 (119892

119895)]

]

(9)

where 120582119895(119895 = 0 1 119872) is the Lagrangian coefficient

Partial derivative should be obtained for 119901(119909119894) and 120582

119895

respectively and the equation set is

120597119871 (119909119894 120582119895)

120597119901 (119909119894)

= 0 119894 = 1 2 1198731015840

120597119871 (119909119894 120582119895)

120597120582119895

= 0 119895 = 0 1 2 119872

(10)

Mathematical Problems in Engineering 5

probability distribution function is

119901 (119909119894) = exp[

[

1205820+

119872

sum

119895=1

120582119895119892119895(119909119894)]

]

(11)

and the corresponding maximum entropy can be given by

119867(119909)max = minus1205820 minus119872

sum

119895=1

120582119895119864 (119892119895) (12)

For given sample data the expectation and variance ofsample sequence can be chosen as expectation functiontherefore the distribution estimation based on maximumentropy is to estimate its probability distribution accordingto the entropy of discrete random variable and to achievethe maximum entropy (119867(119909)max) by adjusting probabilitydistribution model (119901(119909

119894)) under the condition of ensuring

the statistical property of sample [16]

312 Gross Error Discrimination Traditional criteria ongross error discrimination (eg 3120590 criterion Grubbs cri-terion and Dixon criterion) are based on mathematicalstatistics The probability distribution of sample data needsto be known in case these algorithms are applied to dealwith sample data However probability distribution is rarelyknown in advance in actualmeasurement Statistical propertycannot be satisfied if few sets of sample data are obtainedduring measurement and therefore the precision for dealingwith gross error will be influenced [18] A new algorithm ongross error discrimination is proposed in this paper whichcalculates the measuring uncertainty of sample sequencethrough the probability distribution of themaximumdiscreteentropy and then determines confidence interval based onuncertainty to discriminate gross error

In [14] for continuous random variable 119909 if 119909 is expecta-tion and the estimated probability density function based onMEM is119891(119909) then themeasurement uncertainty is expressedby

119906 = radicint

119887

119886

(119909 minus 119909)2119891 (119909) 119889119909 (13)

The measurement uncertainty calculation method ofdiscrete random variable 119909 can be deduced thereout If 119909is expectation and the estimated probability distributionestimated by MEM is 119901(119909

119894) then the uncertainty of sample

should be calculated after eliminating repetitive sample dataand can be given by

119906 = radic

1198731015840

sum

119894=1

(119909119894minus 119909)2119901 (119909119894) (14)

The confidence interval is [119909 minus 119906 119909 + 119906] and then judgewhether gross errors are contained in the sample basedon confidence interval The data outside this confidenceinterval is considered as that with gross error and shouldbe eliminated from sample sequence with a new samplesequence being constituted to fulfill data fusion

313 Effective Data Fusion For DSO sampling aims atobtaining the information related to the signal under testAs the measurement of information quantity informationentropy is used to determine the level of uncertainty andtherefore it can be used to fuse the sample data acquiredTo reduce the uncertainty of fusion results small weightcoefficient should be distributed to the sample with largeuncertainty while large weight coefficient should be dis-tributed to the sample with small uncertainty

As mentioned above oversampling is implemented bythe acquisition system of DSO at a high sampling rate 119873times of actual sampling rate and 119873 discrete sample dataare obtained under a high sampling rate after eliminatingrepetitive sample data that is 119909

1 1199092 1199091198731015840 and11987310158401015840 samples

1199091 1199092 11990911987310158401015840 are obtained after eliminating gross errorsThe

information quantity provided by each sample 119909119894is denoted

by self-information quantity 119868(119909119894) Information entropy is the

average uncertainty of the samples and therefore the ratio ofself-information quantity to information entropy can be usedto measure the uncertainty of each sample in all the samplesThe weight coefficient of fusion is in inverse proportionto self-information quantity The detailed algorithms are asfollows

(1) Utilize MEM to estimate the maximum entropy dis-tribution and the maximum entropy of sample dataand then figure out the self-information quantity ofeach sample defined by

120596119894=

119868 (119909119894)

119867(119909)max 119894 = 1 2 119873

10158401015840 (15)

where

119868 (119909119894) = minus ln119901 (119909

119894) (16)

(2) Define the weight coefficient of fusion with normal-ization processing

119902119894=

1120596119894

sum11987310158401015840

119894=11120596119894

119894 = 1 2 11987310158401015840 (17)

(3) Fuse data

119909119891=

11987310158401015840

sum

119894=1

119902119894119909119894 (18)

where the number of data used in data fusion thatis 11987310158401015840 equals the number of samples remained aftereliminating repetitive data and gross errors

(4) Replace gross error with data fusion results 119909119891

to constitute a new sample sequence if 119896 grosserrors are eliminated from 119873 data add 119896 119909

119891in the

sample sequence and thus obtain 119873 new samples1199091015840

1 1199091015840

2 1199091015840

119873without gross error under a high sam-

pling rate

6 Mathematical Problems in Engineering

32 Average-BasedDecimation Filtering Themaximum sam-pling rate of ADC in DSO is generally much higher than theactually required sampling rate of measured signal spectraThus oversampling has an advantage of filtering digital signalto improve the effective resolution of displayed waveformsand reduce the undesired noiseTherefore under the premiseof oversampling the vertical resolution at the actual samplingrate can be increased by adopting average-based decimationfiltering algorithm To be specific the DSO can carry outoversampling at high sampling rate that is 119873 times of theactual sampling rate corresponding to the time base selectedby users and then apply the information entropy-based datafusion algorithm mentioned in the former section to 119873

sampling points at high sampling rate to exclude gross errorsfuse data of effective samples average after creating newsample sequences and finally decimate sampling points atthe actual sampling rate Average-based decimation under119873times oversampling is given by

119860119873 (119899 + 119894) =

1

119873

119873minus1

sum

119895=0

119909119873(119899 + 119894119873 + 119895) (19)

where 119894 = 0 plusmn1 plusmn2 Figure 3 shows the average-based decimation principle

when119873 = 3The resolution improved by average-based decimation

filtering is measured in bits which is the function of119873 (the number of samples to average or oversamplingfactor)

119877119867= 05log

2119873 = 05log

2(119878119872

119878119877

) (20)

In (20) 119877119867

is the improved resolution 119878119872

is the highsampling rate and 119878

119877is the actual sampling rate The minus3 dB

bandwidth after average is

119861119867= 044119878

119877 (21)

where119861119867denotes the bandwidth and 119878

119877represents the actual

sampling rate It can be seen that improved vertical resolutionand analog bandwidth vary with themaximum sampling rateand actual sampling rate of oscilloscopes

Table 1 lists the ideal values of improved resolutionsand analog bandwidth of the oscilloscope with maximumsampling rate of 1 GSas and 8-bit ADC adopting average-based decimation algorithm under oversampling

Values in Columns 3 and 4 of Table 1 are ideal and theimprovement of resolution is directly proportional to 119873that is to say when 119873 increases by 4 times the resolutioncan be improved by 1 bit In reality the maximum 119873 fallsinto the range of 10000 since it is limited by real-timeperformance and the memory capacity Moreover the fixedpoint mathematics and noise will also lower the highestresolution to some extent Therefore it would better notbe expected that resolution can be improved by over 4to 6 bits It should be also noted that the improvement ofresolution depends on dynamic signals as well For thosesignals whose conversion results always deviate between

different codes resolution can be always improved Forsteady-state signals only when the noise amplitude is morethan 1 or 2 ADC LSBs the improvement of resolution can beobvious Fortunately signals in actual world are always in thecase

Generally speaking when measured signals are charac-terized by single pass or repeat at low speeds conventionalsuccessive capture averaging cannot be adopted and thusaverage-based decimation under oversampling can be usedas an alternative To be specific average-based decimationunder oversampling is especially applicable in the followingtwo situations

Firstly if noise in signals is obviously high (what ismore it is not required to measure noise) average-baseddecimation under oversampling can be adopted to ldquoclearrdquonoise

Secondly average-based decimation can be adopted toimprove the measurement resolution when high-precisionmeasurement of waveforms is required even if the noise insignals is not loud

According to the comparison between (6) and (20) it canbe easily seen that in the conventional successive samplingpoint averaging algorithm the bandwidth is directly propor-tional to actual sampling rate and inversely proportional tothe number of sample points to average When the actualsampling rate is given the bandwidth dramatically decreasesas the number of sample points to average increases How-ever when average-based decimation under oversamplingis adopted bandwidth is only directly proportional to theactual sampling rate but has nothing to do with the numberof sample points to average (oversampling factor) Whenthe actual sampling rate is given the bandwidth is deter-mined accordingly without other additional loss BesidesNyquist frequency increases by119873 times by adopting119873 timesoversamplingTherefore another advantage of average-baseddecimation is reducing aliasing

33 Processing Example A group of sets of sample dataacquired by DSO is used as an example to illustrate theprocessing procedure of the algorithm proposed in the paperThe oscilloscope works at the time base of 500 nsdiv andthe corresponding actual sampling rate is 100MSas Theoscilloscope carries out 10-time oversampling at 1 GSas toobtain 10 original discrete sets of sample data at high samplingrate that is 119909

1 1199092 11990910 which are shown in Table 2 In the

samples there are obvious glitches or gross errors caused byADC quantization errors

The expectation and variance of the 10 sample dataare

120583 = 119909 =1

10

10

sum

119894=1

119909119894= 1136

1205902=1

9

10

sum

119894=1

(119909119894minus 119909)2= 5304889

(22)

Mathematical Problems in Engineering 7

x(n minus 1)x(n minus 2)x(n minus 3)

A3(n minus 1) A3(n) A3(n + 1)

x(n) x(n + 1) x(n + 2) x(n + 3) x(n + 4) x(n + 5)

divide3divide3 divide3

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

+ + +

Output samples

Input samples

Figure 3 Average-based decimation principle of 3 times oversampling

Table 1 Ideal values of improved vertical resolution and bandwidth based on average-based decimation under oversampling

Sampling rate Oversampling factor Total resolution minus3 dB bandwidth1 GSas 1 80 bits 440MHz500MSas 2 85 bits 220MHz250MSas 4 90 bits 110MHz100MSas 10 97 bits 44MHz50MSas 20 102 bits 22MHz25MSas 40 107 bits 11MHz10MSas 100 113 bits 44MHz1MSas 1000 130 bits 440 kHz100 kSas 10000 146 bits 44 kHz

Exclude one repeated data 121 and then constraint con-ditions met by the sample probability distribution are

1198731015840

sum

119894=1

119901 (119909119894) =

9

sum

119894=1

119901 (119909119894) = 1

1198731015840

sum

119894=1

119909119894119901 (119909119894) =

9

sum

119894=1

119909119894119901 (119909119894) = 120583 = 1136

1198731015840

sum

119894=1

(119909119894minus 120583)2119901 (119909119894) =

9

sum

119894=1

(119909119894minus 120583)2119901 (119909119894) = 1205902= 5304889

(23)

According to MEM and Lagrangian function the calcu-lated Lagrangian coefficients are 120582

0= minus43009 120582

1= 001648

and 1205822= 0000452 respectivelyThe expressions of estimated

maximum entropy probability distribution the maximumentropy and self-information quantity are given by

119901 (119909119894)

= exp[

[

1205820+

119898

sum

119895=1

120582119895119892119895(119909119894)]

]

= exp [minus43009 + 001648119909119894+ 0000452(119909

119894minus 1136)

2]

119867(119909)max = minus1205820 minus119898

sum

119895=1

120582119895119864 (119892119895) = 21893

119868 (119909119894) = minus ln119901 (119909

119894) = 43009 minus 001648119909

119894

minus 0000452(119909119894minus 1136)

2

(24)

Corresponding probability and self-information quantityof each sample data are shown in Table 3

8 Mathematical Problems in Engineering

Table 2 10 sets of original sample data obtained by oversampling

Number 1199091

1199092

1199093

1199094

1199095

1199096

1199097

1199098

1199099

11990910

Sample data 120 121 121 123 124 125 63 79 129 131

Table 3 Probability and self-information quantity of each set ofsample data

Number Sampledata Probability Self-information

quantity1199091 120 00998 230481199092 121 01021 228211199093 123 01071 223391199094 124 01099 220851199095 125 01128 218221199096 63 01217 210541199097 79 00856 245791199098 129 01265 206781199099 131 01346 20052

According to the distribution of the maximum entropythe uncertainty of measurement is

119906 = radic

1198731015840

sum

119894=1

(119909119894minus 119909)2119901 (119909119894)

= radic

9

sum

119894=1

(119909119894minus 1136)

2119901 (119909119894) = 23033

(25)

and then confidence interval is

[119909 minus 119906 119909 + 119906] = [90567 136633] (26)

It can be judged that in Table 2 1199097and 119909

8are gross errors

in quantization so they should be excluded from the samplesequence At the same time the repeated sample 119909

3in Table 2

should also be excluded so the remaining 7 sets of effectivesample data form a new sample sequence (ie 119909

1 1199092 1199097)

for data fusion and the calculation results are shown inTable 4

According to (15)ndash(18) the result of data fusion is

119909119891= 124893 (27)

Replace the sample data 1199097and 119909

8in Table 2 with data

fusion result to form a new sample sequence containing 10sets of new sample data without gross errors at high samplingrate that is 1199091015840

1 1199091015840

2 1199091015840

10 as shown in Table 5

Finally average 10 sets of new sample data at highsampling rate (1 GSas) to achieve 1 sampling point at actualsampling rate (100MSas) that is

119860 =1

10

10

sum

119894=1

1199091015840

119894= 1243786 (28)

0 100 200 300 400 500 600 700 800 900 10000

31

63

95

127

159

191

223

255Time domain (original signal)

SamplesD

igita

l out

put c

ode

Figure 4 Time-domain waveform of originally acquired signals at100MSas

When averaging the 10 sets of original sample data(1199091 1199092 11990910) obtained through oversampling directly the

average value is

1198601015840=1

10

10

sum

119894=1

119909119894= 1136 (29)

Else when averaging the remaining 8 sets of sample dataexcluding gross errors 119909

7and 119909

8from 10 sets of sample data

(1199091 1199092 11990910) obtained through oversampling the average

value is

11986010158401015840=1

8(1199091+ 1199092+ 1199093+ 1199094+ 1199095+ 1199096+ 1199099+ 11990910) = 12425

(30)

According to the maximum entropy theory the result1243786 obtained by the algorithm proposed in the paperis the precise measurement of unknown signal obtainedfrom sample data without any subjective hypotheses andconstraints

Similarly based on information entropy theory foreach group of original sample data (119909

1 1199092 11990910) obtained

through oversampling (1 GSas) gross errors excluded datafusing and average-based decimating can be adopted toobtain precise measurement data of a complete waveform atthe actual sampling rate (100MSas)

4 Experiment and Result Analysis

In order to verify the effectiveness and superiority of verticalresolution improved by the algorithm mentioned in the

Mathematical Problems in Engineering 9

Table 4 Fusion weight coefficient of 7 sets of effective sample data

Number Sample data Self-informationquantity

Ratio of informationquantity

Weight coefficient offusion

1199091

120 23048 10528 013501199092

121 22821 10424 013641199093

123 22339 10204 013931199094

124 22085 10088 014091199095

125 21822 09968 014261199096

129 20678 09445 015051199097

131 20051 09159 01552

Table 5 10 New sample data containing data fusion result

Number 1199091015840

11199091015840

21199091015840

31199091015840

41199091015840

51199091015840

61199091015840

71199091015840

81199091015840

91199091015840

10

Sample data 120 121 121 123 124 125 124893 124893 129 131

0 5 10 15 20 25 30 35 40 45 50minus80

minus70

minus60

minus50

minus40

minus30

minus20

minus10

0FFT plot (original signal)

Frequency (MHz)

Am

plitu

de (d

B)

SNR = 280776

Figure 5 Spectrum of originally acquired signals at 100MSas

paper we utilize 8-bit ADC model provided by AnalogDevice corporation to establish the acquisition system ofoscilloscope conduct simulation experiments with conven-tional average algorithm direct decimation algorithmand thealgorithm mentioned in the paper and finally estimate andcompare performances of all algorithms

The oscilloscope works at the time base of 500 nsdivand the corresponding actual sampling rate is 100MSasThefrequency of input sine wave is 119891

119894= 1MSas In order to

simulate quantization errors caused by noise interference andclock jitter data mismatched samples are randomly addedto the ideal ADC sampling model so the acquired samplesequence includes gross errors in quantization

Experiment 1 Sampling rate 119891119904= 100MSas Time-domain

waveform and signal spectrum obtained from sampling

Frequency (MHz)0 5 10 15 20 25 30 35 40 45 50

minus100

minus90

minus80

minus70

minus60

minus50

minus40

minus30

minus20

minus10

0FFT plot (successive capture averaging)A

mpl

itude

(dB)

SNR = 430485

Figure 6 Signal spectrumof successive capture averaging (119873 = 10)

results without any processing are shown in Figures 4 and 5respectively

After calculation in Figure 5 SNR = 280776 dB andENOB = 43717 bits

Experiment 2 Sampling rate 119891119904= 100MSas Signal spec-

trum obtained by applying successive capture averaging tosampling results with119873 = 10 is shown in Figure 6

After calculation in Figure 6 SNR = 430485 dB andENOB = 68585 bits

Experiment 3 Sampling rate 119891119904= 100MSas Signal spec-

trum obtained by adopting successive sample averaging tosampling results with119873 = 10 is shown in Figure 7

After calculation in Figure 7 SNR = 442538 dB andENOB = 70588 bits

10 Mathematical Problems in Engineering

SNR = 442538

Frequency (MHz)0 5 10 15 20 25 30 35 40 45 50

minus100

minus90

minus80

minus70

minus60

minus50

minus40

minus30

minus20

minus10

0FFT plot (successive capture averaging)

Am

plitu

de (d

B)

Figure 7 Signal spectrum of successive sample averaging (119873 = 10)

0 5 10 15 20 25 30 35 40 45 50minus70

minus60

minus50

minus40

minus30

minus20

minus10

0FFT plot (decimation)

Frequency (MHz)

Am

plitu

de (d

B)

SNR = 288353

Figure 8 Signal spectrum of 10 times direct decimation

Experiment 4 Oversampling is adopted with the samplingrate 119891

119904= 1GSas Signal spectrum obtained by directly

applying 10 times decimation to sampling results is shown inFigure 8

After calculation in Figure 8 SNR = 288353 dB andENOB = 44976 bits

Experiment 5 Oversampling is adopted with the samplingrate119891119904= 1GSas Signal spectrum obtained by conducting 10

times average-based decimation on sampling results is shownin Figure 9

Am

plitu

de (d

B)

minus120

minus100

minus80

minus60

minus40

minus20

0FFT plot (decimation with averaging)

SNR = 466242

0 5 10 15 20 25 30 35 40 45 50Frequency (MHz)

Figure 9 Signal spectrum of 10 times average-based decimation

0 100 200 300 400 500 600 700 800 900 10000

31

63

95

127

159

191

223

255Time domain (data fusion and decimation with averaging)

Samples

Dig

ital o

utpu

t cod

e

Figure 10 Time-domain waveform obtained with data fusion and10 times average-based decimation

After calculation in Figure 9 SNR = 466242 dB andENOB = 74525 bits

Experiment 6 Oversampling is adopted with the samplingrate 119891

119904= 1GSas The time-domain waveform and signal

spectrum obtained by adopting information entropy-basedalgorithm mentioned in the paper to sampling results toexcluding gross errors fusing data creating new samplesequence and then conducting 10 times average-based dec-imation are shown in Figures 10 and 11 respectively

After calculation in Figure 11 SNR = 596071 dB andENOB = 96092 bits

Mathematical Problems in Engineering 11

Table 6 Comparisons of experiment results

Experiment number Method SNR (dB) Effective number of bits (bits) minus3 dB bandwidth (MHz)

1 Nonoversamplingoriginal signal 280776 43717 43

2 Nonoversamplingsuccessive capture averaging 430485 68585 43

3 Nonoversamplingsuccessive sample averaging 442538 70588 43

4 Oversampling10 times direct decimation 288353 44976 43

5Oversampling10 times average-baseddecimation

466242 74525 43

6Oversamplinginformation entropy-based datafusion and 10 timesaverage-based decimation

596071 96092 43

Am

plitu

de (d

B)

minus140

minus120

minus100

minus80

minus60

minus40

minus20

0FFT plot (data fusion and decimation with averaging)

SNR = 596071

0 5 10 15 20 25 30 35 40 45 50Frequency (MHz)

Figure 11 Signal spectrum obtained with data Fusion and 10 timesaverage-based decimation

Table 6 compares the experiment results of the above-mentioned 6 methods

According to Table 6 at the sampling rate of 100MSasthe conventional successive capture averaging algorithm andsuccessive sample averaging algorithm increase the ENOBby about 249 and 269 bits when processing the sinusoidalsamples including quantization errors respectively How-ever successive sample averaging algorithm also causes thedecrease of the bandwidth terribly At the sampling rate of1 GSas the ENOBprovided by the average-based decimationalgorithm is about 295 bits higher than that provided by thedirect decimation algorithm On this basis the algorithmof information entropy-based data fusion and average-baseddecimation proposed in the paper can further increaseENOB by about 216 bits to achieve total ENOB of 961 bitsCompared with the theoretical digitalizing bits of 8-bit ADC

the actual ENOB (resolution) has totally increased by about161 bits which is very close to the theoretically improvedresults 119877

119867= 05log

210 asymp 166 bits in (20) and at the same

time no loss of analog bandwidth at the actual sampling rateis caused

5 Conclusion

This paper proposes a decimation filtering algorithm basedon information entropy and average to realize the goal ofraising the vertical resolution of DSO Based on oversamplingand for single acquiring signal utilize the maximum entropyof sample data to eliminate gross error in quantization fusethe remaining efficient sample data and conduct average-based decimation to further filter the noise and then theDSO resolution can be improved In order to verify theeffectiveness and superiority of the algorithm comparisonexperiments are conducted using different algorithms Theresults show that the improved resolution of the algorithmproposed in the paper is nearly identical with the theoreticaldeduction What is more no subjective hypotheses andconstraints on the detected signals are added during thewhole processing and no impacts on the analog bandwidthof DSO at the actual sampling rate are exerted

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

This work was supported by the National Natural ScienceFoundation of China (nos 61301263 and 61301264) theSpecialized Research Fund for the Doctoral Program ofHigher Education of China (no 20120185130002) and theFundamental Research Fund for the Central University ofChina (A03007023801217 and A03008023801080)

12 Mathematical Problems in Engineering

References

[1] Tektronix ldquoImprove the vertical resolution of digital phosphoroscilloscopesrdquo China Electronic Market vol 6 no 1 pp 76ndash842012

[2] R Reeder M Looney and J Hand ldquoPushing the state of theart withmultichannel AD convertersrdquoAnalogDialogue vol 39no 2 pp 7ndash10 2005

[3] E Seifert and A Nauda ldquoEnhancing the dynamic range ofanalog-to-digital converters by reducing excess noiserdquo in Pro-ceedings of the IEEEPacific RIMConference onCommunicationsComputers and Signal Processing pp 574ndash576 Victoria CanadaJune 1989

[4] K C Lauritzen S H Talisa and M Peckerar ldquoImpact ofdecorrelation techniques on sampling noise in radio-frequencyapplicationsrdquo IEEE Transactions on Instrumentation and Mea-surement vol 59 no 9 pp 2272ndash2279 2010

[5] V Gregers-Hansen S M Brockett and P E Cahill ldquoA stackeda-to-d converter for increased radar signal processor dynamicrangerdquo in Proceedings of the IEEE International Radar Confer-ence pp 169ndash174 May 2001

[6] S R Duncan V Gregers-Hansen and J P McConnellldquoA stacked analog-to-digital converter providing 100 dB ofdynamic rangerdquo in Proceedings of the IEEE International RadarConference pp 31ndash36 2005

[7] Silicon Laboratories ldquoImproving ADC resolution byoversampling and averagingrdquo Improving ADC resolutionby oversampling and averaging Application Note AN118 2013httpwwwsilabscomSupport20DocumentsTechnicalDocsan118pdf

[8] Y Lembeye J Pierre Keradec and G Cauffet ldquoImprovementin the linearity of fast digital oscilloscopes used in averagingmoderdquo IEEE Transactions on Instrumentation and Measure-ment vol 43 no 6 pp 922ndash928 1994

[9] C Bishop and C Kung ldquoEffects of averaging to reject unwantedsignals in digital sampling oscilloscopesrdquo in Proceedings of the45 Years of Support Innovation - Moving Forward at the Speed ofLight (AUTOTESTCON rsquo10) pp 1ndash4 IEEE Orlando Fla USASeptember 2010

[10] C Fager and K Andersson ldquoImprovement of oscilloscopebased RF measurements by statistical averaging techniquesrdquoin Proceeding of the IEEE MTT-S International MicrowaveSymposium Digest pp 1460ndash1463 San Francisco Calif USAJune 2006

[11] Z H L Luo Q J Zhang and Q C H Fange ldquoMethod of datafusion applied in intelligent instrumentsrdquo Instrument Techniqueand Sensor vol 3 no 1 pp 45ndash46 2002

[12] F-N Cai and Q-X Liu ldquoSingle sensor data fusion and analysisof effectivenessrdquo Joumal of Transducer Technology vol 24 no 2pp 73ndash74 2005

[13] C E Shannon ldquoAmathematical theory of communicationrdquoTheBell System Technical Journal vol 27 no 3 pp 379ndash423 1948

[14] M J Zhu and B J Guo ldquoStudy on evaluation of measurementresult and uncertainty based on maximum entropy methodrdquoElectrical Measurement amp Instrument vol 42 no 8 pp 5ndash82005

[15] Y Tan A Chu M Lu and B T Cunningham ldquoDistributedfeedback laser biosensor noise reductionrdquo IEEE Sensors Journalvol 13 no 5 pp 1972ndash1978 2013

[16] E T Jaynes ldquoInformation theory and statistical mechanicsrdquoThePhysical Review vol 106 no 4 pp 620ndash630 1957

[17] MCThomas and J AThomasElements of InformationTheoryJohn Wiley amp Sons New York NY USA 2nd edition 2006

[18] DH Li and ZH Li ldquoProcessing of gross error in small samplesbased on measurement information theoryrdquo Mechanical Engi-neering amp Aut omation vol 6 no 1 pp 115ndash117 2009

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 5: Research Article Information Entropy- and Average-Based ...

Mathematical Problems in Engineering 5

probability distribution function is

119901 (119909119894) = exp[

[

1205820+

119872

sum

119895=1

120582119895119892119895(119909119894)]

]

(11)

and the corresponding maximum entropy can be given by

119867(119909)max = minus1205820 minus119872

sum

119895=1

120582119895119864 (119892119895) (12)

For given sample data the expectation and variance ofsample sequence can be chosen as expectation functiontherefore the distribution estimation based on maximumentropy is to estimate its probability distribution accordingto the entropy of discrete random variable and to achievethe maximum entropy (119867(119909)max) by adjusting probabilitydistribution model (119901(119909

119894)) under the condition of ensuring

the statistical property of sample [16]

312 Gross Error Discrimination Traditional criteria ongross error discrimination (eg 3120590 criterion Grubbs cri-terion and Dixon criterion) are based on mathematicalstatistics The probability distribution of sample data needsto be known in case these algorithms are applied to dealwith sample data However probability distribution is rarelyknown in advance in actualmeasurement Statistical propertycannot be satisfied if few sets of sample data are obtainedduring measurement and therefore the precision for dealingwith gross error will be influenced [18] A new algorithm ongross error discrimination is proposed in this paper whichcalculates the measuring uncertainty of sample sequencethrough the probability distribution of themaximumdiscreteentropy and then determines confidence interval based onuncertainty to discriminate gross error

In [14] for continuous random variable 119909 if 119909 is expecta-tion and the estimated probability density function based onMEM is119891(119909) then themeasurement uncertainty is expressedby

119906 = radicint

119887

119886

(119909 minus 119909)2119891 (119909) 119889119909 (13)

The measurement uncertainty calculation method ofdiscrete random variable 119909 can be deduced thereout If 119909is expectation and the estimated probability distributionestimated by MEM is 119901(119909

119894) then the uncertainty of sample

should be calculated after eliminating repetitive sample dataand can be given by

119906 = radic

1198731015840

sum

119894=1

(119909119894minus 119909)2119901 (119909119894) (14)

The confidence interval is [119909 minus 119906 119909 + 119906] and then judgewhether gross errors are contained in the sample basedon confidence interval The data outside this confidenceinterval is considered as that with gross error and shouldbe eliminated from sample sequence with a new samplesequence being constituted to fulfill data fusion

313 Effective Data Fusion For DSO sampling aims atobtaining the information related to the signal under testAs the measurement of information quantity informationentropy is used to determine the level of uncertainty andtherefore it can be used to fuse the sample data acquiredTo reduce the uncertainty of fusion results small weightcoefficient should be distributed to the sample with largeuncertainty while large weight coefficient should be dis-tributed to the sample with small uncertainty

As mentioned above, oversampling is implemented by the acquisition system of the DSO at a high sampling rate that is N times the actual sampling rate. After eliminating repetitive sample data, N′ discrete samples x_1, x_2, ..., x_{N′} remain at the high sampling rate, and N″ samples x_1, x_2, ..., x_{N″} remain after further eliminating gross errors. The information quantity provided by each sample x_i is denoted by its self-information quantity I(x_i). Information entropy is the average uncertainty of the samples, so the ratio of self-information quantity to information entropy can be used to measure the uncertainty of each sample among all samples. The fusion weight coefficient is inversely proportional to the self-information quantity. The detailed algorithm is as follows.

(1) Utilize the MEM to estimate the maximum entropy distribution and the maximum entropy of the sample data, and then compute the self-information quantity of each sample and its ratio to the maximum entropy, defined by

\[ \omega_i = \frac{I(x_i)}{H(x)_{\max}}, \quad i = 1, 2, \ldots, N'' \quad (15) \]

where

\[ I(x_i) = -\ln p(x_i). \quad (16) \]

(2) Define the weight coefficient of fusion with normalization processing:

\[ q_i = \frac{1/\omega_i}{\sum_{i=1}^{N''} 1/\omega_i}, \quad i = 1, 2, \ldots, N''. \quad (17) \]

(3) Fuse the data:

\[ x_f = \sum_{i=1}^{N''} q_i x_i \quad (18) \]

where the number of data used in the fusion, N″, equals the number of samples remaining after eliminating repetitive data and gross errors.

(4) Replace the gross errors with the data fusion result x_f to constitute a new sample sequence: if k gross errors are eliminated from the N data, add k copies of x_f to the sample sequence, thus obtaining N new samples x′_1, x′_2, ..., x′_N without gross error at the high sampling rate (a minimal sketch of steps (1)-(4) is given below).
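The following Python sketch strings steps (1)-(4) together. It is an illustrative reconstruction, not code from the paper, and it assumes that the probabilities p(x_i) and the maximum entropy H(x)_max of the effective samples are already available from an MEM fit such as the one sketched earlier; the helper names are our own.

```python
import numpy as np

def entropy_weighted_fusion(x_eff, p_eff, h_max):
    """Steps (1)-(3): information ratios, normalized weights, fused value x_f."""
    x_eff = np.asarray(x_eff, dtype=float)
    info = -np.log(np.asarray(p_eff, dtype=float))   # I(x_i), equation (16)
    omega = info / h_max                             # equation (15)
    q = (1.0 / omega) / np.sum(1.0 / omega)          # equation (17)
    return float(np.sum(q * x_eff))                  # equation (18)

def replace_gross_errors(x_raw, gross_values, x_f):
    """Step (4): put x_f in place of every gross-error sample of the
    original N-point high-rate sequence."""
    gross = set(gross_values)
    return [x_f if v in gross else v for v in x_raw]
```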


3.2. Average-Based Decimation Filtering. The maximum sampling rate of the ADC in a DSO is generally much higher than the sampling rate actually required by the spectrum of the measured signal. Oversampling therefore allows the digital signal to be filtered so as to improve the effective resolution of the displayed waveform and reduce undesired noise. Under the premise of oversampling, the vertical resolution at the actual sampling rate can be increased by adopting an average-based decimation filtering algorithm. Specifically, the DSO carries out oversampling at a high sampling rate that is N times the actual sampling rate corresponding to the time base selected by the user, applies the information entropy-based data fusion algorithm of the former section to the N sampling points at the high rate to exclude gross errors and fuse the effective samples, averages the new sample sequences, and finally decimates sampling points at the actual sampling rate. Average-based decimation under N-times oversampling is given by

\[ A_N(n+i) = \frac{1}{N} \sum_{j=0}^{N-1} x_N(n + iN + j) \quad (19) \]

where i = 0, ±1, ±2, .... Figure 3 shows the average-based decimation principle when N = 3. The resolution improvement provided by average-based decimation filtering, measured in bits, is a function of N (the number of samples to average, i.e., the oversampling factor):

\[ R_H = 0.5 \log_2 N = 0.5 \log_2\!\left(\frac{S_M}{S_R}\right). \quad (20) \]

In (20), R_H is the improved resolution, S_M is the high sampling rate, and S_R is the actual sampling rate. The −3 dB bandwidth after averaging is

\[ B_H = 0.44\, S_R \quad (21) \]

where B_H denotes the bandwidth and S_R represents the actual sampling rate. It can be seen that the improved vertical resolution and the analog bandwidth vary with the maximum sampling rate and the actual sampling rate of the oscilloscope.
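For reference, a minimal sketch of (19) and (20) is given below. It is our own illustration (block averaging with numpy), not code from the paper; array names and the reshape-based implementation are assumptions.

```python
import numpy as np

def average_decimate(x_over, n):
    """Average-based decimation of (19): average non-overlapping blocks of
    n oversampled points to produce one output point at the actual rate."""
    x_over = np.asarray(x_over, dtype=float)
    m = (len(x_over) // n) * n          # drop a possible incomplete last block
    return x_over[:m].reshape(-1, n).mean(axis=1)

def resolution_gain_bits(n):
    """Ideal resolution improvement of (20): R_H = 0.5 * log2(N)."""
    return 0.5 * np.log2(n)

# For 10-times oversampling (1 GSa/s decimated to 100 MSa/s),
# resolution_gain_bits(10) is about 1.66 bits, consistent with Table 1.
```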

Table 1 lists the ideal values of improved resolution and analog bandwidth for an oscilloscope with a maximum sampling rate of 1 GSa/s and an 8-bit ADC, adopting the average-based decimation algorithm under oversampling.

Values in Columns 3 and 4 of Table 1 are ideal, and the resolution improvement grows with N: each time N increases by a factor of 4, the resolution improves by 1 bit (see (20)). In reality, the maximum N falls in the range of 10000, since it is limited by real-time performance and memory capacity. Moreover, fixed-point arithmetic and noise also lower the achievable resolution to some extent; therefore, resolution should not be expected to improve by more than 4 to 6 bits. It should also be noted that the improvement of resolution depends on the signal being dynamic. For signals whose conversion results keep toggling between different codes, resolution can always be improved. For steady-state signals, the improvement is obvious only when the noise amplitude exceeds 1 or 2 ADC LSBs. Fortunately, real-world signals are almost always of the former kind.

Generally speaking, when the measured signal is a single-shot event or repeats only at a low rate, conventional successive capture averaging cannot be adopted, and average-based decimation under oversampling can be used as an alternative. Specifically, average-based decimation under oversampling is especially applicable in the following two situations.

Firstly, if the noise in the signal is obviously high (and, moreover, the noise itself does not need to be measured), average-based decimation under oversampling can be adopted to "clear" the noise.

Secondly, average-based decimation can be adopted to improve the measurement resolution when high-precision measurement of the waveform is required, even if the noise in the signal is not strong.

Comparing (6) with (20), it can be seen that in the conventional successive sample averaging algorithm the bandwidth is directly proportional to the actual sampling rate and inversely proportional to the number of sample points averaged. When the actual sampling rate is given, the bandwidth decreases dramatically as the number of averaged points increases. However, when average-based decimation under oversampling is adopted, the bandwidth is proportional only to the actual sampling rate and is independent of the number of points averaged (the oversampling factor). When the actual sampling rate is given, the bandwidth is determined accordingly, without additional loss. Besides, the Nyquist frequency increases by N times under N-times oversampling; therefore, another advantage of average-based decimation is reduced aliasing.

3.3. Processing Example. A group of sample data acquired by the DSO is used as an example to illustrate the processing procedure of the algorithm proposed in this paper. The oscilloscope works at a time base of 500 ns/div, and the corresponding actual sampling rate is 100 MSa/s. The oscilloscope carries out 10-times oversampling at 1 GSa/s to obtain 10 original discrete samples at the high sampling rate, that is, x_1, x_2, ..., x_10, which are shown in Table 2. The samples contain obvious glitches, that is, gross errors caused by ADC quantization errors.

The expectation and variance of the 10 sample data are

\[ \mu = \bar{x} = \frac{1}{10}\sum_{i=1}^{10} x_i = 113.6, \qquad \sigma^2 = \frac{1}{9}\sum_{i=1}^{10} (x_i - \bar{x})^2 = 530.4889. \quad (22) \]

Figure 3: Average-based decimation principle of 3-times oversampling (each group of three consecutive input samples, e.g., x(n), x(n+1), x(n+2), is summed and divided by 3 to form one output sample A_3(n)).

Table 1: Ideal values of improved vertical resolution and bandwidth based on average-based decimation under oversampling.

Sampling rate | Oversampling factor | Total resolution | −3 dB bandwidth
1 GSa/s   | 1     | 8.0 bits  | 440 MHz
500 MSa/s | 2     | 8.5 bits  | 220 MHz
250 MSa/s | 4     | 9.0 bits  | 110 MHz
100 MSa/s | 10    | 9.7 bits  | 44 MHz
50 MSa/s  | 20    | 10.2 bits | 22 MHz
25 MSa/s  | 40    | 10.7 bits | 11 MHz
10 MSa/s  | 100   | 11.3 bits | 4.4 MHz
1 MSa/s   | 1000  | 13.0 bits | 440 kHz
100 kSa/s | 10000 | 14.6 bits | 44 kHz

Excluding one repeated datum (121), the constraint conditions met by the sample probability distribution are

\[ \sum_{i=1}^{N'} p(x_i) = \sum_{i=1}^{9} p(x_i) = 1, \]
\[ \sum_{i=1}^{N'} x_i p(x_i) = \sum_{i=1}^{9} x_i p(x_i) = \mu = 113.6, \]
\[ \sum_{i=1}^{N'} (x_i - \mu)^2 p(x_i) = \sum_{i=1}^{9} (x_i - \mu)^2 p(x_i) = \sigma^2 = 530.4889. \quad (23) \]

According to the MEM and the Lagrangian function, the calculated Lagrangian coefficients are λ_0 = −4.3009, λ_1 = 0.01648, and λ_2 = 0.000452, respectively. The estimated maximum entropy probability distribution, the maximum entropy, and the self-information quantity are given by

\[ p(x_i) = \exp\Big[\lambda_0 + \sum_{j=1}^{m} \lambda_j g_j(x_i)\Big] = \exp\big[-4.3009 + 0.01648\, x_i + 0.000452\,(x_i - 113.6)^2\big], \]
\[ H(x)_{\max} = -\lambda_0 - \sum_{j=1}^{m} \lambda_j E(g_j) = 2.1893, \]
\[ I(x_i) = -\ln p(x_i) = 4.3009 - 0.01648\, x_i - 0.000452\,(x_i - 113.6)^2. \quad (24) \]

The corresponding probability and self-information quantity of each sample are shown in Table 3.


Table 2: 10 sets of original sample data obtained by oversampling.

Number      | x1  | x2  | x3  | x4  | x5  | x6  | x7 | x8 | x9  | x10
Sample data | 120 | 121 | 121 | 123 | 124 | 125 | 63 | 79 | 129 | 131

Table 3: Probability and self-information quantity of each sample datum.

Number | Sample data | Probability | Self-information quantity
x1 | 120 | 0.0998 | 2.3048
x2 | 121 | 0.1021 | 2.2821
x3 | 123 | 0.1071 | 2.2339
x4 | 124 | 0.1099 | 2.2085
x5 | 125 | 0.1128 | 2.1822
x6 | 63  | 0.1217 | 2.1054
x7 | 79  | 0.0856 | 2.4579
x8 | 129 | 0.1265 | 2.0678
x9 | 131 | 0.1346 | 2.0052

According to the maximum entropy distribution, the uncertainty of measurement is

\[ u = \sqrt{\sum_{i=1}^{N'} (x_i - \bar{x})^2 p(x_i)} = \sqrt{\sum_{i=1}^{9} (x_i - 113.6)^2 p(x_i)} = 23.033, \quad (25) \]

and the confidence interval is

\[ [\bar{x} - u, \bar{x} + u] = [90.567, 136.633]. \quad (26) \]

It can be judged that x_7 and x_8 in Table 2 are gross errors in quantization, so they are excluded from the sample sequence. At the same time, the repeated sample x_3 in Table 2 is also excluded, so the remaining 7 effective samples form a new sample sequence (i.e., x_1, x_2, ..., x_7) for data fusion; the calculation results are shown in Table 4. According to (15)-(18), the result of data fusion is

\[ x_f = 124.893. \quad (27) \]

Replacing the sample data x_7 and x_8 in Table 2 with the data fusion result forms a new sample sequence containing 10 new samples without gross errors at the high sampling rate, that is, x′_1, x′_2, ..., x′_10, as shown in Table 5. Finally, averaging the 10 new samples at the high sampling rate (1 GSa/s) yields one sampling point at the actual sampling rate (100 MSa/s), that is,

\[ A = \frac{1}{10}\sum_{i=1}^{10} x'_i = 124.3786. \quad (28) \]

Figure 4: Time-domain waveform of the originally acquired signal at 100 MSa/s (digital output code, 0-255, versus sample number).

When the 10 original samples (x_1, x_2, ..., x_10) obtained through oversampling are averaged directly, the average value is

\[ A' = \frac{1}{10}\sum_{i=1}^{10} x_i = 113.6. \quad (29) \]

Alternatively, when the remaining 8 samples, excluding the gross errors x_7 and x_8, are averaged, the average value is

\[ A'' = \frac{1}{8}\big(x_1 + x_2 + x_3 + x_4 + x_5 + x_6 + x_9 + x_{10}\big) = 124.25. \quad (30) \]

According to the maximum entropy theory, the result 124.3786 obtained by the algorithm proposed in this paper is the precise measurement of the unknown signal that can be obtained from the sample data, without any subjective hypotheses or constraints.

Similarly, based on information entropy theory, for each group of original samples (x_1, x_2, ..., x_10) obtained through oversampling at 1 GSa/s, gross errors can be excluded, the data fused, and average-based decimation applied, so that precise measurement data of a complete waveform are obtained at the actual sampling rate (100 MSa/s).
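For readers who want to retrace the numbers of this example, the stand-alone Python sketch below reproduces the processing chain using the rounded Lagrangian coefficients reported in (24); because those coefficients are rounded, the last digits may differ slightly from (27) and (28). The script itself is our own illustration, not code from the paper.

```python
import numpy as np

x_raw = [120, 121, 121, 123, 124, 125, 63, 79, 129, 131]   # Table 2
mu, var = np.mean(x_raw), np.var(x_raw, ddof=1)             # 113.6, 530.4889

# Maximum entropy distribution over the distinct values, using the
# coefficients of equation (24).
l0, l1, l2 = -4.3009, 0.01648, 0.000452
x_d = np.array(sorted(set(x_raw)), dtype=float)
p = np.exp(l0 + l1 * x_d + l2 * (x_d - mu) ** 2)
info = -np.log(p)                                            # I(x_i)
h_max = -l0 - l1 * mu - l2 * var                             # about 2.1893

# Uncertainty (25) and confidence interval (26).
u = np.sqrt(np.sum((x_d - mu) ** 2 * p))                     # about 23.03
keep = (x_d >= mu - u) & (x_d <= mu + u)                     # drops 63 and 79

# Entropy-weighted fusion (15)-(18) over the effective samples.
omega = info[keep] / h_max
q = (1.0 / omega) / np.sum(1.0 / omega)
x_f = float(np.sum(q * x_d[keep]))

# Step (4) plus averaging: replace gross errors, then average the 10 points.
gross = set(x_d[~keep].tolist())
x_new = [x_f if v in gross else v for v in x_raw]
print(round(x_f, 3), round(float(np.mean(x_new)), 4))
# prints roughly 124.88 and 124.38; the paper reports 124.893 and 124.3786
```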

4. Experiment and Result Analysis


Table 4: Fusion weight coefficients of the 7 effective sample data.

Number | Sample data | Self-information quantity | Ratio of information quantity | Weight coefficient of fusion
x1 | 120 | 2.3048 | 1.0528 | 0.1350
x2 | 121 | 2.2821 | 1.0424 | 0.1364
x3 | 123 | 2.2339 | 1.0204 | 0.1393
x4 | 124 | 2.2085 | 1.0088 | 0.1409
x5 | 125 | 2.1822 | 0.9968 | 0.1426
x6 | 129 | 2.0678 | 0.9445 | 0.1505
x7 | 131 | 2.0051 | 0.9159 | 0.1552

Table 5: 10 new sample data containing the data fusion result.

Number      | x′1 | x′2 | x′3 | x′4 | x′5 | x′6 | x′7     | x′8     | x′9 | x′10
Sample data | 120 | 121 | 121 | 123 | 124 | 125 | 124.893 | 124.893 | 129 | 131

Figure 5: Spectrum (FFT) of the originally acquired signal at 100 MSa/s (SNR = 28.0776 dB).

In order to verify the effectiveness and superiority of the vertical resolution improvement achieved by the algorithm proposed in this paper, we utilize an 8-bit ADC model provided by Analog Devices to establish the acquisition system of an oscilloscope, conduct simulation experiments with the conventional averaging algorithms, the direct decimation algorithm, and the proposed algorithm, and finally estimate and compare the performance of all the algorithms.

The oscilloscope works at a time base of 500 ns/div, and the corresponding actual sampling rate is 100 MSa/s. The frequency of the input sine wave is f_i = 1 MHz. In order to simulate quantization errors caused by noise interference and clock jitter, mismatched samples are randomly added to the ideal ADC sampling model, so the acquired sample sequence includes gross errors in quantization.
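The paper does not list its simulation code; the sketch below is only one plausible way, under our own assumptions (record length, full-scale drive, and gross-error rate are arbitrary placeholders), to generate such a test record: an ideal 8-bit quantization of a 1 MHz sine sampled at 1 GSa/s, with a small fraction of samples randomly replaced by mismatched codes.

```python
import numpy as np

def simulate_record(n_samples=100_000, fs=1e9, fi=1e6, bits=8,
                    gross_rate=0.002, seed=0):
    """Ideal ADC model of a full-scale sine plus randomly injected
    mismatched codes (assumed parameters, not the paper's)."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_samples) / fs
    full_scale = 2 ** bits - 1
    sine = 0.5 * (np.sin(2 * np.pi * fi * t) + 1.0) * full_scale
    codes = np.clip(np.round(sine), 0, full_scale).astype(int)
    # Replace a small random subset of samples with arbitrary codes
    # to emulate gross quantization errors.
    idx = rng.random(n_samples) < gross_rate
    codes[idx] = rng.integers(0, full_scale + 1, size=idx.sum())
    return codes
```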

Experiment 1. Sampling rate f_s = 100 MSa/s. The time-domain waveform and signal spectrum obtained from the sampling results without any processing are shown in Figures 4 and 5, respectively.

Figure 6: Signal spectrum of successive capture averaging (N = 10); SNR = 43.0485 dB.

After calculation, from Figure 5, SNR = 28.0776 dB and ENOB = 4.3717 bits.

Experiment 2. Sampling rate f_s = 100 MSa/s. The signal spectrum obtained by applying successive capture averaging to the sampling results with N = 10 is shown in Figure 6. After calculation, from Figure 6, SNR = 43.0485 dB and ENOB = 6.8585 bits.

Experiment 3. Sampling rate f_s = 100 MSa/s. The signal spectrum obtained by applying successive sample averaging to the sampling results with N = 10 is shown in Figure 7. After calculation, from Figure 7, SNR = 44.2538 dB and ENOB = 7.0588 bits.


Figure 7: Signal spectrum of successive sample averaging (N = 10); SNR = 44.2538 dB.

Figure 8: Signal spectrum of 10-times direct decimation; SNR = 28.8353 dB.

Experiment 4. Oversampling is adopted with a sampling rate of f_s = 1 GSa/s. The signal spectrum obtained by directly applying 10-times decimation to the sampling results is shown in Figure 8. After calculation, from Figure 8, SNR = 28.8353 dB and ENOB = 4.4976 bits.

Experiment 5. Oversampling is adopted with a sampling rate of f_s = 1 GSa/s. The signal spectrum obtained by conducting 10-times average-based decimation on the sampling results is shown in Figure 9.

Figure 9: Signal spectrum of 10-times average-based decimation; SNR = 46.6242 dB.

Figure 10: Time-domain waveform obtained with data fusion and 10-times average-based decimation (digital output code, 0-255, versus sample number).

After calculation, from Figure 9, SNR = 46.6242 dB and ENOB = 7.4525 bits.

Experiment 6. Oversampling is adopted with a sampling rate of f_s = 1 GSa/s. The time-domain waveform and signal spectrum obtained by applying the proposed information entropy-based algorithm to the sampling results, that is, excluding gross errors, fusing data, creating new sample sequences, and then conducting 10-times average-based decimation, are shown in Figures 10 and 11, respectively. After calculation, from Figure 11, SNR = 59.6071 dB and ENOB = 9.6092 bits.


Table 6: Comparison of experiment results.

Experiment number | Method | SNR (dB) | Effective number of bits (bits) | −3 dB bandwidth (MHz)
1 | Nonoversampling, original signal | 28.0776 | 4.3717 | 43
2 | Nonoversampling, successive capture averaging | 43.0485 | 6.8585 | 43
3 | Nonoversampling, successive sample averaging | 44.2538 | 7.0588 | 4.3
4 | Oversampling, 10-times direct decimation | 28.8353 | 4.4976 | 43
5 | Oversampling, 10-times average-based decimation | 46.6242 | 7.4525 | 43
6 | Oversampling, information entropy-based data fusion and 10-times average-based decimation | 59.6071 | 9.6092 | 43

Figure 11: Signal spectrum obtained with data fusion and 10-times average-based decimation; SNR = 59.6071 dB.

Table 6 compares the experiment results of the six methods above.

According to Table 6, at the sampling rate of 100 MSa/s, the conventional successive capture averaging algorithm and the successive sample averaging algorithm increase the ENOB by about 2.49 and 2.69 bits, respectively, when processing the sinusoidal samples containing quantization errors; however, the successive sample averaging algorithm also reduces the bandwidth severely. At the sampling rate of 1 GSa/s, the ENOB provided by the average-based decimation algorithm is about 2.95 bits higher than that provided by the direct decimation algorithm. On this basis, the proposed information entropy-based data fusion and average-based decimation algorithm further increases the ENOB by about 2.16 bits, achieving a total ENOB of 9.61 bits. Compared with the theoretical 8 digitalizing bits of the ADC, the actual ENOB (resolution) has increased by about 1.61 bits in total, which is very close to the theoretical improvement R_H = 0.5 log_2 10 ≈ 1.66 bits of (20), and at the same time no loss of analog bandwidth at the actual sampling rate is incurred.
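The ENOB figures in Table 6 are consistent (to within rounding) with the standard conversion ENOB = (SNR − 1.76)/6.02, which can be checked directly:

```python
# Consistency check: ENOB = (SNR - 1.76) / 6.02 for the SNR values of Table 6.
snr_db = [28.0776, 43.0485, 44.2538, 28.8353, 46.6242, 59.6071]
enob = [(s - 1.76) / 6.02 for s in snr_db]
print([round(e, 4) for e in enob])
# -> 4.3717, 6.8586, 7.0588, 4.4976, 7.4525, 9.6092 (matches Table 6 to within rounding)
```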

5. Conclusion

This paper proposes a decimation filtering algorithm based on information entropy and averaging to raise the vertical resolution of a DSO. Based on oversampling, and for a single acquired signal, the maximum entropy of the sample data is utilized to eliminate gross errors in quantization, the remaining effective samples are fused, and average-based decimation is conducted to further filter the noise; the DSO resolution is thereby improved. In order to verify the effectiveness and superiority of the algorithm, comparison experiments are conducted using different algorithms. The results show that the resolution improvement of the proposed algorithm is nearly identical to the theoretical deduction. Moreover, no subjective hypotheses or constraints on the detected signals are introduced during the whole process, and no impact is exerted on the analog bandwidth of the DSO at the actual sampling rate.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (nos. 61301263 and 61301264), the Specialized Research Fund for the Doctoral Program of Higher Education of China (no. 20120185130002), and the Fundamental Research Fund for the Central University of China (A03007023801217 and A03008023801080).


References

[1] Tektronix, "Improve the vertical resolution of digital phosphor oscilloscopes," China Electronic Market, vol. 6, no. 1, pp. 76-84, 2012.
[2] R. Reeder, M. Looney, and J. Hand, "Pushing the state of the art with multichannel A/D converters," Analog Dialogue, vol. 39, no. 2, pp. 7-10, 2005.
[3] E. Seifert and A. Nauda, "Enhancing the dynamic range of analog-to-digital converters by reducing excess noise," in Proceedings of the IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, pp. 574-576, Victoria, Canada, June 1989.
[4] K. C. Lauritzen, S. H. Talisa, and M. Peckerar, "Impact of decorrelation techniques on sampling noise in radio-frequency applications," IEEE Transactions on Instrumentation and Measurement, vol. 59, no. 9, pp. 2272-2279, 2010.
[5] V. Gregers-Hansen, S. M. Brockett, and P. E. Cahill, "A stacked A-to-D converter for increased radar signal processor dynamic range," in Proceedings of the IEEE International Radar Conference, pp. 169-174, May 2001.
[6] S. R. Duncan, V. Gregers-Hansen, and J. P. McConnell, "A stacked analog-to-digital converter providing 100 dB of dynamic range," in Proceedings of the IEEE International Radar Conference, pp. 31-36, 2005.
[7] Silicon Laboratories, "Improving ADC resolution by oversampling and averaging," Application Note AN118, 2013, http://www.silabs.com/Support%20Documents/TechnicalDocs/an118.pdf.
[8] Y. Lembeye, J. P. Keradec, and G. Cauffet, "Improvement in the linearity of fast digital oscilloscopes used in averaging mode," IEEE Transactions on Instrumentation and Measurement, vol. 43, no. 6, pp. 922-928, 1994.
[9] C. Bishop and C. Kung, "Effects of averaging to reject unwanted signals in digital sampling oscilloscopes," in Proceedings of the 45 Years of Support Innovation - Moving Forward at the Speed of Light (AUTOTESTCON '10), pp. 1-4, IEEE, Orlando, Fla, USA, September 2010.
[10] C. Fager and K. Andersson, "Improvement of oscilloscope based RF measurements by statistical averaging techniques," in Proceedings of the IEEE MTT-S International Microwave Symposium Digest, pp. 1460-1463, San Francisco, Calif, USA, June 2006.
[11] Z. H. L. Luo, Q. J. Zhang, and Q. C. H. Fange, "Method of data fusion applied in intelligent instruments," Instrument Technique and Sensor, vol. 3, no. 1, pp. 45-46, 2002.
[12] F.-N. Cai and Q.-X. Liu, "Single sensor data fusion and analysis of effectiveness," Journal of Transducer Technology, vol. 24, no. 2, pp. 73-74, 2005.
[13] C. E. Shannon, "A mathematical theory of communication," The Bell System Technical Journal, vol. 27, no. 3, pp. 379-423, 1948.
[14] M. J. Zhu and B. J. Guo, "Study on evaluation of measurement result and uncertainty based on maximum entropy method," Electrical Measurement & Instrument, vol. 42, no. 8, pp. 5-8, 2005.
[15] Y. Tan, A. Chu, M. Lu, and B. T. Cunningham, "Distributed feedback laser biosensor noise reduction," IEEE Sensors Journal, vol. 13, no. 5, pp. 1972-1978, 2013.
[16] E. T. Jaynes, "Information theory and statistical mechanics," The Physical Review, vol. 106, no. 4, pp. 620-630, 1957.
[17] M. C. Thomas and J. A. Thomas, Elements of Information Theory, John Wiley & Sons, New York, NY, USA, 2nd edition, 2006.
[18] D. H. Li and Z. H. Li, "Processing of gross error in small samples based on measurement information theory," Mechanical Engineering & Automation, vol. 6, no. 1, pp. 115-117, 2009.

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 6: Research Article Information Entropy- and Average-Based ...

6 Mathematical Problems in Engineering

32 Average-BasedDecimation Filtering Themaximum sam-pling rate of ADC in DSO is generally much higher than theactually required sampling rate of measured signal spectraThus oversampling has an advantage of filtering digital signalto improve the effective resolution of displayed waveformsand reduce the undesired noiseTherefore under the premiseof oversampling the vertical resolution at the actual samplingrate can be increased by adopting average-based decimationfiltering algorithm To be specific the DSO can carry outoversampling at high sampling rate that is 119873 times of theactual sampling rate corresponding to the time base selectedby users and then apply the information entropy-based datafusion algorithm mentioned in the former section to 119873

sampling points at high sampling rate to exclude gross errorsfuse data of effective samples average after creating newsample sequences and finally decimate sampling points atthe actual sampling rate Average-based decimation under119873times oversampling is given by

119860119873 (119899 + 119894) =

1

119873

119873minus1

sum

119895=0

119909119873(119899 + 119894119873 + 119895) (19)

where 119894 = 0 plusmn1 plusmn2 Figure 3 shows the average-based decimation principle

when119873 = 3The resolution improved by average-based decimation

filtering is measured in bits which is the function of119873 (the number of samples to average or oversamplingfactor)

119877119867= 05log

2119873 = 05log

2(119878119872

119878119877

) (20)

In (20) 119877119867

is the improved resolution 119878119872

is the highsampling rate and 119878

119877is the actual sampling rate The minus3 dB

bandwidth after average is

119861119867= 044119878

119877 (21)

where119861119867denotes the bandwidth and 119878

119877represents the actual

sampling rate It can be seen that improved vertical resolutionand analog bandwidth vary with themaximum sampling rateand actual sampling rate of oscilloscopes

Table 1 lists the ideal values of improved resolutionsand analog bandwidth of the oscilloscope with maximumsampling rate of 1 GSas and 8-bit ADC adopting average-based decimation algorithm under oversampling

Values in Columns 3 and 4 of Table 1 are ideal and theimprovement of resolution is directly proportional to 119873that is to say when 119873 increases by 4 times the resolutioncan be improved by 1 bit In reality the maximum 119873 fallsinto the range of 10000 since it is limited by real-timeperformance and the memory capacity Moreover the fixedpoint mathematics and noise will also lower the highestresolution to some extent Therefore it would better notbe expected that resolution can be improved by over 4to 6 bits It should be also noted that the improvement ofresolution depends on dynamic signals as well For thosesignals whose conversion results always deviate between

different codes resolution can be always improved Forsteady-state signals only when the noise amplitude is morethan 1 or 2 ADC LSBs the improvement of resolution can beobvious Fortunately signals in actual world are always in thecase

Generally speaking when measured signals are charac-terized by single pass or repeat at low speeds conventionalsuccessive capture averaging cannot be adopted and thusaverage-based decimation under oversampling can be usedas an alternative To be specific average-based decimationunder oversampling is especially applicable in the followingtwo situations

Firstly if noise in signals is obviously high (what ismore it is not required to measure noise) average-baseddecimation under oversampling can be adopted to ldquoclearrdquonoise

Secondly average-based decimation can be adopted toimprove the measurement resolution when high-precisionmeasurement of waveforms is required even if the noise insignals is not loud

According to the comparison between (6) and (20) it canbe easily seen that in the conventional successive samplingpoint averaging algorithm the bandwidth is directly propor-tional to actual sampling rate and inversely proportional tothe number of sample points to average When the actualsampling rate is given the bandwidth dramatically decreasesas the number of sample points to average increases How-ever when average-based decimation under oversamplingis adopted bandwidth is only directly proportional to theactual sampling rate but has nothing to do with the numberof sample points to average (oversampling factor) Whenthe actual sampling rate is given the bandwidth is deter-mined accordingly without other additional loss BesidesNyquist frequency increases by119873 times by adopting119873 timesoversamplingTherefore another advantage of average-baseddecimation is reducing aliasing

33 Processing Example A group of sets of sample dataacquired by DSO is used as an example to illustrate theprocessing procedure of the algorithm proposed in the paperThe oscilloscope works at the time base of 500 nsdiv andthe corresponding actual sampling rate is 100MSas Theoscilloscope carries out 10-time oversampling at 1 GSas toobtain 10 original discrete sets of sample data at high samplingrate that is 119909

1 1199092 11990910 which are shown in Table 2 In the

samples there are obvious glitches or gross errors caused byADC quantization errors

The expectation and variance of the 10 sample dataare

120583 = 119909 =1

10

10

sum

119894=1

119909119894= 1136

1205902=1

9

10

sum

119894=1

(119909119894minus 119909)2= 5304889

(22)

Mathematical Problems in Engineering 7

x(n minus 1)x(n minus 2)x(n minus 3)

A3(n minus 1) A3(n) A3(n + 1)

x(n) x(n + 1) x(n + 2) x(n + 3) x(n + 4) x(n + 5)

divide3divide3 divide3

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

+ + +

Output samples

Input samples

Figure 3 Average-based decimation principle of 3 times oversampling

Table 1 Ideal values of improved vertical resolution and bandwidth based on average-based decimation under oversampling

Sampling rate Oversampling factor Total resolution minus3 dB bandwidth1 GSas 1 80 bits 440MHz500MSas 2 85 bits 220MHz250MSas 4 90 bits 110MHz100MSas 10 97 bits 44MHz50MSas 20 102 bits 22MHz25MSas 40 107 bits 11MHz10MSas 100 113 bits 44MHz1MSas 1000 130 bits 440 kHz100 kSas 10000 146 bits 44 kHz

Exclude one repeated data 121 and then constraint con-ditions met by the sample probability distribution are

1198731015840

sum

119894=1

119901 (119909119894) =

9

sum

119894=1

119901 (119909119894) = 1

1198731015840

sum

119894=1

119909119894119901 (119909119894) =

9

sum

119894=1

119909119894119901 (119909119894) = 120583 = 1136

1198731015840

sum

119894=1

(119909119894minus 120583)2119901 (119909119894) =

9

sum

119894=1

(119909119894minus 120583)2119901 (119909119894) = 1205902= 5304889

(23)

According to MEM and Lagrangian function the calcu-lated Lagrangian coefficients are 120582

0= minus43009 120582

1= 001648

and 1205822= 0000452 respectivelyThe expressions of estimated

maximum entropy probability distribution the maximumentropy and self-information quantity are given by

119901 (119909119894)

= exp[

[

1205820+

119898

sum

119895=1

120582119895119892119895(119909119894)]

]

= exp [minus43009 + 001648119909119894+ 0000452(119909

119894minus 1136)

2]

119867(119909)max = minus1205820 minus119898

sum

119895=1

120582119895119864 (119892119895) = 21893

119868 (119909119894) = minus ln119901 (119909

119894) = 43009 minus 001648119909

119894

minus 0000452(119909119894minus 1136)

2

(24)

Corresponding probability and self-information quantityof each sample data are shown in Table 3

8 Mathematical Problems in Engineering

Table 2 10 sets of original sample data obtained by oversampling

Number 1199091

1199092

1199093

1199094

1199095

1199096

1199097

1199098

1199099

11990910

Sample data 120 121 121 123 124 125 63 79 129 131

Table 3 Probability and self-information quantity of each set ofsample data

Number Sampledata Probability Self-information

quantity1199091 120 00998 230481199092 121 01021 228211199093 123 01071 223391199094 124 01099 220851199095 125 01128 218221199096 63 01217 210541199097 79 00856 245791199098 129 01265 206781199099 131 01346 20052

According to the distribution of the maximum entropythe uncertainty of measurement is

119906 = radic

1198731015840

sum

119894=1

(119909119894minus 119909)2119901 (119909119894)

= radic

9

sum

119894=1

(119909119894minus 1136)

2119901 (119909119894) = 23033

(25)

and then confidence interval is

[119909 minus 119906 119909 + 119906] = [90567 136633] (26)

It can be judged that in Table 2 1199097and 119909

8are gross errors

in quantization so they should be excluded from the samplesequence At the same time the repeated sample 119909

3in Table 2

should also be excluded so the remaining 7 sets of effectivesample data form a new sample sequence (ie 119909

1 1199092 1199097)

for data fusion and the calculation results are shown inTable 4

According to (15)ndash(18) the result of data fusion is

119909119891= 124893 (27)

Replace the sample data 1199097and 119909

8in Table 2 with data

fusion result to form a new sample sequence containing 10sets of new sample data without gross errors at high samplingrate that is 1199091015840

1 1199091015840

2 1199091015840

10 as shown in Table 5

Finally average 10 sets of new sample data at highsampling rate (1 GSas) to achieve 1 sampling point at actualsampling rate (100MSas) that is

119860 =1

10

10

sum

119894=1

1199091015840

119894= 1243786 (28)

0 100 200 300 400 500 600 700 800 900 10000

31

63

95

127

159

191

223

255Time domain (original signal)

SamplesD

igita

l out

put c

ode

Figure 4 Time-domain waveform of originally acquired signals at100MSas

When averaging the 10 sets of original sample data(1199091 1199092 11990910) obtained through oversampling directly the

average value is

1198601015840=1

10

10

sum

119894=1

119909119894= 1136 (29)

Else when averaging the remaining 8 sets of sample dataexcluding gross errors 119909

7and 119909

8from 10 sets of sample data

(1199091 1199092 11990910) obtained through oversampling the average

value is

11986010158401015840=1

8(1199091+ 1199092+ 1199093+ 1199094+ 1199095+ 1199096+ 1199099+ 11990910) = 12425

(30)

According to the maximum entropy theory the result1243786 obtained by the algorithm proposed in the paperis the precise measurement of unknown signal obtainedfrom sample data without any subjective hypotheses andconstraints

Similarly based on information entropy theory foreach group of original sample data (119909

1 1199092 11990910) obtained

through oversampling (1 GSas) gross errors excluded datafusing and average-based decimating can be adopted toobtain precise measurement data of a complete waveform atthe actual sampling rate (100MSas)

4 Experiment and Result Analysis

In order to verify the effectiveness and superiority of verticalresolution improved by the algorithm mentioned in the

Mathematical Problems in Engineering 9

Table 4 Fusion weight coefficient of 7 sets of effective sample data

Number Sample data Self-informationquantity

Ratio of informationquantity

Weight coefficient offusion

1199091

120 23048 10528 013501199092

121 22821 10424 013641199093

123 22339 10204 013931199094

124 22085 10088 014091199095

125 21822 09968 014261199096

129 20678 09445 015051199097

131 20051 09159 01552

Table 5 10 New sample data containing data fusion result

Number 1199091015840

11199091015840

21199091015840

31199091015840

41199091015840

51199091015840

61199091015840

71199091015840

81199091015840

91199091015840

10

Sample data 120 121 121 123 124 125 124893 124893 129 131

0 5 10 15 20 25 30 35 40 45 50minus80

minus70

minus60

minus50

minus40

minus30

minus20

minus10

0FFT plot (original signal)

Frequency (MHz)

Am

plitu

de (d

B)

SNR = 280776

Figure 5 Spectrum of originally acquired signals at 100MSas

paper we utilize 8-bit ADC model provided by AnalogDevice corporation to establish the acquisition system ofoscilloscope conduct simulation experiments with conven-tional average algorithm direct decimation algorithmand thealgorithm mentioned in the paper and finally estimate andcompare performances of all algorithms

The oscilloscope works at the time base of 500 nsdivand the corresponding actual sampling rate is 100MSasThefrequency of input sine wave is 119891

119894= 1MSas In order to

simulate quantization errors caused by noise interference andclock jitter data mismatched samples are randomly addedto the ideal ADC sampling model so the acquired samplesequence includes gross errors in quantization

Experiment 1 Sampling rate 119891119904= 100MSas Time-domain

waveform and signal spectrum obtained from sampling

Frequency (MHz)0 5 10 15 20 25 30 35 40 45 50

minus100

minus90

minus80

minus70

minus60

minus50

minus40

minus30

minus20

minus10

0FFT plot (successive capture averaging)A

mpl

itude

(dB)

SNR = 430485

Figure 6 Signal spectrumof successive capture averaging (119873 = 10)

results without any processing are shown in Figures 4 and 5respectively

After calculation in Figure 5 SNR = 280776 dB andENOB = 43717 bits

Experiment 2 Sampling rate 119891119904= 100MSas Signal spec-

trum obtained by applying successive capture averaging tosampling results with119873 = 10 is shown in Figure 6

After calculation in Figure 6 SNR = 430485 dB andENOB = 68585 bits

Experiment 3 Sampling rate 119891119904= 100MSas Signal spec-

trum obtained by adopting successive sample averaging tosampling results with119873 = 10 is shown in Figure 7

After calculation in Figure 7 SNR = 442538 dB andENOB = 70588 bits

10 Mathematical Problems in Engineering

SNR = 442538

Frequency (MHz)0 5 10 15 20 25 30 35 40 45 50

minus100

minus90

minus80

minus70

minus60

minus50

minus40

minus30

minus20

minus10

0FFT plot (successive capture averaging)

Am

plitu

de (d

B)

Figure 7 Signal spectrum of successive sample averaging (119873 = 10)

0 5 10 15 20 25 30 35 40 45 50minus70

minus60

minus50

minus40

minus30

minus20

minus10

0FFT plot (decimation)

Frequency (MHz)

Am

plitu

de (d

B)

SNR = 288353

Figure 8 Signal spectrum of 10 times direct decimation

Experiment 4 Oversampling is adopted with the samplingrate 119891

119904= 1GSas Signal spectrum obtained by directly

applying 10 times decimation to sampling results is shown inFigure 8

After calculation in Figure 8 SNR = 288353 dB andENOB = 44976 bits

Experiment 5 Oversampling is adopted with the samplingrate119891119904= 1GSas Signal spectrum obtained by conducting 10

times average-based decimation on sampling results is shownin Figure 9

Am

plitu

de (d

B)

minus120

minus100

minus80

minus60

minus40

minus20

0FFT plot (decimation with averaging)

SNR = 466242

0 5 10 15 20 25 30 35 40 45 50Frequency (MHz)

Figure 9 Signal spectrum of 10 times average-based decimation

0 100 200 300 400 500 600 700 800 900 10000

31

63

95

127

159

191

223

255Time domain (data fusion and decimation with averaging)

Samples

Dig

ital o

utpu

t cod

e

Figure 10 Time-domain waveform obtained with data fusion and10 times average-based decimation

After calculation in Figure 9 SNR = 466242 dB andENOB = 74525 bits

Experiment 6 Oversampling is adopted with the samplingrate 119891

119904= 1GSas The time-domain waveform and signal

spectrum obtained by adopting information entropy-basedalgorithm mentioned in the paper to sampling results toexcluding gross errors fusing data creating new samplesequence and then conducting 10 times average-based dec-imation are shown in Figures 10 and 11 respectively

After calculation in Figure 11 SNR = 596071 dB andENOB = 96092 bits

Mathematical Problems in Engineering 11

Table 6 Comparisons of experiment results

Experiment number Method SNR (dB) Effective number of bits (bits) minus3 dB bandwidth (MHz)

1 Nonoversamplingoriginal signal 280776 43717 43

2 Nonoversamplingsuccessive capture averaging 430485 68585 43

3 Nonoversamplingsuccessive sample averaging 442538 70588 43

4 Oversampling10 times direct decimation 288353 44976 43

5Oversampling10 times average-baseddecimation

466242 74525 43

6Oversamplinginformation entropy-based datafusion and 10 timesaverage-based decimation

596071 96092 43

Am

plitu

de (d

B)

minus140

minus120

minus100

minus80

minus60

minus40

minus20

0FFT plot (data fusion and decimation with averaging)

SNR = 596071

0 5 10 15 20 25 30 35 40 45 50Frequency (MHz)

Figure 11 Signal spectrum obtained with data Fusion and 10 timesaverage-based decimation

Table 6 compares the experiment results of the above-mentioned 6 methods

According to Table 6 at the sampling rate of 100MSasthe conventional successive capture averaging algorithm andsuccessive sample averaging algorithm increase the ENOBby about 249 and 269 bits when processing the sinusoidalsamples including quantization errors respectively How-ever successive sample averaging algorithm also causes thedecrease of the bandwidth terribly At the sampling rate of1 GSas the ENOBprovided by the average-based decimationalgorithm is about 295 bits higher than that provided by thedirect decimation algorithm On this basis the algorithmof information entropy-based data fusion and average-baseddecimation proposed in the paper can further increaseENOB by about 216 bits to achieve total ENOB of 961 bitsCompared with the theoretical digitalizing bits of 8-bit ADC

the actual ENOB (resolution) has totally increased by about161 bits which is very close to the theoretically improvedresults 119877

119867= 05log

210 asymp 166 bits in (20) and at the same

time no loss of analog bandwidth at the actual sampling rateis caused

5 Conclusion

This paper proposes a decimation filtering algorithm basedon information entropy and average to realize the goal ofraising the vertical resolution of DSO Based on oversamplingand for single acquiring signal utilize the maximum entropyof sample data to eliminate gross error in quantization fusethe remaining efficient sample data and conduct average-based decimation to further filter the noise and then theDSO resolution can be improved In order to verify theeffectiveness and superiority of the algorithm comparisonexperiments are conducted using different algorithms Theresults show that the improved resolution of the algorithmproposed in the paper is nearly identical with the theoreticaldeduction What is more no subjective hypotheses andconstraints on the detected signals are added during thewhole processing and no impacts on the analog bandwidthof DSO at the actual sampling rate are exerted

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

This work was supported by the National Natural ScienceFoundation of China (nos 61301263 and 61301264) theSpecialized Research Fund for the Doctoral Program ofHigher Education of China (no 20120185130002) and theFundamental Research Fund for the Central University ofChina (A03007023801217 and A03008023801080)

12 Mathematical Problems in Engineering

References

[1] Tektronix ldquoImprove the vertical resolution of digital phosphoroscilloscopesrdquo China Electronic Market vol 6 no 1 pp 76ndash842012

[2] R Reeder M Looney and J Hand ldquoPushing the state of theart withmultichannel AD convertersrdquoAnalogDialogue vol 39no 2 pp 7ndash10 2005

[3] E Seifert and A Nauda ldquoEnhancing the dynamic range ofanalog-to-digital converters by reducing excess noiserdquo in Pro-ceedings of the IEEEPacific RIMConference onCommunicationsComputers and Signal Processing pp 574ndash576 Victoria CanadaJune 1989

[4] K C Lauritzen S H Talisa and M Peckerar ldquoImpact ofdecorrelation techniques on sampling noise in radio-frequencyapplicationsrdquo IEEE Transactions on Instrumentation and Mea-surement vol 59 no 9 pp 2272ndash2279 2010

[5] V Gregers-Hansen S M Brockett and P E Cahill ldquoA stackeda-to-d converter for increased radar signal processor dynamicrangerdquo in Proceedings of the IEEE International Radar Confer-ence pp 169ndash174 May 2001

[6] S R Duncan V Gregers-Hansen and J P McConnellldquoA stacked analog-to-digital converter providing 100 dB ofdynamic rangerdquo in Proceedings of the IEEE International RadarConference pp 31ndash36 2005

[7] Silicon Laboratories ldquoImproving ADC resolution byoversampling and averagingrdquo Improving ADC resolutionby oversampling and averaging Application Note AN118 2013httpwwwsilabscomSupport20DocumentsTechnicalDocsan118pdf

[8] Y Lembeye J Pierre Keradec and G Cauffet ldquoImprovementin the linearity of fast digital oscilloscopes used in averagingmoderdquo IEEE Transactions on Instrumentation and Measure-ment vol 43 no 6 pp 922ndash928 1994

[9] C Bishop and C Kung ldquoEffects of averaging to reject unwantedsignals in digital sampling oscilloscopesrdquo in Proceedings of the45 Years of Support Innovation - Moving Forward at the Speed ofLight (AUTOTESTCON rsquo10) pp 1ndash4 IEEE Orlando Fla USASeptember 2010

[10] C Fager and K Andersson ldquoImprovement of oscilloscopebased RF measurements by statistical averaging techniquesrdquoin Proceeding of the IEEE MTT-S International MicrowaveSymposium Digest pp 1460ndash1463 San Francisco Calif USAJune 2006

[11] Z H L Luo Q J Zhang and Q C H Fange ldquoMethod of datafusion applied in intelligent instrumentsrdquo Instrument Techniqueand Sensor vol 3 no 1 pp 45ndash46 2002

[12] F-N Cai and Q-X Liu ldquoSingle sensor data fusion and analysisof effectivenessrdquo Joumal of Transducer Technology vol 24 no 2pp 73ndash74 2005

[13] C E Shannon ldquoAmathematical theory of communicationrdquoTheBell System Technical Journal vol 27 no 3 pp 379ndash423 1948

[14] M J Zhu and B J Guo ldquoStudy on evaluation of measurementresult and uncertainty based on maximum entropy methodrdquoElectrical Measurement amp Instrument vol 42 no 8 pp 5ndash82005

[15] Y Tan A Chu M Lu and B T Cunningham ldquoDistributedfeedback laser biosensor noise reductionrdquo IEEE Sensors Journalvol 13 no 5 pp 1972ndash1978 2013

[16] E T Jaynes ldquoInformation theory and statistical mechanicsrdquoThePhysical Review vol 106 no 4 pp 620ndash630 1957

[17] MCThomas and J AThomasElements of InformationTheoryJohn Wiley amp Sons New York NY USA 2nd edition 2006

[18] DH Li and ZH Li ldquoProcessing of gross error in small samplesbased on measurement information theoryrdquo Mechanical Engi-neering amp Aut omation vol 6 no 1 pp 115ndash117 2009

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 7: Research Article Information Entropy- and Average-Based ...

Mathematical Problems in Engineering 7

x(n minus 1)x(n minus 2)x(n minus 3)

A3(n minus 1) A3(n) A3(n + 1)

x(n) x(n + 1) x(n + 2) x(n + 3) x(n + 4) x(n + 5)

divide3divide3 divide3

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

middot middot middot

+ + +

Output samples

Input samples

Figure 3 Average-based decimation principle of 3 times oversampling

Table 1 Ideal values of improved vertical resolution and bandwidth based on average-based decimation under oversampling

Sampling rate Oversampling factor Total resolution minus3 dB bandwidth1 GSas 1 80 bits 440MHz500MSas 2 85 bits 220MHz250MSas 4 90 bits 110MHz100MSas 10 97 bits 44MHz50MSas 20 102 bits 22MHz25MSas 40 107 bits 11MHz10MSas 100 113 bits 44MHz1MSas 1000 130 bits 440 kHz100 kSas 10000 146 bits 44 kHz

Exclude one repeated data 121 and then constraint con-ditions met by the sample probability distribution are

1198731015840

sum

119894=1

119901 (119909119894) =

9

sum

119894=1

119901 (119909119894) = 1

1198731015840

sum

119894=1

119909119894119901 (119909119894) =

9

sum

119894=1

119909119894119901 (119909119894) = 120583 = 1136

1198731015840

sum

119894=1

(119909119894minus 120583)2119901 (119909119894) =

9

sum

119894=1

(119909119894minus 120583)2119901 (119909119894) = 1205902= 5304889

(23)

According to MEM and Lagrangian function the calcu-lated Lagrangian coefficients are 120582

0= minus43009 120582

1= 001648

and 1205822= 0000452 respectivelyThe expressions of estimated

maximum entropy probability distribution the maximumentropy and self-information quantity are given by

119901 (119909119894)

= exp[

[

1205820+

119898

sum

119895=1

120582119895119892119895(119909119894)]

]

= exp [minus43009 + 001648119909119894+ 0000452(119909

119894minus 1136)

2]

119867(119909)max = minus1205820 minus119898

sum

119895=1

120582119895119864 (119892119895) = 21893

119868 (119909119894) = minus ln119901 (119909

119894) = 43009 minus 001648119909

119894

minus 0000452(119909119894minus 1136)

2

(24)

Corresponding probability and self-information quantityof each sample data are shown in Table 3

8 Mathematical Problems in Engineering

Table 2 10 sets of original sample data obtained by oversampling

Number 1199091

1199092

1199093

1199094

1199095

1199096

1199097

1199098

1199099

11990910

Sample data 120 121 121 123 124 125 63 79 129 131

Table 3 Probability and self-information quantity of each set ofsample data

Number Sampledata Probability Self-information

quantity1199091 120 00998 230481199092 121 01021 228211199093 123 01071 223391199094 124 01099 220851199095 125 01128 218221199096 63 01217 210541199097 79 00856 245791199098 129 01265 206781199099 131 01346 20052

According to the distribution of the maximum entropythe uncertainty of measurement is

119906 = radic

1198731015840

sum

119894=1

(119909119894minus 119909)2119901 (119909119894)

= radic

9

sum

119894=1

(119909119894minus 1136)

2119901 (119909119894) = 23033

(25)

and then confidence interval is

[119909 minus 119906 119909 + 119906] = [90567 136633] (26)

It can be judged that in Table 2 1199097and 119909

8are gross errors

in quantization so they should be excluded from the samplesequence At the same time the repeated sample 119909

3in Table 2

should also be excluded so the remaining 7 sets of effectivesample data form a new sample sequence (ie 119909

1 1199092 1199097)

for data fusion and the calculation results are shown inTable 4

According to (15)ndash(18) the result of data fusion is

119909119891= 124893 (27)

Replace the sample data 1199097and 119909

8in Table 2 with data

fusion result to form a new sample sequence containing 10sets of new sample data without gross errors at high samplingrate that is 1199091015840

1 1199091015840

2 1199091015840

10 as shown in Table 5

Finally average 10 sets of new sample data at highsampling rate (1 GSas) to achieve 1 sampling point at actualsampling rate (100MSas) that is

119860 =1

10

10

sum

119894=1

1199091015840

119894= 1243786 (28)

0 100 200 300 400 500 600 700 800 900 10000

31

63

95

127

159

191

223

255Time domain (original signal)

SamplesD

igita

l out

put c

ode

Figure 4 Time-domain waveform of originally acquired signals at100MSas

When averaging the 10 sets of original sample data(1199091 1199092 11990910) obtained through oversampling directly the

average value is

1198601015840=1

10

10

sum

119894=1

119909119894= 1136 (29)

Else when averaging the remaining 8 sets of sample dataexcluding gross errors 119909

7and 119909

8from 10 sets of sample data

(1199091 1199092 11990910) obtained through oversampling the average

value is

11986010158401015840=1

8(1199091+ 1199092+ 1199093+ 1199094+ 1199095+ 1199096+ 1199099+ 11990910) = 12425

(30)

According to the maximum entropy theory the result1243786 obtained by the algorithm proposed in the paperis the precise measurement of unknown signal obtainedfrom sample data without any subjective hypotheses andconstraints

Similarly based on information entropy theory foreach group of original sample data (119909

1 1199092 11990910) obtained

through oversampling (1 GSas) gross errors excluded datafusing and average-based decimating can be adopted toobtain precise measurement data of a complete waveform atthe actual sampling rate (100MSas)

4 Experiment and Result Analysis

In order to verify the effectiveness and superiority of verticalresolution improved by the algorithm mentioned in the

Mathematical Problems in Engineering 9

Table 4 Fusion weight coefficient of 7 sets of effective sample data

Number Sample data Self-informationquantity

Ratio of informationquantity

Weight coefficient offusion

1199091

120 23048 10528 013501199092

121 22821 10424 013641199093

123 22339 10204 013931199094

124 22085 10088 014091199095

125 21822 09968 014261199096

129 20678 09445 015051199097

131 20051 09159 01552

Table 5: 10 new sample data containing the data fusion result.

Number        x'1   x'2   x'3   x'4   x'5   x'6   x'7       x'8       x'9   x'10
Sample data   120   121   121   123   124   125   124.893   124.893   129   131

Figure 5: Spectrum of originally acquired signals at 100 MSa/s (FFT plot, SNR = 28.0776 dB).

In order to verify the effectiveness and superiority of the vertical resolution improvement achieved by the algorithm proposed in this paper, we utilize an 8-bit ADC model provided by Analog Devices to establish the acquisition system of the oscilloscope, conduct simulation experiments with the conventional averaging algorithms, the direct decimation algorithm, and the proposed algorithm, and finally evaluate and compare the performance of all the algorithms.

The oscilloscope works at a time base of 500 ns/div, and the corresponding actual sampling rate is 100 MSa/s. The frequency of the input sine wave is f_i = 1 MHz. In order to simulate quantization errors caused by noise interference and clock jitter, mismatched samples are randomly added to the ideal ADC sampling model, so the acquired sample sequence includes gross errors in quantization.
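The paper does not state how the mismatched samples are injected, so the following fragment is only one plausible setup for such a simulation, assuming an ideal 8-bit quantizer, a 1 MHz sine sampled at 1 GSa/s, and a 2% fraction of uniformly random outlier codes; all of these specifics are assumptions made for illustration.

    import numpy as np

    # Hypothetical simulation setup: ideal 8-bit quantization of a 1 MHz sine
    # sampled at 1 GSa/s, with randomly placed gross errors ("mismatched samples").
    fs, f_in, n_bits, n_samples = 1e9, 1e6, 8, 10_000
    t = np.arange(n_samples) / fs
    codes = np.round((0.5 * np.sin(2 * np.pi * f_in * t) + 0.5) * (2**n_bits - 1)).astype(int)

    rng = np.random.default_rng(0)
    hit = rng.choice(n_samples, size=int(0.02 * n_samples), replace=False)  # assumed 2% outliers
    codes[hit] = rng.integers(0, 2**n_bits, size=hit.size)                  # random wrong codes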

Experiment 1. Sampling rate f_s = 100 MSa/s. The time-domain waveform and signal spectrum obtained from the sampling results without any processing are shown in Figures 4 and 5, respectively.

Figure 6: Signal spectrum of successive capture averaging (N = 10), SNR = 43.0485 dB.

Calculated from Figure 5, SNR = 28.0776 dB and ENOB = 4.3717 bits.

Experiment 2. Sampling rate f_s = 100 MSa/s. The signal spectrum obtained by applying successive capture averaging to the sampling results with N = 10 is shown in Figure 6.

Calculated from Figure 6, SNR = 43.0485 dB and ENOB = 6.8585 bits.

Experiment 3. Sampling rate f_s = 100 MSa/s. The signal spectrum obtained by applying successive sample averaging to the sampling results with N = 10 is shown in Figure 7.

Calculated from Figure 7, SNR = 44.2538 dB and ENOB = 7.0588 bits.
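The two conventional averaging modes compared in Experiments 2 and 3 are defined earlier in the paper; assuming their usual meanings (capture averaging combines N repeated acquisitions point by point, whereas sample averaging is a moving average over N adjacent samples of a single acquisition, which is why it costs bandwidth), a minimal sketch of the difference is:

    import numpy as np

    def successive_capture_averaging(captures):
        """Average N repeated acquisitions point by point (requires a repetitive signal)."""
        return np.mean(np.asarray(captures), axis=0)

    def successive_sample_averaging(samples, n=10):
        """Moving average over n adjacent samples of one acquisition (reduces bandwidth)."""
        kernel = np.ones(n) / n
        return np.convolve(samples, kernel, mode="same")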


Figure 7: Signal spectrum of successive sample averaging (N = 10), SNR = 44.2538 dB.

Figure 8: Signal spectrum of 10 times direct decimation, SNR = 28.8353 dB.

Experiment 4. Oversampling is adopted with the sampling rate f_s = 1 GSa/s. The signal spectrum obtained by directly applying 10 times decimation to the sampling results is shown in Figure 8.

Calculated from Figure 8, SNR = 28.8353 dB and ENOB = 4.4976 bits.

Experiment 5. Oversampling is adopted with the sampling rate f_s = 1 GSa/s. The signal spectrum obtained by conducting 10 times average-based decimation on the sampling results is shown in Figure 9.

Figure 9: Signal spectrum of 10 times average-based decimation, SNR = 46.6242 dB.

Figure 10: Time-domain waveform obtained with data fusion and 10 times average-based decimation.

Calculated from Figure 9, SNR = 46.6242 dB and ENOB = 7.4525 bits.

Experiment 6. Oversampling is adopted with the sampling rate f_s = 1 GSa/s. The time-domain waveform and signal spectrum obtained by applying the information entropy-based algorithm proposed in this paper to the sampling results (excluding gross errors, fusing data, creating a new sample sequence, and then conducting 10 times average-based decimation) are shown in Figures 10 and 11, respectively.

Calculated from Figure 11, SNR = 59.6071 dB and ENOB = 9.6092 bits.


Table 6: Comparison of experiment results.

Experiment number   Method                                                                                        SNR (dB)   Effective number of bits (bits)   -3 dB bandwidth (MHz)
1                   Nonoversampling, original signal                                                              28.0776    4.3717                            43
2                   Nonoversampling, successive capture averaging                                                 43.0485    6.8585                            43
3                   Nonoversampling, successive sample averaging                                                  44.2538    7.0588                            4.3
4                   Oversampling, 10 times direct decimation                                                      28.8353    4.4976                            43
5                   Oversampling, 10 times average-based decimation                                               46.6242    7.4525                            43
6                   Oversampling, information entropy-based data fusion and 10 times average-based decimation     59.6071    9.6092                            43

Figure 11: Signal spectrum obtained with data fusion and 10 times average-based decimation, SNR = 59.6071 dB.

Table 6 compares the experimental results of the six methods described above.

According to Table 6, at the sampling rate of 100 MSa/s, the conventional successive capture averaging and successive sample averaging algorithms increase the ENOB by about 2.49 and 2.69 bits, respectively, when processing the sinusoidal samples containing quantization errors. However, successive sample averaging also reduces the bandwidth severely. At the sampling rate of 1 GSa/s, the ENOB provided by the average-based decimation algorithm is about 2.95 bits higher than that provided by the direct decimation algorithm. On this basis, the information entropy-based data fusion and average-based decimation algorithm proposed in this paper further increases the ENOB by about 2.16 bits, reaching a total ENOB of 9.61 bits. Compared with the theoretical 8 digitalizing bits of the ADC, the actual ENOB (resolution) has increased by about 1.61 bits in total, which is very close to the theoretical improvement R_H = 0.5 log2(10) ≈ 1.66 bits in (20), and at the same time no loss of analog bandwidth at the actual sampling rate is incurred.
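For reference, the ENOB values in Table 6 follow from the reported SNR figures through the standard relation ENOB = (SNR - 1.76)/6.02; this conversion is common practice rather than something introduced by the paper. A quick numerical check:

    import math

    def enob(snr_db):
        """Standard conversion from SNR (dB) to effective number of bits."""
        return (snr_db - 1.76) / 6.02

    # Reproduces the ENOB column of Table 6 and the theoretical averaging gain.
    for snr in (28.0776, 43.0485, 44.2538, 28.8353, 46.6242, 59.6071):
        print(f"SNR = {snr:7.4f} dB  ->  ENOB = {enob(snr):.4f} bits")
    print("theoretical gain for 10-times averaging:", 0.5 * math.log2(10), "bits")  # ~1.66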

5. Conclusion

This paper proposes a decimation filtering algorithm based on information entropy and averaging to raise the vertical resolution of a DSO. Based on oversampling, and for a single acquired signal, the maximum entropy of the sample data is used to eliminate gross errors in quantization, the remaining effective sample data are fused, and average-based decimation is then conducted to further filter the noise, thereby improving the DSO resolution. In order to verify the effectiveness and superiority of the algorithm, comparison experiments are conducted with different algorithms. The results show that the resolution improvement of the proposed algorithm agrees closely with the theoretical deduction. Moreover, no subjective hypotheses or constraints on the detected signals are introduced during the whole processing, and the analog bandwidth of the DSO at the actual sampling rate is not affected.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (nos. 61301263 and 61301264), the Specialized Research Fund for the Doctoral Program of Higher Education of China (no. 20120185130002), and the Fundamental Research Fund for the Central University of China (A03007023801217 and A03008023801080).


References

[1] Tektronix, "Improve the vertical resolution of digital phosphor oscilloscopes," China Electronic Market, vol. 6, no. 1, pp. 76–84, 2012.

[2] R. Reeder, M. Looney, and J. Hand, "Pushing the state of the art with multichannel A/D converters," Analog Dialogue, vol. 39, no. 2, pp. 7–10, 2005.

[3] E. Seifert and A. Nauda, "Enhancing the dynamic range of analog-to-digital converters by reducing excess noise," in Proceedings of the IEEE Pacific RIM Conference on Communications, Computers and Signal Processing, pp. 574–576, Victoria, Canada, June 1989.

[4] K. C. Lauritzen, S. H. Talisa, and M. Peckerar, "Impact of decorrelation techniques on sampling noise in radio-frequency applications," IEEE Transactions on Instrumentation and Measurement, vol. 59, no. 9, pp. 2272–2279, 2010.

[5] V. Gregers-Hansen, S. M. Brockett, and P. E. Cahill, "A stacked A-to-D converter for increased radar signal processor dynamic range," in Proceedings of the IEEE International Radar Conference, pp. 169–174, May 2001.

[6] S. R. Duncan, V. Gregers-Hansen, and J. P. McConnell, "A stacked analog-to-digital converter providing 100 dB of dynamic range," in Proceedings of the IEEE International Radar Conference, pp. 31–36, 2005.

[7] Silicon Laboratories, "Improving ADC resolution by oversampling and averaging," Application Note AN118, 2013, http://www.silabs.com/Support%20Documents/TechnicalDocs/an118.pdf.

[8] Y. Lembeye, J. Pierre Keradec, and G. Cauffet, "Improvement in the linearity of fast digital oscilloscopes used in averaging mode," IEEE Transactions on Instrumentation and Measurement, vol. 43, no. 6, pp. 922–928, 1994.

[9] C. Bishop and C. Kung, "Effects of averaging to reject unwanted signals in digital sampling oscilloscopes," in Proceedings of the 45 Years of Support Innovation - Moving Forward at the Speed of Light (AUTOTESTCON '10), pp. 1–4, IEEE, Orlando, Fla, USA, September 2010.

[10] C. Fager and K. Andersson, "Improvement of oscilloscope based RF measurements by statistical averaging techniques," in Proceedings of the IEEE MTT-S International Microwave Symposium Digest, pp. 1460–1463, San Francisco, Calif, USA, June 2006.

[11] Z. H. L. Luo, Q. J. Zhang, and Q. C. H. Fange, "Method of data fusion applied in intelligent instruments," Instrument Technique and Sensor, vol. 3, no. 1, pp. 45–46, 2002.

[12] F.-N. Cai and Q.-X. Liu, "Single sensor data fusion and analysis of effectiveness," Journal of Transducer Technology, vol. 24, no. 2, pp. 73–74, 2005.

[13] C. E. Shannon, "A mathematical theory of communication," The Bell System Technical Journal, vol. 27, no. 3, pp. 379–423, 1948.

[14] M. J. Zhu and B. J. Guo, "Study on evaluation of measurement result and uncertainty based on maximum entropy method," Electrical Measurement & Instrument, vol. 42, no. 8, pp. 5–8, 2005.

[15] Y. Tan, A. Chu, M. Lu, and B. T. Cunningham, "Distributed feedback laser biosensor noise reduction," IEEE Sensors Journal, vol. 13, no. 5, pp. 1972–1978, 2013.

[16] E. T. Jaynes, "Information theory and statistical mechanics," The Physical Review, vol. 106, no. 4, pp. 620–630, 1957.

[17] T. M. Cover and J. A. Thomas, Elements of Information Theory, John Wiley & Sons, New York, NY, USA, 2nd edition, 2006.

[18] D. H. Li and Z. H. Li, "Processing of gross error in small samples based on measurement information theory," Mechanical Engineering & Automation, vol. 6, no. 1, pp. 115–117, 2009.

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 8: Research Article Information Entropy- and Average-Based ...

8 Mathematical Problems in Engineering

Table 2 10 sets of original sample data obtained by oversampling

Number 1199091

1199092

1199093

1199094

1199095

1199096

1199097

1199098

1199099

11990910

Sample data 120 121 121 123 124 125 63 79 129 131

Table 3 Probability and self-information quantity of each set ofsample data

Number Sampledata Probability Self-information

quantity1199091 120 00998 230481199092 121 01021 228211199093 123 01071 223391199094 124 01099 220851199095 125 01128 218221199096 63 01217 210541199097 79 00856 245791199098 129 01265 206781199099 131 01346 20052

According to the distribution of the maximum entropythe uncertainty of measurement is

119906 = radic

1198731015840

sum

119894=1

(119909119894minus 119909)2119901 (119909119894)

= radic

9

sum

119894=1

(119909119894minus 1136)

2119901 (119909119894) = 23033

(25)

and then confidence interval is

[119909 minus 119906 119909 + 119906] = [90567 136633] (26)

It can be judged that in Table 2 1199097and 119909

8are gross errors

in quantization so they should be excluded from the samplesequence At the same time the repeated sample 119909

3in Table 2

should also be excluded so the remaining 7 sets of effectivesample data form a new sample sequence (ie 119909

1 1199092 1199097)

for data fusion and the calculation results are shown inTable 4

According to (15)ndash(18) the result of data fusion is

119909119891= 124893 (27)

Replace the sample data 1199097and 119909

8in Table 2 with data

fusion result to form a new sample sequence containing 10sets of new sample data without gross errors at high samplingrate that is 1199091015840

1 1199091015840

2 1199091015840

10 as shown in Table 5

Finally average 10 sets of new sample data at highsampling rate (1 GSas) to achieve 1 sampling point at actualsampling rate (100MSas) that is

119860 =1

10

10

sum

119894=1

1199091015840

119894= 1243786 (28)

0 100 200 300 400 500 600 700 800 900 10000

31

63

95

127

159

191

223

255Time domain (original signal)

SamplesD

igita

l out

put c

ode

Figure 4 Time-domain waveform of originally acquired signals at100MSas

When averaging the 10 sets of original sample data(1199091 1199092 11990910) obtained through oversampling directly the

average value is

1198601015840=1

10

10

sum

119894=1

119909119894= 1136 (29)

Else when averaging the remaining 8 sets of sample dataexcluding gross errors 119909

7and 119909

8from 10 sets of sample data

(1199091 1199092 11990910) obtained through oversampling the average

value is

11986010158401015840=1

8(1199091+ 1199092+ 1199093+ 1199094+ 1199095+ 1199096+ 1199099+ 11990910) = 12425

(30)

According to the maximum entropy theory the result1243786 obtained by the algorithm proposed in the paperis the precise measurement of unknown signal obtainedfrom sample data without any subjective hypotheses andconstraints

Similarly based on information entropy theory foreach group of original sample data (119909

1 1199092 11990910) obtained

through oversampling (1 GSas) gross errors excluded datafusing and average-based decimating can be adopted toobtain precise measurement data of a complete waveform atthe actual sampling rate (100MSas)

4 Experiment and Result Analysis

In order to verify the effectiveness and superiority of verticalresolution improved by the algorithm mentioned in the

Mathematical Problems in Engineering 9

Table 4 Fusion weight coefficient of 7 sets of effective sample data

Number Sample data Self-informationquantity

Ratio of informationquantity

Weight coefficient offusion

1199091

120 23048 10528 013501199092

121 22821 10424 013641199093

123 22339 10204 013931199094

124 22085 10088 014091199095

125 21822 09968 014261199096

129 20678 09445 015051199097

131 20051 09159 01552

Table 5 10 New sample data containing data fusion result

Number 1199091015840

11199091015840

21199091015840

31199091015840

41199091015840

51199091015840

61199091015840

71199091015840

81199091015840

91199091015840

10

Sample data 120 121 121 123 124 125 124893 124893 129 131

0 5 10 15 20 25 30 35 40 45 50minus80

minus70

minus60

minus50

minus40

minus30

minus20

minus10

0FFT plot (original signal)

Frequency (MHz)

Am

plitu

de (d

B)

SNR = 280776

Figure 5 Spectrum of originally acquired signals at 100MSas

paper we utilize 8-bit ADC model provided by AnalogDevice corporation to establish the acquisition system ofoscilloscope conduct simulation experiments with conven-tional average algorithm direct decimation algorithmand thealgorithm mentioned in the paper and finally estimate andcompare performances of all algorithms

The oscilloscope works at the time base of 500 nsdivand the corresponding actual sampling rate is 100MSasThefrequency of input sine wave is 119891

119894= 1MSas In order to

simulate quantization errors caused by noise interference andclock jitter data mismatched samples are randomly addedto the ideal ADC sampling model so the acquired samplesequence includes gross errors in quantization

Experiment 1 Sampling rate 119891119904= 100MSas Time-domain

waveform and signal spectrum obtained from sampling

Frequency (MHz)0 5 10 15 20 25 30 35 40 45 50

minus100

minus90

minus80

minus70

minus60

minus50

minus40

minus30

minus20

minus10

0FFT plot (successive capture averaging)A

mpl

itude

(dB)

SNR = 430485

Figure 6 Signal spectrumof successive capture averaging (119873 = 10)

results without any processing are shown in Figures 4 and 5respectively

After calculation in Figure 5 SNR = 280776 dB andENOB = 43717 bits

Experiment 2 Sampling rate 119891119904= 100MSas Signal spec-

trum obtained by applying successive capture averaging tosampling results with119873 = 10 is shown in Figure 6

After calculation in Figure 6 SNR = 430485 dB andENOB = 68585 bits

Experiment 3 Sampling rate 119891119904= 100MSas Signal spec-

trum obtained by adopting successive sample averaging tosampling results with119873 = 10 is shown in Figure 7

After calculation in Figure 7 SNR = 442538 dB andENOB = 70588 bits

10 Mathematical Problems in Engineering

SNR = 442538

Frequency (MHz)0 5 10 15 20 25 30 35 40 45 50

minus100

minus90

minus80

minus70

minus60

minus50

minus40

minus30

minus20

minus10

0FFT plot (successive capture averaging)

Am

plitu

de (d

B)

Figure 7 Signal spectrum of successive sample averaging (119873 = 10)

0 5 10 15 20 25 30 35 40 45 50minus70

minus60

minus50

minus40

minus30

minus20

minus10

0FFT plot (decimation)

Frequency (MHz)

Am

plitu

de (d

B)

SNR = 288353

Figure 8 Signal spectrum of 10 times direct decimation

Experiment 4 Oversampling is adopted with the samplingrate 119891

119904= 1GSas Signal spectrum obtained by directly

applying 10 times decimation to sampling results is shown inFigure 8

After calculation in Figure 8 SNR = 288353 dB andENOB = 44976 bits

Experiment 5 Oversampling is adopted with the samplingrate119891119904= 1GSas Signal spectrum obtained by conducting 10

times average-based decimation on sampling results is shownin Figure 9

Am

plitu

de (d

B)

minus120

minus100

minus80

minus60

minus40

minus20

0FFT plot (decimation with averaging)

SNR = 466242

0 5 10 15 20 25 30 35 40 45 50Frequency (MHz)

Figure 9 Signal spectrum of 10 times average-based decimation

0 100 200 300 400 500 600 700 800 900 10000

31

63

95

127

159

191

223

255Time domain (data fusion and decimation with averaging)

Samples

Dig

ital o

utpu

t cod

e

Figure 10 Time-domain waveform obtained with data fusion and10 times average-based decimation

After calculation in Figure 9 SNR = 466242 dB andENOB = 74525 bits

Experiment 6 Oversampling is adopted with the samplingrate 119891

119904= 1GSas The time-domain waveform and signal

spectrum obtained by adopting information entropy-basedalgorithm mentioned in the paper to sampling results toexcluding gross errors fusing data creating new samplesequence and then conducting 10 times average-based dec-imation are shown in Figures 10 and 11 respectively

After calculation in Figure 11 SNR = 596071 dB andENOB = 96092 bits

Mathematical Problems in Engineering 11

Table 6 Comparisons of experiment results

Experiment number Method SNR (dB) Effective number of bits (bits) minus3 dB bandwidth (MHz)

1 Nonoversamplingoriginal signal 280776 43717 43

2 Nonoversamplingsuccessive capture averaging 430485 68585 43

3 Nonoversamplingsuccessive sample averaging 442538 70588 43

4 Oversampling10 times direct decimation 288353 44976 43

5Oversampling10 times average-baseddecimation

466242 74525 43

6Oversamplinginformation entropy-based datafusion and 10 timesaverage-based decimation

596071 96092 43

Am

plitu

de (d

B)

minus140

minus120

minus100

minus80

minus60

minus40

minus20

0FFT plot (data fusion and decimation with averaging)

SNR = 596071

0 5 10 15 20 25 30 35 40 45 50Frequency (MHz)

Figure 11 Signal spectrum obtained with data Fusion and 10 timesaverage-based decimation

Table 6 compares the experiment results of the above-mentioned 6 methods

According to Table 6 at the sampling rate of 100MSasthe conventional successive capture averaging algorithm andsuccessive sample averaging algorithm increase the ENOBby about 249 and 269 bits when processing the sinusoidalsamples including quantization errors respectively How-ever successive sample averaging algorithm also causes thedecrease of the bandwidth terribly At the sampling rate of1 GSas the ENOBprovided by the average-based decimationalgorithm is about 295 bits higher than that provided by thedirect decimation algorithm On this basis the algorithmof information entropy-based data fusion and average-baseddecimation proposed in the paper can further increaseENOB by about 216 bits to achieve total ENOB of 961 bitsCompared with the theoretical digitalizing bits of 8-bit ADC

the actual ENOB (resolution) has totally increased by about161 bits which is very close to the theoretically improvedresults 119877

119867= 05log

210 asymp 166 bits in (20) and at the same

time no loss of analog bandwidth at the actual sampling rateis caused

5 Conclusion

This paper proposes a decimation filtering algorithm basedon information entropy and average to realize the goal ofraising the vertical resolution of DSO Based on oversamplingand for single acquiring signal utilize the maximum entropyof sample data to eliminate gross error in quantization fusethe remaining efficient sample data and conduct average-based decimation to further filter the noise and then theDSO resolution can be improved In order to verify theeffectiveness and superiority of the algorithm comparisonexperiments are conducted using different algorithms Theresults show that the improved resolution of the algorithmproposed in the paper is nearly identical with the theoreticaldeduction What is more no subjective hypotheses andconstraints on the detected signals are added during thewhole processing and no impacts on the analog bandwidthof DSO at the actual sampling rate are exerted

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

This work was supported by the National Natural ScienceFoundation of China (nos 61301263 and 61301264) theSpecialized Research Fund for the Doctoral Program ofHigher Education of China (no 20120185130002) and theFundamental Research Fund for the Central University ofChina (A03007023801217 and A03008023801080)

12 Mathematical Problems in Engineering

References

[1] Tektronix ldquoImprove the vertical resolution of digital phosphoroscilloscopesrdquo China Electronic Market vol 6 no 1 pp 76ndash842012

[2] R Reeder M Looney and J Hand ldquoPushing the state of theart withmultichannel AD convertersrdquoAnalogDialogue vol 39no 2 pp 7ndash10 2005

[3] E Seifert and A Nauda ldquoEnhancing the dynamic range ofanalog-to-digital converters by reducing excess noiserdquo in Pro-ceedings of the IEEEPacific RIMConference onCommunicationsComputers and Signal Processing pp 574ndash576 Victoria CanadaJune 1989

[4] K C Lauritzen S H Talisa and M Peckerar ldquoImpact ofdecorrelation techniques on sampling noise in radio-frequencyapplicationsrdquo IEEE Transactions on Instrumentation and Mea-surement vol 59 no 9 pp 2272ndash2279 2010

[5] V Gregers-Hansen S M Brockett and P E Cahill ldquoA stackeda-to-d converter for increased radar signal processor dynamicrangerdquo in Proceedings of the IEEE International Radar Confer-ence pp 169ndash174 May 2001

[6] S R Duncan V Gregers-Hansen and J P McConnellldquoA stacked analog-to-digital converter providing 100 dB ofdynamic rangerdquo in Proceedings of the IEEE International RadarConference pp 31ndash36 2005

[7] Silicon Laboratories ldquoImproving ADC resolution byoversampling and averagingrdquo Improving ADC resolutionby oversampling and averaging Application Note AN118 2013httpwwwsilabscomSupport20DocumentsTechnicalDocsan118pdf

[8] Y Lembeye J Pierre Keradec and G Cauffet ldquoImprovementin the linearity of fast digital oscilloscopes used in averagingmoderdquo IEEE Transactions on Instrumentation and Measure-ment vol 43 no 6 pp 922ndash928 1994

[9] C Bishop and C Kung ldquoEffects of averaging to reject unwantedsignals in digital sampling oscilloscopesrdquo in Proceedings of the45 Years of Support Innovation - Moving Forward at the Speed ofLight (AUTOTESTCON rsquo10) pp 1ndash4 IEEE Orlando Fla USASeptember 2010

[10] C Fager and K Andersson ldquoImprovement of oscilloscopebased RF measurements by statistical averaging techniquesrdquoin Proceeding of the IEEE MTT-S International MicrowaveSymposium Digest pp 1460ndash1463 San Francisco Calif USAJune 2006

[11] Z H L Luo Q J Zhang and Q C H Fange ldquoMethod of datafusion applied in intelligent instrumentsrdquo Instrument Techniqueand Sensor vol 3 no 1 pp 45ndash46 2002

[12] F-N Cai and Q-X Liu ldquoSingle sensor data fusion and analysisof effectivenessrdquo Joumal of Transducer Technology vol 24 no 2pp 73ndash74 2005

[13] C E Shannon ldquoAmathematical theory of communicationrdquoTheBell System Technical Journal vol 27 no 3 pp 379ndash423 1948

[14] M J Zhu and B J Guo ldquoStudy on evaluation of measurementresult and uncertainty based on maximum entropy methodrdquoElectrical Measurement amp Instrument vol 42 no 8 pp 5ndash82005

[15] Y Tan A Chu M Lu and B T Cunningham ldquoDistributedfeedback laser biosensor noise reductionrdquo IEEE Sensors Journalvol 13 no 5 pp 1972ndash1978 2013

[16] E T Jaynes ldquoInformation theory and statistical mechanicsrdquoThePhysical Review vol 106 no 4 pp 620ndash630 1957

[17] MCThomas and J AThomasElements of InformationTheoryJohn Wiley amp Sons New York NY USA 2nd edition 2006

[18] DH Li and ZH Li ldquoProcessing of gross error in small samplesbased on measurement information theoryrdquo Mechanical Engi-neering amp Aut omation vol 6 no 1 pp 115ndash117 2009

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 9: Research Article Information Entropy- and Average-Based ...

Mathematical Problems in Engineering 9

Table 4 Fusion weight coefficient of 7 sets of effective sample data

Number Sample data Self-informationquantity

Ratio of informationquantity

Weight coefficient offusion

1199091

120 23048 10528 013501199092

121 22821 10424 013641199093

123 22339 10204 013931199094

124 22085 10088 014091199095

125 21822 09968 014261199096

129 20678 09445 015051199097

131 20051 09159 01552

Table 5 10 New sample data containing data fusion result

Number 1199091015840

11199091015840

21199091015840

31199091015840

41199091015840

51199091015840

61199091015840

71199091015840

81199091015840

91199091015840

10

Sample data 120 121 121 123 124 125 124893 124893 129 131

0 5 10 15 20 25 30 35 40 45 50minus80

minus70

minus60

minus50

minus40

minus30

minus20

minus10

0FFT plot (original signal)

Frequency (MHz)

Am

plitu

de (d

B)

SNR = 280776

Figure 5 Spectrum of originally acquired signals at 100MSas

paper we utilize 8-bit ADC model provided by AnalogDevice corporation to establish the acquisition system ofoscilloscope conduct simulation experiments with conven-tional average algorithm direct decimation algorithmand thealgorithm mentioned in the paper and finally estimate andcompare performances of all algorithms

The oscilloscope works at the time base of 500 nsdivand the corresponding actual sampling rate is 100MSasThefrequency of input sine wave is 119891

119894= 1MSas In order to

simulate quantization errors caused by noise interference andclock jitter data mismatched samples are randomly addedto the ideal ADC sampling model so the acquired samplesequence includes gross errors in quantization

Experiment 1 Sampling rate 119891119904= 100MSas Time-domain

waveform and signal spectrum obtained from sampling

Frequency (MHz)0 5 10 15 20 25 30 35 40 45 50

minus100

minus90

minus80

minus70

minus60

minus50

minus40

minus30

minus20

minus10

0FFT plot (successive capture averaging)A

mpl

itude

(dB)

SNR = 430485

Figure 6 Signal spectrumof successive capture averaging (119873 = 10)

results without any processing are shown in Figures 4 and 5respectively

After calculation in Figure 5 SNR = 280776 dB andENOB = 43717 bits

Experiment 2 Sampling rate 119891119904= 100MSas Signal spec-

trum obtained by applying successive capture averaging tosampling results with119873 = 10 is shown in Figure 6

After calculation in Figure 6 SNR = 430485 dB andENOB = 68585 bits

Experiment 3 Sampling rate 119891119904= 100MSas Signal spec-

trum obtained by adopting successive sample averaging tosampling results with119873 = 10 is shown in Figure 7

After calculation in Figure 7 SNR = 442538 dB andENOB = 70588 bits

10 Mathematical Problems in Engineering

SNR = 442538

Frequency (MHz)0 5 10 15 20 25 30 35 40 45 50

minus100

minus90

minus80

minus70

minus60

minus50

minus40

minus30

minus20

minus10

0FFT plot (successive capture averaging)

Am

plitu

de (d

B)

Figure 7 Signal spectrum of successive sample averaging (119873 = 10)

0 5 10 15 20 25 30 35 40 45 50minus70

minus60

minus50

minus40

minus30

minus20

minus10

0FFT plot (decimation)

Frequency (MHz)

Am

plitu

de (d

B)

SNR = 288353

Figure 8 Signal spectrum of 10 times direct decimation

Experiment 4 Oversampling is adopted with the samplingrate 119891

119904= 1GSas Signal spectrum obtained by directly

applying 10 times decimation to sampling results is shown inFigure 8

After calculation in Figure 8 SNR = 288353 dB andENOB = 44976 bits

Experiment 5 Oversampling is adopted with the samplingrate119891119904= 1GSas Signal spectrum obtained by conducting 10

times average-based decimation on sampling results is shownin Figure 9

Am

plitu

de (d

B)

minus120

minus100

minus80

minus60

minus40

minus20

0FFT plot (decimation with averaging)

SNR = 466242

0 5 10 15 20 25 30 35 40 45 50Frequency (MHz)

Figure 9 Signal spectrum of 10 times average-based decimation

0 100 200 300 400 500 600 700 800 900 10000

31

63

95

127

159

191

223

255Time domain (data fusion and decimation with averaging)

Samples

Dig

ital o

utpu

t cod

e

Figure 10 Time-domain waveform obtained with data fusion and10 times average-based decimation

After calculation in Figure 9 SNR = 466242 dB andENOB = 74525 bits

Experiment 6 Oversampling is adopted with the samplingrate 119891

119904= 1GSas The time-domain waveform and signal

spectrum obtained by adopting information entropy-basedalgorithm mentioned in the paper to sampling results toexcluding gross errors fusing data creating new samplesequence and then conducting 10 times average-based dec-imation are shown in Figures 10 and 11 respectively

After calculation in Figure 11 SNR = 596071 dB andENOB = 96092 bits

Mathematical Problems in Engineering 11

Table 6 Comparisons of experiment results

Experiment number Method SNR (dB) Effective number of bits (bits) minus3 dB bandwidth (MHz)

1 Nonoversamplingoriginal signal 280776 43717 43

2 Nonoversamplingsuccessive capture averaging 430485 68585 43

3 Nonoversamplingsuccessive sample averaging 442538 70588 43

4 Oversampling10 times direct decimation 288353 44976 43

5Oversampling10 times average-baseddecimation

466242 74525 43

6Oversamplinginformation entropy-based datafusion and 10 timesaverage-based decimation

596071 96092 43

Am

plitu

de (d

B)

minus140

minus120

minus100

minus80

minus60

minus40

minus20

0FFT plot (data fusion and decimation with averaging)

SNR = 596071

0 5 10 15 20 25 30 35 40 45 50Frequency (MHz)

Figure 11 Signal spectrum obtained with data Fusion and 10 timesaverage-based decimation

Table 6 compares the experiment results of the above-mentioned 6 methods

According to Table 6 at the sampling rate of 100MSasthe conventional successive capture averaging algorithm andsuccessive sample averaging algorithm increase the ENOBby about 249 and 269 bits when processing the sinusoidalsamples including quantization errors respectively How-ever successive sample averaging algorithm also causes thedecrease of the bandwidth terribly At the sampling rate of1 GSas the ENOBprovided by the average-based decimationalgorithm is about 295 bits higher than that provided by thedirect decimation algorithm On this basis the algorithmof information entropy-based data fusion and average-baseddecimation proposed in the paper can further increaseENOB by about 216 bits to achieve total ENOB of 961 bitsCompared with the theoretical digitalizing bits of 8-bit ADC

the actual ENOB (resolution) has totally increased by about161 bits which is very close to the theoretically improvedresults 119877

119867= 05log

210 asymp 166 bits in (20) and at the same

time no loss of analog bandwidth at the actual sampling rateis caused

5 Conclusion

This paper proposes a decimation filtering algorithm basedon information entropy and average to realize the goal ofraising the vertical resolution of DSO Based on oversamplingand for single acquiring signal utilize the maximum entropyof sample data to eliminate gross error in quantization fusethe remaining efficient sample data and conduct average-based decimation to further filter the noise and then theDSO resolution can be improved In order to verify theeffectiveness and superiority of the algorithm comparisonexperiments are conducted using different algorithms Theresults show that the improved resolution of the algorithmproposed in the paper is nearly identical with the theoreticaldeduction What is more no subjective hypotheses andconstraints on the detected signals are added during thewhole processing and no impacts on the analog bandwidthof DSO at the actual sampling rate are exerted

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

This work was supported by the National Natural ScienceFoundation of China (nos 61301263 and 61301264) theSpecialized Research Fund for the Doctoral Program ofHigher Education of China (no 20120185130002) and theFundamental Research Fund for the Central University ofChina (A03007023801217 and A03008023801080)

12 Mathematical Problems in Engineering

References

[1] Tektronix ldquoImprove the vertical resolution of digital phosphoroscilloscopesrdquo China Electronic Market vol 6 no 1 pp 76ndash842012

[2] R Reeder M Looney and J Hand ldquoPushing the state of theart withmultichannel AD convertersrdquoAnalogDialogue vol 39no 2 pp 7ndash10 2005

[3] E Seifert and A Nauda ldquoEnhancing the dynamic range ofanalog-to-digital converters by reducing excess noiserdquo in Pro-ceedings of the IEEEPacific RIMConference onCommunicationsComputers and Signal Processing pp 574ndash576 Victoria CanadaJune 1989

[4] K C Lauritzen S H Talisa and M Peckerar ldquoImpact ofdecorrelation techniques on sampling noise in radio-frequencyapplicationsrdquo IEEE Transactions on Instrumentation and Mea-surement vol 59 no 9 pp 2272ndash2279 2010

[5] V Gregers-Hansen S M Brockett and P E Cahill ldquoA stackeda-to-d converter for increased radar signal processor dynamicrangerdquo in Proceedings of the IEEE International Radar Confer-ence pp 169ndash174 May 2001

[6] S R Duncan V Gregers-Hansen and J P McConnellldquoA stacked analog-to-digital converter providing 100 dB ofdynamic rangerdquo in Proceedings of the IEEE International RadarConference pp 31ndash36 2005

[7] Silicon Laboratories ldquoImproving ADC resolution byoversampling and averagingrdquo Improving ADC resolutionby oversampling and averaging Application Note AN118 2013httpwwwsilabscomSupport20DocumentsTechnicalDocsan118pdf

[8] Y Lembeye J Pierre Keradec and G Cauffet ldquoImprovementin the linearity of fast digital oscilloscopes used in averagingmoderdquo IEEE Transactions on Instrumentation and Measure-ment vol 43 no 6 pp 922ndash928 1994

[9] C Bishop and C Kung ldquoEffects of averaging to reject unwantedsignals in digital sampling oscilloscopesrdquo in Proceedings of the45 Years of Support Innovation - Moving Forward at the Speed ofLight (AUTOTESTCON rsquo10) pp 1ndash4 IEEE Orlando Fla USASeptember 2010

[10] C Fager and K Andersson ldquoImprovement of oscilloscopebased RF measurements by statistical averaging techniquesrdquoin Proceeding of the IEEE MTT-S International MicrowaveSymposium Digest pp 1460ndash1463 San Francisco Calif USAJune 2006

[11] Z H L Luo Q J Zhang and Q C H Fange ldquoMethod of datafusion applied in intelligent instrumentsrdquo Instrument Techniqueand Sensor vol 3 no 1 pp 45ndash46 2002

[12] F-N Cai and Q-X Liu ldquoSingle sensor data fusion and analysisof effectivenessrdquo Joumal of Transducer Technology vol 24 no 2pp 73ndash74 2005

[13] C E Shannon ldquoAmathematical theory of communicationrdquoTheBell System Technical Journal vol 27 no 3 pp 379ndash423 1948

[14] M J Zhu and B J Guo ldquoStudy on evaluation of measurementresult and uncertainty based on maximum entropy methodrdquoElectrical Measurement amp Instrument vol 42 no 8 pp 5ndash82005

[15] Y Tan A Chu M Lu and B T Cunningham ldquoDistributedfeedback laser biosensor noise reductionrdquo IEEE Sensors Journalvol 13 no 5 pp 1972ndash1978 2013

[16] E T Jaynes ldquoInformation theory and statistical mechanicsrdquoThePhysical Review vol 106 no 4 pp 620ndash630 1957

[17] MCThomas and J AThomasElements of InformationTheoryJohn Wiley amp Sons New York NY USA 2nd edition 2006

[18] DH Li and ZH Li ldquoProcessing of gross error in small samplesbased on measurement information theoryrdquo Mechanical Engi-neering amp Aut omation vol 6 no 1 pp 115ndash117 2009

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 10: Research Article Information Entropy- and Average-Based ...

10 Mathematical Problems in Engineering

SNR = 442538

Frequency (MHz)0 5 10 15 20 25 30 35 40 45 50

minus100

minus90

minus80

minus70

minus60

minus50

minus40

minus30

minus20

minus10

0FFT plot (successive capture averaging)

Am

plitu

de (d

B)

Figure 7 Signal spectrum of successive sample averaging (119873 = 10)

0 5 10 15 20 25 30 35 40 45 50minus70

minus60

minus50

minus40

minus30

minus20

minus10

0FFT plot (decimation)

Frequency (MHz)

Am

plitu

de (d

B)

SNR = 288353

Figure 8 Signal spectrum of 10 times direct decimation

Experiment 4 Oversampling is adopted with the samplingrate 119891

119904= 1GSas Signal spectrum obtained by directly

applying 10 times decimation to sampling results is shown inFigure 8

After calculation in Figure 8 SNR = 288353 dB andENOB = 44976 bits

Experiment 5 Oversampling is adopted with the samplingrate119891119904= 1GSas Signal spectrum obtained by conducting 10

times average-based decimation on sampling results is shownin Figure 9

Am

plitu

de (d

B)

minus120

minus100

minus80

minus60

minus40

minus20

0FFT plot (decimation with averaging)

SNR = 466242

0 5 10 15 20 25 30 35 40 45 50Frequency (MHz)

Figure 9 Signal spectrum of 10 times average-based decimation

0 100 200 300 400 500 600 700 800 900 10000

31

63

95

127

159

191

223

255Time domain (data fusion and decimation with averaging)

Samples

Dig

ital o

utpu

t cod

e

Figure 10 Time-domain waveform obtained with data fusion and10 times average-based decimation

After calculation in Figure 9 SNR = 466242 dB andENOB = 74525 bits

Experiment 6 Oversampling is adopted with the samplingrate 119891

119904= 1GSas The time-domain waveform and signal

spectrum obtained by adopting information entropy-basedalgorithm mentioned in the paper to sampling results toexcluding gross errors fusing data creating new samplesequence and then conducting 10 times average-based dec-imation are shown in Figures 10 and 11 respectively

After calculation in Figure 11 SNR = 596071 dB andENOB = 96092 bits

Mathematical Problems in Engineering 11

Table 6 Comparisons of experiment results

Experiment number Method SNR (dB) Effective number of bits (bits) minus3 dB bandwidth (MHz)

1 Nonoversamplingoriginal signal 280776 43717 43

2 Nonoversamplingsuccessive capture averaging 430485 68585 43

3 Nonoversamplingsuccessive sample averaging 442538 70588 43

4 Oversampling10 times direct decimation 288353 44976 43

5Oversampling10 times average-baseddecimation

466242 74525 43

6Oversamplinginformation entropy-based datafusion and 10 timesaverage-based decimation

596071 96092 43

Am

plitu

de (d

B)

minus140

minus120

minus100

minus80

minus60

minus40

minus20

0FFT plot (data fusion and decimation with averaging)

SNR = 596071

0 5 10 15 20 25 30 35 40 45 50Frequency (MHz)

Figure 11 Signal spectrum obtained with data Fusion and 10 timesaverage-based decimation

Table 6 compares the experiment results of the above-mentioned 6 methods

According to Table 6 at the sampling rate of 100MSasthe conventional successive capture averaging algorithm andsuccessive sample averaging algorithm increase the ENOBby about 249 and 269 bits when processing the sinusoidalsamples including quantization errors respectively How-ever successive sample averaging algorithm also causes thedecrease of the bandwidth terribly At the sampling rate of1 GSas the ENOBprovided by the average-based decimationalgorithm is about 295 bits higher than that provided by thedirect decimation algorithm On this basis the algorithmof information entropy-based data fusion and average-baseddecimation proposed in the paper can further increaseENOB by about 216 bits to achieve total ENOB of 961 bitsCompared with the theoretical digitalizing bits of 8-bit ADC

the actual ENOB (resolution) has totally increased by about161 bits which is very close to the theoretically improvedresults 119877

119867= 05log

210 asymp 166 bits in (20) and at the same

time no loss of analog bandwidth at the actual sampling rateis caused

5 Conclusion

This paper proposes a decimation filtering algorithm basedon information entropy and average to realize the goal ofraising the vertical resolution of DSO Based on oversamplingand for single acquiring signal utilize the maximum entropyof sample data to eliminate gross error in quantization fusethe remaining efficient sample data and conduct average-based decimation to further filter the noise and then theDSO resolution can be improved In order to verify theeffectiveness and superiority of the algorithm comparisonexperiments are conducted using different algorithms Theresults show that the improved resolution of the algorithmproposed in the paper is nearly identical with the theoreticaldeduction What is more no subjective hypotheses andconstraints on the detected signals are added during thewhole processing and no impacts on the analog bandwidthof DSO at the actual sampling rate are exerted

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

This work was supported by the National Natural ScienceFoundation of China (nos 61301263 and 61301264) theSpecialized Research Fund for the Doctoral Program ofHigher Education of China (no 20120185130002) and theFundamental Research Fund for the Central University ofChina (A03007023801217 and A03008023801080)

12 Mathematical Problems in Engineering

References

[1] Tektronix ldquoImprove the vertical resolution of digital phosphoroscilloscopesrdquo China Electronic Market vol 6 no 1 pp 76ndash842012

[2] R Reeder M Looney and J Hand ldquoPushing the state of theart withmultichannel AD convertersrdquoAnalogDialogue vol 39no 2 pp 7ndash10 2005

[3] E Seifert and A Nauda ldquoEnhancing the dynamic range ofanalog-to-digital converters by reducing excess noiserdquo in Pro-ceedings of the IEEEPacific RIMConference onCommunicationsComputers and Signal Processing pp 574ndash576 Victoria CanadaJune 1989

[4] K C Lauritzen S H Talisa and M Peckerar ldquoImpact ofdecorrelation techniques on sampling noise in radio-frequencyapplicationsrdquo IEEE Transactions on Instrumentation and Mea-surement vol 59 no 9 pp 2272ndash2279 2010

[5] V Gregers-Hansen S M Brockett and P E Cahill ldquoA stackeda-to-d converter for increased radar signal processor dynamicrangerdquo in Proceedings of the IEEE International Radar Confer-ence pp 169ndash174 May 2001

[6] S R Duncan V Gregers-Hansen and J P McConnellldquoA stacked analog-to-digital converter providing 100 dB ofdynamic rangerdquo in Proceedings of the IEEE International RadarConference pp 31ndash36 2005

[7] Silicon Laboratories ldquoImproving ADC resolution byoversampling and averagingrdquo Improving ADC resolutionby oversampling and averaging Application Note AN118 2013httpwwwsilabscomSupport20DocumentsTechnicalDocsan118pdf

[8] Y Lembeye J Pierre Keradec and G Cauffet ldquoImprovementin the linearity of fast digital oscilloscopes used in averagingmoderdquo IEEE Transactions on Instrumentation and Measure-ment vol 43 no 6 pp 922ndash928 1994

[9] C Bishop and C Kung ldquoEffects of averaging to reject unwantedsignals in digital sampling oscilloscopesrdquo in Proceedings of the45 Years of Support Innovation - Moving Forward at the Speed ofLight (AUTOTESTCON rsquo10) pp 1ndash4 IEEE Orlando Fla USASeptember 2010

[10] C Fager and K Andersson ldquoImprovement of oscilloscopebased RF measurements by statistical averaging techniquesrdquoin Proceeding of the IEEE MTT-S International MicrowaveSymposium Digest pp 1460ndash1463 San Francisco Calif USAJune 2006

[11] Z H L Luo Q J Zhang and Q C H Fange ldquoMethod of datafusion applied in intelligent instrumentsrdquo Instrument Techniqueand Sensor vol 3 no 1 pp 45ndash46 2002

[12] F-N Cai and Q-X Liu ldquoSingle sensor data fusion and analysisof effectivenessrdquo Joumal of Transducer Technology vol 24 no 2pp 73ndash74 2005

[13] C E Shannon ldquoAmathematical theory of communicationrdquoTheBell System Technical Journal vol 27 no 3 pp 379ndash423 1948

[14] M J Zhu and B J Guo ldquoStudy on evaluation of measurementresult and uncertainty based on maximum entropy methodrdquoElectrical Measurement amp Instrument vol 42 no 8 pp 5ndash82005

[15] Y Tan A Chu M Lu and B T Cunningham ldquoDistributedfeedback laser biosensor noise reductionrdquo IEEE Sensors Journalvol 13 no 5 pp 1972ndash1978 2013

[16] E T Jaynes ldquoInformation theory and statistical mechanicsrdquoThePhysical Review vol 106 no 4 pp 620ndash630 1957

[17] MCThomas and J AThomasElements of InformationTheoryJohn Wiley amp Sons New York NY USA 2nd edition 2006

[18] DH Li and ZH Li ldquoProcessing of gross error in small samplesbased on measurement information theoryrdquo Mechanical Engi-neering amp Aut omation vol 6 no 1 pp 115ndash117 2009

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 11: Research Article Information Entropy- and Average-Based ...

Mathematical Problems in Engineering 11

Table 6 Comparisons of experiment results

Experiment number Method SNR (dB) Effective number of bits (bits) minus3 dB bandwidth (MHz)

1 Nonoversamplingoriginal signal 280776 43717 43

2 Nonoversamplingsuccessive capture averaging 430485 68585 43

3 Nonoversamplingsuccessive sample averaging 442538 70588 43

4 Oversampling10 times direct decimation 288353 44976 43

5Oversampling10 times average-baseddecimation

466242 74525 43

6Oversamplinginformation entropy-based datafusion and 10 timesaverage-based decimation

596071 96092 43

Am

plitu

de (d

B)

minus140

minus120

minus100

minus80

minus60

minus40

minus20

0FFT plot (data fusion and decimation with averaging)

SNR = 596071

0 5 10 15 20 25 30 35 40 45 50Frequency (MHz)

Figure 11 Signal spectrum obtained with data Fusion and 10 timesaverage-based decimation

Table 6 compares the experiment results of the above-mentioned 6 methods

According to Table 6 at the sampling rate of 100MSasthe conventional successive capture averaging algorithm andsuccessive sample averaging algorithm increase the ENOBby about 249 and 269 bits when processing the sinusoidalsamples including quantization errors respectively How-ever successive sample averaging algorithm also causes thedecrease of the bandwidth terribly At the sampling rate of1 GSas the ENOBprovided by the average-based decimationalgorithm is about 295 bits higher than that provided by thedirect decimation algorithm On this basis the algorithmof information entropy-based data fusion and average-baseddecimation proposed in the paper can further increaseENOB by about 216 bits to achieve total ENOB of 961 bitsCompared with the theoretical digitalizing bits of 8-bit ADC

the actual ENOB (resolution) has totally increased by about161 bits which is very close to the theoretically improvedresults 119877

119867= 05log

210 asymp 166 bits in (20) and at the same

time no loss of analog bandwidth at the actual sampling rateis caused

5 Conclusion

This paper proposes a decimation filtering algorithm basedon information entropy and average to realize the goal ofraising the vertical resolution of DSO Based on oversamplingand for single acquiring signal utilize the maximum entropyof sample data to eliminate gross error in quantization fusethe remaining efficient sample data and conduct average-based decimation to further filter the noise and then theDSO resolution can be improved In order to verify theeffectiveness and superiority of the algorithm comparisonexperiments are conducted using different algorithms Theresults show that the improved resolution of the algorithmproposed in the paper is nearly identical with the theoreticaldeduction What is more no subjective hypotheses andconstraints on the detected signals are added during thewhole processing and no impacts on the analog bandwidthof DSO at the actual sampling rate are exerted

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (nos. 61301263 and 61301264), the Specialized Research Fund for the Doctoral Program of Higher Education of China (no. 20120185130002), and the Fundamental Research Fund for the Central University of China (A03007023801217 and A03008023801080).


References

[1] Tektronix, "Improve the vertical resolution of digital phosphor oscilloscopes," China Electronic Market, vol. 6, no. 1, pp. 76–84, 2012.
[2] R. Reeder, M. Looney, and J. Hand, "Pushing the state of the art with multichannel A/D converters," Analog Dialogue, vol. 39, no. 2, pp. 7–10, 2005.
[3] E. Seifert and A. Nauda, "Enhancing the dynamic range of analog-to-digital converters by reducing excess noise," in Proceedings of the IEEE Pacific RIM Conference on Communications, Computers and Signal Processing, pp. 574–576, Victoria, Canada, June 1989.
[4] K. C. Lauritzen, S. H. Talisa, and M. Peckerar, "Impact of decorrelation techniques on sampling noise in radio-frequency applications," IEEE Transactions on Instrumentation and Measurement, vol. 59, no. 9, pp. 2272–2279, 2010.
[5] V. Gregers-Hansen, S. M. Brockett, and P. E. Cahill, "A stacked a-to-d converter for increased radar signal processor dynamic range," in Proceedings of the IEEE International Radar Conference, pp. 169–174, May 2001.
[6] S. R. Duncan, V. Gregers-Hansen, and J. P. McConnell, "A stacked analog-to-digital converter providing 100 dB of dynamic range," in Proceedings of the IEEE International Radar Conference, pp. 31–36, 2005.
[7] Silicon Laboratories, "Improving ADC resolution by oversampling and averaging," Application Note AN118, 2013, http://www.silabs.com/Support%20Documents/TechnicalDocs/an118.pdf.
[8] Y. Lembeye, J. Pierre Keradec, and G. Cauffet, "Improvement in the linearity of fast digital oscilloscopes used in averaging mode," IEEE Transactions on Instrumentation and Measurement, vol. 43, no. 6, pp. 922–928, 1994.
[9] C. Bishop and C. Kung, "Effects of averaging to reject unwanted signals in digital sampling oscilloscopes," in Proceedings of the 45 Years of Support Innovation - Moving Forward at the Speed of Light (AUTOTESTCON '10), pp. 1–4, IEEE, Orlando, Fla, USA, September 2010.
[10] C. Fager and K. Andersson, "Improvement of oscilloscope based RF measurements by statistical averaging techniques," in Proceedings of the IEEE MTT-S International Microwave Symposium Digest, pp. 1460–1463, San Francisco, Calif, USA, June 2006.
[11] Z. H. L. Luo, Q. J. Zhang, and Q. C. H. Fange, "Method of data fusion applied in intelligent instruments," Instrument Technique and Sensor, vol. 3, no. 1, pp. 45–46, 2002.
[12] F.-N. Cai and Q.-X. Liu, "Single sensor data fusion and analysis of effectiveness," Journal of Transducer Technology, vol. 24, no. 2, pp. 73–74, 2005.
[13] C. E. Shannon, "A mathematical theory of communication," The Bell System Technical Journal, vol. 27, no. 3, pp. 379–423, 1948.
[14] M. J. Zhu and B. J. Guo, "Study on evaluation of measurement result and uncertainty based on maximum entropy method," Electrical Measurement & Instrument, vol. 42, no. 8, pp. 5–8, 2005.
[15] Y. Tan, A. Chu, M. Lu, and B. T. Cunningham, "Distributed feedback laser biosensor noise reduction," IEEE Sensors Journal, vol. 13, no. 5, pp. 1972–1978, 2013.
[16] E. T. Jaynes, "Information theory and statistical mechanics," The Physical Review, vol. 106, no. 4, pp. 620–630, 1957.
[17] M. C. Thomas and J. A. Thomas, Elements of Information Theory, John Wiley & Sons, New York, NY, USA, 2nd edition, 2006.
[18] D. H. Li and Z. H. Li, "Processing of gross error in small samples based on measurement information theory," Mechanical Engineering & Automation, vol. 6, no. 1, pp. 115–117, 2009.
