Hardware Implementation of an ADC Error Compensation Using Neural Networks
Hervé Chanal
Clermont Université, Université Blaise Pascal, CNRS/IN2P3, Laboratoire de Physique Corpusculaire, Pole Micrhau, BP 10448, F-63000 CLERMONT-FERRAND, France
2011 IMEKO IWADC - IEEE ADC FORUM
Outline
Introduction
Neural Network
Test bench
Hardware implementation
Conclusion and perspective
Introduction
Principle
- ADC post-correction scheme;
- Based on a "feed-forward" neural network (NN);
- Uses the current and previous sample;
- The NN output is added to the ADC output.
Objective
- To implement the compensation both in software and in hardware;
- To use a massively parallel approach;
- To check the consistency of the results.
Figure: Proposed neural network compensation scheme (the current ADC output and a one-sample delayed copy feed the NN; the NN output is added to the ADC output to form the compensated ADC output)
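In code, the scheme amounts to a one-sample delay line feeding the NN alongside the current sample; a minimal Python sketch, with `nn` standing in for the trained network (here a dummy correcting a constant 2-LSB offset, for illustration only):

```python
def compensate(samples, nn):
    """Post-correction: the NN sees the current and previous ADC sample
    and its output is added to the current sample."""
    out = []
    prev = samples[0]          # seed the one-sample delay line
    for x in samples:
        out.append(x + nn(x, prev))
        prev = x
    return out

# Dummy "network" correcting a constant offset of 2 LSB (illustration only):
corrected = compensate([10, 12, 11], lambda cur, prev: 2)
```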
Neural Network Architecture
Architecture
A feed-forward NN:
- Connections only between layers, in the forward direction;
- Input layer: 2 neurons (current and previous sample);
- Hidden layers: several architectures tested;
- Output layer: 1 neuron giving the compensation value.
Neuron Model
The output value ν_j^k of the j-th neuron of layer k is given by:

    ν_j^k = F( Σ_i w_ij^k u_i^(k-1) + b_j^k )

where the u_i^(k-1) are the inputs, F is the activation function, w_ij^k the weights, and b_j^k the bias.
Figure: NN architecture (input layer, hidden layers, output layer)
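The neuron model above translates directly into code; an illustrative plain-Python sketch (`F` is the activation function):

```python
def neuron(u, w, b, F):
    """One neuron: activation F applied to the weighted input sum plus bias."""
    return F(sum(wi * ui for wi, ui in zip(w, u)) + b)

def layer(u, weights, biases, F):
    """One feed-forward layer: every neuron j sees all inputs u of layer k-1."""
    return [neuron(u, w, b, F) for w, b in zip(weights, biases)]

# A 2-input layer with one neuron and identity activation:
y = layer([1.0, 2.0], [[0.5, 0.25]], [0.1], lambda x: x)
```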
Neural Network Activation Function
Hidden Layers
Two activation functions tested:
- Sigmoid-like:

      sig(x) = -1            if x < -2
               x(1 + x/4)    if -2 ≤ x < 0
               x(1 - x/4)    if 0 ≤ x < 2
               1             if x ≥ 2
- Linear with saturating bounds:

      lin(x) = -1   if x < -1
               x    if -1 ≤ x ≤ 1
               1    if x > 1
Output Layer
Linear activation function
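Both activations translate into branch-only code, which is what makes them cheap in hardware; a Python sketch of the definitions above:

```python
def sig(x):
    """Sigmoid-like activation: piecewise quadratic, saturating at +/-1,
    needing only comparators, adders and one multiplier in hardware."""
    if x < -2:
        return -1.0
    if x < 0:
        return x * (1 + x / 4)
    if x < 2:
        return x * (1 - x / 4)
    return 1.0

def lin(x):
    """Identity saturating at +/-1."""
    return max(-1.0, min(1.0, x))
```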
[1] Kwan, "Simple sigmoid-like activation function suitable for digital hardware implementation", Electronics Letters 28, July 1992
Figure: Comparison between the sigmoid-like function and tanh
Figure: Comparison between the first-order derivatives of the sigmoid-like function and tanh
Neural Network Learning Set
Learning set
- Creation of an error matrix with the algorithm provided by Lundin [2];
- Recursive computation of the matrix coefficients e_ij:

      e_ij = ( a_ij(n) · e_ij + s_ref(n) − x(n) ) / ( a_ij(n) + 1 )    (1)

  where i, j are the ADC last and current output codes, s_ref the expected result, x the ADC output, n the time index, and a_ij(n) the number of occurrences of the code pair (i, j) so far.
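Eq. (1) is a running average of the observed error s_ref − x for each (previous, current) code pair; a sketch, assuming `a[i][j]` counts how many times the pair (i, j) has been seen:

```python
def update_lut(e, a, i, j, s_ref, x):
    """Recursive update of Eq. (1): e[i][j] becomes the running average of
    the error s_ref - x over all occurrences of the code pair (i, j)."""
    e[i][j] = (a[i][j] * e[i][j] + (s_ref - x)) / (a[i][j] + 1)
    a[i][j] += 1

# Two observations of the pair (0, 1) with errors 1.0 and 3.0 -> average 2.0
e = [[0.0, 0.0], [0.0, 0.0]]
a = [[0, 0], [0, 0]]
update_lut(e, a, 0, 1, 11.0, 10.0)
update_lut(e, a, 0, 1, 13.0, 10.0)
```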
Learning algorithm
- Levenberg-Marquardt algorithm to compute the NN weights and biases;
- Minimizes the difference between the LUT and the NN output.
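A sketch of this training step, fitting a small feed-forward net to a hypothetical LUT with SciPy's Levenberg-Marquardt solver; the 2-4-1 shape, the tanh hidden activation standing in for sig, and the error surface are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical LUT for a 4-bit ADC: one correction per (previous, current) code
codes = np.arange(16) / 15.0
I, J = np.meshgrid(codes, codes, indexing="ij")
lut = 0.05 * np.sin(2 * np.pi * J) * (1 - 0.3 * I)   # made-up error surface

def nn_out(p, i, j):
    """A 2-4-1 feed-forward net; tanh stands in for the sig activation."""
    w1, b1 = p[:8].reshape(4, 2), p[8:12]
    w2, b2 = p[12:16], p[16]
    h = np.tanh(w1 @ np.stack([i, j]) + b1[:, None])
    return w2 @ h + b2

def residuals(p):
    """LUT-vs-NN difference that the optimiser drives toward zero."""
    return nn_out(p, I.ravel(), J.ravel()) - lut.ravel()

# method="lm" is Levenberg-Marquardt; it requires more residuals (here 256)
# than parameters (17), which the 16x16 LUT provides.
x0 = np.random.default_rng(0).standard_normal(17) * 0.5
fit = least_squares(residuals, x0, method="lm")
```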
Figure: LUT example on 4 bits
[2] Lundin, "Characterization and Correction of Analog-to-Digital Converters", PhD thesis, Stockholm, Sweden, 2005
Test Bench
Tested ADC
- 9-bit, 100 MS/s ADC;
- Pipeline architecture, 2.5 bits/stage;
- Significant distortion;
- 5.7-bit ENOB up to 500 kHz;
- Output acquired with sinusoidal inputs of frequency between 100 kHz and 10 MHz.
Test bench
- Reference 16-bit ADC AD9446;
- External function generator;
- ENOB decreasing quickly: 9 bits at 100 kHz, 6 at 1 MHz, and 0 at 10 MHz.
Figure: ENOB of the test bench and of the 9-bit ADC
Figure: Difference between the FFTs of the test bench and of the 9-bit ADC for a 400 kHz sinusoidal input
Results: ENOB
Parameters:
- 10 nodes in 1 hidden layer for the NN;
- sig activation function;
- Curves for a 400 kHz sinusoid.
Results:
- Gain and offset errors compensated;
- Lower distortion;
- 0.7 ENOB increase for frequencies below 500 kHz.
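For reference, the ENOB figures quoted here follow from the measured SINAD via the standard conversion (not spelled out on the slide):

```python
def enob(sinad_db):
    """Effective number of bits from measured SINAD in dB:
    ENOB = (SINAD - 1.76) / 6.02."""
    return (sinad_db - 1.76) / 6.02
```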
Results: SFDR
Figure: FFT of the output signal from the tested ADC and the test bench
- Aliasing and spurious tones generated by the test bench;
- SFDR improvement of ≈ 20 dB with compensation;
- Mostly due to compensation of the primary and secondary harmonic distortion peaks.
Figure: SFDR with and without compensation
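A rough sketch of how SFDR can be estimated from a captured record (a simplified single-record estimate, not the presentation's measurement procedure):

```python
import numpy as np

def sfdr_db(record):
    """Single-record SFDR estimate: ratio in dB between the largest FFT bin
    (the carrier) and the largest remaining bin (the worst spur), DC excluded."""
    spec = np.abs(np.fft.rfft(record * np.hanning(len(record))))
    spec[0] = 0.0                       # drop DC
    k = int(np.argmax(spec))
    carrier = spec[k]
    spec[max(0, k - 2):k + 3] = 0.0     # mask the carrier and its leakage
    return 20 * np.log10(carrier / spec.max())

# Coherently sampled tone plus a -60 dB spur:
n, t = 1024, np.arange(1024)
tone = np.sin(2 * np.pi * 100 * t / n) + 1e-3 * np.sin(2 * np.pi * 300 * t / n)
```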
Neural Network Architecture Study
Architectures studied
- 1 and 2 hidden layers for the NN;
- 5 or 10 neurons;
- lin and sig activation functions;
- ENOB in the 0-1 MHz range.
Results
- Variations between learning phases;
- Little difference between the 10-neuron, 1-layer case (with lin or sig) and the 3/2-neuron sig NN.
Figure: ENOB for various NN architectures
Hardware Implementation
Methodology
- Massively parallel implementation (no resource sharing);
- VHDL coding;
- 16-bit fixed-point arithmetic:
  - 1 bit for the sign;
  - 5 bits for the integer part;
  - 9 bits for the fractional part.
- Target technology: IBM 0.13 µm mixed-signal process using VCAD standard cells;
- Cadence software for simulation, synthesis, and place and route;
- Software training algorithm.

Figure: SoC Encounter view after place and route of the NN
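A sketch of the quantization implied by the fixed-point format above (saturation bounds from the 5 integer bits, step size from the 9 fractional bits; illustrative only):

```python
def to_fixed(x, int_bits=5, frac_bits=9):
    """Round to the nearest representable value and saturate to the range of
    a signed fixed-point number with the given integer/fractional split."""
    scale = 1 << frac_bits
    hi = (1 << int_bits) - 1.0 / scale      # largest value, 31.998... for 5/9
    lo = -float(1 << int_bits)              # most negative value, -32.0
    return min(hi, max(lo, round(x * scale) / scale))
```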
Place And Route Result
Figure: Automatic floor plan of a NN with 2 hidden layers (2/3): hidden layer 1 neurons 1/2 and 2/2, hidden layer 2 neurons 1/3 to 3/3, and the output neuron
Simulation Results
Resource usage
For an N-input neuron:
- N multipliers;
- 1 (N+1)-input adder.

For the lin activation function:
- 2 comparators.

For the sig activation function:
- 3 comparators;
- 2 adders;
- 1 multiplier.
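These per-neuron counts can be totalled for a fully parallel network; a rough sketch, where applying the activation cost uniformly to every non-input layer is a simplifying assumption (the slides give the output layer a plain linear activation):

```python
def nn_resources(layer_sizes, act="sig"):
    """Total multipliers/adders/comparators of a fully parallel NN.
    layer_sizes includes the input layer, e.g. [2, 10, 1].
    Per N-input neuron: N multipliers and one (N+1)-input adder; the activation
    adds 2 comparators (lin) or 3 comparators + 2 adders + 1 multiplier (sig)."""
    mul = add = comp = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        mul += n_in * n_out         # one multiplier per weight
        add += n_out                # one multi-input adder per neuron
        if act == "sig":
            comp += 3 * n_out
            add += 2 * n_out
            mul += n_out
        else:
            comp += 2 * n_out
    return mul, add, comp
```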
Synthesis results
- 0.1 ENOB loss;
- Resource usage:

      NN architecture          LE      Power consumption
      10 neurons,  F = sig     31k     50 mW
      5 neurons,   F = sig     15.5k   25 mW
      5 neurons,   F = lin     8k      6 mW
      2/3 neurons, F = sig     20k     25 mW

  (LE = logic element)
Conclusion and Perspectives
Conclusion
- Hardware implementation of a compensation technique using a NN;
- Improves the ENOB and lowers the distortion;
- Various NN architectures tested;
- Fits in any low-cost FPGA.
Perspectives
- Improve the hardware implementation of the NN;
- Improve the test bench;
- Use a better ADC;
- Include the fixed-point arithmetic errors in the training algorithm;
- Test other NN architectures.