
Double Sampling Techniques for CMOS

Image Sensors

by

Muahel Tabet

A thesis

presented to the University of Waterloo

in fulfillment of the

thesis requirement for the degree of

Doctor of Philosophy

in

Electrical and Computer Engineering

Waterloo, Ontario, Canada, 2002

©Muahel Tabet, 2002


I hereby declare that I am the sole author of this thesis.

I authorize the University of Waterloo to lend this thesis to other institutions or individuals

for the purpose of scholarly research.

Muahel Tabet

I further authorize the University of Waterloo to reproduce this thesis by photocopying or by

other means, in total or in part, at the request of other institutions or individuals for the

purpose of scholarly research.

Muahel Tabet


Borrower’s Page

The University of Waterloo requires the signatures of all persons using or photocopying this thesis. Please sign below, and give address and date.


Acknowledgements

First of all I should thank and praise almighty Allah for his mercy upon me and for guiding

me throughout my life.

I would like to acknowledge my supervisor, Professor Richard Hornsey, for his

continuous support, help, and motivation throughout the duration of this work.

I would also like to thank my colleagues in the Integrated Camera group at the University of Waterloo (and the VISOR Lab at York University) for their valuable discussions and support.

In particular, I extend my gratitude to Mr. N. Tu for designing the mask layout of the CYC

chip. I also would like to thank Mr. Ji-Soo Lee, and Mr. Sanjayan Vinayagamoorthy for their

valuable help, and Mr. Faycal Saffih for his helpful comments. I would like to thank all of

the staff in the Department of Electrical and Computer Engineering for their respect and

support.

I am grateful for research funding from Materials and Manufacturing Ontario

(MMO). I would also like to thank the Canadian Microelectronic Corporation (CMC) for

their support in the fabrication of prototype chips, and for granting me access to their

valuable database.

I am also grateful to the Libyan Educational Scholarship and Fellowship Program in

the Canadian Bureau for International Education (CBIE) for their professional yet

understanding sponsorship. I am very thankful to the Postgraduate Center for Electro-optics

for providing the scholarship.

Finally, my profound gratitude goes to my parents, my wife Um-Ibrahim, my

daughters Shada Tabet and Nur-Alhuda Tabet, and my son Ibrahim Tabet.


Abstract

In recent years, vision systems based on CMOS image sensors have gained significant

ground over those based on charge-coupled devices (CCD) [1-5, 7]. The main advantages of

CMOS image sensors are their high level of integration [1], random accessibility [2, 6], and

low-voltage, low-power operation [1-3]. Accordingly, they offer system-on-chip capability

and allow on-chip image processing with low production costs [1, 3, 7]. For these reasons,

they have gained potential for use in many applications, especially where integrated

functionalities are advantageous, such as in security, biometrics, and industrial applications

[8-11].

Most of the reported designs, however, include these on-chip image-processing

functionalities at the expense of silicon area. This can limit the maximum spatial resolution

achievable. Here, we overcome this limitation with a straightforward, yet robust, VLSI

implementation of numerous on-chip image-processing functionalities using double sampling

(DS) techniques. Our method employs two sample-and-hold (S&H) circuits per column to

perform parallel sampled-differentiation of the captured image. This technique can be

applied in the spatial domain as spatial double sampling (SDS) to carry out edge detection

functionality, or in the temporal domain as temporal double sampling (TDS) to perform

motion detection functions in an image scene, especially in the linear mode. Most of the

linear (integrating) CMOS imagers incorporate these sample-and-hold circuits for reducing

the fixed pattern noise (FPN), using a correlated double sampling method. Therefore, no

additional silicon area is required to include these functionalities on the image sensor chip.

Hence, this opens the door to implementing higher-resolution imagers with a reasonable chip area.
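
As a rough illustration of the differencing performed by the per-column S&H pair, the short sketch below is a software analogue only (NumPy arrays stand in for the sampled column outputs; all names and data are illustrative assumptions, not the thesis circuitry): SDS differences adjacent rows of a single frame, while TDS differences two samples of the same pixels taken a short interval apart.

```python
# Illustrative software analogue of SDS and TDS (assumed names/data, not the thesis hardware).
import numpy as np

def spatial_double_sampling(frame: np.ndarray) -> np.ndarray:
    """SDS: difference row n+1 against row n of the same frame (spatial edges)."""
    return frame[1:, :] - frame[:-1, :]

def temporal_double_sampling(frame_t1: np.ndarray, frame_t2: np.ndarray) -> np.ndarray:
    """TDS: difference two samples of the same pixels taken a short interval apart (motion)."""
    return frame_t2 - frame_t1

# Example: a 4x4 frame with a horizontal intensity edge, and the same scene one step later.
frame_a = np.array([[0.2] * 4] * 2 + [[0.8] * 4] * 2)  # edge between rows 1 and 2
frame_b = np.roll(frame_a, 1, axis=0)                  # stimulus shifted down by one row
edges = spatial_double_sampling(frame_a)               # non-zero only along the edge
motion = temporal_double_sampling(frame_a, frame_b)    # non-zero where the scene changed
```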

For logarithmic (continuous) CMOS imagers, the application of the temporal double

sampling technique resulted in two “first reported” image processing functionalities: contrast

enhancement suited for low light intensities and 2D edge detection suited for high light

intensities such as those under direct exposure to a lamp or laser.


Logarithmic image sensors have a high optical dynamic range; however, they suffer from

high fixed pattern noise (FPN) inherent to their operation. Therefore, it is desirable to

remove this spatial noise. Unfortunately, the correlated double sampling (CDS) method

usually used to remove noise in linear image sensors cannot be applied for FPN removal in

logarithmic sensors because there is no “reset” signal to start with. This motivated

us to introduce a novel method for FPN reduction in logarithmic image sensors. This method

is based on the so-called modal double sampling (MDS) technique first presented in this

thesis. The core of this technique is a tri-mode “super” active pixel sensor (3M-APS) and

two S&H circuits. This pixel can work in three modes of operation: linear (Lin-APS),

logarithmic (Log-APS), and “reduced compression” logarithmic (2Log-APS) by choosing the

proper control signals. When operated with FPN reduction functionality, the imager can

perform FPN reduction in either linear or logarithmic modes of operation. In the linear

mode, the circuit works as an ordinary CDS circuit to reduce the FPN, whereas in the

logarithmic mode, of interest here, the circuit switches between the two logarithmic modes

and the sampled Log-APS and 2Log-APS signals are differenced to perform the FPN reduction; hence the name modal double sampling (MDS) technique.
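
As a rough numerical illustration of why differencing the two logarithmic modes suppresses offset FPN, the sketch below assumes (for illustration only, in a form not taken from the thesis) that a pixel's Log-APS and 2Log-APS samples share the same additive per-pixel offset but have different logarithmic slopes; the offset then cancels in the difference while an illumination-dependent term survives.

```python
# Hypothetical software analogue of MDS offset-FPN cancellation; all constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
illum = np.logspace(-1, 2, 64)                   # per-pixel illumination (arbitrary units)
offset_fpn = rng.normal(0.0, 0.02, illum.size)   # assumed additive per-pixel offset FPN (V)
slope = 0.06                                     # assumed logarithmic slope per MOS diode (V)

v_log = 0.5 - slope * np.log(illum) - offset_fpn        # Log-APS sample (one MOS diode)
v_2log = 0.5 - 2 * slope * np.log(illum) - offset_fpn   # 2Log-APS sample (two MOS diodes)

v_mds = v_log - v_2log                           # MDS difference: offsets cancel
print(np.std(v_log + slope * np.log(illum)))     # spread before MDS, ~0.02 V (offset FPN)
print(np.std(v_mds - slope * np.log(illum)))     # spread after MDS, ~0 V
```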

To gain better insight, we performed rigorous simulations of the APS (in the

logarithmic and linear modes) using HSPICE. These simulations were supported

with measurements performed on prototype chips fabricated using both 0.5µm and 0.35µm

technologies. In order to determine the feasibility of these techniques, we conducted a

thorough experimental investigation under various conditions of light

intensities, contrasts, frame-rates, bias voltages, and stimulus speeds. Results were very

promising and suggest that these techniques are feasible for use as a front-end to many

VLSI image-processing applications, such as motion detection and image segmentation in

dynamic environments.


Table of Contents

Borrower’s Page iii

Acknowledgements iv

Abstract v

Table of Contents vii

List of Figures xi

List of Tables xxxvi

Related Publications xxxvii

Chapter One Introduction 1

1.1 Introduction 1

1.2 Motivation 2

1.3 Research Objectives 6

1.4 Organization of the thesis 6

Chapter Two CMOS Image Sensors 9

2.1 Introduction 9

2.1.1 Advantages of smart vision sensors 11

2.1.2 Disadvantages of Smart Vision Sensors 12

2.2 Technologies available 13

2.2.1 CCD Image Sensors 14

2.2.2 CMOS image sensors 16

2.3 Digital versus Analog Processing 19

2.4 Issues in Designing CMOS Image Sensor Chips 21

2.4.1 Mismatch 21

2.4.2 Digital Noise 22

2.5 Photodetection in CMOS Compatible devices 22

2.5.1 Photo-effect in silicon 23

2.5.1.1 Absorption Coefficient 25

2.5.1.2 Optical Generation Rate and Quantum Efficiency 26

2.5.1.3 Recombination 27

2.5.1.4 Drift-current 27


2.5.1.5 Diffusion-current 28

2.5.2 CMOS compatible Photodetectors 31

2.5.2.1 Photodiodes 31

2.5.2.2 Phototransistors 35

2.5.2.3 Photogate 38

2.5.2.4 Pinned Photodiodes 38

2.5.2.5 Photodiode or Phototransistor? 40

2.5.3 Photocircuits 40

2.5.3.1 Integration (linear) Active pixel sensors 41

2.5.3.2 Logarithmic Active pixel sensor Photocircuits 44

2.5.3.3 CMOS Imager Limitations 46

2.6 Visual Motion Detection 49

2.6.1 Motion detection algorithms 50

2.6.2 Gradient motion detection schemes 51

2.6.2.1 Stored Comparison Schemes 53

2.6.2.2 Continuous (Delayed) Comparison schemes 55

2.7 Visual Edge Detection 64

Chapter Three Modeling and Characterization of CMOS Active Pixel Sensors 66

3.1 Introduction 66

3.2 CMOS-Compatible Photodiode Models 67

3.2.1 Equivalent circuit models 67

3.2.2 Post-layout modeling and simulation 73

3.3 Modeling of Logarithmic Active Pixel Sensors 76

3.3.1 MOSFET in Subthreshold 78

3.3.2 Logarithmic Pixels Configurations 80

3.4 Modeling of Linear (integrating) Active Pixel Sensors 82

3.5 Simulation Results 84

3.5.1 HSPICE models verifications 84

3.5.2 Electrical Sensitivity Analysis 91

3.5.3 Optical Modulation Frequency Response Analysis 95

3.6 Summary and Discussion 98


Chapter Four Spatial Double Sampling (SDS) 100

4.1 Introduction 100

4.2 System Architecture 101

4.2.1 Correlated double sampling (CDS) circuit 104

4.3 Spatial-edge Detection using SDS in the Linear Mode 105

4.3.1 Effect of illumination level on spatial-edge detection 112

4.3.2 Effect of contrast on spatial-edge detection 118

4.3.3 Effect of frame-rate on spatial-edge detection 124

4.4 Spatial-edge Detection in Logarithmic Mode 130

4.4.1 Effect of illumination level on spatial-edge detection 133

4.4.2 Effect of frame-rate on spatial-edge detection 136

4.5 Spatial-edge Detection in Motion 138

4.6 Summary and Discussion 149

Chapter Five Temporal Double Sampling (TDS) 153

5.1 Introduction 153

5.2 Temporal Edge Detection in Linear Mode 157

5.2.1 Effect of illumination level on TDS signal 159

5.2.2 Effect of stimulus contrast on TDS signal 161

5.2.3 Effect of ∆t and frame rate on TDS signal 164

5.3 Temporal Double Sampling in Logarithmic Mode 171

5.4 Temporal Double Sampling with a Stimulus in Motion 179

5.5 Summary and Discussion 187

Chapter Six FPN Reduction Using Double-Sampled Techniques (MDS) 190

6.1 Introduction 190

6.2 Fixed Pattern Noise (FPN) sources 192

6.2.1 Linear (integrating) active pixel sensors 192

6.2.2 Logarithmic (continuous) active pixel sensors 197

6.2.2.1 Two-MOS Diode-Logarithmic APS (2Log-APS) 200

6.3 3-Mode active pixel sensors (3M-APS) 206

6.4 Double-Sampled FPN Reduction Technique 213

6.4.1 Photoresponse and light intensity effect on FPN 218


6.4.1.1 Linear Mode 218

6.4.1.2 Logarithmic Mode 223

6.4.2 Effect of frame-rate on imager response and FPN 228

6.4.2.1 Linear Mode 228

6.4.2.2 Logarithmic Mode 235

6.4.3 Effect of frame-rate on imager response and FPN 238

6.4.3.1 Linear Mode 238

6.4.3.2 Logarithmic Mode 240

6.5 Switching performance of 3M-APS 243

6.6 Summary and Discussion 252

Chapter Seven Summary and Conclusion 255

7.1 Summary 255

7.2 Future Work 261

Appendix A CYC-Image Sensors Chip 263

Appendix B CYC-Chip Measurements Setups 266

B.1 Edge Detection Measurements for Stationary Stimuli 266

B.2 Edge Detection Measurements for Moving Stimuli 269

Appendix C ALO-Image Sensor Chip 271

Appendix D ALO-Chip Measurements Setups 276

D.1 Fixed Pattern Noise (FPN) Measurements 276

D.2 Test Structure Measurements 279

D.2.1 Photoresponse measurements 279

D.2.2 Optical modulation frequency responses measurements 279

D.2.3 Switching performance of 3M-APS 283

Appendix E Miscellaneous Measurements 284

Appendix F Correction of FPN Measurements 304

Bibliography 308


List of Figures

Figure 1.1 A graphical illustration of “attention selection” schemes for on-

chip data reduction: (a) Fovea, (b) Window. CD: Column Decoder, RD: Row Decoder.

3

Figure 1.2 Classification of intensity-based motion processing algorithms. Here Ix ∝ ∆E/∆x is the (SDS) signal, whereas It ∝ ∆E/∆t is the (TDS) signal (after [99]).

4

Figure 2.1 The difference between intelligent vision systems. (a) The conventional intelligent vision system, where data is post-processed by a separate digital processor (computer). (b) The smart vision system, where data is pre-processed on-chip before sending it for any post-processing.

10

Figure 2.2 The milestones of silicon imager history (after [18, 19]).

14

Figure 2.3 Simplified interline-transfer CCD imager functional diagram (after [19]).

15

Figure 2.4 The block diagram of a basic CCD camera system (right) and a photo of an actual CCD camera system (left) [19, 21]. The figure shows the numerous support chips required (CCD, ADC, ASICs, memory, etc.) for the basic operation of CCD cameras. Also shown are the various power supplies (with relatively higher voltages) used by the CCD image sensor.

15

Figure 2.5 CMOS image sensor functional diagrams. (a) Passive Pixel Sensor (PPS) technology; (b) Active Pixel Sensor (APS) technology. (c, d) The functional and block diagram of a basic CMOS camera system. (e) Photo of an actual CMOS camera system. The figure shows how sub-micron CMOS enables the camera-on-chip paradigm [19, 21].

17

Figure 2.6 Ideal photoelectric effect. (a) Electron-hole generation by excitation by light with energy Eph > Eg (1.1 eV). (b) Absorption spectrum, sharp at λg, where λg = hc/Eg.

24

Figure 2.7 Energy band structures for indirect semiconductor Si (silicon) showing direct transition and phonon-assisted indirect transition (after [38]).

24

Figure 2.8 Measured absorption coefficient spectrum for silicon (Si). 25


Figure 2.9 An illustration for the photo-generation-recombination mechanisms (after [23]).

27

Figure 2.10 The photo-generated e-h pairs can be separated before they recombine by the built-in electric field of the p-n junction diode, where they can be collected at the diode’s cathode and anode respectively. This e-h separation and/or collection can be further enhanced by reverse biasing the diode.

28

Figure 2.11 The CMOS compatible photodiode (n+/p-substrate) showing the energy-band diagram of its reverse biased p-n junction, and the photo-generation (PG) and recombination (R) mechanisms of the electron-hole pairs (carriers). Also shown are the drift current of carriers due to the electric field and the minority carriers that constitute the diffusion current.

29

Figure 2.12 Possible CMOS-compatible photodiode structures: (left) n+/ p-substrate diode, (middle) p+ /n-well diode, and (right) n-well /p-substrate diode (after [41]). “C” is the cathode of the photodiode, whereas “A” is its anode.

31

Figure 2.13 Simulated spectral response of the CMOS-compatible photodiode structures: (upper curve) n-well/p-substrate diode, (middle curve) n+/p-substrate diode, and (bottom curve) p+/n-well diode (after [45]). Data used here are for a 0.8 µm CMOS process.

32

Figure 2.14 Measured percentage quantum efficiency (spectral response) of the CMOS-compatible photodetection structures: (upper curve) n-well/p-substrate diode, (middle curve) n+/ p-substrate diode, and (bottom curve) photogate (after [47]).

35

Figure 2.15 Possible CMOS-compatible phototransistor structures: (a) Lateral PNP-phototransistor, (b) Vertical PNP-phototransistor (parasitic), and (c) PMOS-phototransistor (after [41]).

36

Figure 2.16 Typical spectral response of the CMOS-compatible vertical PNP-phototransistor structure (left curve) compared with that of the n-well/p-substrate photodiode on right graph (after [40]).

37

Figure 2.17 The basic structure of NMOS-photogate photodetector.

38

Figure 2.18 The basic structure of pinned photodiode.

39

Figure 2.19 Standard active pixel sensor (APS) cell (gray-dashed area). The current source load (M4), sample-and-hold circuit (CSH, M5), and the buffer amplifier are all located per column.

42

Figure 2.20 An offset current active pixel sensor (after [53]).

43

Figure 2.21 Logarithmic active pixel sensor (Log-APS) with one diode-connected NMOS. The circuit in the gray area is the inverted Log-APS.

44

Figure 2.22 HSPICE simulation of the logarithmic active pixel (Log-APS) transfer-characteristics for pixels with 1, 2, and 3 cascaded NMOS diodes. The simulation was performed using a 0.5µm CMOS process with a level-13 SPICE model.

45

Figure 2.23 Timing diagram of (n×m) array “raster” scanning. When a row “j” is activated, the columns (pixels) are read out successively and output serially.

53

Figure 2.24 A DRAM basic memory cell for storing analog charge.

54

Figure 2.25 Circuit for reducing the leakage (after [72]).

54

Figure 2.26 (a) An RC circuit used as a delay element, (b) an OTA-C circuit as a delay element, (c) a current-mode delay element (after [15]).

56

Figure 2.27 Processing circuitry in Hamamoto et al. motion adaptive sensor [64].

56

Figure 2.28 Input and output of a typical analog temporal differentiator.

57

Figure 2.29 Comparison between the frequency response of a perfect differentiator and that of a practical differentiator. At low frequencies, the circuit acts as a perfect differentiator. At high frequencies, the resistor limits the current through the capacitor.

58

Figure 2.30 A physical circuit realization of an approximation to the temporal derivative operation.

58

Figure 2.31 (a) A follower-based differentiator. (b) A modified version to reduce output offset due to mismatch (after [75]). Both circuits require two amplifiers, with two bias controls.

60

Figure 2.32 A typical example of a Hysteretic differentiator (after [75]).

60

Figure 2.33 Moini’s OTA-based differentiator [15, 74, 76, 77].

61


Figure 2.34 Functional block diagram of Moini’s insect vision chip showing the formation of templates (after [15, 74, 76, 78]).

62

Figure 2.35 Current-mode differentiator of Chong et al. (after [71]).

63

Figure 2.36 Types of image discontinuities [79].

65

Figure 3.1 Photodiode equivalent circuit [35]. Components include: Iph = current source, ideal diode with Idark , Rsh = shunt resistance, CD = junction (depletion layer) capacitance, Rs = series resistance, RL = load resistance, and VB = reverse bias supply.

67

Figure 3.2 Typical photodiode characteristics (region III); the dashed curve is for the photodiode under dark conditions, whereas the solid curve is for the photodiode under illumination.

68

Figure 3.3 Compact CMOS compatible photodiode model (subcircuit) suitable for pixel level and imager level SPICE simulations.

70

Figure 3.4 Left: the layout of 3-mode active pixel sensor (3M-APS) of the ALO-chip that was fabricated using 0.35µm CMOS technology. Right: the extracted equivalent circuit (net-list) superimposed on the layout.

73

Figure 3.5 Level-I symbolization that includes all 3M-APS circuit elements that are native to a standard CMOS process (refer to the text above).

74

Figure 3.6 Level-II symbolization, which includes, in addition to the CMOS-native circuit elements of the 3M-APS, all photodiode equivalent-circuit components except the current source that represents the photo-generated current.

75

Figure 3.7 Final simulation test-bench that includes all elements of the equivalent circuit of the CMOS-compatible photodiode and the pixel circuit.

75

Figure 3.8 Typical circuit of Log-APS with one NMOS diode (M1), which is connected to VDD supply voltage. The photodiode is connected to the column bus via a source-follower buffer (M2) and the row select switch (M3). All MOSFETs bodies are connected to the substrate.

76

Figure 3.9 (Left) Typical measured response of the logarithmic active pixel sensor (Log-APS). The optical dynamic range is about 6 dB with an output voltage swing of only 0.15 volts [55]. (Right) Diode-connected MOSFET (MOS diode) used in a resistor configuration.

77

Figure 3.10 Schematic diagram of the MOSFET structure.

79

Figure 3.11 Comparison between responses (at N1 in Fig. 3.1) of the conventional and inverted Log-APS using HSPICE simulation with 0.5µm CMOS technology data.

80

Figure 3.12 Two typical examples of “conventional” logarithmic active pixel sensor configurations: (a) photodiode logarithmic APS, with one NMOS diode, and (b) phototransistor logarithmic APS, with two series PMOS diodes.

81

Figure 3.13 An example of inverted logarithmic pixels, with two MOS diodes to increase the output swings of the pixel (hence reducing the degree of compression).

81

Figure 3.14 The operation of direct integration (linear) mode active pixel sensors.

82

Figure 3.15 Typical transient response of linear (integrating) mode of operation of active pixel sensors. The graph shows fair linearity within the range of video frame rate or higher.

83

Figure 3.16 Simulated photoresponse of logarithmic APS with one MOS diode (top) and its I-V transfer characteristics (bottom) using HSPICE behavioral modeling. The simulated output was at node 8, before the source follower.

86

Figure 3.17 Simulated photoresponse of inverted logarithmic APS with one MOS diode (top) and its I-V transfer characteristics (bottom) using HSPICE behavioral modeling. The simulated output was at node Vout, before the source follower.

87

Figure 3.18 The response of “conventional” logarithmic-APS with one MOS-diode, using HSPICE behavioral modeling (piecewise linear function (PWL) and VCVS), compared with the original measured data from [55], which were obtained under conditions different from those mentioned above.

88

Figure 3.19 I-V transfer characteristics of logarithmic APS with one diode, using HSPICE radiation models (RFAC ranges from 5×10⁻¹⁸ to 1×10⁻²⁰).

89

Figure 3.20 The transfer characteristics of “conventional” logarithmic APS (with one MOS-diode). A comparison between using HSPICE radiation models (RFAC = 1×10⁻¹⁹), shown in solid squares, and the parametric behavioral model, shown in open triangles.

90

Figure 3.21 Measured photoresponse of Log-APS to light intensity (at node Vout in Fig. 3.8). Simulated results obtained using HSPICE modeling are shown for comparison.

91

Figure 3.22 Comparison of photoresponses (at Vout in Fig. 3.8) of conventional (Log-APS) pixels with different number of MOS diodes using HSPICE simulation. All MOSFETs bodies were connected to the substrate.

92

Figure 3.23 Comparison of photoresponses (at Vout in Fig. 3.8) of inverted pixels (inverted Log-APS) with different number of MOS diodes using HSPICE simulation. All MOSFETs bodies were connected to the substrate.

93

Figure 3.24 Comparison of transfer characteristics (at Vout in Fig. 3.8) for Log-APS (top) and for inverted Log-APS (bottom) with different number of MOS diodes using HSPICE simulation. All MOSFETs bodies were connected to the substrate.

94

Figure 3.25 Comparison between measured and simulated optical modulation frequency response for Log-APS (at Vout in Fig. 3.8).

96

Figure 3.26 Comparison of the simulated optical modulation frequency responses of the conventional Log-APS with one, two and three MOS diodes (at Vout in Fig. 3.8).

97

Figure 3.27 Simulated ambient illumination effect on 3-dB frequency for Log-APS (Conventional configuration, at node N2 in Fig. 3.1).

98

Figure 4.1 The conventional concept of the spatial-edge detection process shown for one row of the imager array (after [97]).

102

Figure 4.2 Functional block diagram of the proposed edge detection system using the SDS algorithm (a). Row “n” (black) is sampled in the left S&H while row n+1 (white) is sampled in the right one. The outputs are then differenced using a differential amplifier to obtain the spatial-edge information. (b) Comparison between timing patterns for the FPN reduction operation (top) and the spatial-edge detection operation (bottom) of the correlated double sampling circuit (CDS).

103

Figure 4.3 The correlated double sampling circuit [15, 16] used here for spatial-edge detection is shown along with the dual-mode active pixel sensor (APS). The mode of operation is determined by selecting the proper control state for the OR gate (“0” for Linear-APS mode, and “1” for Log-APS mode).

104

Figure 4.4 Captured hand image (at 355 Lux) in the linear-mode using the CDS circuit to enhance image quality (a). The image 3D-plot (b) clearly shows the image contrast. Frame-rate is ~ 46 f/s. Images captured using LabView software.

106

Figure 4.5 Edge detection of the hand captured in the linear-mode (at 746 Lux) using the CDS circuit, but with the timing pattern for spatial-edge detection using SDS (a). 3D-plot of the detected image edges (b). Frame-rate is ~ 46 f/s. Images captured using LabView software.

106

Figure 4.6 Captured bar image (at 738 Lux, and frame-rate ~46 f/s) in the linear-mode using the CDS circuit to enhance image quality, with the image 3D-plot (right) showing the contrast.

107

Figure 4.7 Edge detection of the bar captured in the linear-mode (at 738 Lux and ~ 46 f/s) using the CDS circuit. The 3D-plot (right) shows the level of the spatially detected edge.

107

Figure 4.8 A typical captured image in the linear mode of a 100 %-contrast stimulus, along with the horizontal and vertical waveforms that correspond to the rows and columns of the imager array (rotated 90º). No double sampling (CDS) operation was applied here; therefore dc (offset) values still exist. Note: gray levels were used here instead of voltages (software oscilloscope). Frame rate was ~46 f/s, and illumination light intensity was ~922 Lux.

109

Figure 4.9 A typical image captured in the linear mode of a 100 %-contrast stimulus using correlated double sampling (CDS) circuits. Also shown are the horizontal and vertical waveforms that correspond to the rows and columns of the imager array (rotated 90º). Because of the CDS operation applied here, most of the dc (offset) values vanished upon differencing. Note: gray levels were used here instead of voltages (software oscilloscope). Frame rate was ~46 f/s, and illumination light intensity was ~922 Lux.

110

Figure 4.10 A typical captured image in the linear mode of a 100 %-contrast stimulus using correlated double sampling (CDS) circuits. Also shown is the spatially detected edges image with horizontal and vertical waveforms that correspond to the rows and columns of the imager array (rotated 90º). Because of the spatial-edge operation applied here, all dc (offset) values vanished upon differencing. The strong 1D tuning toward one direction (parallel to image rows) is clear here. Note: gray levels were used here instead of voltages (software oscilloscope). Frame rate was ~46 f/s, and illumination light intensity was ~922 Lux.

111

Figure 4.11 Effect of illumination on spatial-edge detection. The 3D graphs show the imager response (captured image) and spatially detected edge for different illumination levels: (a) 1131-1839 Lux, (b) 206-824 Lux. Both the contrast and frame-rate were kept constant at 100 % and ~46 f/s respectively. This graph shows that spatial-edge detection is feasible even at these low illumination levels.

112 113

Figure 4.12 Effect of illumination on spatial-edge detection. These instantaneous histograms show the spatially detected edges output signal (∆V) of one frame for different illumination levels (206-1839 Lux). The stimulus has 100% contrast. The images were captured in linear mode with a constant frame-rate of ~ 46 f/s. The left edge is a white-to-black (WB) edge, whereas the right edge is a black-to-white (BW) edge as shown in the top chart. H-scale Time (sec) is at 2.5 ms/div, V-scale ∆V (V) is at 100 mV/div.

114

Figure 4.13 Effect of illumination on the spatial-edge detection of 100 % contrast stimulus. The illumination light intensities of light source used are in the range between 214-2560 Lux. The images were captured in linear mode at frame-rate of ~ 46 f/s. The plot shown here is for the left (WB) edge of the stimulus.

115

Figure 4.14 Effect of illumination on the spatial-edge detection of 100 % contrast stimulus. The illumination light intensities of light source used are in the range between 214-2560 Lux. The images were captured in linear mode at frame-rate of ~ 46 f/s. The plot shown here is for the right (BW) edge of the stimulus.

116

Figure 4.15 3D-plot and surface plot of the spatial-edge output signal for a (vertical) 100 %-contrast stimulus illuminated by a light intensity of 922 Lux. The image was captured in linear mode at a frame-rate of ~ 46 f/s. This graph shows the various portions of the 3D edge signal as related to the 2D stimulus edge image represented by the surface plot on the right of the figure.

118

Figure 4.16 3D-plot of the spatial-edge (SDS) output signal for (vertical) black bar over gray background stimuli illuminated by a light intensity of 922 Lux. Different gray levels were used to generate stimuli with contrasts that range from 20-100 %. The image was captured in linear mode at a frame-rate of ~ 46 f/s. (a) 100 %, (b) 70-92 %, (c) 30-59 %, the rest (20 %) is shown here.

119 121

Figure 4.17 Effect of contrast on spatial-edge detection. The stimuli are a vertical black bar on different gray-level backgrounds to generate contrasts that range from 20-100 %. All measurements were done under constant (922 Lux) illumination and a frame-rate of ~ 46 f/s. These instantaneous histograms show the spatially detected edges output signal (∆V) of one frame. H-scale Time (sec) is at 2.5 ms/div, V-scale ∆V (V) is at 100 mV/div. (a) 75%, 100 %, the rest (20-47 %) is shown here.

122

Figure 4.18 Effect of contrast on spatial-edge detection for (vertical) black bar over gray background stimuli. Different gray levels were used to generate contrasts that range from 20-100 %. The illumination light intensity and frame-rate were kept constant at 922 Lux and ~ 46 f/s respectively.

123

Figure 4.19 3D-plot of the spatial-edge output signal for vertical-100%-contrast stimulus that was illuminated by light intensity of 922 Lux. The image was captured in linear mode at frame-rate in the range (a) ~11.5 f/s, (b) ~23 f/s – 230 f/s, the rest is shown here

124 126

Figure 4.20 Effect of frame-rate on spatial-edge detection. The (vertical) 100 %-contrast stimulus was illuminated by a constant light intensity of ~ 922 Lux. These instantaneous histograms show the spatially detected edges output signal (∆V) of one frame at frame-rates that range from 23 f/s to 1150 f/s. V-scale ∆V (V) is at 100mV/div. H-scale Time (sec) are from the top to the bottom at 10 ms/div, 5ms/div, 2.5ms/div, 0.5ms/div, and 0.1ms/div respectively. (a) 11.5 f/s, 23 f/s, the rest (~ 46 f/s – 1150 f/s) is here.

126 127

Figure 4.21 Effect of frame-rate on spatial-edge detection of (vertical) 100%-contrast stimuli. The light intensity was kept constant at ~ 922 Lux. The frame-rate was varied from 23 f/s to 1150 f/s.

128

Figure 4.22 Highly illuminated (7170 Lux) lamp image captured in the logarithmic-mode (a). It shows higher FPN as expected. The 3D-plot (b) clearly shows the image contrast. The frame-rate was ~ 46 f/s.

130

Figure 4.23 The image of the spatially detected edge of a lamp (high illumination at ~7170 Lux) in the logarithmic-mode using the CDS circuit. The frame-rate was ~ 46 f/s. The 1D tuning and FPN noise are obvious in the image. The 3D plot was omitted, since it didn’t show any additional information content.

131

Figure 4.24 Left: bar image (at 196 Lux) captured at high frame-rates (~230 f/s) in the logarithmic-mode, its 3D-plot clearly showing the contrast, and its histogram. Right: spatially detected edges image, its 3D-plot showing the two edge signals, and its histogram, which shows the difference between normal operation and spatial-edge detection operation of the CDS circuit. Histograms were calculated using MATLAB software.

132

Figure 4.25 Effect of the illumination and frame-rate on spatial-edge in logarithmic mode of 100%-contrast stimulus. The 3D graphs show the imager response (captured image) and spatially detected edge for different illumination levels of 922-1323 Lux. The images were captured with frame-rates in the range ~ 230 f/s-1150 f/s.

134

Figure 4.26 Effect of the illumination on spatial-edge in logarithmic mode of 100 %-contrast stimulus. The illumination light intensities of light source used are in the range between 196-1323 Lux. The images were captured in linear mode at frame-rate of ~ 230 f/s. The plot shown here is for both edge types, left (WB), and right (BW) edges of the stimulus.

135

Figure 4.27 Effect of the frame-rate on spatial-edge in logarithmic mode of 100 % contrast stimulus. The illumination light intensity was kept at 922 Lux. The images were captured at frame-rates varied from 230 f/s to 1150 f/s. The plot shown here is for both edge types, left (WB) and right (BW) edges of the stimulus.

137

Figure 4.28 A typical example of how the spatial-edge signal (averaged) changes in response to pixel-by-pixel (row-by-row) movement as the 100%-contrast stimulus (inset) moves to the right. The illumination intensity level is ~1165 Lux. Images were captured with a frame-rate of ~ 46 f/s. The 100%-contrast stimulus was a black bar on a white background as shown.

139

Figure 4.29 A typical spatial edge output signal (∆V) logged (with WaveStar) for about 117 sec with a sampling rate of 1 sample/sec. The 100 % contrast stimulus was moving with a constant speed of ~15.24 cm/sec, and was illuminated with a 466 Lux light source. The images were captured at a frame rate of ~ 46 f/s. This strip chart shows the M+ and M- values for both WB and BW edges, which are defined by the Tektronix oscilloscope as Max and Min respectively. Solid squares represent “active” spikes, whereas open squares represent “idle” signals.

140

Figure 4.30 A graphic illustration of the experimental setup showing the position of the light source with respect to the imager field-of-view. Also shown are the hypothetical positions p1, p2, and p3 (at the sampling event) of an edge moving clockwise with a speed of S cm/sec. Optical parameters shown numerically are kept constant throughout the experiments.

141

Figure 4.31 Illustration of the stimulus drum showing the positions of BW and WB edges that move in the clockwise direction with a typical speed of about 152.4 mm/sec. Also shown is the imager array with a width of 1.92 mm (64×30 µm). The stimulus can be viewed as a tape of flipping black and white strips or as a symmetrical square wave with a period of 10.925 sec (for this speed).

143

Figure 4.32 Effect of speed of motion on the maximum achievable spatial-edge output signal (∆V). The 100 %-contrast stimulus was moving with constant speeds in the range from ~15.24 cm/sec to ~200.39 cm/sec, and was illuminated with a 466 Lux light source. The graph shows both M+ and |M-|. Each point in the graph represents the maximum of all data logged (about 117 samples for each speed). Images were captured with a frame rate of ~46 f/s.

144

Figure 4.33 A typical area graph of the spatial edge output signal (∆V) used to facilitate the calculation of the percentage efficiency of the detection. The data used here is the same data that was used to plot Fig. 5.26. Also shown are thresholds Vth1, Vth2, and Vth3, at voltages of 80 mV, 100 mV, and 150 mV respectively. “A” is defined as the interval between edges of the same type (period of the stimulus square wave in Fig. 4.28), whereas “B” is the time between edges of different types (width of the stimulus square wave in Fig. 4.24). Note that we used the absolute value of the edge signal |M-| in this plot.

146

Figure 4.34 Effect of speed of motion on the percentage detection efficiency of spatial-edge signal ∆V for different threshold voltages (80-150mV). The 100 % contrast stimulus was moving with constant speeds in the range ~15.24 cm/sec to ~ 200.39 cm/sec, and was illuminated with 466 Lux light source. The graph data was compiled from the data shown in Fig. 4.29.

148

Figure 5.1 (Top) A comparison between control timings of one row for the temporal edge detection operation (upper pattern) and the spatial edge detection operation (lower pattern) of the correlated double sampling circuit (CDS). (Bottom) A graphical illustration of the double sampling operation.

154

Figure 5.2 Temporal double sampled (TDS) images in the linear mode of a black bar on white background (100 % contrast) stimulus illuminated by white light of intensity of ~466 Lux. The image (a) was captured at high frame rate of ~123 f/s. The Edge image (b) is obtained using intra-sampling interval ∆t of ~20 µsec.

155

Figure 5.3 Temporal double sampling (TDS) in the linear mode of a black bar on white background (100 % contrast) stimulus illuminated by white light of intensity of ~466 Lux. The image signals “VSH1” and “VSH2” (shown overlapped) are for one frame and were captured at a high frame rate of ~123 f/s. These signals, which were sampled for each pixel in the imager array, were probed at the output nodes of the buffer amplifiers of the CDS circuit (refer to Chapter 4). The sampling clocks SH1 and SH2 were ~20 µsec (∆t) apart.

156

Figure 5.4 Output signals of the linear-mode temporal double sampling (TDS) of a horizontal black bar on white background (100 % contrast) stimulus. The light intensity was ~531 Lux and the frame rate was ~123 f/s. The signals “VSH1” and “VSH2” are shown overlapped at the bottom of the graph. The temporal edge signal ∆V (shown at the top of the graph) was obtained with an intra-sampling interval (∆t) of ~20 µsec.

158

Figure 5.5 A magnified graph of the TDS output signal of a horizontal stimulus with 100 % contrast illuminated by 531 Lux light. The image signal was captured in the linear-mode at high frame rate of ~123 f/s, with an intra-sampling interval of ~20 µsec (∆t). These signals are then differenced to produce the edge signal “∆V”.

158

Figure 5.6 Effect of illumination on the temporal double sampling of 100 % contrast stimulus. The illumination light intensities of light source used are in the range between 214-2560 Lux. The images were captured in linear mode at frame rate of ~ 123 f/s. The edge information was obtained by a constant intra-sampling interval (∆t) of 61 µsec. The plot shown here is for the left edge of the stimulus.

159

Figure 5.7 Effect of illumination on the TDS output of a 100 % contrast stimulus. The illumination light intensities of the light source used are in the range between 214-2560 Lux. The images were captured in linear mode at a frame rate of ~ 123 f/s. The interval ∆t was kept constant at 61 µsec. The plot shown here is for the right edge of the stimulus.

160

Figure 5.8 3D views show some samples of imager response (captured image) for different stimulus contrasts (30-100%). No TDS operation here. The illumination light intensity level was kept constant at 531 Lux. These images were captured in linear mode at frame rate of ~ 123 f/s.

162

Figure 5.9 Effect of contrast on TDS for the WB edge of black bar over gray background stimuli. Different gray levels were used to generate stimuli with contrasts that range from 30-100 %. The illumination light intensity, frame rate, and ∆t were all kept constant at 531 Lux, ~ 123 f/s, and 61 µsec respectively.

163

Figure 5.10 Effect of contrast on TDS for the BW edge of the stimuli. The stimuli contrasts ranges from 30-100 %. The illumination light intensity, the frame rate, and the interval ∆t were all kept constant at 531 Lux, ~ 123 f/s, and 61 µsec respectively.

163

Figure 5.11 Effect of the intra-sampling interval ∆t on the magnitude (M+) of temporal edge signal ∆V for different frame rates. The stimulus has 100% contrast and the illumination intensity was kept constant at 466 Lux. The interval ∆t was changed from 1 µsec to 188 µsec for each frame rate. This frame rate was varied from ~ 39.82 f/s to 122.52 f/s.

165

Figure 5.12 Effect of the intra-sampling interval ∆t on the peak-peak value of temporal edge signal ∆V for different frame rates. The stimulus has 100% contrast and the light intensity was kept constant at 466 Lux. The interval ∆t was changed from 1 µsec to 188 µsec for each frame rate. This frame rate was varied from ~ 39.82 f/s to 122.52 f/s.

166

Figure 5.13 The effect of the interval ∆t on the magnitude (M+) of TDS signal ∆V at frame rate of 55.13 f/s used to show how we calculated the saturation point (min ∆t at max ∆V) of temporal edge signal. The stimulus has 100% contrast and the illumination light intensity was kept constant at 466 Lux. The interval ∆t was varied from 1 µsec to 188 µsec.

167

Figure 5.14 The average saturation level ∆VSat of the magnitude (M+) of TDS signal ∆V as a function of the frame rate in the range from 39.82 f/s to 122.52 f/s. The stimulus has 100% contrast and the illumination light intensity was kept constant at 466 Lux. Maximum is at 87.6 f/s, 224.4 mV.

168


Figure 5.15 The average saturation level ∆VSat of the peak-peak value of TDS signal ∆V as a function of the frame rate in the range from 39.82 f/s to 122.52 f/s. The stimulus has 100% contrast and the illumination light intensity was kept constant at 466 Lux. Maximum is at 87.6 f/s, 329.6 mV.

169

Figure 5.16 ∆tmin (minimum ∆t at maximum ∆V) as the frame rate is varied in the range from 39.82 f/s to 122.52 f/s. The stimulus has 100% contrast and the illumination light intensity was kept constant at 466 Lux. This plot is for both the magnitude and peak-peak values of the TDS signal ∆V.

170

Figure 5.17 Output signal of the logarithmic-mode TDS of a black bar on white background (100 % contrast) stimulus illuminated by white light of intensity of ~133 Lux. The image was captured at a high frame rate of ~114 f/s. The signals “1” and “2” shown are the outputs of the CDS circuit, VSH1 and VSH2 respectively. The two signals were sampled with an intra-sampling interval (∆t) of 6 µsec. These signals are then differenced to produce the edge signal “∆V” shown at the top of the graph.

172

Figure 5.18 Surface and 3D-view of the TDS in logarithmic-mode. The stimulus is a black bar on white background (100 % contrast), which was illuminated by light of intensity of ~ 133 Lux. The image (a, b) was captured with frame rate of ~ 81.4 f/s. The image after TDS operation (c, d) was obtained with intra-sampling interval (∆t) of 58 µsec.

173

Figure 5.19 Surface and 3D-view of the TDS in logarithmic-mode. The stimulus is a black bar on white background (100 % contrast), which was illuminated by light of intensity of ~ 133 Lux. The image (a, b) was captured with frame rate of ~ 61.3 f/s. The image after TDS operation (c, d) was obtained with intra-sampling interval (∆t) of 122 µsec.

174

Figure 5.20 A graphical illustration of the degradation of the sampled output with time (due to leakage) and its effect on the TDS output signal when using different intra-sampling intervals.

175

Figure 5.21 Temporal double sampling (TDS) in the logarithmic-mode of bulb lamp at intensity level of ~ 7000 Lux. The image (a, a1, a3) was captured with frame rate of ~ 61.3 f/s. The TDS edge image (b1, b2, b3) was obtained with intra-sampling interval ∆t of ~122 µsec.

176

Figure 5.22 A logarithmic-mode TDS output of a red (670 nm) diode laser spot. The light intensity level was ~3.714 mW/cm2. The image (a), (b) was captured with a frame rate of ~ 61.3 f/s. The TDS output (c), (d) was obtained with ∆t of 122 µsec.

177

Figure 5.23 Output signal of the logarithmic-mode TDS of a red (670 nm) diode laser spot. The light intensity level was ~3.714 mW/cm2. The image outputs “1”, “2”, captured with a frame rate of ~ 61.3 f/s, are for VSH1 and VSH2 respectively. The TDS output ∆V, obtained with ∆t of 122 µsec, is labeled “3” at the top of the graph.

178

Figure 5.24 A Typical TDS output signal (∆V) logged (with WaveStar) for about 117 sec with sampling rate of 1 sample/sec. The 100 % contrast stimulus was moving with a constant speed of ~15.24 cm/sec, and was illuminated with 466 Lux light source. The images captured at frame rate of 61.3 f/s. The interval ∆t was 122 µsec. This strip chart shows the M+ (BW edge), and M- (WB edge) values, which are defined by Tektronix oscilloscope as Max, and Min respectively.

180

Figure 5.25 Effect of speed of motion on the maximum achievable temporal double sampling (TDS) output signal (∆V). The 100 %-contrast stimulus was moving with constant speeds in the range from ~15.24 cm/sec to ~179.25 cm/sec, and was illuminated with a 466 Lux light source. The images were captured at a frame rate of 61.3 f/s, and the edge signals were acquired with ∆t of ~122 µsec. The graph shows the M+ (BW edge), and |M-| (WB edge). Each point in the graph represents the maximum of all data logged (about 117 samples for each speed).

183

Figure 5.26 A typical area graph of the temporal edge output signal (∆V) used to facilitate the calculation of the percentage efficiency of the detection. The data used here is the same data that was used to plot Fig. 5.25. Also shown are thresholds Vth1, Vth2, Vth3, and Vth4, at voltages of 60 mV, 80 mV, 100 mV, and 120 mV respectively. “A” is defined as the interval between edges of the same type (period of the stimulus square wave in Fig. 4.28), whereas “B” is the time between edges of different types (width of the stimulus square wave in Fig. 4.24). Note that we used the absolute value of the WB edge signal |M-| in this plot.

185

Figure 5.27 Effect of speed of stimulus motion on the percentage detection efficiency of the temporal edge signal ∆V for different threshold voltages (60-120 mV). The 100 % contrast stimulus was moving with constant speeds in the range ~15.24 cm/sec to ~179.25 cm/sec, and was illuminated with a 466 Lux light source. The images were captured at a frame rate of 61.3 f/s, and the edge signals were acquired with ∆t of ~122 µsec. The graph data was compiled from the same data shown in Fig. 5.24.

186

Figure 6.1 Linear active pixel sensor (APS) with possible sources of fixed pattern noise (FPN). Idark is the photodiode dark current, AD is the optical aperture, and CD is the sense node (IN) capacitance. Vth, Col, W, and L are the respective threshold voltage, overlap capacitance, gate width, and gate length of the NMOSs. rds is the row-select-switch “on” resistance. Ibias-0 is the bias current of the per-column current-source load.

193

Figure 6.2 The total FPN of the CMOS active pixel sensor (APS), right, compared with the FPN of a CCD imager. It is clear that the CMOS APS suffers from additional column-FPN, which appears as vertical stripes in the image (from [19]).

196

Figure 6.3 Logarithmic active pixel sensor (APS) with possible sources of fixed pattern noise (FPN). Note that the bulk node of transistor M1 is connected to ground, but is not shown for clarity.

198

Figure 6.4 Logarithmic active pixel sensor (APS), with two MOS diodes (2Log) to increase the logarithmic slope. Possible sources of fixed pattern noise (FPN) are also shown. Note that the bulk nodes of transistors (M1, M2) are both connected to ground, but are not shown for clarity.

201

Figure 6.5 Simulated transfer-characteristics of logarithmic active pixel sensor (APS) using 0.5 µm CMOS technology data and the HSPICE simulator. The upper curve represents the sense voltage for Log-APS (1-NMOS), whereas the lower curve represents the sense voltage of 2Log-APS (2-NMOS).

202

Figure 6.6 3-Mode logarithmic active pixel sensor (3M-APS): Linear-APS, Log-APS, and 2Log-APS. Possible sources of fixed pattern noise (FPN) are also shown. Note that the bulk nodes of transistors (M1, M2, M3) are all connected to ground, but are not shown for clarity. The control signals Log and Reset can be either high (VDD), low (Gnd), or a clock, depending on the required mode of operation (as shown in the inset table).

206

Figure 6.7 Gray level response of the imager in linear-mode (Linear-APS). The top graph is for the columns’ means <columns>, with the bad column excluded, whereas the bottom graph is for <rows>. The measurement was held with Vbias-0 @ ~0.48 V, Vbias-B @ -0.58 V, and frame rate @ ~23.32 f/s. Light intensity range is 0-160 µW/cm2 (green light λ~540nm).

209

Figure 6.8 Gray level response of the imager in logarithmic-mode (Log-APS). Top graph is for <columns>, with some bad columns excluded, whereas the bottom graph is for <rows>. The measurement was held with Vbias-0 @ ~0.48 V, Vbias-B @-0.58 V, and frame rate @ ~23.32-f/s. Light intensity range is 0-160µW/cm2 (green light λ~540nm).

211

Figure 6.9 Gray level response of the imager in logarithmic-mode (2Log-APS). Top graph is for <columns>, with some bad columns excluded, whereas the bottom graph is for <rows>. The measurement was held with Vbias-0 @ ~0.48 V, Vbias-B @-0.58 V, and frame rate @ ~23.32-f/s. Light intensity range is 0-160µW/cm2 (green light λ~540nm).

212

Figure 6.10 Double-sampled FPN technique with 3M-APS operations: Linear-APS, Log-APS, and 2Log-APS. Note that the bulk nodes of transistors (M1, M2, M3) are all connected to ground, but are not shown for clarity. The control signals Log and Reset can be either high (VDD), low (Gnd), or a clock, depending on the required mode of operation (see Fig. 6.6).

213

Figure 6.11 Functional block diagram of the proposed FPN reduction system using modal double sampling (MDS) technique. The Log-APS (Linear-APS) outputs of pixels in active row (dark) are sampled in one S&H. After switching to 2Log-APS (resetting) the 2Log-APS (reset) outputs from the same pixels are sampled in the other S&H. The outputs are then differenced using differential amplifier (one/imager) to obtain the logarithmic (linear) output with reduced FPN.

214

Figure 6.12 Controls-timings and clocks for fixed pattern noise reduction in logarithmic mode using modal double sampling (MDS) technique. ∆TR is the row-preparation-time, and ∆t is the time interval between samples. Refer to the text and Fig. 6.10 for controls’ definitions. For a given frame rate, both intervals are crucial in finding the minimum speed of pixel response.

215

Figure 6.13 Typical histogram of 11-frame averaged array of imager (gray levels) outputs (top graph). The measurement was held with Vbias-0 of 0.48 V, Vbias-B of -0.58 V, and frame rate of ~23.32-f/s. Light intensity is 10 µW/cm2 (green light λ~540nm).

217

Figure 6.14 Imager photoresponse in the linear mode, VSH1 (right scale) and with MDS (left scale) differencing. DS-1 or DS-2 is the output depending on which of VSH1, VSH2 was differenced from the other. The measurements were held with Vbias-0 of 0.48 V, Vbias-B of -0.58 V, and frame rate of ~23.32 f/s. The light source is a green Halogen lamp (λ~540nm), with light intensity range from 0-160 µW/cm2.

219

Figure 6.15 Pixel-FPN at different light intensities in the linear mode. The MDS noise-reduction operation has no effect on pixel FPN as shown. Note that the absolute value (mV) curves are overlapped with the percentage value (%) curves as shown. The measurements were held at the same conditions stated above. Graph data was calculated using Equation 6.28.

219

Figure 6.16 Row-FPN at different light intensities in the linear mode. The MDS noise-reduction operation has no effect on row spatial-noise as shown. The measurements were held at the same conditions stated above. The graph data was calculated using Equation 6.27.

220

Figure 6.17 Column spatial-noise (FPN & PRNU) at different light intensities in the linear mode. The MDS noise-reduction operation is clearly depicted by the difference in the level of the column-FPN without (upper curves) and with (lower curves) the MDS operation. The measurements were held at the same conditions stated above. Calculated using Equation 6.26.

222

Figure 6.18 Column-FPN percentage reduction with MDS (modal double sampling) at different light intensities in the linear mode. The measurements were held at the same conditions stated above.

222

Figure 6.19 Imager photoresponse in the logarithmic mode without MDS (modal double sampling) differencing operation. The measurements were held with Vbias-0 of 0.48 V, Vbias-B of -0.58 V, and frame rate of ~23.32 f/s. The light source is green Halogen lamp (λ~540nm), with light intensity range from 0-160µW/cm2.

223

Figure 6.20 Imager photoresponse in the logarithmic mode with MDS differencing, where the output is DS-1 (2Log-Log) or DS-2 (Log-2Log) depending on which of the column-buffer-amplifier outputs (VSH1 [Log], VSH2 [2Log]) was differenced from the other. The measurements were held with Vbias-0 of 0.48 V, Vbias-B of -0.58 V, and frame rate of ~23.32 f/s. The light source is green Halogen lamp (λ~540nm), with light intensity range from 0-160µW/cm2.

224

Figure 6.21 Pixel-FPN (A) and Row-FPN (B) at different light intensities in the logarithmic mode. The MDS noise-reduction operation has almost no effect on pixel-FPN and/or row-FPN as shown. These measurements were held at the same conditions stated above.

225

Figure 6.22 Column-FPN at different light intensities in the logarithmic mode. The MDS noise-reduction operation is clearly depicted by the difference in level of the spatial-noise without (upper curves) and with (lower curves) the MDS operation. These measurements were held at the same conditions stated above.

226

Figure 6.23 Column-FPN percentage reduction with MDS (double sampling) at different light intensities in the logarithmic mode. These measurements were held at the same conditions stated above.

227

Figure 6.24 3D-imager response to frame rate in the linear-mode at different light intensities (without MDS). The measurements were held with Vbias-0 of 0.48 V, Vbias-B of -0.58 V, and light intensity range of ~ 0.5-160µW/cm2 (green @ λ~540nm). The frame rates range was ~11.66 f/s to 233.2 f/s.

229

Figure 6.25 Imager frame-rate response in the linear-mode of operation at different light intensities. The measurements were held with Vbias-0 of 0.48 V, Vbias-B of -0.58 V, and light intensity range of ~ 0.5-160µW/cm2 (green @ λ~540nm). The frame rates range was ~11.66 f/s to 233.2 f/s.

230

Figure 6.26 Effect of frame-rate on column-FPN in the linear-mode. Left is the FPN before MDS operation (top) for different frame rates compared with the FPN after using MDS (bottom). Right is the calculated percentage of FPN-reduction. These measurements were held with Vbias-0 @ 0.48 V, Vbias-B @ -0.58 V, and light intensity @ ~20µW/cm2 (green @ λ~540nm). The frame rates were in the range of ~11.66 f/s to 233.2 f/s. The percentage FPN shown on the left was calculated for full well capacity.

231

Figure 6.27 3D-plot of the effect of frame-rate on column-FPN in the linear-mode for different light intensities. Left is the FPN without MDS. Right graph is the FPN with MDS. All measurement conditions are as stated above. The frame rates range was ~11.66 f/s to 233.2 f/s and the light intensity ranges (λ~540nm) from 0.5 µW/cm2 to ~160 µW/cm2.

232

Figure 6.28 Effect of frame-rate on row-FPN in the linear-mode of operation. The graph shows no improvement in FPN using MDS. Both FPN curves (with MDS and without MDS) are almost identical (shown overlapped). Measurement conditions are as those in Fig. 6.26.

233

Figure 6.29 3D-plot of the effect of frame-rate on row-FPN in the linear-mode for different light intensities. Top is the FPN without using MDS. Bottom graph is for the FPN with MDS. All measurement conditions are as stated above. The frame rates range was ~11.66 f/s to 233.2 f/s and the light intensity ranges (λ~540nm) from 0 µW/cm2 to ~160 µW/cm2.

234

Figure 6.30 Imager frame-rate 3D-response in the logarithmic-mode of operation at different light intensities (without MDS). The measurements were held with Vbias-0 of 0.48 V, Vbias-B of -0.58 V, and light intensity range of ~ 0-160µW/cm2 (green @ λ~540nm). The frame rates range was ~11.66 f/s to 233.2 f/s. The color code on the left designates different ranges of output voltage.

235

Figure 6.31 Effect of frame-rate on column-FPN in the logarithmic-mode of operation. Left: the FPN without using MDS for different frame rates (top) compared with the FPN using MDS (bottom). Right: the calculated percentage of FPN-reduction for this case. All conditions are as stated above, with light intensity @ ~45µW/cm2 (green @ λ~540nm).

236

Figure 6.32 Effect of frame-rate on row-FPN in the logarithmic-mode. Shown in “solid points” is the FPN without using MDS for different frame rates compared with row-FPN using MDS (open points). Measurement was held in conditions similar to those for Fig. 6.31 above.

237

Figure 6.33 3D-imager response to Vbias-0 in the linear-mode for different light intensities (without MDS). The measurements were held with Vbias-B of -0.58 V, frame-rate of ~ 23.32 f/s, and light intensity in the range 0-160µW/cm2 (green @ λ~540nm). Vbias-0 range was ~0.25V to ~0.65V.

238

Figure 6.34 Effect of Vbias-0 on imager column-FPN in the linear-mode. Left: the FPN without using MDS for different Vbias-0 (top) compared with the PRNU using MDS (bottom). Right is the calculated percentage of FPN -reduction within the Vbias-0 range under test. All conditions are as stated above with light intensity @~20µW/cm2 (green @ λ~540nm).

239

Figure 6.35 Effect of Vbias-0 on row-FPN in the linear-mode. The graph shows no significant reduction in FPN using MDS. Both FPN curves (with MDS and without MDS) are almost identical. Measurement conditions are as those in Fig. 6.34.

240

Figure 6.36 3D-imager response to Vbias-0 in the logarithmic-mode for different light intensities (without MDS). The measurements were held with Vbias-B of -0.58 V, frame-rate of ~ 23.32 f/s, and light intensity in the range 0-160µW/cm2 (green @ λ~540nm). Vbias-0 range was ~0.25V to ~0.65V.

241

Figure 6.37 Effect of Vbias-0 on imager column-FPN in the logarithmic-mode. Left: the FPN without using MDS for different Vbias-0 (top) compared with the FPN using MDS (bottom). Right is the calculated percentage of FPN-reduction within the Vbias-0 range under test. All conditions are as those stated above, with light intensity of ~45µW/cm2 (green @ λ~540nm).

242

Figure 6.38 Effect of Vbias-0 on imager row-FPN in the logarithmic-mode. The graph shows FPN without using MDS for different Vbias-0 compared with the FPN using MDS. All conditions are as those stated above in Fig. 6.38.

243

Figure 6.39 Simplified dynamic circuit for Log-APS (left), and 2Log-APS (right) at the onset of switching to the respective mode.

244

Figure 6.40 Qualitative temporal circuit behavior for both modes of operation. (A) Response to an abrupt decrease in Vin (increase in light intensity). (B) Response to an abrupt increase in Vin (decrease in light intensity) for different step heights (after [38]).

246

Figure 6.41 The measured response of the 3M-APS (bottom) as it is switched between the Log-APS and 2Log-APS modes. The switching frequency ranged from 125Hz to 2.5kHz. The pulse train (top) is applied to the “Reset” node (refer to Fig. 6.6) and is used, along with a simultaneous “inverted” pulse train (not shown) on the “Log” node, to switch the pixel from one mode to another. The light source was a pulsed red (670nm) laser diode at a light intensity of ~ 2 mW/cm2.

248 249

Figure 6.42 The measured optical modulation frequency responses for Log-APS at different light intensities.

250

Figure 6.43 The measured optical modulation frequency responses for 2Log-APS at different light intensities.

251

Figure A.1 Dual-mode active pixel sensor (2M-APS) layout showing the (N+, p-substrate) photodiode, all in-pixel transistors: ➊ M1 (reset transistor), ➋ SF (source follower transistor), ➌ M3 (Row select transistor), and all pixel contacts (see Fig. 3.1).

264

Figure A.2 Microscopic photograph for CYC-imager using the ElectroScan System.

264

Figure A.3 The layout floor plan of the CYC-chip used for fabrication, which consists of CMOS image sensor (64x64 2M-APS pixels) array, with pixel pitch of 30µm, ➊ vertical (Row select and/or Reset) decoder, ➋ control block (counter), ➌ double sampling (DS) shielded circuit block, and ➍ horizontal (column) decoder.

265

Figure B.1 Experimental setup for edge measurements for stationary stimuli. The National Instruments LabVIEW software, which was adopted for image capture, is also used as an oscilloscope via the Gage card. Gage CompuGen is pattern generator software. Each software package has a hardware card associated with it. The test fixture consists of the CYC-chip and PCB. The light source is a bulb-lamp.

268

Figure B.2: Experimental setup for edge measurements for moving stimuli. The National Instruments LabVIEW software, which was adopted for image capture, is also used as an oscilloscope via the Gage card. Gage CompuGen is pattern generator software. WaveStar is an automatic measurement and logging software. Each software package has a hardware card associated with it. The test fixture consists of the CYC-chip and PCB. The light source is a bulb-lamp.

270

Figure C.1 Triple-mode active pixel sensor (3M-APS) layout showing the (N+, p-substrate) photodiode, all in-pixel transistors: 1 (diode-connected NMOS), 2 M2 (Log-node NMOS), 3 M3 (Reset-node NMOS), 4 SF (source-follower transistor), 5 M5 (Row-select NMOS), and all pixel contacts (refer to Fig. 6.10).

272

Figure C.2 The layout of the ALO-chip used for fabrication, which consists of a CMOS image sensor (64x64 3M-APS pixels) array, with pixel pitch of 23.4µm, 1 vertical shift-register (Row-select), 2 vertical shift-register (Reset), 3 analog differentiator block, 4 double sampling (DS) circuit block, 5 horizontal shift-register (column), 6 single pixel test structure with complete signal path to the output, and 7 single analog differentiator test structure.

273


Figure C.3 Microscopic photograph of a part of the actual (fabricated) ALO CMOS image sensor.

274

Figure C.4 The layout of the single-pixel test structure with a complete signal path to the output. This test structure is incorporated in the ALO-chip as shown in Fig. C.2 above. 1 3M-APS pixel, 2 transmission-gate switches, 3 analog differentiator block, 4 VDD, 5 ground (substrate), 6 double sampling (DS) circuit, and 7 current-source load.

275

Figure D.1 Experimental setup of FPN & PRNU measurements. The National Instruments LabVIEW software was adopted for image capture. Gage CompuGen is pattern generator software. Each software package has a hardware card associated with it. The test fixture consists of the ALO-chip and PCB.

278

Figure D.2 Experimental setup for photoresponse measurements. Gage CompuGen is pattern generator software. WaveStar is an automatic measurement and logging software. Each software has a hardware card associated with it. The test fixture consists of the ALO-chip (pixel test-structure with complete signal path to the output) and PCB.

281

Figure D.3 Experimental setup for optical modulation frequency response measurements. Gage CompuGen is pattern generator software. WaveStar is an automatic measurement and logging software. Each software has a hardware card associated with it. The test fixture consists of the ALO-chip (pixel test-structure with complete signal path to the output) and PCB.

282

Figure D.4 Control clocks applied simultaneously to the “Reset” and “Log” nodes of the 3M-APS shown in Fig. 6.10 to provide the flip-flopping between the two logarithmic modes (Log-APS and 2Log-APS) of pixel operation.

283

Figure E.1 The experimental setup for single pixel measurements of modulation frequency response, speed distortions, and frequency distortions for logarithmic active pixel sensors (APS).

285

Figure E.2 Definitions of key parameters used in experimental simulation of speed. The speed is simulated by keeping the frequency of modulated laser beam constant (fW) while changing the amplitude (∆L), which corresponds to VLp of the laser beam.

286


Figure E.3 Measured Modulation Frequency Response of the COHERENT VLM-2 red laser diode (Using Newport Power meter).

287

Figure E.4 Measured modulation frequency response for the Logarithmic APS

288

Figure E.5 Speed distortion: laser input channel @ 2 V/div; pixel output channel @ 20 mV/div; time base @ 20 µs/div; input frequency is constant @ 5kHz.

289

Figure E.6 Combined speed distortion under different simulated speeds. Laser input channel @ 2 V/div; pixel output channel @ 20 mV/div; time base @ 20 µs/div; input frequency is constant @ 5kHz.

290

Figure E.7 Output speed distortion under different simulated speeds. Laser input channel @ 2 V/div (bottom); pixel output channel @ 50 mV/div (top); time base @ 20 µs/div; input frequency is constant @ 20kHz.

291

Figure E.8 Combined speed distortion of the output under different simulated speeds. Laser input channel @ 2 V/div; pixel output channel @ 20 mV/div; time base @ 20 µs/div; input frequency is constant @ 20kHz.

292

Figure E.9 Output frequency distortion under different frequencies. Laser input channel @ 2 V/div (top); pixel output channel @ 50 mV/div (bottom). Input amplitude (VLp) is kept constant @ 6V.

293

Figure E.10 Experimental setup for low current measurements. LO (low) is connected to the shield (~ 1 cm copper box).

294

Figure E.11 Measured modulation frequency response of CMOS compatible photodiodes.

295

Figure E.12 Measured dark current of CMOS compatible photodiodes. The slope of these linear relationships corresponds to the shunt conductance (1/Rsh); Rsh appears to be constant at 5 × 10¹³ Ω.

296

Figure E.13 Experimental setup for single pixel motion measurements using CMOS image sensor chip (CYC) fabricated using 0.5µm CMOS technology (refer to Appendix A).

297

Figure E.14 Motion/change detection circuit using temporal double sampling (TDS) algorithm presented in Chapter 5.

298


Figure E.15 Double sampling differentiator. (Top-left) System clock, SH1, SH2. (Top-right) VR and VS not sampled; the difference equals zero. (Bottom-left) Sampled VR and VS; the difference is not zero. (Bottom-right) “Zoomed” sampled VR and VS and the non-zero difference.

299

Figure E.16 Single pixel (APS) change detection output. (Top-left) Sampled VR and VS signals for a square-wave (Laser) input along with the detected edge signal using the sampled differentiator. (Top-right) Saw (Laser) input. (Bottom-left) Sine-wave (Laser) input. (Bottom-right) Square-wave (Laser) input.

300

Figure E.17 Scenarios and definitions of parameters that could be used to decode motion/change detection.

301

Figure E.18 Motion/change detection for an input (Laser) ramp. Right graphs are for OFF-WB (down) ramp outputs. Left graphs are for ON-BW (up) ramp outputs. Top graphs are for the ramp input and one of the sampled signals. Bottom graphs are for both sampled signals and the motion/change (TDS) detection signals.

302

Figure E.19 Sample and hold circuit operation with different sampling frequencies. At very low sampling frequencies (bottom-right graph) or slow ramps, the held signal leaks away due to leakage current in the switching transistors in the OFF state; therefore, the signal may decay when it is sampled again.

303

Figure F.1 Gray level response of the imager in logarithmic-mode (Log-APS). Top plot is for <columns> before correction, with some bad columns excluded, whereas the bottom plot is for <columns> after eliminating the effect of S&H. The measurement was held with Vbias-0 @ ~0.48 V, Vbias-B @-0.58 V, and frame rate @ ~23.32-f/s. Light intensity is 0µW/cm2 (dark).

305

Figure F.2 Gray level response of the imager before and after correction for: (Top) the Linear-APS mode, (Middle) the logarithmic (Log-APS) mode, and (Bottom) the logarithmic (2Log-APS) mode of pixel operation.

307


List of Tables

Table 2.1 Diffusion lengths in p-type and n-type Silicon [38].

30

Table 3.1 CMOS 14TB N+-well/P-substrate HP-SPICE Model (Level-3) parameters

84

Table 6.1 Possible FPN sources, their sensitivities, and their effects on FPN [19]

194

Table 6.2 Possible FPN sources for Log-APS, their sensitivities, and their effects on FPN.

199

Table 6.3 Possible FPN Sources for 2Log-APS, their sensitivities, and their effects on FPN.

203


Related Publications

2001 M. Tabet and R. I. Hornsey (2001), “Dual-Mode Active Pixel Sensor with Focal Plane Edge Detection”, 2001 IEEE Workshop on CCDs and Advanced Image Sensors, Crystal Bay, Nevada, June 2001.

2001 M. Tabet et al, “CMOS Image Sensor Camera with Focal Plane Motion and Edge Detection,” IEEE CCECE conference, Toronto, Ontario, Canada, May 13-16 2001.

2000 M. Tabet et al., "Modeling and Characterization of Logarithmic CMOS Active Pixel Sensors," Journal of Vacuum Science and Technology A, 18(3) May/June, 2000

1999 M. Tabet et al., "Characterization and Modeling of Logarithmic Active Pixel Sensors for use in CMOS Cameras with Focal Plane Motion Detection," presented at the 2nd Waterloo Workshop on Materials Technology (MT'99), November 10, 1999, Waterloo, Ontario, Canada.

1999 M. Tabet et al., "Modeling and Characterization of Logarithmic CMOS Active Pixel Sensors," presented at the Ninth Canadian Semiconductor Technology Conference, 8-13 August 1999, Ottawa, Canada.


Chapter One

Introduction

1.1 Introduction

On this earth, we are surrounded by a world full of highly efficient creatures. One of the

main features common to all of them and us is the ability to sense changes in these

surroundings and react accordingly. Adapting to the environment is a necessity for any

species to survive. This requires continuous information acquisition and processing.

Humans rely on their five senses, namely sight, sound, smell, touch, and taste, for gathering data. Among these, the eye is probably the most vital since it provides most of the information [12].

Replicating the functions performed by humans or animals with machines has been a constant driving force for the development of technology. The modern digital computer can be thought of as inspired by the human brain [13]. The invention of the camera was a step towards making an artificial eye.

The camera has come a long way since its invention in the nineteenth century. In the early days, photographic plates were used; then film became popular and is still in use. The future seems to lie in state-of-the-art electronic imaging systems [14]. The first

electronic cameras came into being more than two decades ago. Since then significant

progress has been made in this field.


The main part of these artificial vision systems is the image acquisition array.

Most of the commercial systems make use of CCDs (charge-coupled devices) for converting light into electrical signals. Recently there has been an interest in CMOS

(Complementary Metal Oxide Semiconductor) imagers [1-24, 36, 41, 43-79, 87, 88, 92,

94-117, 119-128]. This is due to the superior advantages offered by CMOS technology

such as low power, random accessibility, system integration on a single die, and low production cost, which are essential in many applications. Accordingly, CMOS imagers have gained the potential to be used in many applications, especially where integrated functionalities are

advantageous, such as in security, biometrics, and industrial applications [8-11].

This has been of great importance in the development of smart vision systems [2,

15, 16], where the captured ‘raw’ data are pre-processed before sending them to the

computer via the communication channel for further processing. This will result in data

reduction, which allows sending the data at lower data-rates, and reduces the effect of the

computational-load bottleneck. Moreover, most of the on-chip pre-processing is usually

performed in the analog form allowing local adaptability (such as light-intensity

adaptability) besides the reduction of the effect of ADC data transfer bottleneck.

1.2 Motivation

Inspired by the efficient architecture of the biological visual system, and motivated by the features offered by the analog VLSI CMOS image sensor paradigm (real-time performance, low power consumption, low voltage, and small size), an on-chip image processing functionality is proposed based on straightforward, yet robust double

sampling techniques. This vision system implements the image processing

functionalities, using processor-per-column algorithms that are particularly devised to

achieve a delicate balance between image processing robustness and captured image

quality, a balance usually ignored by many designers [102-106]. This vision system compromises between speed and silicon area (fill factor); thus, high resolution is achievable while keeping the total chip area reasonable.

Our method employs two sample-and-hold (S&H) circuits per column to perform parallel sampled differentiation of the captured image. This technique can be applied in the spatial domain as a spatial double sampling (SDS) algorithm to carry out


visual edge detection functionality, or in the temporal domain as temporal double

sampling (TDS) algorithm to perform contrast enhancement, edge detection, or

change/motion detection functionalities in the captured image scene, and/or in the “modal domain” as a modal double sampling (MDS) algorithm to perform fixed pattern noise

(FPN) removal in the captured images. The TDS contrast enhancement, TDS edge

detection, and MDS-FPN removal are novel algorithms that are particularly suited for the

logarithmic mode of pixel operation. To our knowledge, we are the first to report such

image processing functionalities using these straightforward double sampled techniques.
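For concreteness, the following minimal software sketch (Python/NumPy) mimics what each differencing operation computes on captured frames; the function names, array handling, and the choice of differencing axis are assumptions made for illustration, not the on-chip column-parallel S&H circuitry itself.

import numpy as np

def sds(frame):
    # Spatial double sampling: difference between adjacent pixels along one axis
    # (here, columns), approximating dE/dx for edge detection.
    return frame[:, 1:] - frame[:, :-1]

def tds(frame_now, frame_prev):
    # Temporal double sampling: difference between two samples of the same pixels
    # taken at different instants, approximating dE/dt for change/motion detection.
    return frame_now - frame_prev

def mds(frame_mode_a, frame_mode_b):
    # Modal double sampling: difference between readouts of the same pixels in two
    # operating modes (e.g. Log-APS and 2Log-APS); a mode-independent offset cancels.
    return frame_mode_a - frame_mode_b

In hardware, the same pair of per-column S&H circuits supports all three cases; only what is loaded into each S&H (a neighbouring pixel, an earlier sample, or a different operating mode of the same pixel) defines the domain of the differencing.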

Figure 1.1: A graphical illustration of “attention selection” schemes for on-chip data reduction: (a) fovea scan, (b) window scan. Peripheral mode: normal (coarse) scans of the entire array; select mode: region of attention identified by change/edge detection. CD: Column Decoder, RD: Row Decoder.

These “primitive” image processing functions have a very wide range of potential

applications and have been used as fundamental blocks in many biologically inspired


vision applications. Perhaps the most notable in this category is “attentive selection”, analogous to that found in biological systems, which utilizes special features in the image scene, such as edges or motion, to select a fovea (multi-resolution scanning) or a window (selective scanning) in the captured images, hence allowing on-chip data reduction that is essential to overcome the data-communication bottleneck and/or computational bottleneck. Figure 1.1 above shows a graphical illustration of a typical “attentive selection” scheme used for data reduction.

Another application of particular interest here is the use of the TDS signal (∝ ∆E/∆t) directly as a change detector, or along with the SDS signal (∝ ∆E/∆x) to detect motion. In fact, the temporal differentiation and/or the spatial differentiation have been the workhorse for most of the on-chip motion detection sensors [25, 34, 58, 59, 61-64, 66, 69, 71, 73, 74, 77, 97, 99, 100, 103, 107, 128]. Figure 1.2 shows that, with the exception of the “Reichardt” pure correlation algorithm, all intensity-based motion detection algorithms utilize TDS (It) and/or SDS (Ix).

Figure 1.2: Classification of intensity-based motion processing algorithms. Here Ix is the ∝ ∆E/∆x (SDS) signal, whereas It is the ∝ ∆E/∆t (TDS) signal (after [99]).
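As a worked illustration of how these two signals can be combined, a standard 1-D gradient-based estimate (one member of the intensity-based family in Fig. 1.2, not necessarily the scheme implemented in this work) is sketched below; the NumPy arrays and parameter names are assumed for exposition.

import numpy as np

def gradient_velocity(frame_prev, frame_now, dt=1.0, dx=1.0, eps=1e-9):
    # Brightness-constancy constraint: Ix*v + It = 0, so v ~ -It/Ix per pixel.
    I_t = (frame_now - frame_prev) / dt                      # TDS-like temporal difference
    I_x = np.gradient(frame_now.astype(float), dx, axis=1)   # SDS-like spatial difference
    return -I_t / np.where(np.abs(I_x) > eps, I_x, np.inf)   # suppress low-gradient pixels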

Most of the reported work [25, 34, 58, 59, 61-64, 66, 69, 71, 73, 74, 77, 97, 99,

100, 103, 107, 128], however, emphasized the system-level characterization without a


solid understanding of their fundamental building blocks (temporal and spatial

differentiations). To our knowledge, we are the first to report such rigorous analyses of

these fundamental image processing functionalities using our prototype chip.

Motion/change detection sensors have a potential for many applications. Some of

these applications are listed below [25, 34, 58, 59, 61-64, 66, 69, 71, 73, 74, 77, 97, 99,

100,103, 107, 128]:

1. Automotive including airbag control.

2. Industrial applications including, object and target tracking.

3. Avoidance and navigation.

4. Low-cost surveillance including office surveillance and monitoring technical

equipment.

5. Video compression and coding.

6. Navigation for moving robots or vehicles.

7. Change detection in multi-temporal remote sensing. Monitoring of the pressure

on the environment, monitoring of agricultural areas, global change analysis, and

the assessment of damages due to forest fires, deforestation, floods, earthquakes,

and volcanic activities.

Other than as a front end to motion detection applications, edge detection is one of the

earliest and most fundamental low level computer vision problems. It has been the

subject of intense study for decades, because of the critical role of edge information in

numerous image processing and understanding applications such as:

1. Taking tolerance measurements or "gauging" a component.

2. Inspection for missing components.

3. Alignment – determining the orientation and position of a part.

4. Identification – identifying a bar code.

5. Segmentation.

Contrast enhancement is another very important image processing functionality, and so it is advantageous to employ it in vision systems. However, one application very relevant to this work is the use of TDS contrast enhancement as a front end to the SDS edge detection function, especially for the logarithmic mode of operation at low intensities.


Finally, a new modal double sampling (MDS) algorithm was applied on-chip to

remove offset FPN in the logarithmic mode, which inherently suffers from high FPN levels. This is a great advantage for CMOS image sensors in this mode because it offers

high dynamic range, true random accessibility, and continuous operation. This makes it

very attractive for “attention selection” schemes based on motion detection.

The core of this algorithm is a novel “super” pixel that can operate in 3 modes

(linear and 2 logarithmic) with different light intensity responses, and can therefore be

regarded as a locally (intensity) adaptive pixel if provided with the proper control signals.
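A toy numerical model of the offset cancellation behind MDS is given below; the Gaussian offset spread and the two mode levels are invented numbers, and the idealization that each pixel's offset is common to both logarithmic modes is an assumption, so only the principle (not a measured result) is illustrated.

import numpy as np

rng = np.random.default_rng(0)
n_pixels = 4096                             # e.g. a 64x64 array, flattened
offset = rng.normal(0.0, 0.02, n_pixels)    # per-pixel fixed offset (V), assumed mode-independent

v_log  = 0.30 + offset                      # Log-APS readout at one fixed illumination (toy value)
v_2log = 0.45 + offset                      # 2Log-APS readout at the same illumination (toy value)

v_mds = v_2log - v_log                      # modal double sampling difference
print(np.std(v_log), np.std(v_mds))         # ~20 mV offset spread before, ~0 after differencing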

1.3 Research Objectives

This thesis focuses on implementing three novel double sampling techniques for on-chip

image processing in different domains: spatial double sampling (SDS) for visual edge detection; temporal double sampling (TDS) for “direct” motion/change detection in linear mode, and for contrast enhancement and edge detection in logarithmic mode; and modal double sampling (MDS), an innovative FPN reduction technique for logarithmic image sensors.

The main contributions and objectives of this thesis can be summarized as

follows: (i) reviewing CMOS image sensors; (ii) characterizing and modeling CMOS-compatible image sensors, especially in the logarithmic mode of pixel operation, using circuit models that can be extended (using Cadence) to simulation of the entire CMOS image sensor; (iii) demonstrating and determining, based on thorough theoretical and experimental investigation, the feasibility of the three novel double sampling techniques (SDS, TDS, MDS) using image sensors fabricated in standard CMOS 0.5µm and 0.35µm technologies; (iv) demonstrating and investigating the feasibility and the switching mechanism of a new APS design with 3 modes of operation; (v) building a framework for implementing these kinds of on-chip image processing; and lastly (vi) suggesting future research directions for this work.

1.4 Organization of the thesis

The material in the following chapters is organized as follows:


In Chapter 2, we first present a brief description of smart vision systems, their

advantages and limitations compared with conventional systems. Then, we introduce

different technologies available for image sensors, with special emphasis on CMOS and

CCD operations, their advantages, and limitations. Issues in designing CMOS image

sensors are also listed. We then present the theory of photodetection in semiconductor

devices, especially in CMOS compatible photodetectors, followed by a discussion on

various CMOS-compatible photosensor operations, and comparing them in terms of their applicability in CMOS image sensors. Various photocircuits are then presented with some

emphasis on the integrating and logarithmic active pixel sensors (APS). This is followed

by an introduction to motion detection algorithms, theory, and practical examples.

Finally, a brief introduction to edge detection techniques will be presented.

Chapter 3 is devoted to the characterization, modeling, and simulation of CMOS

compatible photodiodes, where we will present detailed analyses of CMOS active pixel

sensors (APS). These analyses were performed using HSPICE electrical circuit simulator

that utilizes a wide range of CMOS device models, with numerous levels of

computational complexity, efficiency, and accuracy including deep-sub-micron effects

and a wide range of device operations. We devised different equivalent circuit and

behavioral models for the CMOS photodiode. These models were compared with

measurements to find which is best suited for our investigation. This is followed by

analysis of logarithmic active pixel sensors (Log-APS). The investigations are supported

(where appropriate) by the experimental results obtained from chips fabricated using

standard 0.5 µm, and 0.35 µm CMOS technologies.

Chapter 4 describes the architecture and the operation of the proposed spatial

double sampling (SDS) technique to be used for edge detection functionality. A detailed

experimental investigation of spatial-edge detection in the linear and logarithmic modes of operation under different conditions of light intensity, contrast, and frame rate is presented and discussed. Finally, the influences of motion on spatial-edge

signals are discussed.

Chapter 5 introduces the architecture and the operation of the proposed temporal

double sampling (TDS) technique to be used for motion/change detection functionality in

the linear mode, and for contrast enhancement and edge detection in the logarithmic


mode. A detailed experimental investigation of the TDS detection output in the linear and logarithmic modes of operation, under different conditions of light intensity, contrast, and frame rate, is presented and discussed. Finally, we present the influence

of motion on TDS output.

In Chapter 6, we describe a novel and straightforward technique to reduce the

FPN, especially in the logarithmic mode of pixel operations with slight constraints

imposed on silicon area. After discussing the sources of FPN and their contributions in

the 3-modes of operation of the novel “super” 3M-APS pixel, we will introduce this

technique in detail. This involves the presentation of measurements based on the actual

imager designed with this pixel configuration. These measurements include the effect of

various parameters such as light intensity, frame rate, and current-source-load bias in all

modes of operation; the results are then analyzed and discussed. Later, the pixel circuit

switching performance was analyzed in terms of settling-time and modulated optical

frequency response in order to understand the temporal behavior of this pixel, and to

determine the range of validity of this method.

Chapter 7 summarizes all research work and discusses the conclusions drawn

based on this work. Recommendations on how to improve the performance of the

proposed double sampling techniques are made. Finally, our vision for future work will

be presented.

Last but not least, we will present the specifications of the prototype chips (CYC & ALO), fabricated using 0.5µm and 0.35µm CMOS technologies respectively, along with the measurement setups used in the various experimental investigations. Finally, we

will introduce our method for data correction (artifact elimination) in FPN calculation:

Appendix A CYC-chip specifications, functional diagrams and photos.

Appendix B Measurement setups for Chapter 3, 4, 5.

Appendix C ALO-chip specifications, functional diagrams and photos.

Appendix D Measurement setups for Chapter 6.

Appendix E Miscellaneous measurements setup (s) and results.

Appendix F Data Correction for FPN calculation.


Chapter Two

CMOS Image Sensors

2.1 Introduction

Smart vision systems (usually based on CMOS¹ image sensors) will be an inevitable

component of future intelligent systems. Conventional vision systems, based on an

imager (usually a CCD² camera) and a separate digital processor, do not have the potential for future general-purpose consumer applications, because of their cost, size, and

complexity. Smart vision chips, which include both sensor array and parallel processing

(analog or digital) elements, have been under extensive research for more than a decade

and have demonstrated promising capabilities [15]. Figure 2.1 below shows the main

differences between the two vision paradigms.

In conventional vision systems, because the data transferred to the post-processing computer are “raw” data, there will be a data-transfer bottleneck because of the analog-to-digital conversion, and a computational-load bottleneck, especially for high-resolution, high-frame-rate applications. Moreover, the conventional system usually lacks local

adaptability because of the absence of feedback between the camera system and the

computer due to channel bandwidth and speed of response constraints.

_________________ ¹CMOS: Complementary Metal Oxide Semiconductor. ²CCD: Charge Coupled Device.


In smart vision systems, the transferred data is pre-processed on-chip before

sending it to the computer via the communication channel. This will result in data

reduction, which allows sending the data (information) at lower data-rates, and reduces

the effect of the computational-load bottleneck. Moreover, most of the on-chip pre-

processing is usually performed in the analog form allowing local adaptability (such as

light-intensity adaptability) besides the reduction of the effect of ADC data-transfer

bottleneck.

Figure 2.1: The difference between intelligent vision systems. (a) Conventional vision system, where data is post-processed off-chip by a separate digital processor (computer). (b) Smart vision system, where data is pre-processed on-chip before sending it for any post-processing.



The ultimate development of smart vision systems is implementing the smart pixel

paradigm, where image-processing occurs in-pixel. In these systems, both sensors and

processing elements are integrated at the pixel level. Smart-pixel-sensors are, therefore,

information sensors, not transducers and signal processing elements [15]. Vision systems

presented in this thesis are all smart vision systems, where most of the pre-processing

operations are performed per column in analog form. It should be noted that smart

vision sensors are typically not general-purpose devices—everything in a smart vision

system is specifically designed for the targeted application (i.e. “application specific

integrated circuits” or simply ASIC).

Conventional imaging systems could only output an analog signal, which required

further signal conditioning. Still, in most imagers, the main focus is on the quality of the

imaging in terms of noise, resolution, uniformity, sensitivity and so on. It is assumed that

further signal and image-processing stages can acquire the imager output and process it.

In contrast, in vision chips the main focus is on the processing functionality. The

implementation of a certain algorithm using existing components is given the priority and

often some image qualities, such as resolution, may be sacrificed. In this thesis, we

followed a middle-path by considering both image quality and processing robustness in

our prototype designs of image sensors with on–chip image-processing functionality.

2.1.1 Advantages of smart vision sensors

When compared to a conventional vision system consisting of a camera and a digital-

processor, a smart vision chip provides many system level advantages including:

Speed: The processing speed achievable using vision chips can exceed that of the camera-

processor combination. The main reason is the existence of two bottlenecks in the

conventional systems: the data-transfer bottleneck (because of ADC) between the imager

and the processor, and computational bottleneck caused by the computer’s inability to

handle large amounts of data. On the other hand, the information between various levels of

processing in smart vision systems is processed and transferred in parallel [15].

System Integration: CMOS photodetector arrays used in smart systems are compatible

with integrated on-chip electronics, allowing a system-on-chip, where most of the


modules necessary for smart vision operations are integrated on the same die. From a

system design perspective this is a great advantage over the camera-processor option [15].

Large Dynamic Range: Many smart vision chips use adaptive photocircuits, which have a

dynamic range of at least 7 decades (15 decades for logarithmic imagers) of light

intensity. Conventional cameras are, at best, able to perform global automatic gain

control [15, 17].

Size: Using single chip implementation of vision processing algorithms, very compact

systems can be realized.

Power Dissipation: Vision chips are usually based on low power CMOS analog circuits.

Radiation hardness: CMOS imagers have lower susceptibility to radiation damage

compared with CCD.

2.1.2 Disadvantages of Smart Vision Sensors

Although designing single-chip vision systems is an attractive idea, it faces several

limitations such as:

Reliability of processing: Smart vision chips are designed based on the concept that

analog VLSI systems with low precision are sufficient for implementing many low-level

image-processing algorithms. The precision in analog VLSI vision systems is affected by

many factors, such as noise and mismatch which are not usually controllable. As a result,

if the algorithm does not account for these inaccuracies, the processing reliability may be

severely affected. Moreover, these CMOS smart chips use parasitic photosensors, and

unconventional analog circuits, which may not be well characterized and understood [15].

Resolution: In smart vision chips, each pixel includes a circuit that occupies a proportion

of the pixel area that depends on circuit complexity; therefore, these chips may have low

fill-factors. Furthermore, the silicon area for on-chip image processing blocks is used at

the expense of the total area reserved for the imager array. Thus lower resolution imagers

are expected because of the limited imager array size that can be implemented on the

limited silicon area.


Difficulty of the design: Smart vision chips implement a specific algorithm in a limited

silicon area with a specified shape. Each application requires a custom layout, which is a time-consuming and error-prone process.

2.2 Technologies available

Different technologies offer advantages and disadvantages for the design of smart vision

chips. The dominant technologies available to date are CMOS, BiCMOS, CCD, and

GaAs (HEMT¹ and MESFET²). CMOS has been widely used in many designs. The

additional bipolar transistor in BiCMOS processes, though advantageous in achieving

better matching properties and higher speeds, is not easily justifiable when considering other factors such as limited availability due to complexity and cost. BiCMOS processes also use a large silicon area, which makes them unattractive for large vision chips [15].

While commercial grade CMOS processes are accessible through fabrication

foundries such as TSMC and CMP, the CCD processes available for prototyping are of low quality. GaAs processes have been used only to a very limited extent because they suffer from several problems: the technology is immature, its processes are generally expensive and not easily accessible, and its opto-electronic devices are available only in very specialized processes [15].

In this thesis, we will limit our discussion to silicon imagers that are sensitive to

light in the visible spectrum. The milestones of silicon imager history are shown in Fig.

2.2 below. In the interval from 1965-1970, IBM, Fairchild, Westinghouse, and others

developed bipolar and MOS photodiode arrays. In 1970, the CCD was invented at Bell

Labs, and it has dominated the image sensor market ever since, because CCDs offer low noise, low non-uniformity, and low dark current, with high sensitivity and high quantum efficiency. From 1985 to 1991, several MOS sensors were reported. The most

prominent was the CMOS Passive pixel sensor (PPS) developed by VLSI Vision Ltd [9].

Finally, from 1992 to the present, the active pixel sensor (APS) has been, and still is, under continuous development [18, 19].

_________________ 1HEMT: High Electron Mobility Transistor. 2MESFET: Metal-Semiconductor Field Effect Transistor.


Figure 2.2: The milestones of silicon imager history (after [18, 19]).

Area silicon image sensors consist of an array of pixels, with sizes that include the

standard formats such as QCIF (176x144 pixels), CIF (352x288 pixels), VGA (640x480

pixels), SVGA (800x600 pixels), XGA (1024x768 pixels), and non-standard formats such

as “mega-pixel” sensors (>1000x1000 pixels) like those used in astronomy and digital still

photography. Silicon image sensors come in two broad classes: charge-coupled devices

(CCDs) and CMOS imagers. We will start with a brief presentation of CCD image sensors first, and then we will discuss the CMOS image sensors of interest here.

2.2.1 CCD Image Sensors

Charge-coupled devices (CCDs) are currently the dominant technology for image sensors.

Because they use optimized photodetectors, they offer low noise, low non-uniformity, and low dark current, together with high sensitivity and high quantum efficiency. Moreover, high fill-factor imagers with small pixel sizes and large-format arrays have been achieved

[19, 20]. The CCD is a dynamic analog (charge) shift register using closely spaced MOS

capacitors clocked with 2-, 3-, or 4-phase clocks. These capacitors operate in the deep depletion regime when their clocks are high. Charge transfer (from one capacitor to the

next) must occur at high enough rates to avoid corruption by leakage, but slow enough to

ensure high charge transfer efficiency (CTE).
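A back-of-the-envelope sketch of why near-perfect CTE matters follows; the CTE values and the transfer count are illustrative assumptions, not figures from this work.

def surviving_fraction(cte, n_transfers):
    # Fraction of a charge packet remaining after n_transfers transfers,
    # each performed with charge transfer efficiency cte.
    return cte ** n_transfers

# A pixel far from the output of a large array may require thousands of transfers.
for cte in (0.9999, 0.99999):
    print(cte, surviving_fraction(cte, 2000))   # roughly 0.82 and 0.98 of the charge survives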



Figure 2.3: Simplified interline-transfer CCD imager functional diagram (after [19]).

[Figure 2.3 block labels: photodetectors, vertical CCDs (analog shift registers), horizontal CCDs, output amplifier.]

Figure 2.4: The block diagram of a basic CCD camera system (right) and a photo of an actual CCD camera system (left) [19, 21]. The block diagram comprises the CCD, timing driver, ADC, ASIC, and microcomputer, showing the numerous support chips required for the basic operation of CCD cameras. Also shown are the various power supplies (with relatively higher voltages) used by the CCD image sensor.


More detailed information on CCD operation can be found in reference [19]. A typical “interline transfer” CCD imager array is shown in Fig. 2.3 above. The basic CCD camera system and a photo of an actual CCD image sensor are both shown in Fig. 2.4 above.

The limitation with CCD technology is fundamental in its operation—the need for

near-perfect CTE [22]. This requirement has a great impact on CCDs and is the cause of

the major reported limitations of CCD technology [20, 22].

1. Susceptible to radiation damage.

2. Need complex high speed shifting clocks for its operation.

3. High power operation due to the high speed switching required.

4. Require multiple relatively “higher” voltage power-supplies (up to 15V).

5. Difficulty in achieving large array size.

6. Difficulty in integrating other camera functions on the same chip due to the

specialized non-standard process that is optimized for photodetection, but

incompatible with the on-chip CMOS VLSI integration.

7. Limited frame-rate of CCDs (serial readout and CTE constraints).

Even with these numerous limitations, CCDs are well-established in the marketplace and continue to advance at a rapid rate. It is likely that the CMOS imager revolution will specifically impact those applications where a high level of integration and random accessibility are essential, or where high frame-rate operation is important (such as motion analysis applications), or where higher radiation hardness is required (such as nuclear and space applications) [22].

2.2.2 CMOS Image Sensors

Like CCDs, these imagers are also made from silicon, but in contrast to CCD sensors, which rely on a specialized fabrication process that requires dedicated and costly manufacturing facilities, CMOS image sensors can be fabricated using standard CMOS processes, with general-purpose facilities that produce 90% of all semiconductor chips to date [21, 10]. However, most commercial CMOS sensor manufacturers currently use specialized processes.


Unlike CCDs, each pixel in a CMOS imager has its own individual amplifier

integrated inside. Since each pixel has its own amplifier, the pixel is referred to as an

"active pixel sensor" (APS) as shown in Fig. 2.5 (b) below. Note that the first CMOS

image sensors were implemented using "passive pixel sensors" (PPS) which do not

contain amplifiers as shown in Fig. 2.5 (a) below.

Figure 2.5: CMOS image sensor functional diagrams. (a) Passive Pixel Sensor (PPS) technology; (b) Active Pixel Sensor (APS) technology; (c, d) the functional and block diagrams of a basic CMOS camera system; (e) photo of an actual CMOS camera system. The figure shows how sub-micron CMOS enables the camera-on-chip paradigm [19, 21].


Both sensor types are arranged in arrays and supported by the proper read-out

circuitry as depicted in Fig. 2.5 (c). The development of a CMOS compatible image

[Figure 2.5(c) block labels: photosensitive area, processing area, vertical scanner, horizontal scanner.]


sensor technology is very beneficial since CMOS technology offers many advantages [15,

7, 20] that enable the camera-on-chip paradigm as shown in Fig. 2.5 (c) and (d) above.

The primary difference between CCD and CMOS image sensors is the readout

architecture. Each pixel in a CMOS imager can be read directly on an x-y coordinate

system, rather than through the charge shift out process of the CCD. For CMOS image

sensors, the charge (in PPS) or the voltage (in APS) is read out using row and column

decoders similar to those found in DRAM memories. This means that while a CCD pixel

transfers a charge, a CMOS pixel converts the photo-generated charge to a voltage and

transfers the information directly to the output. This fundamental readout difference,

along with the manufacturing process, gives CMOS imagers several advantages over CCD

imagers. These are listed as follows:

1) Mature Technology: CMOS processes are well established and continue to become

more mature. The demand for digital applications has led to continuous improvement and

down-scaling of CMOS processes.

2) Availability: CMOS processes are now readily available for prototype designs through

fabrication foundries, such as TSMC.

3) Low cost standard process: CMOS is the cheapest process available, when compared

with other technologies with the same minimum size feature.

4) Design Resources: Circuit and system design in CMOS is supported by a vast number

of resources and CAD software such as Cadence.

5) High Integration Level: CMOS processes enable very large scale integration (VLSI),

with architectures that incorporate all necessary camera functions onto one chip. Such system-on-chip integration obviates the need for peripheral-support chip packaging and assembly, which results in more reliable, compact cameras and reduced cost [10]. Moreover, on-chip integrated circuitry enables “smart” camera functions to be integrated

on-chip. Beyond the standard camera functions, many higher-level DSP functions can, in

principle, be realized. These include anti-jitter (image stabilization) for camcorders,

image compression, color encoding, multi-resolution imaging, motion tracking, and video

conferencing [10].


6) Random Accessibility: Column- and row-random accessibility, similar to that of DRAM, allows windowing (window-of-interest readout), thus providing much added value in applications that need image compression or target tracking [10]; a minimal readout sketch is given after this list.

7) High readout speed: CMOS APS designs are inherently fast (parallel readout and

processing), which is an advantage in machine-vision and/or motion-analysis. Frame-

rates up to 500 f/s with mega-pixel resolution can be achieved [10].

8) Low voltage operation and low power consumption: Active-pixel image sensors

consume up to 100 times less power than CCD imagers [10]. This is a great advantage in

battery-powered portable applications, such as laptop computers, video cell-phones, and

hand-held scanners.
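The following is a minimal sketch of window-of-interest readout; the array size and function name are hypothetical, and plain array slicing stands in for the row and column decoders that address the window directly in hardware.

import numpy as np

def read_window(pixel_array, row0, col0, height, width):
    # Only the addressed region of attention is read out and digitized,
    # instead of the full frame, reducing the data to be transferred.
    return pixel_array[row0:row0 + height, col0:col0 + width]

frame = np.zeros((64, 64))              # e.g. a 64x64 APS array, as in the prototype chips
roi = read_window(frame, 16, 24, 8, 8)  # 8x8 window selected by an edge/motion cue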

The major disadvantages of CMOS technology for the implementation of vision chips can be summarized as:

1) Analog Circuit Design: Leading-edge processes are driven by digital applications and

less optimized for analog circuit design [15].

2) Photodetectors: The photodetector structures are not characterized in any of the

standard CMOS processes [15]. Few design models are available.

3) High Bus Capacitance: The CMOS imagers suffer from high column-bus capacitance

since the sense gates of all pixels on a given column are joined in parallel. This affects

the readout speed and readout noise [23].

4) Mismatch: Fixed pattern noise (FPN) due to the relatively high mismatch in CMOS

devices. This hinders the reliability of analog processing in vision chips. However, its

impact can be reduced using additional circuitry.

5) Non-uniformity: due to multiple levels of amplification (pixel and column).

2.3 Digital versus Analog Processing

Both digital and analog circuits can be implemented in VLSI technology, and so, apart from the photodetection stage, it is the designer's decision which to implement; it is therefore informative to compare them with regard to smart vision. When compared with digital

techniques, analog circuits offer the following advantages:


High Speed & Short Reaction Time: In digital signal processing, real-world data

(massive analog data) are A/D converted upon acquisition, then sequentially bit-by-bit

computed based on rigorous Boolean algebra. Then, if feedback is needed, these digital data have to be D/A converted. This will impose a limitation on the total circuit speed [24].

Motion analysis, for instance, requires extraordinary computational powers of DSPs and

MPUs [25]. On the other hand, analog chips are capable of handling computationally

intensive problems in image processing like real-time motion analysis [25].

Circuit Area: Analog VLSI implementation is one order of magnitude smaller than

digital VLSI implementation [24, 11]. This gives the designer more flexibility in

choosing the level of sophistication of the circuit and/or the resolution of the imager.

Power Consumption: The power consumption of CMOS analog circuits running in the subthreshold region is minimal. The analog circuitry in Mead's Retina requires less than 10 mW, most of which is used in the photo-conversion stage [11].

Analog circuits, which exploit the physics of electrical circuits to perform useful computational operations [11], suffer from some limitations, such as:

Low Precision: Analog electronics are more susceptible to noise, which results in low

precision. The source of this noise can be either on-chip switching electronics (digital

noise), which requires special considerations in the design [24], or the fixed pattern noise

due to mismatch in CMOS devices.

Low Storage Time: Analog electronics does not provide efficient long-term storage.

Typical longest storage times are about one second [24].

Lack of Flexibility: Analog circuits are hardwired to perform very specific tasks, unlike

digital computers, which can be programmed to approximate any logical or numerical

operation [11]. For this reason, digital computers are preferable in developing and

evaluating new algorithms; analog implementations should only be attempted in

conjunction with such digital simulations.

Long design and testing processes: Image sensors use parasitic photosensors, and

unconventional analog circuits, which may not be well characterized and understood in a

digital process [15]. In addition, these designs require custom mask layouts rather than

the standard layout used in digital designs. Moreover, the testing process for image


sensors is lengthy, because imager outputs are preferred in image format (which is usually evaluated by the rigorous human eye, very sensitive to the slightest imperfection in the captured images).

Mismatch: To compensate for mismatch, and other factors that lower the precision,

additional circuits must be included.

In conclusion, analog circuits are more suitable for the smart vision chips of relevance here. Despite the disadvantage of being the source of digital noise, digital circuits are not completely excluded here, because they form the core of the scanning, timing, and control circuits that are essential to all vision chips.

2.4 Issues in Designing CMOS Image Sensor Chips

2.4.1 Mismatch

Mismatch has been the worst limiting factor in designing analog VLSI systems, including

vision chips. Mismatch can be regarded as a spatial noise spread over the surface of the

image sensor. The main drawbacks of mismatch are dynamic range and precision

reduction due to increased spatial noise levels. Increasing device sizes can reduce mismatch. In an image sensor, however, this leads not only to an increase in power dissipation but also to a reduction in pixel fill-factor (sensitivity) and/or resolution of the captured images. Therefore, these parameter trade-offs have to be considered when designing image sensor chips. There are three main sources of mismatch in CMOS

circuits [26-29].

1) Physical variation of device dimensions from one device to another due to the lithographic and fabrication processes.

2) Variation of doping densities from one device to another during the fabrication process.

3) Variation of threshold voltage from one device to another due to surface effects in MOS devices, such as trapped charges in the gate oxide or surface states.

The first source can be overcome by increasing device size (but at the expense of fill-

factor and/or resolution as mentioned above). The second source is process-dependent,

thus cannot be reduced by increasing the size. The third source (which is due to

surface effects) explains why BJTs and junction diodes generally have lower mismatches


than MOS devices. In general, mismatch in MOS devices depends on the following parameters.

A. Transistor size: Mismatch is inversely proportional to the area of the transistor.

B. Separation of transistors: In most image sensor cases we deal with global mismatches rather than local mismatches. Therefore, to reduce mismatch between transistors, they should be laid out in the imager array as close together as possible.

C. Current level: Mismatch depends directly on the amount of current passing through the transistors [29]. Thus the mode of transistor operation (weak inversion or strong inversion) will determine the level of mismatch in the imager array.

2.4.2 Digital Noise

Digital noise stems from the switching transients of digital circuits [15]. In CMOS image

sensor chips, digital signals are present at least to read out the imager array using a decoder- or shift-register-based digital scanner. The effect of digital noise on analog circuits is related to

the distance between the two circuits. For small distances, there is a linear relationship

between the distance and the amount of digital noise [30-33]. As the distance increases,

the noise remains almost constant. This has been associated with the noise coupling

through the bulk substrate. There can also be some direct capacitive or resistive coupling

between the switching signals and the nodes in the analog circuits. Most of the studies

performed on modeling and characterizing digital noise have been based on separate

analog and digital modules. However, in vision chips there is direct coupling at least between the digital scanning signals and the analog biasing or readout lines. CMOS image sensors operating in continuous (logarithmic) mode are more susceptible to digital

noise, as the circuits usually operate with MOSFET(s) in subthreshold regime [34].

2.5 Photodetection in CMOS-Compatible Devices

Photodetectors are photosensitive semiconductor devices that can detect optical signals

through an electronic process [35]. The important property of these elements is that the

electrical output should be proportional to the light intensity. They are called the

“doorway” of the imagers [15, 36, 37] since their characteristics directly affect the


performance of the entire imager system. Therefore, it is desirable to have as perfect a

photodetector as possible.

However, in the standard CMOS processes of interest here, there is only a limited

choice of photodetector devices. In all reported CMOS imagers, photodetectors were

realized by using parasitic elements found in the standard CMOS processes [15].

Fortunately, this inflexibility has put no severe limitation on the processing capabilities of

vision chips so far. The junction photodiodes, for example, have linear characteristics

over a large dynamic range of more than 7 decades, with reasonable sensitivity to visible

light spectrum [15]. The inflexibility of photodetector choice may be overcome by

employing innovative design ideas in other areas of the imager components offered by

very resourceful CMOS technology. The active pixel sensors (APS) and correlated

double sampling (CDS) circuit can be regarded as a good example of this methodology.

The alternative is using specialized processes to improve the performance (e.g. pinned

photodiodes, where dark current can be reduced significantly [23]).

Accordingly, this section, which describes the principles of photodetection in

CMOS-compatible photodetectors, is divided into three parts. The first part introduces the

basic concepts of photo-effect, such as optical absorption, optical generation, quantum

efficiency, and their relations to each other. The second part will overview the various

CMOS compatible photodetectors. Subsequently, some typical CMOS imager

photocircuits are described.

2.5.1 Photo-effect in silicon

The photo-effect in semiconductor materials describes the interaction of these materials with light. Ideally, when light with a photon energy, E_ph, greater than the band-gap is incident on the semiconductor, electron-hole (e-h) pairs are generated by absorbing "all" of the incident photon energy and exciting valence-band electrons to the conduction band, as shown below. In this case the absorption spectrum would have a sharp edge at the band-gap, as shown. Note that for photons, E_ph = hν = hc/λ, where h is Planck's constant, ν is the light frequency (Hz), c is the speed of light (m/s), and λ = c/ν is the wavelength (m). Therefore E_ph (eV) = 1.24/λ (µm).
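As a quick numerical check of the E_ph (eV) = 1.24/λ (µm) relation above, the short Python sketch below converts a few visible wavelengths to photon energies; the function name and the chosen wavelengths are illustrative assumptions, not part of the original text.

# Illustrative check of E_ph (eV) = 1.24 / lambda (um); helper name is arbitrary.
def photon_energy_eV(wavelength_um):
    return 1.24 / wavelength_um

for wl_um in (0.40, 0.55, 0.75):   # blue, green, red
    print(f"{wl_um*1000:.0f} nm -> {photon_energy_eV(wl_um):.2f} eV")
# All three exceed the 1.1 eV silicon band-gap, so visible photons can generate e-h pairs.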


Figure 2.6: Ideal photoelectric effect. (a) Electron-hole generation by excitation with light of energy Eph > Eg (1.1 eV). (b) Absorption spectrum, with a sharp edge at λg, where λg = hc/Eg.

Figure 2.7: Energy band structures for indirect semiconductor Si (silicon) showing direct transition and phonon-assisted indirect transition (after [38]).



For actual semiconductors, the situation is slightly more complicated, especially

for indirect band-gap semiconductors such as Si (silicon) of interest here (see Fig. 2.7).

Since in indirect band-gap materials the minimum energy gap between the valence band and the conduction band occurs at different crystal momenta, a change of the electron momentum is necessary for a transition with the minimum energy Eg. The additional momentum can only be delivered by a phonon (a quantized lattice vibration, treated as a particle [38]), which has a momentum comparable with the electron momentum (photon momentum is negligible). Thus, the electron can only make this minimum-energy transition (1.1 eV for Si) if a phonon is also involved. This reduces the transition probability,

and hence the optical absorption [38, 23]. This also explains why GaAs (direct band gap)

has higher “absorption coefficient” than Si in the visible light range (400nm-750nm).

2.5.1.1 Absorption Coefficient

When a semiconductor material is illuminated by a light source of intensity L_io, the reduction in intensity of light, dL_i, due to absorption is given by:

dL_i = -α · L_i(x) · dx    (2.1)

Figure 2.8: Measured absorption coefficient spectrum for silicon (after [15, 19]).


where the proportionality constant α is the absorption coefficient. The negative sign indicates decreasing intensity. The solution of this differential equation, with the boundary condition L_i(x) = L_io at x = 0, results in Lambert's law of absorption [23]:

L_i(x) = L_io · e^(-α·x)    (2.2)

The absorption coefficient α is photon-energy (wavelength) dependent and varies over

several orders of magnitude (because the transition probabilities depend on energy of the

photons), as illustrated in Fig. 2.8 above, which shows the absorption spectrum for silicon.

Note that the penetration depth is defined as 1/α; therefore red light (750 nm) can penetrate the semiconductor (Si) more deeply than blue light (400 nm), whereas green light (550 nm) penetrates to a depth between these two extremes of the visible band.
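The penetration-depth argument above can be illustrated with a short Python sketch based on Lambert's law (2.2); the absorption coefficients used are only rough, assumed order-of-magnitude values for silicon, not measured data from this work.

import math

# Fraction of the incident light absorbed within a depth x, from L_i(x) = L_io*exp(-alpha*x).
def absorbed_fraction(alpha_per_cm, depth_um):
    return 1.0 - math.exp(-alpha_per_cm * depth_um * 1e-4)   # 1 um = 1e-4 cm

alphas = {"blue (400 nm)": 1e5, "green (550 nm)": 1e4, "red (750 nm)": 1e3}  # assumed, cm^-1
for colour, a in alphas.items():
    print(f"{colour}: 1/alpha = {1e4/a:.2f} um, "
          f"absorbed in the first 2 um: {100*absorbed_fraction(a, 2.0):.0f}%")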

2.5.1.2 Optical Generation Rate and Quantum Efficiency

The electron-hole pair (carriers) optical generation rate, G(x), for an incident light with intensity, L_io, in a semiconductor with a constant absorption coefficient α (monochromatic light) is given by:

G(x) = η · α · L_i(x) = η · α · L_io · e^(-α·x)   e-h pairs/(cm3·s)    (2.3)

where η is the quantum efficiency, which describes the actual number of e-h pairs

generated by the absorbed photons. This means that the generation rate falls

exponentially from the surface of the semiconductor. In fact, not only is the absorption

coefficient a function of the wavelength, α(λ), but also the quantum efficiency η(λ), which

is sometimes referred to as the spectral response or responsivity. Therefore, to calculate

the generation rate in a device, we must integrate with respect to λ.

G(x) = ∫ from λ1 to λ2 of  η(λ) · α(λ) · L_io(λ) · e^(-α(λ)·x) dλ    (2.4)
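As an illustration of the monochromatic case (2.3), the sketch below evaluates the exponential fall of the generation rate with depth; the values of η, α, and the incident photon flux are assumptions chosen only to show the trend.

import math

eta, alpha = 1.0, 1e4     # assumed quantum efficiency and absorption coefficient (cm^-1)
L_io = 1e17               # assumed incident photon flux, photons/(cm^2 s)
for x_um in (0.0, 0.5, 1.0, 2.0):
    G = eta * alpha * L_io * math.exp(-alpha * x_um * 1e-4)   # e-h pairs/(cm^3 s)
    print(f"x = {x_um} um: G = {G:.2e} e-h pairs/(cm^3 s)")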

The absorption coefficient, α, could also be a function of x, because of its dependence on the doping in the semiconductor. However, this is only important at high doping concentrations. The quantum efficiency level of interest here is ≈ 1 (i.e., one e-h pair for every absorbed photon). Note that we may get more than one e-h pair per photon at high photon energies (outside the visible range, in the ultraviolet and beyond), because such photons have enough energy to excite more than one electron [23]. It should be noted that the


“overall efficiency” of the sensor depends not only on the generation of carriers, but also

on how "efficiently" they are collected. Also note that the quantum efficiency is usually

presented either in normalized format or as a percentage, whereas the spectral response is

usually (but not always) presented in absolute (A/W) units, and responsivity in (V/W)

units.

2.5.1.3 Recombination

The free carrier concentrations in any isolated piece of semiconductor exposed to light must always be in equilibrium. Hence, there must be another mechanism to compensate the photo-generation process. This mechanism is usually referred to as recombination, and it can be regarded as the opposite of the optical e-h pair generation, as shown in Fig. 2.9. The rate of recombination, R, is proportional to the product of the electron and hole concentrations: R ∝ np. Therefore, as the photo-generation increases, the recombination increases too. Hence, we can get no significant signal from photo-generation alone.

Figure 2.9: An illustration of the photo generation-recombination mechanisms (after [23]).

2.5.1.4 Drift-current

To get a measurable signal, the photo-generated e-h pairs need to be separated to reduce recombination and then forced to reach some collection contact. One way to achieve these two objectives is to utilize the built-in electric field in a p-n junction diode, which shapes the energy band diagram in such a way (as shown in Fig. 2.10 below) that the photo-generated e-h pairs are promptly separated upon generation; the electrons and holes are then collected by the cathode and the anode of the photodiode respectively.



Figure 2.10: The photo-generated e-h pairs can be separated before they recombine by the built-in electric field of the p-n junction diode, and collected at the diode's cathode and anode respectively. This e-h separation and/or collection can be further enhanced by reverse biasing the diode.

This process can be further improved by applying a reverse bias voltage to the photodiode, which increases the width of the collection region (depletion width) and increases the electric field (steeper energy band diagram), hence stronger separating forces are applied.

These electron and hole currents are the drift component of the total photocurrent.

2.5.1.5 Diffusion-current

So far we have dealt only with the drift component of the photocurrent, due to e-h pair collection in the depletion region. However, not all the incident photons will be absorbed in the depletion region; many will pass through into the substrate (for a CMOS vertical photodiode). The collection efficiency is very low in the absence (or weakness) of the electric field. However, electrons diffuse faster than holes, and therefore a reasonable number can diffuse without recombining to the depletion region, where they are collected.

Figure 2.11 below shows that the photo-generation may occur in three regions in

the photodiode: Carriers photo-generated inside the depletion region are separated and

forced to drift by the applied electrical field toward the contacts to constitute the drift-

component of the photocurrent. Carriers photo-generated within their respective diffusion

lengths (Ln, Lp) are diffused to the depletion region, where they are collected to constitute

the diffusion component of the photocurrent. Carriers photo-generated outside their

respective diffusion lengths have most likely recombined before reaching the depletion

region, therefore do not contribute to the photocurrent. The diffusion length (Ln,p) is the



characteristic length of the exponential distribution of minority carriers, at which the number of these carriers has decayed by a factor of e.

Figure 2.11: The CMOS-compatible photodiode (n+/p-substrate), showing the energy-band diagram of its reverse-biased p-n junction and the photo-generation (PG) and recombination (R) mechanisms of the electron-hole pairs (carriers). Also shown are the carriers' drift current due to the electric field and the diffusion of minority carriers that constitutes the diffusion current.


The diffusion length is governed by both diffusion and recombination processes, and is given by L_n = sqrt(D_n·τ_n) (µm) for electrons and L_p = sqrt(D_p·τ_p) (µm) for holes, where D_n,p and τ_n,p are the respective diffusivities and lifetimes for electrons and holes. The diffusivity is related to the mobility through the Einstein relation D_n,p = (kT/q)·µ_n,p (cm2/s), where kT/q is the thermal potential, v_T ≈ 25 mV at room temperature, and µ_n,p is the mobility (cm2/Vs). The lifetime, τ_n,p (s), in silicon (Si) is proportional to the reciprocal of the density of recombination centers (N_t) in the energy band-gap, τ_n ∝ 1/N_t [38]. Typical values for silicon are shown in Table 2.1 below.

Table 2.1: Diffusion lengths in p-type and n-type silicon [38].

Carrier type | Substrate doping (cm-3) | Lifetime τ (µs) | Diffusivity D (cm2/s) | Diffusion length L = sqrt(D·τ) (µm)
Electrons    | 1.45 × 10^17 (p_po)     | 1.0             | 19                    | 43.6
Holes        | 5.2 × 10^16 (n_no)      | 0.3             | 17                    | 14.4
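As a minimal check of the L = sqrt(D·τ) relation, the short sketch below reproduces the electron row of Table 2.1 (the input numbers are simply copied from that row).

import math

D_n   = 19        # electron diffusivity, cm^2/s (Table 2.1)
tau_n = 1.0e-6    # electron lifetime, s (Table 2.1)
L_n   = math.sqrt(D_n * tau_n)        # diffusion length in cm
print(f"L_n = {L_n*1e4:.1f} um")      # ~43.6 um, as listed in Table 2.1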

Now, the total collection efficiency (collected carriers/incident photons) can be written as η_total-collection = η_depletion + η_bulk, where η_depletion and η_bulk represent the collection efficiencies in the depletion region and the bulk silicon respectively. According to the above discussion, η_depletion >> η_bulk, so it is desirable to have most of the collection occur in the depletion region. However, this is not always possible. For longer-wavelength (red) incident light, the absorption coefficients are low (meaning larger penetration depths), hence there is less light absorption (and collection) in the depletion region, especially for shallow depletion regions. This results in lower efficiencies that depend more on the diffusion length. On the other hand, for shorter-wavelength (blue) incident light, most absorption occurs in the depletion region, hence higher collection efficiencies are

expected. This may have a negative impact on sensitivity when using advanced (sub-micron) standard CMOS processes for image sensor fabrication, as the junction depths (xj) become shallower and a large portion of the incident light goes straight through the


depletion region [23]. An improved one-dimensional analysis of these photo-effect

parameters and their impact on CMOS photodiodes can be found elsewhere [39].

2.5.2 CMOS-compatible Photodetectors

In a standard CMOS process, either p-well or n-well, several parasitic junction devices can be used as photodetection elements. Because photodiodes, bipolar phototransistors, photogates, and pinned photodiodes [15, 19, 23, 38, 40] are by far the most popular devices used in vision chips, only these devices will be discussed in more detail here.

Other rarely used photo-detecting structures such as photo-PMOS, and the bidirectional

photodetector can be found elsewhere [15, 41].

2.5.2.1 Photodiodes

Figure 2.12 below shows the three possible photodiode structures that can be implemented using a p-substrate process: the n+/p-substrate photodiode, the p+/n-well photodiode, and the n-well/p-substrate photodiode. The first two are "shallow" junction photodiodes, whereas the third is a "deep" junction photodiode. It is worth noting that similar photodiode structures can be devised using n-substrate (p-well) processes.

Figure 2.12: Possible CMOS-compatible photodiode structures: (left) n+/ p-substrate diode, (middle) p+ /n-well diode, and (right) n-well /p-substrate diode (after [41]). “C” is the cathode of the photodiode, whereas “A” is its anode.


Diffusion-Substrate Junction

This type of junction is the most widely used photodiode among all the junction photodiodes available in the conventional CMOS process, because of its simple layout, which makes it less susceptible to the lithographic variations that cause fixed pattern noise. The junction is formed between an n-diffusion and the p-substrate (n+/p-substrate diode in Fig. 2.12).

The spectral response (efficiency) of this photodiode is better than that of p+/n-well

(diffusion-well) photodiode (refer to Fig. 2.13 below) because of the contribution of

carriers generated in the bulk substrate, and because the depletion layer itself is wider for

the same pixel pitch (size). However, this diode is vulnerable to crosstalk and noise due

to diffusion and leakage of carriers through the substrate [15, 42, 43]. The maximum

quantum efficiency occurs at λ ~620 nm, reflecting the contribution of the bulk (substrate) carriers. The response time is expected to be longer than that of the p+/n-well diode because of this contribution. This junction exhibits the lowest dark current compared with the other junctions (~4 × 10^-11 to 2.5 × 10^-10 A/cm2 for 2-50 µm optical window width [44]).

Figure 2.13: Simulated spectral response of the CMOS-compatible photodiode structures: (upper curve) n-well/p-substrate diode, (middle curve) n+/ p-substrate diode, and (bottom curve) p+/n-well diode (after [45]). Data used here are for 0.8um CMOS process.


Diffusion-Well Junction

This junction is formed with a p-diffusion in an n-well (p+/n-well diode in Fig. 2.12). The spectral response of this diode is the worst of the photodiode structures (refer to the bottom curve of Fig. 2.13 above). This degradation in spectral response is attributed mainly to the narrowness and shallowness of the p+/n-well junction. Charge

carriers generated outside the well (due to long wavelengths) are shielded by the n-well/p-

substrate junction and have no chance to be collected by the photodiode inside the well.

However, this n-well/p-substrate junction shielding leads to better isolation for the photodiode, hence lower crosstalk between neighboring photodiodes [46]. Moreover, this diode relies mainly on carriers that are photo-generated in the depletion region rather than those generated in the bulk, which have to diffuse to the depletion region to be collected; hence a faster response is expected from this photodiode. The maximum sensitivity is located at λ ~530 nm (green), reflecting the shallowness of the junction, where most of the green light is absorbed. This photodiode (p+/n-well) has the worst dark current compared with the other junctions (~4 × 10^-10 A/cm2 for 2-50 µm optical window width [44]).

Well-Substrate Junction

This type of junction is formed between an n-well and the p-substrate (refer to the n-well/p-substrate diode in Fig. 2.12 above). This diode has the best spectral response for visible light, due to the width and depth of its depletion region compared with the other photodiode structures, which allows the collection of minority carriers photo-generated

deeply in the substrate [15, 42, 43]. The well-substrate photodiode also has the lowest

capacitance, which helps to achieve a high bandwidth [43]. The disadvantage of the well-

substrate photodiode is its sensitivity to substrate noise and crosstalk from the neighboring

photodiodes due to diffusion of the minority carriers. This junction has a moderate dark current compared with the other junctions (~1 × 10^-10 to 8 × 10^-10 A/cm2 for 2-50 µm optical window width [44]).

An approximate formulation of the quantum efficiency for vertical p+/n-substrate

photodiode can be found in [15, 23], which can be easily extended to n+/p-substrate


photodiodes. The n-well (p-substrate) photodiodes are preferable over p-well (n-substrate) ones, because they fit the standard CMOS process (p-substrate) better and have a superior spectral response to p-well (n-substrate) photodiodes, since Ln > Lp. The quantum efficiency, η = J_total/(q·L_io), for n+/p-substrate photodiodes can be written as:

η_n+/p-substrate(λ) = 1 - e^(-α(λ)·W) / (1 + α(λ)·L_n)    (2.5)

Here J_total = J_drift + J_diffusion is the total photocurrent density (A/cm2), and J_drift and J_diffusion are the drift and diffusion photocurrent components respectively. W is the depletion-layer width as defined in Fig. 2.11 above. All other parameters are as defined before. This relationship suggests that the efficiency is critically dependent on the magnitude of the αW factor. The absorption coefficient is a material parameter, thus out of our control, whereas the depletion width W can be considered a potential design parameter, since it can be controlled by the bias in a CMOS process (for given doping levels). However, W increases

slowly with reverse bias voltage and higher reverse biases increase dark currents as well

[19].
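To illustrate the behaviour of the approximate expression (2.5), the sketch below evaluates it at a few wavelengths; the depletion width, diffusion length, and absorption coefficients are assumed example values, not parameters of any particular process.

import math

def eta_approx(alpha_per_cm, W_um=1.0, Ln_um=40.0):
    # Approximate QE from Eq. (2.5): 1 - exp(-alpha*W)/(1 + alpha*Ln), lengths converted to cm.
    W, Ln = W_um * 1e-4, Ln_um * 1e-4
    return 1.0 - math.exp(-alpha_per_cm * W) / (1.0 + alpha_per_cm * Ln)

for label, a in (("450 nm", 3e4), ("650 nm", 2.5e3), ("900 nm", 3e2)):   # assumed alphas, cm^-1
    print(f"{label}: eta ~ {eta_approx(a):.2f}")
# The efficiency falls at long wavelengths, where alpha (and hence alpha*W) becomes small.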

This approximate relationship succeeds in describing the sharp fall of η(λ) at long wavelengths, as shown in Fig. 2.13, through the low absorption coefficient at those wavelengths. However, it fails to explain the fall of η(λ) at short wavelengths, which is due to the low penetration depths at those wavelengths. This failure is attributed to several factors ignored in the derivation, such as neglecting the thermal current and/or the absorption in the top doped n+ region of the n+/p-substrate photodiode, assumptions which are not necessarily realistic

[23]. The photocurrent, I_ph = J_total × A_D, can be derived from the quantum efficiency, leading to:

I_ph = q · η · A_D · L_io · λ / (h·c)    (2.6)

where λ is the wavelength, L_io is the incident light intensity (W/m2), η is the quantum efficiency, q is the electron charge (1.6 × 10^-19 C), h is Planck's constant (6.62 × 10^-34 J·s), A_D is the photosensitive area of the photodiode, and c is the speed of light in free space (3 × 10^8 m/s).

This equation shows a linear relation between photocurrent and the incident light

intensity.
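A minimal numerical sketch of Eq. (2.6) is given below; the quantum efficiency, intensity, photodiode area, and wavelength are assumed example values only.

q, h, c = 1.6e-19, 6.62e-34, 3e8     # constants quoted in the text (SI units)
eta  = 0.6                            # quantum efficiency (assumed)
L_io = 1.0                            # incident intensity, W/m^2 (assumed)
A_D  = (10e-6) ** 2                   # 10 um x 10 um photodiode area, m^2 (assumed)
wl   = 550e-9                         # wavelength, m

I_ph = q * eta * A_D * L_io * wl / (h * c)
print(f"I_ph ~ {I_ph*1e12:.1f} pA")   # the photocurrent scales linearly with L_io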


Figure 2.14 below shows a typical measured quantum efficiency plot; the oscillation of η stems from interference effects in the passivation and the oxide layers on top of the silicon

wafer [38], as well as the effect of non-photosensitive areas such as interconnect metal

lines. This behavior was analyzed using one-dimensional models for CMOS photodiodes

[39].

Figure 2.14: Measured percentage quantum efficiency (spectral response) of some CMOS-compatible photodetection structures: (upper curve) n-well/p-substrate diode, (middle curve) n+/ p-substrate diode, and (bottom curve) photogate (after [47]).

2.5.2.2 Phototransistors

CMOS-compatible phototransistors are based on vertical or lateral structures. The PNP

lateral transistor (refer to Fig. 2.15(a)) provides high current gain, β, but has a complicated

structure and large device-to-device variation (mismatches), both of which can be

detrimental to the imager implementation and performance [43]. The vertical parasitic

PNP bipolar transistor (refer to Fig. 2.15(b)) in the n-well CMOS process provides high

compactness, moderate gain, and speed. It is called “parasitic” because it always appears

when implementing PMOS transistors.



Figure 2.15: Possible CMOS-compatible phototransistor structures: (a) Lateral PNP-phototransistor, (b) Vertical (parasitic) PNP-phototransistor, and (c) PMOS-phototransistor (after [41]).

For the vertical PNP-transistor, the emitter (p+ diffusion), the base (n-well), and

the collector (p-substrate) are arranged vertically, as shown in Fig. 2.15 above. The total


collector current (mainly due to the contribution of carriers photo-generated in the base-collector junction) can be written as:

I_C ≈ β · I_ph    (2.7)

Therefore, the effective quantum efficiency of the phototransistor is β times larger than

that of a photodiode as shown in Fig. 2.16 below.

Figure 2.16: Typical spectral response of the CMOS-compatible vertical PNP-phototransistor structure (left curve) compared with that of the n-well/p-substrate photodiode on right graph (after [40]).

The two spectral responses of Fig. 2.16 are qualitatively similar, due to the fact

that most of the amplified photocurrent is generated in the base-collector junction (equivalent to the n-well/p-substrate diode). This structure can be regarded as an n-well/p-substrate

photodiode with built-in amplifier and therefore has higher sensitivity (refer to Fig. 2.16).

Unless the base is externally connected, the general characteristics of a bipolar device lead

to a logarithmic dependence of the resulting base voltage on the incident optical power

[40]. By comparing the two curves in Fig. 2.16, we can roughly calculate the current gain

as β ~ 50, which can be improved by using special layout techniques [48].

The third phototransistor structure, shown in Fig. 2.15(c) above, is the PMOS-phototransistor, which was reported by Schanz [41]. It is an interesting idea, but rarely used. Again, similar devices can also be implemented in an n-substrate (p-well) CMOS process.


2.5.2.3 Photogate

The next CMOS photodetection structure presented here is the photogate, shown in Fig. 2.17 below, which is in fact a MOS capacitor that has been borrowed from CCD technology. These devices can be either NMOS or PMOS photogates, depending on the fabrication process used.

Figure 2.17: The basic structure of the NMOS-photogate photodetector (after [47]).

Figure 2.17 above shows the basic structure of the photogate

photodetector. It consists of a photogate (PG) with a floating diffusion (FD) output

separated by a transfer gate (TX). Perhaps the main advantage of the photogate structures

is that the sensing node and integration node are separate, which allows true correlated

double sampling (CDS) operation to suppress kTC noise, 1/f noise, and fixed pattern noise

(FPN) [16, 49]. On the other hand, the main disadvantage of the photogate structures is

a low spectral response (refer to Fig. 2.14 above), especially at shorter wavelengths (blue),

due to the absorption in the polysilicon gate, which has an absorption coefficient

equivalent to that of crystalline silicon [19]. The spectral response is worst with PMOS-

photogates, which have peak quantum efficiency that is four times lower than that of

NMOS-photogates [16].

2.5.2.4 Pinned Photodiodes

The last photodetection structure presented here is the pinned photodiode, shown in Fig. 2.18 below. This structure is fabricated using a specialized process to enhance the


performance (dark current). Like the photogate, this structure has its sense node separated

from its integration node, thus kTC noise can be removed by correlated double sampling

methods. Just like the photogate, after resetting the floating diffusion (FD), the photo-

generated charge is integrated in the pinned photodiode potential-well and transferred to

the output (FD) node for readout using the transfer gate (TX). The diode itself is then

reset through the TX and RST transistors. Unlike the photogate, its potential well (charge-collection well) is formed by a buried n+ layer (or originally intrinsic layer) instead of by the pulsed gate voltage on the photogate. The pinned photodiode can therefore be regarded as a self-biased internal photogate.

Figure 2.18: The basic structure of the pinned photodiode.

From the photodetection performance perspective, the main advantage of pinned photodiodes is the improvement of the quantum efficiency at short (blue) wavelengths, mainly due to reduced surface recombination of the photo-generated e-h pairs [23]. Moreover, pinned photodiodes exhibit lower dark current, because the p+ implant reduces the collection of dark current generated at the surface defect states and the Si-SiO2 interface [23]. From the pixel-level perspective, the reduced number of required connections compared with the photogate results in higher achievable fill-factors. Compared with the photodiode, however, the pinned photodiode has the disadvantages of a smaller fill-factor and more complicated addressing due to TX [9].


2.5.2.5 Photodiode or Phototransistor?

Because noise is multiplied by a factor of β as well, photodiodes have lower white noise and dark current than phototransistors with the same junction areas [50], and hence a higher dynamic range than phototransistors. Photodiodes also have better

linearity and faster response time than phototransistors [43, 38]. The latter is due to the

large base-collector capacitance, which is further increased by the gain (Miller’s feedback

effect). Moreover, phototransistors suffer from large device-to-device variation

(mismatches) caused by the large variation of β (20% within an array) due to lithography errors in the base width [50]. Consequently, phototransistor imagers will have higher fixed pattern noise (FPN) than photodiode imagers. The only possible advantage that phototransistors have over photodiodes is their higher effective quantum efficiency, which benefits from their greater gain [50,

43]. But at low light levels, where the emitter current of the phototransistor is below

100pA, the quantum efficiency of the phototransistor is approximately equal to that of a

photodiode. This occurs because the current gain, β, is a decreasing function of emitter

current.

In conclusion, photodiodes are better suited for image sensor applications than

phototransistors, because of their higher dynamic range, better linearity, faster response, and lower FPN. Accordingly, we will limit our discussion hereafter to photodiode photocircuits, keeping in mind that these photocircuits can also be implemented with phototransistor, photogate, or pinned photodiode structures.

2.5.3 Photocircuits

Photodetector circuits are sometimes called the "front door" to the vision chip, as they are

the front-end circuits that receive the photocurrent. These circuits have to be designed

carefully in order to facilitate optimizing other levels of processing circuits. In general,

the way that the photocurrent is processed in photocircuits depends on the overall

architecture of the vision chip. For example, in visual motion chips, the photocircuit

should restore the spatiotemporal behavior of the inputs, while taking care of the dynamic

range problem.


The dynamic range problem is a universal issue in image sensors, especially in uncontrolled illumination environments, where the input light intensity may vary over a large range

of at least 10 decades (the human eye is capable of functioning over a 12-decade

range)[15]. Obviously, some sort of adaptation should be used with these circuits, which

at most can cope with seven decades of signal variation. However, if the vision chip is to

be used in a controlled illumination environment (e.g. in industrial inspection), "adaptive

photocircuits” are not necessary and (if used) may reduce the fill-factor and/or the

resolution without significant gain. Accordingly, we limit our discussion here to non-

adaptive photocircuits such as the well-known active pixel sensor (APS) circuits, which

can be either linear (integrating) or logarithmic (continuous). There are various

configurations of these circuits, and they can be implemented using different kinds of photodetectors (photodiode, phototransistor, photogate, etc.). Here, we will present the original, widely used (standard) configuration, which is usually implemented with a photodiode as the photodetector.

2.5.3.1 Integration (linear) Active Pixel Sensors

The integrating active pixel sensor (Linear-APS) is the most popular type of circuit used in non-CCD imaging systems. The APS circuit has several advantages over other photocircuits, such as linear transfer characteristics, a large output swing, and a fair optical dynamic range of ~70-80 dB [51, 16], which can be controlled by changing the integration time. It exhibits low sensitivity to device mismatch (at least up to the sample-and-hold stage) because the integration time depends on the input capacitance, which has less mismatch than other parameters of the circuit

[15]. Also the integration principally acts as a low-pass filter, which removes the high

frequency components of the noise (both the device noise and the digital noise). The

main drawbacks of APS circuits (compared to the CCD) are fixed pattern noise and low

fill factor.

Among the early attempts to design APS cells is the work done by S. Chamberlain

[52]. The standard APS cell, shown in Fig. 2.19 below, consists of a photodetector and an

“active” MOS transistor M2 (source-follower amplifier). This is what distinguishes an


“active” pixel sensor from a “passive” one. The charge-to-voltage conversion occurs at

the sense node capacitance, Cgate (shown dashed), which comprises the photodiode capacitance

and all other parasitic capacitances connected to that node. The source-follower transistor

M2 acts as a buffer amplifier to isolate the sensing node. The load of this buffer is the

transistor M4 (active current-source load), which is located per column to keep the fill-

factor high. Generally, the output circuit consists of sample-and-hold (S&H) circuits

and buffer amplifiers. Buffers are required so that the sampled value is held accurately

until it is read out.

Figure 2.19: Standard active pixel sensor (APS) cell (gray area). The current source load (M4), sample-and-hold circuit (CSH, M5), and the buffer amplifier are all located per column.

The basic principle of Linear-APS involves pre-charging the gate-capacitor of the

transistor M2 to value VDD - Vth1, which is then discharged by the photocurrent through the

reverse biased photodiode. The pre-charging is controlled by a “Reset” clock applied to

the gate of transistor M1, whereas the discharging is controlled by the amount of

photocurrent in the photodiode, which is approximately linearly proportional to the light

intensity. Transistor M3 works as a row-select switch, with its gate "Row" connected to the row address decoder of the imager. The detailed operation of the integrating (linear) pixel will be discussed later in Chapter 3.
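The discharge behaviour described above can be sketched numerically as follows; the supply, threshold voltage, sense-node capacitance, photocurrent, and integration time are all assumed example values and do not describe the actual pixel designed in this work.

VDD, Vth = 3.3, 0.7      # supply and threshold voltage, V (assumed)
C_gate   = 50e-15        # sense-node capacitance, F (assumed)
I_photo  = 1e-12         # photocurrent, A (assumed)
t_int    = 33.3e-3       # integration time, s (assumed)

V_reset = VDD - Vth                                        # pre-charge level
V_sense = max(V_reset - I_photo * t_int / C_gate, 0.0)     # level after integration
print(f"reset level {V_reset:.2f} V -> after integration {V_sense:.2f} V")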


The optical dynamic range of Linear-APS is around 70-80dB and it was reported

that a range of over 100dB is achievable [51]. In fact Vietze and Seitz [53] designed

pixels which had a dynamic range of more than 150dB. Such a high dynamic range was

achieved by introducing a programmable offset current-source in-pixel as shown in

Fig.2.20 below. The offset current-source corresponds to the background illumination.

The magnitude of the offset current can be varied over 7 orders of magnitude. This

combined with the dynamic range of over 80dB of the photodiode itself gives the sensor

an overall dynamic range of over 150dB.

Figure 2.20: An offset current active pixel sensor (after [53]).

The sense node “SN” in Fig.2.20 is pre-charged to ~ VDD through the reset

transistor M2. The voltage on the gate of M1 is adjusted to provide the offset current.

Transistor M1 is operated in the subthreshold region. If the photodiode current is greater

than the offset current, then node “SN” will discharge. On the other hand if the

photodiode current is less than the offset current, node "SN" will charge up. The change in voltage at node "SN" is given by ∆V = (I_photo - I_offset) × ∆t / C_gate, where ∆t is the time required to discharge or charge up, which depends on the background (offset) illumination, the object (photocurrent) illumination, and the speed of response of the photocircuit.
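The ∆V relation above can be illustrated with a short sketch; the capacitance, offset current, photocurrents, and time step are assumed example values.

C_gate   = 50e-15     # sense-node capacitance, F (assumed)
I_offset = 1.0e-12    # programmable offset current, A (assumed)
dt       = 10e-3      # observation time, s (assumed)

for I_photo in (0.5e-12, 2.0e-12):    # photocurrents below and above the offset current
    dV = (I_photo - I_offset) * dt / C_gate
    print(f"I_photo = {I_photo*1e12:.1f} pA -> dV = {dV:+.2f} V over {dt*1e3:.0f} ms")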


2.5.3.2 Logarithmic Active Pixel Sensor Photocircuits

The simplest circuit for converting the photocurrent to a voltage is the logarithmic conversion circuit shown in Fig. 2.21. The logarithmic compression function is a result of the subthreshold operation of the diode-connected NMOS transistor (M1), since the photocurrents involved are usually very small and fall within the subthreshold region of an NMOS diode.

Figure 2.21: Logarithmic active pixel sensor (Log-APS) with one diode connected NMOS. The circuit in gray area is for the inverted Log-APS configuration.

The logarithmic active pixel sensors (Log-APS) can be realized with two distinct

configurations: the original "conventional" configuration shown in Fig. 2.21 or the inverted

configuration (shown in gray in the same figure), where the diode cathode is connected to

the supply VDD, while its anode and the NMOS diode drain are both connected to the

sense node (SN). The NMOS source here is connected to the ground as shown. The rest

of the circuit is the same for both configurations. Both circuits can be realized with 1, 2, or 3 cascaded NMOS diodes to improve their transfer characteristics (reduce compression). Moreover, the NMOS can be replaced by PMOS to further enhance the logarithmic slope. Unfortunately, using PMOS transistors is not only space-consuming but also results in higher fixed pattern noise because of their poorer matching behavior. Logarithmic pixels will also be discussed in more detail in Chapter 3.
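Although the exact expression is deferred to Chapter 3, the compressive behaviour can be sketched with the standard subthreshold approximation V_sense ≈ VDD - Vth - n·vT·ln(I_ph/I_0); this relation and all parameter values below are assumptions used only to show the compression, not the analysis of Chapter 3.

import math

def v_sense(I_ph, VDD=3.3, Vth=0.7, n=1.5, vT=0.025, I0=1e-15):
    # Standard subthreshold approximation for a single diode-connected NMOS load (assumed values).
    return VDD - Vth - n * vT * math.log(I_ph / I0)

for I_ph in (1e-13, 1e-11, 1e-9):     # three decades of photocurrent
    print(f"I_ph = {I_ph:.0e} A -> V_sense = {v_sense(I_ph):.2f} V")
# Only ~0.1 V of output change per decade of photocurrent, illustrating the strong compression.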


Figure 2.22 below shows the simulated transfer characteristics of the logarithmic active pixel sensor (Log-APS) with 1, 2, and 3 cascaded NMOS diodes (refer to Chapter 3 for

more details). These characteristics show an enhancement in the logarithmic response

(reduction in compression) as the number of cascaded NMOS increases. However, this

enhancement is achieved at the expense of fill-factor due to the increase of the circuit-area

relative to total pixel area.

Figure 2.22: HSPICE Simulation of the logarithmic active pixel (Log-APS) transfer-characteristics for pixels with 1, 2-, and 3-cascaded NMOS diodes. Simulation was performed using 0.5µm CMOS process with level-13 SPICE model.

The logarithmic circuits have been the workhorse of many vision chips [15, 37].

They are used because of their small size, large dynamic range, continuous operation, and

their true random accessibility [15, 37, 54, 55]. Despite all of these advantages, these

circuits suffer from some serious drawbacks:

a) At low light levels, the circuit exhibits a very slow response, because the low photocurrent charges/discharges the sense node capacitance slowly, which necessitates longer settling times [15, 37].



b) The heavy dependence of some operational parameters, such as the threshold voltage Vth, on the process parameters, which may vary by as much as ±20%. This creates a large

spatial (fixed pattern) noise because of the exponential dependence of photocurrent on

Vth, which is difficult to compensate in the logarithmic mode [37].

c) The extreme (logarithmic) compression limits their ability to detect small changes (or

differences) in the optical signal. For edge (motion) detection applications, and when

the contrast of the input image is low, the logarithmic compression further reduces the

contrast to very low levels, which may limit the detection to very large contrast edges

only [15, 37].

2.5.3.3 CMOS Imager Limitations

Photocircuits in CMOS imagers suffer from some general problems such as crosstalk, fixed pattern noise (FPN), low fill factor, dark current, etc. [56]. This section briefly introduces these problems, along with testing issues, which are introduced in the later part of the section.

Crosstalk

Crosstalk may have an optical and/or electrical origin. Optical crosstalk is due to light travelling laterally through the SiO2 layer (like in a waveguide) to the junctions covered by

metal. Electrical crosstalk, the main contributor to the crosstalk, is due to the discharge of

neighboring cells’ gate capacitors caused by minority carriers with long lifetime.

However, using diffusion-well diodes instead of the diffusion-substrate diodes can reduce

the crosstalk, since the wells are better isolated [46].

Fixed Pattern Noise

Fixed pattern noise is defined as the variation in pixel output currents/voltages under uniform (or dark) illumination. It arises mainly from the mismatch of transistors in the process. Clever circuit design techniques such as correlated double sampling are used to suppress this noise. The technique is simple: the pixel output at reset and the output voltage


corresponding to the input light are read differentially so that the fixed pattern noise factor

cancels [51]. Fixed pattern noise will be discussed in more detail in Chapter 6.

Low Fill Factor

CMOS imagers also suffer from a low fill factor in comparison with CCD imagers. The

fill factor for CMOS imagers, however, has undergone significant improvement recently

by the use of more advanced CMOS processes (scaling down of CMOS devices). The

main advantage offered by CMOS imagers is the integration of associated circuitry.

However, this has to take place at the expense of fill factor.

Dark Current

Dark current is defined as unwanted current generation in a semiconductor junction. It is

called dark current since it corresponds to the photocurrent under no illumination. It is of

thermal origin and similar in nature to the reverse saturation current, hence sometimes it is

referred to as photodiode leakage current [19]. The main impact of dark current is that it

limits the photodetector (and the imager) dynamic range [19, 21], because:

1. it reduces the output signal swing by filling up the well capacity that is intended to

hold the charge due to the incident illumination, reducing the full well capacity and

thus the maximum signal-to-noise-ratio.

2. it introduces an unavoidable shot noise that adds to the overall noise of the imager, affecting the low-light-level performance.

3. it can vary widely over the image sensor array causing fixed pattern noise (FPN).

There are many sources of dark current, but they all originate from the existence of defect states: those at the interface between the oxide and the silicon, surface defect states, and bulk defect states [19]. Thus, dark current is a function of the manufacturing process and layout. There are many other factors affecting the dark current level; perhaps the most obvious is temperature, which affects the dark current dramatically, reflecting the thermal nature of the current. Moreover, dark current seems to be dominated by carrier

generation in the depletion region. It decreases with the decrease of the width of


the depletion region; thus it can be reduced by reducing the reverse bias voltage. However, this

reduces the photocurrent too.

Dark current is normally reported in nA/cm2; therefore it is normalized for a

junction area. Since it is reported as a generation rate it is also independent of system

parameters such as gain and integration settings. The actual total dark current that affects

the system is the generation rate multiplied by the junction area and by the integration time. This number is then reported in electrons, which makes its effect on a system easier to understand. For a dark current of 10 nA/cm2 = 10 nC/(s·cm2) = (10n/q) electrons/(s·cm2), with the electron charge q ≈ 1.6 × 10^-19 C, this gives ~6.25 × 10^10 electrons/(s·cm2); for a 40 µm2 = 4 × 10^-7 cm2 active pixel area, the result is 25k electrons/s. For an integration time of 33.3 ms, the resulting dark charge is ~833 electrons. These unwanted electrons fill part of the well capacity. The full well capacity can be calculated from the reverse bias (~3.3 V) and the diode capacitance (~50 fF for a 40 µm2 photodiode) as: full well capacity = 50 fF × 3.3 V = 1.65 × 10^-13 C = (1.65 × 10^-13/q) electrons ~ 10^6 electrons. Upon comparison, the dark charge (833 electrons) seems insignificant compared with the full well capacity (1M electrons), yet it can dramatically affect imager performance.
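The same bookkeeping can be written as a short script; the numbers are simply those assumed in the paragraph above.

q       = 1.6e-19          # electron charge, C
J_dark  = 10e-9            # dark current density, A/cm^2
A_pixel = 4e-7             # 40 um^2 active pixel area, cm^2
t_int   = 33.3e-3          # integration time, s
C_diode, V_rev = 50e-15, 3.3   # diode capacitance (F) and reverse bias (V)

dark_electrons = J_dark * A_pixel * t_int / q      # ~833 electrons
full_well      = C_diode * V_rev / q               # ~1e6 electrons
print(f"dark charge ~ {dark_electrons:.0f} e-, full well ~ {full_well:.2e} e-")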

There are no accurate estimates of dark current from analytical models or device simulation, but it can be accurately determined by measurements [19]. Nevertheless, some closed-form models for dark current (caused by the bulk defect source, which is not necessarily the largest dark current source) have already been reported [19, 38]. The dark current, I_s, is given as [38]:

I_s = q · A_D · n_i^2 · ( D_p/(L_p·N_d) + D_n/(L_n·N_a) )

where A_D is the diode area, D_n,p and L_n,p are the respective diffusivities and diffusion lengths for electrons and holes, N_a and N_d are the acceptor (p-region) and donor (n-region) doping densities respectively, and n_i is the intrinsic carrier concentration.

For dark current measurements please refer to appendix E, where the dark current

was measured for two different diode areas and geometries. These measurements were

conducted in the course of estimating the shunt resistance Rsh of the equivalent circuit

model to be discussed in the next chapter. The resistance is the slope of the dark current

versus reverse bias voltage graph. Note that what we measure is the reverse photodiode


characteristic, I_dark = I_s·(e^(V_D/v_T) - 1), which can then be used to calculate I_s. Here V_D is the voltage across the diode (negative under reverse bias) and v_T = kT/q is the thermal potential (~25 mV at room temperature). Using the forward bias characteristic I ≈ I_s·e^(V_D/v_T) (with V_D positive here) to estimate the dark current could be a better alternative [44], because leakage currents on the chip surface and package are larger at reverse bias.

The obvious desire is to decrease dark current as much as possible. However,

once it is reduced to a level that no longer affects the performance of the system then

further dark current reduction is not a priority.

Testing Issues

The testing of photodetector circuits is non-trivial. The circuits need to be partially exposed, i.e., only the photoactive regions are exposed while the associated circuitry is protected. The protection is achieved by covering the non-photoactive regions with a metallic layer, which is opaque to light. Another important issue is the light source used.

The complete characterization of these circuits needs costly equipment like a complete

optical test bench, laser source, picoammeter, digital oscilloscope, etc. [37]. See

Appendices B and D for detailed testing setup information.

2.6 Visual Motion Detection

Motion is a key component of the visual scene; it is used very effectively by biological organisms from insects to humans [57]. Accordingly, visual motion detection is considered one of the fundamental computations required for visual perception [58]. It serves as an important source of information for many tasks, and many important cues can be obtained from it to facilitate 3D perception, object tracking, and efficient navigation through

the environment [59, 60, 61], just to name a few.

Aside from its biological importance, motion detection is an important process in

many artificial vision-based applications [62]. A typical example is area surveillance,

where specific patterns should be recognized or motion detected for activating successive

functions [63]. Motion detection is also an important task within the visual information


processing. It has been used for image enhancement, dynamic range widening [64], and

image compression techniques used in high pixel rate cameras and processing systems

[65].

Motion detection already has potential in robotics, automotive navigation, remote sensing [55], and industrial applications [66]. In industrial applications such as manufacturing inspection, which is of interest here, a novel technique is proposed that uses motion information as a cue for attentive selection, segmenting an object from the background. This technique leads to a large data reduction, as will be shown in more detail in the next chapter.

While humans and other organisms compute motion information instinctively,

engineering a man-made system is a far more challenging task, as will be discussed in the

sections to follow. The discussion will be devoted to the introduction of various motion

detection algorithms and in-pixel motion detection architectures, upon the basis of which

the proposed design will be chosen.

2.6.1 Motion detection algorithms

There are basically two main categories of algorithms that have been devised for motion

detection chips: biological, and computational. Some early implementations were based

on the optic-flow theory, which belongs to the computational category. Due to

complexity and inherent problems in this theory [15] however, no recent motion detection

chips have been based on this model. In fact all computational algorithms for motion

detection are very complex [15, 67] and only a few motion detection chips are based on

these models.

On the other hand, the use of analog VLSI technology to implement algorithms

that mimic biological visual systems has opened up a new insight into machine

intelligence [68]. Because biological models offer simple structures, which are VLSI

friendly, a large number of vision chips have adopted these models, or modified versions

of them [15]. Of particular interest, is the real time motion detection, which has been the

focus of much active research because of their importance in robotics and industrial

applications [69].


Most of these biologically inspired motion detection chips are intensity-based [67],

and there are essentially two approaches to implement them in VLSI: Correlation

schemes and gradient schemes [67, 70]. Both are analog and spatiotemporal [70]. A

feature common to all correlation schemes is that the intensity, E, is passed through a

filter and then multiplied with a delayed version of filtered intensity from a neighboring

receptor [67]. The output has a quadratic form from which motion information is

extracted [67]. Gradient schemes, on the other hand, yield to a direct estimate of motion

[67], and only require the evaluation of derivatives of the image intensities.

Many researchers [67, 69, 70] have claimed that correlation schemes, such as those described by Delbrück [70], are more robust than gradient ones, such as that described by Tanner and Mead [61]. An experimental comparison can be found in Koch et al. [67]. However, gradient detectors are more popular [67], more straightforward, and involve fewer operations (and thus less circuit area) than correlation detectors.

Therefore, only gradient methods will be discussed here, although correlation detectors

are not completely excluded from our plans.

It should be noted that in this thesis, the term “motion detection”, hereinafter

means the detection of movement in the image projected onto an imager; no velocity

computation is implied.

2.6.2 Gradient motion detection schemes

Biological visual systems can detect motion of images that are both time variant (inconstant illumination) and spatially variant. Mimicking this type of operation is complex and requires more in-pixel circuitry [71]; therefore, it is outside the scope of this work. However, if the intensity of the image is time invariant (under constant illumination, as in our proposed application) and only spatially variant, real-time motion detection can be performed using temporal differentiation of the output current of the photodetector [71]. Accordingly, the term "gradient" is hereinafter devoted only to temporal differentiation, and no spatial differentiation is involved.

However, since the term “motion” has a spatial as well as temporal component

built into it, detecting intensity variation over time for a single pixel cannot be regarded as


motion detection. Further steps are required to gather and interpret this information from

all pixels, which are spatially distributed, in order to extract the spatial information and

hence detect motion.

There are basically two different schemes to implement the gradient scheme in

analog VLSI: continuous and stored-comparison schemes. They differ in how the

temporal differentiation (gradient) is to be performed. Given image intensity E x y t( , , ) as

function of location and time, the temporal derivative ∂ ∂ E x y t t( , , ) / of the image

intensity has to be evaluated in order to extract motion information [68].

In the continuous (delayed) methods, the image intensity is either passed through

an analog differentiator or subtracted from a delayed sample of the same signal using

differential amplifiers. Both ways are similar and lead to effectively continuous

differentiation. On the other hand, stored methods involve past image information, since they use the approximation $\partial E(x,y,t)/\partial t \approx \left[E(x,y,t) - E(x,y,t-\Delta t)\right]/\Delta t$ to perform the differentiation. The difference between the two methods depends on whether $\Delta t$ is shorter than the "pixel readout time" (column readout time), as in continuous methods, or larger than the "frame readout time", as in stored methods. All the relevant times for "raster" scanning of an (n × m) array are shown in Fig. 2.23 below.
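A minimal software sketch of the stored-comparison idea, assuming two already-captured frames and a hypothetical detection threshold (this is not the in-pixel implementation discussed later):

```python
import numpy as np

def stored_comparison_motion(frame_now, frame_prev, dt, threshold):
    """Approximate dE/dt as a frame difference and flag pixels that changed.

    frame_now, frame_prev : 2-D arrays of pixel intensities (same shape)
    dt                    : frame interval in seconds
    threshold             : minimum |dE/dt| regarded as motion (hypothetical value)
    """
    dEdt = (frame_now - frame_prev) / dt   # discrete temporal derivative
    return np.abs(dEdt) > threshold        # boolean motion map

# Example with synthetic 4x4 frames; only one pixel changes between frames.
prev = np.zeros((4, 4))
curr = prev.copy()
curr[2, 1] = 50.0                          # a moving edge brightens this pixel
motion_map = stored_comparison_motion(curr, prev, dt=1/30, threshold=100.0)
print(motion_map.astype(int))
```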

Practically, both continuous and stored methods require some kind of “storage or delay”

elements. These elements are a challenge to the design of motion chips, because apart

from potentially being area-consuming, they can be difficult to implement [15].

In this context, continuous schemes generally require less chip area, and are more

randomly accessible than stored schemes, because their operation does not necessitate

large storage capacitors, and, in principle, can be accessed at any time. On the other hand,

stored schemes are more sensitive and place fewer constraints on the photodetector signal-to-noise ratio (S/N); thus they can be used at low light intensities. This is because

such schemes allow enough charge accumulation during “integration time”, hence larger

signals result even at low light intensities.


Figure 2.23: Timing diagram of (n×m) array "raster" scanning. When a row "j" is activated, the columns (pixels) are read out successively and output serially.

2.6.2.1 Stored Comparison Schemes

Fundamentally, an analog storage element can consist of either a capacitor holding charge or an inductor holding current. With the latter being unfeasible in standard VLSI processes, the former is the only choice [15]. Due to the leakage current existing at any capacitive node, large capacitors should be used to increase the so-called "charge retention time" [15].

The DRAM style structure shown in Fig. 2.24 seems to be the simplest and the

most suitable charge storage for motion detection applications. However, this type of

memory suffers from leakage through the switch transistor, and thus can hold the charge

for at most one second [24]. The acceptable retention time for imager application


obviously depends on the required imager resolution. This circuit therefore is useful only

for very short-term storage, such as in small imagers with fast frame rates.

Figure 2.24: A DRAM basic memory cell for storing analog charge (after [15]).

Figure 2.25: Circuit for reducing the leakage (after [72]).

The storage capability of the cell can be improved by using several techniques,

such as differential storage, and leakage reduction. In the differential storage technique

the original signal is translated into a differential signal and stored on two similar storage


devices. As the leakage reduces the charge almost equally at both nodes, the difference

remains the same. This method can increase the storage time by several times. A

drawback of this technique is the additional area consumed by the differential translation

and the extra capacitance [15].

In leakage reduction techniques, the leakage of the source/drain diffusion of the

switching transistor at the storage node is reduced by setting the voltage across the anode

and cathode of the source diffusion-well diode to zero [72], as shown in Fig. 2.25 above.

Using this circuit, storage times of up to several seconds in normal conditions can be

achieved [15].

2.6.2.2 Continuous (Delayed) Comparison schemes

Continuous detectors seem to be the best gradient motion detectors for mimicking the spontaneous motion-detecting neurons in animals, because they offer the maximum random accessibility among gradient detectors as a result of the continuous nature of their operation. Delaying a signal in the analog domain requires a capacitive node which can hold the information. Because, in continuous schemes, charge is continuously injected into the capacitor and read out [15], no large capacitors are required.

ideal controllable delay elements is very difficult, if not impossible [15]. The delay

element is usually approximated by circuits, such as integrators, as shown in Fig. 2.26

below. The amount of delay in the RC network depends on the resistor value; in the OTA-C (Operational Transconductance Amplifier-Capacitor) circuit it depends on the bias current, and in the current-mode delay element it depends on the input current level [15]. In order to

achieve large delay times using a conventional OTA-C circuit, very small biasing currents

are required [73, 74].

Figure 2.27 below shows a schematic diagram of a typical continuous comparison

detector, the motion adaptive sensor for image and wide dynamic range enhancement

(Hamamoto et al. [64]). The sensor has functions for detecting moving and/or saturated

image pixels and is able to control the suitable integration time pixel by pixel, which

results in images with no motion blur and no saturation [64]. Whenever an image pixel is

detected as moving or saturated, a flag signal is activated. Consequently, the pixel value


is output and the detector is reset. If the pixel is not detected as moving or saturated, the

detector continues its charge integration operation.

Figure 2.26: (a) An RC circuit used as a delay element, (b) an OTA-C circuit as a delay element, (c) a current-mode delay element (after [15]).

Figure 2.27: Processing circuitry in Hamamoto et al. motion adaptive sensor [64].

Analog Differentiation

Analog differentiation is a good example of the continuous comparison schemes.

When a time-invariant, spatially variant image is projected onto an array of

photodetectors, the intensity of the light falling onto a photodetector, and thus its output


current is constant over time. If the image is stationary, the output of a temporal

differentiator (used to differentiate photodetector output) is zero. When the image moves,

the spatially varying light intensity of the image will cause changes in the output current

of the photodetector. Thus, the temporal differentiator will produce a non-zero output

voltage (or current) proportional to the first derivative of the output current of the

photodetector as shown in Fig. 2.28 below.

Figure 2.28: Input and output of typical analog temporal differentiator.

Before introducing various types of differentiators, it is worth noting that temporal

differentiation is a high-pass filtering operation. It passes rapidly varying signal

components and ignores slowly varying ones [75]. In general, a perfect differentiator (e.g.

ideal capacitor) is unrealizable because it requires infinite gain (current) and/or infinite

frequency response as shown in Fig. 2.29 below. Every system has some leakage that

causes the imperfections.

Physically realizable differentiators can be simply made using the RC circuit

shown in Fig. 2.30 below. The resistor is used to turn the capacitor current, which

represents the temporal derivative of the voltage across the capacitor, into voltage for

further processing. The differential equation that characterizes this circuit can be written

as:


$C\,\dfrac{d(V_{in} - V_{out})}{dt} = \dfrac{V_{out}}{R}$   (2.8)

Figure 2.29: Comparison between the frequency response of perfect differentiator and the practical differentiator. At low frequencies, the circuit acts as a perfect differentiator. At high frequencies, the resistor limits the current through the capacitor (after [75]).

Taking the Laplace transform of both sides of Equation 2.8, we end up with the following transfer function:

Figure 2.30: A physical circuit realization of an approximation to the temporal derivative operation.


$\dfrac{V_{out}}{V_{in}} = \dfrac{\tau s}{\tau s + 1}$   (2.9)

where $\tau = RC$; for a sinusoidal input $s = j\omega$, where $\omega$ is the angular frequency. The frequency response of the circuit is shown in Fig. 2.29 (with R = 1 kΩ and C = 0.01 µF) compared with the response of a perfect differentiator. At low frequencies (or short $\tau$), both curves overlap. At high frequencies (or long $\tau$), the response of the RC circuit levels off to 1, since $V_{out} \approx V_{in}$. This suggests that for low frequencies or short $\tau$, the first-order differentiator abstracted by Equation 2.9 can be a workable approximation to a perfect differentiator [75]. The sensitivity of this differentiator is $\tau = RC$.
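A brief numerical sketch of Equation 2.9 (using the R = 1 kΩ, C = 0.01 µF values quoted above) illustrating how the first-order response tracks an ideal differentiator at low frequency and levels off at high frequency; the frequency points chosen are arbitrary:

```python
import numpy as np

R, C = 1e3, 0.01e-6        # 1 kOhm, 0.01 uF  ->  tau = 10 us
tau = R * C

freqs_hz = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
for f in freqs_hz:
    w = 2 * np.pi * f
    h_rc = (tau * 1j * w) / (tau * 1j * w + 1)   # Eq. 2.9 with s = jw
    h_ideal = tau * w                            # magnitude of an ideal differentiator, |tau*s|
    print(f"f = {f:8.0f} Hz  |H_RC| = {abs(h_rc):.4f}   |H_ideal| = {h_ideal:.4f}")
```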

Typical examples of analog differentiators are the follower-based differentiators and the hysteretic differentiators [75]. An example of the first category is shown in Fig. 2.31 (a) below. In this circuit, the time derivative is approximated by the difference between the input and a delayed version of the input. This circuit results in a positive derivative, which can be changed to a negative derivative by simply interchanging the inputs of amplifier A1. The theoretical response of this differentiator to a step input is identical to that in Fig. 2.28 above, except that it is multiplied by a gain A, and the area under the step response ($V_{out}$) is $A\tau$ times the input step. This circuit is "A" (the gain of amplifier A2) times more sensitive than the simple RC differentiator shown in Fig. 2.30. However, it suffers from a large offset in the output voltage equal to the gain A times the difference $\delta V$ between the offset voltages of the two amplifiers [75]:

$V_{out} = \dfrac{A\tau s}{\tau s + 1}\,V_{in} + A\,\delta V$   (2.10)

A typical amplifier gain A is ~2000, so an offset of a few millivolts is enough to drive the output to one of the rails. A modified version of this circuit that overcomes the offset

problem is shown in Fig. 2.31(b) below. The smoothed output of A2 is fed back to the negative inputs of both amplifiers, thereby greatly reducing the effect of input offset voltages. Moreover, this circuit can respond to both edges of the step input (low-to-high and high-to-low), which is an important advantage over the original circuit shown in Fig. 2.31 (a). This differentiator circuit, however, suffers from a damped oscillatory behavior in its response [75].


Figure 2.31: (a) A follower-based differentiator. (b) A modified version to reduce output offset due to mismatch (after [75]).

Both circuits require two amplifiers, with two bias controls. Hysteretic differentiators, on the other hand, offer a simpler implementation [75].

Briefly, the hysteretic differentiators approximate time derivatives by amplifying the "changes" while not amplifying the steady-state value of the input. This functionality can be

achieved by a single amplifier with nonlinear elements in its feedback path, as shown in

Fig. 2.32 below. These kinds of differentiators offer many desirable properties, such as

large excursions in the output when the derivative of the input waveform changes sign,

and better noise immunity because their output is primarily responsive to major changes in

the derivative of the input voltage [75]. All told, these properties are remarkable for

simple, one-amplifier circuits.

Figure 2.32: A typical example of a Hysteretic differentiator (after [75]).


A practical example of using analog temporal differentiators in visual motion

detection can be found in Moini et al.’s insect vision chips [74, 76, 77]. This sensor is

biologically inspired and is based on the "template model", where motion information is obtained by thresholding the temporally differentiated intensity, $\partial E/\partial t$, at each pixel using the analog differentiator shown in Fig. 2.33 below.

Figure 2.33: Moini’s OTA-based differentiator [15, 74, 76, 77].

The resulting output indicates three states: increase, decrease, or no-motion, which can be coded using two digital bits. This output is then sampled and stored. Templates are formed by collating the outputs of two adjacent cells at two consecutive sampling instants, which are then coded to represent motion information. The rest of the processing, which involves tracking of some specific templates, is done using six tracking engines. The locations of the tracked templates are reported off-chip. The chip has a 1D array of 64 photoreceptors, each followed by a differentiator, as shown in Fig. 2.34 below. It also contains

RAMs for storing the templates and final results only for interfacing purposes. Six

search-and-track engines have also been implemented which operate on specified areas of

interest in the image. The chip has been fabricated in a 2µm CMOS process. The detectors

and analog processing elements only occupy ~5% of the chip and the rest is dedicated to

digital processing modules. This implies that these processing functionalities were


achieved at the expense of silicon area, which leads to the conclusion that this technique can only be accomplished by sacrificing image quality (resolution) and/or sensitivity (fill-factor). The functional diagram of this chip is shown in Fig. 2.34 below.

Figure 2.34: Functional block diagram of Moini’s insect vision chip showing the formation of templates (after [15, 74, 76, 78]).
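A rough, software-level illustration of the template idea described above (this is not Moini's circuit implementation; the threshold and sample values are hypothetical):

```python
import numpy as np

def code_state(dEdt, threshold=1.0):
    """Code a temporal derivative into three states: +1 increase, -1 decrease, 0 no motion."""
    if dEdt > threshold:
        return +1
    if dEdt < -threshold:
        return -1
    return 0

def form_templates(states_t0, states_t1):
    """Collate the coded outputs of adjacent cells at two consecutive sampling instants."""
    return [
        (states_t0[i], states_t0[i + 1], states_t1[i], states_t1[i + 1])
        for i in range(len(states_t0) - 1)
    ]

# A bright edge moving to the right across a 1-D array of 5 photoreceptor outputs.
dEdt_t0 = np.array([5.0, -5.0, 0.0, 0.0, 0.0])   # edge between cells 0 and 1
dEdt_t1 = np.array([0.0,  5.0, -5.0, 0.0, 0.0])  # edge between cells 1 and 2
s0 = [code_state(x) for x in dEdt_t0]
s1 = [code_state(x) for x in dEdt_t1]
print(form_templates(s0, s1))   # in this toy setup, templates such as (+1, -1, 0, +1) mark rightward motion
```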

Another practical example is Chong et al.'s current-mode (CM) differentiator [71], which uses a current mirror as shown in Fig. 2.35. This chip utilizes a compact current-mode

circuit for differentiating the photocurrents and generating a pulse on the occurrence of an

increasing or decreasing light intensity. As shown in Figure 2.35, a simple

transconductance amplifier driving an integrating capacitor (delay element) is added to the

feedback loop of the current mirror to convert it into an analog (temporal) current-

differentiator. When the input light intensity decreases, the voltage at the output node,

“Out”, goes down. The negative feedback loop will eventually fix the operating point of

the circuit at a point where Iphoto = Ifeedback, and a voltage pulse will be detected at the

output. This voltage pulse is then converted to current and read out through an x-y switch

network.

There are mainly three issues that limit the applications of this circuit. Firstly, the circuit can only detect one kind of edge (high-to-low light intensity), as a result of using a simple inverting amplifier consisting of only two transistors, M1 and M2, instead of a standard 5-transistor OTA [15] like the one shown in Fig. 2.33 above. The second issue is that the circuit is only conditionally stable; in order to stabilize it, a very small biasing current, Ibias, must be used [15]. The third issue is the achievable pixel density: the maximum pixel density claimed with this in-pixel differentiator was 45 pixels/mm² [71], achieved using a 3µm CMOS technology. This means only 720 pixels can be implemented on a 4 mm x 4 mm die, which is a very low density where quality image (resolution) capture and/or a high-sensitivity (fill-factor) imager is required, even if an advanced CMOS technology, such as 0.35µm CMOS, were used.

Figure 2.35: Current mode Chong et al.’s differentiator (after [71]).


2.7 Visual Edge Detection

Edges are very important to any vision system (biological or machine). They provide strong visual clues that can help the recognition process. Therefore, they are usually used for image segmentation as a first step in image analysis. An edge may be regarded as a boundary between two dissimilar regions in an image (i.e. the boundary between two regions with relatively distinct gray-level properties). These may be different surfaces of an object, or perhaps a boundary between light and shadow falling on a single surface [79]. There are many methods for calculating edges; however, they can be categorized into two main classes: gradient methods (based on the first derivative) and methods based on the second derivative, obtained using the Laplacian. Here, we are interested in the gradient method described below.

Gradient based methods

An edge point can be regarded as a point in an image where a discontinuity (in gradient) occurs across some line. A discontinuity may be classified as one of three types (see Fig. 2.36 below):

A Gradient Discontinuity - where the gradient of the pixel values changes across a line. This type of discontinuity can be classed as roof edges, ramp edges, convex edges, or concave edges.

A Jump or Step Discontinuity - where the pixel values themselves change suddenly across some line.

A Bar Discontinuity - where pixel values rapidly increase then decrease again (or vice versa) across some line.

In practice the discontinuities are not as sharp as shown here, and are usually blurred because of the Modulation Transfer Function (MTF) of the optical system (lens, passivation, etc.).

The gradient is a vector, whose components measure how rapidly pixel values are

changing with distance in the x and y directions. Thus, the components of the gradient

may be found using the following approximations:

$\Delta_x = \dfrac{\partial f(x,y)}{\partial x} = \dfrac{f(x+dx,\,y) - f(x,\,y)}{dx}$   (2.11a)

$\Delta_y = \dfrac{\partial f(x,y)}{\partial y} = \dfrac{f(x,\,y+dy) - f(x,\,y)}{dy}$   (2.11b)

where dx and dy measure distance along the x and y directions respectively.

Figure 2.36: Types of image discontinuities (edges) [79].

In (discrete) images we can consider dx and dy in terms of the number of pixels between two points. Thus, when dx = dy = 1 (pixel spacing) and we are at the point whose pixel coordinates are (i, j), we have [79]:

$\Delta_x = f(i+1,\,j) - f(i,\,j)$   (2.12a)

$\Delta_y = f(i,\,j+1) - f(i,\,j)$   (2.12b)

For our case, to be presented in Chapter 4, we used a 1D edge detection system, which can detect vertical (or horizontal) edges, but not both. In this case we have:

$\Delta_x = f(i+n,\,j) - f(i,\,j)$   (2.13)

where n is the number of pixels in the row (horizontal resolution). It is worth mentioning that all the gradient methods presented above for motion detection can be applied to edge detection. In fact, "motion" can be regarded directly as a temporal edge or as the movement of a spatial edge, as we will see later in Chapter 4.
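A minimal sketch of the discrete gradients in Equations 2.12a-b applied to a small synthetic image (the array values and indexing convention below are assumptions for illustration):

```python
import numpy as np

def gradient_x(img):
    """Difference between horizontally adjacent pixels (in the spirit of Eq. 2.12a)."""
    return img[:, 1:] - img[:, :-1]

def gradient_y(img):
    """Difference between vertically adjacent pixels (in the spirit of Eq. 2.12b)."""
    return img[1:, :] - img[:-1, :]

# Synthetic 4x6 image with a vertical step edge between columns 2 and 3.
img = np.zeros((4, 6))
img[:, 3:] = 100.0

dx = gradient_x(img)   # large values mark the vertical edge
dy = gradient_y(img)   # all zeros: no horizontal edge in this image
print(np.abs(dx).max(), np.abs(dy).max())   # 100.0  0.0
```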


Chapter Three

Modeling and Characterization of CMOS Active Pixel Sensors

3.1 Introduction

In this chapter, we present detailed analyses of CMOS active pixel sensors (APS) already

introduced in Chapter 2. These analyses were performed using HSPICE [80, 81], a well-

known electrical circuit simulator that utilizes a wide range of CMOS device models,

with numerous levels of computational complexity, efficiency, and/or accuracy including

deep-submicron effects and a wide range of device operations. This simulator can be used stand-alone or integrated into a general CMOS design CAD environment, such as Cadence® [82], which includes comprehensive toolsets for any type of VLSI implementation. Because CMOS-compatible photodetectors are based on parasitic junctions that are not native devices in CMOS processes, we have to devise equivalent circuit models for these devices in order to simulate them using circuit simulators. In the next

section, we will introduce the CMOS photodiode models that will be translated to an

equivalent circuit, which will be then used for HSPICE simulations. This will be

followed by an analysis of logarithmic active pixel sensors (Log-APS) and linear

(integrating) active pixel sensors (Linear-APS). These investigations are supported


(where appropriate) by the experimental results obtained from chips fabricated using

standard 0.5 µm, and 0.35 µm CMOS technologies.

3.2 CMOS-Compatible Photodiode Models

3.2.1 Equivalent circuit models

Equivalent circuit models are based on mathematical (or analytical approximation) expressions that describe, in terms of voltages and currents, the photoelectrical operation of the device. As already covered in Chapter 2, the photodiode operation relies on the collection of free carriers introduced into its reverse-biased junction, either by thermal

excitation (dark current) or photo-generation (photocurrent). The behavior of the

photodiode under these conditions is best analyzed using its equivalent circuit shown

schematically in Fig. 3.1 below.

Figure 3.1: Photodiode equivalent circuit [35]. Components include: Iph = current source, ideal diode with Idark, Rsh = shunt resistance, CD = junction (depletion layer) capacitance, Rs = series resistance, RL = load resistance, and VB = reverse bias supply.

In this equivalent circuit, the junction may be regarded as an ideal reverse-biased

diode with reverse saturation current Idark mainly due to thermal generation. The

depletion region is represented by its resistance Rsh (typically 107-1012Ω, depending on

temperature) and its “parallel plate” capacitance, CD (typically few tens of fFs, depending

on reverse bias voltage), both connected in parallel to the diode. The bulk semiconductor

resistance is represented as a series resistance Rs (typically few Ωs). These equivalent

circuit components can be either calculated from the technology data file for the CMOS



process used in the fabrication of the chips (0.5µm and/or 0.35µm CMOS technology), or

extracted from measurements; for example, Rsh is obtained from the slope of the dark current (I-V characteristic) measurements of the photodiode. The dark current, similar in nature to the reverse saturation current, is mainly caused by thermal generation and is given by [35, 38, 83, 84]:

$I_{dark} = I_s\left(e^{V_D/v_T} - 1\right)$   (3.1)

The "saturated" dark current, $I_s$, is given as [38]:

$I_s = qA_D n_i^2\left(\dfrac{D_p}{N_d L_p} + \dfrac{D_n}{N_a L_n}\right)$   (3.2)

The voltage $V_D$ is the voltage across the diode, $v_T = kT/q$ is the thermal potential (~25 mV at room temperature), $A_D$ is the diode area, $D_{n,p}$ and $L_{n,p}$ are the respective diffusivities and diffusion lengths for electrons and holes, $N_a$ and $N_d$ are the acceptor (p-region) and donor (n-region) doping densities respectively, and $n_i$ is the intrinsic carrier concentration.

Figure 3.2: Typical photodiode characteristics (region III); the dashed curve is for the photodiode under dark conditions, whereas the solid curve is for the photodiode under illumination.



This photodiode dark current is shown by the dashed curve in the typical photodiode

characteristics in Fig. 3.2 above.

The photocurrent Iph is represented by the voltage-controlled current source

(VCCS) in parallel with the diode in the equivalent circuit model [35], as shown in Fig. 3.1 above. For the n+/p-substrate CMOS-compatible photodiode (refer to Fig. 2.11), the photo-generated current is given by [35, 84]:

$I_{ph} = qA_D L_{io}\,\dfrac{\lambda}{hc}\,(1-R)\left[1 - \dfrac{e^{-\alpha(\lambda)W}}{1 + \alpha(\lambda)L_n}\right]$   (3.3)

This is identical to Equation 2.6 except that Equation 3.3 includes the optical reflection effect due to the passivation layers (such as SiO2), which is represented in terms of the reflectivity coefficient, R, defined as the ratio of the reflected light intensity to the incident light intensity (R ~ 0.35 for Si). All other parameters in Equation 3.3 were introduced in Chapter 2 (Section 2.3.2.1).

The total photodiode current under illumination, shown in Fig. 3.2 above, is given by $I_D = I_{dark} - I_{ph}$. Note that this equation is quite general and applicable to all regions of operation of the diode. The region of interest is region III, where the diode is reverse-biased and acts in photoconductive mode as a photodiode. In this case $V_D$ is negative, and hence the dark current magnitude is almost constant ($|I_{dark}| \approx I_s$), especially at higher reverse bias; therefore the total photodiode current can be written as $I_D = -(I_{dark} + I_{ph})$, i.e., a diode current of magnitude $I_{dark} + I_{ph}$ flowing in the direction opposite to the forward current, as we will see in later chapters.

For most cases, the high shunt resistance, Rsh, and the low series resistance, Rs,

can both be neglected. The junction capacitance, CD, is the depletion capacitance of a reverse-biased n/p junction diode and is given by [80, 81, 85]:

$C_D = \dfrac{C_{D0}}{\left(1 + V_D/\phi_i\right)^{M}}$   (3.4)

where $C_{D0}$ is the zero-bias junction (depletion) capacitance, $\phi_i = \dfrac{kT}{q}\ln\!\left(\dfrac{N_a N_d}{n_i^2}\right)$ is the built-in potential, and M is the junction grading coefficient (1/2 for abrupt junctions and


1/3 for linearly graded junctions). Assuming an abrupt junction (n+/p-substrate photodiodes), the reverse-biased (VD negative) junction capacitance can be written as [35, 38]:

$C_D = A_D\sqrt{\dfrac{q\,\varepsilon_{si} N_a N_d}{2(N_a + N_d)(\phi_i + V_D)}} \approx A_D\sqrt{\dfrac{q\,\varepsilon_{si} N_a}{2(\phi_i + V_D)}}, \quad \text{since } N_d \gg N_a$   (3.5)

where $\varepsilon_{si}$ is the silicon permittivity. Now, using the technology data files associated with the CMOS process to be used for fabrication (0.5µm or 0.35µm) and/or measured data, we can hand-calculate these equivalent-circuit parameters and invoke them in the SPICE net-list to carry out the simulations.
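As a rough illustration of such a hand calculation (the doping, wavelength, absorption, and intensity values below are assumptions for illustration, not the process data used in this thesis), Equations 3.3 and 3.5 can be evaluated as follows:

```python
import math

# Physical constants
q = 1.602e-19               # C
h = 6.626e-34               # J*s
c = 3.0e8                   # m/s
eps_si = 11.7 * 8.854e-12   # F/m

# Assumed example values (not the thesis process data)
A_D   = (30e-6) ** 2   # 30 um x 30 um photodiode area, m^2
L_io  = 1.0            # incident light intensity, W/m^2
lam   = 555e-9         # wavelength, m
R     = 0.35           # reflectivity of Si (value quoted in the text)
alpha = 7e5            # absorption coefficient at 555 nm, 1/m (assumed)
W     = 1e-6           # depletion width, m (assumed)
L_n   = 10e-6          # electron diffusion length, m (assumed)
N_a   = 1e21           # p-substrate doping, 1/m^3 (assumed)
phi_i = 0.8            # built-in potential, V (assumed)
V_D   = 2.0            # reverse bias magnitude used in Eq. 3.5, V (assumed)

# Equation 3.3: photo-generated current
I_ph = q * A_D * L_io * (lam / (h * c)) * (1 - R) * (
    1 - math.exp(-alpha * W) / (1 + alpha * L_n)
)

# Equation 3.5: abrupt-junction depletion capacitance (N_d >> N_a approximation)
C_D = A_D * math.sqrt(q * eps_si * N_a / (2 * (phi_i + V_D)))

print(f"I_ph ~ {I_ph:.3e} A,  C_D ~ {C_D:.3e} F")
```

With these assumed values the sketch gives a photocurrent of a fraction of a nanoampere and a junction capacitance of a few tens of femtofarads, consistent with the typical magnitudes mentioned earlier in this section.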

Behavioral SPICE models

Instead of hand-calculating the equivalent circuit components, a more robust and accurate method is to utilize SPICE models automatically. This method is called "behavioral" SPICE modeling, and is accomplished by using the above equations (Equations 3.1-3.5) as the transfer function of a voltage (E) or current (G) controlled source. Subcircuits are used to encapsulate these functions. If we split the function definition, we can create a hierarchy; for example, the function (subcircuit) for the total photodiode current under illumination, $I_D$, can be split into two separate sub-functions (sub-subcircuits) for $I_{dark}$ and $I_{ph}$ respectively.

Figure 3.3: Compact CMOS compatible photodiode model (subcircuit) suitable for pixel level and imager level SPICE simulations.

In this context, we used a behavioral SPICE model with a voltage controlled

current source (VCCS) function to represent the photo-generated current as shown by the

photodiode subcircuit in Fig. 3.3 above. To include the effect of diode area and

peripheries on its characteristic, a geometrical diode model (level-3 SPICE model [80,



81]) was used in all “non-photo” equivalent circuit components (Is, Rs, Rsh, and Cj).

Therefore, a more realistic modeling of the photodiode operation was achieved, and

higher accuracy simulations were accomplished. Moreover, the photodiode equivalent

circuit model was reduced to a compact subcircuit comprised of a reverse-biased diode

and current source as shown in Fig. 3.3 above.

With this model, the light intensity of the illumination, Lio, can be passed into the subcircuit as a voltage whose value is numerically equal to Lio, which is then used to calculate the photocurrent internally using Equation 3.3. Other parameters (constants) can also be passed into this subcircuit, creating a parameterized cell.

Since most of the experimental characterizations used monochromatic light

sources in their setups, there is no need to include absorption coefficient spectrum, α(λ),

in these models and therefore all simulations were performed assuming monochromatic

light sources. This can be accomplished by choosing a specific operational wavelength, λ

(typically around 555nm), and extracting the corresponding α and/or η from the

measurements.

Nevertheless, devising models that include the effect of the incident light wavelength is not impossible; it only requires good curve-fitting functions for the absorption coefficient spectrum, α(λ) (refer to Fig. 2.8), and/or the spectral response, η(λ), measurements (refer to Fig. 2.13). These fitting functions are polynomials whose coefficients can be extracted from measurements using MS-EXCEL software. An example of such a polynomial fitting function was reported in [84] for the absorption coefficient spectrum as:

$\log_{10}(\alpha(\lambda)) = 13.2131 - 36.7985\,\lambda + 48.1893\,\lambda^{2} - 22.7562\,\lambda^{3}$   (3.6)

The model can be implemented using either a current-controlled current source (CCCS) or a voltage-controlled current source (VCCS) with this polynomial function as the behavioral model, and with the wavelength, λ, as its control element, which can easily be passed into the photocurrent subcircuit.
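A short sketch evaluating the polynomial of Equation 3.6. The units are not restated in the text, so the interpretation below (λ in micrometres, α in cm⁻¹, as is common for such fits) is an assumption that should be checked against [84]:

```python
# Evaluate the polynomial fit of Equation 3.6 for the absorption coefficient.
# Assumption: lambda is in micrometres and alpha in cm^-1 (units not restated in the text).

def alpha_of_lambda(lam_um: float) -> float:
    log10_alpha = (13.2131
                   - 36.7985 * lam_um
                   + 48.1893 * lam_um ** 2
                   - 22.7562 * lam_um ** 3)
    return 10.0 ** log10_alpha

for lam in (0.45, 0.555, 0.65):
    print(f"lambda = {lam:.3f} um  ->  alpha ~ {alpha_of_lambda(lam):.3e}")
```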

Another HSPICE behavioral model uses Piecewise Linear Functions (PWL)

based on actual device measurements. Although the device characteristic is described by discrete data points in the form of a look-up table and represented as the transfer function of the controlled source that models the photocurrent, HSPICE [80, 81] automatically smooths


the corners to ensure derivative-continuity and, as a result, achieves better numerical

convergence. Simulation results using this model have the best quantitative agreement

with measurements compared with other models. However, these models, as already

mentioned, rely on actual measurements, which are not always readily available.

Therefore, these models can be used for post-fabrication research and development

analyses and/or system level simulations. Moreover, they are not useful for dynamic

simulations, where the effect of capacitance has to be included.

Photocurrent (Radiation) effect models

Here, we consider the use of radiation effect models provided in META-software

HSPICE [80, 81] as a potential indicator of photodiode performance. These are designed

to model the effect of high-energy radiation (such as ionizing radiation), where η > 1, on semiconductor device performance. This is strictly beyond the scope of this thesis,

which is intended to study the operation of photodiode pixels under visible light energies.

Moreover, these models are devised to model the permanent degradation caused by total

ionizing dose, and transient radiation effects on semiconductor circuits, hence they model

the effect of radiation on the whole pixel circuit and not only the photosensitive area, AD,

of the photodiode, which is what interests us. Finally, these models are based on analytical solutions for the photocurrent that rest on specific assumptions that are not always valid, such as a square radiation pulse and a fixed voltage across the p/n junction.

Despite all these obstacles, these models can be implemented if their controlling

parameters and limitations are well understood. In particular, there are two controlling

parameters of interest here: RFAC (dimensionless) is a multiplier that converts (scales) the radiation dose to a useful photocurrent that meets our requirements; the second is SRC, which selects the radiation source name for each active device. By limiting the

radiation source to the photodiode and zeroing radiation sources for other non-

photosensitive pixel elements, such as NMOS transistors, we can overcome a big

obstacle. Results showed good qualitative and quantitative agreement with other models

over a fairly large photocurrent range, as we will see later in this chapter (please see Fig.

3.20).


3.2.2 Post-layout modeling and simulation

These models are based on the actual mask layout of the pixel circuits to be used for chip fabrication. They include the effect of parasitic and interconnect capacitances between the various layers (metal-metal, metal-poly, metal-substrate, poly-substrate, etc.) available in the CMOS process used to lay out the pixel circuit. These models are very important for dynamic simulations, such as transient and/or AC simulations, where the parasitic capacitances play a vital role in determining the temporal characteristics of these pixels. These models are sensitive, and can include capacitances as low as a few tens of attofarads (10⁻¹⁸ F).

These models are devised using a procedure that starts with an automatic extraction of the equivalent circuit components (net-list) directly from the pixel layout. The

extraction is a standard step of the Layout Versus Schematic (LVS) verification

procedure, usually performed after design rule checks (DRC), to ensure that the net-list

created by the schematic and that of the extracted layout match. If they do not, the errors

should be corrected; any mismatch of instances or nets is fatal [86]. Then, this extraction

is usually followed by post-layout simulation to ensure that most of the parasitics are as

expected and any unaccounted parasitics have not significantly affected the design's

performance [86].

Figure 3.4: Left: the layout of 3-mode active pixel sensor (3M-APS) of the ALO-chip that was fabricated using 0.35µm CMOS technology. Right: the extracted equivalent circuit (net-list) superimposed on the layout.


The layout and the extracted net-list, which includes the effect of parasitic capacitances, are shown in Fig. 3.4 above. The right side of Fig. 3.4 shows the extracted equivalent net-list (shown as tiny spots) superimposed on the layout, depicting the pixel circuit elements and the locations of the various parasitic capacitances to assist the designer in reducing these parasitic capacitances in future designs.

Note that in the extracted view above, the CAD software did not recognize the

n+/p-substrate photodiode (shown as blank), because this “parasitic” photodiode is not

considered as part of the standard CMOS process. Therefore further arrangements are

required to include the photodiode and ensure reliable modeling of the pixel operation.

This can be achieved by using two levels of symbolization (required by post-layout

simulation) instead of one symbol, as for a circuit with only native CMOS process elements.

The first level involves the symbolization of the extracted equivalent circuit (refer to Fig.

3.4 above) as shown in Fig. 3.5 below.

Figure 3.5: Level-I symbolization that includes all 3M-APS circuit elements that are native in standard CMOS processes (refer to the text above).

The next step is to invoke the photodiode equivalent circuit (subcircuit) into this circuit symbol to achieve the second level of symbolization, as shown in Fig. 3.6 below. In this case, we prefer not to include the current source that represents the photocurrent within this symbol, to facilitate DC analysis and/or parametric investigations that use the photocurrent as an input to the pixel. This arrangement is suitable for pixel and/or one-column structure simulations. For imager array simulations, this current source should be included in this symbol or in a higher (3rd) level of symbolization in order to ensure compactness and reduce confusion.


Figure 3.6: Level-II symbolization, which includes, in addition to the CMOS-native circuit elements of the 3M-APS, all the photodiode equivalent circuit components except the current source that represents the photo-generated current.

Figure 3.7: Final simulation test-bench that includes all elements of equivalent circuit of the CMOS compatible photodiode and the pixel circuit.

To conclude, the final simulation test-bench to be used for post-layout simulations is shown in Fig. 3.7, where all equivalent circuit elements of the photodiode are included, along with the other non-photosensitive pixel elements. It is worth noting that this model is more appropriate for the Cadence® [82] environment, whereas equivalent circuit models are usually used with the stand-alone HSPICE circuit simulator.


3.3 Modeling of Logarithmic Active Pixel Sensors

In this section, we present a detailed analysis of the logarithmic active pixel sensors (Log-APS) briefly introduced in Chapter 2. Based on an equivalent circuit model for CMOS-compatible photodiodes, HSPICE simulations have been used to characterize different configurations of the pixel circuit under various conditions of light intensity and switching speed. These investigations are supported by experimental results obtained from a chip fabricated with a standard 0.5 µm CMOS technology. It is concluded that higher-quality CMOS imagers based on logarithmic APS can be achieved with careful design of the Log-APS pixel that considers the issues discussed here.

Logarithmic active pixel sensors (Log-APS), shown in Fig. 3.8, have been the basic photocircuit of many CMOS imagers [11, 54] because of their smaller size, higher fill-factor (no reset line), and larger optical dynamic range. Their "logarithmic" compression arises from the sub-threshold operation of their MOS diode(s), i.e. diode-connected MOSFET(s), which continuously perform logarithmic photocurrent-to-voltage conversion. This also facilitates true random access to the imaging array, owing to the absence of an integration time. Accordingly, these sensors have gained potential for use in applications where continuous pixel operation is an advantage, such as motion analysis.

Figure 3.8: Typical circuit of Log-APS with one NMOS diode (M1), which is connected to VDD supply voltage. The photodiode is connected to the column bus via a source-follower buffer (M2) and the row select switch (M3). All MOSFETs bodies are connected to the substrate.



Despite their advantages, logarithmic photocircuits suffer from serious

drawbacks, such as slow response at low light levels, and strong temperature dependence.

Moreover, these photocircuits suffer from extreme compression due to their logarithmic response, which can limit their ability to detect small changes in light intensity, especially at low rates of change of light intensity (e.g. slow motion). These pixels also have a higher susceptibility to noise due to the low swing of their output signal (typically <200 mV), and a higher FPN due to the exponential dependence of the photocurrent on Vth, which is difficult to compensate in the logarithmic mode [15, 38, 67, 87, 93]. Therefore, these photocircuits

have to be studied and analyzed carefully under different conditions of light intensity

(levels and speed of change) in order to optimize their performance for edge detection

and/or motion detection.

The continuous photocurrent-to-voltage conversion in the Log-APS is achieved through a series resistor, which must be very large because of the low photocurrents involved (fA to nA range), and which can only be realized by a MOSFET in an "active" resistor configuration, like the diode-connected NMOS resistor used here (see Fig. 3.8). The consequence, however, is that this MOSFET operates in weak inversion and that the pixel photoresponse has a logarithmic behavior (see Fig. 3.9 below).

Figure 3.9: (Left) Typical measured response of the logarithmic active pixel sensors (Log-APS). The optical dynamic range is about 6 dB with an output voltage swing of only 0.15 volts [55]. (Right) Diode-connected MOSFET (MOS diode) active-resistor configuration.

[Labels from the right panel: V_G = V_DD, V_D = V_DD, V_S = V_DD - ΔV for the diode-connected NMOS (M1).]


This can be explained by the fact that the source of the diode-connected NMOS, shown on the right of Fig. 3.9 above, is connected to the photodiode in the Log-APS, which exhibits a high dynamic resistance because of the low currents involved in a reverse-biased junction. Therefore, most of the supply voltage $V_{DD}$ drops across the photodiode, and only a small portion $\Delta V = V_{DS} \le V_{th}$ drops across the NMOS. Moreover, because of the diode connection, we have $V_G = V_D$, and hence $V_{GS} = V_{DS}$. Consequently, the NMOS transistor is always working in the "saturated subthreshold" regime, since both the saturation condition $V_{DS} > V_{GS} - V_{th}$ and the subthreshold condition $V_{GS} \le V_{th}$ are always maintained. Therefore, it is necessary to study the operation of MOSFET transistors in the weak inversion (subthreshold) regime in order to understand their role in limiting the performance of logarithmic pixel (Log-APS) operation.

3.3.1 MOSFET in Subthreshold

When the gate voltage, $V_G$, is below the threshold voltage, $V_{th}$, and the semiconductor surface is in weak inversion, the corresponding drain current is called the subthreshold current. This current is dominated by the diffusion of minority carriers and can be written as [23, 35, 75, 89, 90, 91, 93]:

$I_{DS} = \dfrac{W}{L}\,I_o\,e^{(V_{GS}-V_{th})/(n v_T)}\;e^{(1/n - 1)\,V_{SB}/v_T}\left(1 - e^{-V_{DS}/v_T}\right)$   (3.7)

with

$I_o = \mu_o v_T^{2}\sqrt{\dfrac{q\,\varepsilon_{si} N_{ch}}{2\psi_s}}$   (3.8)

where n (typically 1-2) is a process parameter (the subthreshold slope factor) related to the gate efficiency [75], Nch is the doping concentration in the channel, µo is the mobility, and ψs is the surface potential. VSB is the source-bulk voltage (non-zero, since the bulk node is not connected to the source node). L and W are the channel length and width respectively, as shown in Fig. 3.10 below. Now, for VDS > 3vT (3kT/q ~ 75 mV at room temperature), the last term in Equation (3.7) can be approximated by unity. Accordingly, the exponential terms dominate, and the drain-source current can be approximated as:


$I_{DS} = \dfrac{W}{L}\,I_o\,e^{(V_{GS}-V_{th})/(n v_T)}\;e^{(1/n - 1)\,V_{SB}/v_T}$   (3.9)

Figure 3.10: Schematic diagram of the MOSFET structure.

Now, this equation can be solved for VGS as:

$V_{GS} = V_{th} + n v_T \ln\!\left(\dfrac{I_{DS}}{(W/L)\,I_o}\right) + (n-1)\,V_{SB}$   (3.10)

Having $I_{ph} + I_{dark} = I_{DS}$ at steady state, with $V_{out} = V_S = V_{SB} = V_{DD} - V_{GS}$ and $V_G = V_D = V_{DD}$, Equation (3.10) can be rewritten as:

$V_{out} = V_{DD} - V_{th} - n v_T \ln\!\left(\dfrac{I_{ph} + I_{dark}}{(W/L)\,I_o}\right) - (n-1)\,V_{out}$   (3.11)

Solving for $V_{out}$, we end up with:

$V_{out} = \dfrac{V_{DD} - V_{th}}{n} - v_T \ln\!\left(\dfrac{I_{ph} + I_{dark}}{(W/L)\,I_o}\right)$   (3.12)

For most cases of interest $I_{ph} \gg I_{dark}$. Therefore $V_{out}$ can be written as:

$V_{out} = \dfrac{V_{DD} - V_{th}}{n} - v_T \ln\!\left(\dfrac{I_{ph}}{\hat{I}_o}\right)$   (3.13)

where $\hat{I}_o = I_o (W/L)$. Thus, as the illumination intensity (and hence $I_{ph}$) increases linearly, the output voltage decreases logarithmically, as shown in Fig. 3.9 above.
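A quick numerical sketch of Equation 3.13; all parameter values below are assumptions for illustration, not the fabricated-pixel values:

```python
import numpy as np

# Assumed example parameters (illustrative only)
V_DD    = 5.0        # supply voltage, V
V_th    = 0.8        # NMOS threshold voltage, V (assumed)
n       = 1.5        # subthreshold slope factor (assumed)
v_T     = 0.0259     # thermal potential at room temperature, V
I_o_hat = 1e-7       # I_o * (W/L), A (assumed)

# Sweep the photocurrent over several decades (fA to nA range)
I_ph = np.logspace(-15, -9, 7)
V_out = (V_DD - V_th) / n - v_T * np.log(I_ph / I_o_hat)

for i, v in zip(I_ph, V_out):
    print(f"I_ph = {i:.0e} A  ->  V_out = {v:.3f} V")
# Each decade of photocurrent changes V_out by only v_T*ln(10) ~ 60 mV,
# illustrating the logarithmic compression discussed above.
```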


3.3.2 Logarithmic Pixels Configurations

There are several other configurations of Log-APS [54, 55, 67, 75, 87, 88]; however, they can all be classified into two main categories, either conventional or inverted. The difference between these two categories is in their output responses, as shown in Fig. 3.11, and in the respective locations of their MOS diode(s) and the photodiode, as shown in Fig. 3.12 and Fig. 3.13. In both configurations, the photosensor can be either a photodiode or a phototransistor, and can be implemented using one, two, or three (cascaded) MOS diodes as shown. They can be configured with current readout [92] or with voltage readout, as shown in Fig. 3.12 and Fig. 3.13 below.

Figure 3.11: Comparison between the responses (at N1 in Fig. 3.1) of the conventional and inverted Log-APS using HSPICE simulation with 0.5µm CMOS technology data.

The two kinds have, qualitatively, a mirror-image response, with the conventional having a descending response to higher light intensity and the inverted having an ascending one. Also, both kinds exhibit the same amount of logarithmic compression, i.e. they have the same electrical sensitivity in mV/[W/m2], as shown in Fig. 3.11. However, the conventional pixel (hereafter called the logarithmic APS) performs logarithmic photocurrent-to-voltage

[Plot of Fig. 3.11: output voltage (V) versus light intensity (10⁻⁴ to 10² W/m²) for the conventional and inverted pixel responses.]


conversion with an output voltage descending with light intensity, whereas the inverted

pixel performs the conversion with an output voltage that is ascending with the light

intensity. So, depending on the application, one can decide which of the two configurations to use.

Figure 3.12: Two typical examples of "conventional" logarithmic active pixel sensor configurations: (a) photodiode logarithmic APS, with one NMOS diode, and (b) phototransistor logarithmic APS, with two series PMOS diodes.

Figure 3.13: An example of inverted logarithmic pixels, with two MOS diodes to increase the output swing of the pixel (hence reducing the degree of compression).


3.4 Modeling of Linear (integrating) Active Pixel Sensors

The most commonly used mode of APS operation in image sensors is direct (linear) integration, as shown in Fig. 3.14 below, where the photocurrent is directly integrated on the diode capacitance (ignoring other parasitic capacitances). The photodiode is reset to the reverse bias voltage Vreset via an NMOS transistor (shown as a switch in Fig. 3.14); hence the diode capacitance is pre-charged to this initial voltage (Vreset). Then the diode current (Iph + Idark) discharges CD for tint seconds, which is called the integration time (exposure time). At the end of this integration time, the output voltage Vout is read out. During reset, both the NMOS transistor gate and drain are high at VDD; therefore this NMOS can only deliver VDD - Vth before it switches off (subthreshold), because VGS < Vth. This is the so-called 'soft reset' condition (if VG > VDD, then the diode can be reset to VDD, the so-called 'hard reset' condition). Since the photodiode can only be reset to a voltage Vreset = VDD - Vth, this will limit the dynamic range of the sensor [23].

Figure 3.14: The operation of direct integration (linear) mode active pixel sensors.

Now, ignoring the dark current, and because the photodiode is isolated (as shown in Fig. 3.14 above), the photocurrent, Iph, must be equal and opposite to the capacitor current; hence:

$C_D(V)\,\dfrac{dV}{dt} = -I_{ph}$   (3.14)

Referring to Equation 3.5, the capacitance, CD, for n+/p-substrate photodiodes effectively has the form:

$C_D = A_D\sqrt{\dfrac{q\,\varepsilon_{si} N_a}{2(\phi_i + V)}}$   (3.15)

Combining these two equations, we end up with the following integral equation:

$A_D\sqrt{\dfrac{q\,\varepsilon_{si} N_a}{2}}\int_{V_{reset}}^{V_D(t)}\dfrac{dV}{\sqrt{\phi_i + V}} = -\int_{0}^{t} I_{ph}\,dt$   (3.16)

Integrating and substituting, the photodiode voltage, VD, as a function of time can be

written as:

$$V_D(t) = \left[\sqrt{V_{reset} + \phi_i} \;-\; \frac{I_{ph}\,t}{A_D\,\sqrt{2\,q\,\varepsilon_{si}\,N_a}}\right]^{2} - \phi_i \qquad (3.17)$$

Therefore, the diode voltage bounces between two extremes: the first extreme is at t = 0, where VD(0) = Vreset = VDD − Vth, and the other extreme is located at t = tint, where the diode voltage is VD(tint) = [√(Vreset + φi) − Iph·tint/(AD√(2qεsiNa))]² − φi, as illustrated graphically in Fig. 3.15 below.

Figure 3.15: Typical transient response of linear (integrating) mode of operation of active pixel sensors. The graph shows fair linearity within the range of video frame rate or higher.


Note that the apparent dependency on the photodiode effective area, AD, is not real, since this factor will be canceled by the area factor in Iph (recalling Equation 3.3, where Iph ∝ AD·Lio, with Lio the light intensity). For the video frame rate (30 f/sec), the


integration time, tint, is about 33.3 msec. In this range, the photodiode voltage VD(t) shows a fairly linear response, as shown above.
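As a quick numerical illustration of Equation 3.17, the short Python sketch below evaluates the photodiode discharge over one video-rate integration period. The doping, built-in potential, threshold voltage, and photocurrent values used here are illustrative assumptions only, not measured CYC-chip parameters.

```python
import numpy as np

# Illustrative evaluation of Equation 3.17 (assumed parameter values, not measured ones).
q       = 1.602e-19          # electron charge (C)
eps_si  = 1.04e-10           # permittivity of silicon (F/m)
N_a     = 1e21               # substrate acceptor doping (1/m^3), assumed
phi_i   = 0.7                # built-in junction potential (V), assumed
V_reset = 5.0 - 0.8          # soft-reset level V_DD - V_th (V), assuming V_th = 0.8 V
A_D     = (30e-6) ** 2       # 30 um x 30 um photodiode area (m^2)
I_ph    = 500e-15            # photocurrent (A), assumed mid-illumination value
t_int   = 33.3e-3            # video-rate integration time (s)

t   = np.linspace(0.0, t_int, 200)
V_D = (np.sqrt(V_reset + phi_i)
       - I_ph * t / (A_D * np.sqrt(2 * q * eps_si * N_a))) ** 2 - phi_i

print(f"V_D(0)     = {V_D[0]:.3f} V")    # equals the soft-reset level V_reset
print(f"V_D(t_int) = {V_D[-1]:.3f} V")   # diode voltage after one integration period
```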

3.5 Simulation Results

In this section, we present HSPICE [80, 81] simulations for a variety of CMOS Log-APS

configurations. These sensors are of particular interest for motion detection applications,

because of their continuous operation. Experimental results obtained from the prototype

imager chip fabricated with a standard 0.5µm digital CMOS process are used for verification where appropriate. This imager consists of an array of 64 x 64 pixels, with a pixel pitch of 30µm. Each pixel (as shown in Fig. 3.8) contains a photodiode and three transistors, which results in a fill-factor of about 60%. For a detailed description of the CYC-chip and the experimental setups, refer to Appendices A and B, respectively.

3.5.1 HSPICE model verification

In these simulations, we used the HSPICE circuit simulator from Meta-Software [80, 81].

For MOSFET transistors, we used Level-13 (and/or Level-39) SPICE models. The supply voltage, VDD, was 5V. For the photodiode, we used the 0.5µm CMOS 14TB N+-well/P-substrate HP-SPICE model parameters (Level-3) listed in Table 3.1 below:

Table 3.1: CMOS 14TB N+-well/P-substrate HP-SPICE Model (Level-3) parameters

Parameter                      | Area component       | Perimeter component
Junction Capacitance, CD       | 4.67 x 10^-4 F/m^2   | 3.2 x 10^-10 F/m
Dark Current, IS               | 1.23 x 10^-7 A/m^2   | 1.21 x 10^-13 A/m
Depletion Resistance, RSh      | 1.78 x 10^5 Ω·m^2    | 5.6 x 10^10 Ω·m

Note that for CD, we used the zero-voltage value as an approximation, since the photodiode voltage will not change by more than a few hundred mV. The total values


of the above parameters are all calculated assuming 30µm x 30µm photodiodes as

follows:

The area is AD = 30µm × 30µm, and the perimeter is PD = 4 × 30µm; therefore:

CD(total) = (4.67×10^-4)·AD + (3.2×10^-10)·PD ≈ 0.4587 pF

IS(total) = (1.23×10^-7)·AD + (1.21×10^-13)·PD ≈ 0.12522 fA

RSh(total) = (1.78×10^5)/AD + (5.6×10^10)/PD ≈ 6.6444×10^11 kΩ
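The same bookkeeping can be checked with a few lines of Python. The area and perimeter coefficients are taken directly from Table 3.1; the capacitance and dark-current contributions add directly, while the two resistance contributions are divided by area and perimeter before being summed, which reproduces the totals quoted above.

```python
# Total photodiode parameters for a 30 um x 30 um diode, using the per-area and
# per-perimeter components of Table 3.1.
A_D = (30e-6) ** 2      # diode area (m^2)
P_D = 4 * 30e-6         # diode perimeter (m)

C_total = 4.67e-4 * A_D + 3.2e-10 * P_D     # F   (area + perimeter capacitance)
I_total = 1.23e-7 * A_D + 1.21e-13 * P_D    # A   (area + perimeter dark current)
R_total = 1.78e5 / A_D + 5.6e10 / P_D       # Ohm (area and perimeter resistances summed)

print(f"C_D(total)  ~ {C_total * 1e12:.4f} pF")    # ~0.4587 pF
print(f"I_S(total)  ~ {I_total * 1e15:.5f} fA")    # ~0.12522 fA
print(f"R_Sh(total) ~ {R_total * 1e-3:.4e} kOhm")  # ~6.6444e+11 kOhm
```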

With the exception of the shunt resistance, RSh(total), all these parameters were invoked in the HSPICE model as diode parameters. The resistance RSh(total) was invoked as a circuit element. For now, we assume negligible bulk resistance RS, zero reflection (R = 0), a quantum efficiency of η = 1, and a 100% fill-factor. We also assume a monochromatic light source with a wavelength of λ = 600 nm. We made these assumptions here to simplify the calculations; for the simulations that are compared with measurements, a more careful selection was made to ensure an optimum comparison between the measurements and simulations. The incident light intensity was in the range between 10^-3 and 10^2 W/m2. These conditions and assumptions apply to all simulations in this chapter unless noted otherwise.

Equivalent circuit model (using parametric behavioral model with VCCS)

This HSPICE model is based on the so-called behavioral modeling offered by SPICE

simulators [80, 81], as described above. In this model, Equation 3.3 was invoked in the SPICE netlist of the pixel circuit as a behavioral current source. The results are quite promising, as shown in Fig. 3.16 below for the "conventional" Log-APS. We have

previously presented (in Fig. 3.11 above) a comparison between the two kinds of

logarithmic APS configurations using this model. Also shown below is a simulation of the inverted Log-APS, which shows a mirror-image similarity between the two responses. The

difference between them is that the response of Log-APS decreases with light intensity,

whereas the response of the inverted Log-APS increases with light intensity. The

apparent constant slope of the transfer characteristics (bottom graphs in Fig. 3.16, and

Fig. 3.17 below) for both kinds of logarithmic pixels is an artifact due to the logarithmic


scale of the graphs. In fact, they exhibit the same behavior (as anticipated) of

photoresponses (top graphs) at low light intensities (if we use linear scale). The

responses tended to “saturate”, since the logarithmic term in the output voltage equation

(e.g. Equation 3.13 for Log-APS) tends to zero at low light intensities.
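The general shape of these two photoresponses can also be reproduced with a very small numerical model. The Python sketch below assumes a single MOS-diode load operating in subthreshold, so that its gate-source drop varies as n·VT·ln(Iph/I0); the slope factor, scaling current, and photocurrent-per-intensity values are illustrative assumptions, not the fitted parameters behind Fig. 3.16 and Fig. 3.17 (or Equation 3.13).

```python
import numpy as np

# Minimal sketch of the conventional and inverted logarithmic photoresponses
# (assumed parameters, not the fitted values of Equation 3.13): one MOS diode in
# subthreshold, so the diode drop is ~ n*V_T*ln(I_ph/I_0).
n, V_T, V_DD = 1.5, 0.026, 5.0     # assumed slope factor, thermal voltage (V), supply (V)
I_0          = 1e-18               # assumed subthreshold scaling current (A)
resp         = 0.3e-12             # assumed photocurrent per unit intensity (A per W/m^2)

L    = np.logspace(-4, 2, 25)      # light intensity, 1e-4 .. 1e2 W/m^2
I_ph = resp * L                    # photocurrent grows linearly with intensity

V_conv = V_DD - n * V_T * np.log(I_ph / I_0)   # conventional Log-APS: output falls
V_inv  =        n * V_T * np.log(I_ph / I_0)   # inverted Log-APS: output rises

# Both responses share the same logarithmic slope magnitude, ~ n*V_T*ln(10) per decade.
print(f"logarithmic slope ~ {n * V_T * np.log(10) * 1e3:.0f} mV/decade")
print(f"conventional swing: {V_conv[0] - V_conv[-1]:.2f} V, "
      f"inverted swing: {V_inv[-1] - V_inv[0]:.2f} V")
```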

Figure 3.16: Simulated photoresponse of logarithmic APS with one MOS diode (top) and its I-V transfer characteristics (bottom) using HSPICE behavioral modeling. The simulated output was at node Vout in Fig. 3.8, before the source follower.



Figure 3.17: Simulated photoresponse of inverted logarithmic APS with one MOS diode (top) and its I-V transfer characteristics (bottom) using HSPICE behavioral modeling. The simulated output was at node Vout before the source follower



Behavioral model with VCVS using piecewise linear function look-up table

This HSPICE model is also based on the so-called behavioral modeling offered by SPICE

simulators [80, 81]. However, in this model we invoke a whole (measured) pixel response in our circuit SPICE netlist in the form of a PWL function instead of a parametric current source. Here we used a voltage-controlled voltage source (VCVS) to model the response (output voltage versus light intensity) directly.


Figure 3.18: The response of the "conventional" logarithmic APS with one MOS-diode, using HSPICE behavioral modeling (piecewise linear function (PWL) and VCVS), compared with the original measured data from [55]; the conditions therefore differ from those mentioned above.

The light intensity is input as the controlling voltage. The pixel response was represented by a one-dimensional look-up table, constructed from measurements. Although the device response is described by discrete data points, HSPICE automatically smoothes the corners to ensure derivative continuity and hence better convergence. However, the more data points, the better the results. Here we used an actual measurement to construct the look-up table. The results shown in Fig. 3.18 above are in excellent agreement with the measurements, as they should be.
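The behaviour of such a look-up-table model can be mimicked in a few lines of Python, as sketched below. The tabulated (intensity, output voltage) pairs are placeholders rather than the measured data of [55], and the interpolation is done on a logarithmic intensity axis purely for convenience; HSPICE interpolates piecewise-linearly between the tabulated points of the controlling voltage, with corner smoothing as noted above.

```python
import numpy as np

# Sketch of the look-up-table (PWL) behavioral model: the pixel response is stored
# as a table of (light intensity, output voltage) points, and intermediate inputs
# are interpolated. The points below are placeholders, not the measured data of [55].
table_L    = np.array([1e-6, 1e-4, 1e-2, 1e0, 1e2, 1e4])       # W/m^2
table_Vout = np.array([0.78, 0.80, 0.84, 0.88, 0.92, 0.95])    # V (placeholder values)

def pwl_response(intensity):
    """Piecewise-linear pixel response, interpolated on a log-intensity axis."""
    return np.interp(np.log10(intensity), np.log10(table_L), table_Vout)

print(f"Vout(0.5 W/m^2) ~ {pwl_response(0.5):.3f} V")
```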


Photocurrent (radiation) effect models

The results are in good qualitative and quantitative agreement with the other behavioral models over quite a large range of photocurrents. Figure 3.19 below shows the transfer characteristics simulated using this model, with the RFAC factor ranging from 5×10^-18 to 1×10^-20. All other conditions are as stated above.


Figure 3.19: I-V transfer characteristics of the logarithmic APS with one diode, using HSPICE radiation models (RFAC ranges from 5×10^-18 to 1×10^-20).

Discussion and conclusions

Comparisons of the equivalent circuit model (parametric behavioral model with VCCS) with measurements are shown in Fig. 3.21 and Fig. 3.25 in the next sections (to avoid repetition). These comparisons show that this model is in fairly good agreement with the measurements, confirming its validity for both static (photoresponse) and dynamic (optical modulation frequency response) analyses. For static (dc) analyses, the look-up-table behavioral model using a (measured) piecewise linear function with a VCVS (shown in Fig. 3.18 above) is the best among all the models presented here. However, this model has the disadvantage that it is static and does not incorporate the


photodiode capacitance and other parasitic capacitances that are essential for any reliable dynamic analyses, such as transient and/or AC response analyses.

Figure 3.20: The transfer characteristics of the "conventional" logarithmic APS (with one MOS-diode). A comparison between the HSPICE radiation model (RFAC = 1×10^-19), shown as solid squares, and the parametric behavioral model, shown as open triangles.

Shown above is a comparison of the transfer characteristics of the "conventional" logarithmic APS with one diode, between results using the HSPICE radiation model (RFAC = 1×10^-19) and those using the parametric behavioral model (open triangles). We selected RFAC = 1×10^-19 because it was the RFAC value that best made the results from the radiation model fit those from the parametric behavioral model. As shown, this model is

in good agreement with the behavioral model. The main advantage of the radiation model is that it can be more easily extended to imager array simulation. Its main drawback is that it is limited to square pulse inputs only, and therefore cannot be used with

simulations that require non-square pulse inputs. In conclusion, the equivalent circuit model (parametric behavioral model with VCCS) seems to be the model best suited to our analyses, as it is applicable to static and dynamic simulations and can be easily extended to both imager array simulation and post-layout modeling and


simulations. Therefore, all simulations presented hereafter were performed using this

model.

Section 3.5.2 below describes the electrical sensitivity analysis and results. In

section 3.5.3, we investigate the speed of response, where the optical modulation

frequency response was used to determine the switching speed of these sensors. Finally,

results are discussed and concluding remarks are made.

3.5.2 Electrical Sensitivity Analysis

A typical response of the Log-APS to light intensity is shown in Fig. 3.21. The

photocircuit exhibits a wide optical dynamic range; ~5 orders of magnitude change in

light intensity results in less than 250mV change in output voltage. This is in contrast to

linear active pixel sensors (Lin-APS) where the output voltage swing (~1.2 V) is typically

the limiting factor for electrical dynamic range. Simulated results using HSPICE are also

shown in the same figure. This shows a good agreement between the simulated and

measured results and therefore supports the argument about the suitability of the

parametric behavioral model.

Figure 3.21: Measured photoresponse of Log-APS to light intensity (at node Vout in Fig. 3.8). Simulated result obtained using HSPICE modeling is shown for comparison.



In these simulations, we used MOSFET model level 13 (which considers

subthreshold operation of MOSFET and its associated parasitic capacitances) [80, 81],

with parametric-behavioral equivalent circuit model for the CMOS-compatible

photodiode. The photodiode model parameters are selected from the standard HP 0.5µm CMOS technology data used in the fabrication of the chip (as already mentioned in Section 3.5.1 above), except for λ = 540 nm and a fill-factor of 60%. Moreover, to include

the body effect in our simulation, all bodies of MOSFET transistors are connected to the

substrate (not shown in the figures for the sake of simplicity) as is usually the case in

integrated circuits.

We have already presented a comparison between the photoresponses of the

“conventional” logarithmic APS (Log-APS) and the inverted logarithmic APS (inverted

Log-APS), where both kinds exhibit the same amount of logarithmic compression, i.e.

they have the same electrical sensitivity in mV/[W/m2], as shown in Fig. 3.11 above.

Figure 3.22: Comparison of photoresponses (at Vout in Fig. 3.8) of conventional (Log-APS) pixels with different number of MOS diodes using HSPICE simulation. All MOSFET transistors bodies were connected to the substrate.



Both configurations can be implemented using one, two, or three (cascaded) MOS

diodes. Increasing the number of MOS diodes will reduce the compression (thus improve

the electrical sensitivity), as illustrated in Fig. 3.22 and Fig. 3.23. Note that these graphs

are normalized to the maximum value to highlight the effect of number of cascaded MOS

diodes on the electrical sensitivity. This is not the case for the transfer characteristics

graphs, where absolute values are used as shown in Fig. 3.24 below.

Figure 3.23: Comparison of photoresponses (at Vout in Fig. 3.8) of inverted pixels (inverted Log-APS) with different number of MOS diodes using HSPICE simulation. All MOSFET transistors bodies were connected to the substrate.

These graphs show that the effect of the number of MOS diodes is more pronounced in the conventional Log-APS than in the inverted pixels. However, what is shown here are the percentages of enhancement of the logarithmic slope (electrical sensitivity) and not their absolute values. In fact, inverted pixels have, in general, a higher absolute electrical sensitivity (mV/decade) than conventional ones (for the same number of MOS diodes). The electrical sensitivity was ~127.63, 180.18, and 296.53 mV/decade for the inverted Log-APS with 1-MOS, 2-MOS, and 3-MOS diodes respectively, whereas the corresponding sensitivities were 68.53, 129.8, and 183.47 mV/decade for the conventional Log-APS.



This shows that the inverted pixels have better sensitivities than the conventional ones, but a lower percentage of enhancement with additional MOSFET transistors.


Figure 3.24: Comparison of transfer characteristics (at Vout in Fig. 3.8) for Log-APS (top) and for inverted Log-APS (bottom) with different number of MOS diodes using HSPICE simulation. All MOSFETs bodies were connected to the substrate.


This is attributed to the circuit configuration of the inverted Log-APS, which is more

affected by the body-effect than the conventional one if extra MOS diodes are added.

This in turn will lead to a larger resistance added to the circuit of the inverted pixel when

an extra MOS diode is added. Therefore, the inverted pixel has better potential to be used in continuous-operation applications that require higher electrical sensitivities, such as motion detection.
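For reference, sensitivity figures such as those quoted above are simply the slope of the simulated output voltage against the logarithm of the light intensity over the logarithmic region of the response. The sketch below shows one way to extract that slope; the data here are synthetic, generated with a known 130 mV/decade response, rather than the simulated pixel outputs.

```python
import numpy as np

def sensitivity_mv_per_decade(intensity, v_out):
    """Electrical sensitivity: slope of Vout versus log10(intensity), in mV/decade."""
    slope, _ = np.polyfit(np.log10(intensity), v_out, 1)
    return abs(slope) * 1e3

# Synthetic example with a known 130 mV/decade logarithmic response (placeholder data).
L = np.logspace(-3, 2, 30)                 # light intensity (W/m^2)
V = 4.2 - 0.130 * np.log10(L / L[0])       # output voltage (V)
print(f"{sensitivity_mv_per_decade(L, V):.1f} mV/decade")   # ~130.0
```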

3.5.3 Optical Modulation Frequency Response Analysis

In the operation of the Log-APS, the photodiode capacitance and other associated parasitic

capacitances are charged or discharged by means of the photocurrent. This can limit the speed of response, especially during a transition from low to high illumination, when the initial photocurrent can be as low as 10 fA [94].

One way to measure the speed of response of the Log-APS is to use the "optical modulation frequency response". In this method,

the electrical frequency response of the pixel is measured for ac optical inputs. The term

"optical modulation" is used here to distinguish it from the spectral response. By

calculating the 3-dB frequency from the measured frequency response, one can estimate

the range of speeds under which the Log-APS can perform.

This approach can be a very useful technique especially for motion detection

applications. However, to our knowledge, there has been little work done in studying the

pixel response speed using this method [95,96]. The optical modulation frequency

response was measured using a red laser diode (670 nm) modulated with a square wave to give a peak output of ~2 mW/cm2, over a frequency range of 200 Hz to 1 MHz. The detailed experimental setup for the optical modulation frequency response measurements is shown in Appendix D (Figure D.3). The test fixture in this case consists of the single-pixel test structure mounted on a PCB instead of the ALO-chip.

Figure 3.25 shows that the response of the conventional pixel (30µm pitch) has a 3-dB cut-off frequency of about 97.5 kHz. This corresponds to a maximum focal-plane speed

of ~ 6m/s. The corresponding speed of an imaged object depends on the optics of the

system. Simulated results obtained using HSPICE are also depicted in the same figure.


Comparison shows a good agreement between measured and simulated cut-off

frequencies.


Figure 3.25: Comparison between measured and simulated optical modulation frequency response for Log-APS (at Vout in Fig. 3.8).

The simulation results presented above showed that the electrical sensitivity (level

of logarithmic compression) of the conventional Log-APS could be improved (as shown

in Fig. 3.22) by increasing the number of the MOS-diodes in the pixel. However, this

improvement can only be achieved at the expense of the speed of the response as

illustrated in Fig. 3.26. The higher the electrical sensitivity, the lower the 3-dB frequency

(or maximum speed) of the optical modulation frequency response of Log-APS and vice-

versa. Thus, there is a tradeoff between pixel electrical sensitivity and frequency

response, arising because the same photocurrent must cause a larger voltage swing at the

gate of the source-follower. Another way to explain this behavior is by the increase in

the circuit time constant τ = Req·Ceq as the number of MOS-diodes increases. Since

adding MOS-diodes (active resistance) in cascade (series) will increase the equivalent


resistance Req at the sensing node, the time-constant τ will also increase (refer to Fig. 3.8

and Fig. 3.24). This time-constant controls the speed of charge/discharge mechanisms of

the sensing node capacitance (diode capacitance and other associated parasitic

capacitances), and hence the 3-dB-frequency of the optical modulation frequency

response.


Figure 3.26: Comparison of the simulated optical modulation frequency responses of the conventional Log-APS with one, two and three MOS diodes (at Vout in Fig. 3.8).

Another interesting result in this context (shown in Fig. 3.27 below) is the effect

of the ambient (dc) illumination on the speed of response. The same argument as above can be applied, since Req can be regarded as Vout/Iph (Vout/IDS). Therefore, as Iph increases because of the ambient illumination, the effective resistance Req will be reduced, and hence the RC time constant τ will be shortened. This implies an increase in the speed

of response (charge/discharge), and hence the 3-dB frequency. This is a very important

issue especially at low ambient illumination where the low photocurrents generated tend

to slow the charge/discharge mechanism of the sensing node capacitance. This indicates

that the speed of response to a moving object will be lower at lower illumination levels.
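The trend in Fig. 3.27 can be sketched directly from the Req·C argument. The snippet below treats the MOS diode as a small-signal resistance of roughly n·VT/Iph (a common subthreshold approximation, used here as an assumption) together with a fixed sensing-node capacitance; the capacitance and photocurrent values are illustrative only, so the printed frequencies should be read as trends rather than predictions of the measured 3-dB points.

```python
import numpy as np

# Sketch of the R_eq*C argument: treat the MOS diode as a small-signal resistance
# R_eq ~ n*V_T/I_ph (subthreshold assumption), so f_3dB = 1/(2*pi*R_eq*C_node)
# grows roughly linearly with the ambient photocurrent. Values are assumptions.
n, V_T = 1.5, 0.026                   # assumed slope factor and thermal voltage (V)
C_node = 0.5e-12                      # sensing-node capacitance (F), assumed ~0.5 pF
I_ph   = np.logspace(-14, -9, 6)      # ambient photocurrent, 10 fA .. 1 nA

R_eq  = n * V_T / I_ph
f_3dB = 1.0 / (2 * np.pi * R_eq * C_node)

for i, f in zip(I_ph, f_3dB):
    print(f"I_ph = {i:8.1e} A  ->  f_3dB ~ {f:10.3e} Hz")
```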



Figure 3.27: Simulated ambient illumination effect on 3-dB frequency for Log-APS (Conventional configuration, at node N2 in Fig. 3.1).

3.6 Summary and Discussion

This chapter was devoted to the characterization, modeling, and simulation of the

CMOS compatible photodiode. We presented detailed analyses on the CMOS active pixel

sensors (APS), especially in logarithmic mode. We devised several equivalent circuit and

behavioral models for the CMOS photodiode and different pixel configurations. We compared these models with measurements from test structures fabricated in 0.5µm and 0.35µm CMOS technologies. The results suggest that the best fit for DC analysis is the piecewise linear (PWL) model, which uses actual measurements.

However, we found that the VCCS photodiode model is the one best suited for our

investigation, because of its accuracy and applicability to AC and transient (dynamic)

analyses. This was followed by an analysis of logarithmic active pixel sensors (Log-

APS) and linear (integrating) active pixel sensors (Linear-APS).

CMOS Log-APS performance has been analyzed in terms of electrical sensitivity

and speed of response. Results showed a good agreement between simulation using

parametric behavioral models and measurements (Fig. 3.21, and Fig. 3.25).


These results suggest that the inverted Log-APS has superiority over the

conventional one, because it has a higher output swing (lower logarithmic compression)

for the same number of MOS-diodes. However, conventional pixels have higher voltage

levels (especially at low light intensities) compared to those of the inverted pixels (refer to

Fig. 3.22, Fig. 3.23). Therefore, they are expected to have higher output signal-to-noise-

ratio (S/N) in this important stage of signal processing than the inverted ones. Moreover,

the percentage of electrical sensitivity improvement when a MOS-diode was added is

more pronounced in the conventional structure (Fig. 3.22) than that in the inverted ones

(Fig. 3.23). This opens the door for further response enhancements.

However, these improvements have some drawbacks on the optical sensitivity

(because of the reduced fill-factor) and/or the speed of response, as predicted by HSPICE

simulation (see Fig. 3.26). Therefore there is a tradeoff between the electrical sensitivity

of the pixel and its speed of response. Both figures of merit are important for our on-chip

image processing using double sampling techniques. Accordingly, the choice of

conventional pixel with two MOS diodes would be a good compromise. In this case, we

sacrifice some speed and fill-factor for the sake of more electrical sensitivity.

Another important result is the effect of the ambient illumination on the speed of

response of the pixel, which shows that at low light intensity the speed of response (3dB-

frequency) will be reduced (refer to Fig. 3.27). Thus we should take this effect into

consideration when designing for low light intensity illumination conditions, especially in

applications where the speed of response is vital. It is concluded that more robust on-

chip image processing for CMOS imagers with Log-APS as photocircuits can be

achieved with careful designs and the consideration of all issues discussed here.


Chapter Four

Spatial Double Sampling (SDS)

4.1 Introduction

In recent years, CMOS image sensors have gained significant ground over charge-

coupled devices (CCD) [1-5, 7] in many applications, especially where integrated

functionalities are advantageous, such as in security, biometrics, and industrial

applications [8-11]. The main advantages of CMOS image sensors are their high level of

integration [1], random accessibility [2, 6], and low-voltage, low-power operation [1-3].

Accordingly, they offer system-on-chip capability, allowing on-chip image processing

with low production cost [1, 3, 7]. This has been of great importance in the development

of smart vision chips which, in addition to their main task of capturing images and

converting them into electrical signals, process these signals in real-time by including

parallel, analog signal processing circuitry on the same chip [2, 15, 16].

Most of the reported designs, however, include these on-chip image-processing

functionalities at the expense of silicon area. This can limit the maximum spatial

resolution achievable. Here, we overcome this limitation with a straightforward, yet

robust, VLSI implementation of the sampled edge detection method. Our technique

adapts the two sample-and-hold (S&H) circuits of the correlated double sampling (CDS)

block, usually used for fixed pattern noise (FPN) reduction, to perform a spatial double

sampling (SDS) on the captured image (i.e. sampled differentiation), hence detecting


visual edges. Because these circuits are usually an integral part of CMOS image sensors,

no additional area is required to include the proposed edge detection functionality on the

image sensor chip. It is very important to note that even though we call this circuit a

correlated double sampling (CDS) circuit throughout this chapter (to emphasize the fact

that no additional silicon space is needed), the operation performed using this circuit is

not necessarily correlated. In fact, all double sampling operations in this thesis are

uncorrelated. The imager array was implemented using active pixel sensor (APS)

technology with dual mode of pixel operation: a logarithmic (continuous) mode with

wider optical dynamic range and a linear (integrating) mode with higher image quality.

Real-time spatial edge detection was demonstrated in both modes of operation.

This technique can be extended to perform temporal differentiation (discussed in Chapter

5), providing a simple method for motion detection. The prototype chip was fabricated using a standard 0.5µm CMOS process, with an array of 64 x 64 pixels and a pixel size of 30

× 30 µm. The fill factor is ~ 60% and the system working voltage is 5V.

Section 4.2 describes the architecture and the operation of the proposed edge

detection system. Measurement results for spatial-edge detection in the linear mode of operation are discussed in Section 4.3. Logarithmic-mode results are presented and discussed in Section 4.4. Finally, the influences of motion on spatial-edge results are discussed. This

format allows us to explore the feasibility of spatial-edge detection in both modes of

operation, before attempting any analysis under motion influences.

4.2 System Architecture

The conventional spatial-edge detection process is illustrated in Fig. 4.1 below for one

row. The black-white (high-contrast sharp edges) stimulus image I (x, t) is captured by

pixels of the active row, where the white and black pixels correspond to the white and

black portions of the image respectively. The captured image is then sampled and held

until it is readout via a differential amplifier, which differentiates samples from adjacent

pixels resulting in an edge signal as shown in Fig. 4.1. The situation for our case is

slightly different; samples are taken from pixels that are in the same columns but in

adjacent rows. This provides rough detection of spatial-edges. Results indicate that the


proposed architecture is robust and suitable for many industrial applications, where real-

time integrated functionalities are advantageous.

Figure 4.1: The conventional concept of the spatial-edge detection process shown for one row of the imager array (after [97]).

The functional diagram of the proposed CMOS image sensor with focal plane

edge detection is shown in Fig. 4.2. The edge detection operation starts by activating one

row at a time using the row select transistor (Row). Output signals from the pixels in the

selected row are simultaneously transferred via each column bus to one of the CDS S&H

circuits (two per column). The same operation is repeated to the succeeding adjacent

row, which will be sampled and held in the second S&H circuit. These sampled signals are then read out and differenced by activating one column at a time [1]. The difference

between the two readout signals will result in the required spatial edge information. The

row and column address decoders that provide the respective row and column activation

(scanning) signals are shown in Fig. 4.2 (a). This edge detection operation is similar to

that of the normal CDS operation in a CMOS image sensor. The exception here is that the operation has been modified from differencing two sampled signals of the same pixel (to reduce fixed pattern noise) to differencing samples from vertically adjacent pixels to achieve the spatial differentiation (refer to Fig. 4.2 (b)). The key parameter here is the

timing of the control signals. With appropriate timing, we can combine both high-

resolution image capture and edge detection functionality on the same chip. Moreover,

with a slight modification in the sequence of events in the timing pattern of spatial

differentiation (SDS) operation from sample-reset-sample to sample-sample-reset, a



temporal edge (TDS) detection operation can be implemented, providing a simple method

for motion detection (refer to Chapter 5).
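At the array level, the SDS operation amounts to differencing each row of the captured frame with the row above it (samples from the same columns, in vertically adjacent rows). The following minimal Python sketch, which ignores S&H non-idealities and pixel offsets, illustrates the idea on a synthetic frame.

```python
import numpy as np

def sds_edges(frame):
    """Spatial double sampling: difference each row with the next row
    (samples from the same columns, vertically adjacent rows)."""
    return frame[1:, :] - frame[:-1, :]

# Synthetic 8 x 8 "frame" with a horizontal black/white boundary (placeholder data).
frame = np.zeros((8, 8))
frame[4:, :] = 1.0                  # bright lower half

edges = sds_edges(frame)
print(edges[:, 0])                  # nonzero only at the row where the edge lies
```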


Figure 4.2: Functional block diagram of the proposed edge detection system using the SDS algorithm (a). Row "n" (black) is sampled and held in the left S&H while row n+1 (white) is sampled and held in the right one. The outputs are then differenced using a differential amplifier to obtain the spatial-edge information. (b) Comparison between the timing patterns for the FPN-reduction operation (top) and the spatial-edge detection operation (bottom) of the correlated double sampling (CDS) circuit.


4.2.1 Correlated double sampling (CDS) circuit

The correlated double sampling (CDS) circuits are located per column and consist of two

sample-and-hold (S&H) circuits as shown in Fig. 4.3 [15, 16]. Referring to this Figure

and to Fig. 4.2, when a row 'n' is selected using the row-select signal (ROW) and the NMOS transistor (MSH1) is turned on by SH1, the output signal from each pixel in that row is sampled and stored (VSH1) on the corresponding column S&H capacitor C1. All pixels in

the selected row are then reset.

Figure 4.3: The correlated double sampling circuit [15, 16] used here for spatial-edge detection, shown along with the dual-mode active pixel sensor (APS). The mode of operation is determined by selecting the proper control state for the OR gate ("0" for Linear-APS mode, and "1" for Log-APS mode).



Similarly, when the successive row ‘n+1’ is selected, the pixels’ responses are sampled

and stored (VSH2), when SH2 is high, in S&H capacitors C2 (also located per column).

Again, all pixels in the selected row are then reset. Upon the activation of the readout column signals (COL.), the two stored signals are differenced using a differential

amplifier (not shown for clarity). This difference results in the spatial-edge detection

information of interest here. This applies equally to both modes of pixel operation,

except that in logarithmic-mode, there is no need for resetting the pixels after each

sampling. The reset signal is always kept high at VDD by choosing the appropriate control

state for the OR-gate (“1” for logarithmic and “0” for linear-mode).
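The difference between the two timing patterns can also be seen in a toy numerical model, sketched below: with the FPN-reduction timing both samples come from the same row, so that pixel's fixed offset cancels, whereas with the SDS timing the two sample-and-hold capacitors hold vertically adjacent rows, so the difference carries the spatial edge signal. The signal levels and offsets used here are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy column model: each pixel adds its own fixed offset (FPN) to whatever it outputs.
scene   = np.array([0.2, 0.2, 0.8, 0.8])   # ideal signal along one column, rows n..n+3 (V)
offsets = rng.normal(0.0, 0.02, size=4)    # per-pixel fixed-pattern offsets (V)
v_reset = 0.0                              # reset (reference) level (V)

sample = scene + offsets                   # what SH1/SH2 actually store for each row
reset  = v_reset + offsets                 # what a post-reset sample of the same row stores

# FPN-reduction timing (sample-reset-sample): both samples from the SAME row -> offset cancels.
cds_row2 = sample[2] - reset[2]            # equals scene[2]
# SDS timing: the two S&H capacitors hold ADJACENT rows -> spatial (edge) difference.
sds_row2 = sample[2] - sample[1]           # ~ scene[2] - scene[1], plus an offset mismatch

print(f"CDS (FPN reduction) output: {cds_row2:.3f} V")
print(f"SDS (spatial edge)  output: {sds_row2:.3f} V")
```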

4.3 Spatial-edge Detection using SDS in the Linear Mode

All measurements reported in this section were performed on the prototype CMOS imager (CYC-chip), a dual-mode APS imager fabricated using the Hewlett-Packard 0.5µm digital CMOS process (refer to Appendix A for more details). The experimental setup consists of the CMOS imager (CYC-chip) test fixture, power supplies to provide VDD and other biasing, a pattern generator, a light source, a Lux meter, an oscilloscope, an image-capture system using LabView, data-handling software (WaveStar), and other optical and electrical accessories. For the detailed experimental setup, please refer to Appendix B.

Before presenting any detailed analysis of spatial-edge detection for this mode, it is worthwhile to show some typical examples of spatial-edge detection using the spatial double sampling (SDS) technique. The maximum light intensity was limited to several hundred Lux, to ensure good illumination without saturating the pixels. The contrasts presented here are typical of ordinary objects, such as the human hand depicted in Fig. 4.4(a), which shows the hand image captured using the correlated double sampling (CDS) circuit in its normal operation as a fixed pattern noise (FPN) reduction circuit. The resulting contrast (related to the output signal swing) of this captured image is clearly depicted by the 3D plot in Fig. 4.4(b) below. The image in this figure has lower resolution (fewer than 64 columns) due to the elimination of several bad columns in the imager array. The detected edge image using SDS is depicted in Fig. 4.5(a), which shows strong tuning toward one direction (vertical), which is parallel to the rows of the image (since the imager


was physically rotated by 90º). The 3D plot of the edge signal (refer to Fig. 4.5(b)) clearly shows the level of this spatial-edge output.

Figure 4.4: Captured hand image (at 355 Lux) in the linear mode using the CDS circuit to enhance image quality (a). The 3D plot of the image clearly shows the image contrast (b). The frame rate is ~46 f/s. Images were captured using LabView software.


Figure 4.5: Spatial-edge detection of the hand captured in the linear mode (at 746 Lux) using the CDS circuit, but with the timing pattern for the SDS operation (a). 3D plot of the detected image edges (b). The frame rate is ~46 f/s. Images were captured using LabView software.



These results show that this method has sensitivity not only to very-high-contrast images (hand on a dark background), but also to low-contrast background features (labeled "Low" in Fig. 4.5(a)). For more precise results, horizontal and vertical bar stimuli were used with different bar widths and contrasts. A typical example of these results is shown in Fig. 4.6 and Fig. 4.7 for a captured vertical-bar image and its edge output, respectively.


Figure 4.6: Captured bar image (at 738 Lux and a frame rate of ~46 f/s) in the linear mode using the CDS circuit to enhance image quality; the image 3D plot (right) shows the contrast.

Figure 4.7: Edge detection of the bar captured in the linear mode (at 738 Lux and ~46 f/s) using the CDS circuit. The 3D plot (right) shows the level of the spatially detected edge.

These results confirm the 1D operation of this technique (limited to edges that are

parallel to the rows of the imager). Note that the other direction was omitted from the


investigation because of its insignificant edge output. This may limit the application of this technique to systems that use stripes or bars in their operation. Nevertheless, this is not a small market, and it includes many applications such as object counters, speedometers, bar-code scanners, 1D alignment and gauging, and pre-processing steps in many advanced 1D image analysis applications.

To highlight the differences between the edge detection operation and the ordinary operation of the CDS circuits as a noise reducer, we compared an image captured without the CDS circuits (Fig. 4.8) with an image captured with the CDS circuits (Fig. 4.9) and an edge image captured with the SDS operation (Fig. 4.10), as shown below. A 2D bar stimulus with 100% contrast was used and was illuminated with a 922 Lux light source. The frame-rate was kept constant at 46 f/s. These figures also show the respective horizontal and vertical waveforms that correspond to the rows (in the frame) and columns (in the rows) of the imager array (rotated by 90º). Note that we used a desktop computer equipped with a LabView-GaGe software oscilloscope modified to convert voltage levels to gray-scale levels as shown. In fact, all displayed images in this thesis were captured using the same method, with a gray scale that has ±100 levels instead of the ordinary 256 levels (refer to Appendix B for more detailed information).

For the results in Fig. 4.8, no double sampling (DS) operation was applied.

Therefore dc (offset) values still exist (around gray level 40) as shown. Note that

horizontal waveforms are for rows (in frame) superimposed on each other, whereas the

vertical waveforms are for columns (in rows) superimposed on each other, with larger

differences between horizontal signals compared with those between vertical signals.

This can be attributed to the non-uniformity of illumination distribution in the vertical

direction of the stimulus (rows of the imager here).

In Fig. 4.9, the double sampling (DS) circuit was used to reduce the fixed pattern noise (FPN) in the image by differencing the two sampled image output signals, VSH1 and VSH2; thus, most of the DC (offset) values vanish, as shown. The image shown here surprisingly appears to have more noise than that captured without the CDS circuits. This can be attributed to an artifact caused by the vanishing of the DC (offset) component, together with the fact that the LabView software was set to automatic display scaling, which magnified the effect of the residual noise.


Figure 4.8: A typical captured image in the linear mode of a 100%-contrast stimulus, along with the horizontal and vertical waveforms that correspond to the rows and columns of the imager array (rotated 90º). No double sampling (CDS) operation was applied here; therefore dc (offset) values still exist. Note: gray levels were used here instead of voltages (modified software oscilloscope). Frame rate was ~46 f/s, and illumination light intensity was ~922 Lux.


Figure 4.9: A typical image captured in the linear mode of a 100%-contrast stimulus using correlated double sampling (CDS) circuits. Also shown are the horizontal and vertical waveforms that correspond to the rows and columns of the imager array (rotated 90º). Because of the CDS operation applied here, most of the dc (offset) values vanish upon differencing. Note: gray levels were used here instead of voltages (modified software oscilloscope). Frame rate was ~46 f/s, and illumination light intensity was ~922 Lux.


Figure 4.10: A typical captured image in the linear mode of a 100%-contrast stimulus using correlated double sampling (CDS) circuits. Also shown is the spatially detected edge image, with horizontal and vertical waveforms that correspond to the rows and columns of the imager array (rotated 90º). Because of the spatial-edge operation applied here, all dc (offset) values vanish upon differencing. The strong 1D tuning toward one direction (parallel to image rows) is clear here. Note: gray levels were used here instead of voltages (modified software oscilloscope). Frame rate was ~46 f/s, and illumination light intensity was ~922 Lux.


Figure 4.10 shows the image captured using the CDS circuit, along with its spatially

detected edges with their horizontal and vertical waveforms. Because of the spatial-edge

(SDS) operation applied here, all DC (offset) values vanished upon differencing. The

strong 1D tuning toward one direction is very clear here; only image edges that are

parallel to image rows can be detected. This can be attributed to the method of the

spatial-edge detection applied here, which is based on differencing outputs of pixels that

are in the same columns but in different rows (successive rows) in order to maximize the

spatial-edge output signal (∆V). Even with this limitation, this technique still has the

potential to be used in a wide range of applications (as listed above), especially given that no extra circuitry is required.

4.3.1 Effect of illumination level on spatial-edge detection

In this section, we examine the effect of illumination intensity levels on the spatial-edge

(SDS) information, determined in terms of the voltage difference ∆V between the two output signals of the CDS circuit, VSH1 and VSH2. These two signals were captured at a constant frame-rate of about 46 f/s and sampled for each pixel in the imager array with the sampling clocks (SH1 and SH2 respectively). The stimulus was a black bar on a white background (100% contrast). The spatial-edge (SDS) output signal was measured under a wide range of illumination light intensities (206-1839 Lux). To get better insight, we used 3D graphs to plot these data for both the captured stimulus image and the detected edge output, as shown in Fig. 4.11.

Figure 4.11 (a)


Figure 4.11: Effect of illumination on spatial-edge detection. The 3D graphs show the imager response (captured image) and the spatially detected edge for different illumination levels: (a) 1839 Lux, (b) 206-1131 Lux. Both the contrast and the frame-rate were kept constant, at 100% and ~46 f/s respectively. This graph shows that spatial-edge detection is feasible even at these low illumination levels.



Figure 4.12: Effect of illumination on spatial-edge detection. These instantaneous histograms show the spatially detected edge output signal (∆V) of one frame for different illumination levels (206-1839 Lux). The stimulus has 100% contrast. The images were captured in linear mode with a constant frame-rate of ~46 f/s. The left edge is a white-to-black (WB) edge, whereas the right edge is a black-to-white (BW) edge, as shown in the top chart. H-scale: time (s) at 2.5 ms/div; V-scale: ∆V (V) at 100 mV/div.


These graphs show that this technique is feasible for light intensities down to 206 Lux.

This opens the door for many applications.

As another useful way to understand the effect of illumination on the spatial-edge (SDS) output signal, we measured its instantaneous histogram (waveform) at each illumination intensity level in the range 206-1839 Lux, as shown in Fig. 4.12 above. The output signals ∆V from both edge types (BW "ON" edge and WB "OFF" edge) have a bipolar form, with the positive magnitude (M+) on average greater than the negative magnitude (M-), which can be attributed to the nature of the SDS operation. This bipolarity of the edge signal can be eliminated by using thresholds.


Figure 4.13: Effect of illumination on the spatial-edge detection of a 100%-contrast stimulus. The illumination light intensities of the light source used are in the range 214-2560 Lux. The images were captured in linear mode at a frame-rate of ~46 f/s. The plot shown here is for the left WB-OFF edge of the stimulus.


All measurements so far are instantaneous. In order to reduce the effect of temporal noise (and hence gain a better understanding of the effect of illumination on spatial-edge detection), we need to measure the edge output signal many times for each edge type and for each illumination level under investigation, and then average these measurements to get the desired improved results. Accordingly, we measured the edge signal ∆V (30 times for each edge type) using the same stimulus and keeping the frame rate constant. The illumination light intensity was in the range between 206 and 1839 Lux. This task was accomplished using logging software (WaveStar), which samples at 1 sample/sec. These data (30 samples for each edge type at each illumination level) were stored in the computer, where they were averaged and plotted in Fig. 4.13 and Fig. 4.14.


Figure 4.14: Effect of illumination on the spatial-edge detection of a 100%-contrast stimulus. The illumination light intensities of the light source used are in the range 214-2560 Lux. The images were captured in linear mode at a frame-rate of ~46 f/s. The plot shown here is for the right BW-ON edge of the stimulus.


Figure 4.13 shows the effect of the illumination on the spatially detected left (WB) edge signal (∆V). The peak-peak value, which the Tektronix TDS360 oscilloscope defines as MAX − MIN (i.e., M+ − M−), is a positive quantity, as also shown in the graph above. In this graph, the edge-signal magnitude ∆V increases rapidly with illumination intensity until it reaches a certain value (M+ ~259 mV, |M-| ~168 mV, and peak-peak ~427 mV) at 824 Lux, and then starts to level off gradually toward a steady-state value.

According to our observations, at low light intensities the captured images usually have very poor contrast (output swing) due to noise, but as the light intensity increases, the contrast starts to improve until the light intensity reaches a certain level, after which the process reverses and the contrast starts to degrade or level off with illumination intensity. This level is called the saturation point of the pixels, beyond which the pixel reset time (reset pulse width) is no longer sufficient to reset all the charge in the saturated pixels. This degrades the pixel's (imager's) ability to resolve gray levels; consequently, the captured images appear bright with extremely poor contrast. This argument fits well with the spatial-edge graph shown in Fig. 4.13 and explains its behavior with illumination intensity, since the edge signal is a strong function of the captured image contrast (swing), as will be discussed in more detail in the next section.

The effect of illumination intensity on the spatially detected right BW edge output signal shows similar behavior, as depicted in Fig. 4.14 above, with ∆V starting to level off (like the WB edge) at 824 Lux, with values of M+ ~226 mV, |M-| ~168 mV, and peak-peak

~394 mV. Differences in ∆V maximum values between the two edge types could be

attributed to the non-uniformity of illumination intensity distribution on the stimulus

surface, which depends on the light source position relative to each edge.

These results are interesting and suggest that we can acquire more reliable edges by carefully choosing the illumination level. Moreover, these results show that spatial-edge detection is feasible even at light intensities as low as 206 Lux, where the minimum peak-peak value is ~200 mV for both edges. Note that we focused on the peak-peak value since we intend to use thresholds to eliminate the bipolarity of the edge signal.
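The quantities used throughout this analysis (M+, M-, and peak-peak) and the thresholding step mentioned above can be expressed compactly, as in the short sketch below; it operates on a placeholder edge waveform rather than on the measured data.

```python
import numpy as np

def edge_metrics(delta_v):
    """Positive magnitude M+, negative magnitude M-, and peak-peak of a bipolar edge signal."""
    m_pos = delta_v.max()
    m_neg = delta_v.min()
    return m_pos, m_neg, m_pos - m_neg

def threshold_edges(delta_v, v_th):
    """Keep only excursions whose magnitude exceeds v_th, removing the bipolarity."""
    return np.abs(delta_v) > v_th

# Placeholder edge waveform: one bipolar spike pair at a white-to-black transition.
dv = np.array([0.0, 0.02, 0.26, -0.17, 0.01, 0.0])     # volts (illustrative values)
m_pos, m_neg, pk_pk = edge_metrics(dv)
print(f"M+ = {m_pos:.2f} V, M- = {m_neg:.2f} V, Pk-Pk = {pk_pk:.2f} V")
print(threshold_edges(dv, v_th=0.1))                   # [False False True True False False]
```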


4.3.2 Effect of contrast on spatial-edge detection

Understanding the effect of contrast on the spatial-edge output signal (∆V) is important,

as discussed in the last section. This effect was investigated under a wide range of stimulus contrasts, from 100% down to 20% (the lowest limit for any significant

spatial-edge response in our SDS system). These stimuli were black bars over gray

backgrounds with different gray levels to generate contrasts in the range under study.

The light intensity level was kept constant at 922 Lux. These images were captured in

linear mode with a frame-rate of ~ 46 f/s.

It is worth noting that it is the effective contrast (the contrast of the captured image) that directly affects the edge output signal, and not the stimulus contrast. In fact, the contrast of the captured images is usually a degraded reproduction of the stimulus contrast, as is the case here (refer to Fig. 4.15 for the 100%-contrast stimulus). Accordingly, we used 3D plots to show the captured images (effective contrast) of the stimuli with contrasts that range from 20-100% (shown to the left of the 3D plots), along with the corresponding 3D plots of the spatially detected edges, as illustrated in Fig. 4.16.

Figure 4.15: 3D plot and surface plot of the spatial-edge output signal for a 100%-contrast (vertical) stimulus illuminated by a light intensity of 922 Lux. The image was captured in linear mode at a frame-rate of ~46 f/s. This graph shows important features in the 3D plot of the edge signal as related to the 2D stimulus edge image represented by the surface plot on the right of the figure.


Figure 4.16 (a)


Figure 4.16 (b)


Figure 4.16: 3D-plot of the spatial-edge (SDS) output signal for (vertical) black bar over gray background stimuli illuminated by a light intensity of 922 Lux. Different gray levels were used to generate stimuli with contrasts that range from 20-100%. The images were captured in linear mode at a frame-rate of ~46 f/s. (a) 75-100%, (b) 47-70%, (c) 30-20%; the rest (20%) is shown here.

It is clear that the captured images have much lower contrasts compared with

those of stimuli. Although using the effective contrast is more accurate in characterizing

our sampled edge detection technique (since it excludes the effect of the imager), we

always refer to the stimulus contrast (shown on the left of the 3D-plots) whenever contrast is

mentioned (consistent with standardization).
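To make the distinction between stimulus contrast and effective (captured-image) contrast concrete, the sketch below applies the common Michelson definition of contrast to the output swing; this definition is an assumption made for illustration (the stimulus contrasts above are set by the printed gray levels), and the voltages are representative, not measured.

def michelson_contrast(v_bright, v_dark):
    # Effective contrast of a captured image, assuming the Michelson form
    # C = (Vmax - Vmin) / (Vmax + Vmin) applied to the imager output swing.
    return (v_bright - v_dark) / (v_bright + v_dark)

# A nominally 100%-contrast stimulus can be reproduced with a much smaller swing:
stimulus_contrast = michelson_contrast(1.00, 0.00)    # 1.0, i.e. 100 %
effective_contrast = michelson_contrast(0.85, 0.35)   # ~0.42, i.e. ~42 %
print(stimulus_contrast, effective_contrast)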

Another useful way to understand the effects of contrast on the spatial-edge (SDS)

output signal is to measure the histogram (waveform) of the spatial-edge output signal at

each contrast level in the range under study (20%-100%) as shown in Fig. 4.17 below.

These histograms show how the contrast affects the output signal of the spatial detection

of both edge types. Both edges' output signals ∆V have a bipolar form, with a positive magnitude (M+) and a negative magnitude (M-).


Figure 4.17: Effect of contrast on spatial-edge detection. The stimuli are vertical black bars on backgrounds of different gray levels to generate contrasts that range from 20-100%. All measurements were done under constant (922 Lux) illumination and a frame-rate of ~46 f/s. These instantaneous histograms show the spatially detected edges' output signal (∆V) of one frame. H-scale is Time (sec) at 2.5 ms/div; V-scale is ∆V (V) at 100 mV/div. (a) 75%, 100%; the rest (20-47%) is shown here.

[Figure 4.17 panel labels: 20%, 37%, 47%, 75%, 100% contrasts; frame flags; white / black bar / white regions; edges.]


All results presented so far (Fig. 4.15, Fig. 4.16, Fig. 4.17) are based on instantaneous measurements. Therefore, in order to reduce the effect of temporal noise and gain better insight into the effect of contrast on spatial-edge detection, the spatial-edge output has to be measured many times for each contrast level and then averaged.

Accordingly, we measured the edge output (SDS) signal ∆V using the same frame-rate (46 f/s) and illumination intensity (922 Lux), with contrasts ranging from 20% to 100%, many times (30 samples) using the WaveStar logging software, which samples at 1 sample/sec. The data was then stored in the computer, where it was averaged and plotted as shown in Fig. 4.18 below.
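The averaging itself is simple bookkeeping; a minimal sketch is given below, assuming the 30 logged readings per contrast level were exported from WaveStar to a plain text file with one row per sample and two columns (M+ and M-, in volts). The file name and column layout are assumptions made for illustration only.

import numpy as np

def average_edge_readings(path):
    # Average the 30 logged samples for one contrast level.
    data = np.loadtxt(path)                 # expected shape: (30, 2) -> columns M+, M-
    av_m_plus = data[:, 0].mean()
    av_m_minus = np.abs(data[:, 1]).mean()
    return av_m_plus, av_m_minus, av_m_plus + av_m_minus   # Av M+, Av |M-|, Av Pk-Pk

# e.g. average_edge_readings("contrast_60.txt") would give one point of Fig. 4.18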

[Figure 4.18 plot: Spatial Edge ∆V (V) versus Contrast (%), 0-120%; series: Av M+, Av M-, Av Pk-Pk.]

Figure 4.18: Effect of contrast on spatial-edge detection for (vertical) black bar over gray background stimuli. Different gray levels were used to generate contrasts that range from 20-100 %. The illumination light intensity, frame-rate, were kept constant at 922 Lux, ~ 46 f/s respectively.

Figure 4.18 shows the effect of the contrast on the spatial-edge (SDS) output signals (∆V), which are bipolar spikes with a negative magnitude (M-) and a positive magnitude (M+). M+ was found, on average, to be larger than |M-|. This graph also shows that the edge signal magnitudes M+ and |M-| and the peak-peak value are all


monotonically increasing with contrast for both edge types of the stimulus. The output edge signal changes over the ranges 84 mV-229 mV, 56 mV-164 mV, and 140 mV-394 mV for M+, |M-|, and peak-peak respectively, as the contrast changes from 20% to 100%. From these observations, one can notice that |M-| levels off at a contrast of about 50% toward a steady-state value, whereas both M+ and the peak-peak value start to level off slightly at a contrast of about 60% but keep increasing until the contrast reaches about 87%, where they start to fluctuate around the steady-state value of ∆V. This result suggests that spatial-edge detection is feasible even at low contrasts (20%), where the edge output signal is at its minimum of ~140 mV (peak-peak). This allows room for many applications. The apparent non-smoothness of the edge signal is likely due to the fact that every point in the graph can be considered an independent experiment, since the stimuli are changed manually for each point. However, this has no effect on the qualitative arguments presented here.

4.3.3 Effect of frame-rate on spatial-edge detection

To get a good insight into how the frame-rate affects the spatial-edge (SDS) output signal

(∆V), we used 3D-graphs to show the effect of the frame-rate on the captured image and its detected edges, as shown in Fig. 4.19 below. It is clear that both the contrast (output swing) of the captured images and the edge output signals follow a non-monotonic behavior: both first increase at low frame-rates, then level off to constant values at medium frame-rates, and finally start to decrease at high frame-rates, as shown below.

Figure 4.19 (a)


Figure 4.19 (b)


Figure 4.19: 3D-plot of the spatial-edge output signal for a vertical 100%-contrast stimulus illuminated by a light intensity of 922 Lux. The image was captured in linear mode at frame-rates of (a) ~11.5 f/s, (b) ~23 f/s – 230 f/s; the rest is shown here.

To add another dimension to our understanding of the effect of the frame-rate on the spatial-edge (SDS) output signal, we measured the instantaneous histogram (waveform) of the spatially detected edge output signal at each frame-rate in the range under study (11.5 f/s-1150 f/s), as shown in Fig. 4.20 below. These histograms show how the frame-rate affects the output signal of both edge types. Both edges' output signals (∆V) have bipolar forms, with a positive magnitude (M+) and a negative magnitude (M-). The positive magnitude (M+) seems to be constant at low and medium frame-rates, whereas the negative magnitude follows the non-monotonic behavior discussed above, with the WB-left edge signal changing more rapidly than that of the BW-right edge and showing an obvious "increase-constant-decrease" pattern, as shown in Fig. 4.20.

[Figure 4.20 (a) panel labels: 11.5 f/s and 23 f/s; white / black bar / white regions; frame flags; edge.]

Figure 4.20 (a)


[Figure 4.20 panel labels: 46 f/s, 115 f/s, 230 f/s, 1150 f/s.]

Figure 4.20: Effect of frame-rate on spatial-edge detection. The (vertical) 100%-contrast stimulus was illuminated by a constant light intensity of ~922 Lux. These instantaneous histograms show the spatially detected edges' output signal (∆V) of one frame with frame-rates that range from 11.5 f/s to 1150 f/s. V-scale is ∆V (V) at 100 mV/div. H-scale is Time (sec) at 10 ms/div, 5 ms/div, 2.5 ms/div, 0.5 ms/div, and 0.1 ms/div respectively. (a) 11.5 f/s, 23 f/s; the rest (~46 f/s – 1150 f/s) is shown here.

These characteristics are best explained by the change of contrast (output swing) with frame-rate, which strongly affects the edge output. So, as discussed earlier (with Fig. 4.19 and 4.20 in mind), better captured-image contrasts will result in higher levels of the edge output.

So far, all measurements are instantaneous. Therefore, in order to reduce the effect of temporal noise and obtain better insight into the effect of frame-rate on spatial-edge detection, we measured the edge signal (∆V) using the same stimulus (100% contrast) and illumination light intensity (~922 Lux), with frame-rates ranging from 11.5 f/s to 1150 f/s, many times using the WaveStar logging software, which samples at 1 sample/sec. These


data (30 samples for each frame-rate) were stored in the computer, where they were averaged and plotted in Fig. 4.21.

These results confirm the conclusions reached above and verify them in more detail. The M+, |M-|, and peak-peak values all have constant regions (independent of frame rate) with edge output levels of 225 mV, 165 mV, and 410 mV respectively. This region extends to frame rates as high as 115 f/s, after which all edge output measurements decrease toward minimum values. At the low end, this region extends down to the lowest frame rate in the range under investigation (11.5 f/s) for the M+ magnitude, but it is limited to a frame rate of 23 f/s for both the |M-| and peak-peak values, which obey an "increase-constant-decrease" behavior with frame-rate, as shown in Fig. 4.21 below.

[Figure 4.21 plot: Spatial Edge ∆V (V) versus Frame Rate (f/s), 1-10000 f/s on a logarithmic axis; series: Av M+, Av M-, Av Pk-Pk.]

Figure 4.21: Effect of frame-rate on spatial-edge detection of (vertical) 100%-contrast stimuli. The light intensity was kept constant at ~ 922 Lux. The frame-rate was varied from 11.5 f/s to 1150 f/s.

Again, as discussed before, these results can be best explained by the

enhancement or degradation of the captured-image contrasts. In this context, we first


start by analyzing the effect of the extreme frame rates in the range under investigation, i.e., low frame-rates and high frame-rates.

At low frame-rates, the integration times are longer, resulting in higher imager output signals but low image contrasts (refer to Fig. 4.19). The captured images appear bright (at the lowest gray level) but with no detail, because of a situation similar to blooming: owing to the long integration times at low frame-rates, the pixel well is flooded with photo-generated charge carriers, and the excess carriers spill into adjacent pixels, much as in an overexposed film image; this can be regarded as electronic blooming. Accordingly, the edge output signal (∆V), which is a strong function of contrast, will be low at low frame-rates, as shown in Fig. 4.21 above.

At high frame-rates, the pixels' output levels are very small because of the shorter integration times involved, and the captured images appear very dark (approaching the logarithmic-pixel limit, where the integration time tends to zero), as shown in Fig. 4.19. In this case, it is difficult to differentiate between the black bar and the white background of the 100%-contrast stimulus. This implies a large degradation in the contrast (output swing) of the captured image, resulting in a low edge output signal ∆V, as shown in Fig. 4.21.

Between these two extremes, the output edge signal is constant (at the same levels mentioned above) in the frame-rate range of 23 f/s-115 f/s, except for M+, whose constant region starts earlier, at a frame-rate of 11.5 f/s. This is a very interesting finding since it suggests that there is an optimal frame-rate (integration-time) range at which a maximum level of ∆V can be achieved, and thus better spatial-edge detection performance can be reached.

So far, we have examined the effect of various parameters on the spatial-edge output signal in the linear mode, to understand its behavior and to explore its feasibility for the applications within our scope. Similar measurements are repeated (as applicable) for the logarithmic mode to serve the same purpose.


4.4 Spatial-edge Detection in the Logarithmic Mode

All measurements presented in this section are performed on a prototype CMOS imager

(the CYC chip), which is a dual-mode active pixel sensor (2M-APS) image sensor fabricated in the HP 0.5 µm digital CMOS process (refer to Appendix A for more details). The experimental setup and testing procedure for spatial-edge detection in the logarithmic mode are the same as those used for linear-mode testing (refer to Appendix B). However, they differ in the timing pattern, since the pixel reset clock in the logarithmic mode is always set high at VDD (refer to Fig. 3.8), so that the reset transistor works continuously in the sub-threshold regime, which results in the logarithmic photocurrent-to-voltage conversion (refer to section 3.3.1 and Fig. 3.9). The other difference is that extra-high-illumination stimuli are used here.
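For reference, the usual form of the sub-threshold (logarithmic) conversion can be written as below; the symbols follow common usage for logarithmic APS pixels, and the exact expression and constants used in Chapter 3 may differ, so this is only an indicative sketch:

V_out ≈ V_DD − V_T − n (kT/q) ln(I_ph / I_0)

where I_ph is the photocurrent, I_0 is a process-dependent current constant, n is the sub-threshold slope factor, and kT/q is the thermal voltage (~26 mV at room temperature). The logarithmic compression of I_ph in this expression is what produces the small output swing discussed later in this section.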

In this mode, where the optical dynamic range is high, there are two operational

regimes: either high illumination at low frame-rates or low (moderate) illumination at

high frame-rates. At higher illuminations, such as those resulting from direct exposure to a bulb lamp, the logarithmic mode is preferable to the linear mode. Figure 4.22 below shows the image (a) of a bulb lamp and its 3D-plot (b), captured in the logarithmic mode with a light intensity of about 7170 Lux.

Figure 4.22: Captured lamp image in the logarithmic-mode of pixel operation (a), and its 3D-plot, which clearly shows the image contrast (b). The frame-rate was ~ 46 f/s and illumination light intensity was high at 7170 Lux.



Both graphs in Fig. 4.22 above show higher levels of fixed pattern noise (FPN)

compared with those of the linear mode. This problem is inherent in logarithmic active pixel sensors (APS), because of the exponential relationship between the photocurrent and the threshold voltage (the main source of FPN) of the reset transistor M1 (refer to Fig. 4.3). Note that CDS is not normally used with the logarithmic mode because of the absence of a reset signal. However, as we will see later in Chapter 6, this obstacle was overcome by using the Modal Double Sampling (MDS) algorithm. The spatially detected edge image is

shown in Fig. 4.23. Here we are trying to map a curved object onto an edge detector that

is strongly tuned to vertical edges as shown.

Figure 4.23: The image of spatially detected edge of a lamp (high illumination at ~7170 Lux) in the logarithmic-mode using CDS circuit. The frame-rate was ~ 46 f/s. The 1D tuning and FPN noise are obvious in the image. The 3D plot was omitted, since it didn’t show any additional information content.

At low illumination levels, we used bar stimuli, which are usually used for

standard measurements, because they result in more precise data. A typical example is shown in Fig. 4.24 below, where both the bar image and its spatially detected edge (logarithmic mode) are shown. In this experiment, the illumination level was low at 196 Lux; however, the frame rate was kept high at ~230 f/s. At high frame rates, the logarithmic mode exhibits characteristics similar to those of the linear mode. This can be attributed to the


fact that the linear (integrating) mode operation tends towards the logarithmic at very

short integration times.

Figure 4.24: Left: Bar image (at 196 Lux) captured at high frame rates (~230 f/s) in the logarithmic mode, its 3D-plot (clearly showing the contrast), and its histogram. Right: spatially detected edges image, its 3D-plot showing the two edge signals, and its histogram, which is different from that resulting from normal operation of the CDS circuit as an FPN remover (shown left). Histograms were calculated using MATLAB software.


Also shown in Fig. 4.24 are the histogram distributions of the captured image (left) and the spatially detected edge image (right). Here, we used the CDS circuit in both cases (although it is not usually used with logarithmic operation) to compare the two CDS operations. We noticed that there are two gray-level distributions for normal CDS operation, whereas there is only one gray-level distribution for the edge image detected using the SDS operation of the CDS circuit. In normal operation (left), the differencing splits the distribution into two distributions centered on the black and white levels of the captured image, whereas in spatial edge detection, the differencing causes the distribution, centered on the zero gray level, to shrink in width, with the WB (BW) edge magnitudes represented by the positive (negative) extreme of the distribution. Note that we are using LabView gray-level coding (±100) here.
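The qualitative difference between the two histogram shapes is easy to reproduce with a toy example; the sketch below builds a synthetic black-bar-on-white column with a little noise and compares the histogram of the pixel values (two clusters, as in normal CDS readout) with the histogram of the row-differenced SDS output (one cluster centered on zero, with the WB/BW edge magnitudes at its extremes). All numbers are illustrative, not measured data.

import numpy as np

rng = np.random.default_rng(0)
# Synthetic white-black-white column (volts) plus a small amount of temporal noise
column = np.concatenate([np.full(24, 0.9), np.full(16, 0.4), np.full(24, 0.9)])
column += rng.normal(0.0, 0.01, column.size)

image_hist, _ = np.histogram(column, bins=20)           # bimodal: black and white levels
edge_hist, _ = np.histogram(np.diff(column), bins=20)   # unimodal near 0 V, edge spikes at the extremes
print(image_hist)
print(edge_hist)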

4.4.1 Effect of illumination level on spatial-edge detection

The instantaneous images captured in the logarithmic mode and their spatially detected edge

images (for a 100%-contrast bar stimulus) are shown in Fig. 4.25 below in 3D form to show how the image and its edge signals are affected by the illumination level. In general, we noticed that images captured in the logarithmic mode, and hence their edge output signals, are very sensitive to any change in experimental conditions (e.g. illumination intensity, frame-rate, etc.). In fact, it is rather difficult to get good image quality with logarithmic-mode operation, because of the logarithmic compression and the associated small voltage swing. Images are usually dark with low contrast, and usually require post-processing to correct the logarithmic compression. Only under direct exposure at low frame rates, or at low light intensity with high frame rates, can one capture relatively good images, as shown in Fig. 4.22 and Fig. 4.24 respectively. The same applies to the spatial-edge output signal (∆V), which is strongly affected by the captured-image contrast, as already discussed. These difficulties are reflected in the depth and detail of the measurements that can be conducted in the logarithmic mode (and hence the kind of analysis that can be carried out), which makes this mode less attractive for further studies. However, this should not prevent us from presenting measurements that were achieved even under these sensitive conditions.


The effect of illumination can be better understood by comparing the 3D-plot of

Fig. 4.24 (at 196 Lux) with the 2nd row of Fig. 4.25 (at 922 Lux), since they are both captured

at the same frame-rate (~230 f/s). The differences are relatively large as shown. For very

high frame rates, we can compare the 1st row (at 922 Lux) and 3rd row (at 1323 Lux) of

Fig. 4.25, as they are captured with the same frame-rate (1150 f/s). The differences are

now (interestingly) smaller as shown.

Figure 4.25: Effect of the illumination and frame-rate on the spatial-edge in logarithmic mode for a 100%-contrast stimulus. The 3D graphs show the imager response (captured image) and spatially detected edge for different illumination levels of 922-1323 Lux. The images were captured with frame-rates in the range ~230 f/s-1150 f/s.


To get a better understanding of the discrepancy, we should use average values of repeated measurements rather than the single instantaneous measurements presented above. Therefore, we measured the spatial-edge output signal in the logarithmic mode using automated logging (offered by the WaveStar software), which was set to sample at 1 sample/sec for 30 sec. This data (30 samples for each illumination level) was stored (as before) in the computer, where it was averaged and plotted in Fig. 4.26. In this case we measured both the left-WB and right-BW edges to see the differences.

[Figure 4.26 plot: Spatial Edge ∆V (V) versus Illumination (Lux), 0-1400 Lux; series: WB-M+, WB-M-, BW-M+, BW-M-.]

Figure 4.26: Effect of illumination on the spatial-edge in logarithmic mode for a 100%-contrast stimulus. The illumination light intensities of the light source used are in the range 196-1323 Lux. The images were captured in logarithmic mode at a frame-rate of ~230 f/s. The plot shown here is for both edge types, the left (WB) and right (BW) edges of the stimulus.

This graph shows qualitatively mirrored behavior for the edge outputs (∆V), but with reversed polarity. We found that the M+ of the right-BW edge output mirrors the M- of the left-WB edge output, with the |M-| magnitude quantitatively smaller than the M+ magnitude. The same applies to the M+ of the left-WB output, which


mirrors the M- of the right-BW edge output, with M+ quantitatively larger than |M-|. The quantitative difference can be attributed mainly to the uneven distribution of light over the two edges, since they are at different distances from the light source. Moreover, the location of each edge on the imager array will affect the edge output signal differently according to the spatial noise level at each edge location, and whether or not bad pixels exist at that location. It is also shown that the magnitude M+ of the right-BW edge (|M-| of the left-WB edge) levels off to a constant value at around 922 Lux and above, whereas the magnitude M+ of the left-WB edge (|M-| of the right-BW edge) peaks around the same illumination level (922 Lux) but then decreases at higher illuminations. This behavior can be attributed to the discharge/charge of the sense capacitance through parasitic photocurrents due to imperfect shielding of the pixel circuit [38].

4.4.2 Effect of frame-rate on spatial-edge detection

The effect of frame-rate on the spatial-edge output signal in the logarithmic mode can be

understood by comparing the 1st row (at ~1150 f/s) with the 2nd row (at ~230 f/s) of Fig. 4.25 above, since they are captured under the same illumination level (922 Lux). At these illumination intensities (≤922 Lux), the imager can only capture recognizable images if the frame rate is high enough. Accordingly, we limited our measurements to frame rates of 230 f/s and higher. In the logarithmic mode, low frame rates are usually used with high-illumination stimuli, such as direct exposure to a bulb lamp (or laser sources), as shown in Fig. 4.22 and Fig. 4.23.

Note that these observations are based on instantaneous measurements. To get better insight, average values (of 30 samples) for each frame-rate were used (as before) to plot Fig. 4.27, which shows the effect of frame rate on the positive (M+) and negative (M-) magnitudes of the edge output signal (∆V) for both edge types. This graph also shows that the edge output signal in this mode can fairly be considered independent of (constant with) the frame rate over the range under investigation (230 f/s – 1150 f/s), since the changes over the entire range are less than 4 mV. This is negligible compared to the FPN level of the CYC chip (~16.6 mV) in this mode. This behavior is logically understandable given that the logarithmic mode of pixel operation is a continuous mode with no integration involved.


From our observations, we noticed that the outputs of the logarithmic mode of

operation follow characteristics that are usually different from those followed by the

linear mode outputs, when influenced by the same parameters. This statement is quite

general, as we will also see later in other chapters.

[Figure 4.27 plot: Spatial Edge ∆V (V) versus Frame rate (f/s), 0-1400 f/s; series: WB-M+, WB-M-, BW-M+, BW-M-.]

Figure 4.27: Effect of the frame-rate on the spatial-edge in logarithmic mode for a 100%-contrast stimulus. The illumination light intensity was kept at 922 Lux. The frame-rate was varied from 230 f/s to 1150 f/s. The plot shown here is for both edge types, the left (WB) and right (BW) edges of the stimulus.

The peak-peak levels of the edge output signals in the logarithmic mode are generally small (less than 100 mV) compared with those of the linear mode of operation, but only slightly larger than 16.6 mV (the noise level in the logarithmic mode), especially at low light intensities. Reliable edge signals are, therefore, feasible only for limited ranges of light intensities and frame rates. Moreover, capturing quality images in the logarithmic mode is difficult because of the sensitivity of this mode


to any parametric change. Accordingly, we limited our investigation of the logarithmic mode to this stage, and no further analysis of this mode will be presented hereafter.

4.5 Spatial-edge Detection in Motion

In the previous sections, we discussed the spatial-edge detection performance for

stationary stimuli under the influence of a set of crucial parameters. We analyzed the effect of these parameters before complicating the situation by using moving stimuli. Thus, we obtained the intrinsic dependence of the output edge signal on these parameters without the side effects imposed by motion (such as blurring). Moreover, this approach allowed us to explore the feasibility of spatial-edge detection in the logarithmic mode at rest first, before attempting any motion situations. Because of the poor performance and the sensitivity of the imager output to parametric changes, in addition to the low edge output signals, reliable edge detection is not feasible in that mode, especially at low light intensities and/or low frame rates. Accordingly, we concluded that the logarithmic mode is not suitable for motion-detection analysis within the conditions specified here. Therefore, no further analysis of this mode will be conducted here, and only the linear (integrating) mode of operation will be presented and analyzed under motion conditions.

A typical example of how the averaged spatial-edge (SDS) signal changes in response to a moving stimulus is shown in Fig. 4.28 below. The illumination intensity level was ~1165 Lux, and images were captured with a frame-rate of ~46 f/s. The stimulus was a black bar on a white background (100% contrast). The row-by-row movement of the stimulus to the right is also shown in Fig. 4.28 (inset). Also shown is the averaged spatial-edge output in response to this movement, which flips each time the edge moves by one row; it can therefore be employed to detect motion using a zero-crossing circuit that triggers whenever the edge signal flips.
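Such a zero-crossing test can be stated very compactly; the sketch below is a software stand-in for the circuit suggested above (the function name and guard band are hypothetical) and flags motion whenever the averaged edge signal flips polarity between successive frames by more than a small guard band, so that noise around zero is not counted.

def motion_events(edge_signal, guard=0.02):
    # edge_signal: sequence of averaged Delta-V values, one per frame (volts)
    # guard: small guard band (volts) so noise near zero does not trigger events
    events = []
    for k in range(1, len(edge_signal)):
        prev, curr = edge_signal[k - 1], edge_signal[k]
        if prev * curr < 0 and abs(prev) > guard and abs(curr) > guard:
            events.append(k)      # edge moved by one row between frames k-1 and k
    return events

# Example: the signal flips sign as the edge crosses row boundaries
print(motion_events([0.15, 0.14, -0.13, -0.14, 0.12]))    # -> [2, 4]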

For more detailed motion analysis, we need to conduct more rigorous

measurements on the effect of motion on spatial-edge detection. The experimental setup

and testing procedure for these measurements are similar to those used for stationary

stimuli (refer to Appendix B) except that the stimuli used here are drums with different

diameters (instead of the flat stimuli) that move with controllable speeds. All measurements in this section are performed on the prototype CMOS imager (CYC-


chip), which is a "dual mode" 2M-APS image sensor fabricated using the HP 0.5 µm digital CMOS process (please refer to Appendix A for more details).

Figure 4.28: A typical example of how the spatial-edge signal (averaged) changes in response to pixel-by-pixel (row-by-row) movement of the 100%-contrast stimulus (inset) to the right. The illumination intensity level is ~1165 Lux. Images were captured in the linear mode with a frame-rate of ~46 f/s. The 100%-contrast stimulus was a black bar on a white background.

The spatial-edge output was measured using the standard condition with 100%

contrast stimulus illuminated by light source of intensity 466 Lux. The frame-rate was

~46 f/s. The edge signal was measured using a Tektronix TDS 360 digital real-time

oscilloscope and logged using WaveStar software for about 117 sec with a sampling rate

of 1 sample/sec. A typical strip chart for spatial-edge output for stimulus moving with

constant speed of about 15.24 cm/sec is shown in Fig. 4.29. This strip chart shows the M+ and M- values for both the WB and BW edges, which are defined by the Tektronix oscilloscope as MAX and MIN respectively.

[Figure 4.28 inset labels: Motion; White (∆V = M-); Gray (∆V = 0); Black (∆V = M+).]


[Figure 4.29 plot: Spatial Edge ∆V (V), -0.4 to 0.4 V, versus Time (sec); M+ and M- spike traces.]

Figure 4.29: A typical spatial edge output signal (∆V) logged for about 117 sec (using WaveStar) with a sampling rate of 1 sample/sec. The 100%-contrast stimulus was moving with a constant speed of ~15.24 cm/sec and was illuminated with a 466 Lux light source. The images were captured at a frame rate of ~46 f/s. This strip chart shows the M+ and M- values for both WB and BW edges, which are defined by the Tektronix oscilloscope as Max and Min respectively. Solid squares represent active spikes, whereas open squares represent idle signals.


As shown in this log chart (Fig. 4.29), there are some differences in the levels of the edge output signals (spikes) that can be better understood from the following discussion.

For a stimulus speed of about 152.4 mm/sec, it takes only about ~374 ms for an edge to pass the imager field-of-view (~57 mm), which was calculated (refer to Fig. 4.30 below) for an array width Bmax = 64 pixels × 30 µm = 1.92 mm, with a standard wide-angle lens of ~35º and a focal length of 3 mm. The stimulus was ~90 mm from the lens plane, whereas the imager was located at the focal plane to get the sharpest image possible. In other words, the imager array can "see" the edge for ~374 msec at this speed. On the other hand, it requires about 21.8 ms for the imager array to be fully scanned (read out) at the operating frame-rate of ~46 f/s used here.

Figure 4.30: A graphic illustration of the experimental setup showing the position of the light source with respect to the imager field-of-view. Also shown are the hypothetical positions (at the sampling event) p1, p2, and p3 of an edge moving clockwise with a speed of S cm/sec. Optical parameters shown numerically are kept constant throughout the experiments.

[Figure 4.30 diagram labels: imager array, lens (F = 3 mm), stimulus drum, light source, positions p1, p2, p3, speed S, imager field of view 57 mm, Bmax = 1.92 mm, object distance 90 mm.]


Therefore, if an edge enters the imager field-of-view (at this speed) as the scanner is reading out the first pixel (column) of the imager array, ~17 frames could be read out before the edge leaves the imager field-of-view. This number of frames changes with the speed of the stimulus and/or the frame-rate. When WaveStar samples at 1 sample/sec for 117 sec, the sampled data will be either active (sampled while the edge is present) or idle (sampled after the edge has left the field-of-view), depending on the initial conditions, the speed of the stimulus, and/or the frame-rate.
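The figures quoted above follow from simple similar-triangle geometry; the short sketch below reproduces them from the stated optics (3 mm focal length, 90 mm object distance, 64 × 30 µm array) under a thin-lens approximation, so small deviations from the quoted ~57 mm, ~374 ms, and ~17 frames are expected.

# All lengths in millimetres, times in seconds (illustrative re-calculation only).
focal_length = 3.0
object_distance = 90.0
array_width = 64 * 0.030                  # 64 pixels x 30 um = 1.92 mm (Bmax)
frame_rate = 46.0                         # f/s
speed = 152.4                             # stimulus speed in mm/s (15.24 cm/s)

fov_width = array_width * object_distance / focal_length   # ~57.6 mm field of view
transit_time = fov_width / speed                            # ~0.38 s for an edge to cross
frames_per_pass = transit_time * frame_rate                 # ~17 frames see the edge
readout_time = 1.0 / frame_rate                             # ~21.7 ms per full-array readout
print(fov_width, transit_time, frames_per_pass, readout_time)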

For an "active" sample, the position of the edge in the field-of-view is a key factor in determining the level of the output edge signal ∆V, because this level is affected by the position of the edge stimulus on the (curved) drum and/or its distance from the light source. As shown in Fig. 4.30 above, at the sampling event the level of the output signal for an edge at position p1, p2, or p3 will differ depending on how much the edge image is distorted (faded) by the curvature of the drum. For example, the images at positions p1 and p3 are more affected by the curvature than that at p2. Therefore, edges at p2 are expected to have higher edge signals than those at p1 and p3. Also, the edge output from these positions will differ depending on their position relative to the light source; some edge positions will be better illuminated than others. In this case (refer to Fig. 4.30), edges at p1 are better illuminated than those at p2 and/or p3. Perhaps the best position in this case is p2, where the drum-curvature effect is minimal and the illumination level is acceptable. Now, since in practice neither the stimulus speed nor the readout clock (frame rate) is perfectly stable, the edge can assume any position in the field-of-view during the 117 sec logging interval; hence the obvious fluctuation in the spatial-edge output spikes shown in Fig. 4.29 above. These fluctuations can also be attributed to the slow sampling rate of WaveStar (1 sample/sec), which causes edge spikes to be sampled in different frames. Moreover, the edge output signal is affected by the FPN-PRNU noise levels and/or the existence of bad pixels at an edge position during sampling. In general, all these factors collectively contribute to the apparent fluctuation in the level of the edge spikes shown in Fig. 4.29. This argument is valid for the two edge types (BW edges and WB edges). However, since the pixels' initial conditions are different for each edge type [98], slight differences in the edge signal ∆V levels were observed between the outputs of the two edge types, as shown in Fig. 4.29.


Figure 4.31: Illustration of the stimulus drum showing the positions of the BW and WB edges, which move in the clockwise direction with a typical speed of about 152.4 mm/sec. Also shown is the imager array width of ~1.92 mm (64 × 30 µm). The stimulus can be viewed as a tape of alternating black and white strips, or as a symmetrical square wave with a period of 10.925 sec (for this speed). [Diagram labels: edges (BW, WB), black, white, imager array, s1 = ~152.4 mm/sec, array width = 1.92 mm, ~5.46 s and ~10.925 s intervals at s1.]


When an edge leaves the imager field-of-view, WaveStar will record an "idle" sample and the image output signal will have a constant value until the next edge enters the field-of-view (~5.46 sec at 15.24 cm/sec—two edges per drum). This constant value could be a white level or a black level depending on whether a BW or a WB edge has left the field-of-view, as depicted in Fig. 4.31 above. In either case, the edge signal will be at its minimum for both edge types, since the differencing is applied to the same gray level (black or white). Here, both spatial noise (FPN) and temporal random noise sources will play a role in determining the level of the edge signal (∆V). Now, because the "active" interval is about ~374 msec whereas the "idle" interval is about 5.46 sec (5460 msec), it is more likely that WaveStar will sample (at 1 sample/sec) "idle" if the previous sample was "idle", whereas it is less likely to sample "active" if the previous sample was "active". This explains why the "active" signals look more spiked than the "idle" signal.

[Figure 4.32 plot: Max. achievable ∆V (V), 0-0.35 V, versus Speed (cm/sec), 0-250 cm/sec; series: M+ and |M-|.]

Figure 4.32: Effect of speed on the maximum achievable spatial-edge output signal (∆V). The 100%-contrast stimulus was moving with constant speeds in the range from ~15.24 cm/sec to ~200.39 cm/sec, and was illuminated with a 466 Lux light source. The graph shows both M+ and |M-|. Each point in this graph represents the maximum of all logged data (about 117 samples for each speed). Images were captured with a frame rate of ~46 f/s.


The above experiment was performed for seven speeds (15.24–200.39 cm/sec) in

order to study the spatial-edge output signal under moving stimuli. In general, however,

the maximum speed should be limited to values that assure only one edge detection per

frame. Otherwise, the analysis will be complicated without adding any significant

information. A photodiode tachometer was used to measure the speed in all

measurements. All speeds reported here are for linear motion. Results for each speed are

very similar to those shown in Fig. 4.29.

To illustrate the effect of speed on the edge output signal, we extracted from the measured data the maximum achievable edge signal for each speed in the range under investigation, as shown in Fig. 4.32. In this figure, each point represents the maximum of all measured data (~117 samples) of the edge output signal ∆V at each speed, for both the M+ and |M-| magnitudes shown in Fig. 4.29. This graph shows that the maximum achievable edge signal is higher at some speeds than at others. This behavior is attributed to the so-called "velocity tuning" [99, 100, 101], which occurs when the stimulus speed synchronizes with the readout scanning speed (frame-rate) so that the stimulus edge is captured at the most appropriate position in the imager's field-of-view (e.g. position p2 in Fig. 4.30) for a maximum edge output signal.

Here, the first tuning speed was at ~15.24 cm/sec, where the "peak" tuning level occurs (refer to Fig. 4.32); the tuning was then repeated at other speeds (~24 cm/sec, ~64 cm/sec) with tuning levels that faded away with increasing speed. This graph also shows that this "velocity tuning" is clearer in M+ than in |M-|, whereas |M-| shows a roughly logarithmic decrease with speed.

Another interesting tool for evaluating the feasibility of spatial-edge detection is the detection efficiency (% detection) at a certain threshold level, which is defined as the ratio (as a percentage) of the number of detected spikes in a certain interval (117 sec) to the number of spikes expected (calculated) for an ideal detection within the same interval. For an ideal detection, the efficiency is at its maximum of 100%, because all edges in the interval are detected. However, in most practical cases, the detection efficiency is expected to be less than 100%.


[Figure 4.33 plot: Spatial Edge ∆V (V), 0-0.35 V, versus Time (sec); M+ and |M-| spikes labeled BW/WB; threshold lines Vth1, Vth2, Vth3; photodiode measurements: A = Ts1 = ~10.925 sec, B = ~5.46 sec (A/2).]

Figure 4.33: A typical area graph of the spatial edge output signal (∆V) used to facilitate the calculation of the percentage detection efficiency. Also shown are the threshold voltages Vth1, Vth2, and Vth3 at 80 mV, 100 mV, and 150 mV respectively. A is defined as the interval between edges of the same type (the period of the stimulus square wave in Fig. 4.31), whereas B is the time between edges of different types (the width of the stimulus square wave in Fig. 4.31). Note that we used the absolute value of the edge signal, |M-|, in this plot.


Because of the bipolar nature of the spatial edge signal, both edge magnitudes (M+ and |M-|) were used to determine the number of measured spikes for every threshold voltage. The calculation was performed using MS Excel supported by manual calculation. To facilitate the manual calculations, we built area graphs of the output signals for both edges and for each speed in the range under investigation (15.24 cm/sec–200.39 cm/sec).

A typical example of these graphs is shown in Fig. 4.33 above. This data is identical to that used to plot Fig. 4.29. The graph in Fig. 4.33 shows the threshold voltages Vth1, Vth2, and Vth3 (horizontal lines) used in this analysis. These thresholds are at 80 mV, 100 mV, and 150 mV respectively. The interval "A" is the interval between two successive edges of the same type (BW-BW and/or WB-WB) and can be viewed as the period of the stimulus square wave (refer to Fig. 4.31). The interval "B", however, is the time between two successive edges of different types (BW-WB and/or WB-BW), which is in this case equal to A/2, and can be regarded as the pulse width of the stimulus square wave.

It is worth noting that some M+ spikes (brighter ones) are hidden behind (masked by) the M- spikes (darker ones). This situation is acceptable according to our criterion, since only one edge polarity magnitude (M+ or |M-|) is needed to trigger the spike counter (MS Excel). This software counter is triggered whenever the transition in the edge output exceeds the threshold voltage by a margin of 20 mV (which is a reasonable sensitivity for comparators, taking FPN noise into account). The extracted data is then used to calculate and plot the percentage detection efficiency at the given threshold voltage, as defined above. This process was repeated for each threshold voltage used, as shown in Fig. 4.34 below. Note that the data used here is based on all values rather than the maximum values used to plot Fig. 4.32.
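The same bookkeeping can be expressed as a short script; the sketch below is an illustrative reconstruction (not the original spreadsheet) that counts logged readings exceeding the threshold by the 20 mV margin mentioned above and divides by the number of edges expected in the logging interval for an ideal detector. The edge-spacing value is derived from the ~5.46 sec idle interval at 15.24 cm/sec and is an assumption for illustration.

import numpy as np

def detection_efficiency(samples, v_th, speed_cm_s, duration_s=117.0,
                         edge_spacing_cm=83.2, margin=0.020):
    # samples: logged |Delta-V| readings (volts), taken at 1 sample/sec
    # v_th: threshold voltage (volts); margin: required excess over threshold (20 mV)
    # edge_spacing_cm: drum travel between successive edges (~15.24 cm/s x 5.46 s)
    detected = np.count_nonzero(np.asarray(samples) > v_th + margin)
    expected = duration_s * speed_cm_s / edge_spacing_cm     # ideal spike count in the interval
    return 100.0 * detected / expected

# e.g. detection_efficiency(logged_dv, v_th=0.10, speed_cm_s=15.24) -> one point of Fig. 4.34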

This graph (Fig. 4.34) shows that the spatial-edge detection efficiency has two extremes: high detection efficiency (~80%) at low speeds, and low detection efficiency at high speeds. The first extreme (peaks at ~80%, 75%, and 70% for threshold voltages of 80 mV, 100 mV, and 150 mV respectively) can be attributed to the existence of false spikes at low threshold voltages, especially at low speeds. The second extreme is attributed to the fading of the edge signal at high speeds (due to increased blur and the degradation of


the resolution). Thus, many "true" spikes will be missed, especially if the threshold voltages used are high. In fact, the detection efficiency reduces to zero at very high speeds and high threshold voltages.

[Figure 4.34 plot: % detection efficiency, 0-90%, versus Speed (cm/sec), 0-250 cm/sec; curves for Vth = 0.08 V, 0.1 V, and 0.15 V.]

Figure 4.34: Effect of the speed of motion on the percentage detection efficiency of the spatial-edge signal ∆V for different threshold voltages (80-150 mV). The 100%-contrast stimulus was moving with constant speeds in the range ~15.24 cm/sec to ~200.39 cm/sec, and was illuminated with a 466 Lux light source. The graph was compiled from data like that plotted in Fig. 4.29.

On the other hand, the edge-detection reliability deteriorates at both extremes (false detections at one extreme and the non-detection of true spikes at the other). It seems that not much can be done for high speeds (>25 cm/sec) because of the very low detection efficiency. However, at "low" speeds (<25 cm/sec, ~0.9 km/hour), which is reasonably high if we take into account the width of the field of view (57 mm), one can choose a threshold voltage as high as 100 mV. This will result in high detection efficiency with acceptable detection reliability; thus a good compromise between edge-detection reliability and efficiency. In any case, it is desirable to use higher threshold voltages, since a lower threshold voltage puts more technical constraints on the sensitivity of the threshold amplifier. It is worthwhile to mention here that, for all threshold


voltages shown in Fig. 4.34, the "velocity tuning" behavior is not as strong as that shown in Fig. 4.32 (which can be regarded as a zero-threshold reading).

All results presented in this chapter are for stimuli that are vertical (parallel to the rows of the imager), since the imager was rotated 90º (vertical bars effectively appear horizontal). Because of the symmetry of the imager array (64 pixel × 64 pixel) used, all analyses and conclusions presented in this chapter are valid for both vertical and horizontal stimuli. However, since our SDS technique for spatial-edge detection is based on differencing outputs from pixels that are in the same columns but in adjacent rows, only those (vertical) edges that are parallel to the imager rows can be detected. This implies a 1D edge detection operation. Even with this limitation, 1D edge detection can find use as a front end for many applications, as mentioned earlier.

4.6 Summary and Discussion

In this chapter, a dual-mode CMOS image sensor with a focal-plane spatial edge

detector was demonstrated in the linear and the logarithmic modes. This real-time analog

VLSI edge detection sensor, which can be a front end for many image processing

applications, is based on a straightforward, yet robust spatial double sampling (SDS)

algorithm, specifically devised to achieve a practical balance between the edge-detection functionality and the captured-image quality, a balance usually ignored by designers [102, 103, 104, 105, 106].

To get better insight into the factors that may limit the performance of our system, we conducted a rigorous parametric investigation and detailed analyses on a fabricated prototype (CYC) chip for the two modes of pixel operation, under different conditions of light intensity, frame-rate, and contrast, for stationary and moving stimuli. The results of these investigations are discussed and compared with other state-of-the-art implementations where appropriate.

Linear mode:

Generally speaking, the performance of edge detection using the Spatial Double Sampling (SDS) technique was superior over a wide range of light intensities, stimulus contrasts, and


frame rates, with reliable, invariant outputs that are well above the noise level (~16.5 mV, dominated by FPN). For instance, the peak-peak edge output signal (Fig. 4.13 and Fig. 4.14) is constant at ~400 mV (independent of illumination intensity) over a wide range of light intensities (824-2560 Lux). Being invariant (stable, not sensitive to the conditions) is a desirable feature in many applications, such as the motion detection of interest here. On the other hand, the edge output signal was significant (~200 mV) even at light intensities as low as 206 Lux, which is high enough (>> 16.5 mV) for reliable edge detection.

Similar results were achieved with contrast variations (Fig. 4.18); the edge output signal was practically constant at ~375 mV peak-peak for contrasts over 60%. The fluctuations were neglected because their level (±12.5 mV) was within the FPN noise level of the CYC imager. At low contrasts, the edge output signal was still significant (~140 mV peak-peak) even at contrasts as low as 20%. Similar results were reported by [102, 105]; however, they were realized with a very low-resolution, very low fill-factor chip dedicated to motion tracking only.

The effect of frame-rate variation on the spatial-edge detection performance (Fig. 4.21) is more relevant to linear imagers than to the edge-detection algorithm, and can be explained by the existence of a range of frame rates where the integration time is optimum for maximum output swing (contrast). In this context, we found that the edge signal is constant (at 410 mV) over a wide range of frame rates (23 f/s-115 f/s), outside of which the edge signal is reduced to 354 mV and 114 mV at 11.5 f/s and 1150 f/s respectively. These figures are still high compared with the noise level in this mode (16.5 mV), and therefore feasible for motion analysis or image segmentation applications. Note that our SDS technique is capable of detecting both edge types (WB-OFF and BW-ON), in contrast to [105], which can detect ON edges only. Therefore, all the issues discussed previously are valid for both edge types.

Logarithmic mode:

With the logarithmic mode, the edge signal level suffers significant degradation (~6 times smaller) due to the logarithmic compression and the high fixed pattern noise (FPN) inherent in this mode. In this mode, where the optical dynamic range is high, there are


two operational regimes: high light intensities (such as those from direct exposure to a lamp or laser) with low frame rates, and normal light intensities with high frame rates. The first regime mandates that special light sources be used in order to conduct any reliable investigations. Here, we focused on the second regime (Fig. 4.26), which was within our scope at this stage.

For light intensities of ~922 Lux or higher, the edge signal is practically constant at 80 mV and 114 mV peak-peak for the ON (BW) edge and OFF (WB) edge respectively. These levels are high enough to ensure reliable edge detection in the logarithmic mode using the SDS technique. At low intensities (<196 Lux), however, the edge signal is too small to be useful for any application. Note that logarithmic-mode operation is limited to high-contrast stimuli; therefore, it was not relevant to conduct any further contrast investigations.

Even though the frame rate should have no effect on image sensors that work in a continuous mode, such as the logarithmic mode here, we conducted a frame-rate investigation to be sure that the frame rate has no effect on SDS edge-detection performance. Fortunately, the results confirmed the independence of our SDS technique from frame-rate variation over a wide range of frame rates (230 f/s-1150 f/s), where the change in output signal with frame rate was very small compared with the system noise level and can therefore be ignored (refer to Fig. 4.27). The minimum peak-peak edge output within the range under investigation (230 f/s-1150 f/s) is greater than 80 mV. This is greater than the noise level in this mode (~17 mV), thus allowing the use of the SDS edge-detection technique with the logarithmic mode in applications where high frame rates are desirable.

Motion:

Since motion detection is one of the potential applications of SDS edge detection, it is desirable to investigate the range of validity of the technique under the influence of stimulus motion. Alternatively, this can be regarded as investigating the effect of edge sharpness on the edge signal, since the sharp bar edges will be smoothed by motion blurring, especially at high speeds. However, our study was limited to the linear mode at this stage, because the edge output signal in the logarithmic mode was not high enough to conduct a reliable investigation.


The lowest SDS edge output signal is ≥100 mV (at the highest speed, ~200.39 cm/sec) in the range under investigation (15.24 cm/sec-200.39 cm/sec), far above the noise level (16.5 mV for the linear mode). Therefore, we can set a threshold at 100 mV to ensure "true" edge detection and thereby increase the detection reliability. At low speeds, the edge signal is as high as ~300 mV, which results in a dynamic range of ~200 mV over the range under investigation (refer to Fig. 4.32).

On the other hand (refer to Fig. 4.34), the percentage detection efficiency is very low (<50%) for speeds ≥25 cm/sec no matter what threshold voltage is used, and it decreases to zero at high speeds. For speeds below 25 cm/sec, a detection efficiency of ≥50% is possible. Choosing a threshold voltage of 100 mV in this case will result in high detection reliability with acceptable detection efficiency (≥50%). Thus, a good compromise between edge-detection reliability and efficiency is reached.

Compared with the sensors reported by Etienne-Cummings et al. [98] and Deutschmann [105], our technique demonstrated a higher dynamic range (~200 mV) and detection reliability over a wide range of speeds (15.24 cm/sec-200.39 cm/sec), not to mention that those sensors suffer from low fill factor, low resolution, and non-scalability to high-density 2D applications.

Limitations:

1. Edge output signals have a bipolar form, which can be attributed to the nature of the spatial double sampling (SDS) algorithm used, where signals from pixels in the same columns but in adjacent (successive) rows are differenced. However, this behavior was also reported by [105].

2. The edge detection is 1D only.

3. Logarithmic-mode SDS operation is limited to high (100%) contrasts.

4. SDS techniques utilize analog VLSI circuitry, which has inherent limitations due to low-precision analog circuits.

5. The logarithmic mode is not suitable for motion detection.


Chapter Five

Temporal Double Sampling (TDS)

5.1 Introduction

Edge detection in the temporal domain is an important image processing functionality,

especially for motion-based applications where edge information is used to detect motion

in the scene. This technique is an extension of that discussed in Chapter 4. We are using

the double sampling method to perform temporal differentiation (temporal edge

detection), which can be used for motion detection. The term “edge” was given to the

temporal differentiation of the light intensity in analogy with the spatial edge (which is

usually determined by spatial differentiation of the light intensity). The operation of

temporal double sampling (TDS), however, is performed on the same pixel rather than on different pixels (pixels in different rows in our case) as in spatial edge detection. In TDS detection, the sequence of events is modified from a sample-reset-sample pattern (spatial edge detection) to a sample-sample-reset pattern, as illustrated in Fig. 5.1 (top) below. In this case, when a row "i" is selected, the two sample-and-hold (S&H) circuits located in each column of the correlated double sampling (CDS) circuit take two sequential samples of the pixel output when the control clocks SH1 and SH2 are asserted, respectively. These samples are then stored and held on the corresponding capacitors of the two S&H circuits. This technique therefore provides a double integration-time output (tint1 at SH1 and tint2 at SH2); hence the name "temporal double sampling". This


operation is executed simultaneously for each pixel in the active row. Then the “Reset”

control clock, shown in Fig. 5.1(top) below, resets all of the pixels in the selected row.

Upon the activation of the read-out column signals (COL), the two stored signals are differenced using a differential amplifier, yielding the TDS (edge) information

(∆V) of interest here. The timing graph shown here (top of Fig. 5.1) is for the linear

mode of operation. However, it is valid for both modes of operation of the pixel (except

that in logarithmic-mode, there is no need to reset the pixels after each sampling, as the

reset signal is always maintained high at VDD as described before). The control clocks

SH1 and SH2 are separated by a time interval (hereafter named the intra-sampling interval

(∆t)) as illustrated in the bottom of Fig. 5.1, which graphically describes the TDS

operation.

[Figure 5.1 timing diagram: control signals Reset, SH1, and SH2 for one row (Row i, 64 columns (COL.)), showing the intra-sampling interval ∆t and the row-preparation period ∆TR.]

Figure 5.1: Top: a comparison between the control timings of one row for TDS operation (upper pattern) and spatial edge detection operation (lower pattern) of the correlated double sampling circuit (CDS). Bottom: a graphical illustration of the temporal double sampling operation.
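To make the sample-sample-reset sequence concrete, the following sketch (Python) integrates a linear-mode pixel, samples it at SH1 and SH2 separated by ∆t, and then differences the two held values to obtain the TDS output ∆V. The reset level, conversion factor, photocurrents, and function names are assumptions for illustration only; this is a simplified behavioural model, not the actual chip timing or circuit.

```python
# Minimal behavioural sketch of linear-mode temporal double sampling (TDS).
# The pixel model (reset level, discharge slope proportional to photocurrent)
# is an assumption for illustration; it is not the measured CYC pixel model.

V_RESET = 3.3        # V, assumed reset level of the photodiode node
SENS    = 5.0e12     # V per (A*s), assumed conversion factor

def pixel_voltage(i_photo, t):
    """Linear-mode pixel output t seconds after reset (ideal ramp, clamped at 0 V)."""
    return max(V_RESET - SENS * i_photo * t, 0.0)

def tds_output(i_photo, t_sh1, dt):
    """Sample at SH1 and SH2 (dt apart), then difference: dV = VSH1 - VSH2."""
    v_sh1 = pixel_voltage(i_photo, t_sh1)        # first held sample
    v_sh2 = pixel_voltage(i_photo, t_sh1 + dt)   # second held sample
    return v_sh1 - v_sh2                         # proportional to i_photo * dt

# A brighter pixel changes more during the same intra-sampling interval,
# so its TDS output is larger (the dVbright > dVdark behaviour of Fig. 5.2).
dt = 20e-6                                       # ~20 microsecond interval
print(tds_output(1.0e-10, 1e-3, dt))             # bright pixel (~10 mV)
print(tds_output(1.0e-11, 1e-3, dt))             # dark pixel (~1 mV)
```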


This parameter will be investigated in more detail in the following sections

because it plays an important role in the performance of temporal double sampling.

Other parameters that may individually or collectively affect the TDS reliability are the

illumination level, the contrast, and the frame rate.

In order to get a better understanding of how these parameters affect the temporal

double sampling output, it is logical to investigate each parameter separately by keeping

the rest of the parameters constant. Moreover, since we are interested in motion detection

applications, it is important to investigate the effect of speed of motion on the TDS

signal. Since the motion of the stimulus also influences how these parameters affect the TDS output, it is necessary, in order to avoid any confusion, to probe their effects on TDS detection with stationary stimuli first, before attempting any moving-stimuli investigation.

The experimental investigation was performed using the prototype chip CYC

(refer to Appendix A). The setup and testing procedure for TDS detection are similar to

those used for spatial edge detection (refer to Appendix B), except that we operate on the

pixel level and use a different pattern to activate the various controls in the imager array

and/or in the CDS circuit as we mentioned above. A typical example of temporal double

sampling in the linear mode is shown in Fig. 5.2 below.

Figure 5.2: Temporal double sampled (TDS) images in the linear mode of a black bar on white background (100 % contrast) stimulus illuminated by white light of intensity ~466 Lux. The image (a) was captured at a high frame rate of ~123 f/s. The edge image (b) was obtained using an intra-sampling interval ∆t of ~20 µsec.


Figure 5.2 (a) shows the image of the 100 % contrast stimulus captured at a frame rate of ~123 f/s and an illumination level of ~466 Lux. The contrast shown is degraded from 100 % because of the various parameters under investigation. The images displayed after the TDS operation (see Fig. 5.2 (b) above) have a different appearance from those captured after the spatial-edge operation. Although they can be recognized in this image, the edges are not as clear as the spatial edges presented in Chapter 4. The image also shows that the gradient in light regions differs from that in dark regions (∆Vbright > ∆Vdark, as graphically illustrated in Fig. 5.1 (bottom)); since the linear-mode pixel output changes at a rate proportional to the photocurrent, brighter pixels change more during ∆t, which implies that an edge can be detected.


Figure 5.3: Temporal double sampling (TDS) in the linear mode of a black bar on white background (100 % contrast) stimulus illuminated by white light of intensity ~466 Lux. The image signals "VSH1" and "VSH2" (shown overlapped) are for one frame and were captured at a high frame rate of ~123 f/s. These signals, which were sampled for each pixel in the imager array, were probed at the output nodes of the buffer amplifiers of the CDS circuit (refer to Chapter 4). The sampling clocks SH1 and SH2 were ~20 µsec (∆t) apart.

For an ideal TDS operation acting on a stationary "edge" stimulus, the TDS output image should have only one gray level. Upon motion of this stimulus, the temporal signal (edge)


will become more recognizable. This is true for frame-difference (global) TDS operation, but not for the pixel-level (local) TDS operation used here, which approximates an instantaneous temporal differentiation.

As we mentioned earlier, the term “edge” was given to the temporal

differentiation of the light intensity in analogy with the spatial edge. However, in the

electrical domain of interest here, the TDS signal ∆V (shown in Fig. 5.3) has the form of a unipolar spike whose polarity can be either positive or negative, depending on the edge type (ON or OFF), as we will see later in this section. Thus, it differs from the spatial edge, which appears as bipolar spikes.

To get better insight into the issue, we divided the TDS detection investigation

into two parts. First, TDS detection with stationary stimuli will be covered for both the linear and the logarithmic modes of operation. In the second part, we will discuss TDS detection under the influence of stimulus motion. This allows us to establish feasibility before attempting any further investigation of motion effects.

5.2 Temporal Double Sampling in Linear Mode

A typical example of the temporal edge detection output signals in the linear mode is shown in Fig. 5.4 below. We chose a black bar on a white background (100 % contrast) in order to get both edge types: an OFF edge (white to black, WB) on the left of Fig. 5.4 and an ON edge (black to white, BW) on the right of the same figure. This resulted in different responses, as depicted by their respective spikes.

In this figure, the left (WB) edge results in a mainly negative spike (~ -208.1 mV)

whereas the right (BW) edge results in a mainly positive spike (~ 368 mV). With these

output levels, reliable TDS operation can be considered practical and feasible. Whether an edge results in a mainly positive or a mainly negative spike depends on which of the sampled signals (VSH1, VSH2) is subtracted from the other and on the type of edge (BW or WB) relative to the array read-out direction (since each type of edge has different initial conditions [98]).

Another interesting observation is that signals from pixels located on the edge border change rapidly. Thus, a larger difference between the sampled outputs VSH1 and VSH2 results, even with intra-sampling intervals ∆t as low as 20 µsec, as


shown in Fig. 5.5 below. This figure presents a magnified view of a region of the graph in Fig. 5.4 located at the BW (right) edge border. The large difference between the two signals leads to a spike, as marked in the figure.

Figure 5.4: Output signals of the linear-mode temporal double sampling (TDS) of a horizontal black bar on white background (100 % contrast) stimulus. The light intensity was ~531 Lux and the frame rate was ~123 f/s. The signals "VSH1" and "VSH2" are shown overlapped at the bottom of the graph. The temporal edge signal ∆V (shown at the top of the graph) was obtained with an intra-sampling interval (∆t) of ~20 µsec.


Figure 5.5: A magnified graph of the TDS output signal of a horizontal stimulus with 100 % contrast illuminated by 531 Lux light. The image signal was captured in the linear-mode at high frame rate of ~123 f/s, with an intra-sampling interval of ~20 µsec (∆t). These sampled signals are then differenced to produce the edge signal “∆V”.

[Plot residue: output signal (V) vs. time (sec), showing the traces VSH1, VSH2, and ∆V, with the black and white regions and the edge spike marked.]


5.2.1 Effect of illumination level on TDS signal

In this section we examine the effect of the illumination intensity level on the TDS information, determined in terms of the voltage difference ∆V between the two output signals of the CDS circuit, VSH1 and VSH2. For each pixel in the imager array, these two signals were captured at a constant frame rate of about 123 f/s and sampled with the sampling clocks (SH1 and SH2, respectively) at an intra-sampling interval of about 61 µsec (∆t), which was kept constant throughout the experiment. The stimulus was a black bar on a white background (100 % contrast).

[Plot for Fig. 5.6: voltage difference -∆V (V) vs. illumination (Lux), with curves Av offset, Av M-, and Av Pk-Pk.]

Figure 5.6: Effect of illumination on the temporal double sampling of 100 % contrast stimulus. The illumination light intensities of light source used are in the range between 214-2560 Lux. The images were captured in linear mode at frame rate of ~ 123 f/s. The edge information was obtained by a constant intra-sampling interval (∆t) of 61 µsec. The plot shown here is for the WB (OFF) edge of the stimulus.

The TDS signal ∆V (for WB and BW edges) was measured and sampled (1

sample/sec for 30 sec) using the same stimulus but under a wide range of illumination

light intensities (214-2560 Lux). The data (30 samples for each edge type and each


illumination level) were stored in a computer, where they were averaged and plotted as shown in

Fig. 5.6 and Fig. 5.7.

Figure 5.6 shows the effect of illumination on the TDS signal, ∆V, which is (as

we noticed earlier) a spike with mainly negative magnitude (M-) for WB edges. A

positive offset (middle curve) of about 28 mV is almost constant for all illuminations.

The peak-peak value, which the Tektronix TDS360 oscilloscope defines as Max (the offset here) minus Min (M-), is therefore a positive quantity, as shown in the graph above. In this graph, the edge signal magnitude |∆V| at first increases rapidly with illumination intensity until it reaches a maximum (~355 mV [~387 mV peak-peak] at 743 Lux); the curve then gradually decreases toward a minimum.

[Plot for Fig. 5.7: voltage difference -∆V (V) vs. illumination (Lux), with curves Av M+, Av Offset, and Av Pk-Pk.]

Figure 5.7: Effect of illumination on the TDS output of a 100 % contrast stimulus. The illumination intensities of the light source used are in the range 214-2560 Lux. The images were captured in linear mode at a frame rate of ~123 f/s. The interval ∆t was kept constant at 61 µsec. The plot shown here is for the BW (ON) edge of the stimulus.

According to the observations, this behavior can be easily explained in terms of

contrast (output swing), which has a strong effect on edge signal output magnitude. We

noticed that, at low light intensities, the captured images usually have very poor contrasts.


However, the contrast improves as the illumination level increases until it reaches a certain point, after which the process reverses and the contrast starts to decrease again with illumination intensity. This point is the saturation point of the pixels in the imager.

The effect of illumination intensity on the TDS signal for the BW edge shows

similar behavior, as depicted in Fig. 5.7 above, with a maximum ∆V located at 392 mV

(434 mV peak-peak) at an intensity of 884 Lux. The difference in ∆V maximum

locations between the two edge types could be attributed to the non-uniformity of the

illumination intensity distribution on the stimulus surface, which depends on the light

source position relative to each edge, and/or to the different initial conditions of each edge, as reported in [98].

5.2.2 Effect of stimulus contrast on TDS signal

The effect of stimulus contrast on the TDS signal ∆V is very important; therefore, this

effect was studied under a wide stimulus contrast range that starts from 100% down to

30% (the lowest limit for any significant TDS response). The stimuli were black bars

over backgrounds with different gray levels to generate contrasts ranging from 30-100 %.

The light intensity level was kept constant at 531 Lux. These images were captured in

the linear mode at a frame rate of ~123 f/s. As mentioned before, it is the contrast of the

captured image (effective contrast) that directly affects the TDS output signal and not the

stimulus contrast. In fact, the contrast of the captured images is usually a degraded

version of the stimulus contrast. Accordingly, we used 3D-graphs to show the contrast of

the captured images (effective contrast) in response to “physical” stimuli. These stimuli

contrasts range from 30-100%, as illustrated in Fig. 5.8 below. It is obvious that the

captured images have much lower contrasts compared with those of stimuli. Although

using the effective contrast is more accurate in characterizing the TDS detection

technique (since it excludes the influence of the imager), we always refer to the stimuli

contrast (consistent with standardization) whenever ‘contrast’ is mentioned in our

analysis throughout this chapter. The TDS was performed using an intra-sampling

interval of 61 µsec (∆t), which was kept constant throughout the experiment. The voltage

difference ∆V was automatically measured 30 times for each edge of the stimulus-bar

and at each contrast in the range under investigation (30-100%). The data were then


stored in the computer where they were averaged and plotted as shown later in Fig.5.9

and Fig.5.10.
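Since the effective contrast of the captured image, rather than the nominal stimulus contrast, is what drives the TDS output, a small sketch may help. The Michelson-style definition used below is an assumption for illustration (the exact contrast formula is not specified here), and the array and sample values are hypothetical.

```python
import numpy as np

def effective_contrast(image):
    """Michelson-style contrast (Vmax - Vmin)/(Vmax + Vmin); an assumed
    definition, used only to illustrate why the captured ("effective")
    contrast is lower than the nominal stimulus contrast."""
    v_max = float(np.max(image))
    v_min = float(np.min(image))
    return (v_max - v_min) / (v_max + v_min)

# Hypothetical pixel output levels (V): a 100 % contrast stimulus whose
# captured black level is lifted by offsets, leakage, and limited swing.
captured = np.array([[1.9, 1.9, 0.9, 0.9],
                     [1.9, 1.9, 0.9, 0.9]])
print(round(effective_contrast(captured), 2))   # ~0.36, well below 1.0
```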

Figure 5.8: 3D views showing samples of the imager response (captured images) for different stimulus contrasts (30-100 %), indicated to the left of the plots. No TDS operation is applied here. The illumination light intensity level was kept constant at 531 Lux. These images were captured in linear mode at a frame rate of ~123 f/s.


[Plot for Fig. 5.9: voltage difference -∆V (V) vs. contrast (%), with curves Av Offset, Av M-, and Av Pk-Pk.]

Figure 5.9: Effect of contrast on TDS for the WB edge of black bar over gray background stimuli. Different gray levels were used to generate stimuli with contrasts that range from 30-100 %. The illumination light intensity, frame rate, and ∆t were all kept constant at 531 Lux, ~ 123 f/s, and 61 µsec respectively.

[Plot for Fig. 5.10: voltage difference -∆V (V) vs. contrast (%), with curves Av M+, Av Offset, and Av Pk-Pk.]

Figure 5.10: Effect of contrast on TDS for the BW edge of the stimuli. The stimulus contrasts range from 30-100 %. The illumination light intensity, the frame rate, and the interval ∆t were all kept constant at 531 Lux, ~123 f/s, and 61 µsec, respectively.


Figures 5.9 and 5.10 show the effect of the contrast on the TDS output signals (∆V),

which are spikes with mainly negative magnitude (M-) for WB-left edges, and spikes

with mainly positive magnitude (M+) for the BW-right edge. These graphs also show

that the TDS signal magnitude |∆V| and peak-peak value are monotonically increasing

with contrast for both edge types. However, the BW edge TDS magnitude changes faster (from ~65 mV to ~362 mV) than the WB edge TDS magnitude (from ~68 mV to ~256 mV) over the same contrast range. This is also true for the peak-peak

values as shown in Fig. 5.9 and Fig. 5.10. The difference between TDS signal

magnitudes (peak-peak) at maximum contrast (100%) of the two edge types is very large

(> 100mV), whereas this difference is in the noise range (~3mV) at minimum contrast

(30%). The large difference in the signal at maximum contrast (100%) can be attributed

mainly to non-uniformity of the illumination intensity distribution on the stimulus plane,

which depends on the light source position relative to each edge. At low contrast,

however, signals from both edges are already degraded to their lowest levels (~65-68 mV).

This result suggests that reliable temporal edge detection is feasible even at low

contrasts (30%) where edge output signals are in the 60 mV range. This allows room for

its use in a wide range of applications. The apparent fluctuation in the edge signal,

especially for the WB edge, is due to the fact that every point in the graph can be considered

as an independent experiment (since stimuli are changed for each point). However, this

has no effect on the qualitative arguments discussed here.

5.2.3 Effect of ∆t and frame rate on TDS signal

In this section, we present the effects of both the intra-sampling interval ∆t and the frame rate together, because both are related to the general system timing, which is controlled by the system main clock (the pattern generator frequency here). For a given pattern generator frequency and control timing pattern (refer to Fig. 5.1), the number of bits that represents the interval ∆t is translated to a time in seconds, and the total number of bits that constitutes the whole pattern is translated to the 'frame time' in seconds, which is the reciprocal of the frame rate in frames/sec (f/s). So, when we change the length (number of bits) of ∆t, the row-preparation period labeled ∆TR in Fig. 5.1 will change


accordingly. As a result, the length of the whole frame will change as will the frame rate

(integration time) if the pattern generator frequency is kept constant. This explains why

higher frame rates are used in most TDS investigations (to maintain reasonable

integration times).
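The translation from pattern bits to timing can be sketched as follows (Python). The clock frequency, bit counts, and function name are illustrative assumptions, not the actual CYC pattern values.

```python
# Sketch of how pattern-generator bits map to dt, frame time, and frame rate.
# All numbers below are assumed for illustration.

def timing_from_pattern(clock_hz, dt_bits, frame_bits):
    """Each pattern bit lasts one clock period; dt and the frame time scale
    with the number of bits allotted to them."""
    bit_period = 1.0 / clock_hz
    dt = dt_bits * bit_period              # intra-sampling interval (s)
    frame_time = frame_bits * bit_period   # whole pattern length (s)
    frame_rate = 1.0 / frame_time          # frames per second
    return dt, frame_time, frame_rate

# Example (assumed): a 1 MHz pattern clock, 61 bits for dt, ~8160 bits per frame
dt, t_frame, fps = timing_from_pattern(1.0e6, 61, 8160)
print(dt, t_frame, round(fps, 1))   # lengthening dt lengthens the frame
                                    # and lowers the frame rate
```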

We measured the effect of the interval ∆t on the temporal edge signal ∆V for

different frame rates. The stimulus has a 100% contrast and was illuminated by a

constant light intensity of 466 Lux. The interval ∆t was changed from 1 µsec to 188 µsec

for each frame rate, and the frame rate was varied from ~ 39.82 f/s to 122.52 f/s. The

output TDS signal ∆V was sampled 30 times for each ∆t at each frame rate. The data were

then averaged and plotted, as shown in Fig. 5.11 and Fig. 5.12, for the magnitude and

peak-peak value respectively. It is worth noting that the data presented in this section are

limited to the right (BW) edge of the stimulus bar (positive spikes). The results,

however, are similar to those of the left (WB) edge of the bar (negative spikes). They

differ primarily in the sign of the TDS output signal magnitude. Therefore, the

conclusions drawn here are general and applicable to both types of edges.

[Plot for Fig. 5.11: voltage difference -∆V (V) vs. ∆t (sec) for frame rates of 39.82, 42.88, 49.01, 55.13, 61.26, and 122.52 f/s.]

Figure 5.11: Effect of the intra-sampling interval ∆t on the magnitude (M+) of TDS signal ∆V for different frame rates. The stimulus has 100% contrast and the illumination intensity was kept constant at 466 Lux. The interval ∆t was changed from 1 µsec to 188 µsec for each frame rate. This frame rate was varied from ~ 39.82 f/s to 122.52 f/s.


The graphs in Fig. 5.11 and Fig. 5.12 show that, at low ∆t, the TDS signal (∆V)

increases rapidly with increasing ∆t, especially at high frame rates as shown. But at a

certain ∆t value, the edge signal starts to level off toward a steady state value. This point

is very important because it defines the minimum value of an intra-sampling interval

∆tmin that results in maximum TDS signal ∆VSat; hence, increasing ∆t beyond this value

has no advantages. Furthermore, this “saturation” TDS signal magnitude, ∆VSat,

increases with frame-rate. However, at some frame rate, this maximum TDS signal

reaches its peak value, ∆VSat, max, after which any further increase in the frame rate will

not lead to any increase in ∆VSat, but instead, to a decrease in ∆VSat (compare ∆VSat for

frame rates 61.26 f/s and 122.52 f/s in Fig. 5.12 below).

[Plot for Fig. 5.12: voltage difference -∆V (V) vs. ∆t (sec) for frame rates of 39.82, 42.88, 49.01, 55.13, 61.26, and 122.52 f/s (peak-peak values).]

Figure 5.12: Effect of the intra-sampling interval ∆t on the peak-peak value of TDS signal ∆V for different frame rates. The stimulus has 100% contrast and the light intensity was kept constant at 466 Lux. The interval ∆t was changed from 1 µsec to 188 µsec for each frame rate. This frame rate was varied from ~ 39.82 f/s to 122.52 f/s.

Now, in order to get better insight into these issues, the loci of the onsets of

“saturation” TDS signals (∆tmin, ∆VSat) were first obtained from these graphs (Fig. 5.11

and Fig. 5.12) using asymptotes, as shown in Fig. 5.13 below. This sample graph shows


the effect of the intra-sampling interval ∆t on the magnitude (M+) of TDS signal ∆V at a

frame rate of 55.13 f/s. Other frame rate curves showed similar qualitative behavior. To

examine the role of frame rates on the “saturation” onset location, four other graphs were

generated from the measured data to show the effect of frame rates on the “coordinates”

of the “saturation” point (∆tmin , ∆VSat) for both the magnitude (M+) and peak-peak

values. These graphs are shown in Fig. 5.14, Fig. 5.15, and Fig. 5.16 below.

[Plot for Fig. 5.13: voltage difference -∆V (V) vs. ∆t (sec) at 55.13 f/s, with asymptotes marking the saturation point (∆tmin at ∆VSat).]

Figure 5.13: The effect of the interval ∆t on the magnitude (M+) of TDS signal ∆V at frame rate of 55.13 f/s used to show how we calculated the saturation point (min ∆t at max ∆V) of temporal edge signal. The stimulus has 100% contrast and the illumination light intensity was kept constant at 466 Lux. The interval ∆t was varied from 1 µsec to 188 µsec.
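The extraction of the saturation point can also be approximated numerically rather than graphically. The sketch below (Python) uses made-up sample data and an assumed 95 %-of-maximum criterion in place of the graphical asymptote construction, and finds ∆tmin as the first ∆t at which ∆V reaches the saturation level ∆VSat.

```python
import numpy as np

def saturation_point(dt_values, dv_values, frac=0.95):
    """Estimate (dt_min, dV_sat) from a dV-vs-dt curve.
    dV_sat is taken as the plateau (mean of the last few points), and
    dt_min as the first dt reaching frac * dV_sat; the 95 % criterion
    is our stand-in for the asymptote construction of Fig. 5.13."""
    dv_sat = float(np.mean(dv_values[-3:]))                  # plateau estimate
    idx = np.argmax(np.asarray(dv_values) >= frac * dv_sat)  # first index at plateau
    return dt_values[idx], dv_sat

# Illustrative data shaped like Fig. 5.13 (values are not measurements)
dt = [1e-6, 10e-6, 20e-6, 40e-6, 60e-6, 90e-6, 120e-6, 150e-6, 188e-6]
dv = [0.01, 0.06, 0.10, 0.14, 0.16, 0.168, 0.170, 0.171, 0.171]
print(saturation_point(dt, dv))   # e.g. (~9e-05 s, ~0.17 V)
```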

The saturation onset level ∆VSat for both the magnitude (M+) and the peak-peak value follows a similar "parabolic" relationship with frame rate, as shown in Fig.

5.14 and Fig. 5.15 respectively. In this case, the maximum ∆VSat, max value for both the

magnitude (224.4 mV) and the peak-peak value (329.6 mV) occur at a frame rate of

about 87.6 f/s. Again, from our observation, these results can be best explained by the

enhancement or degradation of the captured-image contrast with frame rate. In this

context, we start by analyzing the two extremes of the frame-rate range: low frame rates and high frame rates.


[Plot for Fig. 5.14: ∆VSat (mV) vs. frame rate (f/s) for the magnitude (M+), with a polynomial fit (R² = 0.9683) peaking at 224.4 mV.]

Figure 5.14: The average saturation level ∆VSat of the magnitude (M+) of the TDS signal ∆V as a function of the frame rate in the range from 39.82 f/s to 122.52 f/s. The stimulus has 100 % contrast and the illumination light intensity was kept constant at 466 Lux. The maximum is at 87.6 f/s (224.4 mV).

At low frame rates, the integration times are longer. This results in higher imager

output signals, but at low image contrasts. The captured images appear bright but with

no details, due to the inability of the array pixels to resolve detail, especially at high illumination levels. Again, this situation can be regarded as "electronic" blooming, which results in images similar to those from overexposed film. Accordingly, the TDS output signal ∆V, which is a strong function of the contrast, will be low at low frame rates; hence the low

“saturation” onset level (∆VSat), as shown in Fig. 5.14 and Fig. 5.15 for the magnitude

(M+) and peak-peak values respectively.

At high frame rates, however, the output levels of the pixels are very small

because of the shorter integration times involved. Hence, the captured images appear


very dark (almost black like the logarithmic pixel, where integration time is equal to

zero). In this case, it is difficult to differentiate between the black bar and the white

background of the stimuli. This implies a large degradation in the contrast (output swing)

of the captured image and a low TDS output signal ∆V (as expected) and, therefore, low

“saturation” onset levels (∆VSat) as shown in Fig. 5.14 and Fig. 5.15 for the magnitude

(M+) and peak-peak values respectively.

[Plot for Fig. 5.15: ∆VSat (mV) vs. frame rate (f/s) for the peak-peak values, with a polynomial fit (R² = 0.9671) peaking at 329.6 mV.]

Figure 5.15: The average saturation level ∆VSat of the peak-peak value of the TDS signal ∆V as a function of the frame rate in the range from 39.82 f/s to 122.52 f/s. The stimulus has 100 % contrast and the illumination light intensity was kept constant at 466 Lux. The maximum is at 87.6 f/s (329.6 mV).

Between these two extremes, there is an optimum frame rate (~ 87.6 f/s in this

case) at which a maximum output swing (contrast) can be obtained; thus, a maximum

∆VSat, max level can be achieved as shown above. This is a very interesting finding since

it suggests that there is an optimal frame rate (integration time) at which a maximum

level for ∆VSat can be obtained and, therefore, a better TDS detection performance can be


achieved. Note that this optimal frame rate is very much connected with the integration

time, and thus, it can be easily lowered if we can devise a method to maintain the same

integration time. The most straightforward method is to reset the pixel more than once

per frame to reduce the integration lengthening due to the lower frame rate.

Unfortunately, this feature was not included when this TDS detection technique was

designed.

Figure 5.16: ∆tmin (minimum ∆t at maximum ∆V) as the frame rate is varied in the range from 39.82 f/s to 122.52 f/s. The stimulus has 100 % contrast and the illumination light intensity was kept constant at 466 Lux. This plot is for both the magnitude and peak-peak values of the TDS signal ∆V.

The graph in Fig. 5.16 shows the effect of the frame rate on the other "coordinate" of the "saturation" onset point, that is, the minimum intra-sampling interval ∆tmin at

which the maximum signal (∆VSat) occurs. This is a key factor in determining the

optimal TDS performance conditions since no advantage can be gained by increasing ∆t

above this value. The results shown in Fig. 5.16 for the magnitude M+ or peak-peak


values are encouraging and suggest that increasing the frame rate will allow us to achieve

∆VSat at a lower ∆tmin. Of course, we cannot increase the frame rate to any desired value

since we are bounded by the optimal ∆VSat, max requirement.

5.3 Temporal Double Sampling in Logarithmic Mode

The experimental setup and testing procedure for TDS detection in the logarithmic mode are similar to those used for the linear-mode testing (refer to Appendix B). However, they differ in the timing pattern, since in the logarithmic mode the pixel reset clock is always held high at VDD, so that the reset transistor operates continuously in the sub-threshold regime. This results in the logarithmic photocurrent-to-voltage conversion, the main feature of this mode of operation. Other differences include the extra stimuli and light sources used. Here, high-intensity light sources such as a bulb lamp and/or red diode lasers are used either as illumination sources or as direct-exposure stimuli. A typical example of the TDS signal detection in the logarithmic mode (spikes) is shown in Fig. 5.17 below.
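The compressed output swing that limits logarithmic-mode TDS can be seen from the usual sub-threshold pixel relation. The sketch below (Python) uses the standard logarithmic-pixel model with assumed constants (thermal voltage, slope factor, reference current, reference output level); these are illustrative values, not parameters extracted from the CYC chip.

```python
import math

# Assumed constants for a generic logarithmic pixel (illustrative only)
VT   = 0.026    # V, thermal voltage kT/q at room temperature
N    = 1.5      # sub-threshold slope factor (assumed)
I0   = 1e-15    # A, reference current of the reset transistor (assumed)
V_HI = 2.5      # V, nominal output at the reference current (assumed)

def log_pixel_output(i_photo):
    """Sub-threshold (logarithmic) photocurrent-to-voltage conversion:
    the output drops by only n*VT per e-fold of photocurrent."""
    return V_HI - N * VT * math.log(i_photo / I0)

# A 100 % contrast scene spanning one decade of photocurrent gives an
# output swing of only ~ n*VT*ln(10), roughly 90 mV here, and far less at
# low light where the black and white photocurrents are close together.
print(log_pixel_output(1e-12) - log_pixel_output(1e-11))   # ~0.09 V
```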

Figure 5.17 shows no evidence that a reliable TDS output signal can be obtained in logarithmic-mode operation at these low light intensities. This can be attributed to the poor output swing (contrast) in the logarithmic mode, due to the logarithmic compression, especially at low light intensities (refer to the conventional curve in Fig. 3.11). Moreover, the logarithmic APS suffers from inherently high fixed pattern noise (FPN) due to the exponential relationship of the photocurrent with the threshold voltage Vth (the main source of FPN). Collectively, these effects may lead to false spikes (labeled "Φ" in the graph) and thus degrade the TDS detection

reliability. In fact, even those spikes labeled as “edge spikes” in the graph are not reliable

since they are masked by the high digital noise (labeled by “O” in the graph). The TDS

detection reliability is important in applications where the detected TDS signal (spikes)

will be used as a front-end for higher image processing functionalities, such as in motion

detection applications. The origin of this digital noise (~1 kHz) in this case is the row-

select clock generator. This digital noise can be reduced by the physical separation

between the analog and digital blocks using ground rings and/or using separate power


supplies for each block. However, a complete separation is not feasible in the 0.5µm or

0.35µm technologies, since all ground pads are connected through the common substrate.

[Annotations in Fig. 5.17: very low output swing; "O" marks noise spikes in the output; "Φ" marks false spikes due to noise; edge spikes (∆V); frame flags; black and white regions.]

Figure 5.17: Output signal of the logarithmic-mode TDS of a black bar on white background (100 % contrast) stimulus illuminated by white light of intensity ~133 Lux. The image was captured at a high frame rate of ~114 f/s. The signals "1" and "2" shown are the outputs of the CDS circuit, VSH1 and VSH2 respectively. The two signals were sampled with an intra-sampling interval (∆t) of 6 µsec. These signals are then differenced to produce the edge signal "∆V" shown at the top of the graph.

There is no evidence that reliable TDS detection (important for motion detection

applications) exists in the logarithmic mode at these low intensities. However, through

the use of surface and 3D graphs to plot the data array after the TDS operation, we noticed an enhancement (a steeper transition) in the contrast in the 3D views that may result in sharper images, as shown in Fig. 5.18 and Fig. 5.19 below. This can be explained by the fact that differencing using the CDS circuit removes most of the DC component (offset), which is almost equal in the outputs of all pixels. This stretches the difference between

“Maximum” and “Minimum” gray levels of the captured image (compare (b), (d) of Fig.

5.18 and Fig. 5.19). It is noted that the TDS output, |∆V|, is larger for the white (light)


regions than the TDS output for the black (dark) regions. This was also confirmed by

Fig. 5.1 (bottom graph) and by Fig. 5.2 (b) in the previous sections.

Figure 5.18: Surface and 3D-view of the TDS in logarithmic-mode. The stimulus is a black bar on white background (100 % contrast), which was illuminated by light of intensity of ~ 133 Lux. The image (a, b) was captured with frame rate of ~ 81.4 f/s. The image after TDS operation (c, d) was obtained with intra-sampling interval (∆t) of 58 µsec.

Upon examination of the effect of frame rate on the performance of the TDS

contrast stretching in the logarithmic-mode, we set the frame rate to about 81.4 f/s and

61.3 f/s as shown in Fig. 5.18 and Fig. 5.19, respectively. When comparing the surface

and 3D-views of the captured images (a), (b) at the two frame rates, no important

difference was found between them. The spots shown in the surface graphs can be

attributed to bad pixels since the same pattern was repeated in both cases. The FPN noise

is clearly seen in the 3D-views for both frame rates. Again, no significant difference

between FPN levels was observed. The respective outputs after the TDS operation for ∆t

at 58 µsec and 122 µsec are shown in Fig. 5.18 (c), (d), and Fig. 5.19 (c), (d). Comparing


these TDS outputs, we noticed that there is more contrast enhancement using a ∆t of 122 µsec (~16 times) than with a ∆t of 58 µsec (~7 times). On the other hand, the output in the 58 µsec case exhibited lower noise levels than in the 122 µsec case.

Both observations can be explained by the fact that the sampled output, VSH1, will

suffer from larger degradation (due to leakage) using a ∆t of 122 µsec (longer time) as

shown in Fig. 5.20 below. Therefore, when VSH1 is differenced from the second sample

VSH2 (recall from Fig. 5.1 that ∆t = t2 - t1), a relatively larger difference signal ∆V is obtained than with a ∆t of 58 µsec (Fig. 5.18 (d) and Fig. 5.19 (d)). Thus, higher contrast stretching (enhancement), as shown, is found in the 122-µsec case.

Figure 5.19: Surface and 3D-view of the TDS in logarithmic-mode. The stimulus is a black bar on white background (100 % contrast), which was illuminated by light of intensity of ~ 133 Lux. The image (a, b) was captured with frame rate of ~ 61.3 f/s. The image after TDS operation (c, d) was obtained with intra-sampling interval (∆t) of 122 µsec.

Now, because the sampled outputs are composed of the captured image plus fixed pattern noise (FPN), which is, as its name implies, "fixed" for a given imager, and since


VSH1 suffers less degradation in the 58-µsec case (shorter hold time), the noise is more efficiently reduced by differencing when ∆t is 58 µsec than when ∆t is 122 µsec, because the two sampled levels are more similar.
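A small numerical sketch may make this leakage argument concrete (Python). The exponential droop model, decay constant, and FPN offset are assumptions for illustration, not measured values for the CYC sample-and-hold capacitors.

```python
import math

TAU_LEAK = 2e-3   # s, assumed hold-capacitor leakage time constant

def held_value(v_sampled, hold_time):
    """Value remaining on the S&H capacitor after hold_time (exponential
    droop toward 0 V; a crude stand-in for the leakage in Fig. 5.20)."""
    return v_sampled * math.exp(-hold_time / TAU_LEAK)

def tds_difference(v_pixel, fpn_offset, dt):
    """VSH2 - VSH1 for a static logarithmic-mode pixel: the true signal and
    the FPN offset are identical in both samples, so only the extra leakage
    of the earlier sample produces a residual difference."""
    v_sh1 = held_value(v_pixel + fpn_offset, dt)   # first sample droops for dt longer
    v_sh2 = v_pixel + fpn_offset                   # second sample, just taken
    return v_sh2 - v_sh1

for dt in (58e-6, 122e-6):
    print(dt, round(tds_difference(1.2, 0.05, dt) * 1e3, 2), "mV")
# The 122 us interval leaves a larger residual: more apparent "contrast
# stretching", but poorer cancellation of the common FPN component.
```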

Figure 5.20: A graphical illustration of the degradation of the sampled output with time (due to leakage) and its effect on the TDS output signal when using different intra-sampling intervals.

At higher light intensities, such as that from direct exposure to the bulb lamp, as

shown in Fig 5.21 (a), the results are different. A black, blurry edge can be observed as

shown in Fig. 5.21 (b). It was also noted that the level of FPN in the captured image is

very high (as expected) due to the exponential relationship between the threshold voltage

Vth and the photocurrent (refer to Chapter 6 for more detailed information). To get better

insight, we used surface and 3D graphs to show the situation before and after the TDS

operation, as illustrated in Fig. 5.21. The raw image before edge detection is also shown

in Fig. 5.21 (a1), (a2).

The fuzziness in the detected edges, shown in Fig. 5.21 (b), is due to high levels

of fixed pattern noise (FPN) inherent in the logarithmic image sensor operation and also

due to the curved surface of the bulb-lamp stimulus used. The ambiguity in edge

detection using the TDS algorithm is about ± 2 pixels. The edge of the lamp, however, is

clearly depicted in the surface and 3D graphs in Fig. 5.21 (b1, b2), respectively.

Therefore, the TDS operation can be considered fairly feasible at very high light

intensities, such as those used here.


Figure 5.21: Temporal double sampling (TDS) in the logarithmic mode of a bulb lamp at an intensity level of ~7000 Lux. The image (a, a1, a2) was captured at a frame rate of ~61.3 f/s. The TDS edge image (b, b1, b2) was obtained with an intra-sampling interval ∆t of ~122 µsec.


To further explore the range of feasibility, another experiment was performed

using a laser diode as a source of high intensity light illumination (3.714mW/cm2). The

laser diode used has a red wavelength of 670 nm, which makes this experiment more

interesting since CMOS-compatible n+/p-substrate photodiodes have low photosensitivity to red wavelengths (compared with that to green wavelengths, which constitute a large portion of white light). Therefore, this experiment can be considered, to some extent, a worst-case situation. Here, the detected edge is quite fuzzy.

The noise levels appear to increase in the edge detected output (Fig. 5.22 (c, d)) when

compared to the noise levels in the raw image output (Fig. 5.22 (a, b)). The fuzziness in

the detected edge is attributed to the “blooming” in the captured images inherent to

CMOS image sensor operation at high light intensities, whereas the increase in noise artifacts is attributed to the noise level becoming significant relative to the DC (offset) level, as explained below.


Figure 5.22: A Logarithmic-mode TDS output of a red (670 nm) diode Laser spot. The light intensity level was ~3.714mW/cm2. The image (a) (b) was captured with frame rate of ~ 61.3 f/s. The TDS output (c), (d) was obtained with ∆t of 122 µsec.


In the raw image, this noise level is very small compared with the high DC level,

whereas in the edge-detected output, this noise level becomes comparable to the edge

signal output (almost without DC component because of differencing). Note that the DC

(offset) component (almost constant) always vanishes after differencing, while the noise,

which is either random or uncorrelated (comprised of the spatial FPN noise plus other

temporal noise sources), will remain. Therefore, the apparent increase in noise level with

temporal edge detection operation is an artifact (due to display magnification) and not a

true increase in noise due to edge detection, as is clearly depicted in Fig. 5.23 below. The

edge spike is clear at ~61.42 mV, compared with the noise level, which is < 10 mV.

These data were averaged over 10 samples using WaveStar automatic logging software.

[Annotations in Fig. 5.23: detected edge spike (~61.42 mV), laser spikes, and frame flags.]

Figure 5.23: Output signal of the logarithmic-mode TDS of a red (670 nm) diode laser spot. The light intensity level was ~3.714 mW/cm2. The image outputs "1" and "2", captured at a frame rate of ~61.3 f/s, are VSH1 and VSH2, respectively. The TDS output ∆V, obtained with ∆t of 122 µsec, is labeled "3" at the top of the graph.


5.4 Temporal Double Sampling with a Stimulus in Motion

So far, we have discussed temporal double sampling for stationary stimuli and explored the operation before complicating the situation by using moving stimuli. In particular, we investigated the feasibility of TDS detection in the logarithmic mode with stationary stimuli first, before attempting any motion-effect analyses. Because the reliability of TDS detection was poor for this mode, especially at low light intensities, we concluded that logarithmic-mode operation is not suitable for motion detection at the illumination levels of interest to us (low and moderate light intensities). We have left the investigation of TDS feasibility at high light intensities for future work. Therefore, no further analysis of this mode will be conducted here, and only the linear (integrating) mode of pixel operation was tested and analyzed under motion conditions.

The experimental setup and test procedure for temporal double sampling for

moving stimuli is similar to the one used for stationary stimuli (refer to Appendix B),

except that the stimuli used are moving drums with different diameters (instead of the flat

stimuli used before) connected to a controllable speed motor (refer to Appendix B).

The output was measured using the standard conditions with a 100% contrast

stimulus illuminated by a light source with an intensity of 466 Lux. The frame rate was

61.3 f/s and ∆t was 122 µsec. The edge signal was measured using a Tektronix TDS 360

digital real-time oscilloscope and logged using WaveStar software for about 117 sec with

a sampling rate of 1 sample/sec. A typical strip chart for temporal edge output for a

stimulus moving with constant speed of ~15.24 cm/sec is shown in Fig. 5.24.

This log chart shows the M+ (for BW edge) and M- (for WB edge) values, which are defined by the Tektronix oscilloscope as the signal Maximum and Minimum, respectively. It also shows differences in the level of the TDS output spikes, which can be understood from the following discussion.

For a stimulus speed of about 152.4 mm/sec, it takes only ~374 ms for an edge to pass through the imager field-of-view (~57 mm), which was calculated as before (refer to Fig. 4.30 in Chapter 4) for an array width Bmax = 64 pixels x 30 µm = 1.92 mm, with a standard wide-angle lens of ~35º and a focal length of 3 mm. The stimulus was ~90 mm from the lens plane, whereas the imager was located at the focal plane to get the sharpest image possible.

[Strip chart for Fig. 5.24: temporal edge ∆V (V) vs. time, traces M+ and M-.]

Figure 5.24: A typical TDS output signal (∆V), logged (with WaveStar) for about 117 sec with a sampling rate of 1 sample/sec. The 100 % contrast stimulus was moving with a constant speed of ~15.24 cm/sec and was illuminated with a 466 Lux light source. The images were captured at a frame rate of 61.3 f/s. The interval ∆t was 122 µsec. This strip chart shows the M+ (BW edge) and M- (WB edge) values, which are defined by the Tektronix oscilloscope as Max and Min, respectively.


On the other hand, it takes at least 16.324 ms for the imager array to be fully scanned (read out) at the operating frame rate (61.3 f/s) used here. In other

words, the imager array can “see” the edge for ~374 ms at this speed. Therefore, if an

edge enters the imager field-of-view (at this speed) as the scanner is reading out the first

pixel (column) of the imager array, then ~23 frames will be read out before the edge

leaves the imager field-of-view. This number of frames changes with the speed of the

stimuli and/or frame rate. When WaveStar samples at 1 sample/sec for 117 sec, the

sampled data will be either active (sampled when the edge is in the field-of-view) or idle

(sampled after the edge has left the field-of-view) depending on the initial conditions,

speed of the stimuli, and/or frame rate.
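The ~374 ms window and the ~23 visible frames follow from simple geometry and timing. The short sketch below reproduces the arithmetic (Python); the variable names are ours, and the pinhole-style magnification used to estimate the field-of-view is an assumed simplification of the optics described above.

```python
# Worked numbers behind the "edge visibility" argument above.
PIXELS        = 64        # array width in pixels
PIXEL_PITCH   = 30e-6     # m
FOCAL_LENGTH  = 3e-3      # m
OBJECT_DIST   = 90e-3     # m, stimulus distance from the lens plane
FRAME_RATE    = 61.3      # f/s
SPEED         = 0.1524    # m/s (15.24 cm/sec)

array_width = PIXELS * PIXEL_PITCH                  # 1.92 mm
fov = array_width * OBJECT_DIST / FOCAL_LENGTH      # ~57.6 mm (simple pinhole scaling)
t_visible = fov / SPEED                             # ~0.378 s (~374 ms quoted above)
frame_time = 1.0 / FRAME_RATE                       # ~16.3 ms per full readout
frames_seen = int(t_visible / frame_time)           # ~23 frames
print(round(fov * 1e3, 1), round(t_visible, 3), frames_seen)
```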

For “active” samples, the position of the edge in the field-of-view is the key factor

in determining the level of the TDS output signal, ∆V (refer to Fig. 4.30 in Chapter 4),

because this level is affected by the curvature of the stimulus drum and/or its position

relative to the light source. Since neither the stimulus speeds nor the readout clocks are

perfectly stable, the edge can assume any position in the field-of-view during the 117 sec

logging interval. This explains the fluctuations in the TDS output spikes. Also, these

fluctuations can be attributed to the slow sampling rate of WaveStar (1 sample/sec), which causes TDS spikes to be sampled in different frames. Moreover, the FPN-PRNU noise levels and/or the existence of bad pixels at the edge position affect the TDS output signal. In general, all these factors may collectively contribute to the apparent fluctuation in the level of the edge spikes shown in Fig. 5.24. This argument is valid for both edge types, the BW and the WB edges. However, since the initial conditions of the pixels

are different for each type [98], a difference in the edge signal ∆V was observed between

the two types of edges. The BW edge TDS output (M+) is, on average, higher than (M-)

of the WB edge, as shown in Fig. 5.24.

When the edge leaves the imager field-of-view, WaveStar will record an “idle”

sample, and the image output signal will have a constant value until the next edge enters the field-of-view (~5.46 sec for a speed of 15.24 cm/sec). This value could be a white gray level

or a black gray level depending on whether the edge that left the field-of-view was BW or

WB, respectively, as was depicted in Fig.4.31 in Chapter 4. In either case, the TDS

signal will be at its minimum for both edges since the differencing is applied on the same


gray level (black or white here). Therefore, the spatial noise (FPN-PRNU) and other

temporal noise sources will play a major role in determining the level of the edge signal.

Because the “active” interval is ~0.374 seconds and the “idle” interval is ~5.46

seconds, it is more likely that the WaveStar will sample (at 1 sample/sec) “idle” if the

previous sample was “idle”. By similar argument, it is less likely to sample “active” if the

previous sample was active. This is why active signals look “spikier” than idle signals,

where ∆V appear almost flat, as shown in Fig. 5.24.
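For a rough feel of these proportions, the fraction of logged samples expected to land in the "active" window can be estimated as the ratio of the active window to the full cycle; the sketch below (Python) uses the ~0.374 s and ~5.46 s figures quoted above, and the uniform-sampling assumption is ours.

```python
# Rough estimate of how often 1 sample/sec logging lands on an "active"
# sample, using the intervals quoted above (uniform sampling is assumed).
T_ACTIVE = 0.374     # s, edge within the field-of-view
T_IDLE   = 5.46      # s, no edge within the field-of-view
LOG_TIME = 117.0     # s, WaveStar logging interval

p_active = T_ACTIVE / (T_ACTIVE + T_IDLE)       # ~6 % of samples
expected_active = p_active * LOG_TIME           # at 1 sample/sec
print(round(p_active, 3), round(expected_active, 1))   # ~0.064, ~7.5 samples
```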

The above experiment was performed for seven speeds (15.24 -179.25 cm/sec) to

investigate the TDS output signal under moving stimuli. Again, the maximum speed

should be limited to values that ensure only “one” TDS detection per frame for the same

edge. Otherwise, the analysis will not be valid. A photodiode tachometer was used, as

before, to measure the speed in all measurements. All speeds reported here are for linear

motion. Results for each speed are very similar to those shown in Fig. 5.24 above. To

illustrate the effect of speed on the TDS output signal, the maximum achievable TDS

signal was extracted from the measured data for each speed in the range under study as

shown in Fig. 5.25 below.

In this figure, each point represents the maximum value of all measured data (~117 samples) of the TDS output signal |∆V| at each speed, for BW edges (M+) and WB edges (|M-|), extracted from logs such as that shown in Fig. 5.24. This graph shows that

the maximum achievable edge signal for both edge types is higher at some speeds than

others. This behavior, as we mentioned previously in Chapter 4, is attributed to the so-

called “velocity tuning” [99-101], which occurs when the stimulus speed synchronizes

with the read-out scanning speed (frame rate). The synchronization is such that the edge

signal is captured at the most suitable position in the imager field-of-view (for example,

“p2” in Fig. 4.30 in Chapter 4) that will result in a maximum (resonant) edge output

signal. The “speed tuning” is more obvious here than compared with that in the spatial

double sampling operation.

Here, the first tuning speed occurred at ~15.24 cm/sec, where the maximum ∆V

(refer to Fig. 5.25) was achieved. The tuning was then repeated at other speeds that are approximately multiples of three times (3x) the first tuning speed, with ∆V levels that decrease with speed.

This graph also shows that tuning speeds for BW edges (M+) and WB edges (|M-|) are


well matched in most of the range of speeds under investigation. The source of mismatch

at very high speeds could be attributed mainly to the increased stimulus vibration,

without excluding other sources that may result in measurement errors, regardless of

speed.

[Plot for Fig. 5.25: maximum achievable ∆V (V) vs. speed (cm/sec), with curves M+ and |M-|.]

Figure 5.25: Effect of speed of motion on the maximum achievable temporal double sampling (TDS) output signal (∆V). The 100 % contrast stimulus was moving with constant speeds in the range from ~15.24 cm/sec to ~179.25 cm/sec, and was illuminated with a 466 Lux light source. The images were captured at a frame rate of 61.3 f/s, and the edge signals were acquired with ∆t of ~122 µsec. The graph shows M+ (BW edge) and |M-| (WB edge). Each point in the graph represents the maximum of all data logged (about 117 samples for each speed).

Another interesting tool to evaluate the temporal edge detection feasibility is the

detection efficiency (% detection) at a certain threshold level, which is defined as the

percentage ratio of the number of detected spikes in a given logging interval (117 sec here) to the number of spikes expected (calculated) for an ideal detection in the same interval.

For an ideal detection, detection efficiencies are maximal at 100% because all edges in

the interval are detected.


Here, both edge signals, M+ (for BW edges) and |M-| (for WB edges), were used

to determine the number of measured spikes for every threshold voltage. The calculation

was performed using Microsoft Excel and supported with manual calculation. To

facilitate the manual calculations, we built area graphs for the TDS output signals at each

speed in the range under investigation (15.24 cm/sec-179.25 cm/sec). A typical example

of these graphs is shown in Fig. 5.26 below. These data are identical to those used to plot

Fig. 5.24. This graph in Fig. 5.26 shows the example thresholds, Vth1, Vth2, Vth3, and Vth4

(horizontal lines) used in this analysis. These thresholds correspond to values at 60mV,

80mV, 100mV, and 120mV, respectively. Recall that the interval “A” was defined as the

interval between successive edges of the same type (BW-BW and/or WB-WB) and can

be viewed as the period of the stimulus square waveform (refer to Fig. 4.31 of Chapter 4).

The interval “B” (defined earlier), however, is the time between successive edges of

different types (BW-WB and/or WB-BW), which is (in this case) equal to A/2, and can

be viewed as the width of the stimulus waveform. Note that some BW spikes (the

brighter ones) are hidden behind (or masked by) the WB spikes (the darker ones).

According to our criterion, this situation is acceptable, since the presence of only one edge type is needed to trigger the spike counter (in Microsoft Excel). This

software counter triggers whenever the transition in the edge output is greater than the

given threshold voltage by a minimum amount equal to the noise floor (assumed to be 10

mV in this case). The extracted data is then used to calculate and plot the percentage

detection efficiency at this threshold voltage, as defined above. This process was

repeated for each threshold voltage, as shown in Fig. 5.27 below. Note that the data used

here were based on all values (rather than maximum values) that were used to plot Fig.

5.25 above.
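As an illustration of this counting procedure, the following minimal Python sketch mimics the Excel-based counter described above; the variable names, the way the two edge signals are combined, and the expected-spike count are assumptions made only for illustration, not the actual spreadsheet used.

```python
import numpy as np

def detection_efficiency(edge_signal, v_th, expected_spikes, noise_floor=0.010):
    """Percentage of detected spikes relative to the ideal (expected) count.

    edge_signal     -- 1-D array of logged TDS edge samples (M+ or |M-|), in volts
    v_th            -- threshold voltage, in volts
    expected_spikes -- spike count calculated for an ideal detection in the same interval
    noise_floor     -- minimum excess over the threshold (assumed 10 mV, as in the text)
    """
    above = edge_signal > (v_th + noise_floor)            # samples above threshold + noise floor
    detected = np.count_nonzero(above[1:] & ~above[:-1])  # below-to-above transitions count as spikes
    return 100.0 * detected / expected_spikes

# Hypothetical usage with sample-aligned logged data m_plus and m_minus:
# combined = np.maximum(m_plus, np.abs(m_minus))   # either edge type may trigger the counter
# for v_th in (0.06, 0.08, 0.10, 0.12):
#     print(v_th, detection_efficiency(combined, v_th, expected_spikes=20))
```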

This graph in Fig. 5.27 shows two extremes: high TDS detection efficiencies at

low threshold voltages, especially at low speeds, and low detection efficiencies at high

threshold voltages, especially at high speeds. The first extreme (peaking at 87.5 %) can

be attributed to the existence of false spikes at low threshold voltages, especially at low

speeds. The second extreme is attributed to the fading of TDS signal at high speeds (due

to increased blur and the degradation of the resolution); thus many “true” spikes will be

missed if higher threshold voltages are used.


[Area plot: temporal edge ∆V (V) versus time, showing the M+ and |M-| traces, BW and WB spike labels, the annotations Ts1 = 10.925 sec (photodiode measurements), A = ~10.925 sec, B = ~5.46 sec (A/2), and the threshold levels Vth1–Vth4.]

Figure 5.26: A typical area graph of the TDS output signal (∆V) used to facilitate the calculation of the percentage efficiency of the detection. The data used here are the same data that were used to plot Fig. 5.25. Also shown are the threshold voltages Vth1, Vth2, Vth3, and Vth4 at 60 mV, 80 mV, 100 mV, and 120 mV respectively. A is defined as the interval between edges of the same type (the period of the stimulus square wave in Fig. 4.28), whereas B is the time between edges of different types (the width of the stimulus square wave in Fig. 4.24). Note that we used the absolute value of the WB edge signal, |M-|, in this plot.


[Plot: % detection efficiency versus speed (cm/sec), 0–200 cm/sec, for Vth = 0.06 V, 0.08 V, 0.1 V, and 0.12 V.]

Figure 5.27: Effect of speed of stimulus motion on the percentage detection efficiency of the TDS signal ∆V for different threshold voltages (60–120 mV). The 100%-contrast stimulus was moving at constant speeds in the range ~15.24 cm/sec to ~179.25 cm/sec and was illuminated with a 466 Lux light source. The images were captured at a frame rate of 61.3 f/s, and the edge signals were acquired with ∆t of ~122 µsec. The graph was compiled from the same data shown in Fig. 5.24.

In fact, the detection efficiency is reduced to zero at very high threshold voltages.

However, the TDS detection reliability deteriorates at both extremes (false detection in

one extreme and the non-detection of true spikes in the other extreme). Therefore, there

is an optimal threshold voltage level that yields high detection efficiency while preserving
detection reliability. In this case, a threshold voltage

of 70mV would be a good compromise between TDS detection reliability and efficiency.

However, choosing a low threshold voltage puts more technical constraints on the

sensitivity of the threshold amplifier. It is worthwhile mentioning that the speed-tuning

behavior is consistent for all threshold voltages used, as shown in Fig. 5.25 and Fig. 5.27.

All results presented in this chapter are for stimuli that are vertical (parallel to
the rows of the imager, since the imager was rotated by 90º, so vertical bars effectively
appear horizontal to the array). However, by the symmetry of the imager array (64 × 64
pixels) used, all analyses and conclusions presented in this chapter are valid for vertical
stimuli as well as for horizontal stimuli.


5.5 Summary and Discussion

In this chapter, we described an analog VLSI implementation of a robust temporal

double-sampling (TDS) algorithm to perform real-time temporal sampled differentiation

to provide a simple method for change/motion detection. This algorithm was used

directly to perform some primitive, yet important on-chip image processing tasks, such as

the novel contrast enhancement and visual edge detection in logarithmic mode first

presented here. Moreover, the range of applications of this technique can be further

extended by using the TDS output signal as an input to many on-chip advanced image

processing functionalities such as motion detection, image segmentation, and data

reduction, with an operational range that spans a wide set of light intensities, stimuli

contrasts, frame rates, and speeds. Recently, these real-time image-processing utilities have
become attractive for applications where integrated functionality is advantageous, such as
industrial inspection, target tracking, and navigation.

The prototype chip (CYC) is a CMOS image sensor fabricated in a 0.5 µm process,
with a dual-mode APS that can work in either the continuous logarithmic mode (with high
optical dynamic range) or the linear mode (with higher image quality). A rigorous
experimental investigation was conducted to determine the feasibility of this sensor under
a wide range of light intensities, stimulus contrasts, frame rates, and speeds. These results
were also compared with state-of-the-art implementations whenever appropriate.

In the linear mode of operation, the TDS signals are mainly uni-polar spikes, which
eliminates the need for thresholds in this case. The TDS output signal (refer to Fig.
5.6 and Fig. 5.7) was reliable over a wide range of light intensities (240–2200 Lux), with
a maximum level of 434 mV peak-peak at ~884 Lux for the ON-BW edge (and ~387 mV
peak-peak at ~743 Lux for the WB-OFF edge). On the other hand, the minimum TDS
signal, which determines the operational range of the TDS technique, is greater than
97 mV for both the ON edge and the OFF edge, which is much higher than the noise level of
the CYC imager (~16.5 mV).

The contrast sensitivity of the TDS output signal (∆V) increases monotonically
with contrast for both edge types (Fig. 5.9 and Fig. 5.10) over a contrast range of
30–100%. The minimum detectable TDS signal is > 93 mV at 30% contrast, which is much higher
than the noise level (16.5 mV). This result suggests that reliable TDS detection is feasible


down to contrasts of 30%; thus presenting an advantage for a wide range of applications.

The results most closely related to the work presented are those found in the works by

Deutschmann et al. [105, 107] and by Etienne-Cummings et al. [66]. Their sensors were

able to respond to contrasts as low as 10% and 5% respectively. However, this

sensitivity was achieved at the expense of image quality since they were realized with

large pixel sizes and very low resolutions.

Results (Fig. 5.15) showed that the peak-peak onset saturation level ∆VSat has a

“parabolic” relationship with frame rate. This suggests that there is an optimal frame rate

(integration time) at which a maximum level for ∆VSat can be obtained. This implies that

the effect of frame rate on the TDS signal is mainly intrinsic to imager operation and not

the temporal double sampling algorithm. This is a very interesting finding as it suggests

that a better TDS detection performance can be achieved by choosing an “optimum”

frame rate. Nevertheless, the minimum ∆VSat (68mV @ 40 f/s) is large enough (>

16.5mV) to ensure a reliable signal over a wide range of frame rates (40f/s—122f/s). A

key factor in determining the optimal TDS performance is the minimum intra-sampling

interval, ∆tmin, since, for a given frame rate, no advantage can be gained by increasing ∆t

above this value. Results (Fig. 6.16) suggest that increasing the frame rate will allow

having the same ∆VSat value at a lower ∆tmin. Therefore, there is a tradeoff between the

maximum ∆VSat and ∆tmin.

The reliability of TDS detection is important for motion detection applications,

where the TDS output signal is severely affected by degraded resolution (MTF) due to

motion blurring, especially with logarithmic mode operation at low light intensities.

Fortunately, the very same results (Fig. 5.18 and Fig. 5.19) showed another important

image processing functionality that can be realized using TDS. This function (contrast

enhancement) is a fundamental block in many image-processing applications, especially

with the logarithmic mode, which suffers from very high compression at low light intensity
and appears almost constant there (very low electrical sensitivity to changes in light intensity).

This functionality is particularly suited for logarithmic mode operation at low light

intensities (~133 Lux) and could be used to improve the SDS edge detection outputs.

On the other hand, reliable 2D edge detection (Fig. 5.21) was realized using the

TDS technique suited for logarithmic mode operation at high intensities (> 7000 Lux).


This TDS edge detection has a potential for use as a front end to motion detection in

pointing devices, laser alignment, and position sensing detector applications. To our

knowledge, these two logarithmic mode image processing functionalities using the

temporal double sampling technique are original and have not been reported previously.

Since one of the potential applications of the TDS technique is the motion

detection, it is imperative to study the effect of stimuli motion on TDS detection

performance under a wide range of stimuli speeds (~15 cm/sec—180 cm/sec). This

effect was determined in terms of maximum achievable TDS outputs as a function of

stimulus speeds (Fig. 5.25). This so-called "speed tuning" affects the level of threshold

voltage to be used, and hence, affects the sensitivity of the motion detector. The results

indicate that a “true” edge could be missed if the threshold voltage value chosen is

greater than 88mV, especially at high speeds. This finding is supported by the %-

detection efficiency (Fig. 5.27). Therefore, a tradeoff is required between detection

efficiencies and reliability when choosing threshold voltages. A threshold of 70mV is an

example of a good compromise between the two figures of merit. Most of the previously

reported implementations [105, 106, 107, 108] are limited to low speeds compared with

our TDS temporal differentiator, which is feasible over a wide range of speeds (~15

cm/sec—180 cm/sec). The temporal differentiator used by Etienne-Cummings [102] in

his tracking chip has a comparable speed range; however, it is limited to very high light

intensity operation.

Limitations

1. The TDS technique utilizes analog VLSI circuitry, which has inherent limitations due to the low precision of analog circuits.

2. Contrast enhancement is limited to low light intensities (< 200 Lux).

3. 2D edge detection is limited to high light intensities (> 7000 Lux).

4. The presence of a strong 120 Hz (or 100 Hz) AC component in artificial (interior) lighting may degrade the sensitivity of the TDS technique operations.


Chapter Six

Fixed Pattern Noise Reduction using

Modal Double Sampling (MDS).

6.1 Introduction

One of the primary limitations to image quality in CMOS image sensors is the fixed

pattern noise (FPN), caused by the mismatch between individual pixels or columns of the

image sensor [102, 103, 38]. Fixed pattern noise is spatial in nature and ideally does not

change with time for a particular imager. Hence, the name “fixed” is used to differentiate

it from the temporal random noise. This usage of the nomenclature “FPN” is quite

general and applies whether the pattern noise was measured in the dark or under

illumination. The dark FPN (offset) may also be referred to as the Dark Signal Non-Uniformity
(DSNU) and is a measure of non-uniformity due to pixel-to-pixel output variation in the dark,
whereas fixed pattern noise measured under uniform illumination is usually referred to as the
Photoresponse Non-Uniformity (PRNU) and is a measure of the pixel-to-pixel output variation
under uniform illumination [104]. The total FPN consists of two components: an "offset"
component (dark FPN), whose contribution to the output signal is almost constant under varying
illumination, and a "gain" component (pure PRNU), whose magnitude changes with illumination [104].


The ideal image sensor array would produce the same output signal for each pixel

under uniform illumination. However, in reality (as stated above), this is rarely the case.

There are several sources of non-uniformity caused mainly by device and interconnect

parameter variations (mismatches) across the sensor [19]. This will be discussed in more

detail in the next section.

Since the fixed pattern noise (FPN) does not change significantly from frame to

frame, it cannot be reduced by frame averaging [23]. However, other techniques were

established to reduce FPN. For instance, with linear (integrating) active pixel sensors

(APS), the correlated double sampling (CDS) circuit [15, 16, 105] is the primary means
for reducing FPN, as already introduced in Chapter 4. For logarithmic APS, on the other
hand, off-chip techniques with digital calibration have been reported, such as in [106], as well
as integrated analog calibration methods reported in [38, 102, 107-109]. However, most
of these techniques, especially the integrated ones (of interest here), implemented the
FPN reduction functionality by sacrificing resolution, photosensitivity, or both
(≥ 8 CMOS transistors/pixel [38, 102, 107]), with applicability limited to one
mode of operation.

In this chapter, we describe a novel and straightforward technique to reduce the

FPN in two modes of pixel operation with only slight constraints imposed on resolution
and/or photosensitivity. This technique is based on the Modal Double Sampling (MDS)
method, with a slight modification of the photocircuit design to accommodate the FPN
reduction functionality (only 5 NMOS transistors/pixel). In fact, if the logarithmic FPN
reduction functionality is not used, we can operate our novel "super" pixel in three
modes (3M): the linear mode (Linear-APS, with FPN reduction), the logarithmic mode with
full compression (Log-APS), and the logarithmic mode with lowered compression (2Log-APS)
(refer to sections 2.5.3.2 and 3.3.2). After discussing the sources of FPN and their

contributions in the 3-modes of the novel pixel, we will introduce this technique in detail.

This involves presentation of the measurements based on the actual imager designed with

this pixel configuration (refer to Appendix C for the design and Appendix D for the

measurement setup). These measurements include the effect of various parameters such

as light intensity, frame rate, and current-source-load bias for all modes of operation.

The results are then analyzed and discussed. Later, the pixel circuit switching


performance will be analyzed in terms of settling-time and optical modulation frequency

response in order to understand the temporal behavior of this pixel, and to determine the

range of validity of this method. Finally, conclusions are drawn based on these analyses,

with recommendations on how to improve the performance of this novel technique.

6.2 Fixed Pattern Noise (FPN) sources

FPN sources in CMOS image sensors are those responsible for the mismatch between

various elements in the imager array, which cause the spatial non-uniformity. The

mismatch in integrated circuits can be primarily attributed to several factors. Firstly, the

limited quality of the lithographic process, which is reflected as variations in dimension

[38, 16] of the photodetectors and/or MOSFET devices with position across the imager

array; hence varying the pixel optical-apertures and MOSFET aspect ratios (W/L).

Secondly, the mismatch can be attributed to the variation of process parameters such as

gate-oxide thickness and doping concentration across the wafer resulting from non-

uniform conditions and/or contamination during fabrication [38, 23], which causes the
MOSFET threshold voltages to vary with position across the image sensor.

Collectively these factors are the cause of pixel-, column-, and/or row-mismatches across

the image sensor and their respective fixed pattern noise.

6.2.1 Linear (integrating) active pixel sensors

Following the analysis reported in [19], for linear active pixel sensors (APS), shown in

Fig. 6.1 below, the effect of these sources is reflected as pixel-FPN due to the variation of

photodiode geometry (e.g. AD), dark current Idark, and due to variations of the transistor

parameters (Vth, W, L, Col) and/or column-FPN, which is mainly due to variations in Ibias-0

(Vbias-0, the bias of the column current-source-load). The effect of row-select-switch

“ON” resistance, rds, is usually neglected by treating it as an ideal switch (Vo ≈ VoF). The

source-follower NMOS (SF), also shown in Fig. 6.1, is in saturation with VGS,F greater

than the threshold voltage Vth,F, and VDS,F higher than VGS,F - Vth,F, so IDF (= Ibias-0) can be
simply expressed as:


I_{DF} = \frac{\beta_F}{2}\,(V_{GS,F} - V_{th,F})^2\,(1 + \lambda V_{DS,F}), \quad \text{with} \quad \beta_F = \mu_n C_{ox}\,\frac{W_F}{L_F}    (6.1)

where λ is the channel length modulation parameter, µn is the electron mobility, and Cox is

the gate oxide capacitance. WF, and LF are the effective channel width and length

respectively (smaller than the drawn ones, because of lateral diffusion of the source and

drain diffusions and/or due to the “encroachment” of field oxide into the channel) [38,

110], which is described as blurred transition from the gate-oxide to the field-oxide.

Figure 6.1: Linear active pixel sensor (APS) with possible sources of fixed pattern noise (FPN). Idark is the photodiode dark current, AD is the optical aperture, and CD is the sense node (IN) capacitance. Vth, Col, W, and L are the respective threshold voltages, overlap capacitances, gate widths, and gate lengths of the NMOS transistors. rds is the row-select-switch "ON" resistance. Ibias-0 is the bias current of the per-column current-source-load.

Field-oxide is the oxide layer that covers the whole die including the gate, and has a

thickness that is much larger than that of the gate-oxide [38]. Now, referring to Fig. 6.1,

VGS,F = Vin – VoF. So, if we neglect the channel length modulation, we could write the

steady state output voltage as:

V_o \approx V_{oF} = V_{in} - V_{th,F} - \sqrt{\frac{2\,I_{bias-0}}{\beta_F}}    (6.2)



where Vin is the voltage at the sense node (IN), which in this case can be expressed as:

V_{in} = V_{DD} - V_{th,1} - \frac{Q}{C_D}    (6.3)

where Vth,1 is the threshold voltage of the reset NMOS (M1), VDD is the supply voltage

(3.3 V), CD is the capacitance at the sense node (IN), and Q is the charge accumulated on

the photodiode and it is given by :

Q = (A_D\,J_{ph} + I_{dark})\,t_{int} + C_{ol,1}\,V_{DD}    (6.4)

where Col,1VDD term is the “feed-through” charge (when the reset-transistor is turned off),

Col,1 is the reset-transistor (M1) overlap capacitance, tint is the integration (exposure) time,

AD is the optical aperture (photosensitive area of the photodiode) and Jph the photocurrent

density (A/cm2) and is given by (refer to Equation 2.6):

J_{ph} = \frac{I_{ph}}{A_D} = \frac{\eta\,q\,\lambda\,L_{io}}{h\,c}    (6.5)

where λ is the wavelength (~555 nm, green light), Lio is the incident light intensity
(W/m2), η is the quantum efficiency (≈1 for cases of interest to us, controlled room light),
q is the electron charge (1.6 × 10-19 C), h is Planck's constant (6.62 × 10-34 Js), and c is the
speed of light in free space (3 × 108 m/s). From Equations (6.2, 6.3, 6.4), the steady state

output-voltage (showing possible sources of FPN) can be written as:

V_o \approx V_{oF} = V_{DD} - V_{th,1} - \frac{(A_D J_{ph} + I_{dark})\,t_{int} + C_{ol,1} V_{DD}}{C_D} - V_{th,F} - \sqrt{\frac{2\,I_{bias-0}}{\mu_n C_{ox}\,(W_F/L_F)}}    (6.6)
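As a numerical illustration of Equations (6.5) and (6.6), the short Python sketch below evaluates the steady-state linear-APS output; all device and operating parameters are placeholder values chosen only to make the example run, not measured values for any chip in this thesis.

```python
import numpy as np

q, h, c = 1.6e-19, 6.62e-34, 3.0e8        # electron charge (C), Planck's constant (Js), speed of light (m/s)

def j_ph(L_io, lam=555e-9, eta=1.0):
    """Photocurrent density (A/m^2) from Eq. (6.5): J_ph = eta*q*lam*L_io/(h*c)."""
    return eta * q * lam * L_io / (h * c)

def v_out_linear(L_io, t_int,
                 V_DD=3.3, V_th1=0.7, V_thF=0.7,         # placeholder voltages (V)
                 A_D=25e-12, C_D=20e-15, C_ol1=0.5e-15,  # aperture (m^2) and capacitances (F), placeholders
                 I_dark=1e-15, I_bias0=10e-6, beta_F=200e-6):
    """Steady-state linear-APS output voltage, following Eqs. (6.2)-(6.4) and (6.6)."""
    Q = (A_D * j_ph(L_io) + I_dark) * t_int + C_ol1 * V_DD   # accumulated charge, Eq. (6.4)
    V_in = V_DD - V_th1 - Q / C_D                            # sense-node voltage, Eq. (6.3)
    return V_in - V_thF - np.sqrt(2 * I_bias0 / beta_F)      # source-follower output, Eq. (6.2)

print(v_out_linear(L_io=1.0, t_int=1e-3))   # ~0.94 V for 1 W/m^2 and 1 ms integration (with these placeholders)
```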

Accordingly, we list all possible sources (Z_i), their absolute sensitivities (defined as S_i = \partial V_o / \partial Z_i, evaluated at the nominal source value), and their effect on FPN, as shown in Table 6.1.

Table 6.1: Possible FPN sources, their sensitivities, and their effects on FPN [19]

Source (Zi)  | Sensitivity (Si)                                      | Class  | Type
Idark        | t_int / C_D                                           | Pixel  | Offset
AD           | J_ph t_int / C_D                                      | Pixel  | Gain
CD           | Q / C_D^2                                             | Pixel  | Offset, Gain
Vth,1        | 1                                                     | Pixel  | Offset
Col,1        | V_DD / C_D                                            | Pixel  | Offset
Vth,F        | 1                                                     | Pixel  | Offset
WF/LF        | (1/2) sqrt(2 I_bias-0 / (µn Cox)) (WF/LF)^(-3/2)      | Pixel  | Offset
Ibias-0      | (2 I_bias-0 µn Cox (WF/LF))^(-1/2)                    | Column | Offset

Assuming uncorrelated sources, the total FPN is the standard deviation (σ) of the variation of V_o around its nominal (mean) value due to the variation of the k sources (Z_1, Z_2, ..., Z_k) around their means (z_1, z_2, ..., z_k):

\sigma_{V_o} = \sqrt{\sum_{i=1}^{k}\left(\frac{\partial V_o}{\partial Z_i}\bigg|_{z_1,\ldots,z_k}\right)^{2}\sigma_{Z_i}^{2}} = \sqrt{\sum_{i=1}^{k} S_i^{2}\,\sigma_{Z_i}^{2}}\ \ \text{mV}    (6.7)

where \sigma_{Z_i} is the standard deviation of the variation of these sources around their means, z_i. Now, if we classify Z_1, Z_2, ..., Z_l as the sources that affect pixel-FPN (σX) only, and Z_{l+1}, Z_{l+2}, ..., Z_k as the sources that affect column-FPN (σY) only, then the total-FPN, assuming uncorrelated sources, can be written:

\sigma_{V_o} = \sqrt{\sigma_X^2 + \sigma_Y^2}\ \ \text{mV}

where

\sigma_X^2 = \sum_{i=1}^{l} S_i^2\,\sigma_{Z_i}^2 = \left(\frac{t_{int}}{C_D}\right)^2\sigma_{I_{dark}}^2 + \left(\frac{J_{ph}\,t_{int}}{C_D}\right)^2\sigma_{A_D}^2 + \left(\frac{Q}{C_D^2}\right)^2\sigma_{C_D}^2 + \sigma_{V_{th,1}}^2 + \left(\frac{V_{DD}}{C_D}\right)^2\sigma_{C_{ol,1}}^2 + \sigma_{V_{th,F}}^2 + \left(\frac{1}{2}\sqrt{\frac{2\,I_{bias-0}}{\mu_n C_{ox}}}\left(\frac{W_F}{L_F}\right)^{-3/2}\right)^2\sigma_{W_F/L_F}^2

and

\sigma_Y^2 = \sum_{i=l+1}^{k} S_i^2\,\sigma_{Z_i}^2 = \left(2\,I_{bias-0}\,\mu_n C_{ox}\,\frac{W_F}{L_F}\right)^{-1}\sigma_{I_{bias-0}}^2    (6.8)


Similarly, we can write the total-FPN in terms of gain-FPN (σgain) that represents all

sources that include Jph (illumination dependent), and offset-FPN (σoffset), which includes

dark current as well as other offset voltages as:

\sigma_{V_o} = \sqrt{\sigma_{offset}^2 + \sigma_{gain}^2}\ \ \text{mV}

where

\sigma_{offset}^2 = \sum_{i=1}^{l'} S_i^2\,\sigma_{Z_i}^2 = \left(\frac{t_{int}}{C_D}\right)^2\sigma_{I_{dark}}^2 + \left(\frac{I_{dark}\,t_{int} + C_{ol,1} V_{DD}}{C_D^2}\right)^2\sigma_{C_D}^2 + \sigma_{V_{th,1}}^2 + \left(\frac{V_{DD}}{C_D}\right)^2\sigma_{C_{ol,1}}^2 + \sigma_{V_{th,F}}^2 + \left(\frac{1}{2}\sqrt{\frac{2\,I_{bias-0}}{\mu_n C_{ox}}}\left(\frac{W_F}{L_F}\right)^{-3/2}\right)^2\sigma_{W_F/L_F}^2 + \left(2\,I_{bias-0}\,\mu_n C_{ox}\,\frac{W_F}{L_F}\right)^{-1}\sigma_{I_{bias-0}}^2

and

\sigma_{gain}^2 = \sum_{i=l'+1}^{k} S_i^2\,\sigma_{Z_i}^2 = \left(\frac{J_{ph}\,t_{int}}{C_D}\right)^2\sigma_{A_D}^2 + \left(\frac{A_D J_{ph}\,t_{int}}{C_D^2}\right)^2\sigma_{C_D}^2    (6.9)
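The root-sum-square combination of Equations (6.7)–(6.9) can also be written generically; the Python sketch below is only a schematic of that bookkeeping, and the sensitivities and source spreads in the list are arbitrary placeholders rather than values extracted from the imager.

```python
import numpy as np

# Each FPN source: (name, |sensitivity| S_i, source spread sigma_Zi, class, type).
# All numbers are illustrative placeholders.
sources = [
    ("Idark",   1.0e12, 5e-15, "pixel",  "offset"),   # S_i = t_int/C_D (V/A), sigma in A
    ("AD",      2.0e10, 1e-13, "pixel",  "gain"),     # S_i = J_ph*t_int/C_D (V/m^2), sigma in m^2
    ("Vth,F",   1.0,    3e-3,  "pixel",  "offset"),   # V/V
    ("Ibias-0", 5.0e3,  1e-6,  "column", "offset"),   # V/A
]

def rss(selected):
    """sigma = sqrt(sum_i S_i^2 * sigma_Zi^2), as in Eq. (6.7)."""
    return np.sqrt(sum((S * sig) ** 2 for _, S, sig, _, _ in selected))

sigma_X = rss([s for s in sources if s[3] == "pixel"])        # pixel-FPN
sigma_Y = rss([s for s in sources if s[3] == "column"])       # column-FPN
sigma_total = np.hypot(sigma_X, sigma_Y)                      # Eq. (6.8)

sigma_offset = rss([s for s in sources if s[4] == "offset"])  # Eq. (6.9) grouping
sigma_gain   = rss([s for s in sources if s[4] == "gain"])
```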

The total-FPN (σ_Vo) calculated using Equation (6.8) and using Equation (6.9) is the
same; the two only differ in how the sources of FPN and their contributions are classified.
Compared with the total-FPN in CCD imagers (dominated by pixel-FPN), the total FPN in a
linear CMOS image sensor suffers from large column-to-column variation (column-FPN),
which appears as "lines" and may result in considerable degradation in image quality [19],
as shown in Fig. 6.2 below.

Figure 6.2: The total FPN in a CMOS active pixel sensor (APS, right) compared with the FPN in a CCD imager. It is clear that the CMOS APS suffers from additional column-FPN, which appears as vertical stripes in the image (from [19]).


We noticed that the contributions of some FPN sources are more dominant than
others, depending on the conditions and the type of FPN under investigation (pixel-FPN
or column-FPN, and/or gain- or offset-FPN). For example, the level and the wavelength of
illumination (spectral response) may play an important role in the gain-FPN level [23, 111],
but play a minor role in the offset-FPN level. Moreover, this total-FPN may change
dramatically if we change the pixel mode of operation from linear to logarithmic, as will
be presented in the following subsections.

6.2.2 Logarithmic (continuous) active pixel sensors

A typical example of a logarithmic active pixel sensor is shown in Fig. 6.3 below, where

the gate of the reset transistor (M1) is connected to its drain (at VDD) to form a MOS-diode.
As we mentioned in previous chapters, because of the very low currents involved,
I_D1 ≈ I_ph (fA–nA), this diode-connected transistor works continuously in the subthreshold
(weak inversion) regime; thus the drain current is given by [75, 89, 90, 91, 93]:

I_{D1} = I_o\,\frac{W_1}{L_1}\; e^{\frac{V_{GS,1}-V_{th,1}}{n v_T}}\; e^{-\left(1-\frac{1}{n}\right)\frac{V_{SB,1}}{v_T}} \left(1 - e^{-\frac{V_{DS,1}}{v_T}}\right)    (6.10)

where

I_o = \mu_n v_T^{2} \sqrt{\frac{q\,\varepsilon_{si}\,N_{ch}}{2\,\psi_s}}    (6.11)

and where n (typically 1–2) is a process parameter (the subthreshold slope factor), which is
related to the gate efficiency [75], and vT is the thermal potential (kT/q), with k Boltzmann's
constant, T the absolute temperature, and q the electron charge. At room temperature vT
≈ 25 mV. Nch is the doping concentration in the channel, µn is the mobility, εsi is the
silicon permittivity, and ψs is the surface potential. VSB,1 is the source-bulk voltage (≠ 0),
since the bulk node is not connected to the source node. For VDS > 3kT/q (VDS ≈ VDD here),
the last term of Equation (6.10) can be approximated by 1. Accordingly, the drain current
can be written as:

I_{D1} = I_o\,\frac{W_1}{L_1}\; e^{\frac{V_{GS,1}-V_{th,1}}{n v_T}}\; e^{-\left(1-\frac{1}{n}\right)\frac{V_{SB,1}}{v_T}}    (6.12)

This equation can be solved for VGS,1 as:


V_{GS,1} = V_{th,1} + n\,v_T \ln\!\left(\frac{I_{D1}}{I_o\,(W_1/L_1)}\right) + V_{SB,1}\,(n-1)    (6.13)

Having I_{ph} + I_{dark} = I_{D1} at steady state, V_{G,1} = V_{DD}, V_{S,1} = V_{in} = V_{DD} - V_{GS,1}, and V_{SB,1} = V_{in}, Equation (6.13) can be rewritten as:

V_{in} = V_{DD} - V_{th,1} - n\,v_T \ln\!\left(\frac{I_{ph}+I_{dark}}{I_o\,(W_1/L_1)}\right) - V_{in}\,(n-1)    (6.14)

Solving for V_{in}, we end up with:

V_{in} = \frac{V_{DD} - V_{th,1}}{n} - v_T \ln\!\left(\frac{I_{ph}+I_{dark}}{I_o\,(W_1/L_1)}\right)    (6.15)

Figure 6.3: Logarithmic active pixel sensor (APS) with possible sources of fixed pattern noise (FPN). Note that the bulk node of transistor M1 is connected to ground, but this is not shown for clarity.

Now, using the assumption that the channel length modulation can be neglected, we can use Equation (6.2) to determine the output voltage:

V_o \approx V_{oF} = \frac{V_{DD} - V_{th,1}}{n} - v_T \ln\!\left(\frac{A_D J_{ph} + I_{dark}}{I_o\,(W_1/L_1)}\right) - V_{th,F} - \sqrt{\frac{2\,I_{bias-0}}{\mu_n C_{ox}\,(W_F/L_F)}}    (6.16)

Based on this equation, all possible sources (Zi), their absolute sensitivities, and their

effect on FPN can be listed as shown in Table 6.2 below. Note that because n is a



process parameter and Io depends on Nch (the doping concentration in the channel), we added

them to the list of possible FPN sources.

Now, again assuming uncorrelated FPN sources, the total-FPN can be written:

\sigma_{V_o} = \sqrt{\sigma_X^2 + \sigma_Y^2}\ \ \text{mV}    (6.17)

Table 6.2: Possible FPN sources, their sensitivities, and their effects on FPN for the Log-APS.

Source (Zi)  | Sensitivity (Si)                                      | Class  | Type
Idark        | v_T / (A_D J_ph + I_dark)                             | Pixel  | Gain
AD           | J_ph v_T / (A_D J_ph + I_dark)                        | Pixel  | Gain
W1/L1        | v_T / (W1/L1)                                         | Pixel  | Offset
Io           | v_T / I_o                                             | Pixel  | Offset
Vth,1        | 1/n                                                   | Pixel  | Offset
n            | (V_DD - V_th,1) / n^2                                 | Pixel  | Offset
Vth,F        | 1                                                     | Pixel  | Offset
WF/LF        | (1/2) sqrt(2 I_bias-0 / (µn Cox)) (WF/LF)^(-3/2)      | Pixel  | Offset
Ibias-0      | (2 I_bias-0 µn Cox (WF/LF))^(-1/2)                    | Column | Offset

where

\sigma_X^2 = \sum_{i=1}^{l} S_i^2\,\sigma_{Z_i}^2 = \left(\frac{v_T}{A_D J_{ph}+I_{dark}}\right)^2\sigma_{I_{dark}}^2 + \left(\frac{J_{ph}\,v_T}{A_D J_{ph}+I_{dark}}\right)^2\sigma_{A_D}^2 + \left(\frac{v_T}{W_1/L_1}\right)^2\sigma_{W_1/L_1}^2 + \left(\frac{v_T}{I_o}\right)^2\sigma_{I_o}^2 + \frac{1}{n^2}\,\sigma_{V_{th,1}}^2 + \left(\frac{V_{DD}-V_{th,1}}{n^2}\right)^2\sigma_{n}^2 + \sigma_{V_{th,F}}^2 + \left(\frac{1}{2}\sqrt{\frac{2\,I_{bias-0}}{\mu_n C_{ox}}}\left(\frac{W_F}{L_F}\right)^{-3/2}\right)^2\sigma_{W_F/L_F}^2

and

\sigma_Y^2 = \sum_{i=l+1}^{k} S_i^2\,\sigma_{Z_i}^2 = \left(2\,I_{bias-0}\,\mu_n C_{ox}\,\frac{W_F}{L_F}\right)^{-1}\sigma_{I_{bias-0}}^2    (6.17)


Similarly, we can write the total-FPN in terms of gain-FPN (σgain) that represents all

sources that include Jph (illumination dependent), and offset-FPN (σoffset), which includes

dark current as well as other offset voltages as:

\sigma_{V_o} = \sqrt{\sigma_{offset}^2 + \sigma_{gain}^2}\ \ \text{mV}

where

\sigma_{offset}^2 = \sum_{i=1}^{l'} S_i^2\,\sigma_{Z_i}^2 = \left(\frac{v_T}{W_1/L_1}\right)^2\sigma_{W_1/L_1}^2 + \left(\frac{v_T}{I_o}\right)^2\sigma_{I_o}^2 + \frac{1}{n^2}\,\sigma_{V_{th,1}}^2 + \left(\frac{V_{DD}-V_{th,1}}{n^2}\right)^2\sigma_{n}^2 + \sigma_{V_{th,F}}^2 + \left(\frac{1}{2}\sqrt{\frac{2\,I_{bias-0}}{\mu_n C_{ox}}}\left(\frac{W_F}{L_F}\right)^{-3/2}\right)^2\sigma_{W_F/L_F}^2 + \left(2\,I_{bias-0}\,\mu_n C_{ox}\,\frac{W_F}{L_F}\right)^{-1}\sigma_{I_{bias-0}}^2

and

\sigma_{gain}^2 = \sum_{i=l'+1}^{k} S_i^2\,\sigma_{Z_i}^2 = \left(\frac{v_T}{A_D J_{ph}+I_{dark}}\right)^2\sigma_{I_{dark}}^2 + \left(\frac{J_{ph}\,v_T}{A_D J_{ph}+I_{dark}}\right)^2\sigma_{A_D}^2    (6.18)

6.2.2.1 Two-MOS Diode-Logarithmic APS (2Log-APS)

Referring to Equations (6.15) and/or (6.16), the slope (i.e. voltage change per decade of

photocurrent) of this linear equation in the semi-log chart of the transfer characteristics

(Vo versus Iph) is vT. To enhance the logarithmic response (compression reduction) and

thus make the imager suitable for motion detection applications, this slope should be

increased. One way to increase this slope is to add another diode-connected transistor as

shown in Fig. 6.4 below. Since both MOS-diodes are operating in the weak inversion
regime, Equation (6.13) can be used to calculate VGS,1 and VGS,2 for transistors M1 and M2
respectively:

V_{GS,1} = V_{th,1} + n\,v_T \ln\!\left(\frac{I_{D1}}{I_o\,(W/L)}\right) + V_{SB,1}\,(n-1)    (6.19)

and

V_{GS,2} = V_{th,2} + n\,v_T \ln\!\left(\frac{I_{D1}}{I_o\,(W/L)}\right) + V_{SB,2}\,(n-1)    (6.20)

At steady state, I_{ph} + I_{dark} = I_{D1}, and

V_{in} = V_{DD} - V_{GS,1} - V_{GS,2}, \qquad V_{SB,1} = V_{in}, \qquad V_{SB,2} = V_{in} + V_{GS,1}    (6.21)

Using the above equations, and solving for the sense node voltage, Vin , yields:

V_{in} = \frac{V_{DD} - V_{th,1} - V_{th,2}}{n^2} - \left(1 + \frac{1}{n}\right) v_T \ln\!\left(\frac{I_{ph}+I_{dark}}{I_o\,(W/L)}\right)    (6.22)

Figure 6.4: Logarithmic active pixel sensor (APS) with two MOS-diodes (2Log) to increase the logarithmic slope. Possible sources of fixed pattern noise (FPN) are also shown. Note that the bulk nodes of transistors M1 and M2 are both connected to ground, but this is not shown for clarity.

Comparing the voltage at the sense node (IN) of the 2-NMOS-diode logarithmic active
pixel sensor (2Log-APS), represented by the above equation, with that of the 1-NMOS-diode
logarithmic APS (Log-APS), represented by Equation (6.15) above, we notice that in addition
to an offset voltage reduction by at least a factor of n, the slope (gain) is increased by a
factor (1 + 1/n), which has a maximum value of 2 when n = 1; hence the logarithmic response
is enhanced and the compression is reduced. This leads to more electrical sensitivity
(output swing). It appears as a slope increase in the semi-log plot of the



sense voltage, Vin, versus the light intensity, Li (the photocurrent Iph is a linear function
of Li), as shown in Fig. 6.5 below. This figure presents HSPICE [80, 81] simulations of the
Log-APS and 2Log-APS using 0.5 µm CMOS technology data with VDD ~ 5 V. This graph is
similar to the one shown in Fig. 3.4 in Chapter 3. Here, we used absolute values of the sense
voltage instead of the normalized1 values used in Fig. 3.4, to show the change in both the
offset voltage and the slope.

[Plot: sense voltage Vin (V) versus light intensity (W/m2), 0.0001–1000 W/m2, for the 1-NMOS and 2-NMOS configurations.]

Figure 6.5: Simulated transfer characteristics of the logarithmic active pixel sensor (APS) using 0.5 µm CMOS technology data and the HSPICE simulator. The upper curve represents the sense voltage of the Log-APS (1-NMOS), whereas the lower curve represents the sense voltage of the 2Log-APS (2-NMOS).

The above plot shows that both curves are almost constant at low intensities, and

linear over four decades of light intensities (0.01-100 W/m2), with slopes of ~ 68.5

mV/decade and ~ 129.8 mV/decade for Log-APS and 2Log-APS respectively.

_______________ 1In Fig. 3.4, we plotted values of the sense-node voltage that are normalized to the maximum of Log-APS response, to clearly show the change in slope, which is related to the compression (output swing).


Theoretically (from Equations 6.15 and 6.22), the ratio of 129.8 mV/decade to 68.5
mV/decade (~1.895) corresponds to the factor (1 + 1/n), and hence n ~ 1.12, which is also
reflected in the reduction of the offset voltage (the low-light-intensity portion of the
curves). To further enhance the response, one may use a PMOS instead of an NMOS for the
M1 transistor. However, this cannot be accomplished without sacrificing fill-factor
(active photodiode area over total pixel area), which is related to the optical sensitivity,
and/or degrading the image quality due to the poor matching behavior exhibited by
PMOS transistors [38]. Another alternative for enhancing the logarithmic response is to add
a 3rd NMOS diode in cascade with the two NMOS diodes already present in the 2Log-APS to
form a 3Log-APS. The resulting slope enhancement is clearly illustrated in Fig. 3.4
(Chapter 3), but again, this can only be done at the expense of fill-factor. It
seems the 2Log-APS is the best configuration for many applications.
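The slope comparison used above can also be reproduced numerically; the following Python sketch evaluates Equations (6.15) and (6.22) and recovers n from the slope ratio. The supply, the threshold voltages, and I_o(W/L) are placeholder values, not the HSPICE model parameters.

```python
import numpy as np

v_T = 0.025                          # thermal voltage at room temperature (V)
V_DD, V_th1, V_th2 = 5.0, 0.7, 0.7   # placeholder supply and threshold voltages (V)
I_oWL = 1e-15                        # placeholder I_o*(W/L) (A)

def v_in_log(I_ph, n=1.12):
    """Sense-node voltage of the 1-NMOS Log-APS, Eq. (6.15)."""
    return (V_DD - V_th1) / n - v_T * np.log(I_ph / I_oWL)

def v_in_2log(I_ph, n=1.12):
    """Sense-node voltage of the 2-NMOS 2Log-APS, Eq. (6.22)."""
    return (V_DD - V_th1 - V_th2) / n**2 - (1 + 1/n) * v_T * np.log(I_ph / I_oWL)

I_ph = np.logspace(-14, -9, 6)                                  # photocurrent sweep (A)
slope_1 = np.polyfit(np.log10(I_ph), v_in_log(I_ph), 1)[0]      # ~ -v_T*ln(10) per decade
slope_2 = np.polyfit(np.log10(I_ph), v_in_2log(I_ph), 1)[0]     # ~ -(1 + 1/n)*v_T*ln(10) per decade

ratio = slope_2 / slope_1            # equals (1 + 1/n); e.g. 129.8/68.5 ~ 1.895 for the measured curves
n_extracted = 1.0 / (ratio - 1.0)    # ~ 1.12
```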

Now, as we did before, assuming negligible channel length modulation effect on

the drain current (IDF) of the source-follower transistor, Equation (6.2) can still be used to
determine the output voltage as:

V_o \approx V_{oF} = \frac{V_{DD} - V_{th,1} - V_{th,2}}{n^2} - \left(1 + \frac{1}{n}\right) v_T \ln\!\left(\frac{A_D J_{ph} + I_{dark}}{I_o\,(W/L)}\right) - V_{th,F} - \sqrt{\frac{2\,I_{bias-0}}{\mu_n C_{ox}\,(W_F/L_F)}}    (6.23)

Based on this equation, all possible sources (Zi) can be listed along with their absolute

sensitivities, and their effect on FPN as shown in Table 6.3 below.

Table 6.3: Possible FPN sources, their sensitivities, and their effects on FPN for the 2Log-APS.

Source (Zi)  | Sensitivity (Si)                                                           | Class  | Type
Idark        | (1 + 1/n) v_T / (A_D J_ph + I_dark)                                        | Pixel  | Gain
AD           | (1 + 1/n) J_ph v_T / (A_D J_ph + I_dark)                                   | Pixel  | Gain
W/L          | (1 + 1/n) v_T / (W/L)                                                      | Pixel  | Offset
Io           | (1 + 1/n) v_T / I_o                                                        | Pixel  | Offset
Vth,1        | 1/n^2                                                                      | Pixel  | Offset
Vth,2        | 1/n^2                                                                      | Pixel  | Offset
n            | 2 (V_DD - V_th,1 - V_th,2)/n^3 + (v_T/n^2) ln((A_D J_ph + I_dark)/(Io W/L)) | Pixel  | Gain/Offset
Vth,F        | 1                                                                          | Pixel  | Offset
WF/LF        | (1/2) sqrt(2 I_bias-0 / (µn Cox)) (WF/LF)^(-3/2)                           | Pixel  | Offset
Ibias-0      | (2 I_bias-0 µn Cox (WF/LF))^(-1/2)                                         | Column | Offset

Now, assuming uncorrelated FPN sources as before, the total-FPN can be written:

\sigma_{V_o} = \sqrt{\sigma_X^2 + \sigma_Y^2}\ \ \text{mV}

where

\sigma_X^2 = \sum_{i=1}^{l} S_i^2\,\sigma_{Z_i}^2 = \left(\left(1+\tfrac{1}{n}\right)\frac{v_T}{A_D J_{ph}+I_{dark}}\right)^2\sigma_{I_{dark}}^2 + \left(\left(1+\tfrac{1}{n}\right)\frac{J_{ph}\,v_T}{A_D J_{ph}+I_{dark}}\right)^2\sigma_{A_D}^2 + \left(\left(1+\tfrac{1}{n}\right)\frac{v_T}{W/L}\right)^2\sigma_{W/L}^2 + \left(\left(1+\tfrac{1}{n}\right)\frac{v_T}{I_o}\right)^2\sigma_{I_o}^2 + \frac{1}{n^4}\left(\sigma_{V_{th,1}}^2 + \sigma_{V_{th,2}}^2\right) + \left(\frac{2\,(V_{DD}-V_{th,1}-V_{th,2})}{n^3} + \frac{v_T}{n^2}\ln\frac{A_D J_{ph}+I_{dark}}{I_o\,(W/L)}\right)^2\sigma_{n}^2 + \sigma_{V_{th,F}}^2 + \left(\frac{1}{2}\sqrt{\frac{2\,I_{bias-0}}{\mu_n C_{ox}}}\left(\frac{W_F}{L_F}\right)^{-3/2}\right)^2\sigma_{W_F/L_F}^2

and

\sigma_Y^2 = \sum_{i=l+1}^{k} S_i^2\,\sigma_{Z_i}^2 = \left(2\,I_{bias-0}\,\mu_n C_{ox}\,\frac{W_F}{L_F}\right)^{-1}\sigma_{I_{bias-0}}^2    (6.24)

Similarly, we can write the total-FPN in terms of gain-FPN (σgain) that represents all

sources that include Jph (illumination dependent), and offset-FPN (σoffset), which includes

dark current as well as other offset voltages as:

\sigma_{V_o} = \sqrt{\sigma_{offset}^2 + \sigma_{gain}^2}\ \ \text{mV}

where

\sigma_{offset}^2 = \sum_{i=1}^{l'} S_i^2\,\sigma_{Z_i}^2 = \left(\left(1+\tfrac{1}{n}\right)\frac{v_T}{W/L}\right)^2\sigma_{W/L}^2 + \left(\left(1+\tfrac{1}{n}\right)\frac{v_T}{I_o}\right)^2\sigma_{I_o}^2 + \frac{1}{n^4}\left(\sigma_{V_{th,1}}^2 + \sigma_{V_{th,2}}^2\right) + \left(\frac{2\,(V_{DD}-V_{th,1}-V_{th,2})}{n^3}\right)^2\sigma_{n}^2 + \sigma_{V_{th,F}}^2 + \left(\frac{1}{2}\sqrt{\frac{2\,I_{bias-0}}{\mu_n C_{ox}}}\left(\frac{W_F}{L_F}\right)^{-3/2}\right)^2\sigma_{W_F/L_F}^2 + \left(2\,I_{bias-0}\,\mu_n C_{ox}\,\frac{W_F}{L_F}\right)^{-1}\sigma_{I_{bias-0}}^2

and

\sigma_{gain}^2 = \sum_{i=l'+1}^{k} S_i^2\,\sigma_{Z_i}^2 = \left(\left(1+\tfrac{1}{n}\right)\frac{v_T}{A_D J_{ph}+I_{dark}}\right)^2\sigma_{I_{dark}}^2 + \left(\left(1+\tfrac{1}{n}\right)\frac{J_{ph}\,v_T}{A_D J_{ph}+I_{dark}}\right)^2\sigma_{A_D}^2 + \left(\frac{v_T}{n^2}\ln\frac{A_D J_{ph}+I_{dark}}{I_o\,(W/L)}\right)^2\sigma_{n}^2    (6.25)

Note that all of these analyses are based on the assumption that all sources are

uncorrelated to ease the formulation. This is not necessarily a good assumption in

general [19], as many of these sources are cross related, and therefore, the total FPN

determined by simply square-rooting the linear addition of the variances (σ2) of different

sources may not be generally accurate. For correlated sources, determining the explicit

contribution (variance) of each source and/or separating column from pixel FPN or gain

from offset FPN are difficult tasks, if not impossible ones. Studying the contribution of

each possible source of FPN needs special design-for-test arrangement and statistical

modeling that are outside of the scope of this thesis.

Here, we are not attempting to model or formulate FPN precisely, but rather to gain a
broad understanding of how the different sources may contribute to the total FPN, which
can be used qualitatively to devise a technique to reduce the FPN in the imager output
signal. For example, upon comparing the FPN in the three modes of operation (refer to
Tables 6.1-6.3), we noticed identical column-FPNs, which are mainly offset-FPNs. This
suggested the possibility of removing this FPN, especially from the logarithmic APS, as we
will see later when we introduce our "Modal Double Sampling" (MDS) technique. We will
also use this study later to qualitatively interpret the behavior of the FPN with different
system parameters, such as illumination, frame rate, and Vbias-0.


6.3 3-Mode active pixel sensors (3M-APS)

The "super" pixel (shown in Fig. 6.6) constitutes the core of the proposed MDS technique

for FPN noise reduction. This novel pixel design can work in three modes of operation,
Linear-, Log-, and 2Log-APS, by choosing specific control signals and/or clocks (refer to the
inset table in Fig. 6.6). If we ground the gate of M1 (Log), we can have either the linear
(integrating) APS, by providing the appropriate "reset" clock to the gate of M3 (Reset), or
the logarithmic (continuous) APS, by holding this gate high at VDD. By grounding the gate
of M3 and making the gate of M1 high at VDD, we accomplish the 3rd mode of operation.
M1 will be kept in diode-connected form for all modes of operation. Possible sources of FPN
are also listed in Fig. 6.6 below. We added the respective overlap capacitances, which are
responsible for feed-through when one of the transistors (M1 or M3) is turned off [19, 90].
It is worth noting that DALSA Incorporated separately developed a pixel

that has a similar structure, but with different functionality (refer to [112]).
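For reference, the control settings in the inset table of Fig. 6.6 can be captured compactly, for example in the small Python dictionary below (a convenience representation only; the signal names follow Fig. 6.6, and "clock" stands for the row-wise reset clock).

```python
# Control settings of the 3M-APS "super" pixel (inset table of Fig. 6.6).
# "high" = VDD, "low" = ground, "clock" = the row reset clock.
MODE_CONTROLS = {
    "Linear-APS": {"Log": "low",  "Reset": "clock"},
    "Log-APS":    {"Log": "low",  "Reset": "high"},
    "2Log-APS":   {"Log": "high", "Reset": "low"},
}

def controls_for(mode):
    """Return the (Log, Reset) gate settings required for the requested pixel mode."""
    return MODE_CONTROLS[mode]
```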

Figure 6.6: 3-Mode active pixel sensor (3M-APS): Linear-APS, Log-APS, and 2Log-APS. Possible sources of fixed pattern noise (FPN) are also shown. Note that the bulk nodes of transistors M1, M2, and M3 are all connected to ground, but this is not shown for clarity. The control signals Log and Reset can each be high (VDD), low (Gnd), or a clock, depending on the required mode of operation (as shown in the inset table).

[Schematic of the 3M-APS pixel and the per-column load and readout. Inset table of control settings:]

Mode       | Log  | Reset
Linear-APS | Low  | Reset clock
Log-APS    | Low  | High
2Log-APS   | High | Low


Ideally, we should end up with responses and FPN levels for each mode of

operation that are identical to those found in the corresponding separated pixels discussed

earlier. It was subsequently learned, however, that because of low currents involved

(especially at very low light intensities), leakage and feed-through currents (when either

M1 or M3 turned off) could be comparable to the photocurrent. In this case, both branches

of the “super” pixel are working to some degree, even if either of them is ‘idled’ by

grounding its transistor gate (M1 in left branch, and M3 in right branch). Thus, for any

mode of pixel operation, the percentage contribution of each branch to total current

(governed by Iph) determines the response of that mode operation.

In Fig. 6.6 above, for Log-APS (or Linear-APS) mode operation, the left branch

of the pixel circuit (consisting of M1, M2) is idled by forcing the “Log” node low at 0V,

whereas the right branch (which consists of M3) is activated by putting the “Reset” node

high at VDD (or reset clock). The reverse is true for 2Log-APS, where the left branch is

the activated one, while the right branch is idled by grounding the gate of M3.

From the current analysis of the 3M-APS in its three modes of operation, we
found that both the Log-APS and the 2Log-APS would not be affected by the current

“sharing” between the two branches, because the current of the active branch is much

higher than that of the idle branch. For Linear-APS mode, however, the contribution

from both branches are comparable, especially if M1, M2, M3 have the same aspect ratio

(W/L), and/or at lower light intensities (higher Vin). Accordingly, the following scenario

is expected: at low light intensities, the Linear-APS response would be distorted and may

have a compressive nature (because of the non-linearity and/or logarithmic compression

that was introduced to it). As the light intensity increases, the linear-APS starts to

respond linearly with light intensity, until it saturates at some light intensity level. The

large discrepancy in response of the three modes occurs at high light intensities, as we

will notice in the following sections. To improve the response of the Linear-APS mode,

we may need to increase the aspect ratio of M3 (W3 /L3) compared to the aspect ratio of

M1 and M2 (W/L) to an extent that does not jeopardize the fill-factor and/or the resolution

of the imager. Alternatively, we can apply negative voltage to the gate of M2 instead of

0V. These actions reduce the current contribution of the left (idle) branch of
the pixel, and hence its effect on the linearity of the response.


Figure 6.6 also shows possible FPN sources from the two branches of the pixel.

Now, with the argument presented in the previous paragraphs in mind, the total measured

FPN for any mode is expected to be a complicated function of photocurrent density, and

not just a simple linear combination of all possible sources listed in Tables 6.1-3 above.

So, it is not surprising to see a general increase in FPN as a result of the increased

number of potential sources of FPN. However, under some conditions, the combined

effect may be reduced as a result of cancellation between some sources. This can

be seen as a rise and fall in the FPN within the operational light intensity range.

Now, to see how the optical response of each mode is affected by the presence of FPN,
we measured the response in each mode using an actual imager (ALO) chip that was
designed with 3M-APS pixels and fabricated using 0.35 µm CMOS technology (refer to
Appendix C). The detailed setup of these measurements is shown in Appendix D.
Briefly, we used a National Instruments card with LabView oscilloscope software, which
also displays the captured image in framed format using LabView gray-level coding
(±100 levels). In these measurements (as is true for all measurements in this chapter),
the data array of imager outputs (in voltage form or converted to LabView gray-level form)
was constructed by automatically averaging (using LabView) 11 consecutive captured frames
in an attempt to reduce the effect of temporal random noise.

After excluding the bad pixels (25 columns) from the measured-data using a

threshold, we plotted the column means1, <columns>, and row means, <rows>, of imager

data arrays for each mode of operation. The measurement was performed with Vbias-0 at 0.48 V,
Vbias-B at 0.58 V, and a frame rate of ~23.32 f/s. The light intensity range was 0-160 µW/cm2

(green light λ~540nm). The results are shown below in Figs. 6.7-6.9. These plots are

qualitatively representative of any column (row) in the array at a given light intensity,

although they may differ quantitatively because of the “mean” operation.

At low intensities, the linear-mode (Linear-APS) response shown in Fig. 6.7
exhibited a compressive behavior like the logarithmic pixel (overlapped curves). Over the
1-20 µW/cm2 range, however, the imager demonstrated its best linearity (linear mode).
Finally, above 20 µW/cm2, the imager starts to saturate, with outputs that are

_______________ 1Throughout this chapter, "mean" refers to the spatial average (e.g. <columns>, <rows>), and "average" refers to the temporal average, to differentiate between the two.


constant with intensity (overlapped curves). The <columns> graph showed more

fluctuation (higher FPN) at low intensity compared with the <rows> graph. Moreover, for

any given light intensity the <columns> response (Top) increases monotonically with

column position (column #). Since we are using an integrating sphere to ensure uniform

intensity distribution, we cannot attribute this behavior to the light source.

Figure 6.7: Gray level response of the imager in linear-mode (Linear-APS). Top graph is for the columns’ means <columns>, with bad column excluded, whereas the bottom graph is for <rows>. The measurement was held with Vbias-0 @ ~0.48 V, Vbias-B @-0.58 V, and frame rate @ ~23.32-f/s. Light intensity range is 0-160µW/cm2 (green light λ~540nm).


This behavior can be attributed to the fading of the sampled pixel output as the

pixel is read out. According to the Raster scanning algorithm used here, when a row is

selected, all pixels in that row are sampled simultaneously in the respective sample-and-

hold circuits (S&H), but they are read sequentially (column after column). Therefore, if the

S&H capacitors are small and/or their NMOS switches have higher leakage currents, the

reading of the 1st pixel will differ from the reading of the last pixel in the row.
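This leakage-droop explanation can be illustrated with a simple first-order model: every pixel in the row is sampled at the same instant but read one column-time later than its neighbour, so charge lost from the hold capacitor appears as a monotonic trend with column position. The Python sketch below uses arbitrary placeholder values for the hold capacitance, leakage current, and column readout time; it is only an illustration of the hypothesis, not a fitted model of the ALO imager.

```python
import numpy as np

C_SH   = 100e-15    # hold capacitance (F), placeholder
I_leak = 1e-12      # switch leakage current (A), placeholder
t_col  = 10e-6      # time to read one column (s), placeholder

cols = np.arange(64)                      # column read-out order within one row
droop = (I_leak / C_SH) * t_col * cols    # volts leaked from the hold node before each column is read
v_read = 1.0 - droop                      # a 1 V held sample drifts by ~6 mV across 64 columns
```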

The <rows> response (bottom), on the other hand, showed a constant nature for
all rows in the imager. This may be attributed to the fact that, at a given light intensity,
the pixels of a particular column in the array have the same response (if exposed to the
same light intensity), because they are sampled at the same time and read at the same
time relative to their row timing. Moreover, the <rows> graph exhibits no fluctuation

between adjacent pixels, which implies low row-FPN.

As for the Log-APS and 2Log-APS responses shown in Fig. 6.8 and Fig. 6.9 below,

both modes showed behaviors similar to Linear-APS with regard to the effect of column

and row position on the respective <columns> and <rows> responses, for the same reasons
mentioned above. In practice, imagers working in the Log-APS or 2Log-APS modes

need no sampling because they are continuous in nature. We sampled here, because (as

we will see later) our “Modal Double Sampling” MDS technique for FPN reduction is

based on differencing sampled pixel outputs. Additionally, we need to compare FPN

before and after applying this technique. Therefore, responses need to be read via the

same path in order to make a fair comparison. The two modes exhibited high <columns>

fluctuations, which imply high column-to-column mismatch (high column-FPN), thus
advocating the need for FPN reduction. On the other hand, the <rows> fluctuations (like
Linear-APS) are minimal, which also suggests low row-FPN (pixel-FPN).

Correction of Measurements before FPN calculation

Due to a large variation of <column> data across the width of the imager (tops of Fig.

6.7, Fig. 6.8, and Fig. 6.9), these data need to be corrected before any fixed pattern noise

(FPN) calculations. Otherwise, the FPN will include these variations, which will act as


an artificial source for FPN, especially for column-FPN. Please refer to Appendix F for

more detailed information on how we corrected these data.

Figure 6.8: Gray level response of the imager in logarithmic-mode (Log-APS). Top graph is for <columns>, with some bad columns excluded, whereas the bottom graph is for <rows>. The measurement was held with Vbias-0 @ ~0.48 V, Vbias-B @-0.58 V, and frame rate @ ~23.32-f/s. Light intensity range is 0-160µW/cm2 (green light λ~540nm).


Figure 6.9: Gray level response of the imager in logarithmic-mode (2Log-APS). Top graph is for <columns>, with some bad columns excluded, whereas the bottom graph is for <rows>. The measurement was held with Vbias-0 @ ~0.48 V, Vbias-B @-0.58 V, and frame rate @ ~23.32-f/s. Light intensity range is 0-160µW/cm2 (green light λ~540nm).


6.4 Modal Double-Sampled FPN Reduction Technique

The well-known correlated double sampling (CDS) circuit [15, 16, 105], introduced in

Chapter 4, was and still is the primary means for reducing FPN in linear (integrating)

active pixel sensors (APS). For logarithmic APS, on the other hand, off-chip techniques

with digital-calibration methods were reported (see [106]) as well as integrated analog-

calibration methods [102, 38, 107-109]. However, most of these techniques, especially

the integrated ones (of interest here), implemented the FPN reduction functionality

at the expense of fill-factor (photosensitivity) and/or the resolution, with an applicability

that is limited to one mode of pixel operation.

Figure 6.10: Double-sampled FPN reduction technique with the 3M-APS operations, Linear-APS, Log-APS, and 2Log-APS. Note that the bulk nodes of transistors M1, M2, and M3 are all connected to ground, but this is not shown for clarity. The control signals Log and Reset can each be high (VDD), low (Gnd), or a clock, depending on the required mode of operation (see Fig. 6.6).



Here, we describe a novel technique to reduce the FPN in two modes (linear,

logarithmic) of pixel operations with slight constraints imposed on the resolution and/or

photosensitivity. This technique is based on double sampling methods plus a novel

“super” pixel (3M-APS) design, discussed in the previous section, to accommodate the

FPN reduction functionality as shown in Fig.6.10 above. As already demonstrated, this

novel “super” pixel can be operated in three modes, the Linear-APS (with FPN

reduction), the logarithmic (Log-APS) with full-compression, and/or logarithmic (2Log-

APS) with lowered-compression if we ignore the logarithmic FPN reduction

functionality.

The double-sampled technique was implemented using an architecture (see Fig.

6.10) that consists (in addition to the 3M-APS imager) of two sample-and-hold (S&H)

circuits per column, two buffer PMOS amplifiers per column, and only one differencing

amplifier per imager (not shown) to reduce the column-FPN. The operation of the 3M-

APS was already presented in the last section. Therefore, we limit our discussion to the

double sample FPN reduction only. With FPN reduction functionality, the pixel

operation is limited to two modes: linear mode or logarithmic mode.

Figure 6.11: Functional block diagram of the proposed FPN reduction system using the double sampling (DS) technique. The Log-APS (Linear-APS) outputs of pixels in the active row (dark) are sampled in one S&H. After switching to 2Log-APS (resetting), the 2Log-APS (reset) outputs from the same pixels are sampled in the other S&H. The outputs are then differenced using a differential amplifier (one per imager) to obtain the logarithmic (linear) output with reduced FPN.



For the linear mode, the pixel will operate in Linear-APS with control settings as

before (refer to the inset table in Fig. 6.6). The functional diagram is shown in Fig. 6.11.

When a row is activated, the output from each pixel in that active row is sampled and

held in one of the S&H circuits (two per column) as shown above. Then this row is reset

and the output from each pixel is sampled and held in the other S&H circuit (refer to Sec.

4.2.1 for the operation, and Fig. 4.2 for the timing diagram). When the signal output and the
reset output are then read out simultaneously (COL becomes low) and differenced, the result
is an output with the offset FPN (ideally) removed.

For the logarithmic mode of operation, the pixel will switch between two

logarithmic modes, Log-APS and 2Log-APS, with control settings as before (refer to the

inset table in Fig. 6.6). The difference, however, is that these controls are bi-state clocks
instead of one-state controls (see Fig. 6.12 below). In fact, the clock on the M2 gate (Log)
is the inverse of the clock on the M3 gate (Reset). Therefore, Log = \overline{Reset}, or vice versa.

Figure 6.12: Control timings and clocks for fixed pattern noise reduction in the logarithmic mode using the double sampling (DS) technique. ∆TR is the row-preparation time1, and ∆t is the time interval between samples. Refer to the text and Fig. 6.10 for the control-setting definitions. For a given frame rate, both intervals are crucial in determining the minimum required speed of the pixel response.

_______________ 1Row-preparation time is the time at the beginning of the "row-time2" assigned for sampling, switching to the other mode, and sampling again; therefore, ∆TR is longer than ∆t, the time between samples. 2Row-time is the time that covers all row operations, including the column read-out (during which the row-select switch is active, i.e., Row is high).


According to Fig. 6.11 and Fig. 6.12, for the logarithmic mode, when a row is activated and the Reset node is high (and the Log node is low), the output from each pixel in that row (the Log-APS response) is sampled and held in one of the S&H circuits. Then this row is switched to 2Log-APS by making the Reset node low (and the Log node high), and the outputs from the same pixels are sampled and held in the other S&H circuit. When reading out (COL becomes low), the Log-APS signal and the 2Log-APS signal are simultaneously differenced from each other. This results in a logarithmic output with the column-offset FPN (ideally) removed, since both signals have (theoretically) equal column-offset FPN (refer to Tables 6.2, 6.3 and Figs. 6.8, 6.9 above).
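The offset cancellation behind this differencing can be illustrated with a minimal numerical model. The sketch below is not the thesis measurement code: the column offsets, pixel gain spread, photocurrent, and the simple logarithmic transfer characteristics are all illustrative assumptions standing in for the Log-APS and 2Log-APS responses.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_cols = 64, 64

# Hypothetical per-column offsets (offset-FPN) and per-pixel gain spread (gain-FPN).
col_offset = rng.normal(0.0, 0.05, size=n_cols)                    # V, shared by a column
pix_gain = 1.0 + rng.normal(0.0, 0.01, size=(n_rows, n_cols))

I = 1e-12 * np.ones((n_rows, n_cols))    # uniform photocurrent, A (flat-field scene)
vT = 0.026                               # thermal voltage, V

# Stand-in transfer characteristics: Log-APS and 2Log-APS differ in slope
# (full vs. lowered compression) but see the same column offset.
v_log = 1.2 - pix_gain * (1.0 * vT) * np.log(1e-9 / I) + col_offset    # into S&H 1
v_2log = 1.2 - pix_gain * (2.0 * vT) * np.log(1e-9 / I) + col_offset   # into S&H 2

diff = v_log - v_2log   # DS output: column offsets cancel, gain-FPN residual remains

print("column-FPN before DS:", 1e3 * np.std(v_log.mean(axis=0)), "mV")
print("column-FPN after  DS:", 1e3 * np.std(diff.mean(axis=0)), "mV")
```

With these arbitrary numbers the common column offset disappears from the differenced output while a small gain-related residual survives, which is the behaviour observed in the measurements below.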

The key issue here is the speed of response of the pixel circuit when it is switched from one mode to another. However, we will postpone discussion of this issue for now. We will start by presenting the results of our parametric investigation of the imager response and the associated FPN before and after implementing our MDS FPN reduction technique. This involves measurements based on the actual imager (ALO-chip) designed with this pixel (3M-APS) configuration and fabricated using 0.35µm CMOS technology (refer to Appendix C). For a detailed setup of these measurements, refer to Appendix D. These measurements were conducted for the three modes of operation and include an investigation of the effects of various accessible parameters, such as light intensity, frame rate, and bias voltages, on the performance. The results are then analyzed and discussed.

In these measurements, 11 consecutive frames were first averaged (using LabView) to remove the contribution of temporal random noise [104, 111]. The resulting data arrays have approximately Normal-distribution histograms, as shown in Fig. 6.13 below. A standard deviation, σ, of 6.9895 (refer to the inset of the histogram) is large. This is because the histogram is for the entire data array, so this σ value reflects the variation of pixel outputs from the mean of the entire array. The Normal distribution is defined by the equation:

Y = \frac{1}{\sigma \sqrt{2\pi}}\, e^{-(X-\mu)^2 / 2\sigma^2}

where µ is the mean of the variable X, and σ² is the variance.
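As a minimal sketch of this averaging step (with synthetic frames standing in for the LabView capture, and with illustrative noise magnitudes rather than the measured ones), the code below averages 11 simulated frames and compares the histogram of the averaged array with the Normal distribution defined above.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_rows, n_cols = 11, 64, 64

# Fixed spatial pattern (FPN, identical in every frame) plus temporal noise that
# changes from frame to frame; the gray-level magnitudes are illustrative only.
fpn = rng.normal(128.0, 7.0, size=(n_rows, n_cols))
frames = fpn + rng.normal(0.0, 3.0, size=(n_frames, n_rows, n_cols))

avg = frames.mean(axis=0)   # 11-frame average: temporal noise shrinks by ~sqrt(11)

mu, sigma = avg.mean(), avg.std(ddof=1)
print(f"mean = {mu:.2f}, sigma = {sigma:.4f} gray levels")

# Normal pdf Y = (1/(sigma*sqrt(2*pi))) * exp(-(X-mu)^2 / (2*sigma^2)), scaled to
# expected counts so it can be overlaid on the histogram of the averaged array.
counts, edges = np.histogram(avg, bins=50)
centres = 0.5 * (edges[:-1] + edges[1:])
pdf = np.exp(-(centres - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
expected = pdf * avg.size * (edges[1] - edges[0])
print("largest histogram/model mismatch:", np.abs(counts - expected).max(), "pixels")
```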


Figure 6.13: Typical histogram of the (11-frame averaged) imager gray-level outputs (left graph). The measurement was made with Vbias-0 of 0.48 V, Vbias-B of -0.58 V, and a frame rate of ~23.32 f/s. The light intensity is 10 µW/cm2 at λ~540nm (green light).

Accordingly, we can now calculate the rms column-FPN, row-FPN, and/or pixel-FPN from the (11-frame averaged) data arrays using the following formulas [111]:

FPN_{column} = \sigma_{column} = \sqrt{ \frac{\sum_j (\bar{P}_j - \bar{P})^2}{j - 1} } \quad (mV)   (6.26)

and

FPN_{row} = \sigma_{row} = \sqrt{ \frac{\sum_i (\bar{P}_i - \bar{P})^2}{i - 1} } \quad (mV)   (6.27)

and

FPN_{pixel} = \sigma_{pixel} = \sqrt{ \frac{\sum_{i,j} (P_{i,j} - \bar{P}_j)^2}{i \cdot j - 1} } \quad (mV)   (6.28)

where i is the number of rows and j is the number of columns, \bar{P}_i is the row mean <row>, \bar{P}_j is the column mean <column>, and \bar{P} is the mean of the entire array (after 11-frame averaging). P_{i,j} is the output of the pixel at the i-th row and j-th column, which can be either in gray-level form or an absolute voltage value. According to these formulas, the column-FPN is determined by calculating σ of the variation of the column means <column> from the mean of the entire data array. A similar method is used for the row-FPN. Pixel-FPN, however, is determined by calculating σ of the variation of the pixel outputs in one column from that column's mean <column>, for all columns in the imager array. It is worth noting that all bad pixels with cosmetic defects were identified (using thresholds) and excluded from all calculations of rms FPN. The results were categorized according to the parameter under investigation and the mode of operation. In the next sections, we will present the photoresponses, the FPN behavior with light intensity, and the FPN reduction efficiency using the proposed modal double sampling technique. This procedure will then be repeated with other parameters, such as frame rate and current-source-load bias.
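A compact sketch of Equations 6.26-6.28 is given below, assuming the 11-frame-averaged data are available as a 64×64 array and that cosmetic-defect pixels have been flagged in a boolean mask; the synthetic data and the defect threshold used here are arbitrary stand-ins for the measured arrays and the manual thresholds.

```python
import numpy as np

def fpn_metrics(P, bad=None):
    """Column-, row-, and pixel-FPN (Eqs. 6.26-6.28) of a 2-D frame P.

    P   : 2-D array of pixel outputs (gray levels or volts), 11-frame averaged.
    bad : optional boolean mask of cosmetic-defect pixels to exclude.
    """
    P = np.ma.masked_array(P, mask=bad)

    col_means = P.mean(axis=0)       # <column>, one value per column
    row_means = P.mean(axis=1)       # <row>, one value per row
    grand = P.mean()                 # mean of the entire array

    # sigma of the column means about the grand mean (Eq. 6.26)
    fpn_col = np.sqrt(((col_means - grand) ** 2).sum() / (col_means.count() - 1))
    # sigma of the row means about the grand mean (Eq. 6.27)
    fpn_row = np.sqrt(((row_means - grand) ** 2).sum() / (row_means.count() - 1))
    # sigma of every pixel about its own column mean (Eq. 6.28)
    dev = P - col_means[np.newaxis, :]
    fpn_pix = np.sqrt((dev ** 2).sum() / (P.count() - 1))
    return fpn_col, fpn_row, fpn_pix

# Illustrative use with synthetic data (a real run would load the averaged frame).
rng = np.random.default_rng(2)
frame = 300 + rng.normal(0, 5, (64, 64)) + rng.normal(0, 10, 64)[np.newaxis, :]
bad = frame > 340   # stand-in for the manual gray-level defect thresholds
print(fpn_metrics(frame, bad))
```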

6.4.1 Photoresponse and the effect of light intensity on FPN

6.4.1.1 Linear Mode

The photoresponse of the ALO image sensor (refer to Appendix C & D) in the linear mode is shown in Fig. 6.14 below. Each point in this graph was constructed by calculating the mean value of the output of all pixels in the imager array (64×64), with bad pixels excluded using gray-level thresholds (done manually here). Since the differencing is done off-chip, we have the ability to choose which of the column-buffer-amplifier outputs (refer to Fig. 6.10) is differenced from the other. Here, VSH1 represents the pixel "read-out" signal, whereas VSH2 represents the "reset" signal. One differencing (DS-2) results in the (+) response, whereas the other (DS-1) results in the mirrored (−) response.

It is important to mention that the photoresponse shown here (black squares) is inverted (it would otherwise be expected to increase monotonically and then saturate) because of the output buffer amplifier. This is true for all measurements in this chapter. Nevertheless, the FPN results are not affected, since they are based on relative variation rather than absolute values. This imager appears to start saturating at ~30 µW/cm2, as shown. This is also confirmed by the pixel-FPN plot (Fig. 6.15) below, which initially rises due to the gain-FPN (PRNU) sources but later falls to a minimum when the pixel saturates, because the sense capacitor CD and optical aperture AD (gain-FPN sources) do not affect the output voltage when the pixel saturates1 [111].

_______________ 1When a pixel saturates, its output is pinned at the saturation level, so variations in CD or AD can no longer have any effect on it; hence, any remaining variation is due to mismatch (since the entire imager is saturated).



Figure 6.14: Imager photoresponse in the linear mode, VSH1 (right scale) and with DS (left scale) differencing. DS-1 and DS-2 are the outputs, depending on which of VSH1 and VSH2 was differenced from the other. The measurements were made with Vbias-0 of 0.48 V, Vbias-B of -0.58 V, and a frame rate of ~23.32 f/s. The light source is a green halogen lamp (λ~540nm), with light intensity ranging from 0 to 160 µW/cm2.


Figure 6.15: Pixel-FPN at different light intensities in the linear mode. The MDS noise-reduction operation has no effect on pixel FPN as shown. Note that the absolute value (mV) curves are overlapped with the percentage value (%) curves as shown. The measurements were held at the same conditions stated above. Graph data was calculated using Equation 6.28.


Figure 6.15, above, shows that the modal double sampling (MDS) technique has no effect on pixel-FPN (the FPN curves before and after MDS overlap). This is attributed to the fact that pixel-FPN is dominated by gain-FPN, which cannot be removed by double sampling techniques [19, 103]. This applies equally to the row-FPN, which is actually a spatially averaged pixel variation; it is therefore expected to be (at least) qualitatively identical to the pixel-FPN, as confirmed in Fig. 6.16 below. Note that the total (measured) FPN consists of two components: an "offset" component (dark-FPN) and a "gain" component (pure PRNU). Also note that throughout this chapter, the percentage (%) FPN (right scale) is calculated relative to the full-well capacity (1820 mV).


Figure 6.16: Row-FPN at different light intensities in the linear mode. The DS noise-reduction operation has no effect on row spatial-noise as shown. The measurements were held at the same conditions stated above. The graph data was calculated using Equation 6.27.

For the column-FPN, the situation is somewhat different. Like pixel-FPN, it rises

at low light intensities, then levels off when the imager saturates (see Fig. 6.17 below).

The rise of FPN with light intensity implies the existence of gain-FPN sources that affect

the column-FPN. Hence, it is not a “pure” offset as suggested by Table 6.1, which did


not take into account the horizontal1 pixel-to-pixel mismatch sources, which obviously contribute to the total column-FPN (column-to-column mismatches). This conclusion is supported by the fact that all gain-FPN sources arise from pixel mismatches. Moreover, note the qualitative similarity between the FPN curves before and after the DS operation. This supports our argument about the existence of gain sources that contribute to the column-FPN and that cannot be reduced by the double sampling (DS) technique [19, 103]. Unlike the pixel-FPN (which is mainly gain-FPN), the column-FPN does not fall to zero, but levels off to an approximately constant level after saturation, as shown below. A similar result was reported by Decker et al. [111].

Figure 6.17, below, also shows that there is a significant reduction in column-FPN using our double sampling technique. Because the DS operation cannot remove the gain-FPN [19, 103], the residual FPN is mainly from gain-FPN sources, as shown. These results (Fig. 6.17) show that over most of the light-intensity range investigated in the experiment, there is a difference of at least ~25 mV between the absolute FPN (left scale) before and after the DS operation. This difference increases to ~50 mV for the dark-FPN, which is mainly offset-FPN and can therefore be completely removed, as shown in Fig. 6.17 below. This is an important improvement in the FPN level, confirmed by the percentage FPN reduction graph shown in Fig. 6.18, which can be regarded as the FPN reduction efficiency of the MDS technique. It is defined as:

\mathrm{Percentage\ FPN\ Reduction} = \frac{FPN_{before\ DS} - FPN_{after\ DS}}{FPN_{before\ DS}} \times 100

Our results (Fig. 6.18) show that this reduction efficiency starts at ~100% for the dark-FPN (complete removal) and falls to a minimum of ~40% in the operational intensity range, below the saturation intensity (~30 µW/cm2). Note that calculating reduction efficiencies for pixel-FPN and row-FPN is not meaningful, since very little reduction is achieved in either case. These results suggest that the modal double sampling (MDS) technique is very effective for reducing the column-FPN in imagers working in the linear mode of operation, especially for the offset-FPN components.

_______________ 1Note that the calculated pixel-FPN (refer to Equation 6.28) represents the vertical pixel-to-pixel mismatches, since it is based on the vertical variation of pixel outputs relative to the column means <columns>.
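Expressed as code, the reduction efficiency above is just the relative drop in FPN; a tiny helper (with illustrative numbers only) reads:

```python
def fpn_reduction_percent(fpn_before_ds, fpn_after_ds):
    """Percentage FPN reduction: (FPN_before - FPN_after) / FPN_before * 100."""
    return (fpn_before_ds - fpn_after_ds) / fpn_before_ds * 100.0

# e.g. roughly the dark-FPN case quoted above: ~50 mV reduced to almost nothing
print(fpn_reduction_percent(50.0, 0.5))   # illustrative values in mV -> ~99 %
```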



Figure 6.17: Column spatial-noise (FPN & PRNU) at different light intensities in the linear mode. The MDS noise-reduction operation is clearly depicted by differences in level between column-FPN without MDS operation (upper curves) and with the MDS operation (lower curves). These measurements were held at the same conditions stated above. FPN was calculated using Equation 6.26.


Figure 6.18: Column-FPN percentage reduction with MDS (modal double sampling) at different light intensities in the linear mode. These measurements were held at the same conditions stated above.


6.4.1.2 Logarithmic Mode

The photoresponse of an ALO image sensor (refer to Appendix C & D) operating in the logarithmic mode is shown in Fig. 6.19 below. As before, each point in this graph was constructed by calculating the mean value of the output voltage of all pixels in the imager array (64×64), with bad pixels excluded using gray-level thresholds. Trend lines were added to show the logarithmic nature of these transfer characteristics and to differentiate between the two modes. The responses show inverted characteristics, which are caused by the output buffer amplifiers.

Figure 6.19: Imager photoresponse in the logarithmic mode without (modal double sampling) MDS differencing operation. The measurements were held with Vbias-0 of 0.48 V, Vbias-B of -0.58 V, and frame rate of ~23.32 f/s. The light source is green Halogen lamp (λ~540nm), with light intensity range from 0-160µW/cm2.

Here, the column-buffer-amplifier outputs VSH1 and VSH2 represent the "Log-APS" read-out signal and the "2Log-APS" read-out signal, respectively. Again, since the differencing is done off-chip, we can have either the (−) response (DS-1) or the mirrored (+) response (DS-2), as shown below in Fig. 6.20. Again, each point in this graph was constructed by calculating the mean value of the output voltage of all pixels in the imager array (64×64), with bad pixels excluded. Trend lines were added here to show that the logarithmic nature of these transfer characteristics survived the double sampling (DS) operation with both differencing methods. For an on-chip differencing implementation, both options are possible by using the proper control-timing pattern.

Figure 6.20: Imager photoresponse in the logarithmic mode using MDS differencing, with DS-1 (2log-Log), or with DS-2 (Log-2Log) is the output depending on which of the column-buffer-amplifier outputs (VSH1 [Log], VSH2 [2Log]) was differenced from the other. These measurements were held with Vbias-0 of 0.48 V, Vbias-B of -0.58 V, and frame rate of ~23.32 f/s. The light source is green Halogen lamp (λ~540nm), with light intensity range from 0-160µW/cm2.

Figure 6.21, below, shows the pixel-FPN and the row-FPN. Both exhibit similar qualitative behavior with light intensity, since row-FPN is actually a spatially averaged pixel variation. Because the logarithmic pixel, unlike the linear pixel, does not saturate, the pixel- (row-) FPN does not fall to zero. Both measured FPN curves show low-level random fluctuations with light intensity. Because of their randomness and low levels (< 3 mV), they can be attributed to random noise from the system. Moreover, since the light intensity was changed manually, every point in the graph can be regarded as the result of an independent experiment.


Figure 6.21: Pixel-FPN (A) and Row-FPN (B) at different light intensities in the logarithmic mode. The DS noise-reduction operation has almost no effect on pixel-FPN and /or row-FPN as shown. These measurements were held at the same conditions stated above.


Figure 6.22, below, shows that the column-FPN (upper curve) decreases with light intensity. This implies that the gain sources in the column-FPN collectively cause its fall with light intensity. Recalling the pixel (gain) FPN sources in Table 6.2, we find that the collective effect of these two sources may cause the reduction of the FPN with light intensity within a specific light-intensity range.


Figure 6.22: Column-FPN at different light intensities in the logarithmic mode. The DS noise-reduction operation is clearly depicted by the difference level of the spatial-noise without (upper curves) and with (lower curves) the DS operation. These measurements were held at the same conditions stated above.

The reduction in column-FPN using the MDS technique is quite obvious, with differences ranging from ~52 mV at low intensities (2.84 %-FPN) to ~26.9 mV (1.48 %-FPN) at high intensities. This is a significantly high reduction in the FPN level and suggests that the modal double sampling (MDS) technique is an efficient method for FPN reduction in the logarithmic mode of pixel operation. This statement is also supported by the percentage FPN reduction plot shown in Fig. 6.23 below as a function of light intensity. The trend line was added to the plot to illustrate the behavior of the FPN reduction efficiency with light intensity. Contrary to the linear mode, the FPN reduction efficiency in the logarithmic mode has a wider range of high values (~95%) that spans a light-intensity range from 0-50 µW/cm2. Hence, this technique has potential in a wide range of applications.


Figure 6.23: Column-FPN percentage reduction with DS (double sampling) at different light intensities in the logarithmic mode. These measurements were held at the same conditions stated above.

Generally speaking, the modal double sampling (MDS) removes most of the offset-FPN sources (W3/L3, W/L, Vth,1, Vth,2, Vth,3, Col,2, Col,3, WF/LF, Vth,F, Ibias-0), but not the gain-FPN sources, which are unaffected by the MDS operation (as also confirmed elsewhere [103, 19]). Moreover, the double sampling cannot reduce the reset kTC1 noise, because here we first sample the signal (involving reset noise and FPN), and then the pixel is reset. The FPN will not change from its level when the signal is sampled, but the kTC noise will have changed after the reset. Therefore, we can reduce FPN by DS differencing, but

_______________ 1kTC noise is charge-redistribution noise originating from the uncertainty in the amount of charge on a capacitor after it is disconnected from a low-impedance supply [49]. The rms noise voltage is given by kTC_{rms} = \sqrt{kT/C} [23].


not the (uncorrelated) kTC noise [23, 113, 49]. In the logarithmic mode, the situation is similar: we sample the Log-APS first and then the 2Log-APS. Both have comparable FPN but different kTC noise. Thus, the DS differencing will reduce the FPN, but it cannot reduce the kTC noise. The level of the kTC noise is almost constant for all modes of operation, since the photodiode capacitance is almost the same in all of them. Additionally, the DS method cannot cancel offset-FPN that involves Idark, since under illumination Idark acts as a gain-FPN source (refer to Table 6.2) that cannot be eliminated by DS.
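To put a number on the kTC term in the footnote above, the short sketch below evaluates V_rms = \sqrt{kT/C} at room temperature for a few sense-node capacitances; the capacitance values are illustrative assumptions, not the measured CD of the ALO pixel.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # temperature, K

for C in (5e-15, 10e-15, 50e-15):        # assumed sense-node capacitances, F
    v_rms = math.sqrt(k_B * T / C)       # kTC rms noise voltage, V
    print(f"C = {C*1e15:4.0f} fF  ->  kTC noise ~ {v_rms*1e3:.2f} mV rms")
```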

The major drawback of the DS technique could be the addition of more kTC -

noise due to the S&H switches MSH1, MSH2 [19] as shown in Fig. 6.10. Moreover, there

are the effects of the non-linearity of the photodiode capacitance and/or the source

follower transfer characteristic. These may transform some offset-FPN sources into gain-

FPN sources that cannot be reduced by the double sampling (DS) technique [103, 19].

6.4.2 Effect of frame rate on imager response and FPN

Here, we will examine the effect of frame rate on the imager response and FPN for the 3M-APS. When the FPN reduction functionality is used, this pixel can work in two modes: linear and logarithmic. Therefore, it is desirable to investigate the effect of frame rate on the imager response and the FPN level for each mode separately.

6.4.2.1 Linear Mode

The 3D plot of the ALO-imager (refer to Appendix C & D) response to frame rate in the linear mode is shown in Fig. 6.24 below. Each point in this graph was constructed by calculating the mean value of the output voltage of all pixels in the data array (64×64). Bad pixels were manually excluded using gray-level thresholds. Each data array is the result of automatic averaging (using LabView) of 11 consecutive frames to reduce the random temporal noise.

This graph shows the combined effect of both light intensity and frame rate on the imager response. The transfer characteristic (photoresponse) shown here is consistent with that presented in the previous sections (refer to Fig. 6.14). The effect of frame rate appears as a shift of the imager characteristic (curve) to higher intensities as the frame rate increases. Both the onset level of the linear operational range and the saturation level of the imager were shifted to higher light-intensity levels as the frame rate increased. The shift of saturation to higher intensities (desirable) can be explained in terms of the integration time. Recalling that the charge at the sense node is Q = (A_D\, j_{ph} + I_{dark})\, t_{int}, at low frame rates the integration time is longer. This allows more charge to accumulate at the sense node; hence, only lower intensity levels are needed to reach saturation (full-well capacity). At high frame rates, the integration time is short, so the imager needs higher intensities to reach saturation. Shifting the linear region (operational range) to higher intensities has a negative effect on the sensitivity of the imager to low intensity levels (flat response). The major advantage is the ability to control the linear operational range by choosing the proper frame rate for specific types of applications.
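The shift of the saturation point with frame rate follows directly from Q = (A_D j_ph + I_dark) t_int. The minimal sketch below assumes the whole frame period is available for integration and uses an assumed full-well charge, photodiode area, responsivity, and dark current (none of these are the ALO-chip values) to show how the intensity needed to fill the well scales with frame rate.

```python
# Intensity needed to reach full well versus frame rate, assuming t_int ~ 1/frame_rate.
Q_FULL_WELL = 100e-15    # assumed full-well charge, C
AREA = 25e-8             # assumed photodiode area, cm^2 (25 um^2)
RESPONSIVITY = 0.3       # assumed responsivity, A/W, so j_ph = RESPONSIVITY * E
I_DARK = 1e-15           # assumed dark current, A

for fps in (11.66, 23.32, 46.64, 116.6, 233.2):
    t_int = 1.0 / fps
    # Solve Q_FULL_WELL = (AREA * RESPONSIVITY * E + I_DARK) * t_int for E (W/cm^2).
    E_sat = (Q_FULL_WELL / t_int - I_DARK) / (AREA * RESPONSIVITY)
    print(f"{fps:6.2f} f/s -> saturation at ~{E_sat*1e6:7.1f} uW/cm^2")
```

With these arbitrary parameters the saturation intensity scales linearly with frame rate, which is the qualitative shift visible in Fig. 6.24.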


Figure 6.24: 3D-imager response to frame rate in the linear-mode at different light intensities (without MDS). The measurements were held with Vbias-0 ~ 0.48 V, Vbias-B ~ -0.58 V, and light intensity in the range of ~ 0.5-160µW/cm2 (green @ λ~540nm). The frame rates range was ~11.66 f/s to 233.2 f/s.



Figure 6.25: Imager frame-rate response in the linear-mode at different light intensities. The measurements were held with Vbias-0 ~ 0.48 V, Vbias-B ~ -0.58 V, and light intensity in the range of ~ 0.5-160µW/cm2 (green @ λ~540nm). The frame rates range was ~11.66 f/s to 233.2 f/s.

Another useful way to show the frame-rate response is the 2D-plot shown in Fig.

6.25 above, which will be used later in the discussion. The response to frame-rate

variation is flat at two extremes: low light intensities and high light intensities. Between

these two extremes, the response is linear, with the widest observed linearity

(operational) range at a light intensity level of 20µW/cm2 (shown dark) with saturation

starting at ~ 117 f/s.

The effect of frame-rate on column-FPN level in the linear-mode is shown in Fig.

6.26 below. This figure depicts two interesting results. Firstly, the column-FPN

decreases with frame-rate until it reaches some frame-rate (~117 f/s) at which point it

levels off. It falls because when frame-rate increases, the integration time (tint) decreases.

Hence, the gain-FPN source, which involves the tint term (refer to Table 6.1), will also

decrease. The leveling-off of FPN after imager saturation at ~117 f/s occurs because the

gain-FPN sources do not affect the output voltage when the pixel saturates [111].

Therefore, they do not contribute to the column-FPN, and we end up with only offset-

FPN components, which are independent of light intensity (shown flat after saturation in


Fig. 6.26). This graph can be regarded as the reverse of the behavior of column-FPN with light intensity shown in Fig. 6.17: if we take the reverse path and reduce the light intensity instead of increasing it, we end up with a similar result. This is because decreasing the integration time (at constant light intensity) effectively decreases the light intensity, since the amount of charge at the sense node decreases accordingly. The maximum reduction (~42 mV) occurs at a frame rate of 117 f/s. This is confirmed by the reduction-efficiency graph (right), which shows a maximum (~95%) at the same frame rate, after which the efficiency falls to 88% at 233.2 f/s. Note that the minimum reduction efficiency (~47%) occurs at low frame rates (11.66 f/s); the amount of reduction is ~39 mV at this frame rate. This gives the user another degree of freedom, because one can use both frame rate and light intensity to find an optimum operational range for a specific application.


Figure 6.26: Effect of frame-rate on column-FPN in the linear-mode. Left: the FPN before MDS operation (top) for different frame rates compared with the FPN after using MDS (bottom). Right: the calculated percentage of FPN-reduction. These measurements were held with Vbias-0 @ 0.48 V, Vbias-B @ -0.58 V, and light intensity @~20µW/cm2 (green @ λ~540nm). The frame rates were in range of ~11.66 f/s to 233.2 f/s. The percentage FPN shown on the left was calculated for full well capacity.



Figure 6.27: 3D-plot of the effect of frame-rate on column-FPN in the linear-mode for different light intensities. Left is the FPN without MDS. Right graph is the FPN with MDS. All measurement conditions are as stated above. The frame rates range was ~11.66 f/s to 233.2 f/s and the light intensity ranges (@ λ~540nm) from 0.5 µW/cm2 to ~160 µW/cm2.

Figure 6.27 shows a 3D plot of the effect of frame rate on column-FPN under different light intensities. The behavior of the column-FPN before the MDS operation (left) confirms the argument about the effect of saturation on the gain-FPN, and thus on its contribution to the column-FPN (almost zero after saturation). At very low light intensity, saturation does not occur even at very low frame rates (very long tint), and since the light intensity is very low, the contribution of the gain sources (which depends on light intensity) is also very low; thus, the flat FPN shown contains almost pure offset-FPN (independent of light intensity). As the light intensity increases, saturation can occur at low frame rates, and even at higher frame rates (short tint) once the light intensity exceeds 20 µW/cm2. However, at very high frame rates (ultra-short tint), the imager will not saturate, even at light intensities as high as 100 µW/cm2 (as shown on the left of Fig. 6.27). Note that the total level of FPN before saturation consists of a constant offset-FPN plus a gain-FPN that depends on light intensity; so as the light intensity increases, the gain-FPN also increases, giving the total FPN level shown. A reverse behavior holds for the frame-rate effect, where the gain component decreases with frame rate. Our argument is further supported by the graph on the right for the column-FPN after the MDS operation, which mainly consists of residual gain-FPN that cannot be eliminated with the MDS technique.


Figure 6.28: Effect of frame-rate on row-FPN in the linear-mode of operation. The graph shows no improvement in FPN using MDS. Both FPN curves (with MDS and without MDS) are almost identical (overlapped). Measurement conditions are similar to those in Fig. 6.26.

The behavior of row-FPN with frame rate in the linear mode is similar to that of row-FPN with light intensity (refer to Fig. 6.16), but in reversed order. For a constant light intensity of ~20 µW/cm2, the effect of a high frame rate is analogous to the effect of low light intensities (no saturation), whereas the effect of low frame rates is analogous to the effect of high light intensities. Here, as with the behavior of row-FPN with light intensity, we have zero row-FPN at high frame rates (≡ low intensities) and at low frame rates (≡ high intensities), and a maximum row-FPN between the two extremes, as shown in Fig. 6.28 (Fig. 6.16) above. Again, no effect can be noticed when using the MDS reduction technique (the curves overlap in Fig. 6.28). Therefore, the percentage of FPN reduction using MDS was ignored because of its insignificance here.



Figure 6.29: 3D-plot of the effect of frame-rate on row-FPN in the linear-mode for different light intensities. Top graph is the FPN without using MDS. Bottom graph is for the FPN with MDS. All measurement conditions are as stated above. The frame rates range was ~11.66 f/s to 233.2 f/s and the light intensity ranges (@λ~540nm) from 0 µW/cm2 to ~ 160 µW/cm2.


Again, using a 3D plot to show the effect of frame rate on row-FPN in the linear mode at different light intensities (refer to Fig. 6.29) is useful because it reveals features that cannot be seen in 2D plots. First, at very low light intensity, the photocurrent is very small, and hence so are the gain-FPN sources (as shown). At a light intensity of ~10 µW/cm2, the gain-FPN sources start to contribute to the row-FPN, which is reflected as a rise in the FPN followed by a fall to a minimum (zero here) after saturation, as mentioned before. The more interesting observation is the clear movement of the peak, prior to saturation, to higher frame rates, as discussed before. The last three segments of the plot (at 70, 90, and 100 µW/cm2) did not go to zero, which confirms our conclusion that the imager will not saturate, even at these high intensities, if the frame rate is very high.

6.4.2.2 Logarithmic Mode

The effect of frame rate on the response of the ALO-imager (refer to Appendix C & D) in the logarithmic mode, shown in Fig. 6.30, was, as expected, insignificant because of the continuous operation of the logarithmic mode (which is therefore independent of the frame rate and/or the integration time tint).


Figure 6.30: Imager frame-rate 3D-response in the logarithmic-mode of operation at different light intensities (without DS). The measurements were held with Vbias-0 of 0.48 V, Vbias-B of -0.58 V, and light intensity range of ~ 0-160µW/cm2 (green @ λ~540nm). The frame rates range was ~11.66 f/s to 233.2 f/s.


A significant observation is the effect of frame rate on column-FPN in this mode, shown in Fig. 6.31 below. Again, each point in this graph was constructed by calculating the standard deviation from the mean value of the output voltage of all pixels in the data array (64×64), with bad pixels excluded as before. Also, automatic averaging (11 frames) was used to reduce the random temporal noise level.


Figure 6.31: Effect of frame-rate on column-FPN in the logarithmic-mode of operation. Left: the FPN without using DS for different frame rates (top) compared with the FPN using DS (bottom). Right: the calculated percentage of FPN-reduction for this case. All conditions are as stated above with a light intensity @~45µW/cm2 (green @ λ~540nm).

Figure 6.31 above shows that the column-FPN undergoes a slight increase to a maximum (@ ~50 f/s) at the beginning, then starts to fall to a minimum (@ ~117 f/s), after which it levels off. This can be attributed to the gain-FPN sources (refer to Table 6.2), which, together with the offset-FPN, can collectively cause this rise-fall-level-off behavior. The leveling-off implies that the gain-FPN sources (collectively) no longer affect the output and only the constant offset remains in effect. The more interesting observation, regardless of the sources, is that the MDS showed very high effectiveness over the entire frame-rate range under investigation. This effectiveness is reflected as a clear difference between the FPN before MDS and the FPN after MDS, shown on the left of Fig. 6.31, and is supported by the high reduction efficiency (right plot), which ranges between 90-97% over the entire range of frame rates under investigation. With this wide range of frame rates over which the MDS technique proved effective for the logarithmic mode of operation, we gain another degree of freedom in using this technique in a broad range of applications.

In contrast, this is not the case for the row-FPN (shown in Fig. 6.32 below), where there is no significant reduction in the row-FPN level. The apparent increase in the row-FPN level at high frame rates is due to residual temporal noise. This conclusion was reached because of the obvious randomness and low levels (< 0.5 mV) of these row-FPN fluctuations.


Figure 6.32: Effect of frame-rate on row-FPN in the logarithmic-mode. Shown in solid points is the FPN without using MDS for different frame rates, compared with the row-FPN using MDS (open points). Measurements were made in conditions similar to those for Fig. 6.31 above.


6.4.3 Effect of current-source load bias (Vbias-0) on FPN

6.4.3.1 Linear Mode

Figure 6.33, below, shows the 3D plot of the ALO-imager (refer to Appendix C & D) response to the current-source bias (Vbias-0) in the linear mode. Each point in this graph was made by calculating the mean value of the output voltage of all pixels in the data array (64×64), with bad pixels excluded using gray-level thresholds. Each data array is the result of automatic averaging of 11 frames in an attempt to reduce the random temporal noise.

Figure 6.33: 3D-imager response to Vbias-0 in the linear-mode for different light intensities (without MDS). The measurements were held with, Vbias-B of -0.58 V, frame-rate of ~ 23.32 f/s, and light intensity in the range: 0-160µW/cm2 (green @ λ~540nm). Vbias-0 range was ~0.25V to ~0.65V.

The 3D plot, shown in Fig. 6.33 above, shows the effect of Vbias-0 on the imager response at different light intensities in the linear mode. The transfer characteristic (photoresponse) shown here is consistent with that presented in previous sections (refer to Fig. 6.14). This plot, however, shows that the effect of Vbias-0 on the linear response is insignificant except at 10 µW/cm2, which shows a noticeable decrease in the response as Vbias-0 increases. The reason is that not all light intensities used in the measurements lie in the operational linear range of the imager (1 µW/cm2-10 µW/cm2 in this particular case). Therefore, for light intensities that are outside this operational range, the imager photoresponse is either very small or saturated; hence, it is unaffected by changes in Vbias-0.


Figure 6.34: Effect of Vbias-0 on imager column-FPN in the linear-mode. Left: the FPN without using MDS for different Vbias-0, compared with the residual PRNU using MDS. Right: the calculated percentage of FPN-reduction within the Vbias-0 range under test. All conditions are as stated above, with light intensity (green @ λ~540nm).

Since we are more interested in investigating the effect of Vbias-0 on the FPN rather than on the photoresponse, we present Figure 6.34, which shows the effect of varying Vbias-0 on the column-FPN before and after the MDS operation. The column-FPN (offset component) decreases with increasing Vbias-0 (compressed by the scale), as predicted theoretically (refer to Table 6.1). Note that Ibias-0 increases as Vbias-0 increases. The efficiency of MDS in reducing the column-FPN is quite reasonable (~45%) over the entire range under investigation (~0.25 V to ~0.65 V), as shown in Fig. 6.34 (right). Since this efficiency is almost constant with Vbias-0, we cannot use Vbias-0 as a control parameter to improve the MDS operation. Moreover, the amount of reduction (~39 mV), and hence the reduction efficiency (~45%), are determined by the light intensity (~20 µW/cm2) and the frame rate (~23.32 f/s) rather than by Vbias-0 (see Fig. 6.26 above).


Figure 6.35: Effect of Vbias-0 on row-FPN in the linear-mode. The graph shows no significant reduction in FPN using DS; both FPN curves (with DS and without DS) are almost identical. Measurement conditions are the same as those in Fig. 6.34.

In Figure 6.35, the row-FPN decreases with Vbias-0 by ~8 mV over the entire range of Vbias-0 under investigation (0.25-0.65 V). This change is comparable to the column-FPN change (compressed scale in Fig. 6.34). Therefore, we may attribute it to the effect of the column-offset source in the row-FPN, since any reading has to be transformed into a voltage through the current-source load (one per column). Again, no significant reduction in row-FPN can be seen, as shown in Fig. 6.35.

6.4.3.2 Logarithmic Mode

The 3D graph shown in Fig. 6.36 below describes the ALO-imager (refer to Appendix C & D) response to the current-source bias (Vbias-0) in the logarithmic mode. Each point in this graph was made by calculating the mean value of the output voltage of all pixels in the data array (64×64), with bad pixels excluded using gray-level thresholds. Each data array is the result of automatic averaging of 11 frames in an attempt to reduce the random temporal noise.


Figure 6.36: 3D-imager response to Vbias-0 in the logarithmic-mode for different light intensities (without DS). The measurements were held with, Vbias-B of -0.58 V, frame-rate ~ 23.32 f/s, and light intensity in the range: 0-160µW/cm2 (green @ λ~540nm). Vbias-0 range was ~0.25V to ~0.65V.

The 3D plot in Fig. 6.36 above shows the effect of Vbias-0 on the imager response at different light intensities in the logarithmic mode. The transfer characteristic (photoresponse) shown here is consistent with the logarithmic photoresponse presented earlier (Fig. 6.30). The graph (Fig. 6.36) shows slight changes with Vbias-0 that are dependent on the light intensity; this could be attributed to the logarithmic compression in this mode.

The column-FPN, shown in Fig. 6.37 below, suggests that at a light intensity of 45 µW/cm2, the contribution of the Vbias-0 (offset) FPN source to the column-FPN is negligible compared with the other sources; thus, the column-FPN appears nearly flat. Nevertheless, the modal double sampling (MDS) is very effective in reducing this FPN from ~45 mV to almost zero, which implies the offset nature of the FPN shown in Fig. 6.37. This effectiveness is confirmed by the near-100% FPN-reduction efficiency shown in Fig. 6.37 (right graph). This steady ~100% efficiency gives room to optimize other parametric effects, such as the effect of light intensity, without adding further constraints.


Figure 6.37: Effect of Vbias-0 on imager column-FPN in the logarithmic-mode. Left: the FPN without using MDS for different Vbias-0 (top) compared with the FPN using MDS (bottom). Right: the calculated percentage of FPN-reduction over the entire Vbias-0 range under test. All conditions are as before (Fig. 6.36), with light intensity of ~45µW/cm2 (green @ λ~540nm).

This is not the case for the row-FPN shown in Fig. 6.38 below, which was not removed by the use of MDS and even became worse at some Vbias-0 values in the range under investigation (~0.25 V to ~0.65 V). In any case, because of the randomness and low levels (< 0.6 mV) of the row-FPN fluctuations, the row-FPN can be ignored and regarded as residual temporal system noise rather than FPN.



Figure 6.38: Effect of Vbias-0 on imager row-FPN in the logarithmic-mode. The graph shows FPN without using MDS for different Vbias-0 compared with the FPN using MDS. All conditions are as before (Fig. 6.37).

6.5 Switching performance of the 3M-APS

The key issue in the operation of the MDS technique is the switching performance of the 3M-APS pixels, the cornerstone of this technique. In other words, we need to investigate the speed of response of the "super" pixel when it is switched between its two logarithmic modes to perform the MDS operation. The time assigned for each "row preparation operation", ∆TR, is limited by the resolution (number of pixels in the imager) and/or the frame rate (system clock) used. For the 64 × 64 pixel imager operating at a frame rate of ~23.32 f/s, this results in ~0.67 ms/row (~42.88 ms/frame). Therefore, ∆TR should not be longer than 30 µs in order to accommodate the other row operations (reading 64 pixels/row), if 10 µs is assigned for each pixel readout (0.67 ms = 30 µs + 10 µs × 64).
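The same budget can be recomputed for other resolutions and frame rates; the small sketch below carries over the 10 µs per-pixel readout time from the example above, while everything else is a free parameter.

```python
def row_budget(frame_rate_fps, n_rows=64, n_cols=64, t_readout_per_pixel=10e-6):
    """Return (row time, time left for row preparation dT_R) in seconds."""
    t_frame = 1.0 / frame_rate_fps
    t_row = t_frame / n_rows                        # total time available per row
    dT_R = t_row - n_cols * t_readout_per_pixel     # left for sample/switch/sample
    return t_row, dT_R

for fps in (23.32, 60.0, 233.2):
    t_row, dT_R = row_budget(fps)
    # A negative dT_R means that, at this frame rate, even the 10 us/pixel
    # readout no longer fits in the row time, let alone the mode switching.
    print(f"{fps:6.2f} f/s: row time = {t_row*1e3:6.3f} ms, dT_R = {dT_R*1e6:8.1f} us")
```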

This puts a limit on the minimum switching speed permissible, and hence on the speed of response of the 3M-APS, particularly for high frame-rate applications (such as motion analysis): the higher the frame rate, the shorter the allowable ∆TR, and therefore the higher the required pixel speed of response. Fortunately, there are two features of the pixel MDS operation that may help accommodate these challenges. First, although the responses of the two modes of operation (Log-APS and 2Log-APS) differ at any particular level of light intensity, the difference is not very large (~160 mV maximum); therefore, the pixels do not need to 'jump' large voltage differences to change from one mode to another, allowing higher switching speeds. Second, only one mode switch is required during ∆TR, which may further loosen the constraints on the minimum switching speed permissible.

Referring to Fig. 6.3 and Fig. 6.4 for the Log-APS and 2Log-APS modes, respectively, the corresponding circuits used for the dynamic analysis of these two modes are shown simplified on the left and right sides of Fig. 6.39 below. For both modes of operation shown, the current is I_{in} = I_{ph} + I_{dark}, C_D is the total capacitance at the sense node, and I_{C_D} is the current through this capacitor.

Figure 6.39: Simplified dynamic circuit for Log-APS (left), and 2Log-APS (right) at the onset of switching of respective mode.

For Log-APS, from Equation 6.15 (keeping in mind that we are doing the instantaneous (dynamic) analysis and not the steady-state one), we can write the current in transistor M1 as:

I_{D11} = I_{D01}\, e^{-V_{in}/v_T},


with

I_{D01} = \left(\frac{W}{L}\right)_1 I_o\, e^{(V_{DD} - V_{th,1})/n v_T}   (6.29)

Now, applying KCL at the sense node, we end up with

-I_{D11} + I_{C_D} + I_{in} = 0

or

-I_{D01}\, e^{-V_{in}/v_T} + C_D \frac{\partial V_{in}}{\partial t} + I_{in} = 0   (6.30)

This is a 1st-order non-linear differential equation for the variable V_{in}, with the steady-state (trivial) solution V_{in} = V_{in,s1} = v_T \ln(I_{D01}/I_{in}), which is equivalent to Equation 6.15. For the dynamic case, we can solve Equation 6.30, assuming an initial condition of V_{in}(0) = V_{11}, to get the sense-node voltage V_{in}(t) as:

V_{in}(t) = v_T \ln\!\left[ e^{V_{11}/v_T} + \frac{I_{D01}}{I_{in}} \left( e^{I_{in} t / C_D v_T} - 1 \right) \right] - \frac{I_{in}}{C_D}\, t   (6.31)

Equation 6.31 above is similar to that reported by Tian et al. [98].
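A short numerical check of Equation 6.31 (and, by symmetry, of Equation 6.34 below): the sketch integrates the node equation C_D dV_in/dt = I_D01 e^{-V_in/v_T} - I_in directly and compares it with the closed form above. All device values (I_D01, C_D, the photocurrent, and the initial step) are illustrative assumptions, not extracted parameters of the 3M-APS.

```python
import numpy as np

vT = 0.026       # thermal voltage, V
CD = 10e-15      # assumed sense-node capacitance, F
ID01 = 1e-6      # assumed pre-exponential current of M1, A
Iin = 1e-12      # assumed photocurrent after the step, A

V_ss = vT * np.log(ID01 / Iin)   # steady-state level V_in,s1
V11 = V_ss + 0.15                # start 150 mV above steady state (the slow case A)

def v_closed(t):
    """Closed-form solution, Eq. 6.31."""
    return (vT * np.log(np.exp(V11 / vT)
                        + (ID01 / Iin) * np.expm1(Iin * t / (CD * vT)))
            - (Iin / CD) * t)

# Brute-force Euler integration of C_D dV/dt = I_D01*exp(-V/vT) - I_in as a check.
t_end, n_steps = 2e-3, 200_000
dt = t_end / n_steps
V = V11
for _ in range(n_steps):
    V += dt * (ID01 * np.exp(-V / vT) - Iin) / CD

print(f"steady state        : {V_ss*1e3:7.2f} mV")
print(f"closed form (t_end) : {v_closed(t_end)*1e3:7.2f} mV")
print(f"Euler       (t_end) : {V*1e3:7.2f} mV")
```

Both evaluations approach the steady-state level from above roughly linearly at the rate I_in/C_D, which is the slow discharge discussed below for case A.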

For 2Log-APS, from Equation 6.22 (again performing the instantaneous (dynamic) analysis), we can write the current in transistor M1 as:

I_{D12} = I_{D02}\, e^{-V_{in}/\hat{n} v_T},

with

I_{D02} = \left(\frac{W}{L}\right)_1 I_o\, e^{(V_{DD} - V_{th,1} - V_{th,2})/n \hat{n} v_T}   (6.32)

where \hat{n} = (1 + 1/n). Now, applying KCL at the sense node, we end up with

-I_{D12} + I_{C_D} + I_{in} = 0

or

-I_{D02}\, e^{-V_{in}/\hat{n} v_T} + C_D \frac{\partial V_{in}}{\partial t} + I_{in} = 0   (6.33)

This is also a 1st-order non-linear differential equation for the variable V_{in}, with the steady-state (trivial) solution V_{in} = V_{in,s2} = \hat{n} v_T \ln(I_{D02}/I_{in}), which is equivalent to Equation 6.22. For the dynamic case, we can solve Equation 6.33, assuming an initial condition of V_{in}(0) = V_{12}, to get the sense-node voltage V_{in}(t) as:

V_{in}(t) = \hat{n} v_T \ln\!\left[ e^{V_{12}/\hat{n} v_T} + \frac{I_{D02}}{I_{in}} \left( e^{I_{in} t / C_D \hat{n} v_T} - 1 \right) \right] - \frac{I_{in}}{C_D}\, t   (6.34)

Equation 6.34 above is similar to that reported by Loose [38]. Now, Equations 6.31 and 6.34, for Log-APS and 2Log-APS respectively, are quantitatively different but qualitatively similar. Therefore, the discussion below applies equally to both of these APS circuits.

Figure 6.40: Qualitative temporal behavior of Vin in both modes of operation. (A) Response to an abrupt decrease in Vin (increase in light intensity). (B) Response to an abrupt increase in Vin (decrease in light intensity) for different step heights (after [38]).

According to Equations 6.31 and 6.34, the behavior of Vin(t) significantly depends on the

initial value of V11 and V12 respectively. There are two cases here: A: V11> Vins,1 (V12>

Vins,2) or B: V11< Vins,1 (V12< Vins,2). For the first case, the temporal behavior of Vin(t) is


shown in (A) of Fig. 6.40, whereas the temporal behavior of Vin(t) for the second case is shown in (B) of Fig. 6.40 above. Because this analysis is qualitative (and thus applicable to both modes of operation), to avoid any confusion we generalize the notation for the two cases as follows: case A: V1 > Vins, and case B: V1 < Vins, shown in Fig. 6.40 A and B, respectively.

The difference between the two cases is quite obvious, as shown above. For the first case (a decrease in Vin), the characteristic is linear due to the domination of the second term of Equation 6.31 (Equation 6.34). This linearity holds as long as Vin >> Vins. When Vin approaches Vins, the first term starts to dominate; thus the exponential behavior of the characteristic becomes apparent. The linear (2nd) term depends on Iin and CD only. This implies that the charge on CD is mainly discharged via Iin, which explains the long wait required for discharging this charge and reaching the steady-state condition.

In the second case, Vin increases from a low level at V1 toward a steady-state value at Vins. No linear behavior is observed, because of the domination of the first term in this regime, especially when Vin is still small compared with Vins. This results in a very fast rise in Vin at the beginning; the response then slows down as Vin approaches its steady-state value at Vins. The speed of response in case (B) is about 3-4 times faster than in the opposite case shown in (A). Therefore, it is desirable to have a switching regime in which the initial-state output voltage (V1) is lower than its sought final-state value Vins. This statement is valid for both modes of pixel operation.

Since, at any light intensity, the transfer-characteristic of the 2Log-APS always

has a voltage level that is lower than that of Log-APS (refer to Fig.6.5), in order to have a

faster switching-regime, it is desirable to switch from 2Log-APS mode (as initial-state) to

Log-APS (as final-state). This is exactly the opposite of the case shown in Fig.6.12,

where switching (during ∆TR) occurs in reversed order. For this reason, we have to

interchange the patterns of “Log” and “Reset” nodes shown in Fig.6.12 to get a faster

switching regime.

Now, the switching from low to high voltages corresponds to an opposite current switching (from high to low) in the 2Log-APS response. Assume that the "desired" steady state is achieved when I_{D12}(t) = \alpha I_{in}, where α (>1) is a design parameter that is determined by the tradeoff between the speed of response and the smoothness of the switching operation. In this case, the settling time is found to be:

T_s = \frac{\hat{n} v_T C_D}{I_{in}} \ln\!\left[ \frac{I_{D02} - I_{in}\, e^{V_{12}/\hat{n} v_T}}{I_{D02} \left( 1 - 1/\alpha \right)} \right]   (6.35)
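The settling time of Equation 6.35 can be evaluated numerically; in the sketch below α, n, I_D02, C_D, the photocurrent, and the initial step are all illustrative values chosen for a relatively bright scene, not extracted device parameters.

```python
import math

vT = 0.026          # thermal voltage, V
n = 1.3             # assumed subthreshold slope factor
n_hat = 1.0 + 1.0 / n
CD = 10e-15         # assumed sense-node capacitance, F
ID02 = 1e-6         # assumed pre-exponential current (2Log-APS), A
Iin = 100e-12       # assumed photocurrent (bright illumination), A
alpha = 1.05        # settle when I_D12 has fallen to alpha * I_in

V_ss2 = n_hat * vT * math.log(ID02 / Iin)   # 2Log-APS steady-state level
V12 = V_ss2 - 0.15                          # start 150 mV below it (the fast case B)

# Eq. 6.35 as reconstructed above
Ts = (n_hat * vT * CD / Iin) * math.log(
    (ID02 - Iin * math.exp(V12 / (n_hat * vT))) / (ID02 * (1.0 - 1.0 / alpha)))
print(f"settling time Ts ~ {Ts*1e6:.1f} us")
```

With these numbers Ts comes out in the tens of microseconds, comparable to the ~30 µs row-preparation budget discussed above; a dimmer scene (smaller Iin) lengthens Ts roughly in proportion.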

It is important to note that in our temporal analyses so far, we have treated the pixel in each mode as an isolated case to simplify the analysis. In other words, we did not include the effect of the switching of M2 and/or M3 (refer to Fig. 6.6) when Reset (≡ \overline{Log}) goes high or low. We assumed already-established modes in order to examine the step response of the pixel in each mode, which is the more important aspect.

[Figure 6.41 panels: switching frequencies of 125 Hz, 250 Hz, 500 Hz, 650 Hz, 750 Hz, 1 kHz, 1.25 kHz, and 2.5 kHz.]

Figure 6.41: Measured 3M-APS response (bottom curves) as the pixel is switched between the Log-APS and 2Log-APS modes. The switching frequency ranges from 125 Hz to 2.5 kHz. The top pulse train is the clock on the "Reset" node (refer to Fig. 6.6), which is used along with a simultaneous "inverted" clock (not shown) on the "Log" node to switch the pixel from one mode to the other. The light source was a pulsed red (670 nm) laser diode at a light intensity of ~2 mW/cm2.

In Fig. 6.41 above, we present measurements of the 3M-APS response as it is switched between the Log-APS and 2Log-APS modes, using the test structure for one complete column in the ALO-chip fabricated in 0.35µm CMOS technology (refer to Appendices C and D for more details). Note that the switching frequencies used here are consistent with the range of frame rates (video rate, 30 f/s) used. Since we do not need to switch back to the initial mode during the row-preparation-time (∆TR), we can switch back while the middle portion of the active row is read out, as shown in Fig. 6.12. These measurements support the theoretical predictions presented earlier in this section: a fast response when the pixel switches from the 2Log-APS to the Log-APS mode (Reset ≡ \overline{Log} goes from low to high), and a much slower response when the pixel switches from the Log-APS to the 2Log-APS mode (Reset ≡ \overline{Log} goes from high to low), as clearly shown in Fig. 6.41 above.

[Figure 6.42 axes: Vout pk-pk (mV) versus modulation frequency (Hz); light intensities of 0.8, 1.6, and 2.3 mW/cm2.]

Figure 6.42: The measured optical modulation frequency responses for the Log-APS at different light intensities.

Another way to measure the speed of response of the 3M-APS is the "optical modulation frequency response" method previously presented in Chapter 3. The optical modulation frequency response was measured using a red laser diode (670 nm) modulated by a square wave, with a maximum output of ~2 mW/cm2 and a frequency range of 200 Hz to 500 kHz. These measurements were performed on a 3M-APS test structure that consists of one complete column in the ALO-chip fabricated using 0.35µm


CMOS technology (refer to Appendix C & D for more details). The results for Log-APS

are shown in Fig.6.42 above, while those for 2Log-APS are presented in Fig. 6.43 below.

[Figure 6.43 axes: Vout pk-pk (mV) versus modulation frequency (Hz); light intensities of 0.8, 1.6, and 2.3 mW/cm2.]

Figure 6.43: The measured optical modulation frequency responses for the 2Log-APS at different light intensities.

Based on our observations, two interesting conclusions were reached. First, these

results are consistent with our simulation in Chapter 3: the Log-APS has slightly higher 3dB frequencies compared with the 2Log-APS, but not to an extent that is critical to the operation of our MDS circuit. Our argument for using the "faster" 2Log-APS to Log-APS switching regime during ∆TR therefore still holds. Secondly, the effect of light intensity on the 3dB frequency was noticeable but not large. This is attributed to the fact that the range of light intensities used is not wide (less than a decade), being limited by the range of the laser diode used.

Last but not least, we can use longer row-preparation times (∆TR). However, this comes at the expense of pixel readout time, which is controlled by the S&H circuit switches. These switches are more design-friendly since they are located per


column, where there are fewer constraints on silicon area, and so we can use larger transistor and capacitor sizes to increase the switching speed.

6.6 Summary and Discussion

Various techniques have been established to reduce fixed pattern noise (FPN) in CMOS image sensors. In the case of linear (integrating) active pixel sensors (APS), the correlated double sampling (CDS) circuit [15, 16, 105] is usually implemented. For the logarithmic (continuous) APS, however, the concept of CDS fails because of the lack of a reset state that could be subtracted from the sensor signal. Therefore, off-chip techniques with digital calibration have been established [106, 114]. On the other hand, only a limited number of methods to suppress FPN by on-chip calibration in logarithmic sensors have been reported in the literature [38, 54, 102, 107-109]. Some of these are off-chip methods, which require individual calibration for each sensor after production and therefore do not compensate for aging or temperature effects [54]. Others [38, 102, 107] implemented the FPN reduction functionality at the expense of resolution, by using a large number of in-pixel transistors and capacitors (>8 transistors/pixel), with applicability limited to a single mode of operation.

Here, a logarithmic-response image sensor with high optical dynamic range has

been presented employing an on-chip offset FPN removal technique. This technique is

based on the Modal Double Sampling (MDS) method. It makes use of “super” pixels

with five MOS transistors. Thus, high resolution is achievable while keeping the total

chip area reasonable. The pixel can operate in three modes: logarithmic (Log-APS),

enhanced logarithmic (2Log-APS), or linear (Linear-APS), simply by choosing the

appropriate control clocks. Even though the emphasis here was on the removal of FPN in

logarithmic mode, the technique was also used (as CDS) to remove FPN in the linear

mode.

Logarithmic mode

We demonstrated that the MDS technique is an effective method for fixed pattern noise (FPN) removal in the logarithmic-mode APS, especially for column (offset) FPN. Referring


to Fig. 6.22 and Fig. 6.23, we achieved a ~52 mV reduction in FPN (which corresponds to a 95% reduction efficiency) over a wide range of light intensity (0-50µW/cm2). This efficiency is comparable to that reported by [114]. However, it appears that they achieved their high efficiency at the expense of fill factor (15% compared with our 41%). Moreover, their system requires driving signals with voltages as high as 16 V and long stressing times, making that scheme unattractive.
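As a quick consistency check of these figures, assuming the reduction efficiency is defined as the removed FPN divided by the raw (uncorrected) FPN, a ~52 mV reduction at 95% efficiency implies a raw FPN of roughly 55 mV and a residual of a few millivolts. A minimal sketch of that arithmetic:

```python
# Assumed definition: efficiency = (FPN_raw - FPN_residual) / FPN_raw
reduction_mV = 52.0     # measured reduction in column FPN (from Fig. 6.22 / 6.23)
efficiency   = 0.95     # reported reduction efficiency

fpn_raw      = reduction_mV / efficiency     # ~54.7 mV before correction
fpn_residual = fpn_raw - reduction_mV        # ~2.7 mV after MDS
print(f"raw FPN ~ {fpn_raw:.1f} mV, residual FPN ~ {fpn_residual:.1f} mV")
```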

The maximum residual column-FPN of <0.5% (PRNU) over a wide range of light intensity (0-160µW/cm2), realized with ~70mW power dissipation at a video frame rate of 30 f/s, is superior to the 0.7% reported by [115], which was realized with 300mW at the same frame rate. The 3.5 mV residual FPN reported by [116, 117] is better than our <10 mV over the (0-160µW/cm2) range. However, that result was accomplished at the expense of resolution, by using >8 transistors and capacitors per pixel and a single mode of operation. Another merit of the technique presented here is the independence of the FPN reduction efficiency from both the frame rate and Vbias-0, over a wide range of frame rates (11.66 f/s-233.2 f/s) and Vbias-0 values (~0.25V to ~0.65V) (refer to Fig. 6.31 and Fig. 6.37, respectively). This ensures higher system reliability.

Linear mode

Similar results were obtained for the linear mode of pixel operation (refer to Fig. 6.17 and Fig. 6.18). The column-FPN was effectively reduced using the MDS technique. In terms of RMS, there is a ~50mV reduction in the dark-FPN, which results in ~0.59mV residual FPN. This is equivalent to ~100% reduction efficiency. Upon comparison with the CDS results in [118], our residual FPN (~0.59mV) is ~3 times less than their reported FPN (2mV), which was realized with a fill factor of only 16% compared with our 41%. Perhaps the work of Decker et al. [111] is comparable to our system in terms of fill factor (49%) and power dissipation (52 mW @ 30f/s). However, their scheme resulted in 4 mV residual FPN (>6 times our result of ~0.59mV).

For PRNU (up to 160µW/cm2), the reduction was lower, in the range of ~25mV, which corresponds to ~40% reduction efficiency. For the particular case of PRNU @ ½ Vsat, our residual PRNU of 1.42% is more than 2 times higher than the 0.63% reported in [119]. However, their power dissipation is ~200mW, which is


high compared with our 70mW. Moreover, our imager can work in dual mode, whereas theirs is linear only. Results showed independence of the FPN reduction efficiency from Vbias-0 variations (Fig. 6.34), but not from frame rate variations. In fact, there is an optimum frame rate at which the reduction is maximal, as shown in Fig. 6.26. This is not surprising, considering the nature of the linear (integrating) operation, whose response is a strong function of the integration time and hence of the frame rate, which controls this time.

Limitations

1. The modal double sampling (MDS) technique, like other double sampling techniques, cannot remove gain-FPN [19, 103]. This is clearly depicted in the row (pixel) FPN, which is mainly gain-FPN, as shown in Fig. 6.15, Fig. 6.16, Fig. 6.21, and Fig. 6.28.

2. The modal double sampling (MDS) technique is uncorrelated. Therefore, it cannot remove kTC noise [23, 49, 113] (see the sketch after this list for its typical magnitude). In fact, MDS may add more kTC noise because of the S&H switches used [19].

3. The Linear-APS mode response has poor linearity, caused by the current contribution from the idle (left) branch of the 3M-APS pixel to the total current in the linear mode.
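For context on item 2, the kTC (reset/sampling) noise sampled onto a capacitor has an RMS voltage of sqrt(kT/C). A minimal sketch of that magnitude, using an assumed hold-capacitor value of 1 pF purely for illustration (the actual S&H capacitor sizes are not restated here):

```python
import math

k_B = 1.380649e-23      # Boltzmann constant (J/K)
T   = 300.0             # temperature (K)
C   = 1e-12             # assumed S&H capacitor, 1 pF (illustrative only)

v_ktc = math.sqrt(k_B * T / C)   # RMS kTC noise voltage sampled onto C
print(f"kTC noise for C = {C*1e12:.0f} pF at {T:.0f} K: {v_ktc*1e6:.0f} uV RMS")
```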

Page 292: Double Sampling Techniques for CMOS Image … Sampling Techniques for CMOS Image Sensors by Muahel Tabet A thesis presented to the University of Waterloo in fulfillment of the thesis

255

Chapter Seven

Summary and Conclusions

7.1 Summary

In this thesis, we demonstrated three novel double sampling techniques for on-chip image processing in different domains: spatial double sampling (SDS) for visual edge detection; temporal double sampling (TDS) for "direct" motion detection in the linear mode and for contrast enhancement and 2D edge detection in the logarithmic mode; and modal double sampling (MDS) for FPN reduction in logarithmic image sensors. The core of these three functionalities is the novel 3M-APS, which can be operated as an adaptive sensor with three distinct pixel responses when provided with the proper controls.

In Chapter 3, we fulfilled the first objective of this thesis by characterizing, modeling, and simulating photocircuits that are based on CMOS-compatible photodiodes. We presented detailed analyses of CMOS active pixel sensors (APS), especially in the logarithmic mode of operation. We devised an equivalent circuit model and several behavioral models for the CMOS-compatible photodiode. Upon comparing simulations using these models with measurements, we concluded that the best fit for dc (static) analysis is the Piece-Wise Linear (PWL) model, which uses actual measurements in the simulations. However, we found that the VCCS photodiode model is best suited to our investigation because of its accuracy and applicability to AC (dynamic) and transient


analyses. The results suggest that, as far as electrical sensitivity is concerned, the inverted Log-APS is superior to the conventional one. However, conventional pixels have higher voltage levels (especially at low light intensity levels) compared to the inverted ones. Therefore they are expected to have a higher output signal-to-noise ratio in this early (and important) stage of signal processing than the inverted ones. Moreover, the percentage improvement in electrical sensitivity from adding a MOS-diode is more pronounced in the conventional structure than in the inverted one, which gives room for further response enhancements.

However, these improvements have some drawbacks for the optical sensitivity (because of the reduced fill factor) and/or the speed of response. Therefore, there is a tradeoff between the electrical sensitivity of the pixel and its speed of response. For the present applications, on-chip image processing using double sampling techniques, both figures of merit are important. Accordingly, the choice of a conventional pixel with two MOS diodes is a good compromise. In this case we sacrifice some speed and fill factor for the sake of more electrical sensitivity.

Results also show that for low light intensity the speed of response (3dB frequency) is reduced; this effect should therefore be taken into consideration when designing for low-light-intensity illumination, especially in applications where the speed of response is vital.

It is concluded that more robust on-chip image processing for CMOS imagers

with Log-APS as photocircuits can be achieved with careful designs that consider all

issues discussed here.

In Chapter 4, we described and demonstrated a straightforward yet robust analog VLSI edge detection sensor based on a novel spatial double sampling (SDS) algorithm. This system, which incorporates both the imager array and the edge detection block on the same chip, was devised to implement edge detection functionality while preserving high output image quality, and thus it can be a fundamental building block for many real-time image processing applications. The imager was implemented using CMOS image sensors with a dual-mode (3-transistor) APS that can be operated either in a logarithmic (continuous) mode with wider dynamic range or in a linear (integrating) mode with higher image quality. The prototype chip (CYC) was fabricated using


standard 0.5µm CMOS technology, with a spatial resolution of 64×64 pixels, a pixel pitch of ~30µm, and a high fill factor of ~60%.

Results showed that the SDS edge detection sensor is robust and operates in real time over a wide range of light intensities and frame rates and over considerable stimulus contrasts and speeds. For instance, in the linear mode of pixel operation, the performance was superior, with outputs that were not only reliable and high enough (~400mV >> 16.5mV FPN), but also invariant over a wide range of light intensities (824-2560 Lux), frame rates (23 f/s-115 f/s), and stimulus contrasts (over 60%). These merits were found equally valid for both edge types (WB-OFF and BW-ON). The minimum edge output signal remained significant (~100mV >> 16.5mV), even at light intensities as low as 206 Lux, at extreme frame rates (≥1150 f/s or ≤11.5 f/s), or at stimulus contrasts as low as 20%. These superior qualities make the SDS edge detection algorithm suitable as a front end to many sensitive image-processing applications, such as the image segmentation and motion detection of interest here.

Similar characteristics were obtained for the logarithmic mode of operation, but with output levels that reflect the severe logarithmic compression and the high fixed pattern noise inherent to its operation. Results showed that SDS edge detection is a feasible technique for both edge types in two operational regimes: high light intensity with low frame rates, or low light intensity with high frame rates. The outputs were invariant at a level high enough (≥80 mV) to ensure reliable SDS edge detection in the logarithmic mode over light intensities ≥922 Lux, but limited to high-contrast stimuli. Accordingly, SDS edge detection is only feasible in this range, which may limit the application of SDS in the logarithmic mode to stationary stimuli.

SDS moving-edge detection in the linear mode demonstrated a relatively high dynamic range (200mV) over a wide speed range (15.24 cm/sec-200.39 cm/sec), with a minimum SDS edge output signal (≥100 mV at speeds ≥200.39 cm/sec) that is higher than the noise level (16.5 mV for the linear mode). For moving stimuli, there is a tradeoff between edge detection efficiency and detection reliability. In this context, we found that using a threshold voltage of 100 mV and an operational speed range of <25 cm/sec ensures high detection reliability and acceptable detection efficiency (≥50%).


In general, SDS edge detection is limited to 1D operation and to bipolar edge signals; both limitations are inherent to the SDS algorithm. Moreover, SDS techniques suffer from the low precision inherent to the analog VLSI circuitry used. In addition, SDS in the logarithmic mode is particularly limited to high-contrast, stationary stimuli.

In conclusion, the proposed SDS edge detection technique operates quite effectively over a wide range of light intensities, frame rates, stimulus contrasts, and stimulus speeds, for both edge types in the linear mode of pixel operation. These results show that such an inexpensive, compact, low-power, analog VLSI SDS edge detection circuit can be used as a front end for real-time motion analysis or image segmentation in dynamic environments.

In Chapter 5, we described and demonstrated an analog VLSI implementation of a robust temporal double sampling (TDS) algorithm to perform real-time temporal sampled differentiation, providing a simple method for change/motion detection. We also reported a direct application of this algorithm in performing two elementary, yet important, image processing functions for CMOS image sensors operating in the logarithmic mode: contrast enhancement at low light intensities, where sensor sensitivity is at its lowest because of the logarithmic compression, and 2D edge detection at high light intensities.

The results are very promising and suggest using the TDS output signal as an input to many advanced on-chip image processing functionalities, such as motion detection, image segmentation, and data reduction, with an operational range that extends over a wide range of light intensities, stimulus contrasts, frame rates, and speeds. These real-time image processing utilities have the potential to be employed in applications where integrated functionality is advantageous, such as industrial inspection, target tracking, and navigation.

In the linear mode, the TDS sensor exhibits a minimum dynamic range of ~290mV for the OFF-edge over a wide range of light intensities (240-2200 Lux), with minimal TDS signals >97mV (>>16.5mV). Contrast sensitivity results suggest that reliable TDS detection is feasible down to contrasts of about 30%, with signal levels >93mV, which allows room for a wide range of applications. The results also suggest that better TDS detection performance can be achieved by choosing an "optimum" operational


frame rate; however, there is a tradeoff between the maximum achievable edge signal and the minimum intra-sampling interval.

For the logarithmic mode, the results showed that contrast enhancement, a fundamental image processing functionality, can be realized with the TDS technique. This function is particularly suited to logarithmic-mode operation at low light intensities (~133 Lux), where the electrical sensitivities are very low because of the logarithmic compression. Thus, it can be used as a front end to SDS edge detection in the logarithmic mode.

On the other hand, reliable 2D edge detection was realized using the TDS technique in the logarithmic mode, but at high intensities (>7,000 Lux). This TDS edge detection has the potential to be used as an input stage for motion detection in pointing devices, laser alignment, and position-sensing-detector applications. To our knowledge, these two image processing functionalities in the logarithmic mode are original and are reported here for the first time.

For stimuli in motion with speeds in the range ~15 cm/sec-180 cm/sec, the results indicate that a tradeoff is required between detection efficiency and detection reliability when choosing threshold voltages. Using a threshold of 70mV results in a good compromise between the two figures of merit.

Like the SDS operation, the TDS technique suffers from the low precision inherent in the analog VLSI circuitry used, and from the presence of a strong AC component in artificial (interior) lighting, which may degrade the sensitivity of TDS operations. In addition, the contrast enhancement functionality is limited to low light intensities (<200 Lux), whereas 2D edge detection is limited to high light intensities (>7000 Lux).

In conclusion, the proposed TDS technique operates robustly (in the linear mode) over a wide range of light intensities, frame rates, stimulus contrasts, and stimulus speeds for both edge types. These results show that such an inexpensive, compact, low-power, analog VLSI TDS edge detection circuit has the potential to be used as a front end for real-time motion analysis or image segmentation in dynamic environments. On the other hand, the contrast enhancement and 2D edge detection functions are reliable and can be used with the logarithmic mode to enhance its performance and hence its range of applicability.

In Chapter 6, we demonstrated a novel on-chip offset-FPN removal technique for the logarithmic image sensor. This technique, which is based on the Modal Double


Sampling (MDS) method, is straightforward yet very effective in reducing the offset-FPN in CMOS image sensors, especially in the logarithmic mode. The core of this technique is a new pixel design with five MOS transistors; thus, high resolution is achievable while keeping the chip area reasonable. This "super" pixel can be operated in three modes: logarithmic, enhanced-slope logarithmic, or linear, by choosing the proper control clocks. The FPN reduction functionality, in turn, can be operated in two modes: the linear mode, where the system acts like an ordinary CDS circuit, and the logarithmic mode, where the modal double sampling (MDS) method showed superior performance compared with other state-of-the-art attempts to remove FPN.

Experimental results on the prototype chip (ALO), fabricated using standard 0.35µm CMOS technology with a pixel resolution of 64×64 and a 23.4µm pixel pitch and operated in the logarithmic mode, showed an offset-FPN reduction efficiency as high as 95% over a wide range of light intensities (0-50µW/cm2), achieved with a relatively high fill factor (41%) and low power dissipation (70mW @ 3.3V and 30 f/s). The maximum residual column FPN was very low (<0.5% @ saturation) over a wide range of light intensities (0-160µW/cm2). Accordingly, it was concluded that the MDS technique proposed here is an effective method for offset-FPN removal in the logarithmic-mode APS. Moreover, this technique exhibited high reliability and robustness, since its performance is independent of system parameters such as frame rate and current-source load bias.

A similar result was obtained for the linear mode, but with lower performance, especially for PRNU. While the dark (offset) FPN was effectively removed using the MDS technique in the linear mode, with a reduction efficiency of ~100% and a residual FPN as low as 0.59 mV RMS, the PRNU reduction efficiency was low, at ~40%, over the range under investigation (up to 160µW/cm2). This result, however, was found to be comparable with other state-of-the-art attempts. Results also showed independence of MDS performance from some system parameters, such as current-source bias variations, but not from frame rate variations. This causes the performance to have an optimum frame rate, where the reduction efficiency is ~100%, and extreme frame rates where the efficiency degrades to ~50%, which is still good.


There are three major limitations of using the MDS technique for FPN reduction in CMOS image sensors. The first two are inherent to all double sampling methods and cannot be overcome. First, the modal double sampling (MDS) technique cannot remove row-FPN or pixel-FPN because of their gain sources (which cannot be removed by double sampling methods). Secondly, MDS cannot remove kTC noise because the samples are uncorrelated with this technique. The third limitation is the poor linearity of the linear-mode response, which can be improved by reducing the current sharing in the "super" pixel.

In conclusion, the proposed MDS technique has proved to be highly effective in

removing column-FPN in logarithmic APS, which will enhance their image quality.

Therefore, it can be considered as an important contribution that blends well with other

qualities offered by logarithmic sensors such as high optical dynamic range, small size,

high fill-factor (no reset line), and true random accessibility. This unprecedented

flexibility will widen the range of applications of these sensors, especially where

continuous high dynamic range operation with quality image capture is an advantage

(such as motion analyses applications, and in industrial and automotive applications

where extreme illumination conditions and/or flexible readout are required).

7.2 Future Work

In this section we list some potential future work that could improve the image processing functionalities while preserving high image quality, with a justifiable usage of silicon area. The list is limited to future work based on hardware improvements, which would need new chip designs.

1. A low leakage sample and hold circuit design.

2. Extension of the SDS edge detection to 2D operation.

3. Using information resulting from our elementary image processing functionalities (edge/motion) for advanced tasks, such as adaptive on-chip scanning.

4. Implementing on-chip image processing, such as edge/change/motion detection with

simultaneous high-quality video output.


5. Devising a circuit for automatic switching from one mode to another (in the 2M-APS and 3M-APS) based on image irradiance (or ambient illumination), hence providing a simple way to make the imager adaptive.

6. Reducing the effect of the presence of a strong 120 Hz (100 Hz) AC component in

artificial lighting on TDS output signals, especially in interior applications.

7. Extending the application of edge detection and other on-chip image processing

functionalities to account for low-contrast edges and high-noise (thermal) outdoor

environments.

8. Eliminating the bipolarity of the edge signals by using proper absolute-value and threshold circuits.

9. Reducing constraints on in-pixel modal switching (in MDS) by having longer row-

preparation times (∆TR).

10. Improving the linearity of the Linear-APS mode response by reducing the effect of the idle branch on the active branch of the 3M-APS in the linear mode.


Appendix A

CYC Image Sensor Chip

The CYC-chip is a dual-mode CMOS image sensor with an array size of 64x64 pixels

and pixel pitch of 30µm. It was fabricated using the Hewlett-Packard 0.5µm digital

CMOS process (single poly, triple metal). The design also includes on-chip double

sampling circuits for read-out and FPN reduction as well as decoder circuits for array

read-out scanning. The imager pixels can be operated in two modes of operation: Linear-

APS (integrating) mode (with larger output swing and better image quality) and Log-APS

(continuous) mode (with wider optical dynamic range and true random accessibility).

The imager mode of operation can be easily selected by choosing the proper control-

signals.

Figure A.1 below shows the layout of the dual-mode active pixel (2M-APS). The

photosensitive area of the photodiode consists of the L-shaped n+ diffusion in the p-

substrate. Also shown are the in-pixel transistors, which consist of reset transistor (M1),

source-follower buffer (SF) and row-select switch (M3), along with pixel contacts such as

VDD contact, ground (substrate) contact, Reset contact, Row select contact, and column

bus contact. These contacts and the in-pixel transistors constitute the non-photosensitive area of the pixel, which determines the fill factor of the pixel (defined as the percentage of photosensitive area to total pixel area). The fill factor of this imager is high, at about 60%.
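As a quick worked example of this definition, using the stated ~30µm pixel pitch and ~60% fill factor (the arithmetic below is only illustrative; the exact layout areas are not restated here):

```python
pixel_pitch_um = 30.0                      # pixel pitch (micrometres)
fill_factor = 0.60                         # photosensitive fraction of the pixel

pixel_area = pixel_pitch_um ** 2           # ~900 um^2 total pixel area
photo_area = fill_factor * pixel_area      # ~540 um^2 photosensitive (photodiode) area
print(f"pixel area ~ {pixel_area:.0f} um^2, photosensitive area ~ {photo_area:.0f} um^2")
```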


Figure A.1: Dual-mode active pixel sensor (2M-APS) layout showing the (n+, p-substrate) photodiode, all in-pixel transistors: ➊ M1 (reset transistor), ➋ SF (source-follower transistor), ➌ M3 (row-select transistor), and all pixel contacts (see Fig. 3.1).

Figure A.2 below shows a microscopic photograph of part of the actual (fabricated) CYC-imager. The squares shown in the photo are the pixels of the CMOS imager array. The white squares inside these pixels are the metal-3-shielded non-photosensitive areas of the pixels (the in-pixel transistors).

Figure A.2: Microscopic photograph of the CYC-imager, taken using the ElectroScan system.



The floor-plan diagram of the CYC-image sensor, shown in Fig. A.3 below, is

superimposed on the mask layout used in the fabrication of this imager. Also shown in

the figure are the 64x64 imager array, a vertical (row) decoder (to control row-select

switches and/or to provide the "Reset" clocks to the Reset nodes of the pixels), and a horizontal (column) decoder (to control column read-out selection). The control blocks

shown are counters that provide input addresses to the decoders. The double sampling

block, which consists of two sample-and-holds per column, is protected from light by a

metal-3 layer to eliminate any unwanted photoelectric effects on its sensitive operation.

The scanning read-out method used here is the “raster” algorithm, in which rows are

activated sequentially. For Linear-APS mode operation, outputs from the pixels (in the

activated row) are sampled and held in one sample-and-hold (S&H) circuit.

Subsequently, the pixels of this row are reset and their outputs are then sampled and held

again, but in the other S&H circuit of the DS circuit. The two sampled outputs are then

read out and differenced simultaneously using the horizontal (column) scanner. For Log-

APS operation, the reset transistor (M1) is diode-connected to VDD. Therefore, it will act

as though it is “always reset”.

Figure A.3: The layout floor plan of the CYC-chip used for fabrication, which consists of a CMOS image sensor array (64x64 2M-APS pixels) with a pixel pitch of 30µm, ➊ vertical (Row-select and/or Reset) decoder, ➋ control block (counter), ➌ shielded double sampling (DS) circuit block, and ➍ horizontal (column) decoder.



Appendix B

CYC-Chip Measurement Setups

B.1 Edge Detection Measurements for Stationary Stimuli

Figure B.1 below shows the experimental setup of the edge detection measurements. The test fixture consists of the CYC-chip (2M-APS CMOS imager with CDS circuit) mounted in a ZIF (zero-insertion-force) socket on a PCB, where it is connected to the various instruments in the setup. In particular, the chip is connected to three power supplies (VDD, Vbias-0, and Vbias-2), to input control clocks from the CompuGen pattern-generator software via a flat cable and the Gage card, and to the outputs (VSH1, VSH2), which are connected through the National Instrument PC card to the PC, where they are displayed and stored (in gray-level form) using LabView

software adapted for image capture. These outputs are also connected to a Tektronix

TDS 360 Digital real-time oscilloscope for waveform display and precise measurements

(voltage form). These measurements were logged automatically using WaveStar

software with a GPIB (IEEE-488) connection between the oscilloscope and the PC. Also

connected to the test fixture is the trigger signal from CompuGen, used to trigger the

oscilloscope and synchronize the LabView image capture system.

The light source is a power controllable bulb-lamp. The light ray is applied to the

image sensor through a standard wide-angle lens of ~ 35º and focal length of 3 mm. We

used an INS digital Lux meter because the light source is a multi-chromatic white light

source. The stimulus (e.g. vertical bars) was put on a stand in front of the image sensor,


where it was exposed to the light source. The light source, stimulus, lens, and the test-

fixture are all mounted on a Newport optical table to ensure stable light exposure. After

arranging the experimental setup, as shown in Fig. B.1, we start by setting the required

light intensity using the Halogen-lamp power supply and the Newport power meter.

Then we set the CYC-chip DC power supplies according to their typical (optimum)

values (VDD= 3.3 V; Vbias-0= 0.5V, and Vbias-2= 2.6V). Next, we specify the pattern

(control signals) for the mode under investigation (Log-APS or Linear-APS) and/or for the particular operation, using the CompuGen software, which is then fed to the chip via the flat cable. In accordance with the parametric investigation being conducted, the following parameters are specified: the frame rate using the CompuGen software, the light intensity using the Lux meter, and/or the stimulus contrast by selecting the proper stimulus. The output of the chip is sent to the PC, where it is processed, displayed, and stored (in gray-level form) using LabView software. A copy of the output is simultaneously displayed and/or logged using WaveStar software. The data are then re-processed and analyzed, according to the parametric investigation conducted, using MS EXCEL.

Figure B.1: Experimental setup for edge measurements for stationary stimuli. The National Instrument LabView software, which was adopted for image capture, is also used as an oscilloscope via the Gage card. Gage CompuGen is pattern-generator software. Each software package has a hardware card associated with it. The test fixture consists of the CYC-chip and PCB. The light source is a bulb-lamp. [Blocks shown: power supplies (VDD, Vbias-0, Vbias-2); flat cable; PC running NI LabView and Gage CompuGen software; BNC cables carrying VSH1, VSH2, and the trigger; Newport optical table; bulb-lamp light source; stimulus (e.g., vertical bars); INS DX-100 digital lux meter; test fixture with CYC-chip and lens; Tektronix TDS 360 digital real-time oscilloscope; GPIB (IEEE-488) link; PC running WaveStar software.]


B.2 Edge Detection Measurements for Moving Stimuli

The experimental setup for edge detection measurements with moving stimuli, shown in Fig. B.2 below, is identical to that in Fig. B.1, except that the stimulus is a rotating drum (available in different diameters) instead of the stationary flat stimulus. The drum is driven by a DC motor whose speed can be easily adjusted using mechanical and electrical controls, as shown in Fig. B.2. To obtain a more accurate instantaneous measurement of the drum rotation speed, an independent speed measurement system (tachometer), not shown for clarity, was assembled from the Newport power meter (which also incorporates photodiodes) and a coherent red (~670 nm) laser diode light source. Two reflectors were mounted on the drum to reflect the laser light onto the photodiode power meter.

The analog output of the power meter was fed to channel B of the Tektronix TDS 360 digital real-time oscilloscope. From the displayed square waveform (its period and pulse width), the drum rotation speed can be easily calculated, as sketched below. Again, we can use the WaveStar software, with its automatic logging and averaging capability, to get more accurate speed measurements. The whole tachometer system was arranged and placed so that it would not interfere with the measurements in any way.
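The conversion from the displayed pulse train to a surface speed is simple arithmetic. Below is a minimal sketch, assuming the two reflectors are mounted diametrically opposite each other (so the time between consecutive pulses is half a rotation period); the drum diameter and pulse interval values are purely illustrative, not measurements from this work.

```python
import math

# Illustrative (assumed) values -- not measurements from the thesis.
drum_diameter_cm = 10.0     # drum diameter (cm)
pulse_interval_s = 0.25     # time between consecutive reflector pulses on the scope (s)
n_reflectors     = 2        # two reflectors mounted on the drum, assumed equally spaced

rotation_period_s = n_reflectors * pulse_interval_s               # one full revolution
surface_speed = math.pi * drum_diameter_cm / rotation_period_s    # drum surface speed (cm/s)
print(f"drum surface speed ~ {surface_speed:.1f} cm/sec")
```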

The measurement procedure and settings are identical to those for stationary

stimuli, except for the fact that we need to set the drum (stimulus) speed for each set of

measurements. This is done manually with the aid of the tachometer described above.

Figure B.2: Experimental setup for edge measurements for moving stimuli. The National Instrument LabView software, which was adopted for image capture, is also used as an oscilloscope via the Gage card. Gage CompuGen is pattern-generator software. WaveStar is automatic measurement and logging software. Each software package has a hardware card associated with it. The test fixture consists of the CYC-chip and PCB. The light source is a bulb-lamp. [Blocks shown: power supplies (VDD, Vbias-0, Vbias-2); flat cable; PC running NI LabView and Gage CompuGen software; test fixture with CYC-chip and lens; BNC cables carrying VSH1, VSH2, and the trigger; GPIB (IEEE-488) link; PC running WaveStar software; Tektronix TDS 360 digital real-time oscilloscope; bulb-lamp light source; DC motor and motor control; drum stimulus; INS DX-100 digital lux meter.]


Appendix C

ALO Image Sensor Chip

The ALO-chip is a dual-mode CMOS image sensor fabricated using TSMC35 2P3M

(3.3V/5V) process. It is the TSMC design technology for the double-poly triple-metal

0.35µm mixed-mode polycide process (with no silicide block). The imager includes a

sensor array of 64x64 pixels, with pixel pitch of 23.4µm and on-chip double sampling

(DS) circuits for read-out and FPN reduction (both in linear-mode and logarithmic modes

of pixel operation) as well as shift-register circuits for array read-out scanning. The pixel

can be operated in three modes: Linear-APS (integrating) mode, with larger output swing,

and better image quality, Log-APS (continuous) mode, with wider optical dynamic range

and true random accessibility, and 2Log-APS (continuous) mode, with enhanced slope

transfer characteristics (reduced logarithmic compression). This imager incorporates the

three modes of operation, which can be selected by choosing the proper control signal.

With the FPN reduction functionality using the DS technique, only two modes are available at the output: Linear-APS and Log-APS, even though all three modes are effectively exercised internally.

Figure C.1 below shows the layout of the 3-mode active pixel sensor (3M-APS).

The photosensitive area of the photodiode consists of L-shaped n+ diffusion in the p-

substrate. Also shown are the in-pixel transistors (refer to Fig.6.10), which consist of

diode-connected transistor (M1), Log-node transistor (M2), Reset-node transistor (M3),


source-follower buffer (SF), and row-select switch (M5), along with pixel contacts such as the VDD contact, ground (substrate) contact, Reset contact, Row-select contact, and column bus contact. These contacts and the in-pixel transistors constitute the non-photosensitive area of the pixel, which determines the fill factor of the pixel (defined as the percentage of photosensitive area to total pixel area). The fill factor of this imager is high, at about 41%.

Note that the in-pixel transistor area is completely covered by metal-3 layer to eliminate

any unwanted photoelectrical effects that may occur if these transistors are exposed to

light.

Figure C.1: Triple-mode active pixel sensor (3M-APS) layout showing the (n+, p-substrate) photodiode, all in-pixel transistors: 1 M1 (diode-connected NMOS), 2 M2 (Log-node NMOS), 3 M3 (Reset-node NMOS), 4 SF (source-follower transistor), 5 M5 (Row-select NMOS), and all pixel contacts (refer to Fig. 6.10).

The layout of the ALO-chip used for fabrication is shown in Fig. C.2 below. This

chip (as shown) consists of CMOS image sensor (64x64 3M-APS pixels) array, with



pixel pitch of 23.4µm, a vertical shift-register for the Row-select clock, another vertical shift-register for the Reset clock, an analog differentiator block, a double sampling (DS) circuit block, a horizontal shift-register for column read-out, a single-pixel test structure with a complete signal path to the output, and a single analog differentiator test structure. The double sampling (DS) circuit is implemented as one circuit per column and consists of two sample-and-hold circuits. The scanning read-out algorithm used here is the "raster" scanning method, in which rows are activated sequentially. For linear-mode operation, outputs from the pixels (in the activated row) are sampled and held in one sample-and-hold (S&H) circuit. Subsequently, the pixels in this row are reset and their outputs are then sampled and held again, but in the other S&H circuit. The two sampled outputs are then read out and differenced simultaneously using the horizontal (column) scanner.

Figure C.2: The layout of the ALO-chip used for fabrication, which consists of a CMOS image sensor array (64x64 3M-APS pixels) with a pixel pitch of 23.4µm, 1 vertical shift-register (Row-select), 2 vertical shift-register (Reset), 3 analog differentiator block, 4 double sampling (DS) circuit block, 5 horizontal shift-register (column), 6 single-pixel test structure with a complete signal path to the output, and 7 single analog differentiator test structure.



A similar sequence is used for the logarithmic-mode operation, but with different signals

being sampled and held. The Log-APS responses from activated pixels are sampled and

held first, followed by the sampling and holding of 2Log-APS pixels’ responses. The

two sampled responses are then read out and differenced simultaneously using the

horizontal (column) scanner. In both operations, a significant reduction in FPN was observed, as presented in Chapter 6. A toy numerical illustration of this per-column differencing is given below.
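As a toy numerical illustration of why differencing the two held samples removes offset FPN, the following sketch (all numbers are illustrative assumptions, not measured values) models a single row in which every column adds a fixed offset to both samples; subtracting the held Log-APS and 2Log-APS samples cancels that shared offset:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cols = 64

# Assumed toy model: each column adds a fixed offset (column FPN) to whatever it samples.
col_offset = rng.normal(0.0, 0.05, n_cols)      # per-column offset (V), illustrative

def sample_row(signal_level):
    """Sample-and-hold of one row: true signal level plus the column-wise offset."""
    return signal_level + col_offset

v_log  = sample_row(signal_level=1.20)          # held Log-APS response (illustrative level)
v_2log = sample_row(signal_level=0.95)          # held 2Log-APS response (illustrative level)
mds_out = v_log - v_2log                        # column offset cancels in the difference

print("column FPN before differencing:", np.std(v_log).round(4), "V")
print("column FPN after  differencing:", np.std(mds_out).round(6), "V")
```

In this idealized model the residual is zero; in practice the cancellation is limited by gain mismatch and uncorrelated noise, as discussed in Chapter 6.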

Figure C.3: Microscopic photograph of part of the actual (fabricated) ALO CMOS image sensor.

Figure C.3 above shows a microscopic photograph of part of the actual fabricated ALO-imager. The sensor array is located in the top-right corner, where the tiny squares represent the pixels of the CMOS imager. Other imager building blocks, such as the shift-registers and the double sampling block, are also shown in this photograph. Parts of the thick black metal connectors from the pads to the pins of the IC package (PGA-84) are also visible.


Figure C.4 below shows the layout of the single-pixel test structure with a complete signal path to the output, used for single-pixel measurements such as photoresponse,

optical modulation frequency response and/or pixel switching performance. This test

structure contains all the single-column components (clearly shown in the figure).

Therefore, it emulates the complete signal path from any pixel in the imager to the output

pads.

Figure C.4: The layout of the single-pixel test structure with a complete signal path to the output. This test structure is incorporated in the ALO-chip as shown in Fig. C.2 above: 1 3M-APS pixel, 2 transmission-gate switches, 3 analog differentiator block, 4 VDD, 5 ground (substrate), 6 double sampling (DS) circuit, and 7 current-source load.



Appendix D

ALO-Chip Measurement Setups

D.1 Fixed Pattern Noise (FPN) Measurements

Figure D.1 below shows the experimental setup of the fixed pattern noise (FPN) measurements. The test fixture consists of the ALO-chip (3M-APS CMOS imager with DS FPN reduction) mounted in a ZIF (zero-insertion-force) socket on a PCB, where it is connected to the various instruments in the setup. In particular, the chip is connected to three power supplies (VDD, Vbias-0, and Vbias-B), to input control clocks from the CompuGen pattern generator (PC) via a flat cable and the Gage PC card, and to the outputs (VSH1, VSH2), which are connected through the National Instrument PC card to the PC, where they are displayed using LabView software adapted for image capture. Also connected to the test fixture is the trigger signal

from CompuGen used to trigger and synchronize the LabView image capture system.

The light source is the ORIEL halogen lamp powered by a high-current power supply, also shown in Fig. D.1. The light beam is applied to the image sensor through a green filter (monochromatic @ ~540nm), and the ORIEL integrating sphere is used to make the light beam uniform. We used the green filter because of the power meter (Newport 1830-C) used for light intensity measurement (which requires monochromatic light) and because green is the so-called "room" light [6.5]. The light source, integrating sphere, and the test fixture are all mounted on a Newport optical table to ensure reliable light beam


alignment. To ensure complete darkness for the dark-FPN measurement, the imager is completely covered, in addition to keeping the room dark.

After arranging the experimental setup, as shown in Fig. D.1, we start by setting

the required light intensity using the Halogen-lamp power supply and the Newport power

meter. Then we set the ALO-chip DC power supplies according to the condition required

(typically VDD= 3.3 V [constant]; Vbias-0= 0.48V, and Vbias-B= -0.58V). Next, we specify the frame rate and the specific pattern (control signals) of the mode required (Log-APS,

2Log-APS or Linear-APS) using CompuGen software, which is then fed to the chip via

the flat-cable. The output of the chip is sent to the PC where it is processed, displayed,

and stored using LabView software. The stored data (in gray level format) consists of 11

frames (for each measurement condition) that are averaged as a means to remove random

(temporal) noise. The data are then re-processed to calculate the FPN under different

parametric conditions and analyzed using MS EXCEL.
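As a sketch of how such data could be reduced, the following fragment averages the stored frames to suppress temporal noise and then computes a column-offset FPN (standard deviation of the column means) and a PRNU figure (that deviation relative to the mean signal). This is a typical way of defining these quantities and uses synthetic data; it is an assumption for illustration, not a restatement of the exact procedure used in this thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_rows, n_cols = 11, 64, 64

# Synthetic stack of frames (illustrative): mean signal + fixed column offsets + temporal noise.
mean_signal = 0.8                                        # volts
col_offsets = rng.normal(0.0, 0.01, n_cols)              # fixed per-column offsets
frames = (mean_signal + col_offsets[None, None, :]
          + rng.normal(0.0, 0.005, (n_frames, n_rows, n_cols)))   # temporal noise

avg = frames.mean(axis=0)                   # average the 11 frames -> suppress temporal noise
col_means = avg.mean(axis=0)                # mean of each column
column_fpn = col_means.std()                # column (offset) FPN, in volts
prnu_pct = 100.0 * column_fpn / avg.mean()  # PRNU expressed as a percentage of the mean

print(f"column FPN ~ {column_fpn*1e3:.2f} mV, PRNU ~ {prnu_pct:.2f} %")
```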

Figure D.1: Experimental setup of FPN & PRNU measurements. The National Instrument LabView software was adopted for image capture. Gage CompuGen is pattern-generator software. Each software package has a hardware card associated with it. The test fixture consists of the ALO-chip and PCB. [Blocks shown: Newport power meter model 1830-C; ORIEL integrating sphere; ORIEL halogen lamp and its power supply; power supplies (VDD, Vbias-0, Vbias-B); flat cable; PC running NI LabView and Gage CompuGen software; green filter; test fixture with ALO-chip; BNC cables carrying VSH1, VSH2, and the trigger; Newport optical table.]


D.2 Test Structure Measurements

The test structure under investigation is for a single pixel with a complete signal path to

the output pads. This test structure is incorporated in the ALO-chip (for more details,

refer to Fig. C.2 and Fig. C.4).

D.2.1 Photoresponse measurements

The photoresponse measurement experimental setup, shown in Fig. D.2 below, is identical to that in Fig. D.1, except that the outputs of the ALO-imager (single-pixel test structure with a complete signal path) are also connected to a Tektronix TDS 360 digital real-time oscilloscope for waveform display and precise measurements (in voltage form). These measurements were logged automatically using WaveStar software over a GPIB (IEEE-488) connection between the oscilloscope and the PC. Also connected to the test fixture is the trigger signal from CompuGen, used to trigger the oscilloscope and synchronize the LabView image capture system. The light source is the same as before; the only difference is that the light-intensity steps were made smaller to increase accuracy.

The measurement procedure and settings are similar to those for FPN

measurements, except for two conditions that need to be modified: the mode of pixel

operation and the light intensity level. All other conditions are kept constant (at their

typical values) throughout these measurements. The outputs from the chip are sent to the oscilloscope for instantaneous display and measurement. They are also sent to the PC, where they are logged, displayed, and stored using WaveStar software. It is worth noting that these data, which consist of 10 samples per reading at 1 sample/sec, are automatically averaged (by WaveStar) and stored on the PC. The total data are then processed and

plotted for each mode of pixel operation using MS EXCEL.

D.2.2 Optical modulation frequency response measurements

The optical modulation frequency response experimental setup, shown in Fig. D.3 below, is identical to that in Fig. D.2, except that the light source here is the COHERENT modulated laser diode (red @ ~670nm, maximum power of 5mW) with its DC power supply. A function generator (INTERSTATE) is used specifically for these


measurements to provide the required modulation to the laser diode, as shown in Fig.

D.3.

After setting up the experiment, as shown in Fig. D.3, we start by setting the

ALO-chip (single pixel test structure with complete signal path to the output) DC power

supplies to their typical values (VDD = 3.3 V; Vbias-0= 0.48V, and Vbias-B= -0.58V), which

are all kept constant throughout these measurements. Then we set the operational conditions of the COHERENT laser diode (6V, 120mA) using the DC power supply associated with it. This sets the DC light intensity level of the laser diode. The function generator is connected to the "control" input of the laser diode and is used to modulate an electrical sinusoidal waveform (of specified peak-to-peak amplitude and frequency) onto the optical signal, producing the corresponding peak-to-peak light intensity at that frequency. Finally, we specify the pattern of control signals for the mode under investigation (Log-APS, 2Log-APS, or Linear-APS) using the CompuGen software, which is then passed on to the chip via the flat cable. Now, for each optical modulation frequency (keeping the DC and AC intensity levels constant), the output of the chip is sent to the oscilloscope for instantaneous display and measurement, as well as to the PC, where it is logged, displayed, and stored using WaveStar software. Again, these data, which consist of 10 samples per reading at 1 sample/sec, are automatically averaged (by WaveStar) and stored on the PC. The total data are then processed and plotted for each mode of pixel operation using MS EXCEL.
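The quantity extracted from such a sweep in Chapter 6 is the 3dB frequency of the optical modulation response. Below is a minimal sketch of how it could be estimated from logged (frequency, Vout pk-pk) pairs by interpolating where the response falls to 1/sqrt(2) of its low-frequency value; the sample data are purely illustrative, not measured values from this thesis.

```python
import numpy as np

# Illustrative (assumed) data: modulation frequency (Hz) and output peak-to-peak (mV).
freq = np.array([200, 500, 1e3, 2e3, 5e3, 1e4, 2e4, 5e4, 1e5, 2e5, 5e5])
vpp  = np.array([410, 408, 405, 398, 370, 320, 245, 150,  95,  55,  25])

v_ref = vpp[:3].mean()                 # low-frequency plateau level
v_3db = v_ref / np.sqrt(2.0)           # -3 dB level

# Interpolate log10(frequency) vs. Vpp to find where the response crosses the -3 dB level.
f_3db_log = np.interp(-v_3db, -vpp, np.log10(freq))   # negate so the x-axis is increasing
print(f"estimated 3dB frequency ~ {10**f_3db_log/1e3:.1f} kHz")
```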

Figure D.2: Experimental setup for photoresponse measurements. Gage CompuGen is pattern-generator software. WaveStar is automatic measurement and logging software. Each software package has a hardware card associated with it. The test fixture consists of the ALO-chip (pixel test structure with a complete signal path to the output) and PCB. [Blocks shown: Newport power meter model 1830-C; ORIEL halogen lamp and its power supply; ORIEL integrating sphere; green filter; power supplies (VDD, Vbias-0, Vbias-B); flat cable; PC running NI LabView and Gage CompuGen software; test fixture with ALO-chip; BNC cables carrying VSH1, VSH2, and the trigger; GPIB (IEEE-488) link; PC running WaveStar software; Tektronix TDS 360 digital real-time oscilloscope; Newport optical table.]

Figure D.3: Experimental setup for optical modulation frequency response measurements. Gage CompuGen is pattern-generator software; WaveStar is automatic measurement and logging software; each software package has a hardware card associated with it. The test fixture consists of the ALO-chip (pixel test-structure with complete signal path to the output) and PCB. The block diagram is as in Fig. D.2, except that the light source is the COHERENT laser diode (red, ~670 nm, 5 mW) with its DC power supply, modulated by the INTERSTATE function generator; the Newport power meter model 1830-C, the power supplies for VDD, Vbias-0 and Vbias-B, the flat-cable, the PC running NI LabView and Gage CompuGen software, the BNC cables carrying VSH1, VSH2 and the trigger, and the Tektronix TDS 360 digital real-time oscilloscope connected over GPIB (IEEE-488) to a PC running WaveStar software are as before, all mounted on a Newport optical table.


D.2.3 Switching performance of 3M-APS

The measurement setup is identical to that in Fig. D.3 above, except for the light source, which provides a constant (DC) light intensity instead of a modulated (AC) one. The light source used in this experiment is a red (670 nm) Helium-Neon laser (Ealing Inc.), which incorporates its own DC power supply.

The settings of the DC power supplies applied to the single-pixel test structure are all kept at their typical values (VDD = 3.3 V, Vbias-0 = 0.48 V, and Vbias-B = -0.58 V) throughout these measurements. The measured light intensity of the laser source near the focal plane was ~1.9863 mW/cm². The "control signals" pattern from the CompuGen software is designed so that the pixel flips its mode of operation between Log-APS and 2Log-APS according to complementary pulse trains (Reset = NOT Log, i.e. the Reset clock is the inverse of the Log clock) applied simultaneously to its "Reset" and "Log" nodes (refer to Fig. 6.10), as shown below in Fig. D.4.

Figure D.4: Control clocks applied simultaneously to the "Reset" and "Log" nodes of the 3M-APS shown in Fig. 6.10 to provide the flip-flopping between the two logarithmic modes (Log-APS and 2Log-APS) of pixel operation.

To investigate the effect of switching speed (pulse-train frequency) on the 3M-

APS responses when switched from one mode to another, the pulse-train frequency was

swept from 125 Hz to 2.5 kHz.

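For completeness, the complementary Reset/Log pulse trains could also be generated programmatically before being loaded into the pattern generator. The sketch below is only illustrative: the sample rate, the number of periods, and the association of a particular clock level with Log-APS or 2Log-APS are assumptions, and the actual CompuGen file format and upload step are not shown.

import numpy as np

def mode_switch_pattern(pulse_freq_hz, sample_rate_hz=1e6, n_periods=4):
    """Complementary Reset/Log clocks (Reset = NOT Log) that flip the 3M-APS
    between its two logarithmic modes at pulse_freq_hz (the level-to-mode
    mapping here is illustrative only)."""
    n_samples = int(n_periods * sample_rate_hz / pulse_freq_hz)
    t = np.arange(n_samples) / sample_rate_hz
    log = (np.floor(2 * pulse_freq_hz * t) % 2).astype(int)  # 50% duty-cycle square wave
    reset = 1 - log                                          # Reset is the complement of Log
    return t, reset, log

# Sweep the switching frequency over the range used in the measurement (125 Hz to 2.5 kHz).
for f in (125, 500, 1000, 2500):
    t, reset, log = mode_switch_pattern(f)
    # ... the reset/log arrays would be handed to the pattern-generator software here ...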


Appendix E

Miscellaneous measurements (mostly single-pixel test-structure measurements)

Figure E.1: The experimental setup for single-pixel measurements of modulation frequency response, speed distortions, and frequency distortions for logarithmic active pixel sensors (APS). The block diagram comprises the COHERENT red laser diode driven by the laser modulation control signal from the INTERSTATE 20-MHz log-linear sweep generator, the laser beam in the optical channel, the Newport power meter model 1815-C, LAMBDA power supplies (~6 V and ~3 V), the test fixture (image sensor pixel's read-out circuit), a FLUKE digital multimeter, and a HAMEG 20-MHz oscilloscope with graphic printer displaying the imager pixel's output signal.

Figure E.2: Definitions of key parameters used in the experimental simulation of speed. The speed is simulated by keeping the frequency of the modulated laser beam constant (fW) while changing the amplitude (∆L), which corresponds to VLp of the laser beam. The diagram labels the laser-diode input (VLdc, VLp, TW, fW, ∆L1-∆L4) and the pixel photodiode of the logarithmic active pixel sensor under test (Vdd, Vpixel.dc, Vpixel.ac, load), together with f3dB and fL.

Figure E.3: Measured modulation frequency response of the COHERENT VLM-2 red laser diode (using the Newport power meter): attenuation (dB) versus frequency (10 Hz to 10 MHz). VLp = 3 V TTL (670 nm @ ~1.9335 mW/cm²), VLdc = 5.95 V; the 3 dB frequency is ~200 kHz.

Figure E.4: Measured modulation frequency response for the logarithmic APS: attenuation (dB) versus frequency (1 Hz to 1 MHz). VLp = 3 V SQ. (670 nm @ ~1.8895 mW/cm²), VLdc = 5.99 V, Vdd = 3.28 V; the 3 dB frequency is ~25 kHz.

Figure E.5: Output speed distortion under different simulated speeds; laser input and APS output traces. Laser input channel @ 2 V/div; pixel output channel @ 20 mV/div; time base @ 20 µs/div; input frequency constant @ 5 kHz.

Figure E.6: Combined speed distortion of the output under different simulated speeds. Laser input channel @ 2 V/div; pixel output channel @ 20 mV/div; time base @ 20 µs/div; input frequency constant @ 5 kHz.

Figure E.7: Output speed distortion under different simulated speeds. Laser input channel @ 2 V/div (bottom); pixel output channel @ 50 mV/div (top); time base @ 20 µs/div; input frequency constant @ 20 kHz.

Figure E.8: Combined speed distortion of the output under different simulated speeds. Laser input channel @ 2 V/div; pixel output channel @ 20 mV/div; time base @ 20 µs/div; input frequency constant @ 20 kHz.

Figure E.9: Output frequency distortion under different frequencies. Laser input channel @ 2 V/div (top); pixel output channel @ 50 mV/div (bottom); input amplitude (VLp) kept constant @ 6 V. Panels: 5 kHz (time base @ 50 µs/div), 10 kHz (time base @ 20 µs/div), 20 kHz (time base @ 20 µs/div), and 50 kHz (time base @ 20 µs/div).

Figure E.10: Experimental setup for low-current measurements. LO (low) is connected to the shield (~1 cm copper box), which is recommended for low-current measurements. The block diagram comprises the Keithley instrument and its cable (+Vr and Iph terminals, black and red leads, input low connected to the shield), the INTERSTATE 20-MHz log-linear sweep generator, the COHERENT red laser diode (VLdc ~ 6 V) powered by a LAMBDA power supply, and the laser beam in the optical channel.

Figure E.11: Measured modulation frequency response of CMOS-compatible photodiodes: attenuation (dB) versus frequency (100 Hz to 1 MHz), average values. Panels: photodiode D4 with VLpp = 4.12 V SAW (670 nm @ ~2.7 mW/cm²), VLdc = 6.035 V, Vdd = 3 V, offset +ve; photodiode D1 with VLpp = 5.44 V SAW (670 nm @ ~3.5 mW/cm²), VLdc = 5.93 V, Vdd = 3 V, offset +ve; photodiode D4 with VLpp = 5.37 V SAW (670 nm @ ~3.5 mW/cm²), VLdc = 6.065 V, Vdd = 3 V, offset 0; and photodiode D4 at lower intensity with VLpp = 3.063 V SAW (670 nm @ ~2 mW/cm²), VLdc = 6.035 V, Vdd = 3 V, offset +ve.

Figure E.12: Measured dark current of CMOS-compatible photodiodes: dark current (A) versus reverse voltage (0 to 4.5 V). The slope of these linear relationships corresponds to the shunt conductance (1/Rsh), which appears to be constant (Rsh ≈ 5 × 10^13 Ω). Panels: dark current scatter plot of the CMOS photodiode D4 (MAX) with linear fit y = -2E-14x + 2E-14, and dark current scatter plot of the CMOS photodiode D1 (MAX) with linear fit y = -2E-14x + 9E-14.

Figure E.13: Experimental setup for single-pixel motion/change measurements using the CMOS image sensor chip (CYC) fabricated in 0.5 µm CMOS technology (refer to Appendix A). The block diagram comprises the COHERENT red laser diode (VLdc ~ 6 V, LAMBDA power supply) with its laser beam in the optical channel, the FLUKE digital multimeter, the INTERSTATE 20-MHz log-linear sweep generator, the Newport power meter model 1815-C, a Hewlett Packard pulse generator and a sample-and-hold/clock generator, a Keithley power supply, the bias settings Vbias1 ~ 2.6 V, Vbias2 ~ 0.5 V and Vdd ~ 3.3 V, and the test fixture holding the ICCWTCYC (CYC) chip (image sensor pixel's read-out circuit).

Figure E.14: Motion/change detection circuit using the temporal double sampling (TDS) algorithm presented in Chapter 5. The pixel (photodiode PD, Vdd, ROW select, Vbias1) drives the column bus; two sample-and-hold branches (SH1/C1 and SH2/C2, with Vbias2 and COL. Vdd) capture Vsignal(t1) and Vsignal(t2) separated by ∆t, and a differential amplifier outputs their difference, which is proportional to dI/dt. The timing diagram shows the ROW, SH1, SH2 and COL. clocks.

Figure E.15: Double sampling differentiator. (Top-left) System clock, SH1, SH2. (Top-right) VR and VS not sampled; the difference equals zero. (Bottom-left) Sampled VR and VS; the difference is not zero. (Bottom-right) Zoomed sampled VR and VS and the non-zero difference.

Figure E.16: Single-pixel (APS) motion/change detection output. (Top-left) Sampled signals (VR and VS) for a square-wave (laser) input along with the detected TDS signal using the sampled differentiator. (Top-right) For a saw (laser) input, (Bottom-left) for a sine-wave (laser) input, and (Bottom-right) for a square-wave (laser) input.

Figure E.17: Scenarios and definitions of parameters that could be used to decode motion/change detection. The diagram defines light/dark edges moving left or right across the pixel (L/u, L/d, R/u, R/d), OFF-WB (d) and ON-BW (u) transitions, row (n) and column (m) positions in the array, and notes that Iph is proportional to the illuminated area.

Figure E.18: Motion/change detection for an input (laser) ramp. Right graphs are for OFF-WB (down) ramp outputs; left graphs are for ON-BW (up) ramp outputs. Top graphs show the ramp input and one of the sampled signals; the bottom graph shows both sampled signals and the motion/change (TDS) detection signals.

Figure E.19: Sample-and-hold circuit operation with different sampling frequencies. At very low sampling frequencies (bottom-right graph) or for slow ramps, the held signal leaks away due to leakage current in the switching transistors in the OFF state; therefore the signal may have decayed by the time it is sampled again.


Appendix F

Correction of FPN Measurements

In Chapter 6 (recall also Appendices C and D), the optical responses were presented in gray-level form as column means, <columns>, and row means, <rows> (recall Figs. 6.7-6.9), of the imager data array for the three modes of operation (Linear-APS, Log-APS, and 2Log-APS), in order to see how the optical response of each mode is affected by the presence of FPN. We noticed that the <columns> plots showed significant variation across the width of the sensor for all modes of operation, and that for any given light intensity the <columns> response (tops of Figs. 6.7-6.9) increases monotonically with column position (column #). These plots not only have an almost identical slope for all light intensities investigated, but also similar slopes for all modes of operation. This suggests that the behavior is due to an effect from outside the pixel; thus, we attributed it to the fading of the sampled pixel output while the row is being read out. In the raster-scanning algorithm used here, when a row is selected, all pixels in that row are sampled simultaneously into their respective sample-and-hold circuits (S&H), but they are read out sequentially (column after column). Therefore, if the S&H capacitors are small and/or their NMOS switches have high leakage currents, the reading of the first pixel will differ from the reading of the last pixel in the row. On the other hand, the <rows> response (bottoms of Figs. 6.7-6.9) is essentially invariant for all rows in the imager, which supports the argument above, because at any given light intensity, pixels of


a particular column (in different rows) of the array should ideally have the same response (excluding FPN), because they are sampled at the same time and read at the same time relative to each row's timing. This applies to any column in the array. Therefore, when we calculate the row means <rows>, we end up with the invariant behavior shown in the bottoms of Figs. 6.7-6.9.
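The fading mechanism can be illustrated with a simple droop model. The sketch below is purely illustrative: the hold capacitance, OFF-state leakage current, and per-column read time are made-up values, not figures taken from the measurements; it merely shows that a constant leakage current discharging the S&H capacitor during sequential column readout yields an approximately linear change of the read value with column position, which is the kind of trend seen in Fig. F.1.

import numpy as np

# Hypothetical S&H parameters (illustrative only, not measured values).
C_HOLD = 1e-12   # hold capacitance, F
I_LEAK = 1e-13   # OFF-state switch leakage current, A
T_COL = 1e-5     # time needed to read one column, s
V_HELD = 1.5     # voltage sampled onto the hold capacitor, V

columns = np.arange(64)
# A constant leakage current gives a linear droop: dV = I * t / C per column slot.
v_read = V_HELD - I_LEAK * (columns * T_COL) / C_HOLD

slope_per_column = np.polyfit(columns, v_read, 1)[0]
print("droop per column ~ %.2e V" % slope_per_column)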

Accordingly, any column-FPN value calculated directly from the measured data using Equation 6.26, without excluding this across-sensor-width variation, would be artificial and misleading. Therefore, a pre-correction of the column-FPN is needed before the results are plotted. The correction scheme is illustrated below in Fig. F.1.

Figure F.1: Gray level response of the imager in logarithmic mode (Log-APS). The top plot is <columns> before correction, with some bad columns excluded, whereas the bottom plot is <columns> after eliminating the effect of the S&H. The measurement was taken with Vbias-0 @ ~0.48 V, Vbias-B @ -0.58 V, and a frame rate of ~23.32 f/s. The light intensity is 0 µW/cm² (dark). [Plot data: average column gray level versus column # (0 to 70); before correction the linear trend is y = 0.3036x + 48.607 with R² = 0.9249, while after correction it is y = 3E-05x + 48.607, i.e. essentially flat. In-plot annotations: slope = tan θ = m; ∆j = m × j; <PjC> = <Pj> - ∆j, where ∆j = 0.3036 × j. What changed: (1) the effect of the S&H is eliminated; (2) a new mean (~48.607) replaces the distorted mean (~59.12); (3) a new column-to-column variation is obtained.]

The top plot in Fig. F.1 shows the gray level response of the imager before correction, where the column FPN is superimposed on the across-sensor-width variation, which has to be eliminated. Using the linear-trend-line fitting (least-squares fit) feature of MS


EXCEL, we end up with the straight-line trend given by the equation y = mx + A, with m = 0.3036 and A = 48.607 for this particular case. The R-squared value is a measure of how good the fit is (maximum = 1). To correct the measurements, we use this information to find the offset ∆j that must be subtracted from <Pj> to obtain the corrected values <Pjc>. As illustrated in Fig. F.1, ∆j = m × j. Therefore, for this particular case, <Pjc> = <Pj> - 0.3036 × j, as shown in the bottom (corrected) plot of Fig. F.1. With this correction, we eliminate the across-sensor-width variation but preserve the meaningful column-to-column variations around a meaningful mean of the pixel array, which is essential for reliable column-FPN analysis.
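The correction itself is compact enough to express in a few lines. The following is a minimal sketch, assuming the measured column means <Pj> are available as a NumPy array (the data below are synthetic, for illustration only); it fits the straight-line trend by least squares and subtracts ∆j = m × j from each column mean, exactly as described above.

import numpy as np

def correct_column_means(col_means):
    """Remove the across-sensor-width trend from the measured column means <Pj>:
    fit y = m*j + A by least squares, then subtract the trend term m*j so that
    only the meaningful column-to-column variation around A remains."""
    j = np.arange(len(col_means))
    m, A = np.polyfit(j, col_means, 1)   # straight-line (least-squares) trend
    corrected = col_means - m * j        # <Pjc> = <Pj> - m*j
    return corrected, m, A

# Synthetic example: 64 column means with a ~0.3 gray-level/column trend plus noise.
rng = np.random.default_rng(0)
col_means = 48.6 + 0.3 * np.arange(64) + rng.normal(0.0, 1.0, 64)
corrected, m, A = correct_column_means(col_means)
print("fitted slope m = %.4f, intercept A = %.2f" % (m, A))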

These across-sensor-width variations should not affect the pixel-FPN because, according to Equation 6.28, the pixel-FPN is determined by calculating the standard deviation of the variation of the outputs of the column pixels from that column's mean <column>, for all columns in the imager array. This means that the variations in the local pixel outputs are always compared with the local column mean. Therefore, as far as the FPN is concerned, it is concluded that the across-sensor-width variation has no effect on the credibility of the pixel-FPN calculation.
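This point can also be checked numerically. The sketch below is not a reproduction of Equation 6.28 (whose exact normalisation is not repeated here); it simply pools the deviations of each pixel from its own column mean, assuming the frame is a 2-D NumPy array of gray levels, and shows that an across-sensor-width trend added to a synthetic frame leaves this pooled deviation, and hence the pixel-FPN estimate, unchanged.

import numpy as np

def pooled_deviation_from_column_means(frame):
    """Standard deviation of pixel outputs about their own column means, pooled
    over all columns; any column-wise trend cancels out column by column."""
    deviations = frame - frame.mean(axis=0, keepdims=True)
    return deviations.std()

# Synthetic dark frame: random pixel offsets, with and without a column trend.
rng = np.random.default_rng(1)
base = 48.6 + rng.normal(0.0, 1.0, (64, 64))
trend = 0.3 * np.arange(64)[None, :]                      # across-sensor-width variation
print(pooled_deviation_from_column_means(base))           # ~1.0
print(pooled_deviation_from_column_means(base + trend))   # same value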

As expected for the row-FPN, there is no variation in the row direction (across the sensor height), as shown in the bottom plots of Figs. 6.7-6.9; therefore, no correction is needed in this direction. Moreover, even though the mean of the entire array (which is used in the calculation of row-FPN) is distorted by the across-sensor-width variation, this distortion is compensated by an identical distortion in the row means <rows> used in the calculation of row-FPN. Any correction applied here would simply mean that, instead of calculating the standard deviation of the variation of the row means <rows> from the distorted mean of the entire data array, we calculate the same variation around another, corrected mean, which has no effect on the calculated level of row-FPN. This calculation is analogous to calculating the peak-peak value of two identical sinusoidal waveforms that have different DC offsets.

In conclusion, the column-FPN has to be corrected, whereas the pixel-FPN and row-FPN do not require correction.


The results discussed here relate specifically to the Log-APS mode of operation; however, the argument is quite general and applies to all modes of pixel operation. Figure F.2 below shows (from top to bottom) the respective gray-level responses across the width of the sensor, before and after correction, for the Linear-, Log-, and 2Log-APS modes.

Figure F.2: Gray level response of the imager before and after correction for: (Top) Linear-APS mode, (Middle) logarithmic (Log-APS) mode, and (Bottom) logarithmic (2Log-APS) mode of pixel operation.
