Modeling of a CMOS Active Pixel Image Sensor: Towards Sensor Integration with Microfluidic Devices


  • INSTITUTO TECNOLÓGICO Y DE ESTUDIOS SUPERIORES DE MONTERREY

    CAMPUS MONTERREY

    SCHOOL OF ENGINEERING

    DIVISION OF MECHATRONICS AND INFORMATION TECHNOLOGIES

    GRADUATE PROGRAMS

    MODELING OF A CMOS ACTIVE PIXEL IMAGE SENSOR: TOWARDS SENSOR INTEGRATION WITH MICROFLUIDIC DEVICES

    THESIS

    MASTER OF SCIENCE WITH MAJOR IN ELECTRONICS ENGINEERING (ELECTRONICS SYSTEMS)

    BY

    MATÍAS VÁZQUEZ PIÑÓN

    MONTERREY, N.L. MAY, 2011

  • Modeling of a CMOS Active Pixel Image Sensor: Towards Sensor Integration with Microfluidic Devices

    by

    Matías Vázquez Piñón

    Thesis

    School of Engineering

    Division of Mechatronics and Information Technologies, Graduate Programs

    Master of Science with Major in Electronics Engineering (Electronics Systems)

    Instituto Tecnológico y de Estudios Superiores de Monterrey, Campus Monterrey

    Monterrey, N.L. May, 2011

  • Instituto Tecnológico y de Estudios Superiores de Monterrey, Campus Monterrey

    Division of Mechatronics and Information Technologies, Graduate Programs

    The committee members hereby recommend the thesis presented by Matías Vázquez Piñón to be accepted as a partial fulfillment of the requirements for the degree of Master of Science with Major in Electronics Engineering (Electronics Systems).

    Evaluation committee:

    Prof. Sergio Omar Martínez Chapa, Ph.D., Advisor

    Prof. Graciano Dieck Assad, Ph.D., Member

    Prof. Sergio Camacho León, Ph.D., Member

    Prof. Gerardo A. Castañón Ávila, Ph.D., Member

  • INSTITUTO TECNOLÓGICO Y DE ESTUDIOS SUPERIORES DE MONTERREY

    CAMPUS MONTERREY

    SCHOOL OF ENGINEERING

    DIVISION OF MECHATRONICS AND INFORMATION TECHNOLOGIES

    GRADUATE PROGRAMS

    MODELING OF A CMOS ACTIVE PIXEL IMAGE SENSOR: TOWARDS SENSOR INTEGRATION WITH MICROFLUIDIC DEVICES

    THESIS

    SUBMITTED AS PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

    MASTER OF SCIENCE WITH MAJOR IN ELECTRONICS ENGINEERING (ELECTRONICS SYSTEMS)

    BY

    MATÍAS VÁZQUEZ PIÑÓN

    MONTERREY, N.L. MAY, 2011

  • To my family... For supporting me when I decide, congratulating me when I succeed, and advising me when I make mistakes.


  • Modeling of a CMOS Active Pixel Image Sensor - Towards Sensor Integration with Microfluidic Devices

    Matías Vázquez Piñón, B.Sc., Instituto Tecnológico y de Estudios Superiores de Monterrey, 2011.

    Advisor: Sergio Omar Martínez Chapa, Ph.D., Instituto Tecnológico y de Estudios Superiores de Monterrey.

    ABSTRACT

    Recently, microfluidic devices have received considerable attention because of their many potential applications in medicine and environmental monitoring. In such systems, cells and particles suspended in fluids can be manipulated for analysis. On the other hand, solid-state imagers have been very successful in consumer electronic devices like digital still cameras and handy camcorders. Microfluidic systems are projected to develop more complex functions as they integrate electronic/optoelectronic sensors that could monitor the activity within microchannels.

    This thesis presents research work on the modeling and simulation of CMOS Active Pixel Sensors, providing a basis for their future integration with microfluidic devices. An overview of image sensors and a literature review of microfluidic systems integrating image sensors are presented. The different stages of a CMOS active pixel sensor are modeled, including the readout, buffer and selection circuits. Computer simulations are carried out demonstrating the functionality of every stage. Additionally, a 4 × 5 pixel array was modeled and simulated, incorporating the addressing and reset signals.

    Simulation results illustrate how the performance of the CMOS active pixel sensor can be adjusted to meet the specifications for scientific applications. A wide dynamic range is obtained by achieving a large full-well capacity for the photodiode and maximizing the gain of the source follower amplifier. Also, the fill factor is increased by reducing the size of the on-pixel transistors.


  • Contents

    1  Introduction   1
    2  Theoretical Background on Solid-State Image Sensors   5
       2.1  Introduction   5
       2.2  Human light perception   5
       2.3  The solid-state imaging process   6
            2.3.1  Absorption of light in semiconductors   7
            2.3.2  Charge collection   8
       2.4  Photodiodes   9
            2.4.1  Operation principle of photodiodes   9
            2.4.2  The photodiode full-well capacity model   11
       2.5  CMOS Image Sensors   12
            2.5.1  CMOS pixel structures   12
    3  State-of-the-Art on Microfluidic Systems and Image Sensors Integration   18
    4  Active Pixel Sensor Modeling and Simulation   25
       4.1  Introduction   25
       4.2  Pixel read-out circuit   26
            4.2.1  Reset transistor MRST   26
            4.2.2  Source Follower Amplifier   29
       4.3  Photodiode design   31
            4.3.1  Pn-junction capacitance   31
            4.3.2  Physical characteristics   34
       4.4  Pixel array simulation   41
            4.4.1  Simulation setup   41
            4.4.2  Results   42
    5  Conclusions and future work   46
    6  Appendix A   49
    References   52
    Vita   53

  • List of Figures

    2.1   Solid-state imaging process [1]   6
    2.2   Transmission and reflection of light in dielectric layers   7
    2.3   Silicon absorption length of light [2]   8
    2.4   The photodiode structure   9
    2.5   Evolution of pixel size, CMOS technology node used to fabricate the devices and the minimum feature size of the most advanced CMOS logic process [3]   13
    2.6   Passive CMOS pixel with a single in-pixel transistor [3]   14
    2.7   Active CMOS pixel based on in-pixel amplifier [3]   15
    2.8   Digital Pixel Sensor Architecture   16
    3.1   Integrated digital cytometer system components and architecture [4]   18
    3.2   Photograph of the linear active pixel CMOS sensor [4]   19
    3.3   Flip-chip on glass illustration of a hybrid microfluidic digital cytometer [4]   20
    3.4   PDMS cast chamber, shown in a cross-sectional view, realizing a microfluidic channel passing through the structure and over the active area of the sensor chip, which is wire-bonded to a PCB [4]   20
    3.5   The photodiode pixel linear arrays [5]   21
    3.6   Post bond image of the CMOS sensor to the microfluidic chip [5]   21
    3.7   Plot of the CMOS sensor output upon detection of a 6 μm polystyrene polysphere [5]   22
    3.8   A schematic of photodiode type CMOS active pixel sensor [6]   22
    3.9   Comparison of images of microbeads on chip surface taken by (a) a camera and (b) the contact imager. An overlapped view is also shown in (c) [6]   23
    3.10  Schematic diagram of the modified active pixel circuit [7]   24
    4.1   3T pixel configuration   26
    4.2   Linear approximation of the ΔVTH term as a function of the body-factor term √(φs + VSB)   27
    4.3   Boosted VRST and resulting VPD_RST   29
    4.4   Source follower drain current and output voltage for different feature sizes of MCOL   31
    4.5   Photodiode response in a dark environment at 33 ms integration time   32
    4.6   Charge distribution effect in the photodiode's capacitance   33
    4.7   Photodiode response in a dark environment at 3.3 ms integration time   34
    4.8   Full discharge of the 50 fF ideal capacitance at a) 30, b) 300, c) 3,000 and d) 30,000 frames per second   35
    4.9   Layout example of a square-shaped photodiode including on-pixel transistors   36
    4.10  Pn-junction photodiode response in a dark environment at 33 ms integration time   37
    4.11  Charge distribution effect on the pn-junction photodiode   38
    4.12  Pn-junction photodiode response in a dark environment at 300 frames per second   38
    4.13  Full discharge of the square-shaped NDIFF-PSUB photodiode with a CPD = 58.44 fF pn-junction capacitance at a) 30, b) 300, c) 3,000 and d) 30,000 frames per second   39
    4.14  Full discharge of the ideal photodiode with a CPD = 58.44 fF capacitance at a) 30, b) 300, c) 3,000 and d) 30,000 frames per second   40
    4.15  Timing signals for accessing and reading out pixels   41
    4.16  Active pixel sensor array including horizontal, vertical and read-out circuitry   42
    4.17  Simulation results for a single pixel of the 4 × 5 array. From top to bottom: reset pulse, select pulse, photodiode voltage, source follower output, column output   43
    4.18  Vertical control signals: reset pulses and row selection   44
    4.19  Output signals of the pixel array   45

  • List of Tables

    3.1   Output characteristics and specifications of the Linear Active Pixel Sensor   19
    3.2   Summary of sensor performance   23
    4.1   Physical parameters of MRST for simulation of a photodiode with capacitance CPD = 58.44 fF   36
    4.2   Simulation results and percentage error of the full-discharge current of the ideal and the pn-junction capacitances   39
    4.3   Simulation results and percentage error of the full-discharge current of the ideal and the pn-junction capacitances for the same capacitance values   40

  • Chapter 1

    Introduction

    In recent years, biomedical research has brought the world the possibility of developing ever smaller and more complex devices designed for a wide range of medical applications, from micro-machined accelerometers useful for measuring patients' movement in therapy rehabilitation to complete laboratories implemented on a single small piece of silicon to diagnose illnesses quickly and effectively. Today, micro-fabrication processes allow the implementation of not only electronic circuits, but also mechanical and optoelectronic systems on the same silicon die. This capability widely extends the design possibilities when thinking about new devices that help people diagnose illness, develop new drugs and build new kinds of sensors.

    Of particular interest is the development of new micro-devices capable of performing clinical analyses using very small sample volumes such as body fluids, tissues or even individual cells [7]. These devices, so-called Laboratories-on-a-Chip (LoCs), are promising instruments because of their high throughput and the possibility of performing such analyses without relying on significant laboratory infrastructure.

    A lab-on-a-chip device integrates all the instruments required to perform a clinical analysis on a single die of very small size (a few cm²). This brings important advantages such as mass fabrication through a standard semiconductor process and, with this, a reduced fabrication cost; ease of transportation due to low volume and weight; low sample volume consumption; and shorter analysis times.

    On the other hand, LoC technologies are new research areas with many possibilities for innovation. Laboratory miniaturization enables the development of sensors, instruments and microelectronic devices on the same die. LoC technology can be used in a wide variety of biological studies, for example the extraction of DNA from a blood sample, cellular separation and characterization, detection of viruses, bacteria and cancer cells, and testing of new drugs, among others.

    LoCs are possible thanks to research advances in Biological Microelectromechanical Systems (BioMEMS) where, as the name suggests, mechanical and electrical elements can be integrated in a single microsystem in order to analyze biological matter. Mechanical devices are used to manipulate the sample, and electrical devices are used to stimulate reactions and monitor results.

    Such new micro-opto-electronic devices suggest the possibility of implementing a full image-sensing system where not only the image capture circuitry, but also the chip timing, control and image processing circuitry is placed on the same silicon die. This allows a micro-camera to be customized for a particular application [8] such as LoC monitoring. These micro-devices are called Camera-on-a-Chip and are possible thanks to a relatively new image sensor technology called the Active Pixel Sensor (APS), which takes advantage of existing Complementary Metal-Oxide-Semiconductor (CMOS) manufacturing facilities. This fabrication option brings several advantages over the other main imaging technology, the Charge-Coupled Device (CCD), such as lower foundry cost, lower power consumption, lower power supply voltages, higher speed and smartness through on-chip signal processing [1]. These advantages make the camera-on-a-chip the perfect complement for LoCs in accomplishing the objective of fast and low-cost clinical analysis.

    Camera-on-a-chip technology is implemented using CMOS active pixel sensors. These image sensors have been the subject of extensive development and now share the market with CCD image sensors, which dominated the field of imaging sensors for a long time [9]. CMOS image sensor technology finds many areas of application, including robotics and machine vision, guidance and navigation, automotive, etc. [10]. Consumer electronic devices such as digital still cameras (DSC), mobile phone cameras, handy camcorders and digital single lens reflex cameras (DSLR) are other applications of this technology. Moreover, scientific applications sometimes require additional functions like real-time target tracking or three-dimensional range finding. The devices designed for such applications are called smart CMOS image sensors [9].

    In solid-state imaging, four important functions have to be realized: light detection, accumulation of photo-generated signals, switching from accumulation to readout, and scanning. The scanning function was proposed in the early 1960s by S. R. Morrison at Honeywell as the photoscanner and by J. W. Horton et al. at IBM as the scanistor [9]. After that, a solid-state image sensor with a scanning circuit using thin-film transistors (TFT) as the photodetector was proposed by P. K. Weimer et al., and M. A. Schuster and G. Strull at NASA proposed the phototransistor (PTr) as well as switching devices to realize X-Y pixel addressing. They successfully obtained images with a fabricated 50 x 50-pixel array sensor [9].

    The details of solid-state image sensors were published in the IEEE Transactions on Electron Devices in 1968 and, almost at the same time, the CCD was invented in 1969 by W. Boyle and G. E. Smith at AT&T Bell Laboratories. The first commercial MOS imagers were produced in the 1980s. CCD image sensors were preferred over MOS image sensors because they offered superior image quality. Subsequently, efforts were made to improve the signal quality of MOS imagers by incorporating an in-pixel amplification mechanism, resulting in several amplified-type imagers proposed in the late 1980s, including the charge modulated device (CMD), floating gate array (FGA), base-stored image sensor (BASIS), static induction transistor (SIT) type, amplified MOS imager (AMI), and others. Besides AMI, these architectures required some modification of standard MOS fabrication technology; ultimately they were not commercialized and their development was terminated. AMI, on the other hand, can be fabricated in standard CMOS technology without any modification, and its structure is that of the active pixel sensor (APS). However, AMI uses an I-V converter as a readout circuit while APS uses a source follower, though this difference is not critical [9].

    The development of CMOS APS technology brought several advantages over CCD technology. Besides the already mentioned possibility of fabrication in a standard CMOS process, which results in a lower foundry cost, it also offers performance improvements such as low power consumption (100 to 1000 times lower), high dynamic range, higher blooming threshold, individual pixel readout, low supply voltage operation, high speed, large array sizes, radiation hardness and smartness [1].

    High-resolution imaging applications such as professional photography, astronomical imaging, x-ray, TV broadcasting and machine vision require very large format image sensors. CCD image sensors have been fabricated in very large formats to support these applications (from 66 megapixels by Philips in 1997 to 111 megapixels by STA Inc. in 2007); however, large-format CCDs are very expensive and difficult to produce with the low defect densities needed for high-quality imaging. When increasing the full-well capacity and incorporating higher spectral response requirements, the necessary pixel size (i.e., sensor size) makes the production of such CCDs extremely expensive. Furthermore, power consumption and the need for external support electronics make CCDs less attractive for those applications. On the other hand, CMOS APS technology has recently gained popularity in these image sensor segments with the recent advancements in frame rates, noise levels and array formats. This was achieved by utilizing better image sensor architectures and design techniques, and by improvements in CMOS fabrication processes and pixel technologies [1].

    Furthermore, the integration of micro-opto-electronic devices with LoCs has been demonstrated. Opto-electronic components placed directly over the LoC replace the task performed by a microscope in conventional laboratory analysis [7], providing a low-cost, portable microsystem for clinical analysis.

    The work presented in this thesis is concerned with the realization of a CMOS pixel which can be used in the design of a complete CMOS APS camera-on-a-chip useful for contact imager applications such as micro-channel dielectrophoretic (DEP) analysis, cell-based characterization and others. The proposed pixel was designed using a standard 0.35 μm CMOS process which provides four metal layers, two poly layers, and a high-resistance poly layer. Pixel schematic circuits were implemented for parameter extraction and behavioral simulations. The design of the pixel is a modification of the methodology presented by S. U. Ay in [1].


    Thesis outline

    This document is organized as follows. Chapter 1 presents a general overview of the recent needs for the development of biomedical micro-devices. It also explains how advances in MEMS and microelectronics have brought the new concepts of Lab-on-a-Chip and Camera-on-a-Chip, and the integration of both technologies to produce a complete clinical analysis system. This explanation includes the main applications, advantages and disadvantages.

    Chapter 2 presents a theoretical framework on the design of CMOS image sensors. It begins with solid-state imaging concepts in order to describe the process, from photons impacting the active area of the pixel, through the absorption of light and charge collection. The main pixel architectures used for CMOS image sensors are described and one of them is selected for this application. Then, the design steps for such an architecture are presented, including the equations that will be used in later chapters.

    Chapter 3 reviews the state-of-the-art literature on CMOS image and optical sensors with lab-on-a-chip applications. Some of the active pixel architectures using three and four transistors are described. The chapter also describes techniques used for sensor-microchannel coupling and packages used to avoid microelectronics breakdown due to microfluid manipulation. Moreover, advantages and disadvantages are discussed in detail.

    In Chapter 4, a CMOS active pixel sensor is designed using the equations presented in Chapter 2. The design process begins in-pixel with the read-out circuit, including the reset transistor and the source follower amplifier. Then the photodiode characteristics are determined and simulated, including electrical and physical parameters. Simulation results for an ideal capacitance and for the designed pn-junction photodiode are presented and analyzed, and the optimum parameters are determined. Also, the simulation analysis for a complete active pixel is performed and its results are discussed.

    Finally, Chapter 5 presents conclusions and future work.


  • Chapter 2

    Theoretical Background on Solid-State Image Sensors

    2.1 Introduction

    In this chapter, the human perception of light and its similarity to solid-state image sensors are described, and a theoretical framework on solid-state imaging is given. The chapter describes the imaging process using a semiconductor material as the photo-sensing element, and includes the transmission and reflection phenomena due to the opaque materials used in today's fabrication processes. The chapter also illustrates the collection of photo-generated carriers produced by the incidence of photons on the sensitive region.

    The main pixel structures described include the photo-sensitive elements available in a standard CMOS technology. The photodiode structure is shown in detail because this element is the most commonly used in the design of CMOS image sensors and it was selected as the sensing element in this work.

    2.2 Human light perception

    The human eye is capable of detecting light within a wavelength range of 370 nm to 730 nm. This is due to the two types of photo-detection cells: the rods and the cones. Both cell types are located in the retina; the rods are located at the periphery and the cones are concentrated at its center. The rods are highly photosensitive but have poor color sensitivity, while the cones are highly color sensitive but have poor photosensitivity. This means that rods are mainly used under low-light conditions (scotopic vision) at the expense of poor color perception, and cones are used under well-lit conditions, where color perception is better (photopic vision). In dark environments, the human eye can detect between 126 and 214 photons per second at wavelengths of 650 nm and 450 nm, respectively, and at 555 nm it can detect 10 photons per second [1]. For color sensitivity, the cones are classified into L, M and S types, which have characteristics similar to the RGB color filters in image sensors, with center wavelengths of red at 565 nm, green at 545 nm and blue at 440 nm, respectively. Solid-state image sensors developed using silicon as the photo-sensitive element are suitable for detecting light with characteristics similar to those of the human eye.

    2.3 The solid-state imaging process

    A solid-state image sensor is a semiconductor device capable of converting an optical image formed by an imaging lens into electric signals (current or voltage). An image sensor can detect light within a wide spectral range: from x-rays to infrared wavelength regions. This is possible by tuning the detector structure and/or by employing a material that is sensitive to the wavelength region of interest [11]. The process of converting light into an electrical signal is depicted in Figure 2.1.

    Figure 2.1: Solid-state imaging process [1] (pixel level: photon, transmission/reflection, absorption/conversion, collection, buffering; imager level: conversion, processing; system level: interpretation).

    The imaging process starts at the pixel. Impinging photons pass through dielectric layers, then are absorbed in the pixel structures and converted into charges, taking advantage of the photon energy. The photo-generated charges are collected in a three-dimensionally confined area, then buffered and read out sequentially to an upper level of processing circuits.

    The image sensor converts the pixel signals into a more meaningful signal type and processes them in such a way that today's signal processors can use and transport them. Processing circuits convert and process pixel readings to form images.

    Finally, at the system level these images are again processed or interpreted for human or machine use. All these processes are done at different levels, as shown in Figure 2.1 [1].


    2.3.1 Absorption of light in semiconductors

    When light strikes a semiconductor, the photons pass through multiple layers of dielectrics before reaching the photo-conversion sites. Those dielectric layers are placed on top of the solid-state material to isolate different functional layers such as multi-layer routing metals. Some of the layers are opaque and some of them are transparent. Because each layer has different optical properties, some portion of the impinging photons is reflected and some is absorbed, leading to quantum loss (Figure 2.2).

    Figure 2.2: Transmission and reflection of light in dielectric layers (incident, reflected and absorbed light).

    Nowadays, silicon is the most widely used material in very-large-scale integrated circuits (VLSI) and is also suitable for visible-range image sensors because the band gap energy of silicon (about 1.12 eV) matches the energy of visible-wavelength photons [11]. This means that photons with an energy higher than 1.12 eV can produce electron-hole pairs in the silicon substrate; those pairs are called photo-generated carriers [9, 1].

    The amount of photo-generated carriers in a material is described by means of its absorption coefficient α(λ), which is defined as the fractional change of light power as the light travels through the material. The mathematical expression is given as follows:

        α(λ) = (1/Δz) · (ΔP/P)    (2.1)

    where λ is the wavelength of the light, ΔP/P is the fractional reduction of light power inside the material and Δz is the distance traveled by the light. The absorption length Labs in a semiconductor is defined as:

        Labs = 1/α    (2.2)

    Figure 2.3 shows the absorption depth of light in silicon at 300 K as a function of the wavelength of the incident light. Photons with wavelengths shorter than 1100 nm are eligible for silicon-based imaging. Since the human visible range is about 380-750 nm, the absorption length lies within 0.038 to 8 μm.
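
    To make the band-gap condition concrete, the short Python sketch below (an illustration added here, not part of the original thesis) evaluates the photon energy E = hc/λ for a few wavelengths, the cutoff wavelength corresponding to the 1.12 eV silicon band gap, and the absorption length of Equation 2.2 for an assumed absorption coefficient value:

        # Illustrative check of the photon-energy condition for silicon imaging
        # (only the 1.12 eV band gap comes from the text; other numbers are examples).
        h = 6.626e-34      # Planck constant, J*s
        c = 2.998e8        # speed of light, m/s
        q = 1.602e-19      # electron charge, C (joules per eV)
        E_gap_eV = 1.12    # silicon band gap

        def photon_energy_eV(wavelength_nm):
            """Photon energy E = h*c/lambda expressed in eV."""
            return h * c / (wavelength_nm * 1e-9) / q

        # Cutoff wavelength above which photons cannot create electron-hole pairs
        lambda_cutoff_nm = h * c / (E_gap_eV * q) * 1e9   # about 1100 nm

        for wl in (380, 555, 750, 1100):
            e = photon_energy_eV(wl)
            print(f"{wl} nm -> {e:.2f} eV ({'absorbed' if e > E_gap_eV else 'not absorbed'})")
        print(f"cutoff wavelength = {lambda_cutoff_nm:.0f} nm")

        # Absorption length (Eq. 2.2) for an assumed absorption coefficient
        alpha_per_cm = 1.0e4                     # hypothetical alpha(lambda), cm^-1
        L_abs_um = 1.0 / alpha_per_cm * 1e4      # convert cm to um
        print(f"L_abs = {L_abs_um:.1f} um for alpha = {alpha_per_cm:.0e} cm^-1")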

    Figure 2.3: Silicon absorption length of light [2].

    2.3.2 Charge collection

    After photo-generated carriers are released, negatively charged electrons are separated from positively charged holes by an electric field, by which electrons are collected and holes are drained. The electric field of a photodiode-based image sensor is produced at the depletion region of the pn-junction, as shown in Figure 2.4.


    Figure 2.4: The photodiode structure (an n+ region in a p-substrate under SiO2; the depletion region forms around the reverse-biased junction held at Vbias).

    The number of collected electrons is a measure of the amount of light falling on the photosensitive region of the pixel, and the way to measure it is to integrate these charges in a charge pocket and read the integrated charge at predetermined time intervals [1].

    2.4 Photodiodes

    There is a variety of photo-sensing elements that can be built on the silicon substrate, and the most commonly used is the pn-junction photodiode [1, 11, 9]. In this work, the photodiode was studied as the sensing element for the design of the CMOS image sensor.

    2.4.1 Operation principle of photodiodes

    The photodiode is a reverse-biased p-n junction diode grounded at the p-type substrate with a shallow n+ doped region. A bias voltage is applied to the n+ region to form a depletion region around the metallurgical p-n junction. This depletion region is free of mobile charge because of the electric field formed in that region. Any electron generated there drifts against the direction of the electric field towards the n+ region, while the holes move towards the p region. Electrons are collected in the charge pocket in the n+ region and the holes are driven to ground, or they recombine.

    The main problem of photodiodes for CMOS image sensors is their low sensitivity in the blue spectrum. This is because short-wavelength photons (blue photons) are absorbed near the surface of the silicon, so they cannot reach the depletion region.

    There is another type of photodiode with an improved short-wavelength response: the pinned photodiode. This type of photodiode has been used in CCDs and CIS, but its main disadvantage is that it is not available in a standard CMOS process because an extra p+ mask has to be used.

    In a pn-junction diode, the forward current IF is expressed as

        IF = Idiff · [exp(qV / (n·kB·T)) − 1]    (2.3)

    where q is the electron charge, kB is the Boltzmann constant, n is an ideality factor and Idiff is the saturation current or diffusion current, which is given by

        Idiff = q·A·[(Dn/Ln)·npo + (Dp/Lp)·pno]    (2.4)

    where Dn,p is the diffusion coefficient, Ln,p is the diffusion length, npo is the minority carrier concentration in the p-type region, pno is the minority carrier concentration in the n-type region and A is the cross-section area of the pn-junction photodiode. The output current of the pn-junction photodiode is expressed as follows:

        IL = Iph − IF = Iph − Idiff · [exp(qV / (n·kB·T)) − 1]    (2.5)

    where Iph is the photo-generated current.

    There are three modes for biasing a photodiode: solar cell mode, PD mode and avalanche mode [9]:

    Solar cell mode. In the solar cell mode, no bias is applied to the PD. Under light illumination, the PD acts as a battery that produces a voltage across the pn-junction. In the open-circuit condition, the voltage Voc can be obtained by setting IL = 0 A in Equation 2.5, and thus

        Voc = (n·kB·T/q) · ln(Iph/Idiff + 1)    (2.6)

    This shows that the open-circuit voltage does not increase linearly with the input light intensity.

    PD mode. The second mode is the PD mode. When a PD is reverse biased, that is V < 0, the exponential term in Equation 2.5 can be neglected, and thus IL becomes

        IL ≈ Iph + Idiff    (2.7)

    Equation 2.7 shows that in the absence of light (Iph = 0 A) only the diffusion current flows through the photodiode, and that as the light intensity increases, the photo-generated current also increases linearly due to the electron-hole pairs generated by the impinging photons.


    Avalanche mode. The third mode is the avalanche mode. When a PD is strongly reverse biased, the photocurrent suddenly increases. This phenomenon is called an avalanche, where impact ionization of electrons and holes occurs and the carriers are multiplied. The voltage at which an avalanche occurs is called the avalanche breakdown voltage Vbd. The avalanche mode is used in the avalanche photodiode (APD).
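
    As a quick numerical illustration of Equations 2.5 to 2.7 (a sketch with assumed example currents, not values taken from this thesis), the snippet below evaluates the open-circuit voltage in solar cell mode and the output current in PD mode:

        import math

        # Constants and assumed example parameters (illustrative only)
        kB = 1.381e-23      # Boltzmann constant, J/K
        q  = 1.602e-19      # electron charge, C
        T  = 300.0          # temperature, K
        n  = 1.0            # ideality factor
        I_diff = 1e-15      # assumed diffusion (saturation) current, A
        I_ph   = 1e-12      # assumed photo-generated current, A

        # Solar cell mode, Eq. 2.6: open-circuit voltage (logarithmic in I_ph)
        V_oc = n * kB * T / q * math.log(I_ph / I_diff + 1.0)
        print(f"V_oc = {V_oc*1e3:.1f} mV")

        # PD mode, Eq. 2.7: reverse-biased output current (linear in I_ph)
        I_L = I_ph + I_diff
        print(f"I_L  = {I_L:.3e} A")

        # Full diode law, Eq. 2.5, for an arbitrary terminal voltage V
        def photodiode_current(V, I_ph=I_ph):
            return I_ph - I_diff * (math.exp(q * V / (n * kB * T)) - 1.0)

        print(f"I_L(V = -1 V) = {photodiode_current(-1.0):.3e} A")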

    2.4.2 The photodiode full-well capacity model

    The collected charges are stored in the depletion region of the photodiode. The photodiode's capacitance is related to the area and perimeter of the diffusion layer forming the pn-junction. The junction capacitance of the reverse-biased photodiode is voltage dependent, so the depletion capacitance is non-linear. Pn-junction capacitances are a function of the voltage applied across the terminals and of process parameters. This capacitance consists of two components: the bottom-plate capacitance and the sidewall capacitance.

    The zero-bias junction capacitance per unit area associated with the bottom-plate depletion region of the photodiode is given by

        CJ0 = √[ (εsi·q/2) · (NA·ND/(NA + ND)) · (1/φ0) ]    (2.8)

    where εsi is the permittivity of silicon, q is the charge of the electron, NA and ND are the doping concentrations of the p-type and n-type materials, respectively, and φ0 is the junction built-in potential, which is given by

        φ0 = φT · ln(NA·ND/Ni²)    (2.9)

    where φT is the thermal voltage (26 mV at 300 K) and Ni is the intrinsic carrier concentration of the material, which is Ni = 1.432 × 10^10 cm⁻³ for silicon.

    With this, the junction area capacitance of the bottom-plate region of the photodiode is given by

        CJ = A·CJ0 / (1 + VPD/φ0)^mj    (2.10)

    where A is the area of the bottom-plate pn-junction, mj is a grading factor specific to each technology and VPD is the photodiode's reverse bias voltage. Similarly to Equation 2.8, the zero reverse-bias sidewall junction capacitance per unit area is given by

        C'J0SW = √[ (εsi·q/2) · (NA·ND/(NA + ND)) · (1/φ0SW) ]    (2.11)

    where φ0SW is the built-in potential of the sidewall junction. Considering the depth of the pn-junction, xJ, the sidewall junction capacitance per unit length is defined as

        CJ0SW = C'J0SW · xJ    (2.12)


    With this, the total sidewall junction capacitance at zero bias can be calculated by multiplying CJ0SW by the perimeter P of the junction, and the total sidewall junction capacitance for any reverse bias voltage on the photodiode is given by

        CJSW = P·CJ0SW / (1 + VPD/φ0SW)^mjsw    (2.13)

    Finally, the total photodiode junction capacitance is calculated as follows:

        CPD = CJ + CJSW    (2.14)
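
    The capacitance model of Equations 2.8 to 2.14 translates directly into a small calculation routine. The Python sketch below assumes illustrative doping concentrations, grading factors and photodiode geometry (none of these numbers come from the thesis); only the structure of the computation follows the equations above.

        import math

        # Physical constants
        q      = 1.602e-19          # electron charge, C
        eps_si = 11.7 * 8.854e-14   # permittivity of silicon, F/cm
        phi_T  = 0.026              # thermal voltage at 300 K, V
        Ni     = 1.432e10           # intrinsic carrier concentration, cm^-3

        # Assumed (illustrative) process and layout parameters
        NA, ND   = 1e17, 1e19       # doping concentrations, cm^-3
        mj, mjsw = 0.5, 0.33        # grading factors
        xj       = 0.3e-4           # junction depth, cm
        A  = (5e-4) ** 2            # bottom-plate area: 5 um x 5 um, in cm^2
        P  = 4 * 5e-4               # junction perimeter, cm
        V_PD = 2.0                  # reverse bias on the photodiode, V

        # Built-in potentials (Eq. 2.9); the sidewall value is taken equal here
        phi_0 = phi_T * math.log(NA * ND / Ni**2)
        phi_0sw = phi_0

        # Zero-bias bottom-plate capacitance per unit area (Eq. 2.8)
        CJ0 = math.sqrt(eps_si * q / 2 * (NA * ND / (NA + ND)) / phi_0)

        # Bottom-plate capacitance at V_PD (Eq. 2.10)
        CJ = A * CJ0 / (1 + V_PD / phi_0) ** mj

        # Sidewall capacitance per unit area (Eq. 2.11) and per unit length (Eq. 2.12)
        CJ0SW_area = math.sqrt(eps_si * q / 2 * (NA * ND / (NA + ND)) / phi_0sw)
        CJ0SW = CJ0SW_area * xj

        # Sidewall capacitance at V_PD (Eq. 2.13) and total photodiode capacitance (Eq. 2.14)
        CJSW = P * CJ0SW / (1 + V_PD / phi_0sw) ** mjsw
        CPD = CJ + CJSW
        print(f"phi_0 = {phi_0:.3f} V, CPD = {CPD*1e15:.2f} fF")

    With different area, perimeter and doping inputs, the same routine can be used to size a photodiode for a target capacitance, which is the approach followed for the 58.44 fF photodiode in Chapter 4.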

    2.5 CMOS Image Sensors

    An image sensor consists of an imaging area, vertical/horizontal access circuitry and read-out circuitry. The imaging area is formed by an array of pixels where each pixel contains a photo-sensitive element and some transistors for accessing and buffering the generated signals out of the array using the access and readout circuitry. The transistors included in the pixel structure define the type of the image sensor.

    A pixel structure that includes, besides the photo-sensing element, only one access transistor is called a passive pixel sensor (PPS) because there is no in-pixel amplification of the photo-generated signal. This was the first structure used in CMOS image sensors.

    The second generation of CMOS image sensors, called the active pixel sensor (APS), improved the image quality thanks to a buffer (source follower) included in the pixel circuit to prevent destructive readout [12]. This type of pixel includes, in its most basic structure, three transistors: one used to take the photodiode's voltage to a known value, one for accessing the pixel through the external circuitry, and one more used as an in-pixel amplifier. This pixel structure is the most widely used for image sensors today because of its superior image quality compared to passive pixel sensors. A detailed description of its characteristics is given in this work.

    There is also a novel type of pixel structure called the digital pixel sensor (DPS), which includes an in-pixel analog-to-digital converter. Besides the tasks performed by the APS, this structure also converts the photodiode's voltage into a digital signal, which is read by an external circuit.

    2.5.1 CMOS pixel structures

    Over time, a whole variety of photo-sensing elements have been studied and tested in image sensor designs. The first APS was fabricated using a photogate (PG) as the photodetector element. After that, the photodiode (PD) was used. The PG was first implemented due to its ease of signal charge handling, but its main problem was low sensitivity, because the polysilicon in the transistor gate is opaque to the visible spectrum. Today, the most used architecture in CMOS image sensors is the APS using three transistors and a photodiode in a pixel (3T-APS). In the first stage of 3T-APS development, the image quality could not compete with that of CCDs with respect to both fixed pattern noise (FPN) and random noise [9] (noise types are described in Appendix A).

    By incorporating the pinned PD structure used in CCDs, which has a low dark current and a complete depletion structure, the four-transistor APS (4T-APS) has been successfully developed. This architecture has four transistors plus a PD and a floating diffusion in the pixel. The implementation of the 4T-APS with correlated double-sampling (CDS) circuitry reduces the random noise. The main issue for 4T-APSs is the large pixel size when compared to the CCD [9].

    Figure 2.5 gives an overview of CMOS imager data published at IEDM and ISSCC over the last 15 years. The bottom curve illustrates the CMOS scaling effects over the years, as described by the International Technology Roadmap for Semiconductors. The second curve shows the technology node used to fabricate the reported CMOS image sensors, and the third curve illustrates the pixel size of the same devices [3].

    Figure 2.5: Evolution of pixel size, CMOS technology node used to fabricate the devices and theminimum feature size of the most advanced CMOS logic process [3].

    As seen in Figure 2.5, CMOS image sensors are fabricated using processes that lag behind the ITRS processes. The reason for this is that very advanced CMOS processes are not imaging friendly due to issues like large leakage currents, low light sensitivity, noise, etc. The difference between the CMOS technology used for image sensors and the ITRS technology is about 3 technology generations, but CMOS image sensor technology scales at almost the same pace as standard digital CMOS processes do; pixel dimensions also scale down with the technology node used, with a ratio of about a factor of 20 [3].


    As CMOS processes scale down over the years, the design and fabrication of smaller pixels results in weaker performance, and it is a real challenge to improve the pixel design. Nevertheless, there are new innovations and techniques that improve the light sensitivity of imagers, such as:

    Dedicated processes with a limited number of metal layers

    Thin interconnect layers and thin dielectrics

    Micro-lenses

    Light guides (waveguides) on top of pixels

    Back-side illumination

    Recently, advances in CMOS fabrication technology have successfully reduced the pixel size of CMOS image sensors [13], although it is still difficult to realize a smaller pixel size than that of CCDs. Moreover, a pixel-sharing technique has been widely used in 4T-APSs because it has been effective in reducing the pixel size to be comparable with that of CCDs [9] and even with that of conventional 35 mm film cameras [11]. The main pixel structures are described next.

    Passive Pixel Sensor

    The first generation of CMOS image sensors was based on passive pixel sensors (PPS) with analog readout. These sensors had poor signal quality due to the direct transmission of the pixel voltage on capacitive column busses [12].

    A passive pixel is formed by the combination of a photodiode and an addressing transistor that acts as a switch (Figure 2.6).

    Figure 2.6: Passive CMOS pixel with a single in-pixel transistor [3].

    In this pixel architecture, imaging starts with the light exposure of the photodiode, which is reverse biased to a high voltage. During the exposure time, impinging photons decrease the reverse voltage across the photodiode and, at the end of the exposure, the remaining voltage is transmitted to the column bus. This remaining voltage is a measure of the amount of photons falling on the photodiode during the exposure time [3].

    The main advantage of this architecture is its large fill factor; unfortunately, the pixel also suffers from a large noise level, which was improved with the next pixel architecture, the active pixel.

    Active Pixel Sensor

    In this architecture, each pixel has an amplifier, namely a source follower (Figure 2.7). Each pixel contains a photodiode, a reset transistor, the driver transistor of the source follower and the addressing transistor. The current source of the source follower is placed at the end of the column bus.

    Figure 2.7: Active CMOS pixel based on in-pixel amplifier [3].

    In APS-based image sensors, after the exposure time each pixel is addressed and the remaining voltage across the photodiode is buffered out of the pixel array by means of the source follower; then the photodiode is reset.

    This architecture solves many noise issues, but not the kTC noise component, which is introduced by resetting the photodiode.
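
    The read-out sequence just described (reset, integration, addressing through the source follower) can be captured in a very small behavioral model. The sketch below is an illustrative first-order model, not the thesis simulation setup: the photodiode is treated as a capacitance CPD discharged by a constant photocurrent during the integration time, and the source follower is modeled as a fixed gain and level shift; all numeric values are assumed.

        # First-order behavioral model of a 3T active pixel (illustrative values)
        V_RESET = 2.5      # photodiode voltage right after reset, V (assumed)
        C_PD    = 50e-15   # photodiode capacitance, F (assumed)
        I_PH    = 100e-15  # photocurrent during exposure, A (assumed)
        T_INT   = 33e-3    # integration time, s (about 30 frames per second)

        A_SF    = 0.85     # source follower gain (assumed, < 1)
        V_TH_SF = 0.7      # source follower level shift, V (assumed)

        # Integration: the photocurrent linearly discharges C_PD
        delta_v = I_PH * T_INT / C_PD
        v_pd = max(V_RESET - delta_v, 0.0)      # clip at full discharge

        # Read-out: the selected pixel drives the column through the source follower
        v_out = A_SF * (v_pd - V_TH_SF) if v_pd > V_TH_SF else 0.0

        print(f"dV on the photodiode = {delta_v:.3f} V")
        print(f"V_PD after integration = {v_pd:.3f} V, column output = {v_out:.3f} V")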

    Digital Pixel Sensor

    With reduced feature sizes, more transistors per pixel can be added, to the point where a significant part of the pixel circuit is entirely digital. Today, the trend in image sensors is moving towards digital pixel sensors (DPS) [12], a novel pixel architecture in the design of CMOS image sensors. In these devices, the conversion from the analog photo-generated voltage to digital data is implemented on-pixel, since each pixel, besides a photodiode, also contains a single-slope ADC [14].


    Figure 2.8: Digital Pixel Sensor Architecture

    In a CMOS image sensor based on this pixel architecture, the analog-to-digital conversion is performed in parallel in every pixel; therefore, the readout time is significantly shorter than in a single or per-column analog readout architecture, which permits very high frame rates (up to 10,000 frames per second) [14].

    This architecture makes many applications feasible, especially dynamic range enhancement through the possibility of combining two or more pictures taken at a very high rate. It has been demonstrated that the more samples are used for the composition, the better the dynamic range achieved, so DPS-based image sensors, with their very high readout speed, are a perfect solution for this type of application [14].

    The main constraint of this architecture is that the use of multiple sampling to get a picture with high dynamic range consumes significant power. Furthermore, extra hardware is required to implement the multi-sampling algorithm, so the processing time is extended [14].

    The performance of an image sensor is constrained by factors like pixel full-well capacity, sensor resolution, wafer/die size, quantum efficiency, sensitivity and dark current. Pixel size is limited by the reticle (die or wafer size) and the quality of the supporting optics [1]. For scientific image sensors, the two most important requirements are a large full-well capacity and low-noise readout. The combination of these two requirements leads to a higher dynamic range. A large pixel full-well capacity is only achieved through the use of novel fabrication processes and circuit design techniques. In photodiode-type CMOS APS pixels, especially in the near-UV spectrum (200-400 nm), quantum efficiency (QE) is improved using novel pixel design techniques, since it depends on the fabrication process technology and the pixel design technique [1].
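
    The dynamic range follows directly from those two quantities, since it is the ratio of the largest to the smallest resolvable signal. A minimal sketch, with assumed example numbers rather than figures from this thesis:

        import math

        full_well_e  = 20000.0   # assumed full-well capacity, electrons
        read_noise_e = 10.0      # assumed read-out noise, electrons (rms)

        # Dynamic range as the ratio of full-well capacity to noise floor, in dB
        dynamic_range_db = 20.0 * math.log10(full_well_e / read_noise_e)
        print(f"dynamic range = {dynamic_range_db:.1f} dB")   # 66 dB for these values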

    The die size of a CMOS integrated circuit is limited by the exposure field size of the photolithographic stepper used during manufacturing, which is typically 20 mm by 20 mm [1]. However, with a new technology in CMOS Image Sensor (CIS) manufacturing, so-called stitching technology, it is now possible to fabricate die sizes of up to a single die per 200 mm wafer using a 0.18 μm CMOS process. The photolithographic stepper in the stitching process exposes the entire image sensor structure, one piece at a time, by precisely aligning each reticle step. Stitching technology allows 5.5 μm pixel sections to be seamed into a large pixel array, resulting in ultra-high-resolution, high-quality color image sensors [15].


  • Chapter 3

    State-of-the-Art on Microfluidic Systems and Image Sensors Integration

    This chapter presents a review of the state of the art in image sensors designed to monitor microfluidic channels. Since this work presents a design based on a standard CMOS process, the articles reviewed in this chapter include only image sensors designed using this type of process. As will be seen throughout this review, there is a variety of designs, each specific to the characterization type and the particular task of the sensor.

    Reference [4] reports a digital 16-element mixed-signal, near-field CMOS active pixel optical sensor using 0.18 μm CMOS technology. This optical sensor is coupled directly to a microfluidic channel employing either flip-chip or molded polymer packaging technologies. Such a system is used to identify and quantify the biophysical or biochemical properties of the cell population transported in the microchannel. The schematic diagram of the flip-chip system is illustrated in Figure 3.1.

    Figure 3.1: Integrated digital cytometer system components and architecture [4].

    As seen in Figure 3.1, the microchannel is mounted over the optical sensor and the generated signal is processed by the on-chip digital interface. Output signals are sent to a microcontroller for interpretation and finally displayed on a pocket PC. The Texas Instruments MSP430F449 mixed-signal microcontroller was used to control and monitor the output of the sensor and to interface with the Viewsonic VC37 pocket PC that acts as the host controller. The microcontroller is an ultra-low-power, battery-operated, 16-bit RISC architecture which allows portability of the entire system. The CMOS optical sensor designed for near-field microfluidic integration is shown in Figure 3.2.


  • Figure 3.2: Photograph of the linear active pixel CMOS sensor [4].

    The optical sensor was designed to be directly coupled to the microfluidic channel, fabricated in glass or polymer, as a modular add-on in order to enable the collection of particle and fluid-flow information. The device has seven electrical pads (left-hand side of the figure) and seven mechanical pads (right-hand side of the figure) for flip-chip bonding stability. The output electrical properties, physical dimensions and technology specifications of the sensor chip are provided in Table 3.1 [4].

    Table 3.1: Output characteristics and specifications of the Linear Active Pixel Sensor

        Technology           0.18 μm
        Dimensions           1.0 × 2.4 mm
        Supply Voltage       1.8 V
        Power Consumption    15 mW
        Pads                 5 dig / 2 pwr / 7 mech
        Number of pixels     16
        Pixel size           7 μm × 7 μm
        Fill Factor          75%
        Dynamic Range        30 dB

    Figure 3.3 shows a block diagram of the mixed-signal CMOS sensor architecture, which comprises the linear active pixel sensor array (APS), correlated double-sampling (CDS), and an adaptive spatial filter (SF) with a digital control block for monitoring and configuration.


  • Figure 3.3: Flip-chip on glass illustration of a hybrid microfluidic digital cytometer. [4].

    The optical sensor chip was wire-bonded to a PCB and subsequently encapsulated in polydimethylsiloxane (PDMS) beneath a 120 μm diameter cylindrical microchannel passing over the sensor's active area. The cross-section diagram of this structure is depicted in Figure 3.4 [4].

    Figure 3.4: PDMS cast chamber, shown in a cross-sectional view, realizing a microfluidic channel passing through the structure and over the active area of the sensor chip, which is wire-bonded to a PCB [4].

    While the assembly process of the system proved to be mechanically simple and cheap, the reliability of the device presents some issues, specifically in the region where the microchannel passes over the chip surface. In this region, the technique used for forming the microchannel resulted in some tearing of the PDMS, which causes mechanical instability. Furthermore, after extensive handling, prototypes eventually succumbed to wire-bond separation, rendering the devices electrically non-functional.

    In contrast to reference [4], where a single-row APS array was used, in [5] a double linear array was used, which makes possible not only detection, but also the determination of particle velocity and size, which means that the sensor can be used to characterize cells as well as count them. The active area of the new optical sensor consists of two linear arrays of 16 elements, with each pixel measuring 7 μm × 7 μm (see Figure 3.5).


  • Figure 3.5: The photodiode pixel linear arrays. [5].

    In the same way as for the die fabricated in [4], the pads located on the left-hand side of the chip are the electrical interface pads, and the pads on the right-hand side are electrically inactive and provide flip-chip bonding stability only. The chip bonding on the glass substrate is designed to ensure that the active area of the sensor properly aligns with the microchannel of the microfluidic substrate after bonding. Figure 3.6 shows the CMOS sensor coupled to the microfluidic chip [5].

    Figure 3.6: Post bond image of the CMOS sensor to the microfluidic chip. [5].

    As shown in Figure 3.6, as an individual particle is transported over the active area of a pixel, the light intensity received by the photodiode changes. This change manifests as an input current change in the pixel, which gives rise to a rapid change in the output voltage of the sensor. In the dual photodiode-photodiode pixel array configuration, such perturbations in voltage as a particle passes over the active areas of the sensors give rise to a characteristic double-pulse signature. Thus, by monitoring the output of the sensor and suitably tracking the presence of double pulses, the detection of particles is enabled. Figure 3.7 shows the output of the sensor during the transit of a 6 μm polystyrene polysphere. As the particle passes over the first APS array, the first negative-going pulse is generated, and as it passes over the second APS array, the second negative-going pulse is generated.


    Figure 3.7: Plot of the CMOS sensor output upon detection of a 6 μm polystyrene polysphere. [5].

    The negative-pulse width and the time interval between pulses are the features of the detected signal, and they are sensitive to the particle size and fluid flow rate. The average negative-pulse width increases as the particle size increases, assuming the fluid flow rate is invariant, and the time period between two consecutive pulses is inversely proportional to the particle velocity and fluid flow rate, independent of particle size.
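
    The relationship between the double-pulse signature and the particle parameters can be written down directly. The following sketch is a hedged illustration (the array spacing, pulse timing and pixel-aperture correction are assumed values, not data from [5]): velocity follows from the pitch between the two linear arrays divided by the pulse-to-pulse delay, and an effective particle extent follows from the pulse width times that velocity.

        # Estimating particle velocity and size from the double-pulse signature
        # (all numeric values below are assumed for illustration only).
        array_pitch_um = 50.0       # center-to-center distance between the two APS arrays, um
        t_between_pulses_ms = 5.0   # delay between the two negative-going pulses, ms
        pulse_width_ms = 0.8        # width of one negative-going pulse, ms
        pixel_aperture_um = 7.0     # pixel size, used as a crude width correction

        velocity_um_per_ms = array_pitch_um / t_between_pulses_ms
        # The pulse width scales with particle size plus the pixel aperture; the
        # aperture is subtracted here as a rough correction.
        particle_extent_um = pulse_width_ms * velocity_um_per_ms - pixel_aperture_um

        print(f"particle velocity ~ {velocity_um_per_ms:.1f} um/ms")
        print(f"estimated particle extent ~ {particle_extent_um:.1f} um")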

    Ji et al. developed an optical image sensor called a contact imager in order to manipulate individual cells using on-chip micro-actuators [6]. This sensor was designed using a 0.5 μm CMOS technology with a pixel pitch of 8.4 μm and is capable of providing a 2D image of the monitored cells. The designed pixel was intended to be of minimum size, so in order to avoid N-well spacing requirements, an N+/Psub photodiode was used. Also, to reduce the number of contacts, there is only one Vdd contact per pixel, which is shared by the source follower input transistor of one pixel and the reset transistor of another; with this, the fill factor is 17%. The contact imager consists of a 96 × 96 APS array, row and column scanners, column-wise read-out circuits, and buffers and switches for input control and clock signals. Figure 3.8 shows the schematic diagram of the CMOS APS.

    Figure 3.8: A schematic of photodiode type CMOS active pixel sensor. [6].


  • The major characteristics of the chip are summarized in Table 3.2.

    Table 3.2: Summary of sensor performance

        Process          AMI05 (SCMOS design rule, λ = 0.35 μm)
        Power supply     5 V
        Maximum signal   1.2 V
        Conversion gain  22 μV/e⁻
        Pixel noise      2.5 mV over 2 ms
        Dynamic range    53.6 dB
        Dark signal      0.46 V/sec

    The chip was tested as a contact imager using microbeads placed directly on the chip surface (Figure 3.9); then, after packaging with bio-compatible material to protect the bonds and wires, the chip was tested with cells. Figure 3.9 shows the image acquired by the contact imager using dry polymer microspheres of 16 μm diameter placed directly on the chip surface.

    Figure 3.9: Comparison of images of microbeads on chip surface taken by (a) a camera and (b) thecontact imager. An overlapped view is also shown in (c). [6].

    A more recent version of the contact imager was published in [7]. This work shows a 256 × 256 four-transistor pixel array. The new active pixel has a 5 μm × 5 μm area with a fill factor of 31%. One of the improvements is that the pixel is able to operate in different modes, selected by control signals, for either reset noise suppression or dark current reduction. The pixel's electronic circuit is shown in Figure 3.10.


  • Figure 3.10: Schematic diagram of the modified active pixel circuit. [7].

    Recent efforts have resulted in the development of lab-on-a-chip systems in which it is possible to perform a wide variety of tests using microfluidics. It has also been demonstrated that it is possible to design and fabricate a CMOS optical sensor that can be coupled to either a microfluidic channel or a plane surface where particles are suspended, so that the particles can be detected, identified and monitored, thereby avoiding the need for expensive and bulky microscopes. Furthermore, with the integration of smart on-chip functions it is now possible not only to detect, but also to identify, monitor and even characterize the suspended particles. These goals are pursued because of the need for highly automated testing platforms that enable robust, low-cost analysis and eliminate conventional laboratory equipment, which is only roughly automated.


  • Chapter 4

    Active Pixel Sensor Modeling and Simulation

    In this chapter, the design methodology for an Active Pixel Sensor is presented. It begins with the read-out circuit, which consists of the reset transistor, the buffer transistor that forms part of the source follower amplifier, and the selection transistor. The characteristics of the active load located at the bottom of the pixel array are also determined; together with the in-pixel buffer transistor, it forms the source follower amplifier that brings the output signal out of the array.

    The electrical and physical characteristics of the pn-junction photodiode are determined and compared to those of an ideal photodiode simulated using a capacitor and a current source. The source terminal of the reset transistor is designed as the pn-junction photodiode by sizing its area and perimeter in order to obtain the required capacitance.

    4.1 Introduction

    In microfluidics research, a 640 × 480 image sensor pixel resolution is well suited for particle tracking and parameter determination. On the other hand, at a fixed resolution, the random noise introduced in the image is inversely proportional to the image sensor size. That is, less random noise will be introduced in a 2/3-inch sensor (whose actual size is 8.8 mm × 6.6 mm) than in a 1/8-inch sensor (actual size: 1.6 mm × 1.2 mm) if both have the same number of pixels. This is because the photo-sensitive area of a pixel is larger on a larger image sensor, so more electron-hole pairs can be generated, since more photons have the possibility of impinging on such an area. For a larger count of photo-generated carriers, a greater voltage is generated, so the introduced noise is less significant relative to that signal. If the generated voltage is comparable to the random noise signal, the amplified signal will have a significant amount of noise.
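
    To make the sensor-size argument concrete, the short calculation below (a sketch using only the format dimensions and resolution quoted above) compares the pixel pitch and photosensitive area available to a 640 × 480 array on the two formats:

        formats = {
            "2/3-inch": (8.8, 6.6),   # active area in mm (width, height)
            "1/8-inch": (1.6, 1.2),
        }
        cols, rows = 640, 480

        for name, (w_mm, h_mm) in formats.items():
            pitch_x_um = w_mm * 1000.0 / cols
            pitch_y_um = h_mm * 1000.0 / rows
            area_um2 = pitch_x_um * pitch_y_um
            print(f"{name}: pixel pitch {pitch_x_um:.2f} x {pitch_y_um:.2f} um, "
                  f"area {area_um2:.1f} um^2")
        # 2/3-inch: 13.75 x 13.75 um (~189 um^2); 1/8-inch: 2.50 x 2.50 um (~6 um^2),
        # so each pixel of the larger sensor collects roughly 30x more photons.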

Another important parameter for the noise immunity of an image sensor is the full-well capacity, which is the amount of charge that an imaging pixel can collect and transfer. This parameter is limited by the size of the photoconversion region and by the read-out circuit's ability to buffer pixel signals [1].

    4.2 Pixel read-out circuit

4.2.1 Reset transistor MRST

The read-out circuit of a three-transistor, single active pixel is shown in Figure 4.1. There are three in-pixel NMOS transistors and a load transistor at the bottom, including the load capacitor.

Figure 4.1: 3T pixel configuration (reset transistor MRST driven by VRST, photodiode modeled by CPD and photocurrent IPD at node VPD, buffer transistor MSF, row-select transistor MSEL driven by VSEL, and column transistor MCOL biased by VBIAS driving the output VOUT into CCOL = 10 pF).

The transistor MRST is used to set the photodiode to a known voltage value through the signal VRST. This device is usually sized to the minimum allowable feature size in order to maximize the pixel's fill factor and to reduce the charge injection into the photosensitive area after reset [1]. In the case of the CMOS process used in this work, the minimum feature size is W/L = 0.4 μm/0.35 μm.

    The threshold voltage for an NMOS transistor is given by

$$V_{TH} = V_{TH0} + K_1\left(\sqrt{\phi_s + V_{SB}} - \sqrt{\phi_s}\right) + K_2 V_{SB} + \Delta V_{TH} \qquad (4.1)$$

where VTH0 is the zero back-gate bias threshold voltage, K1 and K2 are the body-effect coefficients, ΔVTH is the term that contains short-channel effects, and VSB is the applied voltage


between the source and bulk terminals. φ_s is the surface potential, which for short-channel devices is given by

$$\phi_S = \frac{2kT}{q}\ln\left(\frac{N_{CH}}{N_i(T)}\right) = 0.86440\ \mathrm{V} \qquad (4.2)$$

where k is Boltzmann's constant, T is the room temperature (300 K), q is the charge of the electron, NCH is the channel doping concentration and Ni(T) is the temperature-dependent intrinsic carrier concentration of silicon.
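
For reference, Equation 4.2 is straightforward to evaluate numerically. The channel doping NCH is not quoted in this text, so the value used in the Python sketch below is only an assumed, illustrative number; a doping in the low 10^17 cm⁻³ range gives a surface potential close to the 0.8644 V used here.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant [J/K]
Q_E = 1.602e-19      # elementary charge [C]
T   = 300.0          # room temperature [K]
N_I = 1.45e10        # intrinsic carrier concentration of Si at 300 K [cm^-3]
N_CH = 2.6e17        # channel doping [cm^-3] -- assumed, not from the model card

# Surface potential, Equation 4.2
phi_s = 2 * K_B * T / Q_E * math.log(N_CH / N_I)
print(f"phi_s = {phi_s:.4f} V")   # ~0.86 V with these assumed values
```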

Although Equation 4.1 is used by the simulation tools to determine the threshold voltage, reference [1] gives an equation more suitable for hand calculations. Such an equation uses two fitting coefficients (α1 and α2) to approximate VTH of Equation 4.1 as a linear expression:

$$V_{TH} = V_{TH0} + K_1(1+\alpha_1)\sqrt{\phi_s + V_{SB}} - K_1(1+\alpha_2)\sqrt{\phi_s} + K_2 V_{SB}. \qquad (4.3)$$

From the BSIM3v3 model card for the n-channel device, the zero-bias threshold voltage was found to be VTH0 = 0.4979 V, the first-order body-effect coefficient K1 = 0.50296 V^(1/2), and the second-order body-effect coefficient K2 = 0.033985.
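
As a quick sanity check of these hand-calculation parameters, the long-channel part of Equation 4.1 (i.e., neglecting the short-channel term ΔVTH, which is obtained from simulation below) can be evaluated directly. The following Python sketch is only illustrative and uses the parameter values extracted above:

```python
import math

# BSIM3v3 parameters extracted from the n-channel model card (see text)
VTH0  = 0.4979    # zero back-gate bias threshold voltage [V]
K1    = 0.50296   # first-order body-effect coefficient [V^(1/2)]
K2    = 0.033985  # second-order body-effect coefficient
PHI_S = 0.86440   # surface potential [V], Equation 4.2

def vth_long_channel(vsb):
    """Equation 4.1 without the short-channel term Delta_VTH,
    as a function of the source-bulk voltage vsb [V]."""
    return VTH0 + K1 * (math.sqrt(PHI_S + vsb) - math.sqrt(PHI_S)) + K2 * vsb

for vsb in (0.0, 1.0, 2.0, 2.4):
    print(f"VSB = {vsb:3.1f} V -> VTH = {vth_long_channel(vsb):.3f} V")
```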

Using the configuration depicted in Figure 4.1, the ΔVTH term was obtained through simulation. A minimum-size reset transistor and a test current source were used to charge and discharge the junction capacitance of the photodiode. The body-factor term was calculated as √(φ_s + VSB). Results are shown in Figure 4.2, where the ΔVTH term is plotted as a function of the body-factor term.

Figure 4.2: Linear approximation of the ΔVTH term as a function of the body-factor term √(φ_s + VSB)

From the linear regression of ΔVTH, the two fitting coefficients (α1 and α2) were found. Coefficient α1 corresponds to the slope of the linear approximation and coefficient α2 corresponds to the offset. The values of these two coefficients are

α1 = 0.111665
α2 = 0.123019


Using Equation 4.3 it is possible to calculate the threshold voltage for any VSB value. The photodiode reset voltage can be found using Equation 4.4:

$$V_{PD\_RST} = V_{SB,M1} = V_{RST} - V_{TH\_RST} = \gamma^{2} - \phi_S \qquad (4.4)$$

where

$$\gamma = -\beta + \sqrt{\beta^{2} + \frac{2(1+\alpha_2)}{1+\alpha_1}\sqrt{\phi_S} + \phi_S + \frac{2\,(V_{RST} - V_{TH0})}{K_1(1+\alpha_1)}}$$

and

$$\beta = \frac{K_1(1+\alpha_1)}{2(1+K_2)}.$$

With these equations and using a reset pulse of VRST = 3.3 V on the gate of the MRST device, the photodiode reset voltage was found to be VPD_RST = 2.43 V, which means that more than 26% of the signal range is lost because of the increased threshold voltage of MRST.

From the previous results and using Equation 4.4, the threshold voltage of MRST can be easily calculated:

$$V_{TH\_RST} = V_{RST} - V_{PD\_RST} = 3.3 - 2.43 = 0.87\ \mathrm{V}$$

Through simulation, the threshold voltage of the reset transistor was found to be VTH_RST = 0.931 V for the same conditions; this gives a 6.5% calculation error for the threshold voltage and a 3.7% calculation error for the reset voltage of the photodiode, which has a value of VPD_RST = 2.34 V in simulation.

One way to recover the voltage loss is to boost the reset pulse to a value above the power supply voltage [suat]. The boost added to the reset pulse is the zero-bias threshold voltage of MRST scaled by a boosting factor (B), which is a function of the power supply voltage (VDD) and the minimum channel length available in the technology (Lmin), and can be calculated using Equation 4.5:

$$B = \frac{V_{DD}\,L_{min}}{4} + 1.5 \qquad (4.5)$$

which, for a 3.3 V supply voltage and a 0.35 μm minimum channel length, gives a boosting factor B ≈ 1.79. With this, the required reset pulse that allows a 3.3 V photodiode reset voltage is

$$V_{RST} = B \cdot V_{TH0} + 3.3 = 4.2\ \mathrm{V}$$

From simulation, using a reset pulse of VRST = 4.2 V, the reset voltage of the photodiode is VPD_RST = 3.12 V. The optimum value was determined to be VRST ≈ 5 V for VPD_RST = 3.3 V. Figure 4.3 shows the obtained results.
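
The boosting calculation is simple enough to check numerically. The short Python sketch below evaluates Equation 4.5 in the form given above, together with the boosted reset amplitude, using the VTH0 extracted earlier:

```python
VDD  = 3.3     # supply voltage [V]
LMIN = 0.35    # minimum channel length [um]
VTH0 = 0.4979  # zero-bias threshold voltage of MRST [V]

# Empirical boosting factor, Equation 4.5 (form as given above)
B = VDD * LMIN / 4 + 1.5
# Boosted reset pulse needed for a 3.3 V photodiode reset voltage
VRST_boosted = B * VTH0 + 3.3

print(f"B    = {B:.2f}")               # ~1.79
print(f"VRST = {VRST_boosted:.2f} V")  # ~4.2 V
```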


Figure 4.3: Boosted VRST and resulting VPD_RST (photodiode reset voltage for different reset pulse amplitudes)

4.2.2 Source Follower Amplifier

After the integration time, the remaining photodiode voltage is buffered by the source follower (common-drain) amplifier. In Figure 4.1, device MSF acts as a buffer and MCOL as a current sink; these two devices form the source follower when both operate in saturation. Such an amplifier has a low voltage gain (slightly less than 1) and a high current gain, and it is used to drive the capacitive load encountered at the end of each column of pixels.

The minimum output voltage of the source follower amplifier is determined by the threshold voltage of the buffer device, VTH_SF, since for voltage values lower than that the buffer device is turned off. On the other hand, the maximum output voltage is considerably lower than VDD because of the body effect, which causes VTH_SF to increase as the output voltage increases.

The output voltage range of the source follower amplifier is determined by the threshold voltage of the buffer transistor, VTH_SF, and by the column current set by the load transistor MCOL. As mentioned before, the body effect present in the operation of the buffer device causes the output voltage to increase at a slower rate than the input signal; that is, VTH_SF increases with the output, so the gain is lower than 1. This behavior is described by Equation 4.6:

$$V_{OUT} = V_{PD} - V_{TH\_SF} \qquad (4.6)$$

where VPD is the voltage of the photodiode and VTH_SF is the modulated threshold voltage of the buffer device MSF.


In order to determine the output voltage range of the source follower amplifier, the relative minimum voltage that can be buffered by MSF was determined through Equation 4.7 [1]:

$$V_{OUT} = V_{SB\_SF} = \gamma^{2} - \phi_S \qquad (4.7)$$

where

$$\gamma = -\beta + \sqrt{\beta^{2} + \frac{2(1+\alpha_2)}{1+\alpha_1}\sqrt{\phi_S} + \phi_S + \frac{2\,(V_{PD} - V_{TH0})}{1+K_2}}. \qquad (4.8)$$

Setting an output voltage of VOUT = 0 V in Equation 4.7, it was found that the absolute minimum photodiode voltage that can be buffered by MSF is VPD_MIN0 = 0.48 V, which is close to its zero-bias threshold voltage. The absolute minimum photodiode voltage was then found by adding the effective voltage of the load transistor to VPD_MIN0:

$$V_{PD\_MIN} = V_{EFF\_COL} + V_{PD\_MIN0} \qquad (4.9)$$

where VEFF_COL is the effective voltage of the load transistor and is given by:

$$V_{EFF\_COL} = V_{GS\_COL} - V_{TH0} \qquad (4.10)$$

A bias voltage of VGS_COL = 1 V was used to determine the size of MCOL; substituting (4.10) into (4.9), the input voltage necessary for the source follower to operate linearly was determined to be VPD_MIN ≈ 1 V.

The buffer device, MSF, was set to the minimum aspect ratio in order to maximize the pixel's fill factor, that is, W/L = 0.4 μm/0.35 μm, and the load transistor, MCOL, was designed to sink a low bias current (≈ 1.5 μA) in order to achieve the widest possible output voltage range, which also enables a reduced noise performance. A simulation analysis was performed in order to determine the optimum feature size of MCOL; results are shown in Figure 4.4.

Figure 4.4 shows that, for an input voltage of VIN ≈ 1 V, the load transistor MCOL with an aspect ratio of W/L = 0.1 begins to operate in saturation at a lower bias current. For larger feature sizes, the drain current increases and saturation starts at a greater input voltage, so the output voltage range is reduced, as can be seen from the output voltage plot in Figure 4.4. It was also determined through simulation that for aspect ratios smaller than W/L = 0.1 the output voltage range increases only marginally while the load transistor occupies a larger area, so the optimum feature size of MCOL was determined to be W/L = 0.1.
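
The order of magnitude of the column bias current can also be anticipated with a simple square-law hand estimate. The Python sketch below is only illustrative: the process transconductance parameter KP is an assumed, generic value (not taken from the model card used in this work), and with it the predicted current for W/L = 0.1 comes out in the same order of magnitude as the ≈ 1.5 μA reported from simulation:

```python
# Rough square-law estimate of the drain current sunk by MCOL in saturation.
# KP (= un*Cox) is an assumed, illustrative value for a generic 0.35 um
# process; it is NOT taken from the model card used in this work.
KP   = 170e-6   # process transconductance parameter [A/V^2] (assumed)
VGS  = 1.0      # column bias voltage applied to MCOL [V]
VTH0 = 0.4979   # threshold voltage (source of MCOL is grounded, no body effect) [V]

for wl in (0.1, 0.25, 0.5, 0.75, 1.0):
    i_d = 0.5 * KP * wl * (VGS - VTH0) ** 2
    print(f"W/L = {wl:4.2f} -> ID ~ {i_d * 1e6:.2f} uA")
```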

From the results above it was determined that, applying a boosted pulse to reset the photodiode voltage level and biasing the source follower amplifier with the optimum column current found through simulation, the output voltage range of the pixel is

$$V_{PR} = V_{DD} - V_{PD\_MIN} = 3.3 - 1 = 2.3\ \mathrm{V} \qquad (4.11)$$

This result is about 70% of the photodiode voltage swing, or a voltage gain of 0.7 V/V.


Figure 4.4: Source follower drain current and output voltage as a function of the input voltage for different feature sizes of MCOL (W/L = 0.1, 0.25, 0.5, 0.75 and 1)

4.3 Photodiode design

On a standard CMOS technology, three types of photodiode can be built:

- Ndiff–Psub
- Nwell–Psub
- Pdiff–Nwell

4.3.1 Pn-junction capacitance

As a starting point in the design of the photodiode, it was found in the literature that for scientific applications of CMOS active pixel image sensors the typical charge pocket size is between N = 100 Ke⁻ and N = 1 Me⁻, so in this design a value of N = 1 Me⁻ was taken. On the other hand, the photodiode's voltage swing is VPD = 3.3 V, as shown in the previous section.

The conversion gain of the photodiode necessary to reach such specifications was determined to be [1]:

$$CG = \frac{V_{FD}}{N} = \frac{3.3}{1 \times 10^{6}} = 3.3\ \mu\mathrm{V/e^-}. \qquad (4.12)$$

Therefore, the capacitance of the photodiode CPD is evaluated as follows:

$$C_{FD} = \frac{q}{CG} = \frac{1.602 \times 10^{-19}}{3.3 \times 10^{-6}} \approx 50\ \mathrm{fF}. \qquad (4.13)$$
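
These two relations are easy to verify numerically. The following Python sketch simply reproduces the conversion-gain and capacitance estimates of Equations 4.12 and 4.13:

```python
Q_E    = 1.602e-19  # elementary charge [C]
VSWING = 3.3        # photodiode voltage swing [V]
N_FW   = 1e6        # target charge pocket (full well) [electrons]

cg  = VSWING / N_FW   # conversion gain [V per electron], Eq. 4.12
cpd = Q_E / cg        # required photodiode capacitance [F], Eq. 4.13

print(f"Conversion gain: {cg * 1e6:.1f} uV/e-")        # 3.3 uV/e-
print(f"Photodiode capacitance: {cpd * 1e15:.1f} fF")  # ~48.5 fF (~50 fF)
```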


Figure 4.5: Photodiode response in a dark environment at 33 ms integration time

A simulation test was performed in order to determine the photodiode capacitance performance for different light levels and sampling rates. The testing circuit includes the reset transistor (MRST), the calculated capacitance of the photodiode (CPD) represented by an ideal capacitor, and a test current source (IPD) which represents the photo-generated carriers in the depletion region of the photodiode.

Figure 4.5 shows the simulation results for a dark environment, which is represented by setting IPD = 0 A. It was determined that under these conditions a leakage current discharges the photodiode by about 1.2 V. Using Equation 4.12, the number of electrons accumulated by the leakage effect was found to be

$$N_{dark} = \frac{3.3 - 2.137}{3.3 \times 10^{-6}} \approx 350\ \mathrm{Ke^-}.$$

So the dynamic range of the pixel at this sampling rate is

$$DR = 20\log\left(\frac{1 \times 10^{6}}{350 \times 10^{3}}\right) \approx 9\ \mathrm{dB}$$
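
The dark-signal and dynamic-range figures above follow directly from the conversion gain; a minimal Python check, using the end-of-integration voltage read from Figure 4.5, is:

```python
import math

CG          = 3.3e-6  # conversion gain [V/e-], Eq. 4.12
V_RESET     = 3.3     # photodiode voltage right after reset [V]
V_END       = 2.137   # photodiode voltage after 33 ms in the dark [V] (Figure 4.5)
N_FULL_WELL = 1e6     # charge pocket size [electrons]

n_dark = (V_RESET - V_END) / CG                 # electrons lost to leakage
dr_db  = 20 * math.log10(N_FULL_WELL / n_dark)  # dynamic range [dB]

print(f"N_dark ~ {n_dark / 1e3:.0f} ke-")  # ~352 ke-
print(f"DR ~ {dr_db:.1f} dB")              # ~9 dB
```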

The charge distribution also affects the pixel performance by decreasing the pixel voltage when the reset device turns off. Figure 4.6 is a close-up of Figure 4.5 at the time when MRST turns off. It can be seen that the remaining electrons in the channel discharge the photodiode's capacitance by about 12 mV.


Figure 4.6: Charge distribution effect on the photodiode's capacitance

Under the same lighting conditions, if the sampling rate is increased, the dynamic range also increases. This is because for shorter integration times the photodiode is exposed to the leakage current for less time. Figure 4.7 shows that for a sampling rate of 300 samples per second the voltage reduction due to the dark current is 0.197 V, and the dynamic range of the sensor is then increased to DR = 24.5 dB.


Figure 4.7: Photodiode response in a dark environment at 3.3 ms integration time

The current required to completely discharge the photodiode for different sampling rates was also determined. Sampling rates of 30, 300, 3,000 and 30,000 frames per second were used. The width of the reset pulse in all cases is 30% of the total integration time, and a boosted amplitude of VRST = 5 V was set. Simulation results are shown in Figure 4.8.

The simulation results in Figure 4.8 show that the maximum photodiode voltage swing (VPD = 3.3 V) was reached in all cases. The current necessary to discharge the photodiode capacitance, CPD, is IPD ≈ 6 pA in a), IPD ≈ 55 pA in b), IPD ≈ 550 pA in c) and IPD ≈ 5.5 nA in d). Since the integration time becomes shorter in each case, a larger current is necessary to discharge the same capacitance.
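
These currents can be anticipated with a back-of-envelope estimate, I ≈ CPD·ΔV/t_int, where t_int is the portion of the frame period available for integration. The Python sketch below uses the 50 fF ideal capacitance and assumes, as an approximation, that integration occupies the 70% of the frame not used by the reset pulse; it is only a rough estimate that agrees with the simulated values within a few tens of percent and, more importantly, reproduces the ×10 scaling per decade of frame rate:

```python
C_PD    = 50e-15   # ideal photodiode capacitance [F]
DELTA_V = 3.3      # full voltage swing to discharge [V]

for fps in (30, 300, 3_000, 30_000):
    t_frame = 1.0 / fps
    t_int   = 0.7 * t_frame           # assumed: 70% of the frame is integration
    i_est   = C_PD * DELTA_V / t_int  # current needed for a full discharge
    print(f"{fps:6d} fps -> I ~ {i_est * 1e12:8.1f} pA")
```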

4.3.2 Physical characteristics

Figure 4.40 in [1] shows that for a charge pocket of 1 Me⁻ the pixel pitch using a standard 0.35 μm technology is about 17 μm, which gives a pixel fill factor between 15% and 45%, depending on the photodiode architecture and the design rules of each specific foundry.

The total junction capacitance of the photodiode as a function of its area and perimeter is given by

$$C_{PD} = \frac{C_J \cdot A}{\left(1 - \dfrac{V_{PD}}{\phi_B}\right)^{MJ}} + \frac{C_{JSW} \cdot P}{\left(1 - \dfrac{V_{PD}}{\phi_{BSW}}\right)^{MJSW}} \qquad (4.14)$$

where CJ and CJSW are the unit zero-bias area and peripheral junction capacitances, A and P are the area and perimeter of the photodiode, φB and φBSW are the built-in potentials of the area and side-wall junctions, MJ and MJSW are the junction grading coefficients of the area and side-wall junctions, and VPD is the photodiode junction voltage.


Figure 4.8: Full discharge of the 50 fF ideal capacitance at a) 30 frames per second, b) 300 frames per second, c) 3,000 frames per second and d) 30,000 frames per second


Equation 4.14 was solved for a square-shaped photodiode and it was determined that, for an area APD = 100 μm² and a perimeter PPD = 40 μm, the capacitance of the photodiode is CPD = 58.44 fF, which represents a 16.8% greater capacitance than the one defined for a 1 Me⁻ charge pocket using an ideal capacitor.
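
Equation 4.14 is easily evaluated with a short script once the junction parameters of the process are known. The sketch below is only illustrative: the CJ, CJSW, built-in potential and grading-coefficient values are generic, assumed numbers for a 0.35 μm-class process, not the actual foundry parameters used in this work, so it does not reproduce the 58.44 fF figure exactly.

```python
# Junction capacitance of a reverse-biased Ndiff-Psub photodiode, Eq. 4.14.
# All process parameter values below are assumed/illustrative, not the foundry's.
CJ   = 0.94e-3   # zero-bias area capacitance [F/m^2]
CJSW = 0.25e-9   # zero-bias sidewall capacitance [F/m]
PB   = 0.69      # area junction built-in potential [V]
PBSW = 0.69      # sidewall junction built-in potential [V]
MJ   = 0.31      # area grading coefficient
MJSW = 0.19      # sidewall grading coefficient

A = 100e-12      # photodiode area [m^2] (100 um^2)
P = 40e-6        # photodiode perimeter [m] (40 um)

def cpd(v_reverse):
    """Photodiode junction capacitance [F] at a reverse bias v_reverse [V].
    Equivalent to Eq. 4.14 with the junction voltage V_PD = -v_reverse."""
    c_area = CJ * A / (1 + v_reverse / PB) ** MJ
    c_sw   = CJSW * P / (1 + v_reverse / PBSW) ** MJSW
    return c_area + c_sw

print(f"CPD at 3.3 V reverse bias: {cpd(3.3) * 1e15:.1f} fF")
```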

According to the design rules of the CMOS process used in this design, the area and perimeter of a minimum feature size transistor terminal are, respectively,

$$A_{MIN} = 0.45\ \mu\mathrm{m} \times 0.4\ \mu\mathrm{m} = 0.18\ \mu\mathrm{m}^{2} \qquad (4.15)$$
$$P_{MIN} = 2 \times 0.45\ \mu\mathrm{m} + 2 \times 0.4\ \mu\mathrm{m} = 1.7\ \mu\mathrm{m} \qquad (4.16)$$

As an example (not an optimized area), Figure 4.9 shows an active pixel using the designed square-shaped photodiode and minimum-size transistors. In this figure, the source terminal of the reset transistor was set as the photo-sensitive area. This is achieved by exposing such a region to light, i.e., by preventing all upper (metal) layers from overlaying the n-active surface of the transistor.


Figure 4.9: Layout example of a square-shaped photodiode including on-pixel transistors

With this, a new simulation test was set up, but now the ideal capacitor used before was removed and, instead, the area and perimeter of both the drain and source terminals of the MRST transistor were included in the Spice deck. Table 4.1 shows the physical parameters set for MRST in simulation.

Table 4.1: Physical parameters of MRST for simulation of a photodiode with capacitance CPD = 58.44 fF.

Physical parameter    Value
Width                 0.4 × 10⁻⁶ m
Length                0.35 × 10⁻⁶ m
Drain area            0.18 × 10⁻¹² m²
Drain perimeter       1.7 × 10⁻⁶ m
Source area           100 × 10⁻¹² m²
Source perimeter      40 × 10⁻⁶ m


In the first run of the simulation, a test current IPD = 0 A was set in order to determine the dark current and charge distribution effects on the pn-junction photodiode. Figure 4.10 shows the simulation results.

Figure 4.10: Pn-junction photodiode response in a dark environment at 33 ms integration time

Then, as done before, the number of electrons that discharge the photodiode's capacitance in the absence of light was calculated:

$$N_{dark} = \frac{3.3 - 2.252}{3.3 \times 10^{-6}} \approx 317\ \mathrm{Ke^-},$$

and the dynamic range of the pixel at 30 frames per second is

$$DR = 20\log\left(\frac{1 \times 10^{6}}{317 \times 10^{3}}\right) \approx 10\ \mathrm{dB}$$

As in Figure 4.6, Figure 4.11 shows the charge distribution effect on the pn-junction photodiode. It can be seen that the total voltage discharge is about 10 mV.


Figure 4.11: Charge distribution effect on the pn-junction photodiode

A slight decrease can be appreciated in the number of electrons accumulated due to the dark current when compared to the ideal capacitor, and the dynamic range increases by 1 dB. A more significant increase in the dynamic range is observed when the sampling rate is increased, as shown in Figure 4.12.

Figure 4.12: Pn-junction photodiode response in a dark environment at 300 frames per second


For a 300 frames per second sampling rate, the voltage reduction due to the dark current is 0.170 V, which means a reduction of 15% when compared to the ideal capacitance; the dynamic range this time is 25.8 dB, which represents an increase of 5% when compared to the ideal 50 fF capacitance.

The current necessary to completely discharge the pn-junction capacitance for different sampling rates was also determined through simulation. Figure 4.13 shows that more current is needed in this case due to the increased capacitance of the photodiode.

Figure 4.13: Full discharge of the square-shaped Ndiff–Psub photodiode with a CPD = 58.44 fF pn-junction capacitance at a) 30 frames per second, b) 300 frames per second, c) 3,000 frames per second and d) 30,000 frames per second

Table 4.2 shows the current values for a full discharge of both capacitances, ideal and pn-junction, for different sampling rates, and the percentage error between the two cases. As can be seen, the maximum error is about 27.3% and occurs at the higher sample rates.

Table 4.2: Simulation results and percentage error of the full-discharge current of the ideal and the pn-junction capacitances.

Sample rate [fps]    Ideal [pA]    p-n junction [pA]    Error [%]
30                   6             7.5                  25
300                  55            70                   27.3
3,000                550           700                  27.3
30,000               5,500         7,000                27.3


The ideal capacitor was then fixed to the calculated pn-junction capacitance and a new simulation was set up in order to determine the percentage error using the same capacitance value. With this, the charge capacity of the ideal photodiode is the same as that of the pn-junction photodiode. Figure 4.14 shows the simulation results.

Figure 4.14: Full discharge of the ideal photodiode with a CPD = 58.44 fF capacitance at a) 30 frames per second, b) 300 frames per second, c) 3,000 frames per second and d) 30,000 frames per second

The currents needed to completely discharge the capacitance of the ideal photodiode changed with respect to the pn-junction photodiode, although the percentage error was reduced considerably. This is shown in Table 4.3.

Table 4.3: Simulation results and percentage error of the full-discharge current of the ideal and the pn-junction capacitances for the same capacitance value.

Sample rate [fps]    Ideal [pA]    p-n junction [pA]    Error [%]
30                   7             7.5                  7.1
300                  65            70                   7.7
3,000                650           700                  7.7
30,000               6,500         7,000                7.7

The maximum percentage error in this case is 7.7%, which is considerably lower than the 27.3% obtained with the 50 fF capacitance.


4.4 Pixel array simulation

A pixel array was simulated using the active pixel sensor designed in the previous sections and the corresponding control signals for vertical and horizontal access. The pixels were distributed in four rows and five columns (4 × 5), and each pixel in a row shares the reset and select signals, so the read-out is performed one row at a time.

At the bottom of each column an NMOS device was placed, acting as the active load of the buffer transistor of each pixel. Such a load was biased with a voltage source in order to set a determined drain current, and a capacitive load of 5 fF was connected at the output of each column.

4.4.1 Simulation setup

Each pixel has a pn-junction photodiode with a capacitance of 58.44 fF, as designed in Section 4.3.2, and its corresponding reset, buffer and select transistors (see Figure 4.1). As mentioned before, the active load device is placed at the end of each column and a capacitive load of 5 fF is connected at the output of each column.

Figure 4.15: Timing signals for pixel access and read-out (reset pulse, select pulse and photodiode voltage across the integration time)

The timing signals of the pixel array are shown in Figure 4.15. As can be seen in this figure, a reset pulse is applied to the photodiode to set the voltage across its terminals to a known value. Once the reset device is turned off, the integration time starts and lasts until a new reset pulse is applied. During the integration time, the photodiode is floating and electron-hole pairs are generated in the depletion region of the junction depending on the light intensity and the wavelength of the photons. The select signal is applied just before and during the reset pulse, so the voltage variation in that time lapse can be amplified, measured and sampled.


Figure 4.16: Active pixel sensor array including horizontal, vertical and read-out circuitry (pixel array with reset and row-select signals for vertical access, and a read-out stage at the output)

The control signals were set using voltage sources in the simulation. Four reset signals were set, one per row of the array, although a single reset signal could be applied to all pixels, since this event occurs at the same time for every pixel in the array. The select signals were set in the same way, so the read-out could be done row by row.

Although it was not included in the simulation, the horizontal access circuitry of Figure 4.16 includes the conditioning circuits that prepare the analog signals to be converted to their digital form, so they can be easily interpreted outside the sensor.

Twenty current sources were set to emulate the photons impinging on each pixel. The value of each current source was chosen randomly.
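
The row-sequential access described above can be summarized with a small scheduling sketch. The following Python fragment is not the Spice test bench used in this work; it only illustrates, with timing values chosen to mirror the read-out instants discussed in the results (which are otherwise hypothetical), how shared row reset/select pulses produce one read-out event per row while the five column outputs of that row are sampled in parallel:

```python
ROWS, COLS = 4, 5
T_FIRST_READ = 1e-3  # first row read-out instant [s] (illustrative)
T_ROW_STEP   = 2e-3  # spacing between successive row read-outs [s] (illustrative)

for row in range(ROWS):
    t_read = T_FIRST_READ + row * T_ROW_STEP
    # The row-select and reset pulses are shared by the whole row, so the
    # five column outputs of this row are sampled in parallel at t_read.
    cols = ", ".join(f"col {c}" for c in range(COLS))
    print(f"t = {t_read * 1e3:4.1f} ms : read row {row} -> sample {cols}")
```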

4.4.2 Results

Simulation results of the pixel array are shown next. Figure 4.17 depicts the control signals and the output voltage for one of the pixels included in the array. In this case, a source current of IPD = 2.3 pA was randomly set. As can be seen, the reset pulse has a 5 V amplitude, which sets the voltage of the photodiode to 3.3 V. When the integration time has finished, the voltage of the photodiode has discharged from its reset value to 1.573 V, for an integration time of 33 ms.


Figure 4.17: Simulation results for a single pixel of the 4 × 5 array. From top to bottom: reset pulse, select pulse, photodiode voltage, source follower output, column output.

In Section 4.2 it was determined that, using a boosted reset pulse of 5 V, the photodiode's reset voltage could be set to 3.3 V (Figure 4.3). It was also determined that the source follower amplifier has a voltage gain of 0.7 V/V and that the output voltage range of this amplifier is 2.3 V (Equation 4.11).

In this simulation, the maximum voltage at the output of the source follower amplifier occurs at the beginning of the integration time and is 2.34 V; the gain at this point in time is

$$A_{SFB} = \frac{2.34}{3.3} = 0.71$$

and the gain at the end of the integration time is

$$A_{SFE} = \frac{1.117}{1.573} = 0.71.$$

With this, the design parameters obtained previously are demonstrated to be correct by means of the simulation of a complete pixel as part of a pixel array.


In order to demonstrate the performance of the array, a complete exposure was simulated. Figure 4.18 shows four reset pulses, one for each row, and the corresponding row select signals. In this simulation, one row was selected for each reset pulse applied to the entire array, so the output shows the buffered signal of each row when a reset pulse is applied.

Figure 4.18: Vertical control signals: reset pulses and row selection

Figure 4.19 shows the output signals for each row. The reading of the signals is actually done in parallel for each column, so at 1 ms there are five pulses, one for each column, and all of them are read at the same time. Then, at 3 ms, the second reading takes place, and so on until the four rows have been read.


Figure 4.19: Output signals of the pixel array

According to the results presented in this chapter, the design parameters and the simulation results obtained can be considered correct from a simulation point of view.


Chapter 5

    Conclusions and future work

This work presented the main pixel architectures used today in the design of CMOS image sensors. Based on the literature, the most suitable architecture for scientific applications was chosen and analysed. The CMOS active pixel as part of a complete image sensor was studied and a new design was presented. A small array was also simulated and the results were reported. The design presented in this work aims at the possibility of fabricating a CMOS image sensor using the studied pixel architecture. Furthermore, the simulation results agree with most designs presented recently in the literature.

A CMOS active pixel image sensor was designed. This task was carried out considering the most common indoor lighting environments. The sizes of the on-pixel devices were chosen to be the minimum feature size possible in order to maximize the pixel's fill factor.

From the results obtained, it was found that the current necessary to fully discharge the capacitance of the photodiode increases as the sampling rate increases; this is because