CMOS digital pixel sensors: technology and applications
Orit Skorka and Dileepan Joseph
16 April 2014
Abstract
CMOS active pixel sensor technology, which is widely used these days for digital imaging, is based on analog pixels. Transition to digital pixel sensors can boost signal-to-noise ratios and enhance image quality, but can increase pixel area to dimensions that are impractical for the high-volume market of consumer electronic devices. There are two main approaches to digital pixel design. The first uses digitization methods that largely rely on photodetector properties and so are unique to imaging. The second is based on adaptation of a classical analog-to-digital converter (ADC) for in-pixel data conversion. Imaging systems for medical, industrial, and security applications are emerging lower-volume markets that can benefit from these in-pixel ADCs. With these applications, larger pixels are typically acceptable, and imaging may be done in invisible spectral bands.

1.

INTRODUCTION

The image sensor market was traditionally dominated by charge-coupled device (CCD) technology. Ease of on-chip integration, higher frame rates, lower power consumption, and lower manufacturing costs have enabled complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology to catch up with CCDs. This trend is especially prominent in the high-volume consumer electronics market. Furthermore, the difference in image quality, which favored CCDs in the early days, has substantially narrowed over the years.

When using either CCD or CMOS APS technology, electronic image sensors are based on analog pixels. With CCD sensors, data conversion is done at board level, and with CMOS APS ones, it is done at either chip or column level. Because digital data is more immune to noise, a transition to digital pixels can enhance performance on signal and noise figures of merit. In particular, digital pixels enable higher signal-to-noise-and-distortion ratios (SNDRs), lower dark limits (DLs), and wider dynamic ranges (DRs). SNDR is directly related to image quality, DL determines performance under dim lighting, and DR indicates the maximal range of brightness that can be properly captured in a single frame.
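
In decibel terms, DR is commonly expressed as the ratio of the largest to the smallest properly captured signal,

    \mathrm{DR} = 20 \log_{10}\!\left( X_{\max} / X_{\min} \right)\ \mathrm{dB},

so, for example, a 120 dB DR corresponds to a 10^6:1 range of scene brightness within a single frame.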

With digital pixel sensor (DPS) technology, data conversion is done at pixel level, where each pixel outputs a digital signal. Digital pixels are larger than analog ones because they contain more circuit blocks and more transistors per pixel. These days, the highest volume of the image sensor market comes from consumer electronics applications, which favor small pixels and high-resolution arrays. Many DPS designs are currently unsuitable for this market segment. However, there are medical, surveillance, industrial, and automotive imaging applications that can accept large pixels and benefit from digital pixels. These are low-volume growing markets, where imaging is sometimes done in invisible bands of the spectrum. There are many approaches to DPS design, and specific application requirements make some approaches preferable to others.

In this review paper, Section 2 analyzes the market of CMOS image sensors, focusing on diversification into invisible spectral bands. Section 3 compares and contrasts various digital pixel architectures in the literature. Main points are summarized in the conclusion section.

2.

DIVERSITY OF CMOS SENSORS

CMOS image sensor applications are diverse. Because design specifications are application-defined, CMOS image sensors vary widely in properties that include fabrication process and technology, imaging band, use of color filters with visible-band imaging, pixel pitch, array size, array area, video rate, low-light performance, DR, temporal and fixed-pattern noise, power consumption, and operating temperature. In general, technological developments are mainly driven by market demand.

2.1

MARKET AND TECHNOLOGY TRENDS

A white paper released in 2010 by the International Technology Roadmap for Semiconductors (ITRS) presents a dual-trend roadmap for the semiconductor industry.1 The first trend for future development has been called “More Moore”. It focuses on device miniaturization, mainly applies to digital applications, such as memory and logic circuits, and simply continues the traditional approach of Moore's Law. The second trend, which has been called “More than Moore”, focuses on functional diversification of semiconductor devices. It has evolved from microsystems that include both digital and non-digital functionalities, and that use heterogeneous integration to enable interaction with the external world. Examples include applications where transducers, i.e., sensors and actuators, are used, as well as subsystems for power generation and management. Image sensors are heterogeneous microsystems that require photodetectors for sensing, analog circuits for amplification and pre-processing, and digital circuits for control and post-processing.

Whereas the ITRS takes a technology-push approach with the “More Moore” trend, its approach with the “More than Moore” trend is based on identifying fields for which a roadmapping effort is both feasible and desirable. In a 2012 update to the “More than Moore” roadmap,2 the ITRS recognizes energy, lighting, automotive, and health care as sectors that are lead technology drivers. Developments in the latter two sectors include various applications that are based on electronic imaging systems.

A report by Frost & Sullivan,3 which discusses technological and market trends of electronic image sensors, indicates that CCD and front-side illuminated CMOS APS technologies have passed their maturity stage and are now in decline, while back-side illuminated CMOS APS technology is currently growing. The latter requires substrate thinning, which yields a structure better suited for imaging and allows vertical integration of transistors and photodiodes. Image sensors based on organic CMOS and quantum dots are considered technologies in their introductory and growth stages.

Frost & Sullivan also provide a demand-side analysis. The analysis shows that consumer electronics devices require high-resolution sensor arrays with minimal pixel size, while industrial, security, and surveillance applications demand wide-DR imaging capabilities. Low-light imaging is required by some medical imaging applications as well as security and surveillance ones. Fig. 1 presents the distribution of the image sensor market according to a company presentation prepared by Yole Développement.4 The presentation also indicates that, while consumer electronics accounts for the largest portion of the image sensor market, the market for lower-volume applications is also growing and is expected to drive future growth of the industry.

Figure 1.

Low to high volume CMOS image sensor applications, according to a report prepared by Yole Développement.


2.2

IMAGING IN DIFFERENT BANDS

Electronic image sensors can be found in a wide range of applications that cover the entire electromagnetic spectrum, from γ-rays to terahertz (THz). While similar readout circuits may be used with various imaging systems, the photodetectors must be selected according to the band of interest. Fig. 2 presents the typical pixel pitch of electronic image sensors in various imaging bands, and Table 1 summarizes common properties of image sensors in all spectral bands used for imaging. Details and sources are given below. Pixel pitch requirements are set by the application, and depend on the size of the available photodetectors as well as on image demagnification.
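
As a hypothetical worked example of the demagnification trade-off: with 10× demagnification, a 1 mm feature in the scene maps to 100 μm at the focal plane, so sampling it with two pixels calls for a pitch of at most 50 μm; without demagnification, as in most X-ray systems, a 500 μm pitch would suffice for the same feature, but the array must then span the full object area.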

Figure 2.

Variation of typical pixel pitch with imaging band. (All artwork is original.)


Table 1.

Typical properties of image sensors in spectral bands used for imaging.

Band    | Wavelength  | Focused | Pitch (μm) | Detectors
γ-rays  | < 0.01 nm   | No      | 100–1000   | Indirect: scintillator and c-Si devices; Direct: CdZnTe devices
X-rays  | 0.01–10 nm  | No      | 48–160     | Indirect: scintillator and c-Si devices; Direct: a-Si:H, CdZnTe, or a-Se devices
UV      | 10–400 nm   | Yes     | 5–10       | c-Si devices
Visible | 400–700 nm  | Yes     | 1–8        | c-Si devices
Near IR | 0.7–1 μm    | Yes     | 17–47      | c-Si devices
IR      | 1–1000 μm   | Yes     | 17–52      | Microbolometers or HgCdTe devices
THz     | 100–1000 μm | Yes     | 50–180     | Microbolometers or c-Si antennas

2.2.1

γ-ray imaging

γ-ray cameras have applications in nuclear material detection, astronomy, nuclear medicine, nuclear power systems, and other fields where radioactive sources are used.5 Traditionally, crystal scintillators, such as CsI, which absorb the radiation and emit visible light, were used in combination with photomultiplier arrays for detection of γ-rays.

Recently, CMOS arrays that are either coated with scintillators or vertically integrated with materials that are direct converters of γ-rays, such as CdZnTe, have been demonstrated.6,7 Although γ-ray photons cannot be focused, image demagnification can be performed by use of collimators, as done in single-photon emission computed tomography (SPECT) imaging systems.

2.2.2

X-ray imaging

Medical X-ray imaging applications include mammography, radiography, and image-guided therapy. X-ray cameras are also used in security screening, industrial inspection, and astronomy. In general, X-ray imaging is performed without any demagnification mechanism. Die tiling, as shown in Fig. 3(a), is needed with X-ray sensors that are based on CMOS devices when the specified imaging area exceeds the maximal die area that is feasible with CMOS processes.

Figure 3.

(a) Die tiling is used in X-ray image sensors to fulfill the requirement for large-area arrays because X-ray imaging is done without image demagnification. (b) Pixel in an uncooled IR image sensor with a microbolometer device.


There are two approaches for detection of X-rays in electronic image sensors.8 The indirect approach employs scintillator films that absorb X-rays to emit photons in the visible band. Commonly used scintillators are CsI and Gadox. The direct approach is based on materials, such as a-Se, HgI2, and CdZnTe,9 that absorb X-rays to generate free charge carriers.

Image quality is better with the direct approach because, with scintillators, the emitted photons may not have the same directions as the absorbed X-rays, which causes image blur. However, direct converters operate under voltage levels that are significantly higher than those used with CMOS devices. Readout arrays for X-ray image sensors have been demonstrated with hydrogenated amorphous silicon (a-Si:H) thin-film transistor (TFT),10 CCD,11 and CMOS12 devices.

2.2.3

UV imaging

Applications for ultraviolet (UV) imaging include space research, daytime corona detection,13 missile detection, and UV microscopy. UV radiation from the sun in the range of 240 to 280 nm is completely blocked from reaching the Earth by the ozone layer in the stratosphere. A camera that is sensitive only to this region will not see any photons from the sun.

UV cameras based on monolithic crystalline silicon (c-Si) image sensors are available commercially. Examples include the Hamamatsu ORCA II BT 512, which uses a back-illuminated CCD sensor,14 and the Intevac MicroVista camera, which uses a back-illuminated CMOS sensor.15

2.2.4

Visible-band imaging

Most visible-band imaging applications involve a lens that creates a sharp image on the focal plane, where the image sensor is placed. However, there are visible-band applications, such as lab-on-chip, where imaging is done without a lens.16 Fortunately, c-Si, the semiconductor most commonly used by the industry, is sensitive to visible light. Color and other aspects of the human visual system are crucial for the design and evaluation of image sensors in this band.

2.2.5

IR imaging

The infrared (IR) band is divided here into two regions. Near IR lies between 0.7 and 1.0 μm. With a bandgap of 1.12 eV, c-Si is sensitive to radiation in this band. IR refers to longer wavelengths, where other types of photodetectors must be used. IR photodetectors may be categorized as either semiconductor or micro-electromechanical system (MEMS) devices.

Operating principles of semiconductor photodetectors are based on solid-state physics, where free charge carriers are generated by absorption of photons. Alloys of mercury cadmium telluride (MCT) are commonly used for detection of IR radiation. Because photon energy in this band is on the order of thermal energy at room temperature, semiconductor photodetectors must be cooled.

Operating principles of MEMS IR detectors, called microbolometers, are based on changes in the electrical properties of conductive films as their temperature rises with exposure to IR radiation. Microbolometers do not require cooling, and can be directly deposited on a CMOS readout circuit array,17 as illustrated in Fig. 3(b).

IR imaging applications include medical imaging (e.g., breast thermography), night vision cameras, and building inspection (e.g., detection of hot spots and water). With modern IR cameras, image sensors with pixel pitch of 17 μm or higher are readily available.18

2.2.6

THz imaging

The THz region lies between optical wavelengths and electronic wavelengths, i.e., microwaves. Challenges with the generation and detection of THz radiation made THz imaging impractical until recently. However, imaging in this band is attractive because THz radiation is non-ionizing, which presents a promising alternative to X-rays in various applications.

The technology takes advantage of the transparency of airborne particles, such as dust and smoke, and of thin layers, such as plastic, paper, and clothing, to THz rays, in contrast to the high absorption coefficient of water and metals. This allows sensors to “see through” materials that are opaque in other regions of the electromagnetic spectrum.

THz imaging has applications in medical diagnosis, such as identification of dental caries and determination of hydration levels, as well as in space research and in industrial and food quality control. Currently, the THz market focuses mainly on security screening, as the technology allows detection of hidden weapons and of chemicals used in explosives.19

There are two main approaches for fabrication of THz sensors. With monolithic CMOS sensors, each pixel includes an antenna that couples THz waves to a CMOS transistor. The transistor rectifies the THz signal and converts it into a continuous voltage. With hybrid sensors, microbolometer detectors are directly deposited on CMOS devices. Typical pixel pitch is around 150 μm20,21 with the former approach, and 50 μm22 with the latter approach, which resembles the uncooled IR imaging approach.

2.3

HIGHLIGHTED MARKET SEGMENTS

Digital X-ray systems are expected to have the largest growth in the radiography market, which includes mammography, fluoroscopy, dental imaging, and computed tomography. According to a report published by Millennium Research Group (MRG), the trend toward minimally invasive surgical procedures, which can improve the efficiency of existing procedures, leads to increased demand for both diagnostic and interventional X-ray systems.23 This manifests in high sales growth for the hybrid operating-room market segment. These are multi-procedural rooms that function both as regular operating rooms and as interventional suites, combining services and procedures.

Although the initial cost for purchasing a digital X-ray system is several times higher than a conventional one, operating costs with digital systems are lower than with conventional ones. Digital systems do not require film and processing, and large film storage facilities are no longer needed once a digital X-ray system is installed. Other factors that drive sales of digital systems are convenience and usability. With digital systems, images taken are retrieved almost immediately, and have higher quality and higher resolution than those obtained with analog systems.

Over the past fifteen years, the market for uncooled IR imaging systems has grown rapidly thanks to the improved performance and production processes of microbolometer detectors,24 as well as a decrease in their manufacturing costs. Operation at room temperature has allowed a significant reduction in system complexity, size, and cost. For comparison, while a cooled IR sensor costs $5,000–$50,000 in low-volume production, an uncooled IR sensor costs $200–$10,000 at similar volumes.

Market segments with high demand for uncooled IR sensors include: (a) thermography - increased use of IR cameras for maintenance engineering and building inspection; (b) automotive - more new car models include a thermal night vision system; (c) surveillance - new models of thermal cameras have been introduced for closed-circuit television (CCTV) systems; and (d) defence - demand for uncooled IR cameras for soldier use, e.g., weapon sights and portable goggles, and for military vehicles, e.g., vision enhancement systems and remote weapon stations.

3.

DIGITAL PIXEL ARCHITECTURES

The initial objective behind the development of DPS arrays was to increase the DR of linear sensors. Better noise filtering allows extension of the DR in dim light, lowering the DL. Furthermore, digital control allows extension of the DR in bright scenes, where well saturation is easily reached.

Nonlinear sensors, such as logarithmic sensors, can also benefit from digital pixels because they facilitate higher SNDR. Charge integration in linear sensors acts as a first-order low-pass filter (LPF). Logarithmic sensors operate in continuous mode and compress a wide DR of photocurrent to a small voltage range. The lack of integration results in higher temporal noise relative to the smaller signal, which degrades image quality. With digital pixel circuits, some of this noise may be filtered and further noise during readout is prevented.
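
As a rough first-order model, not taken from the cited designs, the output of a logarithmic pixel with a subthreshold MOS load can be written as

    V_{\mathrm{out}} \approx V_0 - n \, \frac{kT}{q} \, \ln\!\left( \frac{I_{\mathrm{ph}}}{I_0} \right),

where I_ph is the photocurrent, n is a subthreshold slope factor, and V_0 and I_0 are device-dependent constants. Each decade of photocurrent is thus compressed into roughly 60n mV of output swing at room temperature, which illustrates why the signal is small relative to temporal noise.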

Various digital pixel architectures have been demonstrated with image sensors. In general, each one may be categorized as either a non-classical analog-to-digital converter (ADC) or a classical ADC. With the former, conversion principles are unique to imaging because they largely depend on photodetector properties. With the latter, conventional analog-to-digital conversion techniques are adapted.

3.1

NON-CLASSICAL ADCS

Fig. 4 shows the photodiode (PD) and single-photon avalanche diode (SPAD) regions on a p-n junction current-voltage curve, as well as the avalanche photodiode (APD) region, a transition region between the former two.

Figure 4.

Reverse bias operation of photodiodes may be divided into three regions: PD, APD, and SPAD. The gain is 0 in the PD region, linearly proportional to V in the APD region, and “infinite” in the SPAD region.


DPS arrays based on non-classical ADCs have been demonstrated with p-n junctions mainly in two of these operating regions: PD, which requires reverse-bias voltages that are readily available from the CMOS supply line; and SPAD, which operates in Geiger mode and requires extreme reverse-bias voltages, i.e., more negative than the breakdown voltage, VBD.

3.1.1

PD-based ADCs

In the time-to-first-spike approach, also called time to saturation, a circuit that controls and records integration time is included in each pixel.25 As shown in Fig. 5(a), the pixel has a PD circuit whose voltage is sensed by a comparator. At the beginning of an integration, the reset line is activated, which charges the PD capacitance. During integration, VPD drops as charge accumulates on the PD capacitance.

Figure 5.

(a) In time-to-first-spike pixels, a control circuit, triggered by a comparator, stops integration and stores the time required for VPD to reach Vref. (b) Under brighter light, less time is required and a lower value is latched in the memory. (c) In light-to-frequency conversion pixels, when VPD reaches Vref, a comparator increments a counter and resets the photodiode. (d) Brighter lights lead to higher frequencies on Vcomp and higher values in the counter.


When VPD falls below a global reference voltage, Vref, the comparator generates a pulse. This pulse triggers a circuit that records integration time in a memory unit and stops integration for this pixel to avoid saturation. At the end of the frame, the value that was latched in the memory is read. It is then used to determine the relative brightness level for each pixel in order to construct a digital image. Fig. 5(b) shows the circuit waveforms of a pixel under a bright and dim light.
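
The operation can be illustrated with a short behavioural simulation in Python. The sketch below assumes an illustrative photodiode capacitance, reset and reference voltages, frame time, and clock rate; none of these names or values come from the cited design. It simply integrates the photocurrent on the PD capacitance and returns the time code at which the comparator would fire.

    # Minimal behavioural sketch of a time-to-first-spike pixel; the function name,
    # capacitance, voltages, and clock rate are illustrative assumptions, not values
    # from Ref. 25.
    def time_to_first_spike(i_photo, c_pd=10e-15, v_reset=3.3, v_ref=1.0,
                            t_frame=33e-3, clk=1e6):
        """Return the time code latched when V_PD first crosses V_ref."""
        v_pd = v_reset
        n_ticks = int(t_frame * clk)
        for tick in range(n_ticks):
            v_pd -= i_photo / (c_pd * clk)   # photocurrent discharges the PD capacitance
            if v_pd <= v_ref:                # comparator fires: stop integration, latch time
                return tick
        return n_ticks                       # no crossing within the frame (below the DL)

    print(time_to_first_spike(5e-12), time_to_first_spike(1e-12))  # brighter pixel -> smaller code

A brighter pixel discharges the capacitance faster and therefore latches a smaller code, matching the waveforms of Fig. 5(b).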

In the light-to-frequency conversion approach, also called intensity-to-frequency conversion, the brightness level is converted into frequency26 by repeatedly resetting the PD capacitance over the frame period. Fig. 5(c) shows the schematic of a light-to-frequency conversion pixel.27 Waveforms of the pixel under bright and dim light are shown in Fig. 5(d). Note the similarities between this pixel and the previous one.

At the beginning of a frame, the reset line is activated to charge the PD capacitance. During exposure, VPD drops as the photocurrent progressively discharges the capacitance. When VPD drops below Vref, the comparator generates a pulse that increments the counter and triggers a feedback circuit to recharge the capacitance. A new integration cycle is then initiated, and the process is repeated until a fixed period elapses. At the end of the frame period, the value that is stored in the counter is read, and the counter is reset to zero.
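
A correspondingly simple sketch of this scheme counts self-reset cycles over the frame; again, the function name and parameter values are illustrative assumptions rather than values from the cited designs.

    # Behavioural sketch of a light-to-frequency conversion pixel; parameter values
    # are illustrative assumptions, not taken from the cited designs.
    def light_to_frequency(i_photo, c_pd=10e-15, v_reset=3.3, v_ref=1.0,
                           t_frame=33e-3, clk=1e6):
        """Return the number of self-reset cycles counted in one frame period."""
        v_pd, count = v_reset, 0
        for _ in range(int(t_frame * clk)):
            v_pd -= i_photo / (c_pd * clk)   # photocurrent discharges the PD capacitance
            if v_pd <= v_ref:                # comparator pulse: increment counter, recharge PD
                count += 1
                v_pd = v_reset
        return count                         # brighter pixel -> higher frequency -> larger count

    print(light_to_frequency(5e-12), light_to_frequency(1e-12))  # e.g. 7 and 1 with these values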

3.1.2

SPAD-based ADCs

With PD-based digital pixels, the detector output is an analog signal. It is converted to a digital signal via a circuit that utilizes the PD, e.g., its reverse-biased capacitance. However, with SPAD-based digital pixels, the detector output is a pulsed signal, where each pulse represents a detected photon, and the subsequent circuit blocks detect each pulse and use it to increment a counter.

Because SPAD operation requires high voltages to accelerate carriers in high electric fields, SPAD structures must allow enough distance for charge acceleration and include guard rings for voltage isolation. This results in a layout area that is substantially larger than that of a standard PD. Therefore, PDs are a better choice for applications where small pixels and system compactness are desirable. SPADs are preferred for low-light and time-of-flight imaging applications.

When an electron-hole pair is generated in a SPAD, either by photon absorption or by thermal generation, the free charge carriers are accelerated by the high electric field across the junction, generating additional carriers by impact ionization.28 To allow detection of subsequent photons, the avalanche process must be quenched, which can be achieved by lowering the SPAD voltage to a level below VBD. This can be easily done by connecting a high-impedance ballast resistor, RB, in series with the SPAD.

When the circuit is inactive, the SPAD is biased to Vbias > VBD through RB. When a photon is absorbed and successfully triggers an avalanche, the current rises abruptly. This develops a large voltage drop across RB, which acts as negative feedback to lower the voltage across the SPAD. In this manner, the avalanche current quenches itself, and the edge of an avalanche pulse marks the arrival time of a detected photon.

Fig. 6(a) shows this passive quenching circuit (PQC), as it is called. A comparator to perform edge detection and a counter are also used. Waveforms of the SPAD current and voltage are shown in Fig. 6(b). PQCs are suitable for SPAD arrays thanks to their simplicity and small area. However, they suffer from afterpulsing and a long reset time,29 which may be overcome by additional circuitry. Mixed passive-active quenching circuits are commonly used in SPAD arrays because they offer better performance. They include a feedback circuit that starts quenching the SPAD as soon as an avalanche is sensed.
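
Because each detected photon yields one count, a SPAD pixel is naturally modelled as a counting process with dead time. The Monte Carlo sketch below treats the passive quenching and reset interval as a single fixed dead time and applies an assumed photon detection efficiency (PDE); the rates, dead time, and PDE are illustrative, not values from the cited works.

    # Monte Carlo sketch of SPAD photon counting, with passive quenching modelled
    # simply as a fixed dead time after each avalanche; the photon rate, dead time,
    # and photon detection efficiency (PDE) are illustrative assumptions.
    import random

    def spad_frame_count(photon_rate, t_frame=1e-3, dead_time=50e-9, pde=0.3, seed=1):
        random.seed(seed)
        t, count, ready_at = 0.0, 0, 0.0
        while True:
            t += random.expovariate(photon_rate)      # Poisson photon arrival times
            if t >= t_frame:
                return count
            if t >= ready_at and random.random() < pde:
                count += 1                             # avalanche detected and counted
                ready_at = t + dead_time               # SPAD blind while quenching and resetting

    print(spad_frame_count(1e6), spad_frame_count(1e7))  # counts compress as rate nears 1/dead_time

Such a model also shows why counts compress at high photon rates: arrivals during the dead time are missed.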

Figure 6.

(a) A SPAD-based pixel with PQC has a serially-connected ballast resistor, a circuit that performs edge detection, and a counter. (b) Waveforms of the SPAD current and voltage, indicating Geiger operation, quenching, and reset.


3.2

CLASSICAL ADCS

DPS arrays have been designed, as shown in Table 2, by adapting classical ADCs. Here, FF and PSNR stand for fill factor and peak SNR, respectively. The use of classical ADCs is advantageous because it builds upon a large body of theory that has produced high performance ADCs for various applications. Classical ADCs may be categorized as either Nyquist-rate converters or oversampling converters. A single ADC may be included in each pixel or shared among a small group of pixels.

Table 2.

Example designs where classical ADCs are used with DPS arrays.

Year | Design               | ADC Type     | Process (μm) | Transistors (per pixel) | Pitch (μm) | FF (%) | PSNR (dB) | DR (dB)
1999 | Yang et al.30        | Nyquist-rate | 0.35         | 4.5                     | 10.5       | 28     | 48        |
2001 | Kleinfelder et al.31 | Nyquist-rate | 0.18         | 37                      | 9.4        | 15     |           |
2004 | Kitchen et al.32     | Nyquist-rate | 0.35         |                         | 45         | 12     |           | 85
2006 | Bermak and Yung33    | Nyquist-rate | 0.35         |                         | 50         | 20     |           | 90
2009 | Crooks et al.34      | Nyquist-rate | 0.25         |                         | 30         | 10     |           | 68
2009 | Ito et al.35         | Nyquist-rate | 0.35         | 50                      | 54         | 14.9   |           | 68
2010 | Rocha et al.36       | Oversampling |              | 19                      | 36         | 69     |           |
2011 | Figueras et al.37    | Oversampling | 0.18         |                         | 70         | 100    |           |
2012 | Ignjatovic et al.38  | Oversampling | 0.35         | 5                       | 10         | 31     | 52        | 74
2013 | Mahmoodi et al.39    | Oversampling | 0.18         | 298                     | 38         | 2      | 46        | 110

With integrated circuits (ICs) that contain a few ADCs, area and power per ADC are not critical, unlike with in-pixel ADCs intended for megapixel applications. Furthermore, performance variability is more important in the design of ADC arrays because low-performing ADCs cannot simply be discarded without also discarding satisfactory and high-performing ADCs. For these reasons, classical ADCs are adapted, not simply adopted. For example, designs are often reduced in size by using serial instead of parallel approaches.

3.2.1

Nyquist-rate ADCs

With CMOS APS arrays, flash40,41 and pipelined42-44 ADCs are commonly used for chip-level data conversion. Cyclic,45,46 successive approximation (SAR),47,48 oversampling,49,50 and integrating51-53 ADCs have all been demonstrated for column-level data conversion. Except for the oversampling cases, of course, all of these are Nyquist-rate ADCs.

There has been much interest in adapting Nyquist-rate ADCs for pixel-level data conversion. The examples illustrated here come from a research program at Stanford University that has resulted in commercialized DPS technology. While the authors called one version of their design,31 illustrated in Figs. 7(a) and (b), a single-slope integrating ADC, it is technically a ramp-compare ADC, which has more in common with flash ADCs than with integrating ADCs.

Figure 7.

(a) In ramp-compare ADC pixels, erroneously called single-slope integrating ADC pixels, the detector output Vsense is compared to an external ramp Vramp. (b) When Vsense − Vramp changes sign, a ramp counter is latched in pixel memory. (c) In MCBS ADC pixels, there is only one bit stored per pixel, which saves area. (d) Ramp comparison is done multiple times in sequence, each time to resolve one bit of a code that identifies the ramp value.


An n-bit flash ADC has 2n comparators that each perform one comparison in parallel. The ramp-compare ADC replaces this parallel operation with a serial operation, where one comparator performs 2n comparisons in sequence. A ramp voltage is generated and compared to the ADC input voltage. When there is a sign change in the comparison, a digital code representing the ramp voltage is latched. An n-bit digital-to-analog converter (DAC) may be used to generate the ramp voltage in steps, as well as to provide the latched code, i.e., the DAC input that triggers the sign change.
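
A behavioural sketch of this serial conversion, in Python and under assumed full-scale and resolution values that are not taken from the cited design, is shown below.

    # Behavioural sketch of an n-bit ramp-compare conversion; the 1.0 V full scale
    # and 8-bit resolution are illustrative assumptions, not values from Ref. 31.
    def ramp_compare_adc(v_in, n_bits=8, v_full_scale=1.0):
        """Return the DAC code latched when the ramp first reaches the input."""
        levels = 2 ** n_bits
        for code in range(levels):                     # one comparison per ramp step, in sequence
            v_ramp = (code + 1) * v_full_scale / levels
            if v_ramp >= v_in:                         # sign change of the comparison
                return code                            # latch the code that triggered it
        return levels - 1                              # input at or above full scale

    print(ramp_compare_adc(0.4))   # -> 102 with these assumed values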

Yang et al.30 demonstrated a DPS array that is based on multichannel bit serial (MCBS) ADCs. The MCBS design, illustrated in Figs. 7(c) and (d), is similar to the ramp-compare design, but includes modifications that have allowed it to be commercialized for visible-band applications. Unlike with the ramp-compare ADC, bits are obtained in a serial manner to significantly reduce pixel area. For n-bit resolution, the “same” ADC input is compared n times in a single frame to the “same” 2n-valued ramp signal. One bit is resolved (and read out) each time, so the pixel needs to contain only one latch bit, instead of all n bits.
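
The bit-serial principle can be sketched in the same style. The published design carefully encodes the broadcast bit planes (e.g., with Gray codes) so that a single latch settles correctly; the plain-binary version below is an assumption-laden simplification meant only to show that n ramp passes with a one-bit latch recover the same code.

    # Behavioural sketch of multichannel bit serial (MCBS) conversion: one ramp pass
    # per bit plane, with a single latch bit per pixel. This is a simplified model,
    # not the circuit of Ref. 30.
    def mcbs_adc(v_in, n_bits=8, v_full_scale=1.0):
        levels = 2 ** n_bits
        bits = []
        for k in reversed(range(n_bits)):              # n passes of the "same" 2^n-valued ramp
            latch = 0                                  # the single in-pixel latch
            for code in range(levels):
                v_ramp = (code + 1) * v_full_scale / levels
                latch = (code >> k) & 1                # bit k of the current ramp code is broadcast
                if v_ramp >= v_in:                     # comparator flips: latch stops updating
                    break
            bits.append(latch)                         # one bit read out per pass
        return int("".join(str(b) for b in bits), 2)

    print(mcbs_adc(0.4))   # -> 102, matching the ramp-compare sketch above for the same input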

3.2.2

Oversampling ADCs

As shown in Table 3, ADC architectures may be divided into three groups based on conversion speed and accuracy.54 Inside the pixel, video capture is a low-bandwidth, i.e., low-speed, application that demands high bit-resolution, i.e., high accuracy, for high image quality. These specifications make oversampling ADCs, such as delta-sigma (ΔΣ) ADCs, especially suitable for pixel-level data conversion.

Table 3.

Comparison of classical ADC architectures.

Speed         | Accuracy      | Architectures
Low-to-medium | High          | Integrating, oversampling
Medium        | Medium        | Cyclic (algorithmic), successive approximation
High          | Low-to-medium | Folding, interpolating, flash, time-interleaved, two-step, pipelined

DPS arrays have been realized with first-order ΔΣ modulators inside each pixel.36-38 Higher-order ΔΣ modulators offer better noise-shaping performance; however, they take more area and power. Although ΔΣ modulators are oversampling converters, a modulator alone is not a ΔΣ ADC. In a ΔΣ ADC, the digital output of the modulator is processed by a decimator, a digital circuit that performs low-pass filtering and down-sampling. Recently, Mahmoodi et al.39 presented a design, shown in Fig. 8, that includes in-pixel decimation.

Figure 8.

A true ΔΣ ADC pixel, as shown, has both a modulator and decimator. The modulator oversamples and quantizes the ADC input, while shaping noise to high frequencies. The decimator filters the modulator signal and down-samples it to the Nyquist rate. In this schematic, a logarithmic sensor is shown, but linear sensors may also be used.


Without in-pixel decimation, the bandwidth required to read the modulator outputs of a large array of pixels may be very high. As a result, either frame size, frame rate, or oversampling ratio has to be compromised. Lowering the oversampling ratio reduces the noise filtering and degrades the accuracy of the ΔΣ ADC. On the other hand, with in-pixel decimation, a large number of transistors are needed per pixel, which results in larger pixels. While this is acceptable for invisible-band applications, further efforts to shrink the in-pixel ΔΣ ADC are needed to apply the technology to visible-band applications competitively.
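
The modulator-plus-decimator structure can be illustrated with a short behavioural sketch. The first-order modulator and boxcar decimator below are generic; the oversampling ratio, thresholds, and normalized input range are illustrative assumptions and do not reflect the cited logarithmic design.

    # Sketch of first-order delta-sigma modulation followed by a simple boxcar
    # decimator; parameter values and the normalized input range are illustrative
    # assumptions, not taken from the cited designs.
    def delta_sigma_pixel(x, osr=64):
        """x: constant normalized input in [0, 1]; returns one Nyquist-rate sample."""
        integrator, y, ones = 0.0, 0, 0
        for _ in range(osr):
            integrator += x - y                  # delta: input minus fed-back 1-bit DAC level
            y = 1 if integrator >= 0.5 else 0    # sigma and 1-bit quantization
            ones += y                            # decimator accumulates the bitstream
        return ones / osr                        # average and down-sample to the Nyquist rate

    print(delta_sigma_pixel(0.3), delta_sigma_pixel(0.8))   # approximately 0.3 and 0.8

Raising the oversampling ratio in such a sketch reduces the quantization error of the decimated sample, which mirrors the accuracy trade-off described above.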

4.

CONCLUSION

Transition to digital pixels can boost signal and noise figures of merit of CMOS image sensors. However, a larger pixel area makes DPS arrays less competitive for consumer electronics applications, which dominate the image sensor market. Electronic imaging systems for medical, automotive, industrial, and security applications form lower-volume growing markets that can accept large pixels and benefit from DPS arrays. With many of these systems, imaging is done in invisible bands of the spectrum, such as X-ray and IR.

DPS arrays have been demonstrated with various architectures. Some used digitization techniques that are unique to imaging; others adapted classical ADCs. Digital pixels not based on classical ADCs have been demonstrated by exploiting PD and SPAD detectors. Classical Nyquist-rate ADCs have also been used successfully, some achieving small pixels. However, according to classical ADC theory, oversampling ADCs are the best choice for low-speed high-accuracy applications, which match the specifications of DPS arrays.

ACKNOWLEDGMENTS

The authors thank Mr. Jing Li and Dr. Mark Alexiuk for technology and application advice. They are also grateful to NSERC, TEC Edmonton, and IMRIS for financial and in-kind support.

REFERENCES

1. 

W. Arden, M. Brillouet, P. Cogez, M. Graef, B. Huizing, and R. R. Mahnkopf, “More-than-Moore,” White Paper, International Technology Roadmap for Semiconductors, (2010). Google Scholar

2. 

ITRS, “2012 Update Overview,” Report, International Technology Roadmap for Semiconductors, (2012). Google Scholar

3. 

Frost & Sullivan, “Developments in Image Sensors,” Technical Insights, (2012). www.frost.com Google Scholar

4. 

P. Danini and J. Baron, “Status of the CMOS Image Sensors Industry,” Presentation, Yole Développement, (2012). Google Scholar

5. 

K. P. Ziock, “Gamma-Ray Imaging Spectroscopy,” Science & Technology Review, 14 –26 (1995). Google Scholar

6. 

K. Spartiotis, A. Leppanen, T. Pantsar, J. Pyyhtia, P. Laukka, K. Muukkonen, O. Mannisto, J. Kinnari, and T. Schulman, “A photon counting CdTe gamma- and X-ray camera,” Nuclear Instruments and Methods in Physics Research A, 550 267 –277 (2005). https://doi.org/10.1016/j.nima.2005.04.081 Google Scholar

7. 

P. Russo, A. S. Curion, G. Mettivier, L. Aloj, C. Caraco, and S. Lastoria, “The MediPROBE CdTe Based Compact Gamma Camera,” in IFMBE Proceedings, 556 –558 (2009). Google Scholar

8. 

M. J. Yaffe and J. A. Rowlands, “X-ray Detectors for Digital Radiography,” Physics in Medicine and Biology, 42 (1), 1 –39 (1997). https://doi.org/10.1088/0031-9155/42/1/001 Google Scholar

9. 

S. Kasap, J. B. Frey, G. Belev, O. Tousignant, H. Mani, J. Greenspan, L. Laperriere, O. Bubon, A. Reznik, G. DeCrescenzo, K. S. Karim, and J. A. Rowlands, “Amorphous and polycrystalline photoconductors for direct conversion flat panel x-ray image sensors,” Sensors, 11 (5), 5112 –5157 (2011). https://doi.org/10.3390/s110505112 Google Scholar

10. 

ANRAD, “ANRAD Flat Panel Digital X-ray Detectors,” (2009). www.anrad.com Google Scholar

11. 

Hamamatsu, “CCD area image sensors,” (2011). www.hamamatsu.com Google Scholar

12. 

DALSA, “DALSA XiNEOS-1313 CMOS Flat-Panel Detector for High Frame Rate X-Ray Imaging,” (2010). www.teledynedalsa.com Google Scholar

13. 

M. Lindner, S. Elstein, P. Lindner, J. M. Topaz, and A. J. Phillips, “Daylight corona discharge imager,” Eleventh International Symposium on High Voltage Engineering, 4 349 –352 (1999). Google Scholar

14. 

Hamamatsu, “BT(Back-thinned)-CCD Cooled Digital Camera ORCA II-BT-512G,” (2006) www.hamamatsu.com Google Scholar

15. 

Intevac, “MicroVista-UV Back-Illuminated CMOS Camera,” (2012). www.intevac.com Google Scholar

16. 

H. Eltoukhy, K. Salama, and A. El Gamal, “A 0.18-μm CMOS Bioluminescence Detection Lab-on-Chip,” IEEE Journal of Solid-State Circuits, 41 (3), 651 –662 (2006). https://doi.org/10.1109/JSSC.2006.869785 Google Scholar

17. 

F. Niklaus, C. Vieider, and H. Jakobsen, “MEMS-Based Uncooled Infrared Bolometer Arrays: A Review,” Proceedings of the SPIE, 6836 68360D 1 –15 (2007). Google Scholar

18. 

Yole Développement, “MicroTech is CleanTech,” CMC Microsystems Annual Symposium, (2011). Google Scholar

19. 

F. Schuster, W. Knap, and V. Nguyen, “Terahertz imaging achieved with low-cost CMOS detectors,” Laser Focus World, 47 (7), 37 –41 (2011). Google Scholar

20. 

E. Ojefors, U. R. Pfeiffer, A. Lisauskas, and H. G. Roskos, “A 0.65 THz Focal-Plane Array in a Quarter-Micron CMOS Process Technology,” IEEE Journal of Solid-State Circuits, 44 (7), 1968 –1976 (2009). https://doi.org/10.1109/JSSC.2009.2021911 Google Scholar

21. 

S. Domingues, M. Perenzoni, D. Stoppa, A. D. Capobianco, and F. Sacchetto, “A CMOS THz staring imager with in-pixel electronics,” 7th Conference on Ph.D. Research in Microelectronics and Electronics, 81 –84 (2011). Google Scholar

22. 

M. Bolduc, M. Terroux, B. Tremblay, L. Marchese, E. Savard, M. Doucet, H. Oulachgar, C. Alain, H. Jerominek, and A. Bergeron, “Noise-equivalent power characterization of an uncooled microbolometer-based THz imaging camera,” in Proceedings of the SPIE, 80230C 1 –10 (2011). Google Scholar

23. 

Millennium Research Group, “US X-ray System Market to Reach Value of $2.8 Billion by 2016,” (2012). www.businesswire.com Google Scholar

24. 

C. Li, G. D. Skidmore, and C. J. Han, “Uncooled Infrared Sensor Development Trends and Challenges,” in Proceedings of the SPIE, 815515 1 –11 (2011). Google Scholar

25. 

X. Guo, X. Qi, and J. Harris, “A Time-to-First-Spike CMOS Image Sensor,” IEEE Sensors Journal, 7 (8), 1165 –1175 (2007). https://doi.org/10.1109/JSEN.2007.900937 Google Scholar

26. 

A. Spivak, A. Belenky, A. Fish, and O. Yadid-Pecht, “Wide-Dynamic-Range CMOS Image Sensors—Comparative Performance Analysis,” IEEE Transactions on Electron Devices, 56 (11), 2446 –2461 (2009). https://doi.org/10.1109/TED.2009.2030599 Google Scholar

27. 

X. Wang, W. Wong, and R. Hornsey, “A High Dynamic Range CMOS Image Sensor With Inpixel Light-to-Frequency Conversion,” IEEE Transactions on Electron Devices, 53 (12), 2988 –2992 (2006). https://doi.org/10.1109/TED.2006.885642 Google Scholar

28. 

S. Cova, M. Ghioni, A. Lacaita, C. Samori, and F. Zappa, “Avalanche photodiodes and quenching circuits for single-photon detection,” Applied Optics, 35 (12), 1956 –1976 (1996). https://doi.org/10.1364/AO.35.001956 Google Scholar

29. 

A. Gallivanoni, I. Rech, and M. Ghioni, “Progress in quenching circuits for single photon avalanche diodes,” IEEE Transactions on Nuclear Science, 57 (6), 3815 –3826 (2010). Google Scholar

30. 

D. X. D. Yang, B. Fowler, and A. El Gamal, “A Nyquist-rate pixel-level ADC for CMOS image sensors,” IEEE Journal of Solid-State Circuits, 34 (3), 348 –356 (1999). https://doi.org/10.1109/4.748186 Google Scholar

31. 

S. Kleinfelder, S. Lim, X. Liu, and A. El Gamal, “A 10000 frames/s CMOS digital pixel sensor,” IEEE Journal of Solid-State Circuits, 36 (12), 2049 –2059 (2001). https://doi.org/10.1109/4.972156 Google Scholar

32. 

A. Kitchen, A. Bermak, and A. Bouzerdoum, “PWM digital pixel sensor based on asynchronous self-resetting scheme,” IEEE Electron Device Letters, 25 (7), 471 –473 (2004). https://doi.org/10.1109/LED.2004.831222 Google Scholar

33. 

A. Bermak and Y.-F. Yung, “A DPS array with programmable resolution and reconfigurable conversion time,” IEEE Transactions on Very Large Scale Integration Systems, 14 (1), 15 –22 (2006). https://doi.org/10.1109/TVLSI.2005.863193 Google Scholar

34. 

J. Crooks, S. Bohndiek, C. D. Arvanitis, R. Speller, H. XingLiang, E. Villani, M. Towrie, and R. Turchetta, “A CMOS Image Sensor With In-Pixel ADC, Timestamp, and Sparse Readout,” IEEE Sensors Journal, 9 (1), 20 –28 (2009). https://doi.org/10.1109/JSEN.2008.2008407 Google Scholar

35. 

K. Ito, B. Tongprasit, and T. Shibata, “A Computational Digital Pixel Sensor Featuring Block-Readout Architecture for On-Chip Image Processing,” IEEE Transactions on Circuits and Systems I, 56 (1), 114 –123 (2009). https://doi.org/10.1109/TCSI.2008.926983 Google Scholar

36. 

J. G. Rocha, G. Minas, and S. Lanceros-Mendez, “Pixel Readout Circuit for X-Ray Imagers,” IEEE Sensors Journal, 10 (11), 1740 –1745 (2010). https://doi.org/10.1109/JSEN.2010.2046406 Google Scholar

37. 

R. Figueras, J. Sabadell, L. Teres, and F. Serra-Graells, “A 70μm Pitch 8μW Self-Biased Charge-Integration Active Pixel for Digital Mammography,” IEEE Transactions on Biomedical Circuits and Systems, 5 (5), 481 –489 (2011). https://doi.org/10.1109/TBCAS.2011.2151192 Google Scholar

38. 

Z. Ignjatovic, D. Maricic, and M. Bocko, “Low Power, High Dynamic Range CMOS Image Sensor Employing Pixel-Level Oversampling Analog-to-Digital Conversion,” IEEE Sensors Journal, 12 (4), 737 –746 (2012). https://doi.org/10.1109/JSEN.2011.2158818 Google Scholar

39. 

A. Mahmoodi, J. Li, and D. Joseph, “Digital Pixel Sensor Array with Logarithmic Delta-Sigma Architecture,” Sensors, 13 (8), 10765 –10782 (2013). https://doi.org/10.3390/s130810765 Google Scholar

40. 

X. Jin, Z. Liu, and J. Yang, “New Flash ADC Scheme With Maximal 13 Bit Variable Resolution and Reduced Clipped Noise for High-Performance Imaging Sensor,” IEEE Sensors Journal, 13 (1), 167 –171 (2013). https://doi.org/10.1109/JSEN.2012.2210955 Google Scholar

41. 

M. Loinaz, K. J. Singh, A. J. Blanksby, D. A. Inglis, K. Azadet, and B. D. Ackland, “A 200-mW, 3.3-V, CMOS color camera IC producing 352 × 288 24-b video at 30 frames/s,” IEEE Journal of Solid-State Circuits, 33 (12), 2092 –2103 (1998). https://doi.org/10.1109/4.735552 Google Scholar

42. 

J. Deguchi, F. Tachibana, M. Morimoto, M. Chiba, T. Miyaba, H. Tanaka, K. Takenaka, S. Funayama, K. Amano, K. Sugiura, R. Okamoto, and S. Kousai, “A 187μVrms-read-noise 51mW 1.4Mpixel CMOS image sensor with PMOSCAP column CDS and 10b self-differential offset-cancelled pipeline SAR-ADC,” in IEEE International Solid-State Circuits Conference, 494 –495 (2013). Google Scholar

43. 

ON Semiconductor, “LUPA300 CMOS Image Sensor,” (2013). www.onsemi.com Google Scholar

44. 

M.-H. Choi, G.-C. Ahn, and S.-H. Lee, “12b 50 MS/s 0.18 μm CMOS ADC with highly linear input variable gain amplifier,” Electronics Letters, 46 (18), 1254 –1256 (2010). https://doi.org/10.1049/el.2010.1834 Google Scholar

45. 

M.-W. Seo, T. Sawamoto, T. Akahori, Z. Liu, T. Iida, T. Takasawa, T. Kosugi, T. Watanabe, K. Isobe, and S. Kawahito, “A Low-Noise High-Dynamic-Range 17-b 1.3-Megapixel 30-fps CMOS Image Sensor With Column-Parallel Two-Stage Folding-Integration/Cyclic ADC,” IEEE Transactions on Electron Devices, 59 (12), 3396 –3400 (2012). https://doi.org/10.1109/TED.2012.2215871 Google Scholar

46. 

K. Kitamura, T. Watabe, T. Sawamoto, T. Kosugi, T. Akahori, T. Iida, K. Isobe, T. Watanabe, H. Shimamoto, H. Ohtake, S. Aoyama, S. Kawahito, and N. Egami, “A 33-Megapixel 120-Frames-Per-Second 2.5-Watt CMOS Image Sensor With Column-Parallel Two-Stage Cyclic Analog-to-Digital Converters,” IEEE Transactions on Electron Devices, 59 (12), 3426 –3433 (2012). https://doi.org/10.1109/TED.2012.2220364 Google Scholar

47. 

M.-S. Shin, J.-B. Kim, M.-K. Kim, Y.-R. Jo, and O.-K. Kwon, “A 1.92-Megapixel CMOS Image Sensor With Column-Parallel Low-Power and Area-Efficient SA-ADCs,” IEEE Transactions on Electron Devices, 59 (6), 1693 –1700 (2012). https://doi.org/10.1109/TED.2012.2190936 Google Scholar

48. 

S. Matsuo, T. Bales, M. Shoda, S. Osawa, K. Kawamura, A. Andersson, M. Haque, H. Honda, B. Almond, Y. Mo, J. Gleason, T. Chow, and I. Takayanagi, “8.9-Megapixel Video Image Sensor With 14-b Column-Parallel SA-ADC,” IEEE Transactions on Electron Devices, 56 (11), 2380 –2389 (2009). https://doi.org/10.1109/TED.2009.2030649 Google Scholar

49. 

Y. Oike and A. El Gamal, “CMOS Image Sensor With Per-Column ΔΣ ADC and Programmable Compressed Sensing,” IEEE Journal of Solid-State Circuits, 48 (1), 318 –328 (2013). https://doi.org/10.1109/JSSC.2012.2214851 Google Scholar

50. 

Y. Chae, J. Cheon, S. Lim, M. Kwon, K. Yoo, W. Jung, D.-H. Lee, S. Ham, and G. Han, “A 2.1 M Pixels, 120 Frame/s CMOS Image Sensor With ΔΣ Column-Parallel ADC Architecture,” IEEE Journal of Solid-State Circuits, 46 (1), 236 –247 (2011). https://doi.org/10.1109/JSSC.2010.2085910 Google Scholar

51. 

S.-F. Yeh and C.-C. Hsieh, “Novel Single-Slope ADC Design for Full Well Capacity Expansion of CMOS Image Sensor,” IEEE Sensors Journal, 13 (3), 1012 –1017 (2013). https://doi.org/10.1109/JSEN.2012.2227706 Google Scholar

52. 

D. Kim and M. Song, “An Enhanced Dynamic-Range CMOS Image Sensor Using a Digital Logarithmic Single-Slope ADC,” IEEE Transactions on Circuits and Systems II, 59 (10), 653 –657 (2012). https://doi.org/10.1109/TCSII.2012.2213359 Google Scholar

53. 

BAE Systems, “CIS1021 Datasheet,” (2011). alliedscientificpro.com Google Scholar

54. 

D. A. Johns and K. Martin, Analog Integrated Circuit Design, John Wiley & Sons, U.K. (1997). Google Scholar
© (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Orit Skorka and Dileepan Joseph "CMOS digital pixel sensors: technology and applications", Proc. SPIE 9060, Nanosensors, Biosensors, and Info-Tech Sensors and Systems 2014, 90600G (16 April 2014); https://doi.org/10.1117/12.2044808