A multi-spectral, backside-illuminated, Time Delayed Integration (TDI), radiation-hardened line scan sensor utilizing CMOS technology was designed for continuous-scanning Low Earth Orbit small satellite applications. The sensor comprises a single silicon chip with 4 independent arrays of pixels, where each array is arranged in 2600 columns with 64 TDI levels. A multispectral optical filter, whose spectral response per array is adjustable to system requirements, is assembled at the package level. A custom 4T pixel design provides the required readout speed, low noise, very low dark current, and high conversion gain. A 2-phase internally controlled exposure mechanism improves the sensor's dynamic MTF. The sensor's high level of integration includes on-chip 12-bit per-pixel analog-to-digital converters, an on-chip controller, and CMOS-compatible voltage levels. Thus, the power consumption and weight of the supporting electronics are reduced, and a simple electrical interface is provided. An adjustable gain provides a Full Well Capacity ranging from 150,000 up to 500,000 electrons per column and an overall readout noise of less than 120 electrons per column. The imager supports line rates ranging from 50 to 10,000 lines/sec, with power consumption of less than 0.5 W per array. Thus, the sensor is characterized by a high pixel rate, a high dynamic range, and very low power consumption. To meet a latch-up-free requirement, RadHard architecture and design rules were utilized. In this paper, recent electrical and electro-optical measurements of the sensor's Flight Models are presented for the first time.
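As a rough check of the figures quoted above, the dynamic range implied by a given full well capacity and read noise can be computed directly. The short sketch below applies the standard 20*log10(FWC/noise) definition to the 150,000-500,000 electron full well range and the 120 electron noise floor stated in the abstract; the resulting decibel values are illustrative, not numbers reported by the authors.

```python
import math

def dynamic_range_db(full_well_e: float, read_noise_e: float) -> float:
    """Standard engineering definition: DR = 20 * log10(FWC / read noise)."""
    return 20.0 * math.log10(full_well_e / read_noise_e)

# Figures taken from the abstract: 150k-500k e- full well, <120 e- column noise.
for fwc in (150_000, 500_000):
    print(f"FWC {fwc} e-: ~{dynamic_range_db(fwc, 120):.1f} dB")
```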
KEYWORDS: 3D image processing, Imaging systems, Cameras, Lens design, Modulation transfer functions, Image resolution, 3D image reconstruction, 3D metrology, 3D modeling, Structured light
There are many visual inspection and sensing applications where both a high resolution image and a depth-map of the
imaged object are desirable at high speed. Presently available methods to capture 3D data (stereo cameras and structured
illumination) are limited in speed and transverse resolution and add complexity. Additionally, these techniques rely on a
separated baseline for triangulation, precluding use in confined spaces. Typically, off-the-shelf lenses are implemented,
and performance in resolution, field of view, and depth of field is sacrificed in order to achieve a useful balance.
Here we present a novel lens system with high resolution and a wide field of view for rapid 3D image capture. The design
achieves this using a single lens with no moving parts. A depth-from-defocus algorithm is implemented to reconstruct
3D object point clouds and matched with a fused image to create a 3D rendered view.
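The abstract does not detail the reconstruction pipeline; as a minimal sketch of its final step, the snippet below back-projects a per-pixel depth map into a 3D point cloud using an assumed pinhole camera model and attaches the fused image as per-point color. The intrinsics (fx, fy, cx, cy) are placeholder values, not parameters of the lens described here.

```python
import numpy as np

def depth_to_point_cloud(depth, image, fx, fy, cx, cy):
    """Back-project a depth map into an N x 6 array of XYZ + RGB points.

    Assumes a simple pinhole model; fx, fy, cx, cy are placeholder intrinsics,
    not calibration values from the system described in the abstract.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                      # ignore pixels with no depth estimate
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    rgb = image[valid].reshape(-1, 3)
    return np.column_stack([x, y, z, rgb])

# Tiny synthetic example: a 4x4 tilted plane with a uniform gray image.
d = np.linspace(0.5, 0.6, 16).reshape(4, 4)
img = np.full((4, 4, 3), 128, dtype=np.uint8)
cloud = depth_to_point_cloud(d, img, fx=800.0, fy=800.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (16, 6)
```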
Many operations that are done manually, such as assembly operations, can be difficult to explain to someone working in
an unstructured environment who is not already familiar with the operation. The typical approach is to take pictures of
the system and attempt to provide instructions using the pictures with some annotations. We have explored a variety of
visual aids that might be used to provide more real-time feedback to guide such manual operations. These methods
include indirect feedback tools, such as signals or graphs to be interpreted, as well as direct methods that provide a
simulated or real view of the operation as the user works. This paper will explore some of the pros and cons of these
methods, and present some very preliminary results that suggest future directions for this work.
In this paper, the design and evaluation of a 3D stereo, near infrared (IR), defect mapping system for CZT inspection is
described. This system provides rapid acquisition and data analysis that result in detailed mapping of CZT crystal defects
across the area of wafers up to 100 millimeters in diameter and through thicknesses of up to 20 millimeters. System
characterization has been performed, including a close evaluation of the bright-field and dark-field illumination
configurations for both wafer-scale and tile-scale inspection. A comparison of microscope and IR images of the
same sample is presented. As a result, the IR inspection system has successfully demonstrated the capability of
detecting and localizing inclusions within minutes for a whole CZT wafer. Important information is provided for
selecting defect free areas out of a wafer and thereby ensuring the quality of the tile. This system would support the CZT
wafer dicing and assembly techniques that enable the economical production of CZT detectors. This capability can
improve the yield and reduce the cost of thick detector devices, which are rarely produced today.
This paper describes a compact, imaging Twyman-Green interferometer to measure small features, such as corrosion pits,
scratches, and digs, on hard-to-access objects such as assembled parts. The shoebox-sized interferometer was designed to
guarantee proper orientation and working distance relative to the inspected section. The system also provides an
extended acceptance angle to permit data collection at selected viewpoints on a subject. We will describe the various
image shifting techniques investigated as part of the prototype. All the components with the exception of power supplies
were integrated into an enclosure. The interferometer has been demonstrated to provide sub-micron depth resolution and
diffraction limited spatial resolution (a few microns). This paper will present the final performance achieved with the
system and provide examples of applications.
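The abstract does not state which phase-extraction scheme was investigated; as a generic illustration of how an imaging interferometer converts intensity frames into depth, the sketch below applies the standard four-step (90 degree) phase-shift formula and scales the wrapped phase to surface height for an assumed 633 nm wavelength.

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four frames shifted by 0, 90, 180, 270 degrees."""
    return np.arctan2(i3 - i1, i0 - i2)

def phase_to_height(phase, wavelength_nm=633.0):
    """Convert wrapped phase to height in nanometers. In reflection (double
    pass), one full fringe corresponds to lambda/2 of surface height."""
    return phase / (4.0 * np.pi) * wavelength_nm

# Synthetic check: a flat surface with a 50 nm deep pit.
true_height = np.zeros((64, 64))
true_height[20:30, 20:30] = -50.0
phi = 4.0 * np.pi * true_height / 633.0
frames = [1.0 + np.cos(phi + k * np.pi / 2) for k in range(4)]
recovered = phase_to_height(four_step_phase(*frames))
print(np.allclose(recovered, true_height, atol=1e-6))
```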
KEYWORDS: Imaging systems, Point spread functions, Image analysis, Convolution, Sensors, 3D image processing, 3D modeling, Stereoscopy, Analytical research, 3D metrology
Recovering 3D object information through analyzing image focus (or defocus) has been shown to be a potential tool in
situations where only a single viewing point is possible. Precise modeling and manipulation of imaging system
parameters, e.g., depth of field, modulation transfer function, and sensor characteristics, as well as lighting conditions and
object surface characteristics, are critical for the effectiveness of such methods. Sub-optimal performance is achieved when
one or more of these parameters are dictated by other factors. In this paper, we will discuss the implicit requirements
imposed by most common depth from focus/defocus (DFF/DFD) analysis methods and offer related application
considerations. We also describe how a priori information about the objects of interest can be used to improve
performance in realistic applications of this technology.
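As one concrete example of the implicit requirements mentioned above, most depth-from-focus implementations reduce each image of a focal stack to a per-pixel focus measure and take the best-focused slice as the depth estimate. The sketch below uses a variance-of-Laplacian measure, a common but by no means unique choice, and is not taken from any of the methods discussed in the paper.

```python
import numpy as np
from scipy import ndimage

def focus_measure(img, window=9):
    """Local variance of the Laplacian: higher means sharper (better focused)."""
    lap = ndimage.laplace(img.astype(float))
    mean = ndimage.uniform_filter(lap, window)
    return ndimage.uniform_filter(lap * lap, window) - mean * mean

def depth_from_focus(stack, z_positions, window=9):
    """stack: (N, H, W) focal stack. Returns a per-pixel depth map holding the
    z position of the slice with the highest local focus measure (no sub-slice
    interpolation, which practical implementations usually add)."""
    measures = np.stack([focus_measure(s, window) for s in stack])
    best = np.argmax(measures, axis=0)
    return np.asarray(z_positions)[best]
```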
We have developed a standoff iris biometrics system for improved usability in access-control applications. The system
employs an eye illuminator, which is composed of an array of encapsulated near-infrared light emitting diodes (NIRLEDs),
which are triggered at the camera frame rate to reduce motion blur and ambient light effects. Neither the
standards and recommendations for NIR laser and lamp safety nor the LED-specific literature addresses all the specific
aspects of LED eye-safety measurement. Therefore, we established exposure limit criteria based on a worst-case scenario
combining the following: the CIE/ANSI standards and recommendations for exposure limits; concepts for maximum
irradiance level and for strobing from the laser safety standards; and ad hoc rules for minimizing irradiance on the fovea, for
handling LED arrays, and for LED mounting density. Although our system was determined to be eye safe, future variants
may require higher exposure levels and lower safety margins. We therefore discuss system configuration for accurate
LED radiometric measurement that will ensure reliable eye-safety evaluation. The considerations and ad hoc rules
described in this paper are not, and should not be treated as safety recommendations.
This paper describes progressive generations of hand held triangulation sensors for measuring small features, from edge
breaks to corrosion pits. We describe the design considerations, ergonomics, packaging and interface between the device
and part, such as the sensor tip and optional fixtures. We then present a customized design to address different types of
surface features and defects. Next, we present the calibration concept, and its execution. The paper closes by
summarizing system performance evaluation experiments and their results. It was shown that the system is capable of
measuring edges down to a radius of 250 microns at a repeatability of 50 microns.
CdZnTe (CZT) is a high-efficiency, room-temperature radiation detection material that has attracted great interest in
medical and security applications. CZT crystals can be grown by various methods. In particular, CZT grown with the
Traveling Heater Method (THM) has been shown to have fewer defects and greater material uniformity. In this
work, we developed a proof-of-concept dual-lighting NIR imaging system that can be implemented to quickly and
nondestructively screen CZT boules and wafers during the manufacturing process. The system works by imaging the
defects inside CZT at a shallow depth of focus, taking a stack of images step by step at different depths through the
sample. The images are then processed with in-house software, which can locate the defects at different depths, construct
the 3D mapping of the defects, and provide statistical defect information. This can help with screening materials for use
in detector manufacturing at an early stage, which can significantly reduce the downstream cost of detector fabrication.
This inspection method can also be used to help the manufacturer understand the cause of the defect formation and
ultimately improve the manufacturing process.
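The in-house software is not described in detail; a minimal sketch of the core idea, thresholding dark inclusions in each focal slice and recording their centroids together with the slice depth, might look like the following. The threshold and minimum blob size are placeholder values.

```python
import numpy as np
from scipy import ndimage

def locate_defects(stack, z_step_um, threshold=0.6, min_pixels=4):
    """Scan a NIR focal stack (N, H, W, intensities normalized to 0..1) for
    dark inclusions.

    Returns a list of (x_px, y_px, z_um, area_px) tuples. The threshold and
    minimum blob size are illustrative, not values from the paper.
    """
    defects = []
    for slice_idx, img in enumerate(stack):
        mask = img < threshold                      # inclusions appear dark in NIR
        labels, n = ndimage.label(mask)
        for region in range(1, n + 1):
            ys, xs = np.nonzero(labels == region)
            if xs.size >= min_pixels:
                defects.append((xs.mean(), ys.mean(),
                                slice_idx * z_step_um, xs.size))
    return defects
```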
Current spectroscopic detector crystals contain defects that prevent economic production of devices with sufficient
energy resolution and stopping power for radioisotope discrimination. This is especially acute for large monolithic
crystals due to the increased likelihood of defects. The proposed approach to cost reduction starts by combining stereoscopic
IR and ultrasound (UT) inspection with segmentation and 3D mapping algorithms. A "smart dicing" system
uses "random-access" laser-based machining to obtain tiles free of major defects. Application specific grading matches
defect type to anticipated performance. Small pieces combined in a modular sensor pack instead of a monolith will
make the most efficient use of wafer area.
There is a growing interest in the use of 3D data for many new applications beyond traditional metrology areas. In
particular, using 3D data to obtain shape information of both people and objects for applications ranging from
identification to game inputs does not require high degrees of calibration or resolution in the tens-of-microns range, but
does require a means to quickly and robustly collect data in the millimeter range. Systems using methods such as
structured light or stereo have seen wide use in measurements, but due to the use of a triangulation angle, and thus the
need for a separated second viewpoint, may not be practical for looking at a subject 10 meters away. Even when working
close to a subject, such as capturing hands or fingers, the triangulation angle causes occlusions, shadows, and a physically
large system that may get in the way. This paper will describe methods to collect medium-resolution 3D data, plus high-resolution
2D images, using a line-of-sight approach. The methods use no moving parts and as such are robust to
movement (for portability), reliable, and potentially very fast at capturing 3D data. This paper will describe the optical
methods considered, variations on these methods, and present experimental data obtained with the approach.
The paper presents a spoof detection technique employing multi-spectral and multi-polarization imaging for a
contactless fingerprint-capture system. While multispectral imaging has been proven to enable spoof detection for
contact fingerprint imagers, these imagers typically rely on frustrated total internal reflection that requires a planar
fingerprint, achieved by contact. The multispectral imaging method is based primarily on the difference in the spectral
absorption profile between a real finger and a fake one. This paper will describe the expansion of this capability using
blue and red light with contactless imaging in conjunction with polarization. This new method uses images at various
rotated linear polarizations (each image representing a different mix of specular and diffuse components), which are
used to create the feature vectors representing the spectral and polarization diversity. The software extracts complex
wavelet transforms (CWT) and FFT features from the images and builds a supervised learning method to train Support
Vector Machine (SVM) classifiers. Experimental data were collected from a diversity of human fingers and silicone-based
phantoms molded from the corresponding humans. Fake and actual fingerprints were collected from individuals with a
large diversity in skin tone, age, and finger dimensions. Our initial results, with an accuracy rate of at least 83%, are
promising and imply that using the polarization diversity can enhance the spoof detection performance.
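A simplified sketch of the classification stage is shown below: it builds a feature vector from a radially binned log-magnitude FFT of each (band, polarization) image and trains a scikit-learn SVM. The complex wavelet features, the feature dimensions, and the training data are placeholders and do not reproduce the authors' actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fft_features(images, bins=32):
    """Concatenate a radially binned log-magnitude FFT for each channel image.

    `images` is a list of 2-D arrays, one per (band, polarization) combination.
    """
    feats = []
    for img in images:
        mag = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))
        h, w = mag.shape
        yy, xx = np.indices((h, w))
        r = np.hypot(yy - h / 2, xx - w / 2)
        idx = np.minimum((r / r.max() * bins).astype(int), bins - 1)
        feats.append(np.bincount(idx.ravel(), weights=mag.ravel(), minlength=bins))
    return np.concatenate(feats)

# Placeholder training data: rows of X are feature vectors, y labels (1 = live, 0 = spoof).
rng = np.random.default_rng(0)
X = np.stack([fft_features([rng.random((64, 64)) for _ in range(4)]) for _ in range(20)])
y = np.array([0, 1] * 10)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print(clf.score(X, y))
```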
KEYWORDS: Probability theory, 3D modeling, Magnetorheological finishing, Photography, Image processing, Image segmentation, Detection and tracking algorithms, Algorithms, Light sources and illumination, Intelligence systems
We present four new change detection methods that create an automated change map from a probability map. In this
case, the probability map was derived from a 3D model. The primary application of interest is aerial photographic
applications, where the appearance, disappearance or change in position of small objects of a selectable class (e.g., cars)
must be detected at a high success rate in spite of variations in magnification, lighting and background across the image.
The methods rely on an earlier derivation of a probability map. We describe the theory of the four methods, namely
Bernoulli variables, Markov Random Fields, connected change, and relaxation-based segmentation, and evaluate and
compare their performance experimentally on a set of probability maps derived from aerial photographs.
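The abstract only names the four methods; as a rough illustration of the simplest one, the sketch below treats each pixel of the probability map as an independent Bernoulli variable, thresholds it, and keeps only connected changed regions above a minimum size. Both parameter values are placeholders.

```python
import numpy as np
from scipy import ndimage

def bernoulli_change_map(prob_map, p_threshold=0.5, min_region_px=10):
    """Binary change map from a per-pixel change probability map.

    Each pixel is treated as an independent Bernoulli variable and marked as
    changed if its probability exceeds the threshold; small connected regions
    are then discarded as noise. Parameter values are illustrative only.
    """
    changed = prob_map > p_threshold
    labels, n = ndimage.label(changed)
    sizes = ndimage.sum(changed, labels, index=np.arange(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = sizes >= min_region_px
    return keep[labels]
```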
In some applications such as field stations, disaster situations or similar conditions, it is desirable to have a contactless,
rugged means to collect fingerprint information. The approach described in this paper accelerates the
capture process by eliminating an otherwise necessary system and finger cleanup procedure, minimizes the chance of the spread of
disease or contamination, and uses an innovative optical system able to provide rolled-equivalent fingerprint
information desirable for reliable 2D matching against existing databases. The approach described captures high-resolution
fingerprints and 3D information simultaneously using a single camera. Liquid crystal polarization rotators
combined with birefringent elements provide the focus shift, and a depth-from-focus algorithm extracts the 3D data. This
imaging technique does not involve any moving parts, thus reducing cost and complexity of the system as well as
increasing its robustness. Data collection is expected to take less than 100 milliseconds, capturing images of all four fingers
simultaneously to avoid sequencing errors. This paper describes the various options considered for contactless
fingerprint capture, and why the particular approach was ultimately chosen.
This paper describes a real time, low cost part metrology method for capturing and extracting 3D part data using a
single camera and no moving elements. 3D capture in machine vision is typically done using stereo photogrammetry,
phase shifting using structured light, or an autofocus mechanism for depth capture. These methods rely on expensive and
often slow components such as multiple cameras, specialized lighting, or motion components such as motors or
piezoelectric actuators. We demonstrated a method for 3D capture using only a single camera, birefringent lenses and
ultra-fast electronic polarization switches. Using multiple images acquired at different polarization states and thus
different focal distances, a high-resolution 3D point cloud of a test part was extracted with a good match to the ground
truth data. This paper will describe the operation of the method and discuss the practical limitations.
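As one way to quantify the stated "good match to the ground truth data", the sketch below compares a measured point cloud against a reference cloud via nearest-neighbour distances using a KD-tree; the data and values are placeholders, not results from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_error_stats(measured, reference):
    """Nearest-neighbour distance from each measured point to the reference cloud.

    Both inputs are (N, 3) arrays in the same units; returns (RMS error, max error).
    """
    dists, _ = cKDTree(reference).query(measured)
    return float(np.sqrt(np.mean(dists ** 2))), float(dists.max())

# Placeholder data: a reference plane and a slightly noisy measurement of it.
rng = np.random.default_rng(1)
ref = np.column_stack([rng.random((1000, 2)) * 10.0, np.zeros(1000)])
meas = ref + rng.normal(scale=0.02, size=ref.shape)
rms, worst = cloud_error_stats(meas, ref)
print(f"RMS error {rms:.3f}, worst {worst:.3f} (same units as input)")
```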
KEYWORDS: Cameras, Defect detection, Signal to noise ratio, Imaging systems, Sensors, 3D metrology, 3D image processing, Interference (communication), Image analysis, Detection and tracking algorithms
The fabrication of new optical materials has many challenges that suggest the need for new metrology tools. To this
purpose, the authors designed a system for localizing 10-micron embedded defects in a 10-millimeter-thick semitransparent
medium. The system, comprising a single camera and a motion system, uses a combination of brightfield and
darkfield illumination. This paper describes the optical design and algorithm tradeoffs used to reach the desired detection
and measurement characteristics using stereo photogrammetry and parallel-camera stereoscopic matching. Initial
experiment results concerning defect detection and positioning, as well as analysis of computational complexity of a
complete wafer inspection are presented. We concluded that parallel camera stereoscopic matching combined with
darkfield illumination provides the most compatible solution to the 3D defect detection and positioning requirement,
detecting 10-micron defects with a positioning accuracy of better than +/- 0.5 millimeters in less than 3
minutes per part.
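For the rectified, parallel-camera stereo configuration selected above, the depth of a matched defect follows directly from its disparity. The sketch below applies the standard Z = f*B/d relation; the focal length and baseline values are placeholders, not the system's calibration.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_mm):
    """Depth (mm) of matched features in a rectified parallel-camera stereo pair.

    Z = f * B / d, with the focal length in pixels and baseline in millimetres.
    Zero or negative disparities are returned as NaN (no valid match).
    """
    d = np.asarray(disparity_px, dtype=float)
    z = np.full_like(d, np.nan)
    valid = d > 0
    z[valid] = focal_px * baseline_mm / d[valid]
    return z

# Placeholder values: 4000 px focal length, 60 mm baseline.
print(disparity_to_depth([1200.0, 1250.0, 0.0], focal_px=4000.0, baseline_mm=60.0))
```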
This paper describes preliminary development of a high-speed distance gage for manufacturing process control. The
objective of the system was to measure and record the distance from a tool/processing tip to the processed surface at a
frequency of 10 Kilohertz with minimal sensitivity of the device to tilt or curvature of the processed surface. This speed
is not achievable by use of a standard camera system or by typical position sensitive detectors (PSDs) due to data
processing and optical limitations. The proposed solution comprises a linescan camera system and a laser light source
positioned diagonally about the nominal area of interest. In this setup, the line segment, which defines the range of
locations of the laser spot, is imaged onto the linescan sensor. The location of the image of the spot is proportional to the
location of the spot on the target object. The height relative to a reference tool position is then calculated geometrically.
This setup enables flexible analysis of the spot location when a multi-layered, partly transparent surface is inspected, allows removal of stray light reflections, and handles different types of surface finish. Real-time image analysis is enabled through the use of embedded technology.
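A minimal sketch of the geometric conversion described above: the laser spot's position on the linescan sensor is estimated with a local intensity centroid and converted to height relative to the reference tool position by a linear calibration. The calibration constants are placeholders, not values from the gage.

```python
import numpy as np

def spot_position_px(line_profile):
    """Sub-pixel location of the laser spot on the linescan sensor: an intensity
    centroid in a small window around the brightest pixel (a simple alternative
    to a full peak fit)."""
    profile = np.asarray(line_profile, dtype=float)
    peak = int(np.argmax(profile))
    lo, hi = max(peak - 5, 0), min(peak + 6, profile.size)
    window = profile[lo:hi]
    return lo + float(np.sum(np.arange(window.size) * window) / np.sum(window))

def height_from_spot(pos_px, ref_px, mm_per_px):
    """Height relative to the reference tool position; mm_per_px is a placeholder
    calibration constant derived from the triangulation geometry."""
    return (pos_px - ref_px) * mm_per_px

# Synthetic profile with a Gaussian spot centred near pixel 412.3.
x = np.arange(1024)
profile = np.exp(-0.5 * ((x - 412.3) / 3.0) ** 2)
print(height_from_spot(spot_position_px(profile), ref_px=400.0, mm_per_px=0.01))
```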
In order to effectively map fine surface structure, ranging from surface finish at the sub-micron level to surface defects that can be millimeter sized, methods are needed that provide sub-micron resolution but also have sufficient measurement range to see much larger features. In the past, this niche has been addressed with white light interferometry that can be mechanically scanned in depth to provide mappings of structures on a very fine scale. However, such methods are limited to lab situations due to stability requirements, and are not fast enough to be used for shop floor decisions. We propose a system that uses a hybrid of classical laser interferometry for the fine structure, but adds in phase-shifted structured light for a coarser measurement within the same data set. We will explore the pros and cons of this approach, and the limitations on the overall system imposed by each method.
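One simple way the coarse structured-light measurement could resolve the 2-pi ambiguity of the fine interferometric phase is sketched below: the coarse height selects the fringe order, and the fine phase supplies the sub-fringe detail. The wavelength, noise level, and data are placeholders, and this is only one of several possible fusion schemes.

```python
import numpy as np

def fuse_coarse_fine(coarse_height_um, fine_phase_rad, wavelength_um=0.633):
    """Resolve the fringe order of a wrapped interferometric phase using a
    coarser absolute height estimate (e.g. from phase-shifted structured light).

    One interferometric fringe corresponds to wavelength/2 of surface height
    (double pass in reflection). All values are placeholders, not system parameters.
    """
    half_wave = wavelength_um / 2.0
    fine_height = fine_phase_rad / (2.0 * np.pi) * half_wave   # wrapped, 0..half_wave
    order = np.round((coarse_height_um - fine_height) / half_wave)
    return order * half_wave + fine_height

# Synthetic check: true heights spanning several fringes, coarse estimate noisy.
true = np.linspace(0.0, 3.0, 50)                        # micrometres
fine = (true % (0.633 / 2)) / (0.633 / 2) * 2 * np.pi    # wrapped fine phase
coarse = true + np.random.default_rng(2).normal(scale=0.02, size=true.shape)
fused = fuse_coarse_fine(coarse, fine)
print(np.allclose(fused, true, atol=1e-9))
```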
Phase shift analysis sensors are popular in inspection and metrology applications. The sensor's captured image contains the region of interest of an object overlaid with projected fringes. These fringes bend according to the surface topography. 3D data is then calculated using phase shift analysis. The image profile perpendicular to the fringes is assumed to be sinusoidal. A particular version of phase shift analysis is the image spatial phase stepping approach, which requires only a single image for analysis but is sensitive to noise. When noise, such as surface texture, appears in the image, the sinusoidal behavior is partially lost. This causes an inaccurate or noisy measurement. In this study, three digital de-noising filters are evaluated. The intent is to retrieve a smoother sine-like image profile while precisely retaining fringe boundary locations. Four different edge types are used as test objects. "Six Sigma" statistical analysis tools are used to implement screening, optimization, and validation. The most effective enhancement algorithms of the selection comprise (1) line shifting followed by horizontal Gabor filtration and vertical Gaussian filtering for chamfer edge measurement and (2) edge orientation detection followed by a 2-D Gabor filter for round edges. These algorithms significantly improve the gauge repeatability.
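As a minimal illustration of the 2-D Gabor smoothing step, the sketch below filters a noisy vertical-fringe image with a Gabor kernel tuned to the fringe period; the kernel size and sigma are placeholders, not the optimized settings found in the study.

```python
import numpy as np
from scipy import ndimage

def gabor_kernel(period_px, sigma_px, ksize):
    """Real (cosine-carrier) Gabor kernel with stripes parallel to the y axis,
    i.e. tuned to fringes that vary along x."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    k = np.exp(-(x**2 + y**2) / (2.0 * sigma_px**2)) * np.cos(2.0 * np.pi * x / period_px)
    # Bounded output; the absolute scale is unimportant for later phase extraction.
    return k / np.abs(k).sum()

def gabor_smooth(fringe_img, period_px):
    """Band-pass the fringe image around the fringe frequency to suppress
    surface-texture noise. Kernel size and sigma are illustrative placeholders."""
    k = gabor_kernel(period_px, sigma_px=period_px / 2.0, ksize=int(2 * period_px) | 1)
    return ndimage.convolve(np.asarray(fringe_img, dtype=float), k, mode="nearest")

# Synthetic vertical fringes (16 px period) with additive texture noise.
x = np.arange(256)
clean = 0.5 + 0.5 * np.cos(2.0 * np.pi * x / 16.0)
noisy = np.tile(clean, (128, 1)) + np.random.default_rng(3).normal(scale=0.1, size=(128, 256))
smoothed = gabor_smooth(noisy, period_px=16.0)
```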