We present a method to estimate the time-to-impact (TTI) from a sequence of images. The method is based on detecting and tracking local extremal points. Their endurance within and between pixels is measured, accumulated, and used to estimate the TTI. This method, which improves on an earlier proposal, is entirely different from conventional optical flow techniques and allows for fast, low-complexity processing. It is inspired by insects, which possess some TTI capability without being able to compute a computationally demanding optical flow. The method is also well suited to near-sensor image processing architectures.
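To make the relation between feature endurance and TTI concrete, here is a minimal sketch in Python. It assumes the simple looming model in which a feature at radial distance r from the focus of expansion drifts outward at r/TTI pixels per frame, so its dwell time in a single pixel votes for TTI ≈ r · dwell; the function name and the median accumulation are illustrative choices, not the authors' exact scheme.

```python
import numpy as np

def estimate_tti(radii, dwell_frames):
    """Accumulate per-feature TTI votes from pixel 'endurance'.

    Looming model: a feature at radius r from the focus of expansion
    moves outward at r/TTI px/frame, so staying `dwell` frames inside
    one pixel implies TTI ~= r * dwell.  The median is one simple way
    to accumulate the votes robustly (an illustrative choice).
    """
    votes = np.asarray(radii, float) * np.asarray(dwell_frames, float)
    return float(np.median(votes))

# Features 20..60 px from the centre with dwell times consistent with
# an impact about 120 frames away.
r = np.array([20.0, 35.0, 50.0, 60.0])
print(estimate_tti(r, 120.0 / r))     # -> 120.0
```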
In this paper we present a 2D extension of a previously described 1D method for a time-to-impact sensor [5][6]. As in the earlier work, the approach is based on measuring time instead of the apparent motion of points in the image plane to obtain data similar to the optical flow. The specific properties of the motion field in the time-to-impact application are exploited, such as using simple feature points that are tracked from frame to frame. Compared to the 1D case, the features are proportionally fewer, which affects the quality of the estimation. We propose a solution to this problem. The results obtained are as promising as those from the 1D sensor.
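As an illustration of the kind of simple feature points involved, the sketch below marks strict 3x3 local extrema in a 2D frame using plain NumPy. This is only one plausible reading of "local extremal points"; the paper's actual detector may differ.

```python
import numpy as np

def local_extrema(img):
    """Mark strict 3x3 local maxima and minima, one plausible reading of
    the 'local extremal points' used as features (illustrative)."""
    H, W = img.shape
    # Edge padding makes border pixels compare against copies of
    # themselves, so strict comparisons never mark the border.
    pad = np.pad(img, 1, mode="edge")
    nbrs = np.stack([pad[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0)])
    return (img > nbrs).all(axis=0) | (img < nbrs).all(axis=0)

img = np.random.default_rng(1).random((64, 64))
print(int(local_extrema(img).sum()), "candidate feature points")
```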
KEYWORDS: Sensors, High dynamic range imaging, Cameras, Optical flow, Photodiodes, Motion measurement, Image processing, Motion estimation, Analog electronics, Video
We present a method suitable for a time-to-impact sensor. Inspired by the seemingly "low" complexity of small insects, we propose a new approach to optical flow estimation, the key component in time-to-impact estimation. The approach is based on measuring time instead of the apparent motion of points in the image plane. The specific properties of the motion field in the time-to-impact application are used, such as measuring only along a one-dimensional (1-D) line and using simple feature points, which are tracked from frame to frame. The method lends itself readily to implementation in a parallel processor with an analog front-end. Such a processing concept [near-sensor image processing (NSIP)] was described for the first time in 1983. In this device, an optical sensor array and a low-level processing unit are tightly integrated into a hybrid analog-digital device. The high dynamic range, which is a key feature of NSIP, is used to extract the feature points. The output from the device consists of a few parameters, which give the time-to-impact as well as any transversal speed for off-centered viewing. Performance and complexity aspects of the implementation are discussed, indicating that time-to-impact data can be produced at a rate of 10 kHz with today's technology.
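The following sketch illustrates how a few parameters can capture both TTI and transversal motion in the 1-D case. Assuming the linear motion field v(x) = (x - x0)/TTI along the sensor line, a least-squares line fit recovers TTI from the slope and the focus-of-expansion offset x0 (the signature of transversal speed) from the intercept. The fitting formulation is my illustration, not necessarily the chip's internal computation.

```python
import numpy as np

def tti_and_foe(x, v):
    """Least-squares fit of the looming motion field v(x) = (x - x0)/TTI.

    x : feature positions along the 1-D sensor line (pixels)
    v : measured speeds (px/frame), obtained by timing how long a
        feature stays in a pixel rather than from dense optical flow
    Returns (tti_frames, x0); an off-centre x0 encodes transversal speed.
    """
    a, b = np.polyfit(x, v, 1)   # v = a*x + b
    return 1.0 / a, -b / a

# Example: TTI = 200 frames, focus of expansion offset to x0 = 12 px.
x = np.linspace(-100.0, 100.0, 9)
v = (x - 12.0) / 200.0
print(tti_and_foe(x, v))         # -> (200.0, 12.0)
```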
Based on the Near-Sensor Image Processing (NSIP) concept and recent results concerning optical flow and Time-to-Impact (TTI) computation with this architecture, we show how these results can be used and extended for robot vision applications. The first case involves estimating the tilt of an approaching planar surface. The second case concerns the use of two NSIP cameras to estimate absolute distance and speed, similar to a stereo-matching system but without the need for image correlations. Returning to a one-camera system, the third case deals with the problem of estimating the shape of the approaching surface. It is shown that the previously developed TTI method not only gives a very compact solution in terms of hardware complexity but also achieves surprisingly high performance.
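For the first case, a minimal sketch of the underlying geometry: under a small-angle pinhole model, the local TTI of a plane approached at constant speed varies linearly across the image, with slope proportional to the tangent of the tilt angle. The model and function below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def plane_tilt_from_tti(x, tti):
    """Recover plane tilt from the spatial variation of local TTI.

    Pinhole model with f-normalised image coordinate x: a plane at depth
    Z0 tilted by theta gives Z(x) ~= Z0 * (1 + x*tan(theta)) for small
    x*tan(theta), so tau(x) = Z(x)/v is linear in x:
        tau(x) = tau0 * (1 + x * tan(theta)).
    A line fit yields tau0 and tan(theta) = slope / tau0.
    """
    slope, tau0 = np.polyfit(x, tti, 1)
    return tau0, np.degrees(np.arctan(slope / tau0))

# Example: a plane 150 frames from impact, tilted 20 degrees.
x = np.linspace(-0.2, 0.2, 11)                  # normalised image coords
tti = 150.0 * (1.0 + x * np.tan(np.radians(20.0)))
print(plane_tilt_from_tti(x, tti))              # ~ (150.0, 20.0)
```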
KEYWORDS: Sensors, Image processing, High dynamic range imaging, Image sensors, Analog electronics, Photodiodes, Digital filtering, Image compression, Switches, Range imaging
The paper describes the Near-Sensor Image Processing (NSIP) paradigm developed in the early 1990s and shows that it was a precursor to recent architectures proposed for direct (in the sensor) image processing and high dynamic range (HDR) image sensing. Both of these architectures are based on the specific properties of CMOS light sensors, in particular the ability to continuously monitor the accumulation of photon-induced charge as a function of time. We further propose an extension of the original NSIP pixel to include a circuit that facilitates temporal and spatio-temporal processing.
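A small software model of the pixel property both architectures rely on: the time at which the photo-discharged node crosses a comparator threshold is inversely proportional to light intensity, so timing the crossing instead of sampling a voltage yields very high dynamic range. All constants below are illustrative.

```python
import numpy as np

def time_to_threshold(intensity, v_pre=1.0, v_th=0.5, c=1.0):
    """NSIP pixel model: the photodiode node, precharged to v_pre, is
    discharged by the photocurrent; the pixel 'value' is the time at
    which the node crosses v_th:  t = C * (v_pre - v_th) / I.
    Since t ~ 1/intensity, timing the crossing gives high dynamic range.
    (All constants are illustrative.)"""
    return c * (v_pre - v_th) / np.maximum(intensity, 1e-12)

# Six decades of illumination map to six decades of crossing time,
# each resolvable by a digital counter.
for intensity in (1e-3, 1e0, 1e3):
    print(intensity, time_to_threshold(intensity))
```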
This paper presents a real-time spectral classification system based on the PGP spectrograph and a smart image sensor. The PGP is a spectrograph that extracts the spectral information from a scene and projects it onto an image sensor, a method often referred to as imaging spectroscopy. The classification is based on linear models and categorizes a number of pixels along a line. Previous systems adopting this method have used standard sensors, which often resulted in poor performance. The new system, however, is based on a patented near-sensor classification method, which exploits analogue features of the smart image sensor. The method reduces the enormous amount of data at an early stage of processing, thus making true real-time spectral classification possible. The system has been evaluated on hardwood parquet boards with very good results. The color defects considered in the experiments were blue stain, white sapwood, yellow decay, and red decay. In addition to these four defect classes, a reference class was used to indicate correct surface color. The system calculates a statistical measure for each parquet block, giving the pixel defect percentage. The patented method makes it possible to run at very high speeds with high spectral discrimination ability. Using a powerful illuminator, the system can run at a line frequency exceeding 2000 lines/s. This makes it possible to maintain high production speed while still measuring with good resolution.
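A minimal sketch of the per-pixel linear classification step, assuming one spectrum per pixel along the imaged line and one linear discriminant function per class. The weights below are random stand-ins, not the trained wood-defect model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_classes = 64, 5        # spectral samples/pixel; 4 defects + reference
W = rng.normal(size=(n_classes, n_bands))   # stand-in discriminant weights
b = rng.normal(size=n_classes)

def classify_line(spectra):
    """spectra: (n_pixels, n_bands), one spectrum per pixel along the
    imaged line from the PGP spectrograph; returns a class per pixel."""
    scores = spectra @ W.T + b    # linear discriminant functions
    return scores.argmax(axis=1)

line = rng.normal(size=(256, n_bands))
print(np.bincount(classify_line(line), minlength=n_classes))
```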
Analog Sensor Processing Using Exposure Control (ASPEC) is a new concept for high-speed image processing. By using an addressable image array with integrating output amplifiers, signal processing can be performed directly on the sensor. The major gains of the ASPEC technique are that the operations are fast, the approach can be implemented using existing hardware, and the processing is executed in parallel on the sensor array. Furthermore, the data reduction is carried out early in the signal-processing chain. In this paper we present a novel programmable camera architecture based on the CIVIS CMOS integrating addressable image sensor, which is well suited for ASPEC applications.
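A toy model of what exposure control buys, under my reading of the concept: with an integrating amplifier accumulating intensity times time, the chosen exposure time turns a fixed comparator threshold into a programmable intensity threshold, so a global threshold operation costs one exposure. All constants are made up.

```python
import numpy as np

def threshold_by_exposure(intensity, t_exp, q_th=1.0):
    """The integrating amplifier accumulates I * t, so a fixed charge
    comparator at q_th acts as an intensity threshold q_th / t_exp,
    programmable purely by the chosen exposure time (illustrative)."""
    charge = intensity * t_exp           # integrated signal
    return charge >= q_th                # fires iff intensity >= q_th / t_exp

img = np.array([[0.2, 0.8], [1.5, 3.0]])
print(threshold_by_exposure(img, t_exp=1.0))   # intensity threshold 1.0
print(threshold_by_exposure(img, t_exp=2.0))   # intensity threshold 0.5
```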
We discuss a device for real-time compensation of the image-quality deterioration induced by atmospheric turbulence. The device will permit ground-based observations with very high image resolution. We propose an instrument with two channels. One is an ordinary image-detection channel, while the other uses a Hartmann-Shack wavefront detector to measure image degradation. This information is obtained in the form of a set of lenslet focus shifts, each corresponding to the local tilt of the wavefront. Through modeling, the entire wavefront is reconstructed. Consequently, we can estimate the optical transfer function and its corresponding point spread function. Through convolution techniques, the distorted image can subsequently be restored. Thus, image correction is performed in software, eliminating the need for expensive adaptive optics designs. Due to the nature of atmospheric turbulence, detection and correction have to be made at 50 - 100 frames per second. This implies a need for very high computing capacity. A study of the mathematical operations involved has been made, with special emphasis on implementation in the hardware architecture known as the radar video image processor (RVIP). This hardware utilizes a high degree of parallelism. Available results show that RVIP together with complementary units provides the necessary high-speed computing capacity. The detection system in both channels must meet very high demands, including high quantum efficiency, fast readout at low noise levels, and a wide spectral range. A preliminary investigation evaluates suitable detectors; ICCDs are so far the most promising candidates.
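As a sketch of the software restoration step, the snippet below applies a classic Wiener filter given an estimated PSF. This is a standard stand-in for the "convolution techniques" mentioned above, not necessarily the filter used in the RVIP implementation.

```python
import numpy as np

def wiener_restore(blurred, psf, k=1e-2):
    """Wiener deconvolution in Fourier space: F = conj(H)*G / (|H|^2 + k),
    where H is the transform of the (centred) PSF estimated from the
    reconstructed wavefront; k regularises low-SNR frequencies."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + k)))

# Example with a synthetic Gaussian PSF standing in for the turbulence PSF.
y, x = np.mgrid[-64:64, -64:64]
psf = np.exp(-(x**2 + y**2) / (2.0 * 3.0**2))
psf /= psf.sum()
scene = np.zeros((128, 128)); scene[64, 64] = 1.0      # a point source
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_restore(blurred, psf)
print(np.unravel_index(restored.argmax(), restored.shape))  # (64, 64)
```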
Near-sensor image processing, NSIP, is a concept in which the temporal behavior of the photodiode is used to perform image processing. It has been shown that many conventional image processing operations, such as convolution and gray-scale morphology, can easily be implemented in NSIP. In this paper we describe the basis of NSIP and how the sensor/processor architecture is used to perform local as well as global operations. An implementation of an NSIP chip is also described. Finally, we show a number of algorithms and applications that have been implemented in our NSIP camera system.
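A software simulation of how gray-scale morphology falls out of the temporal behavior of the pixel: brighter pixels cross the comparator threshold earlier, and OR-ing binary pixel states over a 3x3 neighbourhood while the threshold sweeps yields gray-scale dilation without ever reading out analogue values. The sweep discretization below is illustrative.

```python
import numpy as np

def nsip_dilate(img, n_steps=256):
    """Gray-scale 3x3 dilation the NSIP way (software simulation).

    As the effective threshold sweeps down, each pixel's comparator turns
    on when its intensity exceeds the level; the step at which the 3x3 OR
    of comparator outputs first fires encodes the neighbourhood maximum.
    """
    H, W = img.shape
    out = np.zeros((H, W))
    fired = np.zeros((H, W), bool)
    for t in range(n_steps):
        level = 1.0 - t / n_steps              # sweeping threshold
        state = img >= level                   # comparator outputs
        pad = np.pad(state, 1)                 # zero (False) border
        neigh_or = np.zeros((H, W), bool)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                neigh_or |= pad[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
        newly = neigh_or & ~fired
        out[newly] = level                     # brightest neighbour's level
        fired |= newly
    return out

# Check against a direct 3x3 maximum filter (zero-padded).
rng = np.random.default_rng(0)
img = rng.random((16, 16))
pad = np.pad(img, 1)
ref = np.max(np.stack([pad[1 + dy:17 + dy, 1 + dx:17 + dx]
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1)]), axis=0)
print(np.abs(nsip_dilate(img) - ref).max())   # < 1/256, the sweep step
```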
In this paper we present a system for high-speed pixelwise spectral classification. The system is based on the line-imaging PGP (prism-grating-prism) spectrograph combined with the smart image sensor MAPP2200. The classification is implemented using a near-sensor approach in which linear discriminant functions are calculated using exposure-time modulation and analog summation of pixel data. After A/D conversion, the sums are compared and classified pixels are output from the sensor chip. The theoretical maximum classified pixel rate of the system is around 1 MHz, depending on the number of classes, etc. In most practical applications, however, the limit will be set by the available amount of light.
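A minimal software model of the near-sensor computation described above: each pixel integrates for a time proportional to the magnitude of its discriminant weight, and charges from positive- and negative-weight pixels are summed separately and then compared. The two-line split for signed weights is my illustration of how the scheme can be realized.

```python
import numpy as np

def discriminant_by_exposure(intensities, weights):
    """Software model: compute w . I via exposure-time modulation.

    Each pixel integrates for a time proportional to |w_i|; charges from
    positive- and negative-weight pixels are summed on separate lines and
    the two sums are compared after A/D conversion (illustrative)."""
    w = np.asarray(weights, float)
    charge = np.asarray(intensities, float) * np.abs(w)   # I_i * t_i
    return charge[w > 0].sum() - charge[w < 0].sum()

I = np.array([0.3, 0.9, 0.5])
w = np.array([1.0, -0.5, 2.0])
print(discriminant_by_exposure(I, w), float(np.dot(w, I)))  # both 0.85
```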
A range image is an image where each pixel represents a measurement of the distance from the camera to the object. Typical applications of range imaging are inspection and dimension measurement in industrial processes, e.g. in the forest products industry. Range image acquisition can be performed in many different ways. In this paper we use an active triangulation method in which a sheet of light illuminates the scene. The sensor-level signal processing task is to extract the position of the light impact in each sensor column. Two novel algorithms implemented on the commercially available smart image sensor MAPP2200 are presented. Both algorithms give a width resolution of 256 pixels at a line frequency of 15 - 20 kHz, corresponding to range pixel rates of 4 - 5 MHz, with range resolution varying from 8 up to 13 bits in special cases. This is considerably faster than other proposed methods. One of the methods also gives intensity data concurrently and in perfect registration with the range data, and the other has an error-detection capability for multiple peaks on the sensor.
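For the column-wise extraction task, here is a minimal sketch of one standard approach, a thresholded center-of-gravity estimate per column giving sub-pixel positions. It illustrates the task, not the two MAPP2200 algorithms of the paper.

```python
import numpy as np

def laser_line_position(frame, min_peak=16.0):
    """Sub-pixel sheet-of-light position per column via a thresholded
    centre of gravity around each column's peak (illustrative method,
    not the MAPP2200 algorithms of the paper).

    frame : (rows, cols) intensity image with one bright laser line.
    Returns one sub-pixel row position per column, NaN where the peak
    is too weak to be trusted.
    """
    rows = np.arange(frame.shape[0], dtype=float)[:, None]
    peak = frame.max(axis=0)
    w = np.where(frame >= 0.5 * peak, frame, 0.0)    # window around peak
    pos = (w * rows).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-9)
    return np.where(peak >= min_peak, pos, np.nan)

frame = np.zeros((32, 4))
frame[10:13, :] = [[20.0], [40.0], [20.0]]           # line near row 11
print(laser_line_position(frame))                    # [11. 11. 11. 11.]
```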