Automatic acquisition of aerial threats at distances of thousands of kilometers requires high sensitivity to small differences in contrast and high optical quality for subpixel resolution, since targets occupy far less surface area than a single pixel. Targets travel at high speed and break up in the re-entry phase, so target/decoy discrimination at the earliest possible time is imperative. Achieving this in real time requires a multifaceted approach in which hyperspectral imaging and analog processing permit feature extraction as the data are acquired.
Hyperacuity Systems has developed a prototype chip capable of a nonlinear increase in resolution, that is, subpixel resolution far beyond either pixel size or pixel spacing. The performance increase is due to a biomimetic implementation of principles found in animal retinas. Photosensitivity is not homogeneous across the sensor surface, which allows pixel parsing. It is remarkably simple to impose this profile on detectors, and we have demonstrated at least three ways to do so. Individual photoreceptors have a Gaussian sensitivity profile, and this nonlinear profile can be exploited to extract high-resolution information. Adaptive analog circuitry provides contrast enhancement and dynamic-range setting with offset and gain control. Pixels are processed in parallel within modular elements called cartridges, analogous to the photoreceptor inputs of fly eyes. These modular elements are interconnected by a novel function modeled on a cell matrix known as L4. The system is exquisitely sensitive to small target motion and maintains a robust signal under degraded viewing conditions, allowing detection of targets smaller than a single pixel or at greater distances. Not only instantaneous feature extraction but also subpixel resolution is therefore possible. Analog circuitry increases processing speed and yields more accurate motion specification for target tracking and identification.
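To make the hyperacuity mechanism concrete, the sketch below (a minimal illustration with hypothetical parameter values, not taken from the chip) simulates a row of detectors with Gaussian sensitivity profiles and recovers a point target's position from the log-ratio of the two strongest neighboring responses. Because that log-ratio is exactly linear in target position for Gaussian profiles, the estimate is limited by noise and contrast rather than by detector spacing.

```python
import numpy as np

# Hypothetical parameters: detector centers one unit apart, Gaussian
# acceptance profiles wide enough that neighboring detectors overlap.
SPACING = 1.0
SIGMA = 0.6
centers = np.arange(8) * SPACING

def responses(target_x, contrast=1.0):
    """Gaussian-profile detector outputs for a point target at target_x."""
    return contrast * np.exp(-(target_x - centers) ** 2 / (2 * SIGMA ** 2))

def estimate_position(r):
    """Recover target position from the two strongest neighboring detectors.
    For Gaussian profiles, ln(r_i/r_j) = (c_i - c_j)(2x - c_i - c_j)/(2 sigma^2),
    which is linear in x and can be solved directly."""
    i = int(np.argmax(r))
    j = i + 1 if (i + 1 < len(r) and (i == 0 or r[i + 1] > r[i - 1])) else i - 1
    c_i, c_j = centers[i], centers[j]
    return 0.5 * (c_i + c_j) + SIGMA ** 2 * np.log(r[i] / r[j]) / (c_i - c_j)

for true_x in (3.14159, 3.5, 4.01):
    est = estimate_position(responses(true_x))
    print(f"true {true_x:.5f}  estimated {est:.5f}")
```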
This paper is a revision of a paper presented at the SPIE conference on Medical Imaging 2005: Physiology, Function, and Structure from Medical Images, Feb. 2005, San Diego, California. The paper presented there appears (unrefereed) in SPIE Proceedings Vol. 5746.
Segmentation, or separating an image into distinct objects, is the key to creating 3-D renderings from serial slice images. This is typically a manual process that requires trained personnel to tediously outline and isolate the objects in each image. We describe a template-based semiautomatic segmentation method to aid the segmentation and 3-D reconstruction of microscopic objects recorded with a confocal laser scanning microscope (CLSM). The simple and robust algorithm is based on the creation of a user-defined object template, followed by automatic segmentation of the object in each of the remaining image slices. The user guides the process by selecting the initial image slice for the object template and labeling the object of interest. The algorithm is first applied to mathematically defined shapes to verify the performance of the software, and then to biological samples, including neurons in the common housefly. The quest to further understand the visual system of the housefly provided the opportunity to develop this segmentation algorithm. Further application of this algorithm may extend to other registered and aligned serial-section datasets with high-contrast objects.
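A minimal sketch of the template idea follows, assuming high-contrast, well-registered slices; the seed-point interface and the largest-overlap rule are illustrative stand-ins for the paper's exact procedure.

```python
import numpy as np
from scipy import ndimage

def propagate_segmentation(slices, seed_index, seed_point, threshold):
    """Segment one object through a stack of registered slices.

    The user picks a slice and a point inside the object; the labeled
    region there becomes the template, and in each neighboring slice we
    keep the thresholded component that most overlaps the previous mask.
    """
    masks = [None] * len(slices)

    # Build the initial template from the user-selected slice and point.
    labels, _ = ndimage.label(slices[seed_index] > threshold)
    masks[seed_index] = labels == labels[seed_point]

    # Propagate outward in both directions through the stack.
    for order in (range(seed_index + 1, len(slices)),
                  range(seed_index - 1, -1, -1)):
        prev = masks[seed_index]
        for k in order:
            labels, n = ndimage.label(slices[k] > threshold)
            overlap = np.bincount(labels[prev], minlength=n + 1)
            overlap[0] = 0                      # ignore background
            if overlap.max() == 0:
                break                           # object ended in this direction
            masks[k] = labels == overlap.argmax()
            prev = masks[k]
    return masks
```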
Standard Shack-Hartmann wavefront sensors use a CCD element to sample the position and distortion of a target or guide star. Digital sampling of the element and transfer to a memory space for subsequent computation add significant temporal delay, thus limiting the spatial frequency and scalability of the system as a wavefront sensor. A new approach to sampling uses information-processing principles found in the insect compound eye. Analog circuitry eliminates digital sampling and extends the useful range of the system to control a deformable mirror, making a faster, more capable wavefront sensor.
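For orientation, the sketch below shows the conventional digital step that the analog front end replaces: per-subaperture spot centroiding to recover local wavefront slopes. All parameter names and the grid layout are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lenslet_slopes(frame, grid, pitch_px, focal_len, pixel_size):
    """Conventional (digital) Shack-Hartmann step: the spot centroid in
    each lenslet subaperture gives the local wavefront slope. This is
    the per-frame computation the analog approach avoids."""
    ny, nx = grid
    sub = pitch_px                      # subaperture size in pixels
    slopes = np.zeros((ny, nx, 2))
    ys, xs = np.mgrid[0:sub, 0:sub]
    for i in range(ny):
        for j in range(nx):
            win = frame[i*sub:(i+1)*sub, j*sub:(j+1)*sub].astype(float)
            total = win.sum()
            if total == 0:
                continue                # no spot in this subaperture
            cy = (ys * win).sum() / total - (sub - 1) / 2
            cx = (xs * win).sum() / total - (sub - 1) / 2
            # Slope (radians) = centroid displacement / focal length.
            slopes[i, j] = (cy * pixel_size / focal_len,
                            cx * pixel_size / focal_len)
    return slopes
```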
Musca domestica, the common house fly, has a simple yet powerful and accessible visual system. Cajal observed in 1885 that the fly's visual system is organized like the human retina. The house fly has intriguing visual-system features such as fast, analog, parallel operation. Furthermore, it can detect movement and objects at far better resolution than predicted by photoreceptor spacing, a capability termed hyperacuity. We are investigating the mechanisms behind these features and incorporating them into next-generation vision systems. We have developed a prototype sensor that employs a fly-inspired arrangement of photodetectors sharing a common lens. The Gaussian-shaped acceptance profile of each detector, coupled with overlapping fields of view, provides the configuration necessary for obtaining hyperacuity data. The sensor detects object movement with far greater resolution than photoreceptor spacing would predict. We have exhaustively tested and characterized the sensor to determine its practical resolution limit. Our tests, together with theory from Bucklew and Saleh (1985), indicate that the limit of the hyperacuity response may depend only on target contrast. We have also implemented an array of these prototype sensors that allows two-dimensional position location. These high-resolution, low-contrast-capable sensors are being developed as a vision system for an autonomous robot and the next generation of smart wheelchairs, but they are easily adapted to biological endoscopy, downhole monitoring in oil wells, and other applications.
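As an illustration of how overlapping Gaussian acceptance profiles support two-dimensional position location, the following sketch recovers a point target's coordinates from a 2x2 quad of such detectors. The spacing and profile width are hypothetical, and the arithmetic is idealized and noise-free.

```python
import numpy as np

# Illustrative 2x2 quad of overlapped Gaussian-profile detectors at
# (0,0), (D,0), (0,D), (D,D); D and SIGMA are hypothetical values.
D, SIGMA = 1.0, 0.6

def quad_responses(x, y):
    """Separable Gaussian responses of the four detectors to a point at (x, y)."""
    g = lambda u, c: np.exp(-(u - c) ** 2 / (2 * SIGMA ** 2))
    return np.array([[g(x, 0) * g(y, 0), g(x, D) * g(y, 0)],
                     [g(x, 0) * g(y, D), g(x, D) * g(y, D)]])

def locate(r):
    """2-D position from log-ratios across the quad; exact for a point
    source seen through separable Gaussian profiles."""
    lx = 0.5 * np.log((r[0, 1] * r[1, 1]) / (r[0, 0] * r[1, 0]))
    ly = 0.5 * np.log((r[1, 0] * r[1, 1]) / (r[0, 0] * r[0, 1]))
    return (D / 2 + SIGMA ** 2 * lx / D,
            D / 2 + SIGMA ** 2 * ly / D)

print(locate(quad_responses(0.37, 0.81)))   # ~ (0.37, 0.81)
```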
Our understanding of the world around us is based primarily on three-dimensional information because of the environment in which we live and interact. Medical or biological image information is often collected in the form of two-dimensional, serial section images, and as such it is difficult for the observer to mentally reconstruct the three-dimensional features of each object. Although many image rendering software packages allow for 3D views of the serial sections, they lack the ability to segment, or isolate, different objects in the data set. Segmentation is the key to creating 3D renderings of distinct objects from serial slice images, like the separate pieces of a puzzle. This paper describes a segmentation method for objects recorded with serial section images. The user defines threshold levels and object labels on a single image of the data set, which are subsequently used to automatically segment each object in the remaining images of the same data set while maintaining boundaries between contacting objects. The performance of the algorithm is verified using mathematically defined shapes. It is then applied to the visual neurons of the housefly, Musca domestica. Knowledge of the fly's visual system may lead to improved machine vision systems, and this effort provided the impetus to develop the segmentation algorithm. The described segmentation method can be applied to any high-contrast serial slice data set that is well aligned and registered. The medical field alone has many applications for rapid generation of 3D segmented models from MRI and other medical imaging modalities.
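One way to realize the boundary-preserving propagation in code is sketched below, assuming well-aligned, high-contrast slices: each thresholded foreground pixel inherits the label of the nearest labeled pixel in the previously segmented slice. The nearest-label rule is an illustrative choice, not necessarily the paper's exact criterion.

```python
import numpy as np
from scipy import ndimage

def segment_stack(slices, seed_labels, seed_index, threshold):
    """Propagate user-drawn object labels through registered slices,
    keeping contacting objects separate: each foreground pixel takes
    the label of the nearest labeled pixel in the previous slice."""
    out = np.zeros((len(slices),) + slices[0].shape, dtype=int)
    out[seed_index] = np.where(slices[seed_index] > threshold, seed_labels, 0)

    # Propagate outward in both directions from the seeded slice.
    for order in (range(seed_index + 1, len(slices)),
                  range(seed_index - 1, -1, -1)):
        prev = out[seed_index]
        for k in order:
            fg = slices[k] > threshold
            # For every pixel, find the nearest labeled pixel in prev.
            _, (iy, ix) = ndimage.distance_transform_edt(
                prev == 0, return_indices=True)
            out[k] = np.where(fg, prev[iy, ix], 0)
            prev = out[k]
    return out
```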
Two challenges to an effective, real-world computer vision system are speed and reliable object recognition. Traditional computer vision sensors such as CCD arrays take considerable time to transfer all the pixel values of each image frame to a processing unit. One way to bypass this bottleneck is to design a sensor front end with a biologically inspired analog, parallel design whose preprocessing and adaptive circuitry can produce edge maps in real time. This biomimetic sensor is based on the eye of the common house fly (Musca domestica). The sensor has also demonstrated an impressive ability to detect objects at subpixel resolution. However, the image information provided by such a sensor is not a traditional bitmap transfer and therefore requires novel computational manipulations to make best use of the sensor output. The real-world object recognition challenge is addressed with a subspace method that uses eigenspace object models created from multiple reference object appearances. In past work, the authors have successfully demonstrated image object recognition techniques for surveillance images of various military targets using such eigenspace appearance representations. This work, later extended to partially occluded objects, can be generalized to a wide variety of object recognition applications. The technique is based upon a large body of eigenspace research described elsewhere. Briefly, the technique creates target models by collecting a set of target images and finding a set of eigenvectors that span the target image space. Once the eigenvectors are found, an eigenspace model (also called a subspace model) of the target is generated by projecting target images onto the eigenspace. New images to be recognized are then projected onto the eigenspace for object recognition. For occluded objects, we project the image onto reduced-dimensional subspaces of the original eigenspace (i.e., a "subspace of a subspace" or a "sub-eigenspace") and measure how close a match we can achieve when the occluded target image is projected onto a given sub-eigenspace. We have found that this technique can significantly improve recognition of occluded objects. To manage the combinatorial "explosion" associated with selecting the number of subspaces required and then projecting images onto those sub-eigenspaces for measurement, we use a variation on the A* ("A-star") search method. The challenge of tying these two subsystems, the biomimetic sensor and the subspace object recognition module, together into a coherent and robust system is formidable. It requires specialized computational image and signal processing techniques that are described in this paper, along with preliminary results. The authors believe this approach will result in a fast, robust computer vision system suitable for non-ideal, real-world environments.
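The eigenspace model itself is standard principal component analysis; the sketch below builds the model, scores new images by subspace residual, and uses an exhaustive enumeration of sub-eigenspaces as a simple stand-in for the A*-guided search described above. Function names and the leave-out rule are illustrative.

```python
import numpy as np
from itertools import combinations

def build_eigenspace(train_images, k):
    """Eigenspace (subspace) target model: PCA of vectorized training
    images, keeping the k leading eigenvectors."""
    X = np.stack([im.ravel() for im in train_images]).astype(float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                 # basis is k x D, rows orthonormal

def residual(image, mean, basis):
    """Distance from an image to the model subspace; a small residual
    means the image is well explained by the target model."""
    x = image.ravel().astype(float) - mean
    return np.linalg.norm(x - basis.T @ (basis @ x))

def best_sub_eigenspace(image, mean, basis, drop=2):
    """Occlusion handling, illustrated by brute force: try every
    sub-eigenspace formed by leaving out 'drop' basis vectors and keep
    the best match. The paper prunes this search with A* instead."""
    k = basis.shape[0]
    return min((residual(image, mean, basis[list(s)]), s)
               for s in combinations(range(k), k - drop))
```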
The need for autonomous systems to work under unanticipated conditions requires smart sensors, and high-resolution systems impose tremendous computational loads. Inspiration from animal vision systems can guide the development of preprocessing approaches implementable in real time with high resolution and reduced computational load. Given a high-quality optical path and a 2D array of photodetectors, the resolution of a digital image is determined by the density of photodetectors sampling the image; to reconstruct an image, resolution is limited by the distance between adjacent detectors. However, animal eyes resolve images 10-100 times better than either the acceptance angle of a single photodetector or the center-to-center distance between neighboring photodetectors. A new model of the fly's visual system emulates this improved performance, offering a different approach to subpixel resolution. That an animal without a cortex is capable of this performance suggests that high-level computation is not involved. The model takes advantage of a photoreceptor cell's internal light-capturing structure, an organelle that acts as a waveguide. Neurocircuitry exploits the waveguide's optical nonlinearities, namely the shoulder region of its Gaussian sensitivity profile, to extract high-resolution information from the visual scene. The receptive fields of optically disparate inputs overlap in space, and photoreceptor input is continuous rather than discretely sampled. The output of the integrating module is a signal proportional to the position of the target within the detector array. For tracking a point source, resolution is 10 times better than the detector spacing; for locating the absolute position and orientation of an edge, the model performs similarly. Analog processing is used throughout: each element is an independent processor of local luminance, and information processing occurs in real time with continuous update. This processing principle will be reproduced in an analog integrated circuit using photodiodes and fiber-optic waveguides as the nonlinear light-sensing devices, with current mirrors and op-amp circuits for the processing. The outputs of this circuit will feed other artificial neural networks for further processing.
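A toy version of the position-extraction principle: the normalized difference of two overlapping Gaussian-profile receptors, the kind of operation that current mirrors and op-amp circuits compute directly, yields a signal that is monotonic in target position and independent of overall contrast. The parameter values are hypothetical.

```python
import numpy as np

# Two overlapping receptors with Gaussian sensitivity profiles, centered
# at +/- d/2; sigma places the crossover in the "shoulder" region.
d, sigma = 1.0, 0.55

def receptor_pair(x, contrast=1.0):
    """Outputs of two Gaussian-profile receptors viewing a point at x."""
    r1 = contrast * np.exp(-(x + d / 2) ** 2 / (2 * sigma ** 2))
    r2 = contrast * np.exp(-(x - d / 2) ** 2 / (2 * sigma ** 2))
    return r1, r2

def position_signal(r1, r2):
    """Normalized difference: equals tanh(x*d / (2*sigma^2)), so it is
    monotonic in position and cancels overall contrast."""
    return (r2 - r1) / (r2 + r1)

for x in np.linspace(-0.5, 0.5, 5):
    print(f"x = {x:+.2f}  signal = {position_signal(*receptor_pair(x)):+.4f}")
```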
An automated analysis of ocular fundus images allows ophthalmologists to diagnose or prognosticate disease more effectively, based on reliable fundus evaluations. This paper presents a computer method for identifying one of the most important features in fundus images: the retinal vascular network.
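The paper's specific method is not detailed in this abstract; purely as an illustration of vascular-network extraction, the sketch below applies a standard multiscale Frangi vesselness filter to the green channel (where retinal vessels have the highest contrast) and thresholds the result.

```python
import numpy as np
from skimage import filters

def vessel_map(green_channel):
    """Illustrative vessel extraction, not the paper's specific method:
    multiscale Frangi vesselness followed by Otsu thresholding."""
    vesselness = filters.frangi(green_channel.astype(float),
                                sigmas=np.arange(1, 8, 2),  # vessel widths, px
                                black_ridges=True)          # vessels are dark
    return vesselness > filters.threshold_otsu(vesselness)
```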