The search for effective noise removal algorithms is still a real challenge in the field of image processing. An efficient image denoising method is proposed for images that are corrupted by salt-and-pepper noise. Salt-and-pepper noise takes either the minimum or maximum intensity, so the proposed method restores the image by processing the pixels whose values are either 0 or 255 (assuming an 8-bit/pixel image). For low levels of noise corruption (less than or equal to 50% noise density), the method employs the modified mean filter (MMF), while for heavy noise corruption, noisy pixel values are replaced by the weighted average of the MMF and the total variation of corrupted pixels, which is minimized using convex optimization. Two fuzzy systems are used to determine the weights used in the averaging. To evaluate the performance of the algorithm, several test images with different noise levels are restored, and the results are quantitatively measured by peak signal-to-noise ratio and mean absolute error. The results show that the proposed scheme gives considerable noise suppression up to a noise density of 90%, while almost completely maintaining edges and fine details of the original image.
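The detect-then-restore idea described above can be sketched as follows. The paper's exact MMF, fuzzy weighting, and total-variation step are not specified in the abstract, so the window size and averaging rule here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def modified_mean_filter(img, window=3):
    """Sketch of the detection-then-average idea: a pixel is treated as
    noisy only if it equals 0 or 255 (salt-and-pepper extremes), and is
    replaced by the mean of the non-noisy pixels in its local window."""
    img = img.astype(float)
    noisy = (img == 0) | (img == 255)      # candidate noise pixels
    out = img.copy()
    r = window // 2
    padded = np.pad(img, r, mode='edge')
    pnoisy = np.pad(noisy, r, mode='edge')
    for i, j in zip(*np.where(noisy)):
        block = padded[i:i + window, j:j + window]
        mask = ~pnoisy[i:i + window, j:j + window]
        if mask.any():
            out[i, j] = block[mask].mean()  # average of clean neighbors only
        # else: left unchanged; a real implementation would grow the window
    return out.astype(np.uint8)
```

Restricting the average to non-noisy neighbors is what distinguishes this from a plain mean filter: at high noise densities, corrupted neighbors would otherwise bias the estimate toward 0 or 255.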
We report on a fiber optic sensor based on the physiological aspects of the eye and vision-related neural layers of the common housefly (Musca domestica) that has been developed and built for aerospace applications. The intent of the research is to reproduce select features from the fly’s vision system that are desirable in image processing, including high functionality in low-light and low-contrast environments, sensitivity to motion, compact size, light weight, and low power and computation requirements. The fly uses a combination of overlapping photoreceptor responses that are well approximated by Gaussian distributions and neural superposition to detect image features, such as object motion, to a much higher degree than the photoreceptor density alone would imply. The Gaussian overlap in the biomimetic sensor comes from the front-end optical design, and the neural superposition is accomplished by subsequently combining the signals using analog electronics. The fly eye sensor is being developed to perform real-time tracking of a target on a flexible aircraft wing experiencing bending and torsion loads during flight. We report on results of laboratory experiments using the fly eye sensor to sense a target moving across its field of view.
Reducing the environmental impact of aviation is a primary goal of NASA aeronautics research. One approach to
achieve this goal is to build lighter weight aircraft, which presents complex challenges due to a corresponding increase in
structural flexibility. Wing flexibility can adversely affect aircraft performance from the perspective of aerodynamic
efficiency and safety. Knowledge of the wing position during flight can aid active control methods designed to mitigate
problems due to increased wing flexibility. Current approaches to measuring wing deflection, including strain
measurement devices, accelerometers, and GPS solutions, as well as newer technologies such as fiber optic strain
sensors, have limitations that restrict their practical application to flexible aircraft control. Hence, it was proposed to use
a biomimetic optical sensor based on the fly-eye to track wing deflection in real time. The fly-eye sensor has several advantages over
conventional sensors used for this application, including light weight, low power requirements, fast computation, and a
small form factor. This paper reports on the fly-eye sensor development and its application to real-time wing deflection
measurement.
Musca domestica, the common house fly, possesses a powerful vision system that exhibits features such as fast, analog,
parallel operation and motion hyperacuity, the ability to detect the movement of objects at far better resolution than
predicted by their photoreceptor spacing. Researchers at the Wyoming Information, Signal Processing, and Robotics
(WISPR) Laboratory have investigated these features for over a decade to develop an analog sensor inspired by the fly.
Research efforts have been divided into electrophysiology; mathematical, optical and MATLAB based sensor modeling;
physical sensor development; and applications. This paper will provide an in-depth review of recent key results in some
of these areas, including development of a multiple, light-adapting, cartridge-based sensor constructed on both planar
and co-planar surfaces using off-the-shelf components. Both a photodiode-based approach and a fiber-based sensor will
be discussed. Applications in UAV obstacle avoidance, long-term building monitoring, and autonomous robot navigation
are also discussed.
Since the mid-1980s, the development of a therapeutic, computer-assisted laser photocoagulation system to treat retinal disorders has progressed under the guidance of Dr. Welch, the Marion E. Forsman Centennial Professor of Engineering, Department of Biomedical Engineering, the University of Texas at Austin. This paper reviews the development of the system, related research in eye movement and laser-tissue interaction, and system implementation and testing. While subsets of these topics have been reported in prior publications, this paper brings the entire evolutionary design of the system together. We also discuss other recent "spinoff" uses of the system technology that have not been reported elsewhere and describe the impact of the latest technical advances on the overall system design.
Traditional imaging sensors for computer vision, such as CCD and CMOS arrays, have well-known limitations with regard to detecting objects that are very small in size (that is, a small object image compared to the pixel size), are viewed in a low contrast situation, are moving very fast (with respect to the sensor integration time), or are moving very small distances compared to the sensor pixel spacing. Any one or a combination of these situations can foil a traditional CCD or CMOS sensor array. Alternative sensor designs derived from biological vision systems promise better resolution and object detection in situations such as these. The patent-pending biomimetic vision sensor based on Musca domestica (the common house fly) is capable of reliable object rendition in spite of challenging movement and low contrast conditions. We discuss some interesting early results of comparing the biomimetic sensor to commercial CCD sensors in terms of contrast and motion sensitivity in situations such as those listed above.
Musca domestica, the common house fly, has a simple yet powerful and accessible vision system. Cajal indicated in 1885 that the fly's vision system is the same as that in the human retina. The house fly has some intriguing vision system features such as fast, analog, parallel operation. Furthermore, it has the ability to detect movement and objects at far better resolution than predicted by photoreceptor spacing, termed hyperacuity. We are investigating the mechanisms behind these features and incorporating them into next generation vision systems. We have developed a prototype sensor that employs a fly-inspired arrangement of photodetectors sharing a common lens. The Gaussian-shaped acceptance profile of each sensor, coupled with overlapping sensor fields of view, provides the necessary configuration for obtaining hyperacuity data. The sensor is able to detect object movement with far greater resolution than that predicted by photoreceptor spacing. We have exhaustively tested and characterized the sensor to determine its practical resolution limit. Our tests, coupled with theory from Bucklew and Saleh (1985), indicate that the limit to the hyperacuity response may only be related to target contrast. We have also implemented an array of these prototype sensors which will allow for two-dimensional position location. These high resolution, low contrast capable sensors are being developed for use as a vision system for an autonomous robot and the next generation of smart wheelchairs. However, they are easily adapted for biological endoscopy, downhole monitoring in oil wells, and other applications.
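The hyperacuity mechanism described above, overlapping Gaussian acceptance profiles whose combined responses localize a target far more finely than the receptor spacing, can be sketched numerically. The receptor spacing, profile width, and centroid-style combination below are illustrative assumptions, not the fly's actual optics or neural circuitry:

```python
import numpy as np

# Four photoreceptors with overlapping Gaussian acceptance profiles.
centers = np.array([0.0, 1.0, 2.0, 3.0])  # receptor optical axes (spacing = 1.0)
sigma = 0.8                               # Gaussian acceptance half-width (assumed)

def responses(target):
    """Response of each receptor to a point target at the given position."""
    return np.exp(-0.5 * ((target - centers) / sigma) ** 2)

def estimate_position(resp):
    """Combine overlapping responses into a position estimate via a
    response-weighted centroid -- one simple stand-in for the neural
    combination of receptor outputs."""
    return np.sum(resp * centers) / np.sum(resp)
```

Because each receptor's response varies smoothly with target position, the ratio of neighboring responses changes even for movements much smaller than the receptor spacing, which is the essence of the hyperacuity result.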
We describe a new development approach to computer vision for a compact, low-power, real-time system whereby we take advantage of preprocessing in a biomimetic vision sensor and a computational strategy using subspace methods and the Hotelling transform in an effort to reduce the computational imaging load. The approach is two-pronged: 1) design the imaging sensor to reduce the computational load as much as possible up front, and 2) employ computational algorithms that efficiently complete the remaining image processing steps needed for computer vision. This strategy works best if the sensor design and the computational algorithm design evolve together as a synergistic, mutually optimized pair. Our system uses the biomimetic “fly-eye” sensor described in previous papers that offers significant preprocessing. However, the format of the image provided by the sensor is not a traditional bitmap and therefore requires innovative computational manipulations to make best use of this sensor. The remaining computational algorithms employ eigenspace object models derived from Principal Component Analysis, and the Hotelling transform to simplify the models. The combination of sensor preprocessing and the Hotelling transform provides an overall reduction in the computational imaging requirements that would allow real-time computer vision in a compact, low-power system.
Two challenges to an effective, real-world computer vision system are speed and reliable object recognition. Traditional computer vision sensors such as CCD arrays take considerable time to transfer all the pixel values for each image frame to a processing unit. One way to bypass this bottleneck is to design a sensor front-end which uses a biologically-inspired analog, parallel design that offers preprocessing and adaptive circuitry that can produce edge maps in real time. This biomimetic sensor is based on the eye of the common house fly (Musca domestica). Additionally, this sensor has demonstrated an impressive ability to detect objects at subpixel resolution. However, the format of the image information provided by such a sensor is not a traditional bitmap transfer and, therefore, requires novel computational manipulations to make best use of this sensor output. The real-world object recognition challenge is being addressed by using a subspace method which uses eigenspace object models created from multiple reference object appearances. In past work, the authors have successfully demonstrated image object recognition techniques for surveillance images of various military targets using such eigenspace appearance representations. This work, which was later extended to partially occluded objects, can be generalized to a wide variety of object recognition applications. The technique is based upon a large body of eigenspace research described elsewhere. Briefly described, the technique creates target models by collecting a set of target images and finding a set of eigenvectors that span the target image space. Once the eigenvectors are found, an eigenspace model (also called a subspace model) of the target is generated by projecting target images onto the eigenspace. New images to be recognized are then projected onto the eigenspace for object recognition.
For occluded objects, we project the image onto reduced-dimensional subspaces of the original eigenspace (i.e., a “subspace of a subspace” or a “sub-eigenspace”). We then measure how close a match we can achieve when the occluded target image is projected onto a given sub-eigenspace. We have found that this technique can result in significantly improved recognition of occluded objects. In order to manage the combinatorial “explosion” associated with selecting the number of subspaces required and then projecting images onto those sub-eigenspaces for measurement, we use a variation on the A* (called “A-star”) search method. The challenge of tying these two subsystems (the biomimetic sensor and the subspace object recognition module) together into a coherent and robust system is formidable. It requires specialized computational image and signal processing techniques that will be described in this paper, along with preliminary results. The authors believe that this approach will result in a fast, robust computer vision system suitable for the non-ideal real-world environment.
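The eigenspace modeling and projection steps described above can be sketched as follows. The image dimensions, number of reference views, and subspace size are toy values, and the nearest-projection matching rule is one simple instance of the recognition step, not the authors' full pipeline (the sub-eigenspace search for occlusion handling is omitted):

```python
import numpy as np

# Build an eigenspace from a set of reference "images" (flattened vectors):
# find eigenvectors spanning the image space via SVD of the centered data,
# then project reference and query images onto that subspace.
rng = np.random.default_rng(0)
train = rng.normal(size=(10, 64))          # 10 reference images, 64 pixels each
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
basis = Vt[:5]                              # keep the 5 leading eigenvectors

def project(img):
    """Coordinates of an image in the eigenspace (subspace model)."""
    return basis @ (img - mean)

model = project(train[3])                   # model = projection of one target view
query = train[3] + rng.normal(scale=0.01, size=64)  # slightly perturbed view
```

Recognition then reduces to comparing low-dimensional projections: a query matching a modeled target lies near that target's projection, while unrelated images project elsewhere in the subspace.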
A system for robotically assisted retinal surgery has been developed to rapidly and safely place lesions on the retina for photocoagulation therapy. This system provides real-time, motion stabilized lesion placement for typical irradiation times of 100 ms. The system consists of three main subsystems: a global, digital-based tracking subsystem; a fast, local analog tracking subsystem; and a confocal reflectance subsystem to control lesion parameters dynamically. We have reported on these subsystems in previous SPIE presentations. This paper concentrates on the development of the second hybrid system prototype. Considerable progress has been made toward reducing the footprint of the optical system, simplifying the user interface, fully characterizing the analog tracking system and using measurable lesion reflectance growth parameters to develop a noninvasive method to infer lesion depth. This method will allow dynamic control of laser dosimetry to provide similar lesions across the non-uniform retinal surface. These system improvements and progress toward a clinically significant system are covered in detail within this paper.
A system for robotically assisted retinal surgery has been developed to rapidly and safely place lesions on the retina for photocoagulation therapy. This system provides real-time, motion stabilized lesion placement for typical irradiation times of 100 ms. The system consists of three main subsystems: a global, digital-based tracking subsystem; a fast, local analog tracking subsystem; and a confocal reflectance subsystem, the first two of which have been described in previous SPIE presentations. This paper concentrates on the development of the confocal reflectance subsystem and its integration into the overall photocoagulation system. Specifically, our goal was to use measurable lesion reflectance growth curve parameters to develop a noninvasive method to infer lesion depth. This method will allow dynamic control of laser dosimetry to provide similar lesions across the non-uniform retinal surface.
A new system for robotically assisted retinal surgery requires real-time signal processing of the reflectance signal from small targets on the retina. Laser photocoagulation is used extensively by ophthalmologists to treat retinal disorders such as diabetic retinopathy and retinal breaks. Currently, the procedure is performed manually and suffers from several drawbacks which a computer-assisted system could alleviate. Such a system is under development that will rapidly and safely place multiple therapeutic lesions at desired locations on the retina in a matter of seconds. This system provides real-time, motion-stabilized lesion placement for typical clinical irradiation times. A reflectance signal from a small target on the retina is used to derive high-speed tracking corrections to compensate for patient eye movement by adjusting the laser pointing angles. Another reflectance signal from a different small target on the retina is used to derive information to control the laser irradiation time which allows consistent lesion formation over any part of the retina. This paper describes the electro-optical system which dynamically measures the two reflectance signals, determines the appropriate reflectance parameters in real time, and controls laser pointing and irradiation time to meet the stated requirements.
Laser photocoagulation is used extensively by ophthalmologists to treat retinal disorders such as diabetic retinopathy and retinal breaks and tears. Currently, the procedure is performed manually and suffers from several drawbacks: it often requires many clinical visits, it is very tedious for both patient and physician, the laser pointing accuracy and safety margin are limited by a combination of the physician's manual dexterity and the patient's ability to hold their eye still, and there is a wide variability in retinal tissue absorption parameters. A computer-assisted hybrid system is under development that will rapidly and safely place multiple therapeutic lesions at desired locations on the retina in a matter of seconds. In the past, one of the main obstacles to such a system has been the ability to track the retina and compensate for any movement with sufficient speed during photocoagulation. Two different tracking modalities (digital image-based tracking and analog confocal tracking) were designed and tested in vivo on pigmented rabbits. These two systems are being seamlessly combined into a hybrid system which provides real-time, motion stabilized lesion placement for typical irradiation times (100 ms). This paper will detail the operation of the hybrid system and efforts toward controlling the depth of coagulation on the retinal surface.
KEYWORDS: Eye, Analog electronics, Retina, Confocal microscopy, Optical tracking, Reflectometry, Reflectivity, In vivo imaging, Laser coagulation, Argon ion lasers
We describe initial in vivo experimental results of a new hybrid digital and analog design for retinal tracking and laser beam control. An overview of the design is given. The results show in vivo tracking rates which exceed the equivalent of 38 degrees per second in the eye, with automated lesion pattern creation. Robotically-assisted laser surgery to treat conditions such as diabetic retinopathy and retinal breaks may soon be realized under clinical conditions with requisite safety using standard video hardware and inexpensive optical components based on this design.
Researchers at the USAF Academy and the University of Texas are developing a computer-assisted retinal photocoagulation system for the treatment of retinal disorders (e.g., diabetic retinopathy, retinal tears). Currently, ophthalmologists manually place therapeutic retinal lesions, an acquired technique that is tiring for both the patient and physician. The computer-assisted system under development can rapidly and safely place multiple therapeutic lesions at desired locations on the retina in a matter of seconds. Separate prototype subsystems have been developed to control lesion depth during irradiation and lesion placement to compensate for retinal movement. Both subsystems have been successfully demonstrated in vivo on pigmented rabbits using an argon continuous wave laser. Two different design approaches are being pursued to combine the capabilities of both subsystems: a digital imaging-based system and a hybrid analog-digital system. This paper will focus on progress with the digital imaging-based prototype system. A separate paper on the hybrid analog-digital system, 'Hybrid Retinal Photocoagulation System', is also presented in this session.
The initial experimental results of a new hybrid digital and analog design for retinal tracking and laser beam control are described. The results demonstrate tracking rates that exceed the equivalent of 60 deg per second in the eye, with automatic creation of lesion patterns and robust loss of lock detection. Robotically assisted laser surgery to treat conditions such as diabetic retinopathy and retinal tears can soon be realized under clinical conditions with requisite safety using standard video hardware and inexpensive optical components.
We describe initial experimental results of a new hybrid digital and analog design for retinal tracking and laser beam control. Initial results demonstrate tracking rates which exceed the equivalent of 50 degrees per second in the eye, with automatic lesion pattern creation and robust loss of lock detection. Robotically assisted laser surgery to treat conditions such as diabetic retinopathy, macular degeneration, and retinal tears can now be realized under clinical conditions with requisite safety using standard video hardware and inexpensive optical components.
Successful retinal tracking subsystem testing results in vivo on rhesus monkeys using an argon continuous wave laser and an ultra-short pulse laser are presented. Progress on developing an integrated robotic retinal laser surgery system is also presented. Several interesting areas of study have developed: (1) 'doughnut'-shaped lesions that occur under certain combinations of laser power, spot size, and irradiation time, complicating measurements of central lesion reflectance; (2) the optimal retinal field of view to achieve simultaneous tracking and lesion parameter control; and (3) integrated system implementation using either a fully digital tracker or a hybrid analog/digital tracker based on confocal reflectometry. These areas are investigated in detail in this paper. The hybrid system warrants a separate presentation and appears in another paper at this conference.
Researchers at the University of Texas at Austin's Biomedical Engineering Laser Laboratory and the U. S. Air Force Academy’s Department of Electrical Engineering are developing a computer-assisted prototype retinal photocoagulation system. The project goal is to rapidly, precisely, and automatically place laser lesions in the retina for the treatment of disorders such as diabetic retinopathy and retinal tears while dynamically controlling the extent of the lesion. Separate prototype subsystems have been developed to control lesion parameters (diameter or depth) using lesion reflectance feedback and lesion placement using retinal vessels as tracking landmarks. Successful subsystem testing results in vivo on pigmented rabbits using an argon continuous wave
laser are presented. A prototype integrated system design to simultaneously control lesion parameters and
placement at clinically significant speeds is provided.
KEYWORDS: Computing systems, Retina, Prototyping, Argon ion lasers, Control systems, Frame grabbers, Laser systems engineering, Laser applications, Reflectivity, Camera shutters
Researchers at the University of Texas at Austin's Biomedical Engineering Laser Laboratory investigating the medical applications of lasers have worked toward the development of a retinal robotic laser system. The ultimate goal of this ongoing project is to precisely place and control the depth of laser lesions for the treatment of various retinal diseases such as diabetic retinopathy and retinal tears. Researchers at the USAF Academy's Department of Electrical Engineering have also become involved with this research due to similar interests. Separate low speed prototype subsystems have been developed to control lesion depth using lesion reflectance feedback parameters and lesion placement using retinal vessels as tracking landmarks. Both subsystems have been successfully demonstrated in vivo on pigmented rabbits using an argon continuous wave laser. Work is ongoing to build a prototype system to simultaneously control lesion depth and placement. The instrumentation aspects of the prototype subsystems were presented at SPIE Conference 1877 in January 1993. Since then our efforts have concentrated on combining the lesion depth control subsystem and the lesion placement subsystem into a single prototype capable of simultaneously controlling both parameters. We have designated this combined system CALOSOS, for Computer Aided Laser Optics System for Ophthalmic Surgery. An initial CALOSOS prototype design is provided. We have also investigated methods to improve system response time. The use of high speed non-standard frame rate CCD cameras and high speed local bus frame grabbers hosted on personal computers is being investigated. A review of system testing in vivo to date is provided in SPIE Conference proceedings 2374-49 (Novel Applications of Lasers and Pulsed Power, Dual-Use Applications of Lasers: Medical session).
KEYWORDS: Prototyping, In vivo imaging, Retina, Pulsed laser operation, Argon ion lasers, Control systems, Laser systems engineering, Laser tissue interaction, Computing systems, Laser applications
Researchers at the University of Texas at Austin's Biomedical Engineering Laser Laboratory investigating the medical applications of lasers have worked toward the development of a retinal robotic laser system. The overall goal of the ongoing project is to precisely place and control the depth of laser lesions for the treatment of various retinal diseases such as diabetic retinopathy and retinal tears. Researchers at the USAF Academy's Department of Electrical Engineering and the Optical Radiation Division of Armstrong Laboratory have also become involved with this research due to similar related interests. Separate low speed prototype subsystems have been developed to control lesion depth using lesion reflectance feedback parameters and lesion placement using retinal vessels as tracking landmarks. Both subsystems have been successfully demonstrated in vivo on pigmented rabbits using an argon continuous wave laser. Work is ongoing to build a prototype system to simultaneously control lesion depth and placement. Following the dual-use concept, this system is being adapted for clinical use as a retinal treatment system as well as a research tool for military laser-tissue interaction studies. Specifically, the system is being adapted for use with an ultra-short pulse laser system at Armstrong Laboratory and Frank J. Seiler Research Laboratory to study the effects of ultra-short laser pulses on the human retina. The instrumentation aspects of the prototype subsystems were presented at SPIE Conference 1877 in January 1993. Since then our efforts have concentrated on combining the lesion depth control subsystem and the lesion placement subsystem into a single prototype capable of simultaneously controlling both parameters. We have designated this combined system CALOSOS for Computer Aided Laser Optics System for Ophthalmic Surgery. We have also investigated methods to improve system response time. 
Use of high speed nonstandard frame rate CCD cameras and high speed frame grabbers hosted on personal computers featuring the 32 bit, 33 MHz PCI bus has been investigated. Design details of an initial CALOSOS prototype are provided in SPIE Conference proceedings 2396B-32 (Biomedical Optics Conference, Clinical Laser Delivery and Robotics Session). This paper will review in vivo testing to date and detail planned system upgrades.