Out of all possible multiprocessor interconnection schemes, the time-shared bus has some advantages for hardware realisations. Not only is it one of the simplest and cheapest ways to tie processors together, but it is also an ideal interconnection scheme if one wants to keep the structure flexible and modular. On the other hand, the main disadvantage of the time-shared bus is its limited bandwidth. Especially in image processing, this can be very troublesome. This paper explores the possibilities of a time-shared bus in this field of application. A process is divided into a set of processors, each with a specified number of inputs and outputs. Furthermore, each processor is determined by a set of delays between these inputs and outputs. The model is characterised by four parameters: the delays per processor, the constancy of the delays, the use or non-use of internal memory in a processor, and whether or not the operations on a processor are pipelined. These parameters influence the complexity and the effectiveness of the hardware. Using them to classify different hardware approaches, we develop a hardware definition of a time-shared bus that optimises the use of that bus in order to diminish the disadvantage of the limited bandwidth. An example of a process, constructed by putting processors in pipeline and/or in parallel, illustrates the possibilities.
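As a minimal sketch of the model (not the authors' notation, and with hypothetical field names), the four parameters that characterise each processor attached to the time-shared bus can be written as a small record:

```python
# Minimal sketch: the four model parameters that characterise a processor
# attached to the time-shared bus (field names are illustrative only).
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class BusProcessor:
    n_inputs: int
    n_outputs: int
    # delays[(i, o)] = number of bus cycles between input i and output o
    delays: Dict[Tuple[int, int], int]
    constant_delays: bool      # are the delays data-independent?
    internal_memory: bool      # does the processor buffer data internally?
    pipelined: bool            # can it accept new inputs before finishing?

# Hypothetical example: a 3x3 convolver with one input stream, one output
# stream, a fixed two-line latency on 512-pixel lines, internal line buffers,
# and pipelined operation.
convolver = BusProcessor(1, 1, {(0, 0): 2 * 512}, True, True, True)
```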
The Dualistic Model for Computer Architecture Description uses a hierarchy of abstraction levels to describe a computer in arbitrary steps of refinement from the top of the user interface to the bottom of the gate level. In our Dualistic Model the description of an architecture may be divided into two major parts called "Concept" and "Realization". The Concept of an architecture on each level of the hierarchy is an Abstract Data Type that describes the functionality of the computer, together with an implementation of that data type relative to the data type of the next lower level of abstraction. The Realization on each level comprises a language describing the means of user interaction with the machine, and a processor interpreting this language in terms of the language of the lower level. The surface of each hierarchical level, the data type and the language, expresses the behaviour of a machine at this level, whereas the implementation and the processor describe the structure of the algorithms and the system. In this model the Principle of Operation maps the object and computational structure of the Concept onto the structures of the Realization. Describing a system in terms of the Dualistic Model is therefore a process of refinement starting at a mere description of behaviour and ending at a description of structure. This model has proven to be a very valuable tool in exploiting the parallelism in a problem, and it is very transparent in discovering the points where parallelism is lost in a special architecture. It has successfully been used in a project on a survey of Computer Architecture for Image Processing and Pattern Analysis in Germany.
Diverse image understanding (IU) system applications and attendant reduced life cycle cost requirements call for real time system architectures which are increasingly flexible, maintainable, reprogrammable, and upgradable. The requirements for algorithm mapping to architecture are sufficiently complex to require automated functional analysis tools. Algorithm complexity dictates higher order language (HOL) system programmability for acceptable software development cycles. Architecture complexity requires automated architecture simulation/emulation support for acceptable hardware development cycles. Honeywell, through external contracts and IR&D, is pursuing the definition and development of real time IU architectures (and algorithms) which cost-effectively support diverse applications including tactical targeting and reconnaissance, scene analysis, robot vision, and autonomous navigation. We are addressing the systems requirements of these applications with designs which are modular, expandable, and software configurable. Our presentation will overview the IU applications we are pursuing, our architectural approach to meeting system real time throughput requirements, and our underlying design methodology for architecture development.
The Environmental Research Institute of Michigan (ERIM) has developed the fourth generation of its cellular image processing systems, known as Cytocomputers®. These systems have been developed over the past nine years, primarily for image analysis and machine vision. Other applications have been demonstrated in image enhancement, computer aided design, and signal processing. The new system utilizes the well-proven and mathematically supported neighborhood processing stages with modules for image storage, multiple-image combinations and high-speed image transfer. These modules support the cellular processor stages in an open, extendable architecture which allows enhancement through the addition of modules optimized for particular transformations. This paper first discusses the special requirements of machine vision and image analysis systems and the types of operations required. Next, an overview of the alternative architectures for image processors is presented, along with a discussion of the tradeoffs and criteria which must be weighed in system design. Finally, the new system is described and examples of its operation are discussed.
This paper introduces a new image processing algorithm, called the pseudomedian filter, which is based on concepts of morphological image processing. The pseudomedian filter possesses many of the properties of the median filter, but it can be computed much more efficiently.
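A minimal sketch of one common formulation of the pseudomedian (averaging the maximin and minimax over overlapping subsequences); the exact operator and its two-dimensional extension are as defined in the paper:

```python
import numpy as np

def pseudomedian_1d(window):
    """Pseudomedian of a 1-D window of odd length L = 2M+1 (one common
    formulation): split the window into the M+1 overlapping subsequences of
    length M+1, then average the maximum of the subsequence minima (MAXIMIN)
    and the minimum of the subsequence maxima (MINIMAX).  Both extrema can be
    computed with running min/max filters, which is why the operator is much
    cheaper than a true median."""
    w = np.asarray(window, dtype=float)
    M = (w.size - 1) // 2
    subs = [w[i:i + M + 1] for i in range(M + 1)]
    maximin = max(s.min() for s in subs)
    minimax = min(s.max() for s in subs)
    return 0.5 * (maximin + minimax)

print(pseudomedian_1d([1, 2, 3, 100, 4]))   # 3.0; the impulse at 100 is
                                            # rejected, as with a true median
```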
A class of processors which perform local neighborhood or cellular operations has well-documented applications in image processing. The architectural feature which unites these processors is their use of special-purpose hardware to compute image transformations based both on the values and on the spatial relationship of pixels in the input image. Typical operations include 3 x 3 convolution and mathematical morphology. Processing elements which compute cellular operations have been incorporated in a variety of system architectures. These range from single-processor, recirculating buffer systems to systems which have thousands of processors in a four-way interconnected mesh. Another feature which unites these processors is that they operate on images in the iconic domain. A new class of non-iconic processor is introduced here which uses image encoding schemes to reduce the number of computations required for certain classes of operations by between one and two orders of magnitude.
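The abstract does not name the encoding scheme; purely as an illustration of how a symbolic, non-iconic representation cuts the work of a cellular-style operation, the sketch below performs a horizontal dilation directly on run-length encoded rows, touching only the runs rather than every pixel:

```python
import numpy as np

def runs_of_row(row):
    """Run-length encode one binary row as a list of (start, length) runs."""
    runs, start = [], None
    for x, v in enumerate(row):
        if v and start is None:
            start = x
        elif not v and start is not None:
            runs.append((start, x - start)); start = None
    if start is not None:
        runs.append((start, len(row) - start))
    return runs

def dilate_runs(runs, width, r=1):
    """Horizontal dilation by r pixels, operating on runs instead of pixels."""
    out = []
    for s, l in runs:
        s, e = max(0, s - r), min(width, s + l + r)
        if out and s <= out[-1][0] + out[-1][1]:      # merge overlapping runs
            ps, pl = out.pop()
            s, e = ps, max(ps + pl, e)
        out.append((s, e - s))
    return out

row = np.array([0, 1, 1, 0, 0, 0, 1, 0], dtype=bool)
print(dilate_runs(runs_of_row(row), len(row)))        # [(0, 4), (5, 3)]
```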
In a multifunctional video tracking system, a number of specific factors must be considered in the presence of noise in the video signal in order to increase the video processing capability for all-weather automatic video control. In fact, since the contrast of the video varies with the background environment, the signal threshold level and the video gain must be adjusted jointly to keep an optimal video source for the processor. In this paper, an adaptive video processor is implemented for all-weather conditions with respect to auto-threshold control and auto-gain control. This particular design utilizes preselection tables stored in ROM, selected in accordance with the actual contrast and brightness variations, to provide an optimal video signal to the tracking system.
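A minimal sketch of the table-lookup idea, with hypothetical bin boundaries and ROM contents: the measured contrast and brightness of a frame index a preselected (gain, threshold) pair.

```python
# Illustrative sketch only: the ROM contents and bin boundaries below are
# hypothetical, not the values used in the paper.  A measured (contrast,
# brightness) pair selects a preloaded (gain, threshold) entry.
import numpy as np

GAIN_THRESH_ROM = {           # (contrast_bin, brightness_bin) -> (gain, threshold)
    (0, 0): (4.0, 20), (0, 1): (3.0, 40),
    (1, 0): (2.0, 35), (1, 1): (1.0, 60),
}

def select_gain_threshold(frame, contrast_edges=(30,), brightness_edges=(128,)):
    contrast = frame.std()
    brightness = frame.mean()
    c_bin = int(np.digitize(contrast, contrast_edges))
    b_bin = int(np.digitize(brightness, brightness_edges))
    return GAIN_THRESH_ROM[(c_bin, b_bin)]

frame = np.random.randint(0, 80, (240, 320))     # a dim, low-contrast frame
print(select_gain_threshold(frame))              # (4.0, 20): high gain, low threshold
```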
Following the development of several high-speed video image processors, it became apparent that no such special-purpose processor offered the low power and small volume needed for applications such as target tracking and image processing. A set of algorithms shared by video target tracking and image processing was identified and incorporated into the design. Given these design requirements, a methodology was established to develop a product that can support a wide range of applications, together with a generic architecture that allows growth and a longer product life cycle.
The evolution of displays over the last decade and a half has seen a linear extrapolation from designs that perform simple raster refresh to designs that also include those algorithms which can be easily and inexpensively added within the basic architecture. Operations that do not fit this 'straitjacket' have been rejected as too expensive or even impossible to implement. The paramount example is geometric warp, whose unavailability or expense has meant that the advantages of interactive digital image processing have not found practical use in many cases, owing to the difficulty of accessing the necessary subset of data, in the correct orientation, for display on the screen. To a user, the limitations are functional, such as not being able to turn a knob and rotate the image, but they have their roots in two definable categories: the base technology has not supported many desired functions, or has not supported them at an acceptable cost, and there has been a rather narrow perspective on the design of image processing displays. The limitations of existing displays are summarized below. Refresh memory design: nearly all existing systems use discrete channels of refresh memory. This has either limited the size of the system, since the cost and practicality of multiplexing all the resultant paths becomes prohibitive, or it has limited the flexibility of the system by forcing groups of memory channels to be multiplexed in mutually exclusive sets. Furthermore, discrete channels constrain the image size to be multiples of the channel size, both spatially and radiometrically. This impacts the operational use of the system and may impact its cost (consider medical applications with, say, one dozen 128x128 images in a case study).
Following the development of several high-speed image processing systems, it has been acknowledged that the only way to achieve truly low power and system compactness is by using customized ICs. In an effort to reduce the development cost and the turnaround time, a unique architecture has been devised to combine semicustom ICs and off-the-shelf devices. The unit is controlled by a microprocessor, which sets the various parameters according to the desired process. The procedure itself is carried out by a miniarray processor composed of an Address Generator IC, off-the-shelf static RAMs, and a Data Processing IC. Using advanced CMOS technology and pipeline architecture, high-speed processing is achieved while power consumption and system volume are kept very low. The paper presents the architecture of the unit, various configurations in which it can be used, and its performance.
Developed herein is a point transformation of signal-dependent noise, such as film-grain, to signal-independent noise for digitized color images. The idea is an extension of a technique, developed by others, for digitized monochrome (black-and-white) images.
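As an illustration of such a point transformation (not the paper's exact formula for film grain), the sketch below stabilises noise whose variance grows linearly with the signal; for colour images the same transform would be applied per channel with separately estimated noise parameters.

```python
import numpy as np

def stabilize(img, a=1.0, b=0.0):
    """Illustrative variance-stabilizing point transform (not the paper's
    exact film-grain formula).  If the noise variance grows linearly with
    the signal, var(n) ~ a*s + b, then t = (2/a)*sqrt(a*s + b) has
    approximately constant (signal-independent) noise variance, so ordinary
    independent-noise restoration filters can be applied afterwards."""
    return (2.0 / a) * np.sqrt(a * img + b)

def unstabilize(t, a=1.0, b=0.0):
    """Inverse point transform back to the original signal domain."""
    return ((a * t / 2.0) ** 2 - b) / a

# For a colour image the transform would be applied per channel, with the
# (a, b) noise parameters estimated separately for each channel.
```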
A major problem in image reconstruction and restoration is ensuring that the obtained solution is acceptable to the user. The reason that a user can reject a computed solution is that the user has access to more a priori knowledge about the image than the solution method can use. This paper will discuss ways in which the amount of a priori knowledge that is actually made available to the restoration algorithm can be increased. The main vehicles for including this additional knowledge are the method of projection onto convex sets and the application of fuzzy set theory.
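A minimal sketch of the projection-onto-convex-sets iteration with two illustrative constraint sets (known pixel values and an amplitude bound); further a priori knowledge would enter as additional projections:

```python
import numpy as np

def pocs_restore(y, known_mask, lo=0.0, hi=255.0, n_iter=50):
    """Minimal POCS sketch with two convex constraint sets (an illustration,
    not the paper's full formulation):
      C1: the restored image agrees with the observed pixels where known_mask
          is True (a priori knowledge of reliable data),
      C2: every pixel lies in the amplitude range [lo, hi].
    Alternately projecting onto C1 and C2 converges to a point in their
    intersection when that intersection is non-empty."""
    x = np.where(known_mask, y, 0.5 * (lo + hi))
    for _ in range(n_iter):
        x = np.where(known_mask, y, x)    # projection onto C1
        x = np.clip(x, lo, hi)            # projection onto C2
        # further a priori knowledge (band limits, smoothness, fuzzy
        # membership constraints, ...) would be added as more projections
    return x
```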
The discrete Hartley transform (DHT) and its fast algorithm were introduced recently. One of the advantages of the DHT is that the forward and inverse transforms have the same form except for a normalization constant. Therefore, the forward and the inverse transforms can be implemented by the same subroutine or hardware when the normalization constant is properly taken care of. In this paper, the applications of the DHT to image compression are studied. The distribution of the DHT coefficients is tested using the Kolmogorov-Smirnov goodness-of-fit test. The compression efficiency of DHT coding is found to be about the same as discrete Fourier transform (DFT) coding. The DHT coding system incorporated with a human visual system model is also studied, and this system offers about the same subjective image quality as a straightforward DCT coding system.
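For reference, a direct O(N^2) sketch of the DHT and its inverse, showing that the two differ only by the 1/N normalisation:

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform, H[k] = sum_n x[n] * cas(2*pi*n*k/N),
    where cas(t) = cos(t) + sin(t).  Written in direct O(N^2) form for
    clarity; the fast algorithm has the usual N log N cost."""
    x = np.asarray(x, dtype=float)
    N = x.size
    n = np.arange(N)
    arg = 2 * np.pi * np.outer(n, n) / N
    return (np.cos(arg) + np.sin(arg)) @ x

def idht(H):
    """Inverse DHT: identical to the forward transform up to the 1/N factor,
    which is why one routine (or one piece of hardware) serves both."""
    return dht(H) / H.size

x = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(idht(dht(x)), x))   # True
```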
A new approach to the quantization of discrete cosine transformed subimage data is discussed. It is shown that physical modeling of the sensor in combination with a power spectrum model of the scene leads to a direct means of generating the bit and variance maps necessary for coefficient quantization. Preliminary results indicate that good image quality can be maintained down to 1/4 bit-per-pixel, depending upon the Optical Transfer Function (OTF) and scene information content involved. A unique feature of this approach is that algorithm training is unnecessary since the bit and variance maps are computed directly from subimage data.
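A sketch of the standard transform-coding bit-allocation rule that such variance maps feed (the variance values below are hypothetical; in the paper they come from the OTF and scene power-spectrum models rather than from training data):

```python
import numpy as np

def bit_map(variances, avg_bits):
    """Standard transform-coding bit-allocation rule (an illustration, not
    necessarily the paper's exact rule):
        b_k = avg_bits + 0.5 * log2(var_k / geometric_mean(var)),
    clipped to non-negative integers."""
    v = np.asarray(variances, dtype=float)
    gm = np.exp(np.mean(np.log(v)))
    b = avg_bits + 0.5 * np.log2(v / gm)
    return np.clip(np.round(b), 0, None).astype(int)

# Hypothetical 4x4 coefficient-variance map and a 0.25 bit/pixel budget.
var = np.array([[900, 220, 60, 12],
                [220,  80, 20,  5],
                [ 60,  20,  8,  2],
                [ 12,   5,  2,  1]], dtype=float)
print(bit_map(var, avg_bits=0.25))
```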
Multiple-detector scanning systems exhibit striping patterns caused by non-uniform calibration and the analog-to-digital quantization process. Traditional nonlinear destriping is based on cumulative histogram normalization of intraband detectors to their composite statistics for the total image. A new statistical procedure uses the fractional part of the floating point values derived by applying a destriping function to the data as a location address in a two-dimensional randomly generated binary table [1]. This table controls the conversion from floating point to integer output, and replaces the traditional truncation conversion method. Because nonlinear destriping creates an integer look-up table for the normalization process, and intermediate floating point values are not created, revision of the traditional nonlinear destriping algorithm is necessary to incorporate the statistical procedure. Landsat multispectral scanner data acquired on June 2, 1973, were processed with both the traditional and revised nonlinear destriping techniques. Based on quantitative results, the revised destriping was preferred to the traditional destriping. However, within certain image structures the traditional destriping was qualitatively more successful. These results suggested that a different algorithm for traditional nonlinear destriping and another way of incorporating Bernstein's procedure could create a more acceptable and consistent product.
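One plausible reading of the statistical conversion step, with a hypothetical table layout: the fractional part of the destriped floating-point value addresses a random binary table that decides whether to round up or down, instead of always truncating.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 2-D binary table: the probability of a "round up" entry grows
# with the fractional-part row, which makes the conversion unbiased on average.
TABLE_ROWS, TABLE_COLS = 64, 64
BINARY_TABLE = (rng.random((TABLE_ROWS, TABLE_COLS)) <
                (np.arange(TABLE_ROWS)[:, None] + 0.5) / TABLE_ROWS)

def to_integer(destriped, pixel_index):
    """Convert a floating-point destriped value to an integer using the
    fractional part as a row address into the binary table (a sketch of the
    idea, not the paper's exact table layout)."""
    frac, whole = np.modf(destriped)
    row = int(frac * TABLE_ROWS)       # fractional part -> table row
    col = pixel_index % TABLE_COLS     # second address, e.g. pixel position
    return int(whole) + int(BINARY_TABLE[row, col])

# A value of 101.75 rounds up on roughly 75% of pixel positions, instead of
# always truncating to 101 as the traditional conversion would.
print(sum(to_integer(101.75, i) for i in range(TABLE_COLS)) / TABLE_COLS)
```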
Advanced Landsat Sensor (ALS) technology has produced requirements for increasing data rates that may exceed space-to-ground data link capacity, so that identification of appropriate data compression techniques is of interest. Unlike many other applications, Landsat requires information-lossless compression. DPCM, Interpolated DPCM, and error-correcting successive-difference PCM (ESPCM) are compared, leading to the conclusion that ESPCM is a practical, real-time (on-board) compression algorithm. ESPCM offers compression ratios approaching DPCM with no information loss and little or no increase in complexity. Moreover, adaptive ESPCM (AESPCM) yields an average compression efficiency of 84% relative to successive difference entropy, and 97% relative to scene entropy. Compression ratios vary from a low of 1.18 for a high entropy (6.64 bits/pixel) mountain scene to a high of 2.38 for low entropy (2.54 bits/pixel) ocean data. The weighted average lossless compression ratio to be expected, using a representative selection of Landsat Thematic Mapper eight-bit data as a basis, appears to be approximately 2.1, for an average compressed data rate of about 3.7 bits/pixel.
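A sketch of the plain successive-difference step and the entropy bookkeeping behind the quoted ratios (ESPCM's error-correcting refinement is not reproduced here); the reported average of roughly 2.1:1 is simply 8 bits/pixel of raw data against about 3.7 bits/pixel after compression.

```python
import numpy as np

def entropy_bits(values):
    """Zeroth-order entropy in bits/sample."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def successive_difference(line):
    """Plain successive-difference PCM along one scan line: the first sample
    followed by first differences.  (ESPCM adds an error-correcting
    refinement that is not reproduced in this sketch.)"""
    line = np.asarray(line, dtype=int)
    return np.concatenate(([line[0]], np.diff(line)))

# Synthetic scan line: differences have far lower entropy than the raw data.
line = np.cumsum(np.random.default_rng(1).integers(-2, 3, 512)) + 128
print(entropy_bits(line), entropy_bits(successive_difference(line)))

# The weighted-average lossless ratio of ~2.1 corresponds to 8 bits/pixel of
# raw data divided by ~3.7 bits/pixel after compression.
```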
Most of the pattern recognition techniques which are in common use rely solely upon the grey shade patterns which are present within the imagery for recognition of objects. By using photogrammetric techniques in connection with grey shade techniques, both segmentation of the image and recognition of features within the imagery can be simplified and improved for many common types of targets.
A real-time digital scene-based focusing algorithm, which will be implemented on the Honeywell FLIR, is described. Focus merit functions (FMFs) which are optimal for Honeywell FLIR imagery are examined. The algorithm was tested by demonstrating its capability to focus on a static scene.
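As an illustration of the focus-merit-function idea only (not one of the FMFs evaluated for FLIR imagery in the paper), a gradient-energy metric and a simple scan over focus positions:

```python
import numpy as np

def gradient_energy(frame):
    """A generic focus merit function (illustrative only): the sum of squared
    first differences, which peaks when the image is sharpest."""
    f = frame.astype(float)
    gx = np.diff(f, axis=1)
    gy = np.diff(f, axis=0)
    return float((gx ** 2).sum() + (gy ** 2).sum())

def autofocus(capture, focus_positions):
    """Scan the focus positions and keep the one that maximises the merit
    function; `capture(pos)` is assumed to return a frame at that setting."""
    return max(focus_positions, key=lambda p: gradient_energy(capture(p)))
```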
An electronic control system is described which analyzes digitized TV images to simultaneously position 96 time-and-space multiplexed beams for a large KrF laser system. Degradation of position resolution due to the intervals between digitization is discussed, and improvement of this resolution by using inherent system noise is demonstrated. The methods shown resolve arbitrary intensity boundaries to a small fraction of the discrete sample spacing.
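A sketch, with illustrative numbers, of how inherent noise can improve boundary resolution: thresholding a single noiseless frame yields only a whole-sample index, while the mean of the thresholded index over many noisy frames varies smoothly with the sub-sample boundary position.

```python
import numpy as np

def mean_crossing(true_edge, noise_sigma=0.15, threshold=0.5, n_frames=2000):
    """Sketch with illustrative parameters (not the paper's system): each
    frame samples an intensity boundary whose straddling pixel takes an
    intermediate value, adds inherent system noise, and is thresholded to a
    whole-sample crossing index.  Averaging that index over many frames
    gives an estimate that varies smoothly with the sub-sample edge position,
    i.e. the noise lets the boundary be resolved to a fraction of the
    sample spacing."""
    rng = np.random.default_rng(2)
    x = np.arange(32)
    ideal = np.clip(x + 1.0 - true_edge, 0.0, 1.0)   # boundary pixel is partial
    idx = [int(np.argmax(ideal + rng.normal(0, noise_sigma, x.size) >= threshold))
           for _ in range(n_frames)]
    return float(np.mean(idx))

# Distinct, monotonically ordered means for boundaries 0.3 samples apart;
# a single noiseless frame would give only whole-sample indices.
print(mean_crossing(10.3), mean_crossing(10.6))
```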
The paper describes a sequence of algorithms used to perform segmentation of aerial images of natural terrain for the purpose of extracting features pertinent to cartographic applications. Topics include image filtering, labeling, automated editing and refinement of the segmentation within a resolution pyramid. These techniques are considered to be preprocessing activities which will, in general, require some editing by trained cartographers. The objective of this work is to minimize the tedium of feature extraction using algorithms that do not require excessive computational overhead.
A class of features, called "edge features," has been developed and applied to several problems of practical interest in image processing. These features are derived from a vector-valued function of the image called the "edge spectrum." The edge spectrum at coordinate (x, y) of the image describes the distribution of edge directions near (x, y). Several applications of edge features are discussed. One is considered in some detail: identifying friendly aircraft descending for landing on an aircraft carrier. Identification is achieved by measuring wingspan, a good discriminant between the A6, A7, E2C, and F14 aircraft. For this purpose an edge feature was designed for locating the wing tips in the image. Wingspan was converted to physical dimension using range information and the known parameters of the optical system.
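One plausible reading of the edge spectrum (the authors' exact definition may differ): a gradient-magnitude-weighted histogram of edge directions in a neighbourhood of (x, y), from which features such as a wing-tip detector can be built.

```python
import numpy as np

def edge_spectrum(img, x, y, radius=8, n_bins=16):
    """Sketch of an edge spectrum at (x, y): a histogram of local edge
    directions in a neighbourhood of the point, with each pixel weighted by
    its gradient magnitude.  (Not the authors' exact definition.)"""
    f = img.astype(float)
    gy, gx = np.gradient(f)
    y0, y1 = max(0, y - radius), y + radius + 1
    x0, x1 = max(0, x - radius), x + radius + 1
    mag = np.hypot(gx[y0:y1, x0:x1], gy[y0:y1, x0:x1])
    ang = np.mod(np.arctan2(gy[y0:y1, x0:x1], gx[y0:y1, x0:x1]), np.pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-12)   # normalised distribution of directions
```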
A vision machine must locate possible objects before it can identify them. In simple images, where the objects and illumination are known, locating possible objects can be part of, and secondary to, identification. In complex, natural images it is more efficient to use a quick and simple method to locate possible objects first, and then to selectively identify them. This parallels the strategy normally used by the human visual system. We present a theory for how possible objects, called "blobs", will be represented in an image, and explore some measures of importance for these candidate objects. An example algorithm based on this theory can quickly generate a list of possible object locations for the identification computation. We discuss the implementation of this example algorithm on a standard image processing system.
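A sketch of a quick-and-simple candidate locator in this spirit (the smoothing, thresholding rule, and importance measure below are placeholder choices, not the paper's):

```python
import numpy as np
from scipy import ndimage

def candidate_blobs(img, sigma=2.0, k=1.5, max_candidates=20):
    """Illustrative blob locator: smooth the image, keep pixels that deviate
    from the global mean by more than k standard deviations, label the
    connected regions, and rank them by an importance score (area times mean
    contrast).  The resulting short list is what a slower identification
    stage would then examine."""
    f = ndimage.gaussian_filter(img.astype(float), sigma)
    dev = np.abs(f - f.mean())
    mask = dev > k * f.std()
    labels, n = ndimage.label(mask)
    blobs = []
    for i in range(1, n + 1):
        region = labels == i
        area = int(region.sum())
        importance = area * float(dev[region].mean())
        cy, cx = ndimage.center_of_mass(region)
        blobs.append((importance, (cx, cy), area))
    blobs.sort(reverse=True)
    return blobs[:max_candidates]
```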