Since its first introduction, the lifting scheme has become a powerful method for performing wavelet transforms and many other orthogonal transforms. Especially for integer-to-integer wavelet transforms, the lifting scheme is an indispensable tool for lossless image compression. Earlier work has shown that the number of lifting steps can have an impact on transform performance. The fidelity of integer-to-integer transforms depends entirely on how well they approximate their original wavelet transforms. The predominant source of error is the rounding of the intermediate real-valued result to an integer at each lifting step. Hence, a wavelet transform with a large number of lifting steps automatically increases the approximation error. In the case of lossy compression, the approximation error is less important because it is usually masked by the transform-coefficient quantization error. However, in the case of lossless compression, the compression performance is certainly affected by the approximation error. Consequently, the number of lifting steps in a wavelet transform is a major concern. The new lifting method presented in this paper substantially reduces the number of lifting steps in lossless data compression. Thus, it also significantly reduces the overall rounding error incurred in the real-to-integer conversion at each lifting step. The improvement is more pronounced for integer-to-integer orthogonal wavelet transforms, but it is also significant for integer-to-integer biorthogonal wavelet transforms. In addition, as a dividend, the new lifting method saves memory and decreases signal delay. Many examples based on popular wavelet transforms are included.
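As a point of reference for where the rounding enters, the following minimal sketch implements the conventional two-step reversible 5/3 (LeGall) lifting used for JPEG2000 lossless coding, not the reduced-step method proposed in the paper; the periodic boundary handling via np.roll is a simplifying assumption (JPEG2000 itself uses symmetric extension).

```python
import numpy as np

def lifting_53_forward(x):
    """One level of the reversible 5/3 integer wavelet transform as two
    lifting steps; each step rounds a real-valued result to an integer."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    # Predict step: detail = odd - rounded prediction from even neighbors.
    d = odd - ((even + np.roll(even, -1)) >> 1)    # floor((e[n]+e[n+1])/2)
    # Update step: smooth = even + rounded correction from the details.
    s = even + ((np.roll(d, 1) + d + 2) >> 2)      # floor((d[n-1]+d[n]+2)/4)
    return s, d

def lifting_53_inverse(s, d):
    """Exact inverse: undo the lifting steps in reverse order."""
    even = s - ((np.roll(d, 1) + d + 2) >> 2)
    odd = d + ((even + np.roll(even, -1)) >> 1)
    x = np.empty(2 * len(s), dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

x = np.random.randint(0, 256, size=16)
s, d = lifting_53_forward(x)
assert np.array_equal(lifting_53_inverse(s, d), x)  # lossless round trip
```

Because each lifting step is inverted exactly, the transform is lossless regardless of the rounding; the rounding only affects how well the integer transform approximates its real-valued counterpart, which is why fewer lifting steps mean a smaller approximation error.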
The real advantage of using a wavelet transform for image data compression is its power to adapt to the local statistics of the pixels. In hyperspectral data, many but not all spectral planes are well correlated. In each spectral plane, the spatial data is composed of patches of relatively smooth areas segmented by edges. The smooth areas can be compressed well by a relatively long wavelet transform with a large number of vanishing moments, whereas for the regions around edges, shorter wavelet transforms are preferable. Despite the fact that the local statistics of both the spectral and spatial data change from pixel to pixel, almost all known image data compression algorithms use only one wavelet transform for the entire dataset. For example, the current international still-image data compression standard, JPEG2000, has adopted the 5/3 wavelet transform as the default for lossless compression of all images. No single wavelet filter performs uniformly better than the others. Thus, it would be beneficial to select among many types of wavelet filters based on local activities of the image, so that the wavelet transform is best adapted to the local content of the image. In this paper, we derive a fast adaptive lifting scheme that can easily switch from one wavelet filter to another. The adaptation is performed on a pixel-by-pixel basis, and it needs no bookkeeping overhead. The lifting scheme is known to be a fast and powerful tool for implementing all wavelet transforms; especially for integer-to-integer wavelet transforms, it is indispensable for lossless image compression. Taking advantage of our newly developed lossless lifting scheme, the fast adaptive lifting algorithm presented in this paper not only saves two lifting steps but also improves accuracy compared to the conventional lifting scheme for lossless data compression. Moreover, our simulation results for ten two-dimensional images show that the fast adaptive lifting scheme outperforms both the lossless wavelet transform used in JPEG2000 and the S+P transform used in the lossless SPIHT algorithm.
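To illustrate the adaptation principle in isolation, here is a hedged sketch with a hypothetical gradient-threshold rule; the threshold and the two candidate predictors are our assumptions for illustration, not the paper's filter set. Because the decision is computed from even samples only, the decoder can recompute it exactly, so no bookkeeping side information is needed.

```python
import numpy as np

def adaptive_predict(x, thresh=16):
    """Illustrative adaptive predict step (hypothetical rule): pick a short
    Haar-like predictor near edges and a longer 5/3-like predictor in smooth
    regions, deciding from even samples only. Update step omitted for brevity."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    right = np.roll(even, -1)                  # periodic extension for brevity
    edge = np.abs(right - even) > thresh       # local activity measure
    pred = np.where(edge, even, (even + right) >> 1)
    return even.copy(), odd - pred             # (even samples, details)

def adaptive_unpredict(even, d, thresh=16):
    """Inverse: the decoder recomputes the same decision from the even
    samples alone, so no per-pixel mode flags are transmitted."""
    right = np.roll(even, -1)
    edge = np.abs(right - even) > thresh
    odd = d + np.where(edge, even, (even + right) >> 1)
    x = np.empty(2 * len(even), dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

x = np.random.randint(0, 256, size=16)
even, d = adaptive_predict(x)
assert np.array_equal(adaptive_unpredict(even, d), x)
```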
In this paper, we present a pair of new unitary transforms derived from the symmetric and antisymmetric orthonormal multiwavelet transform and the discrete cosine transform (DCT). This is motivated by the fact that the current international image compression standard, JPEG, uses the DCT, whereas the proposed new standard, JPEG 2000, uses the 9/7 biorthogonal wavelet transform as the default transform. Yet, recent research has reported that the lapped transform, which uses the DCT as a building block, can obtain better performance than the 9/7 biorthogonal wavelet transform. On the other hand, the relationship between the wavelet transform and the DCT is not well known because of their completely different paths of evolution. In this paper we explore the connection between the symmetric and antisymmetric orthonormal multiwavelet transform and the DCT. The known multiwavelet transforms to date have been limited to types consisting of only two scaling (dilation) functions and two wavelet functions. Based on the multiwavelet concept, a pair of new block transforms, similar to the DCT, can be generated. Through extensive simulations we show that both of the new unitary transforms perform better than the ordinary DCT. One of the new unitary transforms is preferred not only for its better performance but also for its nice data-flow architecture. This new unitary transform also leads to a new multiwavelet transform (MWT) with four scaling functions and four wavelet functions.
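For reference, the "ordinary DCT" used as the baseline here is the unitary DCT-II of size N,
\[ C_{kn} = \sqrt{\tfrac{2}{N}}\,\alpha_k \cos\!\Big(\tfrac{(2n+1)k\pi}{2N}\Big), \qquad \alpha_0 = \tfrac{1}{\sqrt{2}},\quad \alpha_k = 1 \ (k \ge 1), \]
which satisfies \( C C^{\mathsf T} = I \); the two new block transforms are compared against this matrix and must satisfy the same unitarity property.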
In this paper we present a true radix-2 discrete cosine transform (DCT) algorithm in both decimation-in-frequency and decimation-in-time forms. To date, there has been strong interest in developing new processing techniques in the DCT domain, because the DCT is widely used in the current international standards for image, audio, and video compression. One important function in this respect is to merge or split DCT blocks in the transform domain. Though many fast DCT algorithms exist, they are not suitable for such applications. Most of the existing fast DCT algorithms are in radix-2 form, in which the DCT matrix is factorized into two half-sized transform matrices. But these two sub-matrices are not the same; at best only one of them is a lower-order DCT. In other words, the existing fast DCT algorithms are not true radix-2 algorithms. This in turn has prevented them from being applied directly to transform-domain processing. The true radix-2 DCT algorithm presented in this paper alleviates this difficulty, and it may provide new techniques for other potential applications.
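The following numerical check (not the paper's algorithm) makes the "not true radix-2" point concrete: in the classical split of an N-point DCT-II, only the even-indexed half reduces to a lower-order DCT of folded input samples.

```python
import numpy as np
from scipy.fft import dct

# Classical radix-2 split of the N-point DCT-II: the even-indexed outputs
# equal a half-size DCT-II of the folded sequence u[n] = x[n] + x[N-1-n],
# but the odd-indexed outputs do not reduce to a plain half-size DCT --
# which is why such factorizations are not "true" radix-2.
N = 8
x = np.random.rand(N)
X = dct(x, type=2)                 # full N-point DCT-II (unnormalized)
u = x[:N // 2] + x[N // 2:][::-1]  # folded sequence
assert np.allclose(X[0::2], dct(u, type=2))  # even half IS a lower-order DCT
```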
KEYWORDS: Fourier transforms, Algorithm development, Digital signal processing, Radon, Aerospace engineering, Signal processing, Data processing, Platinum, Digital image processing, Information technology
The Cooley-Tukey radix-2 Fast Fourier Transform (FFT) is well known in digital signal processing and has been widely used in many applications. However, one important function in signal processing is to merge or split FFT blocks in the Fourier transform domain. The Cooley-Tukey radix-2 decimation-in-frequency FFT algorithm cannot be used for this purpose because twiddle factors must be multiplied into the input data before the half-size FFTs are performed. In other words, the existing radix-2 decimation-in-frequency FFT algorithm is not a true radix-2 algorithm. This in turn has prevented it from being applied directly to transform-domain processing, such as merging or splitting FFT blocks in the Fourier domain. For real input data one may prefer the Fast Hartley Transform (FHT), because it involves only real arithmetic. The same statements regarding the radix-2 decimation-in-frequency FFT apply equally well to the FHT, because the existing FHT algorithms are the real-number equivalents of the complex-number FFT. The true radix-2 decimation-in-frequency FFT and FHT algorithms presented in this paper alleviate the above difficulty, and they may provide new techniques for other potential applications.
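A short numerical check of the first decimation-in-frequency stage makes the obstacle concrete: both half-size transforms operate on combined data, and the odd half additionally needs twiddle factors applied before its FFT, so neither is the FFT of a raw block of input samples.

```python
import numpy as np

# First stage of the Cooley-Tukey radix-2 decimation-in-frequency FFT:
# even-indexed outputs are an N/2-point FFT of a+b, while odd-indexed
# outputs require twiddle factors on a-b BEFORE the half-size FFT.
# Hence FFT blocks cannot be merged or split directly in the transform domain.
N = 16
x = np.random.rand(N) + 1j * np.random.rand(N)
a, b = x[:N // 2], x[N // 2:]
w = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors
X = np.fft.fft(x)
assert np.allclose(X[0::2], np.fft.fft(a + b))
assert np.allclose(X[1::2], np.fft.fft((a - b) * w))
```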
Future multispectral and hyperspectral remote sensing systems and image archives will benefit from effective, high-fidelity image compression techniques. In evaluating the effects of compression upon the data, one must consider not only the qualitative and quantitative effects upon the images themselves, but also those upon the end-user products derived from the imagery through the application of environmental retrieval algorithms. At The Aerospace Corporation, we have developed a fast algorithm for the image compression technique known as the modulated lapped transform (MLT). This compression algorithm obviates many of the artifacts introduced by some of the standard compression techniques. One example is the blocking artifacts of discrete cosine transform (DCT) based algorithms, which include the JPEG compression scheme. The Aerospace MLT technique is a hybrid of the wavelet and DCT techniques. It employs our patented split-radix approach, which is the fastest DCT algorithm known today. In this paper, we compare the Aerospace MLT to JPEG, using cloud imagery and Earth surface scene classification. We also discuss the availability of a cost-effective VLSI hardware implementation of the Aerospace compression algorithm. The modulated lapped transform employs a Peano scan with a split-radix approach to avoid blocking artifacts. It has excellent resistance to errors, and it is amenable to fast processing using a 1-D hardware architecture to process a 2-D image. This technique encapsulates the favorable aspects of wavelet transforms and produces images which, when compressed 10:1 and decompressed, compare very favorably (in error statistics, classification accuracy, and visual quality metrics) to the original uncompressed image.
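For readers unfamiliar with lapped transforms, the following is a minimal sketch of a generic MDCT-style lapped transform with a sine window, the family to which the MLT belongs; it is emphatically not The Aerospace Corporation's patented split-radix implementation, and the block size is an arbitrary choice. The windowed overlap-add round trip illustrates why lapped transforms avoid block-boundary artifacts: adjacent blocks share samples and their aliasing cancels.

```python
import numpy as np

def mdct(block):
    """Forward MDCT: 2M windowed samples in, M coefficients out."""
    M = len(block) // 2
    n, k = np.arange(2 * M), np.arange(M)[:, None]
    return (np.cos(np.pi / M * (n + 0.5 + M / 2) * (k + 0.5)) * block).sum(axis=1)

def imdct(coeffs):
    """Inverse MDCT: M coefficients in, 2M time-aliased samples out."""
    M = len(coeffs)
    n, k = np.arange(2 * M)[:, None], np.arange(M)
    return (2.0 / M) * (np.cos(np.pi / M * (n + 0.5 + M / 2) * (k + 0.5)) * coeffs).sum(axis=1)

M = 8
win = np.sin(np.pi / (2 * M) * (np.arange(2 * M) + 0.5))  # sine window
x = np.random.rand(4 * M)
# Analysis on 50%-overlapped windowed blocks, synthesis by windowed overlap-add.
y = np.zeros_like(x)
for i in range(0, 2 * M + 1, M):
    y[i:i + 2 * M] += win * imdct(mdct(win * x[i:i + 2 * M]))
# Time-domain aliasing cancels across overlaps: interior samples reconstruct.
assert np.allclose(y[M:3 * M], x[M:3 * M])
```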
Among the various image data compression methods, the discrete cosine transform (DCT) has become the most popular for performing gray-scale image compression and decompression. However, the computational burden of a DCT is heavy. For example, in a regular DCT, at least 11 multiplications are required for processing an 8 × 1 image block. The idea of the scaled-DCT is that more than half of the multiplications in a regular DCT are unnecessary, because they can be formulated as scaling factors of the DCT coefficients, and these coefficients can be scaled back in the quantization process. A fast recursive algorithm for computing the scaled-DCT is presented in this paper. The formulations are derived from practical considerations in applying the scaled-DCT algorithm to image data compression and decompression, including the flexibility of processing different DCT block sizes and the actual savings in the required number of arithmetic operations. Due to the recursive nature of this algorithm, a higher-order scaled-DCT can be obtained from two lower-order scaled-DCTs. Thus, a scaled-DCT VLSI chip designed according to this algorithm can process different DCT block sizes under software control. To illustrate the unique properties of this recursive scaled-DCT algorithm, the one-dimensional formulations are presented, with several examples exhibited in signal flow-graph form.
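A small sketch of the scaling-absorption idea; the scale factors shown are those relating scipy's unnormalized and orthonormal DCT-II, used here purely for illustration rather than as the paper's factorization.

```python
import numpy as np
from scipy.fft import dct

# Scaled-DCT idea: compute the DCT only up to known per-coefficient scale
# factors, then fold those factors into the quantizer step sizes so that
# no explicit scaling multiplications remain in the transform itself.
N = 8
x = np.random.rand(N)
f = np.full(N, np.sqrt(1.0 / (2 * N)))   # ortho = f * unnormalized (scipy)
f[0] = np.sqrt(1.0 / (4 * N))
q = np.full(N, 0.05)                     # nominal quantization steps

X_ortho = dct(x, type=2, norm='ortho')   # fully scaled DCT
X_raw = dct(x, type=2)                   # scaled-DCT output, cheaper
# Quantizing the cheaper output with adjusted steps is equivalent:
assert np.allclose(X_ortho / q, X_raw / (q / f))
```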
Two types of frequency characterization of the discrete cosine transform (DCT) are analyzed in detail: one in the natural spatial frequency domain and the other in the eigenspace of the first-order Markov stationary random process. In the past, direct conversion from the Fourier transform to the DCT has been very difficult; it requires either doubling the input data or reshuffling the input data sequence. In this paper we derive a unitary transform that allows one to directly convert a Fourier transform of natural-sequence input into a DCT. Furthermore, though it is known that the DCT asymptotically approaches the Karhunen-Loeve transform (KLT) of the first-order Markov stationary random process, no exact relationship between these two transforms has been given. This paper derives the exact relation and exhibits the frequency characteristics of the DCT. Applications to image data compression and enhancement are also included.
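As context for the "reshuffling" route mentioned above, here is a minimal sketch of the classical even-odd permutation method for obtaining a DCT-II from a same-length FFT; this is the prior approach, not the unitary conversion transform derived in the paper.

```python
import numpy as np
from scipy.fft import dct

# Classical route from DFT to DCT-II: reorder the input (even-indexed
# samples first, then odd-indexed samples reversed), take an ordinary
# same-length FFT, and rotate each bin by a half-sample phase.
N = 8
x = np.random.rand(N)
v = np.concatenate([x[0::2], x[1::2][::-1]])     # reshuffled sequence
V = np.fft.fft(v)
k = np.arange(N)
C = 2 * np.real(np.exp(-1j * np.pi * k / (2 * N)) * V)
assert np.allclose(C, dct(x, type=2))            # unnormalized DCT-II
```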
KEYWORDS: Signal detection, Signal processing, Filtering (signal processing), Electronic filtering, Silicon, Radar, Linear filtering, Electroluminescence, Frequency modulation, Aerospace engineering
A frequency-versus-time-delay (FVTD) technique is introduced to acquire unknown parameters of received chirp signals. This technique can precisely determine a single chirp, clearly distinguish completely overlapped up-chirp and down-chirp signals, and is also capable of resolving various overlapped multiple-chirp signals. The basic implementation concept of this approach is relatively simple. We first employ a bank of bandpass filters to noncoherently process the incoming chirps. The filtered and sampled signals are then sorted into a set of frequency, time, and power distribution sequences, which provide enough information for acquiring the unknown parameters of the received chirp signals. Examples and figures illustrate this procedure.
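A minimal sketch of the filter-bank idea follows, with all parameter choices (sampling rate, band spacing, filter order) being our illustrative assumptions rather than the paper's: each bandpass output peaks when the chirp sweeps through that band, and the slope of center frequency versus peak time recovers the chirp rate.

```python
import numpy as np
from scipy import signal

# Pass a linear chirp through a bank of bandpass filters and record when
# each band's output envelope peaks (noncoherent processing); the slope of
# center frequency versus peak time estimates the chirp rate.
fs = 8000.0
t = np.arange(0, 1.0, 1 / fs)
x = signal.chirp(t, f0=500, t1=1.0, f1=3000)      # 2500 Hz/s up-chirp

centers = np.arange(600, 3000, 200)               # filter-bank center freqs
delays = []
for fc in centers:
    b, a = signal.butter(4, [fc - 80, fc + 80], btype='band', fs=fs)
    env = np.abs(signal.hilbert(signal.lfilter(b, a, x)))
    delays.append(t[np.argmax(env)])              # time the chirp crosses fc

rate = np.polyfit(delays, centers, 1)[0]          # Hz per second
print(f"estimated chirp rate: {rate:.0f} Hz/s")   # close to 2500 Hz/s
```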