Paper | 16 September 2005
Computational complexity of object-based image compression
Mark S. Schmalz and Gerhard X. Ritter
Abstract
Image compression via transform coding applied to small rectangular regions, or encoding blocks, appears to be approaching asymptotic rate-distortion performance. However, an emerging compression technology, called object-based compression (OBC), promises significantly improved performance, with compression ratios ranging from 200:1 to as high as 2,500:1. OBC involves segmentation of image regions, followed by efficient encoding of each region's content and boundary. During decompression, such regions can be approximated by objects from a codebook, yielding a reconstructed image that is semantically equivalent to the corresponding source image but differs from it at the pixel and featural levels. Semantic equivalence between the source and decompressed images facilitates fast decompression through efficient substitutions, albeit at the cost of codebook search during compression. Given small codebooks, OBC holds promise for information-push technologies in which approximate context is sufficient, for example, transmission of surveillance images that convey the gist of a scene. However, OBC is not necessarily useful for applications requiring high accuracy, such as medical image processing, because substitution of source content can be inaccurate at small spatial scales. The cost of segmentation is a significant disadvantage in current OBC implementations; several innovative techniques for region segmentation are discussed in a previous paper [4]. Additionally, as with the vast majority of lossy compression algorithms, tradeoffs occur among representational fidelity, computational cost, and storage requirements. This paper analyzes the computational (time) and storage (space) complexities of several recent OBC algorithms applied to single-frame imagery. A time complexity model is proposed that can be related theoretically to a space complexity model we have previously published [2]. Combined with measurements of representational accuracy described in a companion paper [5], the result supports estimation of a time-space-error bandwidth product that could facilitate dynamic optimization of OBC algorithms. In practice, this would support efficient compression with visually acceptable reconstruction for a wide variety of military and domestic applications.
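To make the encode/decode asymmetry concrete, the following is a minimal sketch of the codebook-substitution idea, assuming fixed-size square blocks in place of true segmented regions and a brute-force mean-squared-error search. The function names (obc_encode, obc_decode) and the codebook layout are illustrative assumptions, not the algorithms analyzed in the paper.

```python
# Minimal sketch of OBC-style codebook substitution (illustrative only).
# Assumes a codebook of shape (K, block, block) and a grayscale image;
# true OBC operates on segmented regions of arbitrary shape.
import numpy as np

def obc_encode(image: np.ndarray, codebook: np.ndarray, block: int = 8) -> np.ndarray:
    """Map each block x block tile to the index of its nearest codebook entry.

    The codebook search dominates compression cost: O(n_tiles * K)
    distance evaluations for a codebook of K entries.
    """
    h, w = image.shape
    indices = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            tile = image[r:r + block, c:c + block].astype(np.float64)
            # Brute-force nearest neighbor under mean-squared error.
            errs = ((codebook - tile) ** 2).sum(axis=(1, 2))
            indices.append(int(errs.argmin()))
    return np.array(indices, dtype=np.int32)

def obc_decode(indices: np.ndarray, codebook: np.ndarray,
               shape: tuple, block: int = 8) -> np.ndarray:
    """Reconstruct by substituting codebook objects: pure table lookup,
    which is why decompression is fast relative to compression."""
    h, w = shape
    out = np.zeros(shape, dtype=codebook.dtype)
    it = iter(indices)
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            out[r:r + block, c:c + block] = codebook[next(it)]
    return out
```

Under these assumptions, each block of b^2 pixels reduces to a single index into a codebook of K entries, so the nominal compression ratio is roughly b^2 x (bits per pixel) / log2(K); for example, 8x8 blocks of 8-bit pixels with K = 256 give about 64:1, and larger segmented regions push the ratio toward the figures cited above. The sketch also shows why the reconstruction is only semantically equivalent: each region is replaced wholesale by its nearest codebook object, so pixel-level and featural-level differences are inherent to the method.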
© 2005 Society of Photo-Optical Instrumentation Engineers (SPIE).
Mark S. Schmalz and Gerhard X. Ritter "Computational complexity of object-based image compression", Proc. SPIE 5915, Mathematics of Data/Image Coding, Compression, and Encryption VIII, with Applications, 59150F (16 September 2005); https://doi.org/10.1117/12.613204
KEYWORDS
Image segmentation, Image compression, Image processing algorithms and systems, Image processing, Virtual colonoscopy, Algorithm development, Computer programming
