Edge-preserving image smoothing is essential for computational imaging. Traditional filters are driven by low-level image gradients; however, gradient-defined edges do not align with the contours perceived by humans, which limits such filters. We propose an image filter based on a soft clustering model that combines high-level semantics (derived from instance segmentation) with low-level features (intensities). The proposed filter first performs soft clustering on the input image over the high- and low-level features to derive affinity matrices, which are then fused for image smoothing. Experimental results demonstrate the advantages of the proposed filter in a variety of applications, including image smoothing, flash/no-flash fusion, detail enhancement, image dehazing, and depth upsampling. The filter is also efficient, taking 1.75 s to process a 1-megapixel color image on a modern desktop.
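The affinity-fusion idea above can be illustrated with a minimal sketch. This is not the paper's implementation: the function names (`gaussian_affinity`, `fused_smooth`), the Gaussian affinity kernels, the elementwise-product fusion, and the 1D signal standing in for an image are all illustrative assumptions. A low-level affinity from intensities and a high-level affinity from (segmentation-derived) labels are fused and applied as normalized smoothing weights:

```python
import numpy as np

def gaussian_affinity(feat, sigma):
    """Pairwise Gaussian affinity between feature vectors (N x d)."""
    d2 = ((feat[:, None, :] - feat[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fused_smooth(intensity, labels, sigma_i=0.1, sigma_s=0.5):
    """Illustrative fusion of a low-level (intensity) and a high-level
    (semantic label) affinity matrix, followed by weighted averaging."""
    n = intensity.size
    a_low = gaussian_affinity(intensity.reshape(n, 1).astype(float), sigma_i)
    a_high = gaussian_affinity(labels.reshape(n, 1).astype(float), sigma_s)
    fused = a_low * a_high                      # fuse the two affinities
    w = fused / fused.sum(axis=1, keepdims=True)  # row-normalize to weights
    return (w @ intensity.reshape(n, 1)).reshape(intensity.shape)

# Smoothing stays within segments: the jump at the semantic boundary survives.
signal = np.array([0.10, 0.12, 0.11, 0.90, 0.88, 0.91])
segments = np.array([0, 0, 0, 1, 1, 1])
smoothed = fused_smooth(signal, segments)
```

Because both affinities are small across the segment boundary, pixels are averaged only with same-segment neighbors, which is what preserves the perceptual edge.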
Edge-preserving image smoothing is a fundamental tool in computational photography and graphics. It aims to suppress insignificant details while preserving salient structures. The classical L0 filter provides an elegant framework for a variety of applications; however, it tends to over-sharpen salient edges, causing gradient reversals and color deviations. We propose a solution to the objective of minimizing the sum of squared errors regularized by the L0-norm of the image gradients. The proposed solution follows an iterative strategy in which each iteration is an optimization problem with truncated L1 regularization, solved efficiently by combining the alternating direction method of multipliers (ADMM) with Fourier-domain optimization. The L1-norm limits aggressive modifications of the gradients during the iterations, which alleviates the artifacts of the classical L0 filter. Experimental results indicate that the proposed method achieves superior performance in various applications, including texture removal, detail enhancement, HDR tone mapping, and compression-artifact removal. The filter can be conveniently implemented on the GPU, processing a 720p color image in 0.51 s on an NVIDIA GTX 1080.
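For context, the classical L0 baseline that the abstract builds on alternates a hard-thresholding step on auxiliary gradient variables with a Fourier-domain least-squares solve (the half-quadratic splitting of Xu et al.). The 1D sketch below, with a hypothetical `l0_smooth_1d` and circular boundary handling, illustrates that baseline only, not the proposed truncated-L1/ADMM method:

```python
import numpy as np

def l0_smooth_1d(f, lam=0.02, beta_max=1e5, kappa=2.0):
    """Classical 1D L0 gradient smoothing (half-quadratic splitting):
    minimize |u - f|^2 + lam * ||grad u||_0, circular boundaries."""
    n = f.size
    u = f.astype(float).copy()
    # circular forward-difference operator as a convolution kernel
    grad = np.zeros(n)
    grad[0], grad[-1] = -1.0, 1.0
    fg = np.fft.fft(grad)
    ff = np.fft.fft(f)
    beta = 2 * lam
    while beta < beta_max:
        du = np.roll(u, -1) - u                       # forward differences
        h = np.where(du ** 2 > lam / beta, du, 0.0)   # L0 hard threshold
        # closed-form Fourier solve for u given h
        fu = (ff + beta * np.conj(fg) * np.fft.fft(h)) / (1 + beta * np.abs(fg) ** 2)
        u = np.real(np.fft.ifft(fu))
        beta *= kappa
    return u
```

Small gradients fall below the `lam / beta` threshold and are zeroed, so fine texture is flattened while large jumps survive; it is exactly this all-or-nothing thresholding that the abstract's truncated-L1 reformulation moderates to avoid gradient reversals.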
In this paper, we propose a novel method for efficient rendering of circular depth-of-field (DOF) effects. Like most existing work, we model DOF rendering as filtering with spatially varying circular kernels. To avoid direct computation, which is prohibitively expensive, we approximate each circular kernel with multiple square kernels tilted at different angles. Integral images are then applied to accelerate filtering with the square kernels. To avoid intensity leakage and edge discontinuity, both the integer and fractional parts of the disparity are considered when computing the spatially varying kernel sizes, i.e., the radii of the circles of confusion (COC). Finally, the circular DOF effect is obtained by combining the results of filtering with square kernels at the different angles. Experimental results suggest that the proposed method produces realistic DOF effects with competitive running time.
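The core primitive here, filtering with a spatially varying square kernel in O(1) per pixel via an integral image (summed-area table), can be sketched as follows. The function name `box_filter_sat` and the per-pixel `radius_map` (standing in for COC radii from disparity) are illustrative assumptions; tilted kernels, fractional radii, and the angle-wise combination are omitted:

```python
import numpy as np

def box_filter_sat(img, radius_map):
    """Spatially varying box filter using a summed-area table (integral
    image): each output pixel averages a (2r+1) x (2r+1) window in O(1)."""
    h, w = img.shape
    # integral image padded with a zero first row/column for easy indexing
    sat = np.zeros((h + 1, w + 1))
    sat[1:, 1:] = img.cumsum(0).cumsum(1)
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            r = int(radius_map[y, x])           # per-pixel kernel radius
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            # window sum from four corner lookups of the integral image
            s = sat[y1, x1] - sat[y0, x1] - sat[y1, x0] + sat[y0, x0]
            out[y, x] = s / ((y1 - y0) * (x1 - x0))
    return out
```

Since each window sum costs four table lookups regardless of radius, the overall cost is independent of blur size, which is what makes the square-kernel approximation of circular COCs attractive.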