Underwater image enhancement is important in marine engineering and aquatic robotics. Data-driven approaches perform well in this field, yet they still struggle with low contrast, blurring, and color deviation. Moreover, their performance is constrained by the limited availability of paired underwater images, which makes it difficult to capture the nuances of different underwater scenes. To address these challenges, we introduce a semantic attention-guided transfer learning method for stylized underwater image enhancement (SGTL-SUIE), which generates multiple stylized, enhanced images from a single distorted underwater image. Within SGTL-SUIE, a style-filtering module first bridges the domain gap between distorted images and style reference images. A semantic pairing module then further reduces the domain differences between reference and distorted images across semantic categories, guided by semantic information, and produces multiple semantic pairing codes. Taking these codes as input, a transfer enhancement module co-encodes style features with distorted-image features through an encoder, and a decoder network decodes the encoded features into diverse stylized outputs. To validate the proposed method, we conducted qualitative and quantitative evaluations on multiple public datasets. The results demonstrate that SGTL-SUIE outperforms many state-of-the-art approaches and increases the stylistic diversity of the generated images.
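The three-stage pipeline described in the abstract (style filtering, semantic pairing, transfer enhancement) can be caricatured with a toy NumPy sketch. This is an illustrative assumption, not the paper's actual modules: the function names are invented, mean-color distance stands in for the learned style filter, shared segmentation labels stand in for the semantic pairing codes, and per-class mean/std color transfer stands in for the encoder-decoder.

```python
import numpy as np

def style_filter(distorted, references, k=2):
    """Toy stand-in for the style-filtering module: keep the k reference
    images whose global mean color is closest to the distorted input."""
    d_mean = distorted.reshape(-1, 3).mean(axis=0)
    dists = [np.linalg.norm(ref.reshape(-1, 3).mean(axis=0) - d_mean)
             for ref in references]
    return [references[i] for i in np.argsort(dists)[:k]]

def semantic_pairing(seg_d, seg_r):
    """Toy 'semantic pairing code': the labels present in both segmentations."""
    return sorted(set(np.unique(seg_d)) & set(np.unique(seg_r)))

def transfer_enhance(distorted, reference, seg_d, seg_r, pairing):
    """Per-class mean/std color transfer, standing in for the encoder-decoder:
    pixels of each paired semantic class are renormalized to the reference's
    statistics for that class."""
    out = distorted.astype(np.float64).copy()
    for c in pairing:
        m_d, m_r = seg_d == c, seg_r == c
        for ch in range(3):
            src = distorted[..., ch][m_d]
            dst = reference[..., ch][m_r]
            if src.size == 0 or dst.size == 0:
                continue
            out[..., ch][m_d] = ((src - src.mean()) / (src.std() + 1e-6)
                                 * dst.std() + dst.mean())
    return np.clip(out, 0.0, 1.0)

# Usage on random toy data: one distorted image, four candidate styles,
# and coarse 3-class segmentation maps.
rng = np.random.default_rng(0)
distorted = rng.random((32, 32, 3))
refs = [rng.random((32, 32, 3)) for _ in range(4)]
seg_d = rng.integers(0, 3, (32, 32))
seg_r = rng.integers(0, 3, (32, 32))

outputs = []
for ref in style_filter(distorted, refs, k=2):
    codes = semantic_pairing(seg_d, seg_r)
    outputs.append(transfer_enhance(distorted, ref, seg_d, seg_r, codes))
```

The sketch yields one enhanced output per retained style reference, mirroring the paper's claim of multiple stylized results from a single distorted input.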
Keywords: Image enhancement, Semantics, Image quality, Visualization, Image segmentation, Image processing, Image restoration