Convolutional neural networks (CNNs) can use large convolutional kernels to obtain a wide receptive field, achieving long-range context modeling similar to that of transformers with fewer model parameters. However, the complex and dynamic degradation in underwater environments makes it difficult for fixed large-kernel CNNs to adaptively capture multi-scale underwater features and dynamically integrate a broad range of contextual information. To address these issues, we propose the integrated dynamic context network, which adopts multi-scale receptive fields and adaptively processes global contextual information. Its core component, the integrated dynamic context module, uses kernels of several sizes to capture multi-scale features and employs a dynamic selection mechanism that adaptively emphasizes the most critical spatial features while fully exploiting the available contextual information. The proposed dynamic selective feature fusion module screens out redundant features by fusing multi-scale feature maps from the encoder and the preceding decoder layer. Extensive experimental results verify the superior performance of the proposed method in addressing these challenges.
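To make the two mechanisms described above concrete, here is a minimal PyTorch sketch of a selective multi-kernel context module and a gated encoder-decoder fusion block. The class names (IntegratedDynamicContextModule, DynamicSelectiveFeatureFusion), the kernel sizes, and the gating designs are illustrative assumptions; the abstract does not specify the exact architecture.

```python
# Illustrative sketch only; module names, kernel sizes, and gating design
# are assumptions, not the paper's published architecture.
import torch
import torch.nn as nn


class IntegratedDynamicContextModule(nn.Module):
    """Captures multi-scale features with several kernel sizes, then applies
    a dynamic selection (softmax attention over branches) to emphasize the
    most informative scale, in the spirit of selective-kernel networks."""

    def __init__(self, channels, kernel_sizes=(3, 7, 11), reduction=4):
        super().__init__()
        # One depthwise convolution branch per kernel size.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels)
            for k in kernel_sizes
        )
        hidden = max(channels // reduction, 8)
        # Small gating network: pooled descriptor -> per-branch logits.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, hidden, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels * len(kernel_sizes), 1),
        )
        self.num_branches = len(kernel_sizes)
        self.channels = channels

    def forward(self, x):
        # Stack branch outputs: (B, K, C, H, W).
        feats = torch.stack([b(x) for b in self.branches], dim=1)
        logits = self.gate(x).view(
            x.size(0), self.num_branches, self.channels, 1, 1
        )
        # Softmax over branches selects among kernel sizes per channel.
        weights = torch.softmax(logits, dim=1)
        return (weights * feats).sum(dim=1) + x  # residual connection


class DynamicSelectiveFeatureFusion(nn.Module):
    """Fuses an encoder skip feature with the previous decoder feature,
    using a learned spatial mask to screen out redundant responses."""

    def __init__(self, channels):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels * 2, channels, 3, padding=1),
            nn.Sigmoid(),
        )
        self.proj = nn.Conv2d(channels * 2, channels, 1)

    def forward(self, enc_feat, dec_feat):
        both = torch.cat([enc_feat, dec_feat], dim=1)
        m = self.mask(both)  # per-pixel, per-channel gate in [0, 1]
        fused = m * enc_feat + (1.0 - m) * dec_feat
        return self.proj(torch.cat([fused, dec_feat], dim=1))


if __name__ == "__main__":
    # Shape check with a hypothetical 32-channel feature map.
    enc = torch.randn(1, 32, 64, 64)
    dec = torch.randn(1, 32, 64, 64)
    ctx = IntegratedDynamicContextModule(32)(enc)
    out = DynamicSelectiveFeatureFusion(32)(ctx, dec)
    print(out.shape)  # torch.Size([1, 32, 64, 64])
```

The softmax over branch logits mirrors selective-kernel networks: each channel learns a data-dependent mixture over kernel sizes, which is one plausible reading of the "dynamic selection mechanism" the abstract describes.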
Keywords: image enhancement, image processing, convolution, RGB color model, visualization, education and training, feature fusion