Low-light images inevitably suffer from degradation during enhancement, such as loss of detail and local overexposure or underexposure. Many existing methods target only one of these issues, leading to suboptimal results. We propose a multistage Laplacian feature fusion network (MLFFNet) that mitigates both degradations simultaneously. MLFFNet employs a pyramid framework that incrementally learns the degradation functions across frequency bands, leveraging Laplacian feature maps at each stage. A key innovation of our approach is the supervised refinement module, which refines features through a dual strategy: an attention mechanism that enriches detail capture, including edges, textures, and colors, and a residual mechanism that adjusts luminance for balanced exposure. Finally, the enhanced image is produced by several channel attention blocks, ensuring superior enhancement. Extensive experiments on various datasets show that MLFFNet outperforms state-of-the-art methods both qualitatively and quantitatively.
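The Laplacian decomposition underlying the pyramid framework can be illustrated with a minimal sketch. Note this is an assumption-laden illustration, not the paper's implementation: it uses simple 2x2 average pooling and nearest-neighbour upsampling as stand-ins for whatever filters MLFFNet actually employs, and the function names (`laplacian_pyramid`, `reconstruct`) are hypothetical.

```python
import numpy as np

def downsample(img):
    # 2x2 average pooling (hypothetical stand-in for the network's downsampling)
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # Nearest-neighbour upsampling back to the finer resolution
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels=3):
    """Split an image into band-pass (Laplacian) maps plus a low-frequency residual.

    Each stage of a pyramid network can then operate on one frequency band.
    """
    bands = []
    current = img
    for _ in range(levels - 1):
        low = downsample(current)
        bands.append(current - upsample(low))  # high-frequency detail band
        current = low
    bands.append(current)  # coarsest low-frequency band
    return bands

def reconstruct(bands):
    # Invert the decomposition: upsample the coarse band and add detail back in
    img = bands[-1]
    for band in reversed(bands[:-1]):
        img = upsample(img) + band
    return img

img = np.random.rand(16, 16)
bands = laplacian_pyramid(img, levels=3)
rec = reconstruct(bands)
print(np.allclose(rec, img))  # True: the decomposition is lossless by construction
```

The decomposition is exactly invertible, which is what lets a multistage network refine each frequency band independently and still recover a full-resolution image by summing the bands back up.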
Keywords: image enhancement; image restoration; light sources and illumination; education and training; visualization; image quality; feature fusion