Low-light images are unsuitable for direct use in computer vision tasks because of their low visibility, and existing low-light image enhancement methods often introduce colour distortion and amplify noise. This paper proposes a low-light image enhancement method based on multi-feature learning. Unlike most methods, which decompose the image into an illumination component and a reflectance component, our learning model designs its features at the pixel level, which keeps the model concise and ensures colour fidelity. Our network learns three categories of image features: global features, local features, and texture features. A loss term based on SSIM ensures that the multiple features are extracted effectively, and a loss term based on the Sobel operator suppresses noise while protecting image details. Subjective and objective experimental results demonstrate the effectiveness of our approach for low-light image enhancement.
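The abstract does not spell out the Sobel-based loss term. As a hedged illustration only, an edge-preservation loss of this general kind can be sketched by comparing the Sobel gradient responses of the enhanced and reference images; the function names and the equal weighting of the two kernel directions are assumptions, not the authors' definitions:

```python
import numpy as np

# Standard 3x3 Sobel kernels (horizontal and vertical gradients).
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2d(img, kernel):
    """Valid-mode 2-D convolution with a small kernel (no padding)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_edge_loss(pred, target):
    """Mean absolute difference between the Sobel gradients of the
    enhanced image (pred) and the reference image (target)."""
    loss = 0.0
    for kernel in (SOBEL_X, SOBEL_Y):
        loss += np.mean(np.abs(conv2d(pred, kernel) - conv2d(target, kernel)))
    return loss / 2.0
```

Identical images yield zero loss; in training, a term of this shape would be combined with an SSIM term and minimized jointly.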
Landsat data are widely used in earth observation, but clouds interfere with many applications of the imagery. This paper proposes a weighted variational gradient-based fusion (WVGBF) method for high-fidelity thin-cloud removal from Landsat images, improving on the variational gradient-based fusion (VGBF) method. VGBF integrates gradient information from a reference band into the visible bands of the cloudy image to restore spatial detail and remove thin clouds; however, it applies the same gradient constraint across the entire image, which causes colour distortion in cloudless areas. In our method, a weight coefficient is introduced into the gradient-approximation term to preserve image fidelity. The distribution of the weight coefficient follows a cloud-thickness map, which is built with Independent Component Analysis (ICA) using multi-temporal Landsat images. Quantitatively, we use the R value to evaluate fidelity in cloudless regions and the metric Q to evaluate clarity in cloudy areas. The experimental results indicate that the proposed method removes thin clouds more effectively while achieving high fidelity.
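The abstract describes the objective only in words. Assuming the data term measures fidelity to the cloudy observation and the weighted term matches gradients to the reference band, the energy being minimized might be sketched as below; the symbol names, forward-difference gradient, and single scalar trade-off `lam` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def grad(img):
    """Forward-difference spatial gradients (edge rows/cols replicated)."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return gx, gy

def wvgbf_energy(I, cloudy, reference, w, lam=1.0):
    """E(I) = ||I - cloudy||^2 + lam * sum_p w(p) * ||grad I(p) - grad R(p)||^2,
    where w(p) is a per-pixel weight derived from the cloud-thickness map:
    small in cloudless areas (preserve colour), large under thin cloud
    (follow the reference band's gradients)."""
    gx_i, gy_i = grad(I)
    gx_r, gy_r = grad(reference)
    fidelity = np.sum((I - cloudy) ** 2)
    grad_term = np.sum(w * ((gx_i - gx_r) ** 2 + (gy_i - gy_r) ** 2))
    return fidelity + lam * grad_term
```

With `w = 0` everywhere the energy reduces to plain data fidelity, which is exactly why uniform-weight VGBF distorts cloudless regions: it pays the gradient penalty even where the cloudy image is already correct.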
Traditional shadow detection methods usually detect shadow areas by applying a single threshold to a shadow feature map. This makes the detection results susceptible to noise, and certain targets (e.g., high-brightness objects and green vegetation) prone to misdetection. In this paper, a shadow detection method based on a pulse coupled neural network (PCNN) is proposed. The model can ignore small differences in pixel values within a region, because the network output depends not only on pixel brightness but also on pixel spatial location. First, a new shadow feature map is built. Then the PCNN model is applied to obtain the optimal detection result by maximum entropy. The experimental results show that the proposed model performs better than single-threshold models.
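The PCNN variant used in the paper is not specified beyond the abstract. A minimal simplified PCNN iteration over a shadow feature map, where each neuron fires once its stimulus-modulated activity exceeds a decaying dynamic threshold and firing neighbours boost nearby activity, could look like the sketch below; the parameter values, wrap-around linking, and omission of the max-entropy iteration selection are illustrative assumptions:

```python
import numpy as np

def link(Y):
    """Linking input: weighted sum of 8-neighbour firings from the last
    step (np.roll wraps at the borders, which is acceptable for a sketch)."""
    L = np.zeros_like(Y)
    for dy, dx, w in [(-1, 0, 1.0), (1, 0, 1.0), (0, -1, 1.0), (0, 1, 1.0),
                      (-1, -1, 0.5), (-1, 1, 0.5), (1, -1, 0.5), (1, 1, 0.5)]:
        L += w * np.roll(np.roll(Y, dy, axis=0), dx, axis=1)
    return L

def pcnn_fire_times(feature, iters=10, beta=0.2, alpha_theta=0.2, v_theta=20.0):
    """Return a map of first-firing iterations (1-based; 0 = never fired).
    Pixels with similar feature values and nearby locations tend to fire
    together, which is what smooths over small in-region differences."""
    S = np.asarray(feature, dtype=float)
    Y = np.zeros_like(S)                  # firing state of the last step
    theta = np.full_like(S, S.max())      # dynamic threshold
    fire_map = np.zeros_like(S)
    for n in range(iters):
        L = link(Y)                       # coupling from fired neighbours
        U = S * (1.0 + beta * L)          # internal activity
        Y = (U > theta).astype(float)
        newly = (Y > 0) & (fire_map == 0)
        fire_map[newly] = n + 1           # record first firing iteration
        theta = np.exp(-alpha_theta) * theta + v_theta * Y  # decay + refractory boost
    return fire_map
```

Brighter regions of the feature map fire in earlier iterations, so a segmentation is read off by choosing the iteration (e.g., by maximum entropy, as in the paper) whose fired set best separates shadow from background.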