In recent studies, deep-learning-based object detection in remote sensing images has attracted considerable attention in environmental monitoring, military reconnaissance, and hazard response. However, difficulties such as complex backgrounds, densely packed targets, large scale variations, and non-uniform object distributions push detectors toward large parameter counts and complex network structures, which limit accuracy and slow inference. To address these issues, we propose a lightweight and efficient object detector for remote sensing images. First, an asymmetric convolution block is reconstructed with a visual attention mechanism to reduce complexity and strengthen the feature representation ability. Then, an adaptive feature selection structure is designed to extract discriminative features; by introducing deformable convolution, it adaptively models object shapes and yields a stronger geometric feature representation. To reduce information loss across channels and spatial locations, a hybrid receptive field module is also proposed, which enlarges the receptive field by mixing dilated convolutional layers with different dilation rates. Finally, experimental results on the DIOR dataset show that our approach significantly improves both detection accuracy and running speed.
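The hybrid receptive field idea can be illustrated with a minimal PyTorch sketch: parallel 3x3 convolutions with different dilation rates see different receptive fields, and a 1x1 convolution fuses them. The class name, channel layout, and dilation rates below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class HybridReceptiveField(nn.Module):
    """Sketch of a hybrid receptive field block: parallel 3x3 convolutions
    with increasing dilation rates, fused by a 1x1 convolution."""
    def __init__(self, channels, rates=(1, 2, 4)):
        super().__init__()
        # padding == dilation keeps the spatial size unchanged for 3x3 kernels
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r, bias=False)
            for r in rates
        ])
        self.fuse = nn.Conv2d(channels * len(rates), channels, 1)

    def forward(self, x):
        # Each branch covers a different receptive field; concatenation
        # followed by a 1x1 convolution mixes them across channels.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```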
Infrared small target detection (IRSTD) plays an essential role in many fields, such as air guidance, tracking, and surveillance. However, because infrared small targets are tiny, easily confused with background noise, and lack clear contours and texture information, learning discriminative small target features while suppressing background noise remains a challenging task. In this paper, a context-aware cross-level attention fusion network for IRSTD is proposed. Specifically, a self-attention-induced global context-aware module obtains multilevel attention feature maps with robust modeling of positional relationships. The high-level feature maps, which carry abundant semantic information, are then passed through a multiscale feature refinement module to restore target details and highlight salient features. Feature maps at all levels are fed into a channel and spatial filtering module that compresses redundant information and removes background noise before cross-level feature fusion. Furthermore, to overcome the lack of publicly available datasets, a large-scale multiscene infrared small target dataset with high-quality annotations is constructed. Finally, extensive experiments on both public datasets and our self-built dataset demonstrate the effectiveness of the proposed method and its superiority over other state-of-the-art approaches.
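The abstract does not spell out the internals of the channel and spatial filtering module, but the pattern it describes, reweighting channels and then spatial locations to suppress background responses, can be sketched in CBAM-style PyTorch. All names and hyperparameters below are assumptions for illustration, not the authors' design.

```python
import torch
import torch.nn as nn

class ChannelSpatialFilter(nn.Module):
    """Illustrative channel-then-spatial attention filter (CBAM-style)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        hidden = max(channels // reduction, 1)
        # Channel attention: squeeze spatially, excite per channel.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, hidden, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention from pooled channel statistics.
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_mlp(x)                       # reweight channels
        avg = x.mean(dim=1, keepdim=True)                 # per-pixel channel mean
        mx, _ = x.max(dim=1, keepdim=True)                # per-pixel channel max
        return x * self.spatial(torch.cat([avg, mx], 1))  # suppress background
```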
To effectively detect defects in fabric images with complex textures, this paper proposes a novel detection algorithm based on an end-to-end convolutional neural network. First, proposal regions are generated by a region proposal network (RPN). Then, the Fast Region-based Convolutional Network method (Fast R-CNN) is adopted to determine whether each proposal region extracted by the RPN contains a defect. Finally, Soft-NMS (soft non-maximum suppression) and data augmentation strategies are utilized to improve detection precision. Experimental results demonstrate that the proposed method locates fabric defect regions more accurately than the state of the art and adapts better to a wide variety of fabric images.
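Soft-NMS (Bodla et al., 2017) is a published, standard post-processing step: instead of discarding boxes that overlap the current top-scoring box, it decays their scores. A minimal NumPy sketch of the Gaussian variant follows; the sigma and score threshold are common defaults, not values from this paper.

```python
import numpy as np

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay scores of overlapping boxes instead of
    suppressing them outright.
    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) array."""
    def area(b):
        return (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])

    boxes, scores = boxes.astype(float).copy(), scores.astype(float).copy()
    keep = []
    while scores.size > 0:
        i = scores.argmax()
        keep.append(boxes[i])
        # IoU of the selected box with all remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[:, 0])
        y1 = np.maximum(boxes[i, 1], boxes[:, 1])
        x2 = np.minimum(boxes[i, 2], boxes[:, 2])
        y2 = np.minimum(boxes[i, 3], boxes[:, 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        iou = inter / (area(boxes) + area(boxes[i:i + 1]) - inter)
        # Gaussian score decay, then drop the selected box and weak boxes
        scores = scores * np.exp(-(iou ** 2) / sigma)
        mask = np.ones(scores.size, dtype=bool)
        mask[i] = False
        mask &= scores > score_thresh
        boxes, scores = boxes[mask], scores[mask]
    return np.array(keep)
```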
Fabric defect detection plays an important role in improving the quality of fabric products. In this paper, a novel fabric defect detection method based on visual saliency, using deep features and low-rank recovery, is proposed. First, the network is pretrained in an unsupervised manner on the large MNIST dataset to obtain initial parameters, and then fine-tuned in a supervised manner on a fabric image library using a convolutional neural network (CNN), yielding a more accurate deep model. Second, the fabric images are uniformly divided into image blocks of equal size, and multilayer deep features of each block are extracted with the trained network; all extracted features are then assembled into a feature matrix. Third, low-rank matrix recovery is adopted to decompose the feature matrix into a low-rank matrix representing the background and a sparse matrix representing the salient defects. Finally, an iterative optimal threshold segmentation algorithm segments the saliency maps generated from the sparse matrix to locate the fabric defect area. Experimental results demonstrate that CNN features characterize fabric texture better than traditional hand-crafted features such as LBP and HOG, and that the proposed method accurately detects the defect regions of various fabric defects, even in images with complex textures.
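The low-rank recovery step corresponds to robust PCA / principal component pursuit: the feature matrix F is split into a low-rank part L (the repetitive texture background) and a sparse part S (the defects). A minimal NumPy sketch of the standard inexact-ALM iteration follows; the hyperparameter choices are the usual defaults from the robust PCA literature, not values taken from this paper.

```python
import numpy as np

def robust_pca(F, lam=None, tol=1e-7, max_iter=500):
    """Principal component pursuit: decompose F into low-rank L (background)
    and sparse S (salient defects) via an inexact-ALM/ADMM iteration."""
    m, n = F.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = m * n / (4 * np.abs(F).sum())          # common penalty initialization
    L, S, Y = np.zeros_like(F), np.zeros_like(F), np.zeros_like(F)

    def shrink(X, tau):                         # soft-thresholding operator
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

    for _ in range(max_iter):
        # Singular value thresholding -> low-rank background component
        U, sig, Vt = np.linalg.svd(F - S + Y / mu, full_matrices=False)
        L = U @ np.diag(shrink(sig, 1 / mu)) @ Vt
        # Elementwise soft thresholding -> sparse defect component
        S = shrink(F - L + Y / mu, lam / mu)
        Y = Y + mu * (F - L - S)                # dual variable update
        if np.linalg.norm(F - L - S) <= tol * np.linalg.norm(F):
            break
    return L, S
```

The saliency map is then derived from the column energy of S, with one column per image block.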
Because of the variety and complexity of defects in fabric texture images, fabric defect detection is a challenging problem in machine vision. In this paper, a novel fabric defect detection method based on wavelet transform and background estimation is proposed. First, a feature map of the fabric image is generated by the wavelet transform. Second, multiple backgrounds are estimated by averaging blocks of the feature map, and saliency maps are generated by comparing the blocks with the estimated backgrounds. Third, an integrated saliency map is generated by a fusion method. Finally, the contrast between foreground and background is enhanced by estimating the probability density function of the saliency map, and a threshold segmentation algorithm is adopted to locate the defect area. Experimental results show that the proposed algorithm is superior to state-of-the-art detection methods.
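The block-wise background estimation and saliency comparison can be illustrated with a small NumPy sketch: the mean over all blocks serves as an estimated background, and each block's distance to it becomes its saliency score. The block size and the single-background simplification are illustrative assumptions; the paper estimates multiple backgrounds and adds wavelet features and saliency fusion on top of this step.

```python
import numpy as np

def block_saliency(feature_map, block=16):
    """Toy block-wise background estimation: the mean block approximates the
    background, and each block's L2 distance to it is its saliency score."""
    h, w = feature_map.shape
    h, w = h - h % block, w - w % block              # crop to a block multiple
    fm = feature_map[:h, :w]
    # Reshape into a grid of (block x block) tiles
    blocks = fm.reshape(h // block, block, w // block, block).swapaxes(1, 2)
    background = blocks.mean(axis=(0, 1))            # average block = background
    # Per-block Frobenius distance to the background, broadcast back to pixels
    dist = np.linalg.norm(blocks - background, axis=(2, 3))
    saliency = np.kron(dist, np.ones((block, block)))
    return saliency / (saliency.max() + 1e-12)
```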