Aircraft are valuable military equipment and an important means of transportation, so detecting ground aircraft in optical remote sensing images with target detection technology has significant research and application value. Although progress has been made in this area, fast and reliable ground aircraft detection remains challenging because of the complex backgrounds of remote sensing images, large scale variation, and the small imaged size of the targets. Targeting multi-frame imaging applications such as embedded detection and tracking systems, this thesis proposes an aircraft target detection scheme based on hierarchical screening, which improves detection speed and reduces false alarms. First, by analyzing background characteristics, a candidate-region extraction method based on gray-level variance is adopted and accelerated with integral images and shared computation. Haar-like features are then extracted from the candidate regions and classified by a cascaded AdaBoost classifier. Next, a union-find algorithm merges redundant detections and assigns a confidence score. Finally, inter-frame correlation is used to remove false alarms. Experimental verification demonstrates the effectiveness of the proposed algorithm.
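For illustration, the following is a minimal Python/NumPy sketch of the standard integral-image trick for computing window-wise gray variance, the kind of computation the candidate-region stage above relies on; the window size and the threshold in the usage comment are illustrative assumptions, not values from the thesis.

import numpy as np

def window_variance_map(image, win=32):
    """Local gray variance over win x win windows via integral images.

    Uses Var = E[x^2] - (E[x])^2, where both window sums are read from
    integral images in O(1) per window.
    """
    img = image.astype(np.float64)
    # Integral images of the image and of its square, zero-padded on top/left.
    ii = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    ii2 = np.pad(img ** 2, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

    def window_sum(I):
        # Sum of the win x win window whose top-left corner is (r, c).
        return (I[win:, win:] - I[:-win, win:]
                - I[win:, :-win] + I[:-win, :-win])

    n = float(win * win)
    mean = window_sum(ii) / n
    mean_sq = window_sum(ii2) / n
    return mean_sq - mean ** 2  # one value per top-left window position

# Hypothetical usage: keep windows whose variance exceeds a threshold as
# candidate regions; the threshold would be tuned on real imagery.
# candidates = np.argwhere(window_variance_map(frame, 32) > 150.0)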
Deep convolutional neural networks are increasingly deployed on parallel embedded platforms such as mobile GPUs, AMD APUs, and FPGAs, and new models such as MobileNet have been designed specifically for embedded use. To balance accuracy, speed, and resource requirements while remaining portable across platforms, we developed a software framework that generates OpenCL code which takes full advantage of the available parallel resources and improves the parallel efficiency of the generated kernels. Another advantage is that the framework optimizes and consolidates the network and compiles it offline, making the entire application more efficient. MobileNet replaces standard convolution with depthwise separable convolution. Experiments with MobileNet show that the OpenCL code-generation framework significantly improves efficiency.
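As a rough illustration of why the depthwise separable factorization pays off, the short Python sketch below compares multiply-accumulate counts for a standard convolution and its depthwise-plus-pointwise replacement; the layer dimensions are illustrative assumptions, not taken from the paper.

def conv_costs(h, w, c_in, c_out, k=3):
    """Multiply-accumulate counts for one layer on an h x w feature map.

    Standard convolution:        k*k*c_in*c_out per output pixel.
    Depthwise separable conv:    k*k*c_in (depthwise) + c_in*c_out (1x1 pointwise).
    """
    standard = h * w * k * k * c_in * c_out
    separable = h * w * (k * k * c_in + c_in * c_out)
    return standard, separable, separable / standard

# Illustrative MobileNet-like layer: 112 x 112 feature map, 32 -> 64 channels.
std, sep, ratio = conv_costs(112, 112, 32, 64)
print(f"standard: {std:,}  separable: {sep:,}  ratio: {ratio:.3f}")
# The ratio equals 1/c_out + 1/k^2, roughly an 8-9x reduction for k = 3.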
Because Synthetic Aperture Radar (SAR) operates in all weather and at all times, SAR-based remote sensing has long been a hot research topic. Despite its well-known advantages, SAR's unique imaging mechanism makes feature extraction difficult, a challenge that has attracted extensive research on traditional Automatic Target Recognition (ATR) methods. With the development of deep learning, convolutional neural networks (CNNs) offer another way to detect and recognize targets when a large number of samples is available, but this premise often does not hold when monitoring a specific type of ship. In this paper, we propose a method to enhance the performance of Faster R-CNN with limited samples for detecting and recognizing ships in SAR images.
In this paper, an image classification algorithm for airport areas is proposed, based on the statistical features of synthetic aperture radar (SAR) images and the spatial information of pixels. The algorithm combines a Gamma mixture model with a Markov random field (MRF): the Gamma mixture model produces an initial classification, which is then refined by the MRF using the spatial correlation between pixels. In addition, morphological operations are employed to extract the airport region of interest (ROI), within which suspected aircraft target samples are classified to reduce false alarms and improve detection performance. Finally, the paper presents aircraft target detection results that have been verified by simulation experiments.
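The following Python sketch outlines a Gamma-mixture-plus-MRF pixel classification of the kind described above, assuming moment-matched EM updates for the mixture and a simple ICM pass with a Potts prior for the spatial refinement; the component count, beta, and iteration counts are illustrative assumptions rather than the authors' settings.

import numpy as np
from scipy.stats import gamma
from scipy.ndimage import convolve

def fit_gamma_mixture(x, K=3, iters=50):
    # EM for a K-component Gamma mixture on a 1-D array of pixel intensities.
    edges = np.quantile(x, np.linspace(0, 1, K + 1))
    labels = np.clip(np.searchsorted(edges[1:-1], x), 0, K - 1)
    pi = np.full(K, 1.0 / K)
    shape = np.ones(K)
    scale = np.array([x[labels == k].mean() + 1e-6 for k in range(K)])
    for _ in range(iters):
        # E-step: responsibilities under the current Gamma components.
        resp = np.stack([pi[k] * gamma.pdf(x, shape[k], scale=scale[k])
                         for k in range(K)], axis=1) + 1e-12
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: moment-matched Gamma updates (simpler than exact MLE).
        nk = resp.sum(axis=0)
        pi = nk / x.size
        mean = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mean) ** 2).sum(axis=0) / nk
        shape = mean ** 2 / (var + 1e-12)
        scale = var / (mean + 1e-12)
    return pi, shape, scale

def icm_refine(intensity, pi, shape, scale, beta=1.5, sweeps=5):
    # Iterated conditional modes with a Potts prior on the 4-neighbourhood.
    K = len(pi)
    nll = np.stack([-gamma.logpdf(intensity, shape[k], scale=scale[k])
                    - np.log(pi[k]) for k in range(K)], axis=-1)
    labels = nll.argmin(axis=-1)
    kernel = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
    for _ in range(sweeps):
        energy = np.empty_like(nll)
        for k in range(K):
            # Penalize each label by the number of disagreeing 4-neighbours.
            agree = convolve((labels == k).astype(float), kernel, mode='nearest')
            energy[..., k] = nll[..., k] + beta * (4.0 - agree)
        labels = energy.argmin(axis=-1)
    return labels

# Hypothetical usage on a 2-D SAR intensity image `img`:
# pi, shape, scale = fit_gamma_mixture(img.ravel(), K=3)
# label_map = icm_refine(img, pi, shape, scale)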