To improve the security of defended targets in a region and make efficient use of various types of optoelectronic devices, this paper studies the deployment of optoelectronic devices within the region. A multi-objective optimization model is established with the objectives of maximizing protection effectiveness and minimizing device operating cost, subject to constraints such as device-target visibility conditions and the types and quantities of devices. A non-dominated sorting genetic algorithm II (NSGA-II) improved by Q-learning is designed to solve the model. To address the difficulty that fixed parameter settings cannot adapt to dynamic changes, Q-learning is adopted to adaptively adjust the crossover and mutation probabilities, enabling a more efficient search for Pareto-front solutions close to the global optimum. The correctness of the model and the effectiveness of the algorithm are verified through simulation examples.
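The adaptive parameter scheme can be sketched as a small Q-learning agent whose actions nudge the crossover and mutation probabilities between generations. This is a minimal illustration, not the paper's implementation: the state discretization (three diversity levels), the action set, the reward (change in a solution-quality proxy), and all bounds and rates are illustrative assumptions.

```python
import random

class QParamTuner:
    """Sketch of Q-learning-based adaptive tuning of GA parameters.
    State, actions, and reward are illustrative assumptions, not the
    paper's exact design."""

    def __init__(self, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
        # actions: (delta crossover prob, delta mutation prob)
        self.actions = [(-0.05, 0.0), (0.05, 0.0),
                        (0.0, -0.01), (0.0, 0.01), (0.0, 0.0)]
        self.q = {}                      # (state, action index) -> value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.rng = random.Random(seed)
        self.pc, self.pm = 0.9, 0.1      # crossover / mutation probabilities

    def state(self, diversity):
        # discretize population diversity into three coarse levels
        return 0 if diversity < 0.2 else (1 if diversity < 0.5 else 2)

    def choose(self, s):
        # epsilon-greedy action selection
        if self.rng.random() < self.eps:
            return self.rng.randrange(len(self.actions))
        return max(range(len(self.actions)),
                   key=lambda a: self.q.get((s, a), 0.0))

    def apply(self, a):
        # adjust probabilities, keeping them inside sensible bounds
        dpc, dpm = self.actions[a]
        self.pc = min(0.95, max(0.5, self.pc + dpc))
        self.pm = min(0.3, max(0.01, self.pm + dpm))

    def update(self, s, a, reward, s_next):
        # standard one-step Q-learning update
        best_next = max(self.q.get((s_next, b), 0.0)
                        for b in range(len(self.actions)))
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + self.alpha * (reward + self.gamma * best_next - old)

# Toy driving loop: the NSGA-II generation is replaced by a simulated
# quality signal; reward is the per-generation improvement.
tuner = QParamTuner()
diversity, quality = 0.6, 0.0
for gen in range(50):
    s = tuner.state(diversity)
    a = tuner.choose(s)
    tuner.apply(a)
    new_quality = quality + tuner.pc * 0.01 - abs(tuner.pm - 0.1) * 0.02
    tuner.update(s, a, new_quality - quality, tuner.state(diversity * 0.98))
    quality, diversity = new_quality, max(0.05, diversity * 0.98)
```

In a full implementation the reward would come from an NSGA-II quality indicator (e.g., hypervolume change) rather than the simulated signal used here.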
A 3D reconstruction network, TRC-MVSNet, is proposed to address the poor quality of 3D reconstruction from infrared multi-view images. It combines Transformer-based feature matching with RC-MVSNet. To enhance infrared image quality, a moving-average filter is employed to mitigate noise by averaging the signal within a sliding window. In addition, a Transformer network is used in the feature-matching stage to improve the matching quality across multiple infrared views. Experimental results demonstrate that the TRC-MVSNet model improves accuracy by 0.03 and synthesis by 0.013 compared with the RC-MVSNet model on the DTU dataset. On the SYLU-IMD dataset, TRC-MVSNet achieves high accuracy, high efficiency, and low noise in the 3D reconstruction of infrared targets, outperforming other methods.
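The moving-average denoising step can be sketched as a 1-D sliding-window mean; the window size and edge handling (shrinking the window at the boundaries) are illustrative assumptions, and the paper applies the idea to 2-D infrared images.

```python
def moving_average(signal, window=3):
    """Sliding-window mean filter for a 1-D signal.
    The window shrinks at the boundaries; window size is an assumption."""
    half = window // 2
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# An isolated noise spike is spread out and attenuated:
smoothed = moving_average([1, 1, 10, 1, 1], window=3)
# smoothed == [1.0, 4.0, 4.0, 4.0, 1.0]
```

For images, the same averaging is applied over a 2-D window (or separably along rows and columns), trading a small loss of edge sharpness for reduced sensor noise.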
Vehicle attribute recognition mainly involves two tasks: vehicle object localization and vehicle category recognition. We propose a multi-task cascaded model, MC-CNN, which integrates an improved Faster R-CNN and an improved CNN. The first stage uses the improved Faster R-CNN network (IFR-CNN) for object localization, and the second stage uses the improved CNN network (ICNN) for object recognition. In the IFR-CNN sub-network, a max-pooling and a deconvolution operation are added to the shallow layers of the Faster R-CNN network, so that IFR-CNN can extract features at different levels and enrich the location information of shallow objects. In the ICNN sub-network, we strengthen the extraction of high-level semantic information in the middle and deep layers of the CNN network. Experimental results show that the proposed MC-CNN network achieves better attribute recognition accuracy on the BIT-Vehicle and SYIT-Vehicle datasets than the single Faster R-CNN and CNN models.
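The shallow-layer modification can be sketched as downsampling a feature map with max pooling, upsampling it back, and fusing it with the original shallow map. This is a toy illustration on plain lists: nearest-neighbour upsampling stands in for the learned deconvolution (transposed convolution), and element-wise addition is an assumed fusion rule.

```python
def max_pool2x2(fmap):
    """2x2 max pooling with stride 2 on a 2-D feature map (list of lists)."""
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

def upsample2x(fmap):
    """Nearest-neighbour 2x upsampling, a simple stand-in for the learned
    deconvolution used in IFR-CNN."""
    out = []
    for row in fmap:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def fuse(shallow, upsampled):
    """Element-wise sum of the shallow map and the upsampled coarse map
    (the fusion rule here is an assumption for illustration)."""
    return [[a + b for a, b in zip(r1, r2)]
            for r1, r2 in zip(shallow, upsampled)]

feat = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
coarse = max_pool2x2(feat)            # coarse 2x2 map of local maxima
fused = fuse(feat, upsample2x(coarse))  # shallow detail + coarse context
```

The fused map keeps the fine spatial detail of the shallow layer while injecting the stronger responses of the pooled map, which is the intuition behind adding location information to shallow features.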