UAV thermal infrared remote sensing allows land surface temperature (LST) to be acquired at high resolution, but temperature drift during thermal camera data acquisition reduces the reliability of the data. Drift errors cannot be fully removed by the camera's own automatic calibration or by a fixed calibration function. In addition, because thermal radiation is attenuated by the atmosphere between the surface and the low flight altitude, the data must be atmospherically corrected before they can characterize the actual LST. In this paper, the errors caused by temperature drift are removed during data processing by feature matching and linear fitting, yielding more accurate brightness-temperature mosaics. In the retrieval step, the LST is obtained from the principle of thermal radiative transfer, using synchronously measured atmospheric temperature and humidity profiles together with the surface emissivity. The feasibility of the algorithm is verified against continuously measured in situ LST. The results show that the synchronized atmospheric temperature and humidity profiles effectively eliminate the atmospheric influence and that the retrieved LST has high accuracy. The experiments therefore indicate that the proposed method is a feasible way to obtain high-precision LST from UAV thermal infrared remote sensing images.
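The drift-removal step described above can be sketched as a linear fit between brightness temperatures observed at matched feature points in the overlap of two frames. This is a minimal illustration under assumed conditions (function names, a purely linear drift model, and the simulated data are ours, not the paper's):

```python
import numpy as np

def fit_drift_correction(t_ref, t_drift):
    """Fit gain/offset so that gain * t_drift + offset approximates t_ref.

    t_ref, t_drift: 1-D arrays of brightness temperatures (K) sampled at
    matched feature points in the overlap of a reference frame and a
    drift-affected frame.
    """
    gain, offset = np.polyfit(t_drift, t_ref, deg=1)
    return gain, offset

def apply_correction(frame, gain, offset):
    """Apply the fitted linear correction to a whole drifted frame."""
    return gain * frame + offset

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t_ref = 290.0 + 10.0 * rng.random(200)   # reference brightness temps (K)
    t_drift = 1.02 * t_ref + 1.5             # simulated linear drift
    g, o = fit_drift_correction(t_ref, t_drift)
    corrected = apply_correction(t_drift, g, o)
    print(float(np.max(np.abs(corrected - t_ref))))  # essentially zero
```

In the paper's pipeline the matched pairs would come from feature matching between overlapping image strips; here they are simulated so the fit recovers the drift exactly.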
We develop a 3×3-channel bionic compound-eye imaging system (BCEIS), composed of an optical system and a mechanical system, and apply it to target positioning. First, we analyze the field-of-view (FOV) overlap conditions based on the imaging model of the BCEIS. Then, because the relation between the pixel coordinates of the image points and the world coordinates of the target is nonlinear, we design three general regression neural networks (GRNNs) to position the target under three conditions, in which the FOVs of four, six, and nine channels overlap simultaneously (the image point of each channel is obtained under each condition). To overcome the limitations of the GRNN, we sample a group of image points covering the FOV of the system under the three conditions to train the networks, and then use a testing set to verify the reliability of the three GRNNs. The experimental results show that the positioning accuracy is highest in the area where the FOVs of nine channels overlap simultaneously, followed by the area where six channels overlap; it is lowest in the area where only four channels overlap. Furthermore, the GRNN outperforms a BP network in both positioning accuracy and computation time. Using a GRNN to position the target offers a new approach for applications such as object tracking and robot navigation.
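A GRNN of the kind used here is a kernel-weighted regression over the training samples: each prediction is a Gaussian-weighted average of the stored world coordinates. The following is a minimal sketch (class name, the single smoothing parameter sigma, and the toy mapping are assumptions, not the paper's implementation):

```python
import numpy as np

class GRNN:
    """Minimal general regression neural network (Nadaraya-Watson form).

    Training pairs map pixel coordinates of image points in the
    overlapping channels (inputs) to world coordinates of the target
    (outputs). sigma is the single smoothing parameter of the Gaussian
    pattern units.
    """

    def __init__(self, sigma=0.5):
        self.sigma = sigma

    def fit(self, X, Y):
        # A GRNN has no iterative training: it simply stores the samples.
        self.X = np.asarray(X, dtype=float)   # (n, d_in) pixel coordinates
        self.Y = np.asarray(Y, dtype=float)   # (n, d_out) world coordinates
        return self

    def predict(self, Xq):
        Xq = np.atleast_2d(np.asarray(Xq, dtype=float))
        out = np.empty((len(Xq), self.Y.shape[1]))
        for i, x in enumerate(Xq):
            d2 = np.sum((self.X - x) ** 2, axis=1)       # squared distances
            w = np.exp(-d2 / (2.0 * self.sigma ** 2))    # Gaussian weights
            out[i] = w @ self.Y / np.sum(w)              # weighted average
        return out
```

The one-shot `fit` is why the GRNN is faster to build than a BP network, at the cost of prediction time growing with the training-set size.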
A new model that can simultaneously perform blind restoration and segmentation is proposed in this paper. The new model is a variant of the Mumford–Shah model. To improve computational efficiency, the restoration part and the segmentation part are decoupled from the original model. The blind image restoration part uses a variable-exponent regularizer to accurately estimate both piecewise-constant and smooth point spread functions. The segmentation part is the explicit edge indicator function obtained from the original model. The new model can be solved efficiently within the split Bregman framework. Numerical experiments show that the new algorithm produces promising results and is robust to noise.
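To illustrate the solver framework named in the abstract, here is a split Bregman sketch for the simplest related problem, 1-D total-variation denoising, min_u (mu/2)||u − f||² + ||Du||₁. The paper's full model (blind restoration with a variable-exponent regularizer plus segmentation) is much richer; parameter choices and names below are assumptions for illustration only:

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding operator: closed-form solution of the d-subproblem."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman_tv1d(f, mu=10.0, lam=1.0, n_iter=100):
    """Split Bregman iteration for 1-D TV denoising.

    Splitting d = Du turns the nonsmooth problem into an alternation of
    a linear solve (u), a shrinkage (d), and a Bregman update (b).
    """
    n = len(f)
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)   # forward-difference matrix
    A = mu * np.eye(n) + lam * D.T @ D             # u-subproblem normal matrix
    u = f.copy()
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    for _ in range(n_iter):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))  # u-subproblem
        d = shrink(D @ u + b, 1.0 / lam)                      # d-subproblem
        b = b + D @ u - d                                     # Bregman update
    return u
```

The same alternation pattern carries over to the decoupled restoration and segmentation subproblems: each nonsmooth term gets an auxiliary variable, a shrinkage step, and a Bregman variable.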