In recent years, the application of hyperspectral imaging to the protection and analysis of cultural relics has received widespread attention. However, owing to the limitations of imaging sensors, the spatial resolution of existing hyperspectral images is low, which hinders the hyperspectral digitization of cultural relics. Hyperspectral (HS) and RGB image fusion technology can generate hyperspectral images with high spatial resolution and has gradually become a research hotspot. Inspired by the strong performance of deep learning in various hyperspectral image processing tasks, this paper proposes a hyperspectral image fusion method based on a dual-resolution fusion feature mutual guidance network (DRFFMG). First, two feature extraction networks for HS and RGB image pairs at different resolutions are designed to increase the richness of the extracted features and reduce the loss of original hyperspectral information. Then, the spatial and spectral features extracted by these networks are fused, and a fusion feature mutual guidance module is designed to promote mutual learning between different spatial features through information transmission, effectively reducing spatial distortion. Finally, the desired high-spatial-resolution HS image is recovered from the fused features by an image reconstruction network. Experiments demonstrate that the proposed DRFFMG network produces fusion images that are competitive with, or better than, the state of the art, and that it preserves spectral information while improving spatial resolution.
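The data flow described above (extract features from both inputs, let each branch guide the other, then reconstruct) can be sketched in a minimal NumPy toy model. This is an illustrative sketch only, not the paper's trained network: the random 1x1 projections stand in for learned convolutional branches, and the sigmoid gating in `mutual_guidance` is one hypothetical realization of "mutual guidance" via information transmission; all function names are assumptions.

```python
import numpy as np

def extract_features(img, n_filters=8, seed=0):
    """Toy 'feature extraction': random per-pixel projections with ReLU,
    standing in for a trained convolutional feature extraction network."""
    rng = np.random.default_rng(seed)
    h, w, c = img.shape
    W = rng.standard_normal((c, n_filters)) / np.sqrt(c)
    return np.maximum(img.reshape(-1, c) @ W, 0).reshape(h, w, n_filters)

def upsample(img, scale):
    """Nearest-neighbour upsampling of the low-resolution HS input."""
    return img.repeat(scale, axis=0).repeat(scale, axis=1)

def mutual_guidance(f_a, f_b):
    """Hypothetical mutual-guidance step: each branch is modulated by a
    sigmoid gate computed from the other branch's features, so the two
    branches exchange spatial information."""
    gate_a = 1.0 / (1.0 + np.exp(-f_b.mean(axis=-1, keepdims=True)))
    gate_b = 1.0 / (1.0 + np.exp(-f_a.mean(axis=-1, keepdims=True)))
    return f_a * gate_a, f_b * gate_b

def reconstruct(fused, n_bands, seed=1):
    """Toy image reconstruction: project fused features back to HS bands."""
    rng = np.random.default_rng(seed)
    h, w, c = fused.shape
    W = rng.standard_normal((c, n_bands)) / np.sqrt(c)
    return (fused.reshape(-1, c) @ W).reshape(h, w, n_bands)

# Toy inputs: an 8x8 HS image with 31 bands and a 32x32 RGB image (scale 4).
hs = np.random.rand(8, 8, 31)
rgb = np.random.rand(32, 32, 3)

f_hs = extract_features(upsample(hs, 4))      # spectral branch
f_rgb = extract_features(rgb, seed=2)         # spatial branch
f_hs, f_rgb = mutual_guidance(f_hs, f_rgb)    # guided feature exchange
sr_hs = reconstruct(np.concatenate([f_hs, f_rgb], axis=-1), n_bands=31)

print(sr_hs.shape)  # (32, 32, 31): high-spatial-resolution HS output
```

The sketch only shows the shape bookkeeping: the output carries the RGB image's spatial resolution and the HS image's band count, which is the goal of HS-RGB fusion.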
In the preservation and restoration of murals, labeling and recording the location and extent of paint loss facilitates subsequent restoration work. At present, the most common labeling method is to draw the diseased area manually on an orthophoto map through human-computer interaction. However, this method is not only time-consuming but also yields inconsistent results that depend on each expert's experience. In recent years, the development of artificial intelligence, machine learning, and related technologies has made intelligent labeling through image processing feasible. This paper therefore focuses on mural paint loss and explores an intelligent labeling method, aiming to mark paint loss efficiently and accurately. First, the labeling task is formulated as an image segmentation problem, and a U-Net-based method for labeling mural paint loss is proposed. However, experiments showed that much detailed information is lost when U-Net is applied directly. This paper therefore further proposes a multi-scale detail injection U-Net, comprising a multi-scale module and a mechanism for injecting shallow features into deep features, which extracts richer edge information and improves labeling accuracy. Finally, using the Liao Dynasty murals of Feng Guo Temple in Yi County, Jinzhou City, China, we demonstrate that the proposed method can achieve intelligent labeling of paint loss.
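The two ideas above, a multi-scale module and injection of shallow features into deep features, can be illustrated with a one-level NumPy sketch. This is not the paper's architecture: the pooling/upsampling pair stands in for the U-Net encoder-decoder, the averaging over pooled scales is one hypothetical form of a multi-scale module, and the additive skip is one simple way to "inject" shallow detail; all names are illustrative.

```python
import numpy as np

def pool2(x):
    """2x2 average pooling (the encoder's downsampling, which loses detail)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up2(x):
    """Nearest-neighbour upsampling (the decoder's expansion)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def multi_scale(x):
    """Hypothetical multi-scale module: average the input with versions
    pooled at two coarser scales, mimicking parallel receptive fields."""
    return (x + up2(pool2(x)) + up2(up2(pool2(pool2(x))))) / 3.0

def detail_injection_unet(img):
    """One-level sketch: deep (pooled) features are upsampled and the
    shallow multi-scale features are added back, so edge detail lost by
    pooling is restored before the final segmentation decision."""
    shallow = multi_scale(img)
    deep = pool2(shallow)      # encoder path: detail is lost here
    decoded = up2(deep)        # decoder path: resolution is restored
    return decoded + shallow   # detail injection via the skip addition

mask_logits = detail_injection_unet(np.random.rand(16, 16))
print(mask_logits.shape)  # (16, 16): per-pixel paint-loss score map
```

The point of the sketch is the skip addition: without `+ shallow`, the output is a blurred version of the input, which mirrors the abstract's observation that plain U-Net loses detailed edge information.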