Energy load forecasting across multiple buildings is beneficial for energy saving. Currently, most methodologies train a single global model for all buildings, because deep learning models rely on large-scale data. However, the energy data distribution may vary greatly across buildings, and enforcing a global model may waste computing resources. Meanwhile, building energy management involves repeated manual effort to retrain machine learning models on new sensor data. To improve the computing resource utilization of load forecasting model training and the automation of building energy management, a new automatic learning framework is proposed to support automatic building energy data analytics. The machine learning model is customized for each building by an automatic algorithm with efficient model evaluations. The new framework achieves performance comparable to federated energy data learning while consuming fewer computing resources.
KEYWORDS: Deep learning, Data modeling, Transformers, Education and training, Machine learning, Neural networks, Performance modeling, Visualization, Data privacy, Autoregressive models
Building energy consumption grows rapidly with modern urbanization, while the buildings' sensor data also increases explosively. Improving the energy utilization of community buildings is critical for sustainable development and the global climate challenge. However, data isolation imposed by buildings' privacy management prevents large-scale machine learning model training, which may reduce prediction accuracy due to lack of data. Federated building energy learning supports distributed learning through model sharing, so that data privacy concerns are mitigated. In federated learning, model sharing raises a new concern about network resource limitations. Transferring deep learning models across multiple buildings would congest the network and incur high latency in federated training. To improve the efficiency of federated training with fewer resources, a new federated learning algorithm is proposed together with a new deep learning model design. The deep learning model's memory usage is reduced by 80% while energy load forecasting accuracy remains comparable to state-of-the-art methods.
KEYWORDS: Corrosion, Inspection, 3D modeling, Deep learning, Magnetism, Image processing, Metals, 3D image reconstruction, Signal processing, Machine learning
Magnetic flux leakage (MFL) is a widely used nondestructive testing technique in pipeline inspection to detect and quantify defects. In pipeline integrity management, the reconstruction of defects from MFL signals plays a critical role in failure pressure prediction and maintenance decision-making. In current research practice, this reconstruction primarily involves determining defect dimensions, including length, width, and depth, which collectively form a rectangular box. However, this box-based representation potentially leads to conservative assessments of pipeline integrity. To refine the reconstruction results and provide detailed defect information for integrity assessment, a 3-D reconstruction model for pipeline corrosion defects from MFL signals is proposed. In detail, a deep neural network is established to capture the nonlinear relationship between the MFL signals and 3-D defect profiles. In contrast to the limited insight offered by the box profile, the reconstructed 3-D profile in this paper captures the metal loss geometry in more detail. Experiments using field pipeline in-line inspection data demonstrate promising results on both morphology and depth prediction.
Pipeline systems are critical infrastructure for modern economies, serving as the essential means for transporting oil, gas, water, and other fluids. Most of these pipelines are buried underground, making their integrity highly crucial. Because they are buried, these pipelines are subject to stress and prone to material degradation due to corrosion. Corrosion not only reduces the wall thickness of the pipes but also poses severe safety risks and can lead to catastrophic failures and substantial financial losses. Hence, there is an urgent need for accurate predictive models for evaluating pipe wall thickness. This paper addresses this need by exploring machine learning algorithms to monitor corrosion rates so that preventive measures can be taken to ensure pipeline integrity. Four state-of-the-art machine learning algorithms, namely the Convolutional Neural Network (CNN), Gated Recurrent Unit (GRU), Bidirectional Gated Recurrent Unit (Bi-GRU), and Long Short-Term Memory (LSTM), are employed to predict the wall thickness of pipelines accurately. The empirical results show that the LSTM algorithm outperforms its counterparts, achieving a low root mean squared error (RMSE) of 0.0721 mm. Incorporating LSTM-based models into pipeline integrity programs can therefore be a significant step toward safeguarding these critical infrastructures.
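The models above are compared by RMSE, the square root of the mean squared difference between measured and predicted values. A minimal sketch of the metric; the wall-thickness readings below are illustrative, not from the paper's data:

```python
import math

def rmse(actual, predicted):
    """Root mean squared error between two equal-length sequences."""
    assert len(actual) == len(predicted) and len(actual) > 0
    return math.sqrt(
        sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    )

# Hypothetical wall-thickness measurements (mm) vs. model predictions.
measured = [6.10, 6.05, 5.98, 5.90]
forecast = [6.08, 6.01, 6.00, 5.95]
print(round(rmse(measured, forecast), 3))  # 0.035
```

A lower RMSE, in the same millimeter units as the wall thickness itself, directly indicates a tighter fit, which is why the paper reports 0.0721 mm as its headline figure.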
The effects of lightning strikes on composite aircraft structures have been an active research area in the aviation industry, given the concern over safe aircraft operations. To maintain safe operations, civil and military regulators require effective approaches to assess and quantify the severity of lightning damage. Although x-rays are commonly used to determine material damage in aircraft structures, the technique requires access to both sides of the investigated part. This paper proposes a novel autoencoder model to evaluate the feasibility of assessing damage to carbon fiber reinforced polymer (CFRP) panels from the outer surface of in-service aircraft structures. Two nondestructive evaluation methods that are alternatives to x-ray, ultrasonic testing (UT) and infrared thermography (IR), are employed to develop the proposed model. The fusion model uses U-net as the backbone and spatial attention fusion as the fusion strategy, while combining the structural similarity index (SSIM) and perceptual losses as the loss function. Also, a log-Gabor filter is used in the model to obtain high-frequency edge information for fusion. The results are then compared against five state-of-the-art fusion methods, revealing that the proposed model performs better in quantifying lightning damage to aircraft CFRP structures.
KEYWORDS: Data modeling, Deep learning, Education and training, Transformers, Machine learning, Buildings, Performance modeling, Power consumption, Neural networks, Design and modelling
Electricity data sensors are widely used across large buildings and households. As the data is collected by distributed sensors at varied locations, privacy preservation is a top concern for data owners. Meanwhile, multiple deep learning models have achieved state-of-the-art forecasting performance on electricity time series data under a centralized training mechanism. Although these deep learning models are powerful at capturing temporal features and making precise predictions, they usually consume a large amount of memory and resources during training. To address both problems, i.e., the data privacy issue and the high resource demand of training, we propose an efficient and practical deep learning model using a transformer framework, while utilizing federated learning to keep training on local data instead of in a centralized place. With the proposed deep learning model, memory usage during training is reduced by 60% while achieving similar or even better forecasting results on electricity time series data. Case studies on university community buildings demonstrate our proposed solution's great potential and performance comparable to the state of the art.
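In federated learning, each site trains on its own data and only model parameters travel to a central server, which aggregates them. A minimal sketch of the federated-averaging (FedAvg-style) aggregation step, which is the standard approach; this is not the paper's specific algorithm, and the parameter vectors below are illustrative:

```python
def federated_average(client_weights, client_sizes):
    """Weighted average of per-client model parameters (FedAvg-style).

    client_weights: one flat parameter list per client (raw data never leaves).
    client_sizes:   number of local training samples at each client.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two hypothetical buildings contribute parameters, weighted by data volume.
global_w = federated_average([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])
print(global_w)  # [2.5, 3.5]
```

The server then broadcasts the averaged parameters back for the next local training round; only these vectors cross the network, which is why shrinking the model also shrinks the communication cost.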
KEYWORDS: Data modeling, Analytic models, Computing systems, Sensors, Data fusion, Computer simulations, Analytics, Internet of things, Industry, Ecosystems
Digital twin engineering is a disruptive technology that creates a living data model of industrial assets. The living model will continually adapt to changes in the environment or operations using real-time sensory data as well as forecast the future of the corresponding infrastructure. A digital twin can be used to proactively identify potential issues with its real physical counterpart, allowing the prediction of the remaining useful life of the physical twin by leveraging a combination of physics-based models and data-driven analytics. The digital twin ecosystem comprises sensor and measurement technologies, industrial Internet of Things, simulation and modeling, and machine learning. This paper will review the digital twin technology and highlight its application in predictive maintenance applications.
Since the rise of deep learning (DL), methods are being proposed daily for all kinds of applications, such as systems that include radar, infrared (IR), and electro-optical (EO) imagery. The most common DL application uses the convolutional neural network (CNN) for visual (VIS) imagery, as data sets are available for training. This paper highlights recent advances in DL for IR applications by conducting a literature review for IR alone and IR plus another modality (e.g., Visual+IR). For IR DL developments, the paper examines (1) applications (medical, non-destructive evaluation, target recognition), (2) sensing (space, air, ground), and (3) multi-modal approaches (transfer learning, image enhancement, band selection), while identifying aspects for improving IR sensor design.
Infrared to Visible (IR2VIS) image registration suffers from the challenge of cross-modal feature extraction and matching. Conventional methods usually apply the same keypoint detector to both Infrared (IR) and Visible (VIS) images, with the VIS images even converted to gray scale before keypoint detection. However, IR and VIS gray-scale images have different properties, so the same feature detector might not be applicable to both. Therefore, this paper proposes an IR2VIS image registration method, namely Image Translation for Image Enhanced Registration (ITIER). The IR images are first translated to realistic VIS images by a Wavelet-Guided Generative Adversarial Network (WGGAN) to facilitate cross-modal feature detection. Then keypoint detection and matching and the homography transformation, which have been integrated into ITIER, are conducted on the translated and original VIS images. Experimental results demonstrate that IR2VIS image registration accuracy is greatly enhanced by the image-to-image translation procedure, which transfers IR images to realistic VIS images.
Piezoelectric lead zirconate titanate (PZT) sensors are widely used in various structural health monitoring (SHM) applications, where data acquired by the PZT sensors are used for damage detection. Any failure of the PZT sensors will have a detrimental effect on the ability of SHM systems to detect damage. Therefore, detecting faulty PZT sensors is critical to reduce false calls associated with malfunctioning sensors and to ensure proper functionality of SHM systems. This paper proposes a self-diagnostic method to monitor the health of PZT sensors using electro-mechanical impedance (EMI) data in two steps. In the first, detection step, a one-dimensional convolutional autoencoder (1D-CAE) is employed to obtain the reconstruction error as an anomaly score from the raw EMI data. Faulty PZT sensors can then be detected by comparing the anomaly score with a pre-defined threshold. In the second, diagnostic step, a data feature is first extracted with the 1D-CAE. The extracted feature is then fed into a multilayer perceptron (MLP) classifier to classify the fault type of the PZT sensor. The proposed method was validated through experiments in which typical in-service damage, such as impact, environmental effects, sensor breakage, and localized high-temperature heating, was introduced. The results demonstrate the effectiveness of the proposed method for both detection and diagnosis of various types of PZT sensor damage.
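The detection step above reduces to comparing a reconstruction-error score against a pre-defined threshold. A minimal sketch of that thresholding logic; the 1D-CAE itself is omitted, and the score function, signals, and threshold below are illustrative stand-ins:

```python
def anomaly_score(signal, reconstruction):
    """Mean squared reconstruction error, used as the anomaly score."""
    return sum((s - r) ** 2 for s, r in zip(signal, reconstruction)) / len(signal)

def is_faulty(signal, reconstruction, threshold):
    """Flag a sensor as faulty when its anomaly score exceeds the threshold."""
    return anomaly_score(signal, reconstruction) > threshold

# A hypothetical EMI trace the autoencoder reconstructs closely: healthy.
print(is_faulty([1.0, 1.1, 0.9], [1.0, 1.05, 0.95], threshold=0.05))  # False
```

The idea is that an autoencoder trained only on healthy EMI signatures reconstructs healthy data well and faulty data poorly, so a large residual signals a fault.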
Transfer learning provides a useful solution for learning a new conceptual domain from few examples by exploiting prior knowledge from a related domain. We propose a simple yet effective transfer learning method for image classification that constructs an activation ensemble generative adversarial net (AE-GAN) to transfer knowledge from one dataset to another. The AE-GAN is mainly composed of three convolutional layers and adopts an ensemble of multiple activation functions. Experimental results on five benchmark datasets show that when only a few samples are available for training a target task, leveraging related datasets through the AE-GAN can significantly improve image classification performance with a small set of samples.
Automatic Target Recognition (ATR) has seen many recent advances from image fusion, machine learning, and data collections to support multimodal, multi-perspective, and multi-focal day-night robust surveillance. This paper highlights ideas, strategies, and concepts, as well as provides an example of electro-optical and infrared image fusion for cooperative intelligent ATR analysis. The ATR results support simultaneous tracking and identification for physics-based and human-derived information fusion (PHIF). The importance of context serves as a guide for ATR systems and determines the data requirements for robust training in deep learning approaches.
Automated situation awareness (ASA) in a complex and dynamic setting is a challenging task. The accurate perception of environmental elements and events is critical for the successful completion of a mission. The key technology to implement ASA is target detection. However, in most situations, targets of interest that are at a distance are hard to identify due to the small size, complex background, and poor illumination conditions. Thus, multimodal (e.g., visible and thermal) imaging and fusion techniques are adopted to enhance the capability for situation awareness. A deep multimodal image fusion (DIF) framework is proposed to detect the target by fusing the complementary information from multimodal images with a deep convolutional neural network. The DIF is built and validated with the Military Sensing Information Analysis Center dataset. Extensive experiments were carried out to demonstrate the effectiveness and superiority of the proposed method in terms of both detection accuracy and computational efficiency.
Condition assessment of underground buried utilities, especially water distribution networks, is crucial to the decision-making process for pipe replacement and rehabilitation. Hence, regular inspection of water pipelines is carried out with in-pipe inspection robots to assess the pipelines' internal condition. However, the inspection robots need to identify and negotiate the valves they pass through. Therefore, the aim of this study is to detect valves in water pipelines in real time to ensure smooth operation of the inspection robot. In this paper, four state-of-the-art deep neural network algorithms, namely Faster R-CNN, R-FCN, SSD, and YOLO, are presented to perform real-time valve detection. The study shows that Faster R-CNN pre-trained with ResNet101 outperforms all the selected models, achieving mean average precision (mAP) values of 97.35% and 76.73% when the prediction threshold is set to 50% and 75%, respectively. However, in terms of detection rate in frames per second (FPS), YOLOv3-608 has better processing speed than all other models.
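The 50% and 75% thresholds behind those mAP figures are intersection-over-union (IoU) cutoffs: a detection counts as correct only if its box overlaps the ground-truth box by at least that fraction. A minimal sketch of the overlap test for a single detection, with illustrative boxes given as (x1, y1, x2, y2):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap extents clamp to zero when the boxes do not intersect.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# A detection shifted halfway off a hypothetical ground-truth valve box.
gt, det = (0, 0, 10, 10), (5, 0, 15, 10)
print(round(iou(gt, det), 3))  # 0.333: fails both the 0.5 and 0.75 cutoffs
```

The stricter 75% cutoff demands much tighter localization, which is why the mAP drops from 97.35% to 76.73% between the two settings.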
KEYWORDS: Image fusion, Molybdenum, Information fusion, Data modeling, Image processing, Mid-IR, Data fusion, Sensors, Infrared imaging, Systems modeling
The resurgence of interest in artificial intelligence (AI) stems from impressive deep learning (DL) performance, such as hierarchical supervised training using a Convolutional Neural Network (CNN). Current DL methods should provide contextual reasoning, explainable results, and repeatable understanding, which require evaluation methods. This paper discusses DL techniques using multimodal (or multisource) information that extend measures of performance (MOP). Examples of joint multimodal learning include imagery and text, video and radar, and other common sensor types. Issues with joint multimodal learning challenge many current methods, and care is needed when applying machine learning methods. Results from Deep Multimodal Image Fusion (DMIF) using electro-optical and infrared data demonstrate performance modeling based on distance, to better understand DL robustness and quality for situation awareness.
The compressive strength of a concrete structure is influenced by the composition of its constituent materials, the casting process, the curing period, and other factors. Among these variables, an optimal mix of different materials will achieve better structural compressive strength. Thus, understanding the non-linearity of concrete and its variables is paramount for improving and predicting the performance of concrete structures. Because laboratory analysis is expensive and time-consuming, post-processing and data analysis provide an excellent opportunity to explore and predict optimal models for concrete compressive strength performance. However, given the inadequacy of traditional regression models and other analytic techniques in modeling non-linear regression problems, there is still a need for a predictive model with minimal errors as well as the capability to estimate the partial effects of characteristics on response variables. In this study, a predictive analysis was carried out to investigate the performance of concrete compressive strength at 28 days with a new machine learning model called boosting smooth transition regression trees (BooST). The experimental results show that the BooST model provides better prediction accuracy than the state-of-the-art techniques used for concrete compressive strength prediction. Thus, there is great potential to apply the BooST model for predicting the compressive strength of concrete in practice.
Entropy-based measures are popular for objective image fusion quality assessment due to their small parameter set and their independence from a ground-truth reference image. We focus on Tsallis entropy and consider mutual entropy and entropic distance as two entropic measures for image fusion quality assessment. To analyze these quality measures in depth and evaluate to what extent they fulfill the behaviors expected from ideal image fusion quality measures, we conduct a separate theoretical analysis for each. To this end, we employ an image formation model to obtain a closed-form expression for quality, with weighted averaging as the fusion algorithm. Our study shows that these measures do not always satisfy the expected behaviors. We also provide explanations for the unexpected behaviors, which can improve the accuracy of image fusion quality measures in application. Investigations on real images are also performed, and the results verify the theoretical analysis.
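For reference, the Tsallis entropy of a discrete distribution p with parameter q ≠ 1 is commonly written S_q(p) = (1 − Σᵢ pᵢ^q)/(q − 1), recovering the Shannon entropy as q → 1. A minimal sketch of that standard definition (the distribution below is illustrative, not the paper's image data):

```python
def tsallis_entropy(probs, q):
    """Tsallis entropy S_q of a discrete distribution, for q != 1."""
    assert abs(sum(probs) - 1.0) < 1e-9 and q != 1
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

# Uniform distribution over two outcomes, with entropic index q = 2.
print(tsallis_entropy([0.5, 0.5], q=2))  # 0.5
```

In image fusion assessment, such entropies are computed from image histograms, so no reference image is needed, which is the property the abstract highlights.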
Underground pipelines are subject to severe distress from the surrounding expansive soil. To investigate the structural response of water mains to varying soil movements, field data, including pipe wall strains, in situ soil water content, soil pressure, and temperature, were collected. Analyses of such monitoring data have been reported, but the relationship between soil properties and pipe deformation has not been well interpreted. To characterize this relationship, this paper presents a super learning based approach combined with feature selection algorithms to predict the structural behavior of water mains in different soil environments. Furthermore, an automatic variable selection method, i.e., the recursive feature elimination algorithm, was used to identify the critical predictors contributing to pipe deformations. To investigate the adaptability of super learning to different predictive models, this research applied super learning based methods to three different datasets. Predictive performance was evaluated by R-squared, root-mean-square error, and mean absolute error. Based on this evaluation, the superiority of super learning was validated by accurately predicting three types of pipe deformation. In addition, a comprehensive understanding of the working environments of water mains becomes possible.
This book provides a complete overview of the state of the art in color image fusion, the associated evaluation methods, and its range of applications. It presents a comprehensive overview of fusion metrics and a comparison of objective metrics and subjective evaluations. Part I addresses the historical background and basic concepts. Part II describes image fusion theory. Part III focuses on quantitative and qualitative evaluation. Part IV presents several fusion applications, including two primary multiscale fusion approaches--the image pyramid and wavelet transform--as they pertain to face matching, biomedical imaging, and night vision.
Asbestos cement (AC) water mains were installed extensively in North America, Europe, and Australia from the 1920s to the 1980s and have shown a high breakage rate in recent years in some utilities. It is essential to understand how influential factors contribute to the degradation and failure of AC pipes. Historical failure data collected from twenty utilities are used in this study to explore the correlation between pipe condition and its working environment.
In this paper, we applied four nonparametric regression methods to model the relationship between pipe failure, represented by average break rates, and influential variables including pipe age and internal and external working environmental parameters. Nonparametric regression models do not take a predetermined form; instead, they derive the model form from information in the data. The feasibility of using a nonparametric regression model for the condition assessment of AC pipes is investigated.
The quantification of pitting corrosion in terms of material or metal loss is required to understand pipe condition. One approach to accurately map pitting corrosion is with a high-resolution laser scanner. However, this process is time consuming and requires the removal of the pipe segment and sandblasting of its surface. In this study, thermography is considered for field testing. We investigated the potential of quantifying pitting corrosion with the thermography technique: a cleaned pipe was inspected with the pulsed thermography (PT) technique, and extracted signal features were used to characterize metal loss. The algorithms to process the PT inspection data and extract signal features that characterize the pitting corrosion are presented in this paper.
Laser-based scanning can provide a precise surface profile. It has been widely applied to the inspection of pipe inner walls and is often used along with other types of sensors, such as sonar and closed-circuit television (CCTV). These measurements can be used for pipe deterioration modeling and condition assessment. Geometric information needs to be extracted to characterize anomalies in the pipe profile. Since laser scanning measures distance, segmentation with a threshold is a straightforward way to isolate the anomalies. However, a threshold at a fixed distance value does not work well for the laser range image due to intensity inhomogeneity, which is caused by uncontrollable factors during the inspection. Thus, a local binary fitting (LBF) active contour model is employed in this work to process the laser range image, and an image phase congruency algorithm is adopted to provide the initial contour required by the LBF method. The combination of these two approaches can successfully detect the anomalies in a laser range image.
KEYWORDS: Inspection, Corrosion, Data fusion, Probability theory, X-rays, Nondestructive evaluation, Data modeling, Associative arrays, Image fusion, Defense and security
In this work, the Dempster-Shafer (DS) theory has been used for fusing nondestructive inspection (NDI) data. The success of a DS-based method depends on how the basic probability assignment (BPA), or probability mass function, is defined. In the case of nondestructive inspection of aircraft lap joints, which is of interest here, the inspection data are presented as raster-scanned images. These images are discriminated by iteratively trained classifiers. The BPA is defined based on the conditional probability of information classes and data classes, which are obtained from ground truth data and NDI measurements, respectively. The Dempster rule of combination is then applied to fuse multiple NDI inputs, and the maximum mass outputs determine the final classification results. In this work, conventional eddy current (ET) and pulsed eddy current (P-ET) techniques were employed to inspect the fuselage lap joints of a service-retired Boeing 727 aircraft in order to map corrosion sites. The aim of this work is to estimate the remaining thickness from the inspection data. The ground truth data were obtained by teardown inspections followed by a digital X-ray thickness mapping technique, which provides accurate thickness values. The experimental results verify the efficiency of the proposed method.
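The Dempster rule of combination mentioned above merges two BPAs by multiplying masses of intersecting focal elements and renormalizing away the conflict mass. A minimal sketch on a toy two-hypothesis frame; the mass values are illustrative, not derived from the paper's eddy current data:

```python
def dempster_combine(m1, m2):
    """Combine two BPAs (dicts: frozenset -> mass) with Dempster's rule."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:  # intersecting focal elements reinforce each other
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:      # disjoint elements contribute to the conflict mass
                conflict += ma * mb
    scale = 1.0 - conflict
    return {s: m / scale for s, m in combined.items()}

# Toy frame {corrosion, no_corrosion}; two NDI sources assign masses.
A, B = frozenset({"corrosion"}), frozenset({"no_corrosion"})
m = dempster_combine({A: 0.6, A | B: 0.4}, {A: 0.5, B: 0.3, A | B: 0.2})
print(round(m[A], 3))  # 0.756: highest mass, so "corrosion" is selected
```

As in the abstract, the class receiving the maximum combined mass determines the final classification result.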
The fusion of data from Edge of Light (EOL) and eddy current inspections of aircraft lap joints is investigated in this study. The pillowing deformation caused by corrosion products is first estimated by the EOL technique. Eddy current (ET) techniques, e.g., multi-frequency eddy current testing (MF-ET) and pulsed eddy current testing (P-ET), can provide depth-sensitive inspections of fuselage joints. The objective of this study is to investigate how the results obtained from the two different methods correlate with each other and what complementary information is available in each result. This work consists of two steps. First, the EOL inspection is quantified through a calibration process in which a laser displacement sensor provides the reference. The EOL estimate covers the total material loss, while eddy current or pulsed eddy current testing provides complementary information on the remaining thickness. Second, the ET data are fused with the principal component analysis method, and the results are calibrated by a calibration experiment. Finally, the bottom-layer corrosion is estimated through the subtraction of the EOL and ET results. The preliminary results are presented in this paper.
Multi-frequency techniques are widely adopted for eddy current testing. One of the advantages of these techniques can be deduced from the skin depth formula (formula available in paper), where delta is the standard depth of penetration at excitation frequency f, and the other two parameters, mu and sigma, relate to material properties. Thus, an inspection can be performed at several depths into the material through the simultaneous use of multiple frequencies. To investigate the potential of a multi-frequency eddy current technique (MFECT) for corrosion quantification, an experiment was carried out on a two-layered fuselage lap joint splice. Two data fusion approaches, namely Bayesian inference and multiresolution analysis, are investigated in this study to fuse eddy current images of different frequencies. The corrosion types are classified based on the percentage of material loss. The estimated thickness results, based on the fusion processes, are compared with accurate thickness maps obtained from teardown X-ray inspection data.
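The skin-depth formula itself is omitted from this abstract. For orientation, the standard depth-of-penetration relation for eddy currents in a conductor (the paper's exact form may differ in notation) is commonly written as

```latex
\delta = \frac{1}{\sqrt{\pi f \mu \sigma}}
```

where \(\delta\) is the standard depth of penetration, \(f\) the excitation frequency, \(\mu\) the magnetic permeability, and \(\sigma\) the electrical conductivity. Penetration depth grows as frequency decreases, which is why sweeping several frequencies simultaneously probes several depths of the lap joint at once.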
A computer vision inspection system, named Edge of Light™ (EOL), was invented and developed at the Institute for Aerospace Research of the National Research Council Canada. One application of interest is the detection and quantitative measurement of “pillowing” caused by corrosion in the faying surfaces of aircraft fuselage joints. To quantify the hidden corrosion, one approach is to relate the average corrosion of a region to the peak-to-peak amplitude between two diagonally adjacent rivet centers. This requires automatically locating the rivet centers, and the first step toward that is rivet edge detection. In this study, gradient-based edge detection, local energy based feature extraction, and an adaptive threshold method were employed to identify the edges of rivets, facilitating the first step of the EOL quantification procedure. Furthermore, the brightness profile is processed by a derivative operation, which locates the pillowing along the scanning direction. The derivative curves provide an estimate of the inspected surface.
In this paper, a new model-based tracking algorithm is proposed for tracking rigid objects in six degrees of freedom. Only one calibrated camera is used in the approach, which can handle the motion of objects with known geometry. Information in the 2D images from the camera guides the estimation of motion in 3D space. The useful image features are the contour edges of the object to be tracked. The matching process includes two aspects: (1) feature extraction using local minimum energy, and (2) global matching of known 3D models against the projected features. The algorithm is robust to changes in lighting and background. The small-motion hypothesis is used for fitting the feature energy, which is defined as the negative absolute value of the edge strength. An autoregressive AR(1) model is employed to detect incorrect matches in terms of the feature energy. We have also found a new invariance-based method to eliminate false matches caused by strong shadow or occlusion; the invariance is the ratio of trigonometric functions of the angles formed by a polygon. Both performance analysis and real object tracking show that the proposed algorithm is effective and robust.
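The AR(1) check described above can be sketched as: fit the lag-1 coefficient of the feature-energy series by least squares, then flag frames whose one-step prediction residual is unusually large. This is a generic AR(1) outlier test, not the paper's exact procedure, and the series and threshold below are illustrative:

```python
def ar1_coefficient(series):
    """Least-squares estimate of phi in x[t] ~ phi * x[t-1] (zero-mean AR(1))."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(x * x for x in series[:-1])
    return num / den

def flag_outliers(series, threshold):
    """Indices t where the AR(1) one-step residual exceeds the threshold."""
    phi = ar1_coefficient(series)
    return [t for t in range(1, len(series))
            if abs(series[t] - phi * series[t - 1]) > threshold]

# Hypothetical feature energies (negative absolute edge strength) for
# consecutive matches; the weak match at index 3 breaks the AR(1) pattern.
energies = [-1.0, -0.9, -1.1, -0.2, -1.0]
print(flag_outliers(energies, threshold=0.5))  # [3, 4]
```

Note that the frame immediately after an outlier also produces a large residual (index 4 above), so in practice consecutive flags are typically merged into a single suspected mismatch.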