Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 1307301 (2024) https://doi.org/10.1117/12.3027140
This PDF file contains the front matter associated with SPIE Proceedings Volume 13073, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 1307302 (2024) https://doi.org/10.1117/12.3026619
With the application of network and communication technologies, power distribution stations have achieved online monitoring of operational status, intelligent analysis, and decision control, driving their digitization and intelligent development. While network communication technologies bring efficiency to the operation and management of smart grids, they also introduce various network threats. In the context of power distribution stations, this paper analyzes the data transmission flow and a method for constructing network attack graphs based on the communication topology. It conducts a quantitative risk assessment of power distribution stations under network attack, analyzes the attack graphs to find the optimal attack paths, and formulates network security enhancement strategies based on this analysis. Experimental results show that the proposed scheme finds the key attack nodes more efficiently in the network attack graph of a power distribution station.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 1307303 (2024) https://doi.org/10.1117/12.3026432
In weakly supervised semantic segmentation (WSSS) tasks, the noise in pseudo labels poses a significant challenge to the training of segmentation networks. However, the widely used cross-entropy loss, which only considers individual pixel information in the images, is insufficient to address this issue. In order to effectively train segmentation networks using noisy labels, we propose a Weakly Supervised Semantic Segmentation method (WMS) based on weighted loss and Mumford-Shah loss. Firstly, we introduce a weighted loss with confidence weights for pseudo labels. This loss assigns weights to each pseudo label by incorporating a confidence indicator, enhancing the network's ability to resist noise interference. Additionally, we propose a Mumford-Shah (MS) loss based on variational segmentation. By leveraging the similarity between pixels in the original image, this loss introduces additional noise-free self-supervised information to assist in the training of the segmentation network, further suppressing the interference caused by noisy labels. Extensive experiments on the PASCAL VOC 2012 dataset demonstrate that the proposed method significantly improves WSSS performance.
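The two losses described in the abstract can be sketched in a few lines. The following NumPy illustration is a minimal sketch, assuming a per-pixel confidence map and a piecewise-constant Mumford-Shah formulation with a contour-length penalty; the shapes and the regularization weight are assumptions, not the paper's exact formulation:

```python
import numpy as np

def weighted_ce(pred, pseudo, conf, eps=1e-8):
    # pred: (H, W, C) softmax probabilities; pseudo: (H, W) integer pseudo labels
    # conf: (H, W) per-pixel confidence weights in [0, 1]
    h, w, _ = pred.shape
    p = pred[np.arange(h)[:, None], np.arange(w)[None, :], pseudo]
    return float(np.mean(-conf * np.log(p + eps)))

def mumford_shah(pred, img, lam=1.0, eps=1e-12):
    # pred: (H, W, C) soft class masks; img: (H, W) grayscale intensities
    loss = 0.0
    for c in range(pred.shape[2]):
        y = pred[:, :, c]
        mu = (img * y).sum() / (y.sum() + 1e-8)           # region mean intensity
        loss += ((img - mu) ** 2 * y).sum()               # piecewise-constant fidelity
        gy, gx = np.gradient(y)
        loss += lam * np.sqrt(gx**2 + gy**2 + eps).sum()  # contour-length penalty
    return float(loss)
```

Setting `conf` to zero for a pixel removes that pseudo label from the loss entirely, while the Mumford-Shah term depends only on the image itself, so it contributes supervision that is independent of label noise.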
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 1307304 (2024) https://doi.org/10.1117/12.3026632
The mobile Internet of Things is an Internet application built on satellite positioning technology, wireless sensor networks, and communication networks. With the popularity of mobile terminal devices, demand for the mobile Internet of Things is also growing. In response to this trend, this article proposes an optimization design method based on a fuzzy algorithm that can be used for natural node evaluation and route selection, realizing data interaction in the two processes of intelligent path identification and real-time dynamic route query. Moreover, this article designs a natural router system for the mobile Internet of Things and runs simulations to test the system's basic performance. The test results are as follows: the average delay of the system is between 137 ms and 175 ms, the response time between 121 ms and 185 ms, and the overall throughput between 2314k and 3235k, while overall network coverage is basically above 90%.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 1307305 (2024) https://doi.org/10.1117/12.3026351
To address the low efficiency of manual retrieval of surface garbage, a surface garbage collection device based on the Raspberry Pi is proposed. The YOLOv7 object detection model and a binocular camera are used to recognize and locate surface garbage, and a brushless motor and steering gear drive the device to the garbage location for collection. In addition, GPS and ultrasonic modules are added to realize a global path planning function based on the ant colony algorithm and an obstacle avoidance function based on the artificial potential field. Tests demonstrate that the device can realize garbage identification, garbage location, garbage collection, and automatic cruising.
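Of the navigation functions mentioned, the artificial potential field step is the simplest to illustrate. Below is a minimal sketch of one update step; the gain values, influence radius, and step size are illustrative assumptions, not the device's actual parameters:

```python
import math

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0, step=0.1):
    # attractive force pulls the vehicle toward the goal
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:  # repulsive force acts only inside the influence radius d0
            mag = k_rep * (1.0 / d - 1.0 / d0) / d**2
            fx += mag * dx / d
            fy += mag * dy / d
    n = math.hypot(fx, fy) or 1.0
    return (pos[0] + step * fx / n, pos[1] + step * fy / n)  # unit-step update
```

With no obstacles the vehicle steps straight toward the goal; a nearby obstacle on the direct path produces a repulsive force that dominates the attraction and deflects the next step away from it.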
Bo Yang, Yusheng Liu, Hao Li, Yimin Chen, Xuegang Deng
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 1307306 (2024) https://doi.org/10.1117/12.3026642
In a multi-access edge computing environment, optimizing task offloading under various constraints is a complex challenge. Traditional methods often neglect constraint relationships, leading to uneven resource allocation and suboptimal system performance. Additionally, these methods struggle to adapt to dynamic demands in edge computing. This paper introduces a novel distributed task offloading algorithm based on multi-agent deep reinforcement learning. This approach coordinates multiple agents in making task offloading decisions, aiming to achieve optimized task allocation while respecting constraints, thereby enhancing system performance and reducing latency and energy consumption. Simulation experiments in edge computing scenarios validate the effectiveness and stability of the proposed method. Results demonstrate that the multi-agent deep reinforcement learning approach outperforms traditional methods significantly. It excels in reducing task completion time and lowering terminal device energy consumption, affirming its effectiveness. This research offers a fresh perspective on task offloading strategies in edge computing, addressing limitations in traditional methods and providing a foundation for further exploration in this domain.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 1307307 (2024) https://doi.org/10.1117/12.3026440
Monolithic architecture systems encapsulate all functions in a single deployment unit. As business requirements grow more complex, monolithic systems require significant human resources to maintain. Unlike a monolithic architecture, a microservices architecture consists of multiple independent, autonomous, functionally cohesive services, making systems more flexible and easier to deploy on the cloud. Therefore, an increasing number of industrial companies have decomposed their monolithic systems and migrated to microservices. During this migration, how to decompose the monolithic architecture appropriately is a critical problem. Software systems have been shown to have the characteristics of graph networks, and some recent studies have used graph neural networks for this decomposition task. However, existing works do not fully consider the dependencies and interactions among class nodes. To overcome this limitation, we propose a novel directed graph attention neural network (DGANN) for this task. The core of DGANN is a newly designed direct-attention mechanism that fully captures the dependencies between classes while expressing the directionality of inter-class calls. Using DGANN, the information of all class nodes can be learned automatically. Our approach significantly outperforms previous methods on four open-source datasets across several evaluation metrics.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 1307308 (2024) https://doi.org/10.1117/12.3026316
Cloud-native virtualization technology combines virtualization with cloud-native computing to provide a more efficient, flexible, and scalable cloud computing environment. Analysis and research in bioinformatics usually involve large-scale data sets and complex computing tasks, and the demand for computing power across the research and development cycle is characterized by peaks and troughs. The elastic scalability of cloud-native virtualization allows computing resources to expand on demand, meeting data processing and analysis requirements throughout the entire research and development cycle. By integrating virtualized InfiniBand high-speed NICs, data transfer and the execution of computational tasks are accelerated, further shortening the research and development cycle. In summary, cloud-native virtualization technology has significant application value in bioinformatics, providing an efficient computing environment while saving time and cost.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 1307309 (2024) https://doi.org/10.1117/12.3026476
Predicting the resource consumption and completion status of jobs helps improve the scheduling performance of a system. Many studies have shown that the job name can effectively improve prediction accuracy. Therefore, by mining the structural and semantic information of job names, this paper introduces new job-naming-habit features, including job name length, the number of job name elements, and edit distance, and analyzes each substructure of the job name, adding classification features after clustering. The new features better characterize the similarity between jobs and provide strong support for model prediction. A model trained on the new feature data set achieves significantly higher prediction accuracy than a model that uses the job name alone.
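The listed name features are straightforward to compute. A stdlib-only sketch follows; the delimiter set and the idea of measuring edit distance against a reference job name are illustrative assumptions:

```python
import re

def edit_distance(a, b):
    # classic Levenshtein dynamic program, O(len(a) * len(b)) time
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[len(b)]

def name_features(name, reference):
    elements = re.split(r"[_\-.]", name)  # split the job name into its elements
    return {
        "name_length": len(name),
        "num_elements": len(elements),
        "edit_distance": edit_distance(name, reference),
    }
```

For example, `name_features("train_job_01", "train_job_02")` reports three name elements and an edit distance of 1, capturing that the two jobs differ only in their trailing index.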
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730A (2024) https://doi.org/10.1117/12.3026387
The shape of defects on steel surfaces is highly variable and training samples are limited, making it a significant challenge to transfer a high-performance pretrained vision-language model to steel surface defect detection. Therefore, a Multi-level Supervised Vision-Language Model based steel surface defect detection method (MLS-VLM) is proposed in this paper. MLS-VLM extracts deep features from limited samples through three levels of training: supervised contrastive training on labeled areas and on the entire image, as well as self-supervised contrastive learning on region proposals. MLS-VLM can be rapidly transferred to two-stage object detectors. Experimental results demonstrate that, compared with traditional object detection methods, MLS-VLM achieves a 5.68 to 8.37 mAP improvement on three benchmark object detectors.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730B (2024) https://doi.org/10.1117/12.3026663
Caching is a fundamental strategy in computer systems to minimize latency and enhance performance, crucial for achieving optimal program execution speed. The Cache Hit Ratio is a key metric, emphasizing the critical role of cache hits, which are significantly faster than misses. The challenge lies in efficient cache replacement strategies, determining which cache line to evict when introducing a new line. Current policies, often based on heuristics for common access patterns, fall short in diverse scenarios. In response, this paper introduces EGCR (Enhanced Graph Neural Network for Cache Replacement), a pioneering model integrating Graph Neural Networks (GNN) to intelligently adapt to varying workloads and enhance the Cache Hit Ratio. EGCR introduces a graph-based representation for cache-related data, dynamically learning to respond effectively to intricate access patterns. In empirical evaluations, EGCR consistently outperforms the current state of the art, demonstrating a remarkable 36% improvement in cache hit rates across 13 memory-intensive SPEC applications. This positions EGCR as a promising solution, effectively bridging traditional heuristics and the potential of GNNs for optimized Cache Hit Ratios in dynamic computing environments.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730C (2024) https://doi.org/10.1117/12.3026542
Forecasting air quality is a crucial technical approach to responding effectively to severe pollution conditions. The evolution of pollutant concentration exhibits spatial correlation. Because identifying monitoring stations with significant spatial correlation is challenging, a method using the K-means clustering algorithm is proposed for partitioning air quality monitoring stations. Taking Nantong as an example, historical pollutant data from the target area are selected and combined with meteorological data, and a hybrid CNN-LSTM model, consisting of a convolutional neural network (CNN) and a long short-term memory (LSTM) neural network, is used to predict pollutants, extracting the temporal and spatial evolution characteristics of pollutant concentration to achieve high-accuracy air quality forecasts. Experimental results show that, after adding historical pollutant concentration data from stations in the same cluster, the CNN-LSTM model can forecast PM2.5 concentrations precisely.
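The station-partitioning step amounts to running K-means over per-station features. Below is a compact NumPy sketch using 2-D station coordinates; the feature choice, iteration budget, and initialization strategy are assumptions for illustration:

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    # points: (n, d) array of station features, e.g. longitude/latitude
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # assign every station to its nearest cluster centre
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute each centre as the mean of its assigned stations
        new_centers = np.array([
            points[labels == j].mean(axis=0) if (labels == j).any() else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):  # converged
            break
        centers = new_centers
    return labels, centers
```

Stations landing in the same cluster are the ones whose historical concentration series would then be fed jointly into the CNN-LSTM forecaster.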
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730D (2024) https://doi.org/10.1117/12.3026447
Fast detection and categorization of malware are increasingly important for securing hosts and networks. Although many machine learning models have been used to detect malware, single-model detection may not remain effective on diverse datasets, and the continuously increasing volume of malware calls for distributed storage systems and distributed computing. This paper introduces distributed methods and ensemble learning into malware detection and designs an optimal combination of base classifiers suitable for stacking. A hyperparameter optimization method based on the Tree-structured Parzen Estimator (TPE) approach is applied to enhance malware detection. The proposed method is implemented on Apache Spark and the Hadoop Distributed File System (HDFS). Experiments conducted on four independent datasets, covering Android and Windows, demonstrate that the proposed method achieves 99.41% accuracy on the Android dataset and 96.96% accuracy on the Windows dataset.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730E (2024) https://doi.org/10.1117/12.3026636
Detecting the dynamic liquid level of oil wells with acoustic methods requires digital signal processing techniques to reduce environmental noise. Most denoising techniques consist of filter-based approaches and spectral methods with time or frequency transformations. In this study, the Ensemble Empirical Mode Decomposition (EEMD) algorithm is applied at the filtering step of the standard industrial procedure for detecting the dynamic liquid level. Five datasets from oil well production are processed with EEMD and compared against the standard industrial filtering procedures. The EEMD algorithm achieves results in good agreement with the reference results in general, with the worst-case relative error reaching 0.51%. In particular, for one case with unknown submerged noise or disturbances, the EEMD-processed signal displays a visibly more salient echo-wave feature than the reference. Under such circumstances, combining a denoising filter with EEMD further stabilizes the results, with the maximum relative error falling to 0.39%. Importantly, the ensemble-averaged Intrinsic Mode Functions (IMFs) at frequencies linked to the reflected infrasound can help locate the reflection of the infrasound wave. EEMD is a promising method for filtering signals in echo-wave-based dynamic liquid level detection; further in-depth investigation is required to better interpret signals mixed with unknown noise or disturbances.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730F (2024) https://doi.org/10.1117/12.3026331
BEV high-definition maps play a crucial role in autonomous driving and navigation systems, where their segmentation accuracy directly affects system performance and safety. Traditional feature extraction networks, when dealing with complex BEV maps, are often limited by their fixed kernel sizes and shapes, leading to insufficient accuracy in critical tasks such as lane segmentation. This paper proposes an improved A-HDN framework for high-definition segmentation in BEV: observing that linear structures such as lanes are slender and continuous, it introduces a Dynamic Serpentine Convolution network (DSConv) that flexibly conforms to the lane structures in BEV while staying close to the target structure under constraints, thereby learning their features more effectively. Additionally, a Ghost module is introduced, allowing the network to better preserve features without affecting model performance. Experiments show that the algorithm increases map segmentation precision by 1.6 IoU and improves directional detection by 0.8 mAP.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730G (2024) https://doi.org/10.1117/12.3026425
Network systems have undergone a significant transformation with the growing acceptance of RDMA for low-latency communication in data centers. Unfortunately, studies have shown that RDMA one-sided operations are subject to security risks such as packet eavesdropping, packet injection, and packet tampering. New RDMA designs are therefore taking security features into account, though most still neglect efficiency in some respects. We propose SEC-RDMA, a scheme compatible with the original RoCEv2 protocol that enhances confidentiality and authentication for one-sided operations during RDMA transmission, focusing mainly on the efficiency of two critical aspects: hard-wired key management and message-based packet authentication. We implement the scheme on an FPGA-based RDMA network interface card to prove its viability. In tests of this implementation, message-based packet authentication takes roughly 84.6% less time than the packet-based alternative, while hard-wired key management takes approximately 85.5% less time than the typical key exchange strategy at the QP level. The SEC-RDMA implementation adds 45K LUTs and 29K registers to the FPGA-based RDMA network interface card.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730H (2024) https://doi.org/10.1117/12.3026422
The continuous development of the construction industry has spurred the growth of mechanical and electrical (M&E) installation engineering, in which material management is a crucial component. The classification of material names in M&E installation has become increasingly prominent, with traditional manual classification not only labor-intensive and costly but also inefficient. To address this issue, this paper introduces a Hidden Markov Model based on Named Entity Features (HMM-NF) to study the correlation between the names of materials used in M&E installation. The model applies the Hidden Markov Model (HMM) and named entity recognition technology to classify and recognize the names of construction M&E installation materials and to establish the correlations between them. Experimental results show that the model can accurately identify and match material names, specifications, units, and other data, scoring the matches in a way that reflects the correctness of the established correlations. Furthermore, this paper compares the HMM-NF model with the traditional HMM and the Maximum Probability Segmentation Model, demonstrating that the HMM-NF model achieves higher accuracy and efficiency in both training and segmentation, making it a reasonable and effective model. This research provides new insights and methods for material management in construction M&E installation engineering, helping to enhance the efficiency and precision of material management and thereby optimizing the management of the entire M&E installation process. Additionally, it offers a new application of and reference for Hidden Markov Models in the construction field.
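At the core of any HMM-based tagger is Viterbi decoding of the most likely tag sequence. Below is a minimal sketch on a toy material-name example; the NAME/SPEC/UNIT tag set and all probabilities are invented for illustration and are not taken from the paper:

```python
def viterbi(obs, states, start_p, trans_p, emit_p, floor=1e-9):
    # obs: token sequence; probabilities supplied as nested dicts
    V = [{s: start_p.get(s, floor) * emit_p[s].get(obs[0], floor) for s in states}]
    paths = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_paths = {}
        for s in states:
            # best predecessor state for s at step t
            prob, prev = max(
                (V[t - 1][p] * trans_p[p].get(s, floor) * emit_p[s].get(obs[t], floor), p)
                for p in states)
            V[t][s] = prob
            new_paths[s] = paths[prev] + [s]
        paths = new_paths
    best = max(states, key=lambda s: V[-1][s])  # highest-probability final state
    return paths[best]
```

Given tokenized material records, a tagger like this assigns each token a role (name, specification, unit), and matching records by role is what makes the downstream correlation scoring possible.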
Yiping Wang, Tiantian Wang, Xinglong Li, Xianghu Wu, Kuanjun Liu
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730I (2024) https://doi.org/10.1117/12.3026535
This paper proposes an extension of the SysML safety semantics to address SysML's lack of safety and reliability semantics and of support for safety and reliability analysis of the model. On this basis, fault tree generation and analysis are performed. The method first adds semantic information about fault trees and redundant modules to the model using the Stereotype extension mechanism, integrating design data and safety data through an extended configuration file in the SysML model. Secondly, sub-mode decomposition of the Internal Block Diagram is conducted, and fault mode recognition is described in relation to the fault tree mapping. Based on this, the SysML model is searched to obtain the information necessary for fault tree generation, and the generated fault tree is then analyzed.
Meiqiang Yang, Shaoqian Hu, Xiaoming You, Linchuan Song
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730J (2024) https://doi.org/10.1117/12.3026502
In substation automation communication, traditional communication protocols such as IEC 101/IEC 104 and DNP3 face security risks because they lack security mechanisms. The IEC/TS 62351 standard provides a specification for implementing communication security based on the protocol itself. To meet the communication security requirements for DNP3 on substation communication terminals and to study engineering implementation solutions for application-layer security, application-layer security for DNP3 on substation communication terminals is implemented. This article elaborates on the implementation details of DNP3 application-layer security, then tests and analyzes communication traffic and computation time after application-layer security is enabled. Based on conformance testing, the feasibility of an engineering implementation of DNP3 secure authentication is verified.
Puhao Zhang, Shumin Xie, Xiaoya Lu, Zuodong Zhong, Qing Li
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730K (2024) https://doi.org/10.1117/12.3026671
With the rapid advancement of autonomous driving technology, effective trajectory planning has become crucial for ensuring road safety and driving efficiency. Traditional trajectory planning methods often rely on preset rules and models, making them ill-suited to complex, dynamic traffic environments. To address this, a trajectory planning method based on Maximum Entropy Inverse Reinforcement Learning (MaxEnt IRL) is proposed in this paper, aiming to learn from expert driving behavior to infer an efficient reward function that in turn guides decision-making and path planning for autonomous vehicles. The study begins by analyzing expert driving data to extract key state and action features. The MaxEnt IRL algorithm is then applied to learn the reward function underlying these features, reflecting the decision-making logic of expert drivers. The learned reward function subsequently guides the trajectory planning of the autonomous driving system, generating safe and efficient driving paths. A series of experiments in a simulated environment demonstrates that the proposed MaxEnt IRL-based method exhibits higher adaptability and efficiency in handling complex traffic scenarios than traditional trajectory planning methods.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730L (2024) https://doi.org/10.1117/12.3026709
To explore the application of vulnerability mining to network security, this paper studies a black-box genetic algorithm for network security vulnerability mining. First, to address the high false-detection and missed-detection rates of current methods, a system network security vulnerability mining method based on a black-box genetic algorithm is proposed. Second, the overall security situation of the system network is obtained through all-around situational awareness. A black-box genetic algorithm then drives black-box fuzz testing: an objective function is selected and test parameters are generated, the optimized samples are passed to the fuzzing module, and anomalies are recorded in real time by monitoring the test system's logs. When fuzzing reaches the preset goal, testing stops and a network security vulnerability report is output. The final results show that the proposed algorithm has a low false-detection rate, demonstrating its reliability.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730M (2024) https://doi.org/10.1117/12.3026737
Wireless Sensor Networks (WSNs) play a vital role in information technology. Sensor nodes, the core components of such a network, sense and collect environmental data and transmit it to other nodes or a central server. To improve node coverage and connectivity, this study combines the Coot Bird Swarm Optimization Algorithm (COOT) with the Multi-Objective Artificial Hummingbird Algorithm (MOAHA) and improves both using a multi-strategy approach. The results show that the average coverage of the improved COOT algorithm is 97.48%, and the differences achieved by the improved MOAHA on the functions F1 and F2 are 0.8916 and 0.0092, respectively. The proposed approach thus effectively improves the coverage optimization of WSN sensor nodes and provides a multi-objective node deployment scheme.
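The coverage objective such deployment studies optimize can be sketched as a grid-sampled coverage ratio. The binary disc sensing model, the square field, and the sampling grid below are illustrative assumptions, not details taken from the paper:

```python
import math

def coverage_ratio(nodes, radius, area=(100.0, 100.0), grid=50):
    """Fraction of grid sample points covered by at least one sensor.

    nodes: list of (x, y) sensor positions; radius: sensing radius.
    A binary disc sensing model over a rectangular field is assumed.
    """
    w, h = area
    covered = 0
    for i in range(grid):
        for j in range(grid):
            # sample point at the centre of each grid cell
            px, py = (i + 0.5) * w / grid, (j + 0.5) * h / grid
            if any(math.hypot(px - x, py - y) <= radius for x, y in nodes):
                covered += 1
    return covered / (grid * grid)
```

A metaheuristic such as COOT would then search over the node positions to maximize this ratio.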
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730N (2024) https://doi.org/10.1117/12.3026594
Unmanned Surface Vessels (USVs) are autonomous waterborne craft that must interact with the external environment. To do so, a USV must be capable of path planning and dynamic obstacle avoidance to handle potentially hazardous situations. During navigation, not only is global path-planning decision-making necessary, but timely responses to local hazards are also crucial to prevent accidents; only then can a USV complete its tasks safely, efficiently, and smoothly. Building on a USV motion model, this paper identifies the shortcomings of the Grey Wolf Optimizer (GWO) algorithm and proposes improvement strategies. A GWO algorithm based on random walks is introduced: the search space is explored through random walks, and the omega (ω) wolves then follow to update their positions, enhancing global search capability. Comparative simulations show that the refined algorithm converges faster and is more effective.
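The random-walk GWO variant described above can be sketched as follows. The Gaussian perturbation and the way the ω wolves follow the leaders are assumptions of this sketch, layered on the standard GWO position update:

```python
import random

def gwo_random_walk(f, dim, bounds, wolves=20, iters=100, seed=0):
    """Grey Wolf Optimizer with an added random-walk exploration term.

    f: objective to minimise over a list of dim floats.
    The exact walk distribution of the paper's variant is an assumption.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pack = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    for t in range(iters):
        pack.sort(key=f)
        alpha, beta, delta = pack[0], pack[1], pack[2]
        a = 2.0 * (1 - t / iters)  # exploration coefficient, decays to 0
        for w in pack[3:]:         # omega wolves follow the three leaders
            for d in range(dim):
                guided = 0.0
                for lead in (alpha, beta, delta):
                    A = a * (2 * rng.random() - 1)
                    C = 2 * rng.random()
                    guided += lead[d] - A * abs(C * lead[d] - w[d])
                # random-walk perturbation, shrinking as the search cools
                w[d] = min(hi, max(lo, guided / 3 + rng.gauss(0, a)))
    return min(pack, key=f)
```

On a toy sphere function the sketch converges toward the origin as the walk radius shrinks.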
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730O (2024) https://doi.org/10.1117/12.3026302
A graph filter extracts the desired features from a graph signal and filters out noise. Most graph filters proposed in the literature are linear. The autoregressive moving average (ARMA) filter is a polynomial filter; compared to finite-impulse-response (FIR) graph filters, ARMA graph filters are robust to changes in the signal and/or graph, but they are still linear. In this work, we propose a weighted median autoregressive graph filter (WMAF) based on a first-order ARMA graph filter. The proposed filter combines the weighted median filter from traditional signal processing with the median autoregressive filter (MAF), and can be implemented in a distributed way. Compared with linear filters and the MAF, the proposed WMAF better suppresses impulse noise, and in a denoising application on a real sensor-network dataset the filtered signal achieves a better signal-to-noise ratio.
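The nonlinear core that distinguishes a weighted median filter from a linear one is the weighted median itself: the sample value minimizing the weighted sum of absolute deviations. A minimal sketch (the graph-neighborhood weighting scheme of the WMAF is not reproduced here):

```python
def weighted_median(values, weights):
    """Weighted median: the sample minimising sum_i w_i * |x_i - m|.

    Found by scanning sorted samples until half the total weight is
    accumulated. This robustness to outliers is what lets median-type
    graph filters suppress impulse noise.
    """
    pairs = sorted(zip(values, weights))
    half = sum(weights) / 2.0
    acc = 0.0
    for v, w in pairs:
        acc += w
        if acc >= half:
            return v
```

In a graph filter, `values` would be a node's neighborhood samples and `weights` the filter coefficients for that neighborhood.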
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730P (2024) https://doi.org/10.1117/12.3026633
A BDS satellite common-view time-frequency synchronization algorithm based on a miniature rubidium atomic clock is proposed to address the low frequency stability of the local clock in the traditional method, which uses a temperature-compensated crystal oscillator as the satellite timing device. Using a miniature rubidium atomic clock as the local clock, the time difference between the observation time and the initial time of the measurement basic time frame is obtained by modeling and predicting the clock deviation and its rate of change. From the deviation rate, a clock-deviation correction for the local clock is calculated, the measurement basic time frame is adjusted, and second pulses are generated and output, achieving BDS satellite common-view time-frequency synchronization. A miniature rubidium atomic clock satellite timing device is built, covering both hardware assembly and software design, and its performance is evaluated in terms of local-clock frequency stability and common-view synchronization results. The experiments show that, based on the B1I pseudo-range signal, the local-clock frequency stability achieved by the proposed algorithm is better than 2.0 ns, and the BDS satellite common-view time-frequency synchronization accuracy reaches 17.4 ns, confirming the effectiveness of the proposed algorithm and timing device.
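Under a first-order clock model (initial offset plus constant drift), the clock-deviation correction step reduces to a linear prediction. This sketch assumes that standard model, not the paper's full prediction scheme:

```python
def clock_correction(offset_ns, drift_ns_per_s, elapsed_s):
    """Correction to apply to the local second-pulse edge, in nanoseconds.

    offset_ns: last measured clock deviation (ns).
    drift_ns_per_s: estimated clock-deviation change rate (ns/s).
    elapsed_s: time since the deviation was measured (s).
    A first-order (offset + constant drift) clock model is assumed.
    """
    predicted = offset_ns + drift_ns_per_s * elapsed_s
    return -predicted  # shift the output pulse to cancel the deviation
```

A real implementation would re-estimate the offset and drift from each common-view measurement before applying the correction.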
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730Q (2024) https://doi.org/10.1117/12.3026723
Parallel nested arrays, a new type of array structure, significantly improve the spatial resolution and degrees of freedom of a system by combining two sub-arrays with different element spacings. This article studies a two-dimensional direction-of-arrival (DOA) estimation algorithm based on unitary ESPRIT for parallel nested array structures. First, the structural characteristics of parallel nested arrays are analyzed in detail, emphasizing their importance for DOA estimation performance. Second, the unitary-ESPRIT-based two-dimensional DOA estimation algorithm is explored in terms of sub-array output signal processing, cross-covariance matrix construction, virtual arrays, and dimensionality-reduced estimation. Finally, the DOA estimation process is analyzed, providing an effective method for two-dimensional DOA estimation with parallel nested arrays.
Kainan Lu, Jingjing Jiang, Tingyu Wang, Mengfei Yang
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730R (2024) https://doi.org/10.1117/12.3026670
NAND flash is a widely used storage device in embedded systems. Because of its special physical properties, raw-flash file systems need a garbage collection module to reclaim invalid data. Traditional garbage collection algorithms perform poorly when data varies in hotness, which leads to high write amplification. Moreover, raw-flash file systems such as JFFS and YAFFS lack a wear-leveling module to counter unbalanced block erasure, which shortens the lifespan of the NAND flash. In this paper, we propose a garbage collection algorithm that achieves hot-cold separation at low memory cost and performs wear leveling via a bidirectional ordered linked list. Experiments show that our algorithm improves write amplification by 15% and significantly improves the degree of wear leveling compared to traditional raw-flash file systems.
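Victim-block selection that balances reclaimed invalid pages against wear can be sketched as a simple score. The linear score and its weight below are illustrative assumptions; the paper's bidirectional ordered linked list serves to keep blocks ordered by such criteria at low memory cost:

```python
def pick_victim(blocks, wear_weight=0.5):
    """Pick a garbage-collection victim block.

    blocks: list of dicts with 'invalid' (invalid page count) and
    'erase' (erase count). More invalid pages means more reclaimed
    space; a high erase count is penalised for wear levelling.
    The linear trade-off is an assumed form, not the paper's exact rule.
    """
    def score(i):
        b = blocks[i]
        return b['invalid'] - wear_weight * b['erase']
    return max(range(len(blocks)), key=score)
```

With `wear_weight=0` this degenerates to the classic greedy policy (most invalid pages wins), which is exactly the policy that wears hot blocks out fastest.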
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730S (2024) https://doi.org/10.1117/12.3026394
For Time Difference of Arrival (TDOA) localization, an improved Dung Beetle Optimizer (DBO) based on a golden sine strategy and self-adaptive Lévy flight is proposed. In the population initialization stage, a Bernoulli map is used to initialize the dung beetle positions, mitigating uneven population distribution and a limited search range. The golden sine strategy is then introduced into the rolling-behavior rules to strengthen local optimization and improve convergence. Finally, a self-adaptive Lévy flight strategy randomly updates positions in the stealing behavior, reducing the pull of local extrema and improving robustness to localization noise. Experimental comparisons with the Chan algorithm, the Sparrow Search Algorithm (SSA), the Whale Optimization Algorithm (WOA), the Grey Wolf Optimizer (GWO), and the standard Dung Beetle Optimizer demonstrate that the improved dung beetle algorithm achieves higher positioning accuracy and faster convergence, outperforming the others and making it well suited to TDOA localization.
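The heavy-tailed steps behind a Lévy flight update are commonly drawn with Mantegna's algorithm; a sketch follows (the self-adaptive scaling of the paper's variant is not reproduced, and beta = 1.5 is the usual default, not a value from the paper):

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One Lévy-flight step via Mantegna's algorithm.

    beta is the stability index (1 < beta <= 2). The ratio u/|v|^(1/beta)
    with u ~ N(0, sigma^2), v ~ N(0, 1) yields heavy-tailed steps:
    mostly small moves, occasionally very long jumps out of local optima.
    """
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)
```

In the optimizer, each coordinate of a stealing-behavior position would be perturbed by such a step times an adaptive scale factor.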
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730T (2024) https://doi.org/10.1117/12.3026295
Lei Sun, Bo Shi, Ning Li, Chenmeng Guo, Wenyu Tang
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730U (2024) https://doi.org/10.1117/12.3026658
To optimize resource allocation in power business systems and improve operational efficiency, a monitoring and analysis technology for power data flow interfaces in external interaction scenarios is proposed. API gateway technology is introduced and, building on it, methods for sensitive-data filtering and data desensitization in external interaction scenarios are explored, protecting enterprises' sensitive information while meeting data-usage needs. The experimental results show that after monitoring and analyzing the power data flow interface with the proposed method, the recognition rate of sensitive data after desensitization is low and the desensitization time is short, demonstrating that the monitoring technology works well in practice.
Dexun Jiang, Jianglong Hao, Yuanlong Chen, Xingling Li, Jie Liu
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730V (2024) https://doi.org/10.1117/12.3026437
Cataract, the leading cause of blindness worldwide, is a focal concern in blindness prevention. Its diagnosis relies primarily on observing lens opacification under slit-lamp examination, coupled with best-corrected visual acuity assessment. With the rapid evolution of artificial intelligence, ophthalmology has increasingly incorporated AI technologies; however, research on cataracts remains relatively limited. This study employs computer-vision segmentation to obtain precise images of cataractous lens nuclei and uses deep learning for training and validation, achieving good graded diagnostic accuracy. The combination of precise region-of-interest imaging of the cataractous lens nucleus with deep learning proves effective in delivering superior diagnostic outcomes.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730W (2024) https://doi.org/10.1117/12.3026362
Earthwork and stonework in the roadbed base are indispensable parts of highway construction, and reasonable allocation of construction machinery groups has a significant impact on cost and duration. To solve this construction-scheduling optimization problem, a machinery-group configuration model minimizing both cost and construction period is established, and an improved particle swarm algorithm with a bounded particle-swarm search strategy is proposed to solve it. First, an equipment construction model is established. Then a bounded search strategy based on the Grey Wolf algorithm dynamically adjusts the composition of the particle swarm, improving initial population quality; integrating the Grey Wolf strategy into the particle swarm algorithm improves late-stage population diversity and retains high-quality individuals during evolution. Finally, the effectiveness of the improved algorithm is verified on a machinery-group configuration example.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730X (2024) https://doi.org/10.1117/12.3026694
Energy consumption data of power equipment is affected by many factors, such as measurement errors, environmental changes, and equipment aging, which introduce inaccuracy and noise and make abnormal data difficult to detect. This paper presents a method for detecting abnormal energy consumption data of green-building power equipment based on the Local Outlier Factor (LOF) algorithm. First, the energy consumption data is collected and cleaned, and the abnormal-consumption features are extracted and standardized. The LOF algorithm is then applied to these features to detect energy-consumption anomalies accurately. Experiments show that the detection accuracy of the proposed method on abnormal energy consumption data of green-building power equipment is satisfactory.
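The LOF score itself follows directly from its definition: k-distances, reachability distances, local reachability density, then the ratio to the neighbors' densities. A from-scratch sketch for small batches (feature extraction and standardization are assumed to happen upstream):

```python
import math

def lof_scores(points, k=2):
    """Local Outlier Factor for each point (score >> 1 means outlier).

    points: list of equal-length coordinate tuples. Computed directly
    from the definition via a full distance matrix, so it suits only
    small batches; a sketch, not a tuned implementation.
    """
    n = len(points)
    dist = [[math.dist(p, q) for q in points] for p in points]
    # k nearest neighbours of each point (index 0 is the point itself)
    knn = [sorted(range(n), key=dist[i].__getitem__)[1:k + 1] for i in range(n)]
    kdist = [dist[i][knn[i][-1]] for i in range(n)]
    lrds = []
    for i in range(n):
        # reachability distance to j: max(k-distance of j, d(i, j))
        reach = sum(max(kdist[j], dist[i][j]) for j in knn[i])
        lrds.append(k / reach if reach > 0 else float('inf'))
    # LOF: average neighbour density over own density
    return [sum(lrds[j] for j in knn[i]) / (k * lrds[i]) for i in range(n)]
```

Points inside a cluster score near 1; an isolated point scores well above 1 and would be flagged as abnormal consumption.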
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730Y (2024) https://doi.org/10.1117/12.3026624
To obtain a target's state accurately and its recognition and decision information completely, both temporal and spatial fusion must be taken into account. The algorithm first applies a highly real-time Kalman filter to fuse wireless-sensor-network data along the time series. Building on this temporal fusion, multi-sensor data is further fused by weight at the gateway layer according to the spatial distribution characteristics. Because position errors at different nodes change in real time, the gateway layer uses an adaptive weighting algorithm on the spatial data to dynamically adjust each node's weight.
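The adaptive weighting at the gateway layer is commonly realized as inverse-variance weighting, where each node's weight tracks its estimated error variance so that noisier nodes count less. A minimal sketch under that assumption:

```python
def adaptive_fusion(measurements, variances):
    """Fuse node measurements with weights inversely proportional to
    each node's estimated error variance.

    In the adaptive scheme the variances would be re-estimated from the
    nodes' recent errors, so the weights adjust dynamically.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * m for w, m in zip(weights, measurements)) / total
```

Equal variances reduce this to a plain average; a node whose variance blows up is effectively ignored.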
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130730Z (2024) https://doi.org/10.1117/12.3026356
For industrial IoT systems with timing requirements, a task offloading algorithm based on an auction mechanism is proposed to improve the system's overall offloading performance and its ability to cope with unexpected tasks. The problem is abstractly modeled on mainstream industrial IoT models, and an objective function is proposed to evaluate offloading results. The auction-based task offloading algorithm produces a unified task scheduling and allocation at relatively low time cost, and simulation results show good overall performance.
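An auction round can be sketched as: each unplaced task bids for its cheapest server, each server accepts the best bid it received, and the process repeats. The cost function and the one-task-per-server capacity below are assumptions of this sketch, not the paper's objective model:

```python
def auction_offload(tasks, servers, cost):
    """Auction-style task-to-server assignment.

    cost(task, server) -> completion cost, lower is better.
    Assumes each server takes one task and len(tasks) <= len(servers).
    """
    assignment = {}
    free = set(servers)
    unassigned = list(tasks)
    while unassigned:
        bids = {}  # server -> (best cost seen, bidding task)
        for t in unassigned:
            target = min(free, key=lambda srv: cost(t, srv))
            bid = (cost(t, target), t)
            if target not in bids or bid < bids[target]:
                bids[target] = bid
        for srv, (_, t) in bids.items():
            assignment[t] = srv      # server awards itself to best bidder
            free.discard(srv)
        unassigned = [t for t in unassigned if t not in assignment]
    return assignment
```

Losing bidders simply re-bid among the remaining servers in the next round, which is what keeps the per-round work low.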
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 1307310 (2024) https://doi.org/10.1117/12.3026340
Railroad transportation is an important infrastructure for public travel and cargo transportation. With the rapid development of railroad construction, operating mileage keeps increasing, the network keeps improving, and railroads cover ever more varied terrain, making the train operating environment more complex. Individuals or groups entering the track without authorization may seriously endanger personal safety and normal railroad operation. This paper therefore proposes an improved YOLOv5-based track personnel intrusion detection algorithm: introducing the CBAM attention mechanism into the C3 blocks of the three pyramid levels of YOLOv5 improves recall by 14%, reduces the miss rate, and achieves an average precision of 98%. Experimental simulations on acquired images of unauthorized personnel intruding on the track show that this machine-vision-based detection algorithm fully accounts for the characteristics of the railroad scenario and achieves high detection precision. The findings can help railway operators detect railroad safety risks effectively and reduce the probability of accidents.
Yan Yu, Wenting Zhu, Daojing Huang, Qian Wang, Mengqing Xiao, Peiying Li
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 1307311 (2024) https://doi.org/10.1117/12.3026630
Demand response has become an important part of the power supply system. To achieve rapid response of power supply resources and promote their efficient utilization, a rapid response method based on the simulated annealing algorithm is proposed. Using simulated annealing under power-supply-resource constraints, the single-period and multi-period response behaviors of the power system are analyzed, and the potential characteristics of power supply resources are extracted by combining feature samples with potential grades. Rapid response is then realized based on the effective-value and time characteristics of each task. Experiments show that the proposed method achieves a response-task completion rate of 95.7% and reaches the best fitness at 40 iterations on both normal and abnormal data, demonstrating high efficiency and data adaptability: it better ensures task completion and response, and further improves the stability and adaptability of the power supply system to meet growing energy demand.
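The generic simulated annealing loop underneath such a method is short. The power-dispatch objective, constraints, and neighborhood move are the paper's own and are left abstract here as caller-supplied functions:

```python
import math
import random

def anneal(objective, init, neighbor, t0=10.0, cooling=0.95, steps=500, seed=0):
    """Minimal simulated annealing for minimisation.

    neighbor(x, rng) proposes a candidate near x. Downhill moves are
    always accepted; uphill moves with Boltzmann probability, so the
    search can escape local minima while the temperature is high.
    """
    rng = random.Random(seed)
    x, fx = init, objective(init)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = objective(y)
        if fy <= fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest
```

For the power-supply problem, `x` would be a resource-allocation vector and infeasible moves would be rejected or penalized inside `objective`.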
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 1307312 (2024) https://doi.org/10.1117/12.3026399
This paper presents an approach to building large-integer multipliers for Elliptic Curve Cryptography (ECC). Compared to the traditional divide-and-conquer method, our method introduces an n-term Karatsuba-like algorithm and adopts a cascaded structure for integer multiplication, which simplifies the overall structure and yields a fine-grained multiplier design. Targeting the operand widths required by random elliptic curves over prime fields, we implement the design on a Virtex-7 FPGA, using the internal DSP slices as the fundamental multiplier resources. Synthesis results for designs with different parameters are compared quantitatively, demonstrating the flexibility of our method.
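The 2-term Karatsuba split is the base case of the n-term family the design builds on: three recursive multiplications replace four. A software sketch on Python integers (the n-term split and the DSP cascade are hardware details not reproduced here):

```python
def karatsuba(x, y, cutoff_bits=64):
    """Multiply nonnegative integers with 2-term Karatsuba recursion.

    Split x = xh*2^m + xl, y = yh*2^m + yl; then
    x*y = xh*yh*2^(2m) + (xh*yl + xl*yh)*2^m + xl*yl,
    and the middle term is recovered from (xh+xl)(yh+yl) - xh*yh - xl*yl,
    saving one of the four sub-multiplications.
    """
    if x.bit_length() <= cutoff_bits or y.bit_length() <= cutoff_bits:
        return x * y  # base multiplier (a DSP slice in hardware)
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)
    yh, yl = y >> m, y & ((1 << m) - 1)
    a = karatsuba(xh, yh)
    b = karatsuba(xl, yl)
    c = karatsuba(xh + xl, yh + yl) - a - b  # cross term
    return (a << (2 * m)) + (c << m) + b
```

An n-term variant splits each operand into n limbs and trades more additions for fewer limb products, which maps naturally onto a fixed pool of DSP multipliers.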
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 1307313 (2024) https://doi.org/10.1117/12.3026306
Genetic Algorithms are an important method for optimizing composite materials. When they are applied to composite-material design optimization, a significant problem arises: material types may not map directly to binary encodings. In this paper, we propose a hybrid natural-number/binary encoding to overcome this issue and improve generality. Furthermore, one of the major drawbacks of Genetic Algorithms is premature convergence; this paper addresses it with a method that escapes local optima by varying the mutation rate.
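The variable-mutation-rate idea can be sketched as: measure population diversity and boost the mutation rate when diversity collapses. The diversity measure and the triggering rule below are assumed simple forms, not the paper's:

```python
def diversity(pop):
    """Fraction of gene positions that differ somewhere in the population.

    pop: list of equal-length genome tuples (natural-number or binary
    genes both work, matching a hybrid encoding).
    """
    differing = sum(len(set(genes)) > 1 for genes in zip(*pop))
    return differing / len(pop[0])

def adaptive_mutation_rate(base, div, d_min=0.05, boost=5.0):
    """Boost the mutation rate when diversity falls below a threshold,
    helping the population escape premature convergence."""
    return min(1.0, base * boost) if div < d_min else base
```

In the GA loop, `diversity(pop)` would be evaluated each generation and fed into `adaptive_mutation_rate` before the mutation step.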
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 1307314 (2024) https://doi.org/10.1117/12.3026325
The supercomputing Internet builds on supercomputers and high-speed networks to interconnect data and computing resources across regional supercomputing centers, alleviating the uneven distribution of computing resources and providing diversified computing services. Workflows consist of many related tasks, and scheduling them is crucial in the supercomputing Internet; good algorithms help provide users with higher-quality services. Traditional heuristics usually focus on shortening the makespan and tend to prioritize the most powerful resources, but this approach is prone to local optima and increases cost. In this article, we model workflow makespan and supercomputing-resource cost as a multi-objective optimization problem and propose a multi-objective ant colony algorithm (MOACO). Experiments show that the algorithm effectively reduces both makespan and cost while fully utilizing available resources.
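The basic operation behind any makespan/cost trade-off is non-dominated (Pareto) filtering of candidate schedules; a minimal sketch with both objectives minimized (the MOACO pheromone mechanics are not reproduced):

```python
def pareto_front(solutions):
    """Return the non-dominated (makespan, cost) pairs.

    A solution is dominated if some other solution is at least as good
    in both objectives and is not identical to it (both minimised).
    """
    front = []
    for s in solutions:
        dominated = any(
            d[0] <= s[0] and d[1] <= s[1] and d != s for d in solutions
        )
        if not dominated:
            front.append(s)
    return front
```

A multi-objective ant colony would keep such a front as its archive and deposit pheromone along the paths of archived solutions.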
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 1307315 (2024) https://doi.org/10.1117/12.3026700
With increasing worldwide concern for environmental protection, green certificates are being used in ever more industries. As recognition and records of specific sustainable-development practices, accurate classification and labeling of green-certificate issuance texts is important for meeting sustainable-development goals. To improve classification accuracy, a blockchain-based multi-label classification algorithm for green-certificate issuance texts is designed: multi-label dimensionality reduction is applied to the texts and label vectors are extracted; labels are clustered and the training set is revised on the basis of blockchain technology; and a classifier completes the multi-label classification. Tests show that the algorithm achieves high accuracy, low redundancy, and fast response, indicating that it is well suited to processing green-certificate issuance texts.
Wireless Communication and Digital Signal Processing
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 1307316 (2024) https://doi.org/10.1117/12.3026705
Microwave signals transmitted through optical fibers have the advantages of large communication capacity, low transmission loss, high bandwidth, easy maintenance, and high security. Coherent optical detection of microwave signals overcomes the low sensitivity and limited power of direct detection, and is therefore widely used in wireless communication systems, radar systems, satellite communication systems, and other fields. This article first provides a theoretical analysis of three modulation methods (AM, PM, FM) and of homodyne and heterodyne coherent detection. By observing spectral changes, the coherent demodulation theory corresponding to each of the three modulation methods was derived, and the results showed that amplitude modulation is the most suitable method for microwave signal transmission. The study analyzed the nonlinear characteristics of MZ modulators on LiNbO3 substrates during modulation and pointed out that the greater the deviation of the modulation signal from the standard operating point, the worse the linearity. The results indicate that replacing traditional microwave transmission and processing with fiber optic transmission of microwave signals can effectively reduce energy consumption in long-distance signal transmission and improve the efficiency, security, and confidentiality of signal transmission.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 1307317 (2024) https://doi.org/10.1117/12.3026499
This paper conducts an in-depth requirements analysis of artificial rainmaking operations using an ontology-guided problem framework method, constructing a clear and organized structure that systematically reveals the complexity and multi-level nature of such operations. In the requirements modeling process, we precisely define key aspects such as business goals, technical constraints, and environmental impacts, contributing to a more comprehensive understanding of the nature of the business. The ontology-guided problem framework method provides a unique and practical approach to artificial weather modification operations, offering systematic and actionable guidance for related decision-making and implementation. The aim of this paper is to provide more precise guidance for the planning and implementation of artificial rainmaking projects, thereby promoting the further development and application of artificial weather modification technology.
Zhongxiang Cai, Qingyun Xu, Junbiao He, Sicheng Tao, Yuan Li
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 1307318 (2024) https://doi.org/10.1117/12.3026634
With economic globalization and the implementation of the Belt and Road Initiative, Chinese enterprises are expanding their business overseas at a faster pace. However, some underdeveloped overseas areas suffer from weak communication infrastructure, insufficient signal coverage, and poor network speeds. These problems constrain the information management capabilities of Chinese enterprises' overseas projects, pose potential threats to the safety of employees stationed abroad, and may prevent emergencies from being reported and handled in a timely manner. This paper takes Pakistan, an important pivot country along the Belt and Road, as the main research object and designs a combined hardware and software system to address the instability of single-operator networks. The system builds a local traffic resource pool through a cloud SIM bank, uses dedicated mobile Wi-Fi equipment to intelligently match SIM cards with operators, and cooperates with vSIM remote scheduling management and a mobility load balancing mechanism to provide users with a more stable, higher-quality operator network connection. Local field tests show that this technology can significantly improve network quality and connection stability under existing communication infrastructure conditions. The results have practical value for improving the environment in which Chinese enterprises expand their business overseas and for enhancing the information management capability of overseas projects.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 1307319 (2024) https://doi.org/10.1117/12.3026286
In the field of computer vision, image-level Weakly Supervised Semantic Segmentation (WSSS) poses a significant research challenge. Existing WSSS methods rely primarily on class activation maps (CAM). Owing to the disparity between fully-supervised and weakly-supervised approaches, CAM often yields imprecise and coarse semantic information when generating target masks. To address semantic coarseness and detail loss in image-level WSSS, this paper introduces a novel approach that exploits pixel-level relationships to optimize and refine the features extracted by the network. While retaining image-level class labels as the only weak supervision for semantic segmentation, this study forgoes extensive modifications to the model structure in favor of incorporating a Graph Attention Module (GAM). By constructing a weighted undirected graph with pixels as vertices and pixel affinities as edge weights, the module simulates interactions among neighboring pixels through graph representation learning. This arrangement allows relational information between pixels to propagate and integrate effectively, enhancing the overall representational quality of the image. By employing graph constraints to facilitate semantic dissemination, our method optimizes the network's localization map, yielding more precise CAMs and more accurate pseudo-labels for subsequent semantic segmentation network training. We evaluated our approach with quantitative and qualitative experiments on the PASCAL VOC 2012 and MS COCO 2014 datasets; compared with existing advanced weakly-supervised semantic segmentation methods, it shows notable performance improvements.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130731A (2024) https://doi.org/10.1117/12.3026637
In the wave of future large-scale drone development, the transformation of drone mission modes from reconnaissance only to combined reconnaissance and combat has become an unstoppable trend. When carrying out demanding tasks, high-frequency changes in body attitude pose a huge challenge to the unmanned aerial vehicle communication system, which requires the drone to have longer flight endurance, larger mission radii, and stronger maneuverability. Starting from the requirements that airborne satellite communication antennas place on satellite visibility, this article establishes a three-dimensional mathematical model, solves the satellite visibility conditions, and uses STK and Matlab simulation software to simulate and analyze the impact of drone maneuvering on satellite communication quality during long-range flight missions. Based on the results, advance warnings are generated for drone route planning.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130731B (2024) https://doi.org/10.1117/12.3026403
We studied efficient computation methods for dilution of precision (DOP). Extending the differential superposition principle, we propose the generalized differential superposition principle (GDSP) method and the algebraic cofactor (AC) method for DOP calculation. Both are fast algorithms based on efficient computation that avoid matrix inversion. A comparison of the calculation speed and algorithm structure of the definition-expression method, the GDSP method, and the AC method was conducted: when n = 4, the GDSP method has the simplest algorithm structure; when n ≥ 4, the AC method always outperforms the definition-expression method, and when the complicated computations of the definition method and the AC method are decomposed into simple calculations, the difference in the number of multiplications stays at a fixed value. The results validate that the algorithm structure and computing speed of the proposed methods outperform the definition-expression method, and that the DOP obtained by our methods is a fast, general, and accurate analytical solution. The GDSP method also applies to positioning performance analysis with complex input errors.
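The central idea of avoiding explicit matrix inversion can be illustrated with a generic cofactor-based sketch (not the paper's optimized AC algorithm): for the normal matrix M = HᵀH, trace(M⁻¹) equals the sum of the principal cofactors of M divided by det(M), so GDOP = √trace(M⁻¹) never requires forming M⁻¹.

```python
def det(m):
    # Determinant via recursive cofactor expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    return sum(((-1) ** j) * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def gdop(M):
    # GDOP^2 = trace(M^{-1}) = (sum of principal cofactors of M) / det(M).
    # Principal (diagonal) cofactors carry a +1 sign, so no inverse is formed.
    n = len(M)
    cof_sum = 0.0
    for i in range(n):
        minor = [[M[r][c] for c in range(n) if c != i]
                 for r in range(n) if r != i]
        cof_sum += det(minor)
    return (cof_sum / det(M)) ** 0.5
```

For M equal to the 4×4 identity this gives trace(M⁻¹) = 4 and GDOP = 2, matching the textbook definition-expression result.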
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130731C (2024) https://doi.org/10.1117/12.3026383
The clock tree synthesis (CTS) step is critical in the physical design of very-large-scale integrated circuits (VLSI). Good CTS results reduce the skew of clock networks and decrease chip power consumption while improving chip performance and reliability. This paper proposes a register clustering method based on the non-dominated sorting genetic algorithm II (NSGA-II) to generate the leaf-level topology of the clock tree, addressing multi-objective optimization issues in integrated circuit physical design. We model register clustering as a multi-objective optimization problem: objective functions are designed by analyzing the clock network and power consumption models, register clustering schemes are encoded, and Pareto optimal solutions are obtained through iterative evolution and non-dominated sorting. The method is integrated into the traditional CTS flow. Three circuits selected from the ISCAS89 benchmark suite were tested and analyzed to prove its effectiveness. Experimental results show that, compared with traditional clock tree synthesis methods, our approach reduces timing skew by more than 5%, maximum clock delay by over 15%, and power consumption by over 10%. The strategy balances multiple physical design objectives and obtains superior clustering schemes, offering an effective algorithm for multi-objective register clustering.
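The non-dominated sorting at the heart of NSGA-II can be sketched generically as follows; the objective tuples (e.g. skew, power) and minimization convention are illustrative assumptions, not the paper's exact encoding.

```python
def pareto_front(solutions):
    # First non-dominated front (NSGA-II rank 1) for minimization objectives.
    # A solution t dominates s when t is no worse in every objective and
    # strictly better in at least one.
    def dominates(t, s):
        return (all(a <= b for a, b in zip(t, s))
                and any(a < b for a, b in zip(t, s)))
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions)]
```

NSGA-II repeats this filtering on the remaining solutions to build successive fronts, then uses crowding distance to break ties within a front.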
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130731D (2024) https://doi.org/10.1117/12.3026625
The Aeronautical Ad Hoc Network (AANET) is a novel solution for significantly alleviating the resource constraints of civil aviation communication systems. However, owing to the characteristics of civil aviation, such as large network scale, rapid topological changes, sparse node distribution, and unstable channels, existing ad hoc network routing protocols cannot be applied directly. In this study, we propose Movement Change Triggered Beacon routing based on GPSR (MCTB-GPSR). The method detects changes in a node's movement state to trigger beacon sending for maintaining the neighbor table, replacing the conventional GPSR mechanism of sending beacons at fixed time intervals. Simulation experiments show that, compared with GPSR, MCTB-GPSR improves the packet delivery ratio by 2.2%, reduces end-to-end delay by 0.12 s, and reduces routing overhead by 9 kb/s.
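The movement-change trigger can be sketched as a simple predicate; the threshold values and the speed/heading state representation below are hypothetical, chosen only to illustrate the idea of beaconing on state change rather than on a timer.

```python
def should_beacon(prev, curr, speed_thresh=10.0, heading_thresh=5.0):
    # Fire a beacon only when the node's movement state has changed
    # appreciably since the last beacon, instead of at fixed intervals.
    # prev/curr: dicts with "speed" (e.g. m/s) and "heading" (degrees).
    dv = abs(curr["speed"] - prev["speed"])
    dh = abs(curr["heading"] - prev["heading"]) % 360.0
    dh = min(dh, 360.0 - dh)  # wrap-around heading difference
    return dv > speed_thresh or dh > heading_thresh
```

A node cruising at constant speed and heading thus stays silent, which is where the reported routing-overhead savings would come from.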
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130731E (2024) https://doi.org/10.1117/12.3026498
With the development of communication technology from 1G to 5G, emerging intelligent services place higher requirements on network performance. To further improve communication efficiency, new semantic communication systems can be designed at the level of meaning expression. Semantic communication is concerned with the meaning carried in the transmitted content, that is, the semantic information, which is what gets encoded. In this paper, a semantic communication framework for unmanned vehicles is designed with text as the transmission content. The framework consists of a semantic layer and a traditional transmission layer, which respectively guarantee semantic extraction and communication accuracy between unmanned vehicles. A Transformer model is used to design the semantic encoder and decoder; its attention mechanism can effectively extract the semantic information of a sentence. Experimental results show that the BLEU scores obtained by the semantic communication system for unmanned vehicles are higher than those of a traditional communication system across different signal-to-noise ratio environments, indicating higher transmission reliability.
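BLEU, the evaluation metric cited above, is built from clipped n-gram precision between the reconstructed and reference sentences; the sketch below shows only the unigram factor (full BLEU multiplies several n-gram precisions and applies a brevity penalty) and is a generic illustration, not the paper's evaluation code.

```python
from collections import Counter

def unigram_precision(candidate, reference):
    # Clipped unigram precision: each candidate word counts at most as
    # many times as it appears in the reference.
    cand = candidate.split()
    ref_counts = Counter(reference.split())
    cand_counts = Counter(cand)
    clipped = sum(min(c, ref_counts[w]) for w, c in cand_counts.items())
    return clipped / len(cand)
```

Clipping is what prevents a degenerate output like "the the the" from scoring perfectly against any reference containing "the".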
Chunyan Yang, Songming Han, DongMei Bin, Ying Ling, Hua Fu
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130731F (2024) https://doi.org/10.1117/12.3026713
As the power system undergoes continuous digitization and networking, the associated network security challenges have become increasingly prominent. The zero trust framework addresses these issues as a cutting-edge security concept that calls for sustained vigilance toward users, devices, and applications within the network environment; its core principle is to minimize the scope of trust granted. This study examines the application of the zero trust framework to the architecture of power system networks, with the primary objective of enhancing network security and thwarting potential threats. The article first analyzes the network security requirements of internet applications and then proposes a zero trust architecture as a security solution. A zero trust-based smart grid security protection architecture is then constructed, encompassing terminal trusted perception agents, multi-source data aggregation platforms, intelligent trust evaluation platforms, dynamic access control platforms, and trusted access agents. Finally, the efficacy of the zero trust framework in power system network architecture is validated through case studies and simulation experiments. The findings show a significant enhancement in the network security of the power system with the incorporation of the zero trust framework, establishing an effective security mechanism conducive to the digital transformation of future power systems. In summary, the study contributes a novel security approach and technical solution for the design of power system network architecture, offering valuable insights and reference for the development of power system network security.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130731G (2024) https://doi.org/10.1117/12.3026623
Airborne multispectral light detection and ranging (MS LiDAR) produces large volumes of point cloud data that must be annotated for supervised learning. However, the annotation cost of large-scale point clouds is high, which easily leads to incomplete or inaccurate annotation and degrades classification accuracy. This article therefore proposes a new weakly supervised MS LiDAR point cloud classification method based on a kernel point convolutional semantic query network. First, the kernel point convolutional semantic query network is used to detect weak targets in the point clouds; on this basis, the point cloud data are sparsified, and weakly supervised learning is introduced to classify the MS LiDAR point clouds. Experimental results verify that the method accurately classifies different types of point cloud data with a processing time kept within 5 ms, giving it significant application advantages over traditional methods.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130731H (2024) https://doi.org/10.1117/12.3026618
To meet the needs of long-range detection of nonlinear targets, a pair of transmitting and receiving helical antennas for a high-power hand-held harmonic radar is described in this paper. The operating frequency of the transmitting antenna is 3.0 GHz, and the operating frequencies of the receiving antenna are 6.0 GHz and 9.0 GHz. Simulated results show that both antennas are left-hand circularly polarized. The transmitting antenna has an overall impedance bandwidth (VSWR ≤ 2) from 2.6 GHz to 3.7 GHz (24.7%) and an axial ratio (AR) bandwidth (AR ≤ 3 dB) from 2.9 GHz to 3.45 GHz (16.2%). The receiving antenna has an overall impedance bandwidth (VSWR ≤ 2) from 5.8 GHz to 10.5 GHz and an axial ratio (AR) bandwidth (AR ≤ 3 dB) from 5.8 GHz to 10.5 GHz for the 2nd and 3rd harmonics. A maximum gain of 9 dBic for the transmitting antenna and 10.7 dBic for the receiving antenna is obtained as well.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130731I (2024) https://doi.org/10.1117/12.3026655
This paper analyzes the main elements of vehicle wire harness diagrams and summarizes the rules that a correct wire harness diagram should adhere to. Building on this foundation, and with the aid of Teigha, entities within vehicle wire harness diagrams are successfully extracted. A node recognition algorithm and a wire harness recognition algorithm are designed that can correctly analyze the elements in a harness diagram and extract data from them. A comparative experiment against a traditional electronic wiring board showed that the intelligent electronic wiring board using these algorithms shortened the average path resolution time from 4.3 hours to 7.6 seconds, with no errors occurring during the process.
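Once the harness diagram's nodes and wires are extracted into a connectivity graph, path resolution reduces to graph search; the breadth-first sketch below is a generic illustration of that step (the adjacency representation and function name are assumptions, not the paper's implementation).

```python
from collections import deque

def find_path(adj, start, goal):
    # BFS over the harness connectivity graph: returns the shortest
    # node sequence from start to goal, or None if unconnected.
    # adj: dict mapping each node to its list of directly wired neighbors.
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adj.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Automating this traversal over an extracted graph is what turns an hours-long manual trace into a sub-second query.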
Wu Xie, Zhenzhao Su, Huimin Zhang, Ping Kang, Kun Qin, Yong Fan, Quanyou Zhao
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130731J (2024) https://doi.org/10.1117/12.3026677
Methods for detecting yellow dragon disease (citrus Huanglongbing), which spreads through citrus psyllid transmission networks much as information spreads through social networks, are very important for diverse citrus trees and farmers. Although current methods offer some detection accuracy at low cost, the detection process is relatively cumbersome and the detection cycle long, making them difficult to apply for timely detection in practical large-scale orange farms. A new method for detecting the spread of citrus yellow dragon disease using Spark and deep learning is proposed for this problem. Citrus field video stream data are obtained through high-definition cameras and transferred to a Spark cluster through Kafka, where the Structured Streaming component of the Spark framework processes the monitored video and image streams. We construct a yellow dragon disease detection model based on YOLOv7 and use self-made disease images as the training and test data sets; preliminary experiments show the method achieves an accuracy of 83.14%. To reduce missed and false detections, shallow detection heads are added to the feature fusion network to extract and fuse shallow network information, and the convolution operations in the ELAN (Efficient Layer Aggregation Network) module are replaced with depthwise separable convolutions to reduce the number of model parameters. Preliminary results show that, compared with the original YOLOv7 model, the improved detection model gains 2.43% in accuracy while maintaining higher detection accuracy at lower time cost than before.
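The parameter saving from the depthwise separable substitution mentioned above follows directly from the standard parameter-count formulas; the sketch below is generic arithmetic (the example channel sizes are illustrative, not taken from the paper's network).

```python
def conv_params(k, c_in, c_out):
    # Standard k x k convolution: every output channel mixes all inputs.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise k x k filter per input channel, then a 1x1 pointwise
    # convolution to mix channels.
    return k * k * c_in + c_in * c_out
```

For a 3×3 layer with 64 input and 128 output channels, this replaces 73,728 weights with 8,768, roughly an 8× reduction for that layer.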
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130731K (2024) https://doi.org/10.1117/12.3026320
As carriers for long-distance, high-capacity transmission of energy, a large number of overhead transmission lines cross unmanned areas without mobile network signals, which challenges the high-speed, long-distance transmission of video, image, and sensor data from power infrastructure construction sites. High-speed free-space laser communication, a technology for high-speed, long-distance data transmission, has unique advantages in the field of power infrastructure construction. This article first explains the principle and advantages of high-speed free-space laser communication. Then, to address the stringent alignment accuracy requirements and long link setup times of free-space optical communication, an all-optical fast alignment algorithm that requires no auxiliary beacon is introduced. Finally, based on the communication requirements of transmission line construction sites, the key parameters and device selection of the high-speed free-space laser communication equipment are worked out. The results can provide important support for high-speed, long-distance transmission of image, video, and sensor signals at transmission line construction sites in unmanned areas without public network coverage, helping to ensure construction quality and safety.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130731L (2024) https://doi.org/10.1117/12.3026666
In embedded human-computer interaction systems, the development of high-performance gesture recognition technology is crucial due to the demand for low power consumption and efficient processing. Addressing the challenge of high-precision gesture recognition in complex backgrounds, a high-performance embedded gesture recognition method based on an Expandable Residual Attention mechanism is proposed. The method enhances the extraction of different-scale gesture features by introducing the Expandable Residual Attention mechanism into YOLOv7. Additionally, to handle the high degrees of freedom and self-occlusion of hand gestures, Soft-NMS with a penalty term is introduced to effectively reduce the probability of missed targets. Finally, the gesture recognition model is compressed and accelerated with TensorRT. Experimental results on the Jochen Triesch Static Hand Posture Database demonstrate that the proposed method significantly improves gesture recognition accuracy while maintaining high inference efficiency.
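The Soft-NMS penalty idea can be shown with the standard Gaussian decay formulation (the abstract does not specify which penalty the authors use, so this is the common textbook variant, with an illustrative `sigma`): instead of deleting a box that overlaps a higher-scoring one, its confidence is decayed continuously with overlap.

```python
import math

def soft_nms_decay(score, iou, sigma=0.5):
    # Gaussian Soft-NMS penalty: overlapping detections keep a reduced
    # score rather than being suppressed outright, which helps with
    # self-occluding targets such as hands.
    return score * math.exp(-(iou ** 2) / sigma)
```

A box with no overlap keeps its score; a box with IoU 0.9 against a stronger detection is decayed to roughly a fifth of its score rather than dropped, so it can still survive thresholding when the overlap is a genuine second target.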
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130731M (2024) https://doi.org/10.1117/12.3026643
Aiming at route planning for the collaborative operation of unmanned aerial vehicle (UAV) hangars in power inspection scenarios, this paper presents a quantitative calculation and planning method. Based on the operational constraints of UAV hangar inspection, and combining the multi-objective optimization conditions of minimum inspection distance and shortest task completion time, a comprehensive utility function is constructed and an integer programming model established; an adaptive large neighborhood search algorithm is proposed to solve the model and compute the optimal cooperative inspection route. Comparison with the simulated annealing and ant colony optimization algorithms demonstrates the effectiveness of the proposed algorithm. A case study of UAV hangar collaborative inspection route planning in an area of Foshan shows that the method effectively improves the efficiency of collaborative inspection operations and reduces resource consumption, giving it practical engineering value.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130731N (2024) https://doi.org/10.1117/12.3026359
To address the challenges of small defect objects and complex backgrounds in photovoltaic panel defect detection, an improved YOLOv7-based defect detection method is proposed in this paper. A coordinate attention mechanism is incorporated to enhance the model's global perception capability, and the C-IoU loss function is adopted to optimize training while improving training accuracy. Experimental results on a public dataset demonstrate that the proposed method outperforms baseline object detection algorithms, achieving a mean Average Precision (mAP) of 93.9%.
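The C-IoU loss builds on plain intersection-over-union by adding center-distance and aspect-ratio penalty terms; the generic IoU computation it starts from can be sketched as follows (box format `(x1, y1, x2, y2)` is an illustrative convention, not the paper's code).

```python
def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

Because plain IoU gives no gradient for non-overlapping boxes, C-IoU's extra terms keep small, poorly localized defects trainable.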
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130731O (2024) https://doi.org/10.1117/12.3026682
Sentence semantic matching plays a crucial role in various natural language processing (NLP) tasks such as question answering, information retrieval, and text classification. Prior research has made significant contributions to sentence semantic matching, but there is still room for improvement, especially in handling subtle semantic representations. This paper introduces a new approach to sentence semantic matching that integrates Isotropic Batch Normalization and the Generalized Pooling Operator, two advanced neural network techniques. By combining them, we aim to improve the accuracy and efficiency of semantic matching and address the challenge of capturing subtle semantic representations. We compare our approach with existing state-of-the-art models and demonstrate its effectiveness through comprehensive experiments on benchmark datasets.
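The abstract does not specify the form of the Generalized Pooling Operator, so as an illustration of how a pooling operator can generalize the usual mean/max choices, the sketch below implements generalized-mean (GeM) pooling over a sentence's token embeddings — an assumption for exposition, not necessarily the operator the authors use.

```python
import numpy as np

def gem_pool(tokens, p=3.0, eps=1e-6):
    """Generalized-mean pooling over the token axis:
    pooled = (mean(x ** p)) ** (1/p).
    p = 1 recovers mean pooling; p -> infinity approaches max pooling."""
    x = np.clip(tokens, eps, None)  # p-th powers need non-negative inputs
    return np.mean(x ** p, axis=0) ** (1.0 / p)
```

A single learnable scalar `p` thus interpolates between averaging (which captures broad context) and max pooling (which captures salient features) — the kind of flexibility that helps with subtle semantic distinctions.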
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130731P (2024) https://doi.org/10.1117/12.3026622
To reduce or avoid the adverse effects of electromagnetic interference (EMI) on power systems and on civil aviation communication, navigation, and monitoring equipment, and to improve operational safety and stability, an EMI suppression technique for such equipment is proposed. Features of the interference signal are extracted from the maximum-to-average amplitude ratio and from the time-domain amplitude characteristics of the signal frequency; the extracted signals are brought down to an intermediate frequency, the interference information is identified, and channel coding is applied for error correction, so that the data can be recovered to the maximum extent and the suppression result obtained. Experimental results show that the variation range of the suppression amount is small, significantly reducing fluctuations in signal amplitude and improving the stability and reliability of the signal.
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130731Q (2024) https://doi.org/10.1117/12.3026639
Secure communication protocols are an important means of ensuring the security of information transmission, and test sequence generation algorithms are used to evaluate and verify them. To improve the performance and reliability of secure communication protocols, a relevance vector machine (RVM) hyperparameter optimization algorithm for generating secure communication protocol test sequences is proposed. We build a finite state machine (FSM) model of the protocol, analyze the process of optimizing the RVM hyperparameters associated with the FSM model, and generate test sequences for the protocol on this basis. Experimental results show that the proposed algorithm performs well and can effectively improve the efficiency of test sequence generation.
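To illustrate how an FSM model of a protocol yields test sequences, the sketch below builds a transition-covering input sequence for a toy handshake FSM. This is a generic transition-tour construction, not the paper's RVM-based algorithm; the FSM, its states, and inputs are invented for illustration.

```python
from collections import deque

def path_to_untested(fsm, state, untested):
    """Breadth-first search for a short input path whose final input
    fires a not-yet-covered transition."""
    frontier, seen = deque([(state, [])]), {state}
    while frontier:
        s, path = frontier.popleft()
        for inp, nxt in fsm[s].items():
            if (s, inp) in untested:
                return path + [inp]
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [inp]))
    return None

def transition_tour(fsm, start):
    """Input sequence exercising every transition of fsm at least once.
    fsm: {state: {input: next_state}}."""
    seq, state = [], start
    untested = {(s, i) for s, trans in fsm.items() for i in trans}
    while untested:
        path = path_to_untested(fsm, state, untested)
        if path is None:          # remaining transitions are unreachable
            break
        for inp in path:
            untested.discard((state, inp))
            seq.append(inp)
            state = fsm[state][inp]
    return seq
```

Replaying the returned sequence from the start state fires every transition of the model, which is the usual coverage criterion that a test-sequence generator for a protocol FSM must satisfy.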
Yang Li, Lingchu Zhao, Jia Zhu, Yonglin Liu, Jian Ding, Jia Yang
Proceedings Volume Third International Conference on High Performance Computing and Communication Engineering (HPCCE 2023), 130731R (2024) https://doi.org/10.1117/12.3026491
Covert communication achieves high-level secure transmission by concealing the communication behaviour itself. This paper centres on covert communication in the power Internet of Things (PIoT). Specifically, we explore a scenario in which power users (PU) transmit information to the master station (MS) while exposed to a potential eavesdropper, the warden Willie. We further assume that Willie experiences noise uncertainty when detecting communication behaviour, which makes accurate detection difficult and significantly affects both Willie's detection error probability (DEP) and the overall covert performance of the proposed model. Simulation results indicate that the presence of noise uncertainty enables MS and PU to achieve covert transmission: by leveraging this uncertainty, they can effectively conceal their communication behaviour and minimize the risk of being detected by Willie. This provides a solution for high-level secure communication in the PIoT and effectively protects the privacy of PU.
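The effect of Willie's noise uncertainty on the detection error probability can be illustrated with a toy Monte-Carlo model of an energy detector. Everything here — the log-uniform uncertainty model, Gaussian signalling, the threshold choice, and all parameter values — is an assumption for illustration, not the paper's system model.

```python
import math
import random

def radiometer_dep(snr_db=-5.0, n=100, rho_db=3.0, trials=2000, seed=1):
    """Monte-Carlo estimate of Willie's detection error probability
    DEP = P(false alarm) + P(missed detection) for an energy detector
    whose noise power is known only to within +/- rho_db (log-uniform
    around a nominal value of 1)."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)            # PU signal power seen by Willie
    rho = math.log(10 ** (rho_db / 10))  # uncertainty half-width (nats)
    thresh = 1.0 + snr / 2               # nominal noise power + half signal
    fa = md = 0
    for _ in range(trials):
        sigma2 = math.exp(rng.uniform(-rho, rho))   # true noise power
        # average received power over n samples under H0 / H1
        p0 = sum(rng.gauss(0, math.sqrt(sigma2)) ** 2 for _ in range(n)) / n
        p1 = sum(rng.gauss(0, math.sqrt(sigma2 + snr)) ** 2 for _ in range(n)) / n
        fa += p0 > thresh
        md += p1 <= thresh
    return fa / trials + md / trials
```

With a nearly exact noise estimate the detector separates the two hypotheses reasonably well; widening the uncertainty pushes the DEP toward 1 (a blind detector), which is exactly the regime that lets MS and PU transmit covertly.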