KEYWORDS: Feature extraction, Data modeling, Semantics, Education and training, Matrices, Deep learning, Binary data, Classification systems, Statistical modeling, Intelligence systems
Current approaches to document-level event extraction face challenges such as event arguments scattered across sentences and multiple event records within a single document. In this study, we propose a solution that combines context-paragraph information modeling with argument extraction to tackle these issues. First, lengthy documents are segmented into smaller parts, and pre-trained models are used to obtain a paragraph vector matrix for each segment. The annotated sentence vector matrices and paragraph matrices are then combined through a dot-product fusion model, concatenated, and pooled to generate improved sentence vector embeddings. Next, the entities extracted by named entity recognition are used to select pseudo-trigger words and to construct adjacency similarity matrices and composite graphs. Finally, the task of populating event table elements is cast as a multi-label classification task to address the issue of overlapping roles. Experiments on the ChFinAnn dataset demonstrate an overall F1 improvement of 1.5 percentage points, together with a significant increase in the model's inference speed.
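The abstract does not specify the exact form of the dot-product fusion and pooling step, so the following is only a minimal sketch of one plausible reading; the element-wise interaction and the max-pooling choice are assumptions made for illustration.

```python
import torch

def fuse_sentence_paragraph(sent_mat: torch.Tensor, para_mat: torch.Tensor) -> torch.Tensor:
    """Hypothetical fusion of sentence and paragraph vector matrices.

    sent_mat: (num_sentences, hidden) sentence vectors from the pre-trained encoder.
    para_mat: (num_sentences, hidden) paragraph vectors, one per sentence's segment.
    """
    # Dot-product-style (element-wise) interaction between the two views.
    interaction = sent_mat * para_mat
    # Concatenate the original sentence vectors with the interaction features.
    fused = torch.cat([sent_mat, interaction], dim=-1)        # (num_sentences, 2 * hidden)
    # Pool the concatenated features back down to the original hidden size.
    pooled, _ = fused.view(fused.size(0), 2, -1).max(dim=1)   # (num_sentences, hidden)
    return pooled
```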
Personalized course recommendation in Massive Open Online Courses (MOOCs) is key to improving the learning experience. Traditional recommendation systems often overlook the diversity of user preferences and the consistency of users across different courses, thereby limiting the accuracy and personalization of recommendations. To address these challenges, this paper proposes a Multi-View Attention Graph Convolutional Network (MVA-GCN), which aims to extract the complex relationships between courses and users more comprehensively and thus predict user preferences precisely. First, a heterogeneous information network (HIN) is constructed from five types of entities and four types of relationships in the recommendation system. To mine the hidden relations of users under different meta-paths, a multi-view attention mechanism is introduced into MVA-GCN, and a contrastive-learning-based loss function is proposed to align entities across different meta-path views. Experimental results on the MOOC XuetangX dataset demonstrate that MVA-GCN significantly outperforms recommendation baselines on multiple evaluation metrics, validating the efficacy of the proposed method.
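Neither the attention scoring function nor the exact contrastive objective is given in the abstract; the sketch below assumes a small two-layer scoring network for the view-level attention and an InfoNCE-style loss for the cross-view alignment, purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiViewAttention(nn.Module):
    """Hypothetical attention-weighted fusion of node embeddings from different meta-path views."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        # Assumed scoring network: one weight per view, shared across nodes.
        self.score = nn.Sequential(nn.Linear(hidden_dim, hidden_dim),
                                   nn.Tanh(),
                                   nn.Linear(hidden_dim, 1, bias=False))

    def forward(self, view_embeddings: torch.Tensor) -> torch.Tensor:
        # view_embeddings: (num_views, num_nodes, hidden_dim)
        weights = self.score(view_embeddings.mean(dim=1))        # (num_views, 1)
        weights = torch.softmax(weights, dim=0).unsqueeze(-1)    # (num_views, 1, 1)
        return (weights * view_embeddings).sum(dim=0)            # (num_nodes, hidden_dim)

def contrastive_alignment_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5):
    """InfoNCE-style loss aligning the same entity across two meta-path views (an assumed form)."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature                           # (N, N) cross-view similarities
    targets = torch.arange(z1.size(0), device=z1.device)         # positives lie on the diagonal
    return F.cross_entropy(logits, targets)
```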
Most news recommendation methods assume that every item in the implicit feedback sequence contributes positively to the final result, yet such sequences also contain irrelevant news (i.e., false user behavior). This makes them a noisy source for mining user interest information. To address this issue, a news recommendation model based on positive and negative implicit feedback feature fusion with a sparse attention mechanism is proposed. The method is built on deep learning: in the user interest representation module, an adaptive sparse transformation function is introduced to eliminate the influence of irrelevant news in the user's positive and negative implicit feedback sequences. The user's interest features are then constructed more accurately by fusing the positive and negative implicit feedback features. Finally, the proposed model is evaluated and analyzed on the MIND dataset. Compared with other news recommendation methods, it achieves clear improvements on the recommendation evaluation metrics.
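The abstract does not name the adaptive sparse transformation function. Sparsemax is one well-known sparse alternative to softmax that assigns exactly zero weight to low-scoring items, so the sketch below uses it as a stand-in for suppressing irrelevant news; the final fusion operator is likewise an assumption.

```python
import torch

def sparsemax(scores: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """Sparsemax (Martins & Astudillo, 2016): a sparse projection of scores onto the simplex."""
    sorted_scores, _ = torch.sort(scores, descending=True, dim=dim)
    cumsum = sorted_scores.cumsum(dim) - 1
    k = torch.arange(1, scores.size(dim) + 1, device=scores.device, dtype=scores.dtype)
    support = sorted_scores * k > cumsum                         # entries that keep non-zero weight
    k_support = support.sum(dim=dim, keepdim=True).to(scores.dtype)
    tau = cumsum.gather(dim, k_support.long() - 1) / k_support   # threshold subtracted from scores
    return torch.clamp(scores - tau, min=0)

def user_interest(pos_news: torch.Tensor, neg_news: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
    """Toy fusion of positive and negative implicit-feedback features (fusion form assumed)."""
    pos_repr = sparsemax(pos_news @ query) @ pos_news            # irrelevant news get zero attention
    neg_repr = sparsemax(neg_news @ query) @ neg_news
    return torch.cat([pos_repr, pos_repr - neg_repr])
```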
With the widespread use of recommendation systems, researchers have gradually begun to focus on fairness issues, including two-sided fairness, alongside recommendation accuracy. However, existing works often treat the relationship between providers and recommended items as one-to-one or one-to-many, without considering that each item may have multiple related individuals on the provider side. In this paper, we implement a two-sided fairness-aware recommendation method based on fair allocation in a setting where the relationship between providers and items is many-to-many. Specifically, on the one hand, the attention of all consumers is treated as the total exposure available to providers and is fairly allocated to each provider according to a fairness criterion; on the other hand, each exposure of each item is allocated to its related providers according to the fairness criterion, and the sum of the exposure a provider obtains across all of its related items is taken as its final exposure. Experiments on a real-world dataset show that the proposed approach provides better two-sided fairness than the benchmark approach while maintaining good recommendation quality.
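As a toy illustration of the many-to-many allocation described above, the sketch below splits each item's exposure evenly among its related providers and sums per provider; an equal split is an assumption here, and the paper's actual fairness criterion may weight providers differently.

```python
from collections import defaultdict

def provider_exposure(item_exposure: dict, item_providers: dict) -> dict:
    """Aggregate provider exposure when each item can have several related providers."""
    exposure = defaultdict(float)
    for item, e in item_exposure.items():
        providers = item_providers[item]
        share = e / len(providers)            # equal share per related provider (assumption)
        for p in providers:
            exposure[p] += share              # final exposure = sum over all related items
    return dict(exposure)

# Example: item "i1" is provided jointly by "p1" and "p2"; item "i2" only by "p2".
# provider_exposure({"i1": 0.6, "i2": 0.4}, {"i1": ["p1", "p2"], "i2": ["p2"]})
# -> {"p1": 0.3, "p2": 0.7}
```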
In the mining and analysis of time series data, measuring the similarity of two time series is essential. Dynamic Time Warping (DTW) is a common algorithm for this purpose. Although DTW finds a globally optimal alignment, it may fail to produce locally reasonable matches, which limits its measurement quality. Shape Dynamic Time Warping (shapeDTW) improves the measurement by taking point-wise local structural information into account. To further improve on shapeDTW, this paper analyzes the definition of its shape descriptors and the common methods for constructing them. Building on the subsequences extracted around each time point, the shape-descriptor representation of subsequences is replaced directly: the DTW distance between subsequences is used as the distance value of each time point in the cumulative cost matrix when aligning the entire original time series. This method is called Subsequence Dynamic Time Warping (subDTW). The subDTW algorithm improves the measurement quality of shapeDTW to a certain extent. Experimental analysis on the training set of the approximation window size used for piecewise aggregation shows that a subDTW variant with cumulative-cost-matrix pruning keeps classification accuracy unchanged while achieving a significant reduction in time overhead.
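The following sketch illustrates the subDTW idea in its plain form: the point-wise distance inside DTW is replaced by the DTW distance between the local subsequences around each pair of time points. The subsequence width and the unpruned quadratic DTW are illustrative assumptions; the pruning mentioned above is not shown.

```python
import numpy as np

def dtw(a, b, dist):
    """Classic DTW over sequences a, b with a pluggable point-wise distance function."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def subsequence(x, i, width):
    """Local subsequence centered at time point i, truncated at the boundaries."""
    half = width // 2
    lo, hi = max(0, i - half), min(len(x), i + half + 1)
    return x[lo:hi]

def subdtw(x, y, width=5):
    """Sketch of subDTW: point-wise distances are DTW distances between local subsequences."""
    point_dist = lambda i, j: dtw(subsequence(x, i, width),
                                  subsequence(y, j, width),
                                  lambda u, v: abs(u - v))
    return dtw(list(range(len(x))), list(range(len(y))), point_dist)
```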