Paper
5 October 2021
Single task fine-tune BERT for text classification
Zhang Xinxi
Proceedings Volume 11911, 2nd International Conference on Computer Vision, Image, and Deep Learning; 119111Z (2021) https://doi.org/10.1117/12.2604768
Event: 2nd International Conference on Computer Vision, Image and Deep Learning, 2021, Liuzhou, China
Abstract
Over the past decades, natural language processing (NLP) has been a hot topic in many fields, e.g., sentiment analysis and news topic classification. As a powerful pre-trained language model, Bidirectional Encoder Representations from Transformers (BERT) has achieved promising results on many language understanding tasks, including text classification. However, efficiently fine-tuning BERT to adapt it to different text classification tasks remains a critical problem. In this paper, a general solution is proposed for fine-tuning BERT on a single text classification task. Compared with traditional fine-tuning strategies that omit any further pre-training step, the performance of BERT is boosted by pre-training with in-task data. Moreover, the proposed solution obtains superior results on six widely used text classification datasets.
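The recipe summarized in the abstract (further pre-training on unlabeled in-task text, followed by supervised fine-tuning for classification) can be illustrated with a short sketch. The code below is not the paper's released implementation; it is a minimal illustration using the Hugging Face transformers library, with placeholder texts, the bert-base-uncased checkpoint, a single training epoch, and default hyperparameters standing in for whatever the paper actually uses.

```python
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import (
    BertTokenizerFast,
    BertForMaskedLM,
    BertForSequenceClassification,
    DataCollatorForLanguageModeling,
)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")


class TextDataset(Dataset):
    """Tokenizes raw strings (and optional labels) into fixed-length examples."""

    def __init__(self, texts, labels=None, max_length=128):
        self.enc = tokenizer(texts, truncation=True,
                             padding="max_length", max_length=max_length)
        self.labels = labels

    def __len__(self):
        return len(self.enc["input_ids"])

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        if self.labels is not None:
            item["labels"] = torch.tensor(self.labels[i])
        return item


# Stage 1: continue pre-training on unlabeled in-task text with masked language modeling.
unlabeled_texts = ["an unlabeled in-task sentence ..."]        # placeholder corpus
mlm_model = BertForMaskedLM.from_pretrained("bert-base-uncased")
mlm_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                               mlm_probability=0.15)
mlm_loader = DataLoader(TextDataset(unlabeled_texts), batch_size=16,
                        shuffle=True, collate_fn=mlm_collator)
optimizer = torch.optim.AdamW(mlm_model.parameters(), lr=2e-5)
mlm_model.train()
for batch in mlm_loader:                                       # one epoch, for brevity
    loss = mlm_model(**batch).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
mlm_model.save_pretrained("bert-in-task")                      # save adapted encoder weights
tokenizer.save_pretrained("bert-in-task")

# Stage 2: fine-tune the in-task-adapted encoder on the labeled classification task.
# Loading a sequence-classification head from the MLM checkpoint reuses the encoder
# and initializes the classifier layer from scratch.
train_texts, train_labels = ["a labeled example ..."], [1]     # placeholder labeled data
clf = BertForSequenceClassification.from_pretrained("bert-in-task", num_labels=2)
clf_loader = DataLoader(TextDataset(train_texts, train_labels),
                        batch_size=16, shuffle=True)
optimizer = torch.optim.AdamW(clf.parameters(), lr=2e-5)
clf.train()
for batch in clf_loader:
    loss = clf(**batch).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In practice, the in-task corpus would be the unlabeled training texts of the target dataset, and the fine-tuning loop would run for several epochs with evaluation on a held-out split; those details are omitted here for brevity.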
© (2021) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Zhang Xinxi "Single task fine-tune BERT for text classification", Proc. SPIE 11911, 2nd International Conference on Computer Vision, Image, and Deep Learning, 119111Z (5 October 2021); https://doi.org/10.1117/12.2604768
KEYWORDS: Data modeling, Transformers, Neural networks