Zou, J., Xu, X., Hou, J., Yan, Q., & Zheng, H. Self-supervised Bidirectional Prompt Tuning for Entity-enhanced Pre-trained Language Model.
Abstract
With the widespread adoption of the pre-training paradigm, researchers are increasingly focused on injecting external knowledge, such as entities and triplets from knowledge graphs, into pre-trained language models (PTMs) to improve their understanding and logical reasoning abilities. Such knowledge injection yields significant improvements on natural language understanding and generation tasks and offers a degree of interpretability. In this paper, we propose a novel two-stage entity knowledge enhancement pipeline for Chinese pre-trained models based on “bidirectional” prompt tuning. The pipeline consists of a “forward” stage, in which we construct fine-grained entity-type prompt templates to boost PTMs injected with entity knowledge, and a “backward” stage, in which the trained templates are used to generate type-constrained, context-dependent negative samples for contrastive learning. Experiments on six classification tasks from the Chinese Language Understanding Evaluation (CLUE) benchmark demonstrate that our approach significantly improves upon the baselines on most datasets, particularly those that rely heavily on diverse and extensive knowledge.
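To make the two-stage idea concrete, the following is a minimal, self-contained sketch of what a “forward” typed prompt and a “backward” type-constrained negative sample might look like. It is not the authors' implementation: the template wording, the toy entity lexicon, the hash-based stand-in encoder, and the InfoNCE-style loss are all assumptions introduced here purely for illustration.

```python
# Illustrative sketch only, NOT the paper's implementation. All names, the
# template wording, the toy lexicon, and the loss formulation are assumptions.
import random
import torch
import torch.nn.functional as F

# "Forward" stage (assumed form): a fine-grained entity-type prompt template
# that wraps a sentence and states the entity's type.
def apply_prompt(sentence: str, entity: str, entity_type: str) -> str:
    return f"{sentence} 其中 {entity} 是一个 {entity_type} 实体。"

# Tiny toy lexicon of typed entities (assumption, for illustration only).
LEXICON = {
    "人物": ["鲁迅", "李白"],
    "地点": ["北京", "杭州"],
    "机构": ["清华大学", "中国科学院"],
}

# "Backward" stage (assumed form): build a type-constrained, context-dependent
# negative by swapping the gold entity for one of a different type while
# keeping the surrounding context and the template fixed.
def build_negative(sentence: str, entity: str, gold_type: str) -> str:
    wrong_type = random.choice([t for t in LEXICON if t != gold_type])
    wrong_entity = random.choice(LEXICON[wrong_type])
    return apply_prompt(sentence.replace(entity, wrong_entity), wrong_entity, gold_type)

def toy_encode(text: str, dim: int = 32) -> torch.Tensor:
    # Stand-in for a PLM encoder: a deterministic hash-seeded embedding so the
    # sketch runs without downloading a model (assumption).
    g = torch.Generator().manual_seed(hash(text) % (2**31))
    return F.normalize(torch.randn(dim, generator=g), dim=0)

def contrastive_loss(anchor: str, positive: str, negative: str, tau: float = 0.1) -> torch.Tensor:
    # InfoNCE-style objective: pull the prompted positive toward the anchor
    # sentence, push the type-violating negative away.
    a, p, n = toy_encode(anchor), toy_encode(positive), toy_encode(negative)
    logits = torch.stack([a @ p, a @ n]) / tau
    return F.cross_entropy(logits.unsqueeze(0), torch.tensor([0]))

if __name__ == "__main__":
    sent, ent, etype = "鲁迅写了《呐喊》。", "鲁迅", "人物"
    pos = apply_prompt(sent, ent, etype)    # forward: typed prompt
    neg = build_negative(sent, ent, etype)  # backward: typed negative sample
    print(pos)
    print(neg)
    print("loss:", contrastive_loss(sent, pos, neg).item())
```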