WIKINDX Resources

Zou, J., Xu, X., Hou, J., Yan, Q., & Zheng, H. Self-supervised Bidirectional Prompt Tuning for Entity-enhanced Pre-trained Language Model. 
Resource type: Journal Article
BibTeX citation key: anon.199
Categories: General
Creators: Hou, Xu, Yan, Zheng, Zou
URLs: https://www.semant ... f4bcbb87745bb2fefd
Abstract
With the rise of the pre-training paradigm, researchers are increasingly focusing on injecting external knowledge, such as entities and triplets from knowledge graphs, into pre-trained language models (PTMs) to improve their understanding and logical reasoning abilities. This yields significant improvements on natural language understanding and generation tasks, along with some degree of interpretability. In this paper, we propose a novel two-stage entity knowledge enhancement pipeline for Chinese pre-trained models based on “bidirectional” prompt tuning. The pipeline consists of a “forward” stage, in which we construct fine-grained entity type prompt templates to boost PTMs injected with entity knowledge, and a “backward” stage, in which the trained templates are used to generate type-constrained, context-dependent negative samples for contrastive learning. Experiments on six classification tasks in the Chinese Language Understanding Evaluation (CLUE) benchmark demonstrate that our approach significantly improves upon the baseline results on most datasets, particularly those with a strong reliance on diverse and extensive knowledge.
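
Note: the following Python snippet is only a minimal illustrative sketch of the forward/backward idea summarized in the abstract, not the authors' implementation. The entity-type lexicon, the prompt template format, and the function names (build_type_prompt, sample_negative_entity, make_contrastive_pair) are all hypothetical placeholders; the paper's templates are learned and its negatives are generated by the trained model rather than by lexicon lookup.

from dataclasses import dataclass
import random

# Toy entity-to-fine-grained-type lexicon (assumption for illustration only).
ENTITY_TYPES = {
    "北京": "LOCATION.CITY",
    "上海": "LOCATION.CITY",
    "华为": "ORGANIZATION.COMPANY",
    "腾讯": "ORGANIZATION.COMPANY",
}

@dataclass
class ContrastivePair:
    anchor: str    # original sentence wrapped with its true entity and type
    negative: str  # same context, entity swapped for a same-type distractor

def build_type_prompt(sentence: str, entity: str) -> str:
    # "Forward" stage (sketch): wrap the input with a fine-grained entity-type template.
    etype = ENTITY_TYPES.get(entity, "UNKNOWN")
    return f"[{etype}] {entity} [/{etype}] {sentence}"

def sample_negative_entity(entity: str) -> str:
    # "Backward" stage helper (sketch): choose a distractor sharing the same type,
    # so the resulting negative sample is type-constrained and context-dependent.
    etype = ENTITY_TYPES.get(entity)
    candidates = [e for e, t in ENTITY_TYPES.items() if t == etype and e != entity]
    return random.choice(candidates) if candidates else entity

def make_contrastive_pair(sentence: str, entity: str) -> ContrastivePair:
    neg = sample_negative_entity(entity)
    return ContrastivePair(
        anchor=build_type_prompt(sentence, entity),
        negative=build_type_prompt(sentence.replace(entity, neg), neg),
    )

pair = make_contrastive_pair("华为的总部位于深圳。", "华为")
print(pair.anchor)
print(pair.negative)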