Alajlouni, A. B., & Li, J. Knowledge Transfer to Solve Split and Rephrase.
Abstract
The use of large pre-trained language models such as BERT and GPT has had a transformative impact on many natural language processing (NLP) tasks in recent years. However, a significant challenge in many NLP tasks is the scarcity of the high-quality datasets required to fine-tune these models for specific tasks. In this study, we introduce a framework designed to address dataset insufficiency in the Split-and-Rephrase (SR) task. We achieve this by leveraging the knowledge embedded in a rule-based model and employing it to supervise the fine-tuning of large pre-trained language models, enabling them to perform the SR task effectively without relying on labeled data. Our knowledge-transfer framework holds promise as a solution for other NLP tasks involving rule-based models.