WIKINDX Resources

Guo, M. Multi-task Learning and Fine-tuning with BERT. Stanford CS224N Default Project.
Resource type: Journal Article
BibTeX citation key: anon.46
Categories: General
Creators: Guo
URLs: https://www.semant ... tm_medium=33014503
Abstract
With the advent of BERT, the capacity of NLP models to understand human language has improved significantly, yet challenges remain in fine-tuning these models for specific tasks without falling victim to issues such as catastrophic forgetting. This research contributes to the ongoing exploration of how large pre-trained language models like BERT can be effectively fine-tuned and adapted to a range of NLP tasks, offering insights into multi-task learning, data-sharing strategies, and the optimization of sentence representation methods for improved model performance. My project investigates several extensions that maximize the utility of BERT on sentiment classification, paraphrase detection, and semantic textual similarity. Notably, round-robin multi-task learning, cosine-similarity fine-tuning, a shared relational layer for similar tasks, and an appropriate pooling method enhance BERT's performance when combined.
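
As an illustration of the techniques named in the abstract, the following is a minimal Python sketch (not the author's code) of round-robin multi-task fine-tuning of a shared BERT encoder, using mean pooling over token embeddings and cosine-similarity scoring for semantic textual similarity. It assumes PyTorch and the Hugging Face transformers library; the class, head names, and dataloader keys are assumptions, and the shared relational layer mentioned in the abstract is omitted.

import itertools
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import BertModel

class MultiTaskBert(nn.Module):
    # Hypothetical model: one shared BERT encoder, one small head per task.
    def __init__(self, name="bert-base-uncased", num_sentiment_classes=5):
        super().__init__()
        self.bert = BertModel.from_pretrained(name)          # shared encoder
        hidden = self.bert.config.hidden_size
        self.sentiment_head = nn.Linear(hidden, num_sentiment_classes)
        self.paraphrase_head = nn.Linear(hidden * 2, 1)      # pair classifier
        # STS scores come from cosine similarity of pooled embeddings, no extra head.

    def pool(self, input_ids, attention_mask):
        # Mean pooling over non-padding tokens (one of the pooling choices
        # the abstract alludes to; [CLS] pooling is the common alternative).
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        hidden = out.last_hidden_state                        # (B, T, H)
        mask = attention_mask.unsqueeze(-1).float()
        return (hidden * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

    def sentiment_logits(self, ids, mask):
        return self.sentiment_head(self.pool(ids, mask))

    def paraphrase_logit(self, ids1, mask1, ids2, mask2):
        e1, e2 = self.pool(ids1, mask1), self.pool(ids2, mask2)
        return self.paraphrase_head(torch.cat([e1, e2], dim=-1)).squeeze(-1)

    def similarity(self, ids1, mask1, ids2, mask2):
        # Cosine-similarity fine-tuning: the STS prediction is the cosine
        # of the two pooled sentence embeddings.
        e1, e2 = self.pool(ids1, mask1), self.pool(ids2, mask2)
        return F.cosine_similarity(e1, e2)

def train_round_robin(model, loaders, steps=1000, lr=1e-5):
    # Round-robin multi-task learning: alternate one batch per task so that
    # no single dataset dominates updates to the shared encoder.
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    iters = {name: iter(itertools.cycle(dl)) for name, dl in loaders.items()}
    order = itertools.cycle(loaders.keys())
    for _ in range(steps):
        task = next(order)
        batch = next(iters[task])
        if task == "sentiment":
            loss = F.cross_entropy(
                model.sentiment_logits(batch["ids"], batch["mask"]),
                batch["label"])
        elif task == "paraphrase":
            loss = F.binary_cross_entropy_with_logits(
                model.paraphrase_logit(batch["ids1"], batch["mask1"],
                                       batch["ids2"], batch["mask2"]),
                batch["label"].float())
        else:  # "sts": gold scores assumed rescaled to the cosine range [-1, 1]
            pred = model.similarity(batch["ids1"], batch["mask1"],
                                    batch["ids2"], batch["mask2"])
            loss = F.mse_loss(pred, batch["score"])
        opt.zero_grad()
        loss.backward()
        opt.step()

Alternating one batch per task keeps the gradient signal from the three datasets roughly balanced, which is the usual motivation for round-robin sampling over proportional sampling when dataset sizes differ widely.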
  
Notes
[Online; accessed 25. May 2024]
  