WIKINDX Resources  

Lozano, A., & Bravo, L. Exploring Multi-Task Learning for Robust Language Encoding with BERT. Stanford CS224N Default Project.
Resource type: Journal Article
BibTeX citation key: anon.38
Categories: General
Creators: Bravo, Lozano
URLs: https://www.semant ... 48b36564e689415411
Abstract
Transformer-based Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP). By analyzing large amounts of text, LLMs learn relationships between words and phrases, as well as their context, yielding a more nuanced understanding of language. LLMs are transferable: they can be pre-trained on large datasets and later fine-tuned on smaller, task-specific downstream datasets. However, fine-tuning can lead to catastrophic forgetting, where previously learned information is lost. In this work we propose a BERT-based architecture that promotes representation generalization by training on multiple tasks: Sentiment Analysis (SA), Paraphrase Detection (PD), and Semantic Textual Similarity (STS). Our experiments suggest that, even when accounting for task interference, a Multi-task Learning (MTL) framework is only effective when it can leverage related tasks.
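
The abstract describes a single BERT encoder shared across three task-specific heads (SA, PD, STS). The sketch below is a minimal, hypothetical illustration of such a multi-task setup using PyTorch and Hugging Face Transformers; the class name, head layout, and model checkpoint are assumptions for illustration, not the authors' implementation.

    # Hypothetical sketch of a shared-encoder multi-task BERT model:
    # one encoder, three heads (SA classification, PD binary logit, STS score).
    import torch
    import torch.nn as nn
    from transformers import BertModel

    class MultiTaskBert(nn.Module):
        def __init__(self, model_name: str = "bert-base-uncased",
                     num_sentiment_classes: int = 5):
            super().__init__()
            self.bert = BertModel.from_pretrained(model_name)  # shared encoder
            hidden = self.bert.config.hidden_size
            # Task-specific heads on top of the shared [CLS] representation.
            self.sa_head = nn.Linear(hidden, num_sentiment_classes)  # Sentiment Analysis
            self.pd_head = nn.Linear(hidden * 2, 1)                  # Paraphrase Detection (sentence pair)
            self.sts_head = nn.Linear(hidden * 2, 1)                 # Semantic Textual Similarity (sentence pair)

        def encode(self, input_ids, attention_mask):
            out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
            return out.last_hidden_state[:, 0]  # [CLS] embedding

        def forward_sa(self, input_ids, attention_mask):
            return self.sa_head(self.encode(input_ids, attention_mask))

        def forward_pd(self, ids_1, mask_1, ids_2, mask_2):
            pair = torch.cat([self.encode(ids_1, mask_1),
                              self.encode(ids_2, mask_2)], dim=-1)
            return self.pd_head(pair)

        def forward_sts(self, ids_1, mask_1, ids_2, mask_2):
            pair = torch.cat([self.encode(ids_1, mask_1),
                              self.encode(ids_2, mask_2)], dim=-1)
            return self.sts_head(pair)

In a multi-task training loop of this kind, batches from the three tasks are typically interleaved and each head's loss backpropagates through the shared encoder, which is where the task interference discussed in the abstract can arise.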
  
Notes
[Online; accessed 1. Jun. 2024]
  