Swathi, K., & Reddy, Dr. K. V. Solving Sentiment Analysis Using Transformer-Based BERT.
Abstract
The most prevalent sequence transduction models use neural networks built from encoders and decoders; the strongest of these also include an attention mechanism connecting the encoder and decoder. BERT (Bidirectional Encoder Representations from Transformers) is an innovative language representation paradigm built on this architecture. Because it conditions on both the left and right context of the text at every layer, BERT can pre-train deep bidirectional representations from unlabeled text. The pre-trained BERT model can then be fine-tuned with just one extra output layer to create state-of-the-art models for a wide variety of tasks, such as question answering and natural language inference. It is easy to use and backed by a large body of results from the research community.
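To make the abstract's central claim concrete, here is a minimal sketch of sentiment classification with pre-trained BERT plus a single output layer, using the Hugging Face transformers library. The checkpoint name, the binary label count, and the example sentences are illustrative assumptions, not details taken from the paper.

import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Load the pre-trained bidirectional encoder and its tokenizer.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# num_labels=2 attaches the one extra output (classification) layer on top
# of BERT; its weights are freshly initialized and are what fine-tuning on
# labeled sentiment data would train.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
model.eval()

sentences = ["The movie was wonderful.", "The service was terrible."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits   # shape: (batch_size, num_labels)

predictions = logits.argmax(dim=-1)   # predicted class index per sentence
print(predictions.tolist())

Before fine-tuning, the classification head's outputs are arbitrary; training on a labeled sentiment corpus (for example, with a standard PyTorch loop or the library's Trainer) adapts the head and, typically, the encoder as well.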
Notes: [Online; accessed 25 May 2024]