WIKINDX Resources

Mohr, I., Krimmel, M., Sturua, S., Akram, M. K., Koukounas, A., Günther, M., et al. Multi-Task Contrastive Learning for 8192-Token Bilingual Text Embeddings.
Resource type: Journal Article
BibTeX citation key: anon.119
Categories: General
Creators: Akram, Fu, Günther, Guzman, Koukounas, Krimmel, Liu, Martínez, Mastrapas, Mohr, Ognawala, Ravishankar, Sturua, Wang, Wang, Wang, Werk, Xiao, Yu
URLs: https://www.semant ... tm_medium=30248492
Abstract
We introduce a novel suite of state-of-the-art bilingual text embedding models designed to support English and one other target language. These models can process lengthy text inputs of up to 8192 tokens, making them highly versatile for a range of natural language processing tasks such as text retrieval, clustering, and semantic textual similarity (STS) calculations. By focusing on bilingual models and introducing a unique multi-task learning objective, we significantly improve performance on STS tasks, outperforming existing multilingual models in both target-language understanding and cross-lingual evaluation tasks. Moreover, our bilingual models are more efficient, requiring fewer parameters and less memory because of their smaller vocabularies. Furthermore, we have expanded the Massive Text Embedding Benchmark (MTEB) to include benchmarks for German and Spanish embedding models. This integration aims to stimulate further research and advancement in text embedding technologies for these languages.
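A minimal sketch of the kind of cross-lingual STS usage the abstract describes follows, assuming the sentence-transformers library; the checkpoint name jinaai/jina-embeddings-v2-base-de is used purely as an illustrative identifier, since the record does not name a released model.

    # Minimal sketch: cross-lingual STS with a bilingual embedding model.
    # The checkpoint name below is an assumption for illustration; the
    # abstract itself does not name a released model.
    from sentence_transformers import SentenceTransformer
    from sentence_transformers.util import cos_sim

    # trust_remote_code is assumed to be required, since long-context
    # (8192-token) embedding models often ship custom modeling code.
    model = SentenceTransformer("jinaai/jina-embeddings-v2-base-de",
                                trust_remote_code=True)

    sentences = [
        "How is the weather today?",     # English
        "Wie ist das Wetter heute?",     # German paraphrase of the above
        "I would like a cup of coffee.", # unrelated English sentence
    ]
    embeddings = model.encode(sentences)

    # The cross-lingual paraphrase pair should score markedly higher
    # than the unrelated pair.
    print(cos_sim(embeddings[0], embeddings[1]))
    print(cos_sim(embeddings[0], embeddings[2]))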
  
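The German and Spanish MTEB extension mentioned in the abstract can be exercised with the open-source mteb package. The sketch below restricts an evaluation run to German tasks; the task-selection interface shown is the classic one and varies across mteb versions, so treat this as an assumption-laden outline rather than the paper's own evaluation script.

    # Hedged sketch: evaluating an embedding model on German MTEB tasks.
    # The classic MTEB(task_langs=...) interface is assumed; newer mteb
    # versions expose mteb.get_tasks(languages=[...]) instead.
    from mteb import MTEB
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("jinaai/jina-embeddings-v2-base-de",
                                trust_remote_code=True)

    # Restrict the benchmark run to German-language tasks and write
    # per-task scores to the given folder.
    evaluation = MTEB(task_langs=["de"])
    evaluation.run(model, output_folder="results/bilingual-de")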