WIKINDX Resources

Cao, Q., Min, S., Wang, Y., & Hajishirzi, H. BTR: Binary Token Representations for Efficient Retrieval Augmented Language Models. 
Resource type: Journal Article
BibTeX citation key: anont
Categories: General
Creators: Cao, Hajishirzi, Min, Wang
URLs: https://www.semant ... f23bddd07251adaccc
Abstract
Retrieval augmentation addresses many critical problems in large language models, such as hallucination, staleness, and privacy leaks. However, running retrieval-augmented language models (LMs) is slow and difficult to scale due to the large amounts of retrieved text they must process. We introduce binary token representations (BTR), which use 1-bit vectors to precompute every token in passages, significantly reducing computation during inference. Despite the potential loss of accuracy, our new calibration techniques and training objectives restore performance. Combined with offline and runtime compression, encoding the 3 billion tokens in Wikipedia requires only 127GB of disk space. Our experiments show that on five knowledge-intensive NLP tasks, BTR accelerates state-of-the-art inference by up to 4x and reduces storage by over 100x while maintaining over 95% task performance.
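
As a rough sketch of the 1-bit precomputation described above, the snippet below binarizes precomputed token embeddings with a sign threshold and packs 8 bits per byte, cutting storage roughly 32x versus float32 before any further compression. The function names, toy dimensions, and zero-threshold choice are illustrative assumptions; BTR's calibration techniques and training objectives are not modeled here.

import numpy as np

def binarize_tokens(token_embeddings: np.ndarray) -> np.ndarray:
    # Threshold each dimension at zero (sign binarization; an assumption
    # standing in for BTR's learned calibration), then pack 8 bits/byte.
    bits = (token_embeddings > 0).astype(np.uint8)
    return np.packbits(bits, axis=1)  # shape: (num_tokens, dim // 8)

def unpack_to_signs(packed: np.ndarray, dim: int) -> np.ndarray:
    # Recover {-1, +1} float vectors for downstream layers that expect floats.
    bits = np.unpackbits(packed, axis=1)[:, :dim]
    return bits.astype(np.float32) * 2.0 - 1.0

# Toy usage: 1,000 precomputed passage tokens with 768-dim embeddings (assumed).
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((1000, 768)).astype(np.float32)
codes = binarize_tokens(embeddings)
print(embeddings.nbytes, "->", codes.nbytes)  # 3072000 -> 96000 bytes (32x)
approx = unpack_to_signs(codes, embeddings.shape[1])

Packed codes of this kind can be stored offline and unpacked, or consumed directly via bitwise operations, at inference time instead of re-encoding passages.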
  