WIKINDX Resources

Williams, M., & Aletras, N. Frustratingly simple memory efficiency for pre-trained language models via dynamic embedding pruning. 
Resource type: Journal Article
BibTeX citation key: anon.179
Categories: General
Creators: Aletras, Williams
URLs: https://www.semant ... 6af3519fa09a9f8580
Abstract
The extensive memory footprint of pre-trained language models (PLMs) can hinder deployment in memory-constrained settings, such as cloud environments or on-device applications. PLMs use embedding matrices to represent extensive vocabularies, which form a large proportion of the model parameters. While previous work on parameter-efficient PLM development has considered pruning parameters within the transformer layers, pruning the embedding matrix as part of fine-tuning or inference has yet to be explored. We first demonstrate that a significant proportion of the vocabulary remains unused in these scenarios. We then propose a simple yet effective approach that leverages this finding to minimize the memory footprint of the embedding matrix. We show that this approach provides substantial reductions in memory usage across a wide range of models and tasks. Notably, our approach maintains equivalent downstream task performance while allowing a more efficient use of compute resources.
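
The idea in the abstract can be illustrated with a short sketch. The Python/PyTorch code below is a hypothetical example, not the authors' released implementation: it assumes a Hugging Face-style model exposing get_input_embeddings()/set_input_embeddings(), and the helper names collect_used_ids and prune_embeddings are illustrative only. It keeps only the embedding rows for token IDs actually observed in a task corpus and returns a lookup table for remapping inputs.

import torch
import torch.nn as nn

def collect_used_ids(tokenized_corpus, vocab_size):
    """Scan tokenized task data and record which token IDs ever occur."""
    used = torch.zeros(vocab_size, dtype=torch.bool)
    for ids in tokenized_corpus:                     # each item: a sequence of token IDs
        used[torch.as_tensor(ids)] = True
    return torch.nonzero(used, as_tuple=True)[0]     # sorted tensor of used IDs

def prune_embeddings(model, used_ids):
    """Keep only the embedding rows for used_ids; return an old-ID -> new-ID map."""
    old_emb = model.get_input_embeddings()           # nn.Embedding(vocab_size, dim)
    new_emb = nn.Embedding(len(used_ids), old_emb.embedding_dim)
    new_emb.weight.data.copy_(old_emb.weight.data[used_ids])
    model.set_input_embeddings(new_emb)

    # Lookup table mapping original token IDs to compact IDs (-1 = pruned).
    id_map = torch.full((old_emb.num_embeddings,), -1, dtype=torch.long)
    id_map[used_ids] = torch.arange(len(used_ids))
    return id_map

At fine-tuning or inference time, input IDs would be remapped as id_map[input_ids] before the forward pass. The memory saving scales with the fraction of the vocabulary that goes unused, which the abstract reports to be significant; models with tied input/output embeddings would need the output projection reduced in the same way.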
  