Parameter-efficient tuning with adaptive bottlenecks for automatic speech recognition

Abstract
Transfer learning from large multilingual pretrained models, like XLSR, has become the new paradigm for Automatic Speech Recognition (ASR). Considering their ever-increasing size, fine-tuning all the weights has become impractical when the computing budget is limited. Adapters are lightweight trainable modules inserted between layers while the pre-trained part is kept frozen. They form a parameter-efficient fine-tuning method, but they still require a large bottleneck size to match standard fine-tuning performance. In this paper, we propose ABSADAPTER, a method to further reduce the parameter budget for equal task performance. Specifically, ABSADAPTER uses an Adaptive Bottleneck Scheduler to redistribute the adapter’s weights to the layers that need adaptation the most. By training only 8% of the XLSR model, ABSADAPTER achieves close to standard fine-tuning performance on a domain-shifted Air-Traffic Communication (ATC) ASR task.
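
To make the adapter terminology concrete: a bottleneck adapter is a small residual branch (down-projection, nonlinearity, up-projection) inserted after a frozen pretrained layer, so only the branch's weights are trained. The PyTorch sketch below illustrates this structure with a per-layer bottleneck width; the class name, the hidden size (1024, as in XLSR-53), and the width values are illustrative assumptions, not ABSADAPTER's published implementation or its scheduler's actual output.

import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Residual bottleneck adapter: down-project, nonlinearity, up-project.

    Inserted after a frozen pretrained layer; only these weights are trained.
    """
    def __init__(self, hidden_dim: int, bottleneck_dim: int):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual branch: the frozen layer's output passes through unchanged.
        return x + self.up(self.act(self.down(x)))

# Hypothetical per-layer budget: an adaptive scheduler would assign wider
# bottlenecks to the layers that need the most adaptation while keeping the
# total trainable-parameter count under a fixed budget.
hidden_dim = 1024                                        # XLSR-53 hidden size
bottleneck_dims = [32, 32, 64, 128, 256, 128, 64, 32]    # illustrative values
adapters = nn.ModuleList(BottleneckAdapter(hidden_dim, b) for b in bottleneck_dims)

In training, only the adapters' parameters would be handed to the optimizer; the backbone's parameters stay frozen (requires_grad = False), which is what keeps the trainable fraction small.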

Downloads

  • (...).pdf: full text (Published version) | UGent only | PDF | 372.00 KB
Citation

MLA
Vanderreydt, Geoffroy, et al. “Parameter-Efficient Tuning with Adaptive Bottlenecks for Automatic Speech Recognition.” 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), IEEE, 2023, pp. 1–7, doi:10.1109/asru57964.2023.10389769.
APA
Vanderreydt, G., Prasad, A., Khalil, D., Madikeri, S., Demuynck, K., & Motlicek, P. (2023). Parameter-efficient tuning with adaptive bottlenecks for automatic speech recognition. 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 1–7. https://doi.org/10.1109/asru57964.2023.10389769
Chicago author-date
Vanderreydt, Geoffroy, Amrutha Prasad, Driss Khalil, Srikanth Madikeri, Kris Demuynck, and Petr Motlicek. 2023. “Parameter-Efficient Tuning with Adaptive Bottlenecks for Automatic Speech Recognition.” In 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 1–7. IEEE. https://doi.org/10.1109/asru57964.2023.10389769.
Chicago author-date (all authors)
Vanderreydt, Geoffroy, Amrutha Prasad, Driss Khalil, Srikanth Madikeri, Kris Demuynck, and Petr Motlicek. 2023. “Parameter-Efficient Tuning with Adaptive Bottlenecks for Automatic Speech Recognition.” In 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 1–7. IEEE. doi:10.1109/asru57964.2023.10389769.
Vancouver
1. Vanderreydt G, Prasad A, Khalil D, Madikeri S, Demuynck K, Motlicek P. Parameter-efficient tuning with adaptive bottlenecks for automatic speech recognition. In: 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). IEEE; 2023. p. 1–7.
IEEE
[1] G. Vanderreydt, A. Prasad, D. Khalil, S. Madikeri, K. Demuynck, and P. Motlicek, “Parameter-efficient tuning with adaptive bottlenecks for automatic speech recognition,” in 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), Taipei, Taiwan, 2023, pp. 1–7.
@inproceedings{01JJ1GGD71641TF7AC802SQC5A,
  abstract     = {{Transfer learning from large multilingual pretrained models, like XLSR, has become the new paradigm for Automatic Speech Recognition (ASR). Considering their ever-increasing size, fine-tuning all the weights has become impractical when the computing budget is limited. Adapters are lightweight trainable modules inserted between layers while the pre-trained part is kept frozen. They form a parameter-efficient fine-tuning method, but they still require a large bottleneck size to match standard fine-tuning performance. In this paper, we propose ABSADAPTER, a method to further reduce the parameter budget for equal task performance. Specifically, ABSADAPTER uses an Adaptive Bottleneck Scheduler to redistribute the adapter’s weights to the layers that need adaptation the most. By training only 8% of the XLSR model, ABSADAPTER achieves close to standard fine-tuning performance on a domain-shifted Air-Traffic Communication (ATC) ASR task.}},
  author       = {{Vanderreydt, Geoffroy and Prasad, Amrutha and Khalil, Driss and Madikeri, Srikanth and Demuynck, Kris and Motlicek, Petr}},
  booktitle    = {{2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)}},
  isbn         = {{9798350306897}},
  language     = {{eng}},
  location     = {{Taipei, Taiwan}},
  pages        = {{1--7}},
  publisher    = {{IEEE}},
  title        = {{Parameter-efficient tuning with adaptive bottlenecks for automatic speech recognition}},
  url          = {{https://doi.org/10.1109/asru57964.2023.10389769}},
  year         = {{2023}},
}
