
Character-level recurrent neural networks in practice : comparing training and sampling schemes

Cedric De Boom (UGent), Thomas Demeester (UGent) and Bart Dhoedt (UGent)
(2019) NEURAL COMPUTING & APPLICATIONS. 31(8). p.4001-4017
Abstract
Recurrent neural networks are nowadays successfully used in an abundance of applications, going from text, speech and image processing to recommender systems. Backpropagation through time is the algorithm that is commonly used to train these networks on specific tasks. Many deep learning frameworks have their own implementation of training and sampling procedures for recurrent neural networks, while there are in fact multiple other possibilities to choose from and other parameters to tune. In the existing literature, this is very often overlooked or ignored. In this paper, we therefore give an overview of possible training and sampling schemes for character-level recurrent neural networks to solve the task of predicting the next token in a given sequence. We test these different schemes on a variety of datasets, neural network architectures and parameter settings, and formulate a number of take-home recommendations. The choice of training and sampling scheme turns out to be subject to a number of trade-offs, such as training stability, sampling time, model performance and implementation effort, but is largely independent of the data. Perhaps the most surprising result is that transferring hidden states for correctly initializing the model on subsequences often leads to unstable training behavior depending on the dataset.
Keywords
Recurrent neural networks, Deep learning, Backpropagation through time, Optimization, Performance
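
The abstract compares schemes for training character-level recurrent networks on next-token prediction, including whether the hidden state is carried over between consecutive training subsequences. As a rough illustration of that idea, and not the authors' exact setup, the sketch below trains a small GRU with truncated backpropagation through time on a toy corpus in PyTorch; the toy text, the transfer_hidden flag and all hyperparameters are illustrative assumptions.

# Minimal sketch (assumptions: PyTorch, a toy corpus, arbitrary hyperparameters).
# Character-level next-token prediction with truncated backpropagation through time;
# the transfer_hidden flag toggles the hidden-state transfer between subsequences
# that the abstract reports can destabilize training on some datasets.
import torch
import torch.nn as nn

text = "hello world, hello recurrent networks. " * 50   # toy corpus (assumption)
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text], dtype=torch.long)

vocab_size, hidden_size, seq_len = len(chars), 64, 32

class CharRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, h=None):
        out, h = self.rnn(self.embed(x), h)
        return self.head(out), h

model = CharRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

transfer_hidden = True   # set to False to reset the state at each subsequence instead
h = None
for start in range(0, len(data) - seq_len - 1, seq_len):
    x = data[start:start + seq_len].unsqueeze(0)            # input characters (1, seq_len)
    y = data[start + 1:start + seq_len + 1].unsqueeze(0)     # next-character targets
    logits, h_new = model(x, h)
    loss = criterion(logits.reshape(-1, vocab_size), y.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Detach so gradients never flow across subsequence boundaries (truncated BPTT);
    # carrying h over initializes the next subsequence with the previous context.
    h = h_new.detach() if transfer_hidden else None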

Downloads

  • (...).pdf: full text (Published version) | UGent only | PDF | 2.99 MB
  • De Boom C accepted version.pdf: full text (Accepted manuscript) | open access | PDF | 473.73 KB

Citation

Please use this URL to cite or link to this publication:

MLA
De Boom, Cedric, et al. “Character-Level Recurrent Neural Networks in Practice : Comparing Training and Sampling Schemes.” NEURAL COMPUTING & APPLICATIONS, vol. 31, no. 8, 2019, pp. 4001–17.
APA
De Boom, C., Demeester, T., & Dhoedt, B. (2019). Character-level recurrent neural networks in practice : comparing training and sampling schemes. NEURAL COMPUTING & APPLICATIONS, 31(8), 4001–4017.
Chicago author-date
De Boom, Cedric, Thomas Demeester, and Bart Dhoedt. 2019. “Character-Level Recurrent Neural Networks in Practice : Comparing Training and Sampling Schemes.” NEURAL COMPUTING & APPLICATIONS 31 (8): 4001–17.
Vancouver
1. De Boom C, Demeester T, Dhoedt B. Character-level recurrent neural networks in practice : comparing training and sampling schemes. NEURAL COMPUTING & APPLICATIONS. 2019;31(8):4001–17.
IEEE
[1] C. De Boom, T. Demeester, and B. Dhoedt, “Character-level recurrent neural networks in practice : comparing training and sampling schemes,” NEURAL COMPUTING & APPLICATIONS, vol. 31, no. 8, pp. 4001–4017, 2019.
@article{8566067,
  abstract     = {Recurrent neural networks are nowadays successfully used in an abundance of applications, going from text, speech and image processing to recommender systems. Backpropagation through time is the algorithm that is commonly used to train these networks on specific tasks. Many deep learning frameworks have their own implementation of training and sampling procedures for recurrent neural networks, while there are in fact multiple other possibilities to choose from and other parameters to tune. In the existing literature, this is very often overlooked or ignored. In this paper, we therefore give an overview of possible training and sampling schemes for character-level recurrent neural networks to solve the task of predicting the next token in a given sequence. We test these different schemes on a variety of datasets, neural network architectures and parameter settings, and formulate a number of take-home recommendations. The choice of training and sampling scheme turns out to be subject to a number of trade-offs, such as training stability, sampling time, model performance and implementation effort, but is largely independent of the data. Perhaps the most surprising result is that transferring hidden states for correctly initializing the model on subsequences often leads to unstable training behavior depending on the dataset.},
  author       = {De Boom, Cedric and Demeester, Thomas and Dhoedt, Bart},
  issn         = {0941-0643},
  journal      = {NEURAL COMPUTING & APPLICATIONS},
  keywords     = {Recurrent neural networks,Deep learning,Backpropagation through time,Optimization,Performance},
  language     = {eng},
  number       = {8},
  pages        = {4001--4017},
  title        = {Character-level recurrent neural networks in practice : comparing training and sampling schemes},
  url          = {http://dx.doi.org/10.1007/s00521-017-3322-z},
  volume       = {31},
  year         = {2019},
}
