
Integrated offline reinforcement learning for optimal power flow management in an electric dual-drive vehicle

Arne De Keyser (UGent) and Guillaume Crevecoeur (UGent)
Abstract
The need for frequent charging is perceived as a common inconvenience of all-electric vehicles. A reinforcement learning-based strategy is therefore introduced to optimally exploit the capabilities of an electric dual-drive vehicle. Simplified subsystem models are defined to describe the inherent loss mechanisms, allowing the problem to be reformulated as an interconnection of power flows. The optimal power flow management is consequently cast into a model-predictive structure. Inherent stochasticity is handled by introducing an appropriate Markov transition model, and a value function approximation is constructed in an offline phase to account for the long-term impact of current control decisions. Relevant case studies show that the proposed method limits deviations from global optimality to less than 1% while only marginally compromising computational efficiency. Reinforcement learning may thus contribute to further range extensions for electric vehicles.
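The abstract describes a receding-horizon controller whose terminal cost is a value function approximation fitted offline. A minimal sketch of that general structure is given below; the step cost, battery model, and quadratic value-function basis are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def v_hat(soc, weights):
    """Offline-fitted value function approximation: a simple quadratic
    in the battery state of charge (illustrative basis choice)."""
    features = np.array([1.0, soc, soc**2])
    return float(weights @ features)

def step(soc, power_split, demand):
    """Toy battery dynamics: routing more power through the lossier
    path drains the state of charge faster (illustrative loss model)."""
    loss = 0.05 * power_split**2          # quadratic conversion loss
    return soc - 0.01 * demand - loss

def mpc_action(soc, demand, weights, candidates=np.linspace(0.0, 1.0, 11)):
    """One receding-horizon decision: minimise the immediate loss plus
    the learned estimate of long-term cost at the successor state."""
    best_u, best_cost = None, np.inf
    for u in candidates:
        next_soc = step(soc, u, demand)
        cost = 0.05 * u**2 + v_hat(next_soc, weights)
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u
```

Because `v_hat` is evaluated rather than rolled out, the online horizon can stay short, which is how such schemes keep the computational cost close to plain model-predictive control.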
Keywords
MODEL-PREDICTIVE CONTROL, ENERGY-STORAGE SYSTEM, HYBRID, SPLIT

Downloads

  • (...).pdf — full text (Published version), PDF, UGent only, 516.64 KB

Citation

Please use this URL to cite or link to this publication:

MLA
De Keyser, Arne, and Guillaume Crevecoeur. “Integrated Offline Reinforcement Learning for Optimal Power Flow Management in an Electric Dual-Drive Vehicle.” 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), IEEE, 2019, pp. 1305–10, doi:10.1109/aim.2019.8868330.
APA
De Keyser, A., & Crevecoeur, G. (2019). Integrated offline reinforcement learning for optimal power flow management in an electric dual-drive vehicle. 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), 1305–1310. https://doi.org/10.1109/aim.2019.8868330
Chicago author-date
De Keyser, Arne, and Guillaume Crevecoeur. 2019. “Integrated Offline Reinforcement Learning for Optimal Power Flow Management in an Electric Dual-Drive Vehicle.” In 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), 1305–10. New York: IEEE. https://doi.org/10.1109/aim.2019.8868330.
Chicago author-date (all authors)
De Keyser, Arne, and Guillaume Crevecoeur. 2019. “Integrated Offline Reinforcement Learning for Optimal Power Flow Management in an Electric Dual-Drive Vehicle.” In 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), 1305–1310. New York: IEEE. doi:10.1109/aim.2019.8868330.
Vancouver
1.
De Keyser A, Crevecoeur G. Integrated offline reinforcement learning for optimal power flow management in an electric dual-drive vehicle. In: 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM). New York: IEEE; 2019. p. 1305–10.
IEEE
[1]
A. De Keyser and G. Crevecoeur, “Integrated offline reinforcement learning for optimal power flow management in an electric dual-drive vehicle,” in 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Hong Kong, China, 2019, pp. 1305–1310.
@inproceedings{8634385,
  abstract     = {{The need for frequent charging is perceived as a common inconvenience of all-electric vehicles. A reinforcement learning-based strategy is therefore introduced to optimally exploit the capabilities of an electric dual-drive vehicle. Simplified subsystem models are defined to describe the inherent loss mechanisms, allowing the problem to be reformulated as an interconnection of power flows. The optimal power flow management is consequently cast into a model-predictive structure. Inherent stochasticity is handled by introducing an appropriate Markov transition model, and a value function approximation is constructed in an offline phase to account for the long-term impact of current control decisions. Relevant case studies show that the proposed method limits deviations from global optimality to less than 1% while only marginally compromising computational efficiency. Reinforcement learning may thus contribute to further range extensions for electric vehicles.}},
  author       = {{De Keyser, Arne and Crevecoeur, Guillaume}},
  booktitle    = {{2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM)}},
  isbn         = {{9781728124933}},
  issn         = {{2159-6255}},
  keywords     = {{MODEL-PREDICTIVE CONTROL,ENERGY-STORAGE SYSTEM,HYBRID,SPLIT}},
  language     = {{eng}},
  location     = {{Hong Kong, China}},
  pages        = {{1305--1310}},
  publisher    = {{IEEE}},
  title        = {{Integrated offline reinforcement learning for optimal power flow management in an electric dual-drive vehicle}},
  url          = {{https://doi.org/10.1109/aim.2019.8868330}},
  year         = {{2019}},
}
