
Reduced state space and cost function in reinforcement learning for demand response control of multiple EV charging stations

Abstract
Electric vehicle (EV) charging stations represent a substantial load with significant flexibility. Balancing such load with model-free demand response (DR) based on reinforcement learning (RL) is an attractive approach. We build on previous RL research using a Markov decision process (MDP) to simultaneously coordinate multiple charging stations. The previously proposed approach is computationally expensive in terms of large training times, limiting its feasibility and practicality. We propose to a priori force the control policy to always fulfill any charging demand that does not offer any flexibility at a given point, and thus use an updated cost function. We compare the policy of the newly proposed approach with the original (costly) one, for the case of load flattening, in terms of (i) processing time to learn the RL-based charging policy, and (ii) overall performance of the policy decisions in terms of meeting the target load for unseen test data.
Keywords
Smart grid, demand response, electric vehicle, smart charging, reinforcement learning, Markov decision process
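The core idea in the abstract — a priori forcing the policy to serve any charging demand with no remaining flexibility, so the learned policy only decides for flexible sessions — can be sketched as follows. This is a hypothetical illustration, not the authors' code; the representation of an EV session as a (hours left parked, charging hours still needed) pair is an assumption for the example.

```python
# Hypothetical sketch of the abstract's idea: EV sessions whose slack
# (parking time left minus charging time still needed) has run out must
# be charged unconditionally, and are removed from the decision problem
# the RL policy faces, shrinking its effective state/action space.

def split_by_flexibility(sessions):
    """sessions: list of (hours_left_parked, charging_hours_needed) pairs.

    Returns (forced, flexible): sessions that must charge now vs.
    sessions the learned policy may still defer.
    """
    forced, flexible = [], []
    for hours_left, hours_needed in sessions:
        slack = hours_left - hours_needed
        (forced if slack <= 0 else flexible).append((hours_left, hours_needed))
    return forced, flexible

# Example: the first session has zero slack and is charged by rule;
# only the other two are left for the RL policy to coordinate.
forced, flexible = split_by_flexibility([(2, 2), (5, 1), (3, 2)])
```

Under this split, the cost function only needs to score the aggregate load shape produced by the flexible sessions plus the (fixed) forced load, which is what makes the updated cost function cheaper to train against.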


Citation


MLA
Lahariya, Manu, et al. “Reduced State Space and Cost Function in Reinforcement Learning for Demand Response Control of Multiple EV Charging Stations.” BUILDSYS’19 : PROCEEDINGS OF THE 6TH ACM INTERNATIONAL CONFERENCE ON SYSTEMS FOR ENERGY-EFFICIENT BUILDINGS, CITIES, AND TRANSPORTATION, edited by M. Zhang, 2019, pp. 344–45, doi:10.1145/3360322.3360992.
APA
Lahariya, M., Sadeghianpourhamami, N., & Develder, C. (2019). Reduced state space and cost function in reinforcement learning for demand response control of multiple EV charging stations. In M. Zhang (Ed.), BUILDSYS’19 : PROCEEDINGS OF THE 6TH ACM INTERNATIONAL CONFERENCE ON SYSTEMS FOR ENERGY-EFFICIENT BUILDINGS, CITIES, AND TRANSPORTATION (pp. 344–345). New York, USA. https://doi.org/10.1145/3360322.3360992
Chicago author-date
Lahariya, Manu, Nasrin Sadeghianpourhamami, and Chris Develder. 2019. “Reduced State Space and Cost Function in Reinforcement Learning for Demand Response Control of Multiple EV Charging Stations.” In BUILDSYS’19 : PROCEEDINGS OF THE 6TH ACM INTERNATIONAL CONFERENCE ON SYSTEMS FOR ENERGY-EFFICIENT BUILDINGS, CITIES, AND TRANSPORTATION, edited by M. Zhang, 344–45. https://doi.org/10.1145/3360322.3360992.
Vancouver
1. Lahariya M, Sadeghianpourhamami N, Develder C. Reduced state space and cost function in reinforcement learning for demand response control of multiple EV charging stations. In: Zhang M, editor. BUILDSYS'19 : PROCEEDINGS OF THE 6TH ACM INTERNATIONAL CONFERENCE ON SYSTEMS FOR ENERGY-EFFICIENT BUILDINGS, CITIES, AND TRANSPORTATION. 2019. p. 344–5.
IEEE
[1] M. Lahariya, N. Sadeghianpourhamami, and C. Develder, “Reduced state space and cost function in reinforcement learning for demand response control of multiple EV charging stations,” in BUILDSYS’19 : PROCEEDINGS OF THE 6TH ACM INTERNATIONAL CONFERENCE ON SYSTEMS FOR ENERGY-EFFICIENT BUILDINGS, CITIES, AND TRANSPORTATION, New York, USA, 2019, pp. 344–345.
@inproceedings{8628793,
  abstract     = {Electric vehicle (EV) charging stations represent a substantial load with significant flexibility. Balancing such load with model-free demand response (DR) based on reinforcement learning (RL) is an attractive approach. We build on previous RL research using a Markov decision process (MDP) to simultaneously coordinate multiple charging stations. The previously proposed approach is computationally expensive in terms of large training times, limiting its feasibility and practicality. We propose to a priori force the control policy to always fulfill any charging demand that does not offer any flexibility at a given point, and thus use an updated cost function. We compare the policy of the newly proposed approach with the original (costly) one, for the case of load flattening, in terms of (i) processing time to learn the RL-based charging policy, and (ii) overall performance of the policy decisions in terms of meeting the target load for unseen test data.},
  author       = {Lahariya, Manu and Sadeghianpourhamami, Nasrin and Develder, Chris},
  booktitle    = {BUILDSYS'19 : PROCEEDINGS OF THE 6TH ACM INTERNATIONAL CONFERENCE ON SYSTEMS FOR ENERGY-EFFICIENT BUILDINGS, CITIES, AND TRANSPORTATION},
  editor       = {Zhang, M.},
  isbn         = {9781450370059},
  keywords     = {Smart grid,demand response,electric vehicle,smart charging,reinforcement learning,Markov decision process},
  language     = {eng},
  location     = {New York, USA},
  pages        = {344--345},
  title        = {Reduced state space and cost function in reinforcement learning for demand response control of multiple EV charging stations},
  doi          = {10.1145/3360322.3360992},
  url          = {http://dx.doi.org/10.1145/3360322.3360992},
  year         = {2019},
}
