
Adaptive control of a mechatronic system using constrained residual reinforcement learning

Tom Staessens (UGent) , Tom Lefebvre (UGent) and Guillaume Crevecoeur (UGent)
Abstract
In this article, we propose a simple, practical, and intuitive approach to improve the performance of a conventional controller in uncertain environments using deep reinforcement learning while maintaining safe operation. Our approach is motivated by the observation that conventional controllers in industrial motion control value robustness over adaptivity to deal with different operating conditions and are suboptimal as a consequence. Reinforcement learning, on the other hand, can optimize a control signal directly from input-output data and thus adapts to operational conditions, but lacks safety guarantees, impeding its use in industrial environments. To realize adaptive control using reinforcement learning in such conditions, we follow a residual learning methodology, where a reinforcement learning algorithm learns corrective adaptations to a base controller's output to increase optimality. We investigate how constraining the residual agent's actions makes it possible to leverage the base controller's robustness to guarantee safe operation. We detail the algorithmic design and propose to constrain the residual actions relative to the base controller to increase the method's robustness. Building on Lyapunov stability theory, we prove stability for a broad class of mechatronic closed-loop systems. We validate our method experimentally on a slider-crank setup and investigate how the constraints affect safety during learning and optimality after convergence.
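The residual scheme the abstract describes can be sketched in a few lines: an RL agent proposes a correction to a conventional base controller's output, and the correction is bounded relative to the base action so the base controller's robustness is preserved. This is a minimal illustrative sketch only; the controller gains, the relative bound `alpha`, and the function names are assumptions, not the paper's implementation.

```python
import numpy as np

def base_controller(error, prev_error=0.0, kp=2.0, kd=0.1, dt=0.01):
    """Simple PD controller standing in for the robust base controller."""
    return kp * error + kd * (error - prev_error) / dt

def constrained_residual_action(u_base, u_residual, alpha=0.2, u_max=10.0):
    """Combine base and residual actions, clipping the residual to a
    fraction alpha of the base action's magnitude (with a small floor
    so the agent can still act when u_base is near zero), then saturate
    to the actuator limit u_max."""
    bound = alpha * max(abs(u_base), 1e-2)
    u_res = float(np.clip(u_residual, -bound, bound))
    return float(np.clip(u_base + u_res, -u_max, u_max))

# Example: the agent's raw residual (5.0) far exceeds the allowed band,
# so it is clipped to ±20% of the base action before being applied.
u_b = base_controller(error=0.5)              # 2.0*0.5 + 0.1*50 = 6.0
u = constrained_residual_action(u_b, 5.0)     # 6.0 + clip(5.0, ±1.2) = 7.2
```

Tightening `alpha` toward zero recovers the base controller exactly, which is the intuition behind the safety argument: the constrained closed loop can deviate only boundedly from a controller already known to be robust.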
Keywords
Mechatronics, Reinforcement learning (RL), Training, Motion control, Stability analysis, Adaptive control, Adaptation models, Servo systems, Uncertain systems, Model-predictive control, Design, Crank

Downloads

  • (...).pdf — full text (Published version) | UGent only | PDF | 1.15 MB

Citation

Please use this url to cite or link to this publication:

MLA
Staessens, Tom, et al. “Adaptive Control of a Mechatronic System Using Constrained Residual Reinforcement Learning.” IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, vol. 69, no. 10, 2022, pp. 10447–56, doi:10.1109/TIE.2022.3144565.
APA
Staessens, T., Lefebvre, T., & Crevecoeur, G. (2022). Adaptive control of a mechatronic system using constrained residual reinforcement learning. IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 69(10), 10447–10456. https://doi.org/10.1109/TIE.2022.3144565
Chicago author-date
Staessens, Tom, Tom Lefebvre, and Guillaume Crevecoeur. 2022. “Adaptive Control of a Mechatronic System Using Constrained Residual Reinforcement Learning.” IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS 69 (10): 10447–56. https://doi.org/10.1109/TIE.2022.3144565.
Chicago author-date (all authors)
Staessens, Tom, Tom Lefebvre, and Guillaume Crevecoeur. 2022. “Adaptive Control of a Mechatronic System Using Constrained Residual Reinforcement Learning.” IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS 69 (10): 10447–10456. doi:10.1109/TIE.2022.3144565.
Vancouver
1. Staessens T, Lefebvre T, Crevecoeur G. Adaptive control of a mechatronic system using constrained residual reinforcement learning. IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS. 2022;69(10):10447–56.
IEEE
[1] T. Staessens, T. Lefebvre, and G. Crevecoeur, “Adaptive control of a mechatronic system using constrained residual reinforcement learning,” IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, vol. 69, no. 10, pp. 10447–10456, 2022.
@article{8734421,
  abstract     = {{In this article, we propose a simple, practical, and intuitive approach to improve the performance of a conventional controller in uncertain environments using deep reinforcement learning while maintaining safe operation. Our approach is motivated by the observation that conventional controllers in industrial motion control value robustness over adaptivity to deal with different operating conditions and are suboptimal as a consequence. Reinforcement learning, on the other hand, can optimize a control signal directly from input-output data and thus adapts to operational conditions but lacks safety guarantees, impeding its use in industrial environments. To realize adaptive control using reinforcement learning in such conditions, we follow a residual learning methodology, where a reinforcement learning algorithm learns corrective adaptations to a base controller's output to increase optimality. We investigate how constraining the residual agent's actions enables to leverage the base controller's robustness to guarantee safe operation. We detail the algorithmic design and propose to constrain the residual actions relative to the base controller to increase the method's robustness. Building on Lyapunov stability theory, we prove stability for a broad class of mechatronic closed-loop systems. We validate our method experimentally on a slider crank setup and investigate how the constraints affect the safety during learning and optimality after convergence.}},
  author       = {{Staessens, Tom and Lefebvre, Tom and Crevecoeur, Guillaume}},
  issn         = {{0278-0046}},
  journal      = {{IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS}},
  keywords     = {{Mechatronics,Reinforcement learning,Training,Motion control,Stability analysis,Adaptive control,Adaptation models,Mechatronics,motion control,reinforcement learning (RL),servo systems,uncertain systems,MODEL-PREDICTIVE CONTROL,DESIGN,CRANK}},
  language     = {{eng}},
  number       = {{10}},
  pages        = {{10447--10456}},
  title        = {{Adaptive control of a mechatronic system using constrained residual reinforcement learning}},
  url          = {{https://doi.org/10.1109/TIE.2022.3144565}},
  volume       = {{69}},
  year         = {{2022}},
}
