- Author
- Mattijs Baert (UGent), Pietro Mazzaglia (UGent), Sam Leroux (UGent) and Pieter Simoens (UGent)
- Abstract
- When deploying artificial agents in real-world environments where they interact with humans, it is crucial that their behavior is aligned with the values, social norms or other requirements specific to that environment. However, many environments have implicit constraints that are difficult to specify and transfer to a learning agent. To address this challenge, we propose a novel method that utilizes the principle of maximum causal entropy to learn constraints and an optimal policy that adheres to these constraints, using demonstrations of agents that abide by the constraints. We prove convergence in a tabular setting and provide a practical implementation which scales to complex environments. We evaluate the effectiveness of the learned policy by assessing the reward received and the number of constraint violations, and we evaluate the learned cost function based on its transferability to other agents. Our method has been shown to outperform state-of-the-art approaches across a variety of tasks and environments, and it is able to handle problems with stochastic dynamics and a continuous state-action space.
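The abstract describes alternating between maximum-causal-entropy policy optimization and learning a cost function from demonstrations of constraint-abiding agents. A minimal tabular sketch of that idea follows; the gridworld, the visitation-matching cost update, and all names and hyperparameters here are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

# Hypothetical setup: a 3x3 gridworld, start (0,0), goal (2,2).
# The constrained cell (1,1) never appears in expert demonstrations,
# so its learned cost should grow until the learner avoids it.
N = 3
S, A = N * N, 4                 # states s = row*N + col; actions up/down/left/right
GOAL, BAD, H = 8, 4, 20
moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def step(s, a):
    r, c = divmod(s, N)
    dr, dc = moves[a]
    return min(max(r + dr, 0), N - 1) * N + min(max(c + dc, 0), N - 1)

reward = np.zeros(S); reward[GOAL] = 1.0
T = np.array([[s if s == GOAL else step(s, a) for a in range(A)]
              for s in range(S)])           # goal is absorbing

def soft_policy(cost, iters=50):
    """Maximum-entropy (soft) value iteration under reward minus cost."""
    V = np.zeros(S)
    for _ in range(iters):
        Q = reward[:, None] - cost[:, None] + 0.9 * V[T]
        V = np.log(np.exp(Q).sum(axis=1))   # soft maximum over actions
    Q = reward[:, None] - cost[:, None] + 0.9 * V[T]
    return np.exp(Q - V[:, None])           # rows already sum to 1

def occupancy(pi):
    """Expected state-visitation frequencies over horizon H from the start."""
    rho, d = np.zeros(S), np.zeros(S)
    d[0] = 1.0
    for _ in range(H):
        rho += d
        nd = np.zeros(S)
        for s in range(S):
            for a in range(A):
                nd[T[s, a]] += d[s] * pi[s, a]
        d = nd
    return rho / rho.sum()

# Expert visitation from two demonstrations that detour around the bad cell;
# demos are padded at the absorbing goal so horizons match the learner's.
demos = [[0, 1, 2, 5, 8], [0, 3, 6, 7, 8]]
rho_E = np.zeros(S)
for traj in demos:
    for s in traj + [GOAL] * (H - len(traj)):
        rho_E[s] += 1
rho_E /= rho_E.sum()

# Alternate: solve the soft policy, then raise the cost wherever the learner
# visits more than the expert (a simple visitation-matching update).
cost = np.zeros(S)
for _ in range(200):
    rho_L = occupancy(soft_policy(cost))
    cost = np.maximum(cost + 5.0 * (rho_L - rho_E), 0.0)

print("learned cost at forbidden cell (1,1):", round(float(cost[BAD]), 2))
```

Because the expert never enters (1,1) while the unconstrained maximum-entropy learner does, the visitation gap is positive only there (and near the start), so the cost concentrates on the forbidden cell; a learned cost of this form is what the abstract proposes transferring to other agents.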
- Keywords
- Inverse constrained reinforcement learning, Principle of maximum causal entropy, Constraint inference, Constraint learning, Safe reinforcement learning, Constrained reinforcement learning
Downloads
- (...).pdf | full text (Accepted manuscript) | UGent only (changes to open access on 2026-08-22) | 2.68 MB
- (...).pdf | full text (Published version) | UGent only | 3.10 MB
Citation
Please use this url to cite or link to this publication: http://hdl.handle.net/1854/LU-01JNDR0FDYRRMX54DBX89Q5TSW
- MLA
- Baert, Mattijs, et al. “Maximum Causal Entropy Inverse Constrained Reinforcement Learning.” MACHINE LEARNING, vol. 114, no. 4, 2025, doi:10.1007/s10994-024-06653-5.
- APA
- Baert, M., Mazzaglia, P., Leroux, S., & Simoens, P. (2025). Maximum causal entropy inverse constrained reinforcement learning. MACHINE LEARNING, 114(4). https://doi.org/10.1007/s10994-024-06653-5
- Chicago author-date
- Baert, Mattijs, Pietro Mazzaglia, Sam Leroux, and Pieter Simoens. 2025. “Maximum Causal Entropy Inverse Constrained Reinforcement Learning.” MACHINE LEARNING 114 (4). https://doi.org/10.1007/s10994-024-06653-5.
- Chicago author-date (all authors)
- Baert, Mattijs, Pietro Mazzaglia, Sam Leroux, and Pieter Simoens. 2025. “Maximum Causal Entropy Inverse Constrained Reinforcement Learning.” MACHINE LEARNING 114 (4). doi:10.1007/s10994-024-06653-5.
- Vancouver
- 1. Baert M, Mazzaglia P, Leroux S, Simoens P. Maximum causal entropy inverse constrained reinforcement learning. MACHINE LEARNING. 2025;114(4).
- IEEE
- [1] M. Baert, P. Mazzaglia, S. Leroux, and P. Simoens, “Maximum causal entropy inverse constrained reinforcement learning,” MACHINE LEARNING, vol. 114, no. 4, 2025.
@article{01JNDR0FDYRRMX54DBX89Q5TSW,
  abstract  = {{When deploying artificial agents in real-world environments where they interact with humans, it is crucial that their behavior is aligned with the values, social norms or other requirements specific to that environment. However, many environments have implicit constraints that are difficult to specify and transfer to a learning agent. To address this challenge, we propose a novel method that utilizes the principle of maximum causal entropy to learn constraints and an optimal policy that adheres to these constraints, using demonstrations of agents that abide by the constraints. We prove convergence in a tabular setting and provide a practical implementation which scales to complex environments. We evaluate the effectiveness of the learned policy by assessing the reward received and the number of constraint violations, and we evaluate the learned cost function based on its transferability to other agents. Our method has been shown to outperform state-of-the-art approaches across a variety of tasks and environments, and it is able to handle problems with stochastic dynamics and a continuous state-action space.}},
  articleno = {{103}},
  author    = {{Baert, Mattijs and Mazzaglia, Pietro and Leroux, Sam and Simoens, Pieter}},
  issn      = {{0885-6125}},
  journal   = {{MACHINE LEARNING}},
  keywords  = {{Inverse constrained reinforcement learning,Principle of maximum causal entropy,Constraint inference,Constraint learning,Safe reinforcement learning,Constrained reinforcement learning}},
  language  = {{eng}},
  number    = {{4}},
  pages     = {{44}},
  title     = {{Maximum causal entropy inverse constrained reinforcement learning}},
  url       = {{http://doi.org/10.1007/s10994-024-06653-5}},
  volume    = {{114}},
  year      = {{2025}},
}