Distributional constrained reinforcement learning for supply chain optimization
Author(s)
Bermúdez, Jaime Sabal
del Rio Chanona, Antonio
Tsay, Calvin
Type
Chapter
Abstract
This work studies reinforcement learning (RL) for multi-period supply chains subject to constraints, e.g., on inventory. We introduce Distributional Constrained Policy Optimization (DCPO), a novel approach for reliable constraint satisfaction in RL. Our approach builds on Constrained Policy Optimization (CPO), which suffers from approximation errors that in practice cause it to converge to infeasible policies. We address this issue by incorporating aspects of distributional RL. Using a supply chain case study, we show that DCPO improves the rate at which the RL policy converges and ensures reliable constraint satisfaction by the end of training. The proposed method also greatly reduces the variance of returns between runs; this result is significant for policy gradient methods, which intrinsically introduce high variance during training.
Date Issued
2023
Citation
Computer Aided Chemical Engineering, 2023, pp. 1649–1654
ISBN
9780443152740
Publisher
Elsevier
Start Page
1649
End Page
1654
Journal / Book Title
Computer Aided Chemical Engineering
Copyright Statement
Copyright © 2023 Elsevier B.V. All rights reserved.
Identifier
http://dx.doi.org/10.1016/b978-0-443-15274-0.50262-6
Publication Status
Published