Control-informed reinforcement learning for chemical processes
Author(s)
Type
Journal Article
Abstract
This work proposes a control-informed reinforcement learning (CIRL) framework that integrates proportional-integral-derivative (PID) control components into the architecture of deep reinforcement learning (RL) policies, incorporating prior knowledge from control theory into the learning process. CIRL improves performance and robustness by combining the best of both worlds: the disturbance-rejection and set-point-tracking capabilities of PID control and the nonlinear modeling capacity of deep RL. Simulation studies conducted on a continuously stirred tank reactor system demonstrate the improved performance of CIRL compared to both conventional model-free deep RL and static PID controllers. CIRL exhibits better set-point-tracking ability, particularly when generalizing to trajectories containing set points outside the training distribution, suggesting enhanced generalization capabilities. Furthermore, the embedded prior control knowledge within the CIRL policy improves its robustness to unobserved system disturbances. The CIRL framework combines the strengths of classical control and reinforcement learning to develop sample-efficient and robust deep RL algorithms with potential applications in complex industrial systems.
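To make the architecture concrete, the sketch below shows one plausible way to embed a PID structure inside a deep RL policy, assuming the network outputs state-dependent PID gains that a discrete PID law then applies to the set-point tracking error. The class and parameter names (CIRLPolicy, state_dim, dt) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CIRLPolicy(nn.Module):
    """Illustrative control-informed policy: a neural network maps the
    observed state to PID gains, and a discrete PID law converts the
    set-point error into the control action."""

    def __init__(self, state_dim: int, hidden: int = 64, dt: float = 1.0):
        super().__init__()
        self.dt = dt  # sampling interval of the discrete controller
        self.gain_net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 3), nn.Softplus(),  # keep K_p, K_i, K_d >= 0
        )

    def forward(self, state, error, error_integral, error_prev):
        # State-dependent gains: the learnable, nonlinear part of the policy
        k_p, k_i, k_d = self.gain_net(state).unbind(dim=-1)
        d_error = (error - error_prev) / self.dt
        # Embedded PID law: the prior structure from control theory
        return k_p * error + k_i * error_integral + k_d * d_error

# Example call with a 4-dimensional state and scalar tracking error
policy = CIRLPolicy(state_dim=4)
u = policy(torch.randn(1, 4), torch.tensor([0.5]),
           torch.tensor([0.1]), torch.tensor([0.4]))
```

Because the PID law is differentiable, the gain network can be trained end to end with a standard deep RL algorithm, which is what lets the policy inherit PID-style disturbance rejection while retaining the nonlinear capacity of the network.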
Date Issued
2025-03-05
Date Acceptance
2025-01-09
Citation
Industrial and Engineering Chemistry Research, 2025, 64 (9), pp. 4966-4978
ISSN
0888-5885
Publisher
American Chemical Society
Start Page
4966
End Page
4978
Journal / Book Title
Industrial and Engineering Chemistry Research
Volume
64
Issue
9
Copyright Statement
Copyright © 2025 The Authors. Published by American Chemical Society. This publication is licensed under CC-BY 4.0.
License URL
https://creativecommons.org/licenses/by/4.0/
Identifier
https://www.ncbi.nlm.nih.gov/pubmed/40070693
Publication Status
Published
Coverage Spatial
United States
Date Publish Online
2025-02-20