Enforcing state constraints in dynamical systems modelled with neural networks
File(s)
Cho_Amato_Constraints.pdf (212.19 KB)
Published version
Author(s)
Cho, Namhoon
Amato, Davide
Type
Conference Paper
Abstract
Deep neural networks (NNs) are usually trained with unconstrained optimisation algorithms. By reasoning similar to that of the constrained Kalman filter, incorporating known information in the form of equality constraints at certain checkpoints can potentially improve prediction accuracy. For continuous-time dynamical systems, the state constraints should be enforced on an ordinary differential equation (ODE) model that embeds NNs to represent a learned part of the dynamics or a control policy. To this end, incremental correction methods are developed for post-processing dynamical systems modelled with NNs whose parameters have been determined by a previous optimisation process. The proposed approach finds the small local correction needed for the updated solution to satisfy the given state constraints. Algorithms for updating both the neural network parameters and the control function are considered.
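The abstract's idea of a small local parameter correction enforcing an equality constraint at a checkpoint can be sketched as follows. This is an illustrative assumption, not the paper's algorithm: a toy ODE x' = tanh(Wx) integrated by explicit Euler, with a minimum-norm Gauss-Newton update of the flattened parameters W so that the terminal state hits a prescribed target.

```python
import numpy as np

def simulate(W, x0, dt=0.01, steps=100):
    """Explicit-Euler rollout of the toy ODE x' = tanh(W x)."""
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * np.tanh(W @ x)
    return x

def constraint(W, x0, target):
    """Equality constraint at the checkpoint: terminal state must equal `target`."""
    return simulate(W, x0) - target

def incremental_correction(W, x0, target, iters=5, eps=1e-6):
    """Find a small correction of the parameters that satisfies the
    terminal-state constraint, via minimum-norm Gauss-Newton steps."""
    theta = W.flatten()
    n = theta.size
    shape = W.shape
    for _ in range(iters):
        c = constraint(theta.reshape(shape), x0, target)
        # Finite-difference Jacobian of the constraint w.r.t. the parameters.
        J = np.zeros((c.size, n))
        for j in range(n):
            tp = theta.copy()
            tp[j] += eps
            J[:, j] = (constraint(tp.reshape(shape), x0, target) - c) / eps
        # Minimum-norm update: delta = -J^T (J J^T)^{-1} c, i.e. the smallest
        # parameter change (in the 2-norm) that cancels the linearised violation.
        delta = -J.T @ np.linalg.solve(J @ J.T + 1e-10 * np.eye(c.size), c)
        theta = theta + delta
    return theta.reshape(shape)

rng = np.random.default_rng(0)
W0 = 0.1 * rng.standard_normal((2, 2))   # "pre-trained" parameters
x0 = np.array([1.0, 0.5])
target = np.array([0.8, 0.6])            # checkpoint constraint

W_corr = incremental_correction(W0, x0, target)
residual = np.linalg.norm(constraint(W_corr, x0, target))
```

The minimum-norm step keeps the corrected parameters close to the pre-trained ones, matching the spirit of a "small amount of local correction"; the paper's actual algorithms for updating the NN parameters and the control function may differ.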
Date Issued
2022-06-21
Date Acceptance
2022-03-23
Citation
2022
Copyright Statement
© 2022 The Author(s).
Identifier
https://easychair.org/smart-program/ICCS2022/2022-06-23.html#talk:193444
Source
International Conference on Computational Science 2022
Publication Status
Published
Start Date
2022-06-21
Finish Date
2022-06-23
Coverage Spatial
London, UK