Formalizing consistency and coherence of representation learning
Author(s)
Stromfelt, H
Dickens, L
Garcez, A
Russo, A
Type
Conference Paper
Abstract
In the study of reasoning in neural networks, recent efforts have sought to improve the consistency and coherence of sequence models, leading to important developments in the area of neuro-symbolic AI. In symbolic AI, the concepts of consistency and coherence can be defined and verified formally, but for neural networks such definitions are lacking. Providing these formal definitions is crucial to establishing a common basis for the quantitative evaluation and systematic comparison of connectionist, neuro-symbolic and transfer learning approaches. In this paper, we introduce formal definitions of consistency and coherence for neural systems. To illustrate the usefulness of our definitions, we propose a new dynamic relation-decoder model built around the principles of consistency and coherence. We compare our results with several existing relation-decoders on a partial transfer learning task based on a novel data set introduced in this paper. Our experiments show that relation-decoders that maintain consistency over unobserved regions of representation space retain coherence across domains, whilst achieving better transfer learning performance.
Date Issued
2022-11-28
Date Acceptance
2022-11-01
Citation
Advances in Neural Information Processing Systems, 2022, 35
ISBN
9781713871088
ISSN
1049-5258
Journal / Book Title
Advances in Neural Information Processing Systems
Volume
35
Copyright Statement
© 2022 The Author(s).
Source
36th Conference on Neural Information Processing Systems (NeurIPS 2022)
Publication Status
Published
Start Date
2022-11-28
Finish Date
2022-12-09
Coverage Spatial
New Orleans, LA, USA