Heterogeneous graph neural networks for credulous acceptance of assumptions in ABA
File(s)
GNN4ABA_Workshop_Paper.pdf (1.19 MB)
Accepted version
Author(s)
Gehlot, Preesha
Rapberger, Anna
Russo, Fabrizio
Toni, Francesca
Type
Conference Paper
Abstract
Assumption-Based Argumentation (ABA) is a powerful structured argumentation formalism, but exact computation of extensions under stable semantics is intractable for large frameworks. We present the first Graph Neural Network (GNN) approach to approximate credulous acceptance in ABA. To use GNNs, we represent ABA frameworks via a dependency graph representation that encodes atoms and rules as nodes and distinguishes support, derive, and attack relations by heterogeneous edge labels. We propose two GNN variants, ABAGCN and ABAGAT, that stack residual heterogeneous convolution or attention blocks, respectively, to learn node embeddings. Our models are trained on the ICCMA2023 benchmark, augmented with synthetic ABAFs, with hyperparameters optimised via Bayesian search. Empirically, both ABAGCN and ABAGAT outperform a state-of-the-art GNN baseline that we adapt from the abstract argumentation literature, achieving a node-level F1 score of up to 0.71 on the ICCMA instances. Finally, we develop a polynomial-time extension-reconstruction algorithm driven by our predictor: it reconstructs stable extensions with an F1 score above 0.85 on small ABAFs and maintains an F1 of about 0.58 on frameworks with 1,000 atoms. Our work opens new avenues for scalable approximate reasoning in structured argumentation.
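The dependency-graph encoding described in the abstract (atoms and rules as nodes; support, derive, and attack as heterogeneous edge labels) can be illustrated with a minimal sketch. The function name, the edge directions, and the rule-node naming scheme below are assumptions for illustration, not the authors' exact construction.

```python
# Hypothetical sketch of the abstract's dependency-graph encoding of an ABA
# framework; edge directions and naming conventions are assumptions.

def build_dependency_graph(assumptions, rules, contrary):
    """assumptions: set of atoms; rules: list of (head, body) pairs;
    contrary: dict mapping each assumption to its contrary atom."""
    nodes = set(assumptions)
    edges = []  # (source, label, target) triples with heterogeneous labels
    for i, (head, body) in enumerate(rules):
        rule_node = f"r{i}"  # one node per rule, named r0, r1, ... (assumed)
        nodes.add(rule_node)
        nodes.add(head)
        for atom in body:
            nodes.add(atom)
            edges.append((atom, "support", rule_node))  # body atoms support the rule
        edges.append((rule_node, "derive", head))  # the rule derives its head
    for a in assumptions:
        c = contrary.get(a)
        if c is not None:
            nodes.add(c)
            edges.append((c, "attack", a))  # deriving a contrary attacks the assumption
    return nodes, edges

# Toy ABAF: assumptions a and b; contrary of a is p; one rule r0: p <- b.
nodes, edges = build_dependency_graph(
    assumptions={"a", "b"},
    rules=[("p", ["b"])],
    contrary={"a": "p"},
)
```

A heterogeneous GNN such as the paper's ABAGCN or ABAGAT would then learn separate message-passing weights per edge label when computing node embeddings over this graph.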
Date Issued
2025-11-01
Date Acceptance
2025-09-02
Citation
Proceedings of the Second International Workshop on Argumentation and Applications (Arg&App 2025), 2025, pp.14-25
Start Page
14
End Page
25
Journal / Book Title
Proceedings of the Second International Workshop on Argumentation and Applications (Arg&App 2025)
Copyright Statement
© 2025 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
License URL
Source
The Second International Workshop on Argumentation and Applications co-located with the 22nd International Conference on Principles of Knowledge Representation and Reasoning
Publication Status
Published
Start Date
2025-11-11
Coverage Spatial
Melbourne, Australia