How well do SOTA legal reasoning models support abductive reasoning?
File(s)
paper1LPLR.pdf (391.99 KB)
Published version
Author(s)
Nguyen, Ha-Thanh
Goebel, Randy
Toni, Francesca
Stathis, Kostas
Satoh, Ken
Type
Conference Paper
Abstract
We examine how well the state-of-the-art (SOTA) models used in legal reasoning support abductive reasoning tasks. Abductive reasoning is a form of logical inference in which a hypothesis is formulated from a set of observations, and that hypothesis is used to explain the observations. The ability to formulate such hypotheses is important for lawyers and legal scholars, as it helps them articulate logical arguments, interpret laws, and develop legal theories. Our motivation is to consider the belief that deep learning models, especially large language models (LLMs), will soon replace lawyers because they perform well on tasks related to legal text processing. But doing so, we believe, requires some form of abductive hypothesis formation. In other words, as LLMs become more popular and powerful, we want to investigate their capacity for abductive reasoning. To pursue this goal, we start by building a logic-augmented dataset for abductive reasoning with 498,697 samples and then use it to evaluate the performance of a SOTA model in the legal field. Our experimental results show that although these models can perform well on tasks related to some aspects of legal text processing, they still fall short in supporting abductive reasoning tasks.
Date Acceptance
2023-06-14
Copyright Statement
© 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0)
License URL
Source
Logic Programming and Legal Reasoning Workshop@ICLP2023
Publication Status
Accepted
Start Date
2023-07-09
Finish Date
2023-07-15
Coverage Spatial
London, UK