Efficient instance and hypothesis space revision in Meta-Interpretive Learning

File: Hocquette-C-2022-PhD-Thesis.pdf (Thesis, 3.33 MB, Adobe PDF)
Title: Efficient instance and hypothesis space revision in Meta-Interpretive Learning
Authors: Hocquette, Céline
Item Type: Thesis or dissertation
Abstract: Inductive Logic Programming (ILP) is a form of Machine Learning. The goal of ILP is to induce hypotheses, as logic programs, that generalise training examples. ILP is characterised by high expressivity, generalisation ability and interpretability. Meta-Interpretive Learning (MIL) is a state-of-the-art sub-field of ILP. However, current MIL approaches have limited efficiency: the sample complexity is polynomial and the learning complexity exponential in the number of clauses. My thesis is that improvements over the sample and learning complexity can be achieved in MIL through instance and hypothesis space revision. Specifically, we investigate 1) methods that revise the instance space, 2) methods that revise the hypothesis space and 3) methods that revise both the instance and hypothesis spaces to achieve more efficient MIL. First, we introduce a method for building training sets with active learning in Bayesian MIL: instances are selected so as to maximise entropy. We demonstrate that this method can reduce the sample complexity and support efficient learning of agent strategies. Second, we introduce a new method for revising the MIL hypothesis space with predicate invention. Our method generates predicates bottom-up from the background knowledge related to the training examples. We demonstrate that this method is complete and can reduce the learning and sample complexity. Finally, we introduce a new MIL system called MIGO for learning optimal two-player game strategies. MIGO learns from playing: its training sets are built from the sequence of actions it chooses. Moreover, MIGO revises its hypothesis space with Dependent Learning: it first solves simpler tasks and can reuse any learned solution when solving more complex tasks. We demonstrate that MIGO significantly outperforms both classical and deep reinforcement learning. The methods presented in this thesis open exciting perspectives for efficiently learning theories with MIL in a wide range of applications, including robotics, modelling of agent strategies and game playing.
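
The entropy-maximising instance selection mentioned in the abstract can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the thesis's implementation: the predict_positive callback is a hypothetical stand-in for the learner's posterior probability that an instance is a positive example.

    import math

    def binary_entropy(p: float) -> float:
        """Entropy of a Bernoulli label distribution with positive probability p."""
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

    def select_most_uncertain(pool, predict_positive):
        """Return the unlabelled instance with maximal label entropy under the
        current model, i.e. the instance the learner is most uncertain about."""
        return max(pool, key=lambda instance: binary_entropy(predict_positive(instance)))

    # Hypothetical usage: predict_positive would marginalise the Bayesian MIL
    # posterior over hypotheses to obtain P(instance is positive).
    # next_query = select_most_uncertain(unlabelled_instances, predict_positive)
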
Content Version: Open Access
Issue Date: Feb-2022
Date Awarded: May-2022
URI: http://hdl.handle.net/10044/1/97356
DOI: https://doi.org/10.25560/97356
Copyright Statement: Creative Commons Attribution NonCommercial Licence
Supervisor: Muggleton, Stephen
Sponsor/Funder: Engineering and Physical Sciences Research Council (EPSRC)
Funder's Grant Number: 1964850
Department: Computing
Publisher: Imperial College London
Qualification Level: Doctoral
Qualification Name: Doctor of Philosophy (PhD)
Appears in Collections: Computing PhD theses



This item is licensed under a Creative Commons Licence.