Can meta-interpretive learning outperform deep reinforcement learning of evaluable game strategies?
File(s)
Author(s)
Hocquette, Céline
Muggleton, Stephen H
Type
Working Paper
Abstract
World-class human players have been outperformed in a number of complex two-person games (Go, Chess, Checkers) by Deep Reinforcement Learning systems. However, owing to tractability considerations, the minimax regret of a learning system cannot be evaluated in such games. In this paper we consider simple games (Noughts-and-Crosses and Hexapawn) in which minimax regret can be efficiently evaluated. We use these games to compare Cumulative Minimax Regret for variants of both standard and deep reinforcement learning against two variants of a new Meta-Interpretive Learning system called MIGO. In our experiments, all tested variants of both standard and deep reinforcement learning performed worse (higher cumulative minimax regret) than both variants of MIGO on Noughts-and-Crosses and Hexapawn. Additionally, MIGO's learned rules are relatively easy to comprehend, and are demonstrated to achieve significant transfer learning in both directions between Noughts-and-Crosses and Hexapawn.
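The abstract's key premise is that in small games the exact game-theoretic value of every position can be computed, so a learner's moves can be scored against optimal play. A minimal sketch of such an exact evaluation for Noughts-and-Crosses (this is illustrative only and is not the paper's MIGO or reinforcement learning code; board encoding and function names are assumptions):

```python
from functools import lru_cache

# The eight winning lines on a 3x3 board, indexed 0..8 row-major.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has completed a line, else None."""
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Exact game value with `player` to move: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if '.' not in board:
        return 0
    values = []
    for i, cell in enumerate(board):
        if cell == '.':
            child = board[:i] + player + board[i + 1:]
            values.append(minimax(child, 'O' if player == 'X' else 'X'))
    return max(values) if player == 'X' else min(values)

# Optimal play from the empty board is a draw.
print(minimax('.' * 9, 'X'))  # 0
```

With exact values available for every position, the regret of a move is simply the gap between the value of the best move and the value of the move actually played, which is what makes cumulative minimax regret efficiently evaluable in games of this size.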
Date Issued
2019-02-26
Citation
2019
Publisher
arXiv
Copyright Statement
© 2020 The Author(s)
Sponsor
Royal Academy of Engineering
Engineering and Physical Sciences Research Council (EPSRC)
Identifier
https://arxiv.org/abs/1902.09835
Grant Number
10145/88
EP/R0222091/1
Publication Status
Published