Planning Against Fictitious Players in Repeated Normal Form Games
Author(s)
Cote, Enrique Munoz de
Jennings, Nick
Type
Conference Paper
Abstract
Planning how to interact with bounded-memory and unbounded-memory learning opponents requires different treatments. Thus far, however, work in this area has shown how to design plans against bounded-memory learning opponents, but no work has dealt with the unbounded-memory case. This paper fills that gap. In particular, we frame this as a planning problem using the framework of repeated matrix games, where the planner’s objective is to compute the best exploiting sequence of actions against a learning opponent. The particular class of opponent we study uses a fictitious play process to update her beliefs, but the analysis generalizes to many forms of Bayesian learning agents. Our analysis is inspired by Banerjee and Peng’s AIM framework, which works for planning and learning against bounded-memory opponents (e.g., an adaptive player). Building on this, we show how an unbounded-memory opponent (specifically a fictitious player) can also be modelled as a finite MDP, and we present a new efficient algorithm that exploits the opponent by computing, in polynomial time, a sequence of play that obtains a higher average reward than that obtained by playing a game-theoretic (Nash or correlated) equilibrium.
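The opponent model described in the abstract can be made concrete with a minimal sketch (this is not the paper’s algorithm; the payoff matrix and action sequence below are illustrative assumptions): a fictitious-play opponent keeps unbounded-memory counts of the planner’s past actions and best-responds to the resulting empirical frequencies, so the planner’s whole history, not just a recent window, determines the opponent’s reply.

```python
# Minimal sketch (illustrative, not the paper's algorithm): a fictitious-play
# opponent in a repeated 2x2 matrix game. Payoffs and the planner's action
# sequence are hypothetical choices for demonstration only.
import numpy as np

# Opponent's payoff matrix: rows = opponent's action, cols = planner's action.
OPP_PAYOFF = np.array([[3.0, 0.0],
                       [1.0, 2.0]])

def fictitious_best_response(counts):
    """Best-respond to the empirical frequency of the planner's actions."""
    beliefs = counts / counts.sum()   # unbounded-memory belief update
    expected = OPP_PAYOFF @ beliefs   # expected payoff of each reply
    return int(np.argmax(expected))

# The planner commits to a fixed sequence; the opponent's replies depend on
# the entire history through its ever-growing action counts.
counts = np.ones(2)                   # uniform prior over planner actions
replies = []
for planner_action in [0, 0, 0, 1, 1, 1, 1, 1]:
    replies.append(fictitious_best_response(counts))
    counts[planner_action] += 1

print(replies)  # the reply flips once the empirical frequency shifts enough
```

Because the opponent's internal state is fully summarized by its belief (the action counts), the planner can treat those beliefs as the state of an MDP and plan a sequence that steers them, which is the intuition behind modelling the fictitious player as a finite MDP.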
Date Issued
2010
Citation
2010, pp.1073-1080
Start Page
1073
End Page
1080
Identifier
http://eprints.soton.ac.uk/268481/
Source
9th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS2010)
Publication Status
Unpublished