Automated planning in repeated adversarial games
Author(s)
Cote, EMD
Chapman, A
Sykulski, AM
Jennings, N
Type
Conference Paper
Abstract
Game theory's prescriptive power typically relies on full rationality and/or self-play interactions. In contrast, this work sets aside these fundamental premises and focuses instead on heterogeneous autonomous interactions between two or more agents. Specifically, we introduce a new and concise representation for repeated adversarial (constant-sum) games that highlights the necessary features that enable an automated planning agent to reason about how to score above the game's Nash equilibrium, when facing heterogeneous adversaries. To this end, we present TeamUP, a model-based RL algorithm designed for learning and planning with such an abstraction. In essence, it is somewhat similar to R-max with a cleverly engineered reward shaping that treats exploration as an adversarial optimization problem. In practice, it attempts to find an ally with which to tacitly collude (in more than two-player games) and then collaborates on a joint plan of actions that can consistently score a high utility in adversarial repeated games. We use the inaugural Lemonade Stand Game Tournament to demonstrate the effectiveness of our approach, and find that TeamUP is the best performing agent, demoting the Tournament's actual winning strategy into second place. In our experimental analysis, we show that our strategy successfully and consistently builds collaborations with many different heterogeneous (and sometimes very sophisticated) adversaries.
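For readers unfamiliar with R-max-style optimism, the sketch below illustrates the general idea the abstract alludes to: a model-based learner for a repeated matrix game that assigns an optimistic payoff to under-explored joint actions (driving directed exploration) and plans greedily against an empirical opponent model. This is a minimal illustration only, not the TeamUP algorithm from the paper; every name in it (RMaxLikeLearner, R_MAX, M_KNOWN, and so on) is hypothetical and assumed for the example.

# Illustrative R-max-style sketch for a repeated matrix game.
# NOT the TeamUP algorithm; all names here are hypothetical.
from collections import defaultdict

R_MAX = 1.0    # optimistic payoff assigned to under-explored joint actions
M_KNOWN = 10   # visits before a joint action's payoff model counts as known

class RMaxLikeLearner:
    def __init__(self, my_actions, opp_actions):
        self.my_actions = my_actions
        self.opp_actions = opp_actions
        self.visits = defaultdict(int)      # (my_a, opp_a) -> visit count
        self.reward_sum = defaultdict(float)
        self.opp_counts = defaultdict(int)  # empirical opponent model

    def shaped_reward(self, my_a, opp_a):
        """Empirical mean payoff once 'known'; optimistic R_MAX otherwise."""
        n = self.visits[(my_a, opp_a)]
        if n < M_KNOWN:
            return R_MAX   # optimism makes under-explored pairs look attractive
        return self.reward_sum[(my_a, opp_a)] / n

    def opp_distribution(self):
        """Empirical distribution over opponent actions (uniform prior)."""
        total = sum(self.opp_counts.values())
        if total == 0:
            return {a: 1.0 / len(self.opp_actions) for a in self.opp_actions}
        return {a: self.opp_counts[a] / total for a in self.opp_actions}

    def act(self):
        """Greedy one-step plan against the learned opponent model."""
        dist = self.opp_distribution()
        def value(my_a):
            return sum(p * self.shaped_reward(my_a, o) for o, p in dist.items())
        return max(self.my_actions, key=value)

    def observe(self, my_a, opp_a, payoff):
        self.visits[(my_a, opp_a)] += 1
        self.reward_sum[(my_a, opp_a)] += payoff
        self.opp_counts[opp_a] += 1

# Hypothetical usage with three abstract positions, loosely evoking a
# constant-sum positioning game:
#   learner = RMaxLikeLearner(["L", "C", "R"], ["L", "C", "R"])
#   a = learner.act()
#   learner.observe(a, "C", payoff=0.5)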
Date Issued
2010-12-01
Online Publication Date
2010-07
Start Page
376
End Page
383
Source Database
manual-entry
Identifier
http://eprints.soton.ac.uk/271306/
Source
26th Conference on Uncertainty in Artificial Intelligence (UAI 2010)
Source Place
Catalina Island, California
Notes
Event Dates: 8-11 July 2010
Publication Status
Unpublished
Start Date
2010-07-08
Finish Date
2010-07-11