Better automated abstraction techniques for imperfect information games. Andrew Gilpin and Tuomas Sandholm, Carnegie Mellon University, Computer Science Department.



Games and information

- Games can be differentiated based on the information available to the players
- Perfect information games: players have complete knowledge about the state of the world. Examples: Chess, Go, Checkers
- Imperfect information games: players face uncertainty about the state of the world. Examples: a robot facing adversaries in an uncertain, stochastic environment; almost any economic situation in which the other participants possess private information (e.g. valuations, quality information); almost any card game in which the other players' cards are hidden
- This class of games presents several challenges for AI: imperfect information, risk assessment and management, speculation and counter-speculation

Game theory

- In multi-agent systems, an agent's outcome depends on the actions of the other agents, so an agent's optimal strategy depends on the strategies of the other agents
- Game theory defines how agents should play
- Definition: a Nash equilibrium is a strategy for each agent such that no agent benefits by deviating to another strategy

Computing equilibrium

- In two-person zero-sum games, Nash equilibria are minimax equilibria, so there is no equilibrium selection problem
- Equilibrium can be found using linear programming (LP)
- Any extensive-form game (satisfying perfect recall) can be converted into a matrix game: create one pure strategy in the matrix game for every possible pure contingency plan in the sequential game (the set product of actions at information sets). This leads to an exponential blowup in the number of strategies, even in the reduced normal form
- Sequence form: a more compact representation based on sequences of moves rather than pure strategies [von Stengel 96; Koller & Megiddo 92; Romanovskii 62]. Two-person zero-sum games with perfect recall can be solved in time polynomial in the size of the game tree
- Still not enough to solve Rhode Island Hold'em (3.1 billion nodes) or Texas Hold'em (10^18 nodes)

Automated abstraction [EC-06]

- Automatic method for performing abstractions in a broad class of sequential games of imperfect information
- Equilibrium-preserving game transformation, where certain information sets are merged and certain nodes within an information set are collapsed
- GameShrink: an algorithm for identifying and applying all the game transformations in O(n^2) time, where n = #nodes in the signal tree (in poker, these are the possible card deals in the game)
- Run time tends to be highly sublinear in the size of the game tree
- Used these techniques to solve Rhode Island Hold'em, the largest poker game solved to date by over four orders of magnitude
- Also developed an approximate (lossy) version of GameShrink, which uses a similarity metric on nodes in the signal tree (e.g., |#wins_1 - #wins_2| + |#losses_1 - #losses_2|) and a similarity threshold

Illustration of our approach

- [Diagram: Original game -> (Abstraction) -> Abstracted game -> (Compute Nash) -> Nash equilibrium]

Example: Applying the ordered game isomorphic abstraction transformation

- [Worked example shown as a figure in the original slides]

Application to Texas Hold'em

- The two-person game tree has ~10^18 leaves: too large to run lossless GameShrink, and even after that the LP would be too large (it was already too large when we applied this to the first two rounds)
- We split the 4 betting rounds into two phases
- Phase I (first 3 rounds): solved offline using a new approximate version of GameShrink followed by LP
- Phase II (last 2 rounds): abstractions computed offline; real-time equilibrium computation using updated hand probabilities and an anytime LP

Optimized approximate abstractions

- The original version of GameShrink yielded lopsided abstractions when used as an approximation algorithm
- Now we instead find an abstraction via clustering. For each level of the tree (starting from the root), for each group of hands: use k-means clustering to split group i into k_i abstract states, with win probability as the similarity metric (ties count as half a win); for each value of k_i, compute the expected error (considering hand probabilities)
- We then find, using integer programming, an abstraction (a split of K into the k_i's) that minimizes this expected error, subject to a constraint on the total number of states, K, at that level (= the size of the resulting LP in the zero-sum case)
- Solving this class of integer programs is quite easy in practice

Phase I (first three rounds)

- Automated abstraction using the approximate version of GameShrink
- Round 1: there are 1,326 hands, of which 169 are strategically different; we consider 15 strategically different hands
- Round 2: there are 25,989,600 distinct possible hands; GameShrink (in lossless mode for Phase I) determines that there are about a million strategically different hands; this is still too large to solve, so we used GameShrink to compute an abstraction that considers 225 strategically different hands
- Round 3: there are 1,221,511,200 distinct possible hands; we consider 900 strategically different hands
- This process took about 3 days running on 4 CPUs
- The LP solve took 7 days and 80 gigabytes using CPLEX's barrier method (an interior-point method for linear programming)

Mitigating the effect of round-based abstraction (i.e., having 2 phases)

- For leaves in the first phase, we could assume no betting in the later rounds, but that ignores implied odds
- We can do better by estimating the amount of betting that occurs in later rounds and incorporating this information into the LP for the first phase
- For each possible hand strength and in each possible betting situation, we store the probability of each possible action
- Mine the betting history in the later rounds from hundreds of thousands of played hands

Example of betting in fourth round

- Player 1 has bet; Player 2 is to fold, call, or raise

Phase II (last two rounds)

- Abstractions computed offline
- Betting history doesn't matter => (52 choose 4) situations; simple suit isomorphisms at the root of Phase II halve this
- For each such setting, we use GameShrink to generate an abstraction with 10 and 100 strategically different hands in the last two rounds, respectively
- Real-time equilibrium computation (using LP), so that our strategies are specific to the particular hand (too many to precompute)
- Hand probabilities updated from the Phase I equilibrium using betting histories and community card history (s_i is player i's strategy, h is an information set)
- Conditional choice of primal vs. dual simplex, to achieve an anytime capability for our player
- Dealing with running off the equilibrium path

Precomputed databases

- db5: possible wins and losses (for a single player) for every combination of two hole cards and three community cards (25,989,600 entries); used by GameShrink for quickly comparing the similarity of two hands
- db223: possible wins and losses (for both players) for every combination of pairs of two hole cards and three community cards, based on a roll-out of the remaining cards (14,047,378,800 entries); used for computing payoffs of the Phase I game to speed up the LP creation
- handval: a concise encoding of a 7-card hand rank used for fast comparisons of hands (133,784,560 entries); used in several places, including in the construction of db5 and db223
- A colexicographical ordering is used to compute indices into the databases, allowing for very fast lookups

Experimental results

- GS1: game theory-based player, old version of manual abstraction, no strategy simulation in later rounds [GS 2006]
- Sparbot: game theory-based player, manual abstraction [Billings et al. 2003]
- Vexbot: opponent modeling, miximax search with statistical sampling [Billings et al. 2004]

  Opponent | Series won | Win rate (small bets per 100)
  GS1      | 38 of ?    | ?
  Sparbot  | 28 of ?    | ?
  Vexbot   | 32 of ?    | ?

Summary

- Competitive Texas Hold'em player, automatically generated
- First phase (rounds 1, 2 & 3): automated abstraction & LP solved offline, using statistical data to compute payoffs at the end of round 3
- Second phase (rounds 3 & 4): abstraction precomputed automatically; LP solved in real time using updated hand probabilities and an anytime LP
- Techniques are applicable to many sequential games of imperfect information, not just poker!

Where to from here?

- The top poker-playing programs are fairly equal; recent experimental results show our player is competitive with (but not better than) expert human players
- Provable approximation, e.g., ex post
- Other types of abstraction
- More scalable equilibrium-finding algorithms
- Tournament poker [e.g. Miltersen & Sørensen 06]
- More than two players [e.g. Nash & Shapley 50]

Thank you
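The claim on the "Computing equilibrium" slide that a two-person zero-sum equilibrium can be found by LP can be sketched in a few lines. This is the basic matrix-game formulation, not the sequence-form LP the talk actually scales with; the function name is made up for illustration, and scipy is used as an off-the-shelf LP solver.

```python
# Minimal sketch: a two-person zero-sum matrix game solved as an LP.
# Maximize the game value v subject to x^T A >= v for every opponent
# column, sum(x) = 1, x >= 0. Illustrative only (not sequence form).
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Return (value, row player's mixed strategy) for payoff matrix A
    whose entries are payoffs to the row player."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    # Variables: x_1..x_m (mixed strategy) and v (game value).
    c = np.zeros(m + 1)
    c[-1] = -1.0                                # minimize -v == maximize v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])   # v - (x^T A)_j <= 0 for each column j
    b_ub = np.zeros(n)
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0                           # sum of x_i = 1 (v excluded)
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * m + [(None, None)]      # probabilities; v is free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[-1], res.x[:m]

# Matching pennies: unique equilibrium is uniform, value 0.
value, strategy = solve_zero_sum([[1, -1], [-1, 1]])
```

The same minimax LP underlies the sequence-form approach; the sequence form just replaces pure-strategy columns with realization weights over move sequences so the LP stays polynomial in the game tree.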
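The abstraction step on the "Optimized approximate abstractions" slide, k-means on win probability within each group of hands, then splitting the state budget K across groups, can be sketched as below. A small exhaustive search stands in for the talk's integer program, hand probabilities are assumed uniform, and all names are illustrative.

```python
# Sketch of budgeted abstraction: cluster each group of hands by win
# probability (1-D k-means), then choose k_i per group to minimize total
# clustering error subject to a budget K of abstract states. Brute force
# stands in for the integer program; uniform hand probabilities assumed.
from itertools import product

def kmeans_error(win_probs, k, iters=25):
    """Squared error of 1-D k-means (Lloyd's algorithm, quantile init)."""
    vals = sorted(win_probs)
    centers = [vals[int(i * (len(vals) - 1) / max(k - 1, 1))] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vals:
            nearest = min(range(k), key=lambda j: abs(v - centers[j]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return sum(min((v - c) ** 2 for c in centers) for v in vals)

def allocate_states(groups, K):
    """Split the budget K across groups, minimizing total error."""
    best_err, best_ks = None, None
    for ks in product(range(1, K + 1), repeat=len(groups)):
        if sum(ks) > K:
            continue
        err = sum(kmeans_error(g, k) for g, k in zip(groups, ks))
        if best_err is None or err < best_err:
            best_err, best_ks = err, ks
    return best_err, best_ks

# A spread-out group earns more abstract states than a uniform one.
groups = [[0.1, 0.1, 0.9, 0.9], [0.5, 0.5, 0.5, 0.5]]
err, ks = allocate_states(groups, K=3)
```

The real allocation problem has one k_i per hand group per level, which is why an integer program (easy in practice, per the slide) replaces the brute-force loop.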
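The "mine the betting history" idea from the round-based-abstraction slide, storing the probability of each action for each hand strength and betting situation, amounts to a frequency table over played hands. A minimal sketch; the class, keys, and bucket values here are made up for illustration:

```python
# Sketch of the mined betting model: empirical action frequencies keyed
# by (hand-strength bucket, betting situation), estimated from hand
# histories. Names and bucket granularity are illustrative.
from collections import Counter, defaultdict

class BettingModel:
    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, strength_bucket, situation, action):
        """Record one observed action from the played-hand histories."""
        self.counts[(strength_bucket, situation)][action] += 1

    def action_probs(self, strength_bucket, situation):
        """Empirical probability of each action in this setting."""
        c = self.counts[(strength_bucket, situation)]
        total = sum(c.values())
        return {a: n / total for a, n in c.items()} if total else {}

# Fourth-round example from the slides: Player 1 has bet, Player 2 acts.
model = BettingModel()
for action in ["call", "call", "raise", "fold"]:
    model.observe(strength_bucket=7, situation="p1_bet", action=action)
probs = model.action_probs(7, "p1_bet")
```

These estimated continuation probabilities are what get folded into the Phase I LP payoffs in place of the "no betting in later rounds" assumption.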
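The colexicographical ordering mentioned for the database lookups has a simple closed form: the colex rank of a sorted k-subset {c_1 < ... < c_k} of card indices is the sum over i of C(c_i, i), which maps the subsets densely onto 0..C(52,k)-1. A sketch (the talk does not show its exact indexing code; this is the standard formula):

```python
# Colexicographic ranking of card subsets, the kind of indexing the talk
# describes for fast db5/db223/handval lookups. The rank of a sorted
# k-subset {c_1 < ... < c_k} of 0-based card indices is sum_i C(c_i, i),
# a dense 0..C(52,k)-1 index computed without any stored table.
from itertools import combinations
from math import comb

def colex_index(combo):
    """Rank of a sorted tuple of distinct card indices in colex order."""
    return sum(comb(c, i + 1) for i, c in enumerate(combo))

# Sanity check on a toy 6-card deck: the 20 three-card subsets map
# exactly onto the indices 0..19.
ranks = sorted(colex_index(c) for c in combinations(range(6), 3))
```

Because the rank is computed arithmetically from the cards themselves, a database row for any deal can be addressed directly, which is what makes the very fast lookups possible.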