RADHA-KRISHNA BALLA 19 FEBRUARY, 2009 UCT for Tactical Assault Battles in Real-Time Strategy Games.
Overview
I. Introduction
II. Related Work
III. Method
IV. Experiments & Results
V. Conclusion
Domain
RTS games: Resource Production, Tactical Planning
Focus: Tactical Assault battles
RTS game - Wargus
Screenshot of a typical battle scenario in Wargus
Planning problem
Large state space
Temporal actions
Spatial reasoning
Concurrency
Stochastic actions
Changing goals
Related Work
Board games (bridge, poker, Go, etc.): Monte Carlo simulations
RTS games – Resource Production: Means-ends analysis – Chan et al.
RTS games – Tactical Planning:
  Monte Carlo simulations – Chung et al.
  Nash strategies – Sailer et al.
  Reinforcement learning – Wilson et al.
Bandit-based problems, Go: UCT – Kocsis et al., Gelly et al.
Our Approach
Monte Carlo simulations
UCT algorithm

Advantages:
  Complex plans from simple abstract actions
  Exploration/Exploitation tradeoff
  Handles changing goals
Method
Planning architecture
UCT Algorithm
Search space formulation
Monte Carlo simulations
Challenges
Planning Architecture
Online Planner
State space abstraction: grouping of units
Abstract actions: Join(G), Attack(f,e)
UCT Algorithm
Exploration/Exploitation tradeoff
Monte Carlo simulation – get subsequent states
Search tree:
  Root node – current state
  Edges – available actions
  Intermediate nodes – subsequent states
  Leaf nodes – terminal states
Rollout-based construction
Value estimates
UCT Algorithm – Pseudo Code 1
At each interesting time point in the game:
    build_UCT_tree(current state);
    choose argmax action(s) based on the UCT policy;
    execute the aggregated actions in the actual game;
    wait until one of the actions gets executed;

build_UCT_tree(state):
    for each UCT pass do
        run UCT_rollout(state);
(.. continued)
UCT Algorithm – Pseudo Code 2
UCT_rollout(state): recursive algorithm

if leaf node reached then
    estimate final reward;
    propagate reward up the tree and update value functions;
    return;
populate possible actions;
if all actions explored at least once then
    choose the action with the best value function;
else if there exists an unexplored action then
    choose an action based on random sampling;
run Monte Carlo simulation to get next state based on current state and action;
call UCT_rollout(next state);
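The rollout above can be sketched as runnable code, assuming a generic game-model object exposing `actions`, `simulate`, `is_terminal`, `reward`, and a hashable state `key` (all illustrative names, not from the thesis); the toy model stands in for the Wargus simulator:

```python
import math
import random

def uct_rollout(state, model, stats, c=1.0, depth=0, max_depth=50):
    """One UCT rollout: descend the tree, expand, propagate the reward up."""
    if model.is_terminal(state) or depth >= max_depth:
        return model.reward(state)              # estimate final reward at a leaf

    acts = model.actions(state)
    node = stats.setdefault(model.key(state),
                            {a: {"n": 0, "q": 0.0} for a in acts})

    unexplored = [a for a in acts if node[a]["n"] == 0]
    if unexplored:                              # random sampling until all tried
        action = random.choice(unexplored)
    else:                                       # otherwise follow the UCT policy
        n_s = sum(st["n"] for st in node.values())
        action = max(acts, key=lambda a: node[a]["q"]
                     + c * math.sqrt(math.log(n_s) / node[a]["n"]))

    next_state = model.simulate(state, action)  # Monte Carlo step
    r = uct_rollout(next_state, model, stats, c, depth + 1, max_depth)

    node[action]["n"] += 1                      # update value functions on the
    node[action]["q"] += (r - node[action]["q"]) / node[action]["n"]  # way up
    return r

class ToyChain:
    """Toy stand-in for the game simulator: advance 1 or 2 steps,
    terminal once state >= 3, reward = final state value."""
    def is_terminal(self, s): return s >= 3
    def reward(self, s): return float(s)
    def actions(self, s): return [1, 2]
    def key(self, s): return s
    def simulate(self, s, a): return s + a
```

Repeated calls from the current state build the tree; the root action with the highest value is then executed in the game.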
UCT Algorithm – Formulae

Action Selection:

$Q^{+}(s,a) = Q(s,a) + c \sqrt{\frac{\log n(s)}{n(s,a)}}$

$\pi(s) = \arg\max_a Q^{+}(s,a)$

Value Update:

$n(s,a) \leftarrow n(s,a) + 1$
$n(s) \leftarrow n(s) + 1$
$Q(s,a) \leftarrow Q(s,a) + \frac{1}{n(s,a)} \left[ R - Q(s,a) \right]$
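A small numeric illustration of the action-selection formula: the exploration bonus can override a higher mean value when an action has been tried only rarely (the numbers here are made up for illustration):

```python
import math

def q_plus(q, n_s, n_sa, c=1.0):
    """Q+(s,a) = Q(s,a) + c * sqrt(log n(s) / n(s,a))."""
    return q + c * math.sqrt(math.log(n_s) / n_sa)

# With n(s) = 10 visits at the node: an action tried 9 times with mean
# reward 0.6 scores below one tried only once with mean reward 0.3.
well_tried = q_plus(0.6, 10, 9)     # ≈ 0.6 + 0.506 = 1.106
rarely_tried = q_plus(0.3, 10, 1)   # ≈ 0.3 + 1.517 = 1.817
```

This is the exploration/exploitation tradeoff: as $n(s,a)$ grows, the bonus shrinks and the estimated value dominates.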
Search Space Formulation
Abstract State:
  Friendly and enemy groups
  Hit points
  Location
  Current actions
  Current time

Calculation of group hit points:
$HP(G) = \left( \sum_i \sqrt{HP_i} \right)^2$

Calculation of mean location: centroid of the group's unit locations

Number of action choices:
$\binom{n_{friendly}}{2} + n_{friendly} \times n_{enemy}$
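Both quantities on this slide are easy to compute directly. A minimal sketch, assuming a group is represented as a list of per-unit hit points (an illustrative representation, not the thesis's data structure):

```python
import math

def group_hp(unit_hps):
    """HP(G) = (sum_i sqrt(HP_i))^2 -- several healthy units count for
    more than a single unit holding the same total hit points."""
    return sum(math.sqrt(hp) for hp in unit_hps) ** 2

def num_action_choices(n_friendly, n_enemy):
    """C(n_friendly, 2) possible Join actions plus
    n_friendly * n_enemy possible Attack actions."""
    return math.comb(n_friendly, 2) + n_friendly * n_enemy
```

For example, the 4vs2 scenarios give C(4,2) + 4·2 = 6 + 8 = 14 action choices, matching the Total column of Table 1 below.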
Monte Carlo Simulations
Domain-specific
Actual game play – Wargus
  Join actions
  Attack actions
Reward calculation – objective function (Time, Hit points)
Note: partial simulations (time cutoff)
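A hedged sketch of the two objective functions behind the UCT(t) and UCT(hp) planners evaluated later: one rewards finishing a winning battle quickly, the other rewards surviving friendly hit points. The normalization constants are illustrative assumptions, not values from the thesis:

```python
def reward_time(elapsed, won, max_time=10000.0):
    """UCT(t): higher reward for shorter winning battles; losses score 0.
    max_time is an assumed normalization constant."""
    return max(0.0, (max_time - elapsed) / max_time) if won else 0.0

def reward_hitpoints(remaining_hp, total_hp, won):
    """UCT(hp): higher reward for more surviving friendly hit points
    after a win; losses score 0."""
    return remaining_hp / total_hp if won else 0.0
```

Swapping one reward function for the other changes the planner's behavior without touching the search itself, which is how the same UCT machinery handles changing goals.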
Domain-specific Challenges
State space abstraction – grouping of units (proximity-based)
Concurrency – aggregation of actions
Join actions – simple
Attack actions – complex (partial simulations)
Planning problem - revisited
Large state space – abstraction
Temporal actions – Monte Carlo simulations
Spatial reasoning – Monte Carlo simulations
Concurrency – aggregation of actions
Stochastic actions – UCT (online planning)
Changing goals – UCT (different objective functions)
Experiments
#  | Scenario | # friendly groups | Friendly groups composition | # enemy groups | Enemy groups composition | # 'Join' actions | # 'Attack' actions | Total # actions
1  | 2vs2   | 2 | {6,6}     | 2 | {5,5}       | 1 | 4  | 5
2  | 3vs2   | 3 | {6,2,4}   | 2 | {5,5}       | 3 | 6  | 9
3  | 4vs2_1 | 4 | {2,4,2,4} | 2 | {5,5}       | 6 | 8  | 14
4  | 4vs2_2 | 4 | {2,4,2,4} | 2 | {5,5}       | 6 | 8  | 14
5  | 4vs2_3 | 4 | {2,4,2,4} | 2 | {5,5}       | 6 | 8  | 14
6  | 4vs2_4 | 4 | {2,4,2,4} | 2 | {5,5}       | 6 | 8  | 14
7  | 4vs2_5 | 4 | {2,4,2,4} | 2 | {5,5}       | 6 | 8  | 14
8  | 4vs2_6 | 4 | {2,4,2,4} | 2 | {5,5}       | 6 | 8  | 14
9  | 4vs2_7 | 4 | {3,3,6,4} | 2 | {5,9}       | 6 | 8  | 14
10 | 4vs2_8 | 4 | {3,3,3,6} | 2 | {5,8}       | 6 | 8  | 14
11 | 2vs4_1 | 2 | {9,9}     | 4 | {4,5,5,4}   | 1 | 8  | 9
12 | 2vs4_2 | 2 | {9,9}     | 4 | {5,5,5,5}   | 1 | 8  | 9
13 | 2vs4_3 | 2 | {9,9}     | 4 | {5,5,5,5}   | 1 | 8  | 9
14 | 2vs5_1 | 2 | {9,9}     | 5 | {5,5,5,5,5} | 1 | 10 | 11
15 | 2vs5_2 | 2 | {10,10}   | 5 | {5,5,5,5,5} | 1 | 10 | 11
16 | 3vs4   | 3 | {12,4,4}  | 4 | {5,5,5,5}   | 3 | 12 | 15
Table 1: Details of the different game scenarios
Planners
UCT Planners: UCT(t), UCT(hp)
Number of rollouts: 5000
Averaged over: 5 runs
Planners
Baseline Planners: Random, Attack-Closest, Attack-Weakest, Stratagus-AI, Human
Video – Planning in action
Simple scenario: <add video>
Complex scenario: <add video>
Results
Figure 1: Time results for UCT(t) and baselines.
Results
Figure 2: Hit point results for UCT(t) and baselines.
Results
Figure 3: Time results for UCT(hp) and baselines.
Results
Figure 4: Hit point results for UCT(hp) and baselines.
Results - Comparison
Figures 1, 2, 3 & 4: Comparison between UCT(t) and UCT(hp) on the time and hit point metrics
Results
Figure 5: Time results for UCT(t) with varying rollouts.
Conclusion
Conclusion:
  Hard planning problem
  Less expert knowledge
  Different objective functions

Future Work:
  Computational time – engineering aspects
  Machine Learning techniques
  Beyond Tactical Assault
Thank you