Python and Pygame Simulating
Ambulance Response
By
Corey Bennett
Submitted in Partial Fulfillment of the Requirements for the Degree of Bachelor of
Computer Science
Department of Computer Science and Mathematics
Undergraduate Studies
Algoma University
Sault Ste. Marie, Ontario
2013-2014
Abstract
The creation of a simulated testing environment is very similar in structure to the creation of a modern video game: building a fully autonomous virtual urban environment effectively means building a game that plays itself without any user input. Using the Python language and the Pygame library, I illustrate what is required to create a fully autonomous system that renders a graphical view of the state of the system and outputs a log of performance statistics for a unique ambulance agent. An autonomous agent faces two problems: finding a path towards its goal, and moving while obeying the rules of the simulation environment. To solve these problems, each agent must navigate from its current location to its goal while following the rules of the environment and avoiding collisions with the other agents in it. The Breadth First Search algorithm and AI steering techniques are used to create truly independent agents that think and move much like a real human driver. The results of this project are the graphical output of the application, showing the movement of the agents within the simulation in real time, and the average statistics for each of the possible decision-making configurations of the ambulance agent. The results show that the configurations giving the major goals the highest priority grant the ambulance agent the highest average score, while an even configuration of goal preferences results in the highest number of completed goals. The simplicity of the application demonstrates the ease of implementing games in Python using Pygame, and the visual output demonstrates the ease of generating graphical information from the state of a simulated system using the Pygame library.
Keywords: Artificial Intelligence, AI Steering, Python, Pygame, Graph Search
Acknowledgements
I would like to express my gratitude to my advisor, Dr. Yi Feng, for her help with my research, her guidance on my writing, and the lessons she has taught me over the past few years.
I would also like to thank Dr. Miguel Garcia-Ruiz who introduced me to the topic of my research,
instructed me with the content of my report, and helped me to understand how to make the games I
enjoy.
Lastly, I would like to credit my former teacher Gus Sgouraditis, who introduced me to the tools and techniques of computer programming and helped me achieve success in my early programming endeavours.
Table of Contents
Abstract ......................................................................... i
Keywords ......................................................................... i
Acknowledgements ................................................................ ii
Table of Contents .............................................................. iii
List of Tables .................................................................. vi
List of Figures ................................................................ vii
Chapter 1. Introduction .......................................................... 1
    1.1 Thesis Objective ......................................................... 1
    1.2 Relevant Research ........................................................ 2
    1.3 Topic Choice ............................................................. 2
    1.4 Summary .................................................................. 3
Chapter 2. Basis of Simulating using AI .......................................... 4
    2.1 Environment and Agents ................................................... 4
    2.2 Multi-agent System ....................................................... 5
    2.3 Path-finding Within a Closed System ...................................... 6
    2.4 AI Thinking and Steering Behaviours ...................................... 7
    2.5 Python and Pygame ........................................................ 8
    2.6 Summary ................................................................. 10
Chapter 3. Creating a Simulation ................................................ 11
    3.1 Simulation Design ....................................................... 11
    3.2 Map Module .............................................................. 16
    3.3 Vehicle Controller Module ............................................... 18
    3.4 Summary ................................................................. 22
Chapter 4. Implementation of the Key Components ................................. 24
    4.1 Pygame Methodology ...................................................... 24
    4.2 AI Steering ............................................................. 27
    4.3 Graph Searching ......................................................... 31
    4.4 Summary ................................................................. 33
Chapter 5. Results .............................................................. 35
    5.1 Visual Output ........................................................... 35
    5.2 Result Tables ........................................................... 37
    5.3 Summary ................................................................. 40
Chapter 6. Discussion ........................................................... 41
Chapter 7. Conclusions .......................................................... 43
    7.1 Future Work ............................................................. 45
    7.2 Summary ................................................................. 46
8. Bibliography ................................................................. 47
Appendix A: Input File .......................................................... 49
    Input.xml ................................................................... 49
Appendix B: Raw Output .......................................................... 51
Appendix C: Source Code ......................................................... 59
    Project.py .................................................................. 59
    CityMap.py .................................................................. 62
    GraphNode.py ................................................................ 66
    AddressNode.py .............................................................. 66
    LabeledNode.py .............................................................. 67
    PositionNode.py ............................................................. 67
    IntersectionNode.py ......................................................... 68
    GraphEdge.py ................................................................ 69
    NavigationEdge.py ........................................................... 70
    TemporaryEdge.py ............................................................ 72
    Search.py ................................................................... 73
    VehicleController.py ........................................................ 74
    Vehicle.py .................................................................. 78
    Ambulance.py ................................................................ 80
    Tools.py .................................................................... 80
    Steering.py ................................................................. 81
List of Tables
Table 5.2.1: Results for Configuration Major:0.8 Minor:0.2 ..................... 38
Table 5.2.2: Results for Configuration Major:0.5 Minor:0.5 ..................... 38
Table 5.2.3: Results for Configuration Major:0.2 Minor:0.8 ..................... 39
Table 5.2.4: Results for Configuration Major:1.0 Minor:0.1 ..................... 39
Table 5.2.5: Results for Configuration Major:0.1 Minor:1.0 ..................... 40
List of Figures
Figure 2.2.1: Multi-agent System Diagram ........................................ 5
Figure 3.1.1: Evolutionary Prototyping Diagram ................................. 13
Figure 4.2.1: The Seek algorithm ............................................... 27
Figure 4.2.2: The Arrive algorithm ............................................. 28
Figure 4.2.3: Flee Diagram ..................................................... 30
Figure 4.2.4: The Flee algorithm ............................................... 30
Figure 4.3.1: The Breadth First Search Algorithm ............................... 33
Figure 5.1.1: The graphical output of the simulation ........................... 36
Chapter 1. Introduction
The Python language has been shown to be easy to learn and to assist students with the creation of video games. Using the Pygame library, the goal of this project is to create a realistic simulation of a chaotic urban environment populated by independent agents that draw on research into path finding, AI steering, and decision making to achieve the best possible score. I approached this project out of my own interest in the Python language and the conventions of game development.
1.1 Thesis Objective
The objective of this thesis is to research, become familiar with, and implement modern approaches to creating a functional simulated environment using the Python language. Additionally, the system will use the Pygame library's graphical output functions to illustrate and validate the results of the simulation. To accomplish this objective, several key topics for creating a simulation must be researched: the Python language itself, Pygame's graphical functionality, multi-agent implementation, AI steering techniques, AI decision making, and graph searching methodologies. The simulation is to be populated with autonomous agents that navigate the environment independently to accomplish their goals within a fixed time frame. The statistics of each test will be recorded and output for analysis. Several configurations will be tested, each with different decision-making priorities that the agents use when choosing which goals to accomplish and how to accomplish them as quickly as possible. Defining the environment through an input file illustrates that the agents are autonomous, with no scripting present.
1.2 Relevant Research
The creation of simulations is important for testing our understanding of how a real-life system functions. Creating a realistic simulation depends on populating that closed system with virtual entities, referred to as agents, that replicate real-life entities such as humans [18]. The goal of these agents is to mimic real life as closely as possible, using a variety of methodologies to make them appear more intelligent [12, 11]. These agents must be independent but can display interaction behaviours that benefit the individuals involved [1, 8]. Creating a simulation is similar in method to creating an interactive game, except that there is no player agent within the simulated environment [1, 10]. To accomplish this goal, Python will be used along with the Pygame library, as they have been shown to provide a good introduction to game programming while being easy to learn and use [17, 20, 9, 2]. The autonomous agents require the ability to navigate to their goals through navigation graph search algorithms [4], extended by examining the environment to find the best possible route given current conditions [5, 7]. Other work on this topic includes [3], which creates traffic simulations in a realistic chaotic environment, while [18] discusses creating realistic autonomous simulations for the prediction and testing of real-world systems.
1.3 Topic Choice
I chose this topic in order to learn the Python language and familiarize myself with using the Pygame library to create simulations and games. Creating autonomous agents within a simulation or game environment is important when designing video games and relevant to creating more intelligent virtual entities within a closed system. The evaluation of the results demonstrated the AI agents' autonomous ability to decide how best to accomplish their own randomly generated goals.
1.4 Summary
During the development period of this project, the Python language and Pygame library will be used to create a simulation of an urban environment populated by autonomous vehicle agents, whose performance statistics will be tracked. Research will be done on the creation of autonomous agents and on the AI steering and graph search techniques used for navigation. This topic was chosen out of personal interest in gaining experience creating simulations and games using Python and Pygame and in implementing the common path-finding and steering algorithms that guide agents toward a goal.
Chapter 2. Basis of Simulating using AI
To create an autonomous simulation, two key things are required: a closed system to function as the environment, and AI entities, referred to as agents, to exist within that environment. These agents must be capable of thinking independently, making choices, and planning how to reach their individual goals. A concept known as AI steering techniques can also be used to make the simulation appear more realistic by mimicking the motions and thinking of a living, conscious creature. For this research project, the Python language will be used to implement the simulation, and a library known as Pygame will be used to create a graphical representation of the simulation as it runs.
2.1 Environment and Agents
A simulation rests on two requirements: a set of rules for the environment to follow, and intelligent entities within that environment that provide the feedback of the simulation. With these two points we have the core of a simulated environment. The goal is to create situations that mirror the real world for entities to react to, and to add virtual entities, either human or AI controlled, that exist in this environment [18].

For the purposes of this experiment, two items must be created in order to run the simulation: the environment and the AI-controlled entities that will exist within it. From this point forward the AI-controlled entities will be referred to as "agents". The goal of an agent is to mimic the thought patterns of a real human and act similarly to a real-life agent [12]. The goal of the environment is to give the agents a system to exist within and navigate, one that has its own native rules and limitations but gives the agents the freedom to go where they wish and do what they wish. Every agent is autonomous and has its own goals and logic; for this reason, this type of system is referred to as a multi-agent system.
2.2 Multi-agent System
A multi-agent system is largely self-descriptive: it is a closed system in which multiple independent agents exist and must interact, whether cooperatively, neutrally, or competitively, while working to accomplish their own goals. Agents can be attracted to one another and work together; one example is flocking behaviour, where agents group together for protection or to accomplish a task [1]. Alternatively, agents can be competitive towards one another when both seek to accomplish the same goal or when both feel threatened by the other; examples include racing behaviour and chase-and-flee behaviour. Finally, agents can simply be neutral towards one another and interact only by avoiding collisions while pursuing their goals; an example is a simple vehicle simulation where vehicles only try to avoid colliding with each other.
Figure 2.2.1: Multi-agent System Diagram
The other main concern of a multi-agent system is distributing computing power to calculate for each separate agent within the system. In a system with many agents, the work of calculating how each can best accomplish its goals can become very intensive and time consuming. The time spent on this work needs to be shared equally between all agents, since no one agent should normally get priority over the others; this helps the simulation appear more realistic. The time spent calculating for any one agent must therefore be limited, so that an agent with more work to do than usual does not increase the calculation time of the other agents in the system. All agents must be treated equally and given their turn fairly. This is only a concern when a simulation must run within a fixed time or with high responsiveness, such as when human players are entities within the system. It can be accomplished easily within a multi-threaded system in which each agent is given its own thread of execution and threads are automatically switched if they take too long [1].
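The threaded scheme described above is one option; a simpler cooperative alternative is to slice each frame's thinking time evenly across agents in a round-robin loop. The sketch below uses hypothetical `Agent` and `update_all` names (not classes from this project), with a busy-wait standing in for real planning work:

```python
import time

class Agent:
    """Toy agent whose think() step consumes a fixed time budget."""
    def __init__(self, name):
        self.name = name
        self.steps = 0

    def think(self, budget):
        # A real agent would plan only until its budget runs out and
        # resume any unfinished work on its next turn.
        deadline = time.perf_counter() + budget
        while time.perf_counter() < deadline:
            pass                              # stand-in for planning work
        self.steps += 1

def update_all(agents, frame_budget):
    """Round-robin scheduler: each agent gets an equal slice of the
    frame's thinking time, so no single agent can starve the rest."""
    time_slice = frame_budget / len(agents)
    for agent in agents:
        agent.think(time_slice)

agents = [Agent("car-%d" % i) for i in range(4)]
update_all(agents, frame_budget=0.02)         # 20 ms shared four ways
```

Calling `update_all` once per simulation tick guarantees every agent advances exactly one thinking step per frame, regardless of how much work any individual agent would like to do.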
2.3 Path-finding Within a Closed System
For any autonomous agent to navigate itself within a closed system it must have a method of navigation
in order to find a direction towards its goal. The environment must provide a way for any agent to find
its current location and plan a route towards its goal while staying within the constraints of the closed
environment [1]. This method must also be accessible to all agents within the system and be uniform in
nature such that every agent can find its own path simultaneously and quickly from its current location
to its goal independently. This is an important part of an agent's autonomy and every agent must be
capable of finding a path to its goal using a path-finding algorithm [18].
A large variety of path-finding algorithms exist. Depth First Search and Breadth First Search are the two easiest to implement, followed by Dijkstra's algorithm and the A* algorithm [5]. The latter two are considered the most effective and are best suited to lowest-cost searches, that is, finding the shortest path through an environment with varying movement costs [5]. The Breadth First Search algorithm is guaranteed to find the shortest path to any reachable goal if given enough time, and the Depth First Search algorithm is guaranteed to find a path to a goal if one exists, though it may take a long time to do so because it uses no heuristics. Any of these algorithms can find a path for an agent on its own and return a list of way-points for the agent to use for navigation [5].
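As an illustration, a Breadth First Search over a navigation graph that returns a list of way-points can be sketched as follows. The adjacency-dictionary representation and the names used here are assumptions for the sketch, not the project's actual GraphNode/Search classes:

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Return a list of way-points from start to goal, or None.

    `graph` is assumed to be a dict mapping each node to a list of
    neighbouring nodes (an illustrative representation only).
    """
    frontier = deque([start])
    came_from = {start: None}          # visited set doubling as a parent map
    while frontier:
        node = frontier.popleft()
        if node == goal:
            # Walk the parent links backwards to rebuild the path.
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return list(reversed(path))
        for neighbour in graph[node]:
            if neighbour not in came_from:
                came_from[neighbour] = node
                frontier.append(neighbour)
    return None                        # goal unreachable

# Example: a small grid of intersections.
roads = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(breadth_first_search(roads, "A", "D"))  # → ['A', 'B', 'D']
```

Because the frontier expands one edge-depth at a time, the first time the goal is dequeued the reconstructed path is guaranteed to use the fewest edges, matching the shortest-path property noted above.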
2.4 AI Thinking and Steering Behaviours
As the basis of creating a simulation is to create a system that appears real and mimics reality, it is important that entities within the system mimic entities in the real world [2]. This includes having not only their own mental state, but also their own limitations of movement imposed by the rules of physics [1, 2].

The basis of intelligence, both natural and artificial, is a feedback loop: observe the environment, think and decide what action to take, then act on the decision made [10]. This cycle repeats for every processing cycle of an agent and must take into account all aspects of the system and of the agent itself. It is the most basic example of intelligence and consciousness, giving any entity the ability to react to the system it is in [11].
From this comes the concept of AI steering behaviours, which concern the movement of an AI within a system. The concept derives from the idea that an agent has a multitude of forces acting upon it at once, from repulsion by competing agents to attraction towards friendly agents and goals [2]. It was originally created to demonstrate the flocking behaviours of independent agents and the interactions between them, much like those of birds or schools of fish [1], but it extends to general AI movement within a system by mimicking the motivations of a real intelligence: avoiding walls and threats while moving towards a goal. This approach to agent movement relies on forces acting upon the agent rather than pre-programmed avoidance of specific objects. These forces push the agent in a direction and guide it around obstacles, pushing it away from and around an object blocking its way, and can push agents apart to prevent collisions. AI steering forces also attract agents towards their goal or current way-point, but are subject to physical limitations such as mass, acceleration, and velocity. It is these forces, combined with the limitations of physics, that create a more realistic simulation, as an agent takes time to accelerate to its maximum speed and cannot stop instantly [12]. These factors can also be used for balance in simulations in which the entities are real players who have to navigate the environment. The standard forces include Seek, which accelerates an agent towards a point; Flee, which accelerates an agent away from a point; and Arrival, which accelerates an agent towards a point but slows it down beforehand so that it stops on the point; many other forces exist [1].
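A minimal sketch of these three forces follows, assuming a simple 2-D tuple representation for positions and velocities; the names and signatures are illustrative, not taken from the project's Steering.py:

```python
import math

def seek(pos, vel, target, max_speed, max_force):
    """Steering force that accelerates an agent towards a point."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)
    # Desired velocity: straight at the target, at full speed.
    desired = (dx / dist * max_speed, dy / dist * max_speed)
    # The steering force is the correction from current to desired velocity.
    sx, sy = desired[0] - vel[0], desired[1] - vel[1]
    mag = math.hypot(sx, sy)
    if mag > max_force:                # respect the physics limit
        sx, sy = sx / mag * max_force, sy / mag * max_force
    return (sx, sy)

def flee(pos, vel, threat, max_speed, max_force):
    """Flee seeks a mirror point on the far side of the agent, so the
    desired velocity points directly away from the threat."""
    mirror = (2 * pos[0] - threat[0], 2 * pos[1] - threat[1])
    return seek(pos, vel, mirror, max_speed, max_force)

def arrive(pos, vel, target, max_speed, max_force, slow_radius):
    """Like Seek, but the desired speed ramps down inside slow_radius
    so the agent settles onto the target instead of overshooting it."""
    dist = math.hypot(target[0] - pos[0], target[1] - pos[1])
    speed = max_speed * min(1.0, dist / slow_radius)
    return seek(pos, vel, target, speed, max_force)
```

Each tick, the returned force would be scaled by the agent's mass and time step and added to its velocity, which is how the physical limits on acceleration described above come about.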
2.5 Python and Pygame
Python is a high-level programming language that will be used to create the simulation. It is an object-oriented programming language with an impressive number of supported libraries for extension, including Pygame. It supports a combined object-oriented and procedural style that allows for greater flexibility, and its automatic memory management allows for simpler code [2]. The language prioritizes code readability by making proper indentation part of the language's logic structure. It is a platform-independent language because it runs on Python interpreters rather than as compiled code, but several tools exist, such as PyPy, a Just-In-Time compiler for running Python code, and Py2exe, which packages the code into an executable format so that the host machine does not require a Python installation to run the software. Python is maintained by the non-profit Python Software Foundation and is currently used as both a scripting and a programming language for several major video game titles and open-source tools available online. For this project I have chosen Python version 2.7 because, while it lacks some features of Python 3.x, it is more widely known [2, 9, 20], the small details of its behaviour and optimization are better researched, and it is what the Pygame library was optimized to work with [17, 21].
Pygame is a library created for the Python programming language that extends the base functionality of the language, making it much easier to create games [21] and simulations by adding graphical and audio functions to aid in the creation of video games [17]. Pygame is free to use and open source, and can be found on its website, which is listed in the references section. Pygame aids in the generation of graphical user interfaces through visual and audio output and is primarily intended for making games [9]. Additionally, Pygame has functions that help control the speed of a simulation by limiting the number of iterations, referred to as "ticks", that the processor may perform per second. Pygame is widely used to create prototype games and simple simulations for demonstration purposes thanks to its portability, matching that of the Python language itself.
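A minimal Pygame main loop using `Clock.tick` to cap the tick rate might look like the following sketch. The window size and tick rate are arbitrary, and the dummy video driver line is only there so the sketch can also run headless:

```python
import os
os.environ.setdefault("SDL_VIDEODRIVER", "dummy")   # lets the sketch run headless
import pygame

def run_simulation(total_ticks, ticks_per_second=30):
    """Minimal Pygame loop: process events, draw a frame, then use
    Clock.tick to cap the loop at ticks_per_second iterations."""
    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    clock = pygame.time.Clock()
    frames = 0
    for _ in range(total_ticks):
        if any(e.type == pygame.QUIT for e in pygame.event.get()):
            break                        # window closed: stop early
        screen.fill((0, 0, 0))           # clear the frame
        # ... update and draw agents here ...
        pygame.display.flip()            # push the finished frame to the screen
        clock.tick(ticks_per_second)     # sleep out the rest of this tick
        frames += 1
    pygame.quit()
    return frames

if __name__ == "__main__":
    print(run_simulation(30))            # about one second at 30 ticks/s
```

Because `clock.tick` waits out whatever time is left in the current tick, the simulation advances at the same rate on fast and slow machines alike.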
2.6 Summary
By combining this knowledge, it is possible to construct a simulation that appears realistic to the observer and gives more meaningful results. AI steering techniques help to constrain the agents as they move about the simulated environment, forcing them to appear to follow the laws of physics while being more aware of their surroundings. These techniques also give the agents more flexibility: they can adapt to different goals and need not be programmed for every specific situation that can arise between an agent and the environment or between agents themselves. Using the Python language and the Pygame library, the simulation can be easily constructed with object-oriented methods to be more flexible, and the graphical output supplied by Pygame helps make the results more meaningful, as agents can be visually observed accomplishing goals within the simulation.
Chapter 3. Creating a Simulation
The design of the simulation engine is split into three important parts: the Project.py file and main method, which hold the configuration and initialize the simulation when run; the map module, which handles the navigation of the agents and draws the visual representation of the map onto the display surface; and the vehicle controller module, which handles the movement and states of the vehicle agents themselves. These three modules work together, requesting data from one another to create the simulated environment and the agents that exist within it. They use many class files to define the data structures they control internally, each of which functions as its own small module. The map module controls edges and nodes and uses the Search utilities, while the vehicle controller holds the list of all vehicles and uses the Steering utilities. The main module contains both the map module and the vehicle controller module and runs the main simulation loop, requesting updates from all other modules.
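One hypothetical way the three modules could be wired together is sketched below; the stand-in classes only hint at the real, much richer interfaces in CityMap.py and VehicleController.py, and the names "depot" and "hospital" are invented for the example:

```python
class CityMap:
    """Stand-in for the map module: answers routing requests."""
    def route(self, start, goal):
        return [start, goal]              # stand-in for the graph search

class Vehicle:
    def __init__(self, path):
        self.path = list(path)
    def update(self):
        if len(self.path) > 1:
            self.path.pop(0)              # advance one way-point per tick

class VehicleController:
    """Stand-in for the vehicle controller: owns all vehicles and
    queries the map module for routes."""
    def __init__(self, city_map):
        self.vehicles = [Vehicle(city_map.route("depot", "hospital"))]
    def update(self):
        for v in self.vehicles:           # move every agent each tick
            v.update()

def main_loop(ticks):
    city_map = CityMap()
    controller = VehicleController(city_map)
    for _ in range(ticks):                # the main module drives the updates
        controller.update()
    return controller.vehicles[0].path

print(main_loop(3))                       # → ['hospital']
```

The point of the sketch is the direction of the dependencies: the main loop drives the controller, and the controller asks the map for routes, mirroring the module relationships described above.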
3.1 Simulation Design
The entire implementation of the system was done in Lubuntu, a Linux distribution, running within a VirtualBox virtual machine in order to isolate it from the specifics of the development system [19]. In this environment the development is independent of the processing and memory limits of the host machine; the virtual machine is instead allocated a fixed speed and memory and runs a generic installation of Lubuntu with no special features. In this way the performance of the development and test system is controlled when running performance tests on the finished code to ensure it runs efficiently. The virtual machine is also highly replicable and portable: it can be used on any VirtualBox installation or easily recreated in order to confirm the results in separate tests. The configuration of the virtual machine in which development and testing take place is as follows: 1024 MB of memory, 10.70 GB of hard drive space, and one 2.93 GHz CPU, with Virtualization Technology enabled. Virtualization Technology is a feature of the host system's CPU that a virtual machine can use to run more efficiently on the host system.
The simulation system was built entirely in Python, as specified earlier. The language's robust nature allows it to be used in a wide range of applications, from simple scripts to long-running programs capable of interacting with files, databases, and graphical output. Because Python is interpreted rather than compiled ahead of time, it can be written in a simple text editor, in either generic script files or class files made to create objects for instantiation. The use of the Python language for this program was key to proving the viability of the language for creating an informative and structured simulation. All source files were written in Python and run in an IDE known as Stani's Python Editor [16].
The development methodology used on this program was the evolutionary prototyping life cycle. This
method requires the repeated development of working prototypes that satisfy more requirements of the
finished system with each version [22]. It also allows for experimentation within the development
process and constant testing to ensure that the finished product not only fulfils the requirements but also
does so in the most efficient manner possible. Through iterative testing of all components added to each
prototype the finished product can be ensured to be almost entirely free of flaws and meet all of the
requirements set forth by the project [22]. The following diagram helps to visualize the process of the
rapid evolutionary prototyping methodology.
Figure 3.1.1: Evolutionary Prototyping Diagram
The simulation is programmed in an object-oriented and procedural manner as supported by the Python
language. All of the source files are written in plain text with the Python file extension .py. They are
all located together in one directory, as the source is not complex enough to require a package structure.
The simulation can be run from a Python installation by calling the project.py file, or by executing a
standalone build compiled from the source code. The Pygame package is not located in
the source directory and must be imported into the Python interpreter or included in the packages
directory in order to run or compile the source code.
The global variable configuration is declared first within the Project.py file before the main method is
run. Here the configuration for things such as colours, ticks per second, and total simulation time is set.
Simulation configuration is also set here, such as the number of vehicles, the average interval between
events being created for the ambulance to respond to, the window size of the simulation's graphical
output, and the input and output file names. All of these values are used later in the simulation and
are copied into the modules that use them, and therefore must be set before the simulation is run. These
values do not normally need to be changed and can be left alone so that each test is run under
the same conditions. Changes to the behaviour of the ambulance vehicle can be made within its class
file, Ambulance.py.
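The configuration block described above can be pictured as a set of module-level constants. The following is a hypothetical sketch; all of the names and values here are assumptions for illustration, not the actual contents of project.py.

```python
# Hypothetical sketch of the global configuration block described above.
# Every name and value is illustrative; the real project.py may differ.
TICKS_PER_SECOND = 30                # simulation speed limit
TOTAL_SIMULATION_TICKS = 30 * 60 * TICKS_PER_SECOND  # total run length
NUM_VEHICLES = 20                    # number of autonomous agents
AVERAGE_EVENT_INTERVAL = 300         # mean ticks between new ambulance goals
WINDOW_SIZE = (800, 600)             # width, height of the display surface
MAP_FILE = "map.xml"                 # input map description
OUTPUT_FILE = "results.txt"          # statistics log
ROAD_COLOUR = (90, 90, 90)           # RGB colours used when drawing
CAR_COLOUR = (0, 0, 255)
AMBULANCE_COLOUR = (255, 0, 0)
```

Because the modules copy these values at start-up, changing any of them mid-run would have no effect, which is why they must be set before the simulation begins.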
A “module” within Python is a self-contained object that performs functions on its contained data and
is made only to have its data read, or its functions called, by other members of the program. The
data within a module should not be modified by any outside source, and a stand-alone module is used
for one specific task. In this program two modules are used: one for the map information and
navigation, and another for containing the information about each of the agents and controlling them.
Neither module modifies the data held within the other but both call functions from within the other to
perform tasks such as request navigation paths.
After the global variables comes the main simulation loop, which runs until the simulation is terminated.
This is enclosed within the main method and is run once all of the configuration variables have been
loaded. Before the loop begins the main method initializes the Pygame engine and creates a Clock
object used for synchronizing the speed of the simulation to the maximum ticks per second. Finally the
start time of the simulation is recorded and the display window is created using the display size
specified in the configuration. This display object created will be used for all updates to the visual
output of the simulation and is used by the other modules to draw their data onto the display surface as
part of visual updates. The last steps of the setup are to: set the name of the Pygame window to the
name of the program, create the font object used for drawing text onto the display surface, and initialize
the map and vehicle controller objects that will handle the environment and the vehicles respectively.
The objects are initialized with the global configuration variables such as the name of the map file and
the number of vehicles to create. After the creation the main game loop begins and the simulation starts
running.
The game loop of the simulation is run once per tick; it exists as an infinite loop that is only stopped
once the simulation time ends. With each iteration of the loop the simulation first checks if it has
reached its time limit, and if so breaks the loop. Otherwise the loop then calculates whether the current
number of ticks minus the number of ticks at the last event is greater than or equal to a random
value between half the average rate and one and a half times the average rate. If so, a new event is
created in the vehicle controller to give the ambulance a new goal to respond to, along with a
randomly chosen priority associated with that goal.
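The event-timing test just described can be sketched as a small helper. The function name and signature are assumptions, but the threshold rule, the average rate plus or minus half of itself, follows the description above.

```python
import random

def event_due(current_tick, last_event_tick, average_rate, rng=random):
    """Return True when enough ticks have passed to spawn a new event.

    The threshold is drawn uniformly from [average_rate/2, 3*average_rate/2],
    matching the "average rate minus/plus half of itself" rule.
    """
    threshold = rng.uniform(average_rate * 0.5, average_rate * 1.5)
    return current_tick - last_event_tick >= threshold
```

Because the threshold is redrawn every tick, the interval between events varies around the configured average rather than being fixed.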
Next, the loop wipes the display surface clean with a colour fill and updates the local value containing
the current frames per second, or ticks per second, that the simulation is running. This value is used for
displaying on the screen outside of the simulation area for purposes of monitoring the processing speed
of the simulation. The graphical text of this value is then rendered using the text object created earlier
and text colour specified in the configuration, and lastly it is drawn onto the display surface in the top
left corner. The vehicle controller and map objects are then called to update themselves to advance one
tick using their update method. The display surface is then updated with the graphics drawn by the map
and the vehicle controller. The last tasks to complete in the loop are: to check whether the exit events
have been called and, if so, to stop the Pygame engine and end the program; to update the tick counter;
to call an update on the graphical display; and finally to cause the program to sleep until it is time for
the next tick to occur in order to keep to the limit.
Once the main game loop exits the simulation has ended and will output the results. A file object is
created to append to the output file specified in the configuration and the score of the ambulance agent
is added up from the list of completed goals in the vehicle controller module. The results are grouped
together within a dictionary, a collection that uses key-value pair storage rather than numbered
positions, and added together based on the priority of the goal, major or minor. An output string is then
constructed that contains: the date and time of the end of the simulation, the total number of ticks spent,
the number of seconds the simulation took to run, the configuration of the ambulance agent, and finally
the score and count of what the ambulance accomplished within the time limit. This data is both output
to the console and to the output file for records.
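The grouping of completed goals by priority into dictionaries, as described above, might look like the following sketch. The representation of a goal as a (priority, score) pair is an assumption for illustration.

```python
from collections import defaultdict

def summarise_results(completed_goals):
    """Group completed goals by priority ("major" or "minor") into score
    totals and counts, as the end-of-run output step does. Each goal is
    assumed here to be a (priority, score_value) pair."""
    scores = defaultdict(int)
    counts = defaultdict(int)
    for priority, value in completed_goals:
        scores[priority] += value   # total score per priority class
        counts[priority] += 1       # number of goals completed per class
    return dict(scores), dict(counts)
```

The two dictionaries correspond to the score and count figures written to the console and the output file.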
3.2 Map Module
The map module is responsible for all navigation within the simulated environment and is specified
within the CityMap.py file. The map is initialized using the createMap method using the input file
name of the XML document that will serve as the map. The input file is read from the provided file
name in the simulation configuration, and then an XML parser is used to extract the information from
the data structure. First the nodes of the graph are extracted from the XML, and the attributes are read
that specify the name and position of the node on the map. For each of these an IntersectionNode is
created to designate an intersection of two roads on the map. Next the edges are read which specify
nodes to connect in their attributes. For each edge specified in the XML a NavigationEdge object is
created that handles the connection between nodes and any address points in between. The address
locations are automatically generated as long as the edge is long enough. The XML then loads any
specified points on the line and replaces the generated AddressNodes with the specified points. If this
point is the start point then it is saved in the map as the start point for the ambulance. Finally the map
parameters are initialized with the configuration from the project.py file, the actual map draw area is
calculated and the X and Y scale are calculated from the parameters, and the map is pre-drawn to create
a basic map as it is at the start of the simulation.
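The two-pass XML loading described above, nodes first, then edges, can be sketched with the standard-library ElementTree parser. The element and attribute names used here ("node", "edge", "name", "x", "y", "source", "target") are assumptions; the real input schema may differ.

```python
import xml.etree.ElementTree as ET

def load_map(xml_text):
    """Parse a map description in the style described above: intersection
    nodes with names and positions, then edges naming the nodes they join."""
    root = ET.fromstring(xml_text)
    # First pass: intersection nodes with a name and a map position.
    nodes = {n.get("name"): (float(n.get("x")), float(n.get("y")))
             for n in root.iter("node")}
    # Second pass: edges that specify the two nodes they connect.
    edges = [(e.get("source"), e.get("target")) for e in root.iter("edge")]
    return nodes, edges
```

In the real system each parsed node becomes an IntersectionNode and each edge a NavigationEdge; this sketch stops at the raw data.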
AddressNodes merely represent valid locations for a vehicle to begin at, or goals for a vehicle to travel
to. They are placed on the edges of the map by the NavigationEdges themselves whenever an edge is
longer than 1 unit on the map. They are only drawn on the map when the map is in test mode and so are
normally invisible unless they are explicitly labelled by the points in the XML input file. The
AddressNodes are mapped within the parent NavigationEdge using the distance from the source of the
edge, that is, the distance along the edge at which the node is placed. Each AddressNode holds a
reference to its own distance and its own parent edge for easy reference when path-finding needs to
locate the node on the map.
The map module handles the drawing of its own map onto the display surface using its internally saved
and pre-rendered map data. The map will update if the state of any elements on the map has changed,
such as the state of the intersections. The test lines of the map will not be rendered unless test mode
is enabled in the map module's starting parameters. To redraw the graphical representation of the map
the map module simply draws circles at the locations of all of the intersection nodes, and draws lines
between them for each of the edges.
The map module also handles the functions to find the absolute positions on the display screen of
coordinates given in the map's local X and Y values that are specified in the map configuration file.
In addition, the map module handles its own internal updates, refreshing the state of all of its internal
nodes and edges as time passes with each tick, and it handles the retrieval of AddressNodes for
the vehicle controller module. This allows the vehicle controller to request random addresses to use as
destinations or to get all of the addresses possible when placing the vehicles initially.
Lastly the map module handles the routes of the vehicles by passing the requests to the Search
utilities. Using the search utilities the map plots the shortest route between a vehicle's current
location and its destination. The map module uses a Breadth First Search over its own internal
node and edge structure to plot a path for the vehicle and returns a list of nodes to travel to. The search
module makes use of TemporaryEdge objects when plotting the path for the vehicle to travel, which are
similar to the NavigationEdges except that they lack the AddressNode generation. The map module then
appends the final destination node to the end of the returned list representing the path to follow, and
gives this list back to the vehicle controller module that requested it.
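The Breadth First Search routing described above can be illustrated with a minimal sketch over a plain adjacency mapping. The real map module works over IntersectionNode and NavigationEdge objects, so this is a simplification of the technique rather than the project's code.

```python
from collections import deque

def bfs_path(adjacency, start, goal):
    """Breadth First Search returning the list of nodes on a shortest
    (fewest-edge) path from start to goal, or [] if no route exists.
    `adjacency` maps each node to a list of its neighbours."""
    frontier = deque([start])
    came_from = {start: None}   # also serves as the visited set
    while frontier:
        current = frontier.popleft()
        if current == goal:
            # Walk the parent links back to the start, then reverse.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for neighbour in adjacency.get(current, ()):
            if neighbour not in came_from:
                came_from[neighbour] = current
                frontier.append(neighbour)
    return []
```

Because BFS explores nodes in order of edge count from the start, the first time it reaches the goal it has found a shortest path, which matches the document's claim about the algorithm.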
3.3 Vehicle Controller Module
This module handles and controls all of the vehicle entities inside of the simulation and is held within
the VehicleController.py file. It is the central access point for updating all agents every tick and also
handles linking them to the map module. It starts by taking the configuration of the simulation that is
given to it, and initializes itself with the variables. It then gets a list of all possible addresses to place
vehicles from the map module, and shortens the number of vehicles requested down to no more than
the number of addresses on the map, in order to avoid overfilling the map with more
vehicles than it can handle. For the first vehicle it creates the unique Ambulance class vehicle and
places it at the start point specified in the XML input file on the map, and removes the position from
the list of possible start positions for other vehicles. For all other vehicles it randomly selects an edge,
then randomly selects an address within that edge to use. Only two vehicles may be placed on any one
address, since there are two directions of travel on a street, and so once an address has been used a second
time it is removed from the list to be chosen from. All vehicles are initialized at their starting address
but are set to request a new goal location the first time they need to move. All of the vehicles are also
added to a list to track them and iterate through the list when the time comes to update all vehicles
every tick of the simulation.
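The placement rule described above, at most two vehicles per address with the requested count capped by the number of addresses, can be sketched as follows. The function name and representation of addresses as plain values are illustrative assumptions, and the special-cased ambulance start point is omitted.

```python
import random

def assign_start_addresses(addresses, num_vehicles, rng=random):
    """Choose a starting address for each vehicle. An address may be used at
    most twice (one vehicle per direction of travel); once used twice it is
    removed from the pool. The requested count is capped by the number of
    addresses, as the vehicle controller does."""
    available = list(addresses)
    count = min(num_vehicles, len(addresses))
    uses = {}
    placements = []
    for _ in range(count):
        addr = rng.choice(available)
        placements.append(addr)
        uses[addr] = uses.get(addr, 0) + 1
        if uses[addr] == 2:
            available.remove(addr)  # address is full: one vehicle per direction
    return placements
```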
The Vehicle class is the type of object within the simulation that represents the agents of the simulation.
Each is independent and is only aware of its surrounding vehicle entities. They do not message other
vehicles except when changing states, in order to limit the use of observer objects, which can be much
more resource intensive when constantly checking every other vehicle. A vehicle agent contains all of
its own data regarding its current position and state, it is completely autonomous and only linked to
other vehicle agents through the vehicle controller module. A vehicle contains its own; speed, direction,
position on the map, mass, limits on turn rate and speed, list of goals, currently traversing edge, arrival
state, and any contents the vehicle is carrying. The only vehicle to which the contents apply is the
Ambulance subclass, but all vehicles possess the ability to carry another object.
The Vehicle update method is where the agent's autonomous thinking is implemented. The vehicle first
calls the Steering utilities to calculate the force applied on the agent using only themselves as a
reference. The steering utilities take the current location and the goal location from the agent and use
the Arrive method to calculate the force acting upon the agent. The force vector is then returned back to
the agent and it is scaled based on the maximum amount of force the agent can supply given its mass.
As the mass of the agent increases the maximum force decreases proportionally such that an agent
always tries to apply its maximum force to a movement. If the force is effectively zero then a test is
made to see if the agent is within a very small distance of the target position; if so, the agent's
state is set to arrived and the next position from its goal list is popped off and set as the new
target. The calculated force is then applied to the speed of the agent to modify its movement vector.
The speed vector is then applied to its position, and the agent's new direction is calculated from the
movement vector.
The inertial mass of a Vehicle is simply its own mass divided by 100, which is the default mass of a
Vehicle agent. Values under this starting mass will result in an increased maximum force that can be
applied per tick, while values above will result in a reduced force that can be applied per tick. In this
way an agent's movement more closely simulates that of an object in reality, with its inertia dampening
the maximum acceleration it can attain. The remaining methods of a Vehicle are used externally to
identify whether the vehicle is the unique Ambulance vehicle, and to allow other agents to see the
Vehicle's state after it has notified them of a state change.
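The inertia-based force limit described above can be sketched as below. The base_force constant is a hypothetical per-tick limit for a default-mass vehicle; what the sketch preserves is the relationship the text describes, a maximum force that falls as mass rises past the default of 100.

```python
def limit_force(force_x, force_y, mass, base_force=1.0, default_mass=100.0):
    """Scale a steering force down to the agent's maximum. The maximum force
    is inversely proportional to inertia (mass / default_mass), so heavier
    vehicles accelerate more slowly, as described above."""
    inertia = mass / default_mass
    max_force = base_force / inertia
    magnitude = (force_x ** 2 + force_y ** 2) ** 0.5
    if magnitude <= max_force or magnitude == 0:
        return force_x, force_y           # already within the limit
    scale = max_force / magnitude         # shrink to the maximum, keep direction
    return force_x * scale, force_y * scale
```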
The Ambulance agent which is specified in Ambulance.py is a unique form of Vehicle that contains the
parameters for the agent's decision making. Within the Ambulance class there are two static parameters
representing the agent's decision-making priorities, which are used when choosing what goal to pursue
next when the agent is without one. Unlike a standard vehicle entity this agent will intelligently
decide what goal to pursue next instead of being given a random location to head towards. In all other
respects this agent is handled the same way as all others and behaves similarly but still autonomously.
The vehicle controller module's update function is where much of the work is done every tick of the
simulation. Every vehicle is put into a nested dictionary based on its position on the map, such that
each nested dictionary relates to one rounded X and Y coordinate on the map. Every vehicle is then
iterated through once again, and the list of neighbouring agents within its observational range is stored
in its own internal list. For any agent with a given rounded X and Y coordinate, its neighbours are
those agents whose rounded X and Y coordinates are equal to the agent's or differ from them by an
absolute value of 1.
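The spatial bucketing just described can be sketched directly. Vehicles are grouped by their rounded coordinates, and each vehicle's neighbours are collected from its own cell and the eight adjacent cells. The representation of vehicles as a name-to-position mapping is an assumption for illustration.

```python
from collections import defaultdict

def find_neighbours(vehicles):
    """Bucket vehicles into a grid keyed by rounded (x, y), then give each
    vehicle the occupants of its cell and the 8 surrounding cells as its
    neighbour list. `vehicles` maps a name to an (x, y) position."""
    grid = defaultdict(list)
    for name, (x, y) in vehicles.items():
        grid[(round(x), round(y))].append(name)
    neighbours = {}
    for name, (x, y) in vehicles.items():
        cx, cy = round(x), round(y)
        nearby = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nearby.extend(grid[(cx + dx, cy + dy)])
        nearby.remove(name)   # a vehicle is not its own neighbour
        neighbours[name] = nearby
    return neighbours
```

This keeps the neighbour search local instead of comparing every vehicle against every other, which is the resource saving the text attributes to avoiding constant全-pairs observer checks.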
Once a vehicle's neighbours have been set for the current tick, the update function goes on to check if
the current vehicle is in the arrived state; if not, it is simply updated and the loop moves on to the next
vehicle. If the vehicle has arrived then the module behaves differently for a generic Vehicle agent and the
unique ambulance. For a generic vehicle the module will simply select a new target AddressNode from
the list of all possible and set that as the new goal of the agent. For the unique Ambulance agent the
module first checks to see if the agent has arrived back at the hospital start point. If so, the ambulance
is unloaded and the contents are added to the score in the vehicle controller module; the module then
iterates through the list of all generated goals, weights them by the Ambulance's static goal-priority
parameters, and sets the closest as the unique agent's goal. Otherwise, having arrived at a goal location,
the Ambulance agent has its target set back to the hospital, the goal is added to the vehicle's contents
as its cargo, and finally the goal is removed from the list of possible goals.
All vehicles are then given a route by the map module, their goal list is set to the path generated by
the route, their state is set back to in transit, and each vehicle's position is updated.
The last two functions of the vehicle controller module are the drawing of the vehicles on the display
surface, and the calculating of the score for recording. The vehicle module uses the surface object
parameter when called and loops through each of the vehicles in the list. Each vehicle is then drawn as
its own surface and the surface is filled with the car colour specified in the simulation configuration,
and then rotated to face the vehicle's movement direction. If the vehicle is not generic then it is filled
with the ambulance colour from the configuration instead, and finally the vehicle's surface is
drawn onto the display surface. If the map module is in test mode then lines are also drawn from the
vehicle's current location to the target location of the vehicle as a visual aid.
The score function serves to generate the absolute score value of the goals accomplished by the
Ambulance agent within the time limit. The vehicle module internally tracks all goals completed by the
unique agent and to calculate the score will iterate through the list of all completed goals and add their
score value to the total. The score value of each goal type is configured here and is simply given an
absolute integer value. The function then returns the absolute integer value to be recorded to the output.
3.4 Summary
The three modules, the simulation engine, the map module, and the vehicle controller
module, work together in order to create the functioning system. The first module, the simulation engine, is
enclosed within the project.py file and handles the simulation control and timing of the other modules.
The first module is the backbone of the simulation and it is where the configuration is set for the
performance and visuals of the simulation. This module is also responsible for controlling the time and
speed of the simulation, as well as generating and saving the output from the simulation. The map
module handles the navigation of the autonomous agents and this is where the graphing and search
algorithms are implemented in the simulation. The map module leverages the Search toolkit for path-
finding when an agent requests a path and it is what loads the map design from the input XML and
creates the data structures for it. The vehicle controller module is responsible for containing and
handling all of the independent agents within the simulation. It handles the updating of the vehicles'
status every tick and setting the goals for the vehicles when they finish their current list. It also tracks
the performance of the unique vehicle, the Ambulance, and assigns the goals for that vehicle to
accomplish. Both of these sub-modules must interact in order to share information between them and
relay the information to the autonomous agents that exist within the simulation. None of the modules
directly control the agents themselves; instead they only handle the agents' environment or give the
agents goals to accomplish.
Chapter 4. Implementation of the Key Components
In order to create a realistic representation of reality a simulation requires some key components that
give it the functionality to mimic the decision making, navigation, and allow the control of the
simulation. The design of the main simulation file is used to allow easy configuration and ensure
synchronization of the speed of the simulation to the speed specified in the configuration. It also helps
to make debugging of problems within the simulation easier by splitting tasks into sections that are
accomplished individually; the modules are processed separately so that they do not interfere. The
main project also handles the recording of the simulation to the output file for easy analysis of the
results. Using AI steering behaviours, the agents within the simulation are able to emulate realistic
movement towards their goals while avoiding collision with one another through the use of the Seek,
Arrive and a modified Flee algorithm. Although the Seek and Arrive functions are used separately from
one another depending on the plans of the agent the Flee method is used with a list of neighbours near
each agent such that every agent can independently avoid others that lie in its path. And lastly the map
module makes use of path-finding algorithms when an agent requests a path towards its goal, specifically
the Breadth First Search algorithm, an algorithm that always returns the shortest path and requires
relatively little processing time within smaller graph spaces such as the simulation's. All of these
functions are used together by the simulation in order to help make the autonomous agents appear more
intelligent without the use of scripting in their behaviours.
4.1 Pygame Methodology
The simulation engine of the project is required to run the rest of the system, and it follows the basic
structure of projects created with the Pygame engine, and of complex programs in general. The first
step is to initialize the variables of the system and its settings. This is completed within the first 40 lines
of code. After that the main method of the application is defined where most of the work will be done,
and finally it is called. This was done to shift the design of the system from a simple script to better
mirror that of a traditional application, it also isolates the configuration variables from accidental
modification unless otherwise specified within the method. Because Python is an object-oriented
language initialized in a procedural manner, variables declared outside of a method's context cannot be
modified unless the global identifier is used to bind the outer variable to a name inside the method;
both names then point to the same memory location. Otherwise outer variables can only be read, and
any attempt to modify their contents instead creates an inner variable local to the method with the
same name.
Within this main method the Pygame engine is initialized, but only after all
configuration variables have been fully created. By encapsulating all work with the Pygame engine and
ending the engine upon leaving the method it is possible to end the simulation and restart it multiple
times if it is required simply by exiting and recalling the main method. The main method runs from
start to finish of the simulation and is designed so that the main work of the flow of the overall
simulation is evident within it. By having the main method execute all of the steps for each tick of the
simulation it becomes easier to debug as separate modules do not interact other than requesting
information from one another. This isolates the modifications to only the specific module which is
being modified much like the object-oriented encapsulation methods of Java.
The overall flow of the simulation can be seen as:
1. The configuration of the simulation is loaded and the constant data members created before any
work begins.
2. The Pygame engine is initialized and the display surface is created.
3. The map is loaded from the input file in the map module and the vehicle objects are created in
the vehicle controller module.
4. The map module is updated once to update the state of the map for the current tick.
5. The vehicle module is updated once and all contained vehicles are updated for this current tick,
including state, position and goals.
6. The display surface is updated with the current state of the data members inside the modules.
7. The engine checks for any events that have occurred since the last loop.
8. The engine updates the visual display, increases the tick increment, and loops back to step 4
unless it has reached the limit on the number of ticks.
9. The results of the simulation are output to the output file along with a log of what occurred.
These are the main steps taken in the program, and they are synchronized to run only one loop every
1/(ticks per second) seconds. The timing synchronization is done by the
Pygame engine using the FPS, Frames Per Second, function that causes the program to sleep until it is
time for the loop to run again. All modules serve the main method and no other threads exist within the
implementation.
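The timing behaviour described above, sleeping out the remainder of each tick so the loop runs at most a fixed number of times per second, is what Pygame's Clock.tick(fps) provides. A pure-Python stand-in is sketched here so the idea is visible without the Pygame dependency; the function name is illustrative.

```python
import time

def sleep_until_next_tick(tick_start, ticks_per_second):
    """Sleep out the remainder of the current tick so the loop runs at most
    `ticks_per_second` times per second. This mimics what Pygame's
    Clock.tick(fps) does for the simulation's main loop."""
    tick_length = 1.0 / ticks_per_second
    elapsed = time.monotonic() - tick_start
    if elapsed < tick_length:
        time.sleep(tick_length - elapsed)   # fast ticks wait; slow ticks do not
```

If a tick's work takes longer than the tick length, no sleep occurs and the simulation simply runs below its target rate, which is why the on-screen ticks-per-second value is worth monitoring.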
4.2 AI Steering
This simulation implements three different AI steering techniques that the autonomous agents
use to move themselves around the environment of the simulation: Seek, Arrive, and
Flee [12]. They function together and are calculated every tick of the simulation when a Vehicle object
requests the forces acting upon it to accomplish its goals while avoiding other vehicles. The Seek and
Arrive algorithms are used separate from one another in that a vehicle will Seek towards an intersection
it can pass through, while a vehicle will use Arrive to decelerate towards an intersection or an
AddressNode it must stop at.
The algorithm for the Seek method is extremely simple: a distance vector is created by subtracting the
current position from the target location. The magnitude of the distance vector is taken in order to
calculate the linear distance to the target. If the distance is 0 then zero force is returned; otherwise the
vector to the target is multiplied by the maximum linear speed the vehicle can travel divided by the
linear distance. This creates a vector towards the target that is shortened to the maximum linear
distance the vehicle can move in a single tick, which is the vehicle's desired speed. Lastly the vehicle's
current speed is
subtracted from the desired speed to create the force to modify the current speed towards the desired
speed.
function Seek (agent)
    vector_to_target = agent.target - agent.position
    linear_distance = magnitude(vector_to_target)
    if linear_distance > 0
        desired_speed = vector_to_target * (agent.max_speed / linear_distance)
        return desired_speed - agent.speed
    return <0, 0>
Figure 4.2.1: The Seek algorithm
This algorithm is used to accelerate an agent towards a target up to its maximum speed. It is used to
accelerate a vehicle towards an IntersectionNode that is considered green where the agent does not
have to stop or make a turn. It is the most common of the AI steering behaviours due to its simplicity
and is seen in most forms of AI when an agent needs to move from a current location to a new one.
The next important steering behaviour is the Arrive function. The difference between this and the Seek
function is that while Seek accelerates up to its max speed towards a target and will overshoot its target
once it passes through it, the Arrive function will slow an agent down as it nears the destination such
that it stops exactly on target. The algorithm first gets the distance vector from the current position to
the target by subtracting the agent's current position from the target position. Next the linear distance is
calculated from the magnitude of the distance vector. Once again if the distance is 0 then simply return
a vector of 0 since the agent is at the target position. Otherwise the optimal speed is calculated from the
distance divided by a damping term, in this case the agent's inertia multiplied by the
smoothing coefficient. The minimum is then taken of the optimal speed and the vehicle's maximum
linear speed to limit the possible speed. The desired speed is then calculated by the movement vector
multiplied by the optimal linear speed divided by the linear distance. Then the desired speed minus the
agent's current speed is returned, giving the force that adjusts the agent's speed to match the desired speed.
function Arrive (agent)
    vector_to_target = agent.target - agent.position
    linear_distance = magnitude(vector_to_target)
    if linear_distance > 0
        optimal_speed = linear_distance / (agent.inertia * smoothing_coefficient)
        optimal_speed = min(optimal_speed, agent.max_speed)
        desired_speed = vector_to_target * (optimal_speed / linear_distance)
        return desired_speed - agent.speed
    return <0, 0>
Figure 4.2.2: The Arrive algorithm
The final important algorithm used from the AI steering techniques is the Flee algorithm. This is
another simple algorithm, similar to the Seek algorithm but in reverse: it is used to move away
from a specified location in order to maintain distance from other vehicles. The force is modified
based on proximity to the target in order to increase the closer the agent is to the target and decrease as
the agent gains distance from the target. This creates a point between the target of movement and any
vehicles in the agent's path where the forces are equal and the agent will sit until the blocking vehicle
moves. In this way the Flee algorithm is used for avoiding neighbouring agents rather than truly fleeing
and has been modified from the original design to simply move away from a target position at
maximum speed.
The algorithm of the Flee method is very similar to the Seek method and follows the same general
structure. There is a maximum distance over which the force will be calculated and applied, beyond
which the force is zero. The vector distance between the agent and the target is calculated by
subtracting the agent's current position from the position of the target. The linear distance to the target
is then calculated from the magnitude of the target vector. If the linear distance is greater than the
maximum distance for the target then the function simply returns 0 force. Otherwise the repulsion force
is calculated from the negative of the agent's max speed divided by the square of one plus the linear
distance. The desired speed is then calculated from the vector to the target multiplied
by the repulsion force divided by the linear distance. This normalizes the target vector and scales it to
the repulsion force. The resulting force is then calculated from the agent's current speed subtracted
from the desired speed.
Figure 4.2.3: Flee Diagram
function Flee (agent)
    maximum_distance = 1.0
    vector_to_target = agent.target - agent.position
    linear_distance = magnitude(vector_to_target)
    if (linear_distance > maximum_distance)
        return <0, 0>
    repulsion = -agent.max_speed / (1 + linear_distance) ^ 2
    desired_speed = vector_to_target * (repulsion / linear_distance)
    return desired_speed - agent.speed
Figure 4.2.4: The Flee algorithm
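The Flee behaviour can likewise be sketched in Python. As before this is an illustrative version with assumed constants (max speed, cutoff distance), not the simulation's own code.

```python
import math

def flee(position, speed, target, max_speed=5.0, maximum_distance=1.0):
    """Repulsion force away from a nearby target, per Figure 4.2.4.

    Returns zero force beyond maximum_distance; otherwise a repulsion
    that grows sharply as the agent closes on the target.
    """
    to_target = (target[0] - position[0], target[1] - position[1])
    distance = math.hypot(*to_target)
    if distance > maximum_distance or distance == 0:
        return (0.0, 0.0)
    # Negative magnitude points the force away from the target;
    # the (1 + d)^2 divisor makes it strong when close, weak near the cutoff.
    repulsion = -max_speed / (1 + distance) ** 2
    desired = (to_target[0] * repulsion / distance,
               to_target[1] * repulsion / distance)
    return (desired[0] - speed[0], desired[1] - speed[1])
```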
The squared divisor in the algorithm means that as the distance increases the force falls off with the
inverse square of the distance. This maintains a minimum distance between individual agents. The force
is only calculated over the list of an agent's neighbours whose integer X and Y coordinates differ from
the agent's by an absolute value of at most 1, which limits the number of calculations that must be done
for each agent and increases the processing speed of the function.
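The neighbour filtering described above might look like the following in Python; the function name and the representation of positions as (x, y) tuples are assumptions for illustration.

```python
def nearby_neighbours(agent_pos, others):
    """Keep only agents whose integer X and Y coordinates differ by at most 1.

    A sketch of the neighbour filter described above; positions are (x, y)
    tuples in map units, and the grid granularity is an assumption.
    """
    ax, ay = int(agent_pos[0]), int(agent_pos[1])
    return [p for p in others
            if abs(int(p[0]) - ax) <= 1 and abs(int(p[1]) - ay) <= 1]
```

Only the agents surviving this filter are passed to the Flee force calculation, keeping the per-agent cost roughly constant regardless of how many vehicles are in the whole simulation.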
By combining these three AI steering techniques a force is generated that moves the agent towards its
goal while keeping a minimum distance from other independent agents, without needing to coordinate or
calculate new way-points. Either the Seek or the Arrive method is used to move towards an
IntersectionNode, depending on whether the agent can pass straight through or has to stop or turn, and
the cumulative repulsion force is calculated over all agents that are considered to be directly in the
agent's path. These forces are then clamped to the maximum force the vehicle may apply based on its
inertia, and finally added to the vehicle's speed. The speed vector is then used to update the position
and direction of the agent. This is done for every agent on every tick of the simulation.
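A single tick of this combined update can be sketched as follows. The agent is represented here as a plain dictionary; the field names, constants, and the unit time step are assumptions for illustration, since the simulation's own classes are not reproduced.

```python
import math

def tick(agent, neighbours, dt=1.0):
    """One simulation tick: seek the target, flee close neighbours,
    clamp the combined force, then integrate speed and position.

    `agent` is a dict with 'position', 'speed', 'target', 'max_speed',
    and 'max_force' fields (names are assumptions, not the thesis's classes).
    """
    px, py = agent['position']
    tx, ty = agent['target']
    # Seek: desired velocity at max speed toward the target.
    dx, dy = tx - px, ty - py
    dist = math.hypot(dx, dy) or 1.0
    fx = dx / dist * agent['max_speed'] - agent['speed'][0]
    fy = dy / dist * agent['max_speed'] - agent['speed'][1]
    # Flee: inverse-square repulsion from each neighbour within range.
    for nx, ny in neighbours:
        ox, oy = nx - px, ny - py
        d = math.hypot(ox, oy)
        if 0 < d <= 1.0:
            rep = -agent['max_speed'] / (1 + d) ** 2
            fx += ox * rep / d
            fy += oy * rep / d
    # Clamp the cumulative force to the vehicle's maximum.
    mag = math.hypot(fx, fy)
    if mag > agent['max_force']:
        fx, fy = fx / mag * agent['max_force'], fy / mag * agent['max_force']
    # Integrate: force adjusts speed, speed moves the agent.
    sx = agent['speed'][0] + fx * dt
    sy = agent['speed'][1] + fy * dt
    agent['speed'] = (sx, sy)
    agent['position'] = (px + sx * dt, py + sy * dt)
    return agent
```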
4.3 Graph Searching
In order to navigate the simulation environment autonomously, every agent requires the ability to find
a path from its current location to its goal. The purpose of a graph searching algorithm is to fulfil
this requirement using as little processing time as possible. For this project I used Breadth First
Search due to time constraints. This algorithm finds the shortest possible path to the destination if
it is reachable from the agent's current position, and returns a path of way-points through the
navigation graph for the agent to follow. Of the four algorithms examined, Breadth First Search showed
the second fastest search time by processing time and was the simplest to program.
The algorithm requires the start and end positions of the path to be found, along with a reference to
the map that contains the navigation nodes and edges. First, the map of visited nodes, the map of links
between nodes, and the queue of edges to process are created. Next, the source and destination nodes of
the edge that the agent currently rests on are added to the list of visited nodes, to prevent the
current edge from being searched again. Two temporary edges are then created from the agent's current
position to the source and destination nodes of the current edge respectively, so that the agent can
head in either direction from its current position. A loop then runs for as long as there are edges in
the queue. The top edge is popped off the queue and evaluated to see whether it equals the edge that
contains the end position, swapping the end points to test whether the temporary edge being examined
has been reversed. If the edges share the same two endpoints, a list of nodes is returned, constructed
by backtracking from the current node through the route map to the start position. Otherwise, if the
destination node of this edge has not already been routed to another node, a mapping is added from the
destination of this edge back to its source in the route map. Finally, each edge in the map is tested
individually to see whether its source or destination node matches the destination node of the current
temporary edge, and whether the opposite end of the edge has not already been visited. If both
conditions hold, a temporary edge representing this edge with the proper search direction is added to
the queue, and its far node is added to the list of visited nodes. The loop then restarts with the next
edge on the queue. If no path is found the loop exits and the function returns nothing, indicating no
path.
function BFS (start_position, end_position, map)
    visited_nodes = new map
    route = new map
    queue = new queue
    put (start_position get edge.source_node) in visited_nodes
    put (start_position get edge.destination_node) in visited_nodes
    create temporary edge from start_position to (start_position get edge.source_node)
    add temporary edge to queue
    create temporary edge from start_position to (start_position get edge.destination_node)
    add temporary edge to queue
    while queue is not empty
        edge = pop top edge from queue
        if (edge.source_node is (end_position get edge.destination_node) and
                edge.destination_node is (end_position get edge.source_node))
            or (edge.destination_node is (end_position get edge.destination_node) and
                edge.source_node is (end_position get edge.source_node))
            return list of nodes from edge.source_node to start_position
                using mapped nodes in route
        if route for edge.destination_node does not exist
            map edge.source_node in route to edge.destination_node
        for each element e in all edges from map
            if e is edge
                continue to next element
            if e.source_node is edge.destination_node and
                    visited_nodes for e.destination_node does not exist
                add temporary edge from e.source_node to e.destination_node to queue
                map e.source_node to True in visited_nodes
            else if e.destination_node is edge.destination_node and
                    visited_nodes for e.source_node does not exist
                add temporary edge from e.destination_node to e.source_node to queue
                map e.destination_node to True in visited_nodes
    return nothing
Figure 4.3.1: The Breadth First Search Algorithm
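For comparison, the textbook form of Breadth First Search over a simple adjacency map can be written in a few lines of Python. Unlike the edge-based variant in Figure 4.3.1, this sketch assumes agents start and finish exactly on nodes; the function and variable names are illustrative.

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Shortest path by edge count from start to goal, or None if unreachable.

    `graph` maps each node to an iterable of neighbouring nodes.
    """
    if start == goal:
        return [start]
    visited = {start}
    route = {}                    # node -> predecessor, for backtracking
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph.get(node, ()):
            if neighbour in visited:
                continue
            route[neighbour] = node
            if neighbour == goal:
                # Backtrack through the route map to rebuild the path.
                path = [goal]
                while path[-1] != start:
                    path.append(route[path[-1]])
                return path[::-1]
            visited.add(neighbour)
            queue.append(neighbour)
    return None                   # goal unreachable
```

The route map here plays the same role as the route map in the thesis's version: it records, for each discovered node, which node it was reached from, so the final path can be reconstructed by walking backwards from the goal.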
This algorithm is a modification of the standard Breadth First Search in that it does not simply look
for a path from node A to node E [5]; instead it takes into account the fact that the agents are
located at AddressNodes and are finding a path to a goal AddressNode, where the IntersectionNodes are
merely way-points on the route to the goal. For this reason the initial search needs to begin in both
directions from the current AddressNode, towards the IntersectionNodes at either end. And while a
stored edge in the map has a direction, it may not match the direction of the temporary edge being
examined, so both ends of the NavigationEdge must be examined when testing whether it connects to the
destination node of the current temporary edge. The end goal is appended to the list of way-points in
the map module after the algorithm returns, before the path is handed back to the agent. The vehicle
controller module requests a path for an agent each time the agent's goal changes, so this algorithm
must be synchronized and fast enough to respond quickly.
4.4 Summary
These algorithms and methods are the core of the simulation engine and have been used to meet the
requirements of the project. The Pygame engine design is used to create a configurable simulation that
loads an input file, runs in a controllable manner to ensure synchronization, and writes to an output
file. The AI steering techniques detailed in this section, from the Steering utilities, are the methods
that allow the autonomous agents to move towards their goals while avoiding vehicles they could collide
with, without the use of hard-coded scripting. This combination of three techniques creates a more
believable simulation of real-world movement and interaction between independent agents within the
closed system. No agent is dependent on another: each calculates its own forces independently towards
its goal using either the Seek or the Arrive method, while observing a list of neighbours whose
repulsion prevents collisions. Every agent has a path from its current position to its goal requested
for it by the vehicle controller and assigned when it is required. The map module uses the Search
utility's Breadth First Search algorithm to create the list of way-points for the agent to follow to
its goal, and the agent uses the steering behaviours to accomplish that movement.
Chapter 5. Results
With the implementation of the simulation system finished, the program outputs two types of data: a
real-time graphical display, and a text log of all statistics when a simulation period finishes. An
example of the graphical output can be seen in this chapter, as well as the raw table data of the
trials conducted with varying configurations using the finished simulation. The averages are displayed
for each of the configurations.
5.1 Visual Output
The simulation was designed and implemented with a graphical component to the application in order
to allow for easy visualization of what the simulation is doing. All elements of the simulation are
represented in the graphical output, an example of which can be seen below. This represents a non-
specific point in the operation of the simulation and is not taken from any of the recorded trials.
Figure 5.1.1: The graphical output of the simulation.
The visual display for the system has many separate elements, all visible at once in the current view.
The lines and circles represent the navigation graph of the environment. The green circles indicate the
current state of an intersection from the edge over which the lights are present; green indicates that
movement is allowed. The H label on the map marks the start point of the ambulance agent, the hospital.
The grey squares represent the different vehicle agents within the simulation, and the red square
represents the ambulance agent itself. In the top left, the frame-rate display shows exactly what speed
the simulation is running at. Lastly, the title of the program can be seen at the top of the window in
the title bar.
When test mode is enabled in the map module additional information is drawn for debugging reasons
on the map to plot vehicle targets and current goals, but that is not seen under normal circumstances as
it is too visually complex and fast moving to track in operation and merely clutters up the visual
display.
5.2 Result Tables
The result tables were constructed from a series of trials using a number of different configurations.
A total of 5 unique configurations were evaluated, and the output from the output log file can be seen
below. The averages are also displayed for each configuration for comparison.
The first configuration was done using the priority settings of 0.8 for Major goals, and 0.2 for Minor
goals. This was the estimated best case scenario which I hypothesized would have the best results.
Trial Number   Trial Time   Major Goals   Minor Goals   Total Completed   Score
1              301.57 sec   25            2             27                52
2              301.62 sec   24            2             26                50
3              301.61 sec   22            2             24                46
4              301.51 sec   25            0             25                50
5              301.41 sec   25            1             26                51
6              301.62 sec   23            5             28                51
7              301.44 sec   24            1             25                49
8              301.57 sec   26            0             26                52
Average        301.54 sec   24.25         1.63          25.88             50.13
Table 5.2.1: Results for Configuration Major:0.8 Minor:0.2
The second configuration tested was done using the priority settings of 0.5 for both Major and Minor
goals such that they were equal and in the middle.
Trial Number   Trial Time   Major Goals   Minor Goals   Total Completed   Score
1              301.44 sec   19            12            31                50
2              301.49 sec   15            17            32                47
3              301.72 sec   14            15            29                43
4              301.68 sec   12            16            28                40
5              301.56 sec   19            16            35                54
6              301.60 sec   16            14            30                46
7              301.51 sec   19            12            31                50
8              301.53 sec   20            14            34                54
Average        301.57 sec   16.75         14.5          31.25             48
Table 5.2.2: Results for Configuration Major:0.5 Minor:0.5
The third configuration is the reverse of the first, with a priority setting of 0.2 for Major goals and 0.8
for Minor goals.
Trial Number   Trial Time   Major Goals   Minor Goals   Total Completed   Score
1              301.46 sec   1             27            28                29
2              301.73 sec   1             25            26                27
3              301.25 sec   1             27            28                29
4              301.50 sec   1             22            23                24
5              301.36 sec   2             27            28                29
6              301.68 sec   3             20            23                26
7              301.46 sec   1             25            26                27
8              301.62 sec   1             26            27                28
Average        301.51 sec   1.38          24.88         26.13             27.38
Table 5.2.3: Results for Configuration Major:0.2 Minor:0.8
The fourth configuration pairs maximum priority for Major goals with minimum priority for Minor goals:
a priority setting of 1.0 for Major and 0.1 for Minor.
Trial Number   Trial Time   Major Goals   Minor Goals   Total Completed   Score
1              301.55 sec   23            1             24                47
2              301.63 sec   25            1             26                51
3              301.55 sec   26            0             26                52
4              301.80 sec   24            2             26                50
5              301.47 sec   23            3             26                49
6              301.62 sec   27            2             29                56
7              301.46 sec   23            1             24                47
8              301.66 sec   25            3             28                53
Average        301.60 sec   24.5          1.63          26.13             50.63
Table 5.2.4: Results for Configuration Major:1.0 Minor:0.1
The last configuration is the opposite of the fourth: Minor is given maximum priority and Major
minimum, with a priority setting of 0.1 for Major and 1.0 for Minor.
Trial Number   Trial Time   Major Goals   Minor Goals   Total Completed   Score
1              301.73 sec   1             23            24                25
2              301.89 sec   1             23            24                25
3              301.63 sec   4             23            27                31
4              301.65 sec   2             30            32                34
5              301.63 sec   1             24            25                26
6              301.59 sec   1             27            28                29
7              301.47 sec   3             25            28                31
8              301.62 sec   1             28            29                30
Average        301.65 sec   1.75          25.38         27.13             28.88
Table 5.2.5: Results for Configuration Major:0.1 Minor:1.0
This is a summary of all data collected by running the simulation with 8 tests for each unique
configuration.
5.3 Summary
In this chapter the visual output and the numerical recordings from the logged output of the simulation
have been presented. For the raw data logs, see Appendix B.
Chapter 6. Discussion
Through the analysis of the feasibility of constructing a simulated environment in the Python language,
the concepts of agent path-finding, steering, and decision making have been examined within this
project. By implementing a unique ambulance agent within the closed environment, numerical results have
been collected, and the examination has yielded insight into the most effective decision making
priorities for a score-driven and a goal-count-driven agent respectively. Two distinct configuration
styles were shown to be most effective in separate ways. Additionally, there is room for future work in
expanding this simulated environment to add more realism and new decision making factors to the goal
picking algorithm.
The visual output of the simulation shows the agents within the environment, represented by the grey
rectangles, navigating to their goals along the black lines. The red rectangle is the unique ambulance
agent, and the H label represents its start location, the hospital. The green circles represent the
status lights of the IntersectionNodes; the agents may proceed when the light is green. In the picture
the agents can be seen navigating the black lines, representing streets, to their goal destinations.
The visual output of the simulation is animated, and it is possible to witness the agents moving
according to the navigational data and watch the ambulance move from the hospital, to its goal, and
back to the hospital repeatedly.
The main problem encountered during the development of the program was the navigational system of the
agents, since each address node is stored within an edge between two intersections. This created added
complexity in finding the shortest path to the destination from each direction, to ensure the algorithm
does not backtrack. Because the edges are stored as links between IntersectionNodes, rather than the
nodes relating to one another directly, the directionality of the edges had to be accounted for and
reversed using the TemporaryEdges so that the path-finding algorithm could traverse every possible
path. This is why the Breadth First Search algorithm is so heavily modified from its standard form.
A limitation of the current design is that it does not at the moment allow the agents to navigate
between the traffic light nodes of the IntersectionNodes, only between the IntersectionNodes
themselves. For this reason the Flee algorithm does not take into account agents travelling in a
direction that is greater than forty-five degrees off the agent's current direction, and will instead
pass right through them.
The result tables show the summarized raw results of the trials that were run. They number the trials
for each configuration from 1 to 8, and the configuration priorities are listed in the caption of each
table. The trial time was recorded for each trial to check each configuration for significant
differences in computational efficiency. The goal counts for each trial are recorded in the tables
under Major, Minor, and the total number of goals completed. Finally, the score for the trial is given,
calculated by adding the combined score of all the goals: Major goals are worth 2 points while Minor
goals are worth 1. The final row of each table is the average over all trials of that configuration,
for comparison with the other priority configurations. These average results are what will be analyzed
and compared in the conclusions.
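The scoring rule described above (Major goals worth 2 points, Minor goals worth 1) can be checked against the tables with a few lines of Python. The trial data below is transcribed from Table 5.2.1; the helper names are illustrative.

```python
def trial_score(major, minor):
    """Score for one trial: Major goals are worth 2 points, Minor goals 1."""
    return 2 * major + minor

def averages(trials):
    """Average (major, minor, total completed, score) over a configuration's trials."""
    n = len(trials)
    majors = sum(m for m, _ in trials) / n
    minors = sum(mi for _, mi in trials) / n
    scores = sum(trial_score(m, mi) for m, mi in trials) / n
    return majors, minors, majors + minors, scores

# Trials from Table 5.2.1 (Major 0.8, Minor 0.2), as (major, minor) pairs.
config_one = [(25, 2), (24, 2), (22, 2), (25, 0), (25, 1), (23, 5), (24, 1), (26, 0)]
```

Running averages(config_one) reproduces the averages printed in Table 5.2.1 (24.25, 1.63, 25.88, 50.13, before rounding to two decimal places).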
Chapter 7. Conclusions
The purpose of this project was to investigate the use of Python to create applications, and to
evaluate the effectiveness of the Pygame library at creating a realistic mimic of reality in
simulation. The purpose was also to use that simulation to evaluate the effects of different AI
decision making priorities within a simulated urban environment. From the visual output of the
simulation, created using the Pygame engine and implemented using the Pygame methodology to create a
modular application, it can be concluded that Python as a language is developed and broad enough to be
used to build an autonomous application. Additionally, it can be concluded that the Pygame library
makes it simple enough to create the visual output of a game while ensuring synchronization of the
simulation, with a speed limitation to keep the speed of the visual output regular.
Through the research and use of techniques such as AI steering it is possible to create autonomous
intelligent agents that make their own decisions, without the need for scripting or user interaction to
help the agents reach their goals. And to allow the autonomous agents to find a path from their current
location to their goal within a closed system that does not permit unrestricted movement, it is
necessary to implement path-finding algorithms such as Breadth First Search. By combining all of these
tools it was possible to create a modular, controlled simulation with a predefined configuration to
test the effect of several different priorities in the decision making of an AI agent choosing which
goals to accomplish from a list in order to achieve the maximum results.
From the trial results it can be seen that the fourth configuration was the most effective from the
standpoint of average score, with the first configuration coming in a close second. For the purpose of
this evaluation the effectiveness of both can be considered equal, and the tactic of giving Major goals
higher priority has a positive effect on the average score. Meanwhile the average scores from
configurations five and three, where the Minor goals were given higher priority, are similarly and
significantly lower than the rest of the configurations, so a preference for the Minor goals had a
negative effect on the average score.
Using a different method of evaluation, the second configuration, where the priorities of Minor and
Major goals are equal, can be seen to have a significantly higher average number of goals completed in
the time frame. This agrees with the expectation that the agent will simply respond to the closest goal
and achieve the fastest time possible. This alternative method of evaluation illustrates that the
second configuration has its usefulness when completion of goals is more important than the worth of
the goals.
Given the significant variance in the results of the simulation without any form of user interaction in
the decisions of the agents, and with the test-defined input file for the navigation environment, it
can be inferred that the autonomous agents are able to navigate the environment and accomplish their
goals on their own successfully. The only changes made to affect the results were the changes to the
priority configuration of the ambulance agent; the wide range of results achieved by changing the
priority of goal selection illustrates that the AI entities operate without scripting.
7.1 Future Work
While this simulation did implement commonly used algorithms, they were not the most efficient
solutions possible. Methods like Breadth First Search, while simple to implement, are known to be much
less efficient than the more complicated A* or B* algorithms [5] that are in common use in the
commercial video game industry. Further work could implement one of these more complex algorithms,
which use heuristics to further increase the intelligence of the path-finding and increase the
computational speed.
Additionally, a more complex environment could be created that better restricts the agents within the
simulation using walls that agents seek to avoid, attempting to stay within the boundaries of the road
itself instead of simply following the navigational graph. A force drawing agents to the centre of
their lane would increase the sense of realism by having the agents try to stay within the centre of
the lane as best as possible. Lastly, the intersections could be made more realistic by directing the
agents to stop at separate nodes for each lane of traffic rather than at the central point of the
intersection itself. With this, the agents must move themselves into the correct lane on the street
they wish to enter, instead of simply turning themselves down the street they desire from the
intersection.
While this simulation provides a sufficiently realistic model of a city environment for this
experiment, it could be improved with aspects such as curved streets, speed limits for generic
vehicles, multi-lane streets, pedestrians, and varying vehicle speeds. Additionally, the ability for
the ambulance to re-task itself when a new goal appears would provide better results, and a mechanic
whereby goals expire if the ambulance does not respond in time would add a new factor to evaluate when
deciding on a goal to accomplish.
7.2 Summary
Throughout this project the techniques of AI steering, graph searching, and creating a simulated
synchronized environment have been examined. Additionally approaches for multi-agent processing,
graphical representation, timing synchronization, and AI decision making have been analyzed and
implemented. This project serves as a test to the abilities of the Python language and the Pygame
library to be used to create a controllable simulation of a test environment in which independent
autonomous AI agents were created and tasked with completing their own goals. Additionally the
project was used to record the performance of several different priority configurations for a unique
ambulance agent. The conclusions of this project are that Python is a language developed enough to make
very intricate, object-oriented applications and simulations that generate real results and visually
represent a simulated environment in detail. The autonomous agents are also able to perform well at the
task of completing their own goals, without any form of assistance from scripting or input, in a
flexible generated environment specified through an input file. The numerical results show that
preferring the highest scoring goals yields the highest results from the agent's decision algorithm
when examining effectiveness through score, but an equal priority in decision making yields the best
results in the number of goals completed. From these results future work can be imagined using a more
in depth simulation and accounting for an increased number of factors within the decision making of
the autonomous agents within the simulation. More realism can also be added to the simulation
environment by adding more limitations and requiring more complex movement from the virtual agents
in order to reach their goal.
8. Bibliography
[1] Buckland, Mat. Programming game AI by example. Jones & Bartlett Learning, Plano, TX,
USA. Wordware Publishing, Inc. 2005.
[2] Ceder, Vern, and Nathan Yergler. "Teaching Programming with Python and PyGame."
Apresentado na PyCon, Canterbury School, Fort Wayne, USA. 2003.
[3] Chaurasia, Gaurav, B. Radhika Selvamani, Nithi Gupta, and Subodh Kumar. "Virtual chaotic
traffic simulation." In Proceedings of the Seventh Indian Conference on Computer Vision,
Graphics and Image Processing, pp. 337-344. ACM, New Delhi, India. IIT Delhi, 2010.
[4] Even, Shimon. Graph algorithms. Cambridge University Press, New York, NY, USA. 2011.
[5] Goldberg, Andrew V., and Chris Harrelson. "Computing the shortest path: A search meets graph
theory." In Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms,
pp. 156-165. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA. 2005.
[6] Jamali, Nadeem, and Shangping Ren. "A layered architecture for real-time distributed multi-
agent systems." In ACM SIGSOFT Software Engineering Notes, vol. 30, no. 4, pp. 1-8. ACM,
New York, NY, USA. 2005.
[7] Liu, Lian, and Raymond Chi-Wing Wong. "Finding shortest path on land surface." In
Proceedings of the 2011 ACM SIGMOD International Conference on Management of data, pp.
433-444. ACM, New York, NY, USA. 2011.
[8] Luna, Ryan, and Kostas E. Bekris. "Push and swap: Fast cooperative path-finding with
completeness guarantees." In Proceedings of the Twenty-Second international joint conference
on Artificial Intelligence-Volume Volume One, pp. 294-300. AAAI Press, 2011.
[9] McGugan, Will. Beginning Game Development with Python and Pygame. Will McGugan, New
York, USA. Springer, 2007.
[10] Millington, Ian, and John Funge. Artificial intelligence for games. CRC Press, Burlington, MA,
USA. Morgan Kaufmann Publishers, 2009.
[11] Poole, David L., and Alan K. Mackworth. Artificial Intelligence: foundations of computational
agents. Cambridge University Press, New York, USA. 2010.
[12] Reynolds, Craig W. "Steering behaviors for autonomous characters." In Game developers
conference, vol. 1999, pp. 763-782. Foster City, USA. Sony Computer Entertainment America,
1999.
[13] Reynolds, Vinny, Vinny Cahill, and Aline Senart. "Requirements for an ubiquitous computing
simulation and emulation environment." In Proceedings of the first international conference on
Integrated internet ad hoc and sensor networks, p. 1. ACM, New York, NY, USA. 2006.
[14] Reynolds, Craig. "Big fast crowds on ps3." In Proceedings of the 2006 ACM SIGGRAPH
symposium on Videogames, pp. 113-121. ACM, New York, NY, USA. 2006.
[15] Silver, David. "Cooperative Pathfinding." In AIIDE, pp. 117-122. Edmonton, CA. University of
Alberta, 2005.
[16] SPE IDE - Stani's Python Editor, [online] 2013, http://pythonide.blogspot.ca/ (Accessed:
November 5th 2014).
[17] Sweigart, Albert. "Making Games with Python & Pygame." Albert Sweigart, 2012.
[18] Tambe, Milind, W. Lewis Johnson, Randolph M. Jones, Frank Koss, John E. Laird, Paul S.
Rosenbloom, and Karl Schwamb. "Intelligent agents for interactive simulation environments."
AI magazine 16, no. 1 (1995): 15.
[19] Chapter 1. First Steps, VirtualBox – Oracle VM VirtualBox, [online] 2013,
https://www.virtualbox.org/manual/ch01.html (Accessed: November 5th 2014).
[20] Wang, Hong. "Engaging Students with Game Programming in Python." In Proceedings of the
World Congress on Engineering and Computer Science, vol. 1. 2009.
[21] Wiki, Pygame, [online] 2014, http://www.pygame.org/wiki/about (Accessed: January 20th
2014).
[22] Simao, Elísio Maciel. "Software Development Methodologies." (2009).
Appendix A: Input File (Input.xml)
<?xml version="1.0"?>
<map width="20" height="20">
  <node xpos="0" ypos="0" id="n1" />
  <node xpos="5" ypos="0" id="n2" />
  <node xpos="10" ypos="0" id="n3" />
  <node xpos="15" ypos="0" id="n4" />
  <node xpos="20" ypos="0" id="n5" />
  <node xpos="0" ypos="5" id="n6" />
  <node xpos="5" ypos="5" id="n7" />
  <node xpos="10" ypos="5" id="n8" />
  <node xpos="15" ypos="5" id="n9" />
  <node xpos="20" ypos="5" id="n10" />
  <node xpos="0" ypos="10" id="n11" />
  <node xpos="5" ypos="10" id="n12" />
  <node xpos="15" ypos="10" id="n14" />
  <node xpos="20" ypos="10" id="n15" />
  <node xpos="5" ypos="15" id="n17" />
  <node xpos="10" ypos="15" id="n18" />
  <node xpos="15" ypos="15" id="n19" />
  <node xpos="20" ypos="15" id="n20" />
  <node xpos="0" ypos="20" id="n21" />
  <node xpos="5" ypos="20" id="n22" />
  <node xpos="10" ypos="20" id="n23" />
  <node xpos="15" ypos="20" id="n24" />
  <node xpos="20" ypos="20" id="n25" />
  <edge start="n1" end="n2" />
  <edge start="n2" end="n3" />
  <edge start="n3" end="n4" />
  <edge start="n4" end="n5" />
  <edge start="n1" end="n6" />
  <edge start="n2" end="n7" />
  <edge start="n3" end="n8" />
  <edge start="n4" end="n9" />
  <edge start="n5" end="n10" />
  <edge start="n6" end="n7" />
  <edge start="n7" end="n8" />
  <edge start="n8" end="n9" />
  <edge start="n9" end="n10" />
  <edge start="n6" end="n11" />
  <edge start="n8" end="n12" />
  <edge start="n8" end="n14" />
  <edge start="n10" end="n15" />
  <edge start="n11" end="n12" />
  <edge start="n12" end="n14" >
    <point dist="50" id="start" label="H" />
  </edge>
  <edge start="n14" end="n15" />
  <edge start="n11" end="n21" />
  <edge start="n12" end="n17" />
  <edge start="n14" end="n19" />
  <edge start="n15" end="n20" />
  <edge start="n17" end="n18" />
  <edge start="n18" end="n19" />
  <edge start="n19" end="n20" />
  <edge start="n17" end="n22" />
  <edge start="n18" end="n23" />
  <edge start="n19" end="n24" />
  <edge start="n20" end="n25" />
  <edge start="n21" end="n22" />
  <edge start="n22" end="n23" />
  <edge start="n23" end="n24" />
  <edge start="n24" end="n25" />
</map>
Appendix B: Raw Output

01:12:54 03/23/14  6000 ticks  301.573712111 seconds
Configuration: Major 0.8 Minor 0.2  Major 25 Minor 2  Score: 52

01:24:08 03/23/14  6000 ticks  301.628214121 seconds
Configuration: Major 0.8 Minor 0.2  Major 24 Minor 2  Score: 50

01:42:06 03/23/14  6000 ticks  301.50989604 seconds
Configuration: Major 0.8 Minor 0.2  Major 25 Minor 0  Score: 50

01:49:54 03/23/14  6000 ticks  301.412177801 seconds
Configuration: Major 0.8 Minor 0.2  Major 25 Minor 1  Score: 51

01:58:12 03/23/14  6000 ticks  301.619388103 seconds
Configuration: Major 0.8 Minor 0.2  Major 23 Minor 5  Score: 51

02:11:07 03/23/14  6000 ticks  301.44285512 seconds
Configuration: Major 0.8 Minor 0.2  Major 24 Minor 1  Score: 49

02:17:27 03/23/14  6000 ticks  301.569565058 seconds
Configuration: Major 0.8 Minor 0.2  Major 26 Minor 0  Score: 52

02:40:21 03/23/14  6000 ticks  301.606924057 seconds
Configuration: Major 0.8 Minor 0.2  Major 22 Minor 2  Score: 46

03:35:38 03/23/14  6000 ticks  301.44690299 seconds
Configuration: Major 0.5 Minor 0.5  Major 19 Minor 12  Score: 50

04:45:28 03/23/14  6000 ticks  301.492259979 seconds
Configuration: Major 0.5 Minor 0.5  Major 15 Minor 17  Score: 47

04:50:59 03/23/14  6000 ticks  301.722790956 seconds
Configuration: Major 0.5 Minor 0.5  Major 14 Minor 15  Score: 43

05:04:17 03/23/14  6000 ticks  301.683135033 seconds
Configuration: Major 0.5 Minor 0.5  Major 12 Minor 16  Score: 40

05:11:27 03/23/14  6000 ticks  301.555905819 seconds
Configuration: Major 0.5 Minor 0.5  Major 19 Minor 16  Score: 54

12:56:46 03/23/14  6000 ticks  301.604640961 seconds
Configuration: Major 0.5 Minor 0.5  Major 16 Minor 14  Score: 46

13:06:20 03/23/14  6000 ticks  301.51174593 seconds
Configuration: Major 0.5 Minor 0.5  Major 19 Minor 12  Score: 50

13:37:11 03/23/14  6000 ticks  301.533239126 seconds
Configuration: Major 0.5 Minor 0.5  Major 20 Minor 14  Score: 54

15:22:48 03/23/14  6000 ticks  301.458903074 seconds
Configuration: Major 0.2 Minor 0.8  Major 1 Minor 27  Score: 29

17:21:52 03/23/14  6000 ticks  301.725275993 seconds
Configuration: Major 0.2 Minor 0.8  Major 1 Minor 25  Score: 27

17:35:47 03/23/14  6000 ticks  301.25415802 seconds
Configuration: Major 0.2 Minor 0.8  Major 1 Minor 27  Score: 29

18:06:36 03/23/14  6000 ticks  301.500452042 seconds
Configuration: Major 0.2 Minor 0.8  Major 1 Minor 22  Score: 24

18:14:07 03/23/14  6000 ticks  301.355351925 seconds
Configuration: Major 0.2 Minor 0.8  Major 1 Minor 27  Score: 29

18:39:44 03/23/14  6000 ticks  301.676365137 seconds
Configuration: Major 0.2 Minor 0.8  Major 3 Minor 20  Score: 26

18:45:30 03/23/14  6000 ticks  301.464890003 seconds
Configuration: Major 0.2 Minor 0.8  Major 1 Minor 25  Score: 27

19:07:54 03/23/14  6000 ticks  301.626510859 seconds
Configuration: Major 0.2 Minor 0.8  Major 1 Minor 26  Score: 28

19:21:10 03/23/14  6000 ticks  301.551854134 seconds
Configuration: Major 1.0 Minor 0.1  Major 23 Minor 1  Score: 47

19:30:21 03/23/14  6000 ticks  301.634322882 seconds
Configuration: Major 1.0 Minor 0.1  Major 25 Minor 1  Score: 51

22:37:42 03/23/14  6000 ticks  301.553597212 seconds
Configuration: Major 1.0 Minor 0.1  Major 26 Minor 0  Score: 52

22:46:26 03/23/14  6000 ticks  301.798549891 seconds
Configuration: Major 1.0 Minor 0.1  Major 24 Minor 2  Score: 50

23:27:07 03/23/14  6000 ticks  301.477723122 seconds
Configuration: Major 1.0 Minor 0.1  Major 23 Minor 3  Score: 49

01:39:49 03/24/14  6000 ticks  301.62116003 seconds
Configuration: Major 1.0 Minor 0.1  Major 27 Minor 2  Score: 56

02:20:15 03/24/14  6000 ticks  301.661678076 seconds
Configuration: Major 1.0 Minor 0.1  Major 25 Minor 3  Score: 53

02:43:02 03/24/14  6000 ticks  301.432403088 seconds
Configuration: Major 1.0 Minor 0.1  Major 23 Minor 1  Score: 47

11:43:31 03/24/14  6000 ticks  301.728163004 seconds
Configuration: Major 0.1 Minor 1.0  Major 1 Minor 23  Score: 25

11:56:27 03/24/14  6000 ticks  301.887953043 seconds
Configuration: Major 0.1 Minor 1.0  Major 1 Minor 23  Score: 25

12:04:04 03/24/14  6000 ticks  301.626777887 seconds
Configuration: Major 0.1 Minor 1.0  Major 4 Minor 23  Score: 31

12:11:23 03/24/14  6000 ticks  301.648648977 seconds
Configuration: Major 0.1 Minor 1.0  Major 2 Minor 30  Score: 34

12:24:46 03/24/14  6000 ticks  301.63279295 seconds
Configuration: Major 0.1 Minor 1.0  Major 1 Minor 24  Score: 26

12:30:08 03/24/14  6000 ticks  301.589997053 seconds
Configuration: Major 0.1 Minor 1.0  Major 1 Minor 27  Score: 29

12:39:50 03/24/14  6000 ticks  301.471927881 seconds
Configuration: Major 0.1 Minor 1.0  Major 3 Minor 25  Score: 31

13:05:26 03/24/14  6000 ticks  301.619972944 seconds
Configuration: Major 0.1 Minor 1.0  Major 1 Minor 28  Score: 30
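The per-configuration averages discussed in the thesis can be recomputed from log entries like the ones above. A minimal sketch, not part of the original submission; it assumes the two-line entry format shown here, where each "Configuration:" line carries the priority pair followed by the call counts and score:

```python
from collections import defaultdict

def average_scores(lines):
    """Group 'Configuration: ...' log lines by their (Major, Minor)
    priority pair and return the mean score for each configuration."""
    totals = defaultdict(list)
    for line in lines:
        if not line.startswith("Configuration:"):
            continue
        parts = line.split()
        # parts: ['Configuration:', 'Major', p1, 'Minor', p2,
        #         'Major', n1, 'Minor', n2, 'Score:', s]
        config = (float(parts[2]), float(parts[4]))
        score = int(parts[-1])
        totals[config].append(score)
    return {cfg: sum(s) / float(len(s)) for cfg, s in totals.items()}

sample = [
    "Configuration: Major 0.8 Minor 0.2  Major 25 Minor 2  Score: 52",
    "Configuration: Major 0.8 Minor 0.2  Major 24 Minor 2  Score: 50",
]
print(average_scores(sample))  # {(0.8, 0.2): 51.0}
```

Run over the full output file, this reproduces the averages compared across the five priority configurations.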
Appendix C: Source Code
Project.py

# Ambulance Response Simulation
# CJ Bennett
# COSC Thesis Project
# Algoma University

import pygame, sys
from pygame.locals import *
from CityMap import CityMap
from Ambulance import Ambulance
from VehicleController import VehicleController
import math
import random
import time

TPS = 20 #max speed of simulation in ticks per second
WINDOWHEIGHT = 800
WINDOWWIDTH = 800
HALFWINDOWHEIGHT = WINDOWHEIGHT / 2
HALFWINDOWWIDTH = WINDOWWIDTH / 2
TOPSPACE = 10;
AVGTICKSPEREVENT = 100
TICKCOUNT = 0;
LASTEVENT = 0;
MAXTICKS = TPS * 300; #run for 300 seconds, 5 minutes

SPEEDLIMIT = 20 #max speed on any street

MAPFILENAME = "map.xml" #name of the file to load the map from
OUTPUTFILENAME = "output.txt" #default name of the output file

#                 R    G    B
CARCOLOR       = (172, 172, 172) #normal car color
AMBULANCECOLOR = (255,  64,  64) #ambulance car color
LINECOLOR      = (  0,   0,   0) #street line color
BGCOLOR        = (255, 255, 255) #map background color
TEXTCOLOR      = (255,   0, 255) #text display color

NUMOFCARS = 30

def main():
    global TICKCOUNT, LASTEVENT;
    pygame.init() #initialize
    FPSCLOCK = pygame.time.Clock() #load clock
    startTime = time.time();
    DISPLAYSURF = pygame.display.set_mode((WINDOWWIDTH, WINDOWHEIGHT + TOPSPACE))
    pygame.display.set_caption('Ambulance Response Simulation')
    BASICFONT = pygame.font.Font('freesansbold.ttf', 16)
    #load map data
    map = CityMap.createMap(MAPFILENAME, WINDOWWIDTH, WINDOWHEIGHT, BGCOLOR, LINECOLOR, BASICFONT)
    #generate and save street vehicles
    controller = VehicleController(NUMOFCARS, map, CARCOLOR, AMBULANCECOLOR);

    while True: #one simulation loop
        #determine if time is up
        if (TICKCOUNT >= MAXTICKS):
            break;
        #determine if random event time
        halfRate = int(AVGTICKSPEREVENT / 2);
        randOffset = random.randint(AVGTICKSPEREVENT - halfRate, AVGTICKSPEREVENT + halfRate);
        if (TICKCOUNT == 0 or TICKCOUNT - LASTEVENT >= randOffset):
            LASTEVENT = TICKCOUNT;
            map.goalSpots[map.getRandomAddress()] = random.randint(1, 2);
        DISPLAYSURF.fill(BGCOLOR);
        currentFps = round(FPSCLOCK.get_fps(), 1);
        fpsText = BASICFONT.render('FPS: ' + str(currentFps), 1, TEXTCOLOR)
        fpsRect = fpsText.get_rect()
        fpsRect.topleft = (10, 5)
        controller.update();
        map.update();
        mapSurf = map.drawMap()
        vehicleSurf = controller.drawVehicles(mapSurf);
        DISPLAYSURF.blit(mapSurf, (0, TOPSPACE))
        DISPLAYSURF.blit(fpsText, fpsRect)
        #handle all event data
        for event in pygame.event.get():
            if event.type == QUIT:
                pygame.quit()
                sys.exit()
        pygame.display.update()
        TICKCOUNT += 1;
        FPSCLOCK.tick(TPS)

    #output file
    file_object = open(OUTPUTFILENAME, "a");
    typeCount = {"Major": 0, "Minor": 0};
    for key in controller.points:
        if (key == 1):
            key = "Major";
        elif (key == 2):
            key = "Minor";
        else:
            key = "Unknown";
        if (typeCount.get(key) is None):
            typeCount[key] = 0;
        typeCount[key] = typeCount.get(key) + 1;
    date = time.strftime("%X") + " " + time.strftime("%x");
    duration = time.time() - startTime;
    toOutput = date + "\n" + str(TICKCOUNT) + " ticks\n" \
        + str(duration) + " seconds\n\nConfiguration:\n Major " \
        + str(Ambulance.PRIORITY_MAJOR) + " Minor " + str(Ambulance.PRIORITY_MINOR) \
        + "\n Major " + str(typeCount.get("Major")) \
        + " Minor " + str(typeCount.get("Minor")) + "\n Score: " + str(controller.getScore()) + "\n\n";
    print(toOutput);
    file_object.write(toOutput);

if __name__ == '__main__':
    main()
CityMap.py

from xml.dom.minidom import parse
from IntersectionNode import IntersectionNode
from NavigationEdge import NavigationEdge
from AddressNode import AddressNode
from Search import Search
from Steering import Steering
import pygame
import math
import random

class CityMap:
    INSET = 20

    def __init__(self, mapWidth, mapHeight):
        self.mapWidth = mapWidth;
        self.mapHeight = mapHeight;
        self.mapNodes = [];
        self.mapEdges = [];
        self.BGCOLOR = (255, 255, 255);
        self.LINECOLOR = (0, 0, 0);
        self.FONT = None;
        self.DISPLAYWIDTH = 0;
        self.DISPLAYHEIGHT = 0;
        self.BASEMAP = None;
        self.testMode = 0;
        self.startPoint = None;
        self.goalSpots = {};

    @staticmethod
    def createMap(MapFile, displayWidth, displayHeight, BGCOLOR, LINECOLOR, FONT):
        #open file
        datasource = open(MapFile);
        dom = parse(datasource)
        #get map element and map size attributes
        map = dom.getElementsByTagName("map")
        firstMap = map[0]
        newMap = CityMap(int(firstMap.attributes["width"].value), int(firstMap.attributes["height"].value))
        #load map nodes
        nodeList = newMap.mapNodes
        nodes = firstMap.getElementsByTagName("node")
        for node in nodes:
            nodeList.append(IntersectionNode(node.attributes["id"].value, int(node.attributes["xpos"].value), int(node.attributes["ypos"].value)))
        #load map edges
        edges = firstMap.getElementsByTagName("edge")
        for edge in edges:
            node1Id = edge.attributes["start"].value
            node2Id = edge.attributes["end"].value
            node1 = 0;
            node2 = 0;
            #find nodes by ID
            for index, item in enumerate(newMap.mapNodes):
                if item.index == node1Id:
                    node1 = item
                if item.index == node2Id:
                    node2 = item
            #set nodes so edges point top -> bottom, left -> right
            if (node1.yGraphPos > node2.yGraphPos):
                temp = node1;
                node1 = node2;
                node2 = temp;
            elif (node1.yGraphPos == node2.yGraphPos):
                if (node1.xGraphPos > node2.xGraphPos):
                    temp = node1;
                    node1 = node2;
                    node2 = temp;
            #create new edge connecting nodes
            newEdge = NavigationEdge(node1, node2);
            newMap.mapEdges.append(newEdge)
            #increment node edge count
            node1.edgeCount += 1;
            node2.edgeCount += 1;
            children = edge.getElementsByTagName("point");
            for point in children:
                newPoint = newEdge.addPoint(point.attributes["id"].value, point.attributes["dist"].value, point.attributes["label"].value);
                if (newPoint.index == "start"):
                    newMap.startPoint = newPoint;
        #set map globals
        newMap.BGCOLOR = BGCOLOR
        newMap.LINECOLOR = LINECOLOR
        newMap.FONT = FONT
        newMap.DISPLAYWIDTH = displayWidth
        newMap.DISPLAYHEIGHT = displayHeight
        newMap.left = newMap.INSET
        newMap.top = newMap.INSET
        newMap.right = displayWidth - newMap.INSET
        newMap.bottom = displayHeight - newMap.INSET
        newMap.unitX = float(newMap.right - newMap.left) / newMap.mapWidth
        newMap.unitY = float(newMap.bottom - newMap.top) / newMap.mapHeight
        newMap.updateMap();
        return newMap

    def drawMap(self):
        mapSurf = pygame.Surface((self.DISPLAYWIDTH, self.DISPLAYHEIGHT))
        mapSurf.blit(self.BASEMAP, (0, 0))
        return mapSurf

    def updateMap(self):
        self.BASEMAP = pygame.Surface((self.DISPLAYWIDTH, self.DISPLAYHEIGHT))
        mapSurf = self.BASEMAP
        mapSurf.fill(self.BGCOLOR)
        #draw test grid
        if (self.testMode):
            for xpos in range(self.mapWidth + 1):
                pygame.draw.line(mapSurf, self.LINECOLOR, (self.findXPos(xpos), self.top), (self.findXPos(xpos), self.bottom))
            for ypos in range(self.mapHeight + 1):
                pygame.draw.line(mapSurf, self.LINECOLOR, (self.left, self.findYPos(ypos)), (self.right, self.findYPos(ypos)))
        #draw nodes and edges
        for edge in self.mapEdges:
            pygame.draw.line(mapSurf, self.LINECOLOR, self.findPos(edge.source()[0], edge.source()[1]), self.findPos(edge.destination()[0], edge.destination()[1]), 1);
            fullDirection = Steering.subtract((edge.destination()[0], edge.destination()[1]), (edge.source()[0], edge.source()[1]));
            direction = Steering.normalize(fullDirection);
            lineStart = (edge.source()[0], edge.source()[1]);
            if (self.testMode):
                pygame.draw.line(mapSurf, (0, 255, 255), self.findPos(lineStart[0], lineStart[1]), self.findPos(lineStart[0] + direction[0], lineStart[1] + direction[1]), 2);
            for dist in edge.subNodes.keys():
                position = edge.getAddressPos(dist);
                if (self.testMode):
                    pygame.draw.circle(mapSurf, self.LINECOLOR, self.findPos(position[0], position[1]), 3);
                if (edge.subNodes.get(dist) is self.startPoint):
                    labelText = self.FONT.render(self.startPoint.label, 1, self.LINECOLOR)
                    labelRect = labelText.get_rect()
                    realPos = self.findPos(position[0], position[1]);
                    labelRect.topleft = (realPos[0], realPos[1])
                    mapSurf.blit(labelText, labelRect)
        for node in self.mapNodes:
            for subNode in node.getSubNodes():
                pygame.draw.circle(mapSurf, node.getState(), self.findPos(subNode.xGraphPos, subNode.yGraphPos), 5)

    def findXPos(self, xPos):
        return self.left + int(xPos * self.unitX);

    def findYPos(self, yPos):
        return self.top + int(yPos * self.unitY);

    def findPos(self, xPos, yPos):
        newXPos = self.findXPos(xPos);
        newYPos = self.findYPos(yPos);
        return (newXPos, newYPos);

    def addressNodes(self):
        list = {};
        for edge in self.mapEdges:
            for dist in edge.subNodes.keys():
                position = edge.getAddressPos(dist);
                list[edge.subNodes[dist]] = position;
        return list;

    def findRoute(self, start, end):
        if (start.edge == end.edge):
            return [[end], {end: end.edge}];
        path = Search.BFS(start, end, self);
        if (path is None):
            path = [[], {}];
        path[0].append(end);
        path[1][end] = end.edge;
        return path;

    def update(self):
        for node in self.mapNodes:
            node.update();
        self.updateMap();

    def getRandomAddress(self):
        edge = random.choice(self.mapEdges);
        while (len(edge.subNodes) < 1):
            edge = random.choice(self.mapEdges);
        position = edge.subNodes[random.choice(edge.subNodes.keys())];
        return position;
GraphNode.py

class GraphNode (object):
    def __init__(self, index):
        self.index = index

    def __eq__(self, other):
        return self.index == other.index

    def __str__(self):
        return self.index;
AddressNode.py

from GraphNode import GraphNode

class AddressNode (GraphNode):

    def __init__(self, index, distFromSource, edge):
        self.dist = distFromSource;
        self.edge = edge;
        GraphNode.__init__(self, index);

    def getPos(self):
        return self.edge.getAddressPos(self.dist);

    def __str__(self):
        return self.edge.fromNode.index + " -" + str(self.dist) + "- " + self.edge.toNode.index;

    def isReady(self):
        return True;

    def isStop(self):
        return True;

    def getSubNodes(self):
        return [self];

    def isAddress(self):
        return True;

    def getLabel(self):
        return None;
LabeledNode.py

from AddressNode import AddressNode

class LabeledNode (AddressNode):
    def __init__(self, index, distFromSource, edge, label):
        self.label = label;
        AddressNode.__init__(self, index, distFromSource, edge);

    def getLabel(self):
        return self.label;
PositionNode.py

from GraphNode import GraphNode

class PositionNode (GraphNode):
    def __init__(self, parent, xPos, yPos):
        self.xGraphPos = xPos;
        self.yGraphPos = yPos;
        GraphNode.__init__(self, parent.index + " " + str(xPos) + ", " + str(yPos));

    def isPosition(self):
        return True;
IntersectionNode.py

from GraphNode import GraphNode
from Tools import Tools
from PositionNode import PositionNode

class IntersectionNode (GraphNode):
    LIGHTS = Tools.enum(RED = (230, 0, 0),
                        YELLOW = (230, 230, 0),
                        GREEN = (0, 255, 0));
    LIGHT_THRESHOLD = float(2.0 / 3.0);

    def __init__(self, index, xGraph, yGraph):
        self.xGraphPos = xGraph
        self.yGraphPos = yGraph
        self.lightMax = 50;
        self.lightMin = -50;
        self.lightTick = 0;
        self.edgeCount = 0;
        self.subNodes = [];
        GraphNode.__init__(self, index)

    def getPos(self):
        return (self.xGraphPos, self.yGraphPos);

    def isStop(self):
        if (self.edgeCount > 2):
            return True;
        return False;

    def isReady(self):
        if (self.getState() == self.LIGHTS.GREEN):
            return True;
        else:
            return False;

    def getState(self):
        if (self.lightTick < 0):
            return self.LIGHTS.RED;
        elif (self.lightTick >= 0):
            if (float(self.lightTick) / float(self.lightMax) <= self.LIGHT_THRESHOLD):
                return self.LIGHTS.GREEN;
            else:
                return self.LIGHTS.YELLOW;

    def update(self):
        self.lightTick += 1;
        if (self.lightTick >= self.lightMax):
            self.lightTick = self.lightMin;

    def addPositionNode(self, xOffset, yOffset):
        newNode = PositionNode(self, self.xGraphPos + xOffset, self.yGraphPos + yOffset);
        self.subNodes.append(newNode);
        return newNode;

    def getSubNodes(self):
        return self.subNodes;

    def isAddress(self):
        return False;

    def isPosition(self):
        return False;
GraphEdge.py

import math
from AddressNode import AddressNode

class GraphEdge (object):
    def __init__(self, fromNode, toNode, cost=1.0):
        self.fromNode = fromNode
        self.toNode = toNode
        self.cost = cost
        if (self.cost == 1.0 and self.cost > 0):
            self.cost = self.distance();

    def distance(self):
        pos1 = self.fromNode.getPos();
        pos2 = self.toNode.getPos();
        x1 = pos1[0]
        y1 = pos1[1]
        x2 = pos2[0]
        y2 = pos2[1]
        xdiff = self.difference(x1, x2);
        ydiff = self.difference(y1, y2);
        dist = math.sqrt(math.pow(xdiff, 2) + math.pow(ydiff, 2))
        return dist

    def difference(self, num1, num2):
        diff = 0;
        if num1 > num2:
            diff = num1 - num2
        else:
            diff = num2 - num1
        return diff;

    def source(self):
        return (self.fromNode.xGraphPos, self.fromNode.yGraphPos)

    def destination(self):
        return (self.toNode.xGraphPos, self.toNode.yGraphPos)
NavigationEdge.py

from GraphEdge import GraphEdge
from AddressNode import AddressNode
from LabeledNode import LabeledNode
from Steering import Steering
import math

class NavigationEdge (GraphEdge):
    NAVIGATION_OFFSET = 0.2;
    NAVIGATION_PADDING = 0.2;

    def __init__(self, fromNode, toNode, cost=1.0):
        GraphEdge.__init__(self, fromNode, toNode, cost);
        self.subNodes = {};
        if (self.cost >= 1.8 and self.cost > 0):
            self.createAddresses();
        self.createNavigationNodes();

    def createAddresses(self):
        nodeList = self.splitEdge(0.0, self.cost);
        for node in nodeList:
            self.subNodes[node.dist] = node;

    def splitEdge(self, start, end):
        if (end - start <= 1.8):
            return None;
        pos = float(start) + float(end - start) / 2.0
        list = [];
        new1 = self.splitEdge(start, pos);
        if new1 is not None:
            list.extend(new1);
        list.append(AddressNode("", pos, self));
        new2 = self.splitEdge(pos, end);
        if new2 is not None:
            list.extend(new2);
        return list;

    def getAddressPos(self, dist):
        diffX = self.toNode.xGraphPos - self.fromNode.xGraphPos;
        diffY = self.toNode.yGraphPos - self.fromNode.yGraphPos;
        percent = float(dist / self.cost);
        return (float(self.fromNode.xGraphPos) + percent * diffX,
                float(self.fromNode.yGraphPos) + percent * diffY);

    def createNavigationNodes(self):
        fullDirection = Steering.subtract((self.destination()[0], self.destination()[1]), (self.source()[0], self.source()[1]));
        direction = Steering.normalize(fullDirection);
        direction = [direction[0] * self.NAVIGATION_PADDING, direction[1] * self.NAVIGATION_PADDING];
        angle = math.atan2(fullDirection[1], fullDirection[0]);
        line1 = (self.NAVIGATION_OFFSET * math.cos(angle + math.pi / 2), self.NAVIGATION_OFFSET * math.sin(angle + math.pi / 2));
        line2 = (self.NAVIGATION_OFFSET * math.cos(angle - math.pi / 2), self.NAVIGATION_OFFSET * math.sin(angle - math.pi / 2));
        self.sourceRight = self.fromNode.addPositionNode(line1[0] + direction[0], line1[1] + direction[1]);
        self.sourceLeft = self.fromNode.addPositionNode(line2[0] + direction[0], line2[1] + direction[1]);
        self.destinationRight = self.toNode.addPositionNode(line1[0] - direction[0], line1[1] - direction[1]);
        self.destinationLeft = self.toNode.addPositionNode(line2[0] - direction[0], line2[1] - direction[1]);

    def getDirectionPosition(self, target, nextTarget):
        if (target == self.fromNode and nextTarget == self.toNode):
            return [self.sourceRight, self.destinationRight];
        elif (target == self.toNode and nextTarget == self.fromNode):
            return [self.sourceLeft, self.destinationLeft];
        elif (target == self.fromNode and nextTarget.isAddress()):
            return [self.sourceRight, nextTarget];
        elif (target == self.toNode and nextTarget.isAddress()):
            return [self.destinationLeft, nextTarget];
        elif (target == self.fromNode):
            return [self.sourceRight, nextTarget];
        elif (target == self.toNode):
            return [self.destinationLeft, nextTarget];
        else:
            return [target, nextTarget];

    def getSourcePosition(self, forwards):
        if (forwards):
            return self.sourceRight;
        else:
            return self.sourceLeft;

    def getDestinationPosition(self, forwards):
        if (forwards):
            return self.destinationRight;
        else:
            return self.destinationLeft;

    def __str__(self):
        return str(self.fromNode.index) + " - " + str(self.toNode.index);

    def addPoint(self, pointId, pointDistance, label):
        pointSpot = float(pointDistance) / 100 * self.cost;
        for dist in list(self.subNodes):
            if (dist < pointSpot + 1 and dist > pointSpot - 1):
                del self.subNodes[dist];
        newPoint = LabeledNode(pointId, pointSpot, self, label);
        self.subNodes[pointSpot] = newPoint;
        return newPoint;
TemporaryEdge.py

from GraphEdge import GraphEdge

class TemporaryEdge (GraphEdge):
    def __init__(self, fromNode, toNode, referringEdge, cost=1.0):
        GraphEdge.__init__(self, fromNode, toNode, cost);
        self.edgePointer = referringEdge;
Search.py

from Queue import *
from TemporaryEdge import TemporaryEdge

class Search:
    @staticmethod
    def BFS(start, end, map):
        visited = {start.edge.fromNode: True, start.edge.toNode: True};
        route = {};
        edgeList = {};
        queue = Queue(maxsize=0);
        queue.put(TemporaryEdge(start, start.edge.fromNode, start.edge));
        queue.put(TemporaryEdge(start, start.edge.toNode, start.edge));
        while (not queue.empty()):
            edge = queue.get();
            if ((edge.toNode == end.edge.fromNode and edge.fromNode == end.edge.toNode) or
                    (edge.toNode == end.edge.toNode and edge.fromNode == end.edge.fromNode)):
                return Search.returnPath(route, edgeList, edge.fromNode, start);
            if (route.get(edge.toNode) is None):
                route[edge.toNode] = edge.fromNode;
                edgeList[edge.toNode] = edge.edgePointer;
            for e in map.mapEdges:
                if (e == edge):
                    continue;
                if (e.fromNode == edge.toNode and visited.get(e.toNode) is None):
                    queue.put(TemporaryEdge(e.fromNode, e.toNode, e, e.cost));
                    visited[e.fromNode] = True;
                elif (e.toNode == edge.toNode and visited.get(e.fromNode) is None):
                    queue.put(TemporaryEdge(e.toNode, e.fromNode, e, e.cost));
                    visited[e.toNode] = True;
        return None;

    @staticmethod
    def returnPath(route, edgeList, end, start):
        path = [];
        currentNode = end;
        while (currentNode != start):
            path.append(currentNode);
            currentNode = route[currentNode];
        path.reverse();
        return [path, edgeList];
VehicleController.py

from Vehicle import Vehicle
from Ambulance import Ambulance
from CityMap import CityMap
from Steering import Steering
import pygame
import random
import math

class VehicleController:
    def __init__(self, numOfCars, cityMap, CARCOLOR, AMBULANCECOLOR):
        self.cityMap = cityMap;
        self.carList = [];
        self.CARCOLOR = CARCOLOR;
        self.AMBULANCECOLOR = AMBULANCECOLOR;
        self.points = [];
        addresses = cityMap.addressNodes();
        listLength = len(addresses) - 1;
        placed = {};
        for index in range(min(numOfCars, listLength)):
            vehicle = None;
            if (index < 1):
                vehicle = Ambulance();
                vehicle.edge = cityMap.startPoint.edge;
                vehicle.xPos = addresses[cityMap.startPoint][0];
                vehicle.yPos = addresses[cityMap.startPoint][1];
                vehicle.targetNode = cityMap.startPoint;
                del addresses[cityMap.startPoint];
            else:
                vehicle = Vehicle();
                edge = position = None;
                while (position is None):
                    edge = random.choice(cityMap.mapEdges);
                    while (len(edge.subNodes) < 1):
                        edge = random.choice(cityMap.mapEdges);
                    position = edge.subNodes[random.choice(edge.subNodes.keys())];
                    if (position not in addresses):
                        position = None;
                vehicle.xPos = addresses[position][0];
                vehicle.yPos = addresses[position][1];
                vehicle.targetNode = position;
                if (placed.get(position) is None):
                    del addresses[position];
                    vehicle.direction = vehicle.direction + 180;
                else:
                    placed[position] = vehicle;
            self.carList.append(vehicle);

    #used to update all vehicles every tick
    def update(self):
        xMap = {};
        #loop through all vehicles, grouping them by rounded X and Y coordinates
        for car in self.carList:
            xGroup = round(car.xPos);
            yGroup = round(car.yPos);
            if (xMap.get(xGroup) is None):
                xMap[xGroup] = {};
            yMap = xMap.get(xGroup);
            if (yMap.get(yGroup) is None):
                yMap[yGroup] = [];
            group = yMap.get(yGroup);
            group.append(car);
        #loop through all vehicles, setting their neighbors from the groups and updating them
        for car in self.carList:
            #round X and Y positions to find groups
            xGroup = round(car.xPos);
            yGroup = round(car.yPos);
            #add the groups around the vehicle's group to the list
            groups = [];
            if (xGroup > 0):
                if (xMap.get(xGroup - 1) is not None):
                    groups.append(xMap.get(xGroup - 1));
            if (xGroup < self.cityMap.mapWidth):
                if (xMap.get(xGroup + 1) is not None):
                    groups.append(xMap.get(xGroup + 1));
            if (xMap.get(xGroup) is not None):
                groups.append(xMap.get(xGroup));
            #for each map in the group list, add the applicable lists to the list of neighbors
            neighbors = [];
            for yMap in groups:
                if (yGroup > 0):
                    if (yMap.get(yGroup - 1) is not None):
                        neighbors.extend(yMap.get(yGroup - 1));
                if (yGroup < self.cityMap.mapHeight):
                    if (yMap.get(yGroup + 1) is not None):
                        neighbors.extend(yMap.get(yGroup + 1));
                if (yMap.get(yGroup) is not None):
                    neighbors.extend(yMap.get(yGroup));
            car.neighbors = neighbors;
            #if a vehicle has arrived then give it a new goal location
            if (car.arrived):
                target = None;
                #if it's the ambulance then decide the best destination
                if (car.isGeneric() is False):
                    if (car.isReturned()):
                        if (car.contents is not None):
                            self.points.append(car.contents);
                            car.contents = None;
                        best = float("inf");
                        bestNode = None;
                        for index, node in enumerate(self.cityMap.goalSpots):
                            priority = self.cityMap.goalSpots.get(node);
                            position = node.edge.getAddressPos(node.dist);
                            distance = Steering.distance([position[0] - car.xPos, position[1] - car.yPos]);
                            if (priority == 1):
                                distance *= 1 / Ambulance.PRIORITY_MAJOR;
                            elif (priority == 2):
                                distance *= 1 / Ambulance.PRIORITY_MINOR;
                            if (distance < best):
                                bestNode = node;
                                best = distance;
                        car.returned = False;
                        target = bestNode;
                    else:
                        value = self.cityMap.goalSpots[car.targetNode];
                        self.cityMap.goalSpots.pop(car.targetNode, None);
                        car.contents = value;
                        target = self.cityMap.startPoint;
                        car.returned = True;
                #normal cars
                else:
                    addresses = self.cityMap.addressNodes();
                    target = random.choice(addresses.keys());
                path = self.cityMap.findRoute(car.targetNode, target);
                car.goalList = path[0];
                car.edgeMap = path[1];
                car.targetNode = target;
                car.arrived = False;
            car.update();

    def drawVehicles(self, surface):
        for vehicle in self.carList:
            car = pygame.Surface((8, 10));
            if (vehicle.isGeneric()):
                car.fill(self.CARCOLOR);
            else:
                car.fill(self.AMBULANCECOLOR);
            if (self.cityMap.testMode):
                pygame.draw.line(car, (230, 0, 0), (0, 0), (8, 0), 2);
            car.set_colorkey((255, 255, 255))
            car = pygame.transform.rotate(car, vehicle.direction);
            mapPos = self.cityMap.findPos(vehicle.xPos, vehicle.yPos);
            surface.blit(car, mapPos);
            if (self.cityMap.testMode):
                if (vehicle.getTarget() is not None):
                    pygame.draw.line(surface, (0, 0, 255), self.cityMap.findPos(vehicle.xPos, vehicle.yPos), self.cityMap.findPos(vehicle.getTarget()[0], vehicle.getTarget()[1]), 2);
                if (vehicle.goalList is not None and len(vehicle.goalList) > 0):
                    goal = vehicle.goalList[len(vehicle.goalList) - 1];
                    goalPos = goal.getPos();
                    pygame.draw.line(surface, (255, 0, 0), self.cityMap.findPos(vehicle.xPos, vehicle.yPos), self.cityMap.findPos(goalPos[0], goalPos[1]), 2);

    def getScore(self):
        score = 0;
        for key in self.points:
            if (key == 1):
                score += 2;
            elif (key == 2):
                score += 1;
        return score;
Vehicle.py

import math;
from Steering import Steering

class Vehicle (object):
    def __init__(self):
        self.speed = (0.0, 0.0);
        self.rotation = 0;
        self.xPos = 0.0;
        self.yPos = 0.0;
        self.mass = 500.0;
        self.maxSpeed = 0.5;
        self.minSpeed = 0.01;
        self.force = 10;
        self.turnRate = 5.0;
        self.target = None;
        self.targetNode = None;
        self.goalList = [];
        self.edgeMap = {};
        self.direction = 0;
        self.arrived = 1;
        self.neighbors = [];
        self.returned = True;
        self.contents = None;

    def update(self):
        #get the force applied
        force = Steering.calculateSteering(self);
        #get the strength of the force and dont move if too small
        forceStrength = Steering.distance(force);
        if (forceStrength < 0.002):
            force = [0, 0];
        #if the force is none and at position, target next node
        if (force == [0, 0] and abs(self.xPos - self.getTarget()[0]) < 0.05 and abs(self.yPos - self.getTarget()[1]) < 0.05):
            #if the current node is ready to be removed then clear the target
            if (self.goalList[0].isReady()):
                self.target = None;
                #if the goal list has more than 1 node left, remove the top one and set the new target
                if (len(self.goalList) > 1):
                    self.goalList.pop(0);
                    self.getTarget();
                #otherwise just say the vehicle has arrived and clear the queue
                else:
                    self.arrived = True;
                    self.goalList = None;
            return;
        #calculate the max force that can be applied at once from the mass
        maxForce = self.maxSpeed / self.inertia();
        #scale force to not exceed max force
        if (forceStrength > maxForce):
            percent = float(maxForce / forceStrength);
            force[0] = force[0] * percent;
            force[1] = force[1] * percent;
        #add force to speed, then calculate velocity, if too low set it to none
        self.speed = [self.speed[0] + force[0], self.speed[1] + force[1]];
        velocity = Steering.distance(self.speed);
        if ((velocity > 0 and velocity < 0.001) or (velocity < 0 and velocity > -0.001)):
            self.speed = [0.0, 0.0];
            return;
        #update position using speed
        self.xPos += self.speed[0];
        self.yPos += self.speed[1];
        #update direction
        self.direction = math.degrees(math.atan2(self.speed[0], self.speed[1]));

    def inertia(self):
        return self.mass / 100;

    def getTarget(self):
        if (self.target is None):
            if (self.goalList is not None and len(self.goalList) > 0):
                self.target = self.goalList[0].getPos();
                self.edge = self.edgeMap.get(self.goalList[0]);
        return self.target;

    def isGeneric(self):
        return True;

    def isReturned(self):
        return False;
Ambulance.py

from Vehicle import Vehicle

class Ambulance (Vehicle):
    #priority in range of 0-1, 1.0 being the highest
    PRIORITY_MAJOR = 0.5
    PRIORITY_MINOR = 0.5

    def isGeneric(self):
        return False;

    def isReturned(self):
        return self.returned;
Tools.py

class Tools:
    @staticmethod
    def enum(**enums):
        return type('Enum', (), enums);
Steering.py

import math;

class Steering:

    @staticmethod
    def seek(vehicle):
        toTarget = Steering.subtract(vehicle.getTarget(), [vehicle.xPos, vehicle.yPos]);
        distance = Steering.distance(toTarget);
        if (distance > 0):
            desiredSpeed = Steering.multiply(toTarget, vehicle.maxSpeed / distance);
            return Steering.subtract(desiredSpeed, vehicle.speed);
        return [0.0, 0.0];

    @staticmethod
    def arrive(vehicle):
        toTarget = Steering.subtract(vehicle.getTarget(), [vehicle.xPos, vehicle.yPos]);
        distance = Steering.distance(toTarget);
        if (distance > 0):
            speed = float(distance / (1.5 + vehicle.inertia() / 2));
            speed = min(speed, vehicle.maxSpeed);
            desiredSpeed = Steering.multiply(toTarget, float(speed / distance));
            return Steering.subtract(desiredSpeed, vehicle.speed);
        return [0.0, 0.0];

    @staticmethod
    def calculateSteering(vehicle):
        force = [0, 0];
        #if the current target node is already ready to drive past, dont slow down
        #if (vehicle.goalList[0].isReady() and not vehicle.goalList[0].isStop()):
        #    force = Steering.seek(vehicle);
        #else:
        force = Steering.arrive(vehicle);
        return force;

    @staticmethod
    def distance(vector):
        return math.hypot(vector[0], vector[1]);

    @staticmethod
    def normalize(vector):
        length = Steering.distance(vector);
        return [float(vector[0]) / float(length), float(vector[1]) / float(length)];

    @staticmethod
    def multiply(vector, value):
        list = [];
        for index in range(len(vector)):
            list.append(float(vector[index]) * float(value));
        return list;

    @staticmethod
    def subtract(vector1, vector2):
        list = [];
        length = min(len(vector1), len(vector2));
        for index in range(length):
            list.append(vector1[index] - vector2[index]);
        return list;