The Effect of Quadcopter Guidance in Crowd Emergency
Evacuation Scenarios: Simulation and Analysis
Michael Huang, Dr. Nilanjan Chakraborty, Dr. Minh Hoai Nguyen
Abstract
In the case of sudden disasters in a populated space, evacuation is often slowed by panicked
crowds clogging exits. Quadcopters are a promising tool for coordinating evacuation efforts to
improve survival rates. However, designing and testing effective quadcopter-aided evacuation systems in
real-world scenarios is cost-prohibitive and unsafe. There is therefore a need for a crowd-robot
simulation system that can verify designs before quadcopter-based evacuation systems are tested and
deployed in the real world. Though many existing components can be found for separate robot
and crowd behavior simulations, none are suitable for integrated crowd-robot simulations that can
generate realistic video feeds for training computer vision algorithms and model crowd interaction with
robots. This study's purpose is to explore and create such a simulation system that can be run on
commercially available hardware. This report first describes the reused simulation components, then the
architecture and implementation of the new simulation system. To demonstrate the feasibility, performance,
and flexibility of the new simulation system, case studies are performed and their results presented.
Finally, the results of the case studies are discussed, along with potential future enhancements
and applications of the system.
1 Introduction
Figure 1-1 Quadcopter trying to redirect crowds
away from a crowded exit
Figure 1-2 Quadcopter bringing a crowd to a
less crowded exit to safety
In disasters such as fires, shootings, or explosions in crowded areas, terrain may be damaged,
and communication may have broken down or been delayed. Life-threatened and confused,
evacuees may panic and cause the so-called "faster-is-slower" effect [1]: "herds" of evacuees try to push
through the same exit as fast as they can, clogging the exits, which in turn causes deaths and injuries
that could have been avoided.
In recent years, quadcopters, or small unmanned aerial vehicles (sUAVs), have undergone
tremendous technological advancement. With increasing computing power, a variety of sensors
(cameras, GPS, gyroscopes), and on-board wireless communication modules, sUAVs have become more
versatile and easier to maneuver. In fact, these flying robots are already being deployed in the field to
gather information about the situation on the ground, assess damage, or even help with search and rescue
[2]. Using a network of UAVs in disaster management has also been explored [3]. Xiong et al. [4]
proposed a UAV fleet control system architecture based on crowd simulation for industrial disaster (e.g.,
large-scale gas leak) evacuations. Taking advantage of the high speed and flexible mobility of quadcopters,
the architecture uses UAVs carrying sensors to detect gas leaks and broadcast the pollution distribution to
pedestrians, who then use the information to avoid polluted areas.
Despite prior research and field applications, we noticed that an integrated crowd and robotics simulation
tool had yet to be developed to help realize the full potential of quadcopters in aiding crowd evacuation
[5]. Such a simulation tool would have many potential applications. For example, there is a great need for
realistic simulated videos/pictures of high-density crowds that can be used to train and validate
computer vision systems in crowd monitoring and analysis applications [6]. A simulation tool that models
the interaction between quadcopters and crowd panic behaviors can also help develop effective evacuation
strategies and plans. Therefore, we propose a new crowd-robot simulation system that reuses the
packages and infrastructure provided by Gazebo and the Robot Operating System [7, 8], with an integration
of Menge [9] as the added crowd simulation engine. Shown in Figure 1-1 and Figure 1-2 are some visuals
of scenarios created using the new simulator.
To our knowledge, no research has examined whether small or mid-sized quadcopters can be an
effective tool in crowd evacuation assistance, specifically, how effectively quadcopter-provided guidance
can reduce the slowdown caused by the "faster-is-slower" effect [1] in crowded situations. To
showcase the capability of the simulator, we used it to study the effectiveness of quadcopter-based
evacuation guidance through multiple case studies of various evacuation scenarios.
The rest of this paper is organized as follows. First, it surveys prior work in both crowd and robot
simulation within the context of this project and identifies the components that are used or extended to create
the integrated crowd-robot simulation system. Then, the high-level architecture and a few implementation
details of the simulation system are described. Next, the case study scenarios are described, followed by the
results of the simulations for those scenarios. After that, the results are analyzed and discussed. Finally, the
conclusion, potential uses of the simulation system, and proposed future enhancements are
presented.
2 Related Work
2.1 Crowd Evacuation Simulation Methods
A significant amount of research [1, 9-19] has been done on understanding and modeling
human and crowd behavior in evacuation scenarios. Many crowd behavioral models and simulators have
been created based on the results of this research; some are commercial products [20, 21] that help
officials and architects with evacuation planning. The models used in these simulation methods can be
roughly classified into two major categories: macroscopic models and microscopic models [5, 13].
Macroscopic models consider a crowd as a whole flowing through an environment, and treat
individual evacuees as identical entities, much as fluid dynamics or gas kinetics treats individual
particles [18, 19]. This type of model can predict the overall flow of the crowd at low computational
cost, but lacks detailed predictions of individual movement and behavior. For the purpose of this study,
we needed to model the impact of quadcopter guidance on individuals; therefore, we focused on
microscopic crowd models.
Microscopic models treat each evacuee as an individual with varying characteristics, focusing mainly
on the behavior and decision making of individuals, including their interactions with one another. This
category can be further divided into many subcategories [5, 9], such as cellular automata models,
social force models, and agent-based models. It is also possible to combine multiple approaches into
hybrid models, e.g., social-force-based agent models or agent-based cellular automata models. Readers
are referred to [5] and [9] for detailed analyses of these models, which are beyond the scope of this study.
We wanted our simulation framework not to be tied to any specific microscopic model, and found
that the flexible crowd simulation framework proposed and implemented by Curtis et al. [9] suits this
purpose very well. This crowd simulation framework, named Menge, decomposes the crowd simulation
problem into related sub-problems (goal selection, plan computation, plan adaptation, and spatial queries),
and its architecture allows a different solution to be plugged in for each sub-problem. With the flexibility
provided by this framework, a wide variety of scenarios can be simulated by combining different crowd
models and algorithms. It is for this flexibility that we chose Menge as the primary crowd simulation
sub-component of the system. While Menge is primarily an agent-based framework, it can also use
other approaches for specific sub-problems; for example, it can use force-based models to solve
the plan adaptation sub-problem. In addition, its agent-based core does not affect
the initial use cases we target in this study.
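This decomposition into pluggable sub-problems can be illustrated with a minimal Python sketch. The class and function names here are ours, chosen for illustration; they are not Menge's actual API, and the pass-through adapter stands in for a real pedestrian model such as a social force or ORCA solver.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    pos: tuple          # (x, y) position
    goal: tuple = None  # filled in by the goal selection step

class NearestGoalSelector:
    """Goal selection sub-problem: pick one goal from a goal set."""
    def select(self, agent, goals):
        ax, ay = agent.pos
        return min(goals, key=lambda g: (g[0] - ax) ** 2 + (g[1] - ay) ** 2)

class StraightLinePlanner:
    """Plan computation sub-problem: preferred velocity toward the goal."""
    def preferred_velocity(self, agent, speed=1.0):
        dx, dy = agent.goal[0] - agent.pos[0], agent.goal[1] - agent.pos[1]
        norm = (dx * dx + dy * dy) ** 0.5 or 1.0
        return (speed * dx / norm, speed * dy / norm)

class PassthroughAdapter:
    """Plan adaptation sub-problem: a real pedestrian model would adjust
    the preferred velocity for neighbors and obstacles; this one does not."""
    def feasible_velocity(self, agent, v_pref, neighbors):
        return v_pref

def step(agents, goals, selector, planner, adapter, dt=0.1):
    """One pipeline pass: select goal, compute plan, adapt it, move."""
    for a in agents:
        a.goal = selector.select(a, goals)
        v = adapter.feasible_velocity(a, planner.preferred_velocity(a), agents)
        a.pos = (a.pos[0] + v[0] * dt, a.pos[1] + v[1] * dt)
```

Swapping in a different selector or adapter class changes the simulated behavior without touching the rest of the pipeline, which mirrors how Menge's architecture accepts plugins per sub-problem.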
2.2 Robot Simulation
Simulation is an essential tool in robotics research for rapidly testing algorithms and for visualizing
and verifying the behavior of new designs. Many robot simulation tools are available, offering
increasingly advanced features that make simulations more and more realistic [5, 22]. Although popular
commercial packages such as Webots [23], COSIMIR [24], and Microsoft Robotics Developer Studio [25]
may be more powerful, this research aims to create a system that can be adapted and extended by the
research community to meet its ever-changing needs; therefore, open source is a requirement. The
Gazebo simulator coupled with the Robot Operating System (ROS) [8] has become the most popular open-
source simulation framework for robotics research in recent years. ROS provides a platform on which different
robotics functionalities are implemented and run as independent "nodes" that collaborate
through a messaging mechanism. A robotics simulation can be quickly assembled by reusing existing
packages contributed by the community. The Gazebo simulator can also run as a node in a ROS
system, taking advantage of many packages from the ROS community that provide functionality such as
motor and sensor modeling, control algorithms, and computer vision subsystems. In addition, the ROS
community has contributed multiple packages for simulating quadcopters. Because of the flexibility
enabled by the ROS distributed architecture, its active community, and the availability of quadcopter models, we
chose Gazebo and ROS as the backbone of the simulation system.
3 Simulation System Architecture
As mentioned above, the proposed simulation system takes advantage of reusable components from
ROS (Robot Operating System), Gazebo (a 3D robot simulator), and Menge (a crowd simulation
framework). The overall architecture is shown in Figure 3-1. As Gazebo is well integrated with ROS
and can itself run as a ROS node, the existing ROS packages that provide quadcopter simulation and
control algorithms are reused. The Menge crowd simulation framework is integrated with Gazebo by
implementing a Gazebo plugin, with extensions that adapt Menge to the specific needs of our study.
Our goal is to integrate Menge as a generic crowd simulation engine for Gazebo that other researchers can
easily take advantage of. The rest of this section briefly introduces these existing frameworks in the context
of their intended usage, and the extensions and integration specific to this research.
[Figure 3-1 diagram: inside Gazebo, the Menge plugin hosts Menge (core), configured by the world XML spec (SDF: models, plugins, lights, obstacles) and the Menge agent BFSM and scene spec in XML; quadcopter and crowd positions are exchanged between the two sides, and agent models are inserted into the Gazebo world at startup. ROS plugins (robot spawner, sensory, motory) together with the hector_quadrotor ROS packages (robot XML spec in URDF, motor control) spawn the quadcopter upon start and provide the quadcopter pose and sensor data.]
Figure 3-1 Crowd-robot simulation system architecture
3.1 ROS and Gazebo
Gazebo is a 3D multi-robot simulator well integrated with the Robot Operating System [26]. Details
about the integration can be found online. In short, Gazebo runs as a ROS node, and the interaction
between Gazebo and other ROS nodes takes place through ROS messages handled by its ROS plugins. For
example, these messages can spawn a quadcopter/robot model, send control commands
into the simulated world, or retrieve model information, sensor data, etc., from the simulated world. Gazebo
also provides a mechanism for other nodes to directly inject plugin modules (via an XML spec). Those
plugins are dynamically loaded inside Gazebo and have full access to all the internal APIs Gazebo
provides. Because of this loosely coupled architecture, over the years the ROS and Gazebo
communities have created a large set of reusable components for robotics research. For this study, we
reused a quadcopter ROS package called hector_quadrotor, which provides all the functionality needed for
modeling, simulating, and controlling a quadcopter.
3.2 Menge
Figure 3-2 Menge’s computation pipeline, taken from [9]: The simulator definition (including initial
conditions and BFSM) is given as an XML specification. At each time step, the system updates event
state and task state. Then the BFSM is updated for each agent. Next, the preferred velocity for each agent
is computed. The pedestrian model is used to compute a feasible velocity. Finally, the agent position is
updated.
As shown in Figure 3-2, Menge features a flexible, modular pipeline architecture that allows plugins and
extensions for the different "sub-problems" of crowd simulation. We took advantage of this architecture,
reusing its built-in pedestrian model and extending parts of its behavioral state machines to model the
crowd behaviors specific to this study. The reader is referred to [9] for details on Menge and a detailed
example of how it is set up. In short, Menge takes XML definitions as input for the crowd behavior finite
state machine, the scene setup, and the rendering configuration. The major terms in crowd behavior
specifications are as follows.
An agent is a crowd member with a unique "personality", which can be modified through attributes
like "personal space", "mass", and "turning bias". Agents have "a-states" that hold their position and
velocity, and "b-states" that hold the current state of their behavior finite state machine (BFSM). The
BFSM is predefined and specified in XML. As an agent enters a state of the defined BFSM, it may take
an action, such as increasing its walking speed, and select a goal (a region or point in space) to move to
from a group of goals specified as a GoalSet. The goal selection policy is specified by a GoalSelector,
e.g., selecting the nearest goal using the NearestGoalSelector type. Finally, the agent transitions to a
new state when a condition is met, such as reaching the goal selected for the state.
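These concepts can be sketched in miniature. The following Python stand-in is purely illustrative (it is not Menge's XML or C++ API): states carry an action (a walking speed) and a goal selector over a goal set, and transitions fire when a condition on the agent holds.

```python
# Hypothetical miniature of Menge's BFSM concepts; names are illustrative.
def nearest(goals):
    """Return a goal selector that picks the nearest goal in `goals`."""
    def select(agent):
        ax, ay = agent["pos"]
        return min(goals, key=lambda g: (g[0] - ax) ** 2 + (g[1] - ay) ** 2)
    return select

STATES = {
    # Calm walking toward a meeting point at the center of the square.
    "Walk": {"speed": 1.3, "select_goal": nearest([(0.0, 0.0)]),
             "transitions": [(lambda a: a["disaster_noticed"], "RunToExit")]},
    # Fleeing toward the nearest of two exits.
    "RunToExit": {"speed": 3.0,
                  "select_goal": nearest([(-10.0, 0.0), (10.0, 0.0)]),
                  "transitions": []},
}

def bfsm_step(agent):
    """One BFSM update: when a transition condition is met, the agent enters
    the new state, takes its action (speed change), and selects a goal."""
    for cond, nxt in STATES[agent["state"]]["transitions"]:
        if cond(agent):
            agent["state"] = nxt
            agent["speed"] = STATES[nxt]["speed"]
            agent["goal"] = STATES[nxt]["select_goal"](agent)
            break
```

In Menge itself, the equivalent of `STATES` is declared in the behavior XML file rather than in code, which is what makes scenario variants cheap to set up.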
3.3 Menge-Gazebo Plugin
To allow communication between Menge and Gazebo, we created a Menge-Gazebo plugin. It is
implemented as a Gazebo world plugin [27], loaded when Gazebo finishes loading the initial simulation
world (which is specified in an XML format called SDF). Upon loading, the plugin instantiates and
manages an instance of the Menge simulator, and then injects a human model into the Gazebo world for
each Menge agent. At runtime, it extracts the position of each crowd member at each Menge simulation
time step and sets the position of each human model in Gazebo to the corresponding agent's position,
while keeping the two simulators synchronized. In return, the plugin takes the position of the
quadcopter model in Gazebo and uses it to update the coordinates of a unique quadcopter goal in Menge,
toward which agents in the Follow state run. Finally, the plugin also records the percentage of agents
that have escaped the disaster scene at each simulation time step.
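The plugin's per-step bookkeeping can be sketched as follows. This is a simplified Python stand-in for illustration only; the actual plugin is a C++ Gazebo world plugin, and the circular exit-region test below is an assumption, not how Menge defines escape.

```python
def plugin_step(menge_positions, gazebo_poses, quad_pose, quad_goal, exits, escape_log):
    """One step of a simplified, hypothetical Menge-Gazebo bridge."""
    # Crowd -> Gazebo: mirror each Menge agent's position onto its human model.
    for agent_id, pos in menge_positions.items():
        gazebo_poses[agent_id] = pos

    # Gazebo -> Menge: the quadcopter's pose drives the movable "quadcopter
    # goal" that agents in the Follow state run toward.
    quad_goal["pos"] = quad_pose

    # Record the percentage of agents that have reached an exit region
    # (assumed here to be a unit-radius disc around each exit point).
    def escaped(p):
        return any((p[0] - e[0]) ** 2 + (p[1] - e[1]) ** 2 < 1.0 for e in exits)

    n = len(menge_positions) or 1
    escape_log.append(100.0 * sum(escaped(p) for p in menge_positions.values()) / n)
```

The real plugin performs the same three duties (crowd poses out, quadcopter goal in, escape percentage logged) inside Gazebo's world-update callback.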
4 Implementation Considerations
This section describes a few implementation details.
4.1 Synchronization between Menge and Gazebo
Menge by default uses a much larger time step than Gazebo's rendering and physics update steps.
Directly using the Menge time step as the update period for Gazebo would make the quadcopter physics
simulation unstable. On the other hand, directly using the Gazebo update period for the Menge
simulation would drastically slow down rendering, because the crowd positions would be updated by
Menge at a much higher frequency than necessary: the models barely move between such small time
steps. In the end, we kept the different time steps and buffered the crowd position updates to Gazebo
until a full Menge time step had passed. A further difficulty was that Gazebo always inserts models in a
separate thread, which led to attempts to update the positions of models that did not yet exist. To resolve
this, we delayed the start of the Menge simulation until all models had been successfully inserted into
the Gazebo simulation.
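The buffering scheme can be sketched as follows. The step sizes here are assumed values chosen for illustration, not the ones used in our system: Gazebo ticks every `gz_dt`, while Menge is stepped, and crowd poses pushed to Gazebo, only once a full Menge step has elapsed.

```python
class MengeGazeboSync:
    """Sketch of the time-step buffering between the two simulators."""

    def __init__(self, gz_dt=0.001, menge_dt=0.1):
        self.gz_dt, self.menge_dt = gz_dt, menge_dt
        self.accum = 0.0        # Gazebo time elapsed since the last Menge step
        self.menge_steps = 0

    def on_gazebo_update(self):
        """Called once per Gazebo physics tick."""
        self.accum += self.gz_dt
        if self.accum + 1e-12 >= self.menge_dt:   # a full Menge step elapsed
            self.accum -= self.menge_dt
            self.menge_steps += 1                 # step Menge, push crowd poses
            return True                           # poses flushed this tick
        return False                              # buffered: reuse last poses

sync = MengeGazeboSync()
flushes = sum(sync.on_gazebo_update() for _ in range(1000))  # 1 simulated second
```

With these assumed values, Gazebo ticks 1000 times per simulated second while the crowd is only advanced and re-posed 10 times, which is the source of the rendering savings described above.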
4.2 Crowd Actor Animation
For simulated videos/pictures to be useful in research involving computer vision applications, the
simulator should be capable of rendering a large number of characters with convincing walking and/or
running animations. The rendering frame rate should be suitable for interactive visualization, that is,
above 10 frames per second. The simulation should also run no slower than real time, to be useful for
applications such as training quadcopter operators.
State-of-the-art implementations use a technique called skeletal animation for realistic human
character animation. In this technique, a character is modeled as a mesh of many vertices that define the
geometry of the character in 3D space. The mesh is then "skinned" with textures, which are essentially
2D images painted onto the surface formed by the mesh vertices. To animate the character, the positions
of the mesh vertices are influenced by a set of skeletal "bones": moving the bones moves the vertices,
which in turn move, stretch, and compress parts of the character. At the time of writing, the latest stable
Gazebo release (version 7.1) does not include a human character model. However, the latest Gazebo
development branch contains an Actor model implementation with skeletal animations. Experiments
were conducted to determine whether the Actor was suitable for our simulations, and the results showed
that the implementation did not scale to large crowd populations. Simple CPU and
GPU usage profiling also showed that the implementation was under-utilizing the GPU.
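The vertex blending at the heart of skeletal skinning can be sketched in 2D. This is an illustrative linear-blend-skinning example under our own simplifications (2D rotation plus translation per bone), not Gazebo's or Ogre3d's implementation.

```python
import math

def skin_vertex(v, influences, bone_transforms):
    """Linear blend skinning: a vertex is moved by the weighted sum of its
    influencing bones' transforms.

    v               -- (x, y) rest position of the vertex
    influences      -- list of (bone_name, weight); weights should sum to 1
    bone_transforms -- bone_name -> (rotation_angle, (tx, ty))
    """
    x = y = 0.0
    for bone, w in influences:
        angle, (tx, ty) = bone_transforms[bone]
        c, s = math.cos(angle), math.sin(angle)
        # Each bone rotates and translates the vertex; results are blended.
        x += w * (c * v[0] - s * v[1] + tx)
        y += w * (s * v[0] + c * v[1] + ty)
    return (x, y)
```

Doing this per vertex, per character, per frame on the CPU is exactly the cost that the hardware-skinning redesign described below moves onto the GPU vertex shader.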
To improve the solution, we first identified the high-level design of the Actor character, shown in
Figure 4-1. Gazebo separates GUI rendering into a client process (gzclient) communicating with a
server process (gzserver) that encompasses all other aspects of the simulation, e.g., solving the ODEs of
mechanical dynamics, running controllers, and detecting and resolving collisions. In this research, the
newly added Menge plugin described above is also part of the server. In the original Gazebo Actor
design, to animate each Actor in the scene, the pose (position and rotation angles) of each bone in the
skeleton is calculated on the server side, then sent to the client one by one. The client then uses the bone
poses to transform the mesh vertices. Although the final rendering of the transformed, textured mesh is
done on the GPU, this design incurs significant communication overhead for large crowds. It also slows
down other parts of the simulation because all bone pose calculations are done on the CPU. As a result,
overall simulation performance is unsatisfactory: we were only able to reach 10 FPS with 16 Actors in
real-time simulation (a 1:1 ratio of simulation time to real time), as shown in Figure 4-2. All results
presented in this report were measured on a PC with an Intel Core i7-6700 CPU and an Nvidia GTX 960
PCIe graphics card.
Our first attempt to improve the design was to remove the communication overhead. To do that, we
rewrote the server-side Actor implementation to send only the Actor pose (the output of the Menge
simulation) to the client at each time step. The client was also modified to use the Ogre3d library
directly to update the individual bone poses based on the Actor's pose and the simulation time
("software skinning" in Ogre3d terms). With this improvement, we could simulate up to 64 actors at
real-time speed and an interactive frame rate, as shown in Figure 4-4. This also gave us room to create
more texture varieties to change the appearance of each character. To further improve
performance, we implemented "hardware skinning", in which all animation-related calculations are
done on the GPU. This design, shown in Figure 4-5, uses the GPU vertex shader to offload the vertex
transformation calculations from the CPU; the gzclient code on the CPU then only selects the frame of
bone animation data to use in a particular iteration. This greatly improves performance, and
allowed up to 625 characters to be rendered in real-time simulations. The shader program, written in the Cg
language and shown in Appendix A, is currently only supported on Nvidia GPUs supporting the vp40 profile [28].
The Gazebo code base does not enable Cg support by default, so we re-configured and rebuilt it with
that support.
Figure 4-7 shows the achievable frame rates with varying numbers of actors for all three approaches.
It clearly shows the significant improvement that comes from using hardware skinning. It should be noted
that the frame rate may also be affected by how intensive the physics and Menge simulations are. The
results presented here were obtained with Gazebo running no models other than the crowd actors
roaming randomly. All measurements were made with the simulation running at or close to real-time
speed with a time step of at most 0.02 seconds, which are reasonable constraints for the types of usage
we target.
Figure 4-1 Original Gazebo Actor Rendering
Design – Each skeletal bone pose is manually
controlled from server side
Figure 4-2 Gazebo Original Actor Rendering –
limited to 16 actors with the same textures for
real time simulations
Figure 4-3 Software Skinning – Bone poses
calculated using actor pose on client side, then
applied to mesh vertices.
[Figures 4-1 and 4-3 diagrams: gzserver (physics sim) sends per-bone poses (original design) or a single actor pose (software skinning) to gzclient; the GZ Ogre3d engine on the client transforms the mesh vertices using the meshes, bone data, and textures before handing them to the GPU for rendering.]
Figure 4-4 Rendering with Software Skinning
– up to 64 actors for real time simulations
Figure 4-5 Hardware Skinning – software only controls
the animation frame index, and the vertex program on the GPU
does the transform using the proper bone data for the frame
Figure 4-6 Rendering with Hardware Skinning –
up to 625 actors for real time simulations
Figure 4-7 Frame Rate vs Number of Actors, measured with simulations running at real-time speed with
a time step of no more than 0.02 seconds.
4.3 Extending Menge State Machine Definitions
To fulfill the unique requirements of our simulations, we added the following new conditions, actions,
and goal types to the Menge framework. A special movable goal was added to model the quadcopter.
Agents can enter a new Follow state in which they follow the quadcopter. A FollowCondition is
added to check whether an agent is close to the quadcopter before it enters the Follow state. A
ProximityCondition checks whether an agent is close to a "disaster agent". A
SecondNearestGoalSelector allows agents to select the second nearest exit as their goal in
certain states of one of the case studies described below. Figure 5-2 shows the BFSM for the case
studies, defined in XML using these extensions, and Appendix B shows an example XML specification
using them. Given the BFSM and scene definition XML files, Menge generates the positions of the
crowd members at each time step, which the Menge-Gazebo plugin then communicates to the Gazebo
side as described in Section 3.3.
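The selector and condition extensions can be sketched as follows. These are Python stand-ins for the actual C++ Menge plugins, and the follow radius is an assumed parameter chosen for illustration.

```python
def second_nearest_goal(agent_pos, goals):
    """SecondNearestGoalSelector sketch: pick the second nearest goal,
    falling back to the nearest if only one goal exists."""
    ax, ay = agent_pos
    ranked = sorted(goals, key=lambda g: (g[0] - ax) ** 2 + (g[1] - ay) ** 2)
    return ranked[1] if len(ranked) > 1 else ranked[0]

def follow_condition(agent_pos, quad_pos, radius=3.0):
    """FollowCondition sketch: true when the agent is within `radius` of
    the quadcopter, gating entry into the Follow state."""
    dx, dy = quad_pos[0] - agent_pos[0], quad_pos[1] - agent_pos[1]
    return dx * dx + dy * dy <= radius * radius
```

In Menge these are registered as plugin types and referenced by name from the behavior XML, so a scenario can switch selectors without code changes.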
5 Case Studies
5.1 The Scene Setup
Figure 5-1 (above) The Scene for the Case Studies:
The scene is based on a "city square" type
situation. A crowd of people initially moves
randomly about the area enclosed by the five
buildings. Suddenly, a loud explosion goes off
near the north building! People in the crowd run
toward the exits to escape.
Figure 5-2 (right) Agent Behavioral Finite State
Machine for the Case Studies: Each scenario uses
a variant of this state machine. For example, in
the case where all agents run to the nearest exit,
probability r is set to 100%. Goal set 2 is made a
subset of goal set 1 to simulate some agents
initially running to blocked exits (goals present in
set 1 but not in set 2, the good exits).
[Figure 5-2 diagram: states Wait, Walk, RunToSafety (run to the nearest safety point), RunToExit1 (run to the nearest exit in goal set 1, which may contain blocked exits; probability r), RunToExit2 (run to the second nearest exit in goal set 1; probability 1-r), RunToGoodExit (run to the nearest exit in goal set 2, which has no blocked exits), and Follow (follow the quadcopter); transitions fire on conditions such as goal reached, timeout, disaster noticed, approached any goal in set 2, and timeout with the quadcopter near.]
The scene setup is shown in Figure 5-1. This project considers the rate at which agents escape and the
time by which all agents have escaped the scene when determining the efficiency of an evacuation
method. The expected optimal solution to this evacuation problem is for crowd members in the top half of
the scene to escape in equal groups through Exits 1 and 2, and for the bottom half of the crowd to escape in
roughly equal groups through Exits 3, 4, and 5, with a slight bias toward Exits 3 and 5 to account for the
narrowness of Exit 4. The goal of the quadcopter tele-operator, then, is to lead the crowd members
toward this optimal solution.
5.2 Scenarios and Results
Four case studies were conducted, simulating common evacuation obstructions and crowd
behaviors. In each case study, the number of crowd members who had successfully escaped the scene at
set time steps was recorded both with and without quadcopter guidance. To reduce the noise caused by
the randomized starting positions and the somewhat random behavior of the agents, the results reported
below are averages over 5 runs for each case.
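The averaging is straightforward bookkeeping; for example (a sketch, not our actual analysis code):

```python
def average_escape_curves(runs):
    """Average per-time-step escaped-agent counts over several runs.
    All runs are assumed to share the same sampling time steps."""
    n = len(runs)
    return [sum(counts) / n for counts in zip(*runs)]
```

Each inner list holds one run's escaped-agent count at each recorded time step, and the averaged curve is what the figures below plot for each case.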
Case Study 1: Nearest Exit
In this case, the crowd members, upon realizing the danger of the situation, head for the nearest exit.
Because of its central location, the narrowest exit, Exit 4, is the most popular choice among the crowd
members. Therefore, severe clogging, as shown in Figure 5-3, builds up around Exit 4,
creating a long and inefficient evacuation process. Knowing this, during the quadcopter-guided portion of
the case, the tele-operator controls the quadcopter to lead crowd members away from Exit 4
toward the empty side exits.
Figure 5-3 The clogging effect Figure 5-4 Nearest exit choice
Figure 5-5 Middle exit blocked Figure 5-6 Side exit blocked
Figure 5-7 Second nearest exit choice Figure 5-8 Gazebo simulation GUI
The results of this case, shown in Figure 5-4, were consistent with expectations. The clogging of the
middle exit did cause a large build-up of crowd members, and the slope of the curve representing the
rate at which crowd members escaped was slightly steeper in the quadcopter-guided simulations than in
those without guidance. The benefit of the quadcopter is more prominent toward the end of the
evacuation, because it helps "stragglers" find their way to the exits.
Case Study 2: Crowded Exit Blocked
In this case, it is again assumed that the crowd members head for the closest exit. However, the
most favored exit, Exit 4, is blocked. When crowd members heading toward Exit 4 come within a
certain distance of it, they head instead for the exit nearest their current position. During the
quadcopter-guided simulations, the tele-operator's goal was to balance the number of agents heading to
the two side exits. Because the crowd behavior in this case is close to the optimal evacuation solution,
the benefits of quadcopter guidance were expected to be less pronounced.
The results are shown in Figure 5-5. Because the middle exit was blocked, most of the remaining
crowd members were redirected to the two wider side exits. Because the agents were split roughly
equally between the two larger exits, the escape rate was steeper and the total evacuation time was
shortened. While the quadcopter was able to cut the total evacuation time by helping "stragglers",
the rate at which agents escaped was actually lowered by the quadcopter. This is because, in an effort to
maintain consistency across the cases, the quadcopter always moved at a constant speed, and, in order to
collect more following agents, it often zigzagged across the crowd, increasing the agents' travel time.
Case Study 3: Less Crowded Exit Blocked
This case is similar to the previous one, except that instead of the most crowded exit, a less crowded
side exit is blocked. Because this case further encourages agents to head toward the narrow Exit 4, the
evacuation is expected to be extremely inefficient. Here, the tele-operator directs most agents to the
remaining side exit.
The results are shown in Figure 5-6. Because a side exit was blocked, more strain was placed on the
middle exit and the remaining side exit, causing even more severe clogging than in the "Nearest Exit
Choice" case. As a result, the average total evacuation time for the simulations without quadcopter
guidance was around 100 simulation seconds longer than in the "Nearest Exit Choice" case. Thus, when
the quadcopter was able to guide a large portion of the agents to an exit across the square from the
remaining side exit, the escape rate and total evacuation time improved substantially.
Case Study 4: Second Nearest Exit Choice
Crowd members are not always familiar enough with their surroundings to know where the nearest
exit is. Therefore, this case study simulates the case in which a portion of the agents (50%) choose to
head for the second nearest exit. While this means that some agents who would have chosen Exit 4
head for Exits 3 and 5, the opposite is also true. Thus, the congestion level at each exit was expected to
be similar to that of the first scenario, with the agents simply traveling longer distances. In the
quadcopter-aided version, the tele-operator still tries to direct agents toward the less crowded Exits 3
and 5.
The results are shown in Figure 5-7. Because some agents head toward their second nearest exits, the
average total time for all agents to escape without quadcopter guidance is, as expected, around 100
simulation seconds longer than in the first scenario. With quadcopter guidance, however, the total
evacuation time is shortened significantly, to almost the same as the "no quadcopter" evacuation time of
the first scenario. The escape rate with quadcopter guidance is also much steeper than without it.
To summarize, the tele-operated quadcopter was most effective in scenarios where significant
clogging built up at an exit, as evident in case studies 1, 3, and 4. Even in case 2, with little or almost no
clogging, the quadcopter was able to shorten the total evacuation time by effectively directing the
"stragglers". The simulation results therefore support the promised effectiveness of tele-operated
quadcopters for crowd evacuation.
5.3 System Flexibility, Performance and Limitations
We found the simulation system easy to use and flexible enough to handle all the scenarios. The
configurations for crowd behavior, goals, and the scene are all defined in simple XML files. Because of
this, we were able to construct the general state machine shown in Figure 5-2 in an XML file, then
modify a few parameters to set up all four cases. The Gazebo GUI, shown in Figure 5-8, is
intuitive to use as well, and many graphics in this report were captured directly from the virtual camera
inside Gazebo. Thanks to a ROS package, we were also able to use a gamepad to control the quadcopter
without writing additional code. We were able to spawn and control multiple quadcopters in the
simulator, as shown in Figure 5-9 (although these were not used to collect data in the case studies). This
opens doors for future studies on evacuation strategies using multiple quadcopters coordinating with each
other.
The system was able to simulate 260 characters and 3 quadcopters at a 1:1 simulation-to-real-time
ratio, rendering at about 21 frames per second. As mentioned in Section 3.2, GPU-accelerated
animation frees up the CPU to accommodate the extra computational load of quadcopter aerodynamics
and control. Because of this, the frame rate is roughly the same as that of the hardware-skinning curve in
Figure 4-7, which was measured without quadcopters.
One limitation of the current implementation is that the crowd models are "static" models that ignore
the laws of physics. Hence the system is not suitable for simulations that require modeling physical
human-robot interaction, such as robots carrying or helping injured humans.
Figure 5-9 Multi-quadcopter simulation
6 Conclusions
In this paper, we presented a robot-crowd simulation system that integrates Menge as the crowd
simulation engine into the ROS-Gazebo robotics simulation framework. We extended the Menge
framework to incorporate quadcopters and disasters as "agents" that interact with crowd members. We also
dramatically improved the performance of Actor animation in the Gazebo simulation framework, making
the simulation tool suitable for large-scale crowd simulation and capable of generating realistic
video and pictures on commercially available PC platforms. To our knowledge, this is the first attempt at
integrating crowd simulation with 3D robotics simulation, and it opens new opportunities for many future
projects involving robot-crowd interaction without relying on costly and unsafe testing during early
development stages.
To showcase the capability of the simulation system, we used it to explore the effects of quadcopter
guidance on evacuation efficiency. The simulation results of four case studies indicate that tele-operated
quadcopters are a promising tool for redirecting crowd members away from clogged exits, improving
overall evacuation efficiency and potentially saving lives.
Through those case studies, we demonstrated that this simulation system is easy to use and
adaptable enough to simulate different scenarios. This is because the system is built on top of the
powerful infrastructure provided by ROS and Gazebo, reusable packages contributed by the
community, and the modular crowd simulation structure of Menge. We hope the simulation
system will be a powerful tool used and enhanced by many researchers in future studies. Our
code is published as an open-source project on GitHub [29].
Future improvements include adding more variety to the human models in Gazebo, with
walking and running animations. Greater variety and realism in the human models would make the
system suitable for using computer vision to detect crowd members and clogged exits. Instead of relying
entirely on human operators, we would also consider making quadcopters more intelligent and
autonomous by implementing machine learning and advanced control features to automate some
tasks, such as recognizing crowded exits and auto-navigating to another exit. These would make
quadcopters easier to operate and cut down on the costs of a quadcopter-based evacuation
guidance system in the real world.
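As a concrete illustration of one such autonomy feature, the congestion check could be as simple as counting agents near each exit and steering toward the least crowded one. The sketch below is a hypothetical simplification, not part of the current system; the function names, the 5-meter radius, and the 2D agent positions are all our assumptions.

```python
# Hypothetical sketch: flag a crowded exit by counting nearby agents,
# then pick the least-crowded exit as the redirection target.
# All names and thresholds here are illustrative assumptions.

def count_near(agents, exit_xy, radius=5.0):
    """Count agents within `radius` meters of an exit position."""
    ex, ey = exit_xy
    return sum(1 for (x, y) in agents
               if (x - ex) ** 2 + (y - ey) ** 2 <= radius ** 2)

def pick_exit(agents, exits):
    """Return (index of the least-crowded exit, per-exit agent counts)."""
    counts = [count_near(agents, e) for e in exits]
    best = min(range(len(exits)), key=lambda i: counts[i])
    return best, counts
```

In a real system, the agent counts would come from computer-vision detections in the quadcopter's camera feed, and the chosen exit would feed an autonomous navigation routine assisting, rather than replacing, the tele-operator.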
As for future applications, this simulation system could be an effective tool to study and derive
optimal evacuation strategies for different situations, with or without robots, or with multiple robots
coordinating with each other to provide evacuation guidance. As reported in [6], there is a need for
simulated realistic videos and pictures of high-density crowds to train and validate computer vision
systems for crowd monitoring and analysis, which is another potential use of this simulation system.
Appendix A. Hardware Skinning Vertex Program in Cg Language
Adapted from an Ogre3D example that supports fewer bones.
void hardwareSkinningFourWeightsFullHANIM_vp(
    float4 position : POSITION,
    float3 normal   : NORMAL,
    float2 uv       : TEXCOORD0,
    float4 blendIdx : BLENDINDICES,
    float4 blendWgt : BLENDWEIGHT,

    out float4 oPosition : POSITION,
    out float2 oUv       : TEXCOORD0,
    out float4 colour    : COLOR,

    // Support up to 32 bones of float3x4
    uniform float3x4 worldMatrix3x4Array[32],
    uniform float4x4 viewProjectionMatrix,
    uniform float3   lightPos[2],
    uniform float4   lightDiffuseColour[2],
    uniform float4   ambient)
{
    // Blend the vertex position by the four indexed bone matrices
    float4 blendPos = float4(0, 0, 0, 0);
    int i;
    for (i = 0; i < 4; ++i)
    {
        blendPos += float4(mul(worldMatrix3x4Array[blendIdx[i]], position).xyz, 1.0) * blendWgt[i];
    }

    // View / projection transform
    oPosition = mul(viewProjectionMatrix, blendPos);

    // Blend the normal by the same bone matrices
    float3 norm = float3(0, 0, 0);
    for (i = 0; i < 4; ++i)
    {
        norm += mul((float3x3)worldMatrix3x4Array[blendIdx[i]], normal) * blendWgt[i];
    }
    norm = normalize(norm);

    // Simple two-light diffuse lighting (note the .xyz swizzle: lightPos
    // entries are float3, so blendPos must be reduced to float3 as well)
    float3 lightDir0 = normalize(lightPos[0] - blendPos.xyz);
    float3 lightDir1 = normalize(lightPos[1] - blendPos.xyz);

    oUv = uv;
    colour = ambient +
        (saturate(dot(lightDir0, norm)) * lightDiffuseColour[0]) +
        (saturate(dot(lightDir1, norm)) * lightDiffuseColour[1]);
}
Appendix B. Example Menge State Machine XML for the Case
Studies
<BFSM>
<GoalSet id="0">
<Goal capacity="10000" id="0" type="AABB" weight="1.00" max_x="-9.90" min_x="-12.34"
max_y="1.28" min_y="0.58"/>
<Goal capacity="10000" id="1" type="AABB" weight="1.00" max_x="-7.10" min_x="-9"
max_y="1.28" min_y="0.57"/>
<Goal capacity="10000" id="2" type="AABB" weight="1.00" max_x="-5.00" min_x="-8.5"
max_y="1.25" min_y="0.58"/>
<Goal capacity="10000" id="3" type="AABB" weight="1.00" max_x="-2.70" min_x="-7.5"
max_y="1.28" min_y="0.63"/>
<Goal capacity="10000" id="4" type="AABB" weight="1.00" max_x="-10" min_x="-11.175"
max_y="7.54" min_y="6.98"/>
<Goal capacity="10000" id="6" type="AABB" weight="1.00" max_x="-1.50" min_x="-7.315"
max_y="-4.42" min_y="-9.57"/>
...
</GoalSet>
<GoalSet id="1">
<Goal type="point" id="0" x="-40" y="40"/>
<Goal type="point" id="1" x="-40" y="-40"/>
<Goal type="point" id="2" x="40" y="40"/>
<Goal type="point" id="3" x="40" y="-40"/>
<Goal type="point" id="4" x="-55" y="0"/>
</GoalSet>
<GoalSet id ="2">
<Goal type="point" id="0" x="7" y="0"/>
</GoalSet>
<GoalSet id="3">
<Goal capacity="70" id="0" type="AABB" weight="1.00" max_x="-10" min_x="-20"
max_y="18" min_y="16"/>
<Goal capacity="70" id="1" type="AABB" weight="1.00" max_x="-15" min_x="-20"
max_y="1" min_y="-1"/>
<Goal capacity="70" id="2" type="AABB" weight="1.00" max_x="-15" min_x="-20"
max_y="-15" min_y="-17"/>
<Goal capacity="70" id="3" type="AABB" weight="1.00" max_x="21" min_x="19"
max_y="22" min_y="17"/>
<Goal capacity="70" id="4" type="AABB" weight="1.00" max_x="21" min_x="19"
max_y="-17" min_y="-22"/>
</GoalSet>
<GoalSet id="4">
<Goal capacity="180" id="1" type="AABB" weight="1.00" max_x="-15" min_x="-20"
max_y="1" min_y="-1"/>
<Goal capacity="10" id="2" type="AABB" weight="1.00" max_x="-15" min_x="-20"
max_y="-15" min_y="-17"/>
<Goal capacity="70" id="3" type="AABB" weight="1.00" max_x="21" min_x="19"
max_y="22" min_y="17"/>
<Goal capacity="70" id="4" type="AABB" weight="1.00" max_x="21" min_x="19"
max_y="-17" min_y="-22"/>
</GoalSet>
<GoalSet id ="99">
<Goal type="circle" id="0" x="100" y="1000" radius="1"/>
</GoalSet>
<State name="Walk" final="0" >
<GoalSelector type="random" goal_set="0" />
<VelComponent type="goal"/>
</State>
<State name="Wait" final="0" >
<GoalSelector type="identity"/>
<VelComponent type="goal"/>
</State>
<State name="Grow" final="0" >
<GoalSelector type="identity"/>
<VelComponent type="goal"/>
</State>
<State name="RunToExit1" final="0" >
<GoalSelector type="nearest" goal_set="3"/>
<VelComponent type="goal"/>
<Action type="set_property" property="pref_speed" dist="c" value="2"
exit_reset="1"/>
</State>
<State name="RunToExit2" final="0" >
<GoalSelector type="second_nearest" goal_set="3"/>
<VelComponent type="goal"/>
<Action type="set_property" property="pref_speed" dist="c" value="2"
exit_reset="1"/>
</State>
<State name="RunToGoodExit" final="0" >
<GoalSelector type="nearest" goal_set="4"/>
<VelComponent type="goal"/>
<Action type="set_property" property="pref_speed" dist="c" value="2"
exit_reset="1"/>
</State>
<State name="RunToSafety" final="0" >
<GoalSelector type="nearest" goal_set="1" />
<VelComponent type="goal"/>
<Action type="set_property" property="pref_speed" dist="c" value="2"
exit_reset="1"/>
</State>
<State name="WaitFinal" final="1" >
<GoalSelector type="identity"/>
<VelComponent type="goal"/>
</State>
<State name="temp" final="0" >
<GoalSelector type="identity"/>
<VelComponent type="goal"/>
</State>
<State name="Follow" final="0" >
<GoalSelector type="explicit" goal_set="99" goal="0" />
<VelComponent type="goal"/>
<Action type="set_property" property="pref_speed" dist="c" value="2"
exit_reset="1"/>
</State>
<Transition from="Walk" to="Wait" >
<Condition type="or">
<Condition type="goal_reached" distance="0.4" />
<Condition type="timer" dist="u" min="50" max="60" per_agent="0"/>
</Condition>
</Transition>
<Transition from="Wait" to="Walk" >
<Condition type="timer" dist="u" min="2" max="7" per_agent="0"/>
</Transition>
<Transition from="Walk">
<Condition type="proximity" distance="2.0" agentToAvoid="260"/>
<Target type="prob">
<State name="RunToExit1" weight="1"/>
<State name="RunToExit2" weight="0"/>
</Target>
</Transition>
<Transition from="Wait">
<Condition type="proximity" distance="2.0" agentToAvoid="260"/>
<Target type="prob">
<State name="RunToExit1" weight="1"/>
<State name="RunToExit2" weight="0"/>
</Target>
</Transition>
<Transition from="RunToExit1" to="RunToGoodExit" >
<Condition type="goal_reached" distance="5" />
</Transition>
<Transition from="RunToExit2" to="RunToGoodExit" >
<Condition type="goal_reached" distance="5" />
</Transition>
<Transition from="RunToSafety" to="WaitFinal" >
<Condition type="goal_reached" distance="0.4" />
</Transition>
<Transition from="RunToGoodExit" to="RunToSafety" >
<Condition type="goal_reached" distance="5" />
</Transition>
<Transition from="RunToGoodExit" >
<Condition type="and">
<Condition type="follow" distance="5.0" goalSetToFollow="99"
weight="1"/>
<Condition type="timer" dist="u" min="5" max="6" per_agent="0"/>
</Condition>
<Target type="prob">
<State name="Follow" weight="1"/>
<State name="temp" weight="0"/>
</Target>
</Transition>
<Transition from="temp" >
<Condition type="auto"/>
<Target type="return"/>
</Transition>
<Transition from="RunToExit1">
<Condition type="follow" distance="5.0" goalSetToFollow="99" weight="1"/>
<Target type="prob">
<State name="Follow" weight="1"/>
<State name="temp" weight="0"/>
</Target>
</Transition>
<Transition from="RunToExit2">
<Condition type="follow" distance="5.0" goalSetToFollow="99" weight="1"/>
<Target type="prob">
<State name="Follow" weight="1"/>
<State name="temp" weight="0"/>
</Target>
</Transition>
<Transition from="Follow" to="RunToSafety" >
<Condition type = "and">
<Condition type="via" distance="7.0" goalsVia="4"/>
<Condition type="timer" dist="u" min="15" max="16" per_agent="0"/>
</Condition>
</Transition>
</BFSM>
7 References
[1] D. Helbing, I. Farkas, and T. Vicsek, “Simulating dynamical features of escape panic,” Nature,
vol. 407, no. 6803, pp. 487–490, 2000.
[2] B. Duncan, R. R. Murphy, "Field study identifying barriers and delays in data-to-decision with
small unmanned aerial systems," in the IEEE International Conference on Technologies for
Homeland Security, DOI: 10.1109/THS.2013.6699029, Waltham, MA, USA, pp. 354-359, Nov.
12-14, 2013.
[3] M. Erdelj and E. Natalizio, “UAV-assisted disaster management: Applications and open issues,” in
International Conference on Computing, Networking and Communications, Kauai, United States,
Feb. 2016.
[4] M. Xiong, D. Zeng, and H. Yao, “A crowd simulation based UAV control architecture for
industrial disaster evacuation,” IEEE 83rd Vehicular Technology Conference (VTC Spring), pp.
1-5, 2016.
[5] I. Sakour and H. Hu, “Robotic aid in crowd evacuation simulation,” 7th Computer Science and
Electronic Engineering Conference, 2015.
[6] J.C.S. Jacques Jr., S. R. Musse and C. R. Jung, “Crowd analysis using computer vision
techniques,” IEEE Signal Processing Magazine, Vol. 66, September 2010.
[7] M. Quigley, B. Gerkey, K. Conley, J. Faust, T. Foote, J. Leibs, E. Berger, R. Wheeler, and A. Ng,
“ROS: An open-source robot operating system,” in Proc. Open-Source Software Workshop Int.
Conf. Robotics and Automation, Kobe, Japan, 2009.
[8] N. Koenig and A. Howard, “Design and use paradigms for Gazebo, an open-source multi-robot
simulator,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai,
Japan, Sep. 2004
[9] S. Curtis, A. Best and D. Manocha, “Menge: A Modular Framework for Simulating Crowd
Movement.” Tech. rep., Department of Computer Science, University of North Carolina at
Chapel Hill, http://gamma.cs.unc.edu/menge/, 2014
[10] X. Pan, C. S. Han, K. Dauber, and K. H. Law, “A multi-agent based framework for the
simulation of human and social behaviors during emergency evacuations,” Ai & Society, vol. 22,
no. 2, pp. 113–132, 2007.
[11] D. Helbing, A. Johansson, and H. Z. Al-Abideen, “Dynamics of crowd disasters: An empirical
study,” Phys. Rev. E, vol. 75, no. 4, 2007, Art. No. 046109.
[12] R. Narain, A. Golas, S. Curtis, and M. C. Lin, “Aggregate dynamics for dense crowd
simulation,” ACM Trans. Graph., vol. 28, 122:1–122:8, 2009.
[13] E. Boukas, I. Kostavelis, A. Gasteratos, and G. Sirakoulis, “Robot guided crowd evacuation,”
IEEE Trans. Autom. Sci. Eng., vol. 12, no. 2, pp. 739–751, Apr. 2015.
[14] P. Robinette and A. M. Howard, “Incorporating a model of human panic behavior for robotic-
based emergency evacuation,” in Proc. IEEE Int. Workshop Robots Human Interact. Commun.,
2011, pp. 47–52.
[15] M. Moussaïd, D. Helbing, and G. Theraulaz, “How simple rules determine pedestrian behavior
and crowd disasters,” Proc. Nat. Acad. Sci. USA, vol. 108, no. 17, pp. 6884–6888, 2011.
[16] A. Hutton, “London Bridge Station, the role of ped modelling: Pedestrian modelling and design
development,” in 6th International Conference on Pedestrian and Evacuation Dynamics, 2012.
[17] B. Tang, C. Jiang, H. He and Y. Guo, “Human mobility modeling for robot-assisted evacuation
in complex indoor environments,” in IEEE Transactions on Human-Machine Systems, Vol. 46,
No. 5, 2016
[18] D. Helbing, “A fluid-dynamic model for the movement of pedestrians,” Complex Systems, vol.
6, pp. 291-415, 1992.
[19] R. Colombo and M. Rosini, “Pedestrian flows and non-classical shocks,” Mathematical Method
in Applied Science, vol. 28, pp. 1557–1567, 2005.
[20] Crowd Dynamics, Myriad II evacuation module, available online,
http://www.crowddynamics.com/evacuation-demo.php
[21] Pedestrian Dynamics, available online, http://www.pedestrian-dynamics.com/
[22] Most Advanced Robotics Simulation Software Overview, available online,
http://www.smashingrobotics.com/most-advanced-and-used-robotics-simulation-software/,
retrieved as of 9/7/2016.
[23] Webots, available online, https://www.cyberbotics.com/overview
[24] COSIMIR, available online, http://industrial-robotics.co.uk/simulations.htm
[25] Microsoft Robotics Developer Studio, available online, https://msdn.microsoft.com/en-
us/library/bb648760.aspx
[26] ROS wiki, available online, http://wiki.ros.org/
[27] Gazebo Tutorial on WorldPlugin, available online,
http://gazebosim.org/tutorials?tut=plugins_world&cat=write_plugin
[28] NVidia Cg profiles and spec, http://http.developer.nvidia.com/Cg/vp40.html
[29] GitHub repo hosting the code for this study:
https://github.com/michaelhuang14/quadevac_sim_ws