An Investigation into High-level Behaviour Specification for

Autonomous Avatars

Thomas Peach

Bachelor of Science in Computer Science with Honours

The University of Bath

May 2007

This dissertation may be made available for consultation within the University Library and may be photocopied or lent to other libraries for the purposes of consultation.

Signed:

An Investigation into High-level Behaviour

Specification for Autonomous Avatars

Submitted by: Thomas Peach

COPYRIGHT

Attention is drawn to the fact that copyright of this dissertation rests with its author. The Intellectual Property Rights of the products produced as part of the project belong to the University of Bath (see http://www.bath.ac.uk/ordinances/#intelprop). This copy of the dissertation has been supplied on condition that anyone who consults it is understood to recognise that its copyright rests with its author and that no quotation from the dissertation and no information derived from it may be published without the prior written consent of the author.

Declaration

This dissertation is submitted to the University of Bath in accordance with the requirements of the degree of Bachelor of Science in the Department of Computer Science. No portion of the work in this dissertation has been submitted in support of an application for any other degree or qualification of this or any other university or institution of learning. Except where specifically acknowledged, it is the work of the author.

Signed:

Abstract

The rise in global communication has led to a huge commercial emphasis on the future of believable avatars. This dissertation looks at simplifying the process of building a virtual avatar by providing a high-level, code-free specification of user behaviour that will guide the selection of a virtual avatar's actions within an environment. The created avatar is based on an evaluation of existing work in the field, with an emphasis on behaviour modelling and virtual avatar scripting languages. This project demonstrates that appropriate avatar behaviour within an environment can be determined from limited user contribution. User testing has shown the ease with which users can create personification avatars, and the strong engagement felt by users when their avatar is exploring the environment.

Acknowledgements

My thanks go to my supervisor Dr J. Bryson for her encouragement, advice and guidance throughout the project, to my parents for their invaluable contributions with proof reading, and to all those who have supported me throughout.


Contents

1 Introduction 1

1.1 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

1.2 Document Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

2 Literature Review 4

2.1 What are Avatars? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

2.1.1 The Difference between Bots and Avatars . . . . . . . . . . . . . . . 5

2.1.2 Intelligent Virtual Avatars . . . . . . . . . . . . . . . . . . . . . . . . 6

2.2 Human Behaviour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

2.2.1 Primary Behaviour . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

2.2.2 Secondary Behaviour . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

2.2.3 The Effect of Mimicking Human Behaviour . . . . . . . . . . . . . . 9

2.3 What is Intelligence? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

2.3.1 Intelligent Behaviour . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

2.4 Modelling Emotions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2.5 Virtual Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

2.5.1 IVRS: Intelligent Virtual Reality Systems . . . . . . . . . . . . . . . 15

2.6 Ethical Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

2.7 Scripting Language Approach . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2.7.1 AML: Avatar Markup Language . . . . . . . . . . . . . . . . . . . . 19

2.7.2 VHML: Virtual Human Markup Language . . . . . . . . . . . . . . . 19

2.8 BOD: Behaviour Orientated Design . . . . . . . . . . . . . . . . . . . . . . . 21

2.9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22


3 Design 24

3.1 Consideration of Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . 24

3.1.1 Modelling and Game-Engines . . . . . . . . . . . . . . . . . . . . . . 24

3.1.2 Online-Communities . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

3.1.3 Scripting-Languages . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

3.2 System Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

3.3 User Avatar Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

3.4 Other Object Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

3.5 Action Selection Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

3.6 Avatar Behaviour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

3.7 Profile Interface Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

4 User-Centred Avatar System 43

4.1 An Iterative Phase Development . . . . . . . . . . . . . . . . . . . . . . 43

4.2 The Final Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

4.3 Decisions taken during production . . . . . . . . . . . . . . . . . . . . . . . 51

4.3.1 The Visual Representation of the User Avatar . . . . . . . . . . . . . 52

4.3.2 The Camera Viewpoint . . . . . . . . . . . . . . . . . . . . . . . . . 53

4.3.3 Changes to the Colour Pad Object . . . . . . . . . . . . . . . . . . . 54

4.3.4 A Simple Scene . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

4.3.5 The Blender Environment . . . . . . . . . . . . . . . . . . . . . . . . 55

4.3.6 The First Profile Interface . . . . . . . . . . . . . . . . . . . . . . . . 57

5 Results 59

6 Project Analysis and Further Work 62

6.1 Analysis of the User Avatar . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

6.2 Further Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

6.2.1 Emotion and Intelligence . . . . . . . . . . . . . . . . . . . . . . . . 65

6.2.2 Adaptability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

6.2.3 Learning and Memory . . . . . . . . . . . . . . . . . . . . . . . . . . 67

6.2.4 Movement within the Environment . . . . . . . . . . . . . . . . . . . 69


6.2.5 Line of Sight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

6.2.6 Behavioural Control to the User . . . . . . . . . . . . . . . . . . . . 73

6.3 Envisaged System Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

7 Project Conclusions 76

7.1 Summary of Achievements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

7.2 Evaluation into the Believability of the User Avatar . . . . . . . . . . . . . 79

7.3 Evaluation of the Blender Environment . . . . . . . . . . . . . . . . . . . . 83

7.4 Whole Project Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

A System Screenshots 90

B An example Profile File 95

C Log File Output 96

D Environment Code 98

E Interface Code 118

List of Figures

2.1 A ‘punisher’ and ‘reward’ system . . . . . . . . . . . . . . . . . . . . . . . . 13

2.2 A filter level reduces pressure on real-time calculations . . . . . . . . . . . . 15

2.3 Generic script language control levels . . . . . . . . . . . . . . . . . . . . . . 18

2.4 Three-level architecture of the VHML language . . . . . . . . . . . . . . . . 20

3.1 Action selection using ‘States’ and ‘Events’ from LSL . . . . . . . . . . . . 27

3.2 Expression selection in The Palace . . . . . . . . . . . . . . . . . . . . . . . 29

3.3 The sensors and actuators available as part of the GameLogic package in Blender . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

3.4 A common approach to real time system design architecture . . . . . . . . . 33

3.5 The high-level architecture of the virtual avatar system . . . . . . . . . . . 35

3.6 Finite State Machine representation of the anticipated action selection . . . 37

3.7 The three stages of the wall walk behaviour . . . . . . . . . . . . . . . . . . 39

3.8 Calculation of the exploration direction into the environment . . . . . . . . 39

3.9 The autonomous avatar’s quadrant selection during a chase . . . . . . . . . 41

3.10 The initial design of the interface screens . . . . . . . . . . . . . . . . . 42

4.1 A screenshot of the final Application Menu screen . . . . . . . . . . . . . . 45

4.2 A screenshot of the final Profile Manager screen . . . . . . . . . . . . . . . . 46

4.3 The final environment from the birds-eye camera . . . . . . . . . . . . . . . 47

4.4 The final environment from the third-person camera . . . . . . . . . . . . . 47

4.5 The Implementation FSMs for object action selection . . . . . . . . . . . . 49

4.6 The ‘Ludwig’ character . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

4.7 A complex scene within Blender . . . . . . . . . . . . . . . . . . . . . . . . 55


4.8 A screenshot of the first version of the profile screen . . . . . . . . . . . . . 57

6.1 The radial method of wall detection . . . . . . . . . . . . . . . . . . . . . . 67

6.2 Reynolds (1999) seek and flee movements . . . . . . . . . . . . . . . . . . . 70

6.3 Reynolds (1999) offset pursuit movement . . . . . . . . . . . . . . . . . . . 71

6.4 Reynolds (1999) wander movement . . . . . . . . . . . . . . . . . . . . . . . 71

6.5 Solutions to the user avatar’s global line of sight . . . . . . . . . . . . . . . 73

6.6 High-level specification of the envisaged system . . . . . . . . . . . . . . . . 75

7.1 Sliders and Gauges used to select behaviour levels . . . . . . . . . . . . . 77

A.1 Screenshots of the virtual environment system . . . . . . . . . . . . . . . . . 91

A.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

A.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

A.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94

List of Tables

2.1 Mood table with emotional weightings . . . . . . . . . . . . . . . . . . . . . 14

4.1 A justification of time spent interacting . . . . . . . . . . . . . . . . . . . . 50

5.1 The results of Test Two . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

5.2 A table showing the accuracy of user-to-avatar decision making . . . . . . 61


Chapter 1

Introduction

There is currently a significant commercial focus on the world of avatars, more particularly on intelligent, believable avatars within virtual environments. One of the biggest expansions in the avatar world has taken place on the internet, with numerous virtual environments being created, such as 'Habbo Hotel' (Habbo Hotel, 2006) and 'The Palace' (The Palace, 2000). These environments are communities of virtual avatars that exist to heighten the online communication experience. However, all of these internet-based virtual worlds share common problems, the most significant being insufficient user immersion, caused by a lack of believability in the avatar's actions within the virtual environment.

One aim of this project is to develop a believable avatar that displays the fundamental skills required to interact with other dynamic objects within its environment. Users should feel that they are fully connected to their avatar, thereby developing the necessary sense of identification and believability.

Much work has been done in the field of Computer Graphics to improve the appearance of virtual avatars. However, as Maes, Darrell, Blumberg and Pentland (1995) deduced from the 'Alive' project, great-looking graphics are not necessarily an important factor in making users engage with their avatar. Graphics are only a part of the problem of creating a sense of believability. Other essential aspects of the problem are considered by this project later in this introduction.

Believability goes hand in hand with usability, and usability is particularly reduced by distractions in controlling the avatar. A key factor in minimising distractions is making the avatar partially autonomous, which can be brought about by employing behaviour selection techniques. For the resulting, more sophisticated avatar to offer a greater opportunity to respond to delegated tasks, it must have an 'understanding' of how the human user it is modelling would act. Thereby, given the same situation, the avatar's response would be comparable to that of the human user.

Believability in the virtual avatar also reinforces the user's enjoyment of the experience of online communication. Achieving this can be aided by an increased delegation of tasks to the partially autonomous avatar.

Believability is further dependent upon the consistency of the avatar's responses; this project therefore focuses on modelling human behaviour within virtual avatars in order to achieve this consistency.

Commercially viable virtual environments will likely demand that the sophisticated avatar be capable of being set up by a user of the age of fourteen, but in doing this there should be no loss of believability in the avatar's behaviour. This project discusses these matters and seeks to solve the issue.

The following sections elaborate on the topics mentioned in this introduction and critique the key areas of the behaviour modelling methods that are currently available. No single methodology is tailored to the needs of high-level behaviour specification, and this project therefore uniquely combines the valued elements of current approaches. In so doing, this project defines a new approach to designing believable avatars, essential to creating opportunities for a new generation of virtual environments.

1.1 Contributions

The principal contributions of this dissertation are:

• To facilitate the easy creation of a user-controlled behavioural avatar.

• To create an avatar that displays a visual and behavioural personification. This should be achieved within a virtual environment.

• The user avatar should achieve consistency through the use of appropriate behaviour.

• The avatar should achieve pre-specified goals within the environment.

• The project must consider the overall problem whilst maintaining the project scope.

1.2 Document Structure

Chapter 2 provides a review of the behavioural techniques that have been employed by others in the past to generate believable avatars. If you are interested in existing scripting languages, and a discussion of how they could be utilised to control high-level avatar actions, then Section 2.7 and Section 6.2.6 are respectively the sections to read.

If you are interested in the analysis of the potential technologies to be used in the project then Section 3.1 should be read; for a broader look at the design of the system, read Chapter 3.


If you are interested in the final system and how it was implemented, then Chapter 4 discusses the implementation changes that were made during production and Chapter 7 summarises the critical evaluation of the success of the project with respect to the initial objectives.

Finally, if the area you are interested in is the possible extensions to this project, then the Analysis and Further Work chapter (Chapter 6) critically discusses the techniques used throughout this project, providing suggestions for areas that could be developed further beyond the scope of this project.

Chapter 2

Literature Review

2.1 What are Avatars?

Popovici, Buche, Querrec and Harrouet (2004) define avatars as particular cases of virtual agents: complex entities in a virtual environment that are able to perceive, to decide, and to react based on their internal structure and tasks. This is a good working definition; however, the words used in it require further definition. The term 'perceive' may constitute a level of comprehension which implies the assimilation of subjective knowledge. Comprehension is out of the scope of this project and therefore it is suggested that the phrase 'receive sensory information from the environment' is better suited within the working definition. The decision is made using the sensory data as well as various factors such as experience, but once a decision is made an appropriate action will be selected in order to react. The avatar makes a decision not only from the sensory information from the virtual environment, but also from the dynamic objects within the environment (Yang, Petriu, Whalen and Petriu, 2003).

An intelligent behaviour avatar, as defined by Chen, Lin, Bai, Yang and Chao (2005), serves two purposes: imitating the user's behaviour model (configured by the system to the tailored needs of the user) and improving on the behaviour performance by means of 'reinforcement learning'. This is an AI technique whereby an avatar exploring an environment is given positive or negative feedback from the environment about its course of action.

Their avatar has two modes: an active and a passive mode. The passive mode is defined as the state where the user is online, which the avatar uses to learn the behaviour model of the user in the virtual environment. The active mode is the off-line mode, in which the avatar learns the behaviour required to survive independently in the virtual environment. This methodology effectively subdivides the setup required for an intelligent behaviour avatar into user input (taught) and self-learning (experience). Although the learning factor is not being proposed in this project, Chen et al. (2005) present an interesting approach to modelling the behaviour of an avatar.


A quite different viewpoint is one where, for the most part, the avatar is directed by user control; however, it also receives feedback from the environment and potentially from other avatars within range. One way of looking at this is that the user chooses the actions to be performed by the avatar (high-level goals), but the way the avatar carries them out is chosen by the avatar (Imbert and de Antonio, 2000). This approach could be considered less autonomous than Chen et al.'s (2005) avatar.

The biggest explosion in the use of avatars has been in online communication, with avatars being used across a broad range of age groups for a more 'real' virtual chat experience. An avatar, in the online communication sense, is seen as a perception or projection of the user's mood, used to highlight or strengthen what is being communicated; a simple example of this is the chat phenomenon known as 'smileys'. Many studies have shown that emotions and avatars improve the user's experience and level of interaction, furthering enthusiasm toward global and intercultural communication. But by communicating through such an ambiguous medium as a caricature or smiley the risk, as highlighted by Koda and Ishida (2006), is that a difference in interpretation of communication avatars can lead to misunderstandings. The example given in Koda and Ishida's (2006) paper is that of a 'wide eyed' smiley which in their study was perceived as 'surprised' by Japanese subjects, while the Chinese users interpreted it as 'intelligent'. This indicates that although a personal avatar should possess the 'behaviour' tailored by its user, an 'intelligent' avatar should also be aware of the environment in which it is operating.

Avatars are not just limited to the virtual communities on the internet, and their broadening purpose can be adapted to a variety of situations, as summarised by Yang et al. (2003):

• As virtual models that eliminate the need for 'the real thing', allowing ergonomic evaluation of computer-based designs

• For embedding real-time representations of users or other living participants into the virtual environment

2.1.1 The Difference between Bots and Avatars

There is no one definition for the distinction between bots and avatars; however, a general one proposed by Whalen, Petriu, Yang, Petriu and Cordea (2003) is as follows:

Bot: is an autonomous agent that pursues its own goals

Avatar: is a representation of a human being and is under the direct control of that human being

For the purposes of this document, bots (being wholly autonomous) will not be considered; however, our definition of an avatar will be extended to subsume the humanoid representation, direct control of the avatar, and a restricted set of the autonomous features of bots.


As stated by Whalen et al. (2003), it is highly impractical for the human user to have control over each joint movement; the definition of a high-level set of complex behaviours will be one of the goals of this project.

2.1.2 Intelligent Virtual Avatars

There is some divide in the approach to the development of Intelligent Virtual Avatars (IVA), which centres on the classic form vs. function problem. Those from a more graphics background are concerned with directional control, whereas researchers from AI backgrounds adopt a more autonomous approach. The idea of autonomy is used across a wide range of fields, with equally varying definitions, but for the purposes of this project the definition given by Aylett and Cavazza (2001) will be assumed:

“An autonomous agent has a sense-reflect-act cycle of its own operating in real-time in interaction with its environment”

The sense-reflect-act cycle mentioned in Aylett and Cavazza's (2001) definition is comparable to the perceive-decide-react avatar description given by Popovici et al. (2004).
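To make the cycle concrete, the following is a minimal, illustrative Python sketch of a sense-reflect-act loop. The class names, percepts and actions are invented for this example and are not taken from any of the systems cited above.

# Minimal sketch of a sense-reflect-act cycle for an autonomous avatar.
# All names (Environment, Avatar, the percepts) are illustrative only.

class Environment:
    def __init__(self):
        self.events = ["door_opened", "greeting", None]

    def sense(self):
        # Return the next event visible to the avatar, or None if nothing happens.
        return self.events.pop(0) if self.events else None


class Avatar:
    def reflect(self, percept):
        # Map a percept to a high-level action; a real system would consult
        # goals, emotional state and a behaviour library at this point.
        if percept == "greeting":
            return "wave"
        if percept == "door_opened":
            return "walk_through_door"
        return "idle"

    def act(self, action):
        print(f"avatar performs: {action}")


env, avatar = Environment(), Avatar()
for _ in range(3):                      # one iteration per simulation tick
    avatar.act(avatar.reflect(env.sense()))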

Equally, the concept of people's perception of intelligence has been debated by philosophers and psychologists for over a century; what matters in human-human interaction is the individuals' subjective beliefs about each other, not the objective truth of the interaction (Bailenson, Beall, Blascovich, Raimundo and Weisbuch, 2001) quoted from (Selvarajah and Richards, 2005). Bailenson et al.'s (2001) research found that the way we interact with virtual avatars is comparable to the way we respond to humans, even when the avatar doesn't possess photo-realism. However, Bailenson et al. (2001) went on to identify the main problem with early avatars as being their 'stiff' behaviour.

Non-verbal behaviour is extremely significant for an avatar to act in a realistic manner; this includes behaviour such as facial expressions, raising eyebrows, the movement of the head, and mutual gaze (Cassell and Vilhjalmsson, 1999) quoted from (Selvarajah and Richards, 2005). It was also suggested by Canamero (1997), quoted from (Selvarajah and Richards, 2005), that more realistic interaction between humans and avatars can be achieved when the avatars show emotional awareness, as 'emotions are a distinct characteristic of human-like intelligence'. Although many different aspects comprise one's emotional state (such as tone of voice, volume, body and hand gestures), the most effective method of depicting one's current emotional state is through facial expressions (Selvarajah and Richards, 2005).

The conclusion drawn from these studies is that it is not necessary to provide an avatar with intelligence, merely to give the impression of intelligence by displaying an appropriate emotional state. The studies cited above have shown that doing this leads to better interaction between the human user and the virtual avatar, with the virtual avatar being treated similarly to a human.


2.2 Human Behaviour

From a sociology viewpoint behaviour is considered to be the most basic human action, as it is not directed at one particular person and is therefore beyond definition. However, when it comes to an implementation for a virtual avatar, a definition is required and behaviour patterns need to be established.

The design and implementation of the behavioural rules requires as much work as the modelling phase and is not to be underestimated. The behaviour the avatar exhibits defines the way the avatar is perceived by the user. Ortiz, Oyarzun, Aizpurua, Posada, Center and San Sebastian (2004) break down the general behaviour of an avatar into the following behaviour rules:

• Predefined behaviour rules define the avatar's appearance and personality in its natural state

• Behaviour rules based on the tone of the voice

• Behavioural rules that are affected by the environment the avatar is surrounded by and the interaction it has with the objects within the environment

The behaviour of a personal avatar should be customisable so that each avatar is distinguishable, not just because of its physical features, but because of the way it interacts with its environment and surrounding avatars. It is very important that individual characters behave differently from each other and that their behaviour should be determined by the user who controls them (Gillies and Dodgson, 2004). They observe that users of online worlds are keen to customise the graphical representation of their avatars, and hypothesise that they, the users, would be equally keen to adjust the behaviour of their avatars should such a tool be commonly available.

Gillies and Dodgson (2004) suggest that because controlling behaviour can potentially be complicated, depending on the level of abstraction you work at, two tiers of customisation should be made available: the first for 'expert' users, who may be the original developers of the online virtual world, and the second for 'end-users', who would be able to specify high-level behaviour commands to their avatars. Looking at this from the perspective of the original aims of the project (that a fourteen-year-old child should be able to build their own avatar), this two-tier approach would seem desirable, keeping more complex customisation hidden except from more advanced users.

They also suggest a distinction between two types of behaviour: primary and secondary behaviour. Primary behaviour covers behaviour that is directly controlled by the user, essentially actions formed of primitives that are pre-defined within the system. An example of this is the action of going through a door, formed of turning the knob, opening the door, and walking through. Secondary behaviour is defined as being autonomous and influenced by the primary behaviour. The distinctions between these behaviour types are discussed in the following sections.


2.2.1 Primary Behaviour

The actions of an avatar are often formed of a sequence of existing basic behaviours. What constitutes a basic behaviour will not be discussed in this section but will be covered under the behaviour library section. Traditionally the movement of objects has been created using an inverse-kinematics system, but due to the limited number of motions per action, this often doesn't sufficiently map to the object being targeted. A solution to this (Gleicher (2001) quoted from Gillies and Dodgson (2004)) is described as applying iterative-adaptive techniques to a pre-existing motion to adapt better to the latest position of the target. This re-calculation has a minimal computational cost, but vastly improves the inverse-kinematics technique and the believability of the seek movement. Smoother, more accurate movements will ultimately lead to a greater sense of realism.

When a string of basic behaviours is put together to form a primary behaviour action, the obvious problem is awkward transitions when switching between the basic behaviours in the action. As such, some blending procedure needs to be considered to reduce this effect and move more believably between each stage of the action.

2.2.2 Secondary Behaviour

In the everyday situation of an avatar in a virtual environment, the user should only need to have control over a limited scope of available actions, essentially defining the overall goals of the avatar. However, the avatar should be able to react to any situation that confronts it within the virtual environment; this should include instinctive reactions to sudden events and expressive body language (Gillies and Dodgson, 2004). As discussed previously, semi-autonomous behaviour is not a new concept, but more recently the emphasis has been on non-verbal expressive behaviour.

There is a cross-over between primary and secondary behaviour in that the primary behaviour module should influence the secondary. Using the same door example, the primary action is to open the door, which at first involves turning the handle; but this action may contain messages to be sent to the secondary behaviour module that tell the avatar's head to autonomously 'focus' on the handle. This sort of secondary behaviour runs in parallel with the primary behaviour and creates a greater sense of realism. Alternatively, secondary behaviour might be initiated by the avatar itself without requests from a primary behaviour. A situation where this might be applicable could be where the avatar is standing still within the environment; to make it more realistic the avatar should not be standing like a plank, but instead could be moving slightly on the spot, appearing to chew, looking around, etc.

Target search is another technique that employs semi-autonomous avatar behaviour. The level of pro-activeness would be defined by the user, but its human behaviour analogy would be curiosity. Without input from the user (primary behaviour) the avatar can be designed to 'amuse' itself. Such a system could be created by performing scans of the currently visible environment for any attention requests, which could come from other avatars or from dynamic inanimate objects, e.g. a coke machine. If the request can be interpreted by the avatar (it has the knowledge to deal with the situation) then the avatar can walk over and interact with the object making the request. Similarly, if there are no attention requests within the current vision of the avatar, it can decide to walk on to a new part of the environment, continuing to scan for attention requests as it moves.
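As a rough illustration of this target-search idea, the Python sketch below scans visible objects for attention requests and either interacts with a request it knows how to handle or wanders on. The object types, the interaction table and the 'curiosity' threshold are all assumptions made for this example.

# Illustrative sketch of 'target search': scan visible objects for attention
# requests, respond to a known one, otherwise (perhaps) explore elsewhere.
import random

KNOWN_INTERACTIONS = {"coke_machine": "buy_drink", "avatar": "greet"}  # invented table

def scan_visible(objects):
    """Return the visible objects currently requesting attention."""
    return [o for o in objects if o.get("requesting_attention")]

def target_search(visible_objects, curiosity=0.5):
    for obj in scan_visible(visible_objects):
        action = KNOWN_INTERACTIONS.get(obj["type"])
        if action:                        # the avatar knows how to respond
            return ("walk_to_and", action, obj["type"])
    if random.random() < curiosity:       # nothing of interest: move on
        return ("explore", "new_area", None)
    return ("idle", None, None)

scene = [{"type": "tree", "requesting_attention": False},
         {"type": "coke_machine", "requesting_attention": True}]
print(target_search(scene))               # ('walk_to_and', 'buy_drink', 'coke_machine')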

These secondary behaviours are crucial to developing a believable avatar that displays intelligence (something discussed in a later section). Within the scope of this project it could be considered excessively complex, but is a concept worth considering even if only as a simple implementation or as an extension to the final project.

2.2.3 The Effect of Mimicking Human Behaviour

There has been much debate over the idea that avatars as a whole should have human-like features and mimic human behaviour. After all, we do not look to break down the boundaries between human and machine, merely to ease the communication between the two.

Koda and Maes (1996) devised an experiment to gather information on users' changing perceptions of different avatars as attributes such as gender, humanity, realism and facial expressions were changed. They argued that using a facial representation as an avatar is 'engaging' and makes the user 'pay attention'. The results from their experiment showed that:

1. Personified interfaces help users engage in tasks, and are well suited to an 'entertainment domain'

2. People's impressions of a face in a task are different from those of the face in isolation; and the perceived intelligence of a face is determined not by the agent's appearance but by its competence

3. There is a dichotomy between user groups which have differing views on personification; thus interfaces should be flexible to support the diversity of users' preferences.

The final point is key to understanding what makes an interface successful: to every possible degree, no two users should be considered to be the same; instead each avatar should be customisable so that it can be tailored more accurately to each individual user's needs.

Other research has been done by Walker, Sproull and Subramani (1994), who concluded that adding a face to an interface does not necessarily result in better HCI and in fact requires greater attention from the user. An interface with a face takes a greater amount of effort from the human user as people try to interpret the human images (Takeuchi and Naito, 1995).

From these studies it is clear that the competence of the avatar is just as important as a visually realistic character. In fact the conclusion to be formed is that the two are directly proportionally linked; as the realism of the face increases, so should the level of competence displayed. Otherwise a more life-like visual representation acts only to distract the user, who becomes more anticipatory towards the avatar's actions.

2.3 What is Intelligence?

Intelligence, or at least its concept, dates back to the ancient Greeks, or arguably even further. Famous philosophers such as Plato, Aristotle and Augustine all had different views on intelligence, and indeed to this day there is a lack of agreement among psychologists (Sternberg, 1990).

Again there are many ideas of what constitutes intelligence within the context of an avatar. One such method, proposed by Chen et al. (2005), is that an intelligent behaviour avatar (IBA) acquires behaviour through interactions between the user and smart objects in the virtual environment. Their method uses a Bayesian network with reinforcement learning to join the behaviour decision model and the self-learning model. The agent uses statistical reasoning to make its decisions and learns by experimentation; in effect, a trial-and-error method.

A Bayesian Network is defined as a directed acyclic graph formed of a set of variables and a set of directed links between variables. The probability of each variable is contained in the conditional probability table, which is created by the available action node. The action node refers to the action the variable defines and is formed by a combination of inputs from the virtual environment and the user. The essence of the Bayesian network is that it makes decisions based on these probabilities. By making decisions based on the highest probability the avatar is conceived to have made an educated or intelligent choice as to the correct path to follow. This logical approach to decision making is one of the biggest advantages of artificial intelligence systems but is also seen to be one of their greatest weaknesses.
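The following is a minimal Python sketch of the kind of probability-driven choice described here: the action with the highest conditional probability for the observed inputs is selected. The probability table and input variables are invented for illustration and are not taken from Chen et al.'s (2005) network.

# Sketch of probability-driven action selection in the spirit of a Bayesian
# network node: pick the action with the highest conditional probability
# given the observed inputs. All table values below are invented.

# P(action | user_online, object_nearby) for each candidate action
CPT = {
    ("greet",   True,  True):  0.7,
    ("greet",   True,  False): 0.2,
    ("explore", False, True):  0.6,
    ("explore", False, False): 0.8,
    ("idle",    True,  False): 0.6,
    ("idle",    False, True):  0.3,
}

def select_action(user_online, object_nearby):
    candidates = {a: p for (a, u, o), p in CPT.items()
                  if u == user_online and o == object_nearby}
    # The 'intelligent' choice is simply the most probable action.
    return max(candidates, key=candidates.get) if candidates else "idle"

print(select_action(user_online=True, object_nearby=True))    # greet
print(select_action(user_online=False, object_nearby=False))  # explore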

As well as making the logical choice, Allen, Wallach and Smit (2006) argued that an avatar should be moral and factor ethical calculations into its decision-making process. These so-called AMAs (Artificial Moral Agents) should make decisions that honour privacy, uphold shared ethical standards, protect civil rights and individual liberty, and further the welfare of others. This topic is discussed further in the ethical issues section (Section 2.6). But it is clear that logical thinking by itself does not produce appropriate behaviour in virtual avatars.

2.3.1 Intelligent Behaviour

From a general point of view, intelligent behaviour consists of determining the best sequence of actions to be executed in the virtual world, considering the overall goal of the agent and the current situation in the environment (Aylett and Cavazza, 2001). However this is not easy to model within a virtual environment. Path planning is not a new concept, with heuristic search algorithms such as A* being well developed. A* works by dissecting the virtual environment into a grid of cells; at each cell, the eight surrounding cells provide eight options, with the choice of which cell to move into being made by some distance-based heuristic (e.g. Euclidean distance). But this is not a real-time algorithm and cannot be used within a dynamic environment such as a virtual world; once the path has been planned, it doesn't receive any feedback from the environment, so a situation such as a box being moved into the way could not be dealt with. This method is therefore not suitable for our virtual environment. Adaptations such as using A*-based planning in conjunction with synthetic vision have been suggested (Monsieurs, Coninx and Flerackers, 2000) quoted from (Aylett and Cavazza, 2001). It is very difficult to implement real-time depth calculation in robots; however, with virtual avatars in virtual environments this calculation is already done for you by way of the depth buffer. The avatar can sample the depth buffer and create a 360-degree view area around itself. This approach has the advantage that when the agent moves around the environment the image buffers get updated and the depth buffer is re-calculated, allowing a consistent map of the surrounding area around the avatar to be easily computable. However this approach does not solve the problems of A* path planning; rather it eases some of them.
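For reference, a compact Python sketch of grid-based A* with the Euclidean heuristic described above is given below; the grid, start and goal are invented, and this is the textbook algorithm rather than any navigation code used in this project.

# Textbook A* over a grid of cells with a Euclidean distance heuristic.
import heapq, math

def astar(grid, start, goal):
    """grid: 2D list where 0 = free and 1 = blocked; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: math.dist(c, goal)            # Euclidean distance heuristic
    open_set = [(h(start), 0.0, start, [start])]
    visited = set()
    while open_set:
        _, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for dr in (-1, 0, 1):                   # the eight surrounding cells
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr, dc) != (0, 0) and 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    step = math.hypot(dr, dc)   # 1 for straight, sqrt(2) for diagonal moves
                    heapq.heappush(open_set,
                                   (g + step + h((nr, nc)), g + step, (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))              # a route around the blocked middle row

As the surrounding text notes, this plan is computed once and receives no feedback from the environment afterwards, which is exactly why it struggles in a dynamic virtual world.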

Realistic behaviour cannot be based on procedural animation alone, as human-like virtual avatars have to deal with and adapt to a changing environment (Aylett and Cavazza, 2001). Research done at the University of Pennsylvania has sought to produce a 'comprehensive framework for intelligent virtual agents'. Their aim was to seamlessly integrate sensing, planning and acting, and they broke this down into three control levels:

• AI Planner: based on the avatar's intentions, using incremental symbolic reasoning

• Parallel Transition Networks: type of action formalism

• Sense-Control-Act: a loop that performs low-level, reactive control involving sensor feedback and avatar motion control

The AI planner level deals with the high-level reasoning of the avatar. It is a form of hierarchical planner that only expands leaves containing high-level actions to the degree required, thus making it possible to interleave planning with real-time execution which takes into account the dynamic nature of the environment.

The Parallel Transition Networks (Pat-Nets) are called by the planner level and invoke the lowest-level motion on the avatar. A common problem with a scripting approach, as previously highlighted, is how the transition is made from one basic behaviour to the next. There is also a problem with running action sequences in parallel that in real-life situations could never occur; for example, an avatar can 'walk and chew' but cannot 'walk and run' at the same time. Pat-Nets are a popular method for ensuring the latter situation cannot take place. The principle behind them is that each node represents a process while the edges contain predicates, conditions, rules, and other functions that cause transitions to other process nodes. This has the advantage of providing a non-linear animation technique, in that movements do not need to be laid out linearly along a timeline, and instead can be modified, triggered, or stopped. The Pat-Net method also has parallels with autonomous behaviour since it has interaction and feedback with the environment.
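A hedged Python sketch of the Pat-Net idea follows: each node is a process and each outgoing edge carries a predicate that, when true, causes a transition to another process node. The nodes and predicates are invented, and a real Pat-Net also manages parallel networks and low-level motion, which is omitted here.

# Sketch of a transition network: nodes are processes, edges carry predicates.
class TransitionNet:
    def __init__(self, start, edges):
        # edges: {node: [(predicate, next_node), ...]}
        self.current, self.edges = start, edges

    def tick(self, state):
        # Follow the first outgoing edge whose predicate holds for this tick.
        for predicate, nxt in self.edges.get(self.current, []):
            if predicate(state):
                self.current = nxt
                break
        return self.current

walk_net = TransitionNet(
    start="walk",
    edges={
        "walk": [(lambda s: s["target_reached"], "stand"),
                 (lambda s: s["hungry"], "walk_and_chew")],   # walking and chewing may co-occur
        "walk_and_chew": [(lambda s: s["target_reached"], "stand")],
        "stand": [(lambda s: s["new_target"], "walk")],
    })

print(walk_net.tick({"target_reached": False, "hungry": True, "new_target": False}))  # walk_and_chew
print(walk_net.tick({"target_reached": True, "hungry": False, "new_target": False}))  # stand

Note that no edge ever leads to a combined 'walk and run' node, which is how the network structure itself rules out incompatible parallel actions.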


Finally, the sense-control-act layer relays data back from the environment to deal with unfactored events or objects that affect the avatar. The sense-control-act approach is another form of the sense-reflect-act and perceive-decide-react methodologies proposed by Aylett and Cavazza (2001) and Popovici et al. (2004) respectively. O'Hare, Campbell and Stafford (2005) summarise this quite well, stating that:

“The pursuit of behavioural realism is a laudable goal though one fraught with computational complexity. Many factors influence and enhance the sense of authentic behaviour but above all the ability of the avatar to perceive their environment and act in a manner commensurate with those perceived events”

2.4 Modelling Emotions

As with other definitions in this document, there is not always agreement on one solid definition. However, at least in this case there does seem to be some agreement that there is a key distinction between emotion and feeling, and that the two should not be confused. It is safe to say that emotion is complex, but the defining feature of emotions is that they are unconscious acts manifested as physical expressions. When translating human emotion into some equivalent representation for a virtual avatar, there is again no one implementation method.

The definition proposed by Rolls (1999) is a good approximation of emotion. He proposes that emotions are states elicited by 'rewards' and 'punishers'. A reward is defined as anything we will work to obtain, for instance a hug, the company of a loved one, a large sum of money, etc. A punisher is the opposite, something that we will work to avoid. He stipulates that emotions are formed by the delivery, omission, or termination of a rewarding or punishing stimulus. This is a useful approach when considered from a virtual avatar perspective, in that each dynamic object or action can have a numerically weighted state relating to positive 'rewards' or negative 'punishers', which would have an effect on the virtual avatar's own emotional state (a sliding scale). See Figure 2.1.
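A minimal sketch of this weighting idea is given below: each stimulus carries a signed weight and shifts the avatar's emotional state along a bounded sliding scale. The stimulus names, weights and scale limits are assumptions made for illustration, not values taken from Rolls (1999).

# Sketch of a 'reward'/'punisher' weighting scheme on a sliding emotional scale.
STIMULUS_WEIGHTS = {
    "hug": +3,            # reward
    "found_money": +2,    # reward
    "loud_noise": -1,     # punisher
    "collision": -2,      # punisher
}

def update_emotion(state, stimulus):
    """Apply a stimulus weight and clamp the state to a [-10, +10] sliding scale."""
    state += STIMULUS_WEIGHTS.get(stimulus, 0)   # unknown stimuli have no effect
    return max(-10, min(10, state))

mood = 0
for event in ["hug", "collision", "found_money"]:
    mood = update_emotion(mood, event)
print(mood)   # 3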

However this definition is not flawless; a situation cited in Rolls's (1999) book is that of food on a table. Food itself does not (necessarily) produce an emotional reaction, except when combined with the hungry state, where it is conceivable that some pleasure would be reached by finding food. Rolls (1999) sets out something he calls the reinforcer set, which encapsulates those stimuli he wishes to exclude. I do not agree with this method for dealing with these anomalies. Exclusion is rarely an appropriate solution; trivial cases such as the example given may be dealt with in this manner, however for more serious situations this will not result in the avatar reacting in a realistic, consistent manner. If realism is to be achieved, emotion will have to be cross-referenced with the internal state of the avatar, but such a complication may be out of the scope of this project.

Figure 2.1: Illustration of Rolls's (1999) 'punisher' and 'reward' system

Another method proposed by Selvarajah and Richards (2005), who based their concepts on the work of Oliveira and Sarmento (2003), was to divide their emotional architecture into two components:

• Emotional Elicitation: the process that stimulates and triggers particular emotional reactions

• Emotional Accumulation: describes how conscious the agent is about experiencing the emotion

They focused on seven key emotions for their experiment as they were only interested in the short-term effects of emotion. Using their seven emotions they set up a weighting table which assigned a numerical value that affected the mood of the avatar (see Table 2.1). The environment their avatars were placed into was the scenario of a cocktail party, and users were asked to provide feedback on their emotional state at given intervals throughout the virtual evening. Their own emotion feedback (equating to numbers in the mood table) was then averaged with the feedback of the avatars within their vicinity. In effect the avatar's mood is affected by the general mood within the environment, which would be the case in a real-life cocktail party situation.

This promotes the idea that, as well as every avatar within the virtual environment, the environment itself should carry some emotional state which has an effect on the avatars within it. A useful analogy of this might be the weather within the environment: when it is sunny, humans' moods are happier, and when it is raining their moods are sadder (although this is not always the case). So if the virtual environment carried some emotional weighting this could be factored into the mood of the virtual avatar.

Their approach is an oversimplification of the problem of mapping emotional state, which was adequate for their experimental purposes; however, when trying to model realistic virtual humans, limiting them to a set of primitive emotional states would not be sufficient.


Emotion Type    Emotion      Assigned Weight
Positive        Happiness     6
Positive        Surprise      4
Neither         Neutral       0
Negative        Sadness      −1
Negative        Fear         −2
Negative        Anger        −3
Negative        Disgust      −4

Table 2.1: Mood table with emotional weightings, adapted from (Selvarajah and Richards, 2005)
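As an illustration of the averaging scheme described above, the Python sketch below combines the avatar's own emotional weight from Table 2.1 with the weights reported by avatars in its vicinity; the neighbouring emotions in the example are invented.

# Sketch of mood averaging using the weights of Table 2.1.
EMOTION_WEIGHTS = {
    "happiness": 6, "surprise": 4, "neutral": 0,
    "sadness": -1, "fear": -2, "anger": -3, "disgust": -4,
}

def averaged_mood(own_emotion, neighbour_emotions):
    """Average the avatar's own weight with those of the avatars in its vicinity."""
    weights = [EMOTION_WEIGHTS[own_emotion]] + \
              [EMOTION_WEIGHTS[e] for e in neighbour_emotions]
    return sum(weights) / len(weights)

# A happy avatar surrounded by two sad guests and one surprised one:
print(averaged_mood("happiness", ["sadness", "sadness", "surprise"]))  # 2.0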

Emotions and behaviours are linked. An example of this is the emotional state of happiness, which can lead to the behaviour of a smile. Vinayagamoorthy (2002), quoted from O'Hare et al. (2005), also reiterates this point, claiming that emotional state is a major factor in achieving behavioural realism and going on to state that emotions fluctuate and are related to events. The correlation between human emotion and human behaviour is something that is currently being studied in depth.

2.5 Virtual Environments

There are many applications for virtual environments in the fields of science, engineering, education, and environments (Yang et al., 2003). A virtual environment can contain avatars that can move and interact with other dynamic objects that exist within the environment. Virtual environments can create the illusion of a real world by providing synthetic sensory information to the sight, sound, and touch senses (Selvarajah and Richards, 2005). The objective is to provide the person with a sense of 'presence' and connection to the virtual world. Within a virtual environment, Chen et al. (2005) define two main types of smart object:

• Virtual Humans: the purpose that the avatar or user has within the environment

• Environment Objects: other objects within the virtual environment

This, in my opinion, is an oversimplification; environment objects should be divided into inanimate, non-consequential objects which have no effect on the avatar or the user, and those objects which do impact on the existence of the avatar or user.

An avatar's decision is based on its environment and may become obsolete between the moment the decision is taken and when the action takes place (Popovici et al., 2004). This is a problem that needs to be considered when working on any real-time system; however, methods for dealing with it in the context of intelligent avatars in virtual environments are limited and far too complex for the scope of this project. But an idea worth considering might be assigning a priority to incoming requests to the avatar. This would allow a filter level on the avatar to prioritise incoming requests and ensure the more important functionality is dealt with first. An example of this would be a request to open a door being assigned a higher priority than a request to stop chewing. See Figure 2.2.
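A minimal Python sketch of such a filter level is given below, using a priority queue so that the most urgent request is serviced first; the request names and priority values are invented (here a lower number means more urgent).

# Sketch of a filter level: incoming requests carry a priority and the avatar
# services the most important one first.
import heapq

class RequestFilter:
    def __init__(self):
        self._queue = []

    def submit(self, priority, request):
        heapq.heappush(self._queue, (priority, request))

    def next_request(self):
        # Pop the most urgent request, or None if the queue is empty.
        return heapq.heappop(self._queue)[1] if self._queue else None

filter_level = RequestFilter()
filter_level.submit(5, "stop_chewing")
filter_level.submit(1, "open_door")      # more urgent request
print(filter_level.next_request())       # open_door
print(filter_level.next_request())       # stop_chewing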

Figure 2.2: Illustration of the filter level proposed to reduce pressure on the real-time calculations done by the avatar

2.5.1 IVRS: Intelligent Virtual Reality Systems

The term IVRS, as proposed by Aylett and Cavazza (2001), stands for an 'Intelligent Virtual Reality System'. They propose this distinction for environments that integrate Artificial Intelligence techniques into the virtual environment architecture itself.

Completely integrating AI techniques into a virtual environment is awkward because of the difficulty of developing real-time systems. Various methods are proposed in their paper to overcome this. Constraint Logic Programming (CLP) can be used because of its quick computation time and incremental solution approach, although strictly speaking it is not a reactive technique. Other, more conventional AI approaches can be used for real-time problem solving within virtual environments, such as heuristic repair and local search. However these approaches are still being developed and are not a focus of this project.

The concept of building 'intelligence' into virtual environments has several advantages that are highlighted by Aylett and Cavazza (2001):

• It adds a problem solving layer to the environment

• It builds a knowledge level which supports the high-level processing of the scene itself, especially with natural language processing systems

• It describes causal behaviours within the environment as an alternative to physical simulation


• It strengthens the communication between the user and the environment, allowing for greater determination of adaptive behaviour from the system

Their final point is, in my opinion, the most important and the reason for studying their techniques. By developing better interaction between the user, the avatar and the environment, a more 'realistic' model can be created. Although when the paper was written (2001) the approach was relatively new and there were many unanswered questions, important consideration should be given to treating the environment as globally intelligent, not just the objects and avatars within it.

2.6 Ethical Issues

The creation of an avatar that looks human and mimics human behaviour has raised several issues surrounding the ethics of such work. There is significant economic reward driving the creation of agents that exploit human social drives (Bryson, 2000). In her papers Bryson proposes a code of ethics that covers three main elements:

• Honesty: this refers to honesty amongst developers, who should not seek to coerce consumers into putting equal value on the needs of an avatar and those of human or animal life

• Serenity: despite the first rule, an attachment could conceivably be formed between an avatar and a human user; as such, a means of preserving the 'personality' of the avatar, in the form of saving its developed state, should be provided

• Selflessness: developers should not provide any avatar with a feeling of loss, suffering or awareness of its own termination. The preservation of an avatar should never conflict with the preservation of any human or animal life

These standards affect both the developer and the avatar, and although the doomsday scenario of artificial agents taking control is rightly dismissed as unlikely, the underlying concerns should not be cast aside.

The philosopher Foot (1978), quoted from (Allen et al., 2006), was the first to introduce the 'trolley case', which highlights the problem of how a machine, or indeed a human, rationalises two negative choices, i.e. someone gets run over and killed no matter which track is chosen. How do AI systems have this judgement built into them, who decides which course of action is appropriate, and can it always be considered to be the most appropriate? All these questions have brought about the field of machine ethics (machine morality, artificial morality, or computational ethics), which looks to implement moral decision-making facilities in computers and robots (Allen et al., 2006).

Semi-autonomous avatars already exist that violate ethical standards on a regular basis. The example given by Allen et al. (2006) is that of a search engine that collates data on those that use it, for example what is entered into the search query. Most users are unaware of this going on and might consider the information being put into the search box as private.

An avatar, being a computer program operating in a virtual environment, must consider these ethical issues. Although it is difficult to conceive of a situation where this project's avatar would have to make a life-or-death choice, it is within scope to expect the avatar to hold some influence with the user, especially as emotional attachment is formed. As such, the decisions the avatar makes must be considered ethical and appropriate for the user. At the same time the avatar itself should expect to be developed and used in an ethical manner, in keeping with Bryson's (2000) three ethical standards.

2.7 Scripting Language Approach

Action selection is a common problem amongst AI avatar developers; one method for dealing with it is scripting. Scripting has proved to be a very popular method for allowing developers and authors to control intelligent virtual avatars at a higher level than most animation (Aylett and Cavazza, 2001). An example of an existing high-level control mechanism for character animation is BEAT (Behaviour Expression Animation Toolkit), which uses text input to create behaviour animation in accordance with the behaviour rules set out by the animator (Cassell, Vilhjalmsson and Bickmore (2001), quoted from Kshirsagar, Magnenat-Thalmann, Guye-Vuilleme and Kamyab (2002)). However all such attempts have the same problem in that they rely on pre-defined behaviours and are therefore limited by the depth of their behaviour library.

There are several ways of implementing a virtual avatar, varying vastly in complexity. If the avatar is to be capable only of simple pre-determined movement, a scripting language like VRML (Virtual Reality Modelling Language) may suffice. However, because VRML is not a programming language it cannot cope with more sophisticated animation techniques, for instance where representation of real-time agents is required. These are formed by a combination of separate parts and the connections or transforms that define them. These joints are constrained by the programmer to mimic the constraints on a human subject. For this complicated modelling a neural networks approach is normally implemented (Rigotti, Cerveri, Andreoni, Pedotti and Ferrigno (2001), quoted from Yang et al. (2003)).

Yang et al. (2003) define a three-level avatar control system (See Figure 2.3):

• Joint-level control: the lowest level of control, the restrictions on the avatar's joint movement

• Basic Behaviour / Skills: common humanoid functions such as running, jumping, walking, etc.

• Script Language: task specification is done through 'English-like' instruction commands


Figure 2.3: Illustrates how a generic script language could control, through various levels, an avatar in a virtual environment. Adapted from a Whalen et al. (2003) diagram.


While I agree that it is an advantage to have decoupling between control levels, in order to preserve user-friendliness by hiding complicated low-level details, the restriction here is that tasks can only be carried out by applying a series of behaviours or skills that have already been defined in the middle control level. Their way around this is to employ 'composed behaviours', which are combinations of primary (basic) behaviours in sequence or in parallel.
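The Python sketch below illustrates the idea of composed behaviours: a task is built by combining basic behaviours in sequence or (conceptually) in parallel. The behaviour names are invented examples rather than the middle-level library described by Yang et al. (2003).

# Sketch of composing basic behaviours into a higher-level task.
def compose_sequence(*behaviours):
    def run(avatar):
        for b in behaviours:     # one behaviour after another
            b(avatar)
    return run

def compose_parallel(*behaviours):
    def run(avatar):
        for b in behaviours:     # conceptually simultaneous; interleaved here for simplicity
            b(avatar)
    return run

walk = lambda avatar: print(f"{avatar}: walk")
wave = lambda avatar: print(f"{avatar}: wave")
stop = lambda avatar: print(f"{avatar}: stop")

greet_passerby = compose_sequence(compose_parallel(walk, wave), stop)
greet_passerby("avatar_1")       # walk and wave together, then stop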

2.7.1 AML: Avatar Markup Language

One concept developed by Kshirsagar et al. (2002) is that of a universal avatar markup language. The AML seeks to allow high-level definition of avatar animation to be used in intelligent avatar systems and web-based systems that can be created with 'ease and rapidity'. They chose to use MPEG-4 standard Facial Animation Parameters (FAP) and Body Animation Parameters (BAP) as the basis for their animation. They cite the main advantage of AML as being the high-level interface that does not require intricate knowledge of low-level animation techniques to create and control the avatar.

FAPs are defined in terms of the normalised displacements of feature points from their neutral positions, thus giving the ability to create different expressions. There are 66 low-level and 2 high-level FAPs; low-level examples include stretch right corner lip, raise left inner eyebrow, puff cheek, etc., and the two high-level FAPs are visemes and expressions. The two high-level FAPs can be defined by using the low-level parameters.

BAPs are defined by 296 separate body animation parameters: 186 control full-body animation, while the further 110 parameters are kept for extension animation beyond the usual movement of body parts. The BAPs are provided with 'Degrees of Freedom' (DOF), which are translations or rotations along an axis.

Despite the fact there are a large number of parameters available, it could be argued that this method still limits the human body to a pre-defined set of movements and emotions, which would not be suited to applications that require a higher level of flexibility and believability. However, being a high-level definition means that intricate knowledge of transforms can be reduced to a defined set of natural language commands; ideal for user control of their avatar.

2.7.2 VHML: Virtual Human Markup Language

An alternative language, which has been the subject of an initiative started within a European Union 5th Framework Project, is the Virtual Human Markup Language (VHML). VHML is an XML-based language that allows interactive virtual humans to be controlled such that human to virtual human communication is more productive. It divides the problem of avatar animation into sub-divisions in an effort to modularise the problem and ease usability.

• Emotional Human Markup Language (EML)


• Gesture Markup Language (GML)

• Speech Markup Language (SML)

• Facial Animation Markup Language (FAML)

• Body Animation Markup Language (BAML)

• eXtensible HyperText Markup Language (XHTML) (only a subset is used)

• Dialogue Management Markup Language (DMML)

(Mariott and Stallo, 2002)

VHML has a three-tier architecture; in the middle there are two controlling layers, one for emotions and the other for gestures (see Figure 2.4). The effect of having different modules is to more accurately model 'realistic' human behaviour. Their assumption is that human behaviour is made up of a collection of self-supporting properties such as body language, facial expression, and tone of voice; the virtual avatar should therefore be developed with this in mind.

Figure 2.4: The three-level architecture of the VHML language. NOTE: A fully drawn line implies that the lower level establishes the upper level, while a dashed line shows inheritance.

VHML's main advantage over AML is that it is not restricted to a set number of controls. By modularising the problem it is conceivably easier to add new emotions and gestures as the development process continues; this approach would therefore favour an iterative method, something that is discussed in a later section. It should still be considered a high-level definition since, like XML, it is formed of tags that take a set of parameters and does not require any low-level knowledge of the face/body movement transforms. The parallels between the EML layer and the FAPs given in AML, and between GML and BAPs, are self-evident, but the lower tier in the VHML architecture gives far greater control over the avatar and room for further development.

2.8 BOD: Behaviour Orientated Design

An interesting design methodology proposed by Bryson (2003b) is centred on the idea of designing 'complex' and 'complete' virtual characters. Bryson defines complete agents as self-supporting, in that they can function separately from any system. Further, a complex agent is one that can deal with multiple, possibly conflicting goals and many ways of potentially reaching those goals.

BOD is in part inspired by its namesake, 'Object Orientated Design'. The connection is not limited to the presumption that an object is equivalent to a behaviour, but extends to the modular development strategy of the design itself. However, modularising is not a straightforward process, as the boundaries between modules can often be considered soft; determining on which side of a module boundary a particular piece of functionality lies requires an iterative approach and subsequent refinement.

BOD divides the problem of avatar intelligence into two parts. The first is a behaviour library, which is a collection of actions and sensors related to an environment. The library is normally implemented using a modular approach to allow for extensibility and re-use of libraries across multiple BOD agents.

The second aspect is known as 'Parallel-rooted, Ordered, Slip-stack, Hierarchical' (POSH) reactive plans, which are used to control the action selection of BOD agents. The POSH plan defines the 'personality' of the agent and is normally unique to a BOD agent, although as previously mentioned it may make use of a common behaviour library. Action selection refers to when an action should take place and the business of ordering actions in a sequence to prioritise goals.

The POSH plan links to the behaviour library via 'senses' and 'acts', which are normally represented as function calls to the behaviour library modules. Senses receive information from the environment, while acts make some change within the environment (normally to the BOD agent itself). The POSH plan itself is made up of 'action patterns' (simple sequences), 'competences' (pairs of sensory preconditions and their action consequences), and a 'drive collection' (the root consequence of the action selection hierarchy) (Bryson, Caulfield and Drugowitsch, 2005). The actual order of action expression is affected by the use of competences and the sensory information relayed to them from the environment. BOD handles the situation where multiple actions are possible by assigning priorities to actions; higher-priority actions are executed in preference to those with a lower priority, much like in a basic reactive plan.
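To make the plan structure concrete, the following is a minimal Python sketch of POSH-style plan elements as described above. The class names, senses and acts are invented for illustration; this is not the PyPOSH API, only a reading of the structure.

# A minimal, illustrative sketch of POSH-style plan elements.
class Competence:
    """A prioritised list of (sense, act) pairs: the first pair whose
    sense fires has its act executed."""
    def __init__(self, elements):
        self.elements = elements  # [(sense_fn, act_fn), ...] highest priority first

    def fire(self):
        for sense, act in self.elements:
            if sense():
                act()
                return True
        return False

class DriveCollection:
    """The root of the hierarchy: picks the highest-priority drive whose
    trigger sense is currently true and fires its competence."""
    def __init__(self, drives):
        self.drives = drives  # [(trigger_fn, Competence), ...] highest priority first

    def step(self):
        for trigger, competence in self.drives:
            if trigger():
                return competence.fire()
        return False

# Hypothetical behaviour-library calls (senses and acts) for a toy agent.
hungry = lambda: True
see_food = lambda: False
eat = lambda: print("eating")
search = lambda: print("searching for food")

feed = Competence([(see_food, eat), (lambda: True, search)])
agent = DriveCollection([(hungry, feed)])
agent.step()   # prints "searching for food"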

PyPOSH is a scripting language originally implemented by Kwong (2003) as an adaptation of Bryson's Lisp POSH action selection implementation. The advantages of basing the agent framework on Python are that it is a generally available language that operates at a high level and is loosely typed. It has also been proven to be an ideal language for rapid development, which is particularly important when implementing using an iterative design cycle. Python is object-orientated, which means that it is well suited to the BOD methodology previously discussed.

Using POSH for action planning provides numerous advantages to the developer, perhaps the most valuable being that it allows development of virtual agents to focus on what the agent needs to do rather than on the action plan itself. However, this project will not require such a complex action selection plan because there is only intended to be a restricted set of transitions between each state (for further discussion see Section 3.2).

Although BOD does modularise the behaviours into an overall behaviour library, it does not reduce the problem to the level of abstraction that VHML does. This is in an effort to provide high-level control; however, it is important to keep the distinctions set out in the VHML architecture at the forefront when trying to model realistic human behaviour. Doing this would provide the action selection part of a BOD system with a broad range of acts and senses from which to create a complex action plan.

Designing using the BOD methodology is very much an iterative process, which leads to rapid prototyping and refinement of the specification after each cycle. This approach is well suited to BOD and to this project; as the development process continues, a greater understanding of necessary refinements and of the limitations of current approaches will be gained.

2.9 Summary

This review has looked to cover a broad area of topics around avatar creation with a view to better understanding a complicated research field in which there is still a considerable amount of change and development. It has documented and critiqued some of the existing work on current approaches in order to aid this project's development.

As the research progressed it became clear there was no one agreed definition of what an avatar actually is, with the boundaries between an avatar and other names such as agent or bot being unclear. The first section of this review explored these differences and settled on the term avatar being used to describe a complex entity in a virtual environment that is able to receive sensory information, reflect, and react appropriately. It also became clear that one of the biggest failings of existing avatars is the lack of believability, which results in a reduced sense of belonging for the human users. Techniques for improving the modelling of human behaviour have been discussed (see Section 2.2) and this project will seek to incorporate these improvements into the design.

Many PhDs, or indeed companies, could easily be formed to investigate intelligence in avatars; there are countless methods, and a lot of money is currently being invested in researching and improving methods for intelligent behaviour. However, this is not the main focus of this project. The topics covered in this review have culminated in the understanding that, simply speaking, intelligence is choosing the right course of action at the right time. This project will be looking at intelligence for its influence on behaviour, and will incorporate intelligent behaviour techniques into the design, but because of constraints brought about by complexity this integration will be small and should only be considered as an allusion to potential development.

Emotions are very complex, and although models such as those presented by Selvarajah and Richards (2005) sought to simplify the problem down to a set of primary emotions, this approach is only an approximation. Emotion in this project needs to be considered to be both autonomous and user controlled. Most existing avatars give a sense of emotion by changing the static facial expression on their face; however, emotion is not static and is affected by the environment, dynamic objects within the environment and other avatars. As stated in the review, one of the key factors in achieving believability and promoting a connection between user and avatar hinges on the effective use of emotion.

Various methodologies for achieving realistic behaviour modelling have been discussed in this review; an iterative development approach is well suited to this project, and although the design process will not directly follow the principles of BOD as described by Bryson (2003b), many parallels can be drawn to it.

Throughout the course of the project I will seek to uphold the code of ethics set out by Bryson (2000): honesty, serenity, selflessness. The second of these is the most applicable, as this project has set out to strengthen the sense of identity between the avatar and the human user. As such the avatar should not be treated like any standard application: the profile of the avatar should be saveable and every effort made to ensure it is restorable.

With this review in mind, the design phase of this project needs to reflect what has been learnt in order to facilitate the achievement of the objectives.

Chapter 3

Design

As discussed in the Literature Review, an iterative approach will be used during the project's development life cycle. The following section first gives a critique of technologies that could be used to develop this system, then goes on to describe the envisaged system architecture.

3.1 Consideration of Technologies

It is important when undertaking any new project to evaluate the possible technologies that could be used, in order to decide which packages are not only suitable but practical. The following section breaks the analysis down into the need for a game engine and a suitable scripting language.

3.1.1 Modelling and Game-Engines

Blender

Blender is an open-source, professional-quality 3D animation and modelling package available for Windows, Macintosh and Linux. It offers excellent cross-platform compatibility, a unique user interface whose aim is to improve workflow by being intuitive, and flexibility in importing and exporting objects, scenes and files from other 3D packages. It simulates natural physics such as gravity and friction, and has a powerful, fast lighting implementation. Most importantly for this project, it gives the ability to run scripts written in the Python language that can control the objects and scenes within Blender.

Blender has a GNU General Public License, which means that Blender's source code is completely free for anyone to download, copy, amend, use, or distribute provided they abide by the guidelines laid out in the GPL. These guidelines state that any change to the code must be identified and remain open and freely available. It is not anticipated that any change to the Blender source code will be necessary during this project.

There is now a large Blender community on both the developer and user side, and this means, importantly for those wishing to get into Blender or improve existing skills, that there is a large amount of documentation, including detailed APIs (The Blender Module API, 2006; The Game Logic Module API, 2004), tutorials (The Blender Wiki Tutorial, 2006) and forums. However, based on personal experience of the tutorials, most of the emphasis appears to be on the fixed animation and modelling aspects of Blender, not necessarily the game engine. In that respect the APIs will prove more useful.

Other Similar Animation Modelling Packages

Lightwave: Like Blender, Lightwave is a 3D modelling and animation suite that offers high-quality rendering and utilises a descriptive, icon-free user interface. It uses the C language instead of Python to give low-level scripted control over its features; however, its main difference from other similar packages is that it keeps the modelling and animation in different programs. The modeller is where developers can create 3D objects, and the layout manager is where lighting, animating and rendering take place. These two programs are linked via a TCP/IP connection using the 'Hub' program for synchronous development. Lightwave claim this increases the product's versatility; however, in their recent versions they have been experimenting with bringing the two programs together under one application. Lightwave was considered for this project, however some of the features it provides could be considered too complex for the scale of the project, as well as the cost being approximately £400 for one license.

3D Studio Max: It is one of the most widely used 3D modelling programs available and is used by the games, construction, and film industries. Unlike the others, it has its own scripting language. Its popularity with game developers comes from its capabilities with polygon modelling, where a set of predefined primitives have Boolean operations applied to them in order to make more complex shapes. This method allows for much faster real-time rendering. This product focuses more on the modelling and less on the real-time animation, and would have had to be used in conjunction with another game engine; at around £750 it would be completely unfeasible.

Other Similar Game Engines

Crystal Space: It is in essence a software development kit written in C++ for the creation of 3D applications, primarily focused on gaming. It mainly serves to manage what is being rendered, but can also control facilities like file input/output, natural physics, controller input and GUIs (Graphical User Interfaces). Similar to Blender, it has a free license. The main drawback of Crystal Space is that, unlike Blender, it does not provide game logic as part of the standard application. The same developers have come up with a product called CEL (Crystal Entity Layer) which can accompany Crystal Space, providing the game logic it requires to act as a game engine, but it was considered that this combination, coupled with only a limited amount of documentation, would prove unnecessarily complicated to learn within the time frame of the project.

Delta 3D: This can be considered more of a fully featured game engine which does not target one application such as gaming; instead it is more diverse, looking at modelling and simulation applications in education, training and entertainment as a whole. Like Blender, Delta 3D is open-source, and makes use of many other open-source projects, combining them as modules into its overall functionality. There is a lot of supporting documentation and a good community attached to the product. The scripting language employed is complex owing to the fact that it uses so many external modules. Delta 3D was strongly considered; however, problems with the complexity of setting it up and combining all the linked open-source projects proved too time consuming and led to it being removed from this project's plans.

Summary of Decision

Cost: Blender is an open-source project released under a free GNU General Public License.

Game Engine: Blender has a game engine with object control and game logic attached to the main application.

Scripting Language: Blender uses the Python language as its main scripting language, the advantages of which have been discussed separately.

3.1.2 Online-Communities

Second Life

Nowhere has the world of user avatars expanded further and faster than in the realm of online virtual communities. Second Life is such a 3D virtual community; it is currently inhabited by 5,824,516 people spread over 100 countries across the globe and is growing by 36% a month. It opened to the public in 2003 and has been built and expanded by the residents of the world. Immersion within the environment is not considered a problem, with the average user spending 40 hours per month in the online world. (Facts taken from www.wired.com/wired/archive/14.10/slfacts.html)

Users are able to set up standard accounts for free, and can then upgrade to a subscription if they wish to own some land, either for a virtual home or business. Second Life has its own economy trading in Linden Dollars; however, these can be converted to real money. In fact, a recent news report detailed the story of the first millionaire to have amassed their fortune purely through the trading of goods and services within the Second Life environment.


Second Life claims to be all about 'personal expression', and the main tool they cite for achieving this is their avatar creator. This tool allows users to personalise the appearance of their virtual selves, from body shape to the colour of their socks. However, as discussed in the Literature Review section, having a true visual representation is only part of the problem of achieving realistic behaviour and believability.

LSL, or Linden Scripting Language, is an event-driven internal language whose syntax is broadly similar to C/Java. The language focuses on the two principles of 'states' and 'events'. States can be assigned to any real-life object, such as a light being on or off. Events are the triggers that cause a change of state or for some function to be carried out; a potential restriction of LSL is that the set of events is finite and predefined within the language. Figure 3.1 below illustrates how the two principles of state and event could be applied to the action selection of a virtual avatar. A further downside is the deterministic approach this form of scripting adopts: for instance, there are degrees of happiness, and if you are touched whilst in a happy state you would not always say 'what a lovely day it is' and then shake someone's hand. What about the case where you had been touched before while still in the happy state?

Figure 3.1: An illustration of how the 'state' and 'event' principles in LSL could be used in action selection.
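The state/event principle can be illustrated outside LSL as well; the following is a minimal Python sketch (not LSL code) of state-dependent event handling, with invented state and event names. It also demonstrates the deterministic limitation noted above.

# A minimal Python sketch (not LSL) of the state/event idea described above.
class Avatar:
    def __init__(self):
        self.state = "idle"
        # Each (state, event) pair maps to exactly one handler.
        self.handlers = {
            ("idle", "touched"): self.greet,
            ("happy", "touched"): self.shake_hands,
        }

    def on_event(self, event):
        handler = self.handlers.get((self.state, event))
        if handler:
            handler()

    def greet(self):
        print("What a lovely day it is")
        self.state = "happy"

    def shake_hands(self):
        print("*shakes hands*")

avatar = Avatar()
avatar.on_event("touched")   # greeting, moves to the 'happy' state
avatar.on_event("touched")   # the same event in 'happy' always shakes hands,
                             # illustrating the deterministic limitation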

While it would be interesting to progress further with the design of an autonomous avatar to be launched within the Second Life environment, the restrictions imposed by the Linden Scripting Language make this project's aim of believability difficult to achieve. However, a pairing of Second Life avatars and the BOD methodology proposed by Bryson (2003b) is possible and could achieve interesting results.


Habbo-Hotel

Habbo Hotel is another virtual community that, unlike Second Life, is targeted towards teenagers. The avatars within this environment are referred to as 'Habbos', and are uniquely identified by their name. Similarly to Second Life, personalisation of the appearance of individual Habbos is possible, although behaviour customisation is not. The main function of Habbo Hotel is to allow teenagers to communicate in a safe environment. Chat rooms are divided into public and guest rooms, with the analogy of the hotel being consistent with this; for instance the hotel lobby is a public room, while a hotel room is a guest room. These guest rooms are customisable by the user and take the form of nightclubs, theme parks, hospitals, etc.

Although this environment is targeted towards children and gives an ideal introduction to the field of virtual worlds, very little other than appearance is customisable and there is no supporting scripting language to allow more advanced users to show their creative skills. For these reasons Habbo Hotel is completely unsuited to further development for this project.

The Palace

This online community is quite different to both Second Life and Habbo Hotel. The Palace community is a client/server system operating on individual computers, LANs and over the Internet. You enter a Palace site using your Palace client, which is designed to communicate with any other Palace server. The Palace is an open system, allowing users to create and control much of their own site's content. Using the built-in operator powers provided through Palace tools and the Iptscrae scripting language, users can create their own virtual world and share it with others. Your appearance within the Palace is configurable using the 'Avatar Dispenser', which is a list of possible images to represent yourself. The default avatar is a spherical yellow happy face, and it is possible to change the expression on this face (see Figure 3.2) and its colour (which could be used to indicate mood), and to add props to the avatar. A restricted set of animations, such as bouncing, is possible.

Similar to the LSL language in Second Life, Iptscrae is made up of two parts, 'handlers' and 'events'. Handlers are short programs in their own right, and can be loosely compared to states in LSL (an example handler is the INCHAT handler, which responds to words said by others in a conversation). Events are the same as events in LSL and are triggered by circumstances within the environment (an example event would be CHAT, which occurs when any conversation is initialised).

The problem with the Iptscrae language is similar to that with LSL: events and handlers are hard-coded into the language, and there is no ability to create new events. In addition, the documentation surrounding the Iptscrae language is not complete, and would most likely lead to complications during development. With this solution a lot of emphasis would have to be given to the development of a virtual world in which this project's avatar could be tested; this would detract from the objectives of the project.


Figure 3.2: This figure illustrates the selection of expressions available to the user in the Palace environment, allowing them to customise the expression on their avatar's face.

Summary of Decision

While the development of a virtual avatar using an online community such as Second Life remains a viable option for implementation, because the environment is already set up and a dedicated scripting language currently exists, neither of these features is particularly tailored to the needs of this project. Both Habbo Hotel and The Palace, although providing useful functionality to their own communities, are not really suitable for avatar development with respect to autonomous behaviour. As already stated, a pairing of the Second Life world and the BOD methodology should be considered further, but is outside the scope of this project.

3.1.3 Scripting-Languages

Python

Python is considered to be a multi-paradigm language, which means that it gives programmers the flexibility to adopt a style that best suits them and their application. Python's extensibility makes it ideal for add-ons to existing modules and applications that are not necessarily written in Python. It has a large standard library, which is often seen as one of its biggest strengths. It steers clear of unnecessary syntax in favour of a more structured, clear, one-way approach, claiming to promote higher-quality, maintainable code.

Python is ideally suited to an AI application because of its flexibility and hierarchical modular approach, which allows extensible libraries to be built up, linked by natural-language scripting. In the case of this project Python also holds the advantage that it is the language used for scripting within the Blender environment, and as such can be easily extended by further Python scripts. Also available are tutorials, detailed APIs and reference books to support the learning and development of Python code.

wxPython is a GUI toolkit written for the Python language. It is an extension module to the existing Python language which aims to allow easy creation of stable and fully functioning graphical user interfaces. Like Python it has an open-source license, and is cross-platform, which means a single application created with it can run across Windows, Mac OS, and UNIX systems.

Boa Constructor is an Integrated Development Environment (IDE) written in Python that makes use of the wxPython module previously discussed to generate GUIs rapidly. It allows interfaces to be visually generated using a drag-and-drop approach with automated code generation. A reasonable debugger and a docstring documentation generation facility are also provided. For this project, the GUI layout is only a secondary consideration to the functionality it provides; rapid development techniques such as Boa allow for a sufficient level of complexity without detracting from the goals of the overall project.

Other Potential Scripting Languages

Java: This object-orientated language was developed in the 1990s by Sun Microsystems. It takes a lot of its syntax from the older C and C++ languages but has a simpler object model and a restricted set of low-level controls. One of the objectives when Java was written was to have a language that was platform independent. The lack of low-level control is part of the reason that makes Java less suited to an AI application. Swing is the GUI library for Java, and although it provides most of the commonly required functionality of a GUI through its use of widgets, it has been criticised for not having a consistent look-and-feel, which leads to different rendering on different platforms. Although plug-ins for native look-and-feels are available, these still only emulate the native system and are memory intensive, again leading to slower execution.

C++: This language stemmed from the C language, with the main changes being the introduction of classes, multiple inheritance, overloading, and exception handling. C++ is one of the most commonly used commercial languages and owes its success to providing developers with both a high-level language for rapid development and control over low-level facilities. The fact that C++ is a procedural language and doesn't have a fixed syntax makes it ideally suited to AI applications. As far as GUI creation goes, C++ relies on external modules such as the Microsoft DirectX API, and as the GUI is not the main focus of the project an Integrated Development Environment (IDE) is really a necessity.

Other Potential Integrated Development Environments

Eclipse featuring PyDev: Eclipse is an open-source IDE designed primarily for use with the Java language, although plug-ins exist for most of the common languages. Eclipse has a large supporting community working on over 60 open-source projects with the aim of developing tools, frameworks and runtimes to aid in building industrial-strength software. PyDev is a plug-in for Eclipse that turns it into a fully featured Python IDE; it comes with syntax highlighting, refactoring and debugging tools included. Eclipse is a powerful IDE and PyDev makes it an ideal choice for this project; however, the combination of the two was only discovered some way into the project and therefore switching across was considered an unnecessary distraction.

Stani's Python Editor: This IDE is full of features that aid in rapid Python GUI development, such as auto-completion, syntax highlighting, drag-and-drop, auto-indentation, and context-sensitive help. Its main advantage over other IDEs was its dedicated Blender support, which allowed it to be run interactively within Blender, including an object browser. As standard it also comes with wxGlade, a GUI designer allowing for quick generation of Python GUIs. However, it proved difficult to get hold of the latest stable version, and building the files threw up numerous complications. As ideal as this tool may have been, it proved too difficult to get working.

Summary of Decision

Availability: Python is freely available and there is a lot of support. Both wxPython and Boa Constructor are open-source projects.

Blender: Blender scripts are written in the Python language, so there would be easy compatibility with the GUI.

Popularity: Python is a popular language for use in AI because it is flexible and extensible and provides high-level facilities without restricting low-level control.

3.2 System Architecture

One of the first steps in designing a real-time system is identifying the stimuli and associated responses (Sommerville, 2000). For the purposes of this project these will be known as the sensors and actuators respectively. Within the Blender GameLogic API, descriptions of the built-in available sensors and actuators can be found, detailing the data that can be returned, the effect of the action, and the level of user control over what the sensors and actuators actually do. Figure 3.3 details those available in Blender using a generic real-time system model. Not all of these sensors and actuators will be applicable to the project; the set that is required will be specified by the needs of the objects within the environment.

The handling of sensors in real time requires an architecture that is capable of dealing with an input as soon as it is received, so that in the case of this project the user avatar has the latest information on which to base its actions. A common approach to real-time system architecture would be similar to Figure 3.4. In this model the listener module would continuously check the input ports for any state changes from the sensors. Such a change would result in the listener module sending an event to the event handler module; this is responsible for queueing up and sending on the events to the control system.


Figure 3.3: The sensors and actuators available as part of the GameLogic package in Blender.


However, incorporated into it may be facilities to fast-track higher-priority events and to remove unnecessary events before they take up processing time in the control system module. The control system is responsible for decision making and performs the necessary computations on the sensory information to determine which actuators to call, and with which parameters.

Figure 3.4: This figure illustrates a common approach to real-time system design architecture.

Fortunately, Blender already provides its own form of the listener module and relays any change of state in the sensors to the Python script that is controlling the object. Whilst the advantages of an event handler have been discussed (the priority handling and buffering of information for the control module), a design decision was made at an early stage that, within the scope of this project, the event handler module would not be necessary. The volume of input sensory information and the number of resulting control processes do not warrant an event module. If the project were to be expanded as part of further work, the design and incorporation of an event handler would need to be considered.
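To make the adopted structure concrete, the sketch below illustrates the resulting sense-reflect-act cycle in plain Python, with the listener reduced to a sensor-reading function and no separate event-handler module. All names (read_sensors, Controller, the actuator functions) are hypothetical and are not the Blender GameLogic API.

# A generic, illustrative sketch of the sense-reflect-act cycle described above.
def read_sensors():
    """Stand-in for Blender's listener: returns the current sensor readings."""
    return {"near_object": None, "timer_expired": True}

class Controller:
    """Minimal control system: maps sensor readings straight to actuator calls,
    with no intermediate event-handler module (as decided above)."""
    def __init__(self, actuators):
        self.actuators = actuators

    def step(self):
        readings = read_sensors()
        if readings["near_object"] is not None:
            self.actuators["interact"](readings["near_object"])
        elif readings["timer_expired"]:
            self.actuators["change_state"]("explore")

actuators = {
    "interact": lambda obj: print("interacting with", obj),
    "change_state": lambda s: print("switching to state", s),
}
Controller(actuators).step()   # prints "switching to state explore"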

After considering how the user avatar will receive sensory information and action its decisions through actuators, focus turns to the control system; in the AI world this is often referred to as action selection (the process by which the avatar decides what to do next). Bryson (2003a) discusses in her paper on 'Action Selection and Individuation in Agent Based Modelling' the principle of organising avatar intelligence clearly in order to keep the code clean, ease maintainability, and make models easier to communicate to others. Bryson's (2003a) paper illustrates four approaches to designing avatar action selection, ranging in complexity.

1. Bryson's (2003a) first model is named Environmental Determinism. It is the simplest of the models and can be applied when there is only a small set of mutually exclusive situations, and actions that can be linked to these situations. A common implementation would involve the use of 'if-then' statements which could be enumerated to determine what action should be taken.

2. Finite State Machines (FSM) are based on the principle of enumerating states and transitions. From a programmer's point of view it is easier to base avatars around actions as opposed to events; FSMs look at action selection in terms of the particular action the avatar should exhibit given a particular situation. If there is only a restricted set of transitions between each action, then FSMs are a good way of describing the problem. The main advantage of this approach is that there are no possible transitions that the programmer could have neglected to spot in the implementation. However, the disadvantage is that the code can quickly become quite intricate, and requires thorough use of informative commenting.

3. The next model is given the term Basic Reactive Plan (BRP) by Bryson (2003a) and is in essence a prioritised list of actions that allow an avatar to achieve a specified goal. It operates on the premise that if an avatar can perform the top action, then it does not need to execute any other. A BRP is a hierarchy of situations mapped to actions. The most common implementation of a BRP is using a switch statement, however it is possible (if a little untidy) to use nested 'if-then' statements (a sketch of this idea is given after this list).

4. The most complex action selection model that could be implemented for use in this project is called a POSH reactive plan, POSH being an abbreviation of 'Parallel-rooted, Ordered, Slip-stack Hierarchical' reactive plan. This model deals with the situation where avatars have more than one goal and where some actions require sub-actions to be completed. Bryson (2003a) states there are three types of stimuli that need to be considered: those that need to be checked continuously, those that very rarely need to be checked, and those that only need to be checked given certain circumstances. Her POSH reactive plan deals with these different cases by having three components: drive collections, action patterns, and competences, which map respectively to the stimuli types above. For more details on POSH action selection, see Section 2.8.
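The following is a minimal Python sketch of a Basic Reactive Plan of the kind described in model 3, assuming a toy goal of interacting with an object; the preconditions and actions are invented for illustration.

# A sketch of a Basic Reactive Plan: a prioritised list of (precondition, action)
# pairs where only the highest applicable action fires on each cycle.
def basic_reactive_plan(avatar, plan):
    """plan: list of (precondition, action) pairs, highest priority first."""
    for precondition, action in plan:
        if precondition(avatar):
            action(avatar)
            return  # once the top applicable action runs, nothing else executes

plan = [
    (lambda a: a["touching_object"], lambda a: print("interact")),
    (lambda a: a["object_in_range"], lambda a: print("approach object")),
    (lambda a: True,                 lambda a: print("explore")),
]

avatar_state = {"touching_object": False, "object_in_range": True}
basic_reactive_plan(avatar_state, plan)   # prints "approach object"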

From analysing these different action selection models it is apparent that the design approach most suited to this project's needs is the Finite State Machine model, because there is only a limited set of actions and transitions which can be described deterministically. Adopting a Finite State Machine approach avoids making the action selection unnecessarily complex, which is consistent with what Bryson (2003a) states in her paper: 'modellers should use the simplest mechanism possible'.

This project will look to implement an adapted version of Aylett and Cavazza's (2001) sense-reflect-act cycle, which was described in Section 2.1.2, where the 'reflect' part represents action selection. In order for the user avatar to have interests within the environment, a set of other objects must be created for it to interact with. The objects to be used for this project are a hover pad, a colour pad, a lighting box, and another autonomous avatar (not user controlled). The type of behaviour exhibited will vary depending on which object is being interacted with; however, each object should be viewed as possessing certain common characteristics.

The high-level system design in Figure 3.5 reflects the need for common characteristics by adopting a modular approach. Each object is given access to the 'common functions' module, which within itself has access to the 'sensor' and 'actuator' modules. Each object will need to have its own 'object module' which will control all decision making for that particular object. In every object module an instance of the common functions module will be created upon initialisation, which will take as a parameter the object the functions are to be applied to. Within the common functions module itself, the object passed as a parameter at initialisation is used again as a parameter to create an instance of the sensor and actuator modules. It is important to be clear that each object module has its own instance of the common functions, sensor and actuator modules; if another object is added subsequently it too will have access to all the functionality provided by these three modules. This approach does mean every object has the ability to perform any function that any other object can, but in practice the object's functionality is restricted by the individual object module design. The system model illustrated in Figure 3.5 also has clear strengths for ensuring extensibility, and allows the project to be implemented in phases (see Section 4.1). A sketch of this instantiation pattern is given after Figure 3.5.

Figure 3.5: This figure illustrates the high-level architecture to be implemented for the virtual avatar system.
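The sketch below illustrates the instantiation pattern just described: each object module owns its own instance of the common functions module, which in turn owns its own sensor and actuator instances. The class names are invented for illustration and are not the project's actual module names.

# An illustrative sketch of the modular pattern in Figure 3.5.
class Sensors:
    def __init__(self, obj):
        self.obj = obj
    def near(self, other, radius):
        # stand-in for a Blender proximity sensor; pretend the avatar is in range
        return True

class Actuators:
    def __init__(self, obj):
        self.obj = obj
    def apply_force(self, x, y):
        print(f"{self.obj}: force ({x}, {y})")

class CommonFunctions:
    """Created per object, passing the object on to its own sensor and
    actuator instances."""
    def __init__(self, obj):
        self.sensors = Sensors(obj)
        self.actuators = Actuators(obj)

class HoverPadModule:
    """One 'object module': owns its own instance of the common functionality
    and restricts which parts of it are actually used."""
    def __init__(self):
        self.common = CommonFunctions("hover_pad")

    def update(self, user_avatar):
        if self.common.sensors.near(user_avatar, radius=2.0):
            self.common.actuators.apply_force(0, 0)  # hold the avatar in place

HoverPadModule().update("user_avatar")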

3.3 User Avatar Design

With a decision made to implement the virtual avatar system using the Blender game engine and modelling environment, consideration needs to be turned to the user avatar itself. The user avatar modelled using the Blender environment should be able to action and display the behaviours necessary to conduct itself appropriately within the created virtual environment. The user avatar design needs to take into consideration the user profile file that will be used to specify the user parameters that control the behaviour of the user avatar. Further discussion of the user avatar can be found in Section 4.3.1.

(Throughout this dissertation, any reference to the user avatar refers to the avatar whose behaviour is governed by the profile file set up by the user.)


3.4 Other Object Design

It is important that within the environment the user avatar has other 'objects' with which to interact. Otherwise there would be very little reason to be within the environment, and the user avatar would become 'bored' quickly. It is only through the interaction with these other objects that the user avatar itself can display a sense of coherency and believability in the actions it performs. Below is a summary of the objects chosen to be implemented for this project and a justification for their choice.

Colour Pad: The role of the colour pad is to change colour to suit its mood (i.e. bright, high-energy colours would represent positive moods). When the user avatar sits on the colour pad, its form of interaction would be to copy the behaviour of the colour pad; it would do this by changing colour itself to match the colour produced by the pad. We are all affected by the moods of the people around us, and one of the primitive forms of behaviour exhibited by humans from an early age is the ability to mimic the actions of those around them. Displaying this primitive behaviour within the system will illustrate not only that the user avatar has awareness, but also that it is able to reflect the behaviour of the objects surrounding it.

Hover Pad: The purpose of the hover pad is to show that a change in the user avatar's environment has an effect on the user avatar itself. The idea is that when the user avatar is drawn onto the pad, it will begin to appear to hover, which is a result of the environmental gravitational force being changed, giving a sense of weightlessness. Environmental conditions change around humans all the time, and their moods and actions are affected as a result; this is what the hover pad seeks to illustrate.

Light Box: Another proposed object is a light box. This will be a model of some form of box that the user avatar can enter. When it does so, the light within the box will come on, illuminating both the box and the user avatar. This illustrates the reverse situation to the hover pad, where the case now is that the user avatar is affecting the environment; its presence is enough to trigger the light to come on. Humans continuously look to alter their environment, be it switching on the light when they walk into a room or turning the air conditioning on in the car on a hot day.

Autonomous Avatar: The final object to be implemented will be a second avatar (the autonomous avatar) which will have its behaviour set by the environment rather than by the user. The main goal of the autonomous avatar will be to satisfy its own interests, but it will also look to avoid confrontation with bigger avatars. In this project scenario the user avatar will always be bigger than the autonomous avatar, and therefore the autonomous avatar will always look to run away from the user avatar. One of the objectives for the autonomous avatar is for the user to have a degree of empathy with its situation; whether or not it is justified in its action of moving away from the user avatar. Further consideration needs to be given to what happens when the user avatar catches the autonomous avatar.

(Throughout this dissertation, any reference to the autonomous avatar refers to the avatar that exists within the environment over which the user has no control.)

3.5 Action Selection Model

The decision reached in Section 3.2 was that the appropriate action selection representation would be a Finite State Machine (FSM). Figure 3.6 illustrates the design of the action selection to be implemented in this project using an FSM representation. The initial state is 'wall walk', where the user avatar is walking round the wall; the transition from this state to the explore state is to be triggered by time. Whilst the user avatar is in the 'explore' state, it is wandering round the environment looking for interest. It returns to the 'wall walk' state if it exceeds the time the user avatar believes it should be exploring for. Otherwise, if it gets within its proximity range of any of the other objects, interaction with the triggered object begins. This continues in all cases until the time the user avatar wants to interact with that particular object is exceeded. The only exception to this rule is when the user avatar catches the autonomous avatar, in which case the user avatar returns to the 'wall walk' state. A sketch of this state machine is given after Figure 3.6.

Figure 3.6: This figure is a Finite State Machine representation of the action selection to be implemented in this project.
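A minimal Python sketch of the state machine in Figure 3.6 is given below. The state names follow the text; the timing thresholds and the proximity test are hypothetical placeholders for values that would come from the user profile and the Blender sensors.

# A sketch of the FSM in Figure 3.6.
import random

class AvatarFSM:
    def __init__(self, explore_time, interact_time):
        self.state = "wall_walk"
        self.explore_time = explore_time
        self.interact_time = interact_time
        self.timer = 0

    def step(self, nearby_object=None, caught_autonomous=False):
        self.timer += 1
        if self.state == "wall_walk" and self.timer > random.randint(5, 15):
            self.state, self.timer = "explore", 0       # time-triggered transition
        elif self.state == "explore":
            if nearby_object is not None:
                self.state, self.timer = "interact", 0  # proximity triggers interaction
            elif self.timer > self.explore_time:
                self.state, self.timer = "wall_walk", 0
        elif self.state == "interact":
            if caught_autonomous or self.timer > self.interact_time:
                self.state, self.timer = "wall_walk", 0
        return self.state

fsm = AvatarFSM(explore_time=20, interact_time=10)
for _ in range(30):
    fsm.step()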


3.6 Avatar Behaviour

This section details some of the important functionality that will need to be implemented so that the user avatar and other objects can perform consistently within the environment. It is important to state that while the architecture (Section 3.2) is modular and all the functionality will be available to all the objects, the functions below are tailored to the behavioural needs of the user avatar.

Wall Walk

Aim: The aim of the wall walk behaviour is to provide the user avatar with the ability to navigate around the perimeter of the environment, where it is considered the user avatar will be safe.

Method:

1. If the user avatar is in the middle of the environment somewhere (i.e. away from the wall), then calculate the closest wall to the user avatar's current position and move towards it.

2. If it is the first wall the user avatar has hit since starting the wall walk behaviour, then make a random decision as to which way along the wall the user avatar will move, and then apply the necessary force.

3. If it is NOT the first wall to be hit, then ascertain the name of the newly hit wall and check which wall was previously hit; if they are different then this information uniquely identifies the current location of the user avatar within the rectangular environment. The sequence from the old wall to the new wall indicates the direction the user avatar is travelling in, and from the direction of travel and the current location the next direction of travel and required force can be deduced (a sketch of these calculations follows Figure 3.7).

Figure 3.7: This figure illustrates the stages at which calculations are needed whilst running the wall walk behaviour.
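The following sketch illustrates the wall walk calculations above, assuming a square room of half-width 10 centred on the origin; the wall names and the clockwise convention are illustrative, and the real system relies on Blender's collision sensors rather than explicit geometry.

# An illustrative sketch of the wall-walk decisions above.
import random

def closest_wall(x, y, half=10):
    """Step 1: find the wall nearest the avatar's current position."""
    distances = {"left": x + half, "right": half - x,
                 "bottom": y + half, "top": half - y}
    return min(distances, key=distances.get)

# Walls listed in clockwise order: hitting 'left' and then 'top' means the
# avatar is travelling clockwise, and so on.
CLOCKWISE = {"left": "top", "top": "right", "right": "bottom", "bottom": "left"}

def wall_walk_direction(previous_wall, new_wall):
    """Steps 2 and 3: random choice on the first wall hit, otherwise deduce
    the direction of travel from the old/new wall pair."""
    if previous_wall is None or previous_wall == new_wall:
        return random.choice(["clockwise", "anticlockwise"])
    return "clockwise" if CLOCKWISE[previous_wall] == new_wall else "anticlockwise"

print(closest_wall(-8, 3))                    # 'left'
print(wall_walk_direction("left", "top"))     # 'clockwise'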

Explore

Aim: The explore behaviour allows the user avatar to wander within the environment with the objective of finding something 'interesting' to interact with.

Method:

1. A random x and y value needs to be calculated to provide the user avatar with the direction and speed (velocity) of travel. The range of the x and y values to be randomly calculated is set by parameters provided in the user profile.

2. Determine which wall the user avatar is currently on, and return a pair of signs that ensure movement in the opposite direction to that of the wall, e.g. ( '+' , '-' ). In the example given in Figure 3.8 the user avatar is on the left wall; it will always want to move in the positive y direction, however it can move in the positive or negative x direction, so this decision is made randomly. The pair of signs returned in this case is either ( '-', '+' ) or ( '+', '+' ).

3. Apply the appropriate force to the user avatar using the generated x and y values paired with the calculated sign values respectively (a sketch of this sign-pair calculation is given after Figure 3.8).

Figure 3.8: This figure illustrates how the direction away from the current wall is calculated to allow exploration of the environment.
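Below is a sketch of the sign-pair calculation referred to in step 3, mirroring the left-wall example of Figure 3.8; the ranges and sign convention are otherwise hypothetical.

# A sketch of the explore step: random speed within profile-supplied ranges,
# with signs constrained so the avatar moves away from its current wall.
import random

def explore_force(current_wall, x_range, y_range):
    x = random.uniform(*x_range)
    y = random.uniform(*y_range)
    if current_wall == "left":                       # as in the Figure 3.8 example:
        y = abs(y)                                   # always positive y,
        x = x if random.random() < 0.5 else -x       # x sign chosen randomly
    # ... analogous sign rules would follow for the other three walls
    return x, y

print(explore_force("left", x_range=(1.0, 3.0), y_range=(1.0, 3.0)))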

Hover

Aim: To illustrate the effect the environmental conditions have on the user avatar by getting the user avatar to appear to hover over a pad where the gravitational forces are different to those in the rest of the environment.


Method:

1. Calculate the current location of both the user avatar and the centre of the hover pad.

2. Use the difference of the two locations to create a vector that provides the new force that needs to be applied to the user avatar in order to move it from its current position to the centre of the hover pad. This process should be iterated until the location of the user avatar and the centre of the hover pad match, at which point all force on the user avatar should cease (see the sketch after this list).

3. Make the user avatar track towards a point above the hover pad set by the location of an 'empty' object (an object that exists in the scene but has no geometry and is not visible).

4. Once the user avatar has hovered for as long as it has interest in doing so (set by parameters from the user's profile), the user avatar should appear to burst (due to the altitude pressure) and return to the surface of the environment.
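A sketch of the iterated force calculation in step 2 is given below; the gain and tolerance values are illustrative only.

# A sketch of the force vector pulling the avatar towards the pad centre.
def force_towards_pad(avatar_pos, pad_centre, gain=0.1, tolerance=0.05):
    dx = pad_centre[0] - avatar_pos[0]
    dy = pad_centre[1] - avatar_pos[1]
    if abs(dx) < tolerance and abs(dy) < tolerance:
        return (0.0, 0.0)          # on the pad: all force ceases
    return (gain * dx, gain * dy)  # otherwise pull towards the pad centre

print(force_towards_pad((3.0, 1.0), (0.0, 0.0)))   # (-0.3, -0.1)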

The Autonomous Avatar

Aim of the User Avatar: When the user avatar notices the autonomous avatar (it is within the curiosity range of the user avatar, set by the user profile), the user avatar wants to chase the autonomous avatar with a view to catching it.

Aim of the Autonomous Avatar: The autonomous avatar feels threatened by the larger user avatar and therefore its aim is simply to avoid confrontation.

Method:

1. The autonomous avatar will carry out its own interests within the environment while it has not noticed the user avatar.

2. When the user avatar moves within interest range of the autonomous avatar, the user avatar should stop its exploration of the environment and track the autonomous avatar with a view to catching it. The autonomous avatar's range of visibility is less than the user avatar's, and therefore there should be a slight delay before the autonomous avatar notices the user avatar is tracking it and moves away.

3. The direction the autonomous avatar moves in is calculated by determining which quadrant the user avatar is in, in relation to the autonomous avatar's position. A vector using a pair of signs, similar to those used in the explore behaviour, is produced to move the autonomous avatar into the opposite quadrant from the user, with a randomly generated speed (the range of the random speed is calculated from parameters provided in the user profile). The only exception to this rule should be when the autonomous avatar hits a wall, in which case it moves away from both the wall and the user avatar. See Figure 3.9 (a sketch of this quadrant calculation is given after the figure).


4. Once the user avatar has got bored of chasing the autonomous avatar, or has caught it, the user avatar returns to walking the walls, and the autonomous avatar will find its way back to the location it was chased from and then continue pursuing its own interests.

5. If the autonomous avatar is caught it will disappear; a new autonomous avatar will then be created at the original starting location and will resume pursuing the autonomous avatar's own interests.

Anticipated Problem: When the autonomous avatar is being chased by the user avatar and its calculated route results in it finding a corner, there is a risk that the calculated direction would bring the autonomous avatar's path back toward the user avatar. This problem should be investigated during the Implementation phase of the project.

Figure 3.9: This figure illustrates the difference in the autonomous avatar's quadrant selection, for its next movement, between when it is in the open scene and when it has hit a wall.
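The following sketch illustrates the quadrant calculation referred to in step 3: the sign of the user avatar's offset is negated on each axis so that the autonomous avatar flees into the opposite quadrant, with a randomly generated speed. The speed range is hypothetical.

# A sketch of the quadrant-based flee direction.
import random

def flee_velocity(auto_pos, user_pos, speed_range=(1.0, 3.0)):
    dx = user_pos[0] - auto_pos[0]
    dy = user_pos[1] - auto_pos[1]
    # Opposite quadrant: negate the sign of each component of the user's offset.
    sign_x = -1 if dx > 0 else 1
    sign_y = -1 if dy > 0 else 1
    speed = random.uniform(*speed_range)
    return sign_x * speed, sign_y * speed

# the user avatar is up and to the right, so the autonomous avatar flees down-left
print(flee_velocity(auto_pos=(0, 0), user_pos=(4, 2)))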

3.7 Profile Interface Design

As well as designing the user avatar and the environment in which the avatar will exist, consideration needs to be given to the means by which the user will input their profile: the profile interface. The GUI will not only deal with the inputting of the user profile, but will also serve as a platform from which to initiate the environment (see Figure 3.10).

Upon loading the user avatar system, the 'Application Menu' screen will be loaded. The main functionality behind this screen will be to guide the user through the process of building their user avatar. The first step will be to click on the button object labelled 'Profile Manager', which will load a new screen, referred to as the 'Profile Screen'.

The 'Profile Screen' will use sliders to provide the varied behaviour parameters required to describe the high-level personality of the user creating the profile. The varied range of the parameter values should allow the profile file to be unique to the human user creating the profile; this uniqueness will be mirrored in the actions of the user avatar within the environment.


Once the behaviour profile has been generated it will need to be saved to disk. This will be done through a button object, loading a 'save file' dialog box and permitting the user to save their profile in the current (profile) directory. Another button object on the profile interface will need to be created for opening an existing profile; when clicked, this will load an 'open file' dialog, set up such that the profile folder is the current directory.

After the profile to be used is on screen and has been saved, the user will need to click on a further button object called 'Load'. This will restore the 'Application Menu' screen and load the profile for use when launching the environment. The only remaining button object to be pressed will be the 'Run Environment' button, which will launch the Blender environment with the user profile that has been supplied. Once the environment has completed, the scene will close and the application menu will be returned. A sketch of how such a profile screen might be built is given after Figure 3.10.

Figure 3.10: This figure is the initial design of the interface screens.
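As an indication of how the Profile Screen could be realised with wxPython, the sketch below builds three behaviour sliders and a save button that writes the values through a 'save file' dialog. The widget layout, parameter names and file format are hypothetical and are not the project's actual interface.

# A minimal wxPython sketch of the Profile Screen idea.
import wx

class ProfileScreen(wx.Frame):
    def __init__(self):
        super().__init__(None, title="Profile Manager")
        panel = wx.Panel(self)
        sizer = wx.BoxSizer(wx.VERTICAL)
        self.sliders = {}
        for name in ("exploration", "boredom", "curiosity"):
            sizer.Add(wx.StaticText(panel, label=name.capitalize()), 0, wx.ALL, 4)
            slider = wx.Slider(panel, value=50, minValue=0, maxValue=100,
                               style=wx.SL_HORIZONTAL | wx.SL_LABELS)
            self.sliders[name] = slider
            sizer.Add(slider, 0, wx.EXPAND | wx.ALL, 4)
        save = wx.Button(panel, label="Save")
        save.Bind(wx.EVT_BUTTON, self.on_save)
        sizer.Add(save, 0, wx.ALL, 4)
        panel.SetSizer(sizer)

    def on_save(self, event):
        # Write each slider value as a name=value line in a profile file.
        with wx.FileDialog(self, "Save profile", wildcard="*.profile",
                           style=wx.FD_SAVE) as dlg:
            if dlg.ShowModal() == wx.ID_OK:
                with open(dlg.GetPath(), "w") as f:
                    for name, slider in self.sliders.items():
                        f.write(f"{name}={slider.GetValue()}\n")

if __name__ == "__main__":
    app = wx.App()
    ProfileScreen().Show()
    app.MainLoop()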

Chapter 4

User-Centred Avatar System

The aim of this chapter is to explain the implementation life cycle of the project, to give a detailed description of the final system and to provide a rationale for all the key decisions taken during development.

4.1 An Iterative Phase Development

The development of this project was separated into implementation phases to ensure that the objectives of the system could be realised. After the goals of each phase were achieved, critical analysis techniques were employed to match the created system to the original objectives. Each phase outlined below builds on the work done in the previous phases, so in addition to looking at each phase individually, an overall assessment of the solution delivered thus far was required whenever a new phase was completed. This sometimes led to amendments needing to be made to the previous phases, something that is discussed in Section 4.3. Adopting an iterative phase approach to development has parallels with the style of the BOD methodology proposed by Bryson (2003b), as discussed in Section 2.8.

Phase One involves making the user avatar realise its own position within the virtual environment and the relationship of that position to the boundaries of the environment, getting the user avatar to move towards the closest boundary and then to follow the perimeter until an interrupt is received. This is a classical AI (Artificial Intelligence) problem which demonstrates that an avatar both has an awareness of the environment it is in and has the freedom to explore it. Awareness and freedom are human qualities that an avatar must demonstrate to give a sense of behaviour consistent with the actions that would be taken by a human.

Due to the set of sensors provided by the Blender game engine, the user avatar will only operate using a very simple wall walk path planner, which is based on the principle of moving forward until the avatar hits something. Many more complicated path planners, such as the popular A* algorithm, could have been chosen for implementation; however, the study of path planning could quite easily be a dissertation subject in its own right. Path planning is an area within AI that continues to have a strong emphasis on development. More detailed discussion of A* and the path planning problem itself is given in Chapter 6 of this dissertation.

Phase Two expands on the boundary walk implemented in the first phase by allowing the user avatar to explore within the bounds of the environment, in essence moving away from the safety of the wall. The reason for the user avatar's exploration is based on the idea that a human user would have a varied degree of curiosity in the surrounding environment and an equally varied desire to satisfy that curiosity. The frequency and duration of the exploration are to be calculated based on parameters provided by the human user in their profile; this will enable the user avatar to mirror how the human user would behave given the same circumstances.

Phase Three involves the development of other objects of interest within the environment with which the user avatar should be able to interact in a variety of ways. The four objects within the environment are to be the light box with lighting effects, the hover pad, the colour pad and the autonomous avatar. The exploration element to be implemented in phase two allows the user avatar to move away from the wall and as such come into contact with these other objects within the scene. Each user avatar will be initialised with a level of interest which is specified in the profile file from the human user. If an object falls within an area of interest around the user avatar, the avatar will want to interact with that object, on the condition that it has not already done so in the recent past or on too many previous occasions.

The other objects within the environment are intended to be examples of avatar interaction techniques that can be modelled in a virtual environment and to demonstrate the principle of an 'area of interest' for avatars. The disadvantages and limitations of this simplification are detailed later.

Phase Four moves the emphasis away from Blender and onto the front-end profile application. This is going to be created using Python, including the wxPython module (see Section 3.1.3), and will provide a GUI through which the human user can load, amend and save their profile. A profile file should allow the user to control the high-level behaviour characteristics of their avatar within the environment. The simplicity of the profile setup and the varied response the user avatar exhibits from it is at the forefront of this dissertation. However, this project should not be considered to be an exercise in delivering good HCI (Human Computer Interaction) principles, and therefore the look and feel of the GUI will concentrate on ease of use and simplicity of understanding.

The specification of behaviour being kept at a sufficiently high level, such that a child could understand and control it, is one of the key objectives of this project. How the interface achieves this is an important consideration; however, creating a 'fun' interface as described by Read (2005), seen as being one of the main focuses for children using interfaces, is not a primary objective for this project. Discussion of decisions made during the interface's development is given in Section 4.3.6.

4.2 The Final Solution

During each phase of the implementation, changes had to be made to accommodate technical restrictions that occurred during the system development process. As such, the final user-centred avatar system differs from the initial design description.

Figure 4.1: This figure is a screenshot of the final Application Menu screen

The application menu (Figure 4.1) is the first screen the user is presented with when they run the system. This screen lists the steps required to create a user profile, and to launch a user avatar into the environment with that profile governing its behaviour. By selecting the 'Profile Manager' button a new screen appears (Figure 4.2) which allows the user to create a new profile or amend an existing profile.


Figure 4.2: This figure is a screenshot of the final Profile Manager screen


From the profile screen the user can adjust the levels of their avatar's behaviour. This is done using sliders, and visualised through the gauges next to each slider. The behaviours that are adjustable are: exploration level, boredom level, and curiosity level. Any variation of these three values will result in the user avatar behaving differently within the final environment. Once the user is happy with the values they must save their profile, using the 'Save' button, which will save the user profile into the profile directory. At this point the 'profile to load' box will be populated with the profile name; the user can then click the 'Load Profile' button to specify their desire to use this profile for the user avatar's behaviour.
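As an illustration of this profile mechanism, the following sketch shows how the three behaviour levels might be written to and read from a file in the profile directory; the directory name, file extension and key=value format are assumptions for illustration rather than the project's actual format.

import os

PROFILE_DIR = 'profiles'   # assumed location of the profile directory

def save_profile(name, exploration, boredom, curiosity):
    """Write the three behaviour levels (0-100) to <name>.profile."""
    path = os.path.join(PROFILE_DIR, name + '.profile')
    with open(path, 'w') as f:
        f.write('exploration=%d\n' % exploration)
        f.write('boredom=%d\n' % boredom)
        f.write('curiosity=%d\n' % curiosity)
    return path

def load_profile(path):
    """Return the behaviour levels as a dictionary of integers."""
    levels = {}
    with open(path) as f:
        for line in f:
            key, value = line.strip().split('=')
            levels[key] = int(value)
    return levels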

Loading the profile returns the screen to the application menu. The second option is to select which camera perspective the user avatar will be viewed from in the environment; the reason behind the need to choose a camera is discussed in Section 4.3.2. Finally, with the current profile box populated since the user's return from the Profile Manager, the environment can be launched using the 'Run Environment' button.

Figure 4.3: This figure illustrates the final environment as seen from the birds-eye camera.

Figure 4.4: This figure illustrates a typical view of the final environment as seen from the third-person camera.

At this point, the environment created in Blender is launched as an executable, running through an application called 'blenderplayer.exe'. This application allows any Blender file to be executed without the whole Blender development suite being installed on the user's computer. Figure 4.3 views the environment from the birds-eye perspective camera and Figure 4.4 shows a view from the third-person camera that tracks the avatar (see Section 4.3.2 for more details).

When the user avatar is first launched into the environment, the user should view it as someone suddenly appearing in a new (unknown) environment; the avatar's behaviour should be consistent with this situation and with the information provided by the human user in their profile file.

As discussed in the design section, a Finite State Machine model was used to approach the task of action selection. Such a model involved the creation of states and the realisation of transitions. The following is a breakdown of the key states involved; a short code sketch of how these states might be dispatched is given after the list.

State 0: The wall-walking state. When the user avatar is in this state it will find the closest wall to its current location and then move toward it. When it meets the wall, it will turn to follow the wall (note that the direction it turns is random, see Chapter 6) and will continue to do so until a change of state.

State 1: When in this state the user avatar will move away from the wall and explore the environment. The speed at which it moves around is determined from the profile file, although this only provides the range from which the speed of the user avatar can be determined; the precise value within the range is calculated by a random factor to maintain user interest through variation. The direction in which the user avatar decides to explore is always away from the closest wall; however, the degree to which the avatar moves away is again randomly calculated within a range (see Figure 3.8). Whilst in State 1, the direction and speed (velocity) of the user avatar is updated at varied intervals, the determination of which is restricted by a range deduced from the user profile.

State 2: This state is activated when the user avatar comes within 'curiosity' range of the hover pad. The first action for the user avatar when in this key state is to move from its current location onto the hover pad. Once on the pad the user avatar appears to begin to get lighter; this appearance of floating is gradual until it settles at a point above the environment ground level. The time the user avatar spends hovering is determined in part by the parameter the user has given indicating their level of susceptibility to boredom, and in part by a random factor whose range is determined from other parameters within the user's profile.

State 3: This state signifies that the user avatar has come within interest range of the expansion pad. Similarly to the hover pad, the user avatar will move onto the pad and will then begin to interact with it. However, in this state the user avatar will increase its size by a scaling factor, and while it remains on the pad the user avatar will continue to expand, until it eventually bursts.

State 4: Interaction with the autonomous avatar takes place if the user avatar is in this state. When the autonomous avatar notices the user avatar is within its 'curiosity' range, it leaves pursuit of its current task, which for the autonomous avatar involves tracking an empty (an object that exists in the scene but has no geometry and is not visible), and moves away from the user avatar. The user avatar notices the autonomous avatar is moving away and, because of its wish to interact with the autonomous avatar, tracks it.

The autonomous avatar, to avoid confrontation, is designed to avoid colliding with walls and decides what direction to move in based on the location of the user avatar and the proximity of any wall. It only recalculates the direction it should move in at random intervals, since in a 'real-life' situation a human would only look back every so often to check they were moving away in the right direction, or indeed whether there was still a need to keep moving away. As the velocities of the autonomous avatar and the user avatar are calculated separately, it is possible for the user avatar to catch the autonomous avatar.

State 5: If the user avatar is in this state it means it is interested in the light box object. Consistent with the behaviour of a human who has never seen a light box before, the user avatar moves to a location close to the light box before moving into it, almost appearing cautious. When the user avatar moves into the light box, the light inside comes on for a set length of time. The user avatar then moves back to its location just outside the light box and, if it still has further interest, repeats the process. The time spent interacting with the light box is determined by parameters passed within the user profile.
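To make the breakdown above concrete, the sketch below shows one way the dispatch between these key states could be organised in Python; the handler functions are empty placeholders standing in for the behaviours described, and the structure is illustrative rather than the project's actual code.

# State numbers follow the list above; handlers are placeholders.
WALL_WALK, EXPLORE, HOVER, EXPAND, CHASE, LIGHT_BOX = range(6)

def wall_walk(avatar): pass   # find the nearest wall, then follow it
def explore(avatar):   pass   # move away from the wall at a random heading
def hover(avatar):     pass   # float above the hover pad for a while
def expand(avatar):    pass   # scale up on the sizer pad until bursting
def chase(avatar):     pass   # track the retreating autonomous avatar
def light_box(avatar): pass   # move into the light box, switching it on

HANDLERS = {WALL_WALK: wall_walk, EXPLORE: explore, HOVER: hover,
            EXPAND: expand, CHASE: chase, LIGHT_BOX: light_box}

def select_action(avatar):
    """Each frame, run the behaviour for the avatar's current state."""
    HANDLERS[avatar.state](avatar)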

These key states are only appropriate at certain times within the user avatar's lifespan, and the transitions between the states have had to be clearly defined (see Figure 4.5). This figure is an updated version of Figure 3.6 from the design section, and reflects the changes made during implementation.

Figure 4.5: This figure illustrates the finite state machines for the user avatar, the autonomous avatar and the light box; these will guide the action selection of these objects.

The FSM for the user avatar has three states in addition to those given in the design FSM; these break down the process of interaction with the hover pad, the sizer pad, and the light box into two parts: the first is the movement to the object, and the second is the respective interaction itself. New transitions have also been shown for these new states, to signify the appropriate circumstances for changing from moving to an object to interacting with it.

The wall walk state, as originally stated in the design, is the default state, and subsequently the state the user avatar is in when it first enters the environment. The only transition out of this state, to the explore state, is time-based; it is calculated from a mixture of the last time the user avatar went for an explore, the exploration value (given in the user's profile file) and a random factor.
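A hedged sketch of how such a time transition might be computed is given below; the scaling of the exploration level and the size of the random factor are assumptions, since the exact formula is not given here.

import random, time

def should_start_exploring(last_explore_time, exploration_level):
    """exploration_level is the 0-100 value from the user's profile."""
    # A higher exploration level shortens the wait before the next explore.
    base_wait = 30.0 * (1.0 - exploration_level / 100.0)   # seconds (assumed scale)
    jitter = random.uniform(0.0, 10.0)                      # the random factor
    return time.time() > last_explore_time + base_wait + jitter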

Once in the explore state, the user avatar either comes within the curiosity range of one of the other objects or it exceeds the time it has calculated it wants to be wandering for. In the second case, the time is calculated from the time the user avatar went for the explore, the value of its susceptibility to boredom and again a random factor. If the time exploring is exceeded then the user avatar reverts to the wall walk state. However, if while exploring the user avatar comes within curiosity range of another object, it will trigger the respective proximity transition and will move into the state associated with that object.

If it is the hover pad or the sizer pad that has been triggered, then the user avatar is in the state that actions the avatar to move to the respective object. The transition in both cases out of these states is for the user avatar to have reached the respective pad. In the case of the hover pad, this transition moves to the hover state, and if it is the sizer pad being interacted with, then the transition causes a move to the expand state. In either case the only transition out of these states is time, which is calculated from the time the interaction started, a random factor, and the difference of the desire to explore minus the susceptibility to boredom. The reason this difference is taken is explained in Table 4.1, and a short code sketch of the calculation follows the table.

Desire to Explore   Susceptibility to Boredom   Result
High                High                        The user avatar wants to be away from the wall, but it gets bored interacting quickly, so only a small time is spent interacting.
High                Low                         The user avatar likes to be away from the wall and does not get bored easily, so a long time is spent interacting.
Low                 High                        The user avatar does not like being away from the wall and gets easily bored, so it is quick to return to walking the walls.
Low                 Low                         The user avatar does not like exploring; it does, however, enjoy interacting, and so a reasonable time is spent doing so.

Table 4.1: This table explains why a difference of the desire to explore minus the susceptibility to boredom is taken when calculating the interaction time with an object.
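The following sketch illustrates how the difference described in Table 4.1 could feed into the interaction-time transition; the constants are illustrative assumptions rather than the values used in the project.

import random, time

def interaction_finished(start_time, explore_level, boredom_level):
    """Levels are the 0-100 profile values; returns True when the avatar
    should leave the object and fall back to the wall-walk state."""
    difference = explore_level - boredom_level          # may be negative
    duration = max(2.0, 10.0 + 0.1 * difference + random.uniform(0.0, 5.0))
    return time.time() - start_time > duration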

If it is the light box for which the user avatar has triggered the proximity transition, then the user avatar moves into the 'move to outside the lighting box' state. Once in this state, the transitions available to the user avatar are time and 'outside box'; the second is triggered when the avatar reaches the location outside the light box, at which point it moves to the 'sit in the lighting box' state, where it stays until a time transition, calculated from the time it entered the box plus a fixed time period, becomes active, at which point the user avatar returns to the 'move to outside the lighting box' state. This process continues until a time transition calculated from the time the interaction started, a random factor, and the difference of the desire to explore minus the susceptibility to boredom (see Table 4.1 for an explanation of this difference calculation) becomes active. When this time transition is triggered, the user avatar returns to the default (wall walk) state.

Finally, if the user avatar and the autonomous avatar come within appropriate proximity, then the user avatar moves into the 'chase autonomous avatar' state. From there, there are two available transitions. One is a caught transition, which signifies that the user avatar has caught the autonomous avatar, in which case the user avatar immediately returns to the default state. If the caught transition is not triggered, there is a time transition that returns the user avatar to the default (wall walk) state once the current time is greater than the sum of the time it started to chase the autonomous avatar, a random factor, and the difference calculation previously mentioned (see Table 4.1).

This completes the FSM cycle for the user avatar, which it abides by until it has explored the entire environment or has interacted with all the other objects as many times as it wanted to. Both these factors are supplied as part of the user profile file.

As well as the FSMs for the user avatar, Figure 4.5 illustrates the states and transitions that govern the action selection of the light box and the autonomous avatar. The light box has two states, 'Light Off' and 'Light On'; to move from the off (default) state to on, the user avatar must enter the light box. To move back to the off state, a time transition is triggered when the current time is greater than the sum of the time the light came on and a fixed period of time.

The autonomous avatar FSM has three states and three transitions. The default state is to track the empty. If the user avatar comes within the autonomous avatar's proximity range, then the transition to the 'move away from the user avatar' state takes place. From this state the only transition is triggered when the user avatar is no longer inside the proximity range of the autonomous avatar, which leads the autonomous avatar into the 'return to position chased from' state. From here, the only transition possible is triggered when the autonomous avatar reaches the location it was chased from, returning it to the default state, tracking the empty.

Both the light box and the autonomous avatar FSMs are active throughout the entirety of the environment's existence.

Once the user avatar has completed its session in the environment, the environment closes, and the system returns to the main screen, where there is the option to look at the 'Log File' for the user avatar's 'experience'. This log file provides a dialogue of the user avatar's actions and has primarily been created to facilitate user testing (see Appendix ENTERHERE for an example log file).

4.3 Decisions taken during production

No matter how well conceived the design of a project, it is always necessary to make amendments during the development phase. The following subsections detail the important decisions that distinguish the final system from the intended design.

4.3.1 The Visual Representation of the User Avatar

Early in the project a decision was taken to restrict the complexity of the user avatar to a primitive object; in the end a sphere representation was chosen. However, before this decision was taken, initial work was done on learning the necessary skills to control the armature movements of a Blender character named 'Ludwig'.

Ludwig is the creation of Jason Pierce and is a fully rigged animation character for use within Blender. He was originally created to promote the potential of Blender within the field of character animation, and was released as an open-source project for other Blender users to alter and re-distribute as required. Not only is Ludwig's body fully rigged, but a lot of control can be taken over the face, the eyes and the mouth, allowing for detailed control over expressions that would be able to represent complex behaviours and emotions. For instance, with just the eyes the rig allows control of viewing direction, the amount they open, the degree of squint, the brow position, the brow rotation, the brow wrinkle, and the setup of object tracking. Any variation in the degree of one or more of these controls could suggest a quite different change in the emotional state or behaviour of the user avatar.

Figure 4.6: The left-hand figure shows the 'Ludwig' character as a 3D model, and the figure on the right shows the level of armature control that is available for facial expressions.

As powerful a tool as Ludwig is, it became clear after working with the character for a while that the degree of control was too low-level (see Figure 2.3), and that the whole project could be spent writing Python scripts to control the armature movements in a manner consistent with the emotion required. This would be an interesting project to get working; however, the fully controllable body rig of the Ludwig character is not critical to completing the objectives of this project, hence the decision to use more primitive shapes.

The final choice of a spherical object as the primitive was taken because it produced the most realistic movement within the Blender environment. Other mesh shapes were experimented with, such as a cube and a prism; however, their movement within the environment appeared unnatural, and problems occurred when the objects rolled over. The sphere was the logical choice as, by its nature, it does not suffer from the problem of being tipped over and moves fluidly over the environment floor.

4.3.2 The Camera Viewpoint

Initial implementations of the user avatar were developed with the scene camera in a traditional third-person position (sometimes called the 'over the shoulder' viewpoint), tracking behind the user avatar wherever it moved. This proved initially difficult to implement because the natural rolling motion of the chosen sphere-shaped avatar caused the camera to roll in sync with the user avatar. Also, when the user avatar got too close to certain walls, the camera would disappear behind them. Rotation actions carried out on the camera object itself proved difficult to control accurately, even when supplying the camera rotate argument with an appropriate radian value.

As such, a decision was taken to have an environment camera sitting in a birds-eye perspective. This viewpoint had the added advantage that the user could see what was happening across the entire environment, rather than being solely focused on the viewpoint of the user avatar. However, the disadvantage of this approach is that the user does not get an immediate sense of the scope of the user avatar's vision. The user avatar only has a curiosity range, which is in essence how far it can 'see'; but as the user can see the whole environment, they may conclude that the user avatar can see everything as well, and as a result believe that the user avatar is not making the right decisions in comparison with how they would behave. This problem is solvable and is discussed in Chapter 6 of this document.

Because both cameras provide the user with unique advantages, a final decision was taken to give the user the option to select which viewpoint they wanted to display the environment from. The birds-eye perspective was, however, moved off centre and angled at an approximate 30-degree pitch, allowing a necessary sense of perspective to be achieved for the 3D environment. The option of which camera to view the environment from is given on the application menu screen before the environment is loaded, and required the necessary changes to be made to the interface. As both camera implementations had already been experimented with, few amendments were required to the environment code.

4.3.3 Changes to the Colour Pad Object

During the development of the other objects phase, it became apparent that some of the functionality originally quoted in the Blender API was not accessible when running through the game engine. A further analysis of this issue is given in the Evaluation of the Blender Environment (Section 7.3); however, the result was that the implementation of a colour-changing pad, and indeed the changing of the user avatar's own material colour, was not possible during run-time.

An alternative object had to be conceived to replace the colour-changing pad; the result was to continue using the theme of a pad, but to have the interaction change to expansion. By moving onto the pad, the user avatar begins a process of scaling its size, much as if it had been connected to an air pump. This response is not consistent with the original intention, which was to display 'mimicking' behaviour, but it would not have been desirable to expand the pad, or indeed the scene, because the effect on the overall scales within the environment would have looked inconsistent. As such, only the user avatar expands, and the interaction instead displays the principle that an action may cause an unstoppable reaction. In the real world, if that reaction is considered negative, it may lead to the action that started the sequence not being taken again. This is a concept that is discussed further in Chapter 6. For the other three objects the interaction remained as designed.

4.3.4 A Simple Scene

During the process of learning how to model using Blender, experimentation with new primitives, new materials, and new effects was undertaken with the aim of building up a game scene that spanned multiple levels, had varying terrains with different lighting conditions, and offered a diverse set of objects for the user avatar to potentially interact with (see Figure 4.7).

Going through this task was important as it made it possible to develop the necessary skills and build up familiarity with the facilities provided by Blender in order to progress further with the environment. However, a large scene with lots of objects of interest was not critical to the success of the project, and so a decision was taken to create a scene that provided just the necessities to allow illustration of avatar behaviour within an environment. The final solution was created as four walls and a floor providing a rectangular scene. One advantage of doing this was that it restricted the set of environmental complications and allowed more time to be concentrated on the behaviour of the user avatar. A further decision was taken regarding the walls in the scene; because of the perspective used in one of the camera views (see Section 4.3.2), the walls had their translucency factor increased so there was no chance of potentially obscuring the user avatar's movements around the environment.

It was important to remember, as development continued, to keep the actions and sensors independent of the environment they were being created for, so that the user avatar's behaviour could be transferred to any more complex scene it might be placed into. In practice adaptability is difficult to achieve; the topic is discussed further in Chapter 6.

Figure 4.7: This figure is an illustration of a complex scene within Blender

4.3.5 The Blender Environment

Problem One: An object's location within Blender is a point which is not necessarily the centre of all of its vertices (the points that define the shape of the object); therefore, when doing comparisons of the distance between an object and the user avatar, the results can appear unreliable, because the user avatar could be close to a face of the object (the plane created by a set of three or more vertices) but some way from the point that defines the object's location. In some situations this gave the appearance that the user avatar had not seen an object.

This issue could have been overcome by using the Blender sensor called 'Ray', which casts a ray along an axis and returns true if an object with a certain property appears within a specified distance. However, because the user avatar is represented as a sphere, it rotates as it moves; as a result the axis to be cast along rotates with it, and the ray ends up being cast in all directions, not along the line of sight as intended. This resulting issue could be overcome by adding an empty object at the centre of the user avatar and casting the ray from that object (being exactly in the centre means the empty object would not rotate like the user avatar); however, there would still be a restriction, as the rays can only be cast along the line of one of the axes.

Problem Two: The initial implementation of the explore method, which makes the user avatar leave the safety of the wall to explore the environment, appeared to have a problem where the user avatar always tracked to the centre of the scene. This proved to be caused by the fact that the user avatar continuously refreshed its direction, causing it to always move away from the closest wall; it therefore did not take long for it to find the exact centre (the furthest point from all the walls), and subsequently it would not roam from that spot.

The solution was to change it so the speed and direction (velocity) of travel is updated at calculated intervals rather than continuously. The rate of these intervals is deduced from parameters provided in the profile file. The intervals between updates relate to how frequently the user would reassess their movement, rather than reassessing at every frame refresh. The result is an improvement in movement realism for the user avatar during its explorations, and most importantly that it no longer tracks to the centre of the scene.
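A minimal sketch of this interval-based update is given below; the helper returns a new heading, speed and next reassessment time, with the angular deviation, interval range and profile keys chosen purely for illustration.

import math, random, time

def next_explore_velocity(wall_direction, profile):
    """wall_direction: angle (radians) towards the closest wall.
    profile: dict with assumed 'min_speed' and 'max_speed' entries.
    Returns (heading, speed, next_update_time)."""
    # Head away from the wall, with a random deviation within a range.
    heading = wall_direction + math.pi + random.uniform(-0.6, 0.6)
    speed = random.uniform(profile['min_speed'], profile['max_speed'])
    # Only reassess the velocity again after a randomly chosen interval.
    next_update = time.time() + random.uniform(1.0, 5.0)
    return heading, speed, next_update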

Problem Three: The manner in which Blender refreshes objects and their corresponding scripts at the frame rate makes the concept of game variables awkward. Game variables in this project are variables that are updated during runtime and are required to retain their updated values. If objects and scripts are re-initialised at the frame rate, which in Blender is approximately every twenty milliseconds, any variable value set during runtime would appear to be continuously reset.

A workaround is achievable which creates an empty object that is present in the scene on the first frame, but then destroys itself so that the script linked to the empty object is only run once. The Python script contains the game variables that are required to be initialised only once and need to be available to all objects within the scene.
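A related idiom, sketched below, hangs the game variables off the GameLogic module itself so that they survive the per-frame re-execution of object scripts; this is an assumption-laden sketch that only runs inside the Blender 2.4x game engine and is not the project's exact run-once script (the attribute names are my own).

# Runs inside the Blender game engine only.
import GameLogic

# Attributes set on the GameLogic module persist across frames, so they can
# act as global game variables shared by every object's script.
if not hasattr(GameLogic, 'init_done'):
    GameLogic.init_done = True
    GameLogic.visited = {}            # object name -> number of interactions
    GameLogic.last_explore_time = 0.0
    GameLogic.profile = {'explore': 50, 'boredom': 50, 'curiosity': 50}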

Problem Four: The user avatar moves within the environment by having a force applied to it in the direction that it is required to move; the size of the force is what determines the speed at which the user avatar moves. In an initial implementation of the method that makes the user avatar track toward a specified object, the calculation of the force vector was done by subtracting the position of the user avatar from the position of the object being moved toward. This led to two problems with the movement exhibited.

The first was that when the user avatar was some distance from the target object, the x-value and y-value of the calculated force vector would be in the region of ten to twenty, which when applied had the result of moving the user avatar across the screen at a rate that was unviewable to the user watching the scene, giving the appearance of a jerky, instantaneous movement.

The solution to this is to conduct validation on the distance comparison being returned, and subsequently scale down the result proportionally to the force vector's original magnitude. A slight disadvantage of this approach is that the user avatar is restricted in how quickly it can move toward an object, and it also makes the assumption that the velocity of the movement towards the object will always decrease as the user avatar gets closer to the object. In a real-life situation this may not always be the case. For example, during a race, when the runner gets close to the finish line they may speed up to ensure a good time. The assumption created by this solution could potentially restrict the believability of the avatar within the environment, although it is feasible for this project.

The second issue caused by the subtraction of the two vector positions was that, initially, the z-value was included in the subtraction and the subsequent force vector, resulting in the user avatar moving in the z-direction. This was not at all desirable, as it had the effect of making the user avatar leave the scene floor. The straightforward solution was to remove the z-value from the vector position subtraction and to default the z-force applied to the user avatar to zero.
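The two fixes described above might look roughly as follows: the z component is dropped and the force magnitude is capped and scaled down proportionally (the cap value is an assumption, not the project's actual constant).

import math

def track_force(avatar_pos, target_pos, max_force=2.0):
    """Return an (x, y, z) force list that moves the avatar toward the target."""
    fx = target_pos[0] - avatar_pos[0]
    fy = target_pos[1] - avatar_pos[1]
    magnitude = math.sqrt(fx * fx + fy * fy)
    if magnitude > max_force:              # scale down proportionally
        scale = max_force / magnitude
        fx, fy = fx * scale, fy * scale
    return [fx, fy, 0.0]                   # z-force defaults to zero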

4.3.6 The First Profile Interface

Figure 4.8: This figure is a screenshot of the first version of the profile screen before user feedback was incorporated into the design.

The profile interface was initially developed as a collection of sliders and labels (see Figure 4.8); while this functionally achieved what was required of the interface, the information being asked of the user was misleading. For instance, with horizontal sliders the sense of setting a level for each of the behaviours was not apparent to the user. This information was collected during initial user testing, which is not discussed here (see Chapter 5); however, as a result of the user discussions, several simplifications were made to the interface. These included:

• The removal of the wander slider, as it caused confusion over its difference from the explore slider. Instead the wandering calculation is made from the remaining three behaviours.

• The sliders were changed from their default horizontal style to vertical mode. This has the effect of distinguishing between high and low without having to define them independently. The labels for each of the sliders were removed and replaced by one new label per slider, referring to the behaviour level in each case.

• Gauge objects were added to each of the remaining sliders to illustrate more clearly the fact that the user was affecting the levels of behaviour (a minimal sketch of this slider-and-gauge pairing is given after this list).

• The overall look and feel of the interface was changed to remove the formal connotations that the users had cited as a reason for feeling unsure how to interact with the interface.
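The following wxPython sketch shows one behaviour control of the kind referred to above; widget names, sizes and layout are assumptions and differ from the actual Profile Manager code.

import wx

class BehaviourControl(wx.Panel):
    """A vertical slider paired with a gauge that mirrors its level."""
    def __init__(self, parent, label):
        wx.Panel.__init__(self, parent)
        self.slider = wx.Slider(self, value=50, minValue=0, maxValue=100,
                                style=wx.SL_VERTICAL)
        self.gauge = wx.Gauge(self, range=100)
        self.gauge.SetValue(50)
        self.slider.Bind(wx.EVT_SLIDER, self.on_slide)
        sizer = wx.BoxSizer(wx.HORIZONTAL)
        sizer.Add(self.slider, 0, wx.ALL, 5)
        sizer.Add(self.gauge, 0, wx.ALL, 5)
        sizer.Add(wx.StaticText(self, label=label), 0, wx.ALL, 5)
        self.SetSizer(sizer)

    def on_slide(self, event):
        # Keep the gauge in step with the slider so the level is visible.
        self.gauge.SetValue(self.slider.GetValue())

# Assumed usage: frame = wx.Frame(None); BehaviourControl(frame, 'Exploration')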

Chapter 5

Results

The following chapter is an overview of the user testing that was carried out and the results that were returned from it.

Setting up the User Test

Each user was given the same scenario, and had to participate in three pre-determined tasks.

1. The user was asked to create their own user profile unaided. The users were given no previous experience with the system, and were left to navigate and interact in any manner they saw fit. The purpose of this exercise is to determine whether the system designed is indeed easy to use for a user who has had no exposure to it.

2. The second task asked the user to use their created avatar and launch the environment. Once the environment was loaded, they were told that, for the duration of their avatar's existence within the environment, they should keep a tally of all the decisions they noticed, recording whether they agreed with the user avatar's chosen course of action or thought they would have done it differently. The purpose of this exercise is to establish how reliably the user profile, which is defined by the user, maps behaviour across from the human user onto the user avatar.

3. After the user has completed tasks one and two, and the user avatar and environment have closed, this task will ask the user two questions:

(a) Having seen the environment, what would you consider to be your goals within it if you were to be placed inside, and it was completely unknown to you?

(b) Having seen your user avatar in action around the environment, what in your opinion are its driving goals?



The reason for asking these two questions is to try to establish whether the user and the user-created avatar share the same goals.

The User Test Parameters

1. The user testing was carried out over two days.

2. The sample age range was 17-56, with the majority in the 20-23 age bracket.

3. There were 9 users who took part in the testing

4. The breakdown was 6 male to 3 female

5. Participants' experience with computers ranged from basic word-processing skills to that of final-year computer scientists.

The Results from Usability Test One

The user was asked to create their own user profile unaided. All nine users were able to generate a new user avatar without having to ask for assistance, and all nine were able to use the sliders to control their behavioural specification.

The Results from Usability Test Two

The following table gives the breakdown of the users and the number of decisions they agreed with and disagreed with during the runtime of each user's avatar.

User     No. of Agrees   No. of Disagrees
One      11              10
Two      21               5
Three     9               6
Four     33              17
Five     19              17
Six      25               9
Seven    12              37
Eight    12               4
Nine     13               9

Table 5.1: The results of test two: whether users agreed or disagreed with their avatar's decisions.

As can be seen, some of these results are quite varied. This is a positive thing: each user selected slightly different levels for each of the specifiable behaviours within the user profile, and as such different amounts of interaction would be expected to take place. The promising trend is that the number of agreed decisions, in most cases, outweighs the number of disagreed decisions.

From Table 5.1, the accuracy of each user's avatar can be calculated as the number of agreed decisions as a proportion of the total number of listed decisions.

User      One   Two   Three   Four   Five   Six   Seven   Eight   Nine
Accuracy  52%   81%   60%     66%    53%    74%   24%     75%     59%

Table 5.2: A table showing the accuracy of each avatar's decision making with respect to its user.
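The accuracy figures in Table 5.2 follow directly from Table 5.1; the short snippet below reproduces the calculation, including the average with and without user seven's result.

# Reproducing Table 5.2 from the tallies in Table 5.1.
agrees    = [11, 21, 9, 33, 19, 25, 12, 12, 13]
disagrees = [10,  5, 6, 17, 17,  9, 37,  4,  9]
accuracy  = [100.0 * a / (a + d) for a, d in zip(agrees, disagrees)]
print([int(round(x)) for x in accuracy])                 # [52, 81, 60, 66, 53, 74, 24, 75, 59]
print(int(round(sum(accuracy) / 9)))                     # about 60 (all nine users)
print(int(round(sum(accuracy[:6] + accuracy[7:]) / 8)))  # about 65 (excluding user seven)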

The data presented in Table 5.2 reveals some interesting findings. The average accuracy of the user avatar in comparison with the human user is 60%, which, taking into account the complexity of the action selection techniques implemented in the project, is a strong average. Interestingly, if user seven's result is removed, the average accuracy rises to 65%. User seven's result is the anomaly in the set, and was caused by user seven marking every change in direction during the explore method as a disagreeable decision.

The Results of Usability Test Three

The aim of this usability test was to show consistency between the actions of the user avatar and the actions of the user. It is not feasible to write down every answer given by the nine participants in this section; however, the following is a summary of the findings.

The most common goal suggested by the users is the need to explore the environment. They then split: six of them stated that their next goal would be to interact with an object, while the other three said theirs would be to follow the wall. At some point all the users mentioned the goals that are exhibited by the user avatar.

Similarly to their own suggested goals, eight out of the nine users said they thought the first goal of the avatar was to explore the environment. Interestingly, the other user said they thought the first goal was to interact with objects. Again, all of the users covered the goals of the avatar, although the actual default goal of the avatar (the safe state) appeared below the third goal for four of the participants. This could be put down to some of the users selecting very high exploration rates, which meant their avatars spent very little time on the walls, and subsequently this goal might have seemed less important.

Chapter 6

Project Analysis and Further Work

The development of virtual avatars has been expanding for over a decade and has encompassed many disciplines, including artificial intelligence, computer graphics, psychology, and human-computer interaction. The scope of this dissertation is such that the work carried out has only touched on this vast area of research. The following section is presented as an analysis of what the project has achieved, developing into an acknowledgement of the areas this project could further develop and of how creating believable avatars is a highly complex affair.

6.1 Analysis of the User Avatar

During earlier chapters, the idea of believability has been central to the argument for the creation of a realistic avatar in this project. Over fifty years ago, Alan Turing came up with a method for evaluating the realism of artificial intelligence (AI). Central to his definition is that the user conversing with the AI avatar believes they are interacting with a human being. The Turing Test is therefore incredibly difficult to pass because, even without visual stimulation, getting an avatar to behave exactly like a human, in a consistent but emotionally driven way, that draws on experience from past interactions and offers a conversation suited to the user it is interacting with, requires a lot of complexity. So, to pass the believability test, an understanding of what makes the user believe they are interacting with a human is required, and the following explores this premise.

Hayes-Roth (2003) breaks down the required behaviour of a virtual avatar into 'seven qualities of life-likeness'. These are intended to be high-level groupings of much lower-level behaviours, and a correct emphasis is put on the avatar appearing to have these qualities without necessarily actually possessing them.

Avatars should seem conversational: The term 'conversational' means in this case that an avatar should be able to have an exchange of thoughts, opinions or feelings via language, gesture, actions or facial expressions (Hayes-Roth, 2003). As well as being able to initiate 'conversation', an avatar should be able to respond to a request for interaction from another object. Being responsive, or indeed unresponsive, is a powerful way of showing that an awareness of each other exists.

In this project, when the user avatar moves into the light box, the light comes on; this is an example of one-way communication, where the user avatar has initiated and the light box object has reacted. Only the user avatar has the ability to initiate 'conversation' in this example; however, if the light box controlled when the light came on, and upon doing so the user avatar rushed towards the light box, the light box would have initiated the 'conversation'. Although in these examples there is little in the way of mutually beneficial exchange between the user avatar and the light box, the user avatar has displayed conversational characteristics by entering the light box, and the light box has responded with an appropriate reaction. Similarly, the user avatar's interactions with the hover pad and the expansion pad can only be viewed as one-way communication; but with the autonomous avatar, the user avatar has two-way communication, with both objects responding to the other's actions.

Avatars should seem intelligent: Hayes-Roth's (2003) use of the term intelligence is based on the principle of displaying role-appropriate knowledge and expertise; in essence, responding in a manner that is appropriate to the request being made. General intelligence and intelligent decision making is a vastly complex subject in its own right, one which is out of the scope of this project; however, intelligence is discussed in greater depth later in this chapter (see Section 6.2.1).

Avatars should seem individual: Hayes-Roth (2003) states that avatars should have a unique and distinctive persona, with distinctive back stories, personalities and emotional dynamics that drive their behaviour towards others. Furthermore, avatars should reveal and express their personas in every aspect of their beings and behaviour.

The autonomous avatar in this project is considerably smaller than the user avatar, is a less dominant yellow colour than the bright red user avatar, moves only within its own area and does not go exploring as the user avatar does (unless the user avatar chases it), and is non-confrontational in that it likes to keep its distance from the user avatar. All of these characteristics come together to give the user a sense of the autonomous avatar's persona.

The user avatar’s behaviour towards the environment and others is driven in partby its back story (it has entered this unknown environment), and by the personalitygiven to it through the user profile file. The ways in which it interacts with otherobjects within the environment gives off an indication as to what its character is like;this is very different from the persona given off by the autonomous avatar.

Avatars should seem social: This principle hinges on the idea that an avatar should seem to display social awareness and apply itself appropriately given a social context. By this it is meant that an avatar should relate to others within the environment based on their shared history and their relationship to each other. This is something the avatar needs to build up over a period of time, and it involves the avatar having an implementation of episodic memory.

In this project the user avatar keeps a record of all the other objects it has interacted with, the number of instances of interaction which have occurred with each particular object, and the time that each interaction took place. This record allows the user avatar to dismiss a potential interaction with an object if it has only recently finished doing so. The record also allows the user avatar to interact with an object only as many times as it sees fit. The log file that is generated as a result of the user avatar's activities within the environment is a primitive form of memory. The log file keeps a list of the order in which user avatar actions and interactions have taken place, in effect a memory dump, although the disadvantage with this as a form of memory is that it is not accessible during runtime and the user avatar's decisions are not made based on it. The log file does not add to the user avatar's sense of social awareness, but the record of interaction does; both forms of memory have their own unique advantages, something discussed further in Section 6.2.3.

Avatars should seem empathic: A virtual avatar should not only respond with actions, but with a sense of emotion and feeling. For instance, if something is disagreeable then the avatar should seem saddened or angry, and these emotions should be reflected in its interactions with other objects, until something causes its emotional state to change, such as another interaction or the passing of time. If the avatar is seen to be displaying a feeling, then the user associated with it will have empathy with its situation and their mood will ultimately be affected. At this stage a strong connection between the virtual avatar and the human will have been achieved.

Empathy is a strong human emotion, and although user tests showed that the user felt a sense of belonging towards their avatar, their emotional state did not change notably whilst their avatar was in the environment. To achieve a sense of empathy, the displaying of emotional state by the user avatar needs to be improved. Previously discussed restrictions discovered within the Blender environment, such as the changing of an object's material colour during runtime, have limited the range of emotion the user avatar can display. A suitable advancement could be that, during the avatar's inflation on the expansion pad, the user avatar appears to quiver, illustrating a feeling of uncertainty about what is about to happen. This quivering could continue after the user avatar has burst and is walking away, showing that the experience has made the user avatar nervous and on edge. This feeling would diminish with time or if interaction with another object distracted the user avatar.

Avatars should seem variable: Hayes-Roth's (2003) use of the term variable is intended to infer that a virtual avatar's behaviour should not be predictable. This may seem contradictory to Hayes-Roth's (2003) claim that an avatar's behaviour should be consistent with its past experiences. This is not the case; consistency should not restrict the avatar from varying its behaviour selection. Past experience should be combined with appropriate action selection and an account of emotional state when making a decision as to the behaviour to exhibit.

Appropriate action selection and a limited scope of past-experience awareness (see 'Avatars should seem social') have been employed in the user avatar's behaviour selection. With decisions such as which direction to turn when first reaching a wall, there is no consideration of emotional state; instead a random factor is used to make the decision. This means that whether the user avatar turns one way or the other is unpredictable to the user, and the user avatar appears variable.

Avatars should seem coherent: Virtual avatars should hold true to their beliefs just as humans do; they should have a sense of right and wrong and their interests should be dictated by their own identity (Hayes-Roth, 2003). It is important to consider the effect of giving an avatar a sense of right and wrong; it may lead to a direct contradiction with its social awareness. For example, if a human is told to do something wrong by their manager, are they necessarily going to make the manager aware that they disagree with their judgement?

The user avatar in this project has not been given a sense of belief or of right and wrong, because that functionality is considered to be out of scope; giving a virtual avatar a sense of internal beliefs also raises numerous ethical implications (see Section 2.6).

Possessing all seven of these qualities is not a comprehensive assurance of creating life-like behaviour in the user avatar; the real complication comes from how these principles are brought together and centrally controlled by the avatar. Where there is conflict, how is it resolved, and what is the hierarchy of commands? The scope of this project does not assume that all of these questions, or indeed all of the seven qualities as defined by Hayes-Roth (2003), have to be addressed; in fact one of the initial objectives states that the project must be focused.

6.2 Further Development

The following subsections analyse well-publicised aspects of virtual avatars that are considered to be out of the scope of this project, but which present interesting areas for future development. Each subsection provides a definition of the area, an analysis of the techniques the current system may utilise, and then suggestions as to how and why this may be improved.

6.2.1 Emotion and Intelligence

Intelligence, as defined by Merriam-Webster (1989), is the ability to learn or understand, or to deal with new or trying situations.

In this project the user avatar has to deal with a new and potentially trying situation by being placed within the unknown environment. However, it is clear that, even by this broad definition, the decisions taken by the user avatar in this project do not exhibit much intelligence. For instance, once the user avatar has discovered where an object is in the environment, it does not remember the object's location so that, if it wants to go back, it can track directly to it. This means that after leaving an object, having interacted with it, the user avatar has gained little from the experience; only that it has had an interaction with the object, and the realisation that it does not want to interact with the object again for a while.

Where an avatar has to display intelligent behaviour selection, it may be necessary for the avatar to keep a record of how it 'felt' whilst it was interacting with an object. Would it interact with the object again, and would it do so for a longer or shorter time? This project keeps a record of the number of visits the user avatar has made to each other object, and from this it is able to decide whether it wishes to interact with the object again. However, this is not a record of any feeling towards the object, and the user avatar's decision not to interact is made based on a maximum number of visits.

In practice, developing a measurement of the user avatar's pleasure from interacting with any given object and displaying an appropriate emotion can be difficult to define because of the avatar's susceptibility to moods. It would not be enough to assign each object an emotional weighting and to say that every time the user avatar interacted with this object, this would be its emotional state. This would lead to irregular behaviour patterns. A more concrete implementation would follow the work of Rolls (1999), discussed in Section 2.4, where rewards and punishers are passed from an object to the avatar's overall emotional state.

Although this approach would still be an approximation, it would lead to more realistic modelling of an object's effect on the user avatar. Making intelligent decisions based on previous emotional interactions is something humans do all the time, and if the user avatar is to achieve a greater sense of believability, one of this project's fundamental aims, then incorporating emotion-based intelligent decision making is crucial to further success.
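As a speculative sketch of this suggestion (not part of the implemented system), rewards and punishers received from an object could nudge a single decaying emotional-state value; the ranges and decay factor below are assumptions.

def update_emotion(emotional_state, reward, punisher, decay=0.95):
    """emotional_state lies in [-1, 1]; reward and punisher lie in [0, 1].
    Each interaction nudges the decaying state up or down."""
    emotional_state = decay * emotional_state + (reward - punisher)
    return max(-1.0, min(1.0, emotional_state))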

6.2.2 Adaptability

Adaptability in the context of the user avatar is considered to be the avatar's ability to exist and survive in any virtual environment, with any given set of dynamic objects. While what some would consider 'true' adaptability (for example, an avatar designed to keep track of a user's shares being able to adapt its skills to guide a doctor through a complicated operation) is not a practical or realisable goal, a level of adaptability is always required unless the application is deterministic.

In this project the notion of adaptability was at the forefront during development. A multi-environment application was an aspiration; however, the limitations of the game environment within Blender restricted the degree to which this aim was achievable. An example of where the current implementation falls short of the adaptability aspiration is in the wall walk method. This method uses the direction of travel and the previous turn to calculate which corner it has reached, then makes the appropriate turn; for any implementation other than the current rectangular environment, this action would cease to function.

Figure 6.1: This figure illustrates four examples of the radial method of wall detection that could be employed in future implementations of the wall walk action.

This action could be improved by applying a radial-sweep wall detection technique, as shown in Figure 6.1. Determining the new direction of travel using this approach would involve performing a radial sweep from the angle opposite to the direction of travel, projected round away from the closest wall until it makes contact with another wall object. The four examples shown in Figure 6.1 illustrate how the direction of travel is calculated in different situations. Figure 6.1(a) illustrates the case where there are no walls around, and the user avatar would continue on its own path; Figures 6.1(b) and 6.1(c) both show reflex-angle cases, and Figure 6.1(d) illustrates an acute turn. The purpose of these illustrations is to indicate how the wall walking action might be improved to become adaptable to varied environments. Although adaptability to varied environments is not an objective of this project, having greater diversity in the environment is a natural progression to improving user interest and immersion with the project as a whole.
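A speculative sketch of the radial sweep is given below; the wall test is a hypothetical predicate standing in for a Blender ray sensor, and the sweep logic is only one possible reading of the approach illustrated in Figure 6.1.

import math

def radial_sweep_heading(current_heading, wall_heading, blocked, step=0.1):
    """Sweep from the direction opposite to travel, rotating away from the
    closest wall, and return the last clear heading before a wall is met.
    'blocked(angle)' is a hypothetical predicate: True if a wall lies within
    range along that angle (all angles in radians)."""
    start = current_heading + math.pi
    # Choose the rotation sense that moves the sweep away from the wall.
    sense = 1.0 if math.sin(start - wall_heading) >= 0 else -1.0
    angle = start
    for _ in range(int(2 * math.pi / step)):
        candidate = angle + sense * step
        if blocked(candidate):
            return angle            # last heading before contacting a wall
        angle = candidate
    return current_heading          # no wall found; keep the current path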

6.2.3 Learning and Memory

The ability to learn from experiences is a major part of human development and as such is crucial to the development of realistic virtual avatars. The study of learning and of different learning techniques dates back a long time. In Voss's (2005) article on General Intelligence, he describes intelligence not as having a lot of knowledge and skills, but as having those required to acquire, improve, and appropriately apply new skills. Voss (2005) is looking at learning from a non-domain-specific viewpoint and suggests that the ability of an avatar to learn anything stems from the ability to be autonomous, goal-directed and adaptive in its approach to learning.

Autonomous Learning: learning occurs automatically through sensory data from the environment and through exposure to tasks within the environment.

Goal-directed Learning: learning is achieved by completing goals and sub-goals, both externally and internally set. Goal-directed learning leads to very focused data acquisition.

Adaptive Learning: allows the learning to be tailored to the situational context, adjusting to changes in the context and building up over time, allowing a more focused development to be achieved.

(Voss, 2005)

This project does not strictly employ a hierarchical goal-driven approach to learning within the environment; however, it is with this form of learning that it has most in common. The goals of the user avatar are to be safe within the environment (walking the walls), to explore the environment, and to interact with objects where there is a desire to do so. The learning achieved by exploring the entire environment and interacting with all the other objects is that the environment is 'safe' to exist in; the result of this is the termination of the environment and the user avatar within it.

Once again, it is not an objective of this project to develop an avatar that exhibits consistent behaviour having learnt from its experiences and interactions; this is still considered out of scope. However, a potential future development of the work in this project could look at employing a combination of the three learning techniques described by Voss (2005). Even on a simple level, having the user avatar adaptively learn how to navigate to objects that were enjoyable, and how to avoid those that were not enjoyed, would lead to more realistic movement of the user avatar within the environment.

The ability for an avatar to learn from its environment is fundamental to it behaving consistently with its experiences when presented with familiar or similar tasks. However, if the avatar is not able to draw on the knowledge it has accumulated, then acquiring it is essentially pointless. This is where a coherent approach to memory and memory recall is required.

Initially it might seem that memorising the results of what has been learnt is straightforward; after all, modern computers have an abundance of memory. But this is not the reality. In fact memory storage and memory recall, when dealing with adaptable autonomous avatars, need a coherent and potentially complex way of allocating information into blocks.

In this project a primitive form of memory has been implemented, one where pre-specified environmental and user avatar details are saved into pre-initialised variables which are known to the user avatar from initialisation. This form of memory is sufficient to meet the objectives and the current scope of the project; but if learning and intelligent decision making were to be factored in, as discussed in the subsections above, this hard-coded approach would not be feasible.

Common implementations of memory mapping utilise neural networks, which are networks of simple processing elements called neurons, connected together in order to exhibit more complex global behaviour. The need for more complex memory mapping stems from the desire to have the user avatar learning from its environment and interactions; although many action selections can be made reactively, more complex decisions require knowledge of past experiences to formulate a coherent and appropriate response.

6.2.4 Movement within the Environment

Currently the movement force applied to the user avatar within the environment is constant whilst the user avatar is walking the perimeter walls. When the user avatar is exploring the environment the force is also a constant, although that constant is randomly selected at given time intervals. This is seen by the user to illustrate the avatar's fluctuation in mood, exhibiting eagerness or apathy in its movements. The only example of non-constant movement is in the 'moveToObject' function, which calculates the movement vector to the target object and then applies the force linearly (the force decreases as the objects get closer together). The disadvantage of all three of these types of movement is that they do not lead to realistic movement of the user avatar within the environment. One notable example is when the user avatar is returning to the wall after an exploration or interaction: it moves in a straight line towards the wall with a constant force, suddenly changing direction once it has reached the wall, still moving with a constant force. This is more representative of how a robot moves than a human.

Reynolds (1999) wrote a paper on 'Steering Behaviours for Autonomous Characters' which describes different movement patterns as behaviours that can be applied hierarchically to an autonomous character like the user avatar in this project. Reynolds suggests a three-tier approach: action selection, steering, and locomotion. The action selection tier is concerned with choosing the right movement patterns to achieve the current goal of the avatar, the steering tier holds the movement definitions themselves where the calculation of the path is determined, and the locomotion tier is the actual movement of the avatar. It is this second tier that is of most interest, providing the calculations for achieving different styles of movement.

Some of these 'steering behaviours' are discussed below. The list is far from definitive, and as each behaviour is a separate function they can be written independently and built up as a library.


Seek and Flee

These behaviours equate to the 'trackTo' actuator provided by Blender and the 'moveAway' function written for the user avatar in this project; an implementation of these movement behaviours would improve the realism of these movements by softening the change of velocity from the current velocity to the desired velocity (see Figure 6.2). As with all the movement calculations, a combination of the direction of travel and the speed of travel is required to calculate the new trajectory of the moving object.

Figure 6.2: This figure illustrates the seek and flee movements as described by Reynolds (1999)
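A minimal sketch of the seek and flee calculations described by Reynolds (1999) is given below. It is independent of Blender, assumes only that position and velocity are available as (x, y) pairs, and uses max_speed and max_force as hypothetical tuning parameters rather than values taken from the project.

import math

def _scale(vec, length):
    # Scale a 2D vector to the requested length (zero vectors are left alone).
    mag = math.hypot(vec[0], vec[1])
    if mag == 0:
        return (0.0, 0.0)
    return (vec[0] / mag * length, vec[1] / mag * length)

def _truncate(vec, limit):
    # Clamp a 2D vector's magnitude to 'limit'.
    mag = math.hypot(vec[0], vec[1])
    if mag <= limit:
        return vec
    return _scale(vec, limit)

def seek(position, velocity, target, max_speed=2.0, max_force=0.1):
    """Steer towards the target; flee is the same with the desired velocity reversed."""
    desired = _scale((target[0] - position[0], target[1] - position[1]), max_speed)
    steering = (desired[0] - velocity[0], desired[1] - velocity[1])
    return _truncate(steering, max_force)

def flee(position, velocity, threat, max_speed=2.0, max_force=0.1):
    desired = _scale((position[0] - threat[0], position[1] - threat[1]), max_speed)
    steering = (desired[0] - velocity[0], desired[1] - velocity[1])
    return _truncate(steering, max_force)

# Each frame the small steering force is added to the current velocity,
# which is what softens the change of direction.
velocity = (1.0, 0.0)
force = seek((0.0, 0.0), velocity, (5.0, 5.0))
velocity = (velocity[0] + force[0], velocity[1] + force[1])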

Offset Pursuit

This movement is a calculation that brings the velocity of one object into line with the velocity of another. Being offset means that the user avatar would be kept a certain distance away from the object being tracked. The offset pursuit movement behaviour is comparable to the movement of the user avatar when it is following the perimeter wall, with the exception that it would always keep a certain distance from the wall (see Figure 6.3).

Figure 6.3: This figure illustrates the offset pursuit movement as described by Reynolds (1999)

Wander

A wander movement has been implemented in the existing project (referred to as the explore method); however, the implementation of this movement is jerky, as the new velocity is calculated based on avoidance of any close wall or, lacking proximity to any wall, is assigned based on a random x and y value (bound by a range set in the user profile file). The wander method proposed by Reynolds (1999) provides a sense of 'sustained turns', whilst always having a fixed target point in mind. As shown in Figure 6.4, the wander movement is calculated by projecting a virtual large sphere in front of the user avatar that provides the maximum wandering strength (how far it can deviate from its target path), while a small sphere on the circumference of the larger one gives the wander rate (the rate of change in the displacement). This ensures that, as each frame is calculated, only small displacements of the user avatar's velocity are made, giving the appearance of sustained turns. The advantage of implementing Reynolds' (1999) wander movement is that more believable wander paths are created, with no sharp or sudden turns during the user avatar's exploration of the environment.

Figure 6.4: This figure illustrates the wander movement as described by Reynolds (1999)
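The wander calculation can be sketched as follows, again outside of Blender. Here wander_strength and wander_rate play the roles of the two radii described above, and the angle theta is the state carried between frames; all names are illustrative rather than taken from the project code.

import math
import random

class Wanderer:
    """Reynolds-style wander: a small random walk on a circle projected ahead."""

    def __init__(self, wander_strength=2.0, wander_rate=0.3, circle_distance=4.0):
        self.wander_strength = wander_strength   # radius of the projected circle
        self.wander_rate = wander_rate           # how far the target may drift per frame
        self.circle_distance = circle_distance   # how far ahead the circle sits
        self.theta = 0.0                         # current position on the circle

    def steering(self, heading):
        # Drift the point on the circle by a small random amount each frame.
        self.theta += random.uniform(-self.wander_rate, self.wander_rate)
        # The centre of the circle lies ahead of the avatar along its heading.
        centre = (heading[0] * self.circle_distance, heading[1] * self.circle_distance)
        offset = (math.cos(self.theta) * self.wander_strength,
                  math.sin(self.theta) * self.wander_strength)
        return (centre[0] + offset[0], centre[1] + offset[1])

w = Wanderer()
for _ in range(3):
    print(w.steering((1.0, 0.0)))

Because the per-frame drift is bounded by wander_rate, successive steering targets stay close together, which is what produces the sustained turns rather than sudden changes of direction.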

Additional Movements

As stated previously, there is a vast selection of potential movements that Reynolds (1999) discusses which could be built up into a movement behaviour library. Others of particular interest include separation, cohesion, and alignment, which are particularly important when dealing with crowded environments. An arrival movement is similar to the seek function explained previously, except that it slows down and eventually stops as it tracks toward an object; this would be important as the movement behaviour that defines how the user avatar moves onto one of the pad objects, or into the light box. There is also a movement Reynolds (1999) calls 'unaligned collision avoidance', which seeks out potential collisions between the avatar and other objects moving in arbitrary directions. This movement behaviour could be useful when trying to avoid objects in the environment that the user avatar no longer wishes to interact with.

To summarise Reynolds' (1999) steering behaviours: as part of any further development or improvement to the existing movement behaviours exhibited within the environment, a movement behaviour library would need to be built up. The behaviours in this library would then need to be assigned hierarchical weightings, because during runtime it may be necessary to apply two or more of these movements in parallel, in which case the ratio of time spent applying each one becomes an important factor. Being able to apply multiple movement behaviours in parallel is an advantage, as once again a more realistic movement would be achieved, possibly displaying a sense of conflict in the user avatar's goals. For example, the user avatar may be going to explore the environment, but it may also need to avoid colliding with any of the other objects, in which case a combination of the wander and 'unaligned collision avoidance' movements would be exhibited.
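Combining behaviours in parallel then reduces to a weighted sum of the individual steering forces, as in the sketch below. The weights correspond to the hierarchical weightings referred to above and would in practice be derived from the user's profile; the behaviour functions named here are placeholders, not project code.

def blend_steering(weighted_behaviours):
    """Sum a list of (weight, steering_function) pairs into one 2D force."""
    total = (0.0, 0.0)
    for weight, behaviour in weighted_behaviours:
        force = behaviour()
        total = (total[0] + weight * force[0], total[1] + weight * force[1])
    return total

# Hypothetical usage: wandering while avoiding a nearby object.
combined = blend_steering([
    (0.7, lambda: (0.2, 0.1)),    # stand-in for the wander force this frame
    (0.3, lambda: (-0.4, 0.0)),   # stand-in for unaligned collision avoidance
])
print(combined)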

6.2.5 Line of Sight

Within the current implementation the user avatar appears in the representational form of a sphere. This sphere has continuous shading across its whole surface, which means there are no distinguishing marks to give an indication as to the direction of the user avatar's line of sight. Being merely a representational form may imply that the user avatar does not need one particular viewing direction; instead, its current incarnation with a 360 degree viewpoint could be considered sufficient. Within the scope of this project it is indeed sufficient. But when looking to give the user avatar human qualities, and asking it to perform within the environment in a way consistent with how the human user would behave, having an unnatural advantage such as being able to see what is going on all around is not desirable.

There are numerous methods with which any future developer could approach this problem. The first, slightly simplistic, approach would be to give the user avatar 180 degree vision (Figure 6.5 (a)). This would improve the scenario in the sense that the user avatar would not be able to see what was going on behind it, but the avatar would still have perfect peripheral vision. A more complex and sophisticated implementation of the viewing direction is shown in Figure 6.5 (b), whereby perfect vision is given in the primary region covered by the eyes (100%), and the sample rate then drops to 60% and then to 30% off to the side of the viewable direction. The drop in sample rate would mean a loss of information returned about the environment and the loss of smaller details.

Figure 6.5: This figure illustrates two possible solutions to the problem of the user avatar's line of sight

Both these implementations would show an improvement on the current project solution, but with their introduction the user avatar would need either distinguishable markings on its surface, a field of vision marked on the environment floor, or the addition of another shape to its mesh, in order to provide the user viewing the environment with an indication as to which direction the user avatar was currently looking. Restricting the viewable scene in this way inhibits the information available to the user avatar, and as such makes its decision processing as fallible as a human's.
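A restricted field of view of either kind could be approximated with a simple angular test like the sketch below, where fov_degrees would be 180 for option (a) or a smaller value per band for option (b). The function and parameter names are illustrative only.

import math

def in_field_of_view(avatar_pos, facing, object_pos, fov_degrees):
    """True if object_pos lies within fov_degrees (total angle) of the facing direction."""
    to_obj = (object_pos[0] - avatar_pos[0], object_pos[1] - avatar_pos[1])
    dist = math.hypot(to_obj[0], to_obj[1])
    fmag = math.hypot(facing[0], facing[1])
    if dist == 0 or fmag == 0:
        return True
    # Angle between the facing vector and the vector to the object.
    cosang = (facing[0] * to_obj[0] + facing[1] * to_obj[1]) / (dist * fmag)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cosang))))
    return angle <= fov_degrees / 2.0

print(in_field_of_view((0, 0), (1, 0), (2, 1), 180))    # in front: True
print(in_field_of_view((0, 0), (1, 0), (-2, 0), 180))   # directly behind: False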

6.2.6 Behavioural Control to the User

All the aforementioned techniques are aimed at improving the believability and realism of the user avatar within the environment; however, there is also scope for further development within the high-level specification of the behaviour to be exhibited by the avatar.

A Better HCI Perspective

The objective of facilitating the easy creation of a user-controlled behavioural avatar is the only objective with an indirect weighting on the design of the profile interface through which the high-level user behaviour specification is made. Correctly, therefore, little emphasis has been placed on the look and feel of the interface, or on ensuring that it meets good HCI principles.

Currently simple sliders and gauges are used to input the user behaviour into the profile file. In addition the interface utilises buttons, text boxes, labels, a combo box, and static picture boxes. These more than satisfy the objectives for the scope of this project. However, these classic GUI2 objects are purely functional and make no attempt at novel interaction, nor do they employ empirical data collection techniques.

For a project normally associated with such disciplines as artificial intelligence, computer graphics and psychology, developing HCI issues may seem unnecessary; however, the advantage of a good interface is that it facilitates good data capture, and the better the data captured about the user, the more accurate the user avatar's behaviour will be inside the virtual environment.

2Graphical User Interface


Existing Virtual Avatar Scripting Languages

During the literature review stage of this project, research was carried out into scripting languages; however, this research did not culminate in the use of any particular existing virtual avatar scripting language. There were several main reasons for this choice, generic to all the scripting languages researched:

• The lack of reference material regarding the setup and use of the scripting languages; in the particular case of VHML3, a need for appropriate Document Type Definition (DTD) support.

• The need to write a compiler layer from the chosen existing scripting language to the Python language that the Blender environment supports.

• The scope of the project is such that only a small proportion of high-level scripting control is required; most of the complexity is in the traditional 'behaviour level control' tier.

These generic problems resulted in the decision not to include an existing avatar scripting language in the design of the project. Given the appropriate reference material to fully understand an existing avatar scripting language, it would be reasonable to assume that, with some of the other further work suggested in this chapter, the increased complexity of the user avatar would require a well-structured approach to the script level control tier, something that existing scripting languages have already achieved. Writing a compiler would, in principle, not prove too difficult, and the advantages of using a well-tested reactive plan would justify the effort required.

Inclusion of a high-level avatar scripting language is therefore appropriate, given the added complexity of the further work suggested in this chapter. The following section gives a broad overview of the envisaged system if the further work discussed in this chapter were to be implemented to good effect.

6.3 Envisaged System Overview

Each one of the suggestions for further work discussed in this chapter could be considered a project, or in some cases several projects, in its own right. Figure 6.6 illustrates a potential high-level specification for the further development of this project.

The standard three-tier approach to levels of control given by Whalen et al. (2003) would remain an important model for the breakdown of the avatar control. The scripting level control tier would utilise the functionality of an existing avatar scripting language. Further investigation into the most appropriate language for the purpose would need to be carried out; however, from the initial studies conducted in this project, the POSH reactive plans seem well suited to the task of breaking down and structuring appropriate behaviour selection.

3VHML: Virtual Human Markup Language

Figure 6.6: This figure illustrates the high-level specification for a potential envisaged system.

The middle tier, the behaviour level control, would be implemented in the game engine of choice. This would be used to create libraries of commonly associated behaviours, for instance a library module devoted to the movement behaviours suggested in Section 6.2.4, which the user avatar could action based on an appropriate 'act' call from the POSH reactive plan. Another library would be needed for emotional behaviours; this would include the necessary emotional states, stimulated from the script level control and implemented using function calls to the joint control level. The 'senses' would also be developed at this abstraction level, utilising any existing sensor functions in the game environment where possible, or creating customised sensor functions from accessible object states.
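One way such a library might expose its primitives to the script level is sketched below: each behaviour is registered under a name that an 'act' call can dispatch on. This is a hypothetical structure for illustration only, not an existing POSH binding or part of the implemented system.

class BehaviourLibrary:
    """A named collection of behaviour primitives that a reactive plan can invoke."""

    def __init__(self):
        self.primitives = {}

    def register(self, name, func):
        self.primitives[name] = func

    def act(self, name, *args):
        # The script-level plan only knows the behaviour's name, not its implementation.
        return self.primitives[name](*args)


movement = BehaviourLibrary()
movement.register("seek", lambda target: "seeking %s" % (target,))
movement.register("wander", lambda: "wandering")

print(movement.act("seek", "OBPlane.014"))
print(movement.act("wander"))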

The lowest tier is the joint level control, which would be covered by the chosen game environment. This level is too low for the creation of virtual avatars, as it detracts from the high-level specification of behaviour which would still be central to the objectives of this further work. The joint level control should be dealt with by the chosen game engine, which should provide all the necessary avatar joint controls in the form of function calls.

This envisaged system is just one potential direction in which the development of the initial work carried out in this project could be taken. In fact, the creation of an overall system as described above is more likely years away; research into specific targeted areas of these further work suggestions is a more probable outcome of this analysis.

Chapter 7

Project Conclusions

This chapter begins by summarising the achievements of this dissertation, categorised by completion of the original project objectives. Among the many achievements of this project, certain aspects will be evaluated: first, a critical evaluation of the user avatar's effectiveness at conveying a sense of believability to the user is given. Then follows an evaluation of the advantages and disadvantages of developing with the Blender environment, with a view to making recommendations for future deployments. The chapter is completed with an overall project evaluation.

7.1 Summary of Achievements

• To facilitate the easy creation of a user-controlled behavioural avatar:

The process of creating a user-controlled behavioural avatar has been simplified to user specification of high-level behaviours, namely exploration, boredom and curiosity. Instead of the user being required to have knowledge of a particular scripting language, they can define the behaviour of the avatar through the use of sliders. These three sliders have gauges attached which visually convey the sense of a behaviour level to the user (see Figure 7.1).

The values of the three behaviours provided when the user profile is set up are used throughout the user avatar's lifespan in the environment to calculate appropriate actions for the user avatar to display. The user setting up the avatar does not need to know any details about how the application uses the data, or how the user avatar illustrates it; merely that by specifying their behaviour, the user will affect the actions of the avatar within the environment. Being able to perform this task at the first attempt, without having any direct tuition, was taken as the measure of success.

The user testing carried out on the system showed convincingly that all the users found it 'easy' to navigate to the profile manager and to move the slider positions such that they were happy the levels of behaviour accurately resembled their own. Every user was then able to save their profile (although one needed prompting to do so by the dialog box warning) and to load their profile into the application setup to be launched. From this point, every person was able to select their camera viewpoint and select the 'Run application' button. For further details on this user test, see Chapter 5.

Figure 7.1: This figure illustrates the sliders and gauges used in the Profile Manager to select behavioural levels

Having achieved this without any direct tuition, only being informed of the overall aim, which was to create an avatar profile of themselves, constitutes ease of use and sufficient simplification to make the process readily accessible.

• To create an avatar that displays a visual and behavioural personification. This should be achieved within a virtual environment:

Breaking this objective down into its subparts: the creation of an avatar has been achieved in the Blender environment. The chosen representation was to implement the user avatar as a sphere; it displays a physical presence in the environment, most notably in contrast to the autonomous avatar, which is considerably smaller than the user avatar. The persona given off by the user avatar is that it dominates the autonomous avatar, and when the user avatar chases the autonomous avatar, empathy with the predicament of the autonomous avatar was felt by one user during testing.

The behavioural personification is achieved through the finite state machine action selection techniques available to the user avatar. For instance, the decision on whether the user avatar should leave the wall is made based on a user-specified desire to explore, the time since the last exploration, and a user-bound random factor. This example shows that a large emphasis in the decision to explore is placed on the user specifications in the user avatar profile file. This is also true of many other action selection calculations; it illustrates that the personification of the user is passed, via the profile file, into the varied yet calculated behaviours exhibited by the user avatar.
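The kind of test involved can be illustrated with a small sketch; the threshold arithmetic below is only indicative of the approach and is not a copy of the implemented formula or its variable names.

import random
import time

def should_leave_wall(explore_level, last_explore_time, now=None,
                      base_interval=20.0, randomness=5.0):
    """Decide whether the avatar leaves the wall to explore.

    explore_level     -- the desire to explore taken from the user profile (0-10)
    last_explore_time -- when the avatar last went exploring (seconds)
    """
    if now is None:
        now = time.time()
    # A stronger desire to explore shortens the wait; the random term keeps
    # the decision from being perfectly predictable to the watching user.
    wait = base_interval - explore_level + random.uniform(0.0, randomness)
    return (now - last_explore_time) > wait

print(should_leave_wall(7, time.time() - 30))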

A virtual environment has been created using the Blender open-source game engine. The user avatar in its sphere form exists within the created scene which, for testing purposes and to maintain simplicity of environmental variables, is made up of a floor and four walls.

• The user avatar should achieve consistency through the use of appropriate behaviour:

Using parameters set in the high-level behaviour specification made by the user, the user avatar's actions within the environment should be consistent with the profile provided to it. User testing has shown this objective to have been achieved, to a degree. The user testing carried out showed that the accuracy of the calculation based on the second user test was (INSERT PERCENTAGE). For more details on how the second test was carried out, see Chapter 5. This consistency percentage could be improved; however, that is out of the scope of this project, and the (INSERT PERCENTAGE) consistency rating is sufficient for the needs of this project.

• The avatar should achieve pre-specified goals within the environment:

Implemented within the environment are four dynamic objects with which the user avatar can interact; interacting with these objects should be considered a set of sub-goals. The main goal of the user avatar is to stay safe, and second to this goal is the desire to explore the environment; implicit in this is that the user avatar will walk around the walls to remain safe, and also wander off into the environment to explore it. If one of the aforementioned sub-goals comes within the curiosity range, then the user avatar becomes interested in completing that sub-goal.

As well as achieving pre-specified goals, it is important for the user to have an understanding that goals have been achieved. As part of the user testing, the users were asked to identify which goals they would have in the environment, and also what goals they thought the user avatar possessed. While all the users identified the avatar's goals as walking the wall and exploring the environment, only one user questioned identified the interaction goals as being individual to the objects; the others classified them all under the heading of object interaction. The conclusion of the discussion in Chapter 5 is that this result arises because the user does not view the user avatar as behaving differently after each interaction (it always returns to walking the wall). The decision to return to the default state (wall walking) is made in the finite state machine action selection model, and any change to reflect the users' observations would require the model to change.

• The project must consider the overall problem whilst maintaining the project scope:


It is particularly important in a project like this one to maintain an awareness of scope. This project has provided a wide variety of background analysis before deciding on a single implementation strategy; however, the other objectives set out at the beginning of this dissertation limit the scope of what is expected to be achieved. While a more focussed approach earlier in the project may have resulted in a more polished final system, it would not have allowed the same breadth of learning in a complex and exciting field.

7.2 Evaluation of the Believability of the User Avatar

The aim of this section is to critically evaluate the effectiveness of the user avatar in its display of believable behaviour that is considered consistent with the user's own actions. The criteria defined by Merriam-Webster (1989) as to what makes a believable virtual human will be used to benchmark the success of the user avatar. The criteria headings are shown in bold.

Behaviour: The way in which characters act

Behaviour is not just one movement or change of state; rather, behaviours should be considered self-supporting, in that many low-level behaviours go together to form more complex ones. One of the aims of this project is to allow high-level control of behaviour, and this has been achieved by limiting the behaviours available to the user and using graphical interface objects to facilitate the input of the behaviour specification. However, the question raised is: has the restriction on the number of behaviours affected the believability?

The evidence suggests that believability has been affected, but not to such an extent that, within the scope of this project, the user avatar behaves radically differently from how a human user would behave given the same circumstances. Although the project's implementation adopts a modular approach, it could look to develop a more tightly controlled library of common behaviours. One of the ideas supported by the BOD methodology (Bryson, 2003b) is that behaviours should be extensible and supplied in the form of a behaviour library. This approach should be further considered, as a clearer structure of low- and high-level behaviours would lead to greater use, and thus greater realism, of behaviours within the environment.

Intelligence: The ability to learn or deal with new situations

The notion of intelligence was not part of the original objectives for this project, and as such was considered out of scope for the development. Consideration of the advantages and disadvantages of intelligent decision making was made during the literature review phase, but it was deemed to be too complex and likely to detract from the overall objectives of the project. The incorporation of attributes of intelligence was therefore omitted from the objects in this project.


When modelling believable avatars, the importance of learning, and of applying that knowledge in new situations, cannot be overstated. The decision not to include intelligence in this project's plans is still the correct one, but it is clear that by employing intelligent techniques greater believability could be achieved.

Autonomy: The ability to be self-governing

The user avatar governs itself during runtime; it does not require stimulus from the user, and makes decisions based on the environmental and profile information available to it. An example of autonomy within the project is when the user avatar reaches a wall: it makes the decision to turn left or right independent of any input from the user. This case also illustrates the effect the lack of intelligence has on autonomy within the system. The decision to turn when reaching the wall, albeit made autonomously, is not reached from some calculation of previous turns or preferred directions; the choice is made randomly. While this is acceptable for this project, any further development of the system would need to consider more closely the impact that this completely random decision has on realistic behaviour.

Adaptation: The adjustment to environmental conditions

The user avatar in this project is affected by the impact environmental conditions have on its own state, although it does not adjust or react to these circumstances. This quality is not stated in the objectives of this project and is therefore out of scope; any avatar with awareness is affected by the state of the environment it is in, and is able to adapt accordingly. Although, given a new environment, the user avatar would be able to survive (i.e. it would still find the nearest wall and then walk around it, exploring occasionally), it would not factor its environment into its decisions.

Perception: The awareness of the environment through physical sensation

The Blender game engine provides a range of sensors that can be linked to objects within the environment. These sensors relay information about the environment and other objects within it back to the object with which they are associated. They are an important part of the user avatar's response to events and changes in conditions around it. The set of built-in sensors can provide a wide range of data back to the object; however, the Blender sensors can be difficult to control. As an example, the 'Near' sensor is supposed to trigger a positive flag if an object with a specified property or material comes within the specified distance. However, the point from which it measures the near distance is unclear and the results are inconsistent. A reimplementation of this project could overcome this by writing new sensor functions that perform the task expected, following a predictable method.
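One way of writing such a replacement sensor, measured centre-to-centre so that the trigger point is unambiguous, is sketched below. It is written against plain position tuples rather than the Blender API, so the calling code would pass in whatever positions the game engine reports; all names are illustrative.

import math

def near(my_position, other_position, trigger_distance):
    """A replacement 'Near' check measured centre-to-centre, so the trigger point is explicit."""
    dx = my_position[0] - other_position[0]
    dy = my_position[1] - other_position[1]
    dz = my_position[2] - other_position[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= trigger_distance

def nearby_objects(my_position, object_positions, trigger_distance):
    """Return the names of all objects within range of the avatar."""
    return [name for name, pos in object_positions.items()
            if near(my_position, pos, trigger_distance)]

print(nearby_objects((0.0, 0.0, 0.0),
                     {"OBYellowBall": (1.0, 1.0, 0.0), "OBPlane.014": (8.0, 0.0, 0.0)},
                     3.0))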


Memory: Reproducing or recalling what has been learnt from an organism's activity or experience

Memory has been implemented within this project; an example is that when the user avatar completes an interaction with another object, it remembers what object it was and how many times the user avatar has interacted with that particular object in the past. A further example is implemented for the autonomous avatar: when it gets moved from its main goal of tracking, it remembers where it left and always travels back to that point once it is no longer being chased by the user avatar.

However, the form of memory being applied is only primitive and stores the results in pre-established variables. This would be of no use when placed inside an unknown environment with unexpected interactions. Possible types of memory implementation are discussed in Chapter 6; however, it is clear that not many of the avatar's past experiences get stored and reused in calculations for further decisions, and to implement such an upgrade would require a clear, consistent and concise memory structure.

Emotion: A state of feeling, a psychic or physical reaction

The way emotions are displayed within the environment is not as originally envisaged when undertaking this project. This is largely due to the failings of the Blender environment and its inability to allow dynamic control of objects' attributes; this is discussed further in the evaluation of the Blender environment, Section 7.3.

An area that has been discussed previously is how the user avatar responds to emotion, or in other words what emotion it assigns to behaviours, especially when they are new and it has no precedent to formulate a response from. This is complex; however, the user avatar can only assign an emotion based on its intelligence, memory and perceptions (all topics discussed in this section).

One example where consistent emotion ought to have been shown is where the user avatar is exploring and comes within range of another object. If it is receptive to this object it could show it by perhaps accelerating toward it, or bouncing on the way; if it has never seen this object before it could stop and consider going closer; if its profile suggests it is an exploratory type it should proceed with caution, perhaps changing colour to show it is nervous. All these changes would heighten the user's perception of what the avatar was feeling, strengthening the bond between them. Currently the user avatar appears to be dead-pan.

Consciousness: The state of being aware

The business of artificial consciousness is beyond the scope of this document, but for information on the area a good starting point is Aleksander (1994), which provides a good introduction to 'artificial consciousness' without immediately delving into deep theory.


Freedom: Absence of necessity or coercion or constraint in choice of action

Within the scope of this project the user avatar can be considered to have freedom, because it behaves in a random fashion at many points throughout its decision-making lifespan. For instance, when it first hits the wall it randomly decides whether to move left or right; when it is going for an explore it randomly decides what speed and, within a range, what direction it is moving in. This behaviour is unpredictable to the human user watching it.

Unpredictability is a good way of retaining user interest; if the user knew every decision the user avatar would make, then watching the avatar would quickly become tiresome. By demonstrating freedom in this way the avatar can give the impression to the user that it is not just following a pre-determined script, and this has been shown through user testing to improve the sense of believability in the user avatar's actions.

Presence: The condition of feeling a sense of being present

This last quality is targeted more at the human user while using the system. It is coupled by Merriam-Webster with the sense of immersion, which is a state where the boundaries between real and virtual worlds become unclear. An increase in user immersion is coupled with an increase in believability; the more deeply immersed a user becomes, the more they believe in what they are seeing.

The objectives do not mention presence, and this project never expected the level of user immersion required to break down the boundaries between real and virtual worlds. Throughout the user testing, all the users were aware they were interacting with a virtual avatar. This is not a negative, merely an indication of the complexity in connecting a user with their avatar.

Summary

The user avatar has overall met the objectives of the project. For most of Merriam-Webster's (1989) ten definitions of human life, the user avatar has displayed some aspect that is consistent with the description given. With such a scoped project, it was never the intention to try to meet all of Merriam-Webster's (1989) definitions; however, they do illustrate the key areas of human life-likeness that this project is tailored toward.

Most of Merriam-Webster's (1989) definitions refer to psychological principles; however, the user avatar also has a visualisation which supports these principles through displaying appropriate behaviour. This visualisation is generated in the Blender environment; the following section evaluates the Blender environment, with careful focus on the effect that Blender's shortcomings have on modelling appropriately diverse behaviour.


7.3 Evaluation of the Blender Environment

When first choosing the Blender environment it was assessed that its primary functionality was targeted towards static or pre-determined animation modelling; however, it was also one of the few packages that offered this in combination with a built-in game engine. After initially looking through Blender, the game engine and the APIs, it was a reasonable assumption that the functionality for what was required was present.

It was only well into development that it became clear that some of the functionality, mainly that expected to create the emotional sense of the avatar, was not available during game runtime. Specifically, if you return an object from the current scene, it is returned as an object of type 'KX_GameObject'. This is different from the 'Blender Object' that provides the functionality to control materials, colours, sizes, and many of the other common object attributes. You are able to explicitly call the 'Blender Object' from the 'Blender Scene'; however, once the game is in progress this is only possible with read-only privileges.

It is not necessary to provide further detail on the technicalities of this issue; the result, however, is that attribute changes were not possible and therefore much control over the avatar was lost. This has had a major effect on the ability of the user avatar to display a sense of being affected by its interactions. This, coupled with the fact that the system was not designed to have memory of the emotional state felt, meant that the main reasons for the avatar choosing to interact with an object became limited to its distance from an object and its frequency of interaction.

Using the system also demonstrated that the sensors and the actuators provided as part of the Blender game engine seemed to yield inconsistent results, and any further work would require new sensor and actuator functions to be written. The reasons for these inconsistencies are unclear, but they lie within the functionality provided by the game engine. An example of what is deemed inconsistent is the Touch sensor, which would sometimes return a positive state when it first made contact with an object such as a wall, then return to a false state immediately whilst the avatar was moving along the wall, while on other occasions it would remain positive for a random length of time and then flicker between false and true. This makes coding for certain sensor states very difficult; a workaround was devised that compared the hit objects' names to see whether the touched object had changed, rather than just relying on the hit state changing. A number of the other sensors used, including the 'Near' sensor and the 'Collision' sensor, had problems as well.
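The workaround can be expressed as a small stateful check that reacts to a change in the touched object's name rather than to the raw hit flag. The sketch below is written against plain values rather than the Blender sensor API, so the calling code would pass in whatever name the touch sensor reports each frame; class and variable names are illustrative.

class TouchTracker:
    """Fires only when the name of the touched object changes, ignoring a flickering hit flag."""

    def __init__(self):
        self.last_name = None

    def new_contact(self, hit_object_name):
        # hit_object_name is None on frames where nothing is reported as touched.
        if hit_object_name is None or hit_object_name == self.last_name:
            return False
        self.last_name = hit_object_name
        return True


tracker = TouchTracker()
frames = ["OBPlaneUwall", "OBPlaneUwall", None, "OBPlaneUwall", "OBPlaneRwall"]
print([tracker.new_contact(name) for name in frames])
# [True, False, False, False, True]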

The issue of game variables has already been discussed in Section 4.3.5; the workaround implemented in this project is a well-documented solution within the Blender community. The limitations of the system should be seen more fairly in the context of the application envisaged by the game engine's designers. Interestingly, a recent forum post indicated that the Blender team are not planning on developing their internal game engine further; instead they plan to make use of an external open-source project named Ogre. Blender is very good at modelling and animation, and perhaps this is a sign that its developers are sticking to what they do best.

7.4 Whole Project Evaluation

The experience gained and the lessons learnt from this project have been entirely positive. The objectives set at the outset have been achieved:

Foremost, the project has created a user-centred avatar system; this incorporates the environment in which the user avatar exists and the development of a profile manager GUI that uses graphical objects to capture information about the user's behaviour in order to govern the behaviour of the user avatar. The most important point about this interface is that it does not assume the user has any prior behavioural scripting knowledge.

The user avatar has been created; it is consistent with Popovici et al.'s (2004) definition of a complex instance of an agent within a virtual environment that is able to perceive, decide, and react based on its own internal structure. The user avatar makes decisions based on its environment in a timely fashion, as proposed by Popovici et al. (2004), and makes them autonomously based on a version of Aylett and Cavazza's (2001) sense-reflect-act cycle. The key benefit of the user avatar is that it is able to make autonomous decisions which are consistent with the user profile it has been provided with, leading to a sense of believability, by the user, in the avatar's actions.

A virtual environment consistent with the definitions given by Yang et al. (2003) and by Selvarajah and Richards (2005) has been produced, with synthetic sensory information, movement of objects and interaction between dynamic objects all evident. The virtual environment provides an appropriate setting for the virtual avatar to display its behaviour.

At the beginning of the project it was correct to make the decision to use Blender; however, now, toward the end of the project, Blender's limitations can be appreciated, such that any future work will require a reassessment of the Blender environment. Blender appeared to have potential beyond the scope of its abilities, and it is clear that the game engine side of it has not been developed further for some time. In any future undertakings I would definitely switch to a more game-engine-specific environment. However, I would still look to use Blender as a modelling tool, as the exposure I have had to it has shown it to be very powerful, and through this project I have already picked up many of the skills necessary to develop in it.

This project has also made recommendations for further work. It has considered the implications of the overall conclusions on the future development of the user-centred avatar system. Further work has been suggested across a wide spectrum of paths into which this project could develop. There is a vast amount of information and research still being produced, and any complete system first requires an in-depth study of a much broader base of knowledge.

I would consider my tendency to look at the bigger picture, rather than focusing on smaller areas, to be one of my strengths. In undertaking this project, while I have strengths in viewing the bigger picture, I have enjoyed the necessary focus on smaller subject areas. It is clear that any aspiring venture into the world of behaviour modelling with virtual avatars and virtual environments requires more time than an undergraduate dissertation allows. However, a PhD research project would allow sufficient time, and I would strongly urge anyone to pursue one in this field, as the material surrounding this subject area is incredibly diverse and interesting, and a lot of successful research has already been done. A rewarding experience can only assist in achieving the dream of a 'complete solution'.

Bibliography

Aleksander, I. (1994), 'Artificial Consciousness', Artificial Life and Virtual Reality, John Wiley, Chichester pp. 73–81.

Allen, C., Wallach, W. and Smit, I. (2006), 'Why Machine Ethics?', IEEE Intelligent Systems 21(4), 12–17.

Aylett, R. and Cavazza, M. (2001), 'Intelligent Virtual Environments: A State-of-the-art Report', Eurographics 2001.

Bailenson, J., Beall, A., Blascovich, J., Raimundo, M. and Weisbuch, M. (2001), 'Intelligent agents who wear your face: Users' reactions to the virtual self', Lecture Notes in Artificial Intelligence 2190, 86–99.

Bryson, J. (2000), 'A Proposal for the Humanoid Agent-builders League (HAL)', AISB Quarterly pp. 11–12.

Bryson, J. (2003a), 'Action Selection and Individuation in Agent Based Modelling', Proceedings of Agent pp. 317–330.

Bryson, J. (2003b), 'The Behavior-Oriented Design of Modular Agent Intelligence', Agent Technologies, Infrastructures, Tools, and Applications for e-Services pp. 61–76.

Bryson, J., Caulfield, T. and Drugowitsch, J. (2005), 'Integrating Life-Like Action Selection into Cycle-Based Agent Simulation Environments', Proceedings of Agent 2005: Generative Social Processes, Models, and Mechanisms.

Canamero, D. (1997), 'Designing emotions for activity selection', Technical Report.

Cassell, J. and Vilhjalmsson, H. (1999), 'Fully Embodied Conversational Avatars: Making Communicative Behaviors Autonomous', Autonomous Agents and Multi-Agent Systems 2(1), 45–64.

Cassell, J., Vilhjalmsson, H. and Bickmore, T. (2001), 'BEAT: the Behavior Expression Animation Toolkit', Proceedings of the 28th annual conference on Computer graphics and interactive techniques pp. 477–486.


Chen, J., Lin, W., Bai, H., Yang, C. and Chao, H. (2005), 'Constructing an intelligent behavior avatar in a virtual world: a self-learning model based on reinforcement', Information Reuse and Integration, 2005 (IRI-2005), IEEE International Conference on, pp. 421–426.

Foot, P. (1978), 'The problem of abortion and the doctrine of double effect', Virtues and Vices pp. 19–32.

Gillies, M. and Dodgson, N. (2004), 'Behaviorally rich actions for user controlled characters', Computers & Graphics 28(6), 945–954.

Gleicher, M. (2001), 'Comparing constraint-based motion editing methods', Graphical Models 63(2), 107–134.

Habbo Hotel (2006), Internet. www.habbohotel.co.uk.

Hayes-Roth, B. (2003), 'What makes characters seem life-like', Life-like Characters. Tools, Affective Functions and Applications. Springer, Berlin pp. 447–462.

Imbert, R. and de Antonio, A. (2000), 'The Bunny Dilemma: Stepping Between Agents and Avatars', Proceedings of the 17th Twente Workshop on Language Technology (TWLT 17).

Koda, T. and Ishida, T. (2006), 'Cross-cultural Study of Avatar Expression Interpretations', Proceedings of the International Symposium on Applications on Internet pp. 130–136.

Koda, T. and Maes, P. (1996), 'Agents with faces: The effect of personification', Proceedings of the Fifth IEEE International Workshop on Robot and Human Communication (RO-MAN 96).

Kshirsagar, S., Magnenat-Thalmann, N., Guye-Vuilleme, A. and Kamyab, K. (2002), 'Avatar Markup Language', Proceedings of the workshop on Virtual environments 2002 pp. 169–177.

Kwong, A. (2003), A framework for reactive intelligence through agile component-based behaviors, Master's thesis, University of Bath, Department of Computer Science.

Maes, P., Darrell, T., Blumberg, B. and Pentland, A. (1995), 'The ALIVE system: Full-body interaction with autonomous agents', Proceedings of Computer Animation 95.

Mariott, A. and Stallo, J. (2002), 'VHML: uncertainties and problems. A discussion', Proceedings of the AAMAS-02 Workshop on Embodied Conversational Agents: Let's Specify and Evaluate Them.

Merriam-Webster, I. (1989), The New Merriam-Webster Dictionary, Merriam-Webster Inc.


Monsieurs, P., Coninx, K. and Flerackers, E. (2000), 'Collision avoidance and map construction using synthetic vision', Virtual Reality 5(2), 72–81.

O'Hare, G., Campbell, A. and Stafford, J. (2005), 'NeXuS: Delivering Behavioural Realism through Intentional Agents', Active Media Technology, 2005 (AMT 2005), Proceedings of the 2005 International Conference on, pp. 481–486.

Oliveira, E. and Sarmento, L. (2003), 'Emotional advantage for adaptability and autonomy', Proceedings of the second international joint conference on Autonomous agents and multiagent systems pp. 305–312.

Ortiz, A., Oyarzun, D., Aizpurua, I., Posada, J., Center, V. and San Sebastian, S. (2004), 'Three-dimensional whole body of virtual character animation for its behavior in a virtual environment using H-Anim and Inverse kinematics', Computer Graphics International, 2004. Proceedings, pp. 307–310.

Popovici, D., Buche, C., Querrec, R. and Harrouet, F. (2004), 'An Interactive Agent-Based Learning Environment for Children', Proceedings of the 2004 International Conference on Cyberworlds (CW'04), Volume 00, pp. 233–240.

Read, J. (2005), 'The ABC of CCI', Interfaces 62, 8–9.

Reynolds, C. (1999), 'Steering behaviors for autonomous characters', Game Developers Conference 1999.

Rigotti, C., Cerveri, P., Andreoni, G., Pedotti, A. and Ferrigno, G. (2001), 'Modeling and driving a reduced human mannequin through motion-captured data: a neural network approach', Systems, Man and Cybernetics, Part A, IEEE Transactions on 31(3), 187–193.

Rolls, E. (1999), The Brain and Emotion, Oxford University Press.

Selvarajah, K. and Richards, D. (2005), 'The use of emotions to create believable agents in a virtual environment', Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems pp. 13–20.

Sommerville, I. (2000), Software engineering, 6 edn, Addison-Wesley.

Sternberg, R. (1990), Metaphors of Mind: Conceptions of the Nature of Intelligence, Cambridge University Press.

Takeuchi, A. and Naito, T. (1995), 'Situated facial displays: towards social interaction', Proceedings of the SIGCHI conference on Human factors in computing systems pp. 450–455.

The Blender Module API (2006), Internet. http://www.blender.org/documentation/242PythonDoc/index.html.


The Blender Wiki Tutorial (2006), Internet. http://wiki.blender.org/index.php/Main_Page.

The Game Logic Module API (2004), Internet. http://www.blender.org/documentation/pydocgameengine/PyDoc-Gameengine-2.34/index.html.

The Palace (2000), Internet. www.palaceplanet.net/manuals/guides/whatisthepalace.htm.

Vinayagamoorthy, V. (2002), 'Emotional personification of humanoids in Immersive Virtual Environments', Proceedings of the Equator Doctoral Colloquium.

Voss, P. (2005), 'The Essentials of General Intelligence', Goertzel and Pennachin (Eds.), Artificial General Intelligence, Springer-Verlag.

Walker, J., Sproull, L. and Subramani, R. (1994), 'Using a human face in an interface', Proceedings of the SIGCHI conference on Human factors in computing systems: celebrating interdependence pp. 85–91.

Whalen, T., Petriu, D., Yang, L., Petriu, E. and Cordea, M. (2003), 'Capturing Behaviour for the Use of Avatars in Virtual Environments', CyberPsychology & Behavior 6(5), 537–544.

Yang, X., Petriu, D., Whalen, T. and Petriu, E. (2003), 'Avatar animations in a 3D virtual environment', Electrical and Computer Engineering, 2003. IEEE CCECE 2003. Canadian Conference on 2.

Appendix A

System Screenshots

Figure A.1: Screenshots of the virtual environment system


Figure A.2:


Figure A.3:


Figure A.4:

Appendix B

An example Profile File

The following is an example of the profile file passed from the user interface to the Blender environment.

<attributes>
name-Frank
size-2
colour-(255, 255, 0, 255)
</attributes>
<behaviours>
curious-7
bored-6
explore-5
wander-8
</behaviours>
<repeats>
repeat virtual-2
repeat phonebox-2
repeat hover-2
repeat colourpad-2
time on virtual-10
time on phonebox-10
time on hover-10
time on colourpad-10
</repeats>
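A small parser for this format might look like the sketch below. It assumes the reconstruction given above (sections delimited by <section>...</section> tags, with key-value lines separated by a hyphen), which is how the extracted listing reads rather than a verified specification, and it is not the project's own loading code.

def parse_profile(text):
    """Parse the profile format shown above into {section: {key: value}}."""
    profile = {}
    section = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("</"):
            section = None                  # closing tag ends the current section
        elif line.startswith("<"):
            section = line.strip("<>")      # opening tag starts a new section
            profile[section] = {}
        elif section is not None:
            # Split on the first hyphen: everything before is the key.
            key, _, value = line.partition("-")
            profile[section][key.strip()] = value.strip()
    return profile


example = "<behaviours>\ncurious-7\nbored-6\nexplore-5\nwander-8\n</behaviours>"
print(parse_profile(example))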


Appendix C

Log File Output

The following is an extract from the log files generated during the runtime.

########################################
This is a Logfile for the VAvatar System
Creation Time/Date: Thu Apr 19 10:25:55 2007
########################################

Thu Apr 19 10:25:55 2007> Done Initialising the game variables
Thu Apr 19 10:25:55 2007> PLAYER: I'm moving towards the nearest wall
Thu Apr 19 10:25:58 2007> PLAYER: I've hit the wall and I'm turning RIGHT
Thu Apr 19 10:26:04 2007> PLAYER: I feel safe enough to go for a wander
Thu Apr 19 10:26:04 2007> PLAYER: I'm off Wandering
Thu Apr 19 10:26:16 2007> PLAYER: I want to go back to the safety of the wall
Thu Apr 19 10:26:16 2007> PLAYER: I'm moving towards the nearest wall
Thu Apr 19 10:26:18 2007> PLAYER: I've hit the wall and I'm turning RIGHT
Thu Apr 19 10:26:20 2007> PLAYER: This looks like a phonebox, but I'm going to approach slowly
Thu Apr 19 10:26:31 2007> PLAYER: I'm going to outside the phonebox
Thu Apr 19 10:26:35 2007> PLAYER: I'm moving into the phonebox
Thu Apr 19 10:26:39 2007> LIGHT: Someone is in the phonebox, I'm on
Thu Apr 19 10:26:39 2007> PLAYER: The light's on in this phonebox
Thu Apr 19 10:26:41 2007> LIGHT: They're no longer in phonebox, I'm turning off
Thu Apr 19 10:26:41 2007> PLAYER: I'm going to outside the phonebox
Thu Apr 19 10:26:41 2007> LIGHT: Someone is in the phonebox, I'm on
Thu Apr 19 10:26:43 2007> LIGHT: They're no longer in phonebox, I'm turning off
Thu Apr 19 10:26:43 2007> PLAYER: I'm going to outside the phonebox
Thu Apr 19 10:26:44 2007> PLAYER: I'm moving into the phonebox
Thu Apr 19 10:26:45 2007> PLAYER: I'm done with this Phonebox
Thu Apr 19 10:26:45 2007> PLAYER: I'm moving towards the nearest wall
Thu Apr 19 10:26:50 2007> PLAYER: I've hit the wall and I'm turning DOWN
Thu Apr 19 10:26:56 2007> PLAYER: I'm turning to follow the wall
Thu Apr 19 10:26:59 2007> PLAYER: I feel safe enough to go for a wander
Thu Apr 19 10:26:59 2007> PLAYER: I'm off Wandering
Thu Apr 19 10:27:00 2007> PLAYER: I'm chasing the Virtual
Thu Apr 19 10:27:00 2007> VIRTUAL: I've spotted the Player, run-away!
Thu Apr 19 10:27:00 2007> VIRTUAL: The player can't be seen anymore, I'm moving back
Thu Apr 19 10:27:00 2007> VIRTUAL: I'm going back to what I was doing
Thu Apr 19 10:27:00 2007> PLAYER: I'm chasing the Virtual
Thu Apr 19 10:27:07 2007> VIRTUAL: I've spotted the Player, run-away!
Thu Apr 19 10:27:07 2007> VIRTUAL: The player can't be seen anymore, I'm moving back
Thu Apr 19 10:27:07 2007> VIRTUAL: I'm going back to what I was doing
Thu Apr 19 10:27:08 2007> PLAYER: I'm chasing the Virtual
Thu Apr 19 10:27:08 2007> VIRTUAL: I've spotted the Player, run-away!
Thu Apr 19 10:27:08 2007> VIRTUAL: The player can't be seen anymore, I'm moving back
Thu Apr 19 10:27:08 2007> PLAYER: I'm chasing the Virtual
Thu Apr 19 10:27:08 2007> VIRTUAL: I've spotted the Player, run-away!
Thu Apr 19 10:27:08 2007> VIRTUAL: The player can't be seen anymore, I'm moving back

Appendix D

Environment Code

File: vaactuator.py

import Blender
import GameLogic as g


class actuat:
    currObj = ""
    c = ""

    def __init__(self, objnm):
        """set the object for this object name"""
        self.currObj = g.getCurrentScene().getObjectList()[objnm]
        self.c = g.getCurrentController()

    def mlv(self, lst):
        """this def tells the object to move in specified way"""
        actu = self.c.getActuator("motion")
        actu.setLinearVelocity(lst[0], lst[1], lst[2], 0)
        g.addActiveActuator(actu, 1)

    def mrot(self, lst):
        """this def tells the object to rotate by the given amounts"""
        actu = self.c.getActuator("motion")
        actu.setDRot(lst[0], lst[1], lst[2], 1)
        g.addActiveActuator(actu, 1)

    def mforce(self, lst):
        actu = self.c.getActuator("motion")
        actu.setForce(lst[0], lst[1], lst[2], 1)
        g.addActiveActuator(actu, 1)

    def mlforce(self, lst):
        actu = self.c.getActuator("motion")
        actu.setForce(lst[0], lst[1], lst[2], 0)
        g.addActiveActuator(actu, 1)

    def mTrackTo(self, obj):
        actu = self.c.getActuator("trackTo")
        actu.setObject(obj)
        g.addActiveActuator(actu, 1)

    def mstop(self):
        self.mlv([0, 0, 0])

    def mreplace(self, mesh):
        actu = self.c.getActuator("replace")
        actu.setMesh(mesh)
        g.addActiveActuator(actu, 1)

    def mend(self):
        actu = self.c.getActuator("end")
        g.addActiveActuator(actu, 1)

File: vacolourpad.py

import Blender
import GameLogic as g
import time as t
import random as rnd

from vafunction import cmnfunc


class colourchgr:
    # Rotates the colour of the colour pad
    # colourfunc = cmnfunc("OBPlane.013")
    c = g.getCurrentController()
    # what time was the colour last changed
    changeTime = t.time()

    def changer(self):
        objects = Blender.Object.Get("Plane.013")
        objects.colbits = (1 << 0)
        if t.time() > colourchgr.changeTime:
            # change the colour of the object
            # obje = g.getCurrentScene().getObjectList()["OBPlane.013"]
            # print "this is: ", type(obje)
            # obje.setRGBCol([rnd.random(), rnd.random(), rnd.random()])
            # colourc = self.c.getOwner()
            # print "this is: ", type(colourc)
            # colourc.setRGBCol([rnd.random(), rnd.random(), rnd.random()])
            # set the next change time
            colourchgr.changeTime = t.time() + 3
            # objects = Blender.Object.Get("Plane.013")
            # print "NAEM ", objects.name
            # print type(objects)
            mat1 = Blender.Material.New("newMat")
            # print "material ", type(mat1)
            mat1.setRGBCol([rnd.random(), rnd.random(), rnd.random()])
            objects.setMaterials([mat1, None])
            # objects.colbits = (1 << 0)
            # Blender.Redraw()

File: vacontroller.py

import Blenderimport GameLogic as gimport random as rndfrom vaactuator import actuatfrom vasensor import s ensefrom va funct ion import cmnfunc

class r un s c r i p t :currObj=””c=””

def i n i t ( s e l f , objnm) :””” s e t the o b j e c t f o r t h i s o b j e c t name”””global currObjglobal ccurrObj=g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ objnm ]c = g . ge tCur r entCont ro l l e r ( )g . avs = sense ( ”OBSphere” )g . ava = actuat ( ”OBSphere” )g . avc = cmnfunc ( ”OBSphere” )

def oute r l oop ( s e l f ) :””” pr in t ” s a t t e n t i on ” , g . s a t t e n t i on ”””print g . x r f r sh t ime””” r e f r e s h the random numbers at i n t e r v a l time”””i f Blender . sys . time ( )>(g . r f r s h t ime+g . i t r v t ime ) :

g . r f r s h t ime=Blender . sys . time ( )g . avc . r e f r e s h ( )

””” r e f r e s h the wander depending on exp l o r era t e ”””

i fBlender . sys . time ( )>(g . x r f r sh t ime+g . exp lo r e+g . wanderx ) :i f not g . wander and g . onwal l :

g . swander=Blender . sys . time ( )g . wander=True

else :” h e l l p ”

i f not g . wander and not g . a t t en t i on :””” c a l l the roam method”””

AP

PE

ND

IXD

.E

NV

IRO

NM

EN

TC

OD

E99

g . avc . roam ( )””” i f t he re ’ s something a t t r a c t i n g

a t t en t i on ”””g . avc . c u r r e n t I n t e r e s t ( )

””” i f i t want ’ s to go f o r a wander then ”””i f g . wander and not g . a t t en t i on :

”””run the wander code”””g . onwal l=Fal seg . avc . wander ( )””” pr in t g . swanderp r in t g . wandertimepr in t g . wanderx”””i f

Blender . sys . time ( )>(g . swander+g . wandertime+g . wanderx ) :g . wander=Falseg . x r f r sh t ime=Blender . sys . time ( )

e l i f g . a t t en t i on :””” e l s e i f something a t t r a c t s i t ’ s

a t t en t i on ””””””what has a t t t r a c t e d i t ’ s a t t en t i on ??”””i f

g . i n t e r e s t [ l en ( g . i n t e r e s t )−1]==”OBPlane .010 ” :i f

g . i n t e r e s t [ l en ( g . i n t e r e s t ) −2]!=”OBPlane .010 ” :”””Then the pad has a t t r a c t e d i t s

a t t en t i on ”””s e l f . innerpad ( )

e l i fg . i n t e r e s t [ l en ( g . i n t e r e s t )−1]==”OBPlane .014 ” :i f

g . i n t e r e s t [ l en ( g . i n t e r e s t ) −2]!=”OBPlane .014 ” :”””Then the phonebox has a t t r a c t e d

i t s a t t en t i on ”””s e l f . innerphone ( )

e l i fg . i n t e r e s t [ l en ( g . i n t e r e s t )−1]==”OBYellowBall” :i f

g . i n t e r e s t [ l en ( g . i n t e r e s t ) −2]!=”OBSphere” :”””Then the ye l l ow b a l l has

a t t r a c t e d i t s a t t en t i on ”””s e l f . i n n e r b a l l ( )

i f g . reached==2:print ” reached2 ” , g . reachedg . avc . r e turnF loor ( ”OBSphere” )g . onwal l=Fal se

g . a t t en t i on=Falseg . reached=0print ” r e s e t i n g the x f r sht ime ”g . x r f r sh t ime=Blender . sys . time ( )g . s a t t en t i on =999999999.99

def innerpad ( s e l f ) :”””what to do i f the a t t en t i on o f pad i s

ra i s ed ”””print ” s a t t en t i on ” , g . s a t t en t i onprint ” time ” , Blender . sys . time ( )print ” t o t a l ” , ( g . s a t t en t i on+g . bored+g . wanderx )i f g . reached==0:

g . avc . moveToObject ( ”OBSphere” , ”OBEmptyPad” )e l i f g . reached==1:

g . avc . moveToObject ( ”OBSphere” , ”OBEmptyPad2” )print ” time ” , Blender . sys . time ( )print ” t o t a l

” , ( g . s a t t en t i on+g . bored+g . wanderx )i f

Blender . sys . time ( )>(g . s a t t e n t i on+g . bored+g . wanderx ) :g . avc . moveToObject ( ”OBSphere” , ”OBEmptyPad” )g . i n t e r e s t . append ( ”OBPlane .010 ” )g . reached=2

def innerphone ( s e l f ) :”””what to do i f the a t t en t i on o f the phonebox

i s ra i s ed ”””print ” innerphone ””””what to do i f the a t t en t i on o f pad i s

ra i s ed ”””print ” s a t t en t i on ” , g . s a t t en t i onprint ” time ” , Blender . sys . time ( )print ” t o t a l ” , ( g . s a t t en t i on+g . bored+g . wanderx )i f g . reached==0:

g . avc . moveToObject ( ”OBSphere” , ”OBEmptyPhone” )e l i f g . reached==1:

g . avc . moveToObject ( ”OBSphere” , ”OBEmptyPhone2” )print ” time ” , Blender . sys . time ( )print ” t o t a l

” , ( g . s a t t en t i on+g . bored+g . wanderx )i f

Blender . sys . time ( )>(g . s a t t e n t i on+g . bored+g . wanderx ) :g . avc . moveToObject ( ”OBSphere” , ”OBEmptyPhone” )g . i n t e r e s t . append ( ”OBPlane .014 ” )


g . reached=2

def i n n e r b a l l ( s e l f ) :”””what to do i f the a t t en t i on o f the b a l l i s

ra i s ed ”””print ” i n n e r b a l l ”””” s e t the g . s a t t e n t i on time when on the o b j e c t

or i n t e r a c t i n g with ”””g . s a t t en t i on=Blender . sys . time ( )

”””””””””” o ld va func t ion s t u f f ! ”””

””””””””””””””””””””””””””””””””””””””””””

def r t r nD i r e c t i on ( s e l f , obj ) :””” g l o b a l cwa l l ”””idx = g . cwa l l . va lue s ( ) . index ( obj )return g . cwa l l . keys ( ) [ idx ]

def move( s e l f , obj ) :””” g l o b a l crds ”””d i r = r t r nD i r e c t i on ( obj )return g . c rds . get ( d i r )

def awaywall ( s e l f , c l o s e s t , rndx , rndy ) :i f rndx !=0:

rndx=rndx/2i f rndy !=0:

rndy=rndy/2g . onwal l=Fal sei f c l o s e s t==”OBPlaneUwall” :

i f rnd . rand int (0 , 1 )==0:return ( rndx ,−rndy , 0 )

else :return (−rndx ,−rndy , 0 )

i f c l o s e s t==”OBPlaneLwall” :i f rnd . rand int (0 , 1 )==0:

return ( rndx , rndy , 0 )else :

return ( rndx ,−rndy , 0 )i f c l o s e s t==”OBPlaneDwall” :

i f rnd . rand int (0 , 1 )==0:return ( rndx , rndy , 0 )

else :return (−rndx , rndy , 0 )

i f c l o s e s t==”OBPlaneRwall” :i f rnd . rand int (0 , 1 )==0:

return (−rndx , rndy , 0 )else :

return (−rndx ,−rndy , 0 )else :

””” Just s top ”””return ( 0 , 0 , 0 )

def roam( s e l f ) :print ”roam”i f not g . onwal l :

g . lstmove =s e l f . r t r nD i r e c t i on ( s e l f . f i n dwa l l ( ) )

g . f stTurn=””””” i f the c o l l i s i o n i s with the wa l l ”””i f g . avs . mtouch ( g . cwa l l [ g . lstmove ] ) :

g . onwal l=True””” look l e f t and r i g h t to a s c e r t a in i f in

corner ”””i f g . fstTurn==” r i gh t ” :

””” lookup d i r e c t i on from r i g h td i c t ionary , move accord ing l y ”””

g . lstmove=g . r i g h t s e q [ g . lstmove ]e l i f g . fstTurn==” l e f t ” :

””” lookup d i r e c t i on from l e f td i c t ionary , move accord ing l y ”””

g . lstmove=g . l e f t s e q [ g . lstmove ]e l i f g . fstTurn==”” :

””” generate a random number 0or1 , i f 0then turn r i gh t , i f 1

turn l e f t ”””randomno = rnd . rand int (0 , 1 )i f randomno==0:

””” turn r i g h t ”””g . fstTurn=” r i gh t ”g . lstmove=” r i gh t ”g . onwal l=True

e l i f randomno==1:””” turn l e f t ”””g . fstTurn=” l e f t ”g . lstmove=” l e f t ”g . onwal l=True


else :”””move”””g . ava . mlv ( g . c rds [ g . lstmove ] )

def c u r r e n t I n t e r e s t ( s e l f ) :global currObjl s t =[ ]l s t . append ( currObj . getDistanceTo ( g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ ”OBYellowBall” ] ) )l s t . append ( currObj . getDistanceTo ( g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ ”OBPlane .010 ” ] ) )l s t . append ( currObj . getDistanceTo ( g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ ”OBPlane .014 ” ] ) )””” i f t he re ’ s an o b j e c t w i th in c u r i s u i t y range ”””i f min( l s t )<g . c u r i o s i t y :

i f( l s t . index (min ( l s t ) )==0)&(g . i n t e r e s t [ l en ( g . i n t e r e s t ) −1]!=”OBYellowBall” ) :”””The c l o s e s t i s the ye l l ow b a l l ”””g . a t t en t i on=Trueg . i n t e r e s t . append ( ”OBYellowBall” )

e l i f ( l s t . index (min ( l s t ) )==1)&(g . i n t e r e s t [ l en ( g . i n t e r e s t ) −1]!=”OBPlane .010 ” ) :”””Yhe c l o s e s t i s the co lour pad”””g . a t t en t i on=Trueg . i n t e r e s t . append ( ”OBPlane .010 ” )

e l i f ( l s t . index (min ( l s t ) )==2)&(g . i n t e r e s t [ l en ( g . i n t e r e s t ) −1]!=”OBPlane .014 ” ) :”””The c l o s e s t i s the phonebox”””g . a t t en t i on=Trueg . i n t e r e s t . append ( ”OBPlane .014 ” )

else :””” nothing i s w i th in c u r i o s i t y range or not

j u s t been exp lored ”””g . a t t en t i on=False

def returnVector ( s e l f , e v l ) :i f ev l < −1.0:

return−1.0e l i f evl >1.0 :

return 1 .0else :

return ev l

def moveToObject ( s e l f , obj1 , obj2 ) :l s t =[ ]l s t . append ( g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ obj1 ] )l s t . append ( g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ obj2 ] )g . ava . mlv ( [ 0 , 0 , 0 ] )

c1=l s t [ 0 ] . g e tPo s i t i on ( )c2=l s t [ 1 ] . g e tPo s i t i on ( )vec to r =[ s e l f . r e turnVector ( c2 [0]− c1 [ 0 ] ) , s e l f . r e turnVector ( c2 [1]− c1 [ 1 ] ) , s e l f . r e turnVector ( c2 [2]− c1 [ 2 ] ) ]print ” vec to r ” , vec to ri f vec to r [0 ] <0 .3 and vec to r [1 ] <0 .3 and

vec to r [ 2 ] <0 . 3 :””” j u s t s e t the l i n e a r movement to zero and

s e t thepo s i t i o n s the same”””print ”im c l o s e enough”g . ava . mlv ( [ 0 , 0 , 0 ] )g . ava . mforce ( [ 0 , 0 , 0 ] )l s t [ 0 ] . s e tPo s i t i o n ( c2 )””” s e t the g . s a t t e n t i on time when on the

o b j e c t or i n t e r a c t i n g with ”””i f g . s a t t en t i on ==999999999.99:

g . s a t t en t i on=Blender . sys . time ( )g . reached=1

else :i f vec to r [0 ] >1 :

vec to r [0 ]= vec to r [ 0 ] / 1 0i f vec to r [1 ] >1 :

vec to r [1 ]= vec to r [ 1 ] / 1 0i f vec to r [2 ] >1 :

vec to r [2 ]= vec to r [ 2 ] / 1 0print ”new vecto r : ” , vec to rg . ava . mlv ( vec to r )

def r e turnF loor ( s e l f , obj1 ) :l s t = g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ obj1 ]tmp1 = l s t . g e tPo s i t i on ( )l s t . s e tPo s i t i o n ( ( tmp1 [ 0 ] , tmp1 [ 1 ] , 1 . 5 ) )

File: vafunction.py

import Blenderimport GameLogic as gimport random as rfrom vaactuator import actuatfrom vasensor import s ense#from vava r i a b l e import gamevarimport time as t


class cmnfunc :currObj=””c=g . ge tCur r entCont ro l l e r ( )ob j s en s e=””ob jac t=””ob j e c t=””#ob j e c t game v a r i a b l e s#i s the o b j e c t on the wa l lonwal l=Fal se#f i r s t wa l l to be h i t , and l a s t movement madef i r s t w a l l=−1l a s t w a l l=””f i r s t move=−1last move=−1#3darray o f next movesnext move = [ [ 0 , [ 1 , 2 , 3 , 0 ] , 0 , [ 3 , 0 , 1 , 2 ] ] , [ [ 3 , 0 , 1 , 2 ] , 0 , [ 1 , 2 , 3 , 0 ] , 0 ] , [ 0 , [ 3 , 0 , 1 , 2 ] , 0 , [ 1 , 2 , 3 , 0 ] ] , [ [ 1 , 2 , 3 , 0 ] , 0 , [ 3 , 0 , 1 , 2 ] , 0 ] ]wa l l ob j =[”OBPlaneUwall” , ”OBPlaneLwall” , ”OBPlaneDwall” , ”OBPlaneRwall” ]#random in t e g e r s f o r p layer , f r e q u en t l y updatedrndx=0rndy=0rnd int=0#random in t e g e r s f o r v i t u a l , f r e q u en t l y updatedrndxv=0rndyv=0c l o s e s t w a l l=0#cords array [ up , l e f t , down , r i g h t ]move cords =[ (0 ,3 ,0 ) ,( −3 ,0 ,0) ,(0 , −3 ,0) , ( 3 , 0 , 0 ) ]#f l a g f o r moving away from wa l lwall move=False#when s t a r t e d moving aways ta r t ed wa l l move=t . time ( )+1#storeed movestored move =(0 ,0 ,0)

def i n i t ( s e l f , objnm) :””” s e t the o b j e c t f o r t h i s o b j e c t name”””s e l f . currObj=g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ objnm ]s e l f . ob j s en s e = sense ( objnm)s e l f . ob jac t = actuat ( objnm)s e l f . ob j e c t=objnm

def f i n dwa l l ( s e l f ) :l s t =[ ]l s t . append ( s e l f . currObj . getDistanceTo ( g . getCurrentScene ( ) . g e tOb j ec tL i s t ( ) [ ”OBPlaneUwall” ] ) )l s t . append ( s e l f . currObj . getDistanceTo ( g . getCurrentScene ( ) . g e tOb j ec tL i s t ( ) [ ”OBPlaneLwall” ] ) )

l s t . append ( s e l f . currObj . getDistanceTo ( g . getCurrentScene ( ) . g e tOb j ec tL i s t ( ) [ ”OBPlaneDwall” ] ) )l s t . append ( s e l f . currObj . getDistanceTo ( g . getCurrentScene ( ) . g e tOb j ec tL i s t ( ) [ ”OBPlaneRwall” ] ) )return l s t . index (min ( l s t ) )

def wal lwalk ( s e l f ) :#i f on the wa l li f s e l f . onwal l :

#i f the f i r s t wa l l hasn ’ t been h i t ye ti f s e l f . f i r s t w a l l ==−1:

#i f the l a s t move was up or down thenrandom l e f t or r i g h t

i f( s e l f . las t move==0) | ( s e l f . las t move==2) :randomno=r . rand int (0 , 1 )s e l f . f i r s t w a l l=s e l f . las t movei f randomno==0:

#turn r i g h ts e l f . f i r s t move=3s e l f . las t move=3g . l og . l og ( ”PLAYER: I ’ ve h i t the

wa l l and I ’m turn ing RIGHT” )else :

#turn l e f ts e l f . f i r s t move=1s e l f . las t move=1g . l og . l og ( ”PLAYER: I ’ ve h i t the

wa l l and I ’m turn ing LEFT” )s e l f . l a s t w a l l=s e l f . ob j s en s e . getTouchObj ( ” wa l l ” )

else :#the l a s t move must be l e f t or r i g h t

and so random down or uprandomno=r . rand int (0 , 1 )s e l f . f i r s t w a l l=s e l f . las t movei f randomno==0:

#turn ups e l f . f i r s t move=0s e l f . las t move=0g . l og . l og ( ”PLAYER: I ’ ve h i t the

wa l l and I ’m turn ing UP” )else :

#turn downs e l f . f i r s t move=2s e l f . las t move=2g . l og . l og ( ”PLAYER: I ’ ve h i t the

wa l l and I ’m turn ing DOWN”)


s e l f . l a s t w a l l=s e l f . ob j s en s e . getTouchObj ( ” wa l l ” )else :

#i f the h i t wa l l i s d i f f e r e n t to thecurrent wa l l

i fs e l f . ob j s en s e . mtouch ( s e l f . wa l l ob j [ s e l f . las t move ] , ” wa l l ” ) :#s e l f . l a s t w a l l=s e l f . ob j s ense . getTouchObj (” wa l l ”)tmp1=s e l f . next move [ s e l f . f i r s t w a l l ]tmp2=tmp1 [ s e l f . f i r s t move ]s e l f . las t move=tmp2 [ s e l f . last move ]g . l og . l og ( ”PLAYER: I ’m turn ing to

f o l l ow the wa l l ” )else :

#check i f on the wa l ls e l f . onwal l=s e l f . ob j s en s e . mtouch ( s e l f . wa l l ob j [ s e l f . las t move ] , ” wa l l ” )i f s e l f . f i r s t w a l l ==−1:

#f ind the neares t wa l lnea r e s t=s e l f . f i n dwa l l ( )#se t the las t move to d i r e c t i on o f

neares t wa l ls e l f . las t move=nea r e s tg . l og . l og ( ”PLAYER: I ’m moving towards

the nea r e s t wa l l ” )#move toward the neares t wa l ls e l f . ob jac t . mlv ( s e l f . move cords [ s e l f . las t move ] )

def wander ( s e l f ) :g . l og . l og ( ”PLAYER: I ’m o f f Wandering” )s e l f . ob jac t . mlv ( s e l f . awaywall ( s e l f . c l o s e s t w a l l ) )

def v i r tua lwander ( s e l f ) :#j ii f s e l f . wall move :

i f t . time ( )> s e l f . s ta r t ed wa l l move :cmnfunc . wall move=False

else :#keep moving away from wa l ls e l f . ob jac t . mlv ( s e l f . stored move )

else :#re f r e s h the s t a r t e d wa l l move timecmnfunc . s ta r t ed wa l l move=t . time ( )+3#produce the next s e t o f cordscmnfunc . stored move=s e l f . awayplayer ( ”OBYellowBall” , ”OBSphere” )s e l f . ob jac t . mlv ( s e l f . stored move )

def r e f r e s h ( s e l f ) :cmnfunc . rndx=r . rand int ( g . var . speedlow , g . var . speedup )cmnfunc . rndy=r . rand int ( g . var . speedlow , g . var . speedup )cmnfunc . rnd int=r . rand int (0 , 1 )cmnfunc . c l o s e s t w a l l=s e l f . f i n dwa l l ( )cmnfunc . rndxv=r . rand int ( g . var . speedlow , g . var . speedup )cmnfunc . rndyv=r . rand int ( g . var . speedlow , g . var . speedup )

def awayplayer ( s e l f , obj1 , obj2 ) :l s t =[ ]l s t . append ( g . getCurrentScene ( ) . g e tOb j ec tL i s t ( ) [ obj1 ] )l s t . append ( g . getCurrentScene ( ) . g e tOb j ec tL i s t ( ) [ obj2 ] )#get the po s i t i o n s o f the o b j e c t sc1=l s t [ 0 ] . g e tPo s i t i on ( )c2=l s t [ 1 ] . g e tPo s i t i on ( )#get the d i f f e r e n c e vec tor o f c2−c1v=[( c2 [0]− c1 [ 0 ] ) , ( c2 [1]− c1 [ 1 ] ) , ( c2 [2]− c1 [ 2 ] ) ]i f v [0] >0 and v [1 ] >0 :

#obj2 must be up and r i g h t o f the ob j1i f s e l f . c r a shwa l l ( 1 ) ==(0 ,0 ,0) :

#i f not going in to wa l lreturn (− s e l f . rndxv ,− s e l f . rndyv , 0 )

else :#w i l l need to move away from wa l l#s e t a f l a g to say moving away from wa l lcmnfunc . wall move=Truereturn s e l f . c r a shwa l l ( 1 )

e l i f v [0] >0 and v [1 ] <0 :#obj2 must be down and r i g h t o f the ob j1i f s e l f . c r a shwa l l ( 1 ) ==(0 ,0 ,0) :

#i f not going in to wa l lreturn (− s e l f . rndxv , s e l f . rndyv , 0 )

else :#w i l l need to move away from wa l l#s e t a f l a g to say moving away from wa l lcmnfunc . wall move=Truereturn s e l f . c r a shwa l l ( 1 )

e l i f v [0] <0 and v [1 ] >0 :#obj2 must be up and l e f t o f the ob j1i f s e l f . c r a shwa l l ( 0 ) ==(0 ,0 ,0) :

#i f not going in to wa l lreturn ( s e l f . rndxv ,− s e l f . rndyv , 0 )

else :#w i l l need to move away from wa l l#s e t a f l a g to say moving away from wa l l


cmnfunc . wall move=Truereturn s e l f . c r a shwa l l ( 0 )

e l i f v [0] <0 and v [1 ] <0 :#obj2 must be down and l e f t o f the ob j1i f s e l f . c r a shwa l l ( 0 ) ==(0 ,0 ,0) :

#i f not going in to wa l lreturn ( s e l f . rndxv , s e l f . rndyv , 0 )

else :#w i l l need to move away from wa l l#s e t a f l a g to say moving away from wa l lcmnfunc . wall move=Truereturn s e l f . c r a shwa l l ( 0 )

def c ra shwa l l ( s e l f , inp ) :#change . .tmp=s e l f . ob j s en s e . ifTouchObj ( ” Mater ia l . 003 ” )print ” c ra shwa l l ” , tmpi f tmp==”OBPlaneUwall” :

i f inp==0:return ( s e l f . rndx ,− s e l f . rndy , 0 )

else :return (− s e l f . rndx ,− s e l f . rndy , 0 )

e l i f tmp==”OBPlaneLwall” :i f inp==0:

return ( s e l f . rndx , s e l f . rndy , 0 )else :

return ( s e l f . rndx ,− s e l f . rndy , 0 )e l i f tmp==”OBPlaneDwall” :

i f s e l f . rnd int==0:return ( s e l f . rndx , s e l f . rndy , 0 )

else :return (− s e l f . rndx , s e l f . rndy , 0 )

e l i f tmp==”OBPlaneRwall” :i f s e l f . rnd int==0:

return (− s e l f . rndx , s e l f . rndy , 0 )else :

return (− s e l f . rndx ,− s e l f . rndy , 0 )else :

return ( 0 , 0 , 0 )

def awaywall ( s e l f , c l o s e w a l l ) :i f c l o s e w a l l ==0:

i f s e l f . rnd int==0:return ( s e l f . rndx ,− s e l f . rndy , 0 )

else :

return (− s e l f . rndx ,− s e l f . rndy , 0 )i f c l o s e w a l l ==1:

i f s e l f . rnd int==0:return ( s e l f . rndx , s e l f . rndy , 0 )

else :return ( s e l f . rndx ,− s e l f . rndy , 0 )

i f c l o s e w a l l ==2:i f s e l f . rnd int==0:

return ( s e l f . rndx , s e l f . rndy , 0 )else :

return (− s e l f . rndx , s e l f . rndy , 0 )i f c l o s e w a l l ==3:

i f s e l f . rnd int==0:return (− s e l f . rndx , s e l f . rndy , 0 )

else :return (− s e l f . rndx ,− s e l f . rndy , 0 )

else :””” Just s top ”””print ” stopping ”return ( 0 , 0 , 0 )

def moveToZObject ( s e l f , obj1 , obj2 ) :l s t =[ ]l s t . append ( g . getCurrentScene ( ) . g e tOb j ec tL i s t ( ) [ obj1 ] )l s t . append ( g . getCurrentScene ( ) . g e tOb j ec tL i s t ( ) [ obj2 ] )#stops the p layer from movings e l f . ob jac t . mlv ( [ 0 , 0 , 0 ] )c1=l s t [ 0 ] . g e tPo s i t i on ( )c2=l s t [ 1 ] . g e tPo s i t i on ( )tmpz=c2 [2]− c1 [ 2 ]i f tmpz <0.2 :

l s t [ 0 ] . s e tPo s i t i o n ( c1 [ 0 ] , c1 [ 1 ] , c2 [ 2 ] )else :

i f tmpz>5:tmpz=tmpz/10

e l i f tmpz>2 and tmpz<5:tmpz=tmpz/2

s e l f . ob jac t . mlv ( ( 0 , 0 , tmpz ) )

def moveToLocation ( s e l f , obj1 , cords ) :l s t =[ ]l s t . append ( g . getCurrentScene ( ) . g e tOb j ec tL i s t ( ) [ obj1 ] )l s t . append ( cords )#stops the p layer from movings e l f . ob jac t . mlv ( [ 0 , 0 , 0 ] )


c1=l s t [ 0 ] . g e tPo s i t i on ( )c2=l s t [ 1 ]vec to r =[ ]vec to r . append ( c2 [0]− c1 [ 0 ] )vec to r . append ( c2 [1]− c1 [ 1 ] )#pr in t ”VECTOR ” , vec tori f (−0.3< vec to r [ 0 ] <0 .3 ) and (−0.3< vec to r [ 1 ] <0 .3 ) :

””” j u s t s e t the l i n e a r movement to zero ands e t the

po s i t i o n s the same”””#pr in t ”IM Close enough”s e l f . ob jac t . mlv ( [ 0 , 0 , 0 ] )s e l f . ob jac t . ml fo rce ( [ 0 , 0 , 0 ] )l s t [ 0 ] . s e tPo s i t i o n ( ( c2 [ 0 ] , c2 [ 1 ] , c1 [ 2 ] ) )

else :i f vec to r [0] >5 or vec to r [0] <−5:

vec to r [0 ]= vec to r [ 0 ] / 7e l i f ( vec to r [0] >2 and vec to r [0 ] <5) or

( vec to r [0]<−2 and vec to r [0]>−5) :vec to r [0 ]= vec to r [ 0 ] / 2

i f vec to r [1] >5 or vec to r [1] <−5:vec to r [1 ]= vec to r [ 1 ] / 7

e l i f ( vec to r [1] >2 and vec to r [1 ] <5) or( vec to r [1]<−2 and vec to r [1]>−5) :vec to r [1 ]= vec to r [ 1 ] / 2

s e l f . ob jac t . mlv ( ( vec to r [ 0 ] , vec to r [ 1 ] , 0 ) )

def moveToObject ( s e l f , obj1 , obj2 ) :l s t =[ ]l s t . append ( g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ obj1 ] )l s t . append ( g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ obj2 ] )#stops the p layer from movings e l f . ob jac t . mlv ( [ 0 , 0 , 0 ] )c1=l s t [ 0 ] . g e tPo s i t i on ( )c2=l s t [ 1 ] . g e tPo s i t i on ( )vec to r =[ ]vec to r . append ( c2 [0]− c1 [ 0 ] )vec to r . append ( c2 [1]− c1 [ 1 ] )#pr in t ”VECTOR ” , vec tori f (−0.3< vec to r [ 0 ] <0 .3 ) and (−0.3< vec to r [ 1 ] <0 .3 ) :

””” j u s t s e t the l i n e a r movement to zero ands e t the

po s i t i o n s the same”””#pr in t ”IM Close enough”s e l f . ob jac t . mlv ( [ 0 , 0 , 0 ] )

s e l f . ob jac t . ml fo rce ( [ 0 , 0 , 0 ] )l s t [ 0 ] . s e tPo s i t i o n ( ( c2 [ 0 ] , c2 [ 1 ] , c1 [ 2 ] ) )

else :i f vec to r [0] >5 or vec to r [0] <−5:

vec to r [0 ]= vec to r [ 0 ] / 7e l i f ( vec to r [0] >2 and vec to r [0 ] <5) or

( vec to r [0]<−2 and vec to r [0]>−5) :vec to r [0 ]= vec to r [ 0 ] / 2

i f vec to r [1] >5 or vec to r [1] <−5:vec to r [1 ]= vec to r [ 1 ] / 7

e l i f ( vec to r [1] >2 and vec to r [1 ] <5) or( vec to r [1]<−2 and vec to r [1]>−5) :vec to r [1 ]= vec to r [ 1 ] / 2

s e l f . ob jac t . mlv ( ( vec to r [ 0 ] , vec to r [ 1 ] , 0 ) )

def sameLoc ( s e l f , obj1 , obj2 ) :l s t =[ ]l s t . append ( g . getCurrentScene ( ) . g e tOb j ec tL i s t ( ) [ obj1 ] )l s t . append ( g . getCurrentScene ( ) . g e tOb j ec tL i s t ( ) [ obj2 ] )#get the po s i t i o n s o f the o b j e c t sc1=l s t [ 0 ] . g e tPo s i t i on ( )c2=l s t [ 1 ] . g e tPo s i t i on ( )i f ( c1 [0]− c2 [ 0 ] ) <0.02 and ( c1 [0]− c2 [ 0 ] ) >−0.02

and ( c1 [1]− c2 [ 1 ] ) <0.02 and( c1 [1]− c2 [ 1 ] ) >−0.02:return True

else :return False

def sameLocation ( s e l f , cord1 , cord2 ) :i f ( cord1 [0]− cord2 [ 0 ] ) <0.02 and

( cord1 [0]− cord2 [ 0 ] ) >−0.02 and( cord1 [1]− cord2 [ 1 ] ) <0.02 and( cord1 [1]− cord2 [ 1 ] ) >−0.02:return True

else :return False

def r e turnLocat ion ( s e l f , obj1 ) :tmp=g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ obj1 ]return tmp . g e tPo s i t i on ( )

def s e tLoca t i on ( s e l f , obj1 , l o c ) :tmp=g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ obj1 ]tmp . s e tPo s i t i o n ( l o c )


def s izeChanger ( s e l f , obj1 ,num) :obje=g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ obj1 ]#HAVE ADDED SCALING TO Z THIS MAY BE A

MSTAKE!!∗∗∗∗∗∗∗∗∗∗∗∗∗∗∗∗∗∗∗∗obje . s c a l i n g =[ obje . s c a l i n g [0 ]+num, obje . s c a l i n g [1 ]+num, obje . s c a l i n g [2 ]+num]

def g e tS i z e ( s e l f , obj1 ) :obje=g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ obj1 ]return obje . s c a l i n g

def s e t S i z e ( s e l f , obj1 , sz ) :obje=g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ obj1 ]obje . s c a l i n g=sz

def doneBox ( s e l f ) :l s t =[ ]l s t . append ( g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ ”OBSphere” ] )l s t . append ( g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ ”OBEmptyPhone2” ] )c2=l s t [ 1 ] . g e tPo s i t i on ( )l s t [ 0 ] . s e tPo s i t i o n ( ( c2 [ 0 ] , c2 [ 1 ] , c2 [ 2 ]+1 . 7 ) )

def r e turnF loor ( s e l f , obj1 ) :#l s t =[]#l s t . append ( g . getCurrentScene () . g e tOb j e c tL i s t ( ) [ ob j1 ] )#c l e a r the current t r ac k ing o f any o b j e c t#l s t [ 0 ] . c learTrack ()l s t =[ ]l s t . append ( g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ obj1 ] )l s t . append ( g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ ”OBPlane .013 ” ] )c2=l s t [ 1 ] . g e tPo s i t i on ( )l s t [ 0 ] . s e tPo s i t i o n ( ( c2 [ 0 ] , c2 [ 1 ] , c2 [2 ]+2) )

def respawn ( s e l f ) :s = Blender . Object .New( ”Mesh” , ”Sphere ” )#nm = Blender .Mesh .New()#pr in t s , type ( s )

mat = Blender . Mater ia l .New( ’newMat ’ ) #crea te a new Mater ia l c a l l e d ’newMat ’

#pr in t mat . rgbCol # pr in ti t s rgb co l o r t r i p l e t sequence

mat . rgbCol = [ 0 . 0 , 0 . 5 , 0 . 2 ] # changei t s co l o r

#mat . setAlpha (0 . 2 ) #mat . a lpha = 0.2 −− almost t ransparent

#mat . emit = 0.7 #equ i v a l en t to mat . setEmit (0 . 8 )

#mat .mode |= Blender . Mater ia l .Modes .ZTRANSP #turn on Z−Buf fer transparency

mat . setName ( ’ RedBansheeSkin ’ ) # changei t s name

#mat . setAdd (0 . 8 ) # make i tglow

#mat . setMode ( ’ Halo ’)

s . s e tMat e r i a l s ( [ mat ] )#pr in t s . g e tMate r i a l s ( )s . c o l b i t s =(1<<0)#rep lace the current mesh with the new ones e l f . ob jac t . mreplace ( s . getName ( ) )Blender . Redraw ( )

def endGame( s e l f ) :s e l f . ob jac t .mend ( )

File: vagame.py

import Blenderimport GameLogic as gfrom va funct ion import cmnfuncimport time as t

class newvagame :#setup the p l aye r s func t i on c l a s sp laye r func=cmnfunc ( ”OBSphere” )

def gameplay ( s e l f ) :”””where the appropr ia te running o f the game

take s p lace ”””i f g . var . c u r r s t a t e ==0:

#player wa l l wa lk ings e l f . p l aye r func . wal lwalk ( )

e l i f g . var . c u r r s t a t e ==1:#re s e t the p layer func t i on v a r i a b l e s f o r

when done wanderings e l f . r e setPlayerFunc ( )


#player i s wanderings e l f . p l aye r func . wander ( )#check to see i f a l l the o b j e c t s have been

v i s i t e d and so want to end games e l f . endgame ( )

e l i f g . var . c u r r s t a t e ==2:#player i s hover ings e l f . hoverPad ( )

e l i f g . var . c u r r s t a t e ==3:#player i s changing s i z es e l f . colourPad ( )

e l i f g . var . c u r r s t a t e ==4:#player i s chas ings e l f . cha s eV i r tua l ( )

e l i f g . var . c u r r s t a t e ==5:#player i s in phoneboxs e l f . phonebox ( )

else :#something has gone ary and the p layer

shou ld resume walk ing#pr in t ” the current s t a t e i s incorrec t ,

r e s e t i n g to zero ”g . var . c u r r s t a t e=0

s e l f . r e s e t ( )s e l f . s ta t echange r ( )#i f the current s t a t e i sn ’ t 2 ,3 ,4 ,5 ,6i f ( g . var . c u r r s t a t e==0) or

( g . var . c u r r s t a t e==1) :s e l f . c u r r e n t I n t e r e s t ( )

def endgame ( s e l f ) :#phone , colour , hover , v i r t u a li f

g . var . i n t e r e s t s . count ( ”phonebox” )>=g . var . r epea t s [ 0 ] :i f

g . var . i n t e r e s t s . count ( ” colourpad ” )>=g . var . r epea t s [ 1 ] :i f

g . var . i n t e r e s t s . count ( ”hover ” )>=g . var . r epea t s [ 2 ] :i f

g . var . i n t e r e s t s . count ( ” v i r t u a l ” )>=g . var . r epea t s [ 3 ] :#can end the game caus done

eve ry th ing as many times aswant to

g . l og . l og ( ”END GAME: I ’ ve doneeveryth ing in t h i s

environment that I wantedto ” )

s e l f . p l aye r func . endGame ( )

def s ta t echanger ( s e l f ) :#pr in t ” current s t a t e : ” , g . var . c u r r s t a t e#pr in t ” b l ender system time : ” , time . time ()#pr in t ” l o ca l t ime :” , time . l o c a l t ime ()#pr in t ” l a s t wandering time : ” , g . var . las twander”””This method w i l l be change the s t a t e i f

matching c r i t e r i a arefound to be t rue . ”””i f g . var . c u r r s t a t e ==0:

#can e i t h e r wander or be d i s t r a c t e di f g . var . d i s t r a c t e d :

print ”im d i s t r a c t e d ”else :

#go fo r a wander i f the time i s r i g h ti f

t . time ( )>(g . var . lastwander+g . var . p l a y e r e xp l o r e+s e l f . p l aye r func . rndx ) :g . l og . l og ( ”PLAYER: I f e e l s a f e

enough to go f o r a wander” )#se t the l a s t wander time to the

current time p l s 2 secsg . var . lastwander=t . time ( )+2#se t the s t a t e to wander s t a t eg . var . c u r r s t a t e=1

i f g . var . c u r r s t a t e ==1:#can e i t h e r be d i s t r a c t e d or go back to wa l li f g . var . d i s t r a c t e d :

print ”im d i s t r a c t e d ”else :

#go back to wa l lwa l k ing i f exceededboredom

i ft . time ( )>(g . var . lastwander+g . var . p l aye r bored+s e l f . p l aye r func . rndx ) :g . l og . l og ( ”PLAYER: I want to go back

to the s a f e t y o f the wa l l ” )#se t the l a s t wander time to the

current time p l s 6 secsg . var . lastwander=t . time ( )+6#se t the s t a t e to wa l l walk s t a t eg . var . c u r r s t a t e=0

i f g . var . c u r r s t a t e ==2:#hover ing


#pr in t ” time : ” , t . time ()#pr in t ” time current i n t e r e s t :

” , g . var . t ime cu r r i n t e r e s t#pr in t ” time on i n t e r e s t :

” , g . var . t ime on in t e r e s t [” hover ” ]#pr in t ”random : ” , s e l f . p l ayer func . rndx#ge t a va lue o f time on i n t e r e s t +

d i f f e r e n c e o f ( p l ay e r s wish to exp l o r e#minus how qu i c k l y the p layer g e t s bored )tmpvar=g . var . t ime on i n t e r e s t [ ” hover ” ]+(g . var . p l aye r exp l o r e−g . var . p l aye r bored )#i s i t done with the hover pad?i f

t . time ( )>g . var . t im e c u r r i n t e r e s t+tmpvar+s e l f . p l aye r func . rndx :g . l og . l og ( ”PLAYER: POP! ! I got to high ” )#return to the f l o o rs e l f . p l aye r func . r e turnF loor ( ”OBSphere” )#se t the f l a g f o r be ing d i s t r a c t e d back

to False :g . var . d i s t r a c t e d=False#pr in t ”im RESETING the hover ing pad”#re s e t the p layer func t i ons be f o re

re turn ing to wa l l walks e l f . r e setPlayerFunc ( )#se t the l a s t wander time to the current

time p l s 6 secsg . var . lastwander=t . time ( )+6#se t the next a v a i l a b l e i n t e r e s t time of

the hover padg . var . t im e n e x t i n t e r e s t [ ” hover ”]= t . time ( )+g . var . t im e p a s s i n t e r e s t [ 1 ]#se t the s t a t e to wa l l walk s t a t eg . var . c u r r s t a t e=0

i f g . var . c u r r s t a t e ==3:#s i z e changing#pr in t ” time : ” , t . time ()#pr in t ” time current i n t e r e s t :

” , g . var . t ime cu r r i n t e r e s t#pr in t ” time on i n t e r e s t :

” , g . var . t ime on in t e r e s t [” co lourpad ” ]#pr in t ”random : ” , s e l f . p l ayer func . rndx#ge t a va lue o f time on i n t e r e s t +

d i f f e r e n c e o f ( p l ay e r s wish to exp l o r e#minus how qu i c k l y the p layer g e t s bored )tmpvar=g . var . t ime on i n t e r e s t [ ” co lourpad ” ]+(g . var . p l aye r exp l o r e−g . var . p l aye r bored )#i s i t done with the co lour pad?

i ft . time ( )>g . var . t im e c u r r i n t e r e s t+tmpvar+s e l f . p l aye r func . rndx :g . l og . l og ( ”PLAYER: BANG! ! I got too big ” )#re s e t the s i z e o f the p layers e l f . p l aye r func . s e t S i z e ( ”OBSphere” , g . var . s i zeb4change )#re s e t the l o c a t i on o f the p layer be f o r e

r e s i z es e l f . p l aye r func . r e turnF loor ( ”OBSphere” )#se t the f l a g f o r be ing d i s t r a c t e d back

to False :g . var . d i s t r a c t e d=False#pr in t ”im RESETING the colourpad”#re s e t the p layer func t i on s be f o re

re turn ing to wa l l walks e l f . r e setPlayerFunc ( )#se t the l a s t wander time to the current

time p l s 6 secsg . var . lastwander=t . time ( )+6#se t the next a v a i l a b l e i n t e r e s t time of

the co lour changerg . var . t im e n e x t i n t e r e s t [ ” co lourpad ”]= t . time ( )+g . var . t im e p a s s i n t e r e s t [ 2 ]#se t the s t a t e to wa l l walk s t a t eg . var . c u r r s t a t e=0

i f g . var . c u r r s t a t e ==4:#v i r t u a l charac ter chas ing#ge t the d i s t ance to the v i r t u a ll s t =[ ]l s t . append ( s e l f . p l aye r func . currObj . getDistanceTo ( g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ ”OBYellowBall” ] ) )#get a va lue o f time on i n t e r e s t +

d i f f e r e n c e o f ( p l ay e r s wish to exp l o r e#minus how qu i c k l y the p l ayer g e t s bored )tmpvar=g . var . t ime on i n t e r e s t [ ” v i r t u a l ” ]+(g . var . p l aye r exp l o r e−g . var . p l aye r bored )#i s i t done with the v i r t u a l charac ter ? OR#i f the p layer i s <0.2 away then caught so

v i r t u a l w i l l go back to r o t a t i n gi f

( t . time ( )>g . var . t im e c u r r i n t e r e s t+tmpvar+s e l f . p l aye r func . rndx ) | ( l s t [ 0 ] <0 .2 ) :g . l og . l og ( ”PLAYER: I ’m done chas ing t h i s

b a l l ! ” )#se t the f l a g f o r be ing d i s t r a c t e d back

to False :g . var . d i s t r a c t e d=False#re s e t the p layer func t i on s be f o re

re turn ing to wa l l walks e l f . r e setPlayerFunc ( )


#se t the l a s t wander time to the currenttime p l s 6 secs

g . var . lastwander=t . time ( )+6#se t the next a v a i l a b l e i n t e r e s t time of

the v i r t u a lg . var . t im e n e x t i n t e r e s t [ ” v i r t u a l ”]= t . time ( )+g . var . t im e p a s s i n t e r e s t [ 0 ]#add a l i n e where the time spent on the

v i r t u a l charac ter i s reduced fo rnext time

#se t the s t a t e to wa l l walk s t a t eg . var . c u r r s t a t e=0

i f g . var . c u r r s t a t e ==5:#phoneboxing#ge t a va lue o f time on i n t e r e s t +

d i f f e r e n c e o f ( p l ay e r s wish to exp l o r e#minus how qu i c k l y the p layer g e t s bored )tmpvar=g . var . t ime on i n t e r e s t [ ”phonebox” ]+(g . var . p l aye r exp l o r e−g . var . p l aye r bored )#i s i t done with the phonebox?i f

t . time ( )>g . var . t im e c u r r i n t e r e s t+tmpvar+s e l f . p l aye r func . rndx :#return to ou t s i d e o f the phoneboxs e l f . p l aye r func . doneBox ( )g . l og . l og ( ”PLAYER: I ’m done with t h i s

Phonebox” )#se t the f l a g s f o r phonebox s t a g e s back

to False :g . var . phoneEmptyOne=Falseg . var . phoneEmptyTwo=Falseg . var . phoneEmptyThree=False#re s e t the p layer func t i ons be f o re

re turn ing to wa l l walks e l f . r e setPlayerFunc ( )#se t the l a s t wander time to the current

time p l s 6 secsg . var . lastwander=t . time ( )+6#se t the next a v a i l a b l e i n t e r e s t time of

the phoneboxg . var . t im e n e x t i n t e r e s t [ ”phonebox”]= t . time ( )+g . var . t im e p a s s i n t e r e s t [ 3 ]#se t the s t a t e to wa l l walk s t a t eg . var . c u r r s t a t e=0

def r e s e t ( s e l f ) :#i f the time has exceeded tha t o f the l a s t

r e f r e s h i n t e r v a l then r e f r e s h

i ft . time ( )>(g . var . l a s t r e f r e s h+s e l f . p l aye r func . rndx ) :s e l f . p l aye r func . r e f r e s h ( )#s e l f . co lour func . setMatColour (”Plane .013”)#Blender . Redraw ()#re s e t l a s t r e f r e s h timeg . var . l a s t r e f r e s h=t . time ( )

def c u r r e n t I n t e r e s t ( s e l f ) :#pr in t g . var . t ime n e x t i n t e r e s tl s t =[ ]l s t . append ( s e l f . p l aye r func . currObj . getDistanceTo ( g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ ”OBYellowBall” ] ) )l s t . append ( s e l f . p l aye r func . currObj . getDistanceTo ( g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ ”OBPlane .010 ” ] ) )l s t . append ( s e l f . p l aye r func . currObj . getDistanceTo ( g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ ”OBPlane .013 ” ] ) )l s t . append ( s e l f . p l aye r func . currObj . getDistanceTo ( g . getCurrentScene ( ) . g e tOb j e c tL i s t ( ) [ ”OBPlane .014 ” ] ) )””” i f t he re ’ s an o b j e c t w i th in c u r i s u i t y range ”””i f min( l s t )<g . var . p l a y e r c u r i o u s :

#The c l o s e s t i s the v i r t u a l b a l l and i thasn ’ t been in t e r a c t e d more than maxs p e c i f i e d

i f( l s t . index (min ( l s t ) )==0)&(g . var . i n t e r e s t s . count ( ” v i r t u a l ” )<g . var . r epea t s [ 3 ] ) :#i f a s u i t a b l e amount o f time has passed

s ince the l a s t i n t e r a c t i oni f

g . var . t im e n e x t i n t e r e s t [ ” v i r t u a l ”]< t . time ( ) :#add to the i n t e r e s t s l i s tg . var . i n t e r e s t s . append ( ” v i r t u a l ” )#se t the a t t en t i on f l a g to be

i n t e r e s t e d in the v i r t u a lcharac ter

g . var . c u r r s t a t e=4#The c l o s e s t i s the hover pad and i t hasn ’ t

been in t e r a c t e d more than max s p e c i f i e di f

( l s t . index (min ( l s t ) )==1)&(g . var . i n t e r e s t s . count ( ”hover ” )<g . var . r epea t s [ 2 ] ) :#i f a s u i t a b l e amount o f time has passed

s ince the l a s t i n t e r a c t i oni f

g . var . t im e n e x t i n t e r e s t [ ” hover ”]< t . time ( ) :#add to the i n t e r e s t s l i s tg . var . i n t e r e s t s . append ( ”hover ” )#se t the a t t en t i on f l a g to be

i n t e r e s t e d in the hover padg . var . c u r r s t a t e=2


#The c l o s e s t i s the co lour pad and i t hasn ’ tbeen in t e r a c t e d more than max s p e c i f i e d

i f( l s t . index (min ( l s t ) )==2)&(g . var . i n t e r e s t s . count ( ” colourpad ” )<g . var . r epea t s [ 1 ] ) :#i f a s u i t a b l e amount o f time has passed

s ince the l a s t i n t e r a c t i oni f

g . var . t im e n e x t i n t e r e s t [ ” co lourpad ”]< t . time ( ) :#add to the i n t e r e s t s l i s tg . var . i n t e r e s t s . append ( ” colourpad ” )#se t the a t t en t i on f l a g to be

i n t e r e s t e d in the co lour padg . var . c u r r s t a t e=3

#The c l o s e s t i s the phonebox and i t hasn ’ tbeen in t e r a c t e d more than max s p e c i f i e d

i f( l s t . index (min ( l s t ) )==3)&(g . var . i n t e r e s t s . count ( ”phonebox” )<g . var . r epea t s [ 0 ] ) :#i f a s u i t a b l e amount o f time has passed

s ince the l a s t i n t e r a c t i oni f

g . var . t im e n e x t i n t e r e s t [ ”phonebox”]< t . time ( ) :#add to the i n t e r e s t s l i s tg . var . i n t e r e s t s . append ( ”phonebox” )#se t the a t t en t i on f l a g to be

i n t e r e s t e d in the phoneboxg . var . c u r r s t a t e=5

else :””” nothing i s w i th in c u r i o s i t y range or not

j u s t been exp lored ”””

def resetPlayerFunc ( s e l f ) :s e l f . p l aye r func . onwal l=Fal ses e l f . p l aye r func . f i r s t move=−1s e l f . p l aye r func . f i r s t w a l l=−1s e l f . p l aye r func . last move=−1

def hoverPad ( s e l f ) :#has i t moved onto the hover pad?i f

s e l f . p l aye r func . sameLoc ( ”OBSphere” , ”OBPlane .010 ” ) | g . var . d i s t r a c t e d :#do hover ing bus ine s s#se t the f a c t t ha t the b a l l i s d i s t r a c t e dg . var . d i s t r a c t e d=True#s e l f . p l ayer func . hover ()s e l f . p l aye r func . moveToZObject ( ”OBSphere” , ”OBEmptyPad2” )

g . l og . l og ( ”PLAYER: I f e e l l i g h t , I th ink I ’mhover ing ! ” )

else :#move to the hover pads e l f . p l aye r func . moveToObject ( ”OBSphere” , ”OBPlane .010 ” )#whi l e moving keep s e t t i n g current time to

the t ime s ta r t ed on hove r pad#so tha t time spent on i n t e r e s t only s t a r t s

when reached i tg . var . t im e c u r r i n t e r e s t=t . time ( )g . l og . l og ( ”PLAYER: What ’ s t h i s . . . some s o r t

o f pad??” )

def colourPad ( s e l f ) :#has i t moved to the co lour pad?i f

s e l f . p l aye r func . sameLoc ( ”OBSphere” , ”OBPlane .013 ” ) | g . var . d i s t r a c t e d :#se t the f a c t t ha t the b a l l i s d i s t r a c t e dg . var . d i s t r a c t e d=True#do co lour pad bus ine s ss e l f . p l aye r func . s izeChanger ( ”OBSphere” , 0 . 0 1 )g . l og . l og ( ”PLAYER: I f e e l strange , I th ink

I ’m ge t t i n g b igge r ! ” )else :

#se t the current s i z e o f the p layerg . var . s i zeb4change=s e l f . p l aye r func . g e tS i z e ( ”OBSphere” )#move to the co lour pads e l f . p l aye r func . moveToObject ( ”OBSphere” , ”OBPlane .013 ” )#whi l e moving keep s e t t i n g current time to

the t ime s t a r t ed on co l ou r padg . var . t im e c u r r i n t e r e s t=t . time ( )g . l og . l og ( ”PLAYER: What ’ s t h i s . . . some s o r t

o f pad?” )

def phonebox ( s e l f ) :#i f moved to the f i r s t emptyi f

s e l f . p l aye r func . sameLoc ( ”OBSphere” , ”OBEmptyPhone” ) | g . var . phoneEmptyOne :g . var . phoneEmptyOne=True#move to the second emptyi f

s e l f . p l aye r func . sameLoc ( ”OBSphere” , ”OBEmptyPhone2” ) | g . var . phoneEmptyTwo :g . var . phoneEmptyTwo=True#move to the t h i r d empty


i fs e l f . p l aye r func . sameLoc ( ”OBSphere” , ”OBEmptyPhone3” ) | g . var . phoneEmptyThree :g . var . phoneEmptyThree=Trueg . l og . l og ( ”PLAYER: The l i g h t ’ s on in

t h i s phonebox” )#i f 2 seconds up then turn o f f l i g h t

and move to empty two by s e t t i n gphoneEmptyTwo f l a g to False

i f g . var . lampon<t . time ( ) :g . var . phoneEmptyThree=Falseg . var . phoneEmptyTwo=False

else :#move to the t h i r d emptys e l f . p l aye r func . moveToObject ( ”OBSphere” , ”OBEmptyPhone3” )#keep r e f r e s h i n g the temp time un t i l

move has reached Empty3g . var . lampon=t . time ( )+2g . l og . l og ( ”PLAYER: I ’m moving in to

the phonebox” )else :

#move to the second emptys e l f . p l aye r func . moveToObject ( ”OBSphere” , ”OBEmptyPhone2” )g . l og . l og ( ”PLAYER: I ’m going to out s id e

the phonebox” )else :

#move to the f i r s t emptys e l f . p l aye r func . moveToObject ( ”OBSphere” , ”OBEmptyPhone” )#whi l e moving keep s e t t i n g current time to

the t ime s tar t ed on phonebox#so tha t time spent on i n t e r e s t only s t a r t s

when reached i t#NOTE: added 1 second to a l l ow fo r some

movement timeg . var . t im e c u r r i n t e r e s t=t . time ( )+1g . l og . l og ( ”PLAYER: This l ooks l i k e a

phonebox , but I ’m going to approachs low ly ” )

def chaseV i r tua l ( s e l f ) :#se t current time to the

t ime s t a r t ed on co l ou r pad#g . var . t ime cu r r i n t e r e s t=t . time ()#move towards the v i r t u a l charac ter#as move towards , g e t the v i r t u a l charac ter to

move away

i f not g . var . d i s t r a c t e d :#se t current time to the

t ime s t a r t e d on v i r t u a lg . var . t im e c u r r i n t e r e s t=t . time ( )g . var . d i s t r a c t e d=True

else :#pr in t ”im doing the move v i r t u a l charac ter

th ing !”g . l og . l og ( ”PLAYER: I ’m chas ing the Vi r tua l ” )s e l f . p l aye r func . moveToObject ( ”OBSphere” , ”OBYellowBall” )

File: valight.py

import Blender
import GameLogic as g
from vafunction import cmnfunc
import time as t

class lightswitch:
    #Light on and off
    c = g.getCurrentController()
    #setup the lights function class
    lightfunc=cmnfunc("OBLamp.003")

    def switcher(self):
        if self.lightfunc.sameLoc("OBSphere","OBEmptyPhone3") or g.var.switchtime:
            g.log.log("LIGHT: Someone is in the phonebox, I'm on")
            g.var.switchtime=True
            light = self.c.getOwner()
            #turn light energy up to 2 to turn on and set the colour to white light
            light.energy=2.0
            light.colour=[1.0,1.0,1.0]
            #if the lights been on for the set amount of time then set flag to false
            if t.time()>g.var.onTime:
                g.var.switchtime=False
                g.log.log("LIGHT: They're no longer in phonebox, I'm turning off")
        else:


            g.var.switchtime=False
            light = self.c.getOwner()
            #turn the energy to zero to turn light off
            light.energy=0.0
            #keeep iterating the on time until the light is switched on
            g.var.onTime=t.time()+2

File: valog.py

import time as t
"""
VALog
Module containing defintions for writing to the logfile.
"""
class alogfile:
    newfile="tar"
    """run the newlogfile method and save the global filename for use"""
    def __init__(self, filename):
        self.newfile="".join(["C:/VAvatar/Logs/",filename,".log"])
        self.newLogFile(self.newfile)

    def newLogFile(self, filenm):
        lst=[]
        lst.append("########################################\n")
        lst.append("This is a Logfile for the VAvatar System\n")
        lst.append("".join(["Creation Time/Date: ",t.ctime(t.time())]))
        lst.append("\n\n")
        lst.append("########################################\n")
        lst.append("\n")
        print self.newfile
        f=open(self.newfile,"w")
        f.writelines(lst)
        f.close()

    def log(self, msg):
        f=open(self.newfile,"r")
        lst=f.readlines()
        f.close()
        tmp=lst[len(lst)-1]
        tmp2=lst[len(lst)-2]
        lst2=tmp.split('>')
        lst3=tmp2.split('>')
        #if it was able to do the split
        if (len(lst2)>1)&(len(lst3)>1):
            #if the new message is not the same as the old message
            if not (lst2[1]=="".join([msg,"\n"]))|(lst3[1]=="".join([msg,"\n"])):
                #append it to the list and write it to file
                lst.append("".join([t.ctime(t.time()),"> ",msg]))
                lst.append("\n")
                f=open(self.newfile,"w")
                f.writelines(lst)
                f.close()
        else:
            #append it to the list and write it to the file
            lst.append("".join([t.ctime(t.time()),"> ",msg]))
            lst.append("\n")
            f=open(self.newfile,"w")
            f.writelines(lst)
            f.close()
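
For reference, a minimal usage sketch of the alogfile class above; the log directory C:/VAvatar/Logs/ is assumed to exist, and the file name and messages here are illustrative only:

#illustrative usage of alogfile (assumes C:/VAvatar/Logs/ exists)
log = alogfile("example_run")          #creates C:/VAvatar/Logs/example_run.log with the header block
log.log("PLAYER: I'm off Wandering")   #appends a timestamped "<ctime>> message" entry
log.log("PLAYER: I'm off Wandering")   #log compares against the last two entries before appending again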

File: vasensor.py

import Blender
import GameLogic as g

class sense:
    c = g.getCurrentController()
    stouch=c.getSensor("touch")
    sprop=c.getSensor("property")
    salways=c.getSensor("always")
    snear=c.getSensor("near")
    currObj=""
    """when called want to set the object sensors are linked to"""
    def __init__(self, objnm):
        self.currObj=g.getCurrentScene().getObjectList()[objnm]


        """self.c = g.getCurrentController()"""
        """get the sensors"""
        """self.salways = self.c.getSensor("always")
        self.snear = self.c.getSensor("near")
        self.stouch = self.c.getSensor("touch")
        self.sprop = self.c.getSensor("property")"""

    """get the current positon of our object"""
    def getposit(self):
        return self.currObj.getPosition()

    def mtouch(self, nobj, propert):
        self.stouch.setProperty(propert)
        try:
            #print "hit object", self.stouch.getHitObject().name
            if self.stouch.getHitObject().name==nobj:
                return True
            else:
                return False
        except AttributeError:
            return False

    def getTouchObj(self, propert):
        self.stouch.setProperty(propert)
        try:
            return self.stouch.getHitObject.name
        except AttributeError:
            return None

    def ifTouchObj(self, propert):
        #self.stouch.setTouchMaterial(g.KX_FALSE)
        self.stouch.setProperty(propert)
        try:
            if self.stouch.isPositive():
                return self.stouch.getHitObject.name
            else:
                return None
        except AttributeError:
            return None

    def ifNearObj(self, propert):
        #self.snear.setTouchMaterial(g.KX_FALSE)
        self.snear.setProperty(propert)
        try:
            if self.snear.isPositive():
                return self.snear.getHitObject.name
            else:
                return None
        except AttributeError:
            return None

    def mprop(self, property, value):
        self.sprop.setProperty(property)
        try:
            if self.sprop.getValue()==value:
                return True
            else:
                return False
        except AttributeError:
            return False

    def mall(self):
        if self.salways.isPositive():
            return True
        else:
            return False

    def snear(self, obj):
        tmp = self.snear.getHitObjectList()
        try:
            if tmp[0]==obj:
                return True
            else:
                return False
        except AttributeError:
            return False
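
For reference, a minimal usage sketch of the sense class above, assuming the Blender 2.4x game engine and a controller that owns sensors named "touch", "property", "always" and "near"; the object and property names here are illustrative only:

#illustrative usage of the sense wrapper (object/property names are examples)
s = sense("OBSphere")                  #bind the wrapper to the red ball object
print s.getposit()                     #current position of the bound object
if s.mtouch("OBPlaneUwall","wall"):    #True when the touch sensor reports a hit on that wall object
    print "touching the upper wall"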

File: vavariable.py

import Blender as b
import GameLogic as g
from ProfileManager import profilemgr
import time as t

class gamevar:
    """This class will essentially contain all the game variables the game will
    need as well as all the variables that have been specified in the profile
    file passed to the game"""
    """player variables (red ball)"""
    player_name=""
    player_size=0
    player_curious=0
    player_bored=0
    player_explore=0
    player_wander=0
    player_colour=(0,0,0)

    #how many times would you visit the same object?
    #repeats_phonebox, repeats_colourpad, repeats_hoverpad, repeats_virtual
    repeats=[2,2,2,2]
    #how long do you spend on each object?
    time_on_interest={"virtual":10,"hover":10,"colourpad":10,"phonebox":10}

    """game variables"""
    #states=["wallwalk","wander","hoverpad","colourpad","ballchase","phonebox","moving"]
    curr_state=0
    #time spent in the current state
    curr_state_time=0
    #set flag if player currently distracted
    distracted=False
    #set which object interacted with last
    interests=[]
    #list of all objects interacted with
    time_next_interest={"virtual":t.time(),"hover":t.time(),"colourpad":t.time(),"phonebox":t.time()}
    #boolean flags for controlling the movement of the ball to the phonebox
    phoneEmptyOne=False
    phoneEmptyTwo=False
    phoneEmptyThree=False
    #time lamp started being on state
    lampon=t.time()+5
    #original size to change back
    sizeb4change=5
    #original location before size change
    #locb4change=(0,0,0)
    #time when the random numbers were last changed
    last_refresh=t.time()
    #time when the player last went for a wander (initially set to 10 secs extra on initialisation)
    lastwander=t.time()
    #time started on current interest
    time_curr_interest=9999999999.99
    #time that has to pass before player will next be interested in object
    #virtual, hover, colourpad, phonebox
    time_pass_interest=[60,60,60,60]
    #has the switch been triggered?
    switchtime=False
    #what time was the light switched on
    onTime=999999999999.99
    #has the virtual been chased?
    beenChased=False
    #location when went wandering
    currLocation=[0,0,0]
    #speed upper and lower bounds
    speedup=4
    speedlow=1

    def __init__(self, profile_name):
        """on initialisation call the read profile method and update the class
        variable list to match."""
        """NOTE: eventually the profile name will be passed into this method."""
        pro = profilemgr()
        pro.profileRead(profile_name)
        gamevar.player_name=pro.nameav
        gamevar.player_size=int(pro.size)
        gamevar.player_curious=int(pro.curious)
        gamevar.player_bored=int(pro.bored)
        gamevar.player_explore=int(pro.explore)
        gamevar.player_wander=int(pro.wander)
        gamevar.player_colour=tuple(pro.colour)
        gamevar.repeats[0]=int(pro.repeat_phonebox)
        gamevar.repeats[1]=int(pro.repeat_colourpad)
        gamevar.repeats[2]=int(pro.repeat_hover)
        gamevar.repeats[3]=int(pro.repeat_virtual)
        gamevar.time_on_interest["phonebox"]=int(pro.time_on_phonebox)
        gamevar.time_on_interest["colourpad"]=int(pro.time_on_colourpad)
        gamevar.time_on_interest["hover"]=int(pro.time_on_hover)
        gamevar.time_on_interest["virtual"]=int(pro.time_on_virtual)
        g.log.log("Done Initialising the game variables")


File: vavirtual.py

import Blender
import GameLogic as g
from vafunction import cmnfunc
from vaactuator import actuat

class virtualplayer:
    #controls the tracking of the virtual to the empty and watches for the player
    c = g.getCurrentController()
    #setup the virtuals function class
    virtualfunc=cmnfunc("OBYellowBall")
    #setup the virtuals actuator class
    virtualact=actuat("OBYellowBall")

    def player(self):
        lst=[]
        lst.append(self.virtualfunc.currObj.getDistanceTo(g.getCurrentScene().getObjectList()["OBSphere"]))
        #print "distance to virtual", lst[0]
        #if the player is <2.3 away then caught elif within curiosity range then wander
        if lst[0]<2.3:
            g.log.log("VIRTUAL: The player has caught me, BANG!")
            self.virtualfunc.setLocation("OBYellowBall",g.var.currLocation)
            g.var.beenChased=False
            g.log.log("VIRTUAL: I've been re-incarnated!")
        elif lst[0]<g.var.player_curious-0.7:
            g.log.log("VIRTUAL: I've spotted the Player, run-away!")
            #cancel tracking
            self.virtualact.mTrackTo("")
            #go for a wander
            self.virtualfunc.virtualwander()
            #flag that have been chased
            g.var.beenChased=True
        else:
            #if the virtual been chased then need to move back to in range of YBEmpty before tracking
            if g.var.beenChased==True:
                #move back to the position left from
                g.log.log("VIRTUAL: The player can't be seen anymore, I'm moving back")
                self.virtualfunc.moveToLocation("OBYellowBall",g.var.currLocation)
                if self.virtualfunc.sameLocation(g.var.currLocation,self.virtualfunc.returnLocation("OBYellowBall")):
                    #set the been chased flag to false
                    g.var.beenChased=False
                    g.log.log("VIRTUAL: I'm going back to what I was doing")
            else:
                """Track to the YBEmpty"""
                self.virtualact.mTrackTo("YBEmpty")
                self.virtualact.mforce((1.8,1.8,0.0))
                #while tracking keep reseting the currlocation
                g.var.currLocation=self.virtualfunc.returnLocation("OBYellowBall")

File: global.py

from vaactuator import actuat
from vasensor import sense
from valog import alogfile
from vavariable import gamevar
from vafunction import cmnfunc
import Blender
import GameLogic as g
import time as t
import sys

#This script initialises the global variables avaialble to all objects
#during runtime.

c = g.getCurrentController()

inpt = c.getSensor("start")

if inpt.isPositive():
    print "start initialising"
    g.lightOn=False
    tmp="".join(["vavatarlog",str(t.time())])
    #setup the logfile for this run through of the vavatar system


    g.log=alogfile(tmp)
    #setup the game variables
    profile="C:/VAvatar/Profiles/Profile1.pro"
    #profile=sys.argv[1]
    g.var=gamevar(profile)
    #create the sphere at the beginnning
    #g.func=cmnfunc("OBSphere")
    #g.func.respawn()
    #activate the actuator
    g.addActiveActuator(c.getActuator("finish"),1)
    print "finish intialising"

File: LightController.py

import GameLogic as g
from valight import lightswitch

#initialising script called on the light object in the light box

g.light = lightswitch()

g.light.switcher()

File: YellowBallCntrl.py

import GameLogic as g
from vavirtual import virtualplayer

#initialising script called on the yellow ball object

g.virtual = virtualplayer()

g.virtual.player()

File: RedBallController.py

from vasensor import sense
from vagame import newvagame
import GameLogic as g
#initialising script called on the red ball object

g.avs = sense("OBSphere")
g.run = newvagame()
c = g.getCurrentController()

g.run.gameplay()

File: ColourPadController.py

import GameLogic as g
from vacolourpad import colourchgr

#initialising script called on the colour changing pad

g.colour = colourchgr()
g.colour.changer()

Appendix E

Interface Code

File: ProfileManager.py

class profilemgr:
    def __init__(self):
        self.nameav="avatar"
        self.size=20
        self.curious=50
        self.bored=50
        self.explore=50
        self.wander=50
        self.colour=(0,0,0,0)
        self.repeat_virtual=2
        self.repeat_phonebox=2
        self.repeat_hover=2
        self.repeat_colourpad=2
        self.time_on_virtual=10
        self.time_on_phonebox=10
        self.time_on_hover=10
        self.time_on_colourpad=10

    def getval(self, txt):
        txt = txt.rstrip('\n')
        lst = txt.split('-')
        return lst.pop(1)

    def setSize(self, nwsize):
        self.size=nwsize

    def setName(self, nwname):
        self.nameav=nwname

    def setCurious(self, nwcurious):
        self.curious=nwcurious

    def setBored(self, nwbored):
        self.bored=nwbored

    def setExplore(self, nwexplore):
        self.explore=nwexplore

    def setWander(self, nwwander):
        self.wander=nwwander

    def setColour(self, nwcolour):
        self.colour=nwcolour

    def setRepeatVirtual(self, rpVirtual):
        self.repeat_virtual=rpVirtual

    def setRepeatPhone(self, rpPhone):
        self.repeat_phonebox=rpPhone

    def setRepeatHover(self, rpHover):
        self.repeat_hover=rpHover

    def setRepeatColour(self, rpColour):
        self.repeat_colourpad=rpColour

    def setTimeOnVirtual(self, tmVirtual):


        self.time_on_virtual=tmVirtual

    def setTimeOnPhone(self, tmPhone):
        self.time_on_phonebox=tmPhone
    def setTimeOnHover(self, tmHover):
        self.time_on_hover=tmHover
    def setTimeOnColour(self, tmColour):
        self.time_on_colourpad=tmColour

    def retColour(self):
        s = str(self.colour)
        s.lstrip("(")
        s.rstrip(")")
        lst = s.split(',')
        lst[0]=lst[0].strip()
        lst[1]=lst[1].strip()
        lst[2]=lst[2].strip()
        lst[3]=lst[3].strip()
        return (lst[0],lst[1],lst[2],lst[3])

    def profileRead(self, profilename):
        """open pipe"""
        f = open(profilename,'r')
        """if <attributes> section read in the the values"""
        if f.readline()=="<attributes>\n":
            self.nameav = self.getval(f.readline())
            self.size = self.getval(f.readline())
            self.colour = self.getval(f.readline())
            f.readline()
        else:
            print "failed to read the file, no attributes tag"
        """if <behaiours> section read in the the values"""
        if f.readline()=="<behaviours>\n":
            self.curious = self.getval(f.readline())
            self.bored = self.getval(f.readline())
            self.explore = self.getval(f.readline())
            self.wander = self.getval(f.readline())
            f.readline()
        else:
            print "failed to read the file, no behaviours tag"
        """if <repeats> section read in the the values"""
        if f.readline()=="<repeats>\n":
            self.repeat_virtual=self.getval(f.readline())
            self.repeat_phonebox=self.getval(f.readline())
            self.repeat_hover=self.getval(f.readline())
            self.repeat_colourpad=self.getval(f.readline())
            self.time_on_virtual=self.getval(f.readline())
            self.time_on_phonebox=self.getval(f.readline())
            self.time_on_hover=self.getval(f.readline())
            self.time_on_colourpad=self.getval(f.readline())
            f.readline()
        else:
            print "failed to read the file, no repeats tag"
        """close pipe"""
        f.close()

    def profileWrite(self, profilename):
        """open pipe"""
        f = open(profilename,'w')
        f.write('<attributes>\n')
        """while more attribute parameters to write to file"""
        f.write(''.join(('name-',self.nameav,'\n')))
        f.write(''.join(('size-',str(self.size),'\n')))
        f.write(''.join(('colour-',str(self.colour),'\n')))
        f.write('</attributes>\n')
        """while more emotion parameters to write to file"""
        f.write('<behaviours>\n')
        f.write(''.join(('curious-',str(self.curious),'\n')))
        f.write(''.join(('bored-',str(self.bored),'\n')))
        f.write(''.join(('explore-',str(self.explore),'\n')))
        f.write(''.join(('wander-',str(self.wander),'\n')))
        f.write('</behaviours>\n')
        """while more repeat parameters to write"""
        f.write('<repeats>\n')
        f.write(''.join(('repeat_virtual-',str(self.repeat_virtual),'\n')))
        f.write(''.join(('repeat_phonebox-',str(self.repeat_phonebox),'\n')))
        f.write(''.join(('repeat_hover-',str(self.repeat_hover),'\n')))
        f.write(''.join(('repeat_colourpad-',str(self.repeat_colourpad),'\n')))
        f.write(''.join(('time_on_virtual-',str(self.time_on_virtual),'\n')))
        f.write(''.join(('time_on_phonebox-',str(self.time_on_phonebox),'\n')))
        f.write(''.join(('time_on_hover-',str(self.time_on_hover),'\n')))
        f.write(''.join(('time_on_colourpad-',str(self.time_on_colourpad),'\n')))
        f.write('</repeats>\n')
        """close pipe"""

AP

PE

ND

IXE

.IN

TE

RFA

CE

CO

DE

119f . c l o s e ( )
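Because profileWrite and profileRead define the two halves of the same file format, a profile written by one can be read straight back by the other. The round trip below is a usage sketch rather than part of the submitted system; the file name 'demo.pro' and the particular values are arbitrary examples.

# Usage sketch (not project code): save a profile and read it back,
# checking one value from each of the three file sections.
from ProfileManager import profilemgr

out = profilemgr()
out.setName('scout')
out.setCurious(8)
out.setRepeatHover(3)
out.profileWrite('demo.pro')      # writes <attributes>, <behaviours>, <repeats>

back = profilemgr()
back.profileRead('demo.pro')
print back.nameav, back.curious, back.repeat_hover   # values are read back as strings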

File: NMMenu.py

#Boa:Frame:frmVAS

import wx
import os
import NewProfile

def create(parent):
    return frmVAS(parent)

[wxID_FRMVAS, wxID_FRMVASBMPBLENDER2, wxID_FRMVASBMPPYTHON2,
 wxID_FRMVASBTNNEW, wxID_FRMVASBTNRUN, wxID_FRMVASCOMBOCAMERA,
 wxID_FRMVASLBLCAMERA, wxID_FRMVASLBLCURR, wxID_FRMVASLBLPOWERE,
 wxID_FRMVASLBLQ1, wxID_FRMVASLBLQ2, wxID_FRMVASLBLTITLE,
 wxID_FRMVASSTATICBITMAP1, wxID_FRMVASSTATICBITMAP2,
 wxID_FRMVASTXTPROFILE,
] = [wx.NewId() for _init_ctrls in range(15)]

# Main launcher window: open the Profile Manager, choose a camera view and
# start the Blender environment with the selected profile.
class frmVAS(wx.Frame):
    def _init_ctrls(self, prnt):
        # generated method, don't edit
        wx.Frame.__init__(self, id=wxID_FRMVAS, name='frmVAS', parent=prnt,
              pos=wx.Point(444, 232), size=wx.Size(400, 390),
              style=wx.SIMPLE_BORDER, title='Virtual Avatar System')
        self.SetClientSize(wx.Size(392, 356))
        self.SetToolTipString('')
        self.SetBackgroundColour(wx.Colour(255, 255, 255))
        self.Center(wx.BOTH)
        self.SetMaxSize(wx.Size(400, 390))
        self.SetMinSize(wx.Size(400, 390))

        self.lblQ1 = wx.StaticText(id=wxID_FRMVASLBLQ1,
              label='1. Launch the Profile Manager to select desired behaviour',
              name='lblQ1', parent=self, pos=wx.Point(16, 72),
              size=wx.Size(355, 14), style=0)
        self.lblQ1.SetFont(wx.Font(9, wx.SWISS, wx.NORMAL, wx.BOLD, False,
              'Tahoma'))
        self.lblQ1.SetToolTipString('')
        self.lblQ1.SetHelpText('')

        self.lblQ2 = wx.StaticText(id=wxID_FRMVASLBLQ2,
              label='3. Run the Environment', name='lblQ2', parent=self,
              pos=wx.Point(16, 216), size=wx.Size(148, 14), style=0)
        self.lblQ2.SetFont(wx.Font(9, wx.SWISS, wx.NORMAL, wx.BOLD, False,
              'Tahoma'))
        self.lblQ2.SetToolTipString('')
        self.lblQ2.SetHelpText('')

        self.btnNew = wx.Button(id=wxID_FRMVASBTNNEW, label='Profile Manager',
              name='btnNew', parent=self, pos=wx.Point(32, 96),
              size=wx.Size(128, 40), style=0)
        self.btnNew.SetFont(wx.Font(9, wx.SWISS, wx.NORMAL, wx.NORMAL, False,
              'Tahoma'))
        self.btnNew.SetToolTipString('Create New Profile')
        self.btnNew.Bind(wx.EVT_BUTTON, self.OnBtnNewButton,
              id=wxID_FRMVASBTNNEW)

        self.lblCurr = wx.StaticText(id=wxID_FRMVASLBLCURR,
              label='Current Profile...', name='lblCurr', parent=self,
              pos=wx.Point(32, 240), size=wx.Size(85, 13), style=0)
        self.lblCurr.SetFont(wx.Font(8, wx.SWISS, wx.NORMAL, wx.NORMAL, False,
              'Tahoma'))
        self.lblCurr.SetToolTipString('')

        self.txtProfile = wx.TextCtrl(id=wxID_FRMVASTXTPROFILE,
              name='txtProfile', parent=self, pos=wx.Point(32, 264),
              size=wx.Size(144, 24), style=0, value='')
        self.txtProfile.SetMaxSize(wx.Size(100, 21))
        self.txtProfile.SetMinSize(wx.Size(100, 21))
        self.txtProfile.SetToolTipString('The Selected Profile Will Appear Here')
        self.txtProfile.SetHelpText('')
        self.txtProfile.SetEditable(False)

        self.btnRun = wx.Button(id=wxID_FRMVASBTNRUN, label='Run Environment',
              name='btnRun', parent=self, pos=wx.Point(32, 296),
              size=wx.Size(128, 40), style=0)
        self.btnRun.SetFont(wx.Font(9, wx.SWISS, wx.NORMAL, wx.NORMAL, False,
              'Tahoma'))
        self.btnRun.SetToolTipString('Click to start Environment with your profile')
        self.btnRun.Bind(wx.EVT_BUTTON, self.OnBtnRunButton,
              id=wxID_FRMVASBTNRUN)

        self.bmpPython2 = wx.StaticBitmap(bitmap=wx.Bitmap(u'C:/VAvatar/python.bmp',
              wx.BITMAP_TYPE_BMP), id=wxID_FRMVASBMPPYTHON2, name='bmpPython2',
              parent=self, pos=wx.Point(336, 264), size=wx.Size(56, 56),
              style=0)
        self.bmpPython2.SetToolTipString('')

        self.bmpBlender2 = wx.StaticBitmap(bitmap=wx.Bitmap(u'C:/VAvatar/blender.bmp',
              wx.BITMAP_TYPE_BMP), id=wxID_FRMVASBMPBLENDER2,
              name='bmpBlender2', parent=self, pos=wx.Point(280, 264),
              size=wx.Size(56, 56), style=0)
        self.bmpBlender2.SetToolTipString('')

        self.lblPowere = wx.StaticText(id=wxID_FRMVASLBLPOWERE,
              label='Powered by Blender and Python', name='lblPowere',
              parent=self, pos=wx.Point(224, 320), size=wx.Size(155, 13),
              style=0)
        self.lblPowere.SetToolTipString('')
        self.lblPowere.SetFont(wx.Font(8, wx.SWISS, wx.ITALIC, wx.NORMAL, False,
              'Tahoma'))
        self.lblPowere.SetMaxSize(wx.Size(96, 32))

        self.lblTitle = wx.StaticText(id=wxID_FRMVASLBLTITLE,
              label='User-Centered Avatar System', name='lblTitle',
              parent=self, pos=wx.Point(8, 24), size=wx.Size(373, 35),
              style=0)
        self.lblTitle.SetFont(wx.Font(22, wx.SWISS, wx.NORMAL, wx.NORMAL, False,
              'Tahoma'))
        self.lblTitle.SetToolTipString('')

        self.lblCamera = wx.StaticText(id=wxID_FRMVASLBLCAMERA,
              label='2. Select your Camera', name='lblCamera', parent=self,
              pos=wx.Point(16, 152), size=wx.Size(143, 16), style=0)
        self.lblCamera.SetFont(wx.Font(10, wx.SWISS, wx.NORMAL, wx.BOLD, False,
              'Tahoma'))
        self.lblCamera.SetToolTipString('')

        self.comboCamera = wx.ComboBox(choices=['Birds-Eye Perspective',
              'Third-Person Perspective'], id=wxID_FRMVASCOMBOCAMERA,
              name='comboCamera', parent=self, pos=wx.Point(32, 176),
              size=wx.Size(130, 21), style=0, value='<Select a Camera>')
        self.comboCamera.SetLabel('<Select a Camera>')
        self.comboCamera.SetToolTipString('')

        self.staticBitmap1 = wx.StaticBitmap(bitmap=wx.Bitmap(u'C:/VAvatar/ball3.bmp',
              wx.BITMAP_TYPE_BMP), id=wxID_FRMVASSTATICBITMAP1,
              name='staticBitmap1', parent=self, pos=wx.Point(184, 96),
              size=wx.Size(200, 170), style=0)
        self.staticBitmap1.SetToolTipString('')

        self.staticBitmap2 = wx.StaticBitmap(bitmap=wx.Bitmap(u'C:/VAvatar/close.bmp',
              wx.BITMAP_TYPE_BMP), id=wxID_FRMVASSTATICBITMAP2,
              name='staticBitmap2', parent=self, pos=wx.Point(368, 8),
              size=wx.Size(16, 20), style=0)
        self.staticBitmap2.SetToolTipString('')
        self.staticBitmap2.Bind(wx.EVT_LEFT_DOWN,
              self.OnStaticBitmap2LeftDown)

    def __init__(self, parent):
        self._init_ctrls(parent)

    def OnBtnLoadButton(self, event):
        event.Skip()
        dlg = wx.FileDialog(self, "Choose a profile", ".", "", "*.*", wx.OPEN)
        try:
            if dlg.ShowModal() == wx.ID_OK:
                filename = dlg.GetPath()
                # remove trailing '.txt' characters from the displayed name
                # (note: rstrip strips characters, not a suffix)
                profilename = dlg.GetFilename().rstrip('.txt')
                self.txtProfile.SetValue(profilename)
        finally:
            dlg.Destroy()

    def OnBtnNewButton(self, event):
        self.Hide()
        self.main = NewProfile.create(None)
        self.main.Show()
        #self.SetTopWindow(self.main)
        return True

    def OnBtnRunButton(self, event):
        self.txtProfile.SetValue(NewProfile.filenm)
        if self.txtProfile.GetValue() == "":
            # warning dialog: a profile must be specified before running
            dlg_m = wx.MessageBox('No profile has been selected to be used. '
                  'Please go to the Profile Manager and Load a Profile.',
                  'Error!', wx.OK)
        else:
            # run the right code version for the right camera viewpoint
            if self.comboCamera.GetValue() == "Birds-Eye Perspective":
                # run the blender code program for birds eye
                os.startfile('C:/Program Files/Blender Foundation/Blender/Anything.exe')
            elif self.comboCamera.GetValue() == "Third-Person Perspective":
                # run the blender code program for third person
                os.startfile('C:/Program Files/Blender Foundation/Blender/Anything.exe')
            else:
                # dialog for reminder to select camera
                dlg_m = wx.MessageBox('No camera has been selected to be used.',
                      'Error!', wx.OK)

    def OnBtnExitButton(self, event):
        exit()

    def OnStaticBitmap2LeftDown(self, event):
        exit()
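The Boa Constructor modules above only define the frames and their create() factories; the wxPython application object and main loop that actually launch the menu are not part of these listings. A minimal launcher, written here only as an assumed sketch for wxPython 2.x (where wx.PySimpleApp is available), might look as follows.

# Hypothetical launcher (not included in the listings): start the wxPython
# main loop with the Virtual Avatar System menu as the top-level frame.
import wx
import NMMenu

if __name__ == '__main__':
    app = wx.PySimpleApp()        # wxPython 2.x convenience application class
    frame = NMMenu.create(None)   # build frmVAS via the module's factory
    frame.Show()
    app.MainLoop()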

File: NewProfile.py

#Boa:Frame:frmProfile

import wx

from ProfileManager import profilemgr as pmgr
import NMMenu

filenm = ""

def create(parent):
    return frmProfile(parent)

[wxID_FRMPROFILE, wxID_FRMPROFILEBTNCOLOUR, wxID_FRMPROFILEBTNLOAD,
 wxID_FRMPROFILEBTNOPEN, wxID_FRMPROFILEBTNSAVE, wxID_FRMPROFILEGAUGEBORED,
 wxID_FRMPROFILEGAUGECURIOUS, wxID_FRMPROFILEGAUGEEXPLORE,
 wxID_FRMPROFILELBLBORED, wxID_FRMPROFILELBLCOLOUR,
 wxID_FRMPROFILELBLCOLOUR2, wxID_FRMPROFILELBLCOLOURQUEST,
 wxID_FRMPROFILELBLCURIOUS, wxID_FRMPROFILELBLEXPLORE,
 wxID_FRMPROFILELBLFILENAME, wxID_FRMPROFILELBLFRMTITLE,
 wxID_FRMPROFILELBLNAMEAVATAR, wxID_FRMPROFILELBLSIZE,
 wxID_FRMPROFILELBLTITLE, wxID_FRMPROFILELBLTOLOAD,
 wxID_FRMPROFILESLIDERBORED, wxID_FRMPROFILESLIDERCURIOUS,
 wxID_FRMPROFILESLIDEREXPLORE, wxID_FRMPROFILESLIDERSIZE,
 wxID_FRMPROFILESLIDERWANDER, wxID_FRMPROFILESTATICBITMAP1,
 wxID_FRMPROFILESTATICBOX1, wxID_FRMPROFILESTATICLINE1,
 wxID_FRMPROFILETXTLOADPROFILE, wxID_FRMPROFILETXTNAME,
] = [wx.NewId() for _init_ctrls in range(30)]

# Profile Setup window: name, size, colour and behaviour sliders that are
# saved to and loaded from a .pro profile via profilemgr.
class frmProfile(wx.Frame):
    def _init_ctrls(self, prnt):
        # generated method, don't edit
        wx.Frame.__init__(self, id=wxID_FRMPROFILE, name='frmProfile',
              parent=prnt, pos=wx.Point(311, 193), size=wx.Size(500, 460),
              style=wx.SIMPLE_BORDER, title='Profile Setup')
        self.SetClientSize(wx.Size(492, 426))
        self.SetMaxSize(wx.Size(500, 460))
        self.SetMinSize(wx.Size(500, 460))
        self.SetToolTipString('')
        self.SetBackgroundColour(wx.Colour(255, 255, 255))
        self.Bind(wx.EVT_ACTIVATE, self.OnFrmProfileActivate)
        self.Bind(wx.EVT_LEFT_DOWN, self.OnFrmProfileLeftDown)

        self.lblCurious = wx.StaticText(id=wxID_FRMPROFILELBLCURIOUS,
              label='Curiosity Level', name='lblCurious', parent=self,
              pos=wx.Point(264, 376), size=wx.Size(82, 16), style=0)
        self.lblCurious.SetToolTipString('')
        self.lblCurious.SetFont(wx.Font(10, wx.SWISS, wx.NORMAL, wx.NORMAL,
              False, 'Tahoma'))

        self.lblBored = wx.StaticText(id=wxID_FRMPROFILELBLBORED,
              label='Boredom Level', name='lblBored', parent=self,
              pos=wx.Point(144, 376), size=wx.Size(84, 16), style=0)
        self.lblBored.SetFont(wx.Font(10, wx.SWISS, wx.NORMAL, wx.NORMAL, False,
              'Tahoma'))
        self.lblBored.SetToolTipString('')

        self.lblExplore = wx.StaticText(id=wxID_FRMPROFILELBLEXPLORE,
              label='Exploration Level', name='lblExplore', parent=self,
              pos=wx.Point(16, 376), size=wx.Size(96, 16), style=0)
        self.lblExplore.SetFont(wx.Font(10, wx.SWISS, wx.NORMAL, wx.NORMAL,
              False, 'Tahoma'))
        self.lblExplore.SetToolTipString('')

        self.sliderCurious = wx.Slider(id=wxID_FRMPROFILESLIDERCURIOUS,
              maxValue=10, minValue=5, name='sliderCurious', parent=self,
              pos=wx.Point(272, 192), size=wx.Size(24, 184),
              style=wx.SL_VERTICAL | wx.SL_INVERSE, value=7)
        self.sliderCurious.SetToolTipString('Move the slider with the mouse')
        self.sliderCurious.SetLabel('')
        self.sliderCurious.SetBackgroundStyle(wx.BG_STYLE_COLOUR)
        self.sliderCurious.SetMax(10)
        self.sliderCurious.SetMin(5)
        self.sliderCurious.Bind(wx.EVT_SCROLL, self.OnSliderCuriousScroll)

        self.sliderExplore = wx.Slider(id=wxID_FRMPROFILESLIDEREXPLORE,
              maxValue=15, minValue=5, name='sliderExplore', parent=self,
              pos=wx.Point(32, 192), size=wx.Size(24, 184),
              style=wx.SL_VERTICAL | wx.SL_INVERSE, value=10)
        self.sliderExplore.SetLabel('')
        self.sliderExplore.SetBackgroundStyle(wx.BG_STYLE_SYSTEM)
        self.sliderExplore.SetAutoLayout(False)
        self.sliderExplore.SetToolTipString('Move the slider with the mouse')
        self.sliderExplore.Bind(wx.EVT_MOVE, self.OnSliderExploreMove)
        self.sliderExplore.Bind(wx.EVT_SCROLL, self.OnSliderExploreScroll)

        self.lblColourQuest = wx.StaticText(id=wxID_FRMPROFILELBLCOLOURQUEST,
              label='Pick the Colour of your Avatar...', name='lblColourQuest',
              parent=self, pos=wx.Point(24, 88), size=wx.Size(182, 16),
              style=0)
        self.lblColourQuest.SetFont(wx.Font(10, wx.SWISS, wx.NORMAL, wx.NORMAL,
              False, 'Tahoma'))
        self.lblColourQuest.SetToolTipString('')

        self.lblTitle = wx.StaticText(id=wxID_FRMPROFILELBLTITLE,
              label='User Avatar Profile Setup', name='lblTitle', parent=self,
              pos=wx.Point(56, 8), size=wx.Size(268, 29), style=0)
        self.lblTitle.SetAutoLayout(True)
        self.lblTitle.SetFont(wx.Font(18, wx.SWISS, wx.NORMAL, wx.NORMAL, False,
              'Tahoma'))
        self.lblTitle.SetToolTipString('')

        self.sliderSize = wx.Slider(id=wxID_FRMPROFILESLIDERSIZE, maxValue=4,
              minValue=1, name='sliderSize', parent=self,
              pos=wx.Point(200, 120), size=wx.Size(152, 32),
              style=wx.SL_HORIZONTAL, value=2)
        self.sliderSize.SetLabel('')
        self.sliderSize.SetMax(4)
        self.sliderSize.SetMin(1)
        self.sliderSize.SetToolTipString('Move the slider with the mouse')

        self.lblSize = wx.StaticText(id=wxID_FRMPROFILELBLSIZE,
              label='Pick your Size.. Small to Big...', name='lblSize',
              parent=self, pos=wx.Point(24, 120), size=wx.Size(167, 16),
              style=0)
        self.lblSize.SetFont(wx.Font(10, wx.SWISS, wx.NORMAL, wx.NORMAL, False,
              'Tahoma'))
        self.lblSize.SetToolTipString('')

        self.btnSave = wx.Button(id=wxID_FRMPROFILEBTNSAVE,
              label='Save Profile', name='btnSave', parent=self,
              pos=wx.Point(392, 224), size=wx.Size(88, 39), style=0)
        self.btnSave.SetToolTipString('Click here to Save the current profile')
        self.btnSave.Bind(wx.EVT_BUTTON, self.OnBtnSaveButton,
              id=wxID_FRMPROFILEBTNSAVE)

        self.sliderBored = wx.Slider(id=wxID_FRMPROFILESLIDERBORED,
              maxValue=10, minValue=5, name='sliderBored', parent=self,
              pos=wx.Point(152, 192), size=wx.Size(24, 184),
              style=wx.SL_VERTICAL | wx.SL_INVERSE, value=7)
        self.sliderBored.SetLabel('')
        self.sliderBored.SetMax(10)
        self.sliderBored.SetMin(5)
        self.sliderBored.SetToolTipString('Move the slider with the mouse')
        self.sliderBored.Bind(wx.EVT_SCROLL, self.OnSliderBoredScroll)

        self.btnColour = wx.Button(id=wxID_FRMPROFILEBTNCOLOUR,
              label='Colour Picker', name='btnColour', parent=self,
              pos=wx.Point(208, 79), size=wx.Size(80, 24), style=0)
        self.btnColour.SetToolTipString('')
        self.btnColour.SetFont(wx.Font(8, wx.SWISS, wx.ITALIC, wx.NORMAL, False,
              'Tahoma'))
        self.btnColour.Bind(wx.EVT_BUTTON, self.OnBtnColourButton,
              id=wxID_FRMPROFILEBTNCOLOUR)

        self.btnOpen = wx.Button(id=wxID_FRMPROFILEBTNOPEN,
              label='Open Profile', name='btnOpen', parent=self,
              pos=wx.Point(392, 168), size=wx.Size(88, 40), style=0)
        self.btnOpen.SetToolTipString('Click here to Open an existing profile')
        self.btnOpen.Bind(wx.EVT_BUTTON, self.OnBtnOpenButton,
              id=wxID_FRMPROFILEBTNOPEN)

        self.staticLine1 = wx.StaticLine(id=wxID_FRMPROFILESTATICLINE1,
              name='staticLine1', parent=self, pos=wx.Point(208, 296),
              size=wx.Size(0, 24), style=0)

        self.lblColour = wx.StaticText(id=wxID_FRMPROFILELBLCOLOUR, label='',
              name='lblColour', parent=self, pos=wx.Point(128, 328),
              size=wx.Size(0, 13), style=0)
        self.lblColour.SetBackgroundColour(wx.Colour(255, 0, 0))
        self.lblColour.SetToolTipString('This is the colour of the Avatar')

        self.lblColour2 = wx.StaticText(id=wxID_FRMPROFILELBLCOLOUR2, label='',
              name='lblColour2', parent=self, pos=wx.Point(305, 83),
              size=wx.Size(23, 16), style=0)
        self.lblColour2.SetBackgroundColour(wx.Colour(255, 0, 0))
        self.lblColour2.SetToolTipString('This will be the colour of the avatar')

        self.lblNameAvatar = wx.StaticText(id=wxID_FRMPROFILELBLNAMEAVATAR,
              label='Pick the Name of Your Avatar:', name='lblNameAvatar',
              parent=self, pos=wx.Point(24, 56), size=wx.Size(167, 14),
              style=0)
        self.lblNameAvatar.SetFont(wx.Font(9, wx.SWISS, wx.NORMAL, wx.NORMAL,
              False, 'Tahoma'))
        self.lblNameAvatar.SetToolTipString('')

        self.txtName = wx.TextCtrl(id=wxID_FRMPROFILETXTNAME, name='txtName',
              parent=self, pos=wx.Point(208, 48), size=wx.Size(136, 21),
              style=0, value='Avatar')
        self.txtName.SetFont(wx.Font(8, wx.SWISS, wx.NORMAL, wx.NORMAL, False,
              'Tahoma'))
        self.txtName.SetToolTipString('Name of your avatar')
        self.txtName.SetHelpText('')

        self.btnLoad = wx.Button(id=wxID_FRMPROFILEBTNLOAD,
              label='Load Profile', name='btnLoad', parent=self,
              pos=wx.Point(384, 352), size=wx.Size(96, 56), style=0)
        self.btnLoad.SetFont(wx.Font(8, wx.SWISS, wx.NORMAL, wx.BOLD, False,
              'Tahoma'))
        self.btnLoad.SetToolTipString('Click here when the profile you want to load is on screen')
        self.btnLoad.Bind(wx.EVT_BUTTON, self.OnBtnLoadButton,
              id=wxID_FRMPROFILEBTNLOAD)

        self.txtLoadProfile = wx.TextCtrl(id=wxID_FRMPROFILETXTLOADPROFILE,
              name='txtLoadProfile', parent=self, pos=wx.Point(384, 304),
              size=wx.Size(99, 21), style=0, value='<No profile>')
        self.txtLoadProfile.Enable(True)
        self.txtLoadProfile.SetEditable(False)
        self.txtLoadProfile.SetInsertionPoint(20)
        self.txtLoadProfile.SetHelpText('')
        self.txtLoadProfile.SetToolTipString('')

        self.lblFilename = wx.StaticText(id=wxID_FRMPROFILELBLFILENAME,
              label='', name='lblFilename', parent=self, pos=wx.Point(80, 328),
              size=wx.Size(0, 13), style=0)
        self.lblFilename.Enable(False)

        self.sliderWander = wx.Slider(id=wxID_FRMPROFILESLIDERWANDER,
              maxValue=100, minValue=0, name='sliderWander', parent=self,
              pos=wx.Point(472, 408), size=wx.Size(100, 24),
              style=wx.SL_HORIZONTAL, value=0)
        self.sliderWander.Enable(False)
        self.sliderWander.SetToolTipString('')
        self.sliderWander.Show(False)

        self.gaugeExplore = wx.Gauge(id=wxID_FRMPROFILEGAUGEEXPLORE,
              name='gaugeExplore', parent=self, pos=wx.Point(64, 200),
              range=10, size=wx.Size(24, 168), style=wx.GA_VERTICAL)
        self.gaugeExplore.SetToolTipString('')
        self.gaugeExplore.SetBezelFace(4)

        self.gaugeBored = wx.Gauge(id=wxID_FRMPROFILEGAUGEBORED,
              name='gaugeBored', parent=self, pos=wx.Point(184, 200), range=5,
              size=wx.Size(24, 168), style=wx.GA_VERTICAL)
        self.gaugeBored.SetToolTipString('')

        self.gaugeCurious = wx.Gauge(id=wxID_FRMPROFILEGAUGECURIOUS,
              name='gaugeCurious', parent=self, pos=wx.Point(304, 200), range=5,
              size=wx.Size(24, 168), style=wx.GA_VERTICAL)
        self.gaugeCurious.SetToolTipString('')

        self.staticBox1 = wx.StaticBox(id=wxID_FRMPROFILESTATICBOX1, label='',
              name='staticBox1', parent=self, pos=wx.Point(8, 160),
              size=wx.Size(360, 248), style=0)
        self.staticBox1.SetFont(wx.Font(10, wx.SWISS, wx.NORMAL, wx.NORMAL,
              False, 'Tahoma'))
        self.staticBox1.SetToolTipString('')

        self.staticBitmap1 = wx.StaticBitmap(bitmap=wx.Bitmap(u'C:/VAvatar/ball.bmp',
              wx.BITMAP_TYPE_BMP), id=wxID_FRMPROFILESTATICBITMAP1,
              name='staticBitmap1', parent=self, pos=wx.Point(360, 16),
              size=wx.Size(130, 140), style=0)
        self.staticBitmap1.SetToolTipString('')

        self.lblFrmTitle = wx.StaticText(id=wxID_FRMPROFILELBLFRMTITLE,
              label="Your Avatar's Behaviour Levels...", name='lblFrmTitle',
              parent=self, pos=wx.Point(16, 160), size=wx.Size(179, 14),
              style=0)
        self.lblFrmTitle.SetToolTipString('')
        self.lblFrmTitle.SetFont(wx.Font(9, wx.SWISS, wx.ITALIC, wx.NORMAL,
              False, 'Tahoma'))

        self.lblToLoad = wx.StaticText(id=wxID_FRMPROFILELBLTOLOAD,
              label='Profile to load...', name='lblToLoad', parent=self,
              pos=wx.Point(384, 280), size=wx.Size(78, 13), style=0)
        self.lblToLoad.SetFont(wx.Font(8, wx.SWISS, wx.ITALIC, wx.NORMAL, False,
              'Tahoma'))

    def __init__(self, parent):
        self._init_ctrls(parent)

    def OnBtnSaveButton(self, event):
        global filenm
        dlg = wx.FileDialog(self, "Save a file", "./Profiles", "", "*.pro",
              wx.SAVE)
        try:
            if dlg.ShowModal() == wx.ID_OK:
                filename = dlg.GetPath()
                pr = pmgr()
                if self.txtName.GetValue() != "":
                    pr.setName(self.txtName.GetValue())
                else:
                    pr.setName('avatar')
                pr.setSize(self.sliderSize.GetValue())
                pr.setColour(self.lblColour2.GetBackgroundColour())
                pr.setCurious(self.sliderCurious.GetValue())
                print self.sliderBored.GetValue()
                pr.setBored(self.sliderBored.GetValue())
                pr.setExplore(self.sliderExplore.GetValue())
                pr.setWander(self.sliderWander.GetValue())
                pr.profileWrite(filename)
                """self.lblFilename.SetLabel(filename)"""
                filenm = dlg.GetPath()
                self.lblFilename.Show(False)
                # Refresh the gauge values
                self.gaugeBored.SetValue(self.sliderBored.GetValue() - 5)
                self.gaugeCurious.SetValue(self.sliderCurious.GetValue() - 5)
                self.gaugeExplore.SetValue(self.sliderExplore.GetValue() - 5)
                self.txtLoadProfile.SetValue(dlg.GetFilename())
                dlg_m = wx.MessageBox('The profile was successfully saved',
                      'Success!', wx.OK)
        finally:
            dlg.Destroy()

    def OnBtnColourButton(self, event):
        dlg = wx.ColourDialog(self, None)
        try:
            if dlg.ShowModal() == wx.ID_OK:
                data = dlg.GetColourData()
                colours = data.GetColour().Get()
                self.lblColour2.SetBackgroundColour(colours)
                wx.Frame.Refresh(self)
                # Your code
        finally:
            dlg.Destroy()

    def OnBtnOpenButton(self, event):
        global filenm
        dlg = wx.FileDialog(self, "Open a file", "./Profiles", "", "*.pro",
              wx.OPEN)
        try:
            if dlg.ShowModal() == wx.ID_OK:
                filename = dlg.GetPath()
                pr = pmgr()
                pr.profileRead(filename)
                self.sliderSize.SetValue(int(pr.size))
                self.sliderCurious.SetValue(int(pr.curious))
                self.sliderBored.SetValue(int(pr.bored))
                self.sliderExplore.SetValue(int(pr.explore))
                self.sliderWander.SetValue(int(pr.wander))
                self.lblColour2.SetBackgroundColour(eval(pr.colour))
                self.txtName.SetValue(str(pr.nameav))
                """self.lblFilename.SetLabel(filename)"""
                filenm = filename
                self.lblFilename.Show(False)
                self.txtLoadProfile.SetValue(dlg.GetFilename())
                # Refresh the gauge values
                self.gaugeBored.SetValue(self.sliderBored.GetValue() - 5)
                self.gaugeCurious.SetValue(self.sliderCurious.GetValue() - 5)
                self.gaugeExplore.SetValue(self.sliderExplore.GetValue() - 5)
        finally:
            dlg.Destroy()

    def OnBtnLoadButton(self, event):
        global filenm
        """check if there is a current active profile"""
        """if self.lblFilename.GetLabel()=="":"""
        if filenm == "":
            dlg_m = wx.MessageBox('No profile has been selected to be used.',
                  'Error!', wx.OK)
        else:
            """will need to hide this screen, but send over the value of the filename"""
            dlg = wx.MessageDialog(self,
                  'If you haven\'t saved your profile it will be lost!',
                  'Warning!', wx.OK | wx.CANCEL)
            try:
                if dlg.ShowModal() == wx.ID_OK:
                    self.Hide()
                    self.main = NMMenu.create(None)
                    self.main.Show()
                    #self.SetTopWindow(self.main)
                    self.main.txtProfile.SetValue(filenm)
                    return True
            finally:
                dlg.Destroy()

    def OnSliderExploreScroll(self, event):
        self.gaugeExplore.SetValue(self.sliderExplore.GetValue() - 5)

    def OnSliderCuriousScroll(self, event):
        self.gaugeCurious.SetValue(self.sliderCurious.GetValue() - 5)

    def OnSliderBoredScroll(self, event):
        self.gaugeBored.SetValue(self.sliderBored.GetValue() - 5)

    def OnFrmProfileActivate(self, event):
        self.gaugeBored.SetValue(self.sliderBored.GetValue() - 5)
        self.gaugeCurious.SetValue(self.sliderCurious.GetValue() - 5)
        self.gaugeExplore.SetValue(self.sliderExplore.GetValue() - 5)

    def OnSliderExploreMove(self, event):
        event.Skip()

    def OnFrmProfileLeftDown(self, event):
        event.Skip()
        #position = event.GetPosition()
        #print position
        #print type(position)
        #self.SetPosition(position)
        #self.Refresh()
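Each behaviour slider starts at a minimum of 5 (boredom and curiosity run 5 to 10, exploration 5 to 15), whereas the gauge beside it fills from 0, so every refresh handler subtracts 5 before calling SetValue. The fragment below is an illustrative restatement of that mapping using plain integers rather than wx controls; the helper name is hypothetical and not part of the project code.

# Illustrative only: the slider-to-gauge offset applied by the refresh
# handlers such as OnSliderBoredScroll (slider 5..10 maps to gauge 0..5).
def slider_to_gauge(slider_value, offset=5):   # hypothetical helper name
    return slider_value - offset

print slider_to_gauge(5)    # slider at its minimum -> empty gauge (0)
print slider_to_gauge(10)   # slider at its maximum -> full gauge (5)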