
Remote Controlled Group Behavior for Widely Spreaded and Cooperative Mobile Robots in Wireless Sensor Network Environment

Laxmisha Rai and Soon Ju Kang School of Electrical Engineering and Computer Science, Kyungpook National University, Daegu, 702701, Korea

[email protected], [email protected]

Abstract

In this paper, we present experiments with robots in a wireless sensor network environment that support intelligent generation of group behaviors. We propose a real-time software architecture in a wireless sensor network (WSN) environment for practical applications. The architecture is layered, comprising decision-making, knowledge-processing, execution, communication, and sensor/actuator layers. The proposed architecture is tested in a multi-robot environment, where the robots are expected to exhibit group behaviors. The rules are written in the CLIPS expert system tool to perform intelligent behavior generation and dynamic reasoning, making the behaviors more realistic.

1. Introduction

Widely spread robots are needed in many general missions, such as gathering environmental data, rescue operations after natural disasters, searching for dangerous spots near radiating nuclear reactors or earthquake sites, and oil or energy mining operations. A coordinated group of robots provides the opportunity to accomplish more complex tasks, to adapt to changing environmental conditions, and to survive individual failures [1]. Unlike animals, robots can survive extreme weather conditions and work without stress. Robots suited to such situations require multiple sensors and actuators to accomplish the expected tasks. This demands a wireless sensor network to coordinate the behavior of spatially distributed mobile robots.

There is growing interest in WSN applications in robotics and industrial automation. Software solutions and wireless sensor network technology together reduce the effort of controlling the robots. A network of robots and sensors consists of a collection of sensors distributed over some area that form an ad-hoc network, and a collection of mobile robots that can interact with the sensor network [2]. Wireless sensor networks help group robots synchronize with one another, both in overall behavior and in interconnection. The focus of this paper is on particular issues related to building systems of sensors and robots in a wireless sensor environment that can exhibit group behavior in real-world conditions. We present an experimental prototype multi-robot system to demonstrate the architecture and remotely controlled group behavior generation using the CLIPS expert system tool.

As robots gain importance in ubiquitous and environmental applications, group robots dedicated to single or multiple tasks become essential. It is more realistic if the group of robots behaves as a group as well as sub-groups with multiple areas of expertise. This inspires the development of a new class of group behavior, rather than simply mimicking animal behaviors, that can adapt to complex environmental conditions. Even though groups can perform higher-level tasks than individual members, individual tasks are also sometimes useful. We aim to develop an architecture that can decide autonomously based on its reasoning abilities. The proposed software architecture includes five layers: decision making, knowledge processing, execution, communication, and sensor/actuator, ordered according to the level of reactiveness to external events.

Figure 1 shows the proposed network topology for a widely spread multi-robot system in a WSN environment. The computer acts as a gateway, and the coordinator is connected to several routers. The coordinator maintains overall network knowledge. Routers are responsible for relaying information to the end nodes, the mobile robots. Each mobile robot carries many sensors and actuators, making it a network of sensors in itself. A robot is connected as an end node within the vicinity of a router, and it has the ability to move around the router. Figure 1 shows the architecture, where the robot is directly mapped as an end node along with its sensors/actuators. This is a great benefit: network designers can reduce the effort of configuring the network topology, and robot engineers can quickly generate group behaviors.
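As a minimal sketch, the Figure 1 topology can also be mirrored in the rule base itself; the templates, slot names, and node identifiers below are our own illustration, not part of the paper's implementation:

(deftemplate wsn-node
   (slot id)
   (slot role))   ; coordinator, router, or end-node

(deftemplate wsn-link
   (slot parent)
   (slot child))

(deffacts figure1-topology
   ;; hypothetical instance: one gateway/coordinator PC, one router,
   ;; and one mobile robot attached as an end node
   (wsn-node (id pc1) (role coordinator))
   (wsn-node (id router1) (role router))
   (wsn-node (id robot1) (role end-node))
   (wsn-link (parent pc1) (child router1))
   (wsn-link (parent router1) (child robot1)))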

Figure 1. Proposed network topology for a widely spread multi-robot system

The paper is organized as follows. Section 2 describes related work, and Section 3 presents the requirement analysis. Section 4 describes the proposed architecture, and Section 5 explains the design of the integrated rule-based environment. The experimental evaluation is described in Section 6, and finally the conclusion in Section 7.

2. Related Works

Herding, flocking, and schooling behaviors of animals have been studied extensively over the past decade, and this research has stimulated attempts to create robots and simulated characters with similar skills. Researchers in multi-robot systems, cooperative systems, and swarm robotics are actively developing group behavior using various methodologies. A number of earlier works address robots, group behavior, multi-robot simulation, robot navigation using sensor networks, and expert systems. The authors of [2] describe experiments with networks of robots and sensors in support of rescue and first-response operations; there, the sensors are used to monitor temperature, light, and so on. Mobile robot navigation using a sensor network is studied in [3], which proposes an algorithm for robot navigation using a sensor network embedded in the environment. Regarding work that combines expert systems and robots, Kate Foster et al. [4] studied combining a rule-based expert system and machine learning in a simulated mobile robot control system, and O. Burchan Bayazit et al. [5] proposed better group behaviors using rule-based roadmaps. The researchers in [6] presented fundamental rules for the cooperative group behaviors "flocking" and "arrangement" of multiple autonomous mobile robots, represented by a small number of fuzzy rules. They hold that group intelligence in complex lifelike behaviors is composed of local interactions of individuals with simple functions under fundamental rules of artificial life. However, while most of these earlier works argued for the advantages of rule-based group behavior approaches, none of them was physically implemented in a WSN environment. Earlier works also relied on simulation methods rather than combined physical and virtual prototypes, and failed to demonstrate dynamic intelligent group behaviors. In our approach, we have implemented the entire system using the CLIPS [7] expert system. The physical prototypes of the member robots are built with the LEGO [8] toolkit.

3. Requirement Analysis

While designing robots that exhibit group behaviors, engineers need to understand the behavior of herds of sheep, flocks of birds, or schools of fish. In general, their motion is random, but the scope of their movement stays within a known area. Flocking or herding behaviors are formed from the collective behavior of each member, and each group member follows actions similar to those of its neighbors. So, our requirement is to generate such group behavior collectively as well as incrementally, member by member. We strongly believe that merely reproducing the behavior of a herd or flock of natural entities is insufficient to fully exploit the advantages of cooperative mobile robots in a WSN environment. This motivated us to control the group collectively and also to control individual members specifically.

Intelligent group behavior requires proper synchronization between the co-working robots. The robots need to work together as a group rather than each behaving in its own way. To achieve this, the communication methods between the robots also need to be synchronized, and a dedicated WSN protocol is necessary. For example, sensor network protocols such as ZigBee [9] greatly facilitate directly mapping a network node to an individual robot. When a number of mobile robots are required to cooperate, then from a systems design perspective the possibility that each mobile robot might be treated as a ZigBee node on a network is particularly interesting. Our other aim is to study the real-time issues that arise inside the robot because of the non-real-time features of sensor network protocols and Internet protocols. Our major design considerations are as follows:

1. Develop an efficient network topology suitable for a widely spread multi-robot environment in a WSN.

2. Design an intelligent real-time software architecture to control individual robots as well as groups (or subgroups) of robots using a rule-based system. One advantage of individual control is the ability to develop heterogeneous rather than homogeneous behavior to cope with unexpected situations. This is needed to show that our architecture can extend the behavior of animals to suit technical or user demands. For example, a subset of robots may behave like a flock of birds while another set of robots in the same group behaves like a group of people.

3. Develop facts that can be incorporated into rules to effectively control the behavior of one or more robots in terms of direction, speed, and angle of rotation. Domain-specific facts must be incorporated into the rule base to support efficient reasoning and decision-making abilities (a sketch of such a fact follows this list).
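As a minimal sketch of design consideration 3, a domain-specific fact might carry the robot number, direction, speed, and angle, and a rule could map it onto the behavior modules of Section 5 (the motion-command template is our own assumption; only ibehr comes from the paper):

(deftemplate motion-command
   (slot robot)       ; RN, robot identification number
   (slot direction)   ; forward, backward, left, or right
   (slot speed)       ; 1 (lowest) to 6 (highest)
   (slot amount))     ; distance parameter or angle-sensor units

(defrule turn-right-on-command
   ;; translate a declarative fact into a call to the individual
   ;; behavior module ibehr (robot, speed, angle)
   (motion-command (robot ?rn) (direction right) (speed ?s) (amount ?a))
   =>
   (ibehr ?rn ?s ?a))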

4. Proposed Software Architecture Supporting WSN

An overview of the proposed software architecture, with guaranteed real-time task execution features, is shown in Figure 2. The proposed software architecture is layered, with five layers: the sensor/actuator layer, communication layer, execution layer, knowledge-processing layer, and decision-making layer. The sensor/actuator layer is composed of the sensors, actuators, and communication modules, together with the actual physical modules of the member robots. This layer also comprises a sensor-actuator network to efficiently manage the several sensors and actuators mounted on the member robots. The CAN protocol [10] is used for communication between the sensors and actuators deployed in a robot. This is required to satisfy the real-time characteristics: CAN guarantees hard real-time communication between the various sensors and actuators.

Figure 2. Overview of the proposed software architecture supporting WSN

The communication layer is responsible for remote wireless control of the robots and also acts as an interface between the sensor network and the outside world. It includes ZigBee or wireless RS232 protocols. ZigBee is expected to provide low-power connectivity for equipment that needs a battery life of several months to several years but does not require data transfer rates as high as those enabled by Bluetooth. ZigBee-compliant wireless devices are expected to transmit over 10-75 meters, depending on the RF environment and the power output.

Figure 3. CAN Module

The execution layer is responsible for guaranteed real-time execution of tasks. This layer includes both real-time and non-real-time tasks; the non-real-time tasks include remote monitoring tasks and concurrent simulation environment tasks. The knowledge-processing layer constitutes the real-time expert system and its components. In addition to the requirement of fast execution, the real-time expert system should also possess the ability to perform context focusing, interrupt handling, temporal reasoning, uncertainty handling, and truth maintenance [11].

The knowledge-processing layer controls the whole process of applying the rules to the working memory to obtain the output of the system. It includes the working memory, rule base, pattern matcher, and inference engine. The rule base contains all the rules the system knows. The working memory contains all the pieces of information the rule-based system is working with: the premises and conclusions of the rules. The pattern matcher decides which rules apply, and it is often the most expensive part of a rule-based program. The agenda contains the list of rules that could potentially fire and decides which rule to fire first. The decision-making layer is the highest layer, responsible for actions and decision making. It also includes the user interface for remote-controlled operations.
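A minimal CLIPS sketch of this match-agenda-fire cycle, with fact and rule names of our own choosing:

;; a fact loaded into working memory on (reset)
(deffacts demo
   (obstacle-ahead 3))

;; the pattern matcher matches the fact against this rule and places
;; an activation on the agenda; (run) lets the inference engine fire it
(defrule stop-on-obstacle
   (obstacle-ahead ?rn)
   =>
   (printout t "Robot " ?rn " stops" crlf))

After (reset), the (agenda) command lists the pending activation of stop-on-obstacle, and (run) fires it.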

5. Design of Integrated Rule-Based Environment

5.1. Implementation

The physical prototypes of the robots were developed using a LEGO kit and a CAN-based sensor network. The CAN-based sensor network is directly responsible for interacting with the sensors and actuators in the robot. In the present prototype, the CAN module is networked with two angle sensors and two motor drive actuators.

Figure 4. Components of individual member robot

Figure 5. KNU group robots

We followed the model proposed in [12] to develop the member robots, so that robot movements could be studied accurately. The robots also carry the CAN board and an HPS-120 HandyPort [13]. Figure 4 shows the components of an individual robot, and Figure 5 shows the group of three robots in operation. We also plan to develop a simulation environment using the GLG [14] toolkit, to directly reflect the locomotion of the robots on screen.

5.2. Integrated Rule-based Environment

The implementation environment includes a PC with an RT-Linux dual-kernel environment. The PC provides the ports to handle the HPS-120 HandyPort-Serial. The CLIPS (version 6.23) expert system tool is installed to support real-time knowledge processing.

As a rule-based shell, CLIPS stores its knowledge in rules, which are logic-based structures. The rules are defined using defrule constructs. These rules control the high-level decisions and can call C-compiled procedures. The various facts are written as external modules that generate different behaviors when incorporated into rules. We have added these modules as facts to the CLIPS rule-based shell. Two types of facts are implemented: (a) group (or subgroup) facts and (b) individual facts.
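Because the behavior modules are C-compiled, a pure-CLIPS stand-in is convenient for exercising the rules on their own; the deffunction below is our own sketch sharing the (RG, Speed, Dist) signature of gbehf, not the actual module:

;; hypothetical stand-in for the C-compiled group fact gbehf
(deffunction gbehf (?rg ?speed ?dist)
   (printout t "Robots 1.." ?rg " move forward: speed " ?speed
               ", distance parameter " ?dist crlf))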

Figure 6. Identification scheme for individual and group robots, with N=3

The group facts generate behaviors such as forward, backward, left, and right movements, which can be applied to a robot group (RG) or subgroup. Their parameters differ from those of the individual facts. There are four facts in this category: gbehf, gbehb, gbehr, and gbehl. Each is an external module in our implementation and is used to generate various group behaviors dynamically through the rules; each takes three parameters. The individual facts apply only to individual member robots, which are identified by a robot number (RN).

There are four facts in the individual category (ibehf, ibehb, ibehr, and ibehl). The fact names and related parameters are shown in Tables 1 and 2, respectively. Figure 6 shows the identification scheme for individual, subgroup, and group robots, where the maximum number of robots is 3 (N=3). Individual facts identify robots directly by robot number, RN (= 1, 2, 3), and group facts by group size, RG (= 2, 3).

Table 1. Group and Individual Facts

Fact Name | Fact Type  | Parameters       | Description
gbehf     | Group      | RG, Speed, Dist  | Move the set of robots (RG) forward simultaneously from the current position, with the given distance (Dist) and speed (Speed).
gbehb     | Group      | RG, Speed, Dist  | Move the set of robots (RG) backward simultaneously from the current position, with the given distance (Dist) and speed (Speed).
gbehr     | Group      | RG, Speed, Angle | Turn the set of robots (RG) right simultaneously from the current position, with the given angle (Angle) and speed (Speed).
gbehl     | Group      | RG, Speed, Angle | Turn the mentioned set of robots (RG) left simultaneously from the current position, with the given angle (Angle) and speed (Speed).
ibehf     | Individual | RN, Speed, Dist  | Move the numbered robot (RN) forward from the current position, with the given distance (Dist) and speed (Speed).
ibehb     | Individual | RN, Speed, Dist  | Move the numbered robot (RN) backward from the current position, with the given distance (Dist) and speed (Speed).
ibehr     | Individual | RN, Speed, Angle | Turn the numbered robot (RN) right from the current position, with the given angle (Angle) and speed (Speed).
ibehl     | Individual | RN, Speed, Angle | Turn the numbered robot (RN) left from the current position, with the given angle (Angle) and speed (Speed).

Table 2. List of robot fact parameters

Parameter | Data Type | Value Range                                 | Description
RG        | int       | 2 to N (N = number of robots in the group) | The number of robots to be operated on as a group.
RN        | int       | 1, 2, 3, ..., N                             | Robot identification number, starting from 1 up to N.
Speed     | int       | 1-6                                         | Speed of movement; 1 is the lowest and 6 is the maximum.
Dist      | int       | User defined                                | The distance the user wants covered; it can be calculated from the speed value (Speed) and the size of the robot wheels [12].
Angle     | int       | 1...N                                       | Angle sensor value, required to turn a particular robot left/right by an angle Φ. Each unit of angle sensor value turns the robot by 22.5 degrees.
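Under the calibration in Table 2, the angle parameter 4 used in Section 6 corresponds to 4 x 22.5 = 90 degrees. A trivial helper function (our own, for illustration) makes the conversion explicit:

(deffunction angle-to-degrees (?a)
   ;; each unit of angle sensor value is 22.5 degrees (Table 2)
   (* ?a 22.5))

;; (angle-to-degrees 4) returns 90.0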

6. Experimental Evaluation

The initial positions of the robots are assumed to have a minimum gap between them, and the directions the robots initially face are preset. The design allows the robots to navigate in a known environment, and the entire environment can be programmed in the rule base. Appropriate rules can also be fired if the robots need to return to their original positions. We tested the robots traversing together in a known environment and returning to their initial positions using a simple autonomous reasoning module, and we found the results satisfactory; the exactness of navigation can be improved with precise calibration. The rules guide the robots in a particular predefined direction and also guide them to think as a group. The following rule shows such a group behavior, where three robots move forward and turn right; then the first two robots move back and the third robot moves forward. This rule is an example showing the flexibility of applying individual and group facts.

(defrule group_behavior
   (behavior-is spread-random)
   =>
   (gbehf 3 4 2)    ; 3 robots move forward with speed 4 and distance parameter 2
   (gbehr 3 4 4)    ; 3 robots turn right with speed 4 and angle parameter 4 (= 90 degrees)
   (gbehf 2 4 2)    ; the first 2 robots (1 and 2) move forward with speed 4 and distance parameter 2
   (ibehf 3 4 2))   ; the 3rd robot moves forward with speed 4 and distance parameter 2

The above rule is executed when we assert the following:

(assert (behavior-is spread-random))

In a typical movement with a speed parameter of 6 and a distance parameter of 10, a robot covers one meter.

6.1. Group behaviors with variable priority

There is scope for changing the roles of the robots by assigning different priorities to different behavior patterns, which greatly enhances the power of robot control. For example, in some cases only one robot (leader actions, for example robot 3) may need to move forward while the rest of the robots (1, 2) go backward (follower actions).

(deffacts init                          ; with salience declaration
   (priority first)
   (priority second))

(defrule leader_actions                 ; higher priority rule, leader actions
   (declare (salience 300))
   (priority first)
   (get_light_sensor ?lsv)
   =>
   (if (<= ?lsv 192) then
      (ibehf 3 4 4)                     ; robot 3 alone moves forward with speed 4 and distance 4
      (printout t "Leader is Moving Forward" crlf))
   (bind ?lsv 0))

(defrule worker_actions                 ; lower priority rule, worker actions
   (declare (salience 200))
   (priority second)
   (get_light_sensor ?lsv)
   =>
   (if (> ?lsv 192) then
      (gbehb 2 4 4)                     ; robots 1 and 2 move backward with speed 4 and distance 4
      (printout t "Followers are Moving Backward" crlf))
   (bind ?lsv 0))

Figure 7. Group behaviors with variable priority

Figure 7 describes the role of priority during the firing of rules with a simulated light sensor value. Only one robot performs the forward action, and the rest of the robots move backward to allow the leader robot to perform its actions. When multiple activations are on the agenda, CLIPS automatically determines which activation is appropriate to fire.
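A usage sketch of the Figure 7 rules, assuming an example sensor reading of 150 (our own value): both rules match the reading, salience orders the agenda, and only the leader's right-hand-side test succeeds.

(reset)                          ; loads the init deffacts
(assert (get_light_sensor 150))  ; simulated light sensor value
(run)
;; leader_actions fires first (salience 300); since 150 <= 192 the
;; leader moves forward. worker_actions fires next, but its
;; (> ?lsv 192) test fails, so the followers do not move.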

6.2. Flexibility in dynamic reconfiguration of behaviors

The proposed architecture supports the flexibility to adopt different group behaviors in order to adjust to unexpected situations and to manage failures. This is important in real-time response applications. Figure 8 describes the switching of roles from disciplined behavior to random behavior. In this case, we assume that the light sensor value (lsv) from a sensor connected to the robot triggers the role switch between disciplined actions (only moving forward) and random actions (such as left, right, and forward movements with different speed and angle parameters).

(defrule fire_disciplined_actions
   (get_light_sensor ?lsv)
   =>
   (if (<= ?lsv 200) then              ; lsv <= 200: disciplined actions
      (gbehf 3 4 10)
      (printout t "Performing Disciplined Actions" crlf))
   (bind ?lsv 0))

(defrule fire_random_actions
   (get_light_sensor ?lsv)
   =>
   (if (> ?lsv 200) then               ; lsv > 200: random actions
      (gbehf 3 4 5)
      (gbehr 3 4 4)
      (gbehl 2 4 4)
      (printout t "Performing Random Actions" crlf))
   (bind ?lsv 0))

Figure 8. Rules describing dynamic reconfiguration of behaviors
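Each rule in Figure 8 fires once per asserted reading, so switching roles amounts to feeding a fresh reading. The housekeeping rule below is our own addition, sketched to clear a consumed reading between runs:

;; hypothetical cleanup rule: negative salience makes it run after the
;; behavior rules, retracting the sensor reading they have consumed
(defrule consume_reading
   (declare (salience -100))
   ?f <- (get_light_sensor ?)
   =>
   (retract ?f))

;; (assert (get_light_sensor 150)) (run)  -> "Performing Disciplined Actions"
;; (assert (get_light_sensor 250)) (run)  -> "Performing Random Actions"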

7. Conclusion

We have presented a layered software architecture for group behavior generation by widely spread robots in a wireless sensor network environment. In the proposed wireless network topology, the robot sensor networks are mapped dynamically as the robots move. The network end nodes are mapped directly to the mobile robots to increase the flexibility of interconnection and synchronization; this makes the sensor network dynamic rather than static. Complex group behaviors are generated using rule sets. We have developed two types of facts to support our architecture: group and individual. The group facts perform a variety of operations on an entire group or subgroup, while the individual facts operate only on individual robots, which are identified by a robot number. With this architecture, the reasoning module can autonomously scrap any misbehaving robot from the group. The group and individual facts are added to the CLIPS expert system shell to incorporate intelligent behavior. The architecture is implemented in an RT-Linux environment with CLIPS as the real-time expert system in the upper layer. We used the real-time communication protocol CAN as the sensor network interconnecting the sensors and actuators in the robots.

8. Acknowledgements

This research was supported by the MIC (Ministry of Information and Communication), Korea, under the ITRC (Information Technology Research Center) support program supervised by the IITA (Institute of Information Technology Assessment) (IITA-2006-C1090-0603-0020).

9. References

[1] D. Brogan, "Control of complex physically simulated robot groups", Proc. SPIE Vol. 4512, Complex Adaptive Structures, 2001, pp. 172-184.
[2] K. Kotay, R. Peterson, and D. Rus, "Experiments with Robots and Sensor Networks for Mapping and Navigation", Proceedings of the International Conference on Field and Service Robotics, 2005.
[3] M. A. Batalin, G. S. Sukhatme, and M. Hattig, "Mobile robot navigation using a sensor network", Proceedings of the IEEE International Conference on Robotics and Automation, 2004, pp. 636-641.
[4] K. Foster and T. Hendtlass, "Combining a rule-based expert system and machine learning in a simulated mobile robot control system", Swinburne University of Technology, ISBN 1-58603-394-8, 2003, pp. 361-370.
[5] O. B. Bayazit, J.-M. Lien, and N. M. Amato, "Better group behaviors using rule-based roadmaps", Proceedings of the International Workshop on the Algorithmic Foundations of Robotics (WAFR), 2002.
[6] J.-H. Kim, J. B. Park, H.-S. Yang, and Y.-P. Park, "Generation of Fuzzy Rules and Learning Algorithms for Cooperative Behavior of Autonomous Mobile Robots (AMRs)", FSKD (1), 2005, pp. 1015-1024.
[7] CLIPS, http://www.ghg.net/clips/CLIPS.html
[8] LEGO home page, www.legomindstorms.com
[9] ZigBee Alliance, www.zigbee.org
[10] W. Lawrenz, CAN System Engineering: From Theory to Practical Applications, Springer Verlag, 1997.
[11] L. Ilyes, F. E. Villaseca, and J. DeLaat, "A parallel strategy for implementing real-time expert systems using CLIPS", Third CLIPS Conference Proceedings, 1994.
[12] G. W. Lucas, "Using a PID-based Technique for Competitive Odometry and Dead-Reckoning", http://www.seattlerobotics.org/encoder/200108/using_a_pid.html
[13] HandyWave, http://www.handywave.com/
[14] GLG Toolkit, Generic Logic Inc., http://www.genlogic.com/
