2007 IEEE International Conference on Granular Computing (GrC 2007), Fremont, CA, USA, November 2-4, 2007. DOI 10.1109/GrC.2007.94

Object Recognition Architecture Using Distributed and Parallel Computing with Collaborator

Junhee Lee, Sue J. Lee, Yeon-chool Park and Sukhan Lee

Department of Electrical Engineering and Computer Science, Sungkyunkwan Univ., 300 CheonCheon-dong, Jangan-gu, Suwon, Gyeonggi-do, Republic of Korea.

Department of Electrical Engineering, Stanford Univ., 126 Blackwelder 601 Stanford, CA, USA. {leejunhee,parkpd,lsh}@ece.skku.ac.kr

[email protected]

Abstract

These days, object recognition is regarded as an essential requirement of an intelligent service robot. Under such demands, object recognition algorithms and methods have been increasing in complexity along with the increase in available computational ability. Despite these developments, object recognition still consumes many computational resources, which impedes reduction of the total processing time. The purpose of this paper is to suggest an object recognition software architecture which reduces processing time by applying the component-based approach of COMET (Concurrent Object Modeling and architectural design mEThod), a computational efficiency improvement method. In COMET, the component-based approach reduces total processing time by supporting dynamic distributed and parallel processing. To enable such computation, the surplus computational resources of a nearby collaborator robot can be used for distributed computing through SHAGE, a component management framework based on COMET. For SHAGE to connect physical operation among components, each software function module should be componentized as defined by the COMET component design guideline. This paper componentizes the object recognition software function modules via this guideline and presents the object recognition architecture as the connected relationships among these components. The experimental results show a maximum 43% performance improvement compared to the original multi-feature-evidence recognition framework.

1. Introduction

Nowadays, the tasks of service robots are diversifying as technical development expands the service robot's field of work. Moreover, robots are becoming more affordable and can work in unison with other robots in a networked infrastructure. In an indoor environment, service robots work with collaborators such as other robots and autonomous machines. We can consider three levels of communication between a service robot and its collaborators. First, a robot communicates with another robot or autonomous machine to assign and coordinate their common task. Second, a robot communicates to exchange information and data. Third, a robot can communicate with collaborators to share their resources for computational efficiency. Among these three levels, this paper focuses on the third.

Figure 1. Robot and network infrastructure with computational resources [5].

To guarantee the quality of service, a service robot should complete tasks such as dictation, localization, and recognition. But raising the demanded quality level of a task also increases the computational load. This leads to many problems caused by a lack of computational resources, and further to degradation of the quality of the robot's service. To relieve this computational burden, a robot generally installs multiple single board computers (SBCs) bound by an


internal high-speed LAN to exchange data among the SBCs. These robots also install wireless LAN for level 1 or level 2 communication. Figure 1 shows the structure of a service robot's general computational resource configuration. Although computational ability increases by installing many SBCs, inefficiency remains because each SBC's roles and tasks are pre-assigned according to the hardware character of its tasks. For example, if one SBC is assigned to sound, it works only on sound-related tasks. The main SBC, which is assigned to control the BLDC motors, does not work when the robot is stopped, even if the vision SBC needs more computational resources. The same inefficiency appears when a robot works together with other robots: one robot may sit idle while another robot needs more computational power. This inefficiency reduces the service robot's total performance. This paper starts from this computationally inefficient background.

Object recognition is one representative function that is essential for an intelligent service robot, but it also obstructs real-time operation because of its large consumption of computational resources. The objective of this function is to find the rotation and position of an object. These results are used when a robot grasps an object or performs a control function, so the object recognition result should be accurate enough to satisfy the data quality required for control and grasping. For precise results, object recognition adopts the multi-feature-evidence-based recognition method [2]. The main idea of this method is to synthesize the results from each feature-based recognition module, such as line, SIFT, and color [2][11][12][13]. However, to guarantee real time, this method's framework must trade off recognition quality against the real-time guarantee because of the computational resources consumed in applying multiple feature evidences. This paper adopts the concept of parallel and distributed processing so that the object recognition framework can make optimal use of the surplus resources introduced above and minimize the tradeoff.

COMET (Concurrent Object Modeling and architectural design mEThod) was originally researched in the software engineering field to efficiently develop and manage software across its use, reuse, and re-engineering chain [1]. COMET also includes the concept of configuring and managing an optimal software architecture for efficient resource use [1][10]. SHAGE is an implemented framework of COMET's concepts for managing a robot framework and its software modules [4]. SHAGE manages the robot's software architecture and controls each component to achieve parallel

and distributed computing. A 'software component' is the smallest software unit that can be used for architecture reconfiguration, which changes the relationships among components. There are seven SHAGE component guidelines for a software module to become a component; details are treated in Chapter 2.

There are many approaches to managing resources efficiently or keeping a software architecture optimal. Aurora [6] was developed to supplement a low-cost client's ability with a server's computational resources when the client performs human voice recognition, but this approach uses a server-client model while SHAGE focuses on a load-balanced architecture. As middleware-based approaches, 'Robot Technology Middleware (RT-Middleware)' [8] and 'Open Robot Controller Architecture (ORCA)' [7] also use a component-based approach. They help robot software components to be shared and encourage a business market model by leading robot component standards, rather than reconfiguring the architecture dynamically. The SHAGE framework keeps the architecture optimized, maximizing the robot's resource efficiency by changing each component's physical run position dynamically at runtime. This spirit of computational resource sharing is similar to grid computing [9]: both aim to maximize the efficiency of resource use. However, there are differences, because grid computing shares resources by booking them before running, while the SHAGE framework shares resources by runtime-optimized software architecture reconfiguration [4][14][15].

The rest of this paper is organized as follows. Chapter 2 reviews the SHAGE framework and the component guideline. Chapter 3 explains the recognition framework, architecture, and components used in this paper's experiments. Chapter 4 presents the experiments and their results. Chapter 5 concludes the paper.

2. SHAGE Framework

2.1. Review of the SHAGE Framework

The SHAGE (Self-Healing, Adaptive and Growing SoftwarE) framework was developed based on COMET (Concurrent Object Modeling and architectural design mEThod) to give robot software self-managed functionality. The framework consists of two parts, as shown in Figure 2. There are seven modules inside the dotted line. The outside of the dotted line provides a storage service which saves new information about how a robot should adapt. Even though the robot's outside environment and users are not parts of the framework, the framework should consistently interact with them and decide "when and


how to adapt". Target architecture is software assigned for actual robot's functionality.

Figure 2. Overview of the SHAGE framework [4].

Figure 3. Conceptual view of dynamic deployment [4].

The modules inside the framework are the Monitor, Architecture Broker, Component Broker, Decision Maker, Learner, Reconfigurator, and Repository. The Monitor observes the current state and evaluates the adaptive behavior the framework has performed. On request it can apply diverse criteria, but in the current research the criterion is computational resource. The Architecture Broker searches architecture reconfiguration strategies and abstracts applicable candidates. Moreover, among the candidates, it creates the candidate component compositions which will be applied to the decided architecture. Each candidate component composition is based on the concrete components which the Component Broker has brought from the inside and outside component storage. The Decision Maker determines the most suitable architecture among the candidate architectures which the Architecture Broker provides, using the information the Learner has accumulated. Moreover, among the candidate component compositions, it chooses the one most suitable for the current condition [2]. The Learner accumulates evaluation results about the current adaptive behavior, which come from the evaluator, and the Decision Maker then utilizes them to choose an architecture and component composition.

The Reconfigurator manages the architecture of the currently working robot software. When there is a request for adaptation, it reconfigures the architecture based on the decided architecture and component composition. Among the inside repositories, the ontology repository saves architecture strategies and component ontologies, and the component repository stores executable component code.

The outside of the framework is composed of servers which provide a storage service. Each server has an ontology repository and a component repository. The outside repositories provide a new ontology or component when a robot cannot adapt to the current state with only the ontologies and components from the inside repositories. A repository manager is established on each server; it adds new ontologies or components and provides tools for managing preexisting ones.

The adaptation process starts when the Monitor receives a computational resource state change from the outside environment or a new request from a user. When the observer recognizes a condition under which the software architecture should be changed, it requests the Architecture Broker to adapt to the current state (startAdaptation()). The Architecture Broker requests the current architecture from the Reconfigurator (getCurrentArch()) and then searches candidate architecture reconfiguration strategies in the inside ontology repository (searchArch()). The Architecture Broker has the Decision Maker choose the most suitable strategy among the candidates. Then, to select components to fit the selected architecture, it requests suitable components from the Component Broker (getComponentSet()). The Component Broker searches components in the component repositories (getComponent()). After receiving the components, the Architecture Broker organizes a few component compositions to which the selected architecture can be applied and requests the Decision Maker to choose the best one (selectComposition()). After an architecture and a component composition have been selected, it requests an architectural reconfiguration from the Reconfigurator (reconfigureArch()). The Reconfigurator then adapts the current architecture to the current state by adding, deleting, or changing components, and connectors are set up for communication among components. After the reconstruction, the Reconfigurator notifies the Architecture Broker that the reconfiguration is done, and the Architecture Broker notifies the Monitor that one adaptive process is done. After the adaptation, the evaluator evaluates the adaptive behavior result and transfers it to the Learner. The Learner accumulates the evaluation results for the Decision Maker to use later. The above process is one adaptive cycle.
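As a rough illustration, the following is a minimal, hypothetical Python sketch of this adaptation cycle; the class shapes and return values are stand-ins (only the call sequence follows the text above), not the actual SHAGE API.

class Reconfigurator:
    def get_current_arch(self):
        return "current-architecture"
    def reconfigure_arch(self, composition):
        print("reconfiguring to:", composition)      # add/delete/change components

class OntologyRepository:
    def search_arch(self, current):
        return ["strategy-A", "strategy-B"]          # candidate reconfiguration strategies

class ComponentBroker:
    def get_component_set(self, strategy):
        return ["2DLineExtraction", "LineMatching"]  # concrete components found

class DecisionMaker:
    def select_arch(self, candidates):
        return candidates[0]       # SHAGE scores candidates with learner-provided data
    def select_composition(self, compositions):
        return compositions[0]

def start_adaptation(reconf, repo, broker, decider):
    current = reconf.get_current_arch()                # getCurrentArch()
    strategies = repo.search_arch(current)             # searchArch()
    strategy = decider.select_arch(strategies)         # choose the best strategy
    components = broker.get_component_set(strategy)    # getComponentSet()/getComponent()
    compositions = [(strategy, tuple(components))]     # candidate compositions
    chosen = decider.select_composition(compositions)  # selectComposition()
    reconf.reconfigure_arch(chosen)                    # reconfigureArch()

start_adaptation(Reconfigurator(), OntologyRepository(), ComponentBroker(), DecisionMaker())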


With this adaptive framework, this paper configures and manages an architecture that is optimal with respect to processing time. Thus this paper's criterion is time throughput, not the matching rate or robustness of the recognition modules. The Decision Maker makes its decision by considering constraints which include network bandwidth, the current computational resources, and the quantity of each component's input/output data.

2.2. Component Design Guideline

To configure and manage an optimal architecture, each software function module should be a component, componentized according to the component design guideline in COMET. In SHAGE, the following seven guidelines are used to apply COMET to a robot [4]:

1) A component must not know the location of other components; i.e., a component should not create references to other components.
2) A component must have two types of ports to communicate with other components: required and provided ports.
3) A required port specifies what functionality the component needs, and will be connected to a provided port which implements that functionality.
4) A provided port specifies what functionality the component provides, and will be connected to a required port which needs that functionality.
5) Every component should use message-based communication to use the functionalities of other components.
6) A component has 'modules' to process incoming messages from other components.
7) A component must have a service manager which relays messages from other components to specific modules.
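As an informal illustration of guidelines 1-7, the following minimal Python sketch (our own stand-in, not SHAGE code) shows a component with provided and required ports, message-based communication, and a service manager relaying messages to internal modules.

class Message:
    def __init__(self, port, payload):
        self.port = port          # name of the provided port this message targets
        self.payload = payload    # input data carried with the run request

class Component:
    def __init__(self, name):
        self.name = name
        self.modules = {}         # provided port name -> processing module (guideline 6)
        self.required = {}        # required port name -> (component, provided port)

    def provide(self, port, module):
        self.modules[port] = module

    def connect(self, required_port, other, provided_port):
        # The framework wires ports; the component never creates the
        # reference itself and never knows the other's location (guideline 1).
        self.required[required_port] = (other, provided_port)

    def send(self, required_port, payload):
        other, port = self.required[required_port]             # required -> provided (3, 4)
        return other.service_manager(Message(port, payload))   # messages only (5)

    def service_manager(self, msg):
        # Guideline 7: relay the incoming message to the module behind the port.
        return self.modules[msg.port](msg.payload)

matcher = Component("LineMatching")
matcher.provide("match", lambda lines: "matched %d lines" % len(lines))
extractor = Component("2DLineExtraction")
extractor.connect("need_matching", matcher, "match")
print(extractor.send("need_matching", [1, 2, 3]))   # -> matched 3 lines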

These rules give a component the reconfigurable property of changing its deployment at runtime. A component has three states: 1) load, 2) unload, and 3) work. 'Load' is the state in which the component is merely loaded in memory, consuming no computational power because no 'message' containing input data and a run request has been given to it. 'Unload' is the state in which the component is not loaded in memory; an unloaded component must change its state to 'load' before it can run. 'Work' is the physically working state, which consumes computational power. The state change from 'load' to 'work' is invoked when a message reaches the component. Physically, each component is loaded onto an SBC according to the Decision Maker's decision. A component always stays loaded in memory in its own thread, but does not work or consume resources until given a message. By this mechanism, the

Configuration Manager in the Reconfigurator can manage computational resources by passing a message to a component on an SBC which has surplus computational resources. Figure 3 shows the conceptual structure of the dynamic deployment change and reconfiguration [4][14].
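The following hypothetical Python sketch illustrates the three states and message-driven execution: a loaded component idles in its own thread until a message arrives, so a configuration manager can steer computational load simply by choosing where to route messages. The SBC names and the 'least loaded' routing rule are illustrative assumptions.

import queue
import threading
import time

class ManagedComponent:
    def __init__(self, name):
        self.name = name
        self.state = "unload"            # unload -> load -> work -> load ...
        self.inbox = queue.Queue()

    def load(self):
        self.state = "load"              # resident in memory, consuming no CPU
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            payload = self.inbox.get()   # blocks: no work until a message arrives
            self.state = "work"          # physically working now
            print(self.name, "processing", payload)
            self.state = "load"          # back to idle residency

class ConfigurationManager:
    def __init__(self, replicas):
        self.replicas = replicas         # the same component loaded on several SBCs

    def dispatch(self, payload, cpu_load):
        sbc = min(cpu_load, key=cpu_load.get)   # route to the SBC with most surplus
        self.replicas[sbc].inbox.put(payload)

replicas = {sbc: ManagedComponent("SIFTMatching@" + sbc) for sbc in ("Main", "Voice", "Vision")}
for c in replicas.values():
    c.load()
ConfigurationManager(replicas).dispatch("scene-42", {"Main": 0.9, "Voice": 0.2, "Vision": 0.7})
time.sleep(0.1)                          # let the worker thread print before exit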

3. Object Recognition Architecture

To grasp an object, a robot should find two pieces of information: 1) identification and 2) location [16]. The robot's object recognition process has two main steps. Initially, the target object's ID, selected by the high-level task manager, is sent to the recognition module. The robot then compares the scene image with the model pre-stored in the DB to estimate whether the object exists (identification). If the object exists in the scene image, the camera-to-object rotation and translation matrix (POSE) is estimated. The inputs of recognition are therefore the 2D and 3D scene images. The output of recognition is a 4x4 POSE matrix combining a 3x3 rotation matrix and a 3x1 translation matrix [17].
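In homogeneous form, with R the 3x3 rotation matrix and t the 3x1 translation vector, this output can be written as the standard transform (the paper gives only the dimensions; the layout below is the usual convention):

    T = \begin{bmatrix} R_{3\times 3} & t_{3\times 1} \\ 0_{1\times 3} & 1 \end{bmatrix} \in \mathbb{R}^{4\times 4}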

An object's characteristic visual properties are used to recognize it. Typical examples are a characteristic shape, such as the object's outer wire frame; a photometric characteristic, such as the patterns on the object's surface; and a characteristic color which represents the object. This paper uses three features to recognize an object: 1) the line feature (relationships between lines); 2) the SIFT feature (scale-invariant key points) [11]; and 3) the color feature. Each feature-based recognition module can recognize the object independently, and the results from the modules can be fused to raise accuracy [13]. Figure 4 shows a brief flow of the multi-feature-evidence object recognition process.

Figure 4. Overall Recognition Flow.

In Figure 4, the input data of each feature-based recognition module are the 2D and 3D scene images. In the experiments, the 2D scene image's size is 640 x 480 x 3 (BGR) x 1 byte (unsigned char) = 921,600 bytes, and the 3D scene image's size is 640 x 480 x 4 (BGR and depth) x 4 bytes (float) = 4,915,200 bytes. So 5,836,800 bytes are sent to


each feature-based recognition component. This size is small enough over a gigabit LAN network interface card compared with the decidedly large processing time of each feature-based module; it takes only four packets of 1.5-Mbyte MTU size to send.
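A quick arithmetic check of the quoted payload sizes, in Python:

w, h = 640, 480
scene_2d = w * h * 3 * 1           # BGR, 1 byte (unsigned char) per channel
scene_3d = w * h * 4 * 4           # BGR + depth, 4-byte floats per channel
print(scene_2d, scene_3d, scene_2d + scene_3d)   # 921600 4915200 5836800 bytes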

The specific descriptions of each feature-based recognition sub-component, and the constraints that consider the quantity of input/output data, are as follows.

3.1. Line Feature Based Recognition

An object can be recognized by the characteristic shape of its lines. This feature is beneficial for recognizing objects which have a vivid wire frame but little texture, such as a refrigerator or a book shelf. However, it is hard to identify objects that have similar wireframes. Figure 5 shows the flow of line-feature-based object recognition and an example.

Figure 5. Line feature based recognition: (a) flow diagram; (b) line feature model example.

The line-feature-based object recognition component consists of the following sub-components and flow. First, 2D lines are extracted from the given scene image by the Canny edge method (2D line extraction). The 2D lines are converted into 3D lines by mapping corresponding 3D points (3D line detection). The detected 3D lines are matched to the models in the database by their connectivity and relations (line matching). Finally, the object's POSE is estimated by comparing the matched lines against the model's scale and rotation (line-based POSE estimation). Considering the input data, '2D line extraction (Canny edge)' and '3D line detection' can change deployment only between wire-connected SBCs. After '3D line detection', the deployment of components can change freely over wired or wireless links, because the messages between components, such as the '3D line set' and 'POSE candidate set', are very small (less than 1 KB).
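As a hedged sketch of the first two stages, the snippet below uses OpenCV's Canny detector, which the paper names; the Hough-based line grouping and the depth-lookup lifting are our own simplifications, since the paper does not specify them.

import cv2
import numpy as np

def extract_2d_lines(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                  # 2D line extraction stage
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=5)
    return [] if lines is None else [l[0] for l in lines]

def lift_to_3d(lines_2d, xyz):
    # 3D line detection stage: map each 2D endpoint to its registered 3D point.
    return [(xyz[y1, x1], xyz[y2, x2]) for x1, y1, x2, y2 in lines_2d]

bgr = np.zeros((480, 640, 3), dtype=np.uint8)
cv2.line(bgr, (100, 100), (400, 120), (255, 255, 255), 2)   # synthetic edge
xyz = np.ones((480, 640, 3), dtype=np.float32)              # fake 3D point map
print(len(lift_to_3d(extract_2d_lines(bgr), xyz)), "3D line segment(s)")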

3.2. SIFT Feature Based Recognition

The SIFT feature can be used to recognize a fully textured object via the object's photometric property. SIFT-feature-based object recognition is generally robust for recognizing a shaded object or an object under poor lighting conditions. However, this feature is weak for texture-less objects such as glass or a refrigerator. Figure 6 shows the flow of SIFT-based object recognition and an example; the red dots are extracted SIFT key points.

Figure 6. SIFT feature based recognition: (a) flow diagram; (b) example of SIFT key points.
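As a rough illustration of the Figure 6 flow, the sketch below uses OpenCV's SIFT implementation and a brute-force matcher with Lowe's ratio test; these library choices are assumptions, not the paper's actual code.

import cv2
import numpy as np

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

scene = np.random.randint(0, 255, (480, 640), dtype=np.uint8)   # stand-in images
model = np.random.randint(0, 255, (200, 200), dtype=np.uint8)

kp_s, des_s = sift.detectAndCompute(scene, None)   # SIFT key point extraction
kp_m, des_m = sift.detectAndCompute(model, None)

if des_s is not None and des_m is not None:
    pairs = matcher.knnMatch(des_m, des_s, k=2)    # SIFT matching
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    print(len(good), "candidate correspondences for POSE estimation")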

This feature-based component consists of three sub-components: 'SIFT key point extraction', 'SIFT matching', and 'SIFT-feature-based POSE estimation'. The process of SIFT object recognition is as follows. First, SIFT points are extracted from the given 2D image (SIFT key point extraction). The extracted SIFT points generate a candidate SIFT point set by matching against the model's SIFT points in the database (SIFT matching). Finally, candidate camera-to-object positions and rotations are generated by comparison among the models and candidate sets (SIFT-based POSE estimation). Among these sub-components, 'SIFT key point extraction' and 'SIFT matching' can change deployment only between wire-connected SBCs, because these two components need the scene image. However, 'SIFT-feature-based POSE estimation' can consume computational resources on any SBC connected by a wired or wireless network.

3.3. Color Feature Based Recognition

If there is a remarkable characteristic color in the object, the color feature can be used to recognize it. A red ball or a blue book is a typical example to which color-feature-based object recognition can be applied. This method recognizes an object quickly because it uses only simple data and algorithms, so the module consumes few computational resources. That is a merit when the robot lacks resources but must recognize an object quickly. However, this method basically cannot find the camera-to-object rotation. Figure 7 shows the color-feature-based object recognition flow and an example of color region segmentation. Object candidate regions are segmented from the given 2D scene image by thresholding against the modeled hue value of the object (color matching and color region segmentation). Then plausible candidate sets are segmented and filtered by their size and compared with their near environments (neighboring and filtering). Then the center points of the 3D points corresponding to each candidate set are estimated per object (camera-to-object translation). As Figure 7 shows, only 'neighboring and filtering' can change its deployment over the wireless network.

Figure 7. Color feature based recognition: (a) flow diagram; (b) color region segmentation.
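A minimal sketch of the hue-threshold segmentation and size filtering described above; the HSV range (here, a red object) and the area cutoff are illustrative assumptions.

import cv2
import numpy as np

def color_candidates(bgr, hue_lo, hue_hi, min_area=200):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Color matching and region segmentation: threshold the modeled hue band.
    mask = cv2.inRange(hsv, (hue_lo, 80, 80), (hue_hi, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Neighboring and filtering: keep only regions of plausible size.
    return [c for c in contours if cv2.contourArea(c) >= min_area]

img = np.zeros((480, 640, 3), dtype=np.uint8)
cv2.circle(img, (320, 240), 40, (0, 0, 255), -1)    # synthetic red ball (BGR)
print(len(color_candidates(img, 0, 10)), "candidate region(s)")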

3.4. POSE Estimation: Fusion and Filtering

Each POSE estimated by the modules of Sections 3.1-3.3 should be fused and filtered into a synthesized estimate with assigned probabilities. The fusion and filtering module consists of two main steps. First, a propagation matrix is generated by comparing the current and previous scenes' odometry and head pan/tilt data. Then the current n candidate sets are filtered against the previous result and the propagation matrix by the particle filter method [2][13]. Because these two steps have a small computational load, there is no merit in separating them, so we did not divide this module into sub-components. Since this component's input/output data is very small, it can be executed on any wired or wireless connected SBC.
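The following is a minimal sketch of the two steps under stated assumptions: particles carry only the translation part, the propagation is a simple additive odometry delta with Gaussian noise, and the weighting uses the distance to the nearest candidate POSE; none of these specifics come from the paper.

import numpy as np

rng = np.random.default_rng(0)

def propagate(particles, odom_delta, sigma=0.01):
    # Step 1: apply the propagation built from the odometry / pan-tilt change.
    return particles + odom_delta + rng.normal(0.0, sigma, particles.shape)

def particle_filter_step(particles, candidates, meas_sigma=0.05):
    # Step 2: weight each particle by its nearest candidate and resample.
    d = np.linalg.norm(particles[:, None, :] - candidates[None, :, :], axis=2).min(axis=1)
    w = np.exp(-0.5 * (d / meas_sigma) ** 2)
    w /= w.sum()
    return particles[rng.choice(len(particles), size=len(particles), p=w)]

particles = rng.normal(0.0, 0.1, (500, 3))        # translation hypotheses only
candidates = np.array([[0.02, 0.0, 0.01]])        # e.g. output of one feature module
particles = propagate(particles, np.array([0.01, 0.0, 0.0]))
particles = particle_filter_step(particles, candidates)
print("fused translation estimate:", particles.mean(axis=0))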

Table 1. List of the components and their deployment permissions (O = permitted, X = not permitted).

Component                              Local   Wireless
2D Line Extraction                     O       X
3D Line Detection                      O       X
Line Matching                          O       O
Line Feature based POSE Estimation     O       O
SIFT Key point Extraction              O       X
SIFT Matching                          O       X
SIFT Feature based POSE Estimation     O       O
Color Matching & Segmentation          O       X
Neighboring                            O       O
Color Feature based POSE Estimation    O       X
Data Fusion & Filtering                O       O

4. Experiments and Results

In this chapter, we perform the experiments and compare the results of each step with the components described in Chapter 3. The specifications of the single board computers (SBCs) on robot 1 (TROT) are: 1) CPU: Intel Pentium 4, 2.6 GHz; 2) RAM: 512 MB; 3) LAN: 1000 Mbps. The SBCs on robot 2 (the Infotainment Robot) are: 1) CPU: Intel Pentium 4, 2.0 GHz; 2) RAM: 512 MB; 3) LAN: 1000 Mbps. The two robots are connected by a 54 Mbps wireless link, as shown in Figure 1. Robot 1 has four SBCs: 1) main SBC; 2) voice SBC; 3) vision SBC; 4) manipulation SBC. Robot 2 has three SBCs: 1) main SBC; 2) voice SBC; 3) vision SBC. Originally, each SBC had its own duty related to its physical device, but SHAGE uses all SBCs by separating hardware-independent software from hardware-bound software. Table 1 shows the component list and deployment permissions, which consider the input/output data and the network bandwidth constraint. To evaluate iteration time, we do not include the time for initializing and closing. In the SHAGE framework, a component bound to a hardware device is not a 'reconfigurable module'; such a component always works on its related SBC. In this paper, the 'image supplier' is one example of a component which cannot be reconfigured. More detailed descriptions of each experiment follow.

InitializeModule()
00.  Initialize Fusion & Filtering Module;
01.  Initialize Line based POSE Estimation Module;
02.  Initialize SIFT based POSE Estimation Module;
03.  Initialize Color based POSE Estimation Module;

CloseModule()
04.  Close Fusion & Filtering Module;
05.  Close Line based POSE Estimation Module;
06.  Close SIFT based POSE Estimation Module;
07.  Close Color based POSE Estimation Module;

GetCurrentSceneImage()
08.  Return (Current Scene 2D & 3D Image);

GetPropagationMatrix()
09.  Return (Propagation Matrix);

Main()
10.  InitializeModule();
11.  while (forever)
12.    GetCurrentSceneImage();
13.    GetPropagationMatrix();
14.    do Line based POSE Estimation;
15.    do Fusion & Filtering;
16.    do SIFT based POSE Estimation;
17.    do Fusion & Filtering;
18.    do Color based POSE Estimation;
19.    do Fusion & Filtering POSE Estimation;
20.    if (End Request) break;
21.  CloseModule();

Figure 8. Pseudo code of experiment A: a mass (monolithic) software module.


4.1. Before Componentization: A Mass (Monolithic) Software Module

This experiment was executed with a mass recognition module which is not componentized. To obtain highly accurate data and guarantee robustness, for one scene image captured by the stereo camera at time t, the three feature methods are used to recognize the object and estimate its POSE, and the results are then filtered. Each feature-based recognition function runs once per iteration. The resulting POSE matrices are passed to the fusion and filtering function by being written to a hard-bound shared memory structure. The fusion and filtering function generates one iteration's estimated POSE, as in Section 3.4. Figure 8 shows experiment A's pseudo code; the estimated object POSE for the scene captured at time t is generated at line 19.

The average iteration time over 1000 runs from line 12 through line 19 is 1184 ms. This means the robot needs about 1.2 seconds to recognize one object in one scene. With all the features, the result quality can be high, but it is not fast enough for a robot to work in real time. Therefore, there must be a tradeoff between time and recognition accuracy to guarantee real-time operation.

Table 2. Deployment decision result.

Component                              SBC
2D Line Extraction                     Main
3D Line Detection                      Voice
Line Matching                          Vision
Line Feature based POSE Estimation     Main
SIFT Key point Extraction              Vision
SIFT Matching                          Mani
SIFT Feature based POSE Estimation     Voice
Color Matching & Segmentation          Main
Neighboring                            Mani
Color Feature based POSE Estimation    Voice
Data Fusion & Filtering                Vision

4.2. Componentized & perform on SHAGE framework: without Collaborator In this experiment, dividend modules in experimental A have been a component by guide line which described in chapter 2, [14]. As an independent thread,

each component is executed parallel on SHAGE framework while SHAGE framework is keeping optimal architecture by runtime dynamic reconfiguration. Since the “Decision Maker” considers the constraint, resource monitoring data, pre-gauged computational burden and related probability under the condition that there’s no near collaborator robot detected. The deployments of each component are showed in table 2. Figure 9 shows the overall architecture of this experiment’s recognition specific components.

Figure 9. Overall recognition architecture.

The software architecture in Figure 9 is constructed by the 'message pass' method: each component creates output and passes it to its output port whenever input data is inserted at its input port, regardless of which component is on the other side. The SHAGE framework makes and keeps the architecture by managing the messages among components. Experiment B was executed under the same environmental conditions as experiment A. Because one iteration in experiment A means the estimation time starting from the scene image, experiment B measures the time from the image supplier to the final POSE estimation for the scene at time t in Figure 9. An average of 960 ms was obtained over 1000 iterations. Comparing experiment A's 1184 ms with experiment B's: 1184/960 = 1.233, a 23% performance improvement.

4.3. Componentized and Performed on the SHAGE Framework: With a Collaborator

This experiment was executed under the same conditions as experiment B, except for the existence of a collaborator robot within ad-hoc wireless reach. Under this condition, the 'Decision Maker' decides the deployments of the components as in Table 3.

Table 3. Deployment decision result.

Component                              Robot   SBC
2D Line Extraction                     1       Main
3D Line Detection                      1       Main
Line Matching                          2       Vision
Line Feature based POSE Estimation     2       Main
SIFT Key point Extraction              1       Vision
SIFT Matching                          1       Vision
SIFT Feature based POSE Estimation     2       Voice
Color Matching & Segmentation          1       Main
Neighboring                            1       Mani
Color Feature based POSE Estimation    1       Voice
Data Fusion & Filtering                1       Vision

In experiment C, an average of 826 ms was obtained over 1000 iterations, measured as in experiments A and B. Comparing experiment A's 1184 ms with experiment C's: 1184/826 = 1.433, a 43% performance improvement. Comparing with experiment B: 960/826 = 1.162, so 16% is the performance improvement contributed by collaboration.
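The reported ratios can be checked directly:

t_a, t_b, t_c = 1184, 960, 826        # average ms over 1000 iterations
print("A/B:", round(t_a / t_b, 3))    # 1.233 -> ~23% improvement over the monolith
print("A/C:", round(t_a / t_c, 3))    # 1.433 -> ~43% improvement
print("B/C:", round(t_b / t_c, 3))    # 1.162 -> ~16% contributed by the collaborator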

5. Conclusion

In this paper, we suggest a new computational architecture for the multi-evidence recognition framework by applying COMET and componentizing its functions by the SHAGE component guideline. It is a successful example of configuring and managing an optimal architecture. The experimental results show a maximum 43% improvement.

References

[1] G. T. Heineman and W. T. Councill, Component-Based Software Engineering: Putting the Pieces Together, Addison-Wesley, 2001.
[2] Sukhan Lee, Seongsoo Lee, Jeihun Lee, Dongju Moon, Eunyoung Kim and Jeonghyun Seo, "Robust Recognition and Pose Estimation of 3D Objects Based on Evidence Fusion in a Sequence of Images," IEEE ICRA 2007.
[3] Intelligent Robotics Program, one of the 21st Century Frontier R&D Programs funded by the Ministry of Commerce, Industry and Energy of Korea. Available at http://www.irobotics.re.kr/.
[4] D. Kim, S. Park, Y. Jin, H. Chang, Y.-S. Park, I.-Y. Ko, K. Lee, J. Lee, Y.-C. Park, and S. Lee, "SHAGE: A framework for self-managed robot software," in Proceedings of the Workshop on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), 2006.
[5] Detailed specifications available at http://www.dasatech.co.kr.
[6] M. Holmberg, D. Gelbart, and W. Hemmert, "Automatic speech recognition with an adaptation model motivated by auditory processing," IEEE Transactions on Audio, Speech and Language Processing, vol. 14, pp. 43-49, 2006.

[7] A. Makarenko, A. Brooks, and T. Kaupp, "Orca: Components for robotics," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2006), Workshop on Robotic Standardization.
[8] N. Ando, T. Suehiro, K. Kitagaki, T. Kotoku, and Woo-Keun Yoon, "RT-middleware: Distributed component middleware for RT (robot technology)," 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005).
[9] Z. Nemeth, G. Gombas, and Z. Balaton, "Performance evaluation on grids: Directions, issues, and open problems," Proceedings of the 12th Euromicro Conference on Parallel, Distributed and Network-Based Processing, 2004.
[10] P. Oreizy, M. M. Gorlick, R. N. Taylor, D. Heimbigner, G. Johnson, N. Medvidovic, A. Quilici, D. S. Rosenblum, and A. L. Wolf, "An architecture-based approach to self-adaptive software," IEEE Intelligent Systems, 14(3):54-62, May 1999.
[11] D. Lowe, "Object recognition from local scale-invariant features," in Proc. 7th International Conf. on Computer Vision (ICCV'99), pp. 1150-1157, Kerkyra, Greece, September 1999.
[12] M. F. S. Farias and J. M. de Carvalho, "Multi-view technique for 3D polyhedral object recognition using surface representation," Revista Controle & Automacao, pp. 107-117, 1999.
[13] Sukhan Lee, Eunyoung Kim and Yeonchool Park, "3D object recognition using multiple features for robotic manipulation," IEEE International Conf. on Robotics and Automation, pp. 3768-3774, May 2006.
[14] D. Kim and S. Park, "Designing dynamic software architecture for home service robot software," in E. Sha, S.-K. Han, C.-Z. Xu, M. H. Kim, L. T. Yang, and B. Xiao, editors, IFIP International Conference on Embedded and Ubiquitous Computing (EUC), volume 4096, pages 437-448, 2006.
[15] M. Kim, S. Kim, S. Park, M. Choi, M. Kim, and H. Gomaa, "UML-based service robot software development: A case study," in Proceedings of the 28th International Conference on Software Engineering, Shanghai, 2006.
[16] E. Trucco and A. Verri, Introductory Techniques for 3-D Computer Vision, Prentice Hall, p. 248.
