Focus support interface based on actions for collaborative learning




Neurocomputing 73 (2010) 669–675, doi:10.1016/j.neucom.2009.05.020



Yuki Hayashi, Tomoko Kojiri, Toyohide Watanabe

Department of Systems and Social Informatics, Graduate School of Information Science, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603, Japan

Article info

Available online 20 November 2009

Keywords: Round-table interface; CSCL; Awareness; Learning activity; Focusing intention


Abstract

With the development of information and communication technologies, learners can easily study with others in distributed environments. However, it is still hard for them to share interactions with others efficiently because of the limited communication means. In order for learners to study collaboratively with others and become immersed in learning, it is important for them to grasp directly the actions that occur in the learning environment, such as making utterances, facing other learners, writing memos, and so on. Moreover, they should observe the collaborative learning environment appropriately according to their focusing intentions. In this paper, we analyze the activities that occur in the collaborative learning environment and propose a method for detecting the focusing intention of the learner. We then address the effective view change based on the focusing intention in the collaborative learning environment. In our experiment, the method could detect 70% of the learners' focusing targets correctly.

© 2009 Elsevier B.V. All rights reserved.

1. Introduction

With the rapid development of information and communication technologies, learners can study with others without sharing the same physical space [1,2]. Computer-supported collaborative learning (CSCL) is a learning style in which multiple learners study collaboratively in a shared virtual space. However, because network bandwidth is limited, learners cannot acquire much information about the other learners' situations or behaviors.

In order to support communication in virtual environments, several applications have been developed that introduce camera images of participants [3,4]. These systems provide communication functions such as text chat, voice chat, shared whiteboard tools, and so on. In addition, participants can see the facial expressions of others in the camera images. However, since the camera images of individual participants are displayed in separate windows, it is difficult for participants to feel that they share the same environment. On the other hand, in the research field of computer-supported cooperative work (CSCW), a considerable number of studies have pursued realism comparable to real-world interaction, such as eye contact, room acoustics, and so on. Kauff and Schreer [5] proposed a 3D video-conference system in which participants can make eye contact and gesture around a table in a virtual environment. However, in order to collaborate with other learners smoothly and effectively in a virtual learning environment, it is important for learners not only to feel this reality but also to actively understand other learners' actions and situations. Since collaborative learning progresses through learners' actions, the learning environment should be designed to present those actions according to learners' focusing intentions.

For the purpose of providing a learning environment that can be manipulated based on learners' intentions, several support systems embedding the concepts of "face-to-face" or "awareness" have been introduced [6,7]. Ogata and Yano [8] suggested the "knowledge awareness" concept to encourage collaboration and proposed an open-ended collaborative learning system. Knowledge awareness gives learners information about other learners' activities in a shared environment. However, this system cannot provide learners with the situation, phenomena, and context of the whole learning environment. Awareness information about objects in the learning environment, such as other learners and their possessions, should be grasped by learners dynamically.

To make learners concentrate on the learning process while communicating with others effectively, we proposed a round-table interface where learners can grasp the learning situation from their own views according to their focusing intentions [9]. In this interface, other learners were represented by their camera images and were arranged around a round table in the 3-dimensional learning environment. The learner's focusing target was situated in the center of the learner view. In addition, the learner view was changed automatically according to the focusing degree for the learner's target.


In order to estimate the learner's focusing target, the method for calculating focusing degrees for other learners proposed by Kojiri et al. [10] was applied. In this method, the focusing degree was determined by the type, target, and contents of an utterance. The experimental results showed that the view change in the round-table interface can help learners understand other learners' expressions and communicate with them in a more natural manner. However, since the detection method did not consider learners' actions other than making utterances, the focusing target was sometimes determined inappropriately [11]. Therefore, it is necessary to improve the mechanism for calculating the focusing degree so that it considers not only utterances but also actions. In the real world, learners usually change their focusing targets based on actions such as looking at other learners and their private work, as well as on utterances. The focusing intentions of learners are also affected by series of actions, which are called activities. By considering learning activities in determining the focusing degree, the focusing intentions of learners can be grasped more correctly. Thus, the detection method of the focusing degree should be modified to consider the various activities that occur during learning. In addition, the learner view in the interface must be organized based on the estimated focusing target.

In this paper, we propose a method of calculating focusing degrees and detecting the focusing target based on collaborative learning activities. In order to detect the focusing targets of learners, seven types of action transitions and the target learner for each transition are defined. Focusing degrees for each learner are calculated based on the actions so as to increase the degrees for their target learners. In addition, the learner view is changed automatically according to the focusing degree for the learner's focusing target. In order to change the size of the focusing target, the distance between the learner and the focusing target is manipulated according to the focusing degree. By reflecting learners' focusing intentions in our round-table interface, learners can grasp the learning situation more easily and naturally. Our research provides a fundamental way to handle and represent learners' focusing intentions in a virtual learning environment.

The remainder of this paper is structured as follows. In Section 2, the focusing target learner based on learners' actions is discussed; Section 2 also explains the learner's view change in our learning environment according to the focusing intention. In Section 3, the calculation method of focusing degrees for each learner and the modification mechanism are described. The display method of the learner view according to the calculated focusing degrees is described in Section 4. Then, the windows of our prototype system and their operations are described in Section 5. In Section 6, the result of an experiment that evaluates the appropriateness of our detection method is shown. Finally, the conclusion of this research and our future work are briefly described in Section 7.

Fig. 1. Actions from Learner X.


Fig. 2. Actions from other learner.

2. Approach

2.1. Focusing target learner based on learning activities

To promote smooth communication among learners and to help each learner's individual learning succeed, Watanabe [12] defined the concepts of field sharing and space sharing. The concept of space sharing demands a collaboration environment in which all learners coexist in the same active space at the same time. On the other hand, field sharing expects every learner to interact with others by observing their learning activities directly. In the real world, learners perform various actions while discussing with others so as to accomplish a common exercise, such as focusing on others, writing on memo-sheets, exchanging ideas with other learners, and so on. If learners are aware of such actions in the virtual collaborative learning environment, the interaction among learners becomes smoother and more effective. Therefore, the focusing intention of the learner should be calculated according to the learning activities and reflected in the learner view through the interface.

Currently, we focus on a virtual learning environment where learners can make utterances using text chat and write down their solutions and ideas on their own memo-sheets. In this learning environment, the individual learners and their memo-sheets are the objects related to the learning. During the learning, learners perform actions such as making utterances to one or more other learners, writing down ideas on their own memo-sheets, observing another learner's memo-sheet, looking at other learners, and so on. In all actions, a learner is an active object. On the other hand, a memo-sheet is observed and activated by its owner or by other learners, so it is a passive object. If a learner makes an action toward another learner, he/she may be interested in that target. Since memo-sheets represent their owners' answers and ideas, an action toward a memo-sheet may represent interest in its owner. If a learner makes an action toward his/her own memo-sheet, he/she may be concentrating on solving the exercise by himself/herself.

The focusing target of the learner is changed through these actions. We classified learners' actions into two types: private actions and public actions. A private action is performed by the learner on his/her own object; in our learning environment, writing on or observing one's own memo-sheet is a private action. On the other hand, public actions are actions from a learner toward one or more other learners or their objects. The action transitions among learning objects can be classified into seven patterns. Figs. 1 and 2 show the actions that can occur in our learning environment. In these figures, the nodes named learner X, learner A, and learner B correspond to individual learners and their memo-sheets. The source of an arrow indicates the learner who makes an action, and the direction of the arrow shows the target of the action. We summarize the change of focusing intention of learner X for each action in Table 1. Table 1 describes learner X's target learner (the learner on whom learner X may have the focusing intention). Action 1 is performed by the learner on his/her own memo-sheet; therefore, action 1 is regarded as a private action.


Table 1. Target learner of learner X based on action.

Action | From | To | Target learner for action
1. Writing down to / observing learner X's memo-sheet | Learner X | Learner X's memo-sheet | Learner X
2. Making utterance to all learners | Learner X | All learners | Undefined
3. Making utterance to learner B / observing learner B's memo-sheet | Learner X | Learner B and B's memo-sheet | Learner B
4. Making utterance to learner X / looking at learner X / observing learner X's memo-sheet | Learner B | Learner X or X's memo-sheet | Learner B
5. Making utterance to all learners | Learner B | All learners | Learner B
6. Writing down to / observing learner B's memo-sheet | Learner B | Learner B's memo-sheet | Learner B
7. Making utterance to learner A / looking at learner A / observing learner A's memo-sheet | Learner B | Learner A and A's memo-sheet | Learner A

Fig. 3. Learner view changed by private action.

Fig. 4. Learner view changed by public action.


The other actions (actions 2–7) are defined as public actions because they are directed at other learners or their memo-sheets. As a result of action 1, learner X himself/herself is the target learner. On the contrary, the target learner becomes another learner for actions 3–7. Action 2 corresponds to learner X making an utterance to all learners; in this case, it is difficult to estimate which learner will answer the utterance, so the target learner cannot be defined. We describe the target learner of each action below.

Action 1: when X operates his/her own memo-sheet, the target learner is changed to X, because X's direction of view turns to his/her memo-sheet.

Action 2: when X makes an utterance to all learners (A and B), X expects answers from the other learners. It is difficult to estimate which learner will answer the utterance, so the target learner is not changed by this action.

Action 3: when X makes an action toward B, X expects information from B. Therefore, the target learner is changed to B.

Action 4: when B makes an utterance to X or monitors X's memo-sheet, B is detected as X's target learner, since X may want to observe B's learning action.

Action 5: when B makes an utterance to all learners (X and A), X may want to grasp information about B, such as B's facial expression and the contents of B's memo-sheet, so the target learner is changed to B.

Action 6: when B manipulates his/her own memo-sheet, X may become curious about B or B's memo-sheet. Therefore, the target learner is changed to B.

Action 7: when B makes an action toward A, X wants to see A's reaction, such as A's facial expression. Therefore, the target learner is changed to A.
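To make this mapping concrete, the following sketch (not part of the original paper) encodes the seven transitions of Table 1 as a simple lookup from learner X's point of view; the enum member names and the function target_learner are illustrative.

```python
from enum import Enum, auto
from typing import Optional

class Action(Enum):
    """The seven action transitions of Table 1, viewed from learner X."""
    X_OWN_MEMO = auto()        # 1: X writes to / observes X's own memo-sheet (private action)
    X_UTTERS_TO_ALL = auto()   # 2: X makes an utterance to all learners
    X_ACTS_ON_B = auto()       # 3: X utters to B or observes B's memo-sheet
    B_ACTS_ON_X = auto()       # 4: B utters to X, looks at X, or observes X's memo-sheet
    B_UTTERS_TO_ALL = auto()   # 5: B makes an utterance to all learners
    B_OWN_MEMO = auto()        # 6: B writes to / observes B's own memo-sheet
    B_ACTS_ON_A = auto()       # 7: B utters to A, looks at A, or observes A's memo-sheet

def target_learner(action: Action, x: str, a: str, b: str) -> Optional[str]:
    """Return learner X's target learner for an action, or None when undefined (action 2)."""
    table = {
        Action.X_OWN_MEMO: x,
        Action.X_UTTERS_TO_ALL: None,   # the answering learner cannot be predicted
        Action.X_ACTS_ON_B: b,
        Action.B_ACTS_ON_X: b,
        Action.B_UTTERS_TO_ALL: b,
        Action.B_OWN_MEMO: b,
        Action.B_ACTS_ON_A: a,
    }
    return table[action]

print(target_learner(Action.B_ACTS_ON_A, "X", "A", "B"))  # -> A
```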

2.2. Learner’s view change according to focusing target

In the real world, a learner commonly observes various phenomena from his/her view according to the focusing intention. The learner puts his/her focusing target in the center of the view and observes it carefully. Moreover, the size of the focusing target in the learner view changes according to his/her focusing degree. Namely, when the learner focuses on the target learner, the focusing target becomes large in his/her view. On the other hand, the size of the focusing target becomes smaller in the learner view if he/she does not pay attention to the current focusing target. To attain smooth and effective learning, the interface should display an appropriate learner view, as in the real world, according to the focusing degree for the focusing target.

As private actions, the learner writes on or observes his/her own memo-sheet. If the learner concentrates on his/her private actions, he/she looks down in the direction of the memo-sheet. If the learner focuses on public actions, he/she looks up to see the focusing target. Hence, the learner view moves between the other learners and his/her memo-sheet according to the focusing degree for himself/herself. Fig. 3 illustrates the change of the learner's view direction from learner B to his/her own memo-sheet. On the contrary, the learner view toward other learners is also changed based on public actions. When the learner focuses on another learner or memo-sheet, the direction of the learner view turns to the focusing target so as to observe its detailed information. In addition, if the focusing target is changed to another learner, the learner moves his/her own view to the new focusing target. In Fig. 4, the focusing target of the learner is changed from learner B to A based on public actions, so the direction of the learner's view turns to learner A.


Fig. 5. Distance between learner and center of round-table.

Fig. 6. Angle between other learner and learner's own memo-sheet.


In our interface, the objects displayed in the learner view change automatically according to the learner's focusing target. In order to reflect the focusing intention of the learner, focusing degrees for all learners, including the learner himself/herself, are calculated based on actions. Then, the learner who has the largest focusing degree is determined as the focusing target. The focusing target appears in the center of the interface window. When the learner himself/herself is the focusing target, the learner's direction of view turns to his/her memo-sheet. Moreover, the size of the focusing target is changed according to the focusing degree for the focusing target. In order to change the size of the target, the distance between the learner and the focusing target is changed according to the focusing degree. From the position assigned based on the focusing degree, the direction of the learner view is changed up-and-down and right-and-left toward the focusing target.

3. Detection of focusing target

3.1. Formula of calculating focusing degree

The target learner changes according to the actions shown in Table 1. In order to determine the focusing target, the focusing degrees for individual learners are calculated by Expression (1). F(n, t) corresponds to the focusing degree for learner n at time t, N is the set of learners in the learning environment, and α_i is a constant that represents the change of the focusing degree caused by a certain action i. α_i expresses the effect of each action on the focusing intention and is defined for each action.

$$
F(n,\,t+1)=
\begin{cases}
\dfrac{F(n,t)+\alpha_i}{\displaystyle\sum_{n'\in N}F(n',t)+\alpha_i} & \text{if learner } n \text{ is the target learner}\\[2.2ex]
\dfrac{F(n,t)}{\displaystyle\sum_{n'\in N}F(n',t)+\alpha_i} & \text{if learner } n \text{ is not the target learner}
\end{cases}
\qquad \bigl(0 \le F(n,t) \le 1\bigr)
\tag{1}
$$

When action i occurs, α_i is added to the current focusing degree of the target learner of that action. Then, the focusing degrees for all learners are normalized so that they sum to 1. The focusing target is determined as the learner whose focusing degree is the largest.
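A minimal sketch of this update, assuming focusing degrees kept in a dictionary keyed by learner name, might look as follows; the function names are illustrative and the code is not part of the original system.

```python
from typing import Dict, Optional

def update_focusing_degrees(F: Dict[str, float], target: Optional[str], alpha: float) -> Dict[str, float]:
    """Expression (1): add alpha to the target learner's degree, then renormalize to sum to 1.

    F      -- focusing degrees F(n, t), keyed by learner name (the learner himself/herself included)
    target -- target learner of the observed action; None (action 2) leaves the degrees unchanged
    alpha  -- constant alpha_i of the observed action
    """
    if target is None:
        return dict(F)
    denom = sum(F.values()) + alpha
    return {n: (v + alpha if n == target else v) / denom for n, v in F.items()}

def focusing_target(F: Dict[str, float]) -> str:
    """The focusing target is the learner with the largest focusing degree."""
    return max(F, key=F.get)

# Example: learner A observes an action whose target learner is C (alpha_i = 0.3).
F = {"A": 0.2, "B": 0.5, "C": 0.3}
F = update_focusing_degrees(F, target="C", alpha=0.3)
print(F, focusing_target(F))   # degrees still sum to 1; C becomes the focusing target
```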

3.2. Modification mechanism of calculating focusing degree

In Expression (1), the focusing degrees are calculated by adding the constant α_i for each action so as to increase the focusing degree for the target learner. However, the rate of change of the focusing degree may differ among learners, which sometimes causes the detection of an inappropriate focusing target. In order to reflect such differences, α_i should be modified based on the learner's action of indicating the correct focusing target.

Here, we assume a situation where the target learner of action i is learner A and the focusing target is changed from learner B to A. If the learner still wants to view learner B after the action, the constant of action i is considered too large; in this case, α_i should be decreased. On the contrary, if the learner's correct focusing target is learner A when the target learner of action i is learner A but the focusing target is not changed from learner B, α_i needs to be increased. In order to acquire the learner's correct focusing target, our interface allows learners to select the correct focusing target when the detected target differs from their focusing intention. When this function is invoked, α_i is modified.

Expression (2) is the calculation method for adapting α_i. When the focusing target changed by action i turns out to be inappropriate, α_i is decreased by a certain number β. Conversely, α_i is increased by β if the focusing target was not changed by action i.

$$
\alpha_i^{\mathrm{Next}}=
\begin{cases}
\alpha_i^{\mathrm{Current}}-\beta & \text{if the focusing target is changed by action } i\\
\alpha_i^{\mathrm{Current}}+\beta & \text{if the focusing target is not changed by action } i
\end{cases}
\qquad \bigl(0 \le \alpha_i \le 1\bigr)
\tag{2}
$$
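A small sketch of this rule is given below. It assumes the interface reports whether action i had changed the focusing target when the learner invokes the correction; the step β = 0.05 is an illustrative value and is not given in the paper.

```python
def adapt_alpha(alpha_i: float, changed_by_action: bool, beta: float = 0.05) -> float:
    """Expression (2), applied when the learner manually corrects the detected focusing target.

    changed_by_action -- True  if action i changed the focusing target (alpha_i was too large),
                         False if action i failed to change it        (alpha_i was too small)
    beta              -- adjustment step; 0.05 is illustrative, the paper does not state a value
    """
    alpha_i = alpha_i - beta if changed_by_action else alpha_i + beta
    return min(max(alpha_i, 0.0), 1.0)   # keep 0 <= alpha_i <= 1
```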

4. Learner view

In our collaborative learning environment, the other learners are allocated around the round table. Learners are represented by polygon objects to which their camera images are attached. Their memo-sheets are also arranged on the round table in front of their camera images.

The learner view for each learner's interface is set by the location and direction of his/her view, which are determined by his/her focusing degree for the focusing target. When the focusing target is another learner, the distance from the center of the round table is determined according to the focusing degree. Expression (3) calculates the distance d(t) from the center of the round-table interface at time t; Fig. 5 shows the concept. In Expression (3), F(n, t), calculated by Expression (1), is the focusing degree for learner n at time t. The learner takes a position between dmin and dmax in the virtual learning environment. When the learner is situated at dmin, he/she is eagerly focusing on the focusing target; when the learner is at dmax, he/she is not fully focusing on it.


Fig. 7. Windows in our interface: the text-chat window (with a combo box for selecting the target learner), the learner's own memo-sheet window (editable), the focusing learner's memo-sheet window (not editable), and the round-table window showing the other learners' camera images and memo-sheets and a combo box for changing the focusing target.

Fig. 8. Example of view changes according to focusing degrees.


According to this expression, the learner moves nearer to the center of the round table as the focusing degree becomes larger:

$$
d(t) = \bigl(1 - F(n,t)\bigr)\, d_{\max} + F(n,t)\, d_{\min}
\tag{3}
$$

On the other hand, the focusing intention of the learner himself/herself is represented as a view angle between the other learner's camera image and the learner's own memo-sheet, that is, the up-and-down direction. Fig. 6 illustrates this angle. The angle θ is calculated according to the focusing degree for the learner himself/herself. As the learner's own focusing degree becomes larger, the view direction of the learner goes down toward the memo-sheet. If the focusing target is the learner himself/herself, the horizontal direction of the view and the position of the learner relative to the center of the round table are determined by the focusing degree of the learner who has the second-largest focusing degree. Based on the calculated d(t) and the angle θ, the learner view is set so as to make the focusing target larger in the interface.
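The sketch below combines Expression (3) with the angle described above. The values of dmin, dmax, the maximum angle, and the linear mapping of θ to the learner's own focusing degree are assumptions for illustration; the paper does not specify the exact θ formula.

```python
import math

D_MIN, D_MAX = 2.0, 6.0           # illustrative bounds on the distance from the table center
THETA_MAX = math.radians(40.0)    # illustrative maximum downward angle toward the own memo-sheet

def view_distance(f_target: float) -> float:
    """Expression (3): d(t) = (1 - F(n,t)) * d_max + F(n,t) * d_min for the focusing target's degree."""
    return (1.0 - f_target) * D_MAX + f_target * D_MIN

def view_angle(f_self: float) -> float:
    """Downward view angle theta, assumed to grow linearly with the learner's own focusing degree."""
    return f_self * THETA_MAX

# The learner moves closer as the focusing degree for the target grows,
# and looks further down as his/her own focusing degree grows.
print(view_distance(0.757), math.degrees(view_angle(0.159)))
```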

5. Prototype system

We embedded our interface into the collaborative learning system HARMONY, which has been developed in our laboratory [13]. Fig. 7 shows the windows of our interface. In the text-chat window, learners choose the target learner of an utterance when inputting it. Learners can observe the learning environment through the round-table window. The camera images of the other learners in the learner view are situated around the round table, and each camera image faces its owner's focusing target. The learner situated in the center of the round-table window is the learner's focusing target. The learner can observe or write down his/her answer by clicking his/her own memo-sheet in the round-table window. If the detected focusing target differs from a learner's intention, he/she can change the focusing target by selecting a learner name from the combo box in the round-table window.

When the focusing degree for the learner himself/herself becomes the largest, his/her own memo-sheet window appears beside the round-table window. This window disappears if the focusing degree for the learner himself/herself decreases below a certain value. On the other hand, another learner's memo-sheet can be seen only if that memo-sheet is clicked in the round-table window and its owner becomes the focusing target. The focusing learner's memo-sheet window is not editable; it also disappears when the focusing target is changed to another learner.


Table 2. Initial values of the constant α_i for the actions in Table 1.

Type i | Actions | Constant α_i
1 | Making utterance to learner B from learner X; making utterance to learner X from learner B; making utterance to all learners from learner B; making utterance to learner A from learner B | 0.6
2 | Writing down to learner X's memo-sheet by learner X; observing learner X's memo-sheet by learner X | 0.7
3 | Looking at learner X from learner B; looking at learner A from learner B | 0.4
4 | Observing learner B's memo-sheet from learner X; observing learner X's memo-sheet from learner B; writing down to learner B's memo-sheet by learner B; observing learner B's memo-sheet from learner B; observing learner A's memo-sheet from learner B | 0.3
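For reference, these initial constants can be written as a small configuration; the grouping labels below are paraphrased from Table 2 and are not identifiers used by the system itself.

```python
# Initial alpha_i values per action type (Table 2); keys are paraphrased labels.
INITIAL_ALPHA = {
    "utterance":        0.6,  # type 1: utterances to a specific learner or to all learners
    "own_memo_sheet":   0.7,  # type 2: the learner's own private memo-sheet actions
    "looking":          0.4,  # type 3: looking at another learner
    "other_memo_sheet": 0.3,  # type 4: memo-sheet actions involving other learners
}
```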

Table 3. Ratio of detecting the correct focusing target for making utterances and observing memo-sheets.

Group | R_Utterance (ratio of making utterance) | R_Memo-sheet (ratio of observing memo-sheet)
Group 1 | 77.3% (17/22) | 82.6% (19/23)
Group 2 | 80.0% (4/5) | 56.3% (9/16)
Total | 77.8% (21/27) | 71.8% (28/39)


The learner view is changed based on the other learners' focusing degrees. Fig. 8 shows an example of view changes in the round-table window according to the focusing degree. In this example, six learners (A–F) participate in the collaborative learning, and A's round-table window is displayed. In Fig. 8(a), A focuses on D, whose focusing degree is 0.757. Since A's own focusing degree is 0.001, A does not face A's memo-sheet. If the focusing degree for E increases to 0.942 and A's focusing target changes to E, the camera image of E moves to the center of the window, as shown in Fig. 8(b). On the other hand, when A's own focusing degree increases to 0.159, A's view is inclined toward the memo-sheet so that A can observe it easily, as shown in Fig. 8(c).

6. Evaluation

6.1. Experimental setting

In order to evaluate the appropriateness of our detection method, we performed an experiment using the prototype system. For this experiment, two groups (Groups 1 and 2) were formed by four undergraduate students in our laboratory.

Before the experiment, we explained how to manipulate our interface. Participants were asked to translate whole English sentences into Japanese by discussing them with the others using the round-table interface. The English sentences were extracted from Japanese university entrance examinations. Participants were asked to write the translated sentences on their memo-sheet windows during the learning. The discussion continued for about 20 min, and all participants stayed in their group from the start to the end of the learning. After the learning, we asked participants to indicate the learners to whom they made utterances or whose memo-sheets they observed. The initial values of the constants α_i were defined heuristically, as shown in Table 2. The actions are categorized into four types; since the target learner of action 2 is undefined, no α_i is set for action 2.

For the evaluation, the ratios of actions directed to the detected focusing targets to all actions were calculated separately for making utterances and for observing memo-sheets. In Expressions (4) and (5), R_Utterance indicates the ratio for making utterances and R_Memo-sheet represents the ratio for observing memo-sheets of the correct focusing target. The target of an utterance can be grasped through the combo box for selecting the target learner in the text-chat window. In our detection method, the focusing target is not changed when the learner himself/herself makes an utterance to all learners; therefore, R_Utterance is determined by counting the utterances whose target learners match the detected focusing targets. On the other hand, if a participant wants to see the memo-sheet of a participant other than the focusing target, he/she has to select that participant's name from the combo box in the round-table window. In calculating R_Memo-sheet, memo-sheet acquisitions that follow a change of the focusing target through the combo box are regarded as cases in which a learner other than the focusing target was selected. Therefore, R_Memo-sheet is determined from the total number of memo-sheet acquisitions and the number of acquisitions that followed a change of the focusing target through the combo box. Learners tend to make actions toward their focusing targets; that is, if R_Utterance and R_Memo-sheet are large, our system detects the focusing targets of participants successfully.

$$
R_{\mathrm{Utterance}} = \frac{\text{number of utterances to the focusing target}}{\text{total number of utterances to other learners}}
\tag{4}
$$

$$
R_{\text{Memo-sheet}} = 1 - \frac{\text{number of memo-sheet acquisitions after changing the focusing target}}{\text{total number of memo-sheet acquisitions of other learners}}
\tag{5}
$$
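Assuming the counts above are taken from the interaction logs, the two ratios can be computed as in the following sketch; the function and parameter names are illustrative.

```python
def r_utterance(n_to_focusing_target: int, n_to_other_learners: int) -> float:
    """Expression (4): share of utterances addressed to the detected focusing target."""
    return n_to_focusing_target / n_to_other_learners

def r_memo_sheet(n_after_target_change: int, n_total_acquisitions: int) -> float:
    """Expression (5): share of memo-sheet acquisitions that needed no manual target change."""
    return 1.0 - n_after_target_change / n_total_acquisitions

# Group 1 in Table 3: 17 of 22 utterances went to the focusing target,
# and 4 of 23 memo-sheet acquisitions required changing the focusing target first.
print(round(r_utterance(17, 22), 3), round(r_memo_sheet(4, 23), 3))   # -> 0.773 0.826
```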

6.2. Experimental result

Table 3 shows the results of R_Utterance and R_Memo-sheet for each group. The total results for both R_Utterance and R_Memo-sheet are over 70%. Therefore, appropriate focusing targets corresponding to the learning activities were displayed in the round-table interface to some extent. No modification of the action constants α_i occurred during the experiment, which indicates that the initial values were appropriate for the participants.

R_Utterance is high for both groups. In the interviews, many participants answered that they made utterances to the learners who had just made utterances. These opinions support our idea that a learner focuses on the learner who makes an utterance either to all participants or to him/her. On the other hand, R_Memo-sheet of Group 2 was 56.3%. Participants in Group 2 commented that they wanted to know the progress of all other participants from their memo-sheets. For this reason, two participants in Group 2 observed memo-sheets by changing their focusing targets through the combo box more than once, so the number of memo-sheet acquisitions that followed a change of the focusing target increased.


Participants also answered that they were interested in the learner who had answered a previous utterance or the learner who seemed likely to write answers. These comments indicate that the focusing target is determined not only by the type of action but also by impressions of other learners derived from the qualities of their actions. In order to detect the focusing target more correctly, our detection method should consider the meaning of actions.

In Table 3, the total number of utterances to other learners in Group 2 is small compared with Group 1. Many participants in Group 2 commented that they found it troublesome to select the target learners of their utterances, so they made utterances to all learners instead of indicating target learners. In this experiment, the collaborative learning of only two groups was evaluated; in order to confirm the effectiveness of our detection method, further evaluations with more groups should be conducted.

7. Conclusion

In this paper, we analyzed the actions in the collaborative learning environment and the target learner on whom the learner may focus for each action. We also proposed a method of detecting the focusing target according to the actions. Then, we developed the round-table interface, which reflects the focusing intention of the learner. In this interface, the direction, distance, and angle of the learner view are changed automatically according to the learner's focusing target and focusing degrees. In our experiment, the method could detect 70% of the focusing targets correctly. In future work, we should continue experiments to confirm the effectiveness of the round-table interface.

We believe our method can promote communication among learners when designing CSCL systems and tools. However, we have not yet evaluated which learning groups or subjects our method is suited to. We have to conduct further evaluations so as to reveal the learning situations that our method can support.

Currently, our interface focuses on supporting communication among learners; support for understanding learning subjects is not yet embedded. One advantage of collaborative learning is that fresh solutions and ideas are acquired from the utterances of other learners. In our current interface, all utterances are displayed in the same way in the text-chat window. In order to promote understanding, it is important to be aware of the utterances that are effective for individual learners. By considering series of actions, such effective utterances can be estimated. Therefore, we should develop a detection method for effective utterances and a way to display them in our interface.

Acknowledgement

This research was supported by Grants-in-Aid from the Hara Research Foundation.

References

[1] H.H. Adelsberger, B. Collis, J.M. Pawlowski, Handbook on Information Technologies for Education and Training, Springer-Verlag, Berlin, 2002.
[2] J.H.E. Andriessen, Working with Groupware, Springer-Verlag, Berlin, 2003.
[3] Microsoft, Windows Live Messenger, http://messenger.live.com/.
[4] SOBA-Project, Inc., SOBA-CITY, http://city.soba-project.com/.
[5] P. Kauff, O. Schreer, An immersive 3D video-conferencing system using shared virtual team user environments, in: Proceedings of the ACM-CVE, 2002, pp. 105–112.
[6] C. Gutwin, G. Stark, S. Greenberg, Support for workspace awareness in educational groupware, in: Proceedings of the ACM-CSCL 1995, 1995, pp. 147–156.
[7] E. Prasolova-Førland, M. Divitini, Supporting social awareness: requirements for educational CVE, in: Proceedings of the ICALT 2003, 2003, pp. 366–367.
[8] H. Ogata, Y. Yano, Combining knowledge awareness and information filtering in an open-ended collaborative learning environment, International Journal of Artificial Intelligence in Education, 2000, pp. 33–46.
[9] Y. Hayashi, T. Kojiri, T. Watanabe, Computer-supported focusing interface for collaborative learning, in: Proceedings of the ICCE 2007, 2007, pp. 123–130.
[10] T. Kojiri, Y. Ito, T. Watanabe, User-oriented interface for collaborative learning environment, in: Proceedings of the ICCE 2002, vol. 1, 2002, pp. 213–214.
[11] Y. Hayashi, T. Kojiri, T. Watanabe, Focusing support interface for collaborative learning, The Journal of Information and Systems in Education 6 (1) (2008) 17–25.
[12] T. Watanabe, The next advanced framework of collaborative interaction environment, in: Proceedings of the ED-MEDIA 2008, 2008, pp. 1350–1358.
[13] T. Kojiri, Y. Ogawa, T. Watanabe, Agent-oriented support environment in web-based collaborative learning, International Journal of Universal Computer Science 7 (3) (2001) 226–239.

Mr. Yuki Hayashi received the B.E. and M.E. degrees from Nagoya University, Japan, in 2007 and 2009. He is currently a doctoral student at the Graduate School of Information Science, Nagoya University, Japan. His research interests include computer-supported collaborative learning and human-computer interfaces. He is a member of IPSJ and JSAI.

Dr. Tomoko Kojiri received the B.E., M.E., and Ph.D. degrees from Nagoya University, Nagoya, Japan, in 1998, 2000, and 2003, respectively. From 2003 to 2007, she was a research associate at Nagoya University, Japan. Since 2007, she has been an assistant professor at Nagoya University, Japan. Her research interests include computer-supported collaborative learning, intelligent tutoring systems, and human-computer interfaces. She is a member of IPSJ, JSAI, IEICE, and JSiSE.

Prof. Toyohide Watanabe received the B.E., M.E., and Ph.D. degrees from Kyoto University, Kyoto, Japan, in 1972, 1974, and 1983, respectively. In 1987, he was an associate professor with the Department of Information Engineering, Nagoya University. He is currently a professor at the Graduate School of Information Science, Nagoya University, Japan. His research interests include knowledge/data engineering, computer-supported collaborative learning, parallel and distributed process interaction, document understanding, and drawing interpretation. He is a member of IPSJ, IEICE, JSSST, JSAI, JSiSE, ACM, AAAI, AACE, and the IEEE Computer Society.