
Proceedings of the SIGDIAL 2018 Conference, pages 70–79, Melbourne, Australia, 12-14 July 2018. © 2018 Association for Computational Linguistics


Introduction method for argumentative dialogue using paired question-answering interchange about personality

Kazuki Sakai1, Ryuichiro Higashinaka2, Yuichiro Yoshikawa1, Hiroshi Ishiguro1, and Junji Tomita2

1 Osaka University / JST ERATO
2 NTT Media Intelligence Laboratories, NTT Corporation

1 {sakai.kazuki,yoshikawa,ishiguro}@irl.sys.es.osaka-u.ac.jp
2 {higashinaka.ryuichiro,tomita.junji}@lab.ntt.co.jp

Abstract

To provide a better discussion experience in current argumentative dialogue systems, it is necessary for the user to feel motivated to participate, even if the system already responds appropriately. In this paper, we propose a method that can smoothly introduce argumentative dialogue by inserting an initial discourse consisting of question-answer pairs concerning personality. The system can induce the users' interest prior to agreement or disagreement during the main discourse. By disclosing their interests, the users will feel familiarity and motivation to further engage in the argumentative dialogue and understand the system's intent. To verify the effectiveness of a question-answer dialogue inserted before the argument, a subjective experiment was conducted using a text chat interface. The results suggest that inserting the question-answer dialogue enhances familiarity and naturalness. Notably, the results suggest that women, more than men, regard the dialogue as more natural and the argument as deepened following an exchange concerning personality.

1 Introduction

Argumentation is a process of reaching consensus through premises and rebuttals, and it is an important skill required in daily life (Scheuer et al., 2010). Through argumentation, we can not only reach decisions but also learn what others think. Such decision-making and the interchange of views are among the most important and advanced parts of human activities. If an artificial dialogue system can argue on certain topics with us, it can both help us to work efficiently and help us establish a close relationship with the system.

Recently, there have been some studies concerning argumentative dialogue systems. Higashinaka et al. developed an argumentative dialogue system that can discuss certain topics by using large-scale argumentation structures (Higashinaka et al., 2017). However, this system could not provide all users with a satisfactory discussion experience, even though it could appropriately respond to their opinions. One possible reason for this is that some users are not necessarily motivated to argue on the topics suggested by the system.

We aim to improve an argumentative dialogue system by adding a function to motivate a user to participate in an argumentative dialogue. To increase the user's motivation to participate, we focus on small talk. Small talk can help participants build certain relationships before they enter the main dialogue (Zhao et al., 2014). In negotiation and counseling, a close relationship between two humans can improve the performance of certain tasks (Drolet and Morris, 2000; Kang et al., 2012). Relationships between a user and a system are important for reaching a consensus through dialogue (Katagiri et al., 2013). Thus, it should be possible to naturally guide a user into an argumentative dialogue by performing small talk.

In practice, we adopted a question-answering dialogue, where users are casually asked about their personal experiences or ideas. This was implemented by using what we call a personal database (hereafter PDB), which consists of pairs of a personal question and a corresponding example answer that are likely to appear in human-human conversation. When asked about personal issues, users are expected to feel interested in the system, and then be induced to feel open and close to the system. Meanwhile, the system provides its own answers to the questions by using the PDB. From the answers of the user and the system, users are expected to gain an idea of what is common and different between them, a requirement that has been suggested to be important for humans to be motivated to understand one another (Uchida et al., 2016).

In this research, we extend the argumentative dialogue system described in (Higashinaka et al., 2017) by adding a function that can smoothly introduce argumentative dialogue by inserting a question-answering dialogue using the PDB (hereinafter referred to as the PDB-QA dialogue). Users of the proposed system are expected to be motivated to partake in the argumentative dialogue and can then engage in a deep discussion with the system. To verify the effectiveness of this system, we conducted a subjective experiment using a text chat interface.

The remainder of this paper is organized as follows. In Section 2, we describe related work. In Section 3, we describe our proposed method, including how to develop the question-answering dialogue and how to integrate it into an existing argumentative dialogue system. In Section 4, we describe an experiment we conducted, in which human subjects expressed their impressions of the dialogue through a text chat interface. We summarize the paper and discuss future work in Section 5.

2 Related work

Although there is little work on automated systems that can perform discussion with users, recently there has been a great deal of work aimed at automatically extracting premises and conclusions from text; argumentation mining has been applied to various data, including legal text (Moens et al., 2007), newswire text (Bal and Saint-Dizier, 2010), opinions in discussion forums (Rosenthal and McKeown, 2012), and varied online text (Yanai et al., 2016).

There has been some research concerning the introduction of a dialogue. Rogers et al. showed that it became easier for two people to talk during a first meeting by using an application that shares their opinions on a display (Rogers and Brignull, 2002). Pullin reported that small talk in an initial discourse improved interaction in a business situation (Pullin, 2010). Inaguma et al. analyzed the prosodic features of shared laughter as an ice-breaker in initial dialogues (Inaguma et al., 2016). However, it is unclear how to develop an initial dialogue for smoothly introducing a discussion.

It is known that people interact with artificial constructs such as dialogue systems, virtual agents, and robots in the same manner as they interact with other humans (Reeves and Nass, 1996). Schegloff showed that human conversation usually interleaves the contents of a task-oriented dialogue with social contents (Schegloff, 1968). Jiang et al. showed that 30% of all utterances to Microsoft Cortana, a well-known task-oriented dialogue system, consist of social contents (Jiang et al., 2015). Performing small talk can thus be considered natural in argumentative dialogue systems.

There have been many studies on dialogue systems that include small talk. Bechberger et al. developed a dialogue system that conveys news text and performs small talk related to the news (Bechberger et al., 2016). Kobori et al. showed that inserting small talk improved impressions of an interview system (Kobori et al., 2016). Bickmore et al. showed that the task success rate was improved by building a trust relationship through small talk (Bickmore and Cassell, 2005). Klüwer developed a dialogue system that includes the function of interacting using small talk (Kluwer, 2015). We consider that argumentative dialogues may be deepened, since small talk can improve the trust relationship.

Related to studies dealing with multiple dialogue strategies, including argumentative and social dialogues, there are several works concerning hybrid dialogue systems that integrate task-oriented and chat-oriented dialogue systems. Papaioannou et al. proposed a method to acquire dialogue strategies for hybrid systems in a robot using reinforcement learning (Papaioannou and Lemon, 2017). Yu et al. showed that multiple dialogue systems can interact using appropriate dialogue strategies learned through reinforcement learning (Yu et al., 2017). Akasaki et al. demonstrated a classification method for input utterances to select which dialogue system is used (Akasaki and Kaji, 2017). However, in an initial dialogue, it is unclear which dialogue strategies can be employed to smoothly introduce an argumentative dialogue.


Figure 1: Flow of the PDB-QA dialogue. Each part contains two system utterances and two user utterances. We used questions in an order based on the similarity between the dialogue topic and the question text.

3 Proposed method

We propose a method for introducing an argumentative dialogue using the PDB-QA dialogue, a question-answering dialogue concerning personality. We first describe the PDB-QA dialogue, then an existing argumentative dialogue system, and finally how to develop an extended argumentative dialogue system using the PDB-QA dialogue.

3.1 PDB-QA dialogue using question-answering pairs about personality

The PDB consists of personal questions and example answers and is used to ask the interlocutor for detailed information (Tidwell and Walther, 2002). Such questions may be asked even when the interlocutor is a dialogue system (Nisimura et al., 2011). In this study, we used the PDB described in (Sugiyama et al., 2014). This PDB is a large-scale database of pairs of questions and answers related to personal information. The PDB contains various personal questions, with a question category, answer examples, and topics attached to each question. Based on the degree of overlap among questions, question-answer pairs frequently encountered during conversation are extracted. The PDB includes personal questions such as "What dishes do you like?" and "Which places have you visited?"
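The paper does not specify the PDB's storage format. The following is a minimal sketch of how such paired entries might be represented; the field names and the `questions_for_topic` helper are our assumptions (the two questions are the examples quoted above), not the format of Sugiyama et al.'s database.

```python
# Hypothetical representation of PDB entries: each pairs a personal
# question with an example answer, plus a topic label that can be used
# for the similarity-based ordering described later.  Field names are
# our assumption, not the actual database schema.
pdb = [
    {"question": "What dishes do you like?", "topic": "food",
     "example_answer": "I like curry."},
    {"question": "Which places have you visited?", "topic": "travel",
     "example_answer": "I visited Kyoto last year."},
]

def questions_for_topic(pdb, topic):
    """Return all PDB questions tagged with the given topic."""
    return [e["question"] for e in pdb if e["topic"] == topic]
```

With this shape, the dialogue generator can pull candidate questions by topic before ranking them by similarity to the argument topic.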

We explain the procedure for generating a PDB-QA dialogue using this PDB. As shown in Figure 1, the PDB-QA dialogue consists of several parts. Each part consists of four utterances: the system's question using the PDB, the user's response, the system's answer, and the user's response to this. To determine the order in which to ask multiple questions, we used the similarity between the topic of the argument and the question text, calculated by Word2vec (Mikolov et al., 2013). From parts 1 to N, we used questions in increasing order of similarity, i.e., part 1 uses the question with the N-th highest similarity and part N uses the question with the highest similarity. This is because gradually approaching the topic is considered natural as a dialogue structure. Through this process, we can perform N parts of the PDB-QA dialogue.

Figure 2: Architecture of the developed dialogue system.
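The ascending-similarity ordering can be sketched as follows. The `embed` table is a stand-in for Word2vec text vectors (the real system computes similarity with Mikolov et al.'s vectors over Japanese text); the toy two-dimensional vectors and the example strings are our own illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy embeddings standing in for Word2vec text vectors (assumption).
embed = {
    "travel in Japan": [0.9, 0.1],
    "What dishes do you like?": [0.7, 0.5],
    "Which places have you visited?": [0.9, 0.2],
    "Do you like trips?": [0.8, 0.3],
}

def order_questions(topic, questions, n):
    """Pick the n questions most similar to the topic, then ask them in
    ascending order of similarity, so the dialogue approaches the topic
    gradually (part 1 = n-th highest similarity, part n = highest)."""
    t = embed[topic]
    ranked = sorted(questions, key=lambda q: cosine(embed[q], t),
                    reverse=True)
    return list(reversed(ranked[:n]))

order = order_questions("travel in Japan",
                        [q for q in embed if q != "travel in Japan"], 3)
```

Here the most topic-related question ("Which places have you visited?") ends up last, immediately before the argument phase begins.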

3.2 Argumentative dialogue system

We used the argumentative dialogue system described in (Higashinaka et al., 2017). This system can generate appropriate argumentative dialogue text based on large-scale knowledge structures, called argumentation structures, which are constructed manually. An argumentation structure is represented as a graph composed of nodes representing premises and edges representing support or nonsupport relationships, based on an extended version of Walton's model (Walton, 2013).
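A minimal sketch of such a structure, with premise nodes and labeled support/nonsupport edges; the premises below are our invented toy example, not taken from the authors' hand-built structures.

```python
# Toy argumentation structure: premise nodes plus labeled edges.
# The actual structures in Higashinaka et al. (2017) are large-scale
# and hand-built; these three nodes are our own illustration.
nodes = {
    "okinawa": "Okinawa is the better place to travel.",
    "beaches": "Okinawa has beautiful beaches.",
    "crowds":  "Okinawa is crowded in summer.",
}
# (source, target, relation) triples; relation is "support" or "nonsupport".
edges = [
    ("beaches", "okinawa", "support"),
    ("crowds",  "okinawa", "nonsupport"),
]

def premises_for(target, relation):
    """Premises that can be used to support or rebut a given node."""
    return [s for s, t, r in edges if t == target and r == relation]
```

Given a node the user has just asserted, the system can traverse these edges to retrieve a supportive premise (`premises_for(node, "support")`) or a rebuttal (`premises_for(node, "nonsupport")`).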

A user utterance is input into two modules: dialogue act estimation and proposition identification. The dialogue act estimation module estimates four dialogue-act types: assertion, question, concession, and retraction. The proposition identification module determines the argumentation node whose content is closest to the input user utterance. The discussion manager updates the argumentation structure on the basis of the understanding result, checking whether the corresponding node has already been mentioned. Then the dialogue manager retrieves premises that can be used for support or rebuttal by traversing the argumentation structure. The system outputs a supportive or nonsupportive response to the user's utterance.

Figure 3: Flow in the dialogue manager.

3.3 Integration of the argumentative dialogue system and PDB-QA dialogue

Figure 2 illustrates the architecture of our argumentative dialogue system. The user interacts with the system through the text chat interface in the browser. The natural language understanding module comprises the two modules of the argumentative dialogue system described above; note that it is only used in the argument phase described below. The dialogue manager manages two dialogue states: the question-answering phase and the argument phase. Figure 3 illustrates the flow of dialogue managed by the dialogue manager. First, the dialogue manager initiates the opening dialogue, such as by asking the user her name. Then, it begins the question-answering phase, in which the PDB-QA dialogue is performed as described in Section 3.1. The PDB-QA dialogue is a predefined question-answering dialogue that proceeds regardless of user utterances; the system's answer to each PDB question is prepared by the experimenter in advance. The natural language generation module modifies the system response, such as by adding conjunctions and changing sentence endings according to the dialogue act. Finally, the dialogue manager begins the argument phase, in which the system's utterances are premises that can be used for support or rebuttal, produced by the argumentative dialogue system.
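The phase flow described above (opening, then question-answering, then argument) can be sketched as a minimal state machine. The turn quotas follow the experimental setup in Section 4 (two opening turns, three PDB-QA parts of two turns each, six argument turns); the class design itself is our assumption, not the authors' implementation.

```python
class DialogueManager:
    """Minimal sketch of the two-phase flow: an opening, a scripted
    PDB-QA phase, then the argument phase.  Turn counts mirror the
    experimental setup; the class itself is our invention."""

    PHASES = [("opening", 2), ("question-answering", 6), ("argument", 6)]

    def __init__(self):
        self.phase_idx = 0
        self.turns_in_phase = 0

    @property
    def phase(self):
        return self.PHASES[self.phase_idx][0]

    def step(self):
        """Advance one turn; move to the next phase once its quota is used."""
        self.turns_in_phase += 1
        if (self.turns_in_phase >= self.PHASES[self.phase_idx][1]
                and self.phase_idx < len(self.PHASES) - 1):
            self.phase_idx += 1
            self.turns_in_phase = 0

dm = DialogueManager()
trace = []
for _ in range(14):   # 2 + 6 + 6 turns in total
    trace.append(dm.phase)
    dm.step()
```

Running this yields two opening turns, six question-answering turns, and six argument turns, matching the dialogue lengths used in the experiment.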

Line Speaker Utterance
1    S   Do you like trips?
2    U   Yes, I do.
3    S   I see. I like them too. I want to go on a trip during the weekend.
4    U   I also want to go during the weekend.
5    S   Where do you want to go for a trip?
6    U   I want to go abroad.
7    S   Me too. I want to go to France if I have the time.
8    U   French dishes seem great.
9    S   Which is the better place to travel in Japan, Hokkaido or Okinawa?
10   U   I want to go to Okinawa.
11   S   You can enjoy unique dishes in Okinawa.
12   U   I think goya-champuru (a traditional Okinawa dish) is delicious.
13   S   But Hokkaido also has delicious dishes, such as Genghis Khan.
14   U   I see. Hokkaido has more delicious food.

Figure 4: An example of the dialogue. The topic is which is the better place to travel to in Japan: Hokkaido or Okinawa. Lines 1–8 are part of the PDB-QA dialogue, and lines 9–14 are part of the argumentative dialogue. Speakers S and U represent the system and user, respectively.

Figure 4 shows an example of the dialogue we performed. The topic is as follows: which is the better place to travel to in Japan, Hokkaido or Okinawa? Lines 1–8 are part of the PDB-QA dialogue, and lines 9–14 are part of the argumentative dialogue. Speakers S and U represent the system and the user, respectively.

4 Experiment

In this section, we describe a subjective experiment conducted to verify the effect of inserting the PDB-QA dialogue. We compared the subjects' evaluations and behavior for two types of dialogue: one with the PDB-QA dialogue and one without it. The hypothesis is that by inserting the PDB-QA dialogue in advance, users are motivated to partake in the argumentative dialogue and can then discuss deeply with the system. To verify this hypothesis, subjects communicated with the argumentative dialogue system through a text chat interface in a browser and then recorded their impressions in a questionnaire. We also quantitatively evaluated the average number of words per user utterance in the argument phase. We expected the number of words per user utterance in our argumentative dialogue system to be lower than in the previous system, because when a user builds a relationship with the system, the user expresses their own ideas with fewer words.

Figure 5: Screenshot of the text chat interface.

4.1 Method

4.1.1 Subjects

Thirty-two Japanese adults (16 males and 16 females, with an average age of 20.3 years) participated as subjects. Half of the subjects participated under the condition with the PDB, and the other half under the condition without it. The ratio of males to females was the same in each condition. One male in the condition with the PDB and two males in the condition without it were excluded because of system failures, and the utterances of the remaining 29 subjects were analyzed.

4.1.2 Apparatus

The experiment was conducted in a space separated by curtains. A laptop PC was placed on a table, and the PC displayed a web browser showing the text chat interface, as shown in Figure 5. Note that the dialogue in the experiment was performed in Japanese. The dialogue text of the interaction between the system and the subject was displayed in the middle part of the browser, and a text box for the subject to input his/her own utterances was displayed at the lower part. Note that we call a sentence displayed in the interface an "utterance"; in other words, sentences produced by the system and input by the user with a keyboard are called the system's and user's utterances, respectively.

4.1.3 Stimuli

In this experiment, we compared two conditions: with and without the PDB. The condition with the PDB included two phases of dialogue: a question-answering phase and an argument phase. The condition without the PDB included only the argument phase. In this experiment, the subject and the system alternately provided utterances; each pair of such utterances is referred to as one turn. Both conditions included two turns of opening dialogue, such as asking the subject's name and a greeting. The question-answering phase consisted of three parts, each of which included two turns of dialogue; in total, six turns of dialogue were performed. The argument phase contained six turns of dialogue. We prepared five discussion topics and assigned one of them to each subject at random: (1) the pros and cons of driving automobiles, (2) the benefits of living in the countryside vs. living in the city, (3) which is the better place to travel to in Japan, Hokkaido or Okinawa, (4) which is the better breakfast, bread or rice, and (5) which is the better theme park, Tokyo Disney Resort or Universal Studios Japan.

4.1.4 Procedure

This experiment was conducted according to the following procedure. First, the experimenter gave the subject instructions for the experiment: the subject would interact with the system through the text chat interface in the browser, interact only once, and answer the questionnaire after the dialogue. Next, the experimenter asked the subject to read the questionnaire in advance. After that, the interaction was started. After completing the dialogue, the experimenter asked the subject to answer the questionnaire.

4.1.5 Measurement

The items of the questionnaire regarding impressions were the same for both conditions, and there were eleven items in total. These included questions related to the overall impression of the dialogue system, the argumentative dialogue, and the user's motivation for conversing with the dialogue system. The items concerning the impression of the dialogue consisted of the following five:

Q1 The utterances of the system are correct in Japanese,

Q2 The dialogue with the system is easy to understand,

Q3 The dialogue with the system is familiar,

Q4 The dialogue with the system has a lot of content, and

Q5 The dialogue with the system is natural.


Figure 6: Box plots of the results of the questionnaire.

Figure 7: Box plots of results for the average numbers of words and content words (nouns, verbs, adjectives, conjunctions, and interjections) per utterance in the argument phase. We used MeCab to tokenize the Japanese words.

The items concerning the impression of the argumentative dialogue were the following two:

Q6 You can deeply discuss the topic of X, and

Q7 You can smoothly enter the argumentative dialogue about X,

where X is the actual topic (e.g., which is the better place to travel to in Japan, Hokkaido or Okinawa). The items related to motivation for the dialogue were the following four:

Q8 You want to convey your opinions,

Q9 You want to understand the system's opinions,

Q10 You feel that the system wants to convey its opinions, and

Q11 You feel that the system wants to understand your opinions.

A Likert scale was used to elicit the subjects' impressions. We used a seven-point scale that ranged from a value of 1, corresponding to "strongly disagree," to 7, corresponding to "strongly agree." The midpoint value of 4 corresponded to "undecided."

We also counted the average number of words per user utterance and the average number of content words (nouns, verbs, adjectives, conjunctions, and interjections) per user utterance in the argument phase. We used MeCab to tokenize the words and label the Japanese parts of speech.
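The per-utterance averages can be computed as below. Since the MeCab output is not reproduced here, we assume utterances have already been tokenized into (token, part-of-speech) pairs, with English stand-ins for the Japanese tokens; the function itself is a sketch of the counting, not the study's analysis code.

```python
# Part-of-speech tags counted as content words in the paper.
CONTENT_POS = {"noun", "verb", "adjective", "conjunction", "interjection"}

def averages(utterances):
    """Average numbers of words and of content words per user utterance.
    Each utterance is a list of (token, pos) pairs, standing in for
    MeCab's tokenized output (the real data is Japanese)."""
    n = len(utterances)
    words = sum(len(u) for u in utterances) / n
    content = sum(sum(1 for _, pos in u if pos in CONTENT_POS)
                  for u in utterances) / n
    return words, content

# Two illustrative utterances: 3 and 4 tokens, 2 content words each.
utts = [
    [("I", "pronoun"), ("want", "verb"), ("Okinawa", "noun")],
    [("yes", "interjection"), ("I", "pronoun"), ("do", "verb"),
     ("too", "adverb")],
]
w, c = averages(utts)
```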

4.2 Result

Figure 6 presents the box plots of the answers to the questionnaire. A Mann-Whitney U test was used to compare the scores on the Likert scale. For Q3, namely "the dialogue with the system is familiar," the median score for the condition with the PDB was found to be marginally significantly higher than that for the condition without the PDB (W = 143, p < 0.1). For Q5, namely "the dialogue with the system is natural," the median score for the condition with the PDB was found to be significantly higher than that for the condition without the PDB (W = 149.5, p < 0.05). For the other questions, no significant differences between the two conditions were detected.
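The U statistic underlying these comparisons (the reported W values) can be computed as below; the computation is equivalent in spirit to library routines such as scipy.stats.mannwhitneyu. Ties receive average ranks, which matters for Likert responses; the Likert scores shown are illustrative, not the study's data.

```python
def mann_whitney_u(xs, ys):
    """Rank-sum computation of the Mann-Whitney U statistic, with
    average ranks for tied values.  Returns (U_x, U_y)."""
    combined = sorted((v, i) for i, v in enumerate(xs + ys))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg = (i + j + 1) / 2          # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[combined[k][1]] = avg
        i = j
    r_x = sum(ranks[:len(xs)])         # rank sum of the first sample
    u_x = r_x - len(xs) * (len(xs) + 1) / 2
    return u_x, len(xs) * len(ys) - u_x

# Illustrative seven-point Likert scores, not the experiment's data.
with_pdb = [6, 5, 7, 6, 5]
without_pdb = [4, 5, 3, 4, 6]
u1, u2 = mann_whitney_u(with_pdb, without_pdb)
```

The two U values always sum to the product of the sample sizes, which is a quick sanity check on the implementation.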

As shown in Figure 6, we did not directly confirm an improvement concerning the smooth introduction to the argumentative dialogue by inserting the PDB-QA dialogue. However, the figure suggests that it is possible for the user to feel that the dialogue is more familiar and more natural when the PDB-QA dialogue is inserted. This result may be because the system behaves in the manner in which a human usually does, and a certain relationship is built between the user and the system. Thus, it is considered that inserting the PDB-QA dialogue improves the naturalness of the dialogue and the relationship.

Figure 8: Box plots of results of the questionnaire for male users.

Figure 9: Box plots of results of the questionnaire for female users.

Figure 10: Box plots of results for male users for the average numbers of words and content words per utterance in the argument phase.

Figure 11: Box plots of results for female users for the average numbers of words and content words per utterance in the argument phase.

Figure 7 presents the box plots of the average numbers of words and content words per user utterance. For the average number of words, the median for the condition with the PDB was found to be significantly lower than that for the condition without the PDB (W = 40, p < 0.01). Concerning the average number of content words, the median for the condition with the PDB was also found to be significantly lower than that for the condition without the PDB (W = 40, p < 0.01).

As shown in Figure 7, the average numbers of words and content words in the condition with the PDB were significantly lower than those in the condition without the PDB. These results suggest that when the relationship between the user and the system is not close, users may express their opinions using a larger number of words to correctly convey their message; on the other hand, when the relationship is close, they may express their opinions using fewer words.

In general, it is known that there are some differences in the purposes of conversation owing to gender differences (Tannen, 2001). In this study, we suppose that the different purposes of conversation resulting from gender differences may have affected our results. Therefore, we analyzed the effects of gender. We divided the data by gender and plotted each result. In the results for male users, shown in Figure 8, no significant differences between the two conditions were detected. On the other hand, in the results for female users, shown in Figure 9, we observed some significant differences between the two conditions. For Q5, namely "the dialogue with the system is natural," the median score for the condition with the PDB was found to be marginally significantly higher than that for the condition without the PDB (W = 49, p < 0.1). For Q6, namely "you can deeply discuss the topic," the median score for the condition with the PDB was also found to be marginally significantly higher than that for the condition without the PDB (W = 49, p < 0.1). In addition, we compared the males' and females' data under the conditions with and without the PDB. As a result, for Q7, namely "you can smoothly enter the argumentative dialogue," the median score with the PDB for females was found to be marginally significantly higher than that with the PDB for males (W = 13.5, p < 0.1). These results suggest that females may feel that the PDB-QA dialogue inserted before the argumentative dialogue is more natural, and this may lead to the result that females feel the argumentative dialogue is deepened more. Thus, inserting the PDB-QA dialogue in our proposed method may be more effective for females.

In addition, Figures 10 and 11 show the results for male and female users for words and content words, respectively. As shown in Figure 10, for male users, the average number of content words for the condition with the PDB was found to be significantly lower than that without the PDB (W = 7, p < 0.05). This result may be because of their degree of motivation, but the actual reason is unknown. On the other hand, as shown in Figure 11, for female users, the average numbers of words and content words with the PDB were found to be marginally significantly lower than those without the PDB (W = 14, p < 0.1 and W = 15, p < 0.1, respectively). These results suggest that females may use fewer words when they feel familiarity with the interlocutor.

5 Summary and future work

We proposed a PDB-QA dialogue method to smoothly introduce an argumentative dialogue, and conducted an evaluation experiment to verify the effectiveness of inserting the PDB-QA dialogue. The results suggest that impressions of the dialogue, such as familiarity and naturalness, may be improved by inserting the PDB-QA dialogue. Specifically, we found that females may perceive a PDB-QA dialogue inserted before an argumentative dialogue as more natural, and this may lead to the argumentative dialogue being deepened. We also found that when the relationship between the user and the system is not close, users may express their opinions using a larger number of words, whereas when the relationship is close, they may express their opinions with fewer words.

We can improve the performance of the dialogue system by adjusting several parameters of the PDB dialogue, which were fixed in the experiment for the sake of control. For example, we can adjust how questions are chosen (the degree of similarity of questions to be selected), the order of questions, the number of questions, and the amount of information to be presented in an answer to a question. It may be possible to improve performance if we select better parameters depending on a user's preferences or the context of a conversation.
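To make the question-similarity parameter concrete, one way to choose PDB questions by their closeness to the argument topic is sketched below. This is a hypothetical illustration, not the system's actual implementation: the question pool is invented, and the toy vectors stand in for word2vec-style sentence embeddings (Mikolov et al., 2013).

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def select_questions(topic_vec, pool, k=2):
    """Return the k pool questions most similar to the argument topic.

    pool: list of (question_text, embedding) pairs, where the embeddings
    would come from a sentence-vector model in a real system.
    """
    ranked = sorted(pool, key=lambda q: cosine(topic_vec, q[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy 2-dimensional embeddings for illustration only.
topic = [1.0, 0.0]  # e.g. an argument topic about overseas travel
pool = [("Do you like traveling abroad?", [0.9, 0.2]),
        ("What is your favorite food?", [0.0, 1.0]),
        ("Have you been overseas recently?", [1.0, 0.1])]
print(select_questions(topic, pool))
# → ['Have you been overseas recently?', 'Do you like traveling abroad?']
```

Raising or lowering k, or thresholding the similarity score, corresponds to the "number of questions" and "degree of similarity" parameters mentioned above.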

For further improvement, we can consider animacy, another element that may be important. Animacy describes the characteristic of being like a living being; in other words, whether a human can attribute mind and will to an object. We suppose that in a dialogue, it is important for the user to feel animacy toward the interlocutor, because the user needs to recognize the dialogue system as a special target with which they can form a certain relationship. As a preliminary experiment, we measured psychological indicators for mind perception (Gray et al., 2011). This scale measures how much agency (capacity for self-control, planning, and memory) and experience (capacity for pleasure, fear, and hunger) the subject feels the target has. Analyzing how impressions of agency and experience affect the answers to the questionnaire or the behavior of users will be an important aspect of future work.

In this paper, we compared the conditions with and without PDB. Comparing the two conditions, we surmise that at least three factors affect the results: whether utterances are in the form of a question, whether they contain personal content, and whether they are related to the topic of the argumentative dialogue. For the first factor, we suppose that a question form can explicitly reveal common and differing sentiments in the answer to the question, which makes it easy for the user to become interested. For the second, we suppose that asking a question concerning personality makes it easier to construct a certain relationship. As regards the third, we feel that topical relatedness prevents a sudden change of topic, which makes it possible for the user to enter the argumentative dialogue more smoothly. Investigating the kinds of factors that affect a natural introduction into the argumentative dialogue will be a topic of future work.

Acknowledgments

This work was supported by JST ERATO Grant Number JPMJER1401, Japan.

References

S. Akasaki and N. Kaji. 2017. Chat detection in an intelligent assistant: Combining task-oriented and non-task-oriented spoken dialogue systems. In ACL.

B. K. Bal and P. Saint-Dizier. 2010. Towards building annotated resources for analyzing opinions and argumentation in news editorials. In Proceedings of the Language Resources and Evaluation Conference.

L. Bechberger, M. Schmidt, A. Waibel, and M. Federico. 2016. Personalized news event retrieval for small talk in social dialog systems. In Proceedings of Speech Communication; 12. ITG Symposium.

T. Bickmore and J. Cassell. 2005. Social dialogue with embodied conversational agents. Advances in Natural Multimodal Dialogue Systems.

A. L. Drolet and M. W. Morris. 2000. Rapport in conflict resolution: Accounting for how face-to-face contact fosters mutual cooperation in mixed-motive conflicts. Journal of Experimental Social Psychology, 36(1):26–50.

K. Gray, A. C. Jenkins, A. S. Heberlein, and D. M. Wegner. 2011. Distortions of mind perception in psychopathology. Proceedings of the National Academy of Sciences of the United States of America, pages 477–479.

R. Higashinaka, K. Sakai, H. Sugiyama, H. Narimatsu, T. Arimoto, T. Fukutomi, K. Matsui, Y. Ijima, H. Ito, S. Araki, Y. Yoshikawa, H. Ishiguro, and Y. Matsuo. 2017. Argumentative dialogue system based on argumentation structures. In Proceedings of the 21st Workshop on the Semantics and Pragmatics of Dialogue, pages 154–155.

H. Inaguma, K. Inoue, S. Nakamura, K. Takanashi, and T. Kawahara. 2016. Prediction of ice-breaking between participants using prosodic features in the first meeting dialogue. In Proceedings of the 2nd Workshop on Advancements in Social Signal Processing for Multimodal Interaction, pages 11–15.

J. Jiang, A. H. Awadallah, R. Jones, U. Ozertem, I. Zitouni, R. G. Kulkarni, and O. Z. Khan. 2015. Automatic online evaluation of intelligent assistants. In Proceedings of the 24th International Conference on World Wide Web, pages 506–516.

S. Kang, J. Gratch, C. Sidner, R. Artstein, L. Huang, and L. Morency. 2012. Towards building a virtual counselor: Modeling nonverbal behavior during intimate self-disclosure. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems.

Y. Katagiri, K. Takanashi, M. Ishizaki, Y. Den, and M. Enomoto. 2013. Concern alignment and trust in consensus-building dialogues. In Proceedings of the 9th International Conference on Cognitive Science, pages 422–428.

T. Kluwer. 2015. Social talk capabilities for dialogue systems. universaar.

T. Kobori, M. Nakano, and T. Nakamura. 2016. Small talk improves user impressions of interview dialogue systems. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 370–380.

T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119.

M. Moens, E. Boiy, R. M. Palau, and C. Reed. 2007. Automatic detection of arguments in legal texts. In Proceedings of the 11th International Conference on Artificial Intelligence and Law, pages 225–230.

R. Nisimura, Y. Nishihara, R. Tsurumi, A. Lee, H. Saruwatari, and K. Shikano. 2011. Takemaru-kun: Speech-oriented information system for real world research platform. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 583–593.

I. Papaioannou and O. Lemon. 2017. Combining chat and task-based multimodal dialogue for more engaging HRI: A scalable method using reinforcement learning. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, pages 365–366.

P. Pullin. 2010. Small talk, rapport, and international communicative competence: Lessons to learn from BELF. The Journal of Business Communication, 47(4):455–476.

B. Reeves and C. Nass. 1996. How people treat computers, television, and new media like real people and places. CSLI Publications and Cambridge University Press.

Y. Rogers and H. Brignull. 2002. Subtle ice-breaking: Encouraging socializing and interaction around a large public display. In Workshop on Public, Community and Situated Displays.

S. Rosenthal and K. McKeown. 2012. Detecting opinionated claims in online discussions. In Proceedings of the 6th IEEE International Conference on Semantic Computing, pages 30–37.

E. A. Schegloff. 1968. Sequencing in conversational openings. American Anthropologist, 70(6):1075–1095.

O. Scheuer, F. Loll, N. Pinkwart, and B. M. McLaren. 2010. Computer-supported argumentation: A review of the state of the art. International Journal of Computer-Supported Collaborative Learning, 5(1):43–102.

H. Sugiyama, T. Meguro, and R. Higashinaka. 2014. Large-scale collection and analysis of personal question-answer pairs for conversational agents. In Proceedings of Intelligent Virtual Agents, pages 420–433.

D. Tannen. 2001. You just don't understand: Women and men in conversation. New York: HarperCollins.

L. C. Tidwell and J. B. Walther. 2002. Computer-mediated communication effects on disclosure, impressions, and interpersonal evaluations. Human Communication Research, 28(3):317–348.

T. Uchida, T. Minato, and H. Ishiguro. 2016. Does a conversational robot need to have its own values? A study of dialogue strategy to enhance people's motivation to use autonomous conversational robots. In Proceedings of the 4th Annual International Conference on Human-Agent Interaction, pages 187–192.

D. Walton. 2013. Methods of argumentation. Cambridge University Press.

K. Yanai, Y. Kobayashi, M. Sato, T. Yanase, T. Miyoshi, Y. Niwa, and H. Ikeda. 2016. Debating artificial intelligence. Hitachi Review, 65(6):151.

Z. Yu, A. W. Black, and A. I. Rudnicky. 2017. Learning conversational systems that interleave task and non-task content. In Proceedings of the International Joint Conference on Artificial Intelligence.

R. Zhao, A. Papangelis, and J. Cassell. 2014. Towards a dyadic computational model of rapport management for human-virtual agent interaction. In Proceedings of the International Conference on Intelligent Virtual Agents, pages 514–527.