
978-1-7281-8412-8/20/$31.00 ©2020 IEEE

Sinhala Conversational Interface for Appointment Management and Medical Advice

D. D. S. Rajapakshe, K. N. B. Kudawithana, U. L. N. P. Uswatte, N. A. B. D. Nishshanka, A. V. S. Piyawardana, K. N. Pulasinghe, Faculty of Computing, Sri Lanka Institute of Information Technology, Malabe, Sri Lanka

(dilrukshi.rajapakshe, naveenbimsara, pramuditha.n95, dilshanbinod)@gmail.com, (vijani.p, koliya.p)@sliit.lk

Abstract— This paper proposes an intelligent conversational user interface to assist Sinhala-speaking users in making appointments with doctors and obtaining medical advice. The Sinhala Conversational Interface for Appointment Management and Medical Advice (SCI-AMMA) consists of a Speech Recognition unit, a Query Processing unit, a Dialogue Management unit, a Voice Synthesizer unit, and a User Information Management unit to handle user requests and maintain a meaningful dialogue. SCI-AMMA takes the user's speech utterance and recognizes its language content for further processing. The language content is then processed by the Query Processing unit to identify the user's intent. To fulfil that intent, a reply is generated by the Dialogue Management unit and delivered to the user by means of a voice synthesizer. The proposed system is implemented using a state-of-the-art technology stack including Flutter, Python, Protégé, and Firebase. Performance of the system is demonstrated using several sample scenarios/dialogues.

Keywords— RASA, NLU, NLP, AI, Ontology, QR code, SPARQL, Protégé, Microservice, REST API, RDFS, Firebase, Flutter, Python, OWL

I. INTRODUCTION

An intelligent conversational user interface (ICUI) has gained much attention recently since it can be used in many settings such as intelligent personal assistant systems, customer care systems, and information retrieval systems. A few famous systems include Apple Siri, Microsoft Cortana, and Amazon Alexa. These successful ICUIs are backed by a large body of computational linguistics research in English, since English is the 3rd most spoken language on the planet. All aspects of the English language have been thoroughly researched since computers were invented. Sinhala is a low-resource language which lacks the required amount of monolingual or parallel corpora as well as the manually crafted linguistic resources sufficient for the development of such an ICUI. On the other hand, it is not a viable business case for many technology giants, since the Sinhala-speaking community is a minority in the global context. The proposed system is an attempt to allow the Sinhala-speaking community to interact with computer systems in their mother tongue using voice or text.

Over the last three decades, hand-held devices such as mobile phones and tablets have been developed with more portable technology and higher-quality interconnection mechanisms. Since mobile phones have become popular among both adults and children, they are an effective way to provide a better service to the entire community. In the last few years, smartphone use has grown tremendously, paving the way for increased demand for mobile application development. According to research by RescueTime, the average smartphone user spends 3.15 hours on their phone, so a mobile application will reach the target audience quickly [1]; a mobile application also makes it easier to provide low-cost services.

People use the internet to save time and work efficiently; it has become universal, delivers services quickly, and helps many to socialize. Building an automated intelligent system for the health domain and for the Sinhala-speaking community is therefore very important, to overcome limited knowledge of health, computer use, and the English language. Moreover, an encapsulated artificial-intelligence mobile chat agent for the health domain helps people in an emergency, answers their health-related queries, maintains a safe distance when interacting with humans, and is available 24x7. The SCI-AMMA mobile application provides a Human-Computer Interaction facility as a solution for making appointments with doctors in hospitals and provides efficient services for patients. This ICUI can utilize human spoken language and build up a conversational flow to help its user. The proposed system secures all personal information as well as information derived from the patient's previous hospital visits, doctors, and hospitals.

This paper is organized as follows: in Section II, the proposed system is described in the context of other similar systems; in Section III, the methodology of the proposed system is described; in Section IV, the results obtained are presented; and in Section V, concluding remarks are given.

II. LITERATURE SURVEY

A. Existing Systems

1. Octopus: The Octopus is an intelligent PC assistant developed in Java for Sinhala users. The Octopus can search for data in an AIML Knowledge Base or throughout the local network. The MaSMT framework was used as the dialogue framework. The Octopus speaks to the user using the Java-based FreeTTS, and the user interacts with the Octopus via text messages [2].

2. Diabot: The Diabot is a diabetes mobile agent. The user and Diabot interact via text chat. The dialogue framework used is the RASA framework, and the UI is developed in React. The UI and RASA NLU communicate through a Flask web server API. The University of California, Irvine (UCI) machine learning repository is used as the Knowledge Base, and classification algorithms are used to predict answers [3].

3. Chatbol: The Chatbol is a social (Slack) chatbot for the Spanish football domain. Search queries are expressed as SPARQL queries over Wikidata, and the RASA framework is used as the dialogue management tool [4].

4. Quinn: The Quinn was a chat robot agent for mental disorders. The user and Quinn interact voice-to-voice. The RASA framework is used as the dialogue management tool, and the system was developed in Python. A MySQL server is used as the Knowledge Base, and classification algorithms are used to predict answers [5].


B. Research Gap between Existing Systems and SCI-AMMA

A Sinhala-language doctor appointment and medical advice system is an innovative application, currently unavailable in Sri Lanka. Among existing systems, there is no published research on automated doctor appointment systems for Sinhala users; SCI-AMMA introduces a solution to this problem. In SCI-AMMA, a REST API is used to create the connection between the microservice and the mobile app.

To make appointments, this system implements a Sinhala keyword dictionary, which supports both Sinhala and English words when composing answers in Sinhala. In SCI-AMMA, Flutter is used for mobile development, whereas React is used in similar systems. Considering the RASA plugin usage of previous research, there are no implementations connecting Flutter and RASA. The proposed system uses a Python webhook API built with the Flask REST framework to communicate between RASA and the proposed system.

A session is part of SCI-AMMA: the system identifies user actions according to the session. Therefore, when a user authenticates successfully, the system recognizes the user and what the user wants from SCI-AMMA. These sessions help to save user actions in the system and to identify new users; a new user is not yet familiar with the system, so the user will be guided. This is unique to our project compared with existing systems. In this research, the Google Text-to-Speech plugin is used to give more accurate and clearer voice output than existing systems, and user voice is converted into text using the speech_to_text 2.3.0 Dart package. This is the first time Sinhala voice-to-text has been developed with Flutter.

In SCI-AMMA, the user's appointment details are also saved automatically in the Firebase DB at the end of the process; no existing mobile doctor appointment management system includes this step. When it comes to mobile app development, Firebase queries are the most widely used, but ontology queries are faster than other query execution approaches, which is why the proposed system uses SPARQL for query development. This is a research component of the DOI-HM. It is also the first time a Protégé Knowledge Base has been developed in the Sinhala language; previous systems use MySQL, AIML, Wikidata, and the internet.

Fig. 1. SCI-AMMA System Overview Diagram

In previous research, there is no Sinhala medical artificial-intelligence agent available for the RASA framework; SCI-AMMA was developed to fill that research gap. If the user gives a Sinhala sentence to the RASA framework, it will identify the sentence and generate the relevant output. A Sinhala data module has been developed in RASA NLU to identify the user's question, and the system composes a suitable answer in Sinhala using the essential facts from the REST API. The Methodology and Results sections of this paper explain how this system has been implemented and how accurate it is in real-world scenarios.

III. METHODOLOGY

Fig. 1, the SCI-AMMA System Overview Diagram, shows the overall workflow of SCI-AMMA. Four strategies were used to develop the proposed system (SCI-AMMA). They are:

A. Voice to Text Analyzer & Ontology Query Development

1. Developing the Virtual Agent (SCI-AMMA) —

When developing SCI-AMMA, we used the Flutter SDK for mobile app development. It is an SDK for building high-performance, high-accuracy apps for iOS, Android, web, and desktop from a single codebase. According to the authors of the Handbook of Human Factors in Medical Device Design, the colors most easily recognized by humans are red, green, yellow, orange, and blue, so we used blue and white for the SCI-AMMA UI [12]. Users are required to have a Google account to access SCI-AMMA. After the user logs in with Google, the app navigates to the bot interface and the user is recognized by SCI-AMMA. To interact with the system, users can use either voice or text.

The user's voice is converted to text using the speech_to_text 2.3.0 Dart package. When the patient and SCI-AMMA interact, the system generates a session according to the user's actions. The purpose of generating a session is to identify the user's actions in the system and to guide new users: if a user has interacted with the system fewer than 5 times, the system will guide the user. According to the session, the system addresses the relevant microservice. The microservice generates a non-analysed result for the user's question, and that result is sent to SCI-AMMA, which forwards it to the RASA dialogue framework. The microservice and SCI-AMMA communicate using a REST API, and a webhook API connects SCI-AMMA to the RASA framework. This is the voice analyser process. The user's NIC number and email are validated to avoid creating multiple accounts.

2. Determining the Optimum Identification to Sort Out the Human Medical (Drug and Symptom) Questionnaire (DOI-HM) —

SCI-AMMA can be used to give patients instructions in response to their questions about diseases, drugs, and side effects. To achieve this goal, DOI-HM was developed. The DOI-HM is an ontology used to generate answers to user questions; it uses a Sinhala Protégé Knowledge Base. The ontology's knowledge representation languages are RDF and OWL. Apache Jena Fuseki is a SPARQL server that provides protocols for query, update, and the SPARQL Graph Store protocol. The SPARQL query at the 3rd step of Fig. 8 shows how to retrieve the individual values of the ontology. When retrieving information about drugs, the Wu-Palmer algorithm is used to identify the drug name. The proposed system identifies a human disease by considering the most common, less common, and serious symptoms. DOI-HM is developed in Python. Several ontology applications use a Protégé Knowledge Base, but none has been done for the Sinhala language; this is the first time an ontology has been developed for Sinhala. The easiest way to identify the keywords of a user question is by managing relationships between the classes and individual values in Protégé. Protégé is an object-oriented Knowledge Base.
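The paper does not show the Wu-Palmer matching code, and Wu-Palmer similarity normally operates over a WordNet-style taxonomy. As a minimal stand-in for the same task — matching a possibly misspelled drug name against the ontology's drug list — a stdlib fuzzy match can be sketched (the drug list here is illustrative, not taken from the paper's ontology):

```python
import difflib

# Illustrative drug vocabulary; the real system reads these from the Protégé ontology.
DRUG_NAMES = ["paracetamol", "ibuprofen", "amoxicillin"]

def match_drug(term, names=DRUG_NAMES, cutoff=0.6):
    """Return the closest known drug name to the user's term, or None.

    Note: this uses difflib string similarity, not Wu-Palmer taxonomy
    similarity, so it is only a rough sketch of the matching step.
    """
    matches = difflib.get_close_matches(term.lower(), names, n=1, cutoff=cutoff)
    return matches[0] if matches else None
```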

B. Appointment Management

Fig. 2. Appointment Management System Diagram

Natural Language Processing, normally shortened to NLP, is a part of artificial intelligence that can be applied to interaction between computers and humans using natural language. The ultimate goal of NLP is to read, decipher, understand, and make sense of human language in a valuable way. Most NLP techniques rely on machine learning to derive meaning from human language. In this research, NLP is used to understand questions given as Sinhala speech. Initially, the Sinhala speech input is converted into Sinhala text and then translated into English. These files are stored in text format.

A Google Translate plugin is used to translate Sinhala to English, and the converted text contains the important data required to make a response. A probability-based approach is used to find the important words in a given sentence: all the words in the sentence are compared with the keywords stored in a database file in .csv format. The identified words are then sent to the Firebase DB to get a response. The Firebase DB contains data about medical centers; the doctors who visit them, together with their available times and dates; diseases and the symptom details for each disease; drug details for each specific disease; and the words that match the important facts. The tokenized words are mapped to the important facts contained in Firebase; the mapped words that match Firebase entries are known as important facts.
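The keyword comparison step can be sketched as follows; the keyword list and its .csv layout are assumptions, since the paper's actual dictionary file is not shown:

```python
import csv
import io

# Assumed layout of the keyword dictionary file: one keyword per row.
KEYWORD_CSV = "fever\nheadache\ndoctor\nappointment\n"

def load_keywords(csv_text):
    """Read the keyword dictionary from .csv text into a set."""
    return {row[0].strip().lower() for row in csv.reader(io.StringIO(csv_text)) if row}

def important_words(sentence, keywords):
    """Return the words of the (translated) sentence that appear in the dictionary."""
    return [w for w in sentence.lower().split() if w in keywords]
```

In the full system the matched words would then be sent to the Firebase DB to fetch the response data.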

In the database query generation part, queries are automatically created with responses taken from the matched words included in the database. Those details are retrieved from the Firebase DB through a database query and then written out as a text file (Fig. 2). A mobile application is developed for medical center data registration. This application is developed using Flutter, which is platform independent, so it can be used on both iOS and Android. Medical centers can download the application and register by themselves. After registration, a medical center must upload the patients' appointment details to the database, including doctors' details and available times and dates.

C. RASA Plugin and Text-to-Speech

Fig. 3. RASA Plugin & TTS System diagram

At the implementation of the RASA plugin, several fields must be sent into the RASA process: the important facts of the medical answer, the user validation status, the user's logging count, the user's name, and the session id. The user's medical question is sent to the RASA process separately. From the DOI-HM process, a POST request is sent containing the important facts, user email, username, and session id; these fields are then used in the implementation of the RASA plugin. The important facts become the medical answer to the user's question. At the beginning of the process, these important facts, which are in English, must be translated into Sinhala (Fig. 3).
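The POST body sent from the DOI-HM process to the RASA plugin can be sketched as below; the JSON field names are illustrative assumptions, since the paper lists which values are sent (important facts, user email, username, session id) but not their exact keys:

```python
import json

def build_rasa_payload(important_facts, user_email, username, session_id):
    """Assemble the POST request body carrying the fields the RASA plugin needs.

    Field names are assumed for illustration; the paper specifies only
    which values are transmitted, not the key names.
    """
    return json.dumps({
        "important_facts": important_facts,
        "user_email": user_email,
        "username": username,
        "session_id": session_id,
    })
```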

The Dialogue Management process in the RASA development must be in Sinhala, and this translation is done using the Google Translator API; the Google Translator plugin must be imported to do this in the Flutter environment. The user email is validated by checking it against the Firebase authentication section, using a Flutter query. If the email exists in Firebase authentication, the validation status is set to "Old User"; otherwise it is set to "New User". Another Flutter query checks the user's logging count, again using the user email.

The logging count is the number of attempts in which an email has been used; the Firebase authentication feature is used here as well. Each session id has its own specific session name, and a Flutter query maps session ids to session names. If the received session id is 1 or 2, the status is set to "processing"; if the session id is 0 or 6, the status is set to "complete". This process is also managed by a Firebase query. Another process saves the translated "important facts" depending on this session id: only if the session id is 1 are the "important facts" saved in the Firebase database.
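The validation-status check and the session-to-status mapping described above can be sketched directly from the rules in the text; the Firebase lookup is replaced by an in-memory set for illustration:

```python
# Illustrative stand-in for the emails registered in Firebase authentication.
REGISTERED_EMAILS = {"patient@gmail.com"}

def validation_status(email):
    """"Old User" if the email already exists in Firebase authentication, else "New User"."""
    return "Old User" if email in REGISTERED_EMAILS else "New User"

def session_status(session_id):
    """Map a session id to its status: 1 or 2 -> "processing", 0 or 6 -> "complete"."""
    if session_id in (1, 2):
        return "processing"
    if session_id in (0, 6):
        return "complete"
    return "unknown"  # other ids are not described in the paper
```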

The user validation status, user email, logging count, session name, and current date-time fields above are stored in the Firebase DB; this record is called a "user log". These fields must also be sent into the RASA process for Dialogue Management: a Flutter GET API sends these parameters into the RASA process. This is the bridge between the Text-to-Speech process and the Dialogue Management process. Simultaneously, the user's medical question is sent into the RASA NLU with the aid of the webhook API. The Python REST API is developed with the support of the Flask framework; the Flask plugin is imported in the PyCharm IDE, and this REST API is invoked as a POST request. At the end of the Dialogue Management process, the medical answer is received through the webhook API in response to the earlier POST request.

This medical answer is received as the response to the POST request. The medical answer and the user's medical question are displayed in the SCI-AMMA mobile application, and at the same time the medical answer is converted into Sinhala voice output in the Text-to-Speech process. The sound waves are smooth, of good quality, and at a speed comfortable for humans. The Google Text-to-Speech (gTTS) API is used for the Text-to-Speech conversion, and the gTTS plugin must be imported to access this API. The SCI-AMMA mobile application has a section called "Appointment", from which the user can store an appointment as a reminder, and edit or delete a stored appointment. The user can also search for any stored appointment by doctor name, appointment ID, health issue, date, time, or hospital name.

There is a section that generates a QR code for a specific appointment. The user can select the appointment they want to show to the medical center; by scanning the QR code, the medical center can view the details of the selected appointment.

D. RASA Framework Development

The RASA development system overview is described in Fig. 5. The user's question is received by the RASA framework through the webhook API (REST API), and the text is passed to the RASA NLU. Markdown and JSON are the training data formats of RASA NLU; Markdown is the easiest training format available in RASA NLU.

In this research, JSON is used as the training data format because it is more compatible with mobile application development than Markdown. Using the JSON format, the system can enrich the training data, and it is easy for any developer to read and write; some power users prefer Markdown because it is easy for them to understand. An example of the JSON format is given in Fig. 4: the model is trained using "common_examples", and all training examples are included in the "common_examples" array. After the text reaches RASA NLU, it extracts the intents inside the sentence. A pipeline is used to identify the intents of the sentence; Pretrained Embeddings and Supervised Embeddings are the two types of pipeline available in this framework.
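For reference, RASA NLU's JSON training format wraps the examples in a "common_examples" array as below; the example text and intent pairing here are illustrative placeholders, not the paper's actual training data:

```json
{
  "rasa_nlu_data": {
    "common_examples": [
      {
        "text": "I want to meet a diabetes doctor",
        "intent": "doctor_type",
        "entities": []
      }
    ]
  }
}
```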

The Supervised Embeddings pipeline is used for this research, since a Sinhala language model is not available. The Whitespace Tokenizer is included in the Supervised Embeddings pipeline, and a config.yml file is configured to set up the tokenizer. After intent extraction, the process moves to the RASA Core, whose purpose is to generate the relevant answer to the user's question. Utter and Action techniques are used to generate the response: Utter contains a hardcoded answer, while Action contains an answer generated using the important facts from the REST API. Both techniques are used in this research. Utter sentences are written in the domain.yml file; before using the Action technique, the action server must be started. After the relevant answer is generated, the Text-to-Speech process catches it using the webhook API. RASA Stories handle the conversation: each intent (question) and its Utter or Action (answer) are connected in the stories.md file, and those stories are written in Markdown format.
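A config.yml for this setup could look like the following sketch; in RASA 1.x the supervised_embeddings template already includes the Whitespace Tokenizer, shown here expanded into its main components (the component names assume RASA 1.x):

```yaml
language: si
pipeline:
  - name: WhitespaceTokenizer
  - name: CountVectorsFeaturizer
  - name: EmbeddingIntentClassifier
```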

Fig. 4. JSON format

Fig. 5. RASA Development System overview diagram

IV. RESULTS

Fig. 6. User Interface of SCI-AMMA

Fig. 6 depicts the conversational flow of the system. We tested our system with 10 dialogues (Fig. 7) and 10 users; according to the results, SCI-AMMA accuracy is 75% (Fig. 7). Two parameters were used to calculate the system accuracy: voice-to-text accuracy and dialogue generation. To calculate accuracy, the equation in Fig. 10 is used, where "X" denotes the system response, "Y" denotes the expected output, and "n" denotes the number of users.
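Figs. 9 and 10 are not reproduced here, but one plausible reading of the metric — the fraction of the n responses X that match the expected outputs Y — can be sketched as:

```python
def accuracy(responses, expected):
    """Fraction of system responses that match the expected outputs.

    An assumed reading of the equation in Fig. 10: X = system responses,
    Y = expected outputs, n = number of users.
    """
    n = len(expected)
    matches = sum(1 for x, y in zip(responses, expected) if x == y)
    return matches / n
```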


Fig. 7. Reliability of SCI-AMMA

Fig. 8. Reliability of Voice to text

Fig. 9. Calculation metrics

Fig. 10. Calculation metrics

A. Voice to Text Analyzer & Ontology Query Development

1. Voice to Text Analyzer —

The voice-to-text accuracy in the 1st dialogue is 73.6%, in the 2nd dialogue 83.5%, and the overall performance is 78.55% (Fig. 8). Two equations were used to calculate the accuracy of the speech-to-text. To calculate the per-user accuracy, the equation in Fig. 9 is used, where "X" denotes the system response, "Y" denotes the expected output, and "n" denotes the number of users.

2. Determining Optimized Identification of the Human Medical Questionnaire (DOI-HM) —

The prediction of the answer to a medical advice question depends on the Knowledge Base; SCI-AMMA's efficiency and accuracy depend on it as well. This part is based on an ontology and a SPARQL query development process. This is the first time a Sinhala ontology has been trained using Protégé. We had to include not only Sinhala names but also English names, because some medical terms have no Sinhala equivalent and some people mix English and Sinhala. This process has four steps: question identification, intent identification, SPARQL query generation, and finally response generation. DOI-HM processing takes 9023 ms for one REST call. Fig. 11 shows the output of DOI-HM.
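A generated SPARQL query of the kind produced in the third step might look like the following sketch; the prefix, class, and property names are invented for illustration, since the paper does not list its ontology vocabulary:

```sparql
PREFIX ex: <http://example.org/doi-hm#>

SELECT ?disease
WHERE {
  ?disease a ex:Disease ;
           ex:hasCommonSymptom ex:fever ;
           ex:hasCommonSymptom ex:headache .
}
```

Such a query would be sent to the Apache Jena Fuseki endpoint, which returns the matching individuals from the Protégé Knowledge Base.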

Fig. 11. DOI-HM output

B. Appointment Management

Fig. 12. Important fact identifier

Fig. 12 shows the user's medical question in JSON format. Intents can be identified from both Sinhala and English sentences; these intents are obtained from the keyword matching process in the Firebase DB. Identified intents are sent to the Firebase DB for the response generation process. Fig. 12 illustrates how the intents are recognized from this question.

C. RASA Plugin and Text-to-Speech

From the query translation process, important medical facts in the Sinhala language are obtained; the given example (Fig. 13) shows the output of the query translator process. These important facts are sent to the Dialogue Management process of the RASA framework. At the end of this process, proper medical answers are received and then sent to the Text-to-Speech process, where the Google Text-to-Speech API converts them into voice output according to the user's medical question.


Fig. 13. Query Translator

D. RASA Framework Development

1. Appointment Management —

The user's appointment management question moves into the RASA framework, where it is identified using pre-trained data modules. The identified question has an id, which is matched with the related answer id using RASA stories, and the related answer is then generated. Fig. 14 shows the Appointment Management dialogue: the patient wants to know the details of diabetes doctors. The RASA framework recognizes that question with the id "doctor_type", which is matched with its related answer id "action_meet_doctor" (Fig. 14). Using the "action_meet_doctor" id, the generated answer is a list of the doctors available for that disease.
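The intent-to-action mapping described above is written in stories.md using RASA's Markdown story format; a minimal sketch with the ids named in the paper (the story titles are illustrative):

```md
## meet doctor
* doctor_type
  - action_meet_doctor

## medical advice
* doctor_advice
  - action_response_advice
```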

Fig. 14. Story in RASA Framework & Dialogue Management

2. DOI-HM —

The user's DOI-HM question moves into the RASA framework, where it is identified using pre-trained data modules. The identified question has an id, which is matched with the related answer id using RASA stories, and the related answer is then generated. Fig. 14 also shows the DOI-HM dialogue: here, the user wants to know the disease given a set of symptoms. The RASA framework recognizes that question with the id "doctor_advice", which is matched with its related answer id "action_response_advice" (Fig. 14). Using the "action_response_advice" id, the generated answer is the disease relevant to the symptoms.

V. CONCLUSION

In the modern world, automated intelligent systems make people's lives easier and save time for other important engagements, and making them available as mobile applications in local languages supports social inclusion. Moreover, businesses as well as not-for-profit organizations are now moving to automated systems to reduce operational costs. SCI-AMMA is a successful attempt at developing an end-to-end conversational system for the health domain that can be used in many settings, such as doctor appointment management (channelling) systems, medical advice systems, and medical screening tools.

SCI-AMMA incorporates a medical knowledge base and is designed to imitate the natural communication between a patient and a doctor. In the design stage, the medical terms used in the domain, together with the relevant scenarios and suitable dialogue patterns, were studied and embedded in the system. The same approach can be extended to any other domain, such as the telecommunications or financial services industries, by carefully selecting the knowledge base required for that industry and the scenarios and dialogue patterns heavily used there. Thus, this work can be extended to other domains and provide a wider service to the Sinhala-speaking community.

As future work, the system can be trained to understand different dialects of the Sinhala language and to withstand ambiguous terms, and the use of such systems in noisy places like train stations or bus stands can be studied to provide a better service in those settings.

