
Explainable Adaptive Assistants

Deborah L. McGuinness, Tetherless World Constellation, RPI
Alyssa Glass, Stanford University
Michael Wolverton, SRI International
Paulo Pinheiro da Silva, University of Texas at El Paso
Cynthia Chang, Tetherless World Constellation, RPI

SPARK Wrapper and Database

Designed to:
- Extract an explanation-relevant snapshot of execution state, including learning provenance
- Communicate that information to the Explainer

The system uses introspective predicates to build a justification of the current task(s), then outputs the justification structure via an RDBMS. The database schema is designed for easy, scalable storage and retrieval of relevant execution and provenance information, and to be reusable for other types of reasoning about execution and action.
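The poster does not give the actual schema, so purely as an illustration, here is a minimal sketch of what such an execution-and-provenance store might look like; the table and column names are hypothetical, not the CALO/SPARK design:

```python
import sqlite3

# Hypothetical, minimal schema for execution snapshots plus learning
# provenance -- illustrative only, not the actual CALO/SPARK schema.
conn = sqlite3.connect("task_execution.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS task_instance (
    task_id   INTEGER PRIMARY KEY,
    task_name TEXT NOT NULL,                          -- e.g. 'book-travel'
    parent_id INTEGER REFERENCES task_instance(task_id),
    state     TEXT NOT NULL                           -- e.g. 'intended', 'executing'
);
CREATE TABLE IF NOT EXISTS provenance (
    prov_id   INTEGER PRIMARY KEY,
    task_id   INTEGER REFERENCES task_instance(task_id),
    source    TEXT NOT NULL,                          -- e.g. 'learned-by-demonstration'
    detail    TEXT
);
""")

# The wrapper would record a snapshot of the current execution state,
# tagged with where the procedure came from:
cur = conn.execute(
    "INSERT INTO task_instance (task_name, parent_id, state) VALUES (?, ?, ?)",
    ("book-travel", None, "executing"))
conn.execute(
    "INSERT INTO provenance (task_id, source, detail) VALUES (?, ?, ?)",
    (cur.lastrowid, "learned-by-demonstration", "procedure modified by user"))
conn.commit()
```

Keeping the snapshot in a conventional relational store, rather than inside the agent, is what lets the same records serve other kinds of reasoning about execution and action.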

Architecture

(Architecture diagram showing components: CALO GUI, Task Manager (TM), TM Wrapper, TM Explainer, Explanation Dispatcher, PML Generator, and a task database.)

Motivation

Usable adaptive assistants need to be able to explain their recommendations if users are expected to trust them. Our work on ICEE, the Integrated Cognitive Explanation Environment, provides an extensible infrastructure for supporting explanations. We aim to improve trust in learning-enabled agents by providing transparency concerning:
- Provenance
- Information manipulation
- Task processing
- Learning

The ICEE explainer includes:
- Descriptions of question types and explanation strategies
- An architecture for generating interoperable, machine-interpretable, sharable justifications of answers containing enough information to generate explanations (see the sketch below)
- Components capable of obtaining justification information from SPARK (a BDI agent architecture)
- A history of execution states as justifications
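PML itself is an OWL/RDF vocabulary, so the following is only a rough Python sketch of the kind of justification graph involved, loosely mirroring PML's conclusion/inference-step structure; the class and field names are ours, not the actual PML terms:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative sketch only: a shareable justification graph in the
# spirit of PML. Names are ours, not the PML (OWL) vocabulary.

@dataclass
class InferenceStep:
    rule: str                                     # e.g. 'task-decomposition'
    antecedents: List["NodeSet"] = field(default_factory=list)
    source: Optional[str] = None                  # provenance of this step

@dataclass
class NodeSet:
    conclusion: str                               # statement being justified
    is_inferred_by: List[InferenceStep] = field(default_factory=list)

# A tiny justification: why the agent is executing a subtask.
goal = NodeSet("intend <book-travel>")
step = InferenceStep(rule="task-decomposition", antecedents=[goal],
                     source="procedure learned by demonstration")
subtask = NodeSet("executing <reserve-flight>", is_inferred_by=[step])
```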

User Study

Interviewed 13 adaptive agent users, focusing on trust, failures, surprises, and other sources of confusion. Identified themes on trusting adaptive agents, including:
- Learning can make users feel ignored
- Users want to ask questions when they perceive failures
- Granularity of feedback is vitally important
- Users need transparency and access to knowledge provenance
- Users require explainable verification to build trust

The study also identified the question types most important to users, to motivate future work.

Dialogue

Given a PML justification of SPARK's execution, ICEE provides a dialogue interface for explaining actions. Users can start a dialogue with any of several question types, for example:

"Why are you doing <task>?"

ICEE contains several explanation strategies for each question type and, based on context and a user model, chooses one strategy, for example:

Strategy: Reveal task hierarchy
"I am trying to do <high-level-task> and <task> is one subgoal in the process."

ICEE then parses the execution justification to present the portions relevant to the query and the explanation strategy. Strategies for explaining learned and modified procedures focus on provenance information and the learning method used. ICEE also suggests follow-up queries, enabling back-and-forth dialogue between the agent and the user.
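As a hedged sketch of how question-type-to-strategy dispatch and template realization might work (the strategy names, dictionary layout, and fallback behavior here are hypothetical, not ICEE's actual API):

```python
# Hypothetical sketch: several strategies per question type, tried in
# a context-dependent order; each strategy fills a sentence template
# from the execution justification. Not ICEE's implementation.

def reveal_task_hierarchy(task, justification, user):
    parent = justification["parent_of"][task]
    return f"I am trying to do {parent} and {task} is one subgoal in the process."

def explain_provenance(task, justification, user):
    return f"{task} comes from a procedure that was {justification['provenance'][task]}."

STRATEGIES = {"why-doing": [reveal_task_hierarchy, explain_provenance]}

def answer(question_type, task, justification, user):
    for strategy in STRATEGIES[question_type]:
        try:
            return strategy(task, justification, user)
        except KeyError:   # justification lacks what this strategy needs; try next
            continue
    return "I don't have enough of the execution record to answer that."

justification = {"parent_of": {"reserve-flight": "book-travel"}, "provenance": {}}
print(answer("why-doing", "reserve-flight", justification, user=None))
# -> I am trying to do book-travel and reserve-flight is one subgoal in the process.
```

A real user model would reorder or filter the strategy list rather than simply falling through on missing data, but the template-per-strategy shape is the point of the sketch.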


McGuinness, D.L., Glass, A., Wolverton, M., and Pinheiro da Silva, P. "Explaining Task Processing in Cognitive Assistants that Learn." Proceedings of the 20th International FLAIRS Conference (FLAIRS-20), 2007.

Glass, A., McGuinness, D.L., and Wolverton, M. "Toward Establishing Trust in Adaptive Agents." Proceedings of the 2008 International Conference on Intelligent User Interfaces (IUI 2008), 2008.

Explanation Foundation

PML: Provenance, Justification, and Trust Interlingua
Inference Web Toolkit: suite of tools for generating, browsing, searching, validating, and summarizing explanations
ICEE: explanation framework for explaining cognitive agents, with a focus on task processing and learning

www.inference-web.org
(Inference Web cycle diagram: Use, Ask, Understand, Update)

Tetherless World funding from: DARPA, NSF, IARPA, ARL, Lockheed Martin, Fujitsu, SRI, IBM.