
A Cognitive Navigation Assistant for the Blind
Kris Kitani, Manuela Veloso, Chieko Asakawa, Dragan Ahmetovic, Hernisa Kacorri, Hironobu Takagi

Motivation

• For individuals with visual impairments, navigation in a new environment is challenging, since much of the visual information is not accessible

• Our goal is to provide guidance to visually impaired users in unexplored environments using machine perception and co-robot interaction

• The technologies developed in our proposal will be leveraged to help visually impaired users build a cognitive map of the environment

• A cognitive map is a mental model of the environment, essential for independent mobility, which depends on understanding:

• Geometrical structure of the environment (e.g. pathways, doors and stairs)

• Functional attributes of the surroundings (e.g. places to eat, rest or socialize)

Cyber-physical Cognitive Assistant

• We develop a cognitive assistant that augments human cognitive capabilities, in particular for people with visual impairments

• The cognitive assistant enables contextual hyper-awareness and guidance through a portfolio of sensing modalities and algorithms

• We focus on a cyber-physical instantiation of a navigation assistant for visually impaired users, through two concrete manifestations (see the code sketch after the two lists below):

Compact Wearable Interface

• Smartphone interface that relies on existing consumer devices

• Audio feedback facilitates the creation of a cognitive map

• Highly accessible through audio-based interaction paradigms

Embodied Robotic Interface

• Hardware platform that implements a co-robot motion model

• Lessens the cognitive load of the user through haptic feedback

• Aware of situational context and the user's preferential dynamics
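To make the relationship between the two manifestations concrete, here is a minimal Python sketch of one way a shared guidance core could drive both front ends. All names (GuidanceCue, GuidanceInterface, the print placeholders) are illustrative assumptions, not the project's actual API; a real system would call a text-to-speech engine and the robot's haptic controller.

    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass
    class GuidanceCue:
        """One navigation instruction, e.g. 'turn left toward the door in 3 m'."""
        heading_deg: float  # relative turn angle, positive = right
        distance_m: float   # distance to the next waypoint
        description: str    # human-readable landmark description

    class GuidanceInterface(ABC):
        """Front end shared by the wearable and robotic assistants."""
        @abstractmethod
        def convey(self, cue: GuidanceCue) -> None: ...

    class WearableAudioInterface(GuidanceInterface):
        """Smartphone manifestation: renders cues as spoken audio."""
        def convey(self, cue: GuidanceCue) -> None:
            # Placeholder for a text-to-speech call on the phone.
            print(f"[speech] in {cue.distance_m:.0f} m, turn "
                  f"{cue.heading_deg:+.0f} deg: {cue.description}")

    class RobotHapticInterface(GuidanceInterface):
        """Co-robot manifestation: renders cues as handle forces."""
        def convey(self, cue: GuidanceCue) -> None:
            # Placeholder for a command to the robot's haptic handle.
            print(f"[haptic] pull {cue.heading_deg:+.0f} deg for "
                  f"{cue.distance_m:.1f} m ({cue.description})")

    cue = GuidanceCue(heading_deg=-90, distance_m=3, description="door to the cafeteria")
    for frontend in (WearableAudioInterface(), RobotHapticInterface()):
        frontend.convey(cue)

The point of the shared abstraction is that the perception portfolio below can produce one stream of cues, and each interface decides only how to render them.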

Portfolio of Algorithms for Context Awareness

• We propose a portfolio of algorithms, accessed by both the wearable and robotic interfaces, to make sense of the user's surroundings

• Our goal is to provide the user with information about the knowledge landscape of the environment, in particular:

Location of the user in the environment

• Accurate localization (sketched below) that relies on:

• Computer vision-based algorithms

• Bluetooth low energy (BLE) beacons
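As a concrete illustration of the BLE component, below is a minimal sketch of weighted-centroid localization from beacon RSSI readings. The beacon map, transmit power, and path-loss exponent are illustrative assumptions; the deployed system may use a different estimator, e.g. fingerprinting or a filter that fuses BLE with the vision-based algorithms.

    # Assumed beacon map: beacon ID -> (x, y) position in meters (illustrative).
    BEACONS = {
        "b1": (0.0, 0.0),
        "b2": (10.0, 0.0),
        "b3": (5.0, 8.0),
    }

    def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_n=2.0):
        """Invert the log-distance model: rssi = tx_power - 10 * n * log10(d)."""
        return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_n))

    def weighted_centroid(readings):
        """Estimate (x, y) as a centroid of beacons, weighted by 1/distance."""
        wx = wy = wsum = 0.0
        for beacon_id, rssi in readings.items():
            x, y = BEACONS[beacon_id]
            w = 1.0 / max(rssi_to_distance(rssi), 0.1)  # nearer beacons weigh more
            wx += w * x
            wy += w * y
            wsum += w
        return wx / wsum, wy / wsum

    # Strongest signal from b1, so the estimate is pulled toward (0, 0).
    print(weighted_centroid({"b1": -55.0, "b2": -75.0, "b3": -80.0}))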

Landmarks of the environment

• Collect first-person navigation videos

• Identify important visual landmarks

• Use identified landmarks for guidance (see the sketch after the figure below)

[Figure: vertical motion amplitude over frames, with motion vectors overlaid when seeing landmarks; sample frames at t=550, 900, 1100, and 2000; smoothing with SD = 1/30 s, 1 s, and 3 s]
Functional attributes of the environment

• Collect video examples of activities

• Map locations to possible activities

• Predict environment “Action Maps” (sketched below)
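As one way to make “Action Maps” concrete, here is a minimal sketch that turns (location, activity) observations, e.g. extracted from activity videos, into a grid of activity probabilities. The grid resolution, labels, and toy data are illustrative assumptions; the actual prediction method, which must generalize to unseen locations from visual cues, is richer than this counting scheme.

    from collections import Counter, defaultdict

    CELL_SIZE_M = 1.0  # grid resolution in meters (illustrative)

    def to_cell(x, y):
        """Quantize a metric position to a grid cell."""
        return (int(x // CELL_SIZE_M), int(y // CELL_SIZE_M))

    def build_action_map(observations):
        """Count activities seen at each cell, then normalize to probabilities."""
        counts = defaultdict(Counter)
        for x, y, activity in observations:
            counts[to_cell(x, y)][activity] += 1
        return {cell: {act: n / sum(c.values()) for act, n in c.items()}
                for cell, c in counts.items()}

    # Toy observations: (x, y, activity label) from first-person activity videos.
    obs = [(2.1, 3.0, "eat"), (2.4, 3.2, "eat"), (2.3, 3.4, "socialize"),
           (8.0, 1.0, "rest"), (8.2, 1.1, "rest")]
    action_map = build_action_map(obs)
    print(action_map[to_cell(2.2, 3.1)])  # approx. {'eat': 0.67, 'socialize': 0.33}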

Goals and Broader Impact

• We aim to improve the quality of life for people with visual impairments through sensory and cognitive assistance

• In the long term, we envision an approach towards human augmentation through machine intelligence

• This work will shed light on the automatic acquisition of environmental knowledge using machine perception

• We investigate how that information can be conveyed through an embodied co-robot or a wearable assistant