UCAmI Presentation Dec.2013, Guanacaste, Costa Rica


Understanding Movement and Interaction: an Ontology for Kinect-based 3D Depth Sensors


Natalia Díaz Rodríguez¹, Robin Wikström¹, Johan Lilius¹, Manuel Pegalajar Cuéllar² and Miguel Delgado Calvo-Flores²

¹Turku Centre for Computer Science (TUCS), Dept. of IT, Åbo Akademi University (Finland); ²Dept. of Computer Science and Artificial Intelligence, University of Granada (Spain)

7th International Conference on Ubiquitous Computing & Ambient Intelligence (UCAmI 2013) 5.12.13

Introduction

§ A crucial & challenging task in AmI: human behaviour modelling and recognition
§ Video-based monitoring techniques. Applications:
  – Technology for detection of in-home activities (posture, gestures)
  – Elderly care (fall detection)
  – Exercise monitoring (rehabilitation)

⇒ Can be inaccurate, compromise privacy, become intrusive
⇒ No common scheme for skeleton data
⇒ Need for a device-independent ontology for 3D depth sensors

Introduction

§ WHY Semantic Technologies & Ontologies?
  – Formulate relationships between concepts
  – Independent knowledge sharing, minimizing redundancy
  – Extensible, device/platform independent
  – Context-awareness: modelling & reasoning for automatic inference
§ Other useful context-aware info: stress, heart rate, sleep quality, mood, etc.

Related Work

§ Exercise applications based on 3D depth cameras and multimodal features (gesture + spoken commands):
  – Virtual Social Gyms
  – Eyes-Free Yoga
  – Kinect@Home (crowdsourcing 3D environment datasets)
  – Kinect Fusion (real-time 3D reconstruction and interaction)
  – Kinect-based robots mapping indoor environments to 3D models
  – Ontology-based annotation of images & semantic maps
§ BML (Behaviour Markup Language)

[http://mobihealthnews.com/22351/slideshow-7-startups-using-microsoft-kinect-for-online-physical-therapy/]


Proposal: modelling movement and interaction

Aim:
§ Combine data-driven computer vision with knowledge-driven semantics to obtain higher-level, more meaningful info
  ⇒ Semantically annotate physical movement & interaction to enable automatic knowledge reasoning
  – E.g. provide feedback (quality & frequency) on exercises to the patient + physiotherapist
§ Gathering sensor info allows semantic queries for further knowledge reasoning
  – E.g. long-term evolution of back posture
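The "semantic query over gathered sensor info" idea can be sketched in a few lines. This is not the authors' ontology: the observation names, properties and values below are invented for illustration, and plain Python tuples stand in for RDF triples.

```python
from datetime import date

# Hypothetical back-posture observations stored as (subject, predicate, object)
# triples, in the spirit of semantic annotation of sensor data.
triples = [
    ("obs1", "type", "BackPostureObservation"),
    ("obs1", "date", date(2012, 12, 5)),
    ("obs1", "flexionAngleDeg", 12.0),
    ("obs2", "type", "BackPostureObservation"),
    ("obs2", "date", date(2013, 12, 5)),
    ("obs2", "flexionAngleDeg", 21.0),
]

def values(subject, predicate):
    """Return all objects matching a subject/predicate pair."""
    return [o for s, p, o in triples if s == subject and p == predicate]

def posture_trend():
    """Order back-posture observations by date; report the angle change."""
    obs = [s for s, p, o in triples
           if p == "type" and o == "BackPostureObservation"]
    obs.sort(key=lambda s: values(s, "date")[0])
    first = values(obs[0], "flexionAngleDeg")[0]
    last = values(obs[-1], "flexionAngleDeg")[0]
    return last - first  # positive => back less straight than before

print(posture_trend())  # → 9.0
```

In the actual system such a query would be a SPARQL query against the OWL 2 ontology rather than a list comprehension.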

Ontology features

§ Kinect Sensor
§ 3D Volume
§ Audio (speech recognition engines)
§ Tracking Modes (Default/Seated – 2 out of 6 users)
§ Gestures (grip, release, push, scroll)
§ Interaction Controls (video, images, text)

Ontology features

§ Object interaction (Kinect Fusion API)
  – User-object interactions (grab, release, touch, click, etc.)
  – Hand state (interactive, gripping, pressing) and arm state (primary)
§ Body Movement (rotate, bend, extend, elevate): clockwise, direction or body side
  – E.g. RotateWristClockwise, ElevateFootFront, LeftBodyPart
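A sketch of how composite movement classes such as RotateWristClockwise could be generated from atomic building blocks; the vocabularies below are illustrative subsets, not the ontology's full class lists.

```python
# Illustrative vocabularies (subsets; the real ontology has many more terms).
ACTIONS = {"Rotate", "Bend", "Extend", "Elevate"}
BODY_PARTS = {"Wrist", "Foot", "Knee", "Arm"}
MODIFIERS = {"Clockwise", "CounterClockwise", "Front", "Back", "Left", "Right"}

def movement_class(action, part, modifier):
    """Build a CamelCase movement-class name, validating each component."""
    assert action in ACTIONS and part in BODY_PARTS and modifier in MODIFIERS
    return f"{action}{part}{modifier}"

print(movement_class("Rotate", "Wrist", "Clockwise"))  # → RotateWristClockwise
print(movement_class("Elevate", "Foot", "Front"))      # → ElevateFootFront
```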

7

Ontology features

§  Skeleton tracking (bone joint rotations + bone orientations)
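Joint rotations are derived from tracked 3D joint positions. A minimal sketch of that computation (standard vector geometry, not the Kinect SDK's own API): the angle at a joint is the angle between the two bone vectors meeting there.

```python
import math

def joint_angle_deg(a, b, c):
    """Angle at joint b, in degrees, between bones b->a and b->c.
    a, b, c are 3D joint positions as (x, y, z) tuples."""
    v1 = tuple(ai - bi for ai, bi in zip(a, b))
    v2 = tuple(ci - bi for ci, bi in zip(c, b))
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Shoulder, elbow and wrist on one vertical line => fully extended arm.
shoulder, elbow, wrist = (0.0, 0.5, 0.0), (0.0, 0.25, 0.0), (0.0, 0.0, 0.0)
print(round(joint_angle_deg(shoulder, elbow, wrist)))  # → 180
```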


Exercises & Workouts


Excerpt of classes, data & object properties


Examples of use


Example 1: Defining basic movements (Stand, BendDown, TwistRight, MoveObject, etc.).

Example 2: When defining e.g. a SitStandExercise workout, the number of series done over time, as well as the exercise quality, can be measured and compared with predefined medical guidelines to give feedback.
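The repetition counting of Example 2 can be sketched over a stream of recognised postures. The posture labels and the guideline value are invented for illustration; they are not from the paper or any medical source.

```python
def count_sit_stand_reps(postures):
    """Count completed sit->stand repetitions: each Sit that is later
    followed by a Stand counts as one repetition."""
    reps, sitting = 0, False
    for p in postures:
        if p == "Sit":
            sitting = True
        elif p == "Stand" and sitting:
            reps += 1
            sitting = False
    return reps

# A hypothetical recognised-posture stream from one exercise session.
session = ["Stand", "Sit", "Stand", "Sit", "Sit", "Stand", "Sit"]
reps = count_sit_stand_reps(session)
GUIDELINE_REPS = 10  # illustrative target, not a medical value
print(reps, "reps;", "below target" if reps < GUIDELINE_REPS else "target met")
```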

Examples of use


Example 3: Historic analysis can monitor posture quality over time. E.g. a back less straight than one year ago can be notified, to correct or prevent it in time.

Example 4: An office worker can be notified when his back and neck are not straight, or when he has been sitting for too long.
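The "sitting for too long" notification of Example 4 can be sketched as a sliding streak over per-minute posture samples. The threshold and sampling period are made-up illustration values.

```python
def sitting_alerts(samples, max_sitting_s=3600, sample_period_s=60):
    """samples: per-minute posture labels ('Sitting', 'Standing', ...).
    Yields the sample index each time continuous sitting exceeds the
    threshold, then resets the streak after notifying."""
    streak = 0
    for i, posture in enumerate(samples):
        streak = streak + 1 if posture == "Sitting" else 0
        if streak * sample_period_s > max_sitting_s:
            yield i
            streak = 0

# Two sitting stretches of just over an hour, separated by one standing break.
samples = ["Sitting"] * 61 + ["Standing"] + ["Sitting"] * 61
print(list(sitting_alerts(samples)))  # → [60, 122]
```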

Implementation

§ Protégé, OWL 2
§ Skeleton tracking: Kinect for Windows SDK (C#)
  – Kinect NUI, Kinect Interaction, Fusion and Audio modules
§ NeOn ontology engineering methodology (reuse of ontology resources, requirements specification, development of required scenarios, and dynamic ontology evolution)
  – Spatial Relations Ontology (contains, disjoint, equals, overlaps)
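The four reused spatial relations can be sketched operationally. Evaluating them on axis-aligned 3D bounding boxes is my assumption for illustration; the ontology itself only declares the relations, not this geometry.

```python
def relation(a, b):
    """Classify the spatial relation of box a to box b.
    Boxes are ((xmin, ymin, zmin), (xmax, ymax, zmax)) corner pairs."""
    (alo, ahi), (blo, bhi) = a, b
    if alo == blo and ahi == bhi:
        return "equals"
    # a contains b: b lies within a on every axis.
    if all(l1 <= l2 and h2 <= h1 for l1, l2, h1, h2 in zip(alo, blo, ahi, bhi)):
        return "contains"
    # Disjoint: the boxes are separated on at least one axis.
    if any(h1 < l2 or h2 < l1 for l1, l2, h1, h2 in zip(alo, blo, ahi, bhi)):
        return "disjoint"
    return "overlaps"

# Hypothetical scene objects.
room  = ((0, 0, 0), (5, 5, 3))
table = ((1, 1, 0), (2, 2, 1))
chair = ((4, 4, 0), (6, 5, 1))
print(relation(room, table))   # → contains
print(relation(table, chair))  # → disjoint
print(relation(room, chair))   # → overlaps
```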


Conclusions

§ OWL 2 ontology (ALC DL expressivity): 164 classes, 53 object properties, 58 data properties, 93 individuals
§ Exercise & Workout sub-ontology registers performance quality evolution
§ Abstract atomic gestures ⇒ incremental, fine- and coarse-grained activity recognition

Conclusions

§ Validation with physiotherapists' exercises (on-going)
§ Combining computer vision with semantic models can enhance:
  – context-awareness
  – common understanding
  – recognition accuracy
  – trust and data provenance

Kinect Ontology: http://users.abo.fi/rowikstr/KinectOntology/

Future Directions

§ Tackling feedback
§ Gesture Definition Markup Language (GDML)
§ Large rule dataset scalability + performance (reasoning, querying/updating/subscribing)
§ Fuzzy rules to tackle imprecision, vagueness & uncertainty
  – Ease looseness in the model and facilitate user interaction (linguistic labels for natural-language customization)
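The fuzzy linguistic labels mentioned above can be sketched with triangular membership functions. The label names and angle breakpoints below are invented for illustration, not taken from the paper.

```python
def triangular(x, a, b, c):
    """Membership degree of x in a triangular fuzzy set with
    feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic labels for back flexion angle (degrees).
LABELS = {
    "straight":      lambda x: triangular(x, -1, 0, 15),
    "slightly_bent": lambda x: triangular(x, 5, 20, 35),
    "bent":          lambda x: triangular(x, 25, 50, 90),
}

def linguistic_label(angle_deg):
    """Return the label with the highest membership for the given angle."""
    return max(LABELS, key=lambda name: LABELS[name](angle_deg))

print(linguistic_label(8))   # → straight
print(linguistic_label(22))  # → slightly_bent
```

Such labels let feedback be phrased in natural language ("your back is slightly bent") instead of raw joint angles.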


Future Directions

§ Integration into:
  A) the new M3 distributed architecture (low-power distributed processing)
  B) the Philips PHL (Personal Health Labs) platform (Atom board; future: ARM)

Future Directions

§ On-going: orthopedic rehabilitation exercises
  – Hip, shoulder, knee post-surgery
  – Cardiac
§ Record + train new users' patterns
  – Hip Extension
  – Hip Abduction
  – Sit-Stand
  – Knee Extension
  – Knee Abduction
§ Local, distributed, low-power M3 RDF-store


Thank you for your attention!

Natalia Díaz Rodríguez ndiaz@abo.fi

Embedded Systems Lab. Department of Information Technologies Åbo Akademi University, Turku, Finland

TUCS (Turku Centre for Computer Science)

Department of Computer Science and Artificial Intelligence

University of Granada, Spain
