Cloud Semantic-based Dynamic Multimodal Platform for Building mHealth Context-aware Services
Abstract
Nowadays, users wish to access applications from a wide variety of devices (PC, tablet, smartphone, set-top box, etc.) in situations involving various interactions and modalities (mouse, touch screen, voice, gesture detection, etc.). At home, users interact with many devices and access many multimedia-oriented documents (hosted on local drives, on cloud storage, through online streaming, etc.) in various situations, sometimes with multiple devices at the same time. The diversity and heterogeneity of user profiles and service sources can be a barrier to discovering the available service sources, which may come from anywhere in the home or the city. The objective of this paper is to propose a meta-level architecture that raises the level of abstraction of context concepts for heterogeneous profiles and service sources through a top-level ontology. We focus in particular on context-aware mHealth applications and propose an ontology-based architecture, OntoSmart (a top-ONTOlogy SMART), which provides adapted services that help users broadcast multimedia documents and use them with interactive services, in order to support keeping elderly people at home while respecting their preferences. To validate our proposal, we rely on Semantic Web, Cloud and middleware technologies by specifying and matching OWL profiles, and we experiment with their usage on several platforms.
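To make the idea of specifying and matching profiles against service sources more concrete, the following is a minimal sketch using Python and rdflib. The namespace, class names (UserProfile, ServiceSource) and properties (prefersModality, supportsModality) are hypothetical placeholders, not the actual OntoSmart vocabulary; the sketch only illustrates how a user profile expressed in RDF/OWL could be matched to compatible service sources with a SPARQL query.

```python
from rdflib import Graph, Namespace, RDF, URIRef

# Hypothetical namespace standing in for the OntoSmart top-level ontology
OS = Namespace("http://example.org/ontosmart#")

g = Graph()
g.bind("os", OS)

# A user profile: an elderly user at home who prefers voice interaction
user = URIRef("http://example.org/users/alice")
g.add((user, RDF.type, OS.UserProfile))
g.add((user, OS.prefersModality, OS.Voice))
g.add((user, OS.locatedIn, OS.Home))

# Two candidate service sources, each declaring the modality it supports
tv_service = URIRef("http://example.org/services/tvStreaming")
g.add((tv_service, RDF.type, OS.ServiceSource))
g.add((tv_service, OS.supportsModality, OS.Touch))

voice_service = URIRef("http://example.org/services/voiceReminder")
g.add((voice_service, RDF.type, OS.ServiceSource))
g.add((voice_service, OS.supportsModality, OS.Voice))

# Match service sources whose supported modality fits the user's preference
query = """
SELECT ?service WHERE {
    ?user a os:UserProfile ;
          os:prefersModality ?m .
    ?service a os:ServiceSource ;
             os:supportsModality ?m .
}
"""
for row in g.query(query, initBindings={"user": user}):
    print(row.service)  # -> http://example.org/services/voiceReminder
```

In a full platform, such matching would be driven by the top-level ontology and richer context (location, device capabilities, health constraints) rather than a single preference property; this sketch only shows the basic pattern of aligning profile and service descriptions over shared concepts.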