Conference paper, Year: 2017

Ontology for a voice transcription of OpenStreetMap data: the case of space apprehension by visually impaired persons

Abstract

In this paper, we propose a vocal ontology of OpenStreetMap data for the apprehension of space by visually impaired people. The platform, based on produsage, gives data producers the freedom to choose the descriptors of geocoded locations. Unfortunately, this freedom, also known as folksonomy, complicates subsequent data searches. We address this issue with a simple but usable method to extract data from OSM databases and deliver them to visually impaired people through Text-To-Speech technology. We focus on helping people with visual disabilities to plan their itinerary, to comprehend a map by querying the computer, and to obtain information about the surrounding environment in a mono-modal human-computer dialogue.
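To make the pipeline described in the abstract concrete, the sketch below shows one possible way to extract OSM data around a location and read it aloud. It is not the authors' implementation: the Overpass API endpoint, the pyttsx3 Text-To-Speech library, and the example coordinates are assumptions for illustration, and the ontology layer that mediates the folksonomy tags is not shown.

```python
# Minimal sketch (assumption, not the paper's system): query OpenStreetMap data
# around a point via the public Overpass API, then speak the results with a
# Text-To-Speech engine, as in the mono-modal dialogue the abstract describes.
import requests
import pyttsx3

OVERPASS_URL = "https://overpass-api.de/api/interpreter"  # public endpoint (assumption)

def nearby_amenities(lat, lon, radius_m=200):
    """Return (name, amenity) pairs for named OSM amenity nodes near (lat, lon)."""
    query = f"""
    [out:json];
    node(around:{radius_m},{lat},{lon})["amenity"]["name"];
    out body;
    """
    response = requests.get(OVERPASS_URL, params={"data": query}, timeout=30)
    response.raise_for_status()
    elements = response.json().get("elements", [])
    return [(e["tags"]["name"], e["tags"]["amenity"]) for e in elements]

def speak_surroundings(lat, lon):
    """Read a short description of the surrounding environment aloud."""
    engine = pyttsx3.init()
    for name, amenity in nearby_amenities(lat, lon):
        engine.say(f"Nearby {amenity}: {name}")
    engine.runAndWait()

if __name__ == "__main__":
    # Example coordinates (central London, where the conference took place)
    speak_surroundings(51.5074, -0.1278)
```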
Main file
waset_Londres.pdf (603.58 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01533064, version 1 (06-06-2017)

Licence

Public domain

Identifiers

  • HAL Id: hal-01533064, version 1

Cite

Said Boularouk, Didier Josselin, Eitan Altman. Ontology for a voice transcription of OpenStreetMap data: the case of space apprehension by visually impaired persons. World Academy of Science, Engineering and Technology, May 2017, London, United Kingdom. ⟨hal-01533064⟩
263 views
236 downloads
