Conference paper, Year: 2022

Designing a transcription font for mouth actions in sign languages: the Typannot typographic system


Léa Chevrefils
Adrien Contesse
Patrick Doan
Claire Danet
Chloé Thomas
Morgane Rébulard

Abstract

Facial actions are among the least documented parameters in sign languages (SL). Yet facial signs play a key role in SL, operating either as inseparable parts of a given manual sign or carrying their own meaning independently. One could argue that a major obstacle to this research is the inherent difficulty of studying facial actions: the scale at which they occur is much smaller than that of manual signs, and the differences between them are much finer. Eight parts (jaw, lips, eyebrows, etc.), divided between the upper face [Eye Actions (EA)] and the lower face [Mouth Actions (MA)], are at play in facial actions. Most of these parts have a limited range of motion yet can convey a great deal of information. Indeed, our brains have evolved to recognize the most subtle changes in human faces (Ekman, 1984), as they carry crucial information for human interactions. While this enables us to perceive others' emotions implicitly, the conscious segmentation, organization, and analysis of such subtle movements remains a challenge.

Existing research has focused mainly on MA, as the lower part of the face plays a greater role in SL. Studies have brought to light the different linguistic values of MA: for example, Mouthings, which are voiceless articulations of words from the spoken language of the given country, and Mouth Gestures, which are mouth movements entirely independent of vocal languages and which can carry various types of values (adverbial, semantically empty, enacting, mouth activity in the context of the whole face) (Fontana, 2008; Crasborn et al., 2008). These findings reveal the importance of MA in SL; yet their study, difficult by nature, is made even more difficult by the lack of a complete and efficient transcription system. Several preliminary works (Bergman & Wallin, 2001; Sutton-Spence & Day, 2001; Ajello et al., 2001) followed a corpus-driven approach, resulting in different ways of categorizing mouth segments and positions. Some authors engineered a series of symbols to annotate a list of previously defined MA (HamNoSys, Prillwitz et al., 1989; Vogt-Svendsen, 2001; Hohenberger & Happ, 2001). To date, the most developed tool available is SignWriting (Sutton, 1995), which offers 187 different symbols to transcribe MA; these can be compounded into an emoji-like image. While the system offers an easy and intuitive way to symbolize facial expressions and emotions, it lacks granularity and fails to transcribe complex facial actions. Nor does it allow searchability, as each composition results in an image from which no data can be extracted. Lastly, the symbols are based on the visible outer shapes of the face rather than on the facial parts that act to create the expression. These features limit the capacity for deeper analysis of facial signs, of their values, and of their interactions with other SL components.

Typannot (Danet et al., 2020) is developing a complete typographic system to transcribe SL. For each pre-defined SL parameter (handshape, initial location of the upper limb, movement, and facial actions), a dedicated font, grouped into a complete type family, is currently being developed. All fonts within the Typannot system are based on the same structure. The fundamental principle upon which their graphematic formulas were designed is that the Typannot system transcribes which parts are in action in the making of a sign, rather than the resulting image observed from an outer point of view.
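To give a concrete, purely hypothetical picture of this principle, the minimal Python sketch below records a mouth action as the parts in action and their values, rather than as an image of the resulting face; because every feature is kept as data, such transcriptions remain searchable. The part and value names are invented placeholders, not the actual Typannot graphematic inventory.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PartValue:
    """One articulatory piece of information: a facial part and its value."""
    part: str    # hypothetical part names, e.g. "jaw", "lips", "cheeks"
    value: str   # hypothetical values, e.g. "open", "protruded", "puffed"

# A mouth action transcribed as the parts in action, not as an outer image.
mouth_action = [PartValue("jaw", "open"), PartValue("lips", "protruded")]

def involves(transcription, part, value=None):
    """Return True if the given part (optionally with a given value) is in action."""
    return any(pv.part == part and (value is None or pv.value == value)
               for pv in transcription)

# Because every feature is retained, a corpus of transcriptions stays searchable.
corpus = {
    "sign_001": mouth_action,
    "sign_002": [PartValue("cheeks", "puffed")],
}
print([sid for sid, ma in corpus.items() if involves(ma, "lips", "protruded")])
# -> ['sign_001']
```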
Each parameter font works using a two-layer structure: the first layer displays all the articulatory pieces of information required to make a sign [the part(s) involved and the value(s) of each part] as a string of individual characters; the second combines all of these pieces of information into a single, morphological, intuitive-to-read glyph. This two-layer system and the combination of multiple characters into glyphs are made possible through typographic engineering (a minimal sketch of this kind of composition follows the bibliographical references below). To be fully efficient, these glyphs need to be legible and scriptable by all kinds of users, from beginners to experts. They also need to offer complete searchability and data mining by retaining every feature of every sign. In this presentation, the Typannot graphematic formula, which uses the XYZ axes as a frame of reference, will be described. This spatial model allows the formula to be concise yet exhaustive in the description of any given mouth action. Then, the conception of the typographic system, from the initial sketches to the type design, will be explained. Furthermore, the type-engineering process enabling pieces of information from every part to be embedded into each of the Typannot glyphs will be demonstrated. Next, the digital interface currently being developed to allow complete accessibility and usability of the typographic system will be presented. Lastly, questions about the possible evolution brought by the use of such a system in the analysis and understanding of MA, their roles and interactions within SL, will be discussed.

Bibliographical references

Ajello, R., Mazzoni, L., & Nicolai, F. (2001). Linguistic gestures: mouthing in Italian Sign Language. International Studies on Sign Language and Communication of the Deaf, 39.
Bergman, B., & Wallin, L. (2001). A preliminary analysis of visual mouth segments in Swedish Sign Language. International Studies on Sign Language and Communication of the Deaf, 39.
Crasborn, O., van der Kooij, E., Waters, D., Woll, B., & Mesch, J. (2008). Frequency and spreading behavior of different types of mouth actions in three sign languages. Sign Language & Linguistics, 11(1), 45-67.
Danet, C., Boutet, D., Doan, P., Bianchini, C.S., & Contesse, A. (2020). Transcribing sign languages with TYPANNOT: the typographic system that retains and displays layers of information. Proceedings of Grapholinguistics in the 21st Century, 5, 1007-1035.
Ekman, P. (1984). Expression and the nature of emotion. Approaches to Emotion, 319-344.
Fontana, S. (2008). Mouth actions as gesture in sign language. Gesture, 8(1), 104-123.
Hohenberger, A., & Happ, D. (2001). The linguistic primacy of signs and mouth gestures over mouthings: evidence from language productions in German Sign Language. International Studies on Sign Language and Communication of the Deaf, 39.
Prillwitz, S., Leven, R., Zienert, H., Hanke, T., & Henning, J. (1989). HamNoSys version 2.0: Hamburg Notation System for Sign Languages: an introductory guide. International Studies on Sign Language and Communication of the Deaf, 5.
Sutton, V. (1995). Lessons in SignWriting. The Deaf Action Committee for SignWriting.
Sutton-Spence, R., & Day, L. (2001). Mouthings and mouth gestures in British Sign Language. International Studies on Sign Language and Communication of the Deaf, 39.
Vogt-Svendsen, M. (2001). A comparison of mouth gestures and mouthings in Norwegian Sign Language. International Studies on Sign Language and Communication of the Deaf, 39.
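As announced above, the combination of layer-1 characters into a single layer-2 glyph can be wired into a font with OpenType substitution rules. The Python sketch below, using the fontTools library, shows one possible way such a ligature-style composition could be set up; the font file, glyph names, and feature code are hypothetical placeholders, not the actual Typannot inventory or build process.

```python
from fontTools.ttLib import TTFont
from fontTools.feaLib.builder import addOpenTypeFeaturesFromString

# Hypothetical font file and glyph names, for illustration only.
font = TTFont("TypannotMouthAction.ttf")

FEATURES = """
feature liga {
    # Layer 1 -> layer 2: a string of articulatory characters
    # is substituted by a single composed, readable glyph.
    sub jaw_open lips_protruded by MA_jawOpen_lipsProtruded;
} liga;
"""

# Compile the feature into the font's GSUB table and save a composed version.
addOpenTypeFeaturesFromString(font, FEATURES)
font.save("TypannotMouthAction-composed.ttf")
```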
4C C057 2022-presentazione Grafematik Palaiseau.pdf (15.33 MB) Download the file
Origin: Files produced by the author(s)

Dates and versions

hal-03741122, version 1 (31-07-2022)

Identifiers

  • HAL Id: hal-03741122, version 1

Cite

Claudia S. Bianchini, Léa Chevrefils, Adrien Contesse, Patrick Doan, Claire Danet, et al.. Designing a transcription font for mouth actions in sign languages: the Typannot typographic system. G21C 2022 "Grapholinguistics in the 21st century", Jun 2022, Palaiseau, France. ⟨hal-03741122⟩
70 Views
8 Downloads
