Today's smartphones offer a wide variety of features and embed a large number of sensors (accelerometer, touchscreen, light sensor, orientation sensor, etc.). Originally intended to provide context awareness, these sensors also enable new types of interaction, such as shaking the phone or changing its orientation. Such interactions reduce the limitations of mobile phones and pave the way for a broad expansion of multimodal mobile interaction. Unfortunately, the current context of mobile software development makes building multimodal applications difficult. In this thesis, we aim to help developers cope with this difficulty by providing a model-based solution that facilitates the development of multimodal mobile interfaces. We adopt the principles of Model-Driven Engineering (MDE), which is particularly well suited to this context. Our proposal includes the M4L (Mobile MultiModality Modeling Language) modeling language, used to model (input/output) multimodal interactions, and the MIMIC (MobIle MultImodality Creator) framework, which enables the graphical modeling and automatic generation of multimodal mobile interfaces. Our approach respects the main criteria of MDE in order to define an efficient model-based approach. The results of our evaluations suggest that our approach facilitates the development of sensor-based multimodal mobile interfaces. With these contributions, we aim to ease the adoption of multimodal applications and to benefit from their advantages.
Chair (Examiner): M. Lionel SEINTURIER - Université Lille 1
Reviewers: Mme Laurence NIGAY - Université Joseph Fourier Grenoble 1; M. Yacine BELLIK - Université Paris Sud
Examiner: Mme Marianne HUCHARD - Université Montpellier 2
Supervisors: M. José ROUILLARD - Université Lille 1; M. Jean-Claude TARBY - Université Lille 1
Thesis prepared in the NOCE team, defended on 27/10/2014.