A more intelligent test case generation approach through task models manipulation
Abstract
Ensuring that an interactive application allows users to perform their activities and reach their goals is critical to its overall usability. Indeed, the effectiveness factor of usability directly refers to this capability. Assessing effectiveness is a real challenge for usability testing, as usability tests only cover a very limited number of tasks and activities. This paper proposes an approach towards automated testing of the effectiveness of interactive applications. To this end, we rely on two main elements: an exhaustive description of users' activities and goals using task models, and the generation of scenarios (from the task models) to be tested on the application. However, the number of scenarios can be extremely large (beyond the computing capabilities of machines), and we might end up testing multiple similar scenarios. To overcome these problems, we propose strategies based on task model manipulations (e.g., manipulating task nodes, operator nodes, information...) resulting in a more intelligent test case generation approach. For each strategy, we investigate its relevance (both in terms of test case generation and in terms of validity compared to the original task models) and illustrate it with a small example. Finally, the proposed strategies are applied to a real-size case study, demonstrating their relevance and validity for testing interactive applications.
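To give a rough intuition of why scenario generation from task models explodes and how a manipulation strategy can tame it, here is a minimal sketch assuming a CTT-like task model with sequential ("enabling") and exclusive-choice operators. All class names, operators, and the pruning strategy below are hypothetical illustrations, not the paper's actual notation or algorithms.

```python
# Toy task-model representation and exhaustive scenario enumeration.
# Hypothetical names; for illustration only.
from itertools import product

class Task:
    """A leaf task (e.g., a user or interaction task)."""
    def __init__(self, name):
        self.name = name
    def scenarios(self):
        return [[self.name]]

class Enabling:
    """Sequential composition: children are performed in order."""
    def __init__(self, *children):
        self.children = children
    def scenarios(self):
        # Cartesian product of child scenarios, concatenated in order:
        # the source of combinatorial explosion.
        result = []
        for combo in product(*(c.scenarios() for c in self.children)):
            seq = []
            for part in combo:
                seq.extend(part)
            result.append(seq)
        return result

class Choice:
    """Exclusive choice: exactly one child is performed."""
    def __init__(self, *children):
        self.children = children
    def scenarios(self):
        result = []
        for c in self.children:
            result.extend(c.scenarios())
        return result

# Toy model: log in, then perform one of several commands, then log out.
model = Enabling(
    Task("login"),
    Choice(Task("search"), Task("browse"), Task("edit")),
    Task("logout"),
)
print(len(model.scenarios()))  # 3 scenarios for this small model

# A naive manipulation strategy: replace a Choice node by a single
# representative child, shrinking the scenario set while every remaining
# scenario stays valid with respect to the original model.
pruned = Enabling(Task("login"), Task("search"), Task("logout"))
print(len(pruned.scenarios()))  # 1 scenario after pruning
```

In a real-size task model, each additional choice or interleaving operator multiplies the number of scenarios, which is why node-level manipulations of the kind sketched above are needed before test case generation becomes tractable.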