Leonardo Villalobos Arias

Is a student: No

Publications

Evaluation of a model-based testing platform for Java applications

Description:

Model-based testing (MBT) automates the design and generation of test cases from a model. The process comprises four stages: model building, test selection criteria, test case generation, and test case execution. Current tools support this process at various levels of automation, most of them covering three of the four stages. Among them is MBT4J, a platform that extends ModelJUnit with several techniques, offering a high level of automation for testing Java applications. In this study, the authors evaluate the efficacy of the MBT4J platform in terms of the number of test cases generated, errors detected, and coverage metrics. A case study was conducted using two open-source Java systems from public repositories and 15 different configurations. MBT4J automatically generated five models from the source code. It also generated up to 2025 unique test cases for one system and up to 1044 for the other, resulting in 167 and 349 failed tests, respectively. Transition and transition pair coverage reached 100% for all models. Code coverage ranged between 72% and 84% for one system and between 59% and 76% for the other. The study found that the Greedy and Random testers were the most effective at finding errors.

Publication type: Journal Article

Published in: IET Software
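For context on the kind of artifact this study exercises: MBT4J builds on ModelJUnit, where a test model is a Java class implementing the FsmModel interface, and testers such as RandomTester and GreedyTester (the two found most effective here) walk that model to generate test steps. The sketch below is illustrative only: the ToggleModel class and its states are hypothetical, and it assumes ModelJUnit 2.x's FsmModel, @Action, GreedyTester, and coverage-metric APIs.

import nz.ac.waikato.modeljunit.Action;
import nz.ac.waikato.modeljunit.FsmModel;
import nz.ac.waikato.modeljunit.GreedyTester;
import nz.ac.waikato.modeljunit.Tester;
import nz.ac.waikato.modeljunit.coverage.CoverageMetric;
import nz.ac.waikato.modeljunit.coverage.TransitionCoverage;
import nz.ac.waikato.modeljunit.coverage.TransitionPairCoverage;

/** Hypothetical two-state toggle, written as a ModelJUnit FSM model. */
public class ToggleModel implements FsmModel {
    private boolean on = false;

    // Abstract state of the model; ModelJUnit tracks transitions between these values.
    public Object getState() { return on ? "ON" : "OFF"; }

    // Resets the model (and, in a real adapter, the system under test).
    public void reset(boolean testing) { on = false; }

    // Guard methods enable or disable the matching action in the current state.
    public boolean switchOnGuard() { return !on; }

    @Action
    public void switchOn() {
        on = true;
        // A real test adapter would drive the application here and assert on its response.
    }

    public boolean switchOffGuard() { return on; }

    @Action
    public void switchOff() { on = false; }

    public static void main(String[] args) {
        // GreedyTester favours unexplored transitions; RandomTester walks the model at random.
        Tester tester = new GreedyTester(new ToggleModel());
        tester.buildGraph(); // explore the FSM so coverage maxima are known
        CoverageMetric transitions = new TransitionCoverage();
        CoverageMetric pairs = new TransitionPairCoverage();
        tester.addCoverageMetric(transitions);
        tester.addCoverageMetric(pairs);
        tester.generate(50); // generate and execute 50 test steps
        System.out.println("Transitions: " + transitions.getCoverage() + "/" + transitions.getMaximum());
        System.out.println("Transition pairs: " + pairs.getCoverage() + "/" + pairs.getMaximum());
    }
}

The TransitionCoverage and TransitionPairCoverage metrics measure the same transition and transition pair coverage reported in the abstract above; MBT4J's contribution, per the study, is automating the model-building stage that this sketch performs by hand.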

Model-based testing areas, tools and challenges: A tertiary study

Description:

Context: Model-based testing is one of the approaches most often examined by secondary studies in the area of software testing. Aggregating knowledge from secondary studies on model-based testing can be useful for both academia and industry.

Objective: The goal of this study is to characterize secondary studies in model-based testing in terms of the areas, tools, and challenges they have investigated.

Method: We conducted a tertiary study following the guidelines for systematic mapping studies. Our mapping included 22 secondary studies, of which 12 were literature surveys and 10 were systematic reviews, covering the period 1996–2016.

Results: A hierarchy of model-based testing areas and subareas was built from existing taxonomies as well as data that emerged from the secondary studies themselves. This hierarchy was then used to classify studies, tools, challenges, and their tendencies in a unified classification scheme. We found that the two most studied areas are UML models and transition-based notations, both modeling paradigms. Regarding tendencies of areas over time, we found two areas with constant activity, namely test objectives and model specification. With respect to tools, we found only five studies that compared and classified model-based testing tools. These tools have been classified into common dimensions that mainly refer to the model type and the phases of the model-based testing process they support. We reclassified all the tools into our proposed hierarchy of model-based testing areas and found that most tools were reported within the modeling paradigm area. With regard to tendencies of tools, we found that tools for testing the functional behavior of software have prevailed over time. Another finding was the shift from tools that support the generation of abstract tests to those that support the generation of executable tests. For analyzing challenges, we used six categories that emerged from the data through a grounded analysis: efficacy, availability, complexity, professional skills, investment cost & effort, and evaluation & empirical evidence. We found that most challenges were related to availability. In addition, we classified the challenges according to our hierarchy of model-based testing areas and found that most fell in the model specification area. With respect to tendencies in challenges, we found that they have shifted from the complexity of the approaches to the lack of approaches for specific software domains.

Conclusions: Only a few systematic reviews on model-based testing could be found; therefore, some areas still lack secondary studies, particularly test execution aspects, language types, model dynamics, and some modeling paradigms and generation methods. We thus encourage the community to conduct further systematic reviews and mapping studies, following known protocols and reporting procedures, in order to increase the quality and quantity of empirical studies in model-based testing.

Publication type: Journal Article

Published in: CLEI Electronic Journal