A distributed bug analyzer based on user-interaction features for mobile apps

Title: A distributed bug analyzer based on user-interaction features for mobile apps
Publication Type: Journal Article
Year of Publication: 2017
Authors: Méndez-Porras, A., Méndez-Marín, G., Tablada-Rojas, A., Nieto Hidalgo, M., García-Chamizo, J. Manuel, Jenkins, M., Martínez, A.
Journal: Journal of Ambient Intelligence and Humanized Computing
Volume: 8
Pagination: 579–591
Date Published: 08/2017
ISSN: 1868-5145
Keywords: Automated testing, Digital imaging processing, Distributed bug analyzer, Interest points, User-interaction features
Abstract

Developers must invest significant effort and attention in the software development process to deliver quality applications to users. Software testing and automation play a strategic role in ensuring the quality of mobile applications. This paper proposes and evaluates a Distributed Bug Analyzer based on user-interaction features that uses digital image processing to find bugs. Our Distributed Bug Analyzer detects bugs by comparing the similarity between images taken before and after a user-interaction feature occurs. An interest point detector and descriptor is used for image comparison. To evaluate the Distributed Bug Analyzer, we conducted a case study with 38 randomly selected mobile applications. First, we identified user-interaction bugs by manually testing the applications. Images were captured before and after applying each user-interaction feature. Then, image pairs were processed (using SURF) to obtain interest points, from which a similarity percentage was computed to identify the presence of bugs. We used a Master Computer, a Storage Test Database, and four Slave Computers to evaluate the Distributed Bug Analyzer. We performed 360 tests of user-interaction features in total. We found 79 bugs when manually testing user-interaction features, and 69 bugs when using digital image processing with a similarity threshold fixed at 92.5%. The Distributed Bug Analyzer evenly distributed the tests pending in the Storage Test Database among the Slave Computers: Slave Computers 1, 2, 3, and 4 processed 21, 20, 23, and 36% of the image pairs, respectively.
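To make the detection step concrete, below is a minimal sketch (not the authors' implementation) of how a before/after screenshot pair could be compared with SURF interest points, using Python and OpenCV's contrib module. The hessianThreshold value, Lowe's ratio of 0.7, and the similarity formula (good matches over detected interest points) are illustrative assumptions; only the 92.5% threshold comes from the paper, and treating low similarity as a bug signal is likewise an assumption that would depend on the user-interaction feature under test.

    import cv2

    SIMILARITY_THRESHOLD = 92.5  # percent; the threshold fixed in the paper's evaluation

    def similarity_percentage(before_path, after_path):
        """Compare two screenshots via SURF interest points; return a score in 0-100."""
        before = cv2.imread(before_path, cv2.IMREAD_GRAYSCALE)
        after = cv2.imread(after_path, cv2.IMREAD_GRAYSCALE)

        # SURF lives in the opencv-contrib xfeatures2d module and is patented;
        # it must be enabled in the OpenCV build for this call to exist.
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
        kp_before, des_before = surf.detectAndCompute(before, None)
        kp_after, des_after = surf.detectAndCompute(after, None)
        if des_before is None or des_after is None:
            return 0.0  # no interest points detected in one of the images

        # Match descriptors and keep only matches that pass Lowe's ratio test.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        pairs = matcher.knnMatch(des_before, des_after, k=2)
        good = [m for m, n in (p for p in pairs if len(p) == 2)
                if m.distance < 0.7 * n.distance]

        # Illustrative measure: fraction of interest points with a good match.
        return 100.0 * len(good) / max(len(kp_before), len(kp_after))

    def detect_bug(before_path, after_path):
        # Sketch assumption: a pair whose similarity falls below the
        # threshold is flagged as a potential user-interaction bug.
        return similarity_percentage(before_path, after_path) < SIMILARITY_THRESHOLD

In the paper's setup, a comparison of this kind would run on each Slave Computer over the image pairs pending in the Storage Test Database.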

URL: https://doi.org/10.1007/s12652-016-0435-7
DOI: 10.1007/s12652-016-0435-7