Discipline: Computer Sciences and Information Management
Subcategory: Computer Science & Information Systems
Session: 1
Room: Park Tower 8219
John Heaps - University of Texas at San Antonio
Co-Author(s): Rodney Rodriguez, University of Texas at San Antonio, San Antonio, TX; Xiaoyin Wang, University of Texas at San Antonio, San Antonio, TX
The measurement and estimation of software system reliability is necessary for decisions about those systems, such as maintenance, adoption, and further development. Various process-based bug detection approaches and reliability models have been used. However, these approaches, while useful, have significant limitations: they require fine-grained monitoring of software processes, their estimates can be skewed when new software components introduce additional defects, and they often cannot differentiate between types of defects. We therefore propose a framework that uses a pattern-based approach to detect bug patterns in software artifacts (e.g., the code base) and uses the detected patterns to estimate the reliability of the system.
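To make "pattern-based detection over software artifacts" concrete, the following is a minimal sketch of a detector that scans a Java code base for regex-encoded bug patterns. The class name, pattern names, and regexes are illustrative assumptions, not the framework's actual detector, which per the abstract relies on established pattern-based techniques and tools.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;
import java.util.regex.*;
import java.util.stream.*;

// Hypothetical sketch: scan .java files for simple, regex-encoded bug patterns.
// Real pattern-based tools (e.g., FindBugs-style analyses) typically work on ASTs
// or bytecode; this only illustrates matching patterns against software artifacts.
public class PatternScanner {

    record BugPattern(String name, Pattern regex) {}

    // Example patterns; names and regexes are illustrative assumptions.
    static final List<BugPattern> PATTERNS = List.of(
        new BugPattern("EmptyCatchBlock", Pattern.compile("catch\\s*\\([^)]*\\)\\s*\\{\\s*\\}")),
        new BugPattern("EqualsOnString", Pattern.compile("==\\s*\"")));

    public static Map<String, Integer> scan(Path projectRoot) throws IOException {
        Map<String, Integer> counts = new HashMap<>();
        try (Stream<Path> files = Files.walk(projectRoot)) {
            for (Path file : files.filter(p -> p.toString().endsWith(".java")).toList()) {
                String source = Files.readString(file);
                for (BugPattern p : PATTERNS) {
                    Matcher m = p.regex().matcher(source);
                    while (m.find()) {
                        counts.merge(p.name(), 1, Integer::sum);
                    }
                }
            }
        }
        return counts; // pattern name -> number of detected instances
    }
}
```

A per-pattern count of detected instances is the kind of output the downstream components of such a framework would plausibly consume.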
The framework consists of four major components: the Learning Engine, the Code Analyzer (pattern detection), the Test Integrator, and the Reliability Model. The Learning Engine mines online bug repositories and project hosting websites to identify types of bugs, and then estimates the probability that a pattern relates to a software failure. The Code Analyzer concretizes these bug types into patterns and detects them in software artifacts using pattern-based techniques and tools. The Test Integrator estimates the probability that each pattern instance is activated during program execution. The Reliability Model links the types of bugs to abstract software reliability aspects, and then estimates a reliability score based on the aspects of the detected patterns. To evaluate the effectiveness of our framework, we applied our model to 236 Java projects and found that most projects had high overall reliability, with scores greater than 0.8 on a scale where 0 is the lowest score and 1 is the highest. However, approximately 20% of projects still had reliability scores lower than 0.5. We further investigated the validity of our scores by comparing the reliability of projects by Google and Square against average open source projects. As expected, both the Google and Square projects scored much higher (above 0.8) than the average open source project (approximately 0.5). Our future goals include: investigating machine learning techniques to help automatically generate bug patterns, which are currently produced manually in a slow and tedious process; adding automatic test case generation and execution, as the current test integration is preliminary; and integrating positive code pattern detection, as only negative code patterns are currently utilized.
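The abstract does not give the Reliability Model's exact formula. As a hedged illustration of how per-pattern failure probabilities (from the Learning Engine) and activation probabilities (from the Test Integrator) could combine into a 0-to-1 score, the independent-failure product below is one plausible aggregation; the class, field names, and combination rule are assumptions, not the authors' model.

```java
import java.util.List;

// Hypothetical sketch of aggregating detected pattern instances into a reliability
// score in [0, 1]. The combination rule (independent-failure product) is an assumption;
// the abstract does not specify the actual Reliability Model.
public class ReliabilityEstimator {

    /**
     * failureProb: estimated probability the pattern relates to a failure (Learning Engine).
     * activationProb: estimated probability the instance is activated at runtime (Test Integrator).
     */
    record PatternInstance(String patternName, double failureProb, double activationProb) {}

    // Treat each detected instance as an independent potential failure source:
    // reliability = product over instances of (1 - P(failure) * P(activation)).
    static double reliability(List<PatternInstance> instances) {
        double score = 1.0;
        for (PatternInstance pi : instances) {
            score *= 1.0 - pi.failureProb() * pi.activationProb();
        }
        return score; // 1.0 = most reliable, approaches 0.0 as risky instances accumulate
    }

    public static void main(String[] args) {
        List<PatternInstance> detected = List.of(
            new PatternInstance("EmptyCatchBlock", 0.30, 0.60),
            new PatternInstance("EqualsOnString", 0.50, 0.20));
        System.out.printf("Estimated reliability: %.3f%n", reliability(detected));
        // (1 - 0.18) * (1 - 0.10) = 0.82 * 0.90 = 0.738 under these illustrative numbers
    }
}
```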
References:
Nathaniel Ayewah, David Hovemeyer, J. David Morgenthaler, John Penix, William Pugh, Using Static Analysis to Find Bugs, IEEE Software, v.25 n.5, p.22-29, September 2008.
Peter Mell, Karen Scarfone, Sasha Romanosky, Common Vulnerability Scoring System, IEEE Security and Privacy, v.4 n.6, p.85-89, November 2006.
Guangtai Liang, Jian Wang, Shaochun Li, Rong Chang, PatBugs: A Pattern-Based Bug Detector for Cross-platform Mobile Applications, Proceedings of the 2014 IEEE International Conference on Mobile Services, p.84-91, June 27-July 02, 2014.
Funder Acknowledgement(s): The MSI STEM Research & Development Consortium (MSRDC, Award #D01_W911SR14-2-0001-0012); The National Science Foundation (NSF, Award #173620); CREST Center for Security and Privacy Enhanced Cloud Computing (C-SPECC, Award #1736209)
Faculty Advisor: Jianwei Niu, jianwei.niu@utsa.edu
Role: I was involved in most aspects of the research, including the collection of types of bugs, production of bug patterns, detection of bug patterns in code, and the design of the measurement model.