Nascimento, A.M.; Shimanuki, G.K.G.; Dias, L.A.V. Making More with Less: Improving Software Testing Outcomes Using a Cross-Project and Cross-Language ML Classifier Based on Cost-Sensitive Training. Appl. Sci. 2024, 14, 4880.
Abstract
As digitalization expands across all sectors, software defects cost the U.S. economy up to $2.41 trillion annually. High-profile incidents such as the Boeing 737 MAX 8 crashes have shown the devastating potential of these defects, underscoring the critical importance of software testing within quality assurance frameworks. However, testing is complex and resource-intensive, and exhaustive, comprehensive testing often exceeds budget constraints. This research uses a machine learning (ML) model to improve software testing decisions by pinpointing the areas most susceptible to defects, thereby optimizing the allocation of scarce resources. Previous studies have shown promising results using cost-sensitive training to refine ML models: by addressing the class imbalance common in defect prediction datasets, it reduces false negatives and improves predictive accuracy, enabling more targeted and effective testing efforts. Nevertheless, the generalizability of these models across different projects (cross-project) and programming languages (cross-language) remained untested. This study validates the model's applicability across diverse development environments by integrating datasets from distinct projects into a unified dataset and by adopting a more interpretable ML approach. The results demonstrate that ML can support software testing decisions, enabling teams to identify up to seven times more defective modules than a benchmark with the same testing effort.
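The abstract's central technique, cost-sensitive training to counter class imbalance, can be illustrated with a short sketch. The snippet below is not the authors' pipeline; it is a minimal illustration in Python assuming scikit-learn's RandomForestClassifier and its class_weight parameter, with a synthetic, deliberately imbalanced dataset standing in for real defect data. The 1:10 cost ratio, feature count, and data are illustrative assumptions, not values from the paper.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Synthetic stand-in for a defect dataset: 20 static code metrics per
# module and roughly 10% defective modules, mirroring the class
# imbalance typical of defect-prediction data such as NASA MDP.
X = rng.normal(size=(2000, 20))
signal = X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=2000)
y = (signal > np.quantile(signal, 0.90)).astype(int)  # 1 = defective

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Cost-sensitive training: penalize missed defective modules (false
# negatives) more heavily than false alarms. The 1:10 weight ratio is
# purely illustrative, not a value taken from the paper.
clf = RandomForestClassifier(n_estimators=300,
                             class_weight={0: 1, 1: 10},
                             random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

Raising the weight on the defective class pushes the forest to trade some extra false alarms for fewer missed defects, which is the behavior the abstract describes: catching more defective modules for the same testing effort.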
Keywords
Machine Learning; Imbalance; Software Defect Prediction; NASA MDP; Random Forest; Software Quality; Generalization; Cost-Sensitive; Cross-language; Cross-Project
Subject
Engineering, Safety, Risk, Reliability and Quality
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.