IEEE International Conference on Software Quality, Reliability, and Security

Cross-Entropy: A New Metric for Software Defect Prediction

Abstract

Defect prediction is an active topic in software quality assurance that can help developers find potential bugs and make better use of limited resources. To improve prediction performance, this paper introduces cross-entropy, a common measure for natural language, as a new code metric for defect prediction tasks and proposes a framework called DefectLearner for this process. We first build a recurrent neural network language model to learn regularities of source code from a software repository. Based on the trained model, the cross-entropy of each component can be calculated. To evaluate its discriminative power for defect-proneness, cross-entropy is compared with 20 widely used metrics on 12 open-source projects. The experimental results show that the cross-entropy metric is more discriminative than 50% of the traditional metrics. In addition, we combine cross-entropy with traditional metric suites for more accurate defect prediction. With cross-entropy added, the performance of the prediction models improves by an average of 2.8% in F1-score.
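
The metric at the heart of this pipeline is the cross-entropy of a component (for example, a source file) under a language model trained on the project's code, that is, the average negative log-probability of the component's tokens. The following is a minimal Python sketch of that computation only, not the paper's implementation: the UnigramLM class is a toy stand-in for the RNN language model described in the abstract (it ignores context entirely), and the prob(context, token) interface and whitespace tokenization are assumptions made purely for illustration.

    import math
    from collections import Counter

    class UnigramLM:
        """Toy stand-in for the paper's RNN language model, used only
        so the sketch runs end to end; it ignores the context argument."""
        def __init__(self, corpus_tokens):
            counts = Counter(corpus_tokens)
            total = sum(counts.values())
            self.p = {tok: c / total for tok, c in counts.items()}

        def prob(self, context, token):
            # Small probability floor for tokens never seen in training.
            return self.p.get(token, 1e-6)

    def component_cross_entropy(tokens, lm):
        """Cross-entropy (bits per token) of one component under lm.
        Lower values mean the code looks more 'natural' with respect
        to the repository the model was trained on."""
        log_sum = sum(math.log2(lm.prob(tokens[:i], tok))
                      for i, tok in enumerate(tokens))
        return -log_sum / len(tokens)

    if __name__ == "__main__":
        # Hypothetical mini-corpus; a real run would train on the whole repository.
        corpus = "int i = 0 ; i = i + 1 ; return i ;".split()
        lm = UnigramLM(corpus)
        print(component_cross_entropy("int j = 0 ; return j ;".split(), lm))

The resulting per-component value would then be used as one more column alongside the 20 traditional metrics when training the defect prediction model, which is the combination the abstract reports as improving F1-score by 2.8% on average.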
