IEEE International Conference on Software Quality, Reliability, and Security

Machine Learning to Evaluate Evolvability Defects: Code Metrics Thresholds for a Given Context



Abstract

Evolvability defects are states of source code that hinder understanding and modification but do not directly produce runtime behavioral failures. Automatic source code evaluation by metrics and thresholds can help reduce the burden of manual inspection. This study addresses two problems: (1) evolvability defects are not usually managed in bug tracking systems, and (2) conventional methods cannot fully interpret the relations among the metrics in a given context (e.g., programming language, application domain). The key steps of our method are to (1) gather training data for machine learning through experts' manual inspection of a subset of the files in given systems (the benchmark) and (2) employ a classification-tree learning algorithm, C5.0, which can deal with non-orthogonal relations between metrics. Furthermore, we experimentally confirm that, even with less training data, our method provides a more precise evaluation than four conventional methods (the percentile, Alves' method, Bender's method, and the ROC curve-based method).
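The core idea can be sketched in code. The minimal example below is not the authors' C5.0 implementation; it is a toy classification tree (greedy Gini-based splitting) trained on hypothetical per-file metrics labeled by expert inspection. The metric names (lines of code, cyclomatic complexity), sample values, and labels are all illustrative assumptions. The learned tree yields context-specific thresholds, and because each split conditions on the path above it, the tree can capture non-orthogonal relations between metrics.

```python
# Sketch only: a tiny Gini-based classification tree standing in for C5.0.
# All metrics, values, and labels below are hypothetical.
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(rows, labels):
    """Return (score, feature, threshold) minimizing weighted Gini impurity."""
    best = None
    for f in range(len(rows[0])):
        values = sorted(set(r[f] for r in rows))
        for i in range(1, len(values)):
            t = (values[i - 1] + values[i]) / 2  # candidate metric threshold
            left = [l for r, l in zip(rows, labels) if r[f] <= t]
            right = [l for r, l in zip(rows, labels) if r[f] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
            if best is None or score < best[0]:
                best = (score, f, t)
    return best

def build_tree(rows, labels, depth=0, max_depth=3):
    """Recursively split until pure, empty, or at max depth."""
    if len(set(labels)) == 1 or depth == max_depth:
        return Counter(labels).most_common(1)[0][0]  # leaf: majority label
    _, f, t = best_split(rows, labels)
    left = [(r, l) for r, l in zip(rows, labels) if r[f] <= t]
    right = [(r, l) for r, l in zip(rows, labels) if r[f] > t]
    if not left or not right:
        return Counter(labels).most_common(1)[0][0]
    return (f, t,
            build_tree([r for r, _ in left], [l for _, l in left], depth + 1, max_depth),
            build_tree([r for r, _ in right], [l for _, l in right], depth + 1, max_depth))

def classify(tree, row):
    """Walk the tree: internal nodes are (feature, threshold, left, right)."""
    while isinstance(tree, tuple):
        f, t, lo, hi = tree
        tree = lo if row[f] <= t else hi
    return tree

# Hypothetical benchmark from expert inspection:
# each row is (lines of code, cyclomatic complexity) for one file.
X = [(50, 3), (80, 5), (400, 25), (350, 30), (120, 4), (500, 8)]
y = ["clean", "clean", "defect", "defect", "clean", "clean"]

tree = build_tree(X, y)
print(classify(tree, (420, 28)))  # → defect (large AND complex file)
print(classify(tree, (60, 3)))    # → clean
```

Note how the learned threshold on cyclomatic complexity separates the classes here even though a single percentile-style cut on lines of code alone would misclassify the long-but-simple file (500, 8) — this is the kind of metric interaction a tree learner can express and a per-metric threshold method cannot.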
