Should We Learn Probabilistic Models for Model Checking? A New Approach and An Empirical Study

Abstract

Many automated system analysis techniques (e.g., model checking, model-based testing) rely on first obtaining a model of the system under analysis. System modeling is often done manually, which is widely considered a hindrance to adopting model-based system analysis and development techniques. To overcome this problem, researchers have proposed automatically "learning" models from sample system executions and have shown that the learned models can sometimes be useful. There are, however, many questions to be answered. For instance, how much should we generalize from the observed samples, and how fast does learning converge? Or, would the analysis result based on the learned model be more accurate than the estimate we could have obtained by simply sampling many system executions within the same amount of time? In this work, we investigate existing algorithms for learning probabilistic models for model checking, propose an evolution-based approach for better controlling the degree of generalization, and conduct an empirical study to answer these questions. One of our findings is that the effectiveness of learning may sometimes be limited.
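As a minimal illustration of what "learning a probabilistic model from sample executions" can mean in this setting, the sketch below estimates the transition probabilities of a discrete-time Markov chain by frequency counting over observed traces, with a pseudo-count parameter standing in, very loosely, for a knob on the degree of generalization. The function name learn_dtmc, the smoothing parameter, and the toy traces are illustrative assumptions, not the algorithm proposed in the paper.

```python
from collections import defaultdict

def learn_dtmc(traces, smoothing=0.0):
    """Estimate DTMC transition probabilities from observed traces.

    traces: list of state sequences (each a list of hashable states).
    smoothing: pseudo-count added to every candidate successor state,
               a crude way to generalize beyond the observed samples.
    """
    counts = defaultdict(lambda: defaultdict(float))
    states = set()
    for trace in traces:
        states.update(trace)
        # Count each observed transition (s -> t) along the trace.
        for s, t in zip(trace, trace[1:]):
            counts[s][t] += 1.0

    probs = {}
    for s in states:
        # Smoothed counts over all known states, then normalize.
        row = {t: counts[s].get(t, 0.0) + smoothing for t in states}
        total = sum(row.values())
        if total > 0:  # skip states with no outgoing transitions
            probs[s] = {t: c / total for t, c in row.items()}
    return probs

# Example: three sampled executions of a hypothetical toy protocol.
traces = [
    ["init", "try", "ok"],
    ["init", "try", "fail", "try", "ok"],
    ["init", "try", "ok"],
]
model = learn_dtmc(traces, smoothing=0.1)
print(model["try"])  # estimated distribution over successors of "try"
```

With smoothing set to 0 this reduces to plain empirical frequency estimation; larger values spread probability mass onto unobserved transitions, which is the kind of generalization-versus-accuracy trade-off the paper's empirical study examines.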
