IFIP/IEEE Symposium on Integrated Network and Service Management

Performance Prediction in Dynamic Clouds using Transfer Learning


Abstract

Learning a performance model for a cloud service is challenging since its operational environment changes during execution, which requires re-training of the model in order to maintain prediction accuracy. Training a new model from scratch generally involves extensive new measurements and often generates a data-collection overhead that negatively affects the service performance. In this paper, we investigate an approach for re-training neural-network models, which is based on transfer learning. Under this approach, a limited number of neural-network layers are re-trained while others remain unchanged. We study the accuracy of the re-trained model and the efficiency of the method with respect to the number of re-trained layers and the number of new measurements. The evaluation is performed using traces collected from a testbed that runs a Video-on-Demand service and a Key-Value Store under various load conditions. We study model re-training after changes in load pattern, infrastructure configuration, service configuration, and target metric. We find that our method significantly reduces the number of new measurements required to compute a new model after a change. The reduction exceeds an order of magnitude in most cases.
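The core idea in the abstract, re-training only a subset of layers on a small number of new measurements while the remaining layers stay frozen, can be illustrated with a toy sketch. The following is not the authors' implementation; it is a minimal, hypothetical example in pure Python where a small network's hidden layer is kept fixed (acting as a pre-trained feature extractor) and only the output layer is re-fitted by gradient descent on data from a "changed environment". All weights, data, and names are illustrative.

```python
import random

random.seed(0)

def relu(z):
    return max(0.0, z)

# "Pre-trained" hidden layer (frozen): 3 units over one input feature.
# In the paper's setting this would come from the model trained before
# the environment change; here the values are arbitrary placeholders.
hidden_w = [1.0, -0.5, 0.25]
hidden_b = [0.0, 0.5, -0.1]

def features(x):
    """Frozen feature extractor: hidden-layer activations for input x."""
    return [relu(w * x + b) for w, b in zip(hidden_w, hidden_b)]

# Output layer: the only part that is re-trained after the change.
out_w = [0.0, 0.0, 0.0]
out_b = 0.0

def predict(x):
    h = features(x)
    return sum(w_i * h_i for w_i, h_i in zip(out_w, h)) + out_b

# A small set of new measurements from the changed environment
# (toy target y = 2x + 1 standing in for a new load/metric relation).
data = [(x / 10.0, 2.0 * (x / 10.0) + 1.0) for x in range(20)]

# Plain SGD on the output layer only; hidden weights are never updated,
# which is what keeps the required number of new measurements small.
lr = 0.05
for _ in range(2000):
    for x, y in data:
        h = features(x)
        err = predict(x) - y
        for i in range(len(out_w)):
            out_w[i] -= lr * err * h[i]
        out_b -= lr * err
```

Because the frozen hidden layer already produces informative features, re-fitting the output layer is essentially a small linear regression, which is why far fewer new samples suffice than training from scratch; the paper studies how accuracy and efficiency vary as more layers are unfrozen.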

