Benchmark suites are significant for evaluating various aspects of Cloud services from a holistic view. However, there is still a gap between using benchmark suites and achieving a holistic impression of the evaluated Cloud services. Most Cloud service evaluation work tends to report individual benchmarking results without delivering summary measures. As a result, it can still be hard for customers reading such evaluation reports to understand an evaluated Cloud service from a global perspective. Inspired by the boosting approaches to machine learning, we propose the concept Boosting Metrics to represent all the potential approaches that are able to integrate a suite of benchmarking results. This paper introduces two types of preliminary boosting metrics, and demonstrates how they can be used to supplement primary measures of individual Cloud service features. In particular, a boosting metric can play the role of a summary response (i.e., the dependent variable) when applying experimental design to Cloud services evaluation. Although the concept Boosting Metrics was refined based on our work in the Cloud Computing domain, we believe it can be easily adapted to the evaluation work of other computing paradigms.
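To give an intuition for integrating a suite of benchmarking results into one summary measure, the sketch below aggregates per-benchmark scores via a baseline-normalized geometric mean (the scheme used by suites such as SPEC). This is only an illustration of the general idea; the benchmark names, baseline values, and the choice of geometric mean are assumptions, not the paper's actual boosting metrics.

```python
import math

def summary_metric(results, baseline):
    """Integrate a suite of benchmarking results into one summary score.

    Each benchmark result is normalized against a baseline system,
    and the geometric mean of the ratios is returned, so that no
    single benchmark dominates the summary. Illustrative only: the
    aggregation scheme is an assumption, not the paper's method.
    """
    ratios = [results[b] / baseline[b] for b in results]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Hypothetical benchmark suite results for one Cloud service
# (higher is better), versus a hypothetical baseline service.
cloud_a  = {"cpu": 120.0, "io": 80.0, "net": 200.0}
baseline = {"cpu": 100.0, "io": 100.0, "net": 100.0}

score = summary_metric(cloud_a, baseline)  # single summary measure
```

A single number like `score` can then serve as the summary response variable in a designed experiment, while the individual benchmark results remain available as the primary measures.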