
An empirical foundation for automated Web interface evaluation.


Abstract

This dissertation explores the development of an automated Web evaluation methodology and tools. It presents an extensive survey of usability evaluation methods for Web and graphical interfaces and shows that automated evaluation is greatly underexplored, especially in the Web domain.

This dissertation presents a new methodology for HCI: a synthesis of usability and performance evaluation techniques that together build an empirical foundation for automated interface evaluation. The general approach involves: (1) identifying an exhaustive set of quantitative interface measures; (2) computing the measures for a large sample of rated interfaces; (3) deriving statistical models from the measures and ratings; (4) using the models to predict ratings for new interfaces; and (5) validating the model predictions.

This dissertation presents a specific instantiation for evaluating information-centric Web sites. The methodology entails computing 157 highly accurate, quantitative page-level and site-level measures. The measures assess many aspects of Web interfaces, including the amount of text on a page, color usage, and consistency. These measures, along with expert ratings from Internet professionals, are used to derive statistical models of highly rated Web interfaces. The models are then used in the automated analysis of Web interfaces.

This dissertation presents an analysis of quantitative measures for over 5300 Web pages and 330 sites. It describes several statistical models that distinguish good, average, and poor pages with 93%–96% accuracy and distinguish sites with 68%–88% accuracy.

This dissertation describes two studies conducted to provide insight into what the statistical models assess and whether they help to improve Web design. The first study attempts to link expert ratings to usability ratings, but its results do not support strong conclusions. The second study applies the statistical models to assess and refine example sites, and shows that pages and sites modified based on the models are preferred by participants (both professional and non-professional Web designers) over the originals. Finally, this dissertation demonstrates the use of the statistical models for assessing existing Web design guidelines.

This dissertation represents an important first step towards enabling non-professional designers to iteratively improve the quality of their designs.
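As a concrete illustration of steps (1) and (2) of the general approach, the sketch below computes three page-level measures of the kind described above (amount of text, number of links, and distinct colors) from raw HTML. It is a hypothetical toy, not the dissertation's actual tool, which computes 157 measures; it uses only the Python standard library.

```python
# A minimal sketch of steps (1)-(2): computing a few quantitative page-level
# measures of the kind named in the abstract. The dissertation's actual tool
# computes 157 page- and site-level measures; this parser is illustration only.
from html.parser import HTMLParser

class PageMeasures(HTMLParser):
    """Accumulates simple quantitative measures while parsing one HTML page."""
    def __init__(self):
        super().__init__()
        self.word_count = 0   # amount of text on the page
        self.link_count = 0   # number of <a href=...> links
        self.colors = set()   # distinct colors mentioned in attributes

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and attrs.get("href"):
            self.link_count += 1
        # Legacy HTML color attributes; a real tool would also parse CSS.
        for key in ("color", "bgcolor", "text", "link"):
            value = attrs.get(key)
            if value:
                self.colors.add(value.lower())

    def handle_data(self, data):
        self.word_count += len(data.split())

def measure_page(html: str) -> dict:
    parser = PageMeasures()
    parser.feed(html)
    return {"word_count": parser.word_count,
            "link_count": parser.link_count,
            "distinct_colors": len(parser.colors)}

if __name__ == "__main__":
    sample = ('<html><body text="#000000" link="#0000ee">'
              '<a href="a.html">Home</a> Hello world</body></html>')
    print(measure_page(sample))
    # {'word_count': 3, 'link_count': 1, 'distinct_colors': 2}
```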
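Steps (3) and (4), deriving a statistical model from measures and ratings and then predicting ratings for new interfaces, can be sketched with a linear discriminant classifier. The three-measure feature vectors and toy ratings below are invented for illustration; the dissertation's models are derived from 157 measures and expert ratings over thousands of pages, and its exact modeling techniques may differ.

```python
# A toy sketch of steps (3)-(4): fit a classifier on measure vectors from
# rated pages, then predict a rating class for a new page. All data here is
# invented; accuracies like the 93%-96% reported above come from far larger
# samples (5300+ pages) and 157 measures, not this three-feature toy.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# One row per rated page: [word_count, link_count, distinct_colors].
X = np.array([
    [450, 25,  4], [500, 30,  5], [480, 28,  4],   # rated "good"
    [300, 15,  8], [280, 12,  9], [320, 18,  7],   # rated "average"
    [ 60,  2, 15], [ 80,  4, 14], [ 50,  1, 16],   # rated "poor"
])
y = ["good"] * 3 + ["average"] * 3 + ["poor"] * 3

model = LinearDiscriminantAnalysis().fit(X, y)   # step (3): derive the model

new_page = np.array([[470, 27, 5]])              # measures for an unrated page
print(model.predict(new_page))                   # step (4): e.g. ['good']
```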