
Accessible images (AIMS): a model to build self-describing images for assisting screen reader users



Abstract

Non-visual web access depends on textual descriptions of the various non-text elements of web pages. Existing methods of describing images for non-visual access do not provide a strong coupling between an image and its description: if an image is reused multiple times, whether within a single web site or across multiple web sites, the description must be repeated at every instance. This paper presents a tightly coupled model, termed accessible images (AIMS), which uses a steganography-based approach to embed the description in the image on the server side and updates the alt text of web pages with the description extracted by a browser extension. The AIMS model targets a web image description ecosystem in which images evolve into a self-describing phase. Its primary advantage is the elimination of redundant descriptions of an image resource across multiple instances. Experiments conducted on a dataset confirm that the AIMS model can embed and extract descriptions with an accuracy of 99.6%.
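The abstract does not reproduce the paper's embedding algorithm, so the following is a minimal least-significant-bit (LSB) steganography sketch in Python (using Pillow) that only illustrates the general idea of coupling an image with its own description so that a client-side tool, such as the browser extension mentioned above, could recover it. The file names, marker bytes, and function names are illustrative assumptions, not part of the AIMS model.

```python
# Minimal LSB-steganography sketch: embed a UTF-8 description in an image's
# pixel data and extract it again. Illustrative only; not the AIMS algorithm.
from PIL import Image

MARKER = b"\x00AIMS_END\x00"  # hypothetical terminator marking the end of the payload

def embed_description(src_path: str, dst_path: str, description: str) -> None:
    """Hide a UTF-8 description in the least-significant bits of the image."""
    img = Image.open(src_path).convert("RGB")
    payload = description.encode("utf-8") + MARKER
    # Payload bytes as a flat bit list, most-significant bit first.
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    pixels = list(img.getdata())
    if len(bits) > len(pixels) * 3:
        raise ValueError("description too long for this image")
    flat = [channel for px in pixels for channel in px]
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit  # overwrite the lowest bit of each channel
    new_pixels = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
    out = Image.new("RGB", img.size)
    out.putdata(new_pixels)
    out.save(dst_path, format="PNG")  # lossless format, so the LSBs survive

def extract_description(path: str) -> str:
    """Recover the embedded description; a browser extension would do this client-side."""
    img = Image.open(path).convert("RGB")
    flat = [channel for px in img.getdata() for channel in px]
    data = bytearray()
    byte = 0
    for i, value in enumerate(flat):
        byte = (byte << 1) | (value & 1)
        if i % 8 == 7:
            data.append(byte)
            byte = 0
            if data.endswith(MARKER):
                return data[:-len(MARKER)].decode("utf-8")
    raise ValueError("no embedded description found")

if __name__ == "__main__":
    # Hypothetical file names for illustration.
    embed_description("photo.png", "photo_aims.png",
                      "A guide dog leading a person across a crosswalk.")
    print(extract_description("photo_aims.png"))
```

In this sketch the alt text would simply be set to the string returned by the extraction step; the description travels with the image file itself, so reusing the image on another page or site does not require re-authoring its description.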
