Non-visual web access depends on textual descriptions of the non-text elements of web pages. Existing methods of describing images for non-visual access do not provide a strong coupling between an image and its description. If an image is reused multiple times, either within a single website or across multiple websites, the description must be maintained separately at every instance. This paper presents a tightly coupled model, termed accessible images (AIMS), which uses a steganography-based approach to embed the description into the image on the server side and, with the help of a browser extension, updates the alt text of web pages with the extracted description. The proposed AIMS model is built toward a web image description ecosystem in which images evolve into a self-describing phase. Its primary advantage is the elimination of redundant descriptions of an image resource across multiple instances. Experiments conducted on a dataset confirm that the AIMS model embeds and extracts descriptions with an accuracy of 99.6%.
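The abstract does not specify the embedding scheme, but the server-side embedding and extension-side extraction it describes can be illustrated with a minimal least-significant-bit (LSB) steganography sketch. This is an illustrative assumption, not the paper's actual algorithm; the function names and the 4-byte length header are invented for the example.

```python
import numpy as np

def embed_description(pixels: np.ndarray, text: str) -> np.ndarray:
    """Hide a UTF-8 description in the least significant bits of pixel values
    (illustrative LSB scheme, not necessarily the one used by AIMS)."""
    data = text.encode("utf-8")
    # 4-byte big-endian length header so the extractor knows how much to read
    payload = len(data).to_bytes(4, "big") + data
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.flatten().astype(np.uint8)
    if bits.size > flat.size:
        raise ValueError("description too long for this image")
    # Clear the lowest bit of each carrier value, then write one payload bit
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_description(pixels: np.ndarray) -> str:
    """Recover the embedded description; a browser extension could place the
    result into the image's alt attribute."""
    flat = pixels.flatten().astype(np.uint8)
    length = int.from_bytes(np.packbits(flat[:32] & 1).tobytes(), "big")
    bits = flat[32 : 32 + length * 8] & 1
    return np.packbits(bits).tobytes().decode("utf-8")

# Round trip on a random stand-in "image"
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
stego = embed_description(img, "A golden retriever catching a frisbee")
print(extract_description(stego))  # prints the embedded description
```

Because only the lowest bit of each channel changes, the stego image is visually indistinguishable from the original, which is what lets the description travel with the image wherever it is reused.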