International Journal of Environmental Research and Public Health

Analysis and Validation of Cross-Modal Generative Adversarial Network for Sensory Substitution



Abstract

Visual-auditory sensory substitution has demonstrated great potential to help visually impaired and blind people recognize objects and perform basic navigational tasks. However, the high latency between visual information acquisition and auditory transduction may contribute to the lack of successful adoption of such aid technologies in the blind community; thus far, substitution methods have remained at the stage of laboratory-scale research or pilot demonstrations. This high data-conversion latency makes it difficult to perceive fast-moving objects or rapid environmental changes. Reducing the latency requires a prior analysis of auditory sensitivity. However, existing auditory sensitivity analyses are subjective because they were conducted through human behavioral analysis. Therefore, in this study, we propose a cross-modal generative adversarial network-based evaluation method to find an optimal auditory sensitivity that reduces transmission latency in visual-auditory sensory substitution, which is tied to the perception of visual information. We further conducted a human-based assessment to evaluate the effectiveness of the proposed model-based analysis in behavioral experiments. We conducted experiments with three participant groups: sighted users (SU), congenitally blind (CB) individuals, and late-blind (LB) individuals. Experimental results from the proposed model showed that the temporal length of the auditory signal for sensory substitution could be reduced by 50%. This result indicates the possibility of improving the performance of the conventional vOICe method by up to a factor of two. We confirmed that our experimental results are consistent with human assessments obtained through behavioral experiments. Analyzing auditory sensitivity with deep learning models has the potential to improve the efficiency of sensory substitution.
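To make the latency discussion concrete, the following is a minimal sketch of a vOICe-style visual-to-auditory encoding, based on the publicly documented scheme (left-to-right column scan, row index mapped to sine frequency, pixel brightness mapped to amplitude). All parameter values here (duration, sample rate, frequency range) are illustrative assumptions, not the paper's settings; the `duration` parameter is where the 50% reduction of the soundscape's temporal length would apply.

```python
import numpy as np

def voice_encode(image, duration=1.0, sample_rate=8000,
                 f_min=500.0, f_max=5000.0):
    """Encode a grayscale image (rows x cols, values in [0, 1]) as a mono
    audio signal, vOICe-style: column index -> time, row index -> sine
    frequency (top rows = high pitch), brightness -> amplitude.
    Parameter defaults are illustrative, not taken from the paper."""
    rows, cols = image.shape
    # Exponentially spaced frequencies, highest for the top row.
    freqs = np.geomspace(f_max, f_min, rows)
    samples_per_col = int(duration * sample_rate / cols)
    signal = np.zeros(cols * samples_per_col)
    for c in range(cols):
        start = c * samples_per_col
        # Absolute time keeps each sinusoid phase-continuous across columns.
        t = (start + np.arange(samples_per_col)) / sample_rate
        chunk = np.zeros(samples_per_col)
        for r in range(rows):
            if image[r, c] > 0:
                chunk += image[r, c] * np.sin(2 * np.pi * freqs[r] * t)
        signal[start:start + samples_per_col] = chunk
    # Normalize to [-1, 1] to avoid clipping.
    peak = np.max(np.abs(signal))
    return signal / peak if peak > 0 else signal

# Halving `duration` (e.g. 1.0 s -> 0.5 s) halves the soundscape length,
# mirroring the 50% temporal reduction the model-based analysis suggests
# is perceptually tolerable.
img = np.zeros((8, 16))
img[2, :] = 1.0  # a horizontal line renders as a single steady tone
audio = voice_encode(img, duration=0.5)
```

The trade-off the paper evaluates is exactly this: shorter soundscapes lower acquisition-to-transduction latency, but compress the time axis along which spatial detail is delivered, so the usable reduction is bounded by auditory sensitivity.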
