IEEE/ACM International Workshop on Software Engineering for AI in Autonomous Systems

How Machine Perception Relates to Human Perception: Visual Saliency and Distance in a Frame-by-Frame Semantic Segmentation Task for Highly/Fully Automated Driving



Abstract

In this paper, we investigate the link between machine perception and human perception for highly/fully automated driving. We compare the classification results of a camera-based frame-by-frame semantic segmentation model (Machine) with a well-established visual saliency model (Human) on the Cityscapes dataset. The results show that Machine classifies foreground objects more accurately when they are more salient, indicating a similarity with the human visual system. For background objects, accuracy drops as saliency increases, supporting the assumption that Machine has an implicit concept of saliency.
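The comparison described in the abstract, relating per-object segmentation accuracy to visual saliency, can be illustrated with a minimal sketch. This is not the paper's code: the array layout, the instance-mask convention, and the function name per_object_stats are assumptions made purely for illustration of the general idea.

```python
# Minimal sketch (assumed, not from the paper): for each ground-truth object,
# compute its mean saliency and its per-pixel classification accuracy, so the
# two quantities can later be correlated across many Cityscapes-style frames.
import numpy as np

def per_object_stats(pred_labels, gt_labels, instance_ids, saliency_map):
    """Return a list of (mean saliency, pixel accuracy) per object instance.

    pred_labels  : (H, W) int array of predicted semantic classes
    gt_labels    : (H, W) int array of ground-truth semantic classes
    instance_ids : (H, W) int array, 0 = background/"stuff", >0 = object id
    saliency_map : (H, W) float array in [0, 1] from a saliency model
    """
    stats = []
    for inst in np.unique(instance_ids):
        if inst == 0:  # skip non-instance ("stuff") pixels in this sketch
            continue
        mask = instance_ids == inst
        mean_sal = float(saliency_map[mask].mean())
        pix_acc = float((pred_labels[mask] == gt_labels[mask]).mean())
        stats.append((mean_sal, pix_acc))
    return stats

# Hypothetical usage: aggregate over frames, then correlate the two measures.
# stats = per_object_stats(pred, gt, inst, sal)
# sal_vals, acc_vals = zip(*stats)
# r = np.corrcoef(sal_vals, acc_vals)[0, 1]
```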
