IEEE Transactions on Image Processing

Saliency Prediction on Omnidirectional Image With Generative Adversarial Imitation Learning



Abstract

When watching omnidirectional images (ODIs), subjects can access different viewports by moving their heads, so it is necessary to predict subjects’ head fixations on ODIs. Inspired by generative adversarial imitation learning (GAIL), this paper proposes a novel approach, named SalGAIL, to predict the saliency of head fixations on ODIs. First, we establish a dataset for attention on ODIs (AOI). In contrast to traditional datasets, our AOI dataset is large-scale: it contains the head fixations of 30 subjects viewing 600 ODIs. Next, we mine our AOI dataset and make three findings: (1) head fixations are consistent among subjects, and this consistency grows as the number of subjects increases; (2) head fixations exhibit a front center bias (FCB); and (3) the magnitude of head movement is similar across subjects. Based on these findings, our SalGAIL approach applies deep reinforcement learning (DRL) to predict the head fixations of one subject, where GAIL learns the reward for DRL instead of a traditional hand-designed reward. Multi-stream DRL is then developed to yield the head fixations of different subjects, and the saliency map of an ODI is generated by convolving the predicted head fixations. Finally, experiments validate the effectiveness of our approach in predicting saliency maps of ODIs, which significantly outperforms 11 state-of-the-art approaches. Our AOI dataset and the code of SalGAIL are available online at https://github.com/yanglixiaoshen/SalGAIL.
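To make the last step above concrete, here is a minimal sketch of how predicted head fixations could be turned into a saliency map: fixation points are accumulated on an equirectangular grid and smoothed with a Gaussian kernel. This sketch is not taken from the SalGAIL repository; the map resolution, kernel width, coordinate convention, and the helper name fixations_to_saliency are assumptions made for illustration.

import numpy as np
from scipy.ndimage import gaussian_filter

def fixations_to_saliency(fixations, height=512, width=1024, sigma=20.0):
    """Accumulate head fixations on an equirectangular grid and blur them.

    fixations: iterable of (longitude, latitude) pairs in degrees, with
    longitude in [-180, 180) and latitude in [-90, 90].
    """
    fixation_map = np.zeros((height, width), dtype=np.float64)
    for lon, lat in fixations:
        # Map spherical coordinates to pixel indices on the equirectangular image.
        col = int((lon + 180.0) / 360.0 * width) % width
        row = min(int((90.0 - lat) / 180.0 * height), height - 1)
        fixation_map[row, col] += 1.0

    # Smooth the discrete fixation map with a Gaussian kernel; wrap along the
    # horizontal axis because the left and right borders of an ODI are adjacent.
    saliency = gaussian_filter(fixation_map, sigma=sigma, mode=("nearest", "wrap"))

    # Normalize to [0, 1] so maps built from different fixation counts are comparable.
    if saliency.max() > 0.0:
        saliency /= saliency.max()
    return saliency

On this reading of the abstract, convolving one subject's predicted fixations gives a per-subject map, and aggregating such maps over the multi-stream outputs would give the final saliency map of the ODI.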
