
Visual Attention Models

The literature related to visual attention models spans 1995 to 2022 and totals 90 publications, concentrated mainly in automation and computer technology, radio electronics and telecommunications, and psychology; it comprises 54 journal papers, 10 conference papers, and 207,340 patent documents. The work appears in 38 journals, including 社会心理科学, 中外企业家, and 中国市场, and at 8 conferences, including the 2014年全国特种设备安全与节能学术会议, the 第八届全国仿真器学术年会, and the 第二届全国图象图形联合学术会议. The literature on visual attention models has been contributed by 269 authors, including 焦李成, 苏娟, and 马文萍.

Visual Attention Models — Publication Counts

  • Journal papers: 54 (0.03%)
  • Conference papers: 10 (0.00%)
  • Patent documents: 207,340 (99.97%)
  • Total: 207,404

Visual Attention Models — Publication Trend (chart)

Visual Attention Models — Researchers

  • 焦李成
  • 苏娟
  • 马文萍
  • 马晶晶
  • 张瑞
  • 张菁
  • 杨小康
  • 杨罗
  • 杨颖
  • 臧笛

Visual Attention Models — Related Papers (sorted by year)

    • 熊伟; 徐永力
    • Abstract: Building on the theory of the classical ITTI visual attention model and the characteristics of sea-surface SAR backgrounds and targets, this paper analyzes the shortcomings of traditional visual models on sea-surface SAR imagery and proposes a visual attention model designed for such images. The model keeps the basic framework of the classical ITTI model, selects and extracts texture and shape features that describe SAR images well, and computes the corresponding feature saliency maps. A new integration mechanism then replaces the classical model's linear summation to fuse the feature saliency maps into an overall saliency map. Finally, the gray-level features at the attention foci of all feature saliency maps are combined to select the best saliency representation, and a multi-scale competition strategy with filtering and threshold segmentation screens the salient regions precisely, completing salient-region detection for SAR images. Experiments on TerraSAR-X and other satellite data confirm the model's good saliency-detection performance and its suitability for high-resolution target detection. Compared with the classical visual model, the proposed algorithm reduces false alarms caused by speckle noise and non-uniform sea-clutter background while improving detection speed by 25% to 45%. (A minimal ITTI-style saliency sketch appears after this list.)
    • 张伟奎; 钱涛; 余长江; 邱斌; 邹帅; 苏宇晨
    • Abstract: Building on a visual attention model, this study develops a detection method for non-compliant transmission-line sections. It analyzes the image differences between non-compliant lines and their surroundings and computes edge-histogram features of the line by class; compared with the gradient distributions of other line environments, non-compliant lines differ markedly. The detected sections are then repaired, and the non-compliant line is partitioned into distinct stages. The characteristics of the inspection robot's on-board sensors are analyzed and combined with the line detection results to judge the robot's current operating state; in addition, the state-transition function of the repair-workflow model is completed, so that planning can be carried out through state transitions. Detection and behavior-planning experiments in a laboratory-simulated line environment demonstrate the effectiveness of the method.
    • 赵博; 秦贵和
    • Abstract: A saliency map is first extracted from the original image with a visual attention model, reduced in dimension by non-negative matrix factorization (NMF) to form a grayscale watermark, and scrambled with a chaotic encryption algorithm. The scrambled watermark is then embedded into the low-frequency components of the original image via a DCT-SVD transform. At the receiving end, the watermark is extracted and decrypted from the received image, and the received image's own saliency map is also computed. By comparing the difference between the two saliency maps using a Lorenz-curve-based difference measure and a suitable threshold, the method decides whether the received image has been corrupted by noise or artificially tampered with. Comparison with similar algorithms shows that the proposed scheme discriminates well between noise interference and deliberate tampering. (A minimal DCT-SVD embedding sketch appears after this list.)
    • 张忠芳; 赵争; 魏钜杰
    • Abstract: Ship detection algorithms for SAR imagery based on the human visual attention model require a manually chosen empirical threshold. To remove this step, a ship detection algorithm based on a visual attention model with adaptive thresholds is proposed. The maximum between-class variance (OTSU) method first determines an adaptive threshold for initial image segmentation; a visual attention model then produces a visual saliency map; finally, an adaptive threshold derived from the statistics of the saliency map segments the map and detects the ship targets. The algorithm is more automated than existing visual-attention ship detectors. It was compared with the visual-attention ship detector and with the widely used two-parameter CFAR, K-CFAR, and KSW dual-threshold algorithms on three kinds of spaceborne SAR data, ENVISAT ASAR (25 m), Sentinel-1 (10 m), and Cosmo-SkyMed (3 m); the experimental results show that the proposed algorithm is simple, accurate, and efficient. (An OTSU-plus-saliency thresholding sketch appears after this list.)
    • 黎宁; 龚元; 许莙苓; 顾晓蓉; 徐涛; Zhou Huiyu
    • Abstract: Objective: To study pedestrian detection across multiple scenes, a pedestrian detection method based on semantic features under a visual attention mechanism is proposed. Method: On top of low-level visual features, the semantic feature of pedestrian skin color is incorporated, and bottom-up data-driven attention is combined with top-down task-driven attention to build a static spatial visual attention model. The semantic feature of motion is then added: motion saliency is computed from the entropy of motion vectors to build a dynamic temporal visual attention model. The two models are fused by feature weighting into a spatio-temporal visual attention model, from which a visual saliency map is obtained, and pedestrians are detected by selecting the attention foci. Results: Experiments on standard datasets and self-recorded videos were carried out on the Matlab R2012a platform. Compared with other visual attention models, the method detects pedestrians well, reaching a 93% detection rate on the test videos. Conclusion: The method is robust across different scenes and can be used to improve the intelligence of existing video surveillance systems. (A motion-entropy saliency sketch appears after this list.)
    • 陈文杰; 周海英
    • Abstract: Most image target recognition pipelines extract only the main target and then classify it, discarding the image's background information. A target recognition method under a background restraint mechanism (BRM) is therefore proposed. A visual attention model extracts the foreground target and the background information separately, so that foreground and background are decoupled; recognition of the extracted background information then imposes a probabilistic constraint on the foreground target. Introducing this constraint into the classifier yields a BRM_GAM (background-restraint-mechanism Gaussian ARTMAP) classification model for recognizing the foreground target. Experimental results show that the method achieves good recognition efficiency and timeliness and is consistent with human cognition. In addition, a method is proposed that uses the GAM model to extract a semantic dictionary histogram of the image for image semantic extraction.
    • 刘尚旺; 胡剑兰
    • Abstract: To acquire an image's region of interest quickly and accurately, the biological vision mechanism should be simulated end to end, from the macroscopic visual pathway down to the microscopic visual nerve cells. First, in the hypercomplex Fourier transform (HFT) model, which simulates the macroscopic "where" pathway, a background channel is added to suppress background information and highlight the salient objects in the image. Second, the HFT model is extended with a pulse-coupled neural network (PCNN), which simulates biological visual neurons: the saliency map produced by the improved HFT model is used as the input image of a simplified PCNN, and minimum cross-entropy thresholding segments the region of interest. Experimental results show that the proposed algorithm reaches an accuracy of 98.1% with an extraction time of 5.732 s, so it can detect an image's region of interest quickly and accurately. (A simplified-PCNN sketch appears after this list.)
    • 张倩
    • Abstract: In computer vision, more and more visual attention models are being proposed to mimic the human visual system, but an objective, fair, and reasonable methodology for evaluating them is lacking. To address this, the test image sets and performance metrics widely used by existing models are first surveyed and summarized; the mean-square-deviation statistic and the two-sided t-test hypothesis test from statistics are then introduced into the saliency evaluation of selective visual attention models; finally, a set of criteria for comprehensively evaluating visual attention models is proposed. Experiments and literature analysis indicate that its evaluation results are relatively objective, fair, and credible. (A t-test comparison sketch appears after this list.)
    • 金美玲
    • Abstract: Image retrieval requires extracting a large number of local features. This paper applies a visual attention model to extract salient target regions according to target saliency, uses an improved SIFT algorithm to extract local features at keypoints to obtain feature vectors, and then clusters the local features. Experiments show that the method has clear advantages in both retrieval accuracy and retrieval speed. (A salient-region SIFT sketch appears after this list.)
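
Visual Attention Models — Illustrative Code Sketches

For the sea-surface SAR detection abstract by 熊伟; 徐永力 above, the following is a minimal sketch of an ITTI-style saliency pipeline, assuming only center-surround intensity contrast as the feature; the paper's texture and shape features, its non-linear fusion mechanism, and its multi-scale competition step are not reproduced, and all parameter values are illustrative.

```python
import numpy as np
from scipy import ndimage

def center_surround_maps(img, surround_sigmas=(2, 4, 8)):
    """Center-surround differences of Gaussian-smoothed intensity at several scales."""
    center = ndimage.gaussian_filter(img, sigma=1)
    return [np.abs(center - ndimage.gaussian_filter(img, sigma=s))
            for s in surround_sigmas]

def normalize(m):
    """Scale a map to [0, 1] (a crude stand-in for ITTI's N(.) operator)."""
    m = m - m.min()
    return m / (m.max() + 1e-12)

def saliency(img):
    """Fuse the normalized feature maps into one overall saliency map."""
    maps = center_surround_maps(img.astype(np.float64))
    return normalize(sum(normalize(m) for m in maps))

def salient_regions(img, thresh=0.6):
    """Threshold the saliency map to obtain candidate target regions."""
    return saliency(img) > thresh
```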
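
For the watermarking abstract by 赵博; 秦贵和, this is a minimal sketch of DCT-SVD embedding of a small watermark, here assumed to be an already NMF-reduced and scrambled saliency map; the choice of the low-frequency block and the strength `alpha` are illustrative assumptions, and the extraction step and the Lorenz-curve comparison are omitted.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_watermark(host, watermark, alpha=0.05):
    """Perturb the singular values of the host's low-frequency DCT block
    with the watermark's singular values, then invert the DCT.
    The host must be at least as large as the watermark in both dimensions."""
    coeffs = dctn(host.astype(np.float64), norm="ortho")
    h, w = watermark.shape
    low = coeffs[:h, :w]                                   # low-frequency block
    u, s, vt = np.linalg.svd(low, full_matrices=False)
    s_wm = np.linalg.svd(watermark.astype(np.float64), compute_uv=False)
    coeffs[:h, :w] = (u * (s + alpha * s_wm)) @ vt         # modified block
    return idctn(coeffs, norm="ortho")
```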
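
For the adaptive-threshold ship detection abstract by 张忠芳; 赵争; 魏钜杰, this is a minimal sketch of the two thresholding ideas: an OTSU cut on the image followed by a statistics-based cut on a saliency map, which is assumed to come from any visual attention model (for example the ITTI sketch above); the constant `k` is an illustrative choice, not the paper's.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Return the gray level that maximizes between-class variance."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist.astype(np.float64) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                        # class-0 weight up to each bin
    w1 = 1.0 - w0                            # class-1 weight
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.maximum(w0, 1e-12)   # class-0 mean
    mu1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1e-12)  # class-1 mean
    var_between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(var_between)]

def detect_ships(img, saliency, k=3.0):
    """Keep pixels that pass both the OTSU cut and a mean + k*std saliency cut."""
    coarse = img > otsu_threshold(img)
    fine = saliency > saliency.mean() + k * saliency.std()
    return coarse & fine
```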
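
For the pedestrian detection abstract by 黎宁 et al., this is a minimal sketch of the temporal branch: motion magnitude from frame differencing and a block-wise entropy as the dynamic saliency cue, fused with a static map by a fixed weight. The paper's skin-color semantic channel, its optical-flow motion vectors, and its fusion weights are not reproduced; `block`, `nbins`, and `w` are illustrative.

```python
import numpy as np

def block_entropy(mag, block=16, nbins=16):
    """Entropy of motion magnitudes inside each block (higher = richer motion)."""
    h, w = mag.shape
    out = np.zeros((h // block, w // block))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = mag[i*block:(i+1)*block, j*block:(j+1)*block]
            hist, _ = np.histogram(patch, bins=nbins, range=(0, mag.max() + 1e-12))
            p = hist / hist.sum()
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()
    return out

def dynamic_saliency(prev_frame, frame):
    """Block-grid dynamic saliency from simple frame differencing."""
    mag = np.abs(frame.astype(np.float64) - prev_frame.astype(np.float64))
    ent = block_entropy(mag)
    return (ent - ent.min()) / (ent.max() - ent.min() + 1e-12)

def fuse(static_sal, dynamic_sal, w=0.5):
    """Weighted spatio-temporal fusion; static_sal is assumed to be
    pooled to the same block grid as dynamic_sal."""
    return w * static_sal + (1 - w) * dynamic_sal
```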
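
For the region-of-interest abstract by 刘尚旺; 胡剑兰, this is a minimal sketch of a simplified PCNN driven by a saliency map; the improved HFT front end, the background channel, and the minimum-cross-entropy segmentation are not reproduced, and the constants are illustrative.

```python
import numpy as np
from scipy import ndimage

def simplified_pcnn(sal, iters=10, beta=0.2, alpha_e=0.3, v_e=20.0):
    """Iterate a simplified PCNN on a saliency map; pixels that ever fire
    form the candidate region-of-interest mask."""
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    y = np.zeros_like(sal)                   # pulse output
    e = np.ones_like(sal)                    # dynamic threshold
    fired = np.zeros_like(sal, dtype=bool)
    for _ in range(iters):
        link = ndimage.convolve(y, kernel, mode="constant")  # linking input from neighbours
        u = sal * (1.0 + beta * link)                        # internal activity
        y = (u > e).astype(np.float64)                       # neurons whose activity exceeds the threshold fire
        e = np.exp(-alpha_e) * e + v_e * y                   # threshold decays, then jumps where a pulse occurred
        fired |= y.astype(bool)
    return fired
```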
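
For the evaluation-methodology abstract by 张倩, this is a minimal sketch of comparing two attention models from per-image scores using mean and standard-deviation summaries and a two-sided paired t-test; the score arrays are assumed to be given (for example per-image AUC values against fixation ground truth), and `alpha` is an illustrative significance level.

```python
import numpy as np
from scipy import stats

def compare_models(scores_a, scores_b, alpha=0.05):
    """Summarize two models' per-image scores and test the paired difference."""
    scores_a, scores_b = np.asarray(scores_a), np.asarray(scores_b)
    t_stat, p_value = stats.ttest_rel(scores_a, scores_b)   # two-sided by default
    return {
        "mean_a": scores_a.mean(), "std_a": scores_a.std(ddof=1),
        "mean_b": scores_b.mean(), "std_b": scores_b.std(ddof=1),
        "t": t_stat, "p": p_value,
        "significant": p_value < alpha,
    }
```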
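
For the retrieval abstract by 金美玲, this is a minimal sketch of restricting SIFT keypoints to a salient region and clustering the pooled descriptors into a small visual vocabulary; OpenCV's standard SIFT stands in for the paper's improved SIFT variant, and `thresh` and `k` are illustrative.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def salient_sift_descriptors(gray, saliency, thresh=0.5):
    """Detect SIFT keypoints only inside the salient region of a uint8 grayscale image."""
    mask = (saliency > thresh).astype(np.uint8) * 255
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(gray, mask)
    return descriptors                       # may be None if no keypoints were found

def build_vocabulary(all_descriptors, k=64):
    """Cluster the pooled descriptors of many images into a k-word vocabulary."""
    data = np.vstack([d for d in all_descriptors if d is not None])
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(data)
```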
