
An Object Retrieval Method Based on Randomized Visual Dictionaries and Contextual Semantic Information

     

Abstract

The conventional bag-of-visual-words (BoVW) approach suffers from several problems: low time efficiency, large memory consumption, and the synonymy and polysemy of visual words. Moreover, it may fail to return satisfactory results when the marked object region is inaccurate or the captured object is too small to be represented by discriminative features. To address these problems, an object retrieval method based on randomized visual dictionaries and contextual semantic information is proposed. First, Exact Euclidean Locality Sensitive Hashing (E2LSH) is used to cluster local feature points, generating a group of randomized visual dictionaries that supports dynamic expansion. Then, a new object model incorporating contextual semantic information is constructed from the query object and the visual units surrounding it. Finally, the Kullback-Leibler divergence is introduced as the similarity measure to accomplish retrieval. Experimental results show that the method effectively improves the distinguishability of target objects and substantially boosts retrieval performance compared with traditional methods.
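Since only the abstract is available here, the following is a minimal illustrative sketch rather than the authors' implementation. It shows the p-stable (Gaussian) hash family that E2LSH is built on, h(v) = ⌊(a·v + b)/w⌋, used to assign local descriptors to visual-word buckets, and a smoothed Kullback-Leibler divergence between the resulting word histograms as the similarity score. All function names, the bucket width w, the number of tables, and the smoothing constant are assumptions; treating each hash table as one dictionary in the "randomized visual dictionary group" is likewise an assumption.

```python
import numpy as np

def make_e2lsh_hashers(dim, num_tables=8, num_bits=4, w=4.0, seed=0):
    """Build random hash functions of the p-stable (Gaussian) family used by
    E2LSH: h(v) = floor((a . v + b) / w).  Each table concatenates `num_bits`
    such functions; here each table stands in for one randomized dictionary."""
    rng = np.random.default_rng(seed)
    tables = []
    for _ in range(num_tables):
        a = rng.standard_normal((num_bits, dim))   # Gaussian projection vectors
        b = rng.uniform(0.0, w, size=num_bits)     # random offsets in [0, w)
        tables.append((a, b))
    return tables

def assign_visual_words(descriptors, tables, w=4.0):
    """Map each local descriptor to one bucket key per table; descriptors
    landing in the same bucket share a visual word."""
    words_per_table = []
    for a, b in tables:
        keys = np.floor((descriptors @ a.T + b) / w).astype(np.int64)
        # Collapse the per-bit keys into a single hashable word id per descriptor.
        words_per_table.append([hash(tuple(k)) for k in keys])
    return words_per_table

def word_histogram(word_ids, vocab):
    """Smoothed, normalized histogram of word ids over a fixed vocabulary order."""
    counts = np.array([word_ids.count(w) for w in vocab], dtype=float) + 1e-6
    return counts / counts.sum()

def kl_divergence(p, q):
    """D_KL(p || q) between the query-model and candidate word distributions."""
    return float(np.sum(p * np.log(p / q)))

if __name__ == "__main__":
    # Toy example: 128-d SIFT-like descriptors for a query region and a candidate.
    rng = np.random.default_rng(1)
    query_desc = rng.standard_normal((50, 128))
    cand_desc = rng.standard_normal((60, 128))

    tables = make_e2lsh_hashers(dim=128)
    q_words = assign_visual_words(query_desc, tables)[0]   # words from table 0
    c_words = assign_visual_words(cand_desc, tables)[0]

    vocab = sorted(set(q_words) | set(c_words))
    score = kl_divergence(word_histogram(q_words, vocab),
                          word_histogram(c_words, vocab))
    print(f"KL-divergence score (lower = more similar): {score:.4f}")
```

In the paper's contextual model, the query-side histogram would be built not only from the marked object region but also from the surrounding visual units; the toy example above omits that aggregation step.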
