Expert Systems with Applications

Learning meta-knowledge for few-shot image emotion recognition


Abstract

Previous studies have demonstrated that images are of great importance in attracting people's attention and motivating them to take action. Various attributes of images (e.g., colors, aesthetics, and embedded objects) are considered driving factors; among them, the emotions conveyed by images play a particularly critical role in stimulating individuals, according to the Stimulus-Organism-Response theory. Consequently, many researchers have put great effort into understanding image emotions, ranging from developing theoretical models to building a broad spectrum of applications. Owing to the complex and unstructured nature of images, identifying image emotions is challenging. Although significant progress in image emotion classification has been achieved, inherent constraints remain unaddressed. For example, acquiring a sufficiently large amount of labeled data to train a good model is costly and inevitably requires substantial human effort. Moreover, building a generalized model applicable to different datasets still requires deep exploration. Image emotions are also highly subjective, which further complicates the classification task. This paper proposes a general meta-learning framework for few-shot image emotion classification, called Meta-IEC. Meta-IEC provides the capability of: (i) adapting to a similar dataset with new classes that have not been encountered before, and (ii) generalizing to a completely different dataset whose emotion classes are unseen in the training dataset and for which only very few labeled images are available. Meta-IEC is also able to capture uncertainty and ambiguity during meta-testing, where we implement a hierarchical Bayesian graphical model to learn latent relationships among the parameters shared between meta-training and meta-testing. Extensive experiments on three commonly used datasets empirically demonstrate the superiority of our method over several state-of-the-art baselines; for example, our meta-learning-based model achieves performance improvements of up to 5+%. We also provide some managerial implications regarding parameter sensitivity and label selection for meta-training and meta-testing.
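The abstract gives no implementation details, so the following is only a minimal sketch of the episodic few-shot setup it describes (meta-training on sampled N-way K-shot tasks, then evaluating on classes unseen during training), written in a prototypical-network style. It is not the paper's Meta-IEC: the hierarchical Bayesian graphical model and the real emotion datasets are omitted, and the encoder, shapes, and hyperparameters are illustrative assumptions.

```python
# Minimal episodic few-shot training loop (prototypical-network style).
# Illustrative only: synthetic tensors stand in for labeled emotion images,
# and the hierarchical Bayesian component of Meta-IEC is not modeled here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvEncoder(nn.Module):
    """Small CNN that maps an RGB image to an embedding vector."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, out_dim)

    def forward(self, x):
        return self.fc(self.net(x).flatten(1))

def episode_loss(encoder, support_x, support_y, query_x, query_y, n_way):
    """One few-shot episode: build a prototype per class from the support
    set, then classify query images by distance to the prototypes."""
    z_support = encoder(support_x)                      # (n_way * k_shot, d)
    z_query = encoder(query_x)                          # (n_query, d)
    prototypes = torch.stack(
        [z_support[support_y == c].mean(0) for c in range(n_way)]
    )                                                   # (n_way, d)
    logits = -torch.cdist(z_query, prototypes)          # negative distances
    return F.cross_entropy(logits, query_y)

if __name__ == "__main__":
    # 5-way, 1-shot episodes with random tensors as placeholder data.
    n_way, k_shot, n_query = 5, 1, 15
    encoder = ConvEncoder()
    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
    for step in range(3):  # meta-training iterations (tiny, for illustration)
        support_x = torch.randn(n_way * k_shot, 3, 64, 64)
        support_y = torch.arange(n_way).repeat_interleave(k_shot)
        query_x = torch.randn(n_query, 3, 64, 64)
        query_y = torch.randint(0, n_way, (n_query,))
        loss = episode_loss(encoder, support_x, support_y, query_x, query_y, n_way)
        opt.zero_grad()
        loss.backward()
        opt.step()
        print(f"episode {step}: loss={loss.item():.3f}")
```

In the paper's setting, the synthetic tensors would be replaced by episodes sampled from labeled image-emotion datasets, and the meta-testing stage would additionally involve the hierarchical Bayesian treatment of parameter uncertainty described in the abstract.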

