IEEE Transactions on Cybernetics

Data-Driven Affective Filtering for Images and Videos



Abstract

In this paper, a novel system is developed for synthesizing user-specified emotions onto arbitrary input images or videos. Rather than defining the visual affective model from empirical knowledge alone, a data-driven learning framework is proposed to extract emotion-related knowledge from a set of emotion-annotated images. In a divide-and-conquer manner, the images are clustered into several emotion-specific scene subgroups for model learning. Visual affect is modeled with Gaussian mixture models built on color features of local image patches. For affective filtering, the feature distribution of the target image or video is aligned with the statistical model constructed from the corresponding emotion-specific scene subgroup through a piecewise linear transformation. The transformation is derived by a learning algorithm that incorporates regularization terms enforcing spatial smoothness, edge preservation, and temporal smoothness in the resulting image or video transformation. The objective function is optimized with a standard nonlinear method. Extensive experiments and user studies demonstrate that the proposed affective filtering framework yields effective and natural results for both images and videos.
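The abstract describes two computational steps: learning a Gaussian mixture model over color features of local patches drawn from an emotion-specific scene subgroup, and then aligning the target's color distribution to that model with a piecewise linear transformation. The following Python sketch illustrates the idea only; the function names, patch size, and per-channel quantile-based mapping are illustrative assumptions, not the authors' formulation, which additionally learns the transformation with spatial, edge-preserving, and temporal regularization.

```python
# Minimal sketch (not the authors' implementation): fit a GMM to patch colors
# from an emotion-annotated scene subgroup, then align a target image's color
# distribution to the model via a piecewise linear transfer.
import numpy as np
from sklearn.mixture import GaussianMixture

def patch_color_features(img, patch=8):
    """Mean color of non-overlapping patches; img is an HxWx3 float array in [0, 1]."""
    h, w, _ = img.shape
    h, w = h - h % patch, w - w % patch
    blocks = img[:h, :w].reshape(h // patch, patch, w // patch, patch, 3)
    return blocks.mean(axis=(1, 3)).reshape(-1, 3)

def fit_emotion_model(subgroup_images, n_components=5):
    """Fit a GMM to patch colors pooled over one emotion-specific scene subgroup."""
    feats = np.vstack([patch_color_features(im) for im in subgroup_images])
    return GaussianMixture(n_components=n_components, covariance_type="full").fit(feats)

def piecewise_linear_filter(target_img, emotion_gmm, knots=17, n_samples=20000):
    """Map each color channel of the target onto the emotion model's distribution
    with a quantile-based piecewise linear transfer (a simple stand-in for the
    learned, regularized transformation described in the paper)."""
    ref, _ = emotion_gmm.sample(n_samples)  # draw reference colors from the emotion model
    qs = np.linspace(0.0, 1.0, knots)
    out = np.empty_like(target_img)
    for c in range(3):
        src_knots = np.quantile(target_img[..., c], qs)
        dst_knots = np.quantile(ref[:, c], qs)
        out[..., c] = np.interp(target_img[..., c], src_knots, dst_knots)
    return np.clip(out, 0.0, 1.0)
```

Applied frame by frame to a video, a transfer of this kind would still need the temporal smoothness term mentioned in the abstract to avoid flicker between frames.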
