Detecting Sexually Provocative Images

Abstract

The abundance of visual content on the Internet and the easy access all users have to it let us find relevant content quickly, but they also pose challenges. For example, if a parent wants to restrict the visual content their child can see, that content either needs to be automatically tagged as offensive or not, or a computer vision algorithm needs to be trained to detect offensive content. One type of potentially offensive content is sexually explicit or provocative imagery. An image may be sexually provocative because it portrays nudity, but the sexual innuendo can also lie in the body posture or facial expression of the human subject shown in the photo. Existing methods simply analyze skin exposure and fail to capture the hidden intent behind images; they therefore miss several important ways in which an image might be sexually provocative, and hence offensive to children. We propose to address this problem by extracting a unified feature descriptor comprising the percentage of skin exposure, the body posture of the person in the image, and his or her gestures and facial expressions. We learn to predict these cues, then train a hierarchical model that combines them. Our experiments show that this model detects the sexual innuendo behind images more accurately.
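The abstract describes a two-stage design: per-cue predictors (skin exposure, posture, gestures, facial expression) are stacked into a unified descriptor, and a second model learns how to combine them. The sketch below is only an illustration of that combination idea, not the paper's implementation; the helper names, the crude RGB skin heuristic, and the logistic-regression combiner are assumptions, and the posture/gesture/expression cues are left as placeholder stubs where the paper would use learned models.

```python
# Illustrative sketch of combining per-cue scores into a unified descriptor,
# then learning a second-stage combiner. Not the authors' implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

def skin_ratio(image_rgb):
    """Crude skin-exposure estimate: fraction of pixels in a rough skin-tone RGB range."""
    r = image_rgb[..., 0].astype(int)
    g = image_rgb[..., 1].astype(int)
    b = image_rgb[..., 2].astype(int)
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)
    return float(skin.mean())

def posture_score(image_rgb):
    """Placeholder for a learned body-posture cue (a trained model in the paper)."""
    return 0.0

def gesture_score(image_rgb):
    """Placeholder for a learned gesture cue."""
    return 0.0

def expression_score(image_rgb):
    """Placeholder for a learned facial-expression cue."""
    return 0.0

def cue_descriptor(image_rgb):
    """Unified feature descriptor: one score per cue, stacked into a vector."""
    return np.array([
        skin_ratio(image_rgb),
        posture_score(image_rgb),
        gesture_score(image_rgb),
        expression_score(image_rgb),
    ])

def train_combiner(train_images, train_labels):
    """Second stage: learn to weigh the cues (labels: 1 = provocative, 0 = not)."""
    X = np.stack([cue_descriptor(img) for img in train_images])
    return LogisticRegression().fit(X, train_labels)

def is_provocative(combiner, image_rgb, threshold=0.5):
    """Final decision from the combined cue scores."""
    prob = combiner.predict_proba(cue_descriptor(image_rgb)[None])[0, 1]
    return prob >= threshold
```

The point of the second stage is that no single cue is decisive: a high skin ratio alone (e.g. a beach photo) should be outweighed when the posture and expression cues score low, which is what a learned combiner over the stacked descriptor can express.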