Multimedia Tools and Applications

Classifying informative and non-informative tweets from the twitter by adapting image features during disaster

Abstract

During a crisis, people post a large number of informative and non-informative tweets on Twitter. Informative tweets provide helpful information such as affected individuals, infrastructure damage, and resource availability and requirements. In contrast, non-informative tweets provide no information useful to either humanitarian organizations or victims. Identifying informative tweets during a disaster is a challenging task. People often post images along with text on Twitter during a disaster, so in addition to tweet text features, image features are also crucial for identifying informative tweets. However, existing methods use only text features and do not use image features to identify crisis-related tweets. This paper proposes a novel approach that considers image features along with text features. It comprises a text-based classification model, an image-based classification model, and a late-fusion step. The text-based classification model uses a Convolutional Neural Network (CNN) and an Artificial Neural Network (ANN): the CNN extracts text features from a tweet, and the ANN classifies the tweet based on the features extracted by the CNN. The image-based classification model uses a fine-tuned VGG-16 architecture to extract image features and classify the image in a tweet. The outputs of the text-based and image-based classification models are combined using a late-fusion technique to predict the tweet label. Extensive experiments are carried out on Twitter datasets of various crises, such as the Mexico earthquake and the California wildfires, to demonstrate the effectiveness of the proposed method. The proposed method outperforms state-of-the-art methods on various metrics for identifying informative tweets during a disaster.
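The late-fusion step described above can be sketched as follows: each unimodal classifier outputs class probabilities, and the two probability vectors are combined to produce the final tweet label. The abstract does not specify the exact fusion rule, so the weighted average used here (and the equal default weights) is an assumption for illustration only.

```python
import numpy as np

def late_fusion(p_text, p_image, w_text=0.5, w_image=0.5):
    """Combine class-probability vectors from the text-based (CNN+ANN)
    and image-based (VGG-16) classifiers with a weighted average,
    one common late-fusion rule (assumed, not taken from the paper)."""
    p_text = np.asarray(p_text, dtype=float)
    p_image = np.asarray(p_image, dtype=float)
    fused = w_text * p_text + w_image * p_image
    return fused / fused.sum()  # renormalise so probabilities sum to 1

# Example: text model leans "informative", image model is less certain.
p_text = [0.8, 0.2]     # [P(informative), P(non-informative)] from text model
p_image = [0.55, 0.45]  # from image model
fused = late_fusion(p_text, p_image)
label = ["informative", "non-informative"][int(np.argmax(fused))]
```

With equal weights this reduces to simple probability averaging; other late-fusion variants (e.g., learned weights or max-rule) plug into the same interface.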

