IFAC PapersOnLine

Analysis of the Impact of Poisoned Data within Twitter Classification Models



Abstract

Many social networks today face growing problems of group polarization, radicalization, and fake news. These issues are exacerbated by bots, which are becoming better at mimicking real people and can spread fake news faster within social networks. Methods exist for detecting these social-media bots, but they may be vulnerable to manipulation. One way this might be done is through what is called a poisoning attack, in which the data used to train a model is altered with the goal of reducing the model's accuracy. The goal of this research is to study how poisoning attacks may be applied to models for detecting bots on Twitter. The results show that by introducing mislabeled data points into such a model's training data, attackers can reduce its accuracy by up to twenty percent. More effective poisoning techniques may exist, and they remain a topic for future research.
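The attack the abstract describes can be sketched as a label-flipping experiment. The setup below is purely illustrative and assumed, not the paper's: synthetic 2-D points stand in for Twitter account features, and a simple 1-nearest-neighbour classifier stands in for the bot-detection model. The point it demonstrates is the mechanism: flipping the labels of a fraction of training points lowers test accuracy.

```python
# Hypothetical sketch of a label-flipping poisoning attack: synthetic 2-D
# data and a 1-NN classifier, NOT the paper's Twitter dataset or model.
import random

def predict(train_data, p):
    # 1-nearest-neighbour: copy the label of the closest training point.
    nearest = min(train_data,
                  key=lambda item: (p[0] - item[0][0]) ** 2 + (p[1] - item[0][1]) ** 2)
    return nearest[1]

def accuracy(train_data, test_data):
    return sum(predict(train_data, p) == label for p, label in test_data) / len(test_data)

def poison(train_data, fraction, rng):
    """The attack: flip the labels of a random fraction of training points."""
    poisoned = list(train_data)
    for i in rng.sample(range(len(poisoned)), int(fraction * len(poisoned))):
        p, label = poisoned[i]
        poisoned[i] = (p, 1 - label)
    return poisoned

rng = random.Random(0)
make = lambda cx, cy, label, n: [((rng.gauss(cx, 1), rng.gauss(cy, 1)), label)
                                 for _ in range(n)]
# Two well-separated classes: 0 = "human" around (0, 0), 1 = "bot" around (5, 5).
train_data = make(0, 0, 0, 100) + make(5, 5, 1, 100)
test_data = make(0, 0, 0, 50) + make(5, 5, 1, 50)

clean_acc = accuracy(train_data, test_data)
poisoned_acc = accuracy(poison(train_data, 0.3, rng), test_data)
print(f"clean accuracy: {clean_acc:.2f}, after flipping 30% of labels: {poisoned_acc:.2f}")
```

On clean data the two clusters are trivially separable, so accuracy is near 1.0; after 30% of the labels are flipped, roughly that fraction of test points inherit a wrong label from their nearest neighbour, and accuracy drops accordingly.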
