During humanitarian crises, affected communities need a large amount of information in a short period of time. This need creates fertile ground for misinformation such as rumors, fake news, and hoaxes to spread within and beyond the affected community, resulting in (mis)information harms that can have serious short-term and long-term consequences. Mitigating such harms calls for a joint human-machine effort. Computational scientists have built misinformation detection systems and algorithms, while social scientists have examined the roles of the parties involved and the ways misinformation spreads and persuades people. However, to our knowledge, no work has examined situations in which humans and machines interact with each other in the context of misinformation. To systematically examine the harms from misinformation, we draw on Activity Theory to propose a suitable framework, one that supports interactions between humans and machines, and their respective feedback loops, for the purpose of mitigating misinformation harms.