
Comparing human behavior models in repeated Stackelberg security games: An extended study



Abstract

Several competing human behavior models have been proposed to model boundedly rational adversaries in repeated Stackelberg Security Games (SSGs). However, these existing models fail to address three main issues that are detrimental to defender performance. First, while they attempt to learn adversary behavior models from adversaries' past actions ("attacks on targets"), they fail to take into account adversaries' future adaptation based on the successes or failures of these past actions. Second, existing algorithms fail to learn a reliable model of the adversary unless sufficient data has been collected by exposing enough of the attack surface, and such data is often unavailable in the initial rounds of the repeated SSG. Third, current leading models have failed to include probability weighting functions, even though it is well known that human beings' weighting of probability is typically nonlinear. To address these limitations of existing models, this article provides three main contributions. Our first contribution is a new human behavior model, SHARP, which mitigates these three limitations as follows: (i) SHARP reasons based on the success or failure of the adversary's past actions on exposed portions of the attack surface to model adversary adaptivity; (ii) SHARP reasons about similarity between exposed and unexposed areas of the attack surface, and also incorporates a discounting parameter to mitigate the adversary's lack of exposure to enough of the attack surface; and (iii) SHARP integrates a nonlinear probability weighting function to capture the adversary's true weighting of probability. Our second contribution is a first "repeated-measures study" - at least in the context of SSGs - of competing human behavior models. In this study, each experiment lasted multiple weeks with a separate set of human subjects on the Amazon Mechanical Turk platform; the results illustrate the strengths and weaknesses of the different models and show the advantages of SHARP. Our third major contribution is to demonstrate SHARP's superiority through real-world human-subjects experiments conducted at the Bukit Barisan Selatan National Park in Indonesia against wildlife security experts.
