The Journal of Artificial Intelligence Research

The Effects of Experience on Deception in Human-Agent Negotiation


Abstract

Negotiation is the complex social process by which multiple parties come to mutual agreement over a series of issues. As such, it has proven to be a key challenge problem for designing adequately social AI agents that can effectively navigate this space. AI agents that are capable of negotiating must be able to realize policies and strategies that govern offer acceptance, offer generation, preference elicitation, and more. But the next generation of agents must also adapt to reflect their users' experiences.

The best human negotiators tend to have honed their craft through hours of practice and experience. But not all negotiators agree on which strategic tactics to use, and the endorsement of deceptive tactics in particular is a controversial topic for many negotiators. We examine the ways in which deceptive tactics are used and endorsed in non-repeated human negotiation and show that prior experience plays a key role in governing which tactics are seen as acceptable or useful. Previous work has indicated that people who negotiate through artificial agent representatives may be more inclined toward fairness than those who negotiate directly. We present a series of three user studies that challenge this initial assumption and expand on this picture by examining the role of past experience.

This work constructs a new scale for measuring endorsement of manipulative negotiation tactics and introduces its use to artificial intelligence research. It then presents the results of three studies that examine how negotiating experience can change which negotiation tactics and strategies humans endorse. Study #1 looks at human endorsement of deceptive techniques based on prior negotiating experience as well as representative effects. Study #2 further characterizes the negativity of prior experience in relation to endorsement of deceptive techniques. Finally, in Study #3, we show that the lessons learned from the empirical observations in Studies #1 and #2 can in fact be induced: by designing agents that provide a specific type of negative experience, human endorsement of deception can be predictably manipulated.
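
The abstract's framing of a negotiating agent in terms of offer-acceptance, offer-generation, and preference policies can be made concrete with a small sketch. The Python code below is illustrative only and is not taken from the paper: the NegotiationAgent and Offer classes, the issue names, the linear additive utility model, and the time-decaying concession curve are all assumptions made for the example.

from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Offer:
    # Fraction of each issue allocated to this agent, each in [0, 1].
    allocation: Dict[str, float]

class NegotiationAgent:
    def __init__(self, weights: Dict[str, float], reservation: float = 0.4):
        total = sum(weights.values())
        # Normalized issue weights encode the agent's (possibly elicited) preferences.
        self.weights = {k: v / total for k, v in weights.items()}
        self.reservation = reservation  # utility below which no offer is ever accepted

    def utility(self, offer: Offer) -> float:
        # Linear additive utility over issues.
        return sum(self.weights[i] * offer.allocation.get(i, 0.0) for i in self.weights)

    def acceptance_threshold(self, t: float) -> float:
        # Concede from utility 1.0 toward the reservation value as the deadline (t = 1) nears.
        return self.reservation + (1.0 - self.reservation) * (1.0 - t) ** 2

    def respond(self, incoming: Offer, t: float) -> Optional[Offer]:
        # Offer-acceptance policy: accept if the incoming offer clears the current threshold.
        target = self.acceptance_threshold(t)
        if self.utility(incoming) >= target:
            return None  # None signals acceptance in this sketch
        # Offer-generation policy: demand the same fraction of every issue so that the
        # counter-offer's utility to this agent equals the current threshold.
        return Offer({i: target for i in self.weights})

if __name__ == "__main__":
    agent = NegotiationAgent({"price": 3.0, "delivery": 1.0, "warranty": 1.0})
    counter = agent.respond(Offer({"price": 0.2, "delivery": 0.5, "warranty": 0.5}), t=0.3)
    print("accepted" if counter is None else f"counter-offer: {counter.allocation}")

In this sketch, the user-facing behavior that the studies manipulate (for example, how aggressively an agent representative concedes or deceives) would live in the acceptance_threshold and respond policies.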
