Contemporary Security Policy

How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons



Abstract

Many observers anticipate "arms races" between states seeking to deploy artificial intelligence (AI) in diverse military applications, some of which raise concerns on ethical and legal grounds, or from the perspective of strategic stability or accident risk. How viable are arms control regimes for military AI? This article draws a parallel with the experience in controlling nuclear weapons, to examine the opportunities and pitfalls of efforts to prevent, channel, or contain the militarization of AI. It applies three analytical lenses to argue that (1) norm institutionalization can counter or slow proliferation; (2) organized "epistemic communities" of experts can effectively catalyze arms control; (3) many military AI applications will remain susceptible to "normal accidents," such that assurances of "meaningful human control" are largely inadequate. I conclude that while there are key differences, understanding these lessons remains essential to those seeking to pursue or study the next chapter in global arms control.
