
Trigger Warning



Abstract

Last August, several dozen military drones and tanklike robots set off on an air and land drill 40 miles south of Seattle. The objective was to locate mock terrorists hiding among buildings. It was one of several exercises conducted last summer to test how artificial intelligence, with its ability to parse complex systems at lightning speed, could be deployed in combat zones. But the exercise served another, if not explicitly stated, purpose: to reflect the shift in the Pentagon's "humans in the loop" thinking when it comes to operating autonomous weapons. Military officials have begun to publicly push back against a reflexive discomfort with putting machines in charge of combat decisions. The policy remains that autonomous weapons must be designed "to allow commanders and operators to exercise appropriate levels of human judgment over the use of force," but some higher-ups now question whether humans need to OK every single trigger pull.

Bibliographic details

  • Source
    Wired | 2021, Issue 7 | pp. 30-31 | 2 pages
  • Author
    WILL KNIGHT
  • Original format: PDF
  • Language: eng
