LAST AUGUST, SEVERAL dozen military drones and tanklike robots set off on an air and land drill 40 miles south of Seattle. The objective was to locate mock terrorists hiding among buildings. It was one of several exercises conducted last summer to test how artificial intelligence, with its ability to parse complex systems at lightning speed, could be deployed in combat zones. But the exercise served another, if not explicitly stated, purpose: to reflect the shift in the Pentagon's "humans in the loop" thinking when it comes to operating autonomous weapons. Military officials have begun to publicly push back against a reflexive discomfort with putting machines in charge of combat decisions. The policy remains that autonomous weapons must be designed "to allow commanders and operators to exercise appropriate levels of human judgment over the use of force," but some higher-ups now question whether humans need to OK every single trigger pull.