Could We Trust Killer Robots?
In the year 2015, somewhere over the tribal territories of Pakistan, an American MQ-9 Reaper drone patrols a complex "kill zone"—an area of terrorist activity in which large numbers of civilians are also present. But on this mission, the drone isn't piloted from afar. It's on its own. The aircraft moves closer to gather information about a potential target. Infrared cameras, heat sensors and other tools of surveillance determine whether the target is indeed a militant, examining, for instance, whether he seems ready to attack…

Science fiction? Not according to Ronald Arkin, the director of the Mobile Robot Lab at Georgia Tech. Since 2006, with support from the U.S. Army Research Office, Dr. Arkin and his colleagues have been working to develop features for a new generation of smart weapons: robot drones that are capable not only of carrying out pinpoint attacks but of deciding on their own when it is permissible to fire on a particular target. Dr. Arkin wants to create "lethal autonomous systems" that operate in strict accord with the laws of war…

Could a machine ever be capable of making the practical and ethical decisions demanded of American troops in the field? Dr. Arkin thinks so. In fact, his work has been motivated in large part by his concerns about the failures of human decision-makers in the heat of battle, especially in attacking targets that aren't a threat.
The full article can be found at The Wall Street Journal.