Drones and other robotic war-fighting machines are becoming increasingly common. China has a new set of attack drones -- aimed, for now anyway, at Russia -- and nobody loves robotic warfare more than our drone-lovin' president.
So it's probably wise to ask: when will the attack drones start making their own decisions? When -- to borrow a phrase from the increasingly plausible Terminator films -- will the machines become self-aware? Are we on the way to a real-life Skynet situation?
Relax. The Pentagon has already thought of this. And they've got it all figured out. From Wired:
The Pentagon wants to make perfectly clear that every time one of its flying robots releases its lethal payload, it’s the result of a decision made by an accountable human being in a lawful chain of command. Human rights groups and nervous citizens fear that technological advances in autonomy will slowly lead to the day when robots make that critical decision for themselves. But according to a new policy directive issued by a top Pentagon official, there shall be no SkyNet, thank you very much.
Here’s what happened while you were preparing for Thanksgiving: Deputy Defense Secretary Ashton Carter signed, on November 21, a series of instructions to “minimize the probability and consequences of failures” in autonomous or semi-autonomous armed robots “that could lead to unintended engagements,” starting at the design stage (.pdf, thanks to Cryptome.org). Translated from the bureaucrat, the Pentagon wants to make sure that there isn’t a circumstance when one of the military’s many Predators, Reapers, drone-like missiles or other deadly robots effectively automatizes the decision to harm a human being.
Oh. Never mind, then.