
DRONES WILL SOON DECIDE TO KILL, AND WE ALL MIGHT DIE!



The US Army recently announced that it is developing the first drones that can spot and target vehicles and people using artificial intelligence, a significant step forward. Whereas current military drones are still controlled by people, this new technology will decide whom to kill with almost no human involvement.

Once completed, these drones will represent the ultimate militarisation of AI and trigger vast legal and ethical implications for wider society. There is the potential for warfare to move from fighting to extermination, losing any semblance of humanity in the process. At the same time, it could widen the sphere of warfare so that the companies, engineers and scientists building AI become valid military targets.

Existing lethal military drones, like the MQ-9 Reaper, are carefully controlled and piloted via satellite. If a pilot drops a bomb or fires a missile, a human sensor operator actively guides the weapon onto the chosen target using a laser. The crew has the final ethical, legal and operational responsibility for killing designated human targets.

One Reaper operator states: "I am very much of the mindset that I would allow an insurgent, however important a target, to get away rather than take a risky shot that might kill civilians."

Even with these drone killings, human emotions, judgements and ethics have always remained at the centre of war. The existence of mental trauma and post-traumatic stress disorder (PTSD) among drone operators shows the psychological impact of remote killing.

This points to one possible military and ethical argument, made by Ronald Arkin, in support of autonomous killing drones: if these drones drop the bombs, psychological problems among crew members can be avoided. The weakness in this argument is that you do not have to be responsible for killing to be traumatised by it. Intelligence specialists and other military personnel regularly analyse graphic footage from drone strikes.

Take the human out of the picture and you also take out the humanity of the decision to kill. The prospect of totally autonomous drones would radically alter the complex processes and decisions behind military killings. But legal and ethical responsibility does not simply disappear if you remove human oversight. Instead, it will increasingly fall on other people, including artificial intelligence scientists.

With autonomous drone weapon systems, certain lines of computer code would almost certainly be classed as dual-use, meaning they develop products for both civilian and military application. Companies like Google, their employees and their systems could become liable to attack from an enemy state.

There are even darker issues still. The whole point of self-learning algorithms, programs that independently learn from whatever data they can collect, is to become better at whatever task they are given. If a lethal weapon, such as one of these killing drones, becomes better at its job through self-learning, someone will need to decide on an acceptable stage of development (how much it still has to learn) at which it can be deployed.
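To make that deployment question concrete, here is a minimal, purely illustrative Python sketch of a self-learning loop gated by a human-chosen readiness threshold. Every name in it (evaluate_accuracy, train_step, DEPLOYMENT_THRESHOLD) is a hypothetical stand-in invented for illustration, not part of any real weapon system or library.

import random

DEPLOYMENT_THRESHOLD = 0.95  # an assumed "acceptable stage of development", chosen by a human

def evaluate_accuracy(model_skill: float) -> float:
    # Toy stand-in for measuring how well the system performs its task.
    return min(1.0, model_skill)

def train_step(model_skill: float) -> float:
    # Toy stand-in for one self-learning update from newly collected data.
    return model_skill + random.uniform(0.0, 0.05)

model_skill = 0.0
while evaluate_accuracy(model_skill) < DEPLOYMENT_THRESHOLD:
    model_skill = train_step(model_skill)

print(f"System declared 'ready' at accuracy {evaluate_accuracy(model_skill):.2f}")
# The ethical question the article raises sits in one line: who sets
# DEPLOYMENT_THRESHOLD, and on what evidence, when an error costs a life?

The sketch shows that the entire moral weight of "ready to deploy" collapses into a single threshold someone has to pick, which is precisely the decision the article argues cannot be delegated to the machine itself.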

Recent experiences of autonomous AI in society should serve as a warning. If machines are left to decide who dies, especially on a grand scale, the result could amount to extermination. Any government or military that unleashed such forces would violate whatever values it claims to be defending.

