A pledge is doing the rounds to stop AI weapons


Date: 2018-07-18

Tech leaders, including SpaceX and Tesla founder Elon Musk, as well as the three co-founders of Google's AI subsidiary DeepMind, have signed a pledge promising not to develop "lethal autonomous weapons".

It is the latest move from an unofficial, global coalition of researchers and executives who are opposed to the propagation of such technology. The pledge warns that weapon systems that use AI to "[select] and [engage] targets without human intervention" pose moral and pragmatic threats. Morally, the signatories argue, the decision to take a human life "should never be delegated to a machine." On the pragmatic front, they say that the spread of such weaponry would be "dangerously destabilising for every country and individual."

The pledge was published at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, and was organised by the Future of Life Institute, a research institute that aims to "mitigate existential risk" to humanity. Max Tegmark, a signatory of the pledge and professor of physics at MIT, said in a statement that the pledge showed AI leaders "shifting from talk to action." Tegmark said the pledge did what politicians have not: impose hard limits on the development of AI for military use. "Weapons that autonomously decide to kill people are as disgusting and destabilising as bioweapons and should be dealt with in the same way," said Tegmark.


Some campaigners have suggested that lethal autonomous weapon systems (LAWS) should be subject to restrictions similar to those placed on chemical weapons and landmines. However, it is incredibly difficult to draw a line between what does and does not constitute an autonomous system. Sceptics also point out that enforcing such laws would be a huge challenge, as the technology needed to develop AI weaponry is already widespread. Additionally, most countries involved in developing this technology (such as the US and China) have no real incentive not to do so.

However, recent events have shown that collective activism like the pledge can make a difference. Google, for example, was rocked by employee protests after it was revealed that the company was helping to develop non-lethal AI drone tools for the Pentagon. Weeks later, it published new research guidelines and promised that it would not develop AI weapon systems.


It is reasonable to point out that the organisations involved are not stopping themselves from developing military AI tools with other, non-lethal uses. But a promise not to put a computer solely in charge of killing is better than no promise at all.

The full text of the pledge can be read below; a full list of signatories is maintained by the Future of Life Institute.


Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.

In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable.

There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual. Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems.

Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security.

We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.

