Elon Musk and AI leaders call for a ban on killer robots
Leaders in the fields of AI and robotics, including Elon Musk and Google DeepMind’s Mustafa Suleyman, have signed a letter calling on the United Nations to ban lethal autonomous weapons, otherwise known as “killer robots.” In their petition, the group states that the development of such technology would usher in a “third revolution in warfare” on par with the invention of gunpowder and nuclear weapons.
“Once developed, [autonomous weapons] will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” write the signatories. “These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”
The letter is signed by the founders of 116 AI and robotics companies from 26 countries, and was published this weekend ahead of the International Joint Conference on Artificial Intelligence (IJCAI). The timing was intended to coincide with the beginning of formal UN talks exploring such a ban. A total of 123 member nations agreed to the talks — which were triggered in part by the publication of a similar petition in 2015 — but discussions have been delayed due to unpaid fees from member states.
The experts signing the letter say that autonomous weapons that kill without human intervention are “morally wrong,” and that their use should be controlled under the 1983 Convention on Certain Conventional Weapons (CCW). This UN agreement restricts or prohibits several classes of weapons, including land mines, incendiary weapons, and blinding laser weapons.
Signatories to the letter say the need to act is urgent. “This is not a hypothetical scenario, but a very real, very pressing concern which needs immediate action,” said signatory Ryan Gariepy, founder of Clearpath Robotics, in a press statement. “We should not lose sight of the fact that, unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability.”
A number of nations are currently developing lethal autonomous weapons, including the US, China, Russia, and Israel. Some systems have already been deployed, like autonomous border turrets built by South Korean arms manufacturer Dodaam Systems. The turrets are equipped with machine guns and are technically capable of identifying and firing on targets without human intervention, although currently human operators have to authorize any lethal shots.
Proponents of autonomous weapons say such technology could reduce battlefield casualties, and would be able to discriminate more accurately between civilians and combatants. But critics say these attitudes will only lead to such weapons being deployed more frequently, and cite the use of drone strikes by the US, which have allowed the country to conduct persistent bombing campaigns across the Middle East.
At present, no country seems likely to slow its development of such weapons for fear that others will overtake it. A US Department of Defense report on the subject cited by the Financial Times urges increased investment in autonomous weapon technology, so that America may “remain ahead of adversaries who also will exploit its operational benefits.”
This sort of arms race mentality is exactly the situation that the AI and robotics experts want to avoid. As the petition submitted to the UN states: “We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”