A total of 116 founders of artificial intelligence companies from 26 countries wrote an open letter to the UN that, according to the Future of Life Institute, “urgently addresses the challenge of lethal autonomous weapons (commonly called ‘killer robots’)” and calls for their use to be banned internationally.
In the document, the experts praised the UN’s initiatives to create a group of governmental experts to address the issue and to appoint Indian ambassador Amandeep Singh Gill to lead it. However, they regretted that the group’s first meeting was canceled (it was postponed to November).
According to the signatories, “lethal autonomous weapons threaten to become the third war revolution. Once created, they will allow armed conflict to take place on a larger scale than ever before, and on a faster time scale than humans can understand.”
They further stress that “we do not have much time to act,” since “once this Pandora’s box is opened, it will be difficult to close it.” For these reasons, the founders of the artificial intelligence companies say: “We beg the parties involved to find a way to protect all of us from these dangers.”
Progressing towards legislation
This is not the first time the issue has been discussed at the UN. In December 2016, 123 member nations of the organization’s Review Conference of the Convention on Conventional Weapons agreed to formally discuss lethal autonomous weapons. Of those 123 members, 19 have already spoken out in favor of a total ban on such weapons.
Among the signatories are Mustafa Suleyman, one of the founders of DeepMind, a sister company of Google whose systems have already defeated top human players at Go, and Elon Musk. Musk, the CEO of several companies including SpaceX and Tesla, has been warning about the potential dangers of artificial intelligence for some time.
Recently, an artificial intelligence from OpenAI (one of Musk’s companies) defeated some of the world’s leading professionals at DOTA 2. This feat prompted the executive to speak on Twitter about the importance of regulating the development of artificial intelligence systems, claiming that they are “a much greater threat” than a possible nuclear war with North Korea.
In the real world
According to The Verge, the urgency the experts call for in their letter is not unfounded. Ryan Gariepy, one of the signatories, told the site that “this is not a hypothetical scenario but a very real and very urgent concern that requires immediate action.” He added that these weapons “are about to become reality and have a real potential to inflict significant harm on innocent people as well as global instability.”
In fact, some countries are very interested in creating such weapons. South Korea, for example, already uses armed drones developed by Dodaam Systems to patrol its border with North Korea – although, for now, any lethal action still requires authorization from a human operator.
This, however, creates a situation in which other countries feel pressure to develop such weapons as well, for fear of being left behind in the field. In a Financial Times article quoted by The Verge, a report by the US Department of Defense suggested investing in lethal autonomous weapons so the US could “keep ahead of its opponents who will also take advantage of the benefits of this technology.”