More than 100 leading figures in the development of Artificial Intelligence systems released a statement declaring that "mitigating the risk of extinction from AI should be a global priority," on a par with pandemics and nuclear war.
Concern about the potential negative impact of Artificial Intelligence on society continues to grow. Last week we told you that OpenAI had asked international organizations to regulate the development and use of its own technology, including ChatGPT, and now more than 100 industry experts have signed a statement warning that AI could become an existential threat to humanity comparable to nuclear war and pandemics.
The statement, published on Tuesday, May 30 by the Center for AI Safety (a non-profit organization), is much shorter than the open letter released in March calling for a pause on AI development, which was signed by hundreds of AI researchers and executives, including Elon Musk. In fact, it is just one sentence: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Some of the leading figures in the development of Artificial Intelligence who signed the declaration are the CEO of OpenAI, Sam Altman; the CEO of Google DeepMind, Demis Hassabis; the CEO of Anthropic, Dario Amodei; Geoffrey Hinton and Yoshua Bengio, two of the three Turing Award recipients for their work on deep learning; and dozens of entrepreneurs and researchers working on next-generation AI problems.
Along with the statement, the Center for AI Safety added: "AI experts, journalists, policymakers and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of the most severe risks of advanced AI." The organization also clarifies that the statement aims to "overcome this obstacle and open up the discussion."
According to Wired, Max Tegmark, professor of physics at the Massachusetts Institute of Technology and director of the Future of Life Institute, a nonprofit focused on the long-term risks posed by AI, called the statement "a great initiative" that he hopes will encourage governments and the general public to take the existential risks of AI more seriously and discuss them "without fear of ridicule."
On the other hand, Dan Hendrycks, director of the Center for AI Safety, compared the current concern about Artificial Intelligence to the unease among scientists over the creation of the atomic bomb. That parallel had already been drawn in an OpenAI blog post: "It is likely that in time we will need something like an IAEA [International Atomic Energy Agency] for superintelligence efforts."
However, not all AI experts agree with the AI-caused doomsday scenario; many are more concerned about the immediate problems this technology is already deepening, such as misinformation. Indeed, some researchers believe the sudden alarm over theoretical long-term risks distracts attention from the issues at hand.
Among those who believe that fears of AI wiping out humanity are a distraction are Yann LeCun, who won the Turing Award with Hinton and Bengio for the development of deep learning, and Arvind Narayanan, a computer scientist at Princeton University. Speaking to the BBC in March, Narayanan said that the catastrophic scenarios typical of science fiction are not realistic: the problem is that "attention has been diverted from the short-term damage of AI" and "current AI is not capable enough for these risks to materialize."
Another is Meredith Whittaker, President of the Signal Foundation and Co-Founder and Senior Advisor to the AI Now Institute (a non-profit organization focused on AI and the concentration of power in the technology industry). She said that many of those who signed the statement probably believe the risks are real, but that the alarm "doesn't capture the real issues."
She also added that the discussion of existential risk presents new AI capabilities as if they were the product of natural scientific progress rather than a reflection of products shaped by corporate interests and control. "This discourse is a kind of attempt to erase the work that has already been done to identify concrete harms and very significant limitations in these systems," said Whittaker.
Margaret Mitchell, a Hugging Face researcher who left Google in 2021, said the long-term ramifications of AI are worth thinking about, but that those behind the statement appear to have done little to consider how to prioritize more immediate harms, including how this technology is used for surveillance. "This statement, as written, and where it comes from, suggests to me that it will do more harm than help in determining what to prioritize," Mitchell added.
Whether you stand on one side or the other, there is no doubt that Artificial Intelligence continues to develop and expand into new areas. While this has brought more than one benefit, we should keep in mind that, if it is not limited or regulated, it could get out of hand.