The opening of the world’s first artificial intelligence safety summit, taking place in the UK, was marked by fierce debate over what measures, and how far-reaching they should be, are needed to prevent or minimize the existential risks associated with AI.
Experts say the event reflected tensions in the technology community driven by divergent understandings of what machine intelligence is and how it might develop. Governments, trying to grapple with the new reality through relatively standard practices, are proposing rules and safeguards in an attempt to minimize the risks of artificial intelligence.
At the summit, held at Bletchley Park, the wartime home of Britain’s secret code-breakers, technology leaders and scientists failed to reach consensus on whether it is justified to focus on negative scenarios of AI development, which range from literal physical threats to life to the spread of disinformation and the promotion of discriminatory narratives. The discussion is further complicated by the fact that it touches on whether human civilization can continue to exist alongside digital thinking systems whose abilities surpass the human mind.
Some attendees at the summit expressed concern that so-called AI doomers would dominate the proceedings. Those fears were not unfounded.
Elon Musk, who has said artificial intelligence could cause the extinction of mankind, attended the event alongside British Prime Minister Rishi Sunak.
Aidan Gomez, co-founder and chief executive officer of the AI company Cohere Inc., believes the potential risks of artificial intelligence should not be viewed through the prism of the fantastical futures depicted in films about global disasters. In his view, attention should instead focus on practical, near-term harms rather than on what might loosely be called the end of the world.
Yann LeCun, chief AI scientist at Meta Platforms Inc., accused competitors, including DeepMind co-founder Demis Hassabis, of exaggerating the potential threats posed by machine intelligence. In his opinion, claims about existential risk amount to a deliberate attempt at a regulatory takeover of the industry. Speaking with journalists, Demis Hassabis called these accusations preposterous.
On the sidelines of the summit, Ciaran Martin, former head of the UK’s National Cyber Security Centre, described the current debate on AI risk as one between those who hold a catastrophic view of artificial intelligence and those who believe the problems are real but solvable, and not existential threats to human civilization. He said the public and private sectors should take all risks into account.
It is worth noting that new technological realities are traditionally perceived as threatening, because they confront humanity with the unknown, and the unknown is always frightening. Moreover, artificial intelligence systems whose cognitive potential could exceed that of every living being generate fear of a force that might absorb our habitual living space. Such a reaction is natural and arises whenever technology moves to a new stage of its existence. But the organic nature of this fear does not negate the risks themselves: humanity has never truly encountered a parallel consciousness smarter than itself. At the same time, the future is extremely difficult to predict from the perspective of the present, a fact that is both reassuring and not entirely optimistic.
At the summit’s closed meetings, attention turned to whether the development of advanced next-generation machine intelligence models should be suspended. In this context, the influence of advanced AI on democracy, human rights, justice, and equality was considered separately.
The media, citing sources, report that between seminars Elon Musk was mobbed by, and held court with, delegates from technology companies and civil society.
Matt Clifford, a representative of the British Prime Minister who helped organize the summit, said the event is focused not on long-term risks but on next year’s AI models.
At the same time, amid the fierce discussions, there are signs that positions are converging. Max Tegmark, a professor at the Massachusetts Institute of Technology who has called for a pause in the development of powerful artificial intelligence systems, said the divide in these discussions had begun to melt away. In his view, those concerned about existential risks, loss of control, and similar dangers should start by supporting those who point to immediate harms, becoming their allies and jointly implementing safety standards.
As we reported earlier, the UK, US, EU, and China signed a declaration on the dangers of AI.