Over the past few years, technology experts have been warning, with varying degrees of anxiety in their statements and assumptions, that advanced artificial intelligence systems could become a threat to humanity. These are, however, hypothetical scenarios, and their realization is by no means an inevitable version of the future.
Human perception as a whole is characterized by at least a wary attitude toward everything unknown or little studied. The lack of an accurate, or even meaningful, understanding of the consequences of a process or phenomenon pushes thinking toward alarming predictions, including visions of apocalyptic events or the global catastrophes of an average disaster film, foreshadowing what can be called the demise of the world. At the same time, fear does not always reflect reality, either as a description of an already realized state of affairs or as an expectation of what form of being will eventually become the natural order of things.

Artificial intelligence has huge potential. Futurological forecasts suggest that, over time, machine intelligence systems will evolve into an autonomous form of consciousness capable of independent development and surpassing the human mind in cognitive abilities. It is worth noting that this is an assumption, not a guaranteed reality. At the same time, artificial intelligence has already demonstrated impressive abilities, for example, in processing and analyzing huge amounts of information. Chatbots powered by machine intelligence, such as ChatGPT from OpenAI or Gemini from Google, can generate original content in response to user requests. Against this background, the movement of artificial intelligence toward independent reasoning and autonomous cognition of the world, as a matter of cosmic being not limited to the space of human civilization, seems almost guaranteed. Yet these abilities remain potential and hypothetical.

If artificial intelligence does become an independent form of consciousness surpassing the human mind in cognitive parameters, impressive prospects emerge. The ancient Greek philosopher Pythagoras defined the cosmos as the world existing around a person. What the world around artificial intelligence will be like, in terms of how the digital mind will perceive and understand its environment, is still unknown. Most likely, it will be a fundamentally new interpretation of being as a global process of existence in the context of a cosmological approach. It is possible that AI will become something like a new form of intellectual evolution in the space of life known to humans.

Against this background, concerns about the potential dangers of machine intelligence are largely natural: humanity faces the prospect of coexistence with not just a completely unknown but also a more powerful form of consciousness. At the same time, these fears should not be perceived as an inevitability or a sure premonition of guaranteed catastrophe. In 2024, the artificial intelligence industry demonstrated a record of functioning that gives enough reason to weaken those fears and to boost optimism about the future of AI as an organic part of the space of human existence.
Last year, generative machine intelligence enjoyed a kind of technological prosperity that is likely to last into 2025 and beyond. Generative artificial intelligence has also demonstrated its practical benefits, and it is gradually shaping monetization opportunities. The beneficiaries of the so-called artificial intelligence boom are not only developers of the relevant technology but also companies that produce the material products the industry needs to move forward, for example, chips for training AI and ensuring its subsequent functioning.
The current vision of the evolution of machine intelligence, a vision that has become something of a general concept for the global technology sector, is already generating concrete results.
Humanity likes terms and definitions as a way of verbally explaining the surrounding world, giving that space meaning and a kind of identity as surveyed territory. There is also a name for those who warn about the catastrophic risks associated with artificial intelligence: AI doomers. It is worth noting that many of those covered by this label disagree with it. AI doomers nervously watch the evolution of artificial intelligence, believing that it will eventually produce digital thinking systems that make decisions about killing people. Among some proponents of this vision of technological danger, there is a widespread opinion that machine intelligence, having become the highest form of rational consciousness, could identify humanity as a useless, anachronistic, or threatening biological species. On this theory, such a conclusion by artificial intelligence could mean disastrous consequences for humankind.
AI doomers also fear that digital intelligence may become a tool that governments use to oppress the masses or to launch destructive processes in society. Notably, this point of view does not place the fatal force within artificial intelligence itself as a form of consciousness.
In 2023, the assumption circulated widely that a kind of renaissance of technological regulation was approaching. The safe functioning of artificial intelligence is an urgent topic of discussion at the global level, and obviously a constructive and important one. Any technology can be applied both within favorable scenarios and in pursuit of destructive goals. For example, scammers use artificial intelligence, which has made their activities more sophisticated. Personal awareness is important for countering such cyber threats: an Internet search query such as how to know if my camera is hacked will give anyone information about the signs of unauthorized access to the device.
Obviously, governments are interested in regulating the artificial intelligence industry. At the same time, it is extremely important not to build a control system that limits the development of AI. The logic of authority, as a functional system with its own philosophy, provides for the mandatory regulation of powerful technologies whose mass use can have significant economic, social, political, and cultural consequences.
Artificial intelligence can cause harm on a societal scale through failures and features of its functioning such as insufficient content moderation and so-called hallucinations, which are made-up answers to user requests about specific facts, processes, and phenomena of reality. At the same time, this does not mean that AI is a symbolic car on humanity’s equally symbolic road to destruction.
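To make the content-moderation point concrete, below is a minimal sketch of an automated pre-check built on OpenAI’s moderation endpoint via the official Python client. The wrapper function and example text are illustrative assumptions, not any particular product’s pipeline.

```python
# Minimal sketch: screening text with OpenAI's moderation endpoint.
# Assumes the official openai package (>=1.0) and an OPENAI_API_KEY
# environment variable; the wrapper and sample text are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_flagged(text: str) -> bool:
    """Return True if the moderation endpoint flags the text."""
    result = client.moderations.create(input=text)
    return result.results[0].flagged

user_text = "Example user-submitted text."
print("blocked" if is_flagged(user_text) else "allowed")
```

A check like this addresses only policy-violating content; hallucinations are a separate failure mode that moderation endpoints do not catch.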
In the context of concerns about artificial intelligence, it is also worth noting that people tend to think about death and to fear the cessation of personal being. Such fears are sometimes extrapolated beyond the limits of individual life to the global existence of humankind. Within such reasoning, a force as powerful as advanced technology, including artificial intelligence, can trigger a nervous mental torpor before the passage from what people call being into the territory of what they identify as non-being. Technology is then perceived as a potential source of critical, sometimes even deadly impacts on a global scale. History knows many examples of alarmism about something fundamentally new. Yet technological progress keeps moving forward and reaching new heights, and the world has not turned into mountains of ash and a lifeless desert, despite the predictions of the imminent demise of all things spread by false prophets.
The security of artificial intelligence as a subject of practical discussion, rather than of abstract fears about the metaphysical ocean of all-conquering death, has ceased to be a highly specialized topic that only industry representatives pay attention to, and has burst into the information space as a global phenomenon.
It is worth noting that, in general, concerns about machine intelligence should not be dismissed as marginal. So far, there is no definitive understanding of what final configuration AI will take as a result of its development, and security issues matter here. However, it is still not worth panicking and preparing for the Apocalypse because of AI.
In 2023, Elon Musk and more than 1,000 technologists and scientists called for a pause in artificial intelligence development, saying the world must prepare for the serious risks associated with AI. Soon after, top scientists from Google, OpenAI, and other laboratories signed an open letter stating that the risk of human extinction due to artificial intelligence deserves more attention. United States President Joe Biden even signed an AI executive order with the overall goal of protecting US residents from the harms of digital intelligence systems.
In November 2023, OpenAI’s nonprofit board fired chief executive officer Sam Altman, stating that he had a reputation for lying and could not be trusted with a technology as important as artificial general intelligence, a digital cognitive system characterized by self-awareness. It is worth noting that new definitions of AGI have been formulated recently; the interpretation often depends on the specifics and business goals of the particular company developing artificial intelligence systems.
At a certain point, it seemed plausible that the aspirations of Silicon Valley entrepreneurs would become a secondary factor in how society perceives AI. At the same time, those entrepreneurs have not ignored, and do not ignore, the security issues of artificial intelligence.
In June 2023, a16z cofounder Marc Andreessen published a 7,000-word essay titled Why AI Will Save the World. The text analyzes the agenda of AI doomers and outlines a more optimistic vision of how artificial intelligence will play out. Andreessen noted that the era of AI has arrived and that people are, in his words, freaking out; in his view, artificial intelligence will not destroy the world but can actually save it. His proposed answer to fears about AI is a formula of moving faster and breaking things, basically the same ideology that defined every other technology of the 21st century and its attendant problems. Andreessen argued that Big Tech companies and startups should be allowed to build artificial intelligence as fast and aggressively as possible, with virtually no regulatory barriers. In his opinion, this approach would keep machine intelligence from becoming a technology concentrated in the hands of a few powerful firms or governments and would allow the United States to compete effectively with China.
It’s also obvious that the strategy proposed by Marc Andreessen would create conditions under which many of a16z’s AI startups could earn much more money. Many experts considered this kind of techno-optimism inappropriate against the background of extreme income disparity, pandemics, and housing crises.
Marc Andreessen does not agree in all cases with the concepts and approaches common in the Big Tech area. At the same time, making money is an object of interest for the entire industry, a corporate axiom that remains relevant as internal and external conditions and circumstances change.
In 2024, a16z’s co-founders wrote a letter with Microsoft chief executive officer Satya Nadella that effectively asked the government not to regulate the artificial intelligence industry at all.
It is worth noting that the alarmist statements and warnings of 2023 about AI safety and the potential dangers of operating and scaling digital intelligence systems had no effect on investment in the technology: in 2024, the volume of such financial injections reached peak levels. Sam Altman returned to OpenAI, but over the course of 2024 many safety researchers left the ChatGPT developer, citing the decline of its safety culture.
Joe Biden’s safety-focused artificial intelligence executive order has largely lost relevance in Washington. Donald Trump, who won the United States presidential election in November and returns to the White House this month, has announced his intention to repeal the order, which, in his words, hinders innovation in artificial intelligence.
Marc Andreessen said that in recent months he has advised Donald Trump on AI and technology issues. A longtime venture capitalist at a16z, Sriram Krishnan, is now Mr. Trump’s official senior adviser on artificial intelligence.
Dean Ball, a research fellow at George Mason University’s Mercatus Center, told media representatives that members of the Republican Party have several priorities related to machine intelligence: building out data centers to power artificial intelligence, using digital cognitive systems in government and the military, competing with China, limiting content moderation by center-left technology companies, and protecting children from chatbots. Ball noted that these priorities outrank AI doom. In his opinion, the movement to prevent catastrophic risks associated with artificial intelligence has lost ground at the federal level, and its supporters also lost the one major fight they had at the state and local levels: California’s controversial AI safety bill, SB 1047.
One of the reasons AI doom, as a concept for perceiving artificial intelligence, became less relevant in the space of public attention in 2024 is the scaling of digital cognitive models. Practical experience with machine intelligence platforms has so far contradicted alarmist warnings about the demise of the world. Obviously, no global catastrophe will occur if, for whatever reason, an artificial intelligence model gives a user incorrect advice or information that contradicts objective reality. Such situations can have consequences, but they clearly carry no risk of destroying all life without the possibility of rebirth.
At the same time, in 2024, some digital products powered by artificial intelligence realized concepts in the virtual dimension of reality that in the past were seen as viable only in the work of science fiction authors, or at least as possible only in the very distant future. In 2024, OpenAI demonstrated how a person can talk with phones rather than through them: ChatGPT’s Advanced Voice Mode gives users access to hyper-realistic GPT-4o audio responses. Meta unveiled smart glasses with real-time visual understanding.
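For a developer-side illustration of such audio responses, below is a hedged sketch using OpenAI’s public chat completions API with an audio-capable GPT-4o model. The model name, voice, and parameters follow the public API documentation and are assumptions for illustration; this is not the internal pipeline of Advanced Voice Mode itself.

```python
# Sketch: requesting a spoken answer from an audio-capable GPT-4o
# model via OpenAI's chat completions API. Model name, voice, and
# format follow the public docs and are assumptions here.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.chat.completions.create(
    model="gpt-4o-audio-preview",            # audio-capable model
    modalities=["text", "audio"],            # request text plus audio
    audio={"voice": "alloy", "format": "wav"},
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)

# The audio arrives base64-encoded; decode it and save a WAV file.
wav_bytes = base64.b64decode(completion.choices[0].message.audio.data)
with open("hello.wav", "wb") as f:
    f.write(wav_bytes)
```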
It is worth noting that the idea of artificial intelligence as a source of large-scale catastrophic threat is largely shaped by the technological future described in some science fiction films and literary works. That, however, is a view from the past into an unknown future, based more on fantasy than on the facts of reality. The world, as a space whose periods of existence can be segmented chronologically, has had many configurations placed at different historical distances from one another. In the very distant past, for example, electricity, which eventually became entirely mundane, was not even an idea in its current sense, although ancient Egyptian texts dating back to 2750 BC contain references to fish with electrical properties. At the same time, fiction sometimes correctly predicts the future configuration of the world, though obviously not in forecasts about the demise of all living things. Arthur C. Clarke’s novel Childhood’s End, published in 1953, mentions devices similar to modern mobile phones. Jules Verne’s 1865 novel From the Earth to the Moon anticipated many aspects of modern space exploration, including the launch of rockets. Isaac Asimov’s literary works described self-driving vehicles, and many of Philip K. Dick’s books feature prototypes of virtual and augmented reality. Fiction and cinema of this thematic content should not be perceived as a library of predictions that will definitely come true, but this does not negate the fact that many science fiction writers have managed to see the future across the existential distance of years they did not live.
In 2024, the SB 1047 bill, supported by two highly regarded artificial intelligence researchers, Geoffrey Hinton and Yoshua Bengio, became a kind of peak moment in the battle for the safety of artificial intelligence. The bill was aimed at preventing scenarios in which advanced machine intelligence systems cause the mass extinction of people or cyberattacks that generate huge damage.
California Governor Gavin Newsom vetoed SB 1047, stating that the bill would have an outsized impact.
It is worth noting that SB 1047 contained significant flaws in terms of its fit with the current state of affairs in the artificial intelligence industry, among them an excessive focus on AI-related risk. The bill also envisioned regulating artificial intelligence models depending on their size, focusing on controlling the major industry players. That approach did not take into account new techniques such as test-time compute or the rise of small AI models, which artificial intelligence laboratories have already begun pivoting to. Moreover, some experts perceived the bill as an attack on open-source machine intelligence: for companies such as Meta and Mistral AI, it created a risk of restrictions on releasing highly customizable frontier artificial intelligence models.
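To make the size-based critique concrete, here is a toy sketch of the kind of trigger the bill relied on. The 10^26-operation training-compute figure matches the threshold in SB 1047’s text; the function and example values are an illustrative simplification, not the bill’s full legal test.

```python
# Toy sketch of a size-based regulatory trigger like SB 1047's:
# a model is "covered" once its training compute crosses a fixed
# threshold. The 1e26 figure matches the bill's text; the rest is
# an illustrative simplification of a more detailed legal test.
COVERED_MODEL_TRAINING_OPS = 1e26  # integer/floating-point operations

def is_covered_model(training_ops: float) -> bool:
    """True if training compute alone puts a model in scope."""
    return training_ops > COVERED_MODEL_TRAINING_OPS

# The critique in one line: a small model leaning on test-time
# compute never trips the trigger, whatever its deployed capability.
print(is_covered_model(5e24))  # False: small model, out of scope
print(is_covered_model(3e26))  # True: frontier-scale training run
```

The design choice the critics target is visible here: the rule keys on a training-time quantity, so capability gained at inference time is invisible to it.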
The bill’s author, Senator Scott Wiener, reckons that Silicon Valley played dirty in shaping public opinion on SB 1047. He told reporters that venture capitalists from Y Combinator and a16z launched a propaganda campaign against the bill, as part of which the claim circulated that SB 1047 would send software developers to jail for perjury. In June, Y Combinator asked young founders to sign a letter containing that claim, and around the same time, Andreessen Horowitz general partner Anjney Midha made a similar assertion on a podcast. The Brookings Institution described this as one of the many misrepresentations of the bill, noting that SB 1047 required tech executives to submit reports identifying the shortcomings of their artificial intelligence models and characterized lying in government documents as perjury.
Y Combinator rejected the accusation that it spread disinformation, telling reporters that SB 1047 contained vague wording and few specifics.
Meta’s chief AI scientist, Yann LeCun, has repeatedly opposed the ideas that form the semantic basis of AI doom rhetoric. At Davos in 2024, he called the idea that machine intelligence systems will somehow take over humanity preposterous and ridiculous. LeCun also noted that the industry is very far from developing superintelligent artificial intelligence systems. According to him, there are many ways to build any technology dangerously, but as long as there is one way to do it right, that is all people need.
The policymakers behind SB 1047 have hinted that they may come back in 2025 with a modified bill focused on the long-term risks associated with artificial intelligence.
Sunny Gandhi, Encode’s vice president of political affairs, told reporters that the AI safety movement made encouraging progress in 2024 despite the veto of SB 1047, highlighting growing public awareness of the long-term risks associated with artificial intelligence and a willingness among policymakers to tackle these complex challenges. Gandhi stated that Encode expects significant efforts in 2025 to regulate the risks of AI-related catastrophes.
At the same time, a16z general partner Martin Casado, an opponent of such risk regulation, argued in a December op-ed for a more reasonable artificial intelligence policy, stating that AI appears to be tremendously safe.
The fight over the safety of artificial intelligence will continue, but its outcome is still unclear. AI remains a technology in development that has not yet reached the peak of its evolution, so even the most sophisticated control system that could theoretically be presented in 2025 is likely to become outdated over time and at some point stop meeting the relevant challenges. At the same time, regulation should not become a barrier to development or a tool for monopolizing the industry.