Science & Technology

Ilya Sutskever Raises $1 Billion for His New AI Company

OpenAI co-founder Ilya Sutskever, who in May left the startup behind ChatGPT, the world’s most popular artificial intelligence-powered chatbot, has raised $1 billion in investment funding for his new company, Safe Superintelligence, or SSI.


The newly financed firm specializes in machine intelligence. The investment was announced on SSI’s official account on the social media platform X. The announcement noted that the investors providing the company with this early-stage funding include Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel, as well as NFDG, an investment partnership co-run by SSI executive Daniel Gross.

Back in May, when announcing the new company, Ilya Sutskever wrote on his X account that the main goal of the business would be to pursue safe superintelligence in a straight shot, with one focus, one goal, and one product. He was OpenAI’s chief scientist and co-led the firm’s Superalignment team with Jan Leike, who also left in May to join rival artificial intelligence startup Anthropic.

It is worth noting that after Ilya Sutskever left, the developer of ChatGPT announced that it was disbanding that team, which had been formed only last year. Whether this decision amounts to a definitive refusal to develop forms of artificial intelligence that are many times superior to current systems in cognitive ability, or merely a temporary halt to the corresponding projects, is still unknown. According to media reports citing insiders, some members of the disbanded team joined other units of the company.

The pursuit of so-called superintelligence in the digital space is an ambitious and largely groundbreaking idea, but its realization remains a prospect rather than a fait accompli. At the same time, the concept is not detached from reality or beyond the actual toolkit of AI developers. Predictions are already circulating that, in the foreseeable future, there will be a form of artificial intelligence that significantly surpasses human thinking in understanding and analyzing facts, phenomena, and processes. Elon Musk and Masayoshi Son, for example, consider such scenarios highly realistic.

Returning to the May changes at OpenAI, it is also worth noting that Jan Leike, after deciding to leave the startup, wrote on his X account that at the company, safety culture and processes had taken a backseat to shiny products.

Ilya Sutskever started SSI with Daniel Gross, who oversaw Apple’s AI and search efforts, and former OpenAI employee Daniel Levy. The company has offices in Tel Aviv, Israel, and Palo Alto, California.

A message posted on SSI’s X account noted that the firm’s singular focus means its team does not intend to be distracted by management overhead or product cycles. It was also underlined that SSI’s business model insulates progress and security from short-term commercial pressure. This wording can be read as a statement of the company’s desire to pursue the evolution of artificial intelligence as a higher goal, including in a certain philosophical sense. Whether these statements hint at an excessive commercial focus at OpenAI is anyone’s guess, although similar opinions about the ChatGPT developer’s current strategy have already been voiced publicly. Elon Musk, for example, said this year that OpenAI had begun to treat commercial metrics as the main criteria for evaluating its effectiveness and had abandoned its original goal of working for the benefit of humanity.

According to media reports, Ilya Sutskever was one of the OpenAI board members involved in the brief removal of co-founder and chief executive officer Sam Altman in 2023. At the time, the company released a statement indicating that Mr. Altman had not been consistently candid in his communications with the board. Against this background, many unofficial versions of events at OpenAI took shape, including claims of disagreement among management over priorities and alarmist, largely conspiratorial theories that the firm’s artificial intelligence work had reached a stage of technological evolution posing a global threat, which supposedly caused the internal turbulence.

Several media outlets reported that Ilya Sutskever focused on ensuring that machine intelligence does not harm people, while Sam Altman and some other employees of the startup sought to deliver new technologies. Amid the internal instability, almost all OpenAI employees signed an open letter stating their intent to resign over the situation. A few days later, Sam Altman returned to his position.

Ilya Sutskever has publicly apologized for his role in those events. On his X account, he wrote of his deep regret for participating in the board’s actions and stated that he had never intended to harm OpenAI.

Ilya Sutskever is currently one of the most influential technologists in the artificial intelligence industry. He trained under Geoffrey Hinton, known as the Godfather of AI. Sutskever was also one of the earliest proponents of the idea of scaling in artificial intelligence development: the notion, since borne out in practice, that the performance of machine intelligence improves as computing power grows. It was this concept that laid the foundation for such achievements in generative artificial intelligence as the above-mentioned ChatGPT.
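
As a rough illustration of the scaling idea described above, research on neural scaling laws often summarizes the relationship with a simple power law; the formula below is a common shorthand from that literature, not something stated by Sutskever or SSI, and its symbols are illustrative assumptions:

L(C) \approx a \cdot C^{-\alpha}

Here L is a model's test loss, C is the training compute, and a and \alpha are constants fitted to experimental runs. As C increases, the predicted loss falls and measured capability rises, which is the practical content of the claim that performance improves as computing power grows.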

Speaking with media representatives this week, Ilya Sutskever said that SSI will approach scaling differently than OpenAI does. Putting his thoughts in figurative terms, Mr. Sutskever said that he had identified a mountain different from the one he had been working on. He added that once that peak is climbed, the paradigm will change, and everything already known about artificial intelligence will change with it. According to him, the most important superintelligence safety work will be carried out at that moment.

In the same conversation, Ilya Sutskever underlined that SSI’s first product will be safe superintelligence. According to him, the world will change a great deal once the company reaches the central milestone of developing super-powerful artificial intelligence, which is why it is currently quite difficult to lay out a plan for what SSI will do. After such a groundbreaking breakthrough, he said, the world will be a very different place.

Ilya Sutskever’s statements should probably be interpreted as a signal that a fundamentally new stage in the technological evolution of AI will change the very way human consciousness perceives the digital mind. It should also be noted that there is currently no way to answer the question of what configuration artificial intelligence will take after such a breakthrough. This uncertainty fuels various fears, since humanity would have to face an alternative form of mind that surpasses human consciousness in cognitive capabilities.

Reflecting on the prospects for the development of artificial intelligence, Ilya Sutskever stated that many big ideas are being discovered, while significant related research has yet to be carried out.

Ilya Sutskever says that many people are now thinking about when artificial intelligence will become more powerful and what steps and tests that will require. According to him, the question is still difficult to answer, since a great deal of research remains to be done.

Mr. Sutskever says that the term scaling hypothesis is now used frequently, but its meaning is clarified far less often. According to him, the great deep learning breakthrough of the past decade rests on a particular formula of the scaling hypothesis, and that formula will change. He also underlined that as the system’s capabilities grow, the safety question will become the most intense.

Separately, in the conversation with media representatives, Ilya Sutskever pointed out that companies operating in the artificial intelligence industry do not currently open-source their primary work. According to him, achieving the goal of safe superintelligence will expand the opportunities to open-source such work.

Also as part of that conversation, Ilya Sutskever noted that he has a very high opinion of the artificial intelligence industry. According to him, as people continue to make progress, all the different companies will come to realize the nature of the challenge they are facing.

Artificial intelligence does have significant potential on a global scale. The evolution of AI will very likely become a transforming factor for human civilization in many, and perhaps all, dimensions. Artificial intelligence has the potential to change the global political state of affairs, bring fundamental innovations to the economy in general and the manufacturing sector in particular, and reshape the cultural life of civilization as well.

Serhii Mikhailov

