News

OpenAI Reportedly Dissolves Safety Team

OpenAI has disbanded the team of specialists that dealt with long-term risks posed by artificial intelligence.

The team was formed about a year ago. On Friday, May 17, a source speaking to the media on condition of anonymity confirmed its disbandment, adding that some members of the AI safety group have been reassigned to other OpenAI units.

The news became public just days after both leaders of the disbanded team, company co-founder Ilya Sutskever and Jan Leike, announced their departure from the startup. On Friday, Jan Leike said that OpenAI’s safety culture and processes have taken a back seat to shiny products.

The team, established last year, focused on scientific and technical work aimed at controlling artificial intelligence systems whose capabilities exceed human cognitive abilities. In 2023, the startup announced that it would dedicate 20% of its computing power to this safety initiative over four years.

On Tuesday, May 14, Ilya Sutskever and Jan Leike announced their departures within hours of each other in posts on the social media platform X. On Friday, Jan Leike said he had joined the startup because he believed it was the best place to conduct this research. According to Mr. Leike, he had long disagreed with the company’s leadership over its core priorities, and over time that disagreement reached a breaking point. He said OpenAI should focus more on ensuring security, monitoring potential risks, preparing to counter those challenges, and paying attention to the impact of advanced technology on society.

Mr. Leike said these problems are very difficult to solve and that his team had been sailing against the wind for the past few months. He noted that the group sometimes struggled to obtain computing resources, which hampered its research. In his view, OpenAI should become a company focused primarily on safety. He also said that designing machines smarter than humans is an inherently dangerous endeavor.

Last year, OpenAI went through what could be called a leadership crisis. Co-founder and CEO Sam Altman was removed from his post in November but returned shortly afterward. Before his removal, he was accused of not being sufficiently candid in his communications with the rest of the company’s leadership.

Media reports say Ilya Sutskever focused his attention on ensuring that artificial intelligence does not harm people, while other members of OpenAI’s leadership reportedly disagreed with those priorities, concentrating instead on accelerating the development of new technologies.

As we have reported earlier, OpenAI Launches GPT-4o.

Serhii Mikhailov


Serhii’s track record of study and work spans six years at the Faculty of Philology and eight years in the media, during which he has developed a deep understanding of various aspects of the industry and honed his writing skills. His areas of expertise include fintech, payments, cryptocurrency, and financial services. He constantly keeps a close eye on the latest developments and innovations in these fields, as he believes they will have a significant impact on the future direction of the economy as a whole.