OpenAI is creating a new team led by Ilya Sutskever, the company's Chief Scientist and one of its co-founders. The new team will develop ways to control superintelligent artificial intelligence systems.
Ilya Sutskever and Jan Leike, the head of OpenAI's alignment team, published a post on the company's blog on Wednesday, July 5, in which they argue that AI surpassing the capabilities of the human mind could arrive within the next ten years. They also warn that such superintelligent AI would not necessarily be benevolent toward humans, and that it is therefore necessary to research ways to control and constrain the technology.
According to the company, there is currently no solution for steering a superintelligent AI or preventing it from escaping human control. Today's alignment techniques rely on humans' ability to supervise the technology, but, as the authors note, people will not be able to reliably supervise AI systems that far exceed the abilities of the human mind.
To prevent, or at least minimize, the risk of a scenario in which AI can no longer be controlled, OpenAI is assembling the new team. The group will have access to 20% of the computing resources the company has secured to date. Scientists and engineers from OpenAI's previous alignment division, joined by researchers from other departments, will work over the next four years to solve the core technical problems of controlling superintelligent AI.
The new team's goal is to train AI systems, using human feedback, to evaluate other AI systems. The intended end result is an AI system capable of conducting alignment research itself.
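To make the first step concrete, here is a minimal sketch of learning from human feedback via pairwise preferences. This is an illustrative toy, not OpenAI's actual system: a linear "reward model" is fit to hypothetical human comparisons of two outputs (the feature values and the Bradley-Terry-style training loop are assumptions for the example), and the trained model can then rank unseen outputs automatically.

```python
import math

def score(weights, features):
    # Linear reward model: a higher score means "more preferred".
    return sum(w * f for w, f in zip(weights, features))

def train(comparisons, n_features, lr=0.1, epochs=200):
    # Each comparison is (features_of_preferred, features_of_rejected),
    # i.e. a human rater chose the first output over the second.
    w = [0.0] * n_features
    for _ in range(epochs):
        for fa, fb in comparisons:
            # Probability the current model agrees with the human choice
            # (Bradley-Terry / logistic model over score differences).
            p = 1.0 / (1.0 + math.exp(score(w, fb) - score(w, fa)))
            g = 1.0 - p  # gradient of the log-likelihood
            for i in range(n_features):
                w[i] += lr * g * (fa[i] - fb[i])
    return w

# Hypothetical feature vectors for outputs, e.g. [helpfulness, accuracy].
human_prefs = [
    ([1.0, 0.9], [0.2, 0.1]),
    ([0.8, 1.0], [0.3, 0.4]),
    ([0.9, 0.7], [0.1, 0.2]),
]
w = train(human_prefs, n_features=2)

# The learned model now scores new outputs without further human input.
good, bad = [0.85, 0.80], [0.15, 0.20]
assert score(w, good) > score(w, bad)
```

Once such a learned evaluator exists, it can stand in for the human rater at scale, which is the basic idea behind using one AI system to help evaluate another.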
OpenAI's hypothesis is that AI can make faster and better progress on superintelligence alignment research than humans can.
Jan Leike and his colleagues John Schulman and Jeffrey Wu say that as AI systems develop, they will take on more and more of the alignment work and will eventually be able to devise alignment strategies on their own. Humans, they say, will shift to reviewing the results of research carried out by these advanced systems rather than conducting it themselves.
The blog post also acknowledges the limitations of the approach: having one AI system analyze another could amplify inconsistencies, vulnerabilities, and biases rather than eliminate them.
OpenAI's representatives describe superintelligence alignment as fundamentally a machine learning problem. They also announced their intention to share the results of this effort broadly and to contribute to the alignment and safety of models beyond their own.
As we reported earlier, OpenAI Launches Cybersecurity Grant Program.