OpenAI has begun assembling a new team of specialists tasked with minimizing the risks that accompany the spread of artificial intelligence and its integration into different areas of life.
Creating such a working group is a timely and relevant step given current technological realities: AI is developing at a rapid pace, and the consequences of its use are becoming more significant and large-scale as the technology improves, increasing the need for tools and systems to monitor them.
Last Thursday, October 26, the company, which rose to worldwide fame after launching its machine-intelligence chatbot ChatGPT, announced on its blog that it had formed a so-called preparedness team, headed by Aleksander Madry, who works at OpenAI while on leave from a faculty position at the Massachusetts Institute of Technology.
The new group will analyze potentially catastrophic risks associated with the operation of artificial intelligence systems and develop methods to prevent worst-case scenarios of AI use. The specialists will focus on cybersecurity problems that may arise from applying artificial intelligence, and will also study chemical, nuclear, and biological threats that AI could potentially provoke.
Another area of the team's work will be drafting a corporate policy that sets out how the company should act if risks are detected in the development of so-called frontier models, the next-generation machine intelligence technologies that surpass existing systems in capability.
The firm is currently focused on creating artificial general intelligence, or AGI: a system of digital thinking capable of performing many tasks better than a person.
The company stated that it now needs assurance that it has the understanding and infrastructure required to build high-performance machine intelligence systems safely.
OpenAI CEO Sam Altman has repeatedly said that artificial intelligence could cause the extinction of mankind. As experts note, however, this does not mean the new team will confine itself to the kinds of risks depicted in dystopian science fiction.
The post on the company's blog also notes that artificial intelligence systems surpassing today's most advanced AI models in capability could benefit humanity, while stressing separately that the prospect of positive outcomes does not cancel out the negative risks.
OpenAI is currently soliciting ideas for risk research from the community, offering a $25,000 monetary reward and employment to the authors of the ten best submissions.
As we reported earlier, OpenAI executives have spoken about the potential of AI to do any job.