OpenAI Develops Tool to Detect AI-Generated Images

OpenAI is working on a tool to detect images created using artificial intelligence.

As part of this effort, the company, which developed the world’s most popular AI-based chatbot, has made a high level of accuracy one of the main goals for the new tool.

Mira Murati, chief technology officer of OpenAI, the company behind the DALL-E image generator, says the new tool is 99% reliable. In this context, reliability means how accurately the tool can determine whether machine intelligence was used to create an image, or confirm that the technology did not generate the visual material. She also said the tool is currently being tested ahead of a public release, for which no date has yet been announced.

Mira Murati made these statements during a speech at the WSJ Tech Live conference in Laguna Beach, California. OpenAI Chief Executive Officer Sam Altman was also present at the event.

Tools that can determine whether an image was created with artificial intelligence already exist, so the approach itself is hardly new know-how. That does not, however, make the new OpenAI tool unnecessary or useless. According to DeepMedia, the number of fake videos and audio recordings circulating online has tripled since the beginning of this year compared with the same period last year. The trend indicates that demand for anti-counterfeiting tools is urgent.

The DeepMedia data does not cover AI-generated images, but the same problem is observed in that sector as well. Experts attribute the rise in fakes to the fact that, over the past few years, the cost of the technologies needed to create such visual materials has dropped sharply and access to them has become much easier.

In January, OpenAI introduced a tool designed to determine whether a text was generated by machine intelligence. In July, the project was put on hold because the product proved unreliable. The company said it is working on improving the tool and also plans to develop ways to detect images and audio recordings generated with machine intelligence.

The main problem with fakes is that such materials can be used to manipulate public opinion, both in the coverage of specific events and in the promotion of narratives about particular phenomena and facts.

At the event in California, OpenAI executives also hinted at a new machine intelligence model that will follow GPT-4. Back in July, the company filed a trademark application for GPT-5 in the United States.

Mira Murati said that OpenAI has made significant progress on the so-called hallucination problem with GPT-4, but the company is not yet where it should be. This somewhat abstract remark can be read both as an acknowledgment that a new machine intelligence model is needed and as an admission that the company must do more to overcome the problem.

As we reported earlier, Reality Defender has raised $15 million to detect deepfakes.

Serhii Mikhailov

Serhii’s track record of study and work spans six years at the Faculty of Philology and eight years in the media, during which he has developed a deep understanding of various aspects of the industry and honed his writing skills. His areas of expertise include fintech, payments, cryptocurrency, and financial services. He keeps a close eye on the latest developments and innovations in these fields, as he believes they will have a significant impact on the future direction of the economy as a whole.