Meta Platforms Inc. will expand its labeling of content created with artificial intelligence across its social media platforms.
The company aims to curb the spread of misinformation and the deliberate distortion of facts. These efforts will apply to Facebook, Instagram, and Threads, all of which belong to the technology giant. Countering manipulative narratives presented as truth is especially pressing in a year of presidential elections in the United States.
Meta is currently collaborating with other technology companies to create a common standard for identifying publications and other materials created with machine intelligence, relying on tools such as invisible watermarks and metadata embedded in images, the company reported on Tuesday, February 6. The tech giant also plans to develop software that detects these invisible markers, allowing it to flag AI-generated content even when the material was produced by a competing service.
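Meta has not published the details of its detection pipeline, but the metadata side of such marking can be illustrated with a minimal sketch. The example below embeds the IPTC `DigitalSourceType` value `trainedAlgorithmicMedia` (the IPTC vocabulary term for AI-generated media) as a PNG `tEXt` chunk using only the Python standard library, then scans the file for it. This is an illustration only: production systems use richer provenance standards such as C2PA plus robust invisible watermarks, which plain text metadata does not provide, since it is easily stripped.

```python
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data +
            struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

def make_png_with_text(key: str, value: str) -> bytes:
    """Create a minimal 1x1 white RGB PNG carrying one tEXt metadata entry."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 2, 0, 0, 0)  # 1x1, 8-bit, RGB
    raw = b"\x00\xff\xff\xff"                             # filter byte + one pixel
    text = key.encode("latin-1") + b"\x00" + value.encode("latin-1")
    return (b"\x89PNG\r\n\x1a\n" +
            png_chunk(b"IHDR", ihdr) +
            png_chunk(b"tEXt", text) +
            png_chunk(b"IDAT", zlib.compress(raw)) +
            png_chunk(b"IEND", b""))

def read_text_chunks(png_bytes: bytes) -> dict:
    """Walk the chunk list and collect all tEXt key/value pairs."""
    found, pos = {}, 8  # skip the 8-byte PNG signature
    while pos < len(png_bytes):
        (length,) = struct.unpack(">I", png_bytes[pos:pos + 4])
        ctype = png_bytes[pos + 4:pos + 8]
        if ctype == b"tEXt":
            k, _, v = png_bytes[pos + 8:pos + 8 + length].partition(b"\x00")
            found[k.decode("latin-1")] = v.decode("latin-1")
        pos += 12 + length  # length field + type + data + CRC
    return found

# Mark an image as AI-generated, then detect the marker.
png = make_png_with_text("DigitalSourceType", "trainedAlgorithmicMedia")
print(read_text_chunks(png))
```

A detector in the spirit of what the article describes would scan uploads for such provenance fields and apply a label when they indicate algorithmic origin; the hard part, which this sketch omits, is content whose markers have been removed.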
Nick Clegg, Meta’s president of global affairs, said he expects that over the next few months the tech giant will be able to detect and label materials created by several other companies focused on machine intelligence, including Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock.
It is worth noting that elections will be held this year not only in the United States but also in dozens of other countries, including Indonesia, South Africa, and India. Disinformation has long been used as a tool of illicit influence on electoral outcomes, and the problem is global. Generative artificial intelligence has given those who manipulate information additional opportunities to influence elections or spread outright falsehoods: AI allows users to create highly realistic images and videos, and to generate text that reads as if written by a human, with no obvious signs of machine authorship.
Speaking with media representatives, Nick Clegg acknowledged that Meta’s new measures will not cover all content and will not be a universal or final solution to the problem of AI-generated fakes. At the same time, he stressed that the imperfection of an approach should not become an alibi for inaction.
Initially, the Meta system will detect only images created by artificial intelligence using other companies’ tools; AI-generated audio and video will not yet be identified. Images produced by companies that do not follow the industry standards, or content of this kind stripped of its markers, will still be allowed on Meta’s social media platforms. However, the technology giant is working on a separate way to automatically identify such materials.
For Nick Clegg, progress in detecting deepfakes is a top priority, in line with Meta’s preparations for elections, including in the United States. Last month, at the World Economic Forum in Davos, Switzerland, he said that creating an industry standard for watermarks is the most urgent current task.
In January, a fake audio message imitating United States President Joe Biden was circulated, alarming disinformation experts. Many of them warned that AI-generated materials could become an important factor in shaping election results and stressed the need to detect and remove such content quickly. Nick Clegg is optimistic that the negative scenario will not materialize, given the high level of attention to identifying fakes. In his opinion, the presidential candidates’ teams will monitor deepfakes and call out the problem publicly.
Nick Clegg noted that Meta does not fact-check original posts by politicians, but it will label AI-generated content regardless of who posts it on its social networks.
On Monday, February 5, the technology giant’s oversight board published a critical analysis of its manipulated-media policy, finding it too narrow. The board also called for better mechanisms for labeling publications generated by artificial intelligence, saying that labeling should take priority over removing such materials.
Nick Clegg said he largely agrees with the conclusions of the Meta oversight board, noting that updating the watermarking approach is a step in the right direction. According to him, as the amount of AI-generated content on the Internet grows, a new approach to the problem will be needed, one that also provides for labeling legitimate media. He added that a public or industry-wide discussion is needed on how users are informed about the truthfulness or authenticity of content.
As we have reported earlier, Meta is building a new AI-focused data center in Indiana.