Science & Technology

Meta and OpenAI CEOs Back EU Artificial Intelligence Rules

The heads of Meta and OpenAI have expressed support for European Union initiatives to regulate the development and deployment of artificial intelligence.


Meta CEO Mark Zuckerberg and OpenAI CEO Sam Altman voiced their support following conversations with European Commissioner Thierry Breton.

Following his conversation with Mark Zuckerberg, Thierry Breton said that the head of Meta supported the European Union’s rules on AI. According to Breton, Zuckerberg also agreed with the EU’s approach to assessing the risks associated with advanced technologies and welcomed measures such as watermarking AI-generated content.

Sam Altman, for his part, said that he expects to work with European authorities on regulating the development and distribution of artificial intelligence.

Thierry Breton has been making a tour of technology companies to discuss regulating artificial intelligence under a dedicated set of rules. Following these meetings, the European Commissioner said that Meta appears ready to comply with the EU regulatory framework. That readiness, however, is so far a matter of stated intent: on the technical side, the company has yet to demonstrate that its systems conform to the rules in a dedicated stress test scheduled for July.

Thierry Breton also met with Google CEO Sundar Pichai, who backed the idea of voluntary rules for the AI industry.

In June, the European Parliament approved a bill known as the Artificial Intelligence Act. The legislative initiative is the world’s first comprehensive document regulating the artificial intelligence sector. The final version of the law is expected to be approved either at the end of this year or at the beginning of 2024.

The bill restricts certain uses of artificial intelligence and classifies AI systems by degree of risk, from minimal to unacceptable. The risk scale is intended to identify applications of advanced technology that could potentially cause significant harm to users’ safety.

Under the bill, the strictest regulatory measures will apply to artificial intelligence systems involved in critical infrastructure, human resources, education, migration management, and public order. For such systems, particular attention will be paid to transparency and the accuracy of data use. The bill also provides that artificial intelligence developers who violate the rules face fines of up to 30 million euros.

Last week, the media reported that OpenAI had successfully lobbied for changes to the bill, reducing the regulatory burden the company would have faced under the original version of the rules. For example, the firm argued that its generative artificial intelligence systems should not be classified as high-risk.

Serhii Mikhailov


Serhii’s track record of study and work spans six years at the Faculty of Philology and eight years in the media, during which he has developed a deep understanding of the industry and honed his writing skills. His areas of expertise include fintech, payments, cryptocurrency, and financial services. He keeps a close eye on the latest developments and innovations in these fields, believing they will significantly shape the future direction of the economy as a whole.