The US Federal Trade Commission (FTC) has launched an investigation into OpenAI.
As part of this investigation, the Commission has requested extensive information from the creator of ChatGPT, the world’s most popular AI chatbot, about its procedures for processing personal data, the likelihood of inaccurate answers to user requests, and the risks of harm to consumers, including reputational damage.
Experts say that the proceedings initiated by the FTC may complicate OpenAI’s relationship with politicians, many of whom are strongly impressed by the capabilities of Sam Altman’s technology company. The investigation could also feed into broader discussions about artificial intelligence and the negative consequences of its widespread use, including risks to the labor market, national security, and democratic institutions.
The Commission is interested in how the company obtains the data sets used to train its language models. The regulator also wants to understand how the AI model generates responses to queries about specific people that may be false, disparaging, or misleading. The request covers other aspects of the system’s functioning as well.
Also, as part of this proceeding, Sam Altman’s company must respond to any complaints from users of its digital product, comment exhaustively on the claims contained in lawsuits, and provide detailed information about the data leak the firm disclosed in March of this year, an incident that exposed users’ chat histories and payment details.
The FTC seeks a complete description of the procedures for testing, configuring, and adjusting OpenAI’s algorithms, including how responses to requests are generated and how risks are handled. The regulator also requests information on how the company reacts when the chatbot provides users with false information fabricated by the model itself.
This investigation, according to experts, can be described as the most significant regulatory action by the American authorities in the artificial intelligence sphere to date. Congress is currently striving to understand how this new generation of technologies is developed and to gather information about industry practices. New legislative work is expected to begin in the United States in the fall, which will affect, among other things, the artificial intelligence industry.
In other regions of the world, work on a regulatory framework for AI is proceeding at a faster pace. In the European Union, for example, legislation setting standards for the use of artificial intelligence is expected in the foreseeable future. Preliminary information suggests that Europe will prohibit the use of AI for predictive policing and for purposes classified as high-risk.
The FTC has repeatedly urged companies to refrain from making exaggerated claims about artificial intelligence capabilities and not to use the technology in discriminatory practices.
The chair of the Commission, Lina Khan, says that the regulator has the legal authority to pursue enforcement actions over illegal uses of AI. This year, a group of citizens filed a complaint with the FTC against OpenAI, claiming that the company’s chatbot is prone to biased judgments, does not always comply with privacy standards, and often invents information rather than providing reliable data.
As we have reported earlier, FTC Wants to Pause Microsoft-Activision Deal.