
Consumer Group Calls On EU to Investigate Risks of Generative AI

On Tuesday, June 20, the EU’s largest consumer group, BEUC, appealed to regulators to immediately begin investigating the potential risks of generative artificial intelligence.


European authorities currently face uncertainty over how to regulate the AI industry. So far, the only point of clarity is that artificial intelligence will be used for both commercial and non-commercial purposes, depending on the specific case and goals.

Ursula Pachl, Deputy Director General of BEUC, says that generative AI, exemplified by ChatGPT, has opened up enormous technological opportunities for users, but against this background there are concerns about the potential use of artificial intelligence for deception, manipulation, and harm. She also noted that these technologies can present various kinds of bias as reliable truth, fueling discrimination and fraud.

BEUC calls on the safety, data protection, and consumer protection authorities to start investigations now rather than wait until the threats have already materialized and regulators can only react. Pachl noted that existing legislation applies to all products and services regardless of the technology involved, and that the authorities should enforce the current regulatory framework.

BEUC, which represents consumer organizations in 13 EU countries, issued the appeal to coincide with a report by one of its members, the Norwegian Forbrukerrådet, which documents the harm artificial intelligence poses to consumers and examines numerous related problems.

Some representatives of the technology sector are concerned that AI could become a tool of human extinction. Discussions in Europe, however, mainly concern the use of advanced technologies in the context of equal access to services, disinformation, and competition.

In the European discussions of AI's potential risks, attention has focused on the fact that some developers, including large technology companies, keep their systems closed to external scrutiny. As a result, it is impossible to form a reliable understanding of how data is collected and how the algorithms work. Attention has also been drawn to the fact that some systems present false information in response to queries as though it were correct, and users do not always realize they have been misled. Against this background, the issue of artificial intelligence's manipulative capabilities is becoming more acute.

Participants in the European discussions also point out that, depending on the datasets on which artificial intelligence models were trained, the information provided to users may be biased. Security is another element of the debate, in particular the risk of AI being used to deceive people or hack systems.

The appearance of the chatbot ChatGPT has heightened public attention to artificial intelligence and its capabilities, but the EU has been studying the issue for several years. Discussion of AI-related risks began back in 2020, although the campaign initially aimed to increase trust in the technology.

By 2021, the discussion had changed direction, with the risks of specific artificial intelligence applications gradually entering the debate. In Europe, about 300 organizations have declared the need to ban some forms of AI.

Over time, public sentiment about artificial intelligence and the potential consequences of its use has become more skeptical. Last week, the EU's competition chief, Margrethe Vestager, said that AI creates a risk of bias in such an important area as financial services. Her comments came after the European Parliament approved its position on the EU's Artificial Intelligence Act. Under this legislation, AI-based applications are divided into three categories according to the level of risk they pose: unacceptable, high, and limited. The bill has not yet entered into force and is still at the approval stage, which is expected to be completed by the end of this year.

Once in force, the AI Act will be the world's first attempt to codify the understanding and legal regulation of artificial intelligence for commercial and non-commercial purposes.

As we have reported earlier, Google and OpenAI Disagree on Government Oversight of AI.


Serhii Mikhailov


Serhii’s track record of study and work spans six years at the Faculty of Philology and eight years in the media, during which he has developed a deep understanding of various aspects of the industry and honed his writing skills. His areas of expertise include fintech, payments, cryptocurrency, and financial services. He keeps a close eye on the latest developments and innovations in these fields, as he believes they will have a significant impact on the future direction of the economy as a whole.