The Payment and Clearing Association of China, a state-funded body, has warned of the potential dangers of using generative artificial intelligence tools.
On Monday, April 10, the association said that technologies such as OpenAI's ChatGPT may pose risks in use, including cross-border leaks of personal data.
The association, which operates under the People's Bank of China, notes that payment-sector firms should strictly comply with the relevant laws and regulations when using artificial intelligence technologies, and should not upload data of particular importance to the state or the financial sector.
The Chinese association's statement comes amid heightened global attention to artificial intelligence tools. Last week, US President Joe Biden met with the President's Council of Advisors on Science and Technology to discuss the opportunities AI offers and the risks its use may create, including risk factors affecting both national security and the security of individual users.
Artificial intelligence is already being adopted across the global economy, and its penetration is proceeding at a rapid pace. Regulators around the world cannot ignore this trend: authorities are trying to keep up with the spread of the technology and are taking measures to rein it in.
ChatGPT's popularity has been growing worldwide for several months. The chatbot's rapid development and uptake have raised concerns about the lack of regulatory mechanisms for artificial intelligence, prompting legislators to debate how to balance control of the industry against its continued development.
Italy became the first European country to ban ChatGPT, the chatbot from Microsoft-backed OpenAI. The local data protection authority opened an investigation into whether the chatbot violated rules on the secure storage of confidential information and on verifying users' ages.
The Italian regulator said there was no legal basis for the mass collection and storage of the personal data used to train the chatbot, and imposed the ban on these grounds.
The non-profit Center for AI and Digital Policy has filed a complaint with the Federal Trade Commission (FTC), asking it to open an investigation into OpenAI and to suspend the commercial development of large language models. The Commission has previously stated that the use of artificial intelligence should be transparent, fair, empirically sound, and as accountable as possible. The complaint's authors contend that OpenAI's GPT-4 meets none of these criteria.
Some experts concede that strict regulation of artificial intelligence, including a suspension of development of the relevant technologies, would spur illicit operators to pursue this work for commercial gain. In that case, analysts believe, the security threat would be more serious than the potential risks of the current situation.
As we reported earlier, ChatGPT Maker to Propose Remedies Over Italian Ban.