The Dutch privacy regulator has sent a letter to OpenAI, the creator of ChatGPT, asking the company to clarify how it processes confidential user information.
The regulator wants to know how the company handles consumers' personal data when training its underlying system. The inquiry is part of a broader trend of heightened official scrutiny of chatbots built on generative artificial intelligence.
The letter to OpenAI, which is backed by Microsoft, asks whether the prompts users submit during conversations are used to train the algorithm, and on what basis confidential information is collected and subsequently processed. The questions were sent by e-mail to the creators of the most popular generative AI chatbot.
The regulator's letter specifically notes that highly personal information could end up in the hands of third parties, for example when users ask for advice on resolving a marital conflict or for recommendations on medical issues.
The Dutch Data Protection Authority's move aligns with numerous and growing calls to create a regulatory framework for overseeing chatbots based on generative artificial intelligence.
In April, the European Data Protection Board created a task force dedicated to ChatGPT. The group serves as a platform for EU member states to exchange information on possible regulatory measures.
In the Netherlands, 1.5 million people used ChatGPT in the first four months after its launch.
In March, the Dutch Data Protection Authority said it had no plans to block the OpenAI product. At the time, the regulator indicated that it was monitoring the chatbot's operation and that, while a ban was not under consideration, it could not be ruled out entirely.
Also in March, Italy's Data Protection Authority banned the use of ChatGPT, stating that there was no legal basis for collecting data to train the chatbot. Italian users regained access in April, after OpenAI reported that it had met the regulator's requirements.