Media outlets report that authorities in the United Kingdom are working to make technology companies more transparent about how they train their artificial intelligence models.
These efforts are partly a response to growing complaints from creators that their work is being copied without permission and used in artificial intelligence projects.
In a broader sense, greater transparency is also a safety measure. Artificial intelligence can be described as a new form of mind that now shares a space of existence with human consciousness.
For now, AI remains under the full control of the developers who train the models and program the algorithms that govern their behavior. Over time, however, artificial intelligence may become an autonomous digital thinking system, able to develop independently and perceive the surrounding world on its own. Some predict that AI will eventually surpass humans in cognitive ability; Elon Musk, for example, reckons that artificial intelligence will demonstrate extraordinary cognitive power next year or in 2026. Against the background of such prospects, the question of safety is clearly important and relevant.
The UK Culture Secretary, Lucy Frazer, told media representatives that the government is working on rules that will set standards for how companies in the artificial intelligence industry use books, music, and TV shows. She noted that ministers will initially focus on which content firms use to train their AI models. Once the relevant mechanism is in place, creative industries will be able to find out whether companies building artificial intelligence have used their original material.
Frazer says AI poses a serious problem not only for journalism but also for the creative industries. In her view, the first step is for companies in the artificial intelligence industry to be transparent about what content they use. She added that people are concerned about other issues as well, including opting content in or out of training and remuneration, and said she is already working on these problems.
Frazer declined, however, to discuss what mechanisms would be needed to ensure transparency so that copyright holders can determine whether their content has been used to train artificial intelligence models.
Creators are concerned that AI is encroaching on new areas of activity. These concerns intensified after Google's search engine began offering users AI-generated summaries in response to their queries.
Marc McCollum, chief innovation officer at Raptive, told media representatives last week that the company's initial analysis showed SGE (Search Generative Experience) could significantly reduce traffic to content creators' websites, putting advertising revenue at risk and threatening a marked deterioration in creators' finances. McCollum estimated that artificial intelligence could deprive creators of $2 billion in total revenue within a year. He was also skeptical of the existing arrangements between creators and AI developers over intellectual property: in his view, the current models neither provide adequate compensation for access to creators' content nor comply with principles of good faith. For many independent authors, he noted, solving these problems is a matter of survival.
Content creators, McCollum says, are the backbone of a diverse and vibrant digital ecosystem, and their work deserves recognition and remuneration.
Not everyone is pessimistic about the prospects of AI-based search, however. Michael Hasse, a cybersecurity and technology consultant, says AI-based search can both help and hinder consumers looking for particular products. With traditional search, he noted, the first few pages of results tend to be dominated by companies that have invested in search engine optimization or paid for preferential placement.
It is worth noting that in an era of rapidly spreading artificial intelligence, cybersecurity takes on particular importance. The main problem is that fraudsters also have access to advanced technology, which makes their activity harder to detect. In these conditions, users' personal awareness matters: a search engine query such as "how to know if my camera is hacked" will show anyone the signs of unauthorized access to a device. Digital literacy is one of the tools for countering cybercrime.