
OpenAI and Google DeepMind Employees Want to Speak Out on Concerns About AI

A group of current and former employees of OpenAI and Google DeepMind says it needs protection from possible retaliation for voicing concerns about the potential risks associated with the development and use of artificial intelligence.


In the absence of government oversight of these companies, current and former employees are among the few who can hold the largest players in the AI industry accountable to the public. That statement is contained in an open letter signed by 13 people, only seven of whom put their names to it; the rest signed anonymously.

The open letter also notes that broad confidentiality agreements effectively prevent employees from voicing concerns about how AI systems function and are developed. Whether artificial intelligence can be kept under control remains a contested question, both in terms of how such control would work and how effective it could be. One view holds that, at some stage of AI development, humans could lose control of the technology; that view, however, remains a strictly theoretical assumption rather than a guaranteed or even especially likely scenario.

Over the past few weeks, OpenAI has been at the center of a controversy over its approach to AI safety. The debate intensified after the company disbanded one of its most respected safety teams, a decision that was followed by a series of employee departures.

Some current employees of the company have also expressed concern that they were asked to sign nondisparagement agreements tied to their equity in the firm. The worry is that such obligations could cost employees lucrative equity deals if they voice disagreement with certain aspects of OpenAI's practices. After employees clearly refused to sign the agreements, the company's management said those provisions would not apply to former employees.

The decision was welcomed. Jacob Hilton, a former OpenAI employee who signed the open letter, wrote on X that the company deserves credit for changing its nondisparagement policy, but added that employees may still fear other forms of retaliation for disclosing information, such as being fired or being sued for damages.

Commenting on the open letter, an OpenAI spokesperson said the company is proud of its track record of providing the most capable and safest AI systems and believes in its scientific approach to addressing risk. The spokesperson agreed that rigorous debate is crucial given the significance of the technology and said the company will continue to engage with governments, civil society, and other communities around the world.

A Google representative did not respond to a request for comment.

The open letter also expresses concern that leading AI companies have strong financial incentives to avoid effective oversight and only weak obligations to share with the public the true dangers of their systems.

The letter further stresses that ordinary whistleblower protections are insufficient for the AI industry, because they are designed to address illegal activity, while many of the risks associated with AI fall outside existing regulation.

In the letter, the employees urge AI companies to commit not to enter into or enforce non-disparagement agreements covering risk-related concerns, and to establish a verifiable, anonymous process through which employees can raise issues with company boards and regulators. They also call on AI companies not to retaliate against current and former employees who publicly share risk-related information after other internal channels have failed.

OpenAI said it holds regular sessions with its board of directors at which the company's work and products can be discussed in a dialogue format, and that employees can raise concerns during working hours. The company also said it maintains an integrity hotline for employees and contractors.

Daniel Kokotajlo, a former OpenAI employee who quit this year, has voiced concerns about whether companies are ready for the implications of artificial general intelligence, a hypothetical form of AI that could surpass human capabilities across a wide range of tasks. He puts the probability that such a system will be developed and deployed by 2027 at 50%.

Kokotajlo says that nothing currently stops companies from building artificial general intelligence and putting it to broad use without restriction, and that the process lacks transparency. Government oversight or binding commitments from the companies could in theory change that, but he considers such measures unlikely for now.

Kokotajlo said he left OpenAI because he believed the company was not ready for the emergence of artificial general intelligence and should have invested far more in preparing for, and thinking through, the consequences of a more advanced form of AI.

It is worth noting that the risks associated with artificial intelligence are multifaceted and, in a sense, multilayered. One view circulating among experts holds that, under certain development scenarios, advanced technology could generate global threats; many others disagree, arguing that humans will never lose control of AI. At the same time, some risks stem from the malicious use of AI that already exists. Artificial intelligence is gradually becoming a tool for cybercriminals, making their activity more sophisticated. In this environment, user awareness matters: a search query such as how to know if my camera is hacked will show anyone the signs of unauthorized access to a device, and digital literacy remains a practical defense against cybercrime.

Artificial intelligence has also increasingly been used to generate deepfakes, which serve both as a means of financially motivated crime and as a tool of information manipulation. In the latter case, AI produces highly realistic audio recordings and videos of well-known people making statements they never made, which are then spread in public discourse to legitimize particular narratives as the only correct interpretation of events. Over time, AI could become an active instrument of the information policy of certain countries or business groups, which also signals that it may be used in so-called cognitive warfare.

At the same time, tools are being developed to counter malicious uses of artificial intelligence. The startup Reality Defender, which detects deepfakes, raised $15 million in October. Its co-founder and CEO, Ben Colman, says new methods of generating content appear constantly and warns that, at some point, deepfakes used outside responsible and legal scenarios could cause consequences of shocking scale.

Serhii Mikhailov


Serhii's track record of study and work spans six years at the Faculty of Philology and eight years in the media, during which he has developed a deep understanding of the industry and honed his writing skills. His areas of expertise include fintech, payments, cryptocurrency, and financial services, and he keeps a close eye on the latest developments and innovations in these fields, believing they will have a significant impact on the future direction of the economy as a whole.