On Tuesday, May 16, Sam Altman, Chief Executive Officer of OpenAI, told US senators which aspects of the development and application of artificial intelligence he believes should be regulated.
Altman said he views government oversight favorably when it aims to prevent the spread of misinformation during election voting. The head of OpenAI also expressed support for regulatory measures aimed at identifying content generated by artificial intelligence, and he agreed that external oversight is appropriate where it addresses the risk of global catastrophes provoked by AI.
The head of OpenAI also warned that if the development of AI technologies goes in the wrong direction, the resulting failure could be enormous, affecting human life in nearly all its aspects and dimensions.
Altman noted that, ahead of the 2024 presidential election in the United States, he is deeply concerned that AI models could generate politically biased information built on self-serving fabrications.
The head of OpenAI expressed support for creating a new federal agency to oversee companies working on artificial intelligence. He also spoke approvingly of granting this agency the authority to issue licenses certifying that AI-based products meet certain requirements, and to revoke those licenses in case of violations. Christina Montgomery, Chief Privacy and Trust Officer at IBM, does not share this position, arguing that existing agencies should handle such oversight.
Altman said he wants the United States to lead in the field of artificial intelligence. In his view, the country could set global standards by regulating the microprocessors used to train and run AI systems. These chips, known as graphics processing units (GPUs), are sold mainly by the American company Nvidia Corp.
Altman agreed with legislators that a new regulatory framework should neither disadvantage startups in the field of artificial intelligence nor entrench the dominance of large players. He believes stricter regulatory measures should apply to AI programs with potentially dangerous capabilities.
Senator Jon Ossoff of Georgia argued that enhanced monitoring should apply to artificial intelligence models capable of manipulating human beliefs or creating new biological agents.
Senator John Kennedy of Louisiana expressed concern that some AI developers may, intentionally or not, steer the technology in a direction that endangers humanity.
Against this backdrop, many experts question whether it is wise to shape industry regulation together with its leaders, who have an obvious interest in defending their companies in such discussions. This remains an assumption, however, not a proven pattern of behavior.
As we reported earlier, OpenAI previews a business plan for ChatGPT.