New York has become the first US city to regulate the use of artificial intelligence in hiring.
According to media reports, the law, known as NYC 144, requires companies that use artificial intelligence software to assist in hiring and promotion decisions to audit those tools annually for potential discrimination based on race or gender. Firms must also publish the results of these audits.
Erin Connell, a lawyer at Orrick, Herrington & Sutcliffe who represents employers, said that although the law is officially framed as an anti-discrimination measure, in practice it functions as a disclosure law: it requires public reporting rather than proof that discrimination has actually occurred. She also noted that lawmakers and industry groups are watching NYC 144 as a possible template for future regulation of artificial intelligence.
In the US, several federal regulators have published an open letter warning that AI risks legitimizing unlawful bias and automating discrimination, and that the technology may cause other harms of a similar nature.
The New York law obliges employers to publish adverse impact figures, which indicate whether a hiring procedure disproportionately disadvantages candidates of a particular race or gender. Non-compliance carries a fine of up to $1,500, and notably the amount is charged for each day the violation continues.
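The adverse impact figures described above are typically computed as selection-rate ratios. The sketch below is illustrative only: NYC 144 and its implementing rules define the exact metric, so the function names and the sample numbers here are assumptions, and the 0.8 threshold mentioned in the comments is the common "four-fifths" rule of thumb rather than anything the law itself mandates.

```python
# Hypothetical sketch of an adverse impact calculation: each group's
# selection rate is compared to the rate of the most-selected group.

def selection_rate(hired, applicants):
    """Share of applicants in a group who were hired."""
    return hired / applicants

def impact_ratios(rates):
    """Each group's selection rate relative to the highest group's rate."""
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Illustrative numbers, not real audit data.
rates = {
    "group_a": selection_rate(hired=30, applicants=100),  # 0.30
    "group_b": selection_rate(hired=15, applicants=100),  # 0.15
}

ratios = impact_ratios(rates)
# group_b's ratio of 0.5 falls below the 0.8 ("four-fifths") threshold
# often used as a rule of thumb for flagging adverse impact.
print(ratios)  # {'group_a': 1.0, 'group_b': 0.5}
```

A ratio near 1.0 means the groups are selected at similar rates; the further a group's ratio falls below 1.0, the stronger the indication of disparate impact.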
The potential for discrimination and the high risk of bias are among the reasons why many AI critics are calling for strict controls on the use of the technology. Sandra Wachter, professor of technology and regulation at Oxford University, said in May that the public conversation about artificial intelligence should move away from science-fiction scenarios, such as virtual weapons of mass destruction or machine consciousness, and focus on more substantive and specific problems.
The United States, China, and the European Union currently take different approaches to regulating the technology. The EU has made the most progress, adopting rules grounded in human rights and building on its earlier digital regulation. Shaunt Sarkissian, founder and CEO of AI-ID, said that European standards emphasize privacy and pay less attention to commercial uses of artificial intelligence.
As we reported earlier, the CEOs of Meta and OpenAI have backed EU artificial intelligence rules.