New York City Moves to Regulate How AI Is Used in Hiring

European lawmakers are finishing work on an A.I. act. The Biden administration and leaders in Congress have their plans for reining in artificial intelligence. Sam Altman, the chief executive of OpenAI, maker of the A.I. sensation ChatGPT, recommended the creation of a federal agency with oversight and licensing authority in Senate testimony last week. And the topic came up at the Group of 7 summit in Japan.

Amid the sweeping plans and pledges, New York City has emerged as a modest pioneer in A.I. regulation.

The city government passed a law in 2021 and adopted specific rules last month for one high-stakes application of the technology: hiring and promotion decisions. Enforcement begins in July.

The city’s law requires companies using A.I. software in hiring to notify candidates that an automated system is being used. It also requires companies to have independent auditors check the technology annually for bias. Candidates can request and be told what data is being collected and analyzed. Companies will be fined for violations.

New York City’s focused approach represents an important front in A.I. regulation. At some point, the broad-stroke principles developed by governments and international organizations, experts say, must be translated into details and definitions. Who is being affected by the technology? What are the benefits and harms? Who can intervene, and how?

“Without a concrete use case, you are not in a position to answer those questions,” said Julia Stoyanovich, an associate professor at New York University and director of its Center for Responsible A.I.

But even before it takes effect, the New York City law has been a magnet for criticism. Public interest advocates say it doesn’t go far enough, while business groups say it is impractical.

The complaints from both camps point to the challenge of regulating A.I., which is advancing at a torrid pace with unknown consequences, stirring enthusiasm and anxiety.

Uneasy compromises are inevitable.

Ms. Stoyanovich is concerned that the city law has loopholes that may weaken it. “But it’s much better than not having a law,” she said. “And until you try to regulate, you won’t learn how.”

The law applies to companies with workers in New York City, but labor experts expect it to influence practices nationally. At least four states — California, New Jersey, New York and Vermont — and the District of Columbia are also working on laws to regulate A.I. in hiring. And Illinois and Maryland have enacted laws limiting the use of specific A.I. technologies, often for workplace surveillance and the screening of job candidates.

The New York City law emerged from a clash of sharply conflicting viewpoints. The City Council passed it during the final days of the administration of Mayor Bill de Blasio. Rounds of hearings and public comments, totaling more than 100,000 words, came later, overseen by the city’s Department of Consumer and Worker Protection, the rule-making agency.

The result, some critics say, is overly sympathetic to business interests.

“What could have been a landmark law was watered down to lose effectiveness,” said Alexandra Givens, president of the Center for Democracy & Technology, a policy and civil rights organization.

That’s because the law defines an “automated employment decision tool” as technology used “to substantially assist or replace discretionary decision making,” she said. The rules adopted by the city appear to interpret that phrasing narrowly so that A.I. software will require an audit only if it is the lone or primary factor in a hiring decision or is used to overrule a human, Ms. Givens said.

That leaves out the main way the automated software is used, she said, with a hiring manager invariably making the final choice. The potential for A.I.-driven discrimination, she said, typically comes in screening hundreds or thousands of candidates down to a handful or in targeted online recruiting to generate a pool of candidates.

Ms. Givens also criticized the law for limiting the kinds of groups measured for unfair treatment. It covers bias by sex, race and ethnicity, but not discrimination against older workers or those with disabilities.

“My biggest concern is that this becomes the template nationally when we should be asking much more of our policymakers,” Ms. Givens said.

The law was narrowed to make it focused and enforceable, city officials said. The Council and the worker protection agency heard from many voices, including public-interest activists and software companies, and sought to weigh the trade-offs between innovation and potential harm.

“This is a significant regulatory success toward ensuring that A.I. technology is used ethically and responsibly,” said Robert Holden, who was the chair of the Council committee on technology when the law was passed and remains a committee member.

New York City is trying to address new technology in the context of federal workplace laws whose guidelines on hiring date to the 1970s. The main Equal Employment Opportunity Commission rule states that no practice or method of selection used by employers should have a “disparate impact” on a legally protected group like women or minorities.

Businesses have criticized the law. In a filing this year, the Software Alliance, a trade group that includes Microsoft, SAP and Workday, said the requirement for independent audits of A.I. was “not feasible” because “the auditing landscape is nascent,” lacking standards and professional oversight bodies.

But a nascent field is a market opportunity. The A.I. audit business, experts say, is only going to grow. It is already attracting law firms, consultants and start-ups.

Companies that sell A.I. software to assist in hiring and promotion decisions have generally come to embrace regulation. Some have already undergone outside audits. They see the requirement as a potential competitive advantage, providing proof that their technology expands the pool of job candidates for companies and increases opportunity for workers.

“We believe we can meet the law and show what good A.I. looks like,” said Roy Wang, general counsel of Eightfold AI, a Silicon Valley start-up that produces software used to assist hiring managers.

The New York City law also takes an approach to regulating A.I. that may become the norm. The law’s key measurement is an “impact ratio,” or a calculation of the effect of using the software on a protected group of job candidates. It does not delve into how an algorithm makes decisions, a concept known as “explainability.”
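The calculation itself is simple: a group’s selection rate is divided by the selection rate of the most-selected group. Below is a minimal sketch in Python of how an auditor might compute it. The candidate counts are hypothetical, and the 0.8 comparison threshold is drawn from the federal “four-fifths” guideline rather than from the city’s rules.

```python
# Minimal sketch of an impact-ratio calculation for a hiring-tool audit.
# The candidate counts below are hypothetical; the 0.8 threshold comes
# from the EEOC's "four-fifths" guideline on disparate impact.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants that the tool selected."""
    return selected / applicants

def impact_ratio(group_rate: float, reference_rate: float) -> float:
    """A group's selection rate divided by the most-selected group's rate."""
    return group_rate / reference_rate

# Hypothetical screening results, broken out by sex category.
groups = {
    "male":   selection_rate(selected=200, applicants=1000),  # 0.20
    "female": selection_rate(selected=140, applicants=1000),  # 0.14
}

reference = max(groups.values())  # rate of the most-selected group

for name, rate in groups.items():
    ratio = impact_ratio(rate, reference)
    flag = "below 0.8 threshold" if ratio < 0.8 else "ok"
    print(f"{name}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

Because the measure compares only selection outcomes, an audit built on it requires no access to the model’s internals.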

In life-affecting applications like hiring, critics say, people have a right to an explanation of how a decision was made. But A.I. systems like ChatGPT-style software are becoming more complex, perhaps putting the goal of explainable A.I. out of reach, some experts say.

“The focus becomes the output of the algorithm, not the working of the algorithm,” said Ashley Casovan, executive director of the Responsible AI Institute, which is developing certifications for the safe use of A.I. applications in the workplace, health care and finance.
