After several delays, a New York City law requiring companies to vet their automated employee hiring or promotion tools went into effect Wednesday in an attempt to thwart biases baked into software used by HR offices.
New York City Local Law 144, also known as the Bias Audit Law, requires hiring organisations to inform job applicants when algorithms are used to automate the process and to have a third party audit the software for bias.
Some experts believe the law governing the use of artificial intelligence (AI) in hiring could become a blueprint for reforms across the country.
The Bias Audit Law covers any automated hiring or employee-assessment algorithm that generates a prediction, including machine learning, statistical modeling, data analytics, or AI. Covered algorithms are those used to assess a candidate’s fitness or likelihood of success, or to generate a classification of that person.
Companies that don’t comply with the law face penalties of $375 for a first violation, $1,350 for a second violation, and $1,500 for a third or subsequent violation. Each day an automated employment decision tool is used in violation of the law counts as a separate violation.
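Because each day of noncompliant use counts as its own violation, fines escalate quickly. The sketch below illustrates that arithmetic; the dollar figures come from the law, but the accumulation logic is a plain reading of it, not legal guidance:

```python
# Hypothetical illustration of how Local Law 144 penalties could accumulate.
# The fine schedule ($375 first, $1,350 second, $1,500 thereafter) is stated
# in the law; treating each day of noncompliant use as a separate violation
# means the totals compound daily.

def fine_for(violation_number: int) -> int:
    """Return the fine for the nth violation (1-indexed)."""
    if violation_number == 1:
        return 375
    if violation_number == 2:
        return 1350
    return 1500

def total_penalty(days_in_violation: int) -> int:
    """Each day a noncompliant tool is used counts as its own violation."""
    return sum(fine_for(n) for n in range(1, days_in_violation + 1))
```

Under this reading, a single noncompliant tool left running for a month would already accrue tens of thousands of dollars in fines.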
While New York City’s is the broadest law governing automated hiring tools to take effect, several states, including California, Illinois, Maryland, and Washington, have enacted or are considering rules on using AI for talent acquisition.
The European Union’s AI Act is also aimed at addressing issues surrounding automated hiring software. The text of the act was passed in June and is currently being finalized into a proposal that can be voted into law.
Organisations use automation in hiring because weeding through candidates manually can take weeks, if not months. Simply scheduling next-phase interviews can take days to get on the books — not to mention delays caused by rescheduling.
A hiring manager also may not have enough time to fully prepare for an interview. Hiring algorithms can cull the field of candidates quickly based on experience, skills, and other metrics to produce a smaller, more manageable and (theoretically) better suited list of candidates.
Knowledge workers, in particular, can be difficult to sift through because of the breadth of experience and skill sets their roles require.
The requirements contained in New York Local Law 144 could also easily bleed over into enterprise resource planning (ERP) applications and workforce planning in general, according to Cliff Jurkiewicz, vice president of Global Strategy at Phenom, an AI-enabled hiring platform provider.
For example, ERP applications have workforce management components that can play into how people are hired and trained and what competencies and skills are needed.
“All those things AI will affect. The reality is the reach of AI is going to make the extensibility of that law — which I predict will happen — much deeper than today. Work is not just recruiting and hiring someone. It goes well beyond that,” Jurkiewicz said.
For enterprises and organisations that have already embedded civil rights laws into their culture and business practices, New York’s new law is unlikely to be a problem. For those that haven’t, it could be a challenge.
“For example, in terms of Local Law 144, we were already compliant two years ago, as were other companies in our domain. Our domain is very well prepared for this,” Jurkiewicz said. “But if you look at some of the other domains — probably not.”
Will Rose is CTO of Talent Select AI, a company that sells software to measure personality traits and competencies of job candidates through the words they use in recorded or video job interviews. The software focuses on less traditional candidate traits such as openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism.
“Generally speaking,” Rose said, he and his company “embrace the new regulations.”
Rose believes it’s right that a candidate understand an AI algorithm is being used in the hiring process, and said his company is already using third-party audits to ensure its software isn’t biased.
“Candidates should have the ability to know what data is being collected and how it is being used,” Rose said. “For us, it’s pretty straightforward…. We do believe transparency should be a priority.
“I believe the law should have put more emphasis on requiring certain levels of ‘explainability’ in the AI systems that are used to make hiring decisions," Rose continued.
"The law is rightly concerned with the potential impact to protected groups of job candidates, but as AI systems continue to become increasingly complex in nature, there should be some accountability that the AI technology developers or vendors are able to explain how their automated hiring decisions are made.”
Implicit biases have been found in AI-based tools such as ChatGPT. Sayash Kapoor, a Princeton University PhD candidate, tested ChatGPT and found gender biases even when a person’s gender is not explicitly stated but is apparently inferred from other cues, such as pronouns.
Kapoor, who is co-authoring a book on AI problems with Arvind Narayanan, a Princeton University engineering professor, said in an email response to Computerworld that software like ChatGPT is three times more likely to use gender stereotypes when answering questions.
That was discovered by swapping the pronouns "he" and "she" and studying the results.
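The pronoun-swap probe described above can be sketched in a few lines. This is a simplified illustration, not Kapoor's actual methodology: the `model` argument is a stand-in for a real LLM call, and the pronoun table is incomplete (English maps "her" to both "his" and "him", so a rigorous probe would need part-of-speech tagging):

```python
import re

# Minimal sketch of a pronoun-swap bias probe: swap gendered pronouns in
# otherwise identical prompts and check whether the model's answer changes.
# The mapping is a simplification; "her" is ambiguous in English.

SWAP = {"he": "she", "she": "he", "his": "her", "her": "his",
        "him": "her", "himself": "herself", "herself": "himself"}

def swap_pronouns(text: str) -> str:
    """Replace each gendered pronoun with its counterpart, preserving case."""
    def repl(match):
        word = match.group(0)
        swapped = SWAP[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    pattern = r"\b(" + "|".join(SWAP) + r")\b"
    return re.sub(pattern, repl, text, flags=re.IGNORECASE)

def differs_under_swap(model, prompt: str) -> bool:
    """True if the model's answer changes when only pronouns are swapped."""
    return model(prompt) != model(swap_pronouns(prompt))
```

A model that returns different answers for the swapped prompts is, by this measure, treating gender as a signal even though nothing else in the input changed.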
AI biases are not typically the result of developers intentionally programming their models to favor one gender or ethnicity, “but ultimately, the responsibility for fixing these biases rests with the developers, because they’re the ones releasing and profiting from AI models,” Kapoor said.
Companies offering AI-based recruitment software include Paradox, HireVue, iCIMS, Textio, Phenom, Jobvite, XOR.ai, Upwork, Bullhorn, and Eightfold AI.
For example, HireVue’s service includes a chatbot that can hold text-based conversations with job seekers to guide them to positions that best fit their skills. Phenom’s deep-learning algorithm chatbot sends tailored job recommendations and content based on skills, position fit, location, and experience to candidates so employers can “find and choose you faster.” Not only does it screen applicants, but it can schedule job interviews.
AI-based talent management software provider Beamery built a talent-acquisition chatbot based on GPT-4 and other large language models (LLMs) earlier this year. The chatbot aims to assist hiring managers, recruiters, candidates, and employees in talent acquisition and job searches. The company claims its AI automates rules compliance and mitigates bias risks associated with LLMs — the algorithms behind chatbots.
AI talent acquisition software uses numerical grades based on a candidate’s background, skills, and video interview to deliver an overall competency-based score and rankings that can be used in employer decision-making.
Phenom’s Jurkiewicz said because New York City is the default center of the commerce universe, Local Law 144 will have an impact far beyond the city’s borders. Though he doesn’t believe New York’s statute will spur other municipal rule-making efforts, companies outside New York will likely comply because so many do business with others in the city.
States, however, are likely to expand their regulatory oversight of automated recruiting, hiring, and retention tools, much as California mimicked Europe’s GDPR consumer protection law with the California Consumer Privacy Act (CCPA).
“Depending on their political climate, each state may look at it,” Jurkiewicz said. “They’re likely to take a wait-and-see approach and see how this plays out over the next year.”
The Biden Administration has also expressed interest in regulating AI hiring tools in the US. Keith Sonderling, commissioner on the Equal Employment Opportunity Commission (EEOC), has said he’s “committed to ensuring that AI helps eliminate rather than exacerbate discrimination in the workplace.”
In 2021, EEOC Chair Charlotte Burrows also announced an initiative to ensure AI-based hiring tools adhere to federal civil rights laws.
“We agree with that,” Jurkiewicz said.
Smaller organisations might struggle with audits of their automated hiring tools because they don’t have experts on hand to determine how their algorithm arrives at a score, classification, or recommendation for a job candidate.
AI-based hiring tools make recommendations to hiring managers. So, for example, an automated interview scheduling system would recommend one candidate over another based on data in the system that shows a candidate meets job criteria to a higher degree than another. That AI score typically shows up as a percentage — the higher the percentage, the better the fit for an open position.
Typically, Jurkiewicz said, any score above 90% is considered a good target for accuracy in a candidate’s fit for a job. Anything below the mid-80s could indicate a bad fit or a bias, or it could mean there’s a lack of data about the candidate.
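Those thresholds can be turned into a toy decision rule. This is a hypothetical sketch built only from Jurkiewicz's numbers; the `data_completeness` input is an assumed measure, added to show why a low score on its own is ambiguous:

```python
# Rough sketch of the score interpretation described above. The 90% and
# mid-80s thresholds come from Jurkiewicz's comments; the data-completeness
# check is a hypothetical addition illustrating why low scores are ambiguous.

def interpret_fit_score(score: float, data_completeness: float) -> str:
    """Classify a candidate-fit percentage (0-100).

    data_completeness is an assumed 0-1 measure of how much candidate
    data the system actually had to work with.
    """
    if score > 90:
        return "strong fit"
    if score >= 85:
        return "borderline; review manually"
    # Below the mid-80s the score alone is ambiguous: it may reflect a poor
    # fit, a biased model, or simply too little data about the candidate.
    if data_completeness < 0.5:
        return "inconclusive: insufficient candidate data"
    return "possible poor fit or bias; audit recommended"
```

The point of the final branch is the one Jurkiewicz makes: without knowing why a score is low, a regulator (or employer) cannot distinguish a bad candidate from bad data or a biased model.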
For reporting purposes, it’s those nuances that smaller organisations will struggle to explain to regulators, Jurkiewicz said. And that means organisations will have to be educated on what determines a good score.
“If you keep hiring [men] over [women], it might demonstrate you’ve got a bias against women. That score is what’s important,” Jurkiewicz said. “It may mean your data is incomplete, bad or it’s not being calculated the right way. So, the scoring system itself needs the most education for business owners.
“As they begin auditing businesses, the biggest problem you’re likely to see is not bad people, but bad data,” he added.