Thursday, December 5, 2024

Disability bias in ChatGPT-based resume screening

AI systems may harbour inherent biases that can disadvantage candidates with disabilities

  • UW research revealed ChatGPT and other AI tools consistently rank resumes with disability-related honours and credentials lower than their counterparts without such information.
  • As these technologies become more pervasive, they must be designed and implemented with a keen awareness of their impact on marginalised groups, including individuals with disabilities.

In today’s increasingly technology-driven job market, the use of artificial intelligence (AI) tools in the hiring process has become a common practice.

However, as research from the University of Washington (UW) has revealed, these AI systems may harbour inherent biases that disadvantage candidates with disabilities.

Addressing this issue is not only a moral imperative but also a necessary step toward creating a more inclusive and equitable job market.

The study, conducted by UW researchers led by doctoral student Kate Glazko at the UW’s Paul G. Allen School of Computer Science & Engineering, found that ChatGPT and other AI tools consistently ranked resumes with disability-related honours and credentials lower than otherwise identical resumes without that information.

When prompted to explain its reasoning, the system revealed biased perceptions and harmful stereotypes regarding individuals with disabilities.

“Ranking resumes with AI is starting to proliferate, yet there’s not much research behind whether it’s safe and effective,” Glazko said.

She said, however, that some of GPT’s descriptions coloured a person’s entire resume based on their disability, claiming that involvement with diversity, equity and inclusion (DEI) or disability potentially took away from other parts of the resume.
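
The experiment is, at its core, a pairwise comparison: the model is shown two versions of the same resume, one with disability-related items and one without, and asked which candidate it prefers. Below is a minimal sketch of that setup using the OpenAI Python SDK; the model name, prompt wording and resume placeholders are illustrative assumptions, not the researchers’ actual materials.

```python
# Minimal sketch of a pairwise resume comparison of the kind the study
# describes. The model name, prompt wording and resume texts are
# illustrative assumptions, not the researchers' actual materials.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

control_cv = "...full resume text..."
enhanced_cv = "...same resume plus a disability-related leadership award..."

prompt = (
    "Rank these two candidates for the role and explain your reasoning.\n\n"
    f"Candidate A:\n{control_cv}\n\nCandidate B:\n{enhanced_cv}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative stand-in; the study used GPT-4
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```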

“In a fair world, the enhanced resume should be ranked first every time,” said senior author Jennifer Mankoff, a UW professor in the Allen School.

“I can’t think of a job where somebody who’s been recognised for their leadership skills, for example, shouldn’t be ranked ahead of someone with the same background who hasn’t.”

These findings underscore the urgent need for a deeper examination of the potential pitfalls of AI-powered resume screening. As these technologies become more pervasive, they must be designed and implemented with a keen awareness of their impact on marginalised groups, including individuals with disabilities.

Cognizant of the inherent biases in large language models like GPT-4, the researchers explored strategies for cultivating more inclusive and equitable AI systems.

The study also highlights a commendable effort to address ableist biases within the GPT-4 model. By leveraging the GPTs Editor tool, the researchers were able to customise the chatbot’s responses to align with disability justice and DEI principles.

The approach, which did not require complex coding, demonstrates the potential for accessible and impactful interventions in the field of AI development.
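
For readers who prefer code to the GPTs Editor interface, the same idea can be approximated with a system prompt through the OpenAI Python SDK. The sketch below is a plausible illustration only; the instruction wording is invented for this example and is not the researchers’ actual text.

```python
# Minimal sketch of steering the model with written instructions, analogous
# to what the researchers did through the GPTs Editor. The instruction text
# below is illustrative, not the study's actual wording.
from openai import OpenAI

client = OpenAI()

FAIRNESS_INSTRUCTIONS = (
    "Follow disability justice and DEI principles. Do not treat "
    "disability-related awards, advocacy or accommodations as negatives; "
    "judge candidates only on job-relevant qualifications."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative stand-in for the customised GPT-4
    messages=[
        {"role": "system", "content": FAIRNESS_INSTRUCTIONS},
        {"role": "user", "content": "Rank these two candidates: ..."},
    ],
)
print(response.choices[0].message.content)
```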

The experimental results are encouraging, as the enhanced chatbot ranked the modified CVs higher than the control CV in a majority of the trials.

“People need to be aware of the system’s biases when using AI for these real-world tasks. Otherwise, a recruiter using ChatGPT can’t make these corrections, or be aware that, even with instructions, bias can persist,” Glazko said.

Encouragingly, the UW researchers’ efforts to customise the AI tool with written instructions directing it to avoid ableist biases yielded promising results. While the system did not eliminate bias for all of the disabilities tested, it ranked resumes with five of the six implied disabilities higher than before.
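
Results such as “a majority of the trials” come from simply repeating the comparison and counting outcomes. A minimal sketch, assuming a hypothetical rank_pair() helper that wraps a comparison like the one shown earlier:

```python
# Minimal sketch of tallying repeated trials. rank_pair() is a hypothetical
# helper that runs one pairwise comparison and returns "enhanced" or
# "control" depending on which resume the model ranked first.
def evaluate(rank_pair, control_cv, enhanced_cv, n_trials=10):
    wins = sum(
        1 for _ in range(n_trials)
        if rank_pair(control_cv, enhanced_cv) == "enhanced"
    )
    print(f"Enhanced resume ranked first in {wins} of {n_trials} trials")
    return wins
```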

“It is so important that we study and document these biases,” Mankoff said.

“We’ve learned a lot from and will hopefully contribute back to a larger conversation — not only regarding disability but also other minoritised identities — around making sure technology is implemented and deployed in ways that are equitable and fair.”
