Monday, December 23, 2024

LLMs are highly error-prone when mapping medical codes

All tested LLMs performed poorly on medical code querying, often generating codes conveying imprecise or fabricated information

  • Researchers emphasise the necessity for refinement and validation of these technologies before considering clinical implementation
  • GPT-4 demonstrated the best performance, with the highest exact match rates, but also produced the highest proportion of incorrect codes that still conveyed the intended meaning

Large language models (LLMs) have attracted significant interest in automated clinical coding but early data show that LLMs are highly error-prone when mapping medical codes.

In the study, published in the online issue of NEJM AI, researchers at the Icahn School of Medicine at Mount Sinai emphasised the necessity for refinement and validation of these technologies before considering clinical implementation.

They evaluated the performance and error patterns of models from OpenAI, Google and Meta (GPT-3.5, GPT-4, Gemini Pro, and Llama2-70b Chat) when querying medical billing codes.

The study extracted a list of more than 27,000 unique diagnosis and procedure codes from 12 months of routine care in the Mount Sinai Health System, while excluding identifiable patient data.

The investigation showed limited accuracy (below 50 per cent) in reproducing the original medical codes, highlighting a significant gap in their usefulness for medical coding. GPT-4 demonstrated the best performance, with the highest exact match rates for International Classification of Diseases, 9th edition, Clinical Modification (ICD-9-CM) (45.9 per cent), ICD-10-CM (33.9 per cent), and CPT codes (49.8 per cent).
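The exact-match metric reported above can be illustrated with a minimal sketch. This is not the study's actual pipeline, and the description–code pairs below are hypothetical placeholders used only to show the scoring logic:

```python
# Illustrative sketch: scoring a model's code lookups by exact match
# against the ground-truth code for each text description.

def exact_match_rate(ground_truth: dict[str, str],
                     model_output: dict[str, str]) -> float:
    """Fraction of descriptions whose generated code exactly matches
    the ground-truth code (after trimming whitespace)."""
    hits = sum(
        1 for desc, code in ground_truth.items()
        if model_output.get(desc, "").strip() == code
    )
    return hits / len(ground_truth)

# Hypothetical description -> code pairs for demonstration only.
truth = {
    "essential hypertension": "401.9",
    "type 2 diabetes mellitus": "250.00",
}
generated = {
    "essential hypertension": "401.9",    # exact match
    "type 2 diabetes mellitus": "250.0",  # near miss, counted as wrong
}
print(exact_match_rate(truth, generated))  # 0.5
```

Note that under exact matching a near-miss code counts as fully wrong, which is one reason the study separately examined incorrect codes that still conveyed the right meaning.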

GPT-4 also produced the highest proportion of incorrectly generated codes that still conveyed the correct meaning.

Large number of errors remain

For example, when given the ICD-9-CM description “nodular prostate without urinary obstruction,” GPT-4 generated a code for “nodular prostate,” showcasing its comparatively nuanced understanding of medical terminology.

However, even considering these technically correct codes, an unacceptably large number of errors remained.

The next best-performing model, GPT-3.5, showed the greatest tendency toward vagueness: it had the highest proportion of incorrectly generated codes that were accurate but more general than the precise target codes.

For example, when provided with the ICD-9-CM description “unspecified adverse effect of anaesthesia,” GPT-3.5 generated a code for “other specified adverse effects, not elsewhere classified.”
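This “wrong but more general” error category can be checked mechanically, because ICD-9-CM codes are hierarchical: a shorter code whose digits prefix a longer code is its ancestor category. The sketch below assumes that prefix structure; the codes shown are placeholders, not the study's data:

```python
# Illustrative sketch: flagging a generated ICD-9-CM code that is
# incorrect but strictly more general than the target, using the
# hierarchy's digit-prefix structure (e.g. 600.1 is the parent
# category of 600.10).

def is_more_general(generated: str, target: str) -> bool:
    """True if `generated` is a proper prefix of `target` once the
    dot is removed, i.e. an ancestor in the ICD-9-CM hierarchy."""
    g = generated.replace(".", "")
    t = target.replace(".", "")
    return t.startswith(g) and g != t

print(is_more_general("600.1", "600.10"))  # True: parent category
print(is_more_general("600.10", "600.1"))  # False: more specific
```

A check like this distinguishes a vague-but-related answer from an outright fabrication, mirroring the error taxonomy the researchers describe.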

“Our findings underscore the critical need for rigorous evaluation and refinement before deploying AI technologies in sensitive operational areas like medical coding,” study corresponding author Ali Soroush, MD, MS, Assistant Professor of Data-Driven and Digital Medicine (D3M), and Medicine (Gastroenterology), at Icahn Mount Sinai, said. “While AI holds great potential, it must be approached with caution and ongoing development to ensure its reliability and efficacy in health care.”

Additional refinement needed

One potential application for these models in the healthcare industry, say the investigators, is automating the assignment of medical codes for reimbursement and research purposes based on clinical text.

“Previous studies indicate that newer large language models struggle with numerical tasks. However, the extent of their accuracy in assigning medical codes from clinical text had not been thoroughly investigated across different models,” co-senior author Eyal Klang, MD, Director of the D3M’s Generative AI Research Program, said.

“Therefore, we aimed to assess whether these models could effectively perform the fundamental task of matching a medical code to its corresponding official text description.”

The study authors proposed that integrating LLMs with expert knowledge could automate medical code extraction, potentially enhancing billing accuracy and reducing administrative costs in health care. 

“This study sheds light on the current capabilities and challenges of AI in health care, emphasising the need for careful consideration and additional refinement before widespread adoption,” co-senior author Girish Nadkarni, MD, MPH, Irene and Dr. Arthur M. Fishberg Professor of Medicine at Icahn Mount Sinai, Director of The Charles Bronfman Institute of Personalized Medicine, and System Chief of D3M, said.

The researchers caution that the study’s artificial task may not fully represent real-world scenarios where LLM performance could be worse.

Next, the research team plans to develop tailored LLM tools for accurate medical data extraction and billing code assignment, aiming to improve quality and efficiency in healthcare operations.
