Monday, May 6, 2024

LLMs are highly error-prone when mapping medical codes

All tested LLMs performed poorly on medical code querying, often generating codes conveying imprecise or fabricated information

Must Read

  • Researchers emphasise the necessity for refinement and validation of these technologies before considering clinical implementation
  • GPT-4 demonstrated the best performance, with the highest exact match rates, but also produced the highest proportion of incorrectly generated codes that still conveyed the correct meaning

Large language models (LLMs) have attracted significant interest in automated clinical coding but early data show that LLMs are highly error-prone when mapping medical codes.

In the study, published in the online issue of NEJM AI, researchers at the Icahn School of Medicine at Mount Sinai emphasised the necessity for refinement and validation of these technologies before considering clinical implementation.

They evaluated the performance and error patterns of models from OpenAI, Google and Meta, including GPT-3.5, GPT-4, Gemini Pro, and Llama2-70b Chat, when querying medical billing codes.

The study extracted a list of more than 27,000 unique diagnosis and procedure codes from 12 months of routine care in the Mount Sinai Health System, while excluding identifiable patient data.
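In outline, this kind of benchmark prompts a model with a code's official description and checks whether the generated code exactly matches the original. The sketch below shows one way such an exact-match evaluation might be implemented; the query_model helper, prompt wording, and record format are illustrative assumptions, not the study's actual protocol.

```python
# Minimal sketch of an exact-match evaluation for code generation.
# The query_model helper, prompt wording, and record format are illustrative
# assumptions; they do not reproduce the study's actual protocol.

def query_model(prompt: str) -> str:
    # Placeholder for a real LLM API call (OpenAI, Gemini, Llama, etc.).
    raise NotImplementedError

def exact_match_rate(records: list[dict]) -> float:
    """records: [{"description": ..., "code": ...}, ...] from the extracted code list."""
    hits = 0
    for rec in records:
        prompt = (
            "Provide the ICD-9-CM code for this official description: "
            f"{rec['description']}. Answer with the code only."
        )
        if query_model(prompt).strip() == rec["code"]:
            hits += 1
    return hits / len(records)
```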

The investigation showed limited accuracy (below 50 per cent) in reproducing the original medical codes, highlighting a significant gap in their usefulness for medical coding. GPT-4 demonstrated the best performance, with the highest exact match rates for International Classification of Diseases, 9th edition, Clinical Modification (ICD-9-CM) (45.9 per cent), ICD-10-CM (33.9 per cent), and CPT codes (49.8 per cent).

GPT-4 also produced the highest proportion of incorrectly generated codes that still conveyed the correct meaning.

Large number of errors remain

For example, when given the ICD-9-CM description “nodular prostate without urinary obstruction,” GPT-4 generated a code for “nodular prostate,” showcasing its comparatively nuanced understanding of medical terminology.

However, even considering these technically correct codes, an unacceptably large number of errors remained.

The next best-performing model, GPT-3.5, had the greatest tendency toward vagueness: it produced the highest proportion of incorrectly generated codes that were accurate in meaning but more general than the precise codes required.

For example, when provided with the ICD-9-CM description “unspecified adverse effect of anaesthesia,” GPT-3.5 generated a code for “other specified adverse effects, not elsewhere classified.”
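One way to flag the “correct but more general” error pattern seen in these examples is to exploit the hierarchical structure of ICD-9-CM codes, where a parent code is typically a prefix of its more specific children. The heuristic below is a minimal illustration of that idea, not the study’s own error-classification method.

```python
# Illustrative heuristic only: in ICD-9-CM, a more general code is typically a
# prefix of the more specific code (e.g. 600.1 "nodular prostate" is the parent
# of 600.10 "nodular prostate without urinary obstruction").

def is_more_general(generated: str, target: str) -> bool:
    gen, tgt = generated.replace(".", ""), target.replace(".", "")
    return gen != tgt and tgt.startswith(gen)

print(is_more_general("600.1", "600.10"))   # True: vaguer, but consistent
print(is_more_general("600.10", "600.10"))  # False: exact match, not an error
```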

“Our findings underscore the critical need for rigorous evaluation and refinement before deploying AI technologies in sensitive operational areas like medical coding,” study corresponding author Ali Soroush, MD, MS, Assistant Professor of Data-Driven and Digital Medicine (D3M), and Medicine (Gastroenterology), at Icahn Mount Sinai, said. “While AI holds great potential, it must be approached with caution and ongoing development to ensure its reliability and efficacy in health care.”

Additional refinement needed

One potential application for these models in the healthcare industry, say the investigators, is automating the assignment of medical codes for reimbursement and research purposes based on clinical text.

“Previous studies indicate that newer large language models struggle with numerical tasks. However, the extent of their accuracy in assigning medical codes from clinical text had not been thoroughly investigated across different models,” co-senior author Eyal Klang, MD, Director of the D3M’s Generative AI Research Program, said.

“Therefore, we aimed to assess whether these models could effectively perform the fundamental task of matching a medical code to its corresponding official text description.”

The study authors proposed that integrating LLMs with expert knowledge could automate medical code extraction, potentially enhancing billing accuracy and reducing administrative costs in health care. 
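As a rough illustration of what such an integration might look like, the sketch below pairs an LLM’s suggested code with an authoritative code table: fabricated codes are rejected outright, and weak matches between the official description and the clinical text are routed to human review. The function, threshold, and similarity measure are assumptions for illustration, not a method proposed in the study.

```python
import difflib

# Sketch of pairing an LLM suggestion with an authoritative code table.
# official_codes maps code -> official description, assumed to be loaded
# from an authoritative ICD/CPT release (not shown here).

def check_suggestion(code: str, clinical_text: str,
                     official_codes: dict[str, str],
                     threshold: float = 0.6) -> tuple[str, str]:
    if code not in official_codes:
        return ("reject", "code does not exist in the official table")
    similarity = difflib.SequenceMatcher(
        None, clinical_text.lower(), official_codes[code].lower()
    ).ratio()
    if similarity < threshold:
        return ("review", "official description differs from the clinical text")
    return ("accept", official_codes[code])
```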

“This study sheds light on the current capabilities and challenges of AI in health care, emphasising the need for careful consideration and additional refinement before widespread adoption,” co-senior author Girish Nadkarni, MD, MPH, Irene and Dr. Arthur M. Fishberg Professor of Medicine at Icahn Mount Sinai, Director of The Charles Bronfman Institute of Personalized Medicine, and System Chief of D3M, said.

The researchers caution that the study’s artificial task may not fully represent real-world scenarios where LLM performance could be worse.

Next, the research team plans to develop tailored LLM tools for accurate medical data extraction and billing code assignment, aiming to improve quality and efficiency in healthcare operations.

