

Typos, Slang Trip Up AI Medical Assessments
  • Posted June 26, 2025


Common human typing errors can trip up artificial intelligence (AI) programs designed to aid health care workers by reviewing health records, a new MIT study says.

Typos and extra white spaces can interfere with AI’s ability to properly analyze patient records, researchers reported this week at an Association for Computing Machinery conference in Athens, Greece. 

Missing gender references or the use of slang also can foul up an AI’s treatment recommendations, researchers point out.

These human mistakes or language choices increased the likelihood that an AI would recommend that a patient self-manage their health problem rather than seek an appointment, results show.

The errors also were more likely to change an AI's treatment recommendations for women, resulting in a higher percentage who were erroneously advised not to seek medical care, researchers add.

“These models are often trained and tested on medical exam questions but then used in tasks that are pretty far from that, like evaluating the severity of a clinical case,” said lead researcher Abinitha Gourabathina. She’s a graduate student with the MIT Department of Electrical Engineering and Computer Science in Cambridge, Mass.

A growing body of research is exploring the ability of AI to provide a second opinion for human doctors, researchers said in background notes. The programs already are being used to help doctors draft clinical notes and triage patient messages.

This study began when Gourabathina ran experiments in which she swapped gender cues in patient notes, then fed them into an AI. She was surprised to find that simple formatting errors caused meaningful changes in AI responses.

To further explore this problem, researchers altered records by swapping or removing gender references, inserting extra white space or typos into patient messages, or adding colorful or uncertain language.

Colorful language might include exclamations like “wow,” or adverbs like “really” or “very,” researchers said. Examples of uncertain language include hedge words like “kind of,” “sort of,” “possibly” or “suppose.”

The patient notes preserved all clinical data, like prescription medications and previous diagnoses, while adding language that more accurately reflects how people type and speak.

“The medical datasets these models are trained on are usually cleaned and structured, and not a very realistic reflection of the patient population,” Gourabathina said. “We wanted to see how these very realistic changes in text could impact downstream use cases.”

The team ran these records past four different AIs, asking whether a patient should manage their symptoms at home, come in for a clinic visit, or get a lab test to better evaluate their condition.

When the AIs were fed the altered or “perturbed” data, they were 7% to 9% more likely to recommend that patients care for themselves, results show.

The use of colorful language like slang or dramatic expressions had the greatest impact, researchers said.

The AI models also made about 7% more errors for female patients and were more likely to recommend that women self-manage at home – even when researchers removed all gender cues from the records.

Follow-up research currently under review found that the same changes didn’t affect the accuracy of human doctors, researchers added.

Researchers plan to continue their work by testing records that better mimic real messages from patients. They also plan to study how AI programs infer gender from clinical tests.

Researchers reported their findings at the meeting, which ends today. Findings presented at medical meetings should be considered preliminary until published in a peer-reviewed journal.

More information

The Cleveland Clinic has more on AI in health care.

SOURCE: MIT, news release, June 23, 2025

HealthDay
Health News is provided as a service to Our Lady of Guadalupe Pharmacy site users by HealthDay. Neither Our Lady of Guadalupe Pharmacy nor its employees, agents, or contractors review, control, or take responsibility for the content of these articles. Please seek medical advice directly from your pharmacist or physician.
Copyright © 2025 HealthDay All Rights Reserved.
