Health

Asking AI health questions? Use with caution, researchers say


Two studies found that AI chatbots often give 'problematic' or inaccurate answers to health questions: one found a 49.6% rate of problematic responses, and the other found that AI models failed to replicate a doctor's diagnosis 80% of the time. Researchers urge caution when using AI chatbots for health information.

Researchers have sounded a warning about using AI chatbots for health information, citing two new studies that tested the accuracy of chatbot responses to health questions. One study found that 49.6% of responses from five widely used chatbots were 'problematic', with nearly 20% deemed 'highly problematic'. The chatbots tested were Google's Gemini, DeepSeek, Meta AI, ChatGPT, and Grok. The study's lead author, Nick Tiller, noted that the highly problematic responses could cause harm if followed. A second study tested 21 AI models on 29 clinical vignettes and found they struggled to replicate a doctor's diagnosis, failing 80% of the time. Researchers recommend that users show chatbot outputs to their physicians rather than relying on the chatbot's responses alone.

This content was automatically generated and/or translated by AI. It may contain inaccuracies. Please refer to the original sources for verification.
