Health

AI Chatbot Told Users That Herbal Remedies Can Treat Cancer

North America / United States

A new study found that popular AI chatbots frequently produce problematic responses to health and medical questions, including fabricated citations and incorrect answers. Physicians may need to help patients understand the limitations of AI chatbots in providing reliable medical guidance.

A recent study evaluated the responses of five popular AI chatbots to 50 health-related questions across five categories, including cancer, vaccines, and nutrition. Nearly half of the responses were classified as problematic, and 20% were rated highly problematic and potentially harmful. The chatbots often fabricated citations and answered confidently even when wrong. The findings highlight the need for better public education about the limitations of AI chatbots as sources of medical guidance: physicians should explain to patients that these systems are designed to mimic verbal fluency, not to provide accurate medical information. Performance varied by category, with stronger results on vaccine- and cancer-related questions and weaker results on questions about stem cells, nutrition, and athletic performance.

This content was automatically generated and/or translated by AI. It may contain inaccuracies. Please refer to the original sources for verification.

