Chatbot-Generated Drug Prescriptions Can Potentially Cause Harm, Researchers Warn

Artificial intelligence (AI) is steadily creeping into every aspect of the economy and human life. From industry to healthcare, the new-era technology promises a world of virtually limitless possibilities, with AI copilots and chatbots being widely deployed.

While this technology plays a key role in automating healthcare workflows and improving efficiency, researchers have warned patients against relying on AI-powered chatbots and search engines for accurate and safe information about drugs.

According to research findings published in the journal BMJ Quality & Safety on October 11, 2024, large language models can inadvertently generate wrong or potentially harmful responses to patient queries, putting chatbot users looking for medical responses at risk.

The researchers also noted that some of the chatbots' prescription responses were so complex that understanding them required a college-degree reading level, making it difficult for many patients to fully grasp the information.

The Research

With rapid advancements in large language models, and with major search engines now incorporating AI copilots, search engines promise more accurate results, comprehensive answers to queries, and a more intuitive user experience. However, these AI models are not foolproof.

Despite their great potential to handle queries, including those in healthcare, chatbot models are also capable of generating nonsensical responses, spreading disinformation, or producing answers that are potentially harmful to users.

The research, conducted using Bing's AI copilot, evaluated 500 responses — 10 commonly asked patient questions for each of the 50 most frequently prescribed drugs in the United States — for the completeness, readability, and accuracy of the information provided.

Based on the findings, the chatbot often failed to understand the intent behind patients' questions: 42% of the AI-generated responses were judged likely to lead to mild or moderate harm, 22% likely to lead to severe harm or death, and only 36% likely to cause no harm to users.

These findings underscore the need for patients to consult their healthcare professionals, as chatbots may not always generate error-free information regarding critical health-related issues.


Photo credit: Solen Feyissa on Unsplash