Pakistan, Feb. 10 -- Artificial intelligence (AI) tools are more likely to provide incorrect medical advice when misinformation appears to come from an authoritative source, according to a new study published in The Lancet Digital Health.
Researchers tested 20 open-source and proprietary large language models (LLMs) and found that the models were more easily misled by errors embedded in realistic-looking doctors' discharge summaries than by incorrect claims circulating on social media. The findings highlight growing concerns about the reliability of AI systems increasingly used in healthcare settings.
"Current AI systems can treat confident medical language..."