Link: Hospitals use a transcription tool powered by a hallucination-prone OpenAI model
My doctor recently demoed an AI transcription tool that summarizes patient visits. The summary of my appointment was fine, but researchers have raised concerns about how accurate these tools really are.
The tool is built on Whisper, OpenAI's speech-to-text model, which many hospitals use but which sometimes invents content that was never said. ABC News reported that the version provided by Nabla has been used to transcribe about 7 million medical conversations.
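For context, Whisper itself is openly available, and a basic transcription call looks something like the sketch below using the open-source `openai-whisper` Python package. The audio file name is a placeholder, and Nabla's actual pipeline around Whisper is not public, so this is only illustrative of the kind of output clinicians end up relying on:

```python
# Minimal sketch: transcribing a recording with the open-source
# openai-whisper package (pip install openai-whisper).
# "visit_recording.mp3" is a placeholder; Nabla's real pipeline is not public.
import whisper

model = whisper.load_model("base")  # small checkpoint, enough for a demo
result = model.transcribe("visit_recording.mp3")

# The transcript comes back as plain text plus timestamped segments,
# with no flag distinguishing faithful transcription from fabricated text,
# which is part of why hallucinations can slip through unnoticed.
print(result["text"])
for segment in result["segments"]:
    print(f'{segment["start"]:.1f}-{segment["end"]:.1f}s: {segment["text"]}')
```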
A recent study found that Whisper fabricated content in roughly 1% of transcriptions, sometimes inserting violent or nonsensical phrases. The hallucinations clustered around pauses in speech, which is especially worrying for patients with aphasia, a language disorder that often involves longer silences.
The fabricated text ranged from invented medical terms to odd phrases of the kind that close out YouTube videos, which is striking given that OpenAI has reportedly trained its models on vast amounts of YouTube content.
The findings were presented at a major computing conference in Brazil, though it is unclear whether the study has been peer reviewed. Either way, it underscores ongoing concerns about the reliability of AI in sensitive settings like healthcare.
OpenAI spokesperson Taya Christianson said the company takes the issue seriously and is actively working to reduce such errors, and pointed to usage policies that restrict the model's use in high-stakes decision-making contexts. #
--
Yoooo, this is a quick note on a link that made me go, WTF? Find all past links here.