Medicine is one of the promising applications of artificial intelligence, where the technology can assist doctors, for example with detecting fractures. Doctors are also using AI to summarize consultations with patients. One such technology is Whisper by OpenAI, a model built for speech recognition and transcription. However, the use of AI in medicine can have negative consequences, The Verge reports.
Nabla uses Whisper in its medical transcription tool, which, according to ABC News, has transcribed more than 7 million medical conversations. The tool is used by more than 30,000 clinicians and researchers, as well as 40 healthcare systems. The problem is that OpenAI's model sometimes invents entire sentences and inserts them into the resulting summaries.
A group of researchers from Cornell University, the University of Washington, and other institutions found that OpenAI's Whisper "hallucinates" in about 1% of transcriptions. During silences in the recordings, the model invents complete sentences, sometimes containing aggressive or nonsensical phrases.
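To illustrate the kind of transcription pipeline at issue, here is a minimal sketch using the open-source whisper package from OpenAI's GitHub repository. This is not Nabla's pipeline; the file name "visit.mp3" is a hypothetical example, and the model choice is arbitrary.

```python
# Minimal sketch: transcribing a recording with the open-source
# "whisper" package (https://github.com/openai/whisper).
# "visit.mp3" is a hypothetical audio file used only for illustration.
import whisper

model = whisper.load_model("base")      # load a small general-purpose model
result = model.transcribe("visit.mp3")  # returns full text plus per-segment data

# Per the researchers' findings, silent stretches can still yield invented
# text, so the output should be reviewed rather than trusted verbatim.
for segment in result["segments"]:
    print(f'[{segment["start"]:.1f}s-{segment["end"]:.1f}s] {segment["text"]}')
```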
Allison Koenecke, one of the researchers from Cornell University, published examples of such hallucinations:

"Harms perpetuating violence involve misrepresentation of a speaker's words that could become part of a formal record (e.g. in a courtroom trial); we present 3 subcategories of examples: physical violence, sexual innuendo, and demographic stereotyping (4/14)" pic.twitter.com/zhsKxI2qNs

— Allison Koenecke (@allisonkoe) June 3, 2024
"We take this issue seriously and are continually working to improve, including reducing hallucinations. For Whisper use on our API platform, our usage policies prohibit use in certain high-stakes decision-making contexts, and our model card for open-source use includes recommendations against use in high-risk domains. We thank researchers for sharing their findings," said OpenAI representative Taya Christianson.