AMA demands further legislation as AI brings risk of medical misinformation, fraud

The American Medical Association (AMA) has written a series of letters urging legislative safeguards to prevent the misuse of artificial intelligence (AI) in the medical and mental health fields.

AI has been used to promote medical misinformation, spread fraud, and erode confidence in public health services, including through deepfake videos impersonating medical professionals and through chatbots providing misleading or dangerous health advice. 

“We shouldn’t have to make the public detectives to determine whether something’s not a deepfake,” Axios cited AMA CEO John Whyte as saying.

In one case, the scientific journal Nature reported earlier this month that a research team from the University of Gothenburg in Sweden uploaded two fake medical papers describing the fictional disease “bixonimania.” The information about the made-up disease was quickly absorbed and reused by AI systems such as Microsoft Bing’s Copilot, Google’s Gemini, the Perplexity AI answer engine, and OpenAI’s ChatGPT.

“We have always been transparent about the limitations of generative AI and provide in-app prompts to encourage users to double-check information,” a Google spokesperson said about the experiment. “For sensitive matters such as medical advice, Gemini recommends users consult with qualified professionals.”

In another case of AI-driven medical misinformation, Dr. Sanjay Gupta, chief medical correspondent for CNN, had his likeness replicated last year in a deepfake video that purported to sell a cure for Alzheimer’s disease.

“What is so striking to me now is that stuff that shows up in my feed is demonstrably, objectively not true, and yet it is there,” Gupta said on CNN’s Terms of Service podcast, “and it is shared over and over and over again. So nowadays it seems like the currency is clickbait, you know. Putting out things that are demonstrably not true has become very, very normal.”

Gupta also discussed how lifelike the AI deepfakes have become, saying that even other doctors have been fooled by videos featuring him.

In Pennsylvania, a recent lawsuit against Character Technologies Inc. alleges that Character.AI chatbots have claimed to be licensed medical professionals, including psychiatrists.

The lawsuit describes how an investigator created a free account on Character.AI and conducted a discussion with a chatbot named “Emilie,” described on the site as “Doctor of psychiatry. You are her patient.” During the investigator’s conversation, the character claimed to be licensed as a doctor in Pennsylvania, and listed a fictional license number.

“We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional,” said Pennsylvania Governor Josh Shapiro. “My Administration is taking action to protect Pennsylvanians, enforce the law, and make sure new technology is used safely. Pennsylvania will continue leading the way in holding bad actors accountable and setting clear guardrails so people can use new technology responsibly.”

The AMA’s recommendations to Congress included further regulation of AI chatbots and tools, intended to counter the spread of misinformation and mistrust. 

Key issues addressed in the AMA’s letters included requiring increased transparency in chatbots intended for mental health support; setting regulatory boundaries that prevent general-purpose AI chatbots from diagnosing illnesses without FDA approval; discouraging or prohibiting advertisements within AI chatbots; and reinforcing privacy protections around chatbots’ collection of personal details.

“AI-enabled tools may help expand access to mental health resources and support innovation in health care delivery, but they lack consistent safeguards against serious risks, including emotional dependency, misinformation, and inadequate crisis response,” Whyte said. “With thoughtful oversight and accountability, policymakers can support innovation and ensure technologies prioritize patient safety, strengthen public trust, and responsibly complement, not replace, clinical care.”

Legislators seek to address AI healthcare issues

Some regulatory bodies have already begun examining the prospect of AI legislation to address healthcare-related issues.

In February, California State Senator Lena Gonzalez introduced Senate Bill 1146, sponsored by the California Medical Association (CMA), which would establish clear prohibitions and penalties against those who advertise health products without disclosing the use of AI deepfakes.

“The physician-patient relationship is built on a foundation of trust. When bad actors use AI to steal a doctor’s identity to sell and market to vulnerable patients, they are not just committing fraud – they are putting lives at risk,” said CMA President René Bravo, M.D. “Patients should not have to question whether the medical advice they receive is coming from a real doctor or a fake AI version. SB 1146 is a necessary step to restore integrity to health information online and hold scammers accountable.”

According to the National Conference of State Legislatures, lawmakers in 43 states have introduced 263 bills related to AI in healthcare, of which only 17 have been enacted.
