Release Date: December 9, 2024
BUFFALO, N.Y. — The U.S. Food and Drug Administration says there are more than 900 artificial intelligence (AI)- and machine learning-enabled medical devices on the market in the U.S.
But as is often the case with technological innovation, the regulatory infrastructure for such devices is still forming.
Last month, the FDA Digital Health Advisory Committee held a public meeting in Gaithersburg, Maryland, to discuss total product lifecycle considerations for generative AI-enabled devices.
Among those invited to share their perspectives was Peter L. Elkin, MD, professor and chair of the Department of Bioinformatics in the Jacobs School of Medicine and Biomedical Sciences at the University at Buffalo. He was one of about a dozen academics on the panel.
Elkin’s first presentation to the FDA about AI came in 2019, when he gave a grand rounds talk on premarket approval and postmarket surveillance of AI-enabled devices. He discussed the need to allow these devices to continue learning, and therefore improving, since that ability is one of their strengths.
“I said then that we need to start thinking differently about AI,” says Elkin, also a physician with UBMD Internal Medicine. “AI-enabled devices are no longer just tools; now they are partners in care.”
AI devices are being incorporated into health care at every level in hospitals and outpatient settings, from electronic medical records and diagnostic imaging to precision medicine and robotic surgery. So it made sense that the main focus of the recent meeting was how the FDA should handle premarket approval and postmarket surveillance of generative AI tools.
Elkin noted at the meeting that the FDA should issue guidance on marketing generative AI-enabled devices and should have manufacturers incorporate premarket approval data into model cards. These model cards, Elkin explained, are analogous to the nutrition labels on food packages and would give clinicians the critical information they need about the AI tool.
“To know whether an AI device is relevant to your patient, one needs to know many things,” says Elkin, “including the accuracy of the predictions the device made in testing, the error rate, the populations on which it was trained and any data on how biased the device is in its performance. For example, does it work as well in African Americans as it does in Caucasians?”
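To make the model card idea concrete, here is a minimal sketch of how such a card might be represented as structured data, capturing the kinds of information Elkin lists (accuracy, error rate, training populations, subgroup performance). The field names, the ModelCard class, and the example values are illustrative assumptions only, not an official FDA or manufacturer format.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Hypothetical model card for an AI-enabled medical device.

    Fields mirror the information described in the article (accuracy,
    error rate, training populations, subgroup performance); they are
    assumptions for illustration, not a regulatory specification.
    """
    device_name: str
    intended_use: str                        # the purpose the device was validated for
    training_populations: list[str]          # cohorts the model was trained on
    overall_accuracy: float                  # accuracy observed in premarket testing
    error_rate: float                        # error rate observed in premarket testing
    subgroup_performance: dict[str, float] = field(default_factory=dict)  # accuracy by demographic subgroup

# Example card a clinician might review, analogous to reading a nutrition label
card = ModelCard(
    device_name="Example triage model",                                   # hypothetical device
    intended_use="Flagging suspected sepsis in adult inpatients",
    training_populations=["Adults 18-85, three U.S. academic medical centers"],
    overall_accuracy=0.91,
    error_rate=0.09,
    subgroup_performance={"African American": 0.88, "Caucasian": 0.92},
)
print(card)
```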
He says one of the goals of the meeting was to anticipate ways to prevent potential problems with AI devices, such as the device being used on a population on which it was not trained or being used for a purpose for which it was not validated.
Clinicians are not currently required to inform patients that they are using an AI-enabled device, although some have argued that they should be.
As for how patients might engage with AI, Elkin says it reminds him of the sea change that the internet, and the phenomenon commonly known as “Dr. Google,” brought to health care a couple of decades ago.
“In those days, patients would come into their appointments with material that they had printed off of the internet, some of which might be relevant to them and some of which was not,” says Elkin.
He conceded that some physicians weren’t too happy about that. “But I embraced it,” says Elkin. “Any time a patient is engaged in their care and wants to do the right thing, that helps me take better care of them. But I advise them, ‘Ask me. Don’t worry on your own, because you don’t know if the information that you found applies to you.’”
He plans to tell his patients the same thing when they start sharing information with him generated by AI.
Elkin’s research focuses on improving artificial intelligence and large language models by adding formal semantic reasoning and by improving their ability to perform the mathematics needed for evidence-based medicine. He has created a new large language model pipeline designed to serve as a comprehensive medical resource for students and clinicians.
Ellen Goldbaum
News Content Manager
Medicine
Tel: 716-645-4605
goldbaum@buffalo.edu