'Behind the Times': Washington Tries to Catch Up With AI's Use in Health Care
Lawmakers and regulators in Washington are starting to puzzle over how to regulate artificial intelligence in health care — and the AI industry thinks there’s a good chance they’ll mess it up.
“It’s an incredibly daunting problem,” said Bob Wachter, the chair of the Department of Medicine at the University of California-San Francisco. “There’s a risk we come in with guns blazing and overregulate.”
Already, AI’s impact on health care is widespread. The Food and Drug Administration has approved some 692 AI products. Algorithms are helping to schedule patients, determine staffing levels in emergency rooms, and even transcribe and summarize clinical visits to save physicians’ time. They’re starting to help radiologists read MRIs and X-rays. Wachter said he sometimes informally consults a version of GPT-4, a large language model from the company OpenAI, for complex cases.
The scope of AI’s impact — and the potential for future changes — means government is already playing catch-up.
“Policymakers are terribly behind the times,” Michael Yang, senior managing partner at OMERS Ventures, a venture capital firm, said in an email. Yang’s peers have made vast investments in the sector. Rock Health, a venture capital firm, says financiers have put nearly $28 billion into digital health firms specializing in artificial intelligence.
One issue regulators are grappling with, Wachter said, is that, unlike drugs, which will have the same chemistry five years from now as they do today, AI changes over time. But governance is forming, with the White House and multiple health-focused agencies developing rules to ensure transparency and privacy. Congress is also showing interest: The Senate Finance Committee held a hearing Feb. 8 on AI in health care.
Along with regulation and legislation comes increased lobbying. CNBC counted a 185% surge in the number of organizations disclosing AI lobbying activities in 2023. The trade group TechNet has launched a $25 million initiative, including TV ad buys, to educate viewers on the benefits of artificial intelligence.
“It is very hard to know how to smartly regulate AI since we are so early in the invention phase of the technology,” Bob Kocher, a partner with venture capital firm Venrock who previously served in the Obama administration, said in an email.
Kocher has spoken to senators about AI regulation. He emphasizes some of the difficulties the health care system will face in adopting the products. Doctors — facing malpractice risks — might be leery of using technology they don’t understand to make clinical decisions.
An analysis of Census Bureau data from January by the consultancy Capital Economics found 6.1% of health care businesses were planning to use AI in the next six months, roughly in the middle of the 14 sectors surveyed.
Like any medical product, AI systems can pose risks to patients, sometimes in a novel way. One example: They may make things up.
Wachter recalled a colleague, as a test, assigning OpenAI’s GPT-3 to write a prior authorization letter to an insurer for a purposefully “wacky” prescription: a blood thinner to treat a patient’s insomnia.
But the AI “wrote a beautiful note,” he said. The system so convincingly cited “recent literature” that Wachter’s colleague briefly wondered whether she’d missed a new line of research. It turned out the chatbot had made it up.
There’s a risk of AI magnifying bias already present in the health care system. Historically, people of color have received less care than white patients. Studies show, for example, that Black patients with fractures are less likely to get pain medication than white ones. This bias could become set in stone when artificial intelligence is trained on that data and acts on what it has learned.
Research into AI deployed by large insurers has confirmed that has happened. But the problem is more widespread. Wachter said UCSF tested a product to predict no-shows for clinical appointments. Patients who are deemed unlikely to show up for a visit are more likely to be double-booked.
The test showed that people of color were more likely not to show. Whether or not the finding was accurate, “the ethical response is to ask, why is that, and is there something you can do,” Wachter said.
Hype aside, those risks will likely continue to grab attention over time. AI experts and FDA officials have emphasized the need for transparent algorithms, monitored over the long term by human beings — regulators and outside researchers. AI products adapt and change as new data is incorporated. And scientists will develop new products.
Policymakers will need to invest in new systems to track AI over time, said University of Chicago Provost Katherine Baicker, who testified at the Finance Committee hearing. “The biggest advance is something we haven’t thought of yet,” she said in an interview.