Safe and equitable AI needs guardrails, from legislation to humans in the loop

Healthcare organizations have sometimes been slow to adopt new artificial intelligence tools and other leading-edge innovations because of valid safety and transparency concerns. But to improve care quality and patient outcomes, healthcare needs those innovations.

It’s imperative, however, that they be applied correctly and ethically. Just because a generative AI application can pass a medical school test doesn’t mean it’s ready to be a practicing physician. Healthcare should use the latest advancements in AI and large language models to put the power of these technologies in the hands of medical experts so they can deliver better, more precise and safer care.

Dr. Tim O’Connell is a practicing radiologist and CEO and cofounder of emtelligent, a developer of AI-powered technology that transforms unstructured medical data into structured data.

We spoke with him to get a better understanding of the importance of guardrails for AI in healthcare as it helps modernize the practice of medicine. We also spoke about how algorithmic discrimination can perpetuate health inequities, legislative action to establish AI safety standards – and why humans in the loop are essential.

Q. What is the importance of guardrails for AI in healthcare as the technology helps modernize the practice of medicine?

A. AI technologies have introduced exciting possibilities for healthcare providers, payers, researchers and patients, offering the potential for better outcomes and lower healthcare costs. However, to realize AI’s full potential, particularly for medical AI, we must ensure healthcare professionals understand both the capabilities and limitations of these technologies.

This includes awareness of risks such as non-determinism, hallucinations and issues with reliably referencing source data. Healthcare professionals must be equipped not only with knowledge of the benefits of AI, but also with the critical understanding of its potential pitfalls, ensuring they can use these tools safely and effectively in diverse clinical settings.
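
To make the source-referencing risk concrete: a common safeguard is a grounding check that refuses to accept any machine-extracted finding that cannot be traced back to a span of the source note. The Python sketch below illustrates the idea; the function name, the extraction format and the simple substring match are illustrative assumptions, not any particular product’s implementation.

```python
# Minimal grounding-check sketch: every extracted finding must point to
# evidence text that actually appears in the source note, or it is
# flagged for human review. The extraction format here is hypothetical.

def grounding_check(note: str, extractions: list[dict]) -> list[dict]:
    """Return the extractions whose evidence cannot be found in the note."""
    flagged = []
    for item in extractions:
        evidence = item.get("evidence", "")
        # Case-insensitive substring match; a real system would use fuzzier
        # span alignment, but the principle is the same.
        if evidence.lower() not in note.lower():
            flagged.append(item)
    return flagged

note = "Patient reports intermittent chest pain. No shortness of breath."
extractions = [
    {"concept": "chest pain", "evidence": "intermittent chest pain"},
    {"concept": "dyspnea", "evidence": "severe shortness of breath"},  # hallucinated
]

for item in grounding_check(note, extractions):
    print(f"Needs review: '{item['concept']}' - evidence not found in source")
```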

It is critical to develop and adhere to a set of thoughtful principles for the safe and ethical use of AI. These principles should include addressing concerns around privacy, security and bias, and they must be rooted in transparency, accountability and fairness.

Reducing bias requires training AI systems on more diverse datasets that account for historical disparities in diagnoses and health outcomes while also shifting training priorities to ensure AI systems are aligned with real-world healthcare needs.

This focus on diversity, transparency and robust oversight, including the development of guardrails, ensures AI can be a highly effective tool that remains resilient against errors and helps drive meaningful improvements in healthcare outcomes.

This is where guardrails – in the form of well-designed regulations, ethical guidelines and operational safeguards – become critical. These protections help ensure that AI tools are used responsibly and effectively, addressing concerns around patient safety, data privacy and algorithmic bias.

They also provide mechanisms for accountability, ensuring any errors or unintended consequences from AI systems can be traced back to specific decision points and corrected. In this context, guardrails act as both protective measures and enablers, allowing healthcare professionals to trust AI systems while safeguarding against their potential risks.

Q. How can algorithmic discrimination perpetuate health inequities, and what can be done to resolve this problem?

A. If the AI systems we rely on in healthcare are not developed and trained properly, there is a very real risk of algorithmic discrimination. AI models trained on datasets that are not large or diverse enough to represent the full spectrum of patient populations and clinical characteristics can and do produce biased results.

This means the AI could deliver less accurate or less effective care recommendations for underserved populations, including racial or ethnic minorities, women, individuals from lower socio-economic backgrounds, and individuals with very rare or uncommon conditions.

For example, if a medical language model is trained primarily on data from a specific demographic, it might struggle to accurately extract relevant information from clinical notes that reflect different medical conditions or cultural contexts. This could lead to missed diagnoses, misinterpretations of patient symptoms, or ineffective treatment recommendations for populations the model was not trained to recognize adequately.

In effect, the AI system might perpetuate the very inequities it is meant to alleviate, especially for racial minorities, women, and patients from lower socio-economic backgrounds who are often already underserved by traditional health systems.

To address this problem, it’s crucial to ensure AI systems are built on large, highly varied datasets that capture a wide range of patient demographics, clinical presentations and health outcomes. The data used to train these models must be representative of different races, ethnicities, genders, ages and socio-economic statuses to avoid skewing the system’s outputs toward a narrow view of healthcare.

This diversity enables models to perform accurately across diverse populations and clinical scenarios, minimizing the risk of perpetuating bias and ensuring AI is safe and effective for all.
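
One practical way to detect this kind of skew before deployment is a subgroup audit: measure the model’s accuracy separately for each patient group and flag gaps above a chosen tolerance. The Python sketch below illustrates the idea with synthetic records; the group labels, the data and the 10% threshold are placeholder assumptions, not a prescribed standard.

```python
# Illustrative subgroup audit: compare a model's accuracy across patient
# groups and flag disparities above a chosen threshold. All data and the
# threshold below are synthetic placeholders.

from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (group, prediction, label) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, pred, label in records:
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 0, 1), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 1),
]

acc = subgroup_accuracy(records)
gap = max(acc.values()) - min(acc.values())
print(acc)
if gap > 0.10:  # arbitrary audit tolerance for this sketch
    print(f"Accuracy gap of {gap:.0%} across groups - investigate training data")
```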

Q. Why are humans in the loop essential to AI in healthcare?

A. While AI can process vast amounts of data and generate insights at speeds that far surpass human capabilities, it lacks the nuanced understanding of complex medical concepts that are integral to delivering high-quality care. Humans in the loop are essential to AI in a healthcare context because they provide the clinical expertise, oversight and context necessary to ensure algorithms perform accurately, safely and ethically.

Consider one use case: the extraction of structured data from clinical notes, lab reports and other healthcare documents. Without human clinicians guiding development, training and ongoing validation, AI models risk missing important information or misinterpreting medical jargon, abbreviations or context-specific nuances in clinical language.

For example, a system might incorrectly flag a symptom as significant or overlook critical information embedded in a physician’s note. Human experts can help fine-tune these models, ensuring they correctly capture and interpret complex medical language.
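
In deployed systems, this kind of oversight is often wired in as confidence-based routing: extractions the model is highly confident about pass through, while low-confidence ones are queued for clinician review, and reviewers’ corrections feed back into fine-tuning. The sketch below is a minimal illustration of that pattern; the 0.90 threshold and the field names are assumptions, not a description of any specific vendor’s pipeline.

```python
# Human-in-the-loop routing sketch with hypothetical names: extractions
# below a confidence threshold go to a clinician review queue instead of
# being written directly to the record.

REVIEW_THRESHOLD = 0.90  # assumption; tuned per deployment in practice

def route(extraction: dict) -> str:
    """Return 'auto_accept' or 'clinician_review' for one extraction."""
    if extraction["confidence"] >= REVIEW_THRESHOLD:
        return "auto_accept"
    return "clinician_review"

review_queue = []
for item in [
    {"concept": "type 2 diabetes", "confidence": 0.97},
    {"concept": "possible TIA", "confidence": 0.62},
]:
    destination = route(item)
    if destination == "clinician_review":
        review_queue.append(item)
    print(f"{item['concept']}: {destination}")

# Clinician decisions on queued items can then be used as training signal,
# closing the loop between human expertise and model behavior.
```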

From a workflow perspective, humans in the loop can help interpret and act on AI-driven insights. Even when AI systems generate accurate predictions, healthcare decisions often require a level of personalization only clinicians can provide.

Human experts can combine AI outputs with their clinical experience, knowledge of the patient’s unique circumstances and understanding of broader healthcare trends to make informed, compassionate decisions.

Q. What is the status of legislative action to establish AI safety standards in healthcare, and what needs to be done by lawmakers?

A. Legislation to establish AI safety standards in healthcare is still in its early stages, though there is increasing recognition of the need for comprehensive guidelines and regulations to ensure the safe and ethical use of AI technologies in clinical settings.

Several countries have begun to introduce frameworks for AI regulation, many of them drawing on foundational trustworthy-AI principles – safety, fairness, transparency and accountability – that are beginning to shape these conversations.

In the United States, the Food and Drug Administration has introduced a regulatory framework for AI-based medical devices, particularly software as a medical device (SaMD). The FDA’s proposed framework follows a “total product lifecycle” approach, which aligns with the principles of trustworthy AI by emphasizing continuous monitoring, updates and real-time evaluation of AI performance.

However, while this framework addresses AI-driven devices, it has not yet fully accounted for the challenges posed by non-device AI applications, which deal with complex clinical data.

Last November, the American Medical Association published proposed guidelines for using AI in a manner that is ethical, equitable, responsible and transparent.

In its “Principles for Augmented Intelligence Development, Deployment and Use,” the AMA reinforces its stance that AI enhances human intelligence rather than replaces it and argues it is “important that the physician community help guide development of these tools in a way that best meets both physician and patient needs, and helps define their own organization’s risk tolerance, particularly where AI impacts direct patient care.”

By fostering this collaboration between policymakers, healthcare professionals, AI developers and ethicists, we can craft regulations that promote both patient safety and technological progress. Lawmakers need to strike a balance, creating an environment where AI innovation can thrive while ensuring these technologies meet the highest standards of safety and ethics.

This includes developing regulations that enable agile adaptation to new AI advancements, ensuring AI systems remain flexible, transparent and responsive to the evolving needs of healthcare.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: [email protected]
Healthcare IT News is a HIMSS Media publication
