AI is rapidly changing the future of healthcare. Its ability to analyze complex medical and research data is helping improve diagnostics, drug development, and our overall understanding of health and disease. But along with its promise come serious ethical questions, particularly around equity, trust, and inclusion. These topics were at the center of the “Code, Context, and Care” symposium, hosted by the Cobb Institute on Sunday, July 20, during the National Medical Association conference. The event brought together experts in medicine, informatics, public health, and education to explore how AI can responsibly support healthcare delivery.
AI presents transformative opportunities in healthcare—from diagnostics to population health—but without thoughtful design, it can also amplify existing inequities. As noted by panelists like Dr. Alison Whelan and Dr. Hassan Tetteh, biased datasets, opaque algorithms, or poorly validated tools can undermine clinical trust, misguide interventions, and further marginalize vulnerable populations.
Industry Insight: Ethical AI in Practice
Dr. Gilles Gnacadja, PhD, a research strategist at Amgen, provided a critical industry perspective on the ethical integration of AI in clinical research and development. He emphasized that for AI to be truly impactful, it must be:
- Transparent in its decision-making processes,
- Accountable to both providers and patients, and
- Equitably designed to reflect diverse populations and reduce care gaps.
From a biopharmaceutical standpoint, Dr. Gnacadja underscored the responsibility of industry leaders to implement AI with clinical validity and ethical guardrails, especially when these tools influence real-world treatment decisions. His remarks were a strong reminder that advanced AI must serve all patients—not just those best represented in training datasets.
For healthcare professionals, the takeaway is clear: our engagement and oversight are essential to ensuring AI enhances care without compromising equity or trust.
Panelists Spotlight
This year’s symposium featured a dynamic roster of panelists and speakers representing diverse expertise and lived experience:
- Dr. Gilles Gnacadja, PhD – Shared Amgen’s innovations in AI-driven drug discovery and its role in accelerating therapeutic development.
- Dr. Melissa Simon, MD, MPH – Spoke on inclusive research practices and Amgen’s focus on equity in clinical trials.
- Dr. Virginia Caine, MD, MPH – Highlighted the importance of AI in community-based public health.
- Dr. Alison Whelan, MD – Emphasized AI’s integration into medical education and workforce development.
- Dr. Ronnie Sebro, MD, PhD – Discussed AI applications in medical imaging and clinical diagnostics.
- Dr. Mallory Williams, MD, MPH – Served as moderator, guiding conversations on innovation and policy.
- Dr. Marshall Chin, MD, MPH – Focused on using AI to advance health equity.
- Dr. Brenda Jamerson, PharmD – Addressed the role of AI in pharmacy education and underserved populations.
- Dr. Hassan Tetteh, MD, MBA – Provided insights from military medicine and large-scale AI implementation.
- Dr. MaCalus Hogan, MD, MBA – Shared advancements in AI-assisted orthopedic care.
Key Themes from the Symposium: What This Means for HCPs
1. Bias Mitigation Starts with Better Data
Many algorithms are trained on flawed or incomplete data that fails to account for the health needs of Black patients. For example, using cost of care as a proxy for health need has led to underdiagnosis and undertreatment. HCPs must scrutinize how models are developed and push for datasets that reflect the true diversity of the populations they serve.
2. AI is Reshaping Clinical Research
AI can speed up trial recruitment and expand access—but only if underrepresented groups are deliberately included. Black patients remain significantly under-enrolled in research studies. HCPs must advocate for equity in trial design, eligibility criteria, and participant outreach to ensure inclusive, data-driven innovation.
3. Medical Education Must Catch Up
AI literacy is essential for the future of medicine. Curricula must go beyond technical training to address how bias shows up in AI tools and how to respond. For Black medical students and faculty, equitable access to these learning opportunities is also critical to leveling the field.
4. AI Requires Clinical Oversight
AI should never override clinical judgment. Providers—especially those serving vulnerable communities—must stay informed about the tools being used, question their outputs, and speak up when something doesn’t align with patient needs or ethical standards.
5. Imaging and Specialty Tools Need Validation
AI tools used in radiology, orthopedics, and OB/GYN must be validated across diverse populations. Studies show that imaging data and diagnostic accuracy often vary by race and skin tone. Without diverse training data, AI can miss critical differences in conditions affecting Black patients.
