ARISE

The ARISE Podcast

Long-form conversations on healthcare AI, including ARISE episodes, Computational Medicine colloquia, NEJM AI podcasts, and selected interviews and guest appearances.

Research·April 21

Doctronic’s Autonomous AI with Dr. Byron Crowe

Doctronic CMO Dr. Byron Crowe describes how administrative complexity can interfere with timely, effective treatment, and how AI may help address those challenges. Crowe discusses Doctronic’s use of autonomous AI to renew prescriptions, arguing that this application can streamline care while maintaining clinical oversight. For physicians, this shift raises important questions about workflow, responsibility, and patient engagement. Crowe emphasizes that the goal is not automation for its own sake, but more reliable and accessible care. As these tools evolve, their impact will depend on how thoughtfully they are integrated into clinical practice. Download the transcript and subscribe: https://www.podbean.com/eau/pb-egua8-1a98f76 #artificialintelligence #aiinmedicine

Research·April 8

Colloquia Panel - Best Practices for Secondary Clinical Data Use, Sharing, or Accessibility

Bio (Dr. Fries): Jason Fries’ research focuses on training and evaluating foundation models for healthcare, positioned at the intersection of computer science, medical informatics, and hospital systems. His work explores the use of electronic health record (EHR) data to contextualize human health, leveraging longitudinal patient information to inform model development and evaluation. His research has been published in venues such as NeurIPS, ICLR, AAAI, Nature Communications, Nature Medicine, and npj Digital Medicine.

Bio (Dr. Hernandez-Boussard): Dr. Hernandez-Boussard is the Associate Dean of Research and Professor of Medicine (Biomedical Informatics) at Stanford University. Her work is at the intersection of informatics and population health, promoting responsible AI across populations. She utilizes diverse, multimodal data to develop rigorous criteria and guidelines that steer the development of responsible AI, aiming to bridge gaps in health care and enhance patient outcomes. Dr. Hernandez-Boussard advocates for practices that ensure the benefits of digital technologies are realized across all segments of society.

Bio (Dr. Langlotz): Dr. Langlotz is a Professor of Radiology, Medicine, and Biomedical Data Science, Senior Associate Vice Provost for Research, and a Senior Fellow at the Institute for Human-Centered Artificial Intelligence at Stanford University. Dr. Langlotz’s NIH-funded laboratory develops machine learning methods to eliminate diagnostic errors and detect disease early. He also serves as Director of the Center for Artificial Intelligence in Medicine and Imaging (AIMI Center), which supports over 250 faculty at Stanford who conduct interdisciplinary machine learning research to improve clinical care. He has led many national and international efforts to improve medical imaging, including the RadLex standard terminology system and the Medical Imaging and Data Resource Center (MIDRC), a U.S. national imaging research resource.

Research·March 30

Jonathan Chen on AI in Medicine: Promise, Pitfalls, and Practice

In this episode of “The Future of Medicine”, Euan Ashley sits down with Jonathan Chen, MD, PhD, clinician, AI researcher, and Associate Professor at Stanford, whose work focuses on combining human and artificial intelligence to improve clinical decision-making. Dr. Chen reflects on the rapid rise of AI in medicine, and the moment he realized everything had changed. He also walks through surprising findings from his research, including studies showing that AI alone can sometimes outperform doctors using AI tools. He explains why this happens, from human bias and “automation errors” to the ways AI systems are designed to agree with users, even when they’re wrong. Looking ahead, Dr. Chen shares his perspective on the future of AI in medicine, including the risks of overreliance, the importance of clinical judgment, and how these tools could transform everything from medical education to patient care. He also explores the concept of “do no harm” in AI systems—and why safety and accuracy are not the same thing.

Research·March 20

AI’s Next Frontier with Dr. Kyunghyun Cho

Dr. Kyunghyun Cho is a leading AI researcher best known for co-authoring a landmark 2014 paper that introduced neural machine translation. In this episode, he discusses his wide-ranging career spanning fundamental AI research, co-founding Prescient Design (acquired by Genentech), and driving applications of AI in health care. For clinicians, Cho’s core message is pragmatic: AI should help health care run better. After years of work at NYU Langone, he reframed AI in medicine from solving rare diagnostic puzzles to improving operational prediction at scale. Cho emphasizes purpose‑built data, careful fine‑tuning, and regulatory accountability. His perspective connects technical rigor with system stewardship—and insists that patient voices must be present in AI governance. Read the transcript and subscribe: https://www.podbean.com/eau/pb-mncbu-1a742b8

Research·March 17

Holistic Evaluation of Large Language Models for Medical Tasks with MedHELM

Suhana Bedi is a third-year PhD student in Biomedical Data Science at Stanford University, advised by Nigam Shah and Sanmi Koyejo. She develops trustworthy foundation models for healthcare, focusing on rigorous evaluation, uncertainty-aware deployment, and reliable integration of multimodal AI systems into real clinical workflows. Suhana is a lead author of MedHELM, the first comprehensive framework for real-world medical LLM evaluation, and her work has appeared in Nature Medicine, JAMA, ICLR, and NeurIPS. She has held research roles at Microsoft Health Futures and Google.

Abstract: While large language models (LLMs) achieve near-perfect scores on medical licensing exams, these evaluations inadequately reflect the complexity and diversity of real-world clinical practice. Here we introduce MedHELM, an extensible evaluation framework with three contributions. First, a clinician-validated taxonomy organizing medical AI applications into five categories that mirror real clinical tasks—clinical decision support (diagnostic decisions, treatment planning), clinical note generation (visit documentation, procedure reports), patient communication (education materials, care instructions), medical research (literature analysis, clinical data analysis), and administration (scheduling, workflow coordination). These encompass 22 subcategories and 121 specific tasks reflecting daily medical practice. Second, a comprehensive benchmark suite of 37 evaluations covering all subcategories. Third, a systematic comparison of nine frontier LLMs—Claude 3.5 Sonnet, Claude 3.7 Sonnet, DeepSeek R1, Gemini 1.5 Pro, Gemini 2.0 Flash, GPT-4o, GPT-4o mini, Llama 3.3, and o3-mini—using an automated LLM-jury evaluation method. Our LLM-jury uses multiple AI evaluators to assess model outputs against expert-defined criteria. Advanced reasoning models (DeepSeek R1, o3-mini) demonstrated superior performance with win rates of 66%, although Claude 3.5 Sonnet achieved comparable results at 15% lower computational cost. These results not only highlight current model capabilities but also demonstrate how MedHELM could enable evidence-based selection of medical AI systems for healthcare applications.
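The LLM-jury idea described in the abstract can be sketched as aggregation of verdicts from several independent evaluator models. A minimal illustration in Python (the function name, verdict labels, and majority-vote rule here are assumptions for exposition, not MedHELM's actual implementation):

```python
# Hypothetical sketch of an LLM-jury: several evaluator models each judge a
# model output against expert-defined criteria, and their verdicts are
# aggregated. This is NOT MedHELM's code, only an illustration of the idea.
from collections import Counter

def jury_verdict(verdicts):
    """Return the majority verdict across independent LLM jurors."""
    counts = Counter(verdicts)
    # most_common(1) yields [(label, count)] for the top verdict
    return counts.most_common(1)[0][0]

# Hypothetical verdicts from three evaluator models on one output
verdicts = ["pass", "pass", "fail"]
print(jury_verdict(verdicts))  # prints "pass"
```

In practice each juror's verdict would itself come from prompting an LLM with the output and the expert criteria; only the aggregation step is shown here.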

Research·March 17

Towards the AI Doctor: Utah and Beyond

Dr. Crowe is an accomplished academic clinician, most recently serving on the faculty at Harvard Medical School and as a practicing physician at Beth Israel Deaconess Medical Center. His research focused on evaluating large language model–based AI systems in complex diagnosis and clinical reasoning, and he has co-authored landmark studies and major society guidelines on the use of AI in medicine. His work has been featured in JAMA, NEJM AI, and The New York Times, and Dr. Crowe has spoken nationally and internationally on AI’s role in redesigning health systems.

Abstract: Doctronic is an AI-native care delivery organization whose AI technology has been used by millions of individuals, and which supports its nationwide physician practice. In partnership with the state of Utah, the organization recently became the first healthcare organization in the US to deploy an autonomous AI system with state-level authority to practice medicine. In this talk, we describe Doctronic’s approach to care, our experience operating an AI-native clinic, and future directions for medical AI.

Research·March 17

Do Contemporary Advances in AI Require Change in the Fundamental 'Theorem' of Informatics?

Charles Friedman is Professor of Learning Health Sciences at the University of Michigan Medical School, where he directs the Knowledge Systems Laboratory. He was formerly Founding Chair of the Department of Learning Health Sciences and the Josiah Macy Jr. Professor of Medical Education. He holds joint appointments in the Schools of Information and Public Health. He is editor-in-chief of the open-access journal Learning Health Systems and co-chair of the multi-national movement to Mobilize Computable Biomedical Knowledge. Throughout his career, Friedman has developed and studied methods to improve health, education, and research through innovative applications of information technology. Most recently, Friedman has focused his academic interests and activities on the concept of Learning Health Systems that improve health by marrying discovery to implementation, and the socio-technical infrastructure required to sustain these systems. Friedman is a Distinguished Fellow of the American College of Medical Informatics and a founding fellow of the International Academy of Health Sciences Informatics. He holds an honorary doctorate from the University of Lucerne in Switzerland for his contributions to the science of Learning Health Systems. Prior to coming to Michigan, Friedman held executive positions at the Office of the National Coordinator for Health IT (ONC) in the U.S. Department of Health and Human Services. Immediately prior to his work in the government, he was Associate Vice Chancellor for Biomedical Informatics and Founding Director of the Center for Biomedical Informatics at the University of Pittsburgh.

Abstract: The fundamental ‘theorem’, published in 2009, proposes that persons supported by information technology will be “better” than the same persons undertaking the same tasks unassisted. It was intended not as an iron law of nature, but rather as a goal for the field of informatics and a hypothesis guiding the design and execution of studies conducted by individuals in the field. This presentation will introduce the original formulation of the ‘theorem’, review some extensions of the theorem that have been suggested over the years, and consider whether the current AI revolution requires a significant reconstruction of the theorem, envisioning that AI unassisted might be “better” than persons supported by AI.
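The proposition is often written informally as an inequality. In that shorthand (an illustration of the abstract above, not necessarily Friedman's exact notation), the original claim and the AI-era question the talk raises are:

```latex
% Original 'theorem': a person supported by information technology
% outperforms the same person working unassisted.
\mathrm{person} + \mathrm{IT} > \mathrm{person}

% The question under consideration: might unassisted AI now outperform
% a person supported by AI?
\mathrm{AI} \stackrel{?}{>} \mathrm{person} + \mathrm{AI}
```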

Research·February 27

AI and “Do No Harm”

In this episode, JAMA+ AI Associate Editor Yulin Hswen, ScD, MPH, speaks with David Wu, MD, PhD, and Adam Rodman, MD, MPH, about what safe clinical use of LLMs requires. Drawing on the framework of Do No Harm, they examine failure modes, limits of accuracy-based evaluation, clinician AI interaction, and safeguards needed as medical AI moves into patient care.

Research·February 20

Advice for Those (Maybe) Interested in Starting a Company

Alex is a Partner at Khosla Ventures with a focus on biotechnology, healthcare, data science, and AI/ML. He works on new investments and sits on the boards of many KV portfolio companies. Alex’s education and training encompass physics, biology, biomedical informatics, and medicine; he also held a postdoctoral fellowship in biochemistry and genomics. As a scientist, he has published more than 50 scientific articles (h-index 38), primarily at the intersection of computer science, biology, and healthcare. As an inventor, he has licensed IP to three companies. As an entrepreneur, he has been a co-founder or early employee at a range of startups. He is committed to increasing access to high-quality healthcare, STEM education, and entrepreneurship opportunities for the underserved and under-represented. Early in his career, Alex taught himself programming and joined the MITRE Corporation as an artificial intelligence research engineer. Subsequently, Alex went to graduate school at Stanford in biomedical informatics, where he worked with genomics data on applications for biomarker discovery and therapeutics development. At Stanford, he also initiated early clinical trials for digital health using connected devices, including health wearables and AR/VR technologies. Since 2015, Alex has been at KV working on evaluating deal flow and new investments, supporting portfolio companies, and company incubation/creation. Alex holds an undergraduate degree in physics from Brandeis University and an MS in biology from Tufts University. He also holds an MS and a Ph.D. in biomedical informatics, as well as an M.D., from Stanford University.

Abstract: This talk will discuss a variety of things to consider when contemplating the leap from academia into entrepreneurship.

Research·February 19

Healthcare AI Leadership & Strategy (HAILS) Virtual Info Session – 02/13/26

An intensive four-week hybrid learning experience designed to equip healthcare professionals with the strategic insight and practical tools needed to responsibly implement and scale AI solutions in clinical and organizational settings. The program blends asynchronous online learning, recorded live virtual sessions, and a two-day in-person immersion at Stanford University. Participants will engage with Stanford faculty and industry leaders, explore real-world case studies, and develop strategic approaches to integrating AI in healthcare delivery. Beyond the in-person experience, participants will deepen their learning through virtual discussions, applied exercises, and expert-led sessions that provide both conceptual understanding and practical frameworks. The program is intentionally structured to support immediate application to participants’ professional contexts. By the end of the program, participants will walk away with awareness of best practices in healthcare AI implementation, practical frameworks for AI evaluation, and a network of peers and thought leaders to support their ongoing work in advancing responsible AI in healthcare.

Research·February 19

Predict, Prevent, Personalize - Health AI at Northwell Health

Dr. Theo Zanos is a Professor & AVP and the head of the Division of Health AI at Northwell Health and the Neural and Data Science Lab at the Institute of Health System Science and Institute of Bioelectronic Medicine, at the Feinstein Institutes for Medical Research and the Zucker School of Medicine at Hofstra/Northwell. He received his engineering diploma in Electrical and Computer Engineering from the Aristotle University of Thessaloniki in Greece, his MSc and PhD in Biomedical Engineering from the University of Southern California, and postdoctoral training at the Montreal Neurological Institute at McGill. His current research focuses on developing and applying AI/machine learning methods to multimodal healthcare, neural, and physiological data to enable early diagnosis, disease severity assessment, and personalization and adaptability of therapies. He has been awarded multiple federal and industry grants, totaling more than $15M of external funding from NIH, CDC, and other federal and industry sources, and has published more than 70 peer-reviewed papers in journals such as Nature Communications, Nature Machine Intelligence, PNAS, JAMA, npj Digital Medicine, Neuron (Cell Press), and others. He has twice been awarded the Northwell Excellence in Research Award, was a finalist in Fast Company’s World Changing Ideas in AI and twice a finalist in Northwell’s Innovation Challenge, and has received the Jean Timmins Award and the Center of Excellence in Commercialization and Research Award.

Abstract: Artificial intelligence offers transformative potential in healthcare through predictive algorithms, preventive interventions, and personalized treatment approaches. However, successful implementation requires rigorous research validation, health system integration, and consideration of real-world performance dynamics. We will discuss the Division of Health AI at Northwell Health’s strategic framework, organized around three pillars (Predict, Prevent, and Personalize) to improve patient outcomes and health system operations. I will present our work on point-of-care multimodal in-hospital deterioration prediction models, operational nurse staffing forecasting solutions with tangible ROIs, and bioelectronic medicine ML applications using anatomical data and wearable sensors to enable precision treatment selection.

Research·February 18

Epic’s Approach to AI with Seth Hain

Clinical AI only helps patients if clinicians and health systems trust it. Seth Hain describes how Epic is building foundation models that respect institutional autonomy, minimize burden, and prioritize safety. He discusses scaling laws in structured medical data, cautious deployment for clinical interventions, and why understanding causality—not just correlation—is essential. This conversation reframes AI not as disruption, but as infrastructure for safer, more reliable care. View the transcript and explore other episodes: https://www.podbean.com/eau/pb-atesa-1a43aa0 #podcast #artificialintelligence #aiinmedicine

Research·February 2

A Unified Molecular Framework for Quantifying Immune Dysregulation Across Health, Diseases, and Treatment Response

Bio: Dr. Khatri is a faculty member in the Institute for Immunity, Transplantation and Infection (ITI) and the Division of Computational Medicine in the Department of Medicine at Stanford University. His research focuses on the intersection of machine learning, computational immunology, and translational medicine, with the overarching goal of accelerating the translation of immune response-based diagnostics and therapies to clinical practice across a broad spectrum of inflammatory diseases, including infections, autoimmune diseases, organ transplant, cancers, and vaccines. His lab develops machine learning-based methods and computational frameworks that leverage biological, clinical, and technical heterogeneity across multiple datasets to identify robust disease signatures and novel therapies for inflammatory conditions.

Research·January 28

Precision Medicine for Hospital Care: From Bedside Insight to Clinical Impact

Tim Sweeney, MD, PhD, is a graduate of the Stanford Biomedical Informatics (BMI) program (postdoctoral MS, 2015). He is cofounder and CEO of Inflammatix, a precision diagnostics company developing first-in-class tools for improving hospital care, including its lead product, Triverity. Dr. Sweeney has authored more than 100 peer-reviewed manuscripts and abstracts, served as principal investigator on 10 federally funded contracts from NIH, DoD, BARDA, and related agencies, and serves on the board of the STEPS Alliance.

Research·January 23

Bridging AI and Biology to Tackle Medicine’s Hardest Problems with Dr. Marinka Zitnik

For Dr. Marinka Zitnik, the promise of AI in medicine begins with acknowledging the scale of the problem. Most patients with rare diseases have no approved treatments, and traditional drug development timelines make progress painfully slow. In this conversation, she describes how AI-driven drug repurposing offers a way to work within existing constraints while still opening new therapeutic possibilities. She also highlights a structural issue that has limited impact: machine learning and biology communities often work in parallel, not together. By building shared benchmarks and collaborative spaces, Marinka argues, researchers can focus models on problems that truly matter for patients. The episode introduces her definition of AI agents as systems that can take actions and learn from outcomes — a capability she sees as essential for scientific discovery beyond static prediction. Throughout the discussion, Marinka returns to the value of academic freedom: the ability to chase difficult questions that require long time horizons and interdisciplinary thinking. Explore more episodes and subscribe to the AI Grand Rounds podcast: https://www.podbean.com/eau/pb-smxiz-1a2365c #artificialintelligence #medicalai

Research·January 21

Presenting the 2026 State of Clinical AI Report

Bio (Dr. Peter Brodeur): Dr. Peter Brodeur is a rising cardiology fellow at Harvard Medical School’s Beth Israel Deaconess Medical Center. Dr. Brodeur is an affiliate of ARISE, a reviewer for NEJM AI, and a former life sciences strategy consultant. His research focuses on human–computer interaction and LLM clinical reasoning.

Bio (Dr. Liam McCoy): Liam McCoy is a resident physician in neurology at the University of Alberta and a Research Affiliate at the Massachusetts Institute of Technology. He engages in research related to the effective, ethical integration of clinical reasoning AI systems in practical healthcare contexts.

Research·January 21

Artificial Intelligence Systems to Advance Engineered T-cell Immunotherapy Designs

Zinaida Good, Ph.D., is an Assistant Professor of Medicine in the Division of Immunology and Rheumatology and the Division of Computational Medicine at Stanford University. She also serves as the Director of the Stanford Center for Cancer Cell Therapy Data Hub. The goal of her research program is to understand and enhance engineered T cell immunotherapies for cancer and immune-mediated diseases through innovative computational approaches and systems immunology. Her lab leverages innovations in machine learning and clinical multiomic datasets to build artificial intelligence systems for advanced T cell therapy design. Dr. Good earned her Ph.D. in Computational & Systems Immunology from Stanford University. Her work includes 4 first-author papers (Nature Medicine 2018 & 2022, Nature Biotechnology 2019, Trends in Immunology 2019), 18+ co-authored papers (including Nature 2019, 2022, 2024, Science 2021, Nature Methods 2016, 2022, and NEJM 2024), and initial senior-author papers (ICML 2025, NeurIPS 2025, Frontiers in Immunology 2025). Her research is supported by the NIH/NCI Pathway to Independence Award, the NIH/OD Multimodal AI Initiative, and the Weill Cancer Hub West. Dr. Good was named an Arthur & Sandra Irving Cancer Immunology Fellow in 2022, a Parker Bridge Fellow in 2023, and an AACR Women in Cancer Research Scholar in 2024.

Research·January 21

An Exploration into 3 Applications of AI to Enhance (Medical) Learning

Bio (Dr. Chen): Dr. Sharon F. Chen is an academic pediatric infectious diseases physician at Stanford University School of Medicine, involved in patient care, teaching, and research. Dr. Chen has a special interest in viral infections affecting immunocompromised patients, and she collaborates with viral/immunology laboratories to conduct studies, primarily on T-cell responses. As Co-director of Stanford Children’s Pediatric Infectious Diseases Program for Immunocompromised Hosts (PIDPIC), Dr. Chen develops and conducts clinical studies to establish best practices and to start new clinical initiatives that push the frontier. Dr. Chen’s scholarly interests also extend to education research on how people think and make decisions. In collaboration with the learning sciences, she has created a problem-solving framework that reveals the hidden “thinking habits” needed for solving complex problems. An AI adaptation of the framework is being tested in clinical medicine to augment physicians as they make patient-care decisions.

Bio (Dr. Ma): Dr. Flora Ma is a Clinical Assistant Professor at Stanford School of Medicine and a distinguished leader in geropsychology and clinical mental health. Currently serving on the Executive Leadership Committee of Stanford’s Faculty Senate, Dr. Ma leads funding, public positioning, and emerging technology discussions across the medical school. At Stanford, Dr. Ma provides both inpatient and outpatient psychological services through the ADAPT, Geriatric Psychiatry, and INSPIRE Clinics, specializing in complex cases involving dementia, psychotic-spectrum disorders, and personality disorders. Dr. Ma’s research focuses on technology-enhanced mental health interventions for older adults, with particular emphasis on Veterans’ care and cultural competency. She has published extensively in peer-reviewed journals and serves as Assistant Editor for the International Journal of Aging and Mental Health. Her leadership extends nationally as a Committee Chair in the American Psychological Association. Her research has culminated in her own AI that helps doctors and nurses with patient empathy training by practicing live, real-time Zoom conversations with AI patients. She is currently expanding this emotion-based conversation simulator across corporate HR, sales, and other high-leverage environments.

Bio (Dr. Oliveira): Renan Gianotto-Oliveira, MD, PhD, is an emergency physician and medical education researcher specializing in technology-enhanced assessment, including augmented reality and virtual simulation. He is a research assistant on the Clinical Mind AI project at Stanford University’s CHARIOT Lab and a faculty member at São Leopoldo Mandic Medical School (Brazil), working in the Assessment Center.

Research·January 21

Diagnostic reasoning, Error, and Intelligence—Human and Artificial

Dr. Laura Zwaan is an Associate Professor at the Institute of Medical Education Research Rotterdam (iMERR) at Erasmus MC, The Netherlands. Trained in cognitive psychology and epidemiology, she leads a research group dedicated to advancing understanding of diagnostic reasoning and diagnostic error, with a growing focus on human–AI collaboration in clinical decision-making. Her work combines a wide range of quantitative and qualitative methods to study how clinicians make diagnostic decisions and how these processes can be optimized to strengthen diagnostic safety and accuracy. Dr. Zwaan has received multiple research grants and awards, including the Mark Graber Award, in recognition of her contributions to improving diagnostic safety.

Research·January 21

From Bytes to Bedside: Evolution of the Queensland Health AI Sepsis Prediction Algorithm (QSA)

Bio (Dr. Van Der Vegt): Dr Anton Van Der Vegt is an Advanced QLD Industry Research Fellow with the Centre for Health Services Research at the UQ Faculty of Medicine. Originally trained as a mechanical engineer and computer scientist, Anton has worked across Australia, Europe, and the US designing, developing, and implementing sophisticated software programs. Recently, Anton architected and managed two projects within Queensland Health to support AI experimentation with health data, including the development of CLARA, a Clinical AI Research Accelerator data lab. Currently, Anton is collaborating with clinicians at Queensland Health to develop and prospectively trial AI methods to predict sepsis and clinical deterioration.

Bio (Dr. Scott): Dr Ian Scott is a consultant general physician and former Director of Internal Medicine and Clinical Epidemiology at Princess Alexandra Hospital in Brisbane. He is currently Clinical Consultant in AI at the Digital Health and Informatics Division of Metro South Hospital and Health Service, chairs the Metro South Clinical AI Working Group and the Queensland Health Sepsis AI Working Group, and is Professor in Clinical Decision-making at the University of Queensland (UQ). He has co-authored multiple papers on the use of AI in healthcare, is principal investigator for several AI trials, and has collaborations with colleagues within the UQ Digital Health Centre, the Centre for Health Informatics at Macquarie University, the CRC in Digital Health at Queensland University of Technology, and the Clinical and Business Intelligence Unit of eHealth Queensland. He has longstanding research interests in clinical informatics, evidence-based medicine, clinical reasoning, and quality and safety improvement.

Research·January 6

Emerging Technology Mini-Series: AI as a Thinking Partner in Medicine

Artificial intelligence is reshaping how clinicians think and care for patients. In our conversation with Jonathan Chen, MD, PhD, Associate Professor of Medicine and Biomedical Data Science at Stanford University, he shares how AI has enhanced his own clinical work and the practical steps that foster trust and adoption among clinicians. The discussion goes beyond technology to explore the emotional dimensions of care, address bias, and outline the safeguards needed to use AI responsibly. We also review AI’s impact on medical education and the evolving hospital landscape for responsible, future-ready AI-enabled care. Join us for a thoughtful exploration of the promise, challenges, and path forward to integrate AI into clinical decision making.

Research·December 31

Multiple Reasoning Models and the Future of AI Chatbots

AI chatbots have advanced rapidly, incorporating new reasoning architectures that reshape decision-making and medical education. Jonathan Chen, MD, PhD, and Ethan Goh, MD, MS, of Stanford University join JAMA and JAMA+ AI Associate Editor Yulin Hswen, ScD, MPH, to discuss the latest generation of AI models, the importance of evaluating benefits and harms, and sycophancy in AI systems.

Research·December 19

What Values are in AI? A Conversation with Dr. Zak Kohane

For Dr. Zak Kohane, this year’s advances in AI weren’t abstract. They were personal, practical, and deeply tied to care. After decades studying clinical data and diagnostic uncertainty, he finds himself building his own EHR, reviewing his child’s imaging with AI, and re-thinking the balance between incidental and missed findings. Across each story is the same insight: clinicians and machines make mistakes for different reasons — and understanding those differences is essential for safe deployment. In this episode, Zak also highlights where AI is spreading fastest, and why: reimbursement. While dermatology and radiology aren’t broadly using AI for interpretation, revenue-cycle optimization is advancing rapidly. Meanwhile, ambient documentation has exploded — not because it increases accuracy or throughput, but because it improves clinician satisfaction in strained systems. Yet the most profound theme, he argues, is values. Models already show implicit preferences: some conservative, some aggressive. And unlike human clinicians, no regulatory framework examines how those preferences form. Zak calls for a new form of oversight that centers patients, recognizes bias, and bridges clinical expertise with technical transparency. View the transcript and subscribe to AI Grand Rounds: https://www.podbean.com/eau/pb-4wyuh-19efe43 #artificialintelligence #medicalai #digitalhealth

Research·November 21

From Hindsight Bias to Machine Bias: Dr. Laura Zwaan on Learning from Mistakes

As a cognitive psychologist, Dr. Laura Zwaan studies how humans make—and learn from—mistakes. In this episode of NEJM AI Grand Rounds, she brings that lens to AI, showing how machines inherit our biases and why both need transparency and reflection. From the challenge of defining diagnostic error to the promise of “machine psychology,” Dr. Zwaan explores how human reasoning can inform safer algorithms and better care. Her message is clear: the path to trustworthy AI begins with understanding ourselves. Download the transcript and view past episodes: https://www.podbean.com/eau/pb-42s8d-19c7688 #artificialintelligence #medicalai #machinelearning

Research·November 21

Designing AI for Uncertainty: A Conversation With Eric Horvitz

How can AI systems reason safely in the open world of medicine? JAMA+ AI Associate Editor Yulin Hswen, ScD, MPH, talks with Eric Horvitz, MD, PhD, Chief Scientific Officer at Microsoft, about the future of AI over the next 5 to 100 years, from neurons to the nebulous, and how we can guide AI systems to act as copilots while maintaining integrity and safety in the clinical arena.

Research·October 24

Medicine, Machines, and Magic: Dr. Jonathan Chen on Medical AI

In this episode, Dr. Jonathan Chen joins the hosts to discuss his path from teenage programmer to Stanford physician-informatician and why machine learning has both thrilled and unnerved him. From his 2017 NEJM essay warning about “inflated expectations” to his latest studies showing GPT‑4 outperforming doctors on diagnostic tasks, Dr. Chen describes a discipline learning humility at machine speed. This conversation spans medical education, automation anxiety, magic, and why empathy—not memorization—may become the most valuable clinical skill. Download the transcript and explore more episodes: https://www.podbean.com/eau/pb-2p293-1992a22 #artificialintelligence #gpt

Research·October 24

From Clinician to Chief Health AI Officer: A Conversation with Dr. Karandeep Singh

Dr. Karandeep Singh brings two worlds together: programming and medicine. In this conversation, he explains how early experiments with code led him to biomedical informatics, why gaps between paper performance and clinical reality must be confronted, and how governance committees weigh ethics and safety. Now serving as Chief Health AI Officer at UC San Diego Health, he reflects on lessons from deploying sepsis prediction tools, the risks of hype, and the promise of integration. For clinicians, Singh’s story is a reminder that the best AI is guided by patient care, deep expertise, and humility about the limits of technology. Download the transcript and view more episodes: https://www.podbean.com/eau/pb-9t5e4-1965fcf #artificialintelligence #machinelearning

Research·September 1

#AIMI25 | Panel 2: Foundation Model Roadmap: What Health AI Teams Need to Know

The 2025 AIMI Symposium was a hybrid conference presented by the Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI Center) on June 3, 2025. Moderated by Ethan Goh, MD (Executive Director, Stanford AI Research and Science Evaluation, Stanford University). Panelists: Emily Alsentzer, PhD (Assistant Professor of Biomedical Data Science and, by courtesy, Computer Science, Stanford); Karan Singhal, MS (Health AI Lead, OpenAI); Khaled Saab, PhD (Research Scientist, Google DeepMind); Marinka Zitnik, PhD (Associate Professor of Biomedical Informatics, Harvard Medical School).

Research·August 27

Radiologist Turned CEO: Dr. Jeremy Friese on AI for Prior Authorization

Dr. Jeremy Friese knows medicine from both sides. A practicing radiologist and technology executive, he’s seen firsthand how administrative burden undermines care. In this episode of NEJM AI Grand Rounds, he walks through the origins of prior authorization, explains why he believes artificial intelligence can close the gap between patients and payers, and argues that real reform means showing your work—just like in math class. At Humata, he’s combining human oversight, LLMs, and interoperability to try to fix a broken system. For clinicians overwhelmed by back-office complexity, this conversation offers both urgency and optimism. Download the transcript and subscribe to the AI Grand Rounds podcast: https://www.podbean.com/eau/pb-fav6p-193901b #artificialintelligence #medicalai

Research·July 21

Can AI Accelerate Science? Dr. Andy Beam on AI’s Next Frontier

Dr. Andy Beam has trained models, mentored scientists, and used data to quantify the value of treatments. In this episode of NEJM AI Grand Rounds, Raj Manrai turns the tables on his co-host, reflecting on how Andy’s childhood misdiagnosis, and the failure of human recall behind it, revealed the diagnostic promise of machine learning. As a Harvard professor, Andy mentored hybrid thinkers and built tools to evaluate safety, not just performance. Now CTO of Lila Sciences, he’s building an experimental AI system to generate its own hypotheses and test them in the real world. This conversation is a front-row seat to the next evolution of science. View the full transcript and explore more episodes: https://www.podbean.com/eau/pb-355is-1900b60 #artificialintelligence #machinelearning #aiinmedicine

Research·February 6

AI Chatbots in Clinical Practice

Chatbots may have a role in enhancing clinical care, but the best way to apply them remains a work in progress. Jonathan Chen, MD, PhD, and Ethan Goh, MD, MS, of Stanford University, join JAMA and JAMA+ AI Associate Editor Yulin Hswen, ScD, MPH, to discuss their randomized clinical trial published in JAMA Network Open investigating the use of chatbots in clinical practice.