Benchmarks

Currently we support First, Do NOHARM; the Script Concordance Test (SCT-Bench); CPC-Bench; and MedAgentBench, with more benchmarks on our roadmap. Widely used industry benchmarks (e.g., OpenAI's HealthBench) are included for reference but are not weighted in the composite score. See our policies and submission instructions.
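
As a rough illustration of that weighting rule, the sketch below computes a composite as a weighted mean over the supported benchmarks while skipping reference-only entries such as HealthBench. The benchmark weights and score values are invented placeholders, not MAST's published weighting.

    # Hypothetical sketch: composite as a weighted mean over supported
    # benchmarks; reference-only benchmarks (e.g. HealthBench) are ignored.
    # Weights and scores below are illustrative placeholders only.
    SUPPORTED_WEIGHTS = {
        "NOHARM": 0.4,
        "SCT": 0.2,
        "CPC": 0.2,
        "MedAgentBench": 0.2,
    }

    def composite_score(scores: dict[str, float]) -> float:
        """Weighted mean over supported benchmarks present in `scores` (each in [0, 1])."""
        used = {name: w for name, w in SUPPORTED_WEIGHTS.items() if name in scores}
        total_weight = sum(used.values())
        return sum(scores[name] * w for name, w in used.items()) / total_weight

    model_scores = {"NOHARM": 0.91, "SCT": 0.62, "CPC": 0.48,
                    "MedAgentBench": 0.70, "HealthBench": 0.85}
    print(round(composite_score(model_scores), 3))  # HealthBench plays no part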

First, Do NOHARM

NOHARM v1.0
Evaluated on: 2026-01-15

The foundational benchmark of the MAST suite, establishing a new framework for assessing clinical safety and accuracy in AI-generated medical recommendations.

Script Concordance Testing (SCT)

SCT v1.0
Evaluated on: 2026-01-10

Challenging cases measuring probabilistic clinical reasoning under uncertainty. The benchmark uses the Script Concordance Testing (SCT) format, which assesses how clinicians adjust their diagnostic or therapeutic judgments in response to new, uncertain information.
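
To make the format concrete, the sketch below shows the standard SCT aggregate-scoring idea: a response earns credit in proportion to how many reference-panel experts gave the same answer, with the modal panel answer earning full credit. The panel votes are invented, and this is the generic SCT scheme rather than necessarily MAST's exact rubric.

    from collections import Counter

    # Hypothetical SCT item: a reference panel of 15 clinicians answered on a
    # 5-point scale (-2 "ruled out" ... +2 "strongly supported").
    panel_answers = [+1, +1, +1, +1, +1, +1, +1, 0, 0, 0, 0, +2, +2, -1, -1]

    def sct_credit(answer: int, panel: list[int]) -> float:
        """Aggregate scoring: credit = (panel members giving this answer) / (modal count)."""
        counts = Counter(panel)
        modal = max(counts.values())           # size of the largest panel faction
        return counts.get(answer, 0) / modal   # 1.0 for the modal answer, else partial

    print(sct_credit(+1, panel_answers))  # 1.0   (modal answer, full credit)
    print(sct_credit(0, panel_answers))   # ~0.57 (4/7, partial credit)
    print(sct_credit(-2, panel_answers))  # 0.0   (no panel expert chose it)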

MedAgentBench

MedAgentBench v1.0
Evaluated on: 2026-01-08

Broad agentic evaluation suite designed to test the capabilities of large language models within the context of medical records. It features 300 clinically derived, patient-specific tasks categorized across 10 areas.

CPC Bench

CPC v1.0
Evaluated on: 2026-01-05

Comprehensive benchmark designed to evaluate medical artificial intelligence across a century of complex clinical cases from the New England Journal of Medicine. Includes 2,505 multimodal and 3,364 text-based tasks.

Independent Academic Research

MAST is operated by the ARISE AI Research Network, an independent academic research collaboration. No AI company evaluated in our benchmarks has funding influence, editorial control, or methodological input over our evaluation processes.

Our evaluation schedule, scoring rubrics, and publication timeline are determined by the MAST Steering Committee. Model providers are notified of results only after scoring is finalized, and they have no opportunity to influence or preview findings before publication.

Developers & Contributors

Analyze, audit, and contribute to MAST. Explore the methodology, run evaluations, and help improve medical AI safety benchmarks.

Submission guidelines

We welcome benchmark submissions via GitHub. All submissions must include a peer-reviewed or preprint manuscript, a publicly accessible dataset, and reproducible evaluation code. Results should be generated using the official MAST evaluation harness.
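
For orientation only, a submission's reproducible evaluation code might look like the minimal skeleton below. Every file name, field name, and the exact-match scorer are hypothetical placeholders; this is not the MAST evaluation harness API.

    # evaluate.py -- hypothetical skeleton of reproducible evaluation code
    # accompanying a benchmark submission. Paths, field names, and the scoring
    # rule are illustrative placeholders, not MAST's harness interface.
    import json
    from pathlib import Path

    def score_case(prediction: str, reference: str) -> float:
        """Placeholder scorer (exact match); a real benchmark would define
        its own clinically validated rubric here."""
        return float(prediction.strip().lower() == reference.strip().lower())

    def main(dataset_path: str = "data/cases.jsonl",
             predictions_path: str = "outputs/predictions.jsonl") -> None:
        cases = [json.loads(line) for line in Path(dataset_path).read_text().splitlines()]
        preds = [json.loads(line) for line in Path(predictions_path).read_text().splitlines()]
        scores = [score_case(p["answer"], c["reference"]) for c, p in zip(cases, preds)]
        print(f"mean score: {sum(scores) / len(scores):.3f} over {len(scores)} cases")

    if __name__ == "__main__":
        main()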

Review process

Submissions are reviewed by the MAST governance committee for clinical relevance, methodological rigor, and reproducibility. Accepted benchmarks are integrated into the composite score on a quarterly release cycle. See our policies and instructions on GitHub before opening a pull request.

Join us in shaping the future of healthcare with AI
