We currently support First Do NOHARM, the Script Concordance Test (SCT-Bench), CPC-Bench, and MedAgentBench, with more benchmarks on our roadmap. Widely used industry benchmarks (such as OpenAI HealthBench) are included for reference but are not weighted in the composite score. See our policies and submission instructions.
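
As a rough illustration, the composite can be thought of as a weighted average over the scored benchmarks, with reference-only benchmarks carried at zero weight so they are reported but never counted. The benchmark keys, weights, and scores below are placeholders, not the official MAST weighting; this is a minimal sketch only.

```python
# Illustrative weighted composite. Weights and scores are placeholders,
# not MAST's official configuration.

# Per-benchmark weights; reference-only benchmarks (e.g. HealthBench)
# carry a weight of 0 so they appear on the report but not in the composite.
WEIGHTS = {
    "first_do_noharm": 0.25,
    "sct_bench": 0.25,
    "cpc_bench": 0.25,
    "medagentbench": 0.25,
    "healthbench": 0.0,  # reference only, excluded from the composite
}

def composite_score(scores: dict[str, float]) -> float:
    """Weighted average over the benchmarks that carry non-zero weight."""
    counted = [name for name in scores if WEIGHTS.get(name, 0.0) > 0]
    total_weight = sum(WEIGHTS[name] for name in counted)
    if not total_weight:
        return 0.0
    return sum(WEIGHTS[name] * scores[name] for name in counted) / total_weight

# Example: HealthBench is shown alongside the others but does not move the composite.
print(composite_score({
    "first_do_noharm": 0.81,
    "sct_bench": 0.74,
    "cpc_bench": 0.69,
    "medagentbench": 0.63,
    "healthbench": 0.90,
}))
```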
MAST is operated by the ARISE AI Research Network, an independent academic research collaboration. No AI company evaluated in our benchmarks has funding influence, editorial control, or methodological input over our evaluation processes.
Our evaluation schedule, scoring rubrics, and publication timeline are determined by the MAST Steering Committee. Model providers are notified of results only after scoring is finalized, and they have no opportunity to influence or preview findings before publication.


Analyze, audit, and contribute to MAST. Explore the methodology, run evaluations, and help improve medical AI safety benchmarks.
We welcome benchmark submissions via GitHub. All submissions must include a peer-reviewed or pre-print manuscript, a publicly accessible dataset, and reproducible evaluation code. Results should be generated using the official MAST evaluation harness.
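
For illustration only, a submission could be described by a small manifest that reviewers and CI can check mechanically. The field names and the `missing_fields` helper below are hypothetical and are not part of the official MAST harness; they simply mirror the stated requirements.

```python
# Hypothetical submission manifest: fields mirror the stated requirements
# (manuscript, public dataset, reproducible evaluation code, harness results).
# None of these names come from the official MAST tooling.
from dataclasses import dataclass

@dataclass
class BenchmarkSubmission:
    name: str                  # benchmark name as it should appear in MAST
    manuscript_url: str        # peer-reviewed paper or preprint
    dataset_url: str           # publicly accessible dataset
    eval_code_url: str         # reproducible evaluation code
    harness_results_path: str  # results produced with the evaluation harness

    def missing_fields(self) -> list[str]:
        """Return the names of any required fields left empty."""
        return [key for key, value in vars(self).items() if not value]

submission = BenchmarkSubmission(
    name="example-bench",
    manuscript_url="https://example.org/preprint",
    dataset_url="https://example.org/dataset",
    eval_code_url="https://github.com/example/eval-code",
    harness_results_path="results/example-bench.json",
)
assert not submission.missing_fields(), "submission is incomplete"
```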
Submissions are reviewed by the MAST governance committee for clinical relevance, methodological rigor, and reproducibility. Accepted benchmarks are integrated into the composite score on a quarterly release cycle. See our policies and instructions on GitHub before opening a pull request.