MOOZY

A Patient-First Foundation Model for Computational Pathology

Yousef Kotp1,2, Vincent Quoc-Huy Trinh3,4, Christopher Pal2,5,6, Mahdi S. Hosseini1,2,7

1Concordia University   2Mila   3CHUM, Université de Montréal   4IRIC, Université de Montréal   5Université de Montréal   6Polytechnique Montréal   7McGill University

85.77M parameters
77,134 public slides
333 tasks from 56 datasets
+7.4% F1 over TITAN
23 anatomical sites

Quick Start

```shell
# install
pip install moozy

# encode from pre-computed H5 feature files
moozy encode slide_1.h5 slide_2.h5 -o case_embedding.h5

# encode from raw whole-slide images (requires atlas-patch)
moozy encode slide_1.svs slide_2.svs -o case_embedding.h5 --target_mag 20
```

Checkpoint and task definitions are downloaded automatically from HuggingFace on first use.

Abstract

Computational pathology needs whole-slide image (WSI) foundation models that transfer across diverse clinical tasks, yet current approaches remain largely slide-centric, often depend on private data and expensive paired-report supervision, and do not explicitly model relationships among multiple slides from the same patient. We present MOOZY, a patient-first pathology foundation model in which the patient case, not the individual slide, is the core unit of representation. MOOZY explicitly models dependencies across all slides from the same patient via a case transformer during pretraining, combining multi-stage open self-supervision with scaled low-cost task supervision. In Stage 1, we pretrain a vision-only slide encoder on 77,134 public slide feature grids using masked self-distillation. In Stage 2, we align these representations with clinical semantics using a case transformer and multi-task supervision over 333 tasks from 56 public datasets, including 205 classification and 128 survival tasks across four endpoints. Across eight held-out tasks with five-fold frozen-feature probe evaluation, MOOZY achieves best or tied-best performance on most metrics and improves macro averages over TITAN by +7.37%, +5.50%, and +7.83% and over PRISM by +8.83%, +10.70%, and +9.78% for weighted F1, weighted ROC-AUC, and balanced accuracy, respectively.

Two-Stage Training Pipeline

MOOZY two-stage training pipeline

Stage 1 (top): A frozen patch encoder extracts per-patch features arranged into a spatial grid. Multi-scale crops are sampled with spatial augmentations and block-based masking. A student slide encoder and EMA teacher are jointly trained via CLS-level self-distillation and masked patch prediction. Stage 2 (bottom): The pretrained slide encoder produces per-slide embeddings; a case transformer aggregates them into a unified case embedding, routed to task-specific classification and survival heads.
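To make the Stage 2 aggregation step concrete, here is a minimal PyTorch sketch of a case transformer that pools per-slide embeddings into a single case embedding via a learned CLS token. This is an illustration only, not the released implementation: the embedding dimension, depth, head count, and CLS-token pooling are assumptions.

```python
import torch
import torch.nn as nn

class CaseAggregator(nn.Module):
    """Hypothetical sketch: attend across all slides of one patient
    and pool them into a single case embedding via a CLS token."""
    def __init__(self, dim=768, depth=2, heads=8):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))  # learned case token
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, slide_embs):  # slide_embs: (batch, n_slides, dim)
        cls = self.cls.expand(slide_embs.size(0), -1, -1)
        x = torch.cat([cls, slide_embs], dim=1)  # prepend CLS to slide tokens
        return self.encoder(x)[:, 0]             # case embedding = CLS output

# 2 cases, 3 slides each, 768-d slide embeddings -> (2, 768) case embeddings
case_emb = CaseAggregator()(torch.randn(2, 3, 768))
```

The case embedding would then feed the task-specific classification and survival heads described above.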

Training Data Scale

MOOZY data scale overview

MOOZY is trained entirely on public data. Stage 1 uses 77,134 slide feature grids (53,286 at 20× and 23,848 at 40×) extracted from ~1.67 billion patches across ~31.8 TB of raw WSI data. Stage 2 uses 41,089 supervised cases across 333 tasks from 56 datasets — all 32 TCGA cohorts, all 10 CPTAC cohorts, REG, BC-Therapy, BRACS, CAMELYON17, DHMC Kidney, DHMC LUAD, EBRAINS, IMP Colorectum, IMP Cervix, MBC, MUT-HET-RCC, NADT Prostate, NAT-BRCA, and PANDA. Supervision covers 205 classification and 128 survival tasks across four endpoints (OS, DSS, DFI, PFI) and 23 anatomical sites.

Results

Radar comparison

Macro-average radar comparison across slide encoders on eight held-out tasks.

Scatter comparison

Parameter count vs. macro F1. Bubble size indicates total parameters.

Frozen-feature MLP probe on eight held-out tasks; mean over 5 folds. Bold = best (including ties).

| Task | Metric | CHIEF | GigaPath | PRISM | Madeleine | TITAN | MOOZY |
|---|---|---|---|---|---|---|---|
| Residual Cancer Burden | F1 | 0.46 | 0.45 | 0.46 | 0.51 | 0.43 | **0.56** |
| | AUC | 0.60 | 0.55 | 0.58 | 0.63 | 0.58 | **0.74** |
| | Bal. Acc | 0.44 | 0.40 | 0.43 | 0.48 | 0.38 | **0.51** |
| TP53 Mutation | F1 | 0.82 | 0.76 | 0.85 | 0.84 | **0.87** | **0.87** |
| | AUC | 0.81 | 0.76 | 0.85 | 0.85 | **0.91** | 0.86 |
| | Bal. Acc | 0.83 | 0.76 | 0.84 | 0.84 | **0.88** | 0.86 |
| BAP1 Mutation | F1 | 0.86 | 0.84 | 0.80 | 0.85 | 0.84 | **0.89** |
| | AUC | 0.75 | 0.63 | 0.71 | 0.78 | **0.82** | 0.79 |
| | Bal. Acc | 0.75 | 0.66 | 0.66 | 0.75 | 0.75 | **0.78** |
| ACVR2A Mutation | F1 | 0.89 | 0.80 | 0.85 | 0.89 | 0.87 | **0.91** |
| | AUC | 0.80 | 0.74 | 0.83 | 0.76 | 0.79 | **0.91** |
| | Bal. Acc | 0.80 | 0.65 | 0.81 | 0.81 | 0.76 | **0.90** |
| Histologic Grade | F1 | 0.71 | 0.77 | 0.73 | 0.75 | 0.73 | **0.78** |
| | AUC | 0.71 | **0.77** | 0.67 | 0.74 | 0.71 | 0.75 |
| | Bal. Acc | 0.73 | **0.77** | 0.73 | 0.74 | 0.73 | **0.77** |
| KRAS Mutation | F1 | 0.77 | 0.77 | 0.72 | 0.81 | 0.80 | **0.85** |
| | AUC | 0.76 | 0.72 | 0.61 | 0.70 | **0.80** | **0.80** |
| | Bal. Acc | 0.74 | 0.76 | 0.63 | 0.77 | **0.81** | 0.79 |
| IDH Status | F1 | 0.92 | 0.94 | 0.91 | 0.92 | 0.94 | **0.97** |
| | AUC | 0.96 | 0.97 | 0.95 | 0.96 | 0.97 | **0.99** |
| | Bal. Acc | 0.92 | 0.94 | 0.91 | 0.91 | 0.94 | **0.97** |
| Treatment Response | F1 | 0.53 | 0.51 | 0.57 | 0.49 | 0.49 | **0.58** |
| | AUC | **0.70** | 0.68 | 0.69 | 0.59 | 0.60 | 0.68 |
| | Bal. Acc | 0.48 | 0.40 | **0.51** | 0.35 | 0.37 | 0.48 |
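The frozen-feature MLP probe protocol can be sketched as follows. This is illustrative only: the embeddings and labels here are synthetic stand-ins, and the probe width, training budget, and fold seed are assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score, balanced_accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 768))   # stand-in for frozen case embeddings
y = rng.integers(0, 2, size=100)  # stand-in binary task labels

f1s, bals = [], []
for tr, te in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    # Shallow MLP probe trained on frozen features (encoder is never updated)
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
    clf.fit(X[tr], y[tr])
    pred = clf.predict(X[te])
    f1s.append(f1_score(y[te], pred, average="weighted"))
    bals.append(balanced_accuracy_score(y[te], pred))

print(f"F1 {np.mean(f1s):.3f} +/- {np.std(f1s):.3f}")
```

Reporting the mean and standard deviation over the five folds matches the evaluation convention used in the table above.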

Macro-average across eight held-out tasks. Each entry averages over five MIL architectures (MeanMIL, ABMIL, CLAM, DSMIL, TransMIL).

| Metric | Backbone | UNI v2 | Phikon v2 | CONCH v1.5 | MUSK | MOOZY |
|---|---|---|---|---|---|---|
| F1 (weighted) | 0.733 | 0.716 | 0.715 | 0.746 | 0.729 | 0.801 |
| ROC-AUC (weighted) | 0.735 | 0.719 | 0.724 | 0.751 | 0.725 | 0.815 |
| Balanced Acc | 0.686 | 0.660 | 0.654 | 0.696 | 0.679 | 0.758 |
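Among the MIL aggregators averaged over in the comparison above, attention-based MIL is the canonical example. As a point of reference, the ungated ABMIL pooling of Ilse et al. (2018) can be sketched in PyTorch; the feature and hidden dimensions below are placeholders, not settings from this work.

```python
import torch
import torch.nn as nn

class ABMIL(nn.Module):
    """Ungated attention-based MIL pooling (Ilse et al., 2018):
    a = softmax(w^T tanh(V h)), bag embedding = sum_i a_i * h_i."""
    def __init__(self, dim=768, hidden=128):
        super().__init__()
        self.V = nn.Linear(dim, hidden)
        self.w = nn.Linear(hidden, 1)

    def forward(self, feats):  # feats: (n_instances, dim)
        attn = torch.softmax(self.w(torch.tanh(self.V(feats))), dim=0)  # (n, 1)
        return (attn * feats).sum(dim=0)  # weighted bag embedding: (dim,)

# Pool 500 patch features from one slide into a single bag embedding
bag = ABMIL()(torch.randn(500, 768))
```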

Macro-average across eight held-out tasks. Each stage and the case aggregator are toggled independently.

| Setting | Stage 1 | Stage 2 | Case Agg. | F1 | AUC | Bal. Acc |
|---|:---:|:---:|:---:|---|---|---|
| Stage 1 only | ✓ | ✗ | ✗ | 0.760 | 0.753 | 0.701 |
| Stage 2 only | ✗ | ✓ | ✓ | 0.748 | 0.725 | 0.701 |
| MOOZY w/o case agg. | ✓ | ✓ | ✗ | 0.771 | 0.789 | 0.729 |
| MOOZY | ✓ | ✓ | ✓ | 0.801 | 0.815 | 0.758 |

Where Does MOOZY Look?

A board-certified pathologist reviewed attention maps across 20 representative WSIs and five encoders. MOOZY achieved the lowest mean semantic gap score (1.00) and near-balanced tumor vs. non-tumor attention (2.63), suggesting broad, diagnostically relevant coverage.

Embedding Quality

Dimensionality reduction of slide embeddings from four encoders. MOOZY shows the clearest class separation on cancer-type tasks.

MOOZY embedding

MOOZY

TITAN embedding

TITAN

Madeleine embedding

Madeleine

PRISM embedding

PRISM
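Projections like the ones above can be produced with off-the-shelf t-SNE. A minimal sketch, assuming slide embeddings are available as a NumPy array (random placeholders stand in for real embeddings here, and the perplexity is an arbitrary default):

```python
import numpy as np
from sklearn.manifold import TSNE

# Placeholder: in practice, load the encoder's slide or case embeddings.
emb = np.random.default_rng(0).normal(size=(200, 768))
labels = np.repeat(np.arange(4), 50)  # e.g., four cancer-type labels

# Project 768-d embeddings to 2-D for visualization
xy = TSNE(n_components=2, init="pca", random_state=0, perplexity=30).fit_transform(emb)
# xy[:, 0], xy[:, 1] can then be scatter-plotted, colored by `labels`.
```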

Citation

```bibtex
@article{kotp2026moozy,
  title   = {MOOZY: A Patient-First Foundation Model for Computational Pathology},
  author  = {Kotp, Yousef and Trinh, Vincent Quoc-Huy and Pal, Christopher and Hosseini, Mahdi S.},
  journal = {arXiv preprint arXiv:XXXX.XXXXX},
  year    = {2026}
}
```