CheXVision-ResNet
CheXVision is a Deep Learning & Big Data university project: 14-class chest X-ray pathology detection plus binary normal/abnormal classification on the NIH Chest X-ray14 dataset (112,120 images).
Architecture
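The configuration below describes a custom residual CNN with Squeeze-Excitation (SE) channel attention. As an illustration of the SE mechanism only (plain NumPy, random weights, no claim about the actual layer sizes in this repository), the channel-reweighting step looks like:

```python
import numpy as np

def se_block(features: np.ndarray, reduction: int = 16) -> np.ndarray:
    """Squeeze-and-Excitation channel attention over a (C, H, W) feature map."""
    c, h, w = features.shape
    # Squeeze: global average pool each channel to one descriptor.
    z = features.mean(axis=(1, 2))                                 # shape (C,)
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid.
    # Weights are random here purely for illustration.
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))      # shape (C,)
    # Scale: reweight each channel of the input feature map.
    return features * s[:, None, None]

fmap = np.random.default_rng(1).standard_normal((64, 8, 8))
out = se_block(fmap)
print(out.shape)  # (64, 8, 8)
```

Because the gate `s` lies strictly in (0, 1), the block can only attenuate channels, never amplify them; the residual connection around it (not shown) preserves the original signal.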
Training Pipeline
Training Metrics
- Best validation macro AUC-ROC: 0.8008
- Best validation binary AUC-ROC: 0.7571
- Best validation binary F1: 0.6474
- Best checkpoint epoch: 60
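The macro AUC-ROC above is the unweighted mean of the 14 per-class AUCs. A minimal NumPy sketch using the rank-sum (Mann-Whitney U) formulation (`auc_roc` and `macro_auc` are illustrative names, not functions from this repository):

```python
import numpy as np

def auc_roc(labels: np.ndarray, scores: np.ndarray) -> float:
    """AUC-ROC via the rank-sum (Mann-Whitney U) formulation.
    Ties are not rank-averaged; fine for continuous model scores."""
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(labels.sum())
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def macro_auc(labels: np.ndarray, scores: np.ndarray) -> float:
    """Unweighted mean of per-class AUCs (one column per pathology)."""
    return float(np.mean([auc_roc(labels[:, k], scores[:, k])
                          for k in range(labels.shape[1])]))

labels = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])
print(auc_roc(labels, scores))  # 0.75
```

Macro averaging weights all 14 pathologies equally, so rare classes such as Hernia count as much as common ones like Infiltration.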
Training Configuration
- Repository: HlexNC/chexvision-scratch
- Dataset: HlexNC/chest-xray-14-320 · revision 44443e6ee968b3c6094b63f14a27698c40b50680
- Architecture: Custom residual CNN with Squeeze-Excitation channel attention (depth [3, 4, 6, 3]), trained from scratch with shared features and dual classification heads
- Platform: Kaggle GPU kernel (NVIDIA T4 / P100)
- Batch size: 24 × grad_accum 4 = effective batch 96
- AMP (fp16): enabled
- CLAHE preprocessing: disabled
- Label smoothing: 0.0
- Optimizer: AdamW · Scheduler: CosineAnnealingLR
- Epochs configured: 100 · Early stop patience: 15
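The CosineAnnealingLR schedule listed above follows a closed-form cosine decay over the configured 100 epochs. A sketch assuming a hypothetical base learning rate of 1e-3 (the card does not state the actual value):

```python
import math

def cosine_lr(epoch: int, t_max: int = 100, base_lr: float = 1e-3,
              eta_min: float = 0.0) -> float:
    """Cosine-annealed learning rate: starts at base_lr, decays to eta_min
    following half a cosine period over t_max epochs (no warm restarts)."""
    return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * epoch / t_max))

print(cosine_lr(0))    # 0.001 at the start
print(cosine_lr(100))  # ~0.0 when fully annealed
```

With early stopping at patience 15, training may halt before the schedule reaches its floor; the best checkpoint here landed at epoch 60, where the rate is already well below its starting value.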
Intended Use
This model is intended for research and educational work on automated chest X-ray pathology detection. It outputs two predictions per image:
- Multi-label scores — independent sigmoid probability for each of 14 NIH pathologies
- Binary score — sigmoid probability of any abnormality (Normal vs. Abnormal)
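A minimal sketch of how the two heads' raw logits map to the scores described above, assuming plain NumPy and the standard 14 NIH Chest X-ray14 label names (`postprocess` is an illustrative helper, not part of this repository):

```python
import numpy as np

# The 14 pathology labels of NIH Chest X-ray14.
PATHOLOGIES = [
    "Atelectasis", "Cardiomegaly", "Effusion", "Infiltration", "Mass",
    "Nodule", "Pneumonia", "Pneumothorax", "Consolidation", "Edema",
    "Emphysema", "Fibrosis", "Pleural_Thickening", "Hernia",
]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def postprocess(multi_logits, binary_logit):
    """Turn raw logits from the two heads into independent probabilities:
    one sigmoid per pathology (multi-label) plus one abnormality score."""
    multi = sigmoid(np.asarray(multi_logits, dtype=float))
    per_pathology = dict(zip(PATHOLOGIES, multi))
    abnormal = float(sigmoid(binary_logit))
    return per_pathology, abnormal

# Zero logits map to 0.5 probability on every head.
probs, abnormal = postprocess(np.zeros(14), 0.0)
print(abnormal)  # 0.5
```

Each pathology score is an independent sigmoid rather than a softmax, so an image can score high on several pathologies at once, consistent with multi-label ground truth.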
Limitations
- Not validated for clinical use. Predictions must not substitute professional medical judgment.
- Trained on NIH Chest X-ray14, which contains noisy radiologist annotations (patient-level labels, not lesion-level).
- Performance degrades on images from equipment, patient populations, or preprocessing pipelines that differ from the NIH training distribution.
- Reported AUC metrics are on the validation split, not the held-out test set.
CheXNet Benchmark Context
CheXNet (Rajpurkar et al., 2017), the seminal paper establishing DenseNet-121 for chest X-ray classification, reported 0.841 macro AUC-ROC on a comparable split of this dataset. The companion CheXVision-DenseNet model matches that benchmark; the from-scratch model documented here reaches 0.8008 validation macro AUC-ROC. See the CheXVision demo for live inference.
Citation
@misc{chexvision2026,
  title={CheXVision: Dual-Task Chest X-ray Classification with Custom CNN and DenseNet-121},
  author={BIG D(ATA) Team},
  year={2026},
  howpublished={\url{https://huggingface.co/HlexNC/chexvision-scratch}}
}