
MADE: A Living Benchmark for Multi-Label Text Classification with Uncertainty Quantification of Medical Device Adverse Events (ACL 2026)

Authors: Raunak Agarwal, Markus Wenzel, Simon Baur, Jonas Zimmer, George Harvey, Jackie Ma

Abstract: Machine learning in high-stakes domains such as healthcare requires not only strong predictive performance but also reliable uncertainty quantification (UQ) to support human oversight. Multi-label text classification (MLTC) is a central task in this domain, yet remains challenging due to label imbalances, dependencies, and combinatorial complexity. Existing MLTC benchmarks are increasingly saturated and may be affected by training data contamination, making it difficult to distinguish genuine reasoning capabilities from memorization in frontier language models. We introduce MADE, a living MLTC benchmark derived from medical device adverse event reports -- continuously updated with newly published reports to prevent contamination. MADE features a long-tailed distribution of hierarchical labels and enables reproducible evaluation with strict temporal splits. We use MADE to establish baselines across more than 20 encoder- and decoder-only models under fine-tuning and few-shot settings (instruction-tuned/reasoning variants, local/API-accessible). We systematically assess entropy-/consistency-based and self-verbalized UQ methods. Our results reveal clear trade-offs: smaller discriminatively fine-tuned decoders achieve the strongest head-to-tail accuracy while maintaining competitive UQ; generative fine-tuning delivers the most reliable UQ; large reasoning models improve performance on rare labels yet exhibit surprisingly weak UQ; and self-verbalized confidence is not a reliable proxy for uncertainty. Our benchmark is publicly available at this URL.
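The entropy-based UQ baselines mentioned in the abstract can be illustrated for the multi-label setting: if each label is treated as an independent Bernoulli variable with a predicted probability, the total predictive entropy is the sum of the per-label binary entropies. A minimal sketch (the label names and probabilities below are hypothetical, not taken from the benchmark):

```python
import math

def binary_entropy(p: float) -> float:
    """Entropy (in nats) of a Bernoulli variable with probability p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))

def multilabel_entropy(probs: dict[str, float]) -> float:
    """Total predictive entropy of a multi-label prediction,
    assuming independent per-label Bernoulli distributions."""
    return sum(binary_entropy(p) for p in probs.values())

# Hypothetical sigmoid outputs for three labels
probs = {"Device Breakage": 0.95, "Infection": 0.50, "Leakage": 0.05}
total = multilabel_entropy(probs)
# Confident labels (0.95, 0.05) contribute little entropy;
# the 0.50 label dominates the total uncertainty.
```

The independence assumption ignores the label dependencies the abstract highlights, which is precisely why consistency-based and other UQ methods are also compared on MADE.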

Figure 1: Label hierarchy with product and patient problems. The outer ring shows the fifty most frequent product or patient problems in the test set, grouped by their parent classes (middle ring) and grandparent classes (inner ring).

Table 1: Summary statistics

| Metric | Value |
|---|---|
| Total number of samples | 488,273 |
| Training set (2015–2023) | 298,825 |
| Validation set (1–6/2024) | 71,271 |
| Test set (7/2024–6/2025) | 118,177 |
| Truncated test set | 10,288 |
| Average tokens (cl100k_base) | ~370 |
| Average labels per sample | 8.79 |
| Unique labels | 1,154 |
| Hierarchy levels of labels | 3 |
| Minimum occurrences per label | 5 |

Figure 2: Top: Overview of the benchmarking setup, encompassing discriminative and generative language models, learning paradigms (discriminative or generative fine-tuning and few-shot prompting), and uncertainty quantification (UQ) approaches. Bottom, left: Multi-label text classification of medical device adverse events, each annotated with hierarchical product and patient problem labels. Bottom, right: UQ quality is evaluated for each model, learning paradigm, and UQ method.
