Model Fairness

Meet the people working on it!

Haonan (Echo) Zhong

Research Overview

Ensuring group fairness in deep learning models is crucial in sensitive domains like medical diagnosis, yet studies show that these models often exhibit demographic performance gaps that undermine trust. Existing debiasing methods typically require sensitive-attribute labels, introduce costly architectural changes, or improve fairness at the expense of utility.
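
To make the notion of a "demographic performance gap" concrete, here is a minimal, self-contained sketch (illustrative only, not part of the RLU framework; all names and numbers are invented for the example). It computes per-group accuracy and the largest accuracy gap between groups, which is one common way such gaps are measured:

```python
import numpy as np

def demographic_performance_gap(y_true, y_pred, groups):
    """Return per-group accuracy and the max gap between groups.

    A large gap means the model performs unevenly across
    demographic groups even if overall accuracy looks fine.
    """
    accs = {}
    for g in np.unique(groups):
        mask = groups == g
        accs[g] = float((y_pred[mask] == y_true[mask]).mean())
    gap = max(accs.values()) - min(accs.values())
    return accs, gap

# Toy data: a classifier that is ~95% accurate on group A
# but only ~80% accurate on group B (values are synthetic).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
groups = np.where(np.arange(1000) < 500, "A", "B")
flip = np.where(groups == "A",
                rng.random(1000) < 0.05,   # 5% errors for A
                rng.random(1000) < 0.20)   # 20% errors for B
y_pred = np.where(flip, 1 - y_true, y_true)

accs, gap = demographic_performance_gap(y_true, y_pred, groups)
print(accs, f"gap={gap:.3f}")  # roughly a 0.15 accuracy gap
```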

We introduce RLU, a bias mitigation framework for medical imaging that improves group fairness without requiring sensitive-attribute labels or intrusive model modifications.

Project Slides
