Model Fairness
Meet the people working on it!
Research Overview
Ensuring group fairness in deep learning models is crucial in sensitive domains like medical diagnosis, yet studies show that these models often exhibit demographic performance gaps that undermine trust. Existing debiasing methods typically require sensitive-attribute labels, introduce costly architectural changes, or improve fairness at the expense of utility.
We introduce RLU, a bias mitigation framework for medical imaging that requires neither sensitive-attribute labels nor intrusive model modifications.