Are demographically invariant models and representations in medical imaging fair?
Publication: Working paper › Preprint › Research
Standard
Are demographically invariant models and representations in medical imaging fair? / Petersen, Eike; Ferrante, Enzo; Ganz, Melanie; Feragen, Aasa.
2023.
RIS
TY - UNPB
T1 - Are demographically invariant models and representations in medical imaging fair?
AU - Petersen, Eike
AU - Ferrante, Enzo
AU - Ganz, Melanie
AU - Feragen, Aasa
PY - 2023/5/2
Y1 - 2023/5/2
N2 - Medical imaging models have been shown to encode information about patient demographics (age, race, sex) in their latent representation, raising concerns about their potential for discrimination. Here, we ask whether it is feasible and desirable to train models that do not encode demographic attributes. We consider different types of invariance with respect to demographic attributes - marginal, class-conditional, and counterfactual model invariance - and lay out their equivalence to standard notions of algorithmic fairness. Drawing on existing theory, we find that marginal and class-conditional invariance can be considered overly restrictive approaches for achieving certain fairness notions, resulting in significant predictive performance losses. Concerning counterfactual model invariance, we note that defining medical image counterfactuals with respect to demographic attributes is fraught with complexities. Finally, we posit that demographic encoding may even be considered advantageous if it enables learning a task-specific encoding of demographic features that does not rely on human-constructed categories such as 'race' and 'gender'. We conclude that medical imaging models may need to encode demographic attributes, lending further urgency to calls for comprehensive model fairness assessments in terms of predictive performance.
KW - cs.LG
KW - cs.CY
KW - eess.IV
KW - stat.ML
M3 - Preprint
BT - Are demographically invariant models and representations in medical imaging fair?
ER -
ID: 346245985
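The abstract maps the invariance notions to standard group-fairness criteria: marginal invariance of predictions corresponds to demographic parity, and class-conditional invariance to equalized odds. A minimal sketch of how these two criteria can be measured (illustrative only; the function names and the toy data are my own, not from the paper):

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Gap in positive prediction rates between demographic groups.

    A zero gap corresponds to marginal invariance of the model's
    predictions with respect to the demographic attribute.
    """
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_gap(y_pred, y_true, group):
    """Largest between-group gap in true- or false-positive rates.

    A zero gap corresponds to class-conditional invariance:
    predictions independent of the attribute given the true label.
    """
    gaps = []
    for label in (0, 1):
        mask = y_true == label
        rates = [y_pred[mask & (group == g)].mean()
                 for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Toy cohort with a binary demographic attribute and binary label.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
y_true = rng.integers(0, 2, size=1000)
y_pred = y_true.copy()  # a perfectly accurate predictor

print(demographic_parity_diff(y_pred, group))
print(equalized_odds_gap(y_pred, y_true, group))
```

Note that the perfectly accurate predictor satisfies equalized odds exactly (the gap is zero), while its demographic parity gap equals the difference in label base rates between groups, illustrating the abstract's point that enforcing marginal invariance can cost predictive performance whenever base rates differ.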