Demographic parity is satisfied when a model's predictions have the same predicted positive rate across groups. A value of 0 indicates parity across groups. Note that this definition does not depend on the true outcome; the truth argument is included in outputted metrics for consistency.

demographic_parity() is calculated as the difference between the largest and smallest value of detection_prevalence() across groups.

Demographic parity is sometimes referred to as group fairness, disparate impact, or statistical parity.

See the "Measuring Disparity" section for details on implementation.
Value

This function outputs a yardstick fairness metric function. Given a grouping variable by, demographic_parity() will return a yardstick metric function that is associated with the data-variable grouping by and a post-processor. The outputted function will first generate a set of detection_prevalence() metric values by group before summarizing across groups using the post-processing function.

The outputted function only has a data frame method and is intended to be used as part of a metric set.
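
A minimal sketch of that two-step structure (assuming the hpc_cv data from the examples below):

# the factory call returns a metric function associated with `Resample`
dp_by_resample <- demographic_parity(Resample)

# that function has a data frame method and is used via a metric set
metric_set(dp_by_resample)(hpc_cv, truth = obs, estimate = pred)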
Measuring Disparity

By default, this function takes the difference in range of detection_prevalence() .estimates across groups. That is, the maximum pair-wise disparity between groups is the return value of demographic_parity()'s .estimate.

For finer control of group treatment, construct a context-aware fairness metric with the new_groupwise_metric() function by passing a custom aggregate function:
# the actual default `aggregate` is:
diff_range <- function(x, ...) {
  diff(range(x$.estimate))
}

demographic_parity_2 <-
  new_groupwise_metric(
    fn = detection_prevalence,
    name = "demographic_parity_2",
    aggregate = diff_range
  )
In aggregate(), x is the metric_set() output with detection_prevalence() values for each group, and ... gives additional arguments (such as a grouping level to refer to as the "baseline") to pass to the function outputted by demographic_parity_2() for context.
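
As an illustration, here is a custom aggregate that relies only on the .estimate column documented above. This is a hedged sketch; mad_from_mean and demographic_parity_mad are hypothetical names, not part of yardstick:

# a hypothetical aggregate: mean absolute deviation from the across-group
# mean, rather than the default max-minus-min range
mad_from_mean <- function(x, ...) {
  mean(abs(x$.estimate - mean(x$.estimate)))
}

demographic_parity_mad <-
  new_groupwise_metric(
    fn = detection_prevalence,
    name = "demographic_parity_mad",
    aggregate = mad_from_mean
  )

Like demographic_parity(), the resulting demographic_parity_mad() takes a grouping variable and can be bundled into a metric_set().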
References
Agarwal, A., Beygelzimer, A., Dudik, M., Langford, J., & Wallach, H. (2018). "A Reductions Approach to Fair Classification." Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research. 80:60-69.
Verma, S., & Rubin, J. (2018). "Fairness definitions explained". In Proceedings of the international workshop on software fairness (pp. 1-7).
Bird, S., Dudík, M., Edgar, R., Horn, B., Lutz, R., Milan, V., ... & Walker, K. (2020). "Fairlearn: A toolkit for assessing and improving fairness in AI". Microsoft, Tech. Rep. MSR-TR-2020-32.
See also

Other fairness metrics: equal_opportunity(), equalized_odds()
Examples
library(dplyr)
data(hpc_cv)
head(hpc_cv)
#> obs pred VF F M L Resample
#> 1 VF VF 0.9136340 0.07786694 0.008479147 1.991225e-05 Fold01
#> 2 VF VF 0.9380672 0.05710623 0.004816447 1.011557e-05 Fold01
#> 3 VF VF 0.9473710 0.04946767 0.003156287 4.999849e-06 Fold01
#> 4 VF VF 0.9289077 0.06528949 0.005787179 1.564496e-05 Fold01
#> 5 VF VF 0.9418764 0.05430830 0.003808013 7.294581e-06 Fold01
#> 6 VF VF 0.9510978 0.04618223 0.002716177 3.841455e-06 Fold01
# evaluate `demographic_parity()` by Resample
m_set <- metric_set(demographic_parity(Resample))
# use output like any other metric set
hpc_cv %>%
  m_set(truth = obs, estimate = pred)
#> # A tibble: 1 × 4
#> .metric .by .estimator .estimate
#> <chr> <chr> <chr> <dbl>
#> 1 demographic_parity Resample macro 2.78e-17
# can mix fairness metrics and regular metrics
m_set_2 <- metric_set(sens, demographic_parity(Resample))
hpc_cv %>%
  m_set_2(truth = obs, estimate = pred)
#> # A tibble: 2 × 4
#> .metric .estimator .estimate .by
#> <chr> <chr> <dbl> <chr>
#> 1 sens macro 5.60e- 1 NA
#> 2 demographic_parity macro 2.78e-17 Resample
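# a custom-aggregate metric built with new_groupwise_metric() drops in the
# same way (a sketch, assuming the hypothetical `demographic_parity_mad`
# from the "Measuring Disparity" section has been defined)
m_set_3 <- metric_set(demographic_parity_mad(Resample))
hpc_cv %>%
  m_set_3(truth = obs, estimate = pred)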