pr_auc() is a metric that computes the area under the precision-recall curve. See pr_curve() for the full curve.

pr_auc(data, ...)

# S3 method for data.frame
pr_auc(data, truth, ..., estimator = NULL, na_rm = TRUE)

pr_auc_vec(truth, estimate, estimator = NULL, na_rm = TRUE, ...)



Arguments

data

A data.frame containing the truth and estimate columns.


...

A set of unquoted column names or one or more dplyr selector functions to choose which variables contain the class probabilities. If truth is binary, only 1 column should be selected. Otherwise, there should be as many columns as factor levels of truth.


truth

The column identifier for the true class results (that is a factor). This should be an unquoted column name although this argument is passed by expression and supports quasiquotation (you can unquote column names). For _vec() functions, a factor vector.


One of "binary", "macro", or "macro_weighted" to specify the type of averaging to be done. "binary" is only relevant for the two class case. The other two are general methods for calculating multiclass metrics. The default will automatically choose "binary" or "macro" based on truth.


na_rm

A logical value indicating whether NA values should be stripped before the computation proceeds (see the sketch after this argument list).


estimate

If truth is binary, a numeric vector of class probabilities corresponding to the "relevant" class. Otherwise, a matrix with as many columns as factor levels of truth. It is assumed that these are in the same order as the levels of truth.
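As a quick sketch of na_rm in action (the exact NA-propagation behavior with na_rm = FALSE is assumed from the argument description above, not shown in the original examples):

library(yardstick)
data(two_class_example)

probs <- two_class_example$Class1
probs[1] <- NA  # introduce a missing class probability

# The NA pair is stripped before the AUC is computed
pr_auc_vec(two_class_example$truth, probs, na_rm = TRUE)

# With na_rm = FALSE, the missing value should propagate and yield NA
pr_auc_vec(two_class_example$truth, probs, na_rm = FALSE)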


Value

A tibble with columns .metric, .estimator, and .estimate and 1 row of values.

For grouped data frames, the number of rows returned will be the same as the number of groups.

For pr_auc_vec(), a single numeric value (or NA).


Multiclass

Macro and macro-weighted averaging are available for this metric. The default is to select macro averaging if a truth factor with more than 2 levels is provided. Otherwise, a standard binary calculation is done. See vignette("multiclass", "yardstick") for more information.
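As an illustration of what macro averaging computes (a sketch, not yardstick's internal implementation, though the result should match pr_auc() with estimator = "macro"): compute a one-vs-all binary PR AUC for each factor level, then take the unweighted mean.

library(yardstick)
library(dplyr)

data(hpc_cv)
fold1 <- hpc_cv %>% filter(Resample == "Fold01")

lvls <- levels(fold1$obs)
per_class <- vapply(
  lvls,
  function(lvl) {
    # One-vs-all: treat `lvl` as the event (first level), pool the rest
    truth_bin <- factor(
      ifelse(fold1$obs == lvl, lvl, "other"),
      levels = c(lvl, "other")
    )
    pr_auc_vec(truth_bin, fold1[[lvl]])
  },
  numeric(1)
)

mean(per_class)  # should agree with pr_auc(fold1, obs, VF:L)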

Relevant Level

There is no common convention on which factor level should automatically be considered the "event" or "positive" result. In yardstick, the default is to use the first level. To change this, set the global option yardstick.event_first (which is set to TRUE when the package is loaded) to FALSE by running: options(yardstick.event_first = FALSE). The last level of the factor is then considered the level of interest. For multiclass extensions involving one-vs-all comparisons (such as macro averaging), this option is ignored and the "one" level is always the relevant result.
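A minimal sketch of this option in use, assuming only the yardstick.event_first option described above (in two_class_example, Class1 and Class2 hold the probabilities of the two levels of truth):

library(yardstick)
data(two_class_example)

# Default: the first level ("Class1") is the event of interest
pr_auc(two_class_example, truth, Class1)

# Treat the last level ("Class2") as the event instead, and pass
# its probability column
options(yardstick.event_first = FALSE)
pr_auc(two_class_example, truth, Class2)

options(yardstick.event_first = TRUE)  # restore the default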

See also

pr_curve() for computing the full precision-recall curve.

Other class probability metrics: average_precision(), gain_capture(), mn_log_loss(), roc_auc(), roc_aunp(), roc_aunu()


Examples

# ---------------------------------------------------------------------------
# Two class example

# `truth` is a 2 level factor. The first level is `"Class1"`, which is the
# "event of interest" by default in yardstick. See the Relevant Level
# section above.
data(two_class_example)

# Binary metrics using class probabilities take a factor `truth` column,
# and a single class probability column containing the probabilities of
# the event of interest. Here, since `"Class1"` is the first level of
# `truth`, it is the event of interest and we pass in probabilities for it.
pr_auc(two_class_example, truth, Class1)
#> # A tibble: 1 x 3
#>   .metric .estimator .estimate
#>   <chr>   <chr>          <dbl>
#> 1 pr_auc  binary         0.946
# ---------------------------------------------------------------------------
# Multiclass example

# `obs` is a 4 level factor. The first level is `"VF"`, which is the
# "event of interest" by default in yardstick. See the Relevant Level
# section above.
data(hpc_cv)

# You can use the col1:colN tidyselect syntax
library(dplyr)
hpc_cv %>%
  filter(Resample == "Fold01") %>%
  pr_auc(obs, VF:L)
#> # A tibble: 1 x 3
#>   .metric .estimator .estimate
#>   <chr>   <chr>          <dbl>
#> 1 pr_auc  macro          0.611
# Change the first level of `obs` from `"VF"` to `"M"` to alter the
# event of interest. The class probability columns should be supplied
# in the same order as the levels.
hpc_cv %>%
  filter(Resample == "Fold01") %>%
  mutate(obs = relevel(obs, "M")) %>%
  pr_auc(obs, M, VF:L)
#> # A tibble: 1 x 3
#>   .metric .estimator .estimate
#>   <chr>   <chr>          <dbl>
#> 1 pr_auc  macro          0.611
# Groups are respected
hpc_cv %>%
  group_by(Resample) %>%
  pr_auc(obs, VF:L)
#> # A tibble: 10 x 4
#>    Resample .metric .estimator .estimate
#>    <chr>    <chr>   <chr>          <dbl>
#>  1 Fold01   pr_auc  macro          0.611
#>  2 Fold02   pr_auc  macro          0.620
#>  3 Fold03   pr_auc  macro          0.689
#>  4 Fold04   pr_auc  macro          0.680
#>  5 Fold05   pr_auc  macro          0.620
#>  6 Fold06   pr_auc  macro          0.650
#>  7 Fold07   pr_auc  macro          0.607
#>  8 Fold08   pr_auc  macro          0.650
#>  9 Fold09   pr_auc  macro          0.628
#> 10 Fold10   pr_auc  macro          0.603
# Weighted macro averaging
hpc_cv %>%
  group_by(Resample) %>%
  pr_auc(obs, VF:L, estimator = "macro_weighted")
#> # A tibble: 10 x 4
#>    Resample .metric .estimator     .estimate
#>    <chr>    <chr>   <chr>              <dbl>
#>  1 Fold01   pr_auc  macro_weighted     0.746
#>  2 Fold02   pr_auc  macro_weighted     0.743
#>  3 Fold03   pr_auc  macro_weighted     0.789
#>  4 Fold04   pr_auc  macro_weighted     0.754
#>  5 Fold05   pr_auc  macro_weighted     0.737
#>  6 Fold06   pr_auc  macro_weighted     0.743
#>  7 Fold07   pr_auc  macro_weighted     0.748
#>  8 Fold08   pr_auc  macro_weighted     0.756
#>  9 Fold09   pr_auc  macro_weighted     0.711
#> 10 Fold10   pr_auc  macro_weighted     0.737
# Vector version
# Supply a matrix of class probabilities
fold1 <- hpc_cv %>%
  filter(Resample == "Fold01")

pr_auc_vec(
  truth = fold1$obs,
  matrix(
    c(fold1$VF, fold1$F, fold1$M, fold1$L),
    ncol = 4
  )
)
#> [1] 0.6109931