roc_aunu() is a multiclass metric that computes the area under the ROC curve of each class against the rest, using the uniform class distribution. This is equivalent to roc_auc(estimator = "macro").

roc_aunu(data, ...)

# S3 method for data.frame
roc_aunu(data, truth, ..., options = list(), na_rm = TRUE)

roc_aunu_vec(truth, estimate, options = list(), na_rm = TRUE, ...)

## Arguments

data A data.frame containing the truth and estimate columns.

... A set of unquoted column names or one or more dplyr selector functions to choose which variables contain the class probabilities. There should be as many columns as factor levels of truth.

truth The column identifier for the true class results (that is a factor). This should be an unquoted column name although this argument is passed by expression and supports quasiquotation (you can unquote column names). For _vec() functions, a factor vector.

options A list of named options to pass to pROC::roc() such as direction or smooth. These options should not include response, predictor, levels, or quiet.

na_rm A logical value indicating whether NA values should be stripped before the computation proceeds.

estimate A matrix with as many columns as factor levels of truth. It is assumed that these are in the same order as the levels of truth.

## Value

A tibble with columns .metric, .estimator, and .estimate and 1 row of values.

For grouped data frames, the number of rows returned will be the same as the number of groups.

For roc_aunu_vec(), a single numeric value (or NA).

## Details

Like the other ROC AUC metrics, roc_aunu() defaults to letting pROC::roc() control the direction of the computation, but you can set this yourself by passing options = list(direction = "<") (or any other allowed direction value). pROC advises setting the direction explicitly when resampling so that the AUC values are not biased upwards.
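For example, the direction can be pinned before resampling. This sketch assumes the hpc_cv data set that ships with yardstick:

```r
library(yardstick)
library(dplyr)

data(hpc_cv)

# Pin the direction so AUC values are comparable across resamples,
# rather than letting pROC::roc() choose it per fold
hpc_cv %>%
  filter(Resample == "Fold01") %>%
  roc_aunu(obs, VF:L, options = list(direction = "<"))
```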

Generally, an ROC AUC value is between 0.5 and 1, with 1 being a perfect prediction model. If your value is between 0 and 0.5, then this implies that you have meaningful information in your model, but it is being applied incorrectly because doing the opposite of what the model predicts would result in an AUC >0.5.
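The "doing the opposite" intuition can be checked with the Mann-Whitney rank-sum identity for binary AUC. The auc() helper below is a hypothetical base-R sketch for illustration only, not part of yardstick:

```r
# Hand-rolled binary AUC via the Mann-Whitney rank-sum identity (illustrative)
auc <- function(event, prob) {
  # event: logical, TRUE marks the event; prob: predicted event probability
  r <- rank(prob)
  n1 <- sum(event)
  n0 <- sum(!event)
  (sum(r[event]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

event <- c(TRUE, TRUE, FALSE, FALSE)
prob  <- c(0.1, 0.2, 0.8, 0.9) # probabilities point the wrong way

auc(event, prob)     # 0: perfectly wrong, worse than chance
auc(event, 1 - prob) # 1: reversing the predictions gives a perfect model
```

Any model with AUC a below 0.5 becomes a model with AUC 1 - a once its predictions are reversed, which is why such a value still signals usable information.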

## Relevant Level

There is no common convention on which factor level should automatically be considered the "event" or "positive" result. In yardstick, the default is to use the first level. To change this, set the global option yardstick.event_first (which defaults to TRUE when the package is loaded) to FALSE, so that the last level of the factor is treated as the level of interest: options(yardstick.event_first = FALSE). For multiclass extensions involving one-vs-all comparisons (such as macro averaging), this option is ignored and the "one" level is always the relevant result.

## Multiclass

This multiclass method for computing the area under the ROC curve uses the uniform class distribution and is equivalent to roc_auc(estimator = "macro").
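The stated equivalence can be seen directly by computing both metrics on the same data; this sketch assumes the hpc_cv data set from yardstick:

```r
library(yardstick)
library(dplyr)

data(hpc_cv)

fold1 <- hpc_cv %>%
  filter(Resample == "Fold01")

# Both calls average the one-vs-all AUCs with uniform (equal) class
# weights, so the .estimate values should agree
roc_aunu(fold1, obs, VF:L)
roc_auc(fold1, obs, VF:L, estimator = "macro")
```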

## References

Ferri, C., Hernández-Orallo, J., & Modroiu, R. (2009). "An experimental comparison of performance measures for classification". Pattern Recognition Letters. 30 (1), pp 27-38.

## See also

roc_aunp() for computing the area under the ROC curve of each class against the rest, using the a priori class distribution.

Other class probability metrics: average_precision(), gain_capture(), mn_log_loss(), pr_auc(), roc_auc(), roc_aunp()

## Examples

# Multiclass example

# obs is a 4 level factor. The first level is "VF", which is the
# "event of interest" by default in yardstick. See the Relevant Level
# section above.
data(hpc_cv)

# You can use the col1:colN tidyselect syntax
library(dplyr)
hpc_cv %>%
  filter(Resample == "Fold01") %>%
  roc_aunu(obs, VF:L)
#> # A tibble: 1 x 3
#>   .metric  .estimator .estimate
#>   <chr>    <chr>          <dbl>
#> 1 roc_aunu macro          0.871
# Change the first level of obs from "VF" to "M" to alter the
# event of interest. The class probability columns should be supplied
# in the same order as the levels.
hpc_cv %>%
  filter(Resample == "Fold01") %>%
  mutate(obs = relevel(obs, "M")) %>%
  roc_aunu(obs, M, VF:L)
#> # A tibble: 1 x 3
#>   .metric  .estimator .estimate
#>   <chr>    <chr>          <dbl>
#> 1 roc_aunu macro          0.871
# Groups are respected
hpc_cv %>%
  group_by(Resample) %>%
  roc_aunu(obs, VF:L)
#> # A tibble: 10 x 4
#>    Resample .metric  .estimator .estimate
#>    <chr>    <chr>    <chr>          <dbl>
#>  1 Fold01   roc_aunu macro          0.871
#>  2 Fold02   roc_aunu macro          0.863
#>  3 Fold03   roc_aunu macro          0.898
#>  4 Fold04   roc_aunu macro          0.874
#>  5 Fold05   roc_aunu macro          0.865
#>  6 Fold06   roc_aunu macro          0.877
#>  7 Fold07   roc_aunu macro          0.865
#>  8 Fold08   roc_aunu macro          0.873
#>  9 Fold09   roc_aunu macro          0.855
#> 10 Fold10   roc_aunu macro          0.865
# Vector version
# Supply a matrix of class probabilities
fold1 <- hpc_cv %>%
  filter(Resample == "Fold01")

roc_aunu_vec(
  truth = fold1$obs,
  matrix(
    c(fold1$VF, fold1$F, fold1$M, fold1$L),
    ncol = 4
  )
)
#> [1] 0.8714461
# ---------------------------------------------------------------------------
# Options for pROC::roc()

# Pass options via a named list and not through ...!
roc_aunu(
  hpc_cv,
  obs,
  VF:L,
  options = list(smooth = TRUE)
)
#> # A tibble: 1 x 3
#>   .metric  .estimator .estimate
#>   <chr>    <chr>          <dbl>
#> 1 roc_aunu macro          0.868