Kappa is a similar measure to accuracy(), but is normalized by the accuracy that would be expected by chance alone. This makes it especially useful when the class distribution is imbalanced, i.e. when one or more classes are far more frequent than the others.
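Concretely, for observed accuracy p_o and chance-expected accuracy p_e (computed from the marginal class frequencies), kappa = (p_o - p_e) / (1 - p_e). A minimal sketch of this computation, checked against kap_vec(); compute_kappa() is an illustrative helper, not part of yardstick:

library(yardstick)
data("two_class_example")

# Illustrative helper (not part of yardstick): Cohen's kappa from first principles
compute_kappa <- function(truth, estimate) {
  tab <- table(estimate, truth)
  n   <- sum(tab)
  p_o <- sum(diag(tab)) / n                      # observed accuracy
  p_e <- sum(rowSums(tab) * colSums(tab)) / n^2  # accuracy expected by chance
  (p_o - p_e) / (1 - p_e)
}

all.equal(
  compute_kappa(two_class_example$truth, two_class_example$predicted),
  kap_vec(two_class_example$truth, two_class_example$predicted)
)
#> [1] TRUE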

kap(data, ...)

# S3 method for data.frame
kap(data, truth, estimate, na_rm = TRUE, ...)

kap_vec(truth, estimate, na_rm = TRUE, ...)

Arguments

data

Either a data.frame containing the truth and estimate columns, or a table/matrix where the true class results should be in the columns of the table (see the sketch after this argument list).

...

Not currently used.

truth

The column identifier for the true class results (that is a factor). This should be an unquoted column name although this argument is passed by expression and supports quasiquotation (you can unquote column names). For _vec() functions, a factor vector.

estimate

The column identifier for the predicted class results (that is also a factor). As with truth this can be specified in different ways but the primary method is to use an unquoted variable name. For _vec() functions, a factor vector.

na_rm

A logical value indicating whether NA values should be stripped before the computation proceeds.
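
As noted for data, a pre-computed confusion table can be passed in place of a data frame. A minimal sketch, assuming the table method follows the convention above (true classes in the columns):

library(yardstick)
data("two_class_example")

# Predictions in rows, true classes in columns, per the convention above
cm <- table(two_class_example$predicted, two_class_example$truth)
kap(cm)  # should match kap(two_class_example, truth, predicted)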

Value

A tibble with columns .metric, .estimator, and .estimate and 1 row of values.

For grouped data frames, the number of rows returned will be the same as the number of groups.

For kap_vec(), a single numeric value (or NA).

Multiclass

Kappa extends naturally to multiclass scenarios. Because of this, macro and micro averaging are not implemented.
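In other words, both the observed and chance-expected agreement are computed from the full k x k confusion matrix, so the binary formula carries over unchanged. A sketch of the multiclass case, verified against kap_vec():

library(yardstick)
library(dplyr)
data("hpc_cv")

fold1 <- filter(hpc_cv, Resample == "Fold01")

# Same formula as the two-class case, applied to a 4 x 4 table
tab <- table(fold1$pred, fold1$obs)
n   <- sum(tab)
p_o <- sum(diag(tab)) / n
p_e <- sum(rowSums(tab) * colSums(tab)) / n^2

all.equal((p_o - p_e) / (1 - p_e), kap_vec(fold1$obs, fold1$pred))
#> [1] TRUE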

References

Cohen, J. (1960). "A coefficient of agreement for nominal scales". Educational and Psychological Measurement. 20 (1): 37-46.

See also

Other class metrics: accuracy(), bal_accuracy(), detection_prevalence(), f_meas(), j_index(), mcc(), npv(), ppv(), precision(), recall(), sens(), spec()

Examples

# Two class
data("two_class_example")
kap(two_class_example, truth, predicted)
#> # A tibble: 1 x 3
#>   .metric .estimator .estimate
#>   <chr>   <chr>          <dbl>
#> 1 kap     binary         0.675

# Multiclass
library(dplyr)
data(hpc_cv)
hpc_cv %>%
  filter(Resample == "Fold01") %>%
  kap(obs, pred)
#> # A tibble: 1 x 3
#>   .metric .estimator .estimate
#>   <chr>   <chr>          <dbl>
#> 1 kap     multiclass     0.533

# Groups are respected
hpc_cv %>%
  group_by(Resample) %>%
  kap(obs, pred)
#> # A tibble: 10 x 4
#>    Resample .metric .estimator .estimate
#>    <chr>    <chr>   <chr>          <dbl>
#>  1 Fold01   kap     multiclass     0.533
#>  2 Fold02   kap     multiclass     0.512
#>  3 Fold03   kap     multiclass     0.594
#>  4 Fold04   kap     multiclass     0.511
#>  5 Fold05   kap     multiclass     0.514
#>  6 Fold06   kap     multiclass     0.486
#>  7 Fold07   kap     multiclass     0.454
#>  8 Fold08   kap     multiclass     0.531
#>  9 Fold09   kap     multiclass     0.454
#> 10 Fold10   kap     multiclass     0.492

# Vector version
kap_vec(two_class_example$truth, two_class_example$predicted)
#> [1] 0.6748764
# kap() is symmetric: changing which level is the "relevant" one does not affect it
options(yardstick.event_first = FALSE)
kap_vec(two_class_example$truth, two_class_example$predicted)
#> [1] 0.6748764
options(yardstick.event_first = TRUE)