Class metrics evaluate hard class predictions, where both the true outcome
(truth) and the prediction (estimate) are factors. These metrics compare
the predicted classes directly against the true classes.
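Each metric also has a vector variant that takes the two factors directly. A minimal sketch using accuracy_vec(); the input vectors here are made up purely for illustration:

library(yardstick)

# truth and estimate must be factors with the same levels
truth    <- factor(c("Class1", "Class2", "Class1", "Class1"),
                   levels = c("Class1", "Class2"))
estimate <- factor(c("Class1", "Class2", "Class2", "Class1"),
                   levels = c("Class1", "Class2"))

# vector interface: returns a single numeric value
# (3 of the 4 predictions match, so this is 0.75)
accuracy_vec(truth, estimate)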
Available metrics
Metric                    Direction   Range
accuracy()                maximize    [0, 1]
bal_accuracy()            maximize    [0, 1]
detection_prevalence()    maximize    [0, 1]
f_meas()                  maximize    [0, 1]
fall_out()                minimize    [0, 1]
j_index()                 maximize    [-1, 1]
kap()                     maximize    [-1, 1]
mcc()                     maximize    [-1, 1]
miss_rate()               minimize    [0, 1]
npv()                     maximize    [0, 1]
ppv()                     maximize    [0, 1]
precision()               maximize    [0, 1]
recall()                  maximize    [0, 1]
sens()                    maximize    [0, 1]
sensitivity()             maximize    [0, 1]
spec()                    maximize    [0, 1]
specificity()             maximize    [0, 1]
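Because these metrics share a common interface, several can be bundled and computed in one call with metric_set(). A sketch, using the two_class_example data introduced in the Examples below:

library(yardstick)
data("two_class_example")

# bundle several class metrics into a single callable
class_metrics <- metric_set(accuracy, kap, mcc)

# returns a tibble with one row per metric
class_metrics(two_class_example, truth = truth, estimate = predicted)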
See also
prob-metrics for class probability metrics
ordered-prob-metrics for ordered probability metrics
vignette("metric-types") for an overview of all metric types
Examples
data("two_class_example")
head(two_class_example)
#> truth Class1 Class2 predicted
#> 1 Class2 0.003589243 0.9964107574 Class2
#> 2 Class1 0.678621054 0.3213789460 Class1
#> 3 Class2 0.110893522 0.8891064779 Class2
#> 4 Class1 0.735161703 0.2648382969 Class1
#> 5 Class2 0.016239960 0.9837600397 Class2
#> 6 Class1 0.999275071 0.0007249286 Class1
accuracy(two_class_example, truth, predicted)
#> # A tibble: 1 × 3
#> .metric .estimator .estimate
#> <chr> <chr> <dbl>
#> 1 accuracy binary 0.838
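Metrics that depend on which factor level is treated as the event, such as sens(), accept an event_level argument; a sketch (output omitted):

# treat the second factor level as the event of interest
sens(two_class_example, truth, predicted, event_level = "second")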
