Calculate the concordance correlation coefficient.

ccc(data, ...)

# S3 method for data.frame
ccc(data, truth, estimate, bias = FALSE, na_rm = TRUE, ...)

ccc_vec(truth, estimate, bias = FALSE, na_rm = TRUE, ...)

Arguments

data

A data.frame containing the truth and estimate columns.

...

Not currently used.

truth

The column identifier for the true results (that is numeric). This should be an unquoted column name, although this argument is passed by expression and supports quasiquotation (you can unquote column names). For _vec() functions, a numeric vector.

estimate

The column identifier for the predicted results (that is also numeric). As with truth, this can be specified in different ways, but the primary method is to use an unquoted variable name. For _vec() functions, a numeric vector.

bias

A logical; should the biased estimate of variance be used (as in Lin (1989))?

na_rm

A logical value indicating whether NA values should be stripped before the computation proceeds.
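A small sketch of the bias and na_rm arguments in use, assuming yardstick and its built-in solubility_test data are loaded (illustrative only, not part of the documented examples):

library(yardstick)

# bias = FALSE (the default): unbiased variance estimates
ccc_vec(solubility_test$solubility, solubility_test$prediction)

# bias = TRUE: the biased variance estimate, as in Lin (1989)
ccc_vec(solubility_test$solubility, solubility_test$prediction, bias = TRUE)

# with na_rm = FALSE, a missing value gives an NA result
x <- c(solubility_test$solubility, NA)
y <- c(solubility_test$prediction, 0)
ccc_vec(x, y, na_rm = FALSE)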

Value

A tibble with columns .metric, .estimator, and .estimate and 1 row of values.

For grouped data frames, the number of rows returned will be the same as the number of groups.

For ccc_vec(), a single numeric value (or NA).
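As a quick sketch of the two return shapes (assuming yardstick is loaded):

library(yardstick)

# data frame method: a one-row tibble
ccc(solubility_test, solubility, prediction)

# vector method: a single numeric value
ccc_vec(solubility_test$solubility, solubility_test$prediction)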

Details

ccc() is a metric of both consistency/correlation and accuracy, while metrics such as rmse() are strictly for accuracy and metrics such as rsq() are strictly for consistency/correlation.
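One way to see this distinction (a sketch, assuming yardstick and dplyr are available): adding a constant shift to the predictions leaves the correlation-based rsq() unchanged, worsens rmse(), and lowers ccc(), since ccc() is sensitive to both correlation and the mean difference.

library(yardstick)
library(dplyr)

# shift every prediction by a constant
shifted <- mutate(solubility_test, prediction = prediction + 1)

rsq(solubility_test, solubility, prediction)
rsq(shifted, solubility, prediction)    # unchanged: a constant shift does not affect correlation

rmse(solubility_test, solubility, prediction)
rmse(shifted, solubility, prediction)   # larger: the shift adds systematic error

ccc(solubility_test, solubility, prediction)
ccc(shifted, solubility, prediction)    # lower: ccc() also penalizes the mean difference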

References

Lin, L. (1989). A concordance correlation coefficient to evaluate reproducibility. Biometrics, 45 (1), 255-268.

Nickerson, C. (1997). A note on "A concordance correlation coefficient to evaluate reproducibility". Biometrics, 53(4), 1503-1507.

See also

Other numeric metrics: huber_loss_pseudo(), huber_loss(), iic(), mae(), mape(), mase(), rmse(), rpd(), rpiq(), rsq_trad(), rsq(), smape()

Other consistency metrics: rpd(), rpiq(), rsq_trad(), rsq()

Other accuracy metrics: huber_loss_pseudo(), huber_loss(), iic(), mae(), mape(), mase(), rmse(), smape()

Examples

# Supply truth and predictions as bare column names
ccc(solubility_test, solubility, prediction)
#> # A tibble: 1 x 3
#>   .metric .estimator .estimate
#>   <chr>   <chr>          <dbl>
#> 1 ccc     standard       0.934

library(dplyr)

set.seed(1234)
size <- 100
times <- 10

# create 10 resamples
solubility_resampled <- bind_rows(
  replicate(
    n = times,
    expr = sample_n(solubility_test, size, replace = TRUE),
    simplify = FALSE
  ),
  .id = "resample"
)

# Compute the metric by group
metric_results <- solubility_resampled %>%
  group_by(resample) %>%
  ccc(solubility, prediction)

metric_results
#> # A tibble: 10 x 4
#>    resample .metric .estimator .estimate
#>    <chr>    <chr>   <chr>          <dbl>
#>  1 1        ccc     standard       0.926
#>  2 10       ccc     standard       0.928
#>  3 2        ccc     standard       0.934
#>  4 3        ccc     standard       0.947
#>  5 4        ccc     standard       0.934
#>  6 5        ccc     standard       0.916
#>  7 6        ccc     standard       0.924
#>  8 7        ccc     standard       0.913
#>  9 8        ccc     standard       0.945
#> 10 9        ccc     standard       0.930

# Resampled mean estimate
metric_results %>%
  summarise(avg_estimate = mean(.estimate))
#> # A tibble: 1 x 1
#>   avg_estimate
#>          <dbl>
#> 1        0.930