CRAN release: 2022-06-06
- The `options` argument of `metrics()`, which was passed along to the pROC package, is now deprecated and no longer has any effect. This is the result of changing to an ROC curve implementation that supports case weights, but does not support any of the previous options. If you need these options, we suggest wrapping pROC yourself in a custom metric (#296).
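  For instance, a replacement can call pROC directly at the vector level. A rough sketch (the `smooth` option and the function name here are illustrative assumptions, not part of yardstick):

  ```r
  # Sketch: compute a smoothed ROC AUC by calling pROC directly, rather
  # than relying on the removed `options` pass-through
  roc_auc_smoothed_vec <- function(truth, estimate, ...) {
    roc <- pROC::roc(
      response = truth,      # factor of true classes
      predictor = estimate,  # probability of the event class
      direction = "<",
      quiet = TRUE,
      smooth = TRUE,         # an example of a pROC-only option
      ...
    )
    as.numeric(pROC::auc(roc))
  }
  ```

  Turning this into a full yardstick metric still requires the custom-metric scaffolding described in the vignette.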
- `conf_mat()` now ignores any inputs passed through `...` and warns if you try to do such a thing. Previously, those were passed on to `base::table()`, but with the addition of case weight support, `table()` is no longer used (#295).
- Fixed a small mistake in `ccc()` where the unbiased covariance wasn't being used when `bias = FALSE`.
- `j_index()` now throws a more correct warning if `0` is in the denominator when computing `sens()` internally. Additionally, in the multiclass case it now removes the levels where this occurs from the multiclass weighted average computation, which is consistent with how other metrics were updated to handle this in #118 (#265).
- Clarified some possible ambiguity in the documentation of the `data` argument for all metrics (#255).
- purrr has been removed from Suggests.
- The pROC package has been removed as a dependency (#300).
- Moved the Custom Metrics vignette to tidymodels.org (#236).
CRAN release: 2021-11-22
- `roc_curve()` now throws a more informative error if `truth` doesn't have any control or event observations.
- Removed internal hardcoding of `"dplyr_error"` to avoid issues with an upcoming dplyr 1.0.8 release (#244).
- Updated the test suite to testthat 3e (#243).
- Internal upkeep has been done to move from `rlang::warn(.subclass = )` to `rlang::warn(class = )`, since the `.subclass` argument has been deprecated (#225).
CRAN release: 2021-03-28
- New `metric_tweak()` for adjusting the default values of optional arguments in an existing yardstick metric. This is useful to quickly adjust the defaults of a metric that will be included in a `metric_set()`, especially if that metric set is going to be used for tuning with the tune package (#206, #182).
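  A minimal sketch of the intended workflow, using `f_meas()` and the built-in `two_class_example` data:

  ```r
  library(yardstick)

  # Tweak the `beta` default of f_meas() to get an F2 measure,
  # then bundle it with other metrics
  f2 <- metric_tweak("f2", f_meas, beta = 2)
  class_metrics <- metric_set(f2, accuracy)

  class_metrics(two_class_example, truth = truth, estimate = predicted)
  ```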
- `class_pred` objects from the probably package are now supported, and are automatically converted to factors before computing any metric. Note that this means that any equivocal values are materialized as `NA`.
CRAN release: 2020-07-13
- The global option `yardstick.event_first` has been deprecated in favor of the new explicit argument, `event_level`. All metric functions that previously supported changing the "event" level have gained this new argument. The global option was a historical design decision that can be classified as a case of a hidden argument. Existing code that relied on this global option will continue to work in this version of yardstick; however, you will now get a once-per-session warning requesting that you update to use the explicit `event_level` argument instead. The global option will be completely removed in a future version. To update, follow the guide below (#163):
  - `options(yardstick.event_first = TRUE)` -> `event_level = "first"` (the default)
  - `options(yardstick.event_first = FALSE)` -> `event_level = "second"`
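  For example, a call that previously relied on the global option to treat the second factor level as the event would now be written as follows (a sketch using `sens()` and the built-in `two_class_example` data):

  ```r
  library(yardstick)

  # Previously: options(yardstick.event_first = FALSE)
  # Now: request the event level explicitly, per call
  sens(two_class_example, truth = truth, estimate = predicted,
       event_level = "second")
  ```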
- The `roc_auc()` Hand-Till multiclass estimator will now ignore levels in `truth` that occur zero times in the actual data. With other methods of multiclass averaging, this usually returns an `NA`; however, ignoring levels in this manner is more consistent with the implementations in the HandTill2001 and pROC packages (#123).
- ROC curves are now always computed with `direction = "<"` when calling `pROC::roc()`. Results were previously computed incorrectly with `direction = "auto"` when most probability values were predicting the wrong class (#123).
- `mn_log_loss()` now respects the (deprecated) global option `yardstick.event_first`. However, you should instead change the relevant event level through the new `event_level` argument.
- Rcpp has been removed as a direct dependency.
CRAN release: 2020-03-17
- `roc_aunu()` and `roc_aunp()` are two new ROC AUC metrics for multiclass classifiers. These measure the AUC of each class against the rest, `roc_aunu()` using the uniform class distribution (#69) and `roc_aunp()` using the a priori class distribution (#70).
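  Both follow the standard probability-metric interface; a sketch using the built-in multiclass `hpc_cv` data, where `obs` is the true class and `VF` through `L` are the class probability columns:

  ```r
  library(yardstick)

  roc_aunu(hpc_cv, obs, VF:L)  # uniform weights across the one-vs-rest AUCs
  roc_aunp(hpc_cv, obs, VF:L)  # weights from the observed class prevalences
  ```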
CRAN release: 2020-01-23
- The `autoplot()` mosaic plot for confusion matrices had the y-axis labels backwards. This has been corrected.
CRAN release: 2019-08-26
- `iic()` is a new numeric metric for computing the index of ideality of correlation. It can be seen as a potential alternative to the traditional correlation coefficient, and has been used in QSAR models (@jyuu, #115).
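  It follows the standard numeric-metric interface; for example, with the built-in `solubility_test` data:

  ```r
  library(yardstick)

  iic(solubility_test, truth = solubility, estimate = prediction)
  ```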
- `average_precision()` is a new probability metric that can be used as an alternative to `pr_auc()`. It has the benefit of avoiding any issues of ambiguity in the case where `recall == 0` and the current number of false positives is `0`.
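  The two metrics share an interface, so comparing them is straightforward (a sketch with `two_class_example`, where `Class1` holds the event-class probability):

  ```r
  library(yardstick)

  average_precision(two_class_example, truth, Class1)
  pr_auc(two_class_example, truth, Class1)
  ```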
- `metric_set()` output now includes a `metrics` attribute which contains a list of the original metric functions used to generate the metric set.
- Each metric function now has a `direction` attribute attached to it, specifying whether to minimize or maximize the metric.
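  A quick illustration of both attributes, assuming they are read with plain `attr()`:

  ```r
  library(yardstick)

  attr(rmse, "direction")  # "minimize"
  attr(rsq, "direction")   # "maximize"

  ms <- metric_set(rmse, rsq)
  attr(ms, "metrics")      # list of the original metric functions
  ```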
- All valid arguments to `pROC::roc()` are now utilized, including those passed on to …
- `pr_curve()` now places a `1` as the first precision value, rather than `NA`. While `NA` is technically correct, since precision is undefined there, `1` is practically more correct because it generates a correct PR curve graph and, more importantly, allows `pr_auc()` to compute the correct AUC.
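  The difference is visible in the first row of the curve (a sketch using `two_class_example`):

  ```r
  library(yardstick)

  curve <- pr_curve(two_class_example, truth, Class1)
  head(curve, 1)  # the curve now starts at precision 1 rather than NA
  ```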
- Fixed a bug where `pr_curve()` could generate the wrong results in the somewhat rare case when two class probability estimates were the same but had different truth values.
CRAN release: 2019-03-08
- `metric_set()` now returns a classed function. If numeric metrics are used, a `"numeric_metric_set"` function is returned. If class or probability metrics are used, a `"class_prob_metric_set"` function is returned.
- Tests related to the fix to the `sample()` function in R 3.6 have been updated.
"micro"estimators now propagate
NAvalues through correctly.
- `roc_auc(estimator = "hand_till")` now correctly computes the metric when the column names of the probability matrix are not exactly the same as the levels of `truth`. Note that the computation still assumes that the order of the supplied probability matrix columns matches the order of `levels(truth)`, like other multiclass metrics (#86).
CRAN release: 2018-11-05
A desire to standardize the yardstick API is what drove these breaking changes. The output of each metric is now in line with tidy principles, returning a tibble rather than a single numeric. Additionally, all metrics now have a standard argument list so you should be able to switch between metrics and combine them together effortlessly.
- All metrics now return a tibble rather than a single numeric value. This format allows metrics to work with grouped data frames (for resamples). It also allows you to bundle multiple metrics together with a new function, `metric_set()`.
- For all class probability metrics, only one column can now be passed to `...` when a binary implementation is used. Those metrics will no longer select only the first column when multiple columns are supplied, and will instead throw an error.
- The `summary()` method for `conf_mat` objects now returns a tibble, to be consistent with the change to the metric functions.
- For naming consistency, `mnLogLoss()` was renamed to `mn_log_loss()`.
- `mn_log_loss()` now returns the negative log loss for the multinomial distribution.
- `na.rm` has been changed to `na_rm` in all metrics, to align with the tidymodels model implementation principles.
- Each metric now has a vector interface to go alongside the data frame interface. All vector functions end in `_vec()`. The vector interface accepts vector/matrix inputs and returns a single numeric value.
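  For example, the two interfaces of `rmse()` side by side, using the built-in `solubility_test` data:

  ```r
  library(yardstick)

  # Data frame interface: returns a one-row tibble
  rmse(solubility_test, truth = solubility, estimate = prediction)

  # Vector interface: returns a single numeric value
  rmse_vec(
    truth = solubility_test$solubility,
    estimate = solubility_test$prediction
  )
  ```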
- Multiclass support has been added for each classification metric. The support varies from one metric to the next, but generally macro and micro averaging are available for all metrics, with some metrics having specialized multiclass implementations (for example, `roc_auc()` supports the multiclass generalization presented in a paper by Hand and Till). For more information, see `vignette("multiclass", "yardstick")`.
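  A sketch of the averaging options on the built-in multiclass `hpc_cv` data:

  ```r
  library(yardstick)

  f_meas(hpc_cv, obs, pred, estimator = "macro")
  f_meas(hpc_cv, obs, pred, estimator = "micro")

  # Hand and Till's multiclass generalization of ROC AUC
  roc_auc(hpc_cv, obs, VF:L, estimator = "hand_till")
  ```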
- All metrics now work with grouped data frames. This produces a tibble with as many rows as there are groups, and is useful when used alongside resampling techniques.
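  For example, computing accuracy once per resample (a sketch using dplyr and the `hpc_cv` data, which carries a `Resample` column):

  ```r
  library(dplyr)
  library(yardstick)

  hpc_cv %>%
    group_by(Resample) %>%
    accuracy(obs, pred)  # one row per resample
  ```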
- `mape()` calculates the mean absolute percent error.
- `detection_prevalence()` calculates the number of predicted positive events relative to the total number of predictions.
- `bal_accuracy()` calculates balanced accuracy as the average of sensitivity and specificity.
- `roc_curve()` calculates receiver operating characteristic (ROC) curves and returns the results as a tibble.
- `pr_curve()` calculates precision-recall curves.
- `gain_capture()` is a measure of performance similar in spirit to AUC, but applied to a gain curve.
- `metric_set()` constructs functions that calculate multiple metrics at once (see the sketch after this list).
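A brief sketch exercising a few of the new functions above on the built-in example data:

```r
library(yardstick)

bal_accuracy(two_class_example, truth, predicted)
detection_prevalence(two_class_example, truth, predicted)

# Curves come back as tibbles
roc_curve(two_class_example, truth, Class1)

# metric_set() bundles several metrics into one callable
num_metrics <- metric_set(rmse, mae)
num_metrics(solubility_test, truth = solubility, estimate = prediction)
```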
- The infrastructure for creating metrics has been exposed to allow users to extend yardstick to work with their own metrics. You might want to do this if you want your metrics to work with grouped data frames out of the box, or if you want the standardization and error checking that yardstick already provides. See `vignette("custom-metrics", "yardstick")` for a few examples.
- A vignette describing the three classes of metrics used in yardstick has been added. It also includes a list of every metric available, grouped by class. See `vignette("metric-types", "yardstick")`.
- The error messages in yardstick should now be much more informative, with better feedback about the types of input that each metric can use and about what kinds of metrics can be used together (i.e. in `metric_set()`).
- There is now a `grouped_df` method for `conf_mat()` that returns a tibble with a list column of `conf_mat` objects.
- Each metric now has its own help page. This allows us to better document the nuances of each metric without cluttering the help pages of other metrics.