Tags: r, tidymodels, r-recipes, r-parsnip

Understanding why tune::last_fit metrics are different from summary()


Context: I am trying to evaluate a model, fitted with tune::last_fit(), on an independent dataset.

Problem: the metrics obtained with tune::collect_metrics() differ from the ones obtained using summary().

Question: what is the difference between the metric (here, R²) calculated by tune::collect_metrics() and by summary()? Which one corresponds to the R² between the observations in the independent dataset and the predictions for those observations?

Reproducible example (using the example from https://tune.tidymodels.org/reference/last_fit.html as a starting point):

library(recipes)
library(rsample)
library(parsnip)

set.seed(6735)
tr_te_split <- initial_split(mtcars)

spline_rec <- recipe(mpg ~ ., data = mtcars) %>%
  step_ns(disp)

lin_mod <- linear_reg() %>%
  set_engine("lm")

spline_res <- tune::last_fit(lin_mod, spline_rec, split = tr_te_split)
spline_res
#> # Resampling results
#> # Manual resampling 
#> # A tibble: 1 × 6
#>   splits         id               .metrics .notes   .predictions     .workflow 
#>   <list>         <chr>            <list>   <list>   <list>           <list>    
#> 1 <split [24/8]> train/test split <tibble> <tibble> <tibble [8 × 4]> <workflow>
# Here are the performance metrics for the model
tune::collect_metrics(spline_res)
#> # A tibble: 2 × 4
#>   .metric .estimator .estimate .config             
#>   <chr>   <chr>          <dbl> <chr>               
#> 1 rmse    standard       3.80  Preprocessor1_Model1
#> 2 rsq     standard       0.729 Preprocessor1_Model1

spline_res %>% 
  parsnip::extract_fit_engine() %>% # back to stats lm object
  summary()
#> 
#> Call:
#> stats::lm(formula = ..y ~ ., data = data)
#> 
#> Residuals:
#>     Min      1Q  Median      3Q     Max 
#> -3.4453 -1.1980 -0.1464  1.3246  2.8223 
#> 
#> Coefficients:
#>               Estimate Std. Error t value Pr(>|t|)
#> (Intercept)  23.087028  18.641785   1.238    0.239
#> cyl           0.326218   1.402236   0.233    0.820
#> hp            0.005969   0.024848   0.240    0.814
#> drat         -0.009576   1.597293  -0.006    0.995
#> wt           -0.902839   2.503336  -0.361    0.725
#> qsec          0.185826   0.745021   0.249    0.807
#> vs            1.492756   2.255781   0.662    0.521
#> am            4.101555   3.110797   1.318    0.212
#> gear          0.174875   1.730223   0.101    0.921
#> carb         -1.278962   1.009824  -1.267    0.229
#> disp_ns_1   -15.149506  13.649995  -1.110    0.289
#> disp_ns_2    -4.905087   6.756046  -0.726    0.482
#> 
#> Residual standard error: 2.397 on 12 degrees of freedom
#> Multiple R-squared:  0.9204, Adjusted R-squared:  0.8473 
#> F-statistic: 12.61 on 11 and 12 DF,  p-value: 5.869e-05

Created on 2023-05-22 with reprex v2.0.2

As you can see, the two R² values are not equal.


Solution

  • The statistics that you get via last_fit() are computed on holdout data. The ones from summary.lm() are not; they are computed on the same data that was used to fit the model.

    The re-use of data to assess model performance is a major pitfall when modeling. It will give you optimistic results (perhaps overwhelmingly optimistic, depending on the model). The first sketch after this list reproduces both numbers from the reprex.

    There are tons of references on this. We give a small example in the tidymodels book.

    Also, while this is not the issue here, tidymodels (and caret before it) use a different estimator for $R^2$ than the canonical one used by linear regression (see ?yardstick::rsq): the squared correlation between the observed and predicted values. It behaves better when model performance is poor (R² values near zero). The second sketch after this list compares the two estimators.
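
    To see this concretely, here is a minimal sketch (assuming the objects from the reprex above are still in the session) that recomputes both numbers: the holdout R² from the test-set predictions stored by last_fit(), and the summary() R² by re-predicting the training data.

    library(tune)
    library(yardstick)
    library(dplyr)

    # last_fit() keeps the test-set predictions; collect_predictions()
    # returns them alongside the observed outcome (mpg)
    test_preds <- collect_predictions(spline_res)

    # R² on the 8 held-out rows; this should reproduce the 0.729
    # reported by collect_metrics()
    rsq(test_preds, truth = mpg, estimate = .pred)

    # summary.lm()'s Multiple R-squared is computed on the 24 training
    # rows instead; re-predicting the training data reproduces it
    train_data <- training(tr_te_split)
    train_preds <- spline_res %>%
      extract_workflow() %>%             # fitted workflow from last_fit()
      predict(new_data = train_data) %>%
      bind_cols(train_data)

    # traditional R² (1 - SS_res / SS_tot), as used by summary.lm();
    # this should reproduce the 0.9204 above
    rsq_trad(train_preds, truth = mpg, estimate = .pred)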
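
    And to compare the two R² estimators directly, yardstick also provides the traditional definition as rsq_trad(). A short sketch, reusing test_preds from above:

    # rsq() is the squared Pearson correlation between truth and
    # estimate; rsq_trad() is the traditional 1 - SS_res / SS_tot
    # used by summary.lm(). On holdout data the two can differ:
    rsq(test_preds, truth = mpg, estimate = .pred)
    rsq_trad(test_preds, truth = mpg, estimate = .pred)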