ray, ray-tune

Checkpoint best model for a trial in ray tune


So I just ran a tune experiment and got the following output:

+--------------------+------------+-------+-------------+----------------+--------+------------+
| Trial name         | status     | loc   |          lr |   weight_decay |   loss |   accuracy |
|--------------------+------------+-------+-------------+----------------+--------+------------|
| trainable_13720f86 | TERMINATED |       | 0.00116961  |     0.00371219 | 0.673  |     0.7977 |
| trainable_13792744 | TERMINATED |       | 0.109529    |     0.0862344  | 0.373  |     0.8427 |
| trainable_137ecd98 | TERMINATED |       | 4.35062e-06 |     0.0261442  | 0.6993 |     0.7837 |
| trainable_1383f9d0 | TERMINATED |       | 1.37858e-05 |     0.0974182  | 0.4538 |     0.8428 |
| trainable_13892f72 | TERMINATED |       | 0.0335583   |     0.0403495  | 0.3399 |     0.8618 |
| trainable_138dd720 | TERMINATED |       | 0.00858623  |     0.0695453  | 0.3415 |     0.8612 |
| trainable_1395570c | TERMINATED |       | 4.6309e-05  |     0.0172459  | 0.39   |     0.8283 |
| trainable_139ce148 | TERMINATED |       | 2.32951e-05 |     0.0787076  | 0.3641 |     0.8512 |
| trainable_13a848ee | TERMINATED |       | 0.00431763  |     0.0341105  | 0.3415 |     0.8611 |
| trainable_13ad0a78 | TERMINATED |       | 0.0145063   |     0.050807   | 0.3668 |     0.8398 |
| trainable_13b3342a | TERMINATED |       | 5.96148e-06 |     0.0110345  | 0.3418 |     0.8608 |
| trainable_13bd4d3e | TERMINATED |       | 1.82617e-06 |     0.0655128  | 0.3667 |     0.8501 |
| trainable_13c45a2a | TERMINATED |       | 0.0459573   |     0.0224991  | 0.3432 |     0.8516 |
| trainable_13d561d0 | TERMINATED |       | 0.00060595  |     0.092522   | 0.3389 |     0.8623 |
| trainable_13dcb962 | TERMINATED |       | 0.000171044 |     0.0449039  | 0.3429 |     0.8584 |
| trainable_13e6fd32 | TERMINATED |       | 0.000104752 |     0.089106   | 0.3497 |     0.8571 |
| trainable_13ecd2ac | TERMINATED |       | 0.000793432 |     0.0477341  | 0.6007 |     0.8051 |
| trainable_13f27464 | TERMINATED |       | 0.0750381   |     0.0685323  | 0.3359 |     0.8616 |
| trainable_13f80b40 | TERMINATED |       | 1.3946e-06  |     0.0192844  | 0.5615 |     0.8146 |
| trainable_13fdf6e0 | TERMINATED |       | 9.4748e-06  |     0.0542356  | 0.3546 |     0.8493 |
+--------------------+------------+-------+-------------+----------------+--------+------------+

But when I look into the individual results, I find that the third trial (trainable_137ecd98), even though its final accuracy was low, had an iteration with a higher accuracy than any other trial (89.8%):

[Screenshot: per-iteration results for trainable_137ecd98, showing a peak accuracy of 89.8%]

If I want to checkpoint and report the highest accuracy reached (or the best value of some other metric) for a given trial, is the intent for the user to keep track of a best_metric for each trial and write custom checkpointing logic whenever best_metric is updated?
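
For concreteness, this is roughly the manual approach I have in mind (a minimal sketch using the function API and tune.checkpoint_dir, which may require a recent Ray version; build_model and train_one_epoch stand in for my actual training code):

    import os

    import torch  # assuming a PyTorch model, purely for illustration
    from ray import tune

    def trainable(config):
        model = build_model(config)  # placeholder for my model setup
        best_accuracy = 0.0
        for epoch in range(config["epochs"]):
            loss, accuracy = train_one_epoch(model, config)  # placeholder for one training epoch
            # Track the best accuracy seen so far and only checkpoint when it improves.
            if accuracy > best_accuracy:
                best_accuracy = accuracy
                with tune.checkpoint_dir(step=epoch) as checkpoint_dir:
                    torch.save(model.state_dict(), os.path.join(checkpoint_dir, "model.pth"))
            tune.report(loss=loss, accuracy=accuracy)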

I see there is a checkpoint_at_end option in tune.run, but wouldn't the most common use case be checkpoint_if_best, since the last training iteration for a trial is rarely the best?

Thanks!


Solution

  • If you only want to keep the single best checkpoint for each trial, you can do:

    tune.run(trainable, keep_checkpoints_num=1, checkpoint_score_attr="accuracy")
    

    If you want to keep multiple checkpoints but retrieve the best one after the experiment ends, you can do something like this:

    analysis = tune.run(...)
    # Gets the best trial based on the maximum accuracy across all training iterations.
    best_trial = analysis.get_best_trial(metric="accuracy", mode="max", scope="all")
    # Gets the best checkpoint for that trial based on accuracy.
    best_checkpoint = analysis.get_best_checkpoint(best_trial, metric="accuracy", mode="max")
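
    Putting the two together, a rough end-to-end sketch could look like this (assuming a function trainable that reports accuracy and writes checkpoints, as in the question above; the config search space, num_samples, and epochs are placeholders):

    from ray import tune

    analysis = tune.run(
        trainable,                         # reports "accuracy" and checkpoints on improvement
        config={
            "lr": tune.loguniform(1e-6, 1e-1),
            "weight_decay": tune.uniform(0.0, 0.1),
            "epochs": 10,
        },
        num_samples=20,
        keep_checkpoints_num=1,            # keep only the highest-scoring checkpoint per trial
        checkpoint_score_attr="accuracy",  # rank checkpoints by the reported accuracy
    )

    # Best trial judged by the maximum accuracy at any iteration, not just the final one.
    best_trial = analysis.get_best_trial(metric="accuracy", mode="max", scope="all")
    best_checkpoint = analysis.get_best_checkpoint(best_trial, metric="accuracy", mode="max")
    print(best_trial.config, best_checkpoint)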