Regret curves
Hint: Conveys how quickly the algorithm found the best hyperparameters.
The regret is the difference between the achieved objective and the optimal objective. Plotted as a time series, it shows how fast an optimization algorithm approaches the best objective. An equivalent way to visualize it is to use the cumulative minimums instead of the differences. Oríon plots the cumulative minimums so that the best objective found so far can be read directly from the y-axis.
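As a quick illustration of the cumulative-minimum view, here is a minimal sketch (plain NumPy, with made-up objective values) of the series such a plot draws:

import numpy as np

# Hypothetical objectives observed at each trial, in suggestion order.
objectives = np.array([10.2, 8.5, 9.1, 3.4, 5.0, 2.8, 3.1])

# The regret curve plots the cumulative minimum, so the y-axis reads
# directly as the best objective found so far.
best_so_far = np.minimum.accumulate(objectives)
print(best_so_far)  # [10.2  8.5  8.5  3.4  3.4  2.8  2.8]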
- orion.plotting.base.regret(experiment, with_evc_tree=True, order_by='suggested', verbose_hover=True, **kwargs)[source]
Make a plot to visualize the performance of the hyper-optimization process.
The x-axis contains the trials and the y-axis their respective best performance.
- Parameters
- experiment: ExperimentClient or Experiment
The orion object containing the experiment data
- with_evc_tree: bool, optional
Fetch all trials from the EVC tree. Default: True
- order_by: str
Indicates how the trials should be ordered. Acceptable options are listed below; see the attributes of Trial for more details.
- ‘suggested’: Sort by trial suggested time (default).
- ‘reserved’: Sort by trial reserved time.
- ‘completed’: Sort by trial completed time.
- verbose_hover: bool
Indicates whether to display the hyperparameter in hover tooltips. True by default.
- kwargs: dict
All other plotting keyword arguments to be passed to plotly.express.line.
- Returns
- plotly.graph_objects.Figure
- Raises
- ValueError
If no experiment is provided.
The regret plot can be generated directly from the experiment with plot.regret(), as shown in the example below.
from orion.client import get_experiment
# Specify the database where the experiments are stored. We use a local PickleDB here.
storage = dict(type="legacy", database=dict(type="pickleddb", host="../db.pkl"))
# Load the data for the specified experiment
experiment = get_experiment("random-rosenbrock", storage=storage)
fig = experiment.plot.regret()
fig
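In a notebook, the trailing fig expression renders the figure inline; in a plain script you would call fig.show() instead. Because extra keyword arguments are forwarded to plotly.express.line, standard Plotly Express options should also pass through. A sketch, assuming the pass-through accepts px.line's title argument:

fig = experiment.plot.regret(title="Regret of random search")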
The objective of the trials is overlaid as a scatter plot under the regret curve. Thanks to this we can see whether the algorithm focused its optimization close to the optimum (if all points are close to the regret curve near the end) or whether it explored far from it (if many points are far from the regret curve near the end). We can see in this example that random search, unsurprisingly, explored the space randomly. If we plot the results from the algorithm TPE applied to the same task, we see a very different pattern (see how the results were generated in the tutorial Python API basics).
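A minimal sketch of how that comparison figure could be produced, assuming the TPE results from the tutorial were saved in the same database under a hypothetical experiment name such as "tpe-rosenbrock":

# Load the TPE experiment from the same storage. The experiment name is
# hypothetical; use whatever name you gave it in the tutorial.
tpe_experiment = get_experiment("tpe-rosenbrock", storage=storage)
tpe_experiment.plot.regret()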
You can hover your mouse over a trial to see its configuration. The tooltip follows this format:
ID: <trial id>
value: <objective>
time: <time x-axis is ordered by>
parameters
<name>: <value>
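If the tooltips become too busy, the verbose_hover argument documented above turns off the hyperparameter listing:

fig = experiment.plot.regret(verbose_hover=False)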
For now, the only option to customize the regret plot is order_by, the sorting order of the trials on the x-axis. Contributions are more than welcome to increase the customizability of the plot!
By default the sorting order is suggested, the order in which the trials were suggested by the optimization algorithm. The other options are reserved, the time at which the trials started being executed, and completed, the time at which the trials were completed. In this example, ordering by suggested or by completed yields the same result, but parallelized experiments can lead to a different order of completed trials.
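For instance, to sort the x-axis by completion time instead:

fig = experiment.plot.regret(order_by="completed")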
# TODO: Add documentation for `regrets()`
Finally we save the image to serve as a thumbnail for this example. See the guide How to save for more information on image saving.
fig.write_image("../../docs/src/_static/regret_thumbnail.png")
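If you prefer an interactive artifact over a static image, a Plotly figure can also be exported as standalone HTML with fig.write_html("regret.html").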
Total running time of the script: (0 minutes 6.666 seconds)