Partial Dependencies


Conveys a broad overview of the search space and what has been explored during the experiment. Helps identify the most promising regions of the space.

The partial dependency computes the average predicted performance with respect to a set of hyperparameters, marginalizing out the other hyperparameters.
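Formally, for a subset S of hyperparameters of interest and its complement C, the partial dependency averages the surrogate model's predictions over samples of the marginalized hyperparameters (notation is ours, not Orion's):

```latex
\hat{f}_S(x_S) = \frac{1}{n} \sum_{i=1}^{n} \hat{f}\left(x_S, x_C^{(i)}\right),
\qquad x_C^{(i)} \sim \text{search space}
```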

To predict the performance on unseen sets of hyperparameters, we train a regression model on the available trial history. We build a grid of g points for a hyperparameter of interest, or a 2-D grid of g ^ 2 points for a pair of hyperparameters. We sample a group of n sets of hyperparameters from the entire space to marginalize over the other hyperparameters. For each value of the grid, we compute the prediction of the regression model on all points of the group, with the hyperparameter of interest set to that grid value. For instance, for a 1-D grid of g points and a group of n points, we compute g * n predictions.
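The procedure above can be sketched as follows. This is an illustrative reimplementation, not Orion's internal code; it uses scikit-learn's RandomForestRegressor (the default surrogate model listed below) on a toy trial history.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy trial history: 2 hyperparameters -> objective (names and data are made up)
X = rng.uniform(0, 1, size=(40, 2))
y = (X[:, 0] - 0.3) ** 2 + 0.5 * X[:, 1] + rng.normal(0, 0.01, 40)

# Surrogate regression model trained on the trial history
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

g, n = 10, 50                             # grid points and marginalization samples
grid = np.linspace(0, 1, g)               # 1-D grid for the hyperparameter of interest (dim 0)
samples = rng.uniform(0, 1, size=(n, 2))  # points used to marginalize out dim 1

# For each grid value, predict on all n sampled points with dim 0 pinned to that
# value, then average: g * n predictions in total.
dependency = []
for v in grid:
    pts = samples.copy()
    pts[:, 0] = v
    dependency.append(model.predict(pts).mean())

print(len(dependency))  # -> 10, one averaged prediction per grid point
```

The standard deviation of the per-sample predictions at each grid value is what the diagonal subplots shade around the mean curve.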

For a search space of d hyperparameters, the partial dependency plot is organized as a matrix of (d, d) subplots. The subplots on the diagonal show the partial dependency of each hyperparameter separately, while the subplots below the diagonal show the partial dependency of each pair of hyperparameters. Let’s look at a simple example to make it more concrete.

orion.plotting.base.partial_dependencies(experiment, with_evc_tree=True, params=None, smoothing=0.85, verbose_hover=True, n_grid_points=10, n_samples=50, colorscale='Blues', model='RandomForestRegressor', model_kwargs=None)

Make contour plots to visualize the search space of each combination of params.

experiment: ExperimentClient or Experiment

The Orion object containing the experiment data

with_evc_tree: bool, optional

Fetch all trials from the EVC tree. Default: True

params: list of str, optional

Indicates the parameters to include in the plots. All parameters are included by default.

smoothing: float, optional

Smoothing applied to the contour plot. 0 corresponds to no smoothing. Default is 0.85.

verbose_hover: bool

Indicates whether to display the hyperparameter values in hover tooltips. True by default.

colorscale: str, optional

The colorscale used for the contour plots. Supported values depend on the backend. Default is ‘Blues’.

n_grid_points: int, optional

Number of points in the grid to compute partial dependency. Default is 10.

n_samples: int, optional

Number of points sampled randomly from the search space to marginalize over the other hyperparameters when computing the partial dependency. Default is 50.

model: str

Name of the regression model to use. Can be one of:

- AdaBoostRegressor
- BaggingRegressor
- ExtraTreesRegressor
- GradientBoostingRegressor
- RandomForestRegressor (default)

model_kwargs: dict, optional

Arguments for the regressor model.

Raises ValueError

If no experiment is provided.

The partial dependencies plot can be executed directly from the experiment with plot.partial_dependencies() as shown in the example below.

from orion.client import get_experiment

# flake8: noqa

# Specify the database where the experiments are stored. We use a local PickleDB here.
storage = dict(type="legacy", database=dict(type="pickleddb", host="../db.pkl"))

# Load the data for the specified experiment
experiment = get_experiment("2-dim-exp", storage=storage)
fig = experiment.plot.partial_dependencies()

For the plots on the diagonal, the y-axis is the objective and the x-axis is the value of the corresponding hyperparameter. For the contour plots below the diagonal, the y-axis and x-axis are the values of the corresponding hyperparameters labelled on the left and at the bottom. The objective is represented as a color gradient in the contour plots. The light blue area in the plots on the diagonal represents the standard deviation of the predicted objective when varying the other hyperparameters over the search space. The black dots represent the trials in the current history of the experiment. If you hover your cursor over a dot, you will see the configuration of the corresponding trial in this format:

ID: <trial id>
value: <objective>
time: <completed time>
  <name>: <value>

Even for a simple 2-D search space, the partial dependency is very useful. In this example we see very clearly the optimal regions for both hyperparameters. We can also see that the optimal region for the learning rate is larger when the dropout is low, and narrower when the dropout approaches 0.5.


The simple example involved only 2 hyperparameters, but typical search spaces can be much larger. The partial dependency plot becomes hard to read with more than 3-5 hyperparameters, depending on the size of your screen. With a fixed width like in this documentation, 5 hyperparameters are impossible to read, as you can see below. (Data coming from the tutorial Checkpointing trials)

experiment = get_experiment("hyperband-cifar10", storage=storage)

You can select the hyperparameters to show with the params argument.

experiment.plot.partial_dependencies(params=["gamma", "learning_rate"])

Grid resolution

The grid used for the partial dependency can be more or less coarse. Coarser grids will be faster to compute.

import time

experiment = get_experiment("2-dim-exp", storage=storage)