Population Based Training

class orion.algo.pbt.pbt.PBT(space, seed=None, population_size=50, generations=10, exploit=None, explore=None, fork_timeout=60)[source]

Population Based Training algorithm

Population based training is an evolutionary algorithm that evolves trials from low fidelity levels to high fidelity levels (e.g. number of epochs). For a population of size m, it first samples m trials at the lowest fidelity level. When trials are completed, it decides based on the exploit configuration whether a trial should be promoted to the next fidelity level or whether another trial should be selected instead and forked. When a trial is forked, new hyperparameters are selected based on the trial’s hyperparameters and the explore configuration. The original trial’s working_dir is then copied over to the new trial’s working_dir so that the user script can resume execution from the model parameters of the original trial.

It is important that the weights of the models trained for each trial are saved in the corresponding directory at path trial.working_dir. The file name does not matter. The entire directory is copied to a new trial.working_dir when PBT selects a good model and explores new hyperparameters. The new trial can be resumed by the user by loading the weights found in the freshly copied new_trial.working_dir, and saving them back at the same path at the end of the trial’s execution. To access trial.working_dir from Oríon’s commandline API, see the documentation at https://orion.readthedocs.io/en/stable/user/script.html#command-line-templating. To access trial.working_dir from Oríon’s Python API, set the argument trial_arg="trial" when executing the method orion.client.experiment.ExperimentClient.workon().
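
A minimal sketch of a user script that follows this convention is shown below. The checkpoint file name and the train_one_generation helper are hypothetical, and any serialization mechanism works as long as the files live under trial.working_dir.

import os

import torch  # assuming a PyTorch model; any serialization mechanism works


def run_trial(trial, model, optimizer):
    # PBT copies the parent trial's working_dir into this trial's working_dir,
    # so a checkpoint may already be present when the script starts.
    checkpoint_path = os.path.join(trial.working_dir, "checkpoint.pt")
    if os.path.exists(checkpoint_path):
        state = torch.load(checkpoint_path)
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optimizer"])

    # Hypothetical helper that trains for one generation using trial.params.
    objective = train_one_generation(model, optimizer, trial.params)

    # Save the weights back at the same path so PBT can copy them on the next fork.
    torch.save(
        {"model": model.state_dict(), "optimizer": optimizer.state_dict()},
        checkpoint_path,
    )
    return objective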

The number of fidelity levels is determined by the argument generations. The lowest and highest fidelity levels, as well as the distribution of levels in between, are determined by the dimension of the search space that has a fidelity(low, high, base) prior, where base is the logarithm base of the dimension. The original PBT algorithm uses a base of 1.
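
For illustration, a search space with such a fidelity dimension could be defined as follows (dimension names and bounds are arbitrary):

# Hypothetical search space: `epochs` is the fidelity dimension.
# With generations=10 and base=1, PBT splits the [1, 100] epoch budget
# into 10 evenly spaced fidelity levels.
space = {
    "epochs": "fidelity(1, 100, base=1)",
    "lr": "loguniform(1e-5, 1e-1)",
}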

PBT will try to return as many trials as possible when calling suggest(num), up to num. Once population_size trials have been sampled and more trials are requested, it will try to generate new trials by promoting or forking existing trials in a queue. This queue is filled when calling observe(trials) on completed or broken trials.

If trials are broken at the lowest fidelity level, they are ignored and do not count towards the population size, so that PBT can sample additional trials to reach population_size completed trials at the lowest fidelity. If a trial is broken at a higher fidelity, the original trial leading to the broken trial is examined again for exploit and explore. If the broken trial was the result of a fork, then we backtrack to the trial that was dropped during exploit in favor of the forked trial. If the broken trial was a promotion, then we backtrack to the original trial that was promoted.

For more information on the algorithm, see original paper at https://arxiv.org/abs/1711.09846.

Jaderberg, Max, et al. “Population based training of neural networks.” arXiv preprint, arXiv:1711.09846 (2017).

Parameters
space: `orion.algo.space.Space`

Optimisation space with priors for each dimension.

seed: None, int or sequence of int

Seed for the random number generator used to sample new trials. Default: None

population_size: int, optional

Size of the population. No trial will be continued to the next fidelity level until population_size trials have been executed at the lowest fidelity. If a trial is broken during execution at the lowest fidelity, the algorithm will sample a new trial, keeping the population of non-broken trials at population_size. For efficiency, it is better to run fewer workers than population_size. Default: 50.

generations: int, optional

Number of generations, from the lowest fidelity to the highest one. This determines how many branching points occur during the execution of PBT. Default: 10

exploit: dict or None, optional

Configuration for a pbt.exploit.BaseExploit object that determines whether a trial should be exploited or not. If None, the default configuration is a PipelineExploit with BacktrackExploit and TruncateExploit.

explore: dict or None, optional

Configuration for a pbt.explore.BaseExplore object that returns new parameter values for exploited trials. If None, the default configuration is a PipelineExplore with ResampleExplore and PerturbExplore. See the configuration sketch after this parameter list.

fork_timeout: int, optional

Maximum amount of time in seconds that an attempt to mutate a trial may take; beyond this, algorithm.suggest() will raise SuggestionTimeout. Default: 60
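
As a rough sketch, and assuming space is an orion.algo.space.Space built beforehand, a PBT instance overriding the default exploit and explore pipelines could be configured as follows (values are illustrative only):

from orion.algo.pbt.pbt import PBT

# Illustrative configuration only; the defaults are usually a reasonable start.
pbt = PBT(
    space,
    population_size=50,
    generations=10,
    exploit={
        "of_type": "PipelineExploit",
        "exploit_configs": [
            {"of_type": "BacktrackExploit", "min_forking_population": 5},
            {"of_type": "TruncateExploit", "truncation_quantile": 0.8},
        ],
    },
    explore={
        "of_type": "PipelineExplore",
        "explore_configs": [
            {"of_type": "ResampleExplore", "probability": 0.2},
            {"of_type": "PerturbExplore", "factor": 1.2},
        ],
    },
)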

Notes

It is important that the experiment using this algorithm has a properly set working directory. The experiment’s working directory serves as the base for the trials’ working directories.

The trial’s working directory is trial.working_dir. This is where the weights of the model should be saved. Using trial.hash_params to determine a unique working directory for the trial would result in working in a different directory than the one copied by PBT, hence missing the copied model parameters.

Attributes
is_done

Is done when population_size trials at the highest fidelity level are completed.

requires_type
rng

Random Number Generator

space

Return transformed space of PBT

state_dict

Return a state dict that can be used to reset the state of the algorithm.

Methods

observe(trials)

Observe the trials and queue those available for promotion or forking.

register(trial)

Save the trial as one suggested or observed by the algorithm

seed_rng(seed)

Seed the state of the random number generator.

set_state(state_dict)

Reset the state of the algorithm based on the given state_dict

suggest(num)

Suggest a number of new sets of parameters.

property is_done

Is done when population_size trials at the highest fidelity level are completed.

observe(trials)[source]

Observe the trials and queue those available for promotion or forking.

Parameters
trials: list of ``orion.core.worker.trial.Trial``

Trials from an orion.algo.space.Space.

register(trial)[source]

Save the trial as one suggested or observed by the algorithm

The trial is additionally saved in the lineages object of PBT.

Parameters
trial: ``orion.core.worker.trial.Trial``

Trial from an orion.algo.space.Space.

property rng

Random Number Generator

seed_rng(seed)[source]

Seed the state of the random number generator.

Parameters
seed: int

Integer seed for the random number generator.

set_state(state_dict)[source]

Reset the state of the algorithm based on the given state_dict

property space

Return transformed space of PBT

property state_dict

Return a state dict that can be used to reset the state of the algorithm.

suggest(num)[source]

Suggest a number of new sets of parameters.

PBT will try to sample up to population_size trials at the lowest fidelity level. If more trials are required, it will try to promote or fork trials based on the queue of available trials observed. (See the sketch after the Returns section below.)

Parameters
num: int

Number of points to suggest. The algorithm may return fewer than the number of points requested.

Returns
list of trials

A list of trials representing values suggested by the algorithm.
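
For illustration only, the loop normally handled by Oríon’s worker (orion.client.experiment.ExperimentClient.workon) behaves roughly like the following sketch; evaluate is a hypothetical function that runs the user script and reports the objective.

# Simplified sketch of the suggest/observe loop; in practice this is run by
# Oríon's worker, not written by the user.
while not pbt.is_done:
    trials = pbt.suggest(num=4)            # may return fewer than 4 trials
    for trial in trials:
        completed_trial = evaluate(trial)  # hypothetical execution of the user script
        pbt.observe([completed_trial])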

LineageNode

class orion.algo.pbt.pbt.LineageNode(trial, parent=None)[source]

Lineage node

The lineage node is based on orion.core.utils.tree.TreeNode. It provides additional methods to help represent lineages for PBT, in particular, fork, set_jump, get_true_ancestor and get_best_trial.

A lineage node can be connected to a parent and children, like a typical TreeNode, but also to jumps and a base. The jumps and base represent the connection between nodes when PBT drops a trial and forks another one instead. In such a case, the dropped trial refers to the new trial (the forked one) through jumps (it can refer to many if the new trials crashed and required a rollback), and the forked trial refers to the dropped one through base (it can only refer to one).
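
A minimal sketch of how these relations are built; trial_a, trial_b and forked_trial are assumed to be orion.core.worker.trial.Trial objects created elsewhere, and only the lineage bookkeeping is shown.

from orion.algo.pbt.pbt import LineageNode

node_a = LineageNode(trial_a)      # root of one lineage
node_b = LineageNode(trial_b)      # root of another lineage

# trial_b is exploited: fork() copies trial_b.working_dir to
# forked_trial.working_dir (the directory must exist on disk) and
# returns the new child node.
forked_node = node_b.fork(forked_trial)

# trial_a was dropped in favor of the fork: record the jump,
# which also sets forked_node.base = node_a.
node_a.set_jump(forked_node)

# Best completed trial along the true ancestors of the forked node.
best = forked_node.get_best_trial()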

Parameters
trial: ``orion.core.worker.trial.Trial``

The trial to represent with the lineage node.

parent: LineageNode, optional

The parent node for this lineage node. Default: None, that is, no parent.

Attributes
base

Base trial that was dropped in favor of this forked trial, if this trial resulted from a fork.

jumps

New trials generated from forks when dropping this node.

tree_name

Name of the node for pretty printing.

Methods

fork(new_trial)

Fork the trial to the new one.

get_best_trial()

Return best trial on the path from root up to this node.

get_true_ancestor()

Return the base if the current trial is the result of a fork, otherwise return the parent if it has one, otherwise return None.

register(trial)

Save the trial object.

set_jump(node)

Set the jump to the given node

property base

Base trial that was dropped in favor of this forked trial, if this trial resulted from a fork.

fork(new_trial)[source]

Fork the trial to the new one.

A new lineage node referring to new_trial will be created and added as a child of the current node.

The working directory of the current trial, trial.working_dir, will be copied to new_trial.working_dir.

Parameters
new_trial: ``orion.core.worker.trial.Trial``

A new trial that is a child of the current one.

Returns
LineageNode

LineageNode referring to new_trial

Raises
RuntimeError

If the working directories of the two trials are identical. This should never happen since working_dir is inferred from a hash of the trial parameters; identical working directories would imply that different trials have identical parameters.

get_best_trial()[source]

Return best trial on the path from root up to this node.

The path followed is through true ancestors, that is, looking at base if the current node is the result of a fork, otherwise looking at the parent.

Only leaf node trials may be incomplete. If there is only one node in the tree and its trial is not completed, None is returned instead of a trial object.

Returns
None

Only one node in the tree and it is not completed.

orion.core.worker.trial.Trial

Trial with best objective (lowest).

get_true_ancestor()[source]

Return the base if the current trial is the result of a fork, otherwise return the parent if it has one, otherwise return None.

property jumps

New trials generated from forks when dropping this node.

register(trial)[source]

Save the trial object.

Register copies the trial object so that any external modifications to it will not impact the internal representation of the lineage node.

set_jump(node)[source]

Set the jump to the given node

This will also have the effect of setting node.base = self.

Parameters
node: LineageNode

Node to refer to as the jump target for the current node.

Raises
RuntimeError

If the given node already has a base.

property tree_name

Name of the node for pretty printing.

Lineages

class orion.algo.pbt.pbt.Lineages[source]

Lineages of trials for workers in PBT

This class regroups all lineages of trials generated by PBT for a given experiment.

Each lineage is a path from a leaf trial (highest fidelity level) up to the root (lowest fidelity level). Multiple lineages can fork from the same root, forming a tree. A Lineages object may reference multiple trees of lineages. Iterating over a Lineages object iterates over the roots of these trees.
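
A minimal sketch of the bookkeeping, assuming trials is a list of Trial objects at the lowest fidelity and new_trial is a trial forked from trials[1] (working directories must exist on disk for fork to copy them):

from orion.algo.pbt.pbt import Lineages

lineages = Lineages()
for trial in trials:
    lineages.add(trial)                      # each trial becomes the root of a tree

lineages.fork(trials[1], new_trial)          # new_trial becomes a child of trials[1]
lineages.set_jump(trials[0], new_trial)      # trials[0] was dropped in favor of new_trial

for root in lineages:                        # iteration yields the root lineage nodes
    print(root.tree_name)

elites = lineages.get_elites()               # best trial of each lineage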

Methods

add(trial)

Add a trial to the lineages

fork(base_trial, new_trial)

Fork a base trial to a new one.

get_elites([max_depth])

Get best trials of each lineage

get_lineage(trial)

Get the lineage node corresponding to a given trial.

get_trials_at_depth(trial_or_depth)

Return the trials of all lineages at a given depth

register(trial)

Add or save the trial in the Lineages

set_jump(base_trial, new_trial)

Set a jump between two trials

add(trial)[source]

Add a trial to the lineages

If the trial is already in the lineages, this will only return the corresponding lineage node. Otherwise, a new lineage node will be created and added as a root.

Parameters
trial: ``orion.core.worker.trial.Trial``

Trial from an orion.algo.space.Space.

Returns
orion.algo.pbt.pbt.LineageNode

The lineage node for the given trial.

fork(base_trial, new_trial)[source]

Fork a base trial to a new one.

The base trial should already be registered in the Lineages

Parameters
base_trial: ``orion.core.worker.trial.Trial``

The base trial that will be the parent lineage node.

new_trial: ``orion.core.worker.trial.Trial``

The new trial that will be the child lineage node.

Raises
KeyError

If the base trial is not already registered in the Lineages

get_elites(max_depth=None)[source]

Get best trials of each lineage

Each lineage is a path from a leaf to the root. When there is a fork, the path followed is not from the child (new trial) to the parent (forked trial), but rather to the base trial (the dropped trial). This represents the path taken by the sequence of trial executions within a worker. It also avoids having duplicate elite trials in different lineages.

Best trials are looked for only up to max_depth.

Parameters
max_depth: int or ``orion.core.worker.trial.Trial``, optional

The maximum depth at which to look for best trials. It can be an int representing the depth directly, or a trial, from which the depth will be inferred. If a trial, it should be registered in the Lineages. Default: None, that is, no maximum depth.

get_lineage(trial)[source]

Get the lineage node corresponding to a given trial.

Parameters
trial: ``orion.core.worker.trial.Trial``

The trial for which the function should return the corresponding lineage node.

Raises
KeyError

If the trial is not already registered in the Lineages

get_trials_at_depth(trial_or_depth)[source]

Return the trials of all lineages at a given depth

Parameters
trial_or_depth: int or ``orion.core.worker.trial.Trial``

If an int, this represents the depth directly. If a trial, the depth will be inferred from it. This trial should be registered in the Lineages.

Raises
KeyError

If the depth is inferred from a trial that is not already registered in the Lineages

register(trial)[source]

Add or save the trial in the Lineages

If the trial is not already in the Lineages, it is added as a root. Otherwise, the corresponding lineage node is updated with the given trial object.

Parameters
trial: ``orion.core.worker.trial.Trial``

The trial to register.

set_jump(base_trial, new_trial)[source]

Set a jump between two trials

This jump is set to represent the relation between the base trial and the new trial. This means the base trial was dropped during exploit and the new trial is the result of a fork from another trial selected during exploit.

Both trials should already be registered in the Lineages.

Parameters
base_trial: ``orion.core.worker.trial.Trial``

The base trial that was dropped.

new_trial: ``orion.core.worker.trial.Trial``

The new trial that was forked.

Raises
KeyError

If the base trial or the new trial are not already registered in the Lineages.

Exploit classes for Population Based Training

BaseExploit

class orion.algo.pbt.exploit.BaseExploit[source]

Abstract class for Exploit in orion.algo.pbt.pbt.PBT

The exploit class is responsible for deciding whether the Population Based Training algorithm should continue training a trial configuration at the next fidelity level or should rather fork from another trial configuration.

This class is expected to be stateless and serve as a configurable callable object.
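
As an illustration of this contract, a minimal custom exploit (hypothetical, not part of Oríon) that always promotes the trial could look like the following sketch:

from orion.algo.pbt.exploit import BaseExploit


class AlwaysPromoteExploit(BaseExploit):
    """Hypothetical exploit that never forks: the trial is always promoted."""

    def __call__(self, rng, trial, lineages):
        # Returning the trial that was passed in signals that PBT should
        # continue it with the same parameters at the next fidelity level.
        return trial

    @property
    def configuration(self):
        # Identifies this exploit so it can be rebuilt from configuration.
        return {"of_type": "alwayspromoteexploit"}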

Attributes
configuration

Configuration of the exploit object

Methods

__call__(rng, trial, lineages)

Execute exploit

__call__(rng, trial, lineages)[source]

Execute exploit

The method receives the current trial under examination and all lineages of population based training. It must then decide whether the trial should be promoted (continued at a higher fidelity) or whether another trial should be forked instead.

Parameters
rng: numpy.random.Generator

A random number generator. It is not contained in BaseExploit because the exploit class must be stateless.

trial: Trial

The orion.core.worker.trial.Trial that is currently under examination.

lineages: Lineages

All orion.algo.pbt.pbt.Lineages created by the population based training algorithm that is using this exploit class.

Returns
None

The exploit class signals that there are not enough completed trials in the lineages to make a decision for the current trial.

Trial

If the returned trial is the same as the one received as an argument, it means that population based training should continue with the same parameters. If another trial from the lineages is returned, it means that population based training should try to explore new parameters.

property configuration

Configuration of the exploit object

PipelineExploit

class orion.algo.pbt.exploit.PipelineExploit(exploit_configs)[source]

Pipeline of BaseExploit objects

The pipeline executes the BaseExploit objects sequentially. If one object returns None, the pipeline stops and returns None. Likewise, if one object returns a trial different from the one passed, the pipeline stops and returns this trial. Otherwise, if all BaseExploit objects return the same trial as the one passed to the pipeline, the pipeline returns it.

Parameters
exploit_configs: list of dict

List of dictionary representing the configurations of BaseExploit children.

Examples

>>> PipelineExploit(
    exploit_configs=[
        {'of_type': 'BacktrackExploit'},
        {'of_type': 'TruncateExploit'}
    ])
Attributes
configuration

Configuration of the exploit object

Methods

__call__(rng, trial, lineages)

Execute exploit objects sequentially

__call__(rng, trial, lineages)[source]

Execute exploit objects sequentially

If one object returns None, the pipeline stops and returns None. Likewise, if one object returns a trial different from the one passed, the pipeline stops and returns this trial. Otherwise, if all BaseExploit objects return the same trial as the one passed to the pipeline, the pipeline returns it.

Parameters
rng: numpy.random.Generator

A random number generator. It is not contained in BaseExploit because the exploit class must be stateless.

trial: Trial

The orion.core.worker.trial.Trial that is currently under examination.

lineages: Lineages

All orion.algo.pbt.pbt.Lineages created by the population based training algorithm that is using this exploit class.

Returns
None

The exploit class signals that there are not enough completed trials in the lineages to make a decision for the current trial.

Trial

If the returned trial is the same as the one received as an argument, it means that population based training should continue with the same parameters. If another trial from the lineages is returned, it means that population based training should try to explore new parameters.

property configuration

Configuration of the exploit object

TruncateExploit

class orion.algo.pbt.exploit.TruncateExploit(min_forking_population=5, truncation_quantile=0.8, candidate_pool_ratio=0.2)[source]

Truncate Exploit

If the given trial’s objective is above the truncation_quantile quantile of all other trials that have reached the same fidelity level, then a new candidate trial is selected for forking. The new candidate is randomly selected from the pool of the best candidate_pool_ratio% of the available trials at the same fidelity level.

If fewer than min_forking_population trials have reached the same fidelity level as the passed trial, then None is returned to signal that this trial should be reconsidered later, once more trials are completed at this fidelity level.

Parameters
min_forking_population: int, optional

Minimum number of trials that should be completed up to the fidelity level of the current trial. TruncateExploit will return None when this requirement is not met. Default: 5

truncation_quantile: float, optional

If the passed trial’s objective is above the truncation_quantile quantile, then another candidate is considered for forking. Default: 0.8

candidate_pool_ratio: float, optional

When choosing another candidate for forking, it is randomly selected from the best candidate_pool_ratio% of the available trials. Default: 0.2
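
For instance, a stricter configuration than the default (values illustrative only) could be built as follows:

from orion.algo.pbt.exploit import TruncateExploit

# Fork only trials whose objective is above the 0.9 quantile (the worst 10%),
# replacing them with a random pick among the best 30% at the same fidelity,
# and only once at least 10 trials have reached that fidelity.
exploit = TruncateExploit(
    min_forking_population=10,
    truncation_quantile=0.9,
    candidate_pool_ratio=0.3,
)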

Attributes
configuration

Configuration of the exploit object

Methods

__call__(rng, trial, lineages)

Select other trial if current one not good enough

__call__(rng, trial, lineages)[source]

Select other trial if current one not good enough

If the given trial’s objective is above the self.truncation_quantile quantile of all other trials that have reached the same fidelity level, then a new candidate trial is selected for forking. The new candidate is randomly selected from the pool of the best self.candidate_pool_ratio% of the available trials at the same fidelity level.

If fewer than self.min_forking_population trials have reached the same fidelity level as the passed trial, then None is returned to signal that this trial should be reconsidered later, once more trials are completed at this fidelity level.

Parameters
rng: numpy.random.Generator

A random number generator. It is not contained in BaseExploit because the exploit class must be stateless.

trial: Trial

The orion.core.worker.trial.Trial that is currently under examination.

lineages: Lineages

All orion.algo.pbt.pbt.Lineages created by the population based training algorithm that is using this exploit class.

Returns
None

The exploit class signals that there are not enough completed trials in the lineages to make a decision for the current trial.

Trial

If the returned trial is the same as the one received as an argument, it means that population based training should continue with the same parameters. If another trial from the lineages is returned, it means that population based training should try to explore new parameters.

property configuration

Configuration of the exploit object

BacktrackExploit

class orion.algo.pbt.exploit.BacktrackExploit(min_forking_population=5, truncation_quantile=0.8, candidate_pool_ratio=0.2)[source]

Backtracking Exploit

This exploit is inspired by PBT with backtracking as proposed in [1]. Instead of using all trials at the same fidelity level as in TruncateExploit, it selects the best trial from each lineage (worker), one per lineage. The objective of the best trial is compared to the objective of the trial under analysis, and if the ratio is higher than some threshold, the current trial is not promoted. A trial from the pool of best trials is then selected randomly.

The backtracking threshold defined in [1] is however unstable and causes a division by zero when the best candidate trial has an objective of 0. Also, if trials were selected at any fidelity level, trials at low fidelity would likely always be dropped in favor of the best trials at high fidelity. This class therefore uses a quantile threshold instead of the ratio from [1] to determine whether a trial should be continued at the next fidelity level. The candidates for forking are selected from the best trials of all running lineages (workers), as proposed in [1], but limited to trials up to the fidelity level of the current trial under analysis.

[1] Zhang, Baohe, Raghu Rajan, Luis Pineda, Nathan Lambert, André Biedenkapp, Kurtland Chua, Frank Hutter, and Roberto Calandra. “On the importance of hyperparameter optimization for model-based reinforcement learning.” In International Conference on Artificial Intelligence and Statistics, pp. 4015-4023. PMLR, 2021.

Methods

__call__(rng, trial, lineages)

Select other trial if current one not good enough

__call__(rng, trial, lineages)[source]

Select other trial if current one not good enough

If the given trial’s objective is above the self.truncation_quantile quantile of all other best trials with a lower or equal fidelity level, then a new candidate trial is selected for forking. The new candidate is randomly selected from the pool of the best self.candidate_pool_ratio% of these best trials with a lower or equal fidelity level. See the class description for the rationale.

If fewer than self.min_forking_population trials have reached the same fidelity level as the passed trial, then None is returned to signal that this trial should be reconsidered later, once more trials are completed at this fidelity level.

Parameters
rng: numpy.random.Generator

A random number generator. It is not contained in BaseExploit because the exploit class must be stateless.

trial: Trial

The orion.core.worker.trial.Trial that is currently under examination.

lineages: Lineages

All orion.algo.pbt.pbt.Lineages created by the population based training algorithm that is using this exploit class.

Returns
None

The exploit class signals that there are not enough completed trials in the lineages to make a decision for the current trial.

Trial

If the returned trial is the same as the one received as an argument, it means that population based training should continue with the same parameters. If another trial from the lineages is returned, it means that population based training should try to explore new parameters.

Explore classes for Population Based Training

BaseExplore

class orion.algo.pbt.explore.BaseExplore[source]

Abstract class for Explore in orion.algo.pbt.pbt.PBT

The explore class is responsible for proposing new parameters for a given trial and space.

This class is expected to be stateless and serve as a configurable callable object.
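
As an illustration of this contract, a minimal custom explore (hypothetical, not part of Oríon) that returns the parameters unchanged could look like the following sketch:

from orion.algo.pbt.explore import BaseExplore


class NoOpExplore(BaseExplore):
    """Hypothetical explore that proposes no change to the parameters."""

    def __call__(self, rng, space, params):
        # Returning the params unchanged signals that no exploration is proposed.
        return params

    @property
    def configuration(self):
        # Identifies this explore so it can be rebuilt from configuration.
        return {"of_type": "noopexplore"}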

Attributes
configuration

Configuration of the explore object

Methods

__call__(rng, space, params)

Execute explore

__call__(rng, space, params)[source]

Execute explore

The method receives the space and the parameters of the current trial under examination. It must then select new parameters for the trial.

Parameters
rng: numpy.random.Generator

A random number generator. It is not contained in BaseExplore because the explore class must be stateless.

space: Space

The search space optimized by the algorithm.

params: dict

Dictionary representing the parameters of the current trial under examination (trial.params).

Returns
dict

The new set of parameters for the trial to be branched.

property configuration

Configuration of the explore object

PipelineExplore

class orion.algo.pbt.explore.PipelineExplore(explore_configs)[source]

Pipeline of BaseExplore objects

The pipeline executes the BaseExplore objects sequentially. If one object returns parameters that are different from the ones passed (params), the pipeline returns these parameter values. Otherwise, if all BaseExplore objects return the same parameters as the ones passed to the pipeline, the pipeline returns them.

Parameters
explore_configs: list of dict

List of dictionary representing the configurations of BaseExplore children.

Examples

This pipeline is useful if for instance you want to sample from the space with a small probability, but otherwise use a local perturbation.

>>> PipelineExplore(
    explore_configs=[
        {'of_type': 'ResampleExplore', 'probability': 0.05},
        {'of_type': 'PerturbExplore'}
    ])
Attributes
configuration

Configuration of the explore object

Methods

__call__(rng, space, params)

Execute explore objects sequentially

__call__(rng, space, params)[source]

Execute explore objects sequentially

If one explore object returns parameters that are different from the ones passed (params), the pipeline returns these parameter values. Otherwise, if all BaseExplore objects return the same parameters as the ones passed to the pipeline, the pipeline returns them.

Parameters
rng: numpy.random.Generator

A random number generator. It is not contained in BaseExplore because the explore class must be stateless.

space: Space

The search space optimized by the algorithm.

params: dict

Dictionary representing the parameters of the current trial under examination (trial.params).

Returns
dict

The new set of parameters for the trial to be branched.

property configuration

Configuration of the explore object

PerturbExplore

class orion.algo.pbt.explore.PerturbExplore(factor=1.2, volatility=0.0001)[source]

Perturb parameters for exploration

Given a set of parameter values, this exploration object randomly perturbs them by a given factor. It multiplies the value of a dimension by the factor with probability 0.5, otherwise it divides it. Values are clamped to the limits of the search space when they exceed them. For categorical dimensions, a new value is sampled from the categories with equal probability for each category.

Parameters
factor: float, optional

Factor used to multiply or divide (each with probability 0.5) the values of the dimensions. Only applies to real or integer dimensions. Integer dimensions are rounded up to the next integer if new_value > value, and rounded down to the previous integer otherwise, where new_value is the result of either value * factor or value / factor. Categorical dimensions are resampled randomly from their categories. Default: 1.2

volatility: float, optional

If the result of value * factor or value / factor exceeds the limits of the search space, the new value is set to the limit, and then abs(normal(0, volatility)) is added to it (at the lower limit) or subtracted from it (at the upper limit). Default: 0.0001

Notes

Categorical dimensions with special probabilities are not supported for now. A new category is sampled with equal probability for each category.
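
A small usage sketch, assuming a seeded numpy.random.RandomState is accepted as the rng argument and that space is an orion.algo.space.Space containing the dimensions shown:

import numpy
from orion.algo.pbt.explore import PerturbExplore

explore = PerturbExplore(factor=1.2, volatility=0.0001)
rng = numpy.random.RandomState(1)  # seeded random number generator (assumption)

# `space` is assumed to be the orion.algo.space.Space used by the algorithm,
# and the dict below an example of trial.params for that space.
new_params = explore(rng, space, {"lr": 0.001, "batch_size": 64})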

Attributes
configuration

Configuration of the explore object

Methods

__call__(rng, space, params)

Execute perturbation

perturb_cat(rng, dim_value, dim)

Perturb categorical dimension

perturb_int(rng, dim_value, interval)

Perturb integer value dimension

perturb_real(rng, dim_value, interval)

Perturb real value dimension

__call__(rng, space, params)[source]

Execute perturbation

Given a set of parameter values, this exploration object randomly perturbs them by a given factor. It multiplies the value of a dimension by the factor with probability 0.5, otherwise it divides it. Values are clamped to the limits of the search space when they exceed them. For categorical dimensions, a new value is sampled from the categories with equal probability for each category.

Parameters
rng: numpy.random.Generator

A random number generator. It is not contained in BaseExplore because the explore class must be stateless.

space: Space

The search space optimized by the algorithm.

params: dict

Dictionary representing the parameters of the current trial under examination (trial.params).

Returns
dict

The new set of parameters for the trial to be branched.

property configuration

Configuration of the explore object

perturb_cat(rng, dim_value, dim)[source]

Perturb categorical dimension

Parameters
rng: numpy.random.Generator

Random number generator

dim_value: object

Value of the dimension, can be any type.

dim: orion.algo.space.CategoricalDimension

CategoricalDimension object defining the search space for this dimension.

perturb_int(rng, dim_value, interval)[source]

Perturb integer value dimension

Parameters
rng: numpy.random.Generator

Random number generator

dim_value: int

Value of the dimension

interval: tuple of int

Limit of the dimension (lower, upper)

perturb_real(rng, dim_value, interval)[source]

Perturb real value dimension

Parameters
rng: numpy.random.Generator

Random number generator

dim_value: float

Value of the dimension

interval: tuple of float

Limit of the dimension (lower, upper)

ResampleExplore

class orion.algo.pbt.explore.ResampleExplore(probability=0.2)[source]

Sample parameters from the search space

With probability probability, it will sample a new set of parameters from the search space, totally independently of the parameters passed to __call__. Otherwise, it will return the passed parameters.

Parameters
probability: float, optional

Probability of sampling a new set of parameters. Default: 0.2
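
A small usage sketch; as above, a seeded numpy.random.RandomState is assumed to be accepted as the rng argument, and the space object and parameter values are hypothetical:

import numpy
from orion.algo.pbt.explore import ResampleExplore

explore = ResampleExplore(probability=0.2)
rng = numpy.random.RandomState(1)  # seeded random number generator (assumption)

# With probability 0.2 a fresh point is sampled from `space` (assumed to be an
# orion.algo.space.Space); otherwise the given params are returned unchanged.
new_params = explore(rng, space, {"lr": 0.001, "batch_size": 64})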

Attributes
configuration

Configuration of the explore object

Methods

__call__(rng, space, params)

Execute resampling

__call__(rng, space, params)[source]

Execute resampling

With probability self.probability, it will sample a new set of parameters from the search space, totally independently of the parameters passed to __call__. Otherwise, it will return the passed parameters.

Parameters
rng: numpy.random.Generator

A random number generator. It is not contained in BaseExplore because the explore class must be stateless.

space: Space

The search space optimized by the algorithm.

params: dict

Dictionary representing the parameters of the current trial under examination (trial.params).

Returns
dict

The new set of parameters for the trial to be branched.

property configuration

Configuration of the explore object