Forrester

Forrester Task from the Profet paper.

This Forrester class is based on a synthetic function, whereas the ForresterTask is based on a meta-model trained on multiple such functions.

Klein, Aaron, Zhenwen Dai, Frank Hutter, Neil Lawrence, and Javier Gonzalez. “Meta-surrogate benchmarking for hyperparameter optimization.” Advances in Neural Information Processing Systems 32 (2019): 6270-6280.

class orion.benchmark.task.forrester.Forrester(max_trials: int, alpha: float = 0.5, beta: float = 0.5)[source]

Task based on the Forrester function, as described in https://arxiv.org/abs/1905.12982

\[f(x) = ((\alpha x - 2)^2) \sin(\beta x - 4)\]
Parameters

max_trials : int

Maximum number of trials for this task.

alpha : float, optional

Alpha parameter used in the above equation, by default 0.5.

beta : float, optional

Beta parameter used in the above equation, by default 0.5.
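For concreteness, here is a small sketch that evaluates the formula above directly with the default alpha = beta = 0.5. The standalone forrester helper below is purely illustrative and is not part of the class.

import numpy as np

def forrester(x, alpha=0.5, beta=0.5):
    """Illustrative helper implementing the formula above (not part of Orion)."""
    return ((alpha * x - 2) ** 2) * np.sin(beta * x - 4)

# With the defaults, f(0.5) = (0.25 - 2)**2 * sin(0.25 - 4), roughly 1.75.
print(forrester(0.5))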

Methods

call(x)

Define the black box function to optimize. The function expects the hyper-parameters to search and returns the objective values of the trial with those hyper-parameters.

get_search_space()

Return the search space for the task objective function.

call(x: float) → List[Dict][source]

Define the black box function to optimize. The function expects the hyper-parameters to search and returns the objective values of the trial with those hyper-parameters.

This method should be overridden by subclasses. It should receive the hyper-parameters as keyword arguments, with argument names matching the keys of the dictionary returned by get_search_space.
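As a hedged usage sketch, the snippet below evaluates the task at a single point. The exact keys of the returned dictionaries (for example name, type, and value) are assumed from the List[Dict] annotation and are not spelled out in the text above.

from orion.benchmark.task.forrester import Forrester

task = Forrester(max_trials=10, alpha=0.5, beta=0.5)
# Hyper-parameters are passed as keyword arguments, matching get_search_space keys.
results = task.call(x=0.25)

for result in results:
    # Each entry is expected to describe one objective value for this trial;
    # the exact dictionary layout is an assumption, not taken from the docs above.
    print(result)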

get_search_space() → Dict[str, str][source]

Return the search space for the task objective function.
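A minimal sketch of how the returned mapping might be used. The exact prior string for x (for example a uniform range) is an assumption, since only the Dict[str, str] return type is documented above.

from orion.benchmark.task.forrester import Forrester

task = Forrester(max_trials=10)
space = task.get_search_space()
# Maps each hyper-parameter name to an Orion prior string,
# e.g. {"x": "uniform(0, 1)"} -- the exact prior is an assumption.
print(space)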