Profet Task modules¶
Surrogate (simulated) tasks created using the Profet algorithm.
For a detailed description of Profet, see the original paper at https://arxiv.org/abs/1905.12982 or the source code at https://github.com/EmuKit/emukit/tree/main/emukit/examples/profet.
Klein, Aaron, Zhenwen Dai, Frank Hutter, Neil Lawrence, and Javier Gonzalez. “Meta-surrogate benchmarking for hyperparameter optimization.” Advances in Neural Information Processing Systems 32 (2019): 6270-6280.
- class orion.benchmark.task.profet.ProfetFcNetTask(max_trials: int = 100, input_dir: Union[Path, str] = 'profet_data', checkpoint_dir: Optional[Union[Path, str]] = None, model_config: Optional[MetaModelConfig] = None, device: Optional[Union[str, Any]] = None, with_grad: bool = False)[source]¶
Simulated task consisting of training a fully-connected network.
Methods

- ModelConfig([benchmark, task_id, seed, ...]): Config for training the Profet model on an FcNet task.
- call(learning_rate, batch_size, ...): Get the value of the sampled objective function at the given point (hyper-parameters).
- get_search_space(): Return the search space for the task objective function.
- class ModelConfig(benchmark: str = 'fcnet', task_id: int = 0, seed: int = 123, num_burnin_steps: int = 50000, num_steps: int = 13000, mcmc_thining: int = 100, lr: float = 0.01, batch_size: int = 5, max_samples: Optional[int] = None, n_inducing_lvm: int = 50, max_iters: int = 10000, n_samples_task: int = 500)[source]¶
Config for training the Profet model on an FcNet task.
Size of the hidden space for this benchmark.
- json_file_name: ClassVar[str] = 'data_sobol_fcnet.json'¶
Name of the json file that contains the data of this benchmark.
- log_cost: ClassVar[bool] = True¶
Whether to apply numpy.log onto the raw data for the cost of each point.
- log_target: ClassVar[bool] = False¶
Whether to apply numpy.log onto the raw data for the y of each point.
- call(learning_rate: float, batch_size: int, units_layer1: int, units_layer2: int, dropout_rate_l1: float, dropout_rate_l2: float) → List[Dict][source]¶
Get the value of the sampled objective function at the given point (hyper-parameters).
If self.with_grad is set, also returns the gradient of the objective function with respect to the inputs.
- Parameters
- **kwargs
Dictionary of hyper-parameters.
- Returns
- List[Dict]
Result dictionaries: objective and optionally gradient.
- Raises
- ValueError
If the input isn’t of a supported type.
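All of the `call` methods in this module share the return contract documented above: a list of result dictionaries. As a rough illustration of that shape, here is a minimal sketch using a hypothetical `toy_call` stand-in; the `name`/`type`/`value` keys follow Orion's usual result format, and the placeholder objective is not a real surrogate value produced by a trained meta-model:

```python
from typing import Dict, List


def toy_call(learning_rate: float, batch_size: int) -> List[Dict]:
    """Hypothetical stand-in for a Profet task's `call` method.

    A real task evaluates a meta-model sampled by the Profet algorithm;
    here a placeholder arithmetic value is used instead.
    """
    objective = learning_rate * batch_size  # placeholder, not a real surrogate
    return [{"name": "toy_fcnet", "type": "objective", "value": objective}]


results = toy_call(learning_rate=0.01, batch_size=32)
assert isinstance(results, list)
assert results[0]["type"] == "objective"
```

The list form matters because, with `with_grad=True`, a second result dictionary for the gradient is appended after the objective.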
- class orion.benchmark.task.profet.ProfetForresterTask(max_trials: int = 100, input_dir: Union[Path, str] = 'profet_data', checkpoint_dir: Optional[Union[Path, str]] = None, model_config: Optional[MetaModelConfig] = None, device: Optional[Union[str, Any]] = None, with_grad: bool = False)[source]¶
Simulated task consisting of training a model on a variant of the Forrester function.
Methods

- ModelConfig([benchmark, task_id, seed, ...]): Config for training the Profet model on a Forrester task.
- call(x): Get the value of the sampled objective function at the given point (hyper-parameters).
- get_search_space(): Return the search space for the task objective function.
- class ModelConfig(benchmark: str = 'forrester', task_id: int = 0, seed: int = 123, num_burnin_steps: int = 50000, num_steps: int = 13000, mcmc_thining: int = 100, lr: float = 0.01, batch_size: int = 5, max_samples: Optional[int] = None, n_inducing_lvm: int = 50, max_iters: int = 10000, n_samples_task: int = 500)[source]¶
Config for training the Profet model on a Forrester task.
Methods

- get_architecture() → Any¶
Callable that takes the input dimensionality and returns the network to be trained.
Size of the hidden space for this benchmark.
- json_file_name: ClassVar[str] = 'data_sobol_forrester.json'¶
Name of the json file that contains the data of this benchmark.
- log_cost: ClassVar[bool] = False¶
Whether to apply numpy.log onto the raw data for the cost of each point.
- log_target: ClassVar[bool] = False¶
Whether to apply numpy.log onto the raw data for the y of each point.
- call(x: float) → List[Dict][source]¶
Get the value of the sampled objective function at the given point (hyper-parameters).
If self.with_grad is set, also returns the gradient of the objective function with respect to the inputs.
- Parameters
- **kwargs
Dictionary of hyper-parameters.
- Returns
- List[Dict]
Result dictionaries: objective and optionally gradient.
- Raises
- ValueError
If the input isn’t of a supported type.
- class orion.benchmark.task.profet.ProfetSvmTask(max_trials: int = 100, input_dir: Union[Path, str] = 'profet_data', checkpoint_dir: Optional[Union[Path, str]] = None, model_config: Optional[MetaModelConfig] = None, device: Optional[Union[str, Any]] = None, with_grad: bool = False)[source]¶
Simulated task consisting of training a Support Vector Machine.
Methods

- ModelConfig([benchmark, task_id, seed, ...]): Config for training the Profet model on an SVM task.
- call(C, gamma): Get the value of the sampled objective function at the given point (hyper-parameters).
- get_search_space(): Return the search space for the task objective function.
- class ModelConfig(benchmark: str = 'svm', task_id: int = 0, seed: int = 123, num_burnin_steps: int = 50000, num_steps: int = 13000, mcmc_thining: int = 100, lr: float = 0.01, batch_size: int = 5, max_samples: Optional[int] = None, n_inducing_lvm: int = 50, max_iters: int = 10000, n_samples_task: int = 500)[source]¶
Config for training the Profet model on an SVM task.
Methods
- get_architecture(*, classification: bool = True, n_hidden: int = 500) → Any¶
Callable that takes the input dimensionality and returns the network to be trained.
Size of the hidden space for this benchmark.
- json_file_name: ClassVar[str] = 'data_sobol_svm.json'¶
Name of the json file that contains the data of this benchmark.
- log_cost: ClassVar[bool] = True¶
Whether to apply numpy.log onto the raw data for the cost of each point.
- log_target: ClassVar[bool] = False¶
Whether to apply numpy.log onto the raw data for the y of each point.
- call(C: float, gamma: float) → List[Dict][source]¶
Get the value of the sampled objective function at the given point (hyper-parameters).
If self.with_grad is set, also returns the gradient of the objective function with respect to the inputs.
- Parameters
- **kwargs
Dictionary of hyper-parameters.
- Returns
- List[Dict]
Result dictionaries: objective and optionally gradient.
- Raises
- ValueError
If the input isn’t of a supported type.
- class orion.benchmark.task.profet.ProfetTask(max_trials: int = 100, input_dir: Union[Path, str] = 'profet_data', checkpoint_dir: Optional[Union[Path, str]] = None, model_config: Optional[MetaModelConfig] = None, device: Optional[Union[str, Any]] = None, with_grad: bool = False)[source]¶
Base class for Tasks that are generated using the Profet algorithm.
For more information on Profet, see the original paper at https://arxiv.org/abs/1905.12982.
Klein, Aaron, Zhenwen Dai, Frank Hutter, Neil Lawrence, and Javier Gonzalez. “Meta-surrogate benchmarking for hyperparameter optimization.” Advances in Neural Information Processing Systems 32 (2019): 6270-6280.
- Parameters
- max_trials : int, optional
Max number of trials to run, by default 100.
- input_dir : Union[Path, str], optional
Input directory containing the data used to train the meta-model, by default 'profet_data'.
- checkpoint_dir : Union[Path, str], optional
Directory used to save/load trained meta-models, by default None.
- model_config : MetaModelConfig, optional
Configuration options for the training of the meta-model, by default None.
- device : str, optional
The device to use for training, by default None.
- with_grad : bool, optional
Whether the task should also return the gradients of the objective function with respect to the inputs, by default False.
- Attributes
configuration
Return the configuration of the task.
- space
Methods

- ModelConfig: alias of MetaModelConfig
- call(**kwargs): Get the value of the sampled objective function at the given point (hyper-parameters).

- ModelConfig¶
alias of MetaModelConfig
- call(**kwargs) → List[Dict][source]¶
Get the value of the sampled objective function at the given point (hyper-parameters).
If self.with_grad is set, also returns the gradient of the objective function with respect to the inputs.
- Parameters
- **kwargs
Dictionary of hyper-parameters.
- Returns
- List[Dict]
Result dictionaries: objective and optionally gradient.
- Raises
- ValueError
If the input isn’t of a supported type.
- property configuration¶
Return the configuration of the task.
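As noted above, when `self.with_grad` is set the result list carries a gradient entry in addition to the objective. Here is a hedged sketch of that two-entry shape, again using a hypothetical stand-in function with a toy quadratic objective rather than the actual sampled meta-model; the `"gradient"` result type mirrors the "objective and optionally gradient" description and is an assumption about the exact key:

```python
from typing import Dict, List

import numpy as np


def toy_call_with_grad(x: np.ndarray, with_grad: bool = False) -> List[Dict]:
    """Hypothetical stand-in: quadratic objective with an analytic gradient."""
    objective = float(np.sum(x ** 2))  # placeholder objective, not a surrogate
    results = [{"name": "toy_task", "type": "objective", "value": objective}]
    if with_grad:
        # Gradient of sum(x^2) with respect to the inputs: 2x.
        results.append({"name": "toy_task", "type": "gradient", "value": 2 * x})
    return results


out = toy_call_with_grad(np.array([1.0, 2.0]), with_grad=True)
assert len(out) == 2 and out[1]["type"] == "gradient"
```

With `with_grad=False` (the default), only the single objective dictionary is returned, matching the constructor parameter documented above.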
- class orion.benchmark.task.profet.ProfetXgBoostTask(max_trials: int = 100, input_dir: Union[Path, str] = 'profet_data', checkpoint_dir: Optional[Union[Path, str]] = None, model_config: Optional[MetaModelConfig] = None, device: Optional[Union[str, Any]] = None, with_grad: bool = False)[source]¶
Simulated task consisting of fitting an Extreme Gradient Boosting (XGBoost) predictor.
Methods

- ModelConfig([benchmark, task_id, seed, ...]): Config for training the Profet model on an XgBoost task.
- call(learning_rate, gamma, ...): Get the value of the sampled objective function at the given point (hyper-parameters).
- get_search_space(): Return the search space for the task objective function.
- class ModelConfig(benchmark: str = 'xgboost', task_id: int = 0, seed: int = 123, num_burnin_steps: int = 50000, num_steps: int = 13000, mcmc_thining: int = 100, lr: float = 0.01, batch_size: int = 5, max_samples: Optional[int] = None, n_inducing_lvm: int = 50, max_iters: int = 10000, n_samples_task: int = 500)[source]¶
Config for training the Profet model on an XgBoost task.
Size of the hidden space for this benchmark.
- json_file_name: ClassVar[str] = 'data_sobol_xgboost.json'¶
Name of the json file that contains the data of this benchmark.
- log_cost: ClassVar[bool] = True¶
Whether to apply numpy.log onto the raw data for the cost of each point.
- log_target: ClassVar[bool] = True¶
Whether to apply numpy.log onto the raw data for the y of each point.
- call(learning_rate: float, gamma: float, l1_regularization: float, l2_regularization: float, nb_estimators: int, subsampling: float, max_depth: int, min_child_weight: int) → List[Dict][source]¶
Get the value of the sampled objective function at the given point (hyper-parameters).
If self.with_grad is set, also returns the gradient of the objective function with respect to the inputs.
- Parameters
- **kwargs
Dictionary of hyper-parameters.
- Returns
- List[Dict]
Result dictionaries: objective and optionally gradient.
- Raises
- ValueError
If the input isn’t of a supported type.