Assessment modules¶
Benchmark Assessments definition¶
- class orion.benchmark.assessment.AverageRank(repetitions=1)[source]¶
Evaluate the average performance (objective value) of different search algorithms from the rank perspective at different time steps (trial number). The performance (objective value) used for a trial is the best result found up to and including that trial.
Methods
- analysis(task, experiments)[source]¶
Generate a plotly.graph_objects.Figure to display average rankings between different search algorithms.
- task: str
Name of the task
- experiments: list
A list of (task_index, experiment), where task_index is the index of the task to run for this assessment, and experiment is an instance of orion.core.worker.experiment.
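For context, a minimal sketch of running this assessment through Orion's benchmark client (the benchmark name, algorithm choices, and task settings below are illustrative assumptions, not defaults):
from orion.benchmark.assessment import AverageRank
from orion.benchmark.benchmark_client import get_or_create_benchmark
from orion.benchmark.task import RosenBrock

# Illustrative setup: rank "random" against "tpe" on RosenBrock,
# repeating each experiment 3 times to smooth the rankings.
benchmark = get_or_create_benchmark(
    name="ranking_demo",
    algorithms=["random", "tpe"],
    targets=[
        {
            "assess": [AverageRank(repetitions=3)],
            "task": [RosenBrock(max_trials=25, dim=3)],
        }
    ],
)
benchmark.process()             # run all experiments
figures = benchmark.analysis()  # plotly figures produced by the assessments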
- class orion.benchmark.assessment.AverageResult(repetitions=1)[source]¶
Evaluate the average performance (objective value) for each search algorithm at different time steps (trial number). The performance (objective value) used for a trial is the best result found up to and including that trial.
Methods
- analysis(task, experiments)[source]¶
Generate a plotly.graph_objects.Figure to display the average performance for each search algorithm.
- task: str
Name of the task
- experiments: list
A list of experiments, each an instance of orion.core.worker.experiment.
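The assessment can also be invoked directly on experiments that have already run. A hedged sketch follows; the experiment names and output file naming are assumptions, and in a typical workflow the benchmark's analysis() call does this for you:
from orion.benchmark.assessment import AverageResult
from orion.client import get_experiment

# Hypothetical experiment names; substitute experiments you actually ran.
experiments = [get_experiment("rosenbrock_random"), get_experiment("rosenbrock_tpe")]

assessment = AverageResult(repetitions=1)
figures = assessment.analysis("RosenBrock", experiments)

# Walk the nested {"assessment": {"task": {"figure": Figure}}} result
# (documented under BenchmarkAssessment.analysis below) and save every plot.
for assessment_name, tasks in figures.items():
    for task_name, task_figures in tasks.items():
        for figure_name, figure in task_figures.items():
            figure.write_html(f"{assessment_name}_{task_name}_{figure_name}.html")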
- class orion.benchmark.assessment.BenchmarkAssessment(repetitions, **kwargs)[source]¶
Base class describing what an assessment can do.
- Parameters
- repetitions: int
Number of times the assessment asks to run the corresponding task
- kwargs: dict
Configurable parameters of the assessment; a particular assessment implementation can define its own parameters.
- Attributes
configuration
Return the configuration of the assessment.
repetitions
Return the number of times the corresponding task is run for this assessment
Methods
- abstract analysis(task, experiments)[source]¶
Generate plotly.graph_objects.Figure objects to display the performance analysis based on the assessment purpose.
- task: str
Name of the task
- experiments: list
A list of (task_index, experiment), where task_index is the index of the task to run for this assessment, and experiment is an instance of orion.core.worker.experiment.
- Returns
- Dict of plotly.graph_objects.Figure objects with the format
- {"assessment name": {"task name": {"figure name": plotly.graph_objects.Figure}}}
- Examples
>>> {"AverageRank": {"RosenBrock": {"rankings": plotly.graph_objects.Figure}}}
- property configuration¶
Return the configuration of the assessment.
- get_executor(task_index)[source]¶
Return an instance of orion.executor.base.Executor based on the index of the task that the assessment is asking to run.
- property repetitions¶
Return the number of times the corresponding task is run for this assessment
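To make the contract concrete, here is a sketch of a hypothetical subclass; the TrialCount class, its bar chart, and the "trial_counts" figure name are all invented for illustration:
import plotly.graph_objects as go

from orion.benchmark.assessment import BenchmarkAssessment

class TrialCount(BenchmarkAssessment):
    """Hypothetical assessment: compare how many trials each experiment completed."""

    def __init__(self, repetitions=1):
        super().__init__(repetitions=repetitions)

    def analysis(self, task, experiments):
        # `experiments` is a list of (task_index, experiment) pairs.
        names = [experiment.name for _, experiment in experiments]
        counts = [len(experiment.fetch_trials()) for _, experiment in experiments]
        figure = go.Figure(data=go.Bar(x=names, y=counts))
        # Return figures in the documented nested-dict format.
        return {type(self).__name__: {task: {"trial_counts": figure}}}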
- class orion.benchmark.assessment.ParallelAssessment(repetitions=1, executor=None, n_workers=(1, 2, 4), **executor_config)[source]¶
Evaluate how algorithms’ sampling efficiency is affected by different degrees of parallelization.
Evaluate the average performance (objective value) for each search algorithm at different time steps (trial number). The performance (objective value) used for a trial is the best result found up to and including that trial.
- Parameters
- repetitions: int, optional
Number of experiments to run for each number of workers. Default: 1
- executor: str, optional
Name of the Orion worker executor. If None, the default executor of the benchmark will be used. Default: None.
- n_workers: list or tuple, optional
List or tuple of integers for the number of workers for each experiment. Default: (1, 2, 4)
- **executor_config: dict
Parameters for the corresponding executor.
- Attributes
configuration
Return the configuration of the assessment.
Methods
- analysis(task, experiments)[source]¶
Generate a plotly.graph_objects.Figure to display the average performance for each search algorithm.
- task: str
Name of the task
- experiments: list
A list of (repetition_index, experiment), where repetition_index is the index of the repetition to run for this assessment, and experiment is an instance of orion.core.worker.experiment.
- get_executor(repetition_index)[source]¶
Return an instance of orion.executor.base.Executor based on the index of the repetition that the assessment is asking to run.
- property configuration¶
Return the configuration of the assessment.
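A short sketch of configuring this assessment; the "joblib" executor name and the backend option passed through **executor_config are assumptions based on Orion's executor backends:
from orion.benchmark.assessment import ParallelAssessment

# Compare sampling efficiency at 1, 2 and 4 workers, repeating each
# worker count twice (2 x 3 = 6 experiments per algorithm and task).
assessment = ParallelAssessment(
    repetitions=2,
    executor="joblib",    # assumed executor name; None uses the benchmark default
    n_workers=(1, 2, 4),
    backend="threading",  # example **executor_config forwarded to the executor
)
# Plug it into a benchmark's targets like any other assessment, e.g.
# {"assess": [assessment], "task": [...]}.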