Parallel Advantage Assessment

class orion.benchmark.assessment.parallelassessment.ParallelAssessment(repetitions=1, executor=None, n_workers=(1, 2, 4), **executor_config)[source]

Evaluate how algorithms’ sampling efficiency is affected by different degrees of parallelization.

Evaluate the average performance (objective value) of each search algorithm at different time steps (trial number). The performance (objective value) used for a trial is the best result obtained up to and including that trial.

Parameters
repetitions: int, optional

Number of experiments to run for each number of workers. Default: 1

executor: str, optional

Name of the Orion worker executor. If None, the default executor of the benchmark will be used. Default: None.

n_workers: list or tuple, optional

List or tuple of integers for the number of workers for each experiment. Default: (1, 2, 4)

**executor_config: dict

Parameters for the corresponding executor.
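
For example, the parameters above can be combined as in the sketch below. The joblib executor name, the backend option passed through **executor_config, and the surrounding benchmark setup (get_or_create_benchmark, the RosenBrock task, and the algorithm names) are assumptions made for illustration, not guarantees of this class's API.

    from orion.benchmark.assessment import ParallelAssessment
    from orion.benchmark.benchmark_client import get_or_create_benchmark
    from orion.benchmark.task import RosenBrock

    # Assess parallel efficiency with 1, 2 and 4 workers, repeating each
    # worker count twice; extra keyword arguments are forwarded to the executor.
    assessment = ParallelAssessment(
        repetitions=2,
        executor="joblib",    # assumed executor name
        n_workers=(1, 2, 4),
        backend="threading",  # example **executor_config entry for joblib
    )

    # Hypothetical benchmark wiring: run two algorithms on a toy task and
    # collect the assessment figures once all experiments have completed.
    benchmark = get_or_create_benchmark(
        name="parallel_advantage_demo",
        algorithms=["random", "tpe"],
        targets=[{"assess": [assessment], "task": [RosenBrock(25, dim=3)]}],
    )
    benchmark.process()
    figures = benchmark.analysis()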

Attributes
configuration

Return the configuration of the assessment.

Methods

analysis(task, experiments)

Generate a plotly.graph_objects.Figure to display average performance for each search algorithm.

get_executor(repetition_index)

Return an instance of orion.executor.base.Executor based on the index of the task that the assessment is asked to run.

analysis(task, experiments)[source]

Generate a plotly.graph_objects.Figure to display average performance for each search algorithm.

Parameters
task: str

Name of the task

experiments: list

A list of (repetition_index, experiment) tuples, where repetition_index is the index of the repetition run for this assessment, and experiment is an instance of orion.core.worker.experiment.Experiment.
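
As a rough illustration (the experiments list below is hypothetical; in practice the benchmark assembles it from finished experiments before calling this method), the returned figure can be displayed directly:

    # experiments: list of (repetition_index, experiment) pairs gathered from a
    # completed benchmark run (hypothetical variable, for illustration only).
    fig = assessment.analysis("RosenBrock", experiments)
    fig.show()  # fig is a plotly.graph_objects.Figure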

property configuration

Return the configuration of the assessment.

get_executor(repetition_index)[source]

Return an instance of orion.executor.base.Executor based on the index of the task that the assessment is asked to run.
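
A minimal sketch of retrieving an executor for the first repetition follows; the exact mapping from repetition_index to a worker count is an internal detail and is only assumed here for illustration.

    # Fetch the executor configured for the first repetition/worker combination.
    executor = assessment.get_executor(0)
    print(type(executor).__name__)  # concrete orion.executor.base.Executor subclass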