Python API basics
This short tutorial shows the basics of using Oríon in Python. We will optimize a simple 1-d Rosenbrock function with random search and TPE, and visualize the regret curve to compare the two algorithms.
Note for macOS users: for the example to run correctly, you will need to either run this page as a Jupyter notebook, or encapsulate the code in a main function and run it under if __name__ == '__main__', as sketched below.
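A minimal sketch of that guard; main here is just a placeholder for the code on this page:

def main():
    # ... the tutorial code from this page goes here ...
    ...


if __name__ == "__main__":
    main()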
We first import the only function needed, build_experiment.
from orion.client import build_experiment
We configure the database with PickledDB so that the results are saved locally on disk. This enables resuming the experiment and running parallel workers.
storage = {
    "type": "legacy",
    "database": {
        "type": "pickleddb",
        "host": "./db.pkl",
    },
}
We define the search space for the optimization. Here, the optimization algorithm may only explore real values for x between 0 and 30. See the Search Space documentation for more information.
space = {"x": "uniform(0, 30)"}
We then build the experiment with the name random-rosenbrock. The name is used by Oríon as an id for the experiment. Each experiment must have a unique name.
experiment = build_experiment(
    "random-rosenbrock",
    space=space,
    storage=storage,
)
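Since no algorithm is specified, Oríon falls back to its default, random search. To later compare against TPE as promised in the introduction, a second experiment could be built along these lines; this is only a sketch, and the algorithms keyword and TPE options are assumptions to check against your Oríon version's configuration reference:

experiment_tpe = build_experiment(
    "tpe-rosenbrock",
    space=space,
    algorithms={"tpe": {"seed": 1}},  # assumed keyword and option names
    storage=storage,
)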
For this example we use a 1-d Rosenbrock function. We must return a list of results for Oríon. Results must have the format dict(name=<str>, type=<'objective', 'constraint' or 'gradient'>, value=<float>), otherwise a ValueError will be raised. At least one of the results must have the type 'objective', the metric that is minimized by the algorithm.
def rosenbrock(x, noise=None):
    """Evaluate partial information of a quadratic."""
    # A shifted quadratic: the minimum value 23.4 is reached at
    # x = 34.56789, outside the [0, 30] search space, so the best
    # reachable point lies at the boundary x = 30.
    # `noise` is accepted but unused in this example.
    y = x - 34.56789
    z = 4 * y**2 + 23.4
    return [{"name": "objective", "type": "objective", "value": z}]
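As a quick sanity check, the function can be called directly like any other Python function, and the returned list matches the format above:

print(rosenbrock(1.0))
# [{'name': 'objective', 'type': 'objective', 'value': 4530.61...}]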
We then pass the function rosenbrock to workon(). This method will iteratively try new sets of hyperparameters suggested by the optimization algorithm until it reaches 20 trials, as set by max_trials.
experiment.workon(rosenbrock, max_trials=20)
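Once workon() returns, the completed trials can be inspected programmatically. A sketch, assuming the experiment client exposes fetch_trials() and stats (check the API reference of your Oríon version):

for trial in experiment.fetch_trials():
    print(trial.params, trial.objective.value)  # hyperparameters and score

print(experiment.stats)  # summary, including the best trial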
Now let’s plot the regret curve to see how well the optimization went.
experiment.plot.regret().show()
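Oríon builds its plots with plotly, so the figure returned by regret() can also be saved to disk instead of displayed; a sketch using plotly's standard HTML export:

fig = experiment.plot.regret()
fig.write_html("regret.html")  # standalone interactive HTML file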