
Bayesian Optimization

Constrained robust Bayesian optimization of expensive noisy black-box functions with guaranteed regret bounds

Abstract

This package implements a modified Constrained Adversarially Robust Bayesian Optimization (CARBO) sampler based on the paper "Constrained robust Bayesian optimization of expensive noisy black-box functions with guaranteed regret bounds". The sampler robustly optimizes an objective function, subject to inequality constraints, whose input is perturbed by noise. The algorithm details are described in the Others section.

APIs

CARBOSampler(*, seed: int | None = None, independent_sampler: BaseSampler | None = None, n_startup_trials: int = 10, deterministic_objective: bool = False, constraints_func: Callable[[FrozenTrial], Sequence[float]] | None = None, rho: float = 1e3, beta: float = 4.
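The snippet below is a minimal usage sketch, not taken from the package docs: the OptunaHub module path "samplers/carbo" is an assumption, and constraints_func is assumed to follow Optuna's usual convention, in which constraint values are stored as user attributes by the objective and a returned value <= 0 means the constraint is satisfied.

# Minimal sketch of CARBOSampler usage; "samplers/carbo" is a hypothetical path.
import optuna
import optunahub


def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -5.0, 5.0)
    y = trial.suggest_float("y", -5.0, 5.0)
    # Store the constraint value (x + y - 1 <= 0 means feasible) for constraints_func.
    trial.set_user_attr("c0", x + y - 1.0)
    return x**2 + y**2


def constraints_func(trial: optuna.trial.FrozenTrial):
    # Read the constraint value recorded by the objective.
    return (trial.user_attrs["c0"],)


if __name__ == "__main__":
    module = optunahub.load_module("samplers/carbo")  # hypothetical path
    sampler = module.CARBOSampler(seed=0, constraints_func=constraints_func)
    study = optuna.create_study(sampler=sampler, direction="minimize")
    study.optimize(objective, n_trials=30)
    print(study.best_params)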

HEBO (Heteroscedastic and Evolutionary Bayesian Optimisation)

Class or Function Names

HEBOSampler

Installation

# Install the dependencies.
pip install optunahub hebo
# NOTE: The following is optional, but pymoo must be installed after NumPy for a
# faster HEBOSampler, so we run it to make sure the compiled version is installed.
pip install --upgrade pymoo

APIs

HEBOSampler(search_space: dict[str, BaseDistribution] | None = None, *, seed: int | None = None, constant_liar: bool = False, independent_sampler: BaseSampler | None = None)

search_space: Specifying search_space makes sampling at each iteration slightly quicker, but this argument is not required to run the sampler.
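For illustration, a short usage sketch follows; the OptunaHub module path "samplers/hebo" is assumed here, and search_space is left unspecified since the API above marks it optional.

# Usage sketch for HEBOSampler, assuming the package path "samplers/hebo".
import optuna
import optunahub


def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -10.0, 10.0)
    return (x - 2.0) ** 2


if __name__ == "__main__":
    module = optunahub.load_module("samplers/hebo")
    sampler = module.HEBOSampler(seed=42)  # search_space may be omitted
    study = optuna.create_study(sampler=sampler)
    study.optimize(objective, n_trials=50)
    print(study.best_params)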

LLAMBO (Large Language Models to Enhance Bayesian Optimization)

Abstract

LLAMBO (Large Language Models to Enhance Bayesian Optimization), by Liu et al., is a novel approach that integrates Large Language Models (LLMs) into the Bayesian Optimization (BO) framework to improve the optimization of complex, expensive-to-evaluate black-box functions. By leveraging the contextual understanding and few-shot learning capabilities of LLMs, LLAMBO enhances multiple facets of the BO pipeline:

Zero-Shot Warmstarting

LLAMBO frames the optimization problem in natural language, allowing the LLM to propose promising initial solutions.

PFNs4BO sampler

Class or Function Names

PFNs4BOSampler

Installation

pip install -r https://hub.optuna.org/samplers/pfns4bo/requirements.txt

Example

from __future__ import annotations

import optuna
import optunahub

module = optunahub.load_module("samplers/pfns4bo")
PFNs4BOSampler = module.PFNs4BOSampler


def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2


if __name__ == "__main__":
    study = optuna.create_study(
        sampler=PFNs4BOSampler(),
    )
    study.optimize(objective, n_trials=100)
    print(study.best_params)
    print(study.best_value)

See example.py for a full example. A GPU is required to run this example. The following figures show experimental results comparing PFNs4BO with random search.

PLMBO (Preference Learning Multi-Objective Bayesian Optimization)

Class or Function Names

PLMBOSampler

Installation

pip install -r https://hub.optuna.org/samplers/plmbo/requirements.txt

Example

from __future__ import annotations

import matplotlib.pyplot as plt
import numpy as np
import optuna
import optunahub
from optuna.distributions import FloatDistribution

PLMBOSampler = optunahub.load_module(  # type: ignore
    "samplers/plmbo",
).PLMBOSampler

if __name__ == "__main__":
    f_sigma = 0.01

    def obj_func1(x):
        return np.sin(x[0]) + x[1]

    def obj_func2(x):
        return -np.sin(x[0]) - x[1] + 0.1

    def obs_obj_func(x):
        return np.array(
            [
                obj_func1(x) + np.random.normal(0, f_sigma),
                obj_func2(x) + np.random.normal(0, f_sigma),
            ]
        )

SMAC3

APIs

A sampler that uses SMAC3 v2.2.0, verified by unit tests that can be run as follows:

$ pip install pytest optunahub smac
$ python -m pytest package/samplers/smac_sampler/tests/

Please check the API reference for more details: https://automl.github.io/SMAC3/main/5_api.html

SMACSampler(search_space: dict[str, BaseDistribution], n_trials: int = 100, seed: int | None = None, *, surrogate_model_type: str = "rf", acq_func_type: str = "ei_log", init_design_type: str = "sobol", surrogate_model_rf_num_trees: int = 10, surrogate_model_rf_ratio_features: float = 1.
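A minimal usage sketch follows; the module path "samplers/smac_sampler" is inferred from the test path above and is not confirmed by this excerpt. Note that, per the signature, SMACSampler takes the search space up front, unlike most Optuna samplers.

# Sketch of SMACSampler usage; "samplers/smac_sampler" is an inferred path.
import optuna
import optunahub
from optuna.distributions import FloatDistribution


def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -10.0, 10.0)
    return (x - 2.0) ** 2


if __name__ == "__main__":
    module = optunahub.load_module("samplers/smac_sampler")
    # The search space must be given explicitly when constructing the sampler.
    sampler = module.SMACSampler(
        search_space={"x": FloatDistribution(-10.0, 10.0)},
        n_trials=50,
    )
    study = optuna.create_study(sampler=sampler)
    study.optimize(objective, n_trials=50)
    print(study.best_params)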

Syne Tune: Large-Scale and Reproducible Hyperparameter Optimization

APIs

A sampler that uses Syne Tune v0.14.2 and can be run after the following installation:

$ pip install optunahub syne-tune

Please check the API reference for more details: https://syne-tune.readthedocs.io/en/latest/_apidoc/modules.html

SyneTuneSampler(metric: str, search_space: dict[str, BaseDistribution], direction: str = "minimize", searcher_method: str = "CQR", searcher_kwargs: dict | None = None)

metric: The metric to be optimized.
search_space: A dictionary of Optuna distributions.
direction: The direction of optimization. Must be one of the following: [minimize, maximize].
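The sketch below shows how the constructor arguments above might fit together; the OptunaHub module path "samplers/syne_tune" and the metric name "loss" are assumptions, not taken from this excerpt.

# Usage sketch for SyneTuneSampler under the assumed path "samplers/syne_tune".
import optuna
import optunahub
from optuna.distributions import FloatDistribution


def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -10.0, 10.0)
    return (x - 2.0) ** 2


if __name__ == "__main__":
    module = optunahub.load_module("samplers/syne_tune")
    sampler = module.SyneTuneSampler(
        metric="loss",  # hypothetical metric name
        search_space={"x": FloatDistribution(-10.0, 10.0)},
        direction="minimize",
        searcher_method="CQR",
    )
    study = optuna.create_study(sampler=sampler, direction="minimize")
    study.optimize(objective, n_trials=30)
    print(study.best_params)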