Sampler

AutoSampler

Abstract
This package automatically selects an appropriate sampler for the provided search space based on the developers' recommendation. The following article provides detailed information about AutoSampler.
📰 AutoSampler: Automatic Selection of Optimization Algorithms in Optuna

Class or Function Names
AutoSampler

This sampler currently accepts only seed and constraints_func. constraints_func enables users to handle constraints along with the objective function. These arguments follow the same convention as the other samplers, so please take a look at the reference.
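Example
A minimal usage sketch (the registry path samplers/auto_sampler below follows the package's name; check the package page if it differs):

import optuna
import optunahub


def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -5, 5)
    return x**2


module = optunahub.load_module("samplers/auto_sampler")
study = optuna.create_study(sampler=module.AutoSampler())
study.optimize(objective, n_trials=30)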

BoTorch Sampler

Class or Function Names
BoTorchSampler

Installation
pip install optuna-integration botorch

Example
from optuna_integration import BoTorchSampler

sampler = BoTorchSampler()

Others
See the documentation for more details.

Brute Force Search

Class or Function Names
BruteForceSampler

Example
import optuna
from optuna.samplers import BruteForceSampler


def objective(trial):
    # BruteForceSampler requires a finite search space, so use an int
    # (or a float with an explicit step).
    x = trial.suggest_int("x", -5, 5)
    return x**2


sampler = BruteForceSampler()
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=10)

Others
See the documentation for more details.

c-TPE: Tree-structured Parzen Estimator with Inequality Constraints for Expensive Hyperparameter Optimization

Abstract
This package aims to reproduce the TPE algorithm used in the paper published at IJCAI'23: c-TPE: Tree-structured Parzen Estimator with Inequality Constraints for Expensive Hyperparameter Optimization. The default parameter set of this sampler is the recommended setup from the paper, and the experiments in the paper can also be reproduced with this sampler. Note that this sampler is officially implemented by the first author of the original paper, and its performance has been verified.
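Example
A minimal sketch of how a constrained sampler of this kind is typically used (the registry path samplers/ctpe and the class name cTPESampler are assumptions; check the package page for the exact names):

import optuna
import optunahub


def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -5, 5)
    # Store the constraint value (feasible when <= 0) for constraints_func.
    trial.set_user_attr("constraint", x - 2)
    return x**2


def constraints_func(trial: optuna.trial.FrozenTrial) -> tuple[float]:
    return (trial.user_attrs["constraint"],)


module = optunahub.load_module("samplers/ctpe")  # assumed registry path
sampler = module.cTPESampler(constraints_func=constraints_func)  # assumed class name
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=30)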

CatCMA Sampler

Abstract
The cutting-edge evolutionary computation algorithm CatCMA has been published on OptunaHub. CatCMA is an algorithm that excels in mixed search spaces with continuous and discrete variables. The figure on the package page is from https://arxiv.org/abs/2405.09962.
📝 Introduction to CatCMA in OptunaHub: Blog post by Hideaki Imamura.

Class or Function Names
CatCmaSampler

Installation
pip install -r https://hub.optuna.org/samplers/catcma/requirements.txt

Example
A minimal sketch (the search space can also be passed explicitly to CatCmaSampler via optuna.distributions; see the package's example.py for the full version):

import optuna
import optunahub


def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -1, 1)
    y = trial.suggest_float("y", -1, 1)
    c = trial.suggest_categorical("c", [0, 1, 2])
    return x**2 + y**2 + c


module = optunahub.load_module("samplers/catcma")
study = optuna.create_study(sampler=module.CatCmaSampler())
study.optimize(objective, n_trials=50)

CMA-ES Sampler

Class or Function Names
CmaEsSampler

Installation
pip install cmaes

Example
import optuna
from optuna.samplers import CmaEsSampler


def objective(trial):
    x = trial.suggest_float("x", -1, 1)
    y = trial.suggest_int("y", -1, 1)
    return x**2 + y


sampler = CmaEsSampler()
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=20)

Others
See the documentation for more details.

CMA-ES with User Prior

Abstract
The Optuna CMA-ES sampler does not offer a flexible way to initialize the parameters of its Gaussian distribution, so I created a workaround to do so.

Class or Function Names
UserPriorCmaEsSampler

In principle, most arguments follow optuna.samplers.CmaEsSampler, but some parts are modified. For example, UserPriorCmaEsSampler does not support source_trials and use_separable_cma due to their incompatibility. Instead, x0 and sigma0 in CmaEsSampler are replaced with mu0 and cov0. In CmaEsSampler, x0 had to be provided as a dict and sigma0 only as a float.
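Example
A sketch of the intended usage (the registry path samplers/user_prior_cmaes is an assumption, and the exact types accepted for mu0 and cov0 should be checked on the package page):

import numpy as np
import optuna
import optunahub


def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -5, 5)
    y = trial.suggest_float("y", -5, 5)
    return x**2 + y**2


module = optunahub.load_module("samplers/user_prior_cmaes")  # assumed path
# Prior belief: the optimum is near (1.0, -1.0), with unit covariance.
sampler = module.UserPriorCmaEsSampler(mu0=np.array([1.0, -1.0]), cov0=np.identity(2))
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=50)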

CMA-MAE Sampler

Abstract
This package provides a sampler using CMA-MAE as implemented in pyribs. CMA-MAE is a quality diversity algorithm that has demonstrated state-of-the-art performance in a variety of domains. Pyribs is a bare-bones Python library for quality diversity optimization algorithms. For a primer on CMA-MAE, quality diversity, and pyribs, we recommend referring to the series of pyribs tutorials. For simplicity, this implementation provides a default instantiation of CMA-MAE with a GridArchive and EvolutionStrategyEmitter with improvement ranking, all wrapped up in a Scheduler.
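For context, the pyribs building blocks named above combine roughly as follows (a sketch following the pyribs CMA-MAE tutorial; exact argument names may differ across pyribs versions, and this is not this sampler's own API):

import numpy as np
from ribs.archives import GridArchive
from ribs.emitters import EvolutionStrategyEmitter
from ribs.schedulers import Scheduler

# Archive with the CMA-MAE-specific learning rate and minimum threshold.
archive = GridArchive(
    solution_dim=10,
    dims=[20, 20],
    ranges=[(-1.0, 1.0), (-1.0, 1.0)],
    learning_rate=0.01,
    threshold_min=-10.0,
)
# Emitters using improvement ranking ("imp"), as described above.
emitters = [
    EvolutionStrategyEmitter(archive, x0=np.zeros(10), sigma0=0.5, ranker="imp")
    for _ in range(3)
]
# The scheduler wraps the archive and emitters, as in this package's default setup.
scheduler = Scheduler(archive, emitters)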

Demo Sampler

Class or Function Names
DemoSampler

Example
module = optunahub.load_module("samplers/demo")
sampler = module.DemoSampler(seed=42)

See example.py for more details.

Differential Evolution Sampler

Abstract
This implementation introduces a novel Differential Evolution (DE) sampler, tailored to optimize both numerical and categorical hyperparameters effectively. The DE sampler integrates a hybrid approach:
- Differential evolution for numerical parameters: exploiting DE's strengths, the sampler efficiently explores numerical parameter spaces through mutation, crossover, and selection mechanisms.
- Random sampling for categorical parameters: for categorical variables, the sampler employs random sampling, ensuring comprehensive coverage of discrete spaces.
The sampler also supports dynamic search spaces, enabling seamless adaptation to varying parameter dimensions during optimization. A usage sketch follows below.
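Example
A minimal usage sketch (the registry path samplers/differential_evolution and the class name DESampler are assumptions; check the package page for the exact names):

import optuna
import optunahub


def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -10, 10)  # numerical: handled by DE
    kind = trial.suggest_categorical("kind", ["a", "b"])  # categorical: random sampling
    return x**2 + (0.0 if kind == "a" else 1.0)


module = optunahub.load_module("samplers/differential_evolution")  # assumed path
study = optuna.create_study(sampler=module.DESampler())  # assumed class name
study.optimize(objective, n_trials=100)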

Differential Evolution with Hyperband (DEHB) Sampler

Class or Function Names
DEHBSampler
DEHBPruner

Installation
No additional installation is required for this sampler and pruner, but if you want to run the example.py script, you need to install the following package:
$ pip install scikit-learn

Example
sampler = DEHBSampler()
pruner = DEHBPruner(min_resource=1, max_resource=n_train_iter, reduction_factor=3)
study = optuna.create_study(sampler=sampler, pruner=pruner)

See example.py for a full example. Figures from the analysis of the optimization are available on the package page.

Others
Reference
Awad, N., Mallik, N., and Hutter, F. DEHB: Evolutionary Hyperband for Scalable, Robust and Efficient Hyperparameter Optimization. In Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI), 2021.

Ensembled Sampler

Installation
No additional packages are required.

Abstract
This package provides a sampler that ensembles multiple samplers. You can specify the list of samplers to be ensembled.

Class or Function Names
EnsembledSampler

Example
import optuna
import optunahub

mod = optunahub.load_module("samplers/ensembled")
samplers = [
    optuna.samplers.RandomSampler(),
    optuna.samplers.TPESampler(),
    optuna.samplers.CmaEsSampler(),
]
sampler = mod.EnsembledSampler(samplers)

See example.py for more details.

Evolutionary LLM Merge Sampler

Class or Function Names
EvoMergeSampler
EvoMergeTrial

Installation
pip install git+https://github.com/arcee-ai/mergekit.git
pip install sentencepiece accelerate protobuf bitsandbytes langchain langchain-community datasets
pip install pandas cmaes
export HF_TOKEN=xxx

Example
sampler = EvoMergeSampler(base_config="path/to/config/yml/file")
study = optuna.create_study(sampler=sampler)
for _ in range(100):
    trial = study.ask()
    evo_merge_trial = EvoMergeTrial(study, trial._trial_id)
    model = evo_merge_trial.suggest_model()
    acc = try_model(model)
    study.tell(trial, acc)
print(study.trials_dataframe(attrs=("number", "value")))

See example.py for a full example. You need a GPU with 16 GB of VRAM to run this example. Figures from the analysis of the optimization are available on the package page.

Gaussian Process-Based Sampler

Class or Function Names
GPSampler

Installation
pip install scipy torch

Example
import optuna
from optuna.samplers import GPSampler


def objective(trial):
    x = trial.suggest_float("x", -5, 5)
    return x**2


sampler = GPSampler()
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=100)

Others
See the documentation for more details.

Gaussian-Process Probability of Improvement from Maximum of Sample Path Sampler

Class or Function Names
PIMSSampler

Installation
$ pip install -r https://hub.optuna.org/samplers/gp_pims/requirements.txt

Example
Please see example.py.

Others
Reference
Shion Takeno, Yu Inatsu, Masayuki Karasuyama, Ichiro Takeuchi. Posterior Sampling-Based Bayesian Optimization with Tighter Bayesian Regret Bounds. Proceedings of the 41st International Conference on Machine Learning, PMLR 235:47510-47534, 2024.

Bibtex
@InProceedings{pmlr-v235-takeno24a,
  title     = {Posterior Sampling-Based {B}ayesian Optimization with Tighter {B}ayesian Regret Bounds},
  author    = {Takeno, Shion and Inatsu, Yu and Karasuyama, Masayuki and Takeuchi, Ichiro},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {47510--47534},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR}
}

Grey Wolf Optimization (GWO) Sampler

Class or Function Names
GreyWolfOptimizationSampler

Example
from __future__ import annotations

import matplotlib.pyplot as plt
import optuna
import optunahub

GreyWolfOptimizationSampler = optunahub.load_module(
    "samplers/grey_wolf_optimization"
).GreyWolfOptimizationSampler

if __name__ == "__main__":

    def objective(trial: optuna.trial.Trial) -> float:
        x = trial.suggest_float("x", -10, 10)
        y = trial.suggest_float("y", -10, 10)
        return x**2 + y**2

    # Note: `n_trials` should match the `n_trials` passed to `study.optimize`.
    sampler = GreyWolfOptimizationSampler(n_trials=100)
    study = optuna.create_study(sampler=sampler)
    study.optimize(objective, n_trials=sampler.n_trials)
    optuna.visualization.matplotlib.plot_optimization_history(study)
    plt.show()

Others
Reference
Mirjalili, S., Mirjalili, S. M., and Lewis, A. (2014). Grey Wolf Optimizer. Advances in Engineering Software, 69, 46-61.

Grid Search

Class or Function Names
GridSampler

Example
import optuna
from optuna.samplers import GridSampler


def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_int("y", -100, 100)
    return x**2 + y**2


search_space = {"x": [-50, 0, 50], "y": [-99, 0, 99]}
sampler = GridSampler(search_space)
study = optuna.create_study(sampler=sampler)
study.optimize(objective)

Others
See the documentation for more details.

HEBO (Heteroscedastic and Evolutionary Bayesian Optimisation)

Class or Function Names
HEBOSampler

Installation
# Install the dependencies.
pip install optunahub hebo

# NOTE: The following is optional. For a faster HEBOSampler, pymoo must be installed
# after NumPy, so we run this command to make sure the compiled version is installed.
pip install --upgrade pymoo

APIs
HEBOSampler(search_space: dict[str, BaseDistribution] | None = None, *, seed: int | None = None, constant_liar: bool = False, independent_sampler: BaseSampler | None = None)

search_space: Specifying search_space makes the sampling at each iteration slightly quicker, but this argument is not necessary to run this sampler. A usage sketch follows below.
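Example
A minimal usage sketch (assuming the registry path samplers/hebo; the constructor arguments follow the API description above):

import optuna
import optunahub


def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -10, 10)
    return x**2


module = optunahub.load_module("samplers/hebo")  # assumed registry path
sampler = module.HEBOSampler(seed=42)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=50)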

Hill Climb Local Search Sampler

Abstract
The hill climbing algorithm is an optimization technique that iteratively improves a solution by evaluating neighboring solutions in search of a local maximum or minimum. Starting from an initial guess, the algorithm examines nearby "neighbor" solutions, moving to a better neighbor whenever one is found. This process continues until no improvement is possible, resulting in a locally optimal solution. Hill climbing is efficient and easy to implement but can get stuck in local optima, making it suitable for simple optimization landscapes or applications with limited time constraints. A usage sketch follows below.
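Example
A minimal usage sketch (both the registry path samplers/hill_climb_search and the class name HillClimbingSampler are assumptions; check the package page for the exact names):

import optuna
import optunahub


def objective(trial: optuna.Trial) -> float:
    # Discrete parameters give hill climbing a well-defined neighborhood.
    x = trial.suggest_int("x", -10, 10)
    y = trial.suggest_int("y", -10, 10)
    return x**2 + y**2


module = optunahub.load_module("samplers/hill_climb_search")  # assumed path
study = optuna.create_study(sampler=module.HillClimbingSampler())  # assumed class name
study.optimize(objective, n_trials=50)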

Implicit Natural Gradient Sampler (INGO)

Class or Function Names
ImplicitNaturalGradientSampler

Example
import optuna
import optunahub


def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_float("y", -100, 100)
    return x**2 + y**2


def main() -> None:
    mod = optunahub.load_module("samplers/implicit_natural_gradient")
    sampler = mod.ImplicitNaturalGradientSampler()
    study = optuna.create_study(sampler=sampler)
    study.optimize(objective, n_trials=200)
    print(study.best_trial.value, study.best_trial.params)


if __name__ == "__main__":
    main()

Others
📝 A Natural Gradient-Based Optimization Algorithm Registered on OptunaHub: Blog post by Hiroki Takizawa. Benchmark results are presented in the post.

Mean Variance Analysis Scalarization Sampler

Class or Function Names
MeanVarianceAnalysisScalarizationSimulatorSampler

Installation
$ pip install scipy

Example
Please see example.ipynb.

Others
Reference
Iwazaki, Shogo, Yu Inatsu, and Ichiro Takeuchi. "Mean-variance analysis in Bayesian optimization under uncertainty." International Conference on Artificial Intelligence and Statistics. PMLR, 2021.

Bibtex
@inproceedings{iwazaki2021mean,
  title        = {Mean-variance analysis in Bayesian optimization under uncertainty},
  author       = {Iwazaki, Shogo and Inatsu, Yu and Takeuchi, Ichiro},
  booktitle    = {International Conference on Artificial Intelligence and Statistics},
  pages        = {973--981},
  year         = {2021},
  organization = {PMLR}
}

MOEA/D sampler

Abstract
Sampler using the MOEA/D algorithm. MOEA/D stands for "Multi-Objective Evolutionary Algorithm based on Decomposition." This sampler is specialized for multi-objective optimization: the objective function is internally decomposed into multiple single-objective subproblems to perform optimization. It may not work well with multi-threading, so check results carefully.

Class or Function Names
MOEADSampler

Installation
pip install scipy
or
pip install -r https://hub.optuna.org/samplers/moead/requirements.txt

Example
A sketch of a standard bi-objective setup (the objective below is completed as an assumption; see the package's example.py for the full version):

import optuna
import optunahub


def objective(trial: optuna.Trial) -> tuple[float, float]:
    x = trial.suggest_float("x", 0, 5)
    y = trial.suggest_float("y", 0, 3)
    return 4 * x**2 + 4 * y**2, (x - 5) ** 2 + (y - 5) ** 2


module = optunahub.load_module("samplers/moead")
sampler = module.MOEADSampler()
study = optuna.create_study(directions=["minimize", "minimize"], sampler=sampler)
study.optimize(objective, n_trials=100)

Multi-objective CMA-ES (MO-CMA-ES) Sampler

Abstract
MoCmaSampler provides an implementation of the s-MO-CMA-ES algorithm. This algorithm extends (1+1)-CMA-ES to multi-objective optimization by introducing a selection strategy based on non-dominated sorting and the contributing hypervolume (S-metric). It inherits important properties of CMA-ES: invariance against order-preserving transformations of the fitness function values and against rotation and translation of the search space.

Class or Function Names
MoCmaSampler(*, search_space: dict[str, BaseDistribution] | None = None, popsize: int | None = None, seed: int | None = None)

search_space: A dictionary containing the search space that defines the parameter space.
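Example
A minimal usage sketch (the registry path samplers/mocma is an assumption; check the package page for the exact path):

import optuna
import optunahub


def objective(trial: optuna.Trial) -> tuple[float, float]:
    x = trial.suggest_float("x", -5, 5)
    y = trial.suggest_float("y", -5, 5)
    return x**2 + y**2, (x - 2) ** 2 + (y - 2) ** 2


module = optunahub.load_module("samplers/mocma")  # assumed path
sampler = module.MoCmaSampler(seed=42)
study = optuna.create_study(directions=["minimize", "minimize"], sampler=sampler)
study.optimize(objective, n_trials=100)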

NelderMead Sampler

Abstract
This Nelder-Mead method implementation employs the effective initialization method proposed by Takenaga et al., 2023.

Class or Function Names
NelderMeadSampler

Installation
pip install -r https://hub.optuna.org/samplers/nelder_mead/requirements.txt

Example
A sketch of the basic usage (see the package's example.py for the full version):

from __future__ import annotations

import optuna
from optuna.distributions import FloatDistribution
import optunahub


def objective(x: float, y: float) -> float:
    return x**2 + y**2


def optuna_objective(trial: optuna.trial.Trial) -> float:
    x = trial.suggest_float("x", -5, 5)
    y = trial.suggest_float("y", -5, 5)
    return objective(x, y)


if __name__ == "__main__":
    # You can specify the search space before optimization.
    search_space = {
        "x": FloatDistribution(-5, 5),
        "y": FloatDistribution(-5, 5),
    }
    module = optunahub.load_module("samplers/nelder_mead")
    sampler = module.NelderMeadSampler(search_space=search_space, seed=42)
    study = optuna.create_study(sampler=sampler)
    study.optimize(optuna_objective, n_trials=100)

NSGAII sampler with Initial Trials

Abstract
When Optuna's built-in NSGAIISampler takes over a study whose earlier trials were produced by another sampler, it cannot use those trials as the first generation, and optimization starts from zero. This means that even if you already know good individuals, you cannot use them in the GA. In this implementation, the already-sampled results are included in the initial individuals of the GA to perform the optimization. Note, however, that as a consequence the implementation does not necessarily support multi-threading in the generation of the initial generation. A usage sketch follows below.
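Example
A minimal sketch of the intended workflow (the registry path samplers/nsgaii_with_initial_trials and the class name NSGAIIWithInitialTrialsSampler are assumptions; check the package page for the exact names):

import optuna
import optunahub


def objective(trial: optuna.Trial) -> tuple[float, float]:
    x = trial.suggest_float("x", -5, 5)
    y = trial.suggest_float("y", -5, 5)
    return x**2 + y**2, (x - 2) ** 2 + (y - 2) ** 2


study = optuna.create_study(directions=["minimize", "minimize"])
# Warm up the study with another sampler; these trials will seed the GA's
# initial generation.
study.sampler = optuna.samplers.TPESampler(seed=42)
study.optimize(objective, n_trials=30)

module = optunahub.load_module("samplers/nsgaii_with_initial_trials")  # assumed path
study.sampler = module.NSGAIIWithInitialTrialsSampler()  # assumed class name
study.optimize(objective, n_trials=70)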

NSGAII Search

Class or Function Names
NSGAIISampler

Example
import optuna
from optuna.samplers import NSGAIISampler


def objective(trial):
    x = trial.suggest_float("x", -5, 5)
    return x**2


sampler = NSGAIISampler()
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=10)

Others
See the documentation for more details.

NSGAIII Search

Class or Function Names
NSGAIIISampler

Example
import optuna
from optuna.samplers import NSGAIIISampler


def objective(trial):
    x = trial.suggest_float("x", -5, 5)
    return x**2


sampler = NSGAIIISampler()
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=10)

Others
See the documentation for more details.

NSGAIISampler Using TPESampler for the Initialization

Abstract
This sampler uses TPESampler instead of RandomSampler for the initialization of NSGAIISampler.

APIs
NSGAIIWithTPEWarmupSampler

This class has the same interface as Optuna's NSGAIISampler.

Example
from __future__ import annotations

import optuna
import optunahub


def objective(trial: optuna.Trial) -> tuple[float, float]:
    x = trial.suggest_float("x", -5, 5)
    y = trial.suggest_float("y", -5, 5)
    return x**2 + y**2, (x - 2) ** 2 + (y - 2) ** 2


package_name = "samplers/nsgaii_with_tpe_warmup"
sampler = optunahub.load_module(package_name).NSGAIIWithTPEWarmupSampler()
study = optuna.create_study(directions=["minimize", "minimize"], sampler=sampler)
study.optimize(objective, n_trials=100)

Partial Fixed Sampler

Class or Function Names
PartialFixedSampler

Example
import optuna
from optuna.samplers import PartialFixedSampler


def objective(trial):
    x = trial.suggest_float("x", -1, 1)
    y = trial.suggest_int("y", -1, 1)
    return x**2 + y


tpe_sampler = optuna.samplers.TPESampler()
fixed_params = {"y": 0}
partial_sampler = PartialFixedSampler(fixed_params, tpe_sampler)
study = optuna.create_study(sampler=partial_sampler)
study.optimize(objective, n_trials=10)

Others
See the documentation for more details.

PFNs4BO sampler

Class or Function Names
PFNs4BOSampler

Installation
pip install -r https://hub.optuna.org/samplers/pfns4bo/requirements.txt

Example
from __future__ import annotations

import optuna
import optunahub

module = optunahub.load_module("samplers/pfns4bo")
PFNs4BOSampler = module.PFNs4BOSampler


def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2


if __name__ == "__main__":
    study = optuna.create_study(sampler=PFNs4BOSampler())
    study.optimize(objective, n_trials=100)
    print(study.best_params)
    print(study.best_value)

See example.py for a full example. You need a GPU to run this example. The figures on the package page show experimental results comparing PFNs4BO with random search.

PLMBO (Preference Learning Multi-Objective Bayesian Optimization)

Class or Function Names
PLMBOSampler

Installation
pip install -r https://hub.optuna.org/samplers/plmbo/requirements.txt

Example
The beginning of the example is shown below as a sketch; see the package's example.py for the full script.

from __future__ import annotations

import matplotlib.pyplot as plt
import numpy as np
import optuna
import optunahub
from optuna.distributions import FloatDistribution

PLMBOSampler = optunahub.load_module(  # type: ignore
    "samplers/plmbo",
).PLMBOSampler

if __name__ == "__main__":
    f_sigma = 0.01

    def obj_func1(x):
        return np.sin(x[0]) + x[1]

    def obj_func2(x):
        return -np.sin(x[0]) - x[1] + 0.1

    def obs_obj_func(x):
        return np.array(
            [
                obj_func1(x) + np.random.normal(0, f_sigma),
                obj_func2(x) + np.random.normal(0, f_sigma),
            ]
        )

PyCMA Sampler

Class or Function Names
PyCmaSampler

Installation
pip install optuna-integration cma

Example
from optuna_integration import PyCmaSampler

sampler = PyCmaSampler()

Others
See the documentation for more details.

QMC Search

Class or Function Names
QMCSampler

Example
import optuna
from optuna.samplers import QMCSampler


def objective(trial):
    x = trial.suggest_float("x", -5, 5)
    return x**2


sampler = QMCSampler()
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=10)

Others
See the documentation for more details.

Random Search

Class or Function Names
RandomSampler

Example
import optuna
from optuna.samplers import RandomSampler


def objective(trial):
    x = trial.suggest_float("x", -5, 5)
    return x**2


sampler = RandomSampler()
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=10)

Others
See the documentation for more details.

Sampler Using Multi-Armed Bandit Epsilon-Greedy Algorithm

Class or Function Names
MABEpsilonGreedySampler

Example
mod = optunahub.load_module("samplers/mab_epsilon_greedy")
sampler = mod.MABEpsilonGreedySampler()

See example.py for more details.

Others
This package provides a sampler based on the multi-armed bandit algorithm with epsilon-greedy arm selection.

Sampler using Whale Optimization Algorithm

Class or Function Names
WhaleOptimizationSampler

Example
from __future__ import annotations

import matplotlib.pyplot as plt
import optuna
import optunahub

WhaleOptimizationSampler = optunahub.load_module(
    "samplers/whale_optimization"
).WhaleOptimizationSampler

if __name__ == "__main__":

    def objective(trial: optuna.trial.Trial) -> float:
        x = trial.suggest_float("x", -10, 10)
        y = trial.suggest_float("y", -10, 10)
        return x**2 + y**2

    sampler = WhaleOptimizationSampler()
    study = optuna.create_study(sampler=sampler)
    study.optimize(objective, n_trials=100)
    optuna.visualization.matplotlib.plot_optimization_history(study)
    plt.show()

Others
Reference
Mirjalili, Seyedali, and Lewis, Andrew. (2016). The Whale Optimization Algorithm. Advances in Engineering Software, 95, 51-67.

Simple Sampler

SimpleBaseSampler has been moved to optunahub.samplers. Please use optunahub.samplers.SimpleBaseSampler instead of this package.

Class or Function Names
SimpleBaseSampler

Example
import optunahub


class UserDefinedSampler(optunahub.samplers.SimpleBaseSampler):
    ...

See example.py for more details.

Others
This package provides an easy-to-use sampler base class for implementing custom samplers. You can make your own sampler simply by inheriting SimpleBaseSampler and implementing the necessary methods, as sketched below.
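A minimal sketch of a custom sampler built on SimpleBaseSampler: the core method to implement is sample_relative, which maps a search space to concrete parameter values (this sketch only handles float parameters).

from typing import Any

import numpy as np
import optuna
import optunahub


class MyRandomSampler(optunahub.samplers.SimpleBaseSampler):
    def sample_relative(
        self,
        study: optuna.study.Study,
        trial: optuna.trial.FrozenTrial,
        search_space: dict[str, optuna.distributions.BaseDistribution],
    ) -> dict[str, Any]:
        # search_space is empty for the very first trial.
        params = {}
        for name, dist in search_space.items():
            if isinstance(dist, optuna.distributions.FloatDistribution):
                params[name] = np.random.uniform(dist.low, dist.high)
            else:
                raise NotImplementedError(f"Unsupported distribution: {dist}")
        return params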

Simulated Annealing Sampler

Class or Function Names
SimulatedAnnealingSampler

Example
mod = optunahub.load_module("samplers/simulated_annealing")
sampler = mod.SimulatedAnnealingSampler()

See example.py for more details. You can run the example in Google Colab.

Others
This package provides a sampler based on the simulated annealing algorithm. For more details, see the documentation.

SMAC3

APIs
A sampler that uses SMAC3 v2.2.0, verified by unit tests that can be run as follows:
$ pip install pytest optunahub smac
$ python -m pytest package/samplers/smac_sampler/tests/

Please check the API reference for more details: https://automl.github.io/SMAC3/main/5_api.html

SMACSampler(search_space: dict[str, BaseDistribution], n_trials: int = 100, seed: int | None = None, *, surrogate_model_type: str = "rf", acq_func_type: str = "ei_log", init_design_type: str = "sobol", surrogate_model_rf_num_trees: int = 10, surrogate_model_rf_ratio_features: float = 1.0, ...)

A usage sketch follows below.
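Example
A minimal usage sketch (assuming the registry path samplers/smac_sampler; note that search_space is a required argument of SMACSampler):

import optuna
import optunahub
from optuna.distributions import FloatDistribution


def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -10, 10)
    return x**2


module = optunahub.load_module("samplers/smac_sampler")  # assumed registry path
sampler = module.SMACSampler(
    search_space={"x": FloatDistribution(-10, 10)},
    n_trials=50,  # should match the n_trials passed to study.optimize
)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=50)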

TPE Sampler

Class or Function Names
TPESampler

Example
import optuna
from optuna.samplers import TPESampler


def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return x**2


sampler = TPESampler()
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=10)

Others
See the documentation for more details.

Tree-Structured Parzen Estimator: Understanding Its Algorithm Components and Their Roles for Better Empirical Performance

Abstract
This package aims to reproduce the TPE algorithm used in the paper: Tree-Structured Parzen Estimator: Understanding Its Algorithm Components and Their Roles for Better Empirical Performance. The default parameter set of this sampler is the recommended setup from the paper, and the experiments in the paper can also be reproduced with this sampler.

Class or Function Names
CustomizableTPESampler

Installation
This package requires Optuna v4.0.0 or later.
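Example
A minimal usage sketch (the registry path samplers/tpe_tutorial is an assumption; the class name CustomizableTPESampler is taken from the section above):

import optuna
import optunahub


def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -5, 5)
    return x**2


module = optunahub.load_module("samplers/tpe_tutorial")  # assumed registry path
study = optuna.create_study(sampler=module.CustomizableTPESampler())
study.optimize(objective, n_trials=50)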

Uniform Design Sampler

Abstract
This package provides an implementation of the uniform design (UD) algorithm. UD is a variety of design-of-experiment (DOE) methods, and it has better sample efficiency than simple random sampling.

Class or Function Names
UniformDesignSampler

Installation
$ pip install -r https://hub.optuna.org/samplers/uniform_design/requirements.txt

Example
A sketch of the basic usage (the sampler's constructor arguments below are an assumption; see the package's example.py for the full version):

import optuna
import optunahub
from optuna.distributions import FloatDistribution

module = optunahub.load_module("samplers/uniform_design")
UniformDesignSampler = module.UniformDesignSampler


def objective(trial):
    x = trial.suggest_float("x", 0, 1)
    y = trial.suggest_float("y", 0, 1)
    return x**2 + y**2


search_space = {"x": FloatDistribution(0, 1), "y": FloatDistribution(0, 1)}
sampler = UniformDesignSampler(search_space)  # assumed constructor arguments
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=30)