Abstract
This package automatically selects an appropriate sampler for the provided search space, based on recommendations from the Optuna developers. The following articles provide detailed information about AutoSampler.
📰 AutoSampler: Automatic Selection of Optimization Algorithms in Optuna
📰 AutoSampler: Full Support for Multi-Objective & Constrained Optimization

Class or Function Names
AutoSampler

This sampler currently accepts only seed and constraints_func. constraints_func enables users to handle constraints alongside the objective function. These arguments follow the same convention as Optuna's other samplers, so please refer to the Optuna API reference for details.
Abstract
ConfOptSampler provides flexible and robust hyperparameter optimization via calibrated quantile-regression surrogates.
It supports the following acquisition functions:
Thompson Sampling
Optimistic Bayesian Sampling
Expected Improvement

It is robust to heteroskedastic, skewed, non-normal, and highly categorical environments where traditional GPs might fail.
Its single-fidelity performance on popular HPO benchmarks puts it consistently ahead of TPE and SMAC, and ahead of GPs when the search space contains categorical hyperparameters.
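As a toy illustration of the underlying idea (this is not the package's implementation): quantile-based Thompson-style selection only needs empirical quantiles of observed scores, so skewed, non-normal noise poses no particular problem. Below, three hypothetical candidates with exponentially skewed noise are compared by drawing a random quantile level each round and picking the candidate whose empirical quantile is lowest:

```python
import numpy as np

rng = np.random.default_rng(0)
# Three hypothetical candidates; lower is better, arm 1 has the best mean.
true_means = np.array([3.0, 1.0, 2.0])
# Skewed (exponential) observation noise; 5 warm-up observations per arm.
obs = [list(true_means[i] + rng.exponential(1.0, 5)) for i in range(3)]

def thompson_pick(obs, rng):
    # Draw a random quantile level, then pick the arm whose empirical
    # quantile at that level is lowest (optimistic at low levels,
    # conservative at high levels).
    q = rng.uniform(0.05, 0.95)
    return int(np.argmin([np.quantile(o, q) for o in obs]))

for _ in range(200):
    arm = thompson_pick(obs, rng)
    obs[arm].append(float(true_means[arm] + rng.exponential(1.0)))

# The best arm tends to accumulate the most observations over time.
counts = [len(o) for o in obs]
```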
API
The ConfOptSampler class takes the following parameters:
Abstract
HypE (Hypervolume Estimation Algorithm) is a fast hypervolume-based evolutionary algorithm designed for many-objective optimization problems.
Unlike traditional hypervolume-based methods that become computationally expensive with increasing objectives, HypE uses Monte Carlo sampling to efficiently estimate hypervolume contributions.
It employs a greedy selection strategy that preferentially retains individuals with higher hypervolume contributions, enabling effective convergence toward the Pareto front.
APIs
HypESampler(*, population_size=50, n_samples=4096, mutation=None, mutation_prob=None, crossover=None, crossover_prob=0.9, hypervolume_method="auto", seed=None)

population_size: Size of the population for the evolutionary algorithm.
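The Monte Carlo hypervolume estimation mentioned above can be sketched in a few lines (a toy sketch, not the package's code; minimization, with a reference point assumed to be dominated by all solutions):

```python
import numpy as np

def mc_hypervolume(points, ref, n_samples=200_000, seed=0):
    # Estimate the hypervolume dominated by `points` w.r.t. `ref` by
    # uniform sampling in the box [min(points), ref] and counting the
    # fraction of samples weakly dominated by at least one point.
    rng = np.random.default_rng(seed)
    points = np.asarray(points, dtype=float)
    ref = np.asarray(ref, dtype=float)
    lo = points.min(axis=0)
    samples = rng.uniform(lo, ref, size=(n_samples, len(ref)))
    dominated = (points[None, :, :] <= samples[:, None, :]).all(axis=2).any(axis=1)
    return dominated.mean() * np.prod(ref - lo)

# 2-D staircase front whose exact hypervolume w.r.t. (4, 4) is 6.0
est = mc_hypervolume([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0]], [4.0, 4.0])
```

Per-individual contributions (the hypervolume lost when one individual is removed) can be estimated the same way, which is what the greedy selection described above relies on.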
Abstract
The Multi-dimensional Knapsack Problem (MKP) is a fundamental combinatorial optimization problem that generalizes the classic knapsack problem to multiple dimensions. In this problem, each item has multiple attributes (e.g., weight, volume, size), and the goal is to maximize the total value of selected items while satisfying a constraint on each attribute. Despite its conceptual simplicity, the MKP is NP-hard and appears frequently in real-world applications such as resource allocation, capital budgeting, and project selection, as remarked in recent surveys.
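A tiny concrete instance (all numbers made up) pins the definition down; at this size brute-force enumeration suffices: maximize total value subject to a capacity in every dimension.

```python
from itertools import product

# Hypothetical MKP instance: 4 items, 2 constraint dimensions.
values  = [10, 13, 7, 8]
weights = [[3, 4, 2, 3],   # dimension 0 (e.g. weight), capacity 7
           [2, 3, 3, 2]]   # dimension 1 (e.g. volume), capacity 6
caps = [7, 6]

best_value, best_sel = 0, None
for sel in product([0, 1], repeat=len(values)):  # sel[i] = 1 iff item i taken
    # Feasible iff the capacity is respected in every dimension.
    if all(sum(w * s for w, s in zip(weights[d], sel)) <= caps[d]
           for d in range(len(caps))):
        v = sum(val * s for val, s in zip(values, sel))
        if v > best_value:
            best_value, best_sel = v, sel

# Here the optimum takes items 0 and 1 for a total value of 23.
```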
Abstract
When Optuna's built-in NSGAII sampler continues a study whose existing trials were produced by another sampler, those trials are not used as the first generation, and the optimization starts from scratch. This means that even if you already know good individuals, you cannot seed the GA with them.
In this implementation, the already sampled results are included in the initial individuals of the GA to perform the optimization.
Note, however, that as a side effect, the implementation does not necessarily support multi-threaded sampling while the initial generation is being produced.
Abstract
CatCMA with Margin [Hamano et al. 2025]
CatCMA with Margin (CatCMAwM) is a method for mixed-variable optimization problems, simultaneously optimizing continuous, integer, and categorical variables. CatCMAwM extends CatCMA by introducing a novel integer-handling mechanism, and supports arbitrary combinations of continuous, integer, and categorical variables in a unified framework. This Optuna sampler uses https://github.com/CyberAgentAILab/cmaes under the hood, so please refer to it for details.
APIs
CatCmawmSampler
Example

from __future__ import annotations

import numpy as np
import optuna
import optunahub


def SphereIntCOM(x: np.ndarray
Abstract
Particle Swarm Optimization (PSO) is a population-based stochastic optimizer inspired by flocking behavior, where particles iteratively adjust their positions using personal and global bests to search for optima. This sampler supports single-objective, continuous optimization only.
Note: Categorical distributions are suggested by the underlying RandomSampler.
Note: Multi-objective optimization is not supported.
Note: PSO Sampler cannot process dynamic changes to the search space.
For details on the algorithm, see Kennedy and Eberhart (1995): Particle Swarm Optimization.
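The update rule described above (each particle's velocity is pulled toward its personal best and the swarm's global best) can be sketched in plain NumPy; this is a toy sketch with conventional made-up coefficients, not this sampler's code:

```python
import numpy as np

def pso_minimize(f, lb, ub, n_particles=20, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lb)
    x = rng.uniform(lb, ub, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()            # global best
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + pull toward personal best + pull toward global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

# Minimize the 2-D sphere function.
best_x, best_f = pso_minimize(lambda p: float(np.sum(p**2)),
                              np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
```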
Abstract
SPEA-II (Strength Pareto Evolutionary Algorithm 2) is an improved multi-objective evolutionary algorithm that differs from NSGA-II in its selection mechanism. While NSGA-II uses non-dominated sorting and crowding distance, SPEA-II maintains an external archive to preserve elite non-dominated solutions and uses a fine-grained fitness assignment strategy based on the strength of domination.
Note that when using warm-start with existing trials, the initial generation may not support concurrent sampling. After the initial generation, the implementation follows standard evolutionary algorithm parallelization.
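The strength-based fitness assignment can be illustrated as follows (a toy sketch of the raw-fitness part only, not the package's code): the strength S(i) counts how many solutions i dominates, and the raw fitness of j sums the strengths of j's dominators, so a raw fitness of 0 marks a non-dominated solution:

```python
import numpy as np

def spea2_raw_fitness(F):
    # F: (n, m) objective matrix, minimization assumed.
    n = len(F)
    # dom[i, j] is True iff solution i dominates solution j.
    dom = ((F[:, None, :] <= F[None, :, :]).all(axis=2)
           & (F[:, None, :] < F[None, :, :]).any(axis=2))
    strength = dom.sum(axis=1)  # S(i): number of solutions i dominates
    # R(j): sum of strengths of all solutions dominating j.
    return np.array([strength[dom[:, j]].sum() for j in range(n)])

# Three non-dominated points plus one point dominated by (2, 2).
F = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
raw = spea2_raw_fitness(F)
```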
Abstract
This package provides test suites for the C-DTLZ problems (Jain & Deb, 2014), a constrained version of the DTLZ problems (Deb et al., 2001). The DTLZ problems are a set of continuous multi-objective optimization problems consisting of seven types, each supporting a variable number of objectives and variables. The C-DTLZ problems extend the DTLZ problems by adding various types of constraints to some of them. The objective functions are wrapped from the DTLZ test suite in optproblems, while the constraint components are implemented separately according to the original paper (Jain & Deb, 2014).
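For orientation, the unconstrained DTLZ2 objective that several of the constrained variants build on can be written compactly (a sketch following the standard definition; the package itself wraps optproblems rather than reimplementing this):

```python
import numpy as np

def dtlz2(x, n_obj=3):
    # x in [0, 1]^n; the last n - (n_obj - 1) variables form the
    # distance function g, which is 0 on the Pareto front (x_i = 0.5).
    x = np.asarray(x, dtype=float)
    g = np.sum((x[n_obj - 1:] - 0.5) ** 2)
    f = []
    for i in range(n_obj):
        v = 1.0 + g
        v *= np.prod(np.cos(0.5 * np.pi * x[: n_obj - 1 - i]))
        if i > 0:
            v *= np.sin(0.5 * np.pi * x[n_obj - 1 - i])
        f.append(v)
    return np.array(f)

# On the front, the objectives lie on the unit sphere: sum(f**2) == 1.
f = dtlz2(np.full(12, 0.5))
```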
Abstract
Hyperparameter optimization is crucial to achieving high performance in deep learning. Beyond raw performance, other criteria such as inference time or memory footprint often need to be optimized for practical reasons. This motivates research on multi-objective optimization (MOO). However, Pareto fronts of MOO methods are often reported without accounting for the variability caused by random seeds, making it difficult to evaluate performance stability. This package provides an empirical attainment surface implementation based on the original implementation.
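The underlying notion can be sketched quickly (a toy sketch, not the package's API): for bi-objective minimization, a point z is attained by a run if some solution of that run weakly dominates z, and the empirical attainment function is the fraction of runs attaining z:

```python
import numpy as np

# Three hypothetical "runs" (nondominated fronts from different seeds).
runs = [np.array([[1.0, 3.0], [2.0, 1.5]]),
        np.array([[1.5, 2.5], [3.0, 1.0]]),
        np.array([[1.0, 2.0], [2.5, 1.0]])]

def attainment_freq(z, runs):
    # Fraction of runs containing a point that weakly dominates z.
    hits = sum((pts <= z).all(axis=1).any() for pts in runs)
    return hits / len(runs)

# (2, 2) is attained by runs 1 and 3 here, i.e. with frequency 2/3.
p = attainment_freq(np.array([2.0, 2.0]), runs)
```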