HypE Sampler

Abstract HypE (Hypervolume Estimation Algorithm) is a fast hypervolume-based evolutionary algorithm designed for many-objective optimization problems. Unlike traditional hypervolume-based methods that become computationally expensive as the number of objectives grows, HypE uses Monte Carlo sampling to efficiently estimate hypervolume contributions. It employs a greedy selection strategy that preferentially retains individuals with higher hypervolume contributions, enabling effective convergence toward the Pareto front.

APIs

HypESampler(*, population_size=50, n_samples=4096, mutation=None, mutation_prob=None, crossover=None, crossover_prob=0.9, hypervolume_method="auto", seed=None)

population_size: Size of the population for the evolutionary algorithm.
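A minimal usage sketch with Optuna, following the signature above. The OptunaHub package path "samplers/hype" used below is an assumption; only parameters listed in the signature are passed.

```python
# Minimal usage sketch (assumption: the OptunaHub package path "samplers/hype";
# the sampler class and its parameters follow the signature above).
import optuna
import optunahub


def objective(trial: optuna.Trial) -> tuple[float, float]:
    # A toy bi-objective problem; HypE also targets many-objective cases.
    x = trial.suggest_float("x", 0.0, 5.0)
    y = trial.suggest_float("y", 0.0, 5.0)
    return x**2 + y, (x - 2.0) ** 2 + (y - 1.0) ** 2


mod = optunahub.load_module("samplers/hype")  # package path is an assumption
sampler = mod.HypESampler(population_size=50, n_samples=4096, seed=42)

study = optuna.create_study(directions=["minimize", "minimize"], sampler=sampler)
study.optimize(objective, n_trials=200)
print(len(study.best_trials), "Pareto-optimal trials found")
```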

NSGAII sampler with Initial Trials

Abstract Optuna’s built-in NSGAII sampler cannot use trials obtained from another sampler as its first generation: if it continues such a study, those trials are ignored and optimization starts from scratch. This means that even if good individuals are already known, they cannot be used to seed the GA. In this implementation, the already sampled results are included in the initial individuals of the GA. Note, however, that as a side effect the implementation does not necessarily support multi-threaded sampling while the initial generation is being produced.
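A hedged sketch of the warm-start workflow: collect trials with a different sampler first, then continue the same study with this sampler so the existing trials seed its initial population. The OptunaHub package path "samplers/nsgaii_with_initial_trials", the class name NSGAIIwITSampler, and its constructor parameters are assumptions.

```python
# Hedged sketch of the warm-start workflow described above. Assumptions:
# the OptunaHub package path "samplers/nsgaii_with_initial_trials", the class
# name NSGAIIwITSampler, and that it accepts the same population_size and
# seed arguments as optuna.samplers.NSGAIISampler.
import optuna
import optunahub


def objective(trial: optuna.Trial) -> tuple[float, float]:
    x = trial.suggest_float("x", -5.0, 5.0)
    y = trial.suggest_float("y", -5.0, 5.0)
    return x**2 + y**2, (x - 1.0) ** 2 + (y - 1.0) ** 2


# Phase 1: collect trials with another sampler (these are the "already
# sampled results" that will seed the GA's initial individuals).
study = optuna.create_study(
    directions=["minimize", "minimize"],
    sampler=optuna.samplers.RandomSampler(seed=0),
)
study.optimize(objective, n_trials=50)

# Phase 2: continue the same study with the warm-start NSGA-II sampler.
mod = optunahub.load_module("samplers/nsgaii_with_initial_trials")
study.sampler = mod.NSGAIIwITSampler(population_size=25, seed=0)
study.optimize(objective, n_trials=200)
```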

SPEAII sampler

Abstract SPEA-II (Strength Pareto Evolutionary Algorithm 2) is an improved multi-objective evolutionary algorithm that differs from NSGA-II in its selection mechanism. While NSGA-II uses non-dominated sorting and crowding distance, SPEA-II maintains an external archive to preserve elite non-dominated solutions and uses a fine-grained fitness assignment strategy based on the strength of domination. Note that when using warm-start with existing trials, the initial generation may not support concurrent sampling. After the initial generation, the implementation follows standard evolutionary algorithm parallelization.
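A minimal usage sketch for this sampler. The OptunaHub package path "samplers/speaii", the class name SPEAIISampler, and its constructor parameters (population_size, seed) are assumptions.

```python
# Minimal usage sketch. Assumptions: the OptunaHub package path
# "samplers/speaii", the class name SPEAIISampler, and its constructor
# parameters (population_size, seed).
import optuna
import optunahub


def objective(trial: optuna.Trial) -> tuple[float, float]:
    x = trial.suggest_float("x", 0.0, 1.0)
    y = trial.suggest_float("y", 0.0, 1.0)
    return x, (1.0 - x) * (1.0 + y)


mod = optunahub.load_module("samplers/speaii")
sampler = mod.SPEAIISampler(population_size=50, seed=1)

study = optuna.create_study(directions=["minimize", "minimize"], sampler=sampler)
study.optimize(objective, n_trials=200)
```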