Multi-Objective Optimization

MOEA/D sampler

Abstract

A sampler using the MOEA/D algorithm. MOEA/D stands for “Multi-Objective Evolutionary Algorithm based on Decomposition”. This sampler is specialized for multi-objective optimization: the objective function is internally decomposed into multiple single-objective subproblems that are optimized simultaneously. It may not work well with multi-threading, so check results carefully.

APIs

MOEADSampler(*, population_size=100, n_neighbors=None, scalar_aggregation_func="tchebycheff", mutation=None, mutation_prob=None, crossover=None, crossover_prob=0.9, seed=None)

n_neighbors: The number of weight vectors in the neighborhood of each weight vector.
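For context, a minimal usage sketch, assuming the package is loaded from "samplers/moead" on OptunaHub; the toy objective, population size, and trial budget below are illustrative, not part of the package:

```python
import optuna
import optunahub

# Load the MOEA/D sampler from OptunaHub (assumed module path).
module = optunahub.load_module("samplers/moead")


def objective(trial: optuna.Trial) -> tuple[float, float]:
    # Illustrative two-objective problem.
    x = trial.suggest_float("x", 0, 5)
    y = trial.suggest_float("y", 0, 3)
    return 4 * x**2 + 4 * y**2, (x - 5) ** 2 + (y - 5) ** 2


sampler = module.MOEADSampler(
    population_size=20,  # illustrative; the default is 100
    scalar_aggregation_func="tchebycheff",
    seed=42,
)
study = optuna.create_study(directions=["minimize", "minimize"], sampler=sampler)
study.optimize(objective, n_trials=100)
```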

Multi-objective CMA-ES (MO-CMA-ES) Sampler

Abstract

MoCmaSampler provides an implementation of the s-MO-CMA-ES algorithm. This algorithm extends (1+1)-CMA-ES to multi-objective optimization by introducing a selection strategy based on non-dominated sorting and the contributing hypervolume (S-metric). It inherits important properties of CMA-ES: invariance against order-preserving transformations of the fitness function values and against rotation and translation of the search space.

Class or Function Names

MoCmaSampler(*, search_space: dict[str, BaseDistribution] | None = None, popsize: int | None = None, seed: int | None = None)

search_space: A dictionary containing the search space that defines the parameter space.
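A minimal sketch following the same pattern, assuming the module path "samplers/mocma"; the toy objective, bounds, and trial budget are illustrative:

```python
import optuna
import optunahub
from optuna.distributions import FloatDistribution

module = optunahub.load_module("samplers/mocma")  # assumed module path


def objective(trial: optuna.Trial) -> tuple[float, float]:
    x = trial.suggest_float("x", 0, 5)
    y = trial.suggest_float("y", 0, 3)
    return 4 * x**2 + 4 * y**2, (x - 5) ** 2 + (y - 5) ** 2


# Passing the search space explicitly via `search_space`; if it is omitted,
# the sampler would have to infer the space from the study's past trials.
sampler = module.MoCmaSampler(
    search_space={"x": FloatDistribution(0, 5), "y": FloatDistribution(0, 3)},
    seed=42,
)
study = optuna.create_study(directions=["minimize", "minimize"], sampler=sampler)
study.optimize(objective, n_trials=100)
```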

PLMBO (Preference Learning Multi-Objective Bayesian Optimization)

Class or Function Names

PLMBOSampler

Installation

pip install -r https://hub.optuna.org/samplers/plmbo/requirements.txt

Example

```python
from __future__ import annotations

import matplotlib.pyplot as plt

import optuna
import optunahub
from optuna.distributions import FloatDistribution

import numpy as np

PLMBOSampler = optunahub.load_module(  # type: ignore
    "samplers/plmbo",
).PLMBOSampler

if __name__ == "__main__":
    f_sigma = 0.01

    def obj_func1(x):
        return np.sin(x[0]) + x[1]

    def obj_func2(x):
        return -np.sin(x[0]) - x[1] + 0.1

    def obs_obj_func(x):
        return np.array(
            [
                obj_func1(x) + np.random.normal(0, f_sigma),
                obj_func2(x) + np.random.normal(0, f_sigma),
            ]
        )
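    # The source excerpt is truncated at this point. What follows is a hedged
    # continuation sketch, not the package's original example: the search
    # space bounds, sampler constructor arguments, study directions, and
    # trial budget are all illustrative assumptions.
    def objective(trial: optuna.Trial) -> tuple[float, float]:
        x0 = trial.suggest_float("x0", 0.0, np.pi)
        x1 = trial.suggest_float("x1", 0.0, 1.0)
        v = obs_obj_func(np.array([x0, x1]))
        return float(v[0]), float(v[1])

    sampler = PLMBOSampler()  # constructor arguments, if any, are assumed
    study = optuna.create_study(directions=["minimize", "minimize"], sampler=sampler)
    study.optimize(objective, n_trials=30)
```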

Plot Hypervolume History for Multiple Studies

Class or Function Names

plot_hypervolume_history

Example

```python
import optuna
import optunahub


def objective(trial: optuna.Trial) -> tuple[float, float]:
    x = trial.suggest_float("x", 0, 5)
    y = trial.suggest_float("y", 0, 3)
    v0 = 4 * x**2 + 4 * y**2
    v1 = (x - 5) ** 2 + (y - 5) ** 2
    return v0, v1


samplers = [
    optuna.samplers.RandomSampler(),
    optuna.samplers.TPESampler(),
    optuna.samplers.NSGAIISampler(),
]
studies = []
for sampler in samplers:
    study = optuna.create_study(
        sampler=sampler,
        study_name=f"{sampler.__class__.__name__}",
        directions=["minimize", "minimize"],
    )
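    # The source excerpt is truncated at this point. The rest of this block
    # is a hedged completion: the trial budget, the module path, and the
    # reference point are assumptions, not taken from the package.
    study.optimize(objective, n_trials=100)
    studies.append(study)

mod = optunahub.load_module("visualization/plot_hypervolume_history_multi")
reference_point = [150.0, 60.0]  # illustrative: worse than all observed values
fig = mod.plot_hypervolume_history(studies, reference_point)
fig.show()
```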

Plot Hypervolume History with Reference Point

Class or Function Names

plot_hypervolume_history

Example

```python
mod = optunahub.load_module("visualization/plot_hypervolume_history_with_rp")
mod.plot_hypervolume_history(study, reference_point)
```

See example.py for more details.
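A fuller, hedged sketch of the workflow, reusing the toy two-objective problem from the neighboring examples; the study setup, trial budget, and reference point values are illustrative assumptions rather than part of the package:

```python
import optuna
import optunahub


def objective(trial: optuna.Trial) -> tuple[float, float]:
    x = trial.suggest_float("x", 0, 5)
    y = trial.suggest_float("y", 0, 3)
    return 4 * x**2 + 4 * y**2, (x - 5) ** 2 + (y - 5) ** 2


study = optuna.create_study(directions=["minimize", "minimize"])
study.optimize(objective, n_trials=50)  # illustrative budget

mod = optunahub.load_module("visualization/plot_hypervolume_history_with_rp")
# The reference point should be worse than all observed objective values so
# that every trial contributes to the hypervolume; the values are illustrative.
reference_point = [150.0, 60.0]
# Assuming the function returns a figure object, as Optuna's built-in
# plot_hypervolume_history does.
fig = mod.plot_hypervolume_history(study, reference_point)
fig.show()
```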

Plot Pareto Front for Multiple Studies

Class or Function Names

plot_pareto_front

Example

```python
import optuna
import optunahub


def objective(trial: optuna.Trial) -> tuple[float, float]:
    x = trial.suggest_float("x", 0, 5)
    y = trial.suggest_float("y", 0, 3)
    v0 = 4 * x**2 + 4 * y**2
    v1 = (x - 5) ** 2 + (y - 5) ** 2
    return v0, v1


samplers = [
    optuna.samplers.RandomSampler(),
    optuna.samplers.TPESampler(),
    optuna.samplers.NSGAIISampler(),
]
studies = []
for sampler in samplers:
    study = optuna.create_study(
        sampler=sampler,
        study_name=f"{sampler.__class__.__name__}",
        directions=["minimize", "minimize"],
    )
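    # The source excerpt is truncated at this point. The rest of this block
    # is a hedged completion: the trial budget and the module path are
    # assumptions, not taken from the package.
    study.optimize(objective, n_trials=100)
    studies.append(study)

mod = optunahub.load_module("visualization/plot_pareto_front_multi")
fig = mod.plot_pareto_front(studies)
fig.show()
```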

Visualizing Variability of Pareto Fronts over Multiple Runs (Empirical Attainment Surface)

Abstract

Hyperparameter optimization is crucial to achieving high performance in deep learning. Beyond predictive performance, other criteria such as inference time or memory requirements often need to be optimized for practical reasons. This motivates research on multi-objective optimization (MOO). However, Pareto fronts of MOO methods are often shown without considering the variability caused by random seeds, which makes it difficult to evaluate performance stability. This package provides an empirical attainment surface implementation based on the original implementation.
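Since the package targets variability across repeated runs, here is a hedged sketch of the data-collection side, reusing the toy problem from the examples above. The sampler choice, number of seeds, and trial budget are illustrative, and the final plotting call is deliberately left as a comment because the exact module path and function name should be taken from the package page:

```python
import optuna
import optunahub


def objective(trial: optuna.Trial) -> tuple[float, float]:
    x = trial.suggest_float("x", 0, 5)
    y = trial.suggest_float("y", 0, 3)
    return 4 * x**2 + 4 * y**2, (x - 5) ** 2 + (y - 5) ** 2


# Repeat the same optimization under different seeds so that the
# seed-induced variability of the attained Pareto fronts can be visualized.
studies = []
for seed in range(5):  # illustrative number of repeated runs
    study = optuna.create_study(
        sampler=optuna.samplers.NSGAIISampler(seed=seed),
        directions=["minimize", "minimize"],
    )
    study.optimize(objective, n_trials=100)  # illustrative budget
    studies.append(study)

# `studies` is then passed to the package's plotting function, loaded via
# optunahub.load_module(...); the exact module path is on the package page.
```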