
CMA-ES with Quasi-Random Refinement Sampler

Three-phase sampler combining Sobol QMC initialization, CMA-ES optimization, and quasi-random Gaussian refinement using Sobol-based perturbation vectors. Achieves 36% lower regret than pure CMA-ES on BBOB benchmarks.

Abstract

CMA-ES is the gold standard for continuous black-box optimization, but it has diminishing returns: after convergence, additional CMA-ES trials provide little improvement. This sampler addresses that by splitting the trial budget into three phases:

  1. Sobol QMC (8 trials) — quasi-random space-filling initialization
  2. CMA-ES (132 trials) — covariance matrix adaptation for main optimization
  3. Quasi-random Gaussian refinement (60 trials) — targeted local search around the best point using Sobol-based perturbation vectors with exponentially decaying scale

The refinement phase uses quasi-random Sobol sequences transformed via inverse CDF to generate Gaussian-distributed perturbation vectors. Compared to pseudo-random Gaussian perturbation, this provides more uniform directional coverage in high-dimensional spaces — systematically exploring directions that pseudo-random sampling might miss.
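The Sobol-to-Gaussian transformation described above can be sketched with SciPy's quasi-Monte Carlo module and the inverse normal CDF (variable names are illustrative, not the sampler's internals):

```python
import numpy as np
from scipy.stats import norm, qmc

# Quasi-random Gaussian directions: Sobol points in [0, 1)^d mapped
# through the inverse normal CDF become N(0, I) perturbation vectors.
dim = 5
sobol = qmc.Sobol(d=dim, scramble=True, seed=0)
u = sobol.random(8)   # 8 quasi-random points in the unit hypercube
z = norm.ppf(u)       # inverse CDF: uniform -> standard normal
print(z.shape)        # (8, 5)
```

Scrambling randomizes the sequence, so values of exactly 0 or 1 (where the inverse CDF diverges) do not occur in practice.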

The perturbation scale follows an exponential decay: sigma(n) = 0.13 * exp(-0.11 * n), starting wide for basin exploration and tightening for precise convergence.
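The decay schedule is simple enough to compute directly; a minimal sketch using the documented defaults (sigma_start = 0.13, decay_rate = 0.11):

```python
import math

def refinement_sigma(n, sigma_start=0.13, decay_rate=0.11):
    """Perturbation scale for the n-th refinement trial (fraction of range)."""
    return sigma_start * math.exp(-decay_rate * n)

# Scale shrinks from wide basin exploration toward precise convergence:
print(round(refinement_sigma(0), 4))   # 0.13
print(round(refinement_sigma(30), 4))  # 0.0048
```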

Benchmark Results

Evaluated on the BBOB benchmark suite — 24 noiseless black-box optimization functions spanning 5 difficulty categories, used as the gold standard in GECCO competitions. All results use 5 dimensions, 10 random seeds, and 200 trials per run.

Metric: Normalized regret = (sampler_best - f_opt) / (random_best - f_opt) where 0.0 = optimal and 1.0 = random-level. Optimal values computed via scipy.differential_evolution (5 restarts). Random baselines from 10 seeds of 200 random trials.
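The metric can be written as a one-liner; the numbers below are illustrative, not taken from the benchmark:

```python
def normalized_regret(sampler_best, random_best, f_opt):
    """0.0 = reached the optimum; 1.0 = no better than random search."""
    return (sampler_best - f_opt) / (random_best - f_opt)

# e.g. optimum at 0.0, random search reaches 50.0, sampler reaches 6.4:
print(normalized_regret(6.4, 50.0, 0.0))  # 0.128
```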

Sampler                Mean Normalized Regret   vs Random
Random baseline        1.0000                   (baseline)
Default TPE            0.2463                   75% better
CMA-ES (tuned)         0.2004                   80% better
CMA-ES + Refinement    0.1284                   87% better

Per-category breakdown:

Category              Functions   CMA-ES   CMA-ES + Refinement   Change
Separable             f1–f5       0.1682   0.0996                -41%
Low conditioning      f6–f9       0.0281   0.0244                -13%
High conditioning     f10–f14     0.0592   0.0513                -13%
Multimodal (global)   f15–f19     0.2508   0.1374                -45%
Multimodal (weak)     f20–f24     0.4615   0.3084                -33%

The improvement is strongest on separable and multimodal functions, where the quasi-random refinement’s uniform directional coverage systematically finds improvements that pseudo-random perturbation misses. Results are deterministic and reproducible. Full experiment logs (135 experiments): github.com/EliMunkey/autoresearch-optuna.

Cross-validation on standard test functions

Independent validation on 8 standard test functions (Sphere, Rosenbrock, Rastrigin, Ackley, Griewank, Levy, Styblinski-Tang, Schwefel) with 5 seeds and 200 trials per run. Includes TPE with multivariate=True as a stronger baseline.

5D results:

Sampler                Mean Normalized Regret   vs Random
Random baseline        1.0000                   (baseline)
TPE                    0.2437                   76% better
TPE (multivariate)     0.3365                   66% better
CMA-ES                 0.2715                   73% better
CMA-ES + Refinement    0.2038                   80% better

10D results — the advantage widens at higher dimensions:

Sampler                Mean Normalized Regret   vs Random
Random baseline        1.0000                   (baseline)
TPE                    0.4803                   52% better
TPE (multivariate)     0.5463                   45% better
CMA-ES                 0.3737                   63% better
CMA-ES + Refinement    0.1719                   83% better

Per-function breakdown (10D):

Function          Category     TPE      TPE (mv)   CMA-ES   CMA-ES + Refine
Sphere            Unimodal     0.2751   0.2699     0.0312   0.0000
Rosenbrock        Unimodal     0.1139   0.0851     0.0071   0.0059
Rastrigin         Multimodal   0.6891   0.8187     0.6566   0.0000
Ackley            Multimodal   0.6146   0.7282     0.3704   0.0000
Griewank          Multimodal   0.3050   0.2782     0.0444   0.0000
Levy              Multimodal   0.5383   0.3941     0.0965   0.0188
Styblinski-Tang   Multimodal   0.5651   0.8238     0.6355   0.3981
Schwefel          Deceptive    0.7413   0.9723     1.1481   0.9526

Limitation: On deceptive functions like Schwefel — where the global optimum is far from typical local optima — the refinement phase can reinforce a suboptimal basin. CMA-ES itself struggles on Schwefel (regret >1.0), and refinement does not recover from that. For problems with known deceptive structure, consider using TPE or increasing the CMA-ES budget.

APIs

CmaEsRefinementSampler

CmaEsRefinementSampler(
    *,
    n_startup_trials: int = 8,
    cma_n_trials: int = 132,
    popsize: int = 6,
    sigma0: float = 0.2,
    sigma_start: float = 0.13,
    decay_rate: float = 0.11,
    seed: int | None = None,
)

Parameters

  • n_startup_trials — Number of Sobol QMC initialization trials. Powers of 2 recommended. Default: 8.
  • cma_n_trials — Number of CMA-ES optimization trials. Default: 132.
  • popsize — CMA-ES population size. Default: 6.
  • sigma0 — CMA-ES initial step size. Default: 0.2.
  • sigma_start — Initial refinement perturbation scale as a fraction of parameter range. Default: 0.13.
  • decay_rate — Exponential decay rate for refinement perturbation scale. Default: 0.11.
  • seed — Random seed for reproducibility. Default: None.

CmaEsRefinementSampler.for_budget

CmaEsRefinementSampler.for_budget(n_trials, *, seed=None, **kwargs)

Factory method that scales phase boundaries proportionally to the trial budget. The default parameters are tuned for 200 trials; use this factory when running a different number of trials.

import optuna
import optunahub

module = optunahub.load_module("samplers/cma_es_refinement")

# 1000-trial study with auto-scaled phases
sampler = module.CmaEsRefinementSampler.for_budget(1000, seed=42)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=1000)
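The default split (8 Sobol + 132 CMA-ES + 60 refinement = 200 trials) suggests each phase is scaled in proportion to the budget. A hypothetical sketch of that arithmetic (the actual for_budget internals may differ):

```python
def scaled_phases(n_trials, defaults=(8, 132, 60)):
    """Split n_trials in the same ratios as the 200-trial defaults."""
    total = sum(defaults)  # 200
    startup = max(1, round(defaults[0] * n_trials / total))
    cma = max(1, round(defaults[1] * n_trials / total))
    refine = n_trials - startup - cma  # remainder goes to refinement
    return startup, cma, refine

print(scaled_phases(1000))  # (40, 660, 300)
```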

Installation

$ pip install optunahub cmaes scipy

Example

import math
import optuna
import optunahub


def objective(trial: optuna.Trial) -> float:
    n = 5
    variables = [trial.suggest_float(f"x{i}", -5.12, 5.12) for i in range(n)]
    return 10 * n + sum(x**2 - 10 * math.cos(2 * math.pi * x) for x in variables)


module = optunahub.load_module("samplers/cma_es_refinement")
sampler = module.CmaEsRefinementSampler(seed=42)

study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=200)

print(f"Best value: {study.best_value:.6f}")
print(f"Best params: {study.best_params}")

  • Package: samplers/cma_es_refinement
  • Author: Elias Munk
  • License: MIT License
  • Verified Optuna version: 4.7.0
  • Last update: 2026-03-24