
AutoML


Async Optimization Benchmark Simulator

Abstract

When running parallel optimization experiments on tabular or surrogate benchmarks, evaluation results must be returned in the order in which they would finish if each configuration were actually trained. However, querying a tabular or surrogate benchmark is, by design, nearly instantaneous, so the real runtimes have to be imposed artificially. This package provides a simulator that handles this automatically by internally managing the order in which hyperparameter configuration evaluations finish.
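To make the ordering problem concrete, here is a minimal, self-contained sketch of the idea (not this package's actual API): each benchmark lookup returns instantly, but we attach the runtime the real training would have taken and replay results in simulated finish order across a pool of workers. The function name `simulate_async` and its arguments are illustrative assumptions.

```python
import heapq


def simulate_async(configs, runtime_of, n_workers):
    """Replay cheap benchmark lookups in the order real evaluations would finish.

    configs:    iterable of configurations (each "evaluated" instantly here)
    runtime_of: maps a config to the wall-clock time the real training would take
    n_workers:  number of parallel workers to simulate
    """
    # Each worker is represented only by the simulated time at which it becomes free.
    workers = [0.0] * n_workers
    heapq.heapify(workers)

    finish_order = []
    for config in configs:
        start = heapq.heappop(workers)    # the earliest-free worker picks up this config
        end = start + runtime_of(config)  # when the real evaluation would have finished
        heapq.heappush(workers, end)
        finish_order.append((end, config))

    # Results become available sorted by simulated finish time, not submission order.
    finish_order.sort()
    return [config for _, config in finish_order]
```

For example, with two workers and runtimes of 5, 1, and 1 seconds, the first config submitted is the last to "finish": the second and third configs complete on the other worker at simulated times 1 and 2, while the first occupies its worker until time 5.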

HPOBench: A Collection of Reproducible Multi-Fidelity Benchmark Problems for HPO

Abstract

Hyperparameter optimization benchmark introduced in the paper HPOBench: A Collection of Reproducible Multi-Fidelity Benchmark Problems for HPO. The original benchmark is available here. Please note that this benchmark provides the results only at the last epoch of each configuration.

APIs

class Problem(dataset_id: int, seed: int | None = None, metric_names: list[str] | None = None)

dataset_id: ID of the dataset to use. It must be in the range of [0, 7].
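The `Problem` signature above is shared across the benchmarks on this page. The sketch below is a hypothetical, self-contained stand-in (not the real implementation) showing how such a class might validate `dataset_id` against its range and return one value per requested metric; the placeholder lookup via a seeded RNG is an assumption, since a real tabular benchmark would return precomputed results instead.

```python
import random


class Problem:
    """Hypothetical sketch of a tabular HPO benchmark problem (not the real API)."""

    N_DATASETS = 8  # HPOBench exposes dataset IDs in [0, 7]

    def __init__(self, dataset_id, seed=None, metric_names=None):
        if not 0 <= dataset_id < self.N_DATASETS:
            raise ValueError(
                f"dataset_id must be in [0, {self.N_DATASETS - 1}], got {dataset_id}"
            )
        self.dataset_id = dataset_id
        self.metric_names = metric_names or ["loss", "runtime"]
        # Seeding makes repeated queries reproducible across Problem instances.
        self._rng = random.Random(seed)

    def __call__(self, config):
        # A real tabular benchmark would look up the precomputed result for
        # `config`; here we just return placeholder values for each metric.
        return {name: self._rng.random() for name in self.metric_names}
```

A typical usage pattern would be `Problem(dataset_id=0, seed=42)({"lr": 0.1})`, yielding a dict keyed by the requested metric names.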

HPOLib: Tabular Benchmarks for Hyperparameter Optimization and Neural Architecture Search

Abstract

Hyperparameter optimization benchmark introduced in the paper Tabular Benchmarks for Hyperparameter Optimization and Neural Architecture Search. The original benchmark is available here. Please note that this benchmark provides the results only at the last epoch of each configuration.

APIs

class Problem(dataset_id: int, seed: int | None = None, metric_names: list[str] | None = None)

dataset_id: ID of the dataset to use. It must be in the range of [0, 3].

NATS-Bench (NAS-Bench-201): Benchmarking NAS Algorithms for Architecture Topology and Size

Abstract

Neural architecture search benchmark introduced in the paper NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and Size. The original benchmark is available here. Please note that this benchmark provides the results only at the last epoch of each architecture. The preliminary version of this benchmark is NAS-Bench-201, and since NAS-Bench-201 is the more widely used name, we stick to it.

APIs

class Problem(dataset_id: int, seed: int | None = None, metric_names: list[str] | None = None)

dataset_id: ID of the dataset to use.