Abstract

Hyperparameter optimization benchmark introduced in the paper "HPOBench: A Collection of Reproducible Multi-Fidelity Benchmark Problems for HPO". The original benchmark is available here. Please note that this benchmark provides the results only at the last epoch of each configuration.
APIs

class Problem(dataset_id: int, seed: int | None = None, metric_names: list[str] | None = None)

dataset_id: ID of the dataset to use. It must be in the range [0, 7].
Abstract

Hyperparameter optimization benchmark introduced in the paper "Tabular Benchmarks for Hyperparameter Optimization and Neural Architecture Search". The original benchmark is available here. Please note that this benchmark provides the results only at the last epoch of each configuration.
APIs

class Problem(dataset_id: int, seed: int | None = None, metric_names: list[str] | None = None)

dataset_id: ID of the dataset to use. It must be in the range [0, 3].
Abstract

Neural architecture search benchmark introduced in the paper "NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and Size". The original benchmark is available here. Please note that this benchmark provides the results only at the last epoch of each architecture.
The preliminary version of NATS-Bench is NAS-Bench-201; since NAS-Bench-201 is the more widely used name, we stick to the name NAS-Bench-201.
APIs

class Problem(dataset_id: int, seed: int | None = None, metric_names: list[str] | None = None)

dataset_id: ID of the dataset to use.