The submission site was open from July 1 to October 15, 2020.
The challenge will give participants 3 months to iterate on their algorithms. We will use a benchmark system built on top of the AutoML challenge workflow and the Bayesmark package, which evaluates black-box optimization algorithms on real-world objective functions. For example, the objective functions will include the tuning (validation set) performance of standard machine learning models on real data sets. This competition has widespread impact, as black-box optimization (e.g., Bayesian optimization) is relevant for hyperparameter tuning in almost every machine learning project (especially deep learning), as well as for many applications outside of machine learning. The leaderboard will be determined by optimization performance on held-out (hidden) objective functions, on which the optimizers must run without human intervention. Baselines will be set using the default settings of six open-source black-box optimization packages and random search.
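To make the setup concrete, here is a minimal sketch of this kind of objective: the validation-set error of a standard scikit-learn model as a function of its hyperparameters, optimized with a random-search baseline. The dataset, model, search space, and evaluation budget below are illustrative assumptions, not the benchmark's actual configuration or harness.

```python
# Random-search baseline on an illustrative tuning objective: the validation-set
# error of a standard scikit-learn model as a function of its hyperparameters.
# Dataset, model, search space, and budget are assumptions for this sketch only.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

def objective(log_C, log_gamma):
    """Validation error of an RBF-SVM at one hyperparameter setting."""
    model = SVC(C=10.0 ** log_C, gamma=10.0 ** log_gamma)
    model.fit(X_train, y_train)
    return 1.0 - model.score(X_val, y_val)  # lower is better

# Random search: sample hyperparameters uniformly in log space.
rng = np.random.default_rng(0)
best_params, best_err = None, np.inf
for _ in range(16):
    params = (rng.uniform(-3.0, 3.0), rng.uniform(-6.0, 0.0))
    err = objective(*params)
    if err < best_err:
        best_params, best_err = params, err

print("best (log10 C, log10 gamma):", best_params, "validation error:", best_err)
```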
Bayesian optimization is a popular sample-efficient approach for derivative-free optimization of objective functions that take several minutes or hours to evaluate. It builds a surrogate model (often a Gaussian process) of the objective function that provides a measure of uncertainty; an acquisition function defined on this surrogate then determines the most promising point to evaluate next.
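As a rough illustration of that loop (not the method any entry is required to use), the sketch below fits a Gaussian-process surrogate with scikit-learn and selects the next evaluation by maximizing an expected-improvement acquisition over a candidate grid. The toy one-dimensional objective and all settings are assumptions made for the example.

```python
# Minimal Bayesian-optimization loop: Gaussian-process surrogate plus an
# expected-improvement acquisition, maximized over a candidate grid.
# The cheap 1-D toy objective stands in for an expensive black-box function.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    return np.sin(3.0 * x) + 0.1 * x ** 2  # stand-in for an expensive function

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(3, 1))          # small initial design
y = objective(X).ravel()
candidates = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)

for _ in range(10):
    # Fit the GP surrogate; it provides a mean prediction and an uncertainty estimate.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)

    # Expected improvement over the best observation so far (minimization).
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    # Evaluate the most promising candidate and add it to the data.
    x_next = candidates[np.argmax(ei)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best x:", X[np.argmin(y)].item(), "best value:", y.min())
```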
Bayesian optimization has many applications, with hyperparameter tuning of machine learning models (e.g., deep neural networks) being one of the most popular. However, the choices of surrogate model and acquisition function are both problem-dependent, and the goal of this challenge is to compare different approaches across a large number of problems. This challenge focuses on applying Bayesian optimization to tuning the hyperparameters of machine learning models.
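Because the acquisition function is one of the main problem-dependent choices, the short sketch below shows two alternatives, expected improvement and a lower-confidence-bound rule, either of which could be swapped into the loop sketched earlier. Both functions and the trade-off parameter `kappa` are illustrative assumptions, not part of the challenge specification.

```python
# Two interchangeable acquisition functions operating on the GP posterior
# mean `mu` and standard deviation `sigma` (minimization convention).
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best):
    """Prefer points likely to improve on the best observed value."""
    z = (best - mu) / np.maximum(sigma, 1e-9)
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def lower_confidence_bound(mu, sigma, kappa=2.0):
    """Optimistic rule: trade predicted value against uncertainty."""
    return -(mu - kappa * sigma)  # larger is more promising, as with EI

# Either can replace `ei` in the loop above, e.g.:
#   acq = lower_confidence_bound(mu, sigma)
#   x_next = candidates[np.argmax(acq)].reshape(1, 1)
```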