Welcome to the BBOB workshop series!

The Black-box Optimization Benchmarking (BBOB) workshop series provides both an easy-to-use toolchain for benchmarking black-box optimization algorithms on continuous domains and a venue to present, compare, and discuss the performance of numerical black-box optimization algorithms. The toolchain is realized through the Comparing Continuous Optimizers (Coco) platform.

So far, nine workshops have been held (in 2009, 2010, 2012, 2013, 2015, 2016, 2017, and 2018 at GECCO and in 2015 at CEC).

The next workshop, BBOB 2019, celebrating the workshop series' 10th anniversary, will take place at GECCO 2019.

Generally, three benchmark suites are available:

  • bbob containing 24 noiseless functions
  • bbob-noisy containing 30 noisy functions
  • bbob-biobj containing 55 noiseless, bi-objective functions, generated from the bbob suite
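The Coco platform takes care of suite iteration, function instances, and logging; the core benchmarking idea, repeatedly querying a function as a black box and recording the best value found within an evaluation budget, can be sketched with a toy optimizer. The sphere function and random-search loop below are illustrative stand-ins, not part of the bbob suite or the Coco API:

```python
import random

def sphere(x):
    """Stand-in continuous test function (not an official bbob function)."""
    return sum(xi * xi for xi in x)

def random_search(f, dim, budget, lower=-5.0, upper=5.0, seed=0):
    """Minimal black-box optimizer: sample uniformly in the box, keep the best.

    Only function values f(x) are used -- no gradients, no structure --
    which is what makes the setting 'black-box'.
    """
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(budget):
        x = [rng.uniform(lower, upper) for _ in range(dim)]
        fx = f(x)  # one evaluation counted against the budget
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

best_x, best_f = random_search(sphere, dim=3, budget=1000)
print(best_f)
```

In an actual Coco experiment, the loop over problems and the result logging are provided by the platform, so only the optimizer itself needs to be plugged in.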

Note that, due to the rewrite of the Coco platform, the bbob-noisy test suite is not yet available in the new code at http://github.com/numbbo/coco . To run experiments on bbob-noisy, please use the old code at http://coco.gforge.inria.fr/doku.php?id=downloads instead.