Welcome to the BBOB workshop series!
The Black-box Optimization Benchmarking (BBOB) workshop series provides an easy-to-use toolchain for benchmarking black-box optimization algorithms for continuous domains and a place to present, compare, and discuss the performance of numerical black-box optimization algorithms. The former is realized through the Comparing Continuous Optimizers platform (Coco).
So far, seven workshops have been held (in 2009, 2010, 2012, 2013, 2015, and 2016 at GECCO and in 2015 at CEC). The next workshop, BBOB 2017, will take place at GECCO 2017 with a continued emphasis on our bi-objective test suite(s).
Generally, four benchmark suites are available:

- bbob, containing 24 noiseless functions
- bbob-noisy, containing 30 noisy functions
- bbob-biobj, containing 55 noiseless, bi-objective functions, generated from the bbob suite
- bbob-biobj-ext, containing 92 noiseless, bi-objective functions, as an extension of bbob-biobj
Note that due to the rewriting of the Coco platform, the bbob-noisy test suite is not yet available in the new code from http://github.com/numbbo/coco . Please use the old code at http://coco.gforge.inria.fr/doku.php?id=downloads instead for running experiments on the noisy functions.
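To make concrete what "benchmarking a black-box optimizer" means here, the following is a minimal, self-contained sketch of such a loop: an optimizer that only sees function values is run against a bbob-style test function with a fixed evaluation budget. This is illustrative only and does not use the actual Coco/cocoex API; the function, optimizer, and parameter names are invented for the example.

```python
import random

# Illustrative sketch only (not the Coco API): a black-box optimizer
# interacts with the problem exclusively through function evaluations,
# never through the function's analytic form.

def sphere(x):
    """Simplest bbob-style test function: f(x) = sum x_i^2, optimum at 0."""
    return sum(xi * xi for xi in x)

def random_search(f, dim, budget, bounds=(-5.0, 5.0), seed=0):
    """Minimal black-box optimizer: sample uniformly in the box, keep the best."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(budget):
        x = [rng.uniform(*bounds) for _ in range(dim)]
        fx = f(x)  # the only interaction with the problem: one evaluation
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

if __name__ == "__main__":
    # A benchmarking platform like Coco repeats this over many functions,
    # dimensions, and instances, logging evaluations for later comparison.
    for dim in (2, 5):
        _, best = random_search(sphere, dim, budget=2000)
        print(f"dim={dim}: best f = {best:.6f}")
```

In the real platform, the suites listed above provide the functions, and an observer logs every evaluation so that performance can be compared across algorithms as a function of the evaluation budget.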
Table of Contents:
- GECCO Workshop on Real-Parameter Black-Box Optimization Benchmarking (BBOB 2017)
- GECCO Workshop on Real-Parameter Black-Box Optimization Benchmarking (BBOB 2016) - focus on multi-objective problems
- BBOB workshops before 2016