GECCO Workshop on Black-Box Optimization Benchmarking (BBOB 2021) - focus on mixed-integer problems

Welcome to the web page of the 10th GECCO Workshop on Black-Box Optimization Benchmarking (BBOB 2021) taking place online during GECCO 2021.

WORKSHOP ON BLACK-BOX OPTIMIZATION BENCHMARKING (BBOB 2021)

held as part of the

2021 Genetic and Evolutionary Computation Conference (GECCO-2021)
July 10–14, Lille, France
Submission opening: February 11, 2021
Submission deadline: April 12, 2021
Notification: April 26, 2021
Camera-ready: May 3, 2021

register for news

COCO quick start (scroll down a bit)

latest COCO release

Benchmarking of optimization algorithms is a crucial part of their design and of their application in practice. The Comparing Continuous Optimizers platform (COCO, https://github.com/numbbo/coco) has been developed over the past decade to support algorithm developers and practitioners alike by automating benchmarking experiments for black-box optimization algorithms on single- and bi-objective, unconstrained continuous problems in exact and noisy, as well as expensive and non-expensive scenarios.
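
For illustration, the following minimal Python sketch shows how such an automated experiment typically looks with the cocoex module shipped with COCO, here with a plain random search as a stand-in optimizer; this is not the official example experiment, and the result folder name is only a placeholder.

    import numpy as np
    import cocoex  # experimentation module of the COCO platform

    suite = cocoex.Suite("bbob", "", "")  # the single-objective noiseless suite
    observer = cocoex.Observer("bbob", "result_folder: random-search-on-bbob")

    for problem in suite:                 # loop over all problem instances
        problem.observe_with(observer)    # write data for the post-processing
        budget = 100 * problem.dimension  # small budget, for illustration only
        while problem.evaluations < budget and not problem.final_target_hit:
            x = problem.lower_bounds + np.random.rand(problem.dimension) * (
                problem.upper_bounds - problem.lower_bounds)
            problem(x)                    # evaluate a random candidate solution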

For the 11th Black-Box Optimization Benchmarking workshop (BBOB 2021) and its 10th edition at GECCO (one workshop was held at CEC), we plan to widen our focus towards mixed-integer benchmark problems. Concretely, we highly encourage submissions describing benchmarking results of black-box optimization algorithms on the single-objective bbob-mixint and the bi-objective bbob-biobj-mixint suites, previously released at GECCO 2019.
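
As a brief, hedged sketch of how these suites are accessed from Python (assuming a recent cocoex installation; the integer-variable attribute queried below is read defensively because its exact name is an assumption):

    import cocoex

    # The mixed-integer suites are instantiated by name, like all other suites.
    for name in ["bbob-mixint", "bbob-biobj-mixint"]:
        suite = cocoex.Suite(name, "", "")
        problem = next(iter(suite))  # look at the first problem only
        print(name, problem.id,
              "dimension:", problem.dimension,
              "objectives:", problem.number_of_objectives,
              # the attribute name below is an assumption, hence read defensively
              "integer variables:",
              getattr(problem, "number_of_integer_variables", "unknown"))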

Any other submission discussing further aspects of (black-box) benchmarking, especially on the other available test suites bbob, bbob-noisy, bbob-biobj, and bbob-largescale, is welcome as well. We particularly encourage submissions about algorithms from outside the evolutionary computation community and papers analyzing the large amount of algorithm data already publicly available through COCO (see https://numbbo.github.io/data-archive/).

As in previous editions of the workshop, we will provide source code in various languages (C/C++, MATLAB/Octave, Java, and Python) to benchmark algorithms on the test suites mentioned above. Postprocessing the data and comparing algorithm performance are equally automated with COCO, up to ready-to-use ACM-compliant LaTeX templates for writing papers.
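
To give an idea of the automated post-processing, the data folder written during an experiment is simply passed to the cocopp module, either from a shell or from within Python; in the sketch below, the folder name and the archived-algorithm name "BFGS!" are placeholders and examples only:

    # from a shell (the result folder name is a placeholder):
    #   python -m cocopp YOUR-RESULT-FOLDER

    # or from within Python:
    import cocopp

    # Post-process local experiment data into html pages, plots, and tables
    # (written to a ppdata folder) and compare with an archived data set;
    # "BFGS!" and the folder name are placeholders/examples only.
    cocopp.main("BFGS! YOUR-RESULT-FOLDER")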

For details, please see below.

Updates and News

Get updates about the latest news regarding the workshop and about releases and bugfixes of the supporting NumBBO/COCO platform by registering at http://numbbo.github.io/register.

Submissions

We encourage any submission that is concerned with black-box optimization benchmarking of continuous optimizers, for example papers that:

  • describe and benchmark new or not-so-new algorithms on one of the above testbeds,

  • compare new or existing algorithms from the COCO/BBOB data archive (see the note on archived data below),

  • analyze the data obtained in previous editions of BBOB, or

  • discuss, compare, and improve upon any benchmarking methodology for continuous optimizers such as design of experiments, performance measures, presentation methods, benchmarking frameworks, test functions, …

Paper submissions are expected to be made through the official GECCO submission system at https://ssl.linklings.net/conferences/gecco/ until the (extended) deadline on April 12, 2021. ACM-compliant LaTeX templates are available in the GitHub repository under code-postprocessing/latex-templates/.

To finalize your submission, we kindly ask you to also submit your data files, if this applies, by clicking on “Submit a COCO data set” at https://github.com/numbbo/coco/issues/new/choose. To upload your data to the web, you might want to use https://zenodo.org/, which offers uploads of data sets of up to 50GB in size, or any other provider of online data storage.

The data of previously compared algorithms can be found at https://numbbo.github.io/data-archive/ and are easily accessible by name in the cocopp post-processing and from the Python cocopp.archives module.
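
For instance, a small, hedged sketch of querying the archives from Python (the query string "bfgs" is only an example substring):

    import cocopp

    # List the names of archived bbob data sets whose name matches a substring
    # (the query "bfgs" is only an example); cocopp.archives.all spans all suites.
    print(cocopp.archives.bbob.find("bfgs"))

    # A listed name, or a unique substring followed by "!", can then be passed
    # to cocopp.main together with local data folders for a direct comparison.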

Supporting material

The basis of the workshop is the Comparing Continuous Optimizers platform (https://github.com/numbbo/coco), written in ANSI C, with other languages calling the C code. Languages currently available are C, Java, MATLAB/Octave, and Python.

Most likely, you want to read the COCO quick start (scroll down a bit on that page). This page also provides the code for the benchmark functions, for running the experiments in C, Java, MATLAB, Octave, and Python, and for postprocessing the experiment data into plots, tables, html pages, and publisher-conform PDFs via the provided LaTeX templates. Please refer to http://numbbo.github.io/coco-doc/experimental-setup/ for more details on the general experimental setup for black-box optimization benchmarking.

The latest (hopefully) stable release of the COCO software can be downloaded as a whole from https://github.com/numbbo/coco/releases/. Please use at least version v2.4 for running your benchmarking experiments in 2021.

Documentation of the functions used in the different test suites can be found via the COCO web pages.

Note that the current release of the new COCO platform does not yet contain the original noisy BBOB testbed, so if you want to compare your algorithm on the noisy testbed, please use the old code at https://numbbo.github.io/coco/oldcode/bboball15.03.tar.gz for the time being.

Important Dates

  • 2020-12-15 release 2.4 of the COCO platform https://github.com/numbbo/coco/releases/

  • 2021-02-11 paper submission system opens

  • 2021-04-12 paper and data submission deadline

  • 2021-04-26 decision notification

  • 2021-05-03 deadline camera-ready papers

  • 2021-05-03 deadline author registration

  • 2021-07-10 or 2021-07-11 workshop

All dates are given in ISO 8601 format (yyyy-mm-dd).

Organizers

  • Anne Auger, Inria and CMAP, Ecole Polytechnique, Institut Polytechnique de Paris, France

  • Peter A. N. Bosman, Centrum Wiskunde & Informatica (CWI) and TU Delft, The Netherlands

  • Dimo Brockhoff, Inria and CMAP, Ecole Polytechnique, Institut Polytechnique de Paris, France

  • Tobias Glasmachers, Ruhr-Universität Bochum, Germany

  • Nikolaus Hansen, Inria and CMAP, Ecole Polytechnique, Institut Polytechnique de Paris, France

  • Petr Pošík, Czech Technical University, Czech Republic

  • Tea Tušar, Jožef Stefan Institute (JSI), Slovenia