GECCO Workshop on Black-Box Optimization Benchmarking (BBOB 2023)

Welcome to the web page of the 12th GECCO Workshop on Black-Box Optimization Benchmarking (BBOB 2023), which took place during GECCO 2023.

WORKSHOP ON BLACK-BOX OPTIMIZATION BENCHMARKING (BBOB 2023)

held as part of the

2023 Genetic and Evolutionary Computation Conference (GECCO-2023)
July 15--19, Lisbon, Portugal
https://gecco-2023.sigevo.org
Submission opening: February 13, 2023
Submission deadline: April 17, 2023 AoE (not extendable, was: April 14)
Notification: May 3, 2023
Camera-ready: May 10, 2023
Presenter mandatory registration: May 10, 2023



Benchmarking optimization algorithms is a crucial part of their design and their application in practice. Since 2009, the Blackbox Optimization Benchmarking workshop at GECCO has been a place to discuss recent advances in benchmarking practices as well as concrete results from actual benchmarking experiments with a large variety of (blackbox) optimizers.

The Comparing Continuous Optimizers platform (COCO1, https://github.com/numbbo/coco) has been developed in this context to support algorithm developers and practitioners alike by automating benchmarking experiments for blackbox optimization algorithms on single- and bi-objective, unconstrained continuous problems in exact and noisy as well as expensive and non-expensive scenarios. A new bbob-constrained test suite was released in 2022.

For the BBOB 2023 edition of the workshop, we invite participants to discuss all kinds of aspects of (blackbox) benchmarking and particularly welcome contributions related to constrained optimization. As in previous years, presenting benchmarking results on the test suites supported by COCO is a focus, but submissions are not limited to these topics:

  • single-objective unconstrained problems (bbob)
  • single-objective unconstrained problems with noise (bbob-noisy)
  • biobjective unconstrained problems (bbob-biobj)
  • large-scale single-objective problems (bbob-largescale)
  • mixed-integer single- and bi-objective problems (bbob-mixint and bbob-biobj-mixint), and
  • constrained optimization (bbob-constrained)

We particularly encourage submissions about algorithms from outside the evolutionary computation community and papers analyzing the large amount of algorithm data already publicly available through COCO (see https://numbbo.github.io/data-archive/). As in previous editions, we provide source code in various languages (C/C++, Matlab/Octave, Java, and Python) to benchmark algorithms on the test suites mentioned above. Post-processing the data and comparing algorithm performance are equally automated with COCO (up to already prepared ACM-compliant LaTeX templates for writing papers).
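
As an illustration, a complete benchmarking experiment on the bbob suite can be run with only a few lines of Python. The following minimal sketch assumes the cocoex module shipped with COCO and uses scipy.optimize.fmin merely as a stand-in for the algorithm to be benchmarked; option strings and defaults may differ slightly between COCO versions:

    import cocoex          # experimentation module of the COCO platform
    import scipy.optimize  # stand-in for the algorithm to be benchmarked

    # the single-objective, noiseless test suite with default dimensions and instances
    suite = cocoex.Suite("bbob", "", "")
    # the observer logs all evaluations into a result folder for later post-processing
    observer = cocoex.Observer("bbob", "result_folder: my-optimizer")

    for problem in suite:               # loop over all problem instances
        problem.observe_with(observer)  # attach the logger to this problem
        scipy.optimize.fmin(problem, problem.initial_solution, disp=False)

A real experiment would typically add an evaluation budget per problem and independent restarts, as done in the example experiment scripts that ship with COCO.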

For more details, please see below.

Accepted papers

  • Dimo Brockhoff, Pascal Capetillo, Jonathan Hornewall, Raphael Walker: Benchmarking the Borg algorithm on the Biobjective bbob-biobj Testbed
  • Óscar Espinoza, Katya Rodríguez-Vázquez, Carlos Ignacio Hernández-Castellanos, Suemi Rodriguez-Romo: Comparison Of Three Versions Of Whale Optimization Algorithm (WOA) On The BBOB Test Suite
  • Armand Gissler: Evaluation of the impact of various modifications to CMA-ES that facilitate its theoretical analysis
  • Victoria Johnson, João Duro, Visakan Kadirkamanathan, Robin Purshouse: A distributed multi-disciplinary design optimization benchmark test suite with constraints and multiple conflicting objectives
  • Jakub Kudela: Benchmarking State-of-the-art DIRECT-type Methods on the BBOB Noiseless Testbed
  • Tristan Marty, Yann Semet, Anne Auger, Sébastien Héron, Nikolaus Hansen: Benchmarking CMA-ES with Basic Integer Handling on a Mixed-Integer Test Problem Suite

Schedule

The BBOB-2023 workshop was assigned the two middle slots on Saturday, July 15, 2023, at GECCO, in which the talks were scheduled according to the tables below. For both slots, the room was Porto (F13). Speakers are marked with a star after their name. Please click on the provided links to download the slides where available.

Session I
10:40 - 11:15 The BBOBies: Introduction to Blackbox Optimization Benchmarking (slides)
11:15 - 11:40 Tristan Marty*, Yann Semet, Anne Auger, Sébastien Héron, Nikolaus Hansen: Benchmarking CMA-ES with Basic Integer Handling on a Mixed-Integer Test Problem Suite
11:40 - 12:05 Dimo Brockhoff, Pascal Capetillo, Jonathan Hornewall*, Raphael Walker: Benchmarking the Borg algorithm on the Biobjective bbob-biobj Testbed (slides)
12:05 - 12:30 Victoria Johnson*, João Duro, Visakan Kadirkamanathan, Robin Purshouse: A Distributed Multi-Disciplinary Design Optimization Benchmark Test Suite with Constraints and Multiple Conflicting Objectives
Session II
14:00 - 14:05 The BBOBies: Introduction to Blackbox Optimization Benchmarking (slides)
14:05 - 14:30 Óscar Espinoza, Katya Rodríguez-Vázquez, Carlos Hernández-Castellanos, Suemi Rodriguez-Romo: Comparison Of Three Versions Of Whale Optimization Algorithm (WOA) On The BBOB Test Suite
14:30 - 14:55 Armand Gissler*: Evaluation of the impact of various modifications to CMA-ES that facilitate its theoretical analysis (slides)
14:55 - 15:20 Jakub Kudela*: Benchmarking State-of-the-art DIRECT-type Methods on the BBOB Noiseless Testbed (slides)
15:20 - 15:30 The BBOBies: The COCO data archive and This Year’s Results (slides)
15:30 - 15:50 The BBOBies: Wrap-up and Open Discussion

Updates and News

To receive news about the workshop as well as about releases and bug fixes of the supporting NumBBO/COCO platform, register at http://numbbo.github.io/register.

Submissions

We encourage any submission that is concerned with black-box optimization benchmarking of continuous optimizers, for example papers that:

  • describe and benchmark new or not-so-new algorithms on one of the above testbeds,
  • compare new or existing algorithms from the COCO/BBOB database2 (see the sketch after this list),
  • analyze the data obtained in previous editions of BBOB3, or
  • discuss, compare, and improve upon any benchmarking methodology for continuous optimizers such as design of experiments, performance measures, presentation methods, benchmarking frameworks, test functions, ...
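
As referenced in the list above, the archived data sets can be accessed by name directly from the cocopp post-processing module and compared against your own results. The following sketch is illustrative only: the data set name BIPOP and the local folder exdata/my-optimizer are placeholders, and the matching syntax (a trailing "!" selecting the first matching archive entry) may differ between cocopp versions:

    import cocopp

    # list archived bbob data sets whose name contains a given substring
    print(cocopp.archives.bbob.find("BIPOP"))

    # post-process a local data folder and compare it against an archived
    # algorithm; the trailing "!" picks the first matching archive entry
    cocopp.main("exdata/my-optimizer BIPOP!")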

Papers are to be submitted through the official GECCO submission system at https://ssl.linklings.net/conferences/gecco/ by the deadline. ACM-compliant LaTeX templates are available in the GitHub repository under code-postprocessing/latex-templates/.

To finalize your submission, we kindly ask you to also submit your data files, if applicable, by clicking on "Submit a COCO data set" at https://github.com/numbbo/coco/issues/new/choose. To upload your data to the web, you can use https://zenodo.org/, which offers uploads of data sets up to 50 GB in size, or any other provider of online data storage.

Supporting material

The basis of the workshop is the Comparing Continuous Optimizers (COCO) platform (https://github.com/numbbo/coco), which is written in ANSI C with other languages calling the C code. Interfaces are currently available in C, Java, MATLAB/Octave, and Python.

Most likely, you want to read the COCO quick start (scroll down a bit). This page also provides the code for the benchmark functions4, for running the experiments in C, Java, Matlab, Octave, and Python, and for post-processing the experiment data into plots, tables, html pages, and publisher-conforming PDFs via the provided LaTeX templates. Please refer to http://numbbo.github.io/coco-doc/experimental-setup/ for more details on the general experimental setup for black-box optimization benchmarking.
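
For instance, post-processing the data folder of a single experiment (here hypothetically exdata/my-optimizer, as produced by an observed experiment) could look roughly as follows; output folder names and contents may vary between COCO releases:

    import cocopp

    # post-process one data folder; the same can be achieved from a shell
    # via `python -m cocopp exdata/my-optimizer`
    cocopp.main("exdata/my-optimizer")

    # the results are written to the ppdata/ output folder and include an
    # index.html overview as well as the figures and tables referenced by
    # the provided LaTeX templates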

The latest (hopefully) stable release of the COCO software can be downloaded as a whole here. Please use at least version v2.6.3 for running your benchmarking experiments in 2023.

Documentation of the functions used in the different test suites can be found here:

Important Dates

  • 2023-04-17 paper and data submission deadline (not extendable, was: April 14)
  • 2023-05-03 decision notification
  • 2023-05-10 deadline camera-ready papers
  • 2023-05-10 deadline author registration
  • 2023-07-15 or 2023-07-16 workshop

All dates are given in ISO 8601 format (yyyy-mm-dd).

Organizers

  • Anne Auger, Inria and CMAP, Ecole Polytechnique, Institut Polytechnique de Paris, France
  • Dimo Brockhoff, Inria and CMAP, Ecole Polytechnique, Institut Polytechnique de Paris, France
  • Paul Dufossé, Inria and Thales Defense Mission Systems, France
  • Nikolaus Hansen, Inria and CMAP, Ecole Polytechnique, Institut Polytechnique de Paris, France
  • Olaf Mersmann, TH Köln, Germany
  • Petr Pošík, Czech Technical University, Czech Republic
  • Tea Tušar, Jozef Stefan Institute (JSI), Slovenia

Footnotes

  1. Nikolaus Hansen, Anne Auger, Raymond Ros, Olaf Mersmann, Tea Tušar, and Dimo Brockhoff. "COCO: A platform for comparing continuous optimizers in a black-box setting." Optimization Methods and Software (2020): 1-31.↩︎

  2. The data of previously compared algorithms can be found at https://numbbo.github.io/data-archive and are easily accessible by name in the cocopp post-processing and from the python cocopp.archives module or in (fixed) html form at https://numbbo.github.io/ppdata-archive.↩︎

  3. The data of previously compared algorithms can be found at https://numbbo.github.io/data-archive and are easily accessible by name in the cocopp post-processing and from the python cocopp.archives module or in (fixed) html form at https://numbbo.github.io/ppdata-archive.↩︎

  4. Note that the current release of the new COCO platform does not yet contain the original noisy BBOB testbed, so you must use the old code at https://numbbo.github.io/coco/oldcode/bboball15.03.tar.gz for the time being if you want to compare your algorithm on the noisy testbed.↩︎