A Short Introduction to COCO

COCO (COmparing Continuous Optimizers) is a platform for systematic and sound comparisons of real-parameter global optimizers. COCO provides benchmark function testbeds, experimentation templates that are easy to parallelize, and tools for processing and visualizing data generated by one or several optimizers. The COCO platform has been used for the Black-Box Optimization Benchmarking (BBOB) workshops that took place during the GECCO conference in 2009, 2010, 2012, 2013, 2015-2019, and 2021. It was also used at the IEEE Congress on Evolutionary Computation (CEC’2015) in Sendai, Japan.
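As a rough illustration of what an experiment built from such a template can look like, the sketch below benchmarks a solver on the noiseless bbob suite using the cocoex (experimentation) and cocopp (post-processing) Python modules. The choice of solver (scipy.optimize.fmin) and the result-folder name are illustrative assumptions, not prescriptions of the platform.

```python
# Minimal sketch of a COCO experiment, assuming the cocoex/cocopp Python
# modules from the rewritten code base; solver and folder name are
# illustrative choices.
import cocoex           # test suites and data-logging observers
import cocopp           # post-processing: tables and figures
import scipy.optimize   # the solver to be benchmarked

suite = cocoex.Suite("bbob", "", "")  # noiseless single-objective BBOB suite
observer = cocoex.Observer("bbob", "result_folder: my-first-experiment")

for problem in suite:                  # loop over all problem instances
    problem.observe_with(observer)     # log evaluations for post-processing
    scipy.optimize.fmin(problem, problem.initial_solution, disp=False)
    problem.free()                     # release the underlying C data

cocopp.main(observer.result_folder)    # process the logged data
```

Because each problem instance is independent, the loop parallelizes naturally, for example by splitting the suite across batches or processes.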

The COCO experiment source code was rewritten in 2014-2015, and the current production code is available on our COCO GitHub page. The old code is still available here and should be used for experiments on the noisy test suite until that suite becomes available in the new code as well.

For a general introduction to the COCO software and its underlying concepts of performance assessment, please see this article

Hansen, N., Auger, A., Ros, R., Mersmann, O., Tušar, T., & Brockhoff, D. (2021). COCO: A platform for comparing continuous optimizers in a black-box setting. Optimization Methods and Software, 36(1), 114-144.

or its publicly available version on HAL.