
COCO: COmparing Continuous Optimizers

COCO (COmparing Continuous Optimizers) is a platform for systematic and sound comparisons of real-parameter global optimizers. COCO provides benchmark function testbeds, experimentation templates that are easy to parallelize, and tools for processing and visualizing data generated by one or several optimizers. The COCO platform has been used for the Black-Box Optimization Benchmarking (BBOB) workshops that took place during the GECCO conference in 2009, 2010, 2012, 2013, 2015–2019, and 2021. It was also used at the IEEE Congress on Evolutionary Computation (CEC'2015) in Sendai, Japan.

The COCO experiment source code was rewritten in 2014–2015, and the current production code is available on our COCO GitHub page. The old code is still available here and should be used for experiments on the noisy test suite until that suite becomes available in the new code as well.


You may cite this work in a scientific context as:

N. Hansen, A. Auger, R. Ros, O. Mersmann, T. Tušar, D. Brockhoff. COCO: A Platform for Comparing Continuous Optimizers in a Black-Box Setting, Optimization Methods and Software, 36(1), pp. 114-144, 2021. [pdf, arXiv]

    @article{hansen2021coco,
        author = {Hansen, N. and Auger, A. and Ros, R. and Mersmann, O. and Tu{\v s}ar, T. and Brockhoff, D.},
        title = {{COCO}: A Platform for Comparing Continuous Optimizers in a Black-Box Setting},
        journal = {Optimization Methods and Software},
        doi = {},
        pages = {114--144},
        number = {1},
        volume = {36},
        year = {2021}
    }