COCO: COmparing Continuous Optimizers
COCO (COmparing Continuous Optimizers) is a platform for systematic and sound comparisons of real-parameter global optimizers. COCO provides benchmark function testbeds, experimentation templates that are easy to parallelize, and tools for processing and visualizing data generated by one or several optimizers. The COCO platform has been used for the Black-Box Optimization Benchmarking (BBOB) workshops that took place at the GECCO conferences in 2009, 2010, 2012, 2013, 2015-2019, and 2021. It was also used at the IEEE Congress on Evolutionary Computation (CEC’2015) in Sendai, Japan.
The COCO experiment source code was rewritten in 2014-2015, and the current production code is available on our COCO GitHub page. The old code is still available here and should be used for experiments on the noisy test suite until that suite becomes available in the new code as well.
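As a rough illustration, the sketch below runs an experiment with the cocoex module of the new code and postprocesses the results with cocopp, loosely following the pattern of the example experiment scripts in the repository. The result folder name and the choice of SciPy's Nelder-Mead as the solver are placeholders, not part of COCO itself.

import cocoex          # COCO experimentation module
import cocopp          # COCO postprocessing module
import scipy.optimize  # placeholder solver; any black-box optimizer can be plugged in

# Benchmark on the noiseless single-objective "bbob" suite and log all
# evaluations; the result folder name is illustrative.
suite = cocoex.Suite("bbob", "", "")
observer = cocoex.Observer("bbob", "result_folder: my-optimizer")

for problem in suite:
    problem.observe_with(observer)  # log evaluations of this problem
    # The optimizer to be benchmarked goes here (Nelder-Mead only as placeholder).
    scipy.optimize.fmin(problem, problem.initial_solution, disp=False)

# Generate tables and figures from the logged data (written to ppdata/).
cocopp.main(observer.result_folder)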
Related links
- Code web page on GitHub (for how to run experiments)
- Data archive of all officially registered benchmark experiments (also accessible via the postprocessing module)
- Postprocessed data of these archives for browsing
- How to submit a data set
- How to create and use COCO data archives with the cocopp.archiving Python module (see the sketch after this list)
- Get news about COCO by registering here
- To visit the old COCO webpage, see the Internet Archive
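As a hedged sketch of the archive-related items above, the cocopp.archives and cocopp.archiving interfaces of the postprocessing module can be used roughly as follows; the data-set substrings and the archive URL are illustrative placeholders.

import cocopp

# Browse the official bbob data archive that ships with the postprocessing module.
print(cocopp.archives.bbob.find('2009'))  # names of archived data sets matching '2009'

# Postprocess an officially archived data set (downloaded on demand); the trailing
# '!' selects the first data set whose name matches the given substring.
cocopp.main('bbob/2009/BFGS!')

# Attach a custom (e.g. self-hosted) COCO data archive; the URL is a placeholder.
my_archive = cocopp.archiving.get('https://example.org/my-coco-data-archive')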
Citation
You may cite this work in a scientific context as
N. Hansen, A. Auger, R. Ros, O. Mersmann, T. Tušar, D. Brockhoff. COCO: A Platform for Comparing Continuous Optimizers in a Black-Box Setting, Optimization Methods and Software, 36(1), pp. 114-144, 2021. [pdf, arXiv]
@ARTICLE{hansen2021coco,
author = {Hansen, N. and Auger, A. and Ros, R. and Mersmann, O. and Tu{\v s}ar, T. and Brockhoff, D.},
title = {{COCO}: A Platform for Comparing Continuous Optimizers in a Black-Box Setting},
journal = {Optimization Methods and Software},
doi = {10.1080/10556788.2020.1808977},
pages = {114--144},
number = {1},
volume = {36},
year = 2021
}