Module for generating tables used by rungeneric1.py.

The generated tables give the aRT and, in brackets, the 10th to 90th percentile range of 100 simulated runs divided by two, both divided by the aRT of a reference algorithm (given in the respective first row and as indicated in testbedsettings.py), for different target precisions and different functions. If the reference algorithm did not reach the target precision, the absolute values are given instead.

The median number of conducted function evaluations is given in italics if no run reached the target precision 1e-7. #succ is the number of trials that reached the target precision 1e-8. Bold entries are statistically significantly better (according to the rank-sum test) than the given reference algorithm, with p = 0.05 or p = 1e-k, where k > 1 is the number following the downarrow symbol, with Bonferroni correction by the number of functions.
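To make these quantities concrete, the following sketch (illustrative only; the function name, its arguments, and the simple restart simulation are assumptions, not this module's actual implementation) computes an aRT ratio and half the 10th-to-90th percentile range of simulated restarted runs, both relative to a reference aRT:

    import numpy as np

    def art_ratio_and_dispersion(evals, success, ref_art, n_boot=100, seed=None):
        """Illustrative only: aRT ratio and half the 10th-to-90th percentile
        range of simulated restarted runs, both relative to a reference aRT."""
        rng = np.random.default_rng(seed)
        evals = np.asarray(evals, dtype=float)
        success = np.asarray(success, dtype=bool)
        if not success.any():
            return np.inf, np.inf  # no trial reached the target precision

        # aRT: evaluations spent over all trials per successful trial
        art = evals.sum() / success.sum()

        # Simulate n_boot restarted runs: draw trials with replacement and
        # sum their evaluations until a successful trial is drawn.
        simulated = []
        for _ in range(n_boot):
            total = 0.0
            while True:
                i = rng.integers(len(evals))
                total += evals[i]
                if success[i]:
                    break
            simulated.append(total)

        lo, hi = np.percentile(simulated, [10, 90])
        return art / ref_art, (hi - lo) / 2.0 / ref_art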

Function get_table_caption: Sets table caption, based on the testbedsettings.current_testbed and genericsettings.runlength_based_targets.
Function main: Generate a table of ratio aRT/aRTref vs target precision.
def get_table_caption():
Sets table caption, based on the testbedsettings.current_testbed and genericsettings.runlength_based_targets.
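A minimal usage sketch, assuming this module is importable as cocopp.pptable (an assumption; adapt the import to the actual module path):

    from cocopp import pptable

    # Returns the LaTeX caption text, adapted to the current testbed and to
    # whether runlength-based targets are in use.
    caption = pptable.get_table_caption()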
def main(dsList, dims_of_interest, outputdir, latex_commands_file):

Generate a table of ratio aRT/aRTref vs target precision.

1 table per dimension will be generated.
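A minimal invocation sketch, assuming the module path cocopp.pptable and that a DataSetList has been built with cocopp.pproc.DataSetList from a result folder (both assumptions; the folder name and output paths are hypothetical):

    from cocopp import pptable, pproc

    dsList = pproc.DataSetList(['MY_ALGORITHM_RESULTS'])  # hypothetical result folder
    dims_of_interest = (5, 20)       # one table is generated per dimension
    outputdir = 'ppdata'             # output directory for the TeX tables
    latex_commands_file = 'ppdata/cocopp_commands.tex'

    pptable.main(dsList, dims_of_interest, outputdir, latex_commands_file)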

Rank-sum tests table on "Final Data Points" for only one algorithm; that is, for example, using 1/#fevals(ftarget) if ftarget was reached and -f_final otherwise as input for the rank-sum test, where, obviously, larger values are better.
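The per-trial statistic described above can be sketched as follows (an illustration with a hypothetical helper and toy data; the actual module additionally applies the Bonferroni correction mentioned earlier):

    import numpy as np
    from scipy import stats

    def ranksum_statistic(fevals_at_target, f_final, reached):
        """1/#fevals(ftarget) where the target was reached, -f_final
        otherwise; larger values are better."""
        fevals_at_target = np.asarray(fevals_at_target, dtype=float)
        f_final = np.asarray(f_final, dtype=float)
        reached = np.asarray(reached, dtype=bool)
        return np.where(reached, 1.0 / fevals_at_target, -f_final)

    # Toy comparison of one algorithm against the reference on a single
    # function and target precision.
    x = ranksum_statistic([1200., 950., np.nan], [1e-9, 1e-9, 1e-3], [True, True, False])
    y = ranksum_statistic([2000., np.nan, np.nan], [1e-9, 1e-2, 1e-1], [True, False, False])
    statistic, p_value = stats.ranksums(x, y)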
