
Module for generating tables used by rungeneric1.py.

The generated tables give the ERT (expected running time) and, in brackets, half the 10th-to-90th percentile range of 100 simulated runs, both divided by the ERT of a reference algorithm (given in the respective first row and as indicated in testbedsettings.py), for different target precisions and different functions. If the reference algorithm did not reach a target precision, the absolute values are given instead.
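
As an illustration of how such a table entry can be computed, here is a minimal sketch, not the module's actual implementation; the function names, the bootstrap of simulated restarted runs, and the handling of unsuccessful trials are assumptions:

    import numpy as np

    def ert(evals, success):
        # ERT: evaluations summed over all trials (unsuccessful trials
        # counted in full) divided by the number of successful trials;
        # infinite if no trial reached the target.
        nsucc = int(np.sum(success))
        return np.sum(evals) / nsucc if nsucc else np.inf

    def simulated_run_lengths(evals, success, n=100, seed=None):
        # Bootstrap n simulated restarted runs: draw trials with
        # replacement, accumulating evaluations until a successful
        # trial is drawn (assumes at least one trial succeeded).
        rng = np.random.default_rng(seed)
        evals = np.asarray(evals, dtype=float)
        success = np.asarray(success, dtype=bool)
        lengths = np.empty(n)
        for j in range(n):
            total = 0.0
            while True:
                i = rng.integers(len(evals))
                total += evals[i]
                if success[i]:
                    break
            lengths[j] = total
        return lengths

    def table_entry(evals, success, ert_ref, n=100):
        # ERT/ERTref and, in brackets, half the 10th-to-90th percentile
        # range of n simulated runs, also divided by ERTref.
        runs = simulated_run_lengths(evals, success, n)
        p10, p90 = np.percentile(runs, [10, 90])
        return ert(evals, success) / ert_ref, (p90 - p10) / 2.0 / ert_ref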

The median number of conducted function evaluations is given in italics if no run reached 1e-7. #succ is the number of trials that reached the target precision 1e-8. Bold entries are statistically significantly better than the given reference algorithm, with a p-value of at most 0.05, or at most 1e-k where k is the number following the down arrow (computed with the rank-sum test, Bonferroni-corrected by the number of functions).

Functions:

get_table_caption -- Sets the table caption, based on testbedsettings.current_testbed and genericsettings.runlength_based_targets.
main -- Generate a table of the ratio ERT/ERTref vs. target precision.
def get_table_caption():

Sets the table caption, based on testbedsettings.current_testbed and genericsettings.runlength_based_targets.
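
A hypothetical sketch of the branching this implies; the actual caption strings are assembled from the current testbed's settings and differ from the illustrative text shown here:

    def get_table_caption_sketch(testbed_name, runlength_based_targets):
        # Illustrative strings only; the real captions come from the
        # testbed definitions in testbedsettings.
        if runlength_based_targets:
            targets = "run-length based target values"
        else:
            targets = "fixed target precisions"
        return ("Expected running time (ERT) relative to the reference "
                "algorithm, for %s, on the %s testbed."
                % (targets, testbed_name))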

def main(dsList, dims_of_interest, outputdir, latex_commands_file):

Generate a table of the ratio ERT/ERTref vs. target precision.

One table per dimension will be generated.
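
A hedged usage sketch; the loader import and the data path are assumptions and may differ from the actual cocopp package layout:

    from cocopp import pptable            # the module documented here
    from cocopp.pproc import DataSetList  # assumed data-loading class

    # Load raw benchmark data (the path is illustrative).
    dsList = DataSetList(['exdata/MYALGO'])

    # Writes one table per dimension in dims_of_interest to outputdir,
    # plus LaTeX definitions into latex_commands_file.
    pptable.main(dsList, (5, 20), 'ppdata', 'ppdata/cocopp_commands.tex')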

Rank-sum tests are performed on "Final Data Points" for only one algorithm; that is, for example, using 1/#fevals(ftarget) if ftarget was reached and -f_final otherwise as input to the rank-sum test, where larger values are better.
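
A minimal sketch of that construction, assuming SciPy's rank-sum test; the helper names are hypothetical:

    import numpy as np
    from scipy import stats

    def ranksum_input(fevals_at_target, f_final, target_reached):
        # Per-trial statistic described above: 1/#fevals(ftarget) if the
        # target was reached, -f_final otherwise; larger is better.
        reached = np.asarray(target_reached, dtype=bool)
        values = -np.asarray(f_final, dtype=float)
        values[reached] = 1.0 / np.asarray(fevals_at_target,
                                           dtype=float)[reached]
        return values

    def compare_to_reference(alg_values, ref_values, n_functions):
        # Two-sided rank-sum test between an algorithm and the reference,
        # Bonferroni-corrected by the number of functions tested.
        _, p = stats.ranksums(alg_values, ref_values)
        return min(1.0, p * n_functions)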