module documentation

Generate performance scaling figures.

The figures show performance scaling, in terms of ERT, versus dimensionality on a log-log scale. On the y-axis, data are represented as the number of function evaluations divided by dimension. This normalization allows an at-a-glance comparison with linear scaling: when ERT is proportional to the dimension, the curve appears as a horizontal line in the figure.

Crosses (+) give the median number of function evaluations of successful trials, divided by dimension, for the smallest reached target function value. Numbers indicate the count of successful runs for the smallest reached target. If the smallest target function value (1e-8) is not reached for a given dimension, crosses (x) give the average number of function evaluations conducted overall, divided by the dimension.

Horizontal lines indicate linear scaling with the dimension, additional grid lines show quadratic and cubic scaling. The thick light line with diamond markers shows the results of the specified reference algorithm for df = 1e-8 or a runlength-based target (if in the expensive/runlength-based targets setting).
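The effect of the normalization described above can be illustrated with a small, self-contained sketch (the dimensions and constants below are hypothetical toy values, not benchmark data): if ERT grows linearly with the dimension, ERT/dimension is constant, so the curve is horizontal on the log-log plot, while quadratic growth appears as a line of slope one.

```python
# Hypothetical illustration of why ERT is divided by dimension.
dims = [2, 3, 5, 10, 20, 40]               # typical benchmark dimensions

ert_linear = [100.0 * d for d in dims]      # ERT proportional to dim
ert_quadratic = [100.0 * d * d for d in dims]

# Normalized y-values as plotted in the figures:
y_linear = [e / d for e, d in zip(ert_linear, dims)]
y_quadratic = [e / d for e, d in zip(ert_quadratic, dims)]

print(y_linear)      # constant (100.0 everywhere): a horizontal line
print(y_quadratic)   # grows linearly with dimension: slope one on log-log
```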


Function beautify: Customize figure presentation.
Function generateData: Computes an array of results to be plotted.
Function main: From a DataSetList, returns a convergence and ERT/dim figure vs dim.
Function plot: From a DataSetList, plot a figure of ERT/dim vs dim.
Function plot_a_bar: Plot/draw a notched error bar; x is the x-position, y[0,1,2] are the lower, median and upper percentiles respectively.
Function plot_previous_algorithms: Add graph of the reference algorithm, specified in testbedsettings.current_testbed, using the last, most difficult target in target.
Function scaling_figure_caption: Provides a figure caption with the help of captions.py for replacing common texts, abbreviations, etc.
Variable refcolor: Undocumented
Variable styles: Undocumented
Variable xlim_max: Undocumented
Variable ynormalize_by_dimension: Undocumented
def beautify(axesLabel=True):

Customize figure presentation.

Uses information from the appropriate benchmark short infos file for figure title.

def generateData(dataSet, targetFuncValue):

Computes an array of results to be plotted.

Returns
(ert, success rate, number of success, total number of function evaluations, median of successful runs).
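As a hedged sketch of what such a result tuple typically involves (the toy data and variable names below are assumptions for illustration, not the module's actual internals): ERT is conventionally the total number of function evaluations across all trials divided by the number of successful trials.

```python
import statistics

# Toy data (hypothetical): evaluations spent per trial and whether the
# trial reached the target function value.
evals = [120, 300, 250, 500, 180]
success = [True, False, True, True, False]

n_succ = sum(success)
total_evals = sum(evals)

# ERT: evaluations summed over ALL trials, divided by number of successes.
ert = total_evals / n_succ
success_rate = n_succ / len(evals)
median_succ = statistics.median(e for e, s in zip(evals, success) if s)

# Mirrors the documented tuple order:
result = (ert, success_rate, n_succ, total_evals, median_succ)
print(result)  # (450.0, 0.6, 3, 1350, 250)
```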
def main(dsList, _valuesOfInterest, outputdir):

From a DataSetList, returns a convergence and ERT/dim figure vs dim.

If available, uses data of a reference algorithm as specified in genericsettings.py.

Parameters
  dsList (DataSetList): data sets
  _valuesOfInterest (seq): target precisions, either as a list or as a pproc.TargetValues class instance. There will be as many graphs as there are elements in this input.
  outputdir (string): output directory
def plot(dsList, valuesOfInterest=None, styles=styles):

From a DataSetList, plot a figure of ERT/dim vs dim.

There will be one set of graphs per function represented in the input data sets. Most usually the data sets of different functions will be represented separately.

Parameters
  dsList (DataSetList): data sets
  valuesOfInterest (seq): target precisions via class TargetValues; there may be as many graphs as there are elements in this input. Can be different for each function (a dictionary indexed by ifun).
  styles: Undocumented
Returns
handles
def plot_a_bar(x, y, plot_cmd=plt.loglog, rec_width=0.1, rec_taille_fac=0.3, styles={'color': 'b'}, linewidth=1, fill_color=None, fill_transparency=0.7):

Plot/draw a notched error bar; x is the x-position, y[0,1,2] are the lower, median and upper percentiles respectively.

hold(True) to see everything.

TODO: with linewidth=0, inf is not visible
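A rough sketch of how such a notched bar can be constructed (an illustration of the general technique under stated assumptions, not the module's actual implementation; the helper name is hypothetical): since the default plot_cmd is plt.loglog, the bar's horizontal extent is set multiplicatively around x, and the notch marks the median between the lower and upper percentiles.

```python
# Hypothetical sketch of notched-error-bar geometry for log-scaled axes.
def notched_bar_coords(x, y, rec_width=0.1):
    """x: x-position; y: (lower, median, upper) percentiles."""
    lower, median, upper = y
    # On a log x-axis, widths are multiplicative factors around x,
    # so the bar looks equally wide at every x-position.
    x_lo, x_hi = x * (1 - rec_width), x * (1 + rec_width)
    bar = [(x_lo, lower), (x_hi, lower), (x_hi, upper), (x_lo, upper)]
    notch = [(x_lo, median), (x_hi, median)]  # horizontal median line
    return bar, notch

bar, notch = notched_bar_coords(10.0, (50.0, 80.0, 120.0))
```

The returned vertex lists could then be handed to a plotting command such as plt.loglog to draw the rectangle and the median notch.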

def plot_previous_algorithms(func, target=None):

Add graph of the reference algorithm, specified in testbedsettings.current_testbed using the last, most difficult target in target.

def scaling_figure_caption():

Provides a figure caption with the help of captions.py for replacing common texts, abbreviations, etc.

refcolor: str = Undocumented

styles: list[dict] = Undocumented

xlim_max = Undocumented

ynormalize_by_dimension: bool = Undocumented (judging from the module description, this presumably controls whether evaluation counts are divided by the dimension on the y-axis)