Generate performance scaling figures.
The figures show the scaling of performance, in terms of ERT, with dimensionality on a log-log scale. On the y-axis, data is represented as the number of function evaluations divided by dimension; this allows comparison at a glance with linear scaling, for which ERT is proportional to the dimension and would therefore appear as a horizontal line in the figure.
Crosses (+) give the median number of function evaluations of successful trials, divided by dimension, for the smallest reached target function value. Numbers indicate the number of successful runs for the smallest reached target. If the smallest target function value (1e-8) is not reached for a given dimension, crosses (x) give the average number of function evaluations conducted overall, divided by the dimension.
Horizontal lines indicate linear scaling with the dimension, additional grid lines show quadratic and cubic scaling. The thick light line with diamond markers shows the results of the specified reference algorithm for df = 1e-8 or a runlength-based target (if in the expensive/runlength-based targets setting).
Example
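The example code itself did not survive extraction; the following is a minimal reconstruction in the spirit of the module's documented usage, not a verbatim copy. The data path is illustrative, cocopp must be installed, and loading calls may differ between cocopp versions.

```python
from matplotlib.pyplot import figure, show

import cocopp

# Load previously extracted BBOB data into a DataSetList;
# the pickle path below is illustrative.
ds = cocopp.load('BBOB2009pythondata/BIPOP-CMA-ES/ppdata_f002_20.pickle')

figure()
cocopp.ppfigdim.plot(ds)    # ERT/dim vs dimension, one graph per target
cocopp.ppfigdim.beautify()  # log-log axes, scaling grid lines, title
show()
```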
Function | beautify | Customize figure presentation. |
Function | generate | Computes an array of results to be plotted. |
Function | main | From a DataSetList, returns a convergence and ERT/dim figure vs dim. |
Function | plot | From a DataSetList, plot a figure of ERT/dim vs dim. |
Function | plot_a_bar | Plot/draw a notched error bar; x is the x-position and y[0], y[1], y[2] are the lower, median and upper percentiles respectively. |
Function | plot_previous_algorithms | Add a graph of the reference algorithm, specified in testbedsettings.current_testbed, using the last, most difficult target in target. |
Function | scaling_figure_caption | Provides a figure caption with the help of captions.py for replacing common texts, abbreviations, etc. |
Variable | refcolor | Undocumented |
Variable | styles | Undocumented |
Variable | xlim | Undocumented |
Variable | ynormalize | Undocumented |
Customize figure presentation.
Uses information from the appropriate benchmark short-info file for the figure title.
Computes an array of results to be plotted.
Returns | (ert, success rate, number of successes, total number of function evaluations, median of successful runs) |
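For orientation, here is a standalone sketch of how the returned statistics for one (function, dimension) pair can be computed from raw trial data. It illustrates the definitions (ERT is the total number of function evaluations divided by the number of successes); it is not the actual generate implementation.

```python
import numpy as np

def run_statistics(evals, successes):
    """Aggregate raw trial data for one (function, dimension) pair.

    evals: evaluations spent per trial (until the target was hit for
           successful trials, the full budget otherwise)
    successes: one boolean per trial, True if the target was reached
    """
    evals = np.asarray(evals, dtype=float)
    successes = np.asarray(successes, dtype=bool)
    nb_succ = int(successes.sum())
    total_evals = float(evals.sum())
    # ERT: total evaluations over all trials divided by number of successes
    ert = total_evals / nb_succ if nb_succ else np.inf
    success_rate = nb_succ / len(evals)
    med_succ = float(np.median(evals[successes])) if nb_succ else np.nan
    return ert, success_rate, nb_succ, total_evals, med_succ

# 5 trials, 3 of them successful:
print(run_statistics([120, 300, 95, 1000, 1000],
                     [True, True, True, False, False]))
# -> (838.33..., 0.6, 3, 2515.0, 120.0)
```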
From a DataSetList, returns a convergence and ERT/dim figure vs dim.
If available, uses data of a reference algorithm as specified in genericsettings.py.
Parameters | |
ds | data sets |
_values (seq) | target precisions, either as a list or as a pproc.TargetValues class instance. There will be as many graphs as there are elements in this input. |
outputdir (string) | output directory |
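A hedged call sketch follows; the data path and target list are illustrative, and pproc.TargetValues is the class mentioned in the parameter table above:

```python
import cocopp
from cocopp import pproc

# Illustrative data path; any loaded DataSetList works.
dsList = cocopp.load('BBOB2009pythondata/BIPOP-CMA-ES/ppdata_f002_20.pickle')

targets = pproc.TargetValues([10, 1, 1e-1, 1e-2, 1e-3, 1e-5, 1e-8])
cocopp.ppfigdim.main(dsList, targets, 'ppdata')  # writes figures to ./ppdata
```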
From a DataSetList, plot a figure of ERT/dim vs dim.
There will be one set of graphs per function represented in the input data sets. Usually, the data sets of different functions are represented separately.
Parameters | |
ds | data sets |
values (seq) | target precisions via class TargetValues; there might be as many graphs as there are elements in this input. Can be different for each function (a dictionary indexed by ifun). |
styles | Undocumented |
Returns | handles |
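A hedged sketch of calling plot directly and keeping the returned handles for post-hoc styling; the target values are illustrative, and it is an assumption here that the returned handles are matplotlib line objects:

```python
from matplotlib.pyplot import figure
import cocopp
from cocopp import pproc

ds = cocopp.load('BBOB2009pythondata/BIPOP-CMA-ES/ppdata_f002_20.pickle')

figure()
# Second positional argument: target precisions, one graph per target.
handles = cocopp.ppfigdim.plot(ds, pproc.TargetValues([1, 1e-5, 1e-8]))
cocopp.ppfigdim.beautify()
for h in handles:
    h.set_linewidth(2)  # tweak the returned matplotlib line handles
```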
Plot/draw a notched error bar. x is the x-position; y[0], y[1] and y[2] are the lower, median and upper percentiles respectively.
hold(True) to see everything.
TODO: with linewidth=0, inf is not visible
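A self-contained sketch of the drawing idea described above, not the actual implementation: a vertical bar from the lower to the upper percentile with a marker at the median, on the log-log axes used by these figures.

```python
import matplotlib.pyplot as plt

def draw_notched_bar(ax, x, y, color='b'):
    """Draw a notched error bar at x; y = (lower, median, upper) percentiles."""
    lower, median, upper = y
    ax.plot([x, x], [lower, upper], color=color, linewidth=2)  # percentile span
    ax.plot([x], [median], marker='D', color=color)            # median marker
    for end in (lower, upper):  # short horizontal notches, sized for log-x
        ax.plot([x * 0.95, x * 1.05], [end, end], color=color, linewidth=1)

fig, ax = plt.subplots()
ax.set_xscale('log')
ax.set_yscale('log')  # ppfigdim figures use log-log axes
draw_notched_bar(ax, 10, (50, 120, 400))
plt.show()
```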
Add a graph of the reference algorithm, specified in testbedsettings.current_testbed, using the last, most difficult target in target.