On 18.09.20 20:19, Vladimir Sementsov-Ogievskiy wrote:
> Performance improvements / degradations are usually discussed in
> percentage. Let's make the script calculate it for us.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy <[email protected]>
> ---
>  scripts/simplebench/simplebench.py | 46 +++++++++++++++++++++++++++---
>  1 file changed, 42 insertions(+), 4 deletions(-)
>
> diff --git a/scripts/simplebench/simplebench.py b/scripts/simplebench/simplebench.py
> index 56d3a91ea2..0ff05a38b8 100644
> --- a/scripts/simplebench/simplebench.py
> +++ b/scripts/simplebench/simplebench.py
[...]
> + for j in range(0, i):
> + env_j = results['envs'][j]
> + res_j = case_results[env_j['id']]
> +
> + if 'average' not in res_j:
> + # Failed result
> + cell += ' --'
> + continue
> +
> + col_j = chr(ord('A') + j)
> + avg_j = res_j['average']
> + delta = (res['average'] - avg_j) / avg_j * 100
I was wondering why you’d subtract, when percentage differences are
usually expressed as a quotient. Then I realized the two are equivalent,
and this would usually be written as:

    (res['average'] / avg_j - 1) * 100
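(To convince myself that both spellings agree, a quick check with
made-up numbers — the values here are purely illustrative, not from any
benchmark run:)

```python
# Two hypothetical run averages: avg_j is the baseline, avg_i the run
# being compared against it.
avg_j = 12.5
avg_i = 15.0

# The patch's spelling: difference relative to the baseline.
subtract_form = (avg_i - avg_j) / avg_j * 100

# The quotient spelling: ratio minus one.
quotient_form = (avg_i / avg_j - 1) * 100

# Both give +20.0 for these inputs, since (a - b)/b == a/b - 1.
assert abs(subtract_form - quotient_form) < 1e-9
```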
> + delta_delta = (res['delta'] + res_j['delta']) / avg_j * 100
Why not use the new format_percent for both cases?
> + cell += f' {col_j}{round(delta):+}±{round(delta_delta)}%'
I don’t know what to make of ±delta_delta. If I saw “Compared to run A,
this is +42.1%±2.0%”, I would assume you had calculated the difference
between each pair of run results, and then computed the average and
standard deviation over that array.

Furthermore, I don’t even know what this delta_delta is supposed to tell
you. It isn’t really a delta_delta at all; it’s an average_delta.
The delta_delta would be (res['delta'] / res_j['delta'] - 1) * 100.0.
And that might be presented perhaps like “+42.1% Δ± +2.0%” (if delta
were the SD, “Δx̅=+42.1% Δσ=+2.0%” would also work; although, again, I do
interpret ± as the SD anyway).
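(To make the distinction concrete, here is a sketch with invented
numbers — avg_j/delta_j for the baseline run and delta_i for the current
one are hypothetical, not taken from the patch or any real benchmark:)

```python
# Baseline run: average and its delta (spread); current run's delta.
avg_j, delta_j = 100.0, 2.0
delta_i = 1.5

# What the patch computes: the summed deltas, normalized by the
# baseline average -- an "average_delta", here 3.5 (%).
average_delta = (delta_i + delta_j) / avg_j * 100

# What a genuine "delta of the deltas" would be: the relative change
# between the two deltas themselves, here -25.0 (%).
delta_delta = (delta_i / delta_j - 1) * 100.0
```

The two numbers answer different questions: the first bounds the
uncertainty of the comparison, the second says whether the spread itself
grew or shrank.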
Max