Ahmet,
> I have edited the code following Hin-Tak's suggestion. Here
> are the two results pages, also pushed to GitLab.
Thanks. It seems to me we are getting nearer. However, there are
still large differences.
* Chris mentioned a potential problem with `clock_gettime` in the
code of `ftbench.c`. Please have a look.
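  As a reference for what to check, here is a minimal, standalone sketch
  (not the actual `ftbench.c` code) of a timer based on `clock_gettime`;
  using `CLOCK_MONOTONIC` avoids the wall-clock jumps (NTP adjustments,
  manual clock changes) that `CLOCK_REALTIME` is subject to:

  ```c
  #include <stdio.h>
  #include <time.h>

  /* Return a monotonic timestamp in nanoseconds.  This is a sketch,
     assuming POSIX `clock_gettime`; error checking is omitted. */
  static long long
  get_time_ns( void )
  {
    struct timespec  ts;


    clock_gettime( CLOCK_MONOTONIC, &ts );
    return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
  }

  int
  main( void )
  {
    long long      start = get_time_ns();
    volatile long  sum   = 0;


    /* A dummy workload standing in for a real test. */
    for ( long i = 0; i < 1000000; i++ )
      sum += i;

    printf( "elapsed_ns=%lld\n", get_time_ns() - start );
    return 0;
  }
  ```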
* As mentioned a few times already in previous e-mails I think we need
some code to increase the run time for individual tests. For
example, the line
```
Load_Advances (Fast) 47500 202 284 -40.6
```
  indicates that 47500 iterations took only 202µs vs. 284µs; given
  the 'CPU noise' I think such a short interval is far too short to
  be meaningful.
  For example, if the cumulative time for test X is less than a
  certain threshold, increase N so that the cumulative time comes
  very near to the threshold.  If this calibration is performed by
  the 'baseline' stage, the resulting N should be stored in a
  configuration file so that the 'benchmark' stage can extract this
  information and use exactly the same N value for test X.
Please work on that.
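  The calibration described above could look roughly like the
  following sketch.  The names `run_test_once` and `THRESHOLD_US` are
  made up for illustration and are not part of ftbench; the idea is
  just to scale N until the cumulative time lands near the threshold,
  then emit the N value that the 'baseline' stage would write to the
  configuration file:

  ```c
  #include <stdio.h>
  #include <time.h>

  #define THRESHOLD_US  100000LL   /* hypothetical: 100ms minimum per test */

  static long long
  now_us( void )
  {
    struct timespec  ts;


    clock_gettime( CLOCK_MONOTONIC, &ts );
    return (long long)ts.tv_sec * 1000000LL + ts.tv_nsec / 1000;
  }

  /* Dummy test body, standing in for e.g. `Load_Advances`. */
  static void
  run_test_once( void )
  {
    volatile long  sum = 0;


    for ( long i = 0; i < 10000; i++ )
      sum += i;
  }

  /* Run the test `n` times; return the cumulative time in µs. */
  static long long
  run_test( long long  n )
  {
    long long  start = now_us();


    for ( long long i = 0; i < n; i++ )
      run_test_once();
    return now_us() - start;
  }

  int
  main( void )
  {
    long long  n       = 1000;           /* initial iteration count */
    long long  elapsed = run_test( n );


    /* Scale N until the cumulative time reaches the threshold. */
    while ( elapsed < THRESHOLD_US )
    {
      n       = elapsed > 0 ? n * THRESHOLD_US / elapsed + 1 : n * 10;
      elapsed = run_test( n );
    }

    /* The 'baseline' stage would now store N in a config file,
       e.g. a line like "Load_Advances <N>", for the 'benchmark'
       stage to reuse verbatim. */
    printf( "N=%lld elapsed_us=%lld\n", n, elapsed );
    return 0;
  }
  ```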
* It would be great if you could use a statistics program of your
choice and prepare some diagrams of the most problematic cases that
show the actual timing distributions graphically. Right now, we
only see the final cumulative value; however, it would be most
interesting to see more details of the timing slots.
  For example, I can imagine that you add some `printf` calls
  (printing to a buffer, which gets dumped to stdout or whatever
  *after* the tests); Gnuplot or something similar can then generate
  diagrams.
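  To illustrate the buffered-output idea: the sketch below (hypothetical
  names, not ftbench code) records the time of each iteration into an
  array and only dumps the samples after all measurements are done, so
  the I/O overhead does not distort the timings.  The resulting
  two-column "index value" data can be plotted directly, e.g. with
  Gnuplot's `plot 'timings.dat' with points`:

  ```c
  #include <stdio.h>
  #include <time.h>

  #define SLOTS  100   /* hypothetical number of timing samples */

  static long long  samples[SLOTS];

  static long long
  now_ns( void )
  {
    struct timespec  ts;


    clock_gettime( CLOCK_MONOTONIC, &ts );
    return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
  }

  int
  main( void )
  {
    /* Measure first, into a buffer; no printing yet. */
    for ( int i = 0; i < SLOTS; i++ )
    {
      long long      start = now_ns();
      volatile long  sum   = 0;


      for ( long j = 0; j < 10000; j++ )   /* stand-in for a real test */
        sum += j;
      samples[i] = now_ns() - start;
    }

    /* Dump the buffer *after* the tests, one sample per line. */
    for ( int i = 0; i < SLOTS; i++ )
      printf( "%d %lld\n", i, samples[i] );

    return 0;
  }
  ```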
Werner