Hi,

On Fri, Sep 17, 2010 at 3:50 AM, Michael Hope <michael.h...@linaro.org> wrote:
> It's only part of the puzzle, but I run speed benchmarks as part of
> the continuous build:
>  http://ex.seabright.co.nz/helpers/buildlog
>  http://ex.seabright.co.nz/helpers/benchcompare
>  http://ex.seabright.co.nz/build/gcc-linaro-4.5-2010.09-1/logs/armv7l-maverick-cbuild4-pavo4/pybench-test.txt
>
> I've just modified this to build different variants as well.  ffmpeg
> now builds as supplied (-O2 and others), with -Os, with hand-written
> assembler turned off, and with -mfpu=neon.  corebench builds in -O2
> and -Os.
>
> This might be one way to approach things.  It's simple to add other
> programs into the mix.

Could you easily add code size metrics?

It would be useful to watch those for regressions also, especially if
there's an ongoing effort to make -Os better.
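
A small script run after each build could record this.  Here's a rough
sketch in Python (untested; the binary paths are hypothetical, and it
assumes binutils "size" is on the path):

    #!/usr/bin/env python
    # Record text/data/bss section sizes for each built binary using
    # binutils "size", so they can be compared across builds.
    import subprocess

    # Hypothetical paths; point these at the real build output.
    BINARIES = ["build/ffmpeg/ffmpeg", "build/corebench/corebench"]

    def section_sizes(path):
        """Return (text, data, bss) in bytes as reported by `size -B`."""
        out = subprocess.check_output(["size", "-B", path]).decode()
        # Berkeley format: a header line, then
        # "text data bss dec hex filename".
        fields = out.splitlines()[1].split()
        return int(fields[0]), int(fields[1]), int(fields[2])

    for path in BINARIES:
        text, data, bss = section_sizes(path)
        print("%s: text=%d data=%d bss=%d total=%d"
              % (path, text, data, bss, text + data + bss))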


It would be good to have more system-oriented metrics as well, such as
boot, login and app launch times, and cache and TLB performance.
Results of microbenchmarks can be quite misleading when it comes to
the performance of the system as a whole.  I'm not sure of the best
way to approach that: many variables affect performance, and you'd
need to build many packages to get a system to benchmark.  It might
be overkill; the toolchain can definitely influence such metrics, but
it may become a less dominant factor once you're studying a large
enough blob of software.
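
For the cache/TLB side, hardware counters from Linux perf could at
least be collected per benchmark run.  Another rough, untested sketch
(assumes perf is installed and the kernel exposes these events;
./run-benchmark is a made-up wrapper, and the CSV field layout can
vary slightly across perf versions):

    #!/usr/bin/env python
    # Run a benchmark under "perf stat" and pull cache and TLB miss
    # counts out of its CSV output (-x,), which goes to stderr.
    import subprocess

    EVENTS = "cache-references,cache-misses,dTLB-load-misses"
    CMD = ["./run-benchmark"]  # hypothetical benchmark wrapper

    def perf_counts(cmd):
        """Return {event: count} parsed from perf stat's stderr."""
        proc = subprocess.Popen(
            ["perf", "stat", "-x", ",", "-e", EVENTS, "--"] + cmd,
            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        _, err = proc.communicate()
        counts = {}
        for line in err.decode().splitlines():
            fields = line.split(",")
            # First field is the count; skip "<not counted>" etc.
            if len(fields) >= 3 and fields[0].strip().isdigit():
                counts[fields[2]] = int(fields[0])
        return counts

    print(perf_counts(CMD))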

Cheers
---Dave
