Hi Stan,

> Would metrics generated by the benchmark be documented in the code review, as 
> comments within the tst_*.cpp file itself, or in a README.txt file within the 
> benchmark folder?

I normally include the benchmark results in the commit message.
That makes it easier for the reviewers and leaves the information in the git log.
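
For example, I just paste the relevant RESULT lines from the QTest output,
before and after the change, e.g. (the test name and numbers below are made
up, only to show the format):

    RESULT : tst_BenchStringJoin::joinWithOperator():
         0.00052 msecs per iteration (total: 68, iterations: 131072)

That way the reviewer sees the delta without having to build and run anything.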

Also, if you're rewriting a pre-existing function, it can make sense to add
the benchmark in a separate commit before the actual refactoring, so that the
same benchmark can be run against both the old and the new code.
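
In case it helps: the benchmarks under tests/benchmarks are ordinary QTest
classes whose slots wrap the interesting code in QBENCHMARK. A minimal sketch
(file and class names here are made up, not taken from the Qt sources):

    // tst_benchstringjoin.cpp -- hypothetical example, not from the Qt tree
    #include <QtTest/QtTest>

    class tst_BenchStringJoin : public QObject
    {
        Q_OBJECT

    private slots:
        void joinWithOperator();
    };

    // QBENCHMARK repeats the enclosed block until QTest considers the
    // measurement stable, then reports the time per iteration.
    void tst_BenchStringJoin::joinWithOperator()
    {
        const QString a = QStringLiteral("hello");
        const QString b = QStringLiteral("world");
        QBENCHMARK {
            const QString joined = a + QLatin1Char(' ') + b;
            Q_UNUSED(joined);
        }
    }

    QTEST_MAIN(tst_BenchStringJoin)
    #include "tst_benchstringjoin.moc"

The resulting binary can be run directly on each target; if the default
settings are too noisy on the embedded device, QTest accepts command-line
options such as -iterations and -median to stabilise the numbers.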

Best regards,
Ivan

________________________________________
From: Interest <interest-boun...@qt-project.org> on behalf of Stan Morris 
<pixelgre...@gmail.com>
Sent: Monday, September 30, 2024 10:36 PM
To: interest@qt-project.org
Subject: [Interest] Best practices for Qt code base benchmarks?

I want to add a benchmark that compares a legacy Qt function's performance 
against one I am developing as a patch. I cannot find guidance on best 
practices for benchmarks.

Is there documentation regarding best practices for benchmarks within the Qt 
framework?
Are the tests under "<module>/tests/benchmarks/..." intended to be run from 
within Qt Creator?
Is there a convention for documenting the goals of benchmarks?

It appears to me that the "tests/benchmarks" folder is meant for Qt code base 
developers to investigate performance during development, but where is the 
explanation of how to interpret the results?

For example, consider: "/qtdeclarative/tests/benchmarks/quick/events/". Are the 
results for a specific platform recorded anywhere? What do the results *mean*?

I'm testing on two platforms, a desktop and an embedded device, and getting 
results that show the patch can improve performance. Would metrics generated by 
the benchmark be documented in the code review, as comments within the 
tst_*.cpp file itself, or in a README.txt file within the benchmark folder?