[
https://issues.apache.org/jira/browse/TINKERPOP-1016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15208287#comment-15208287
]
ASF GitHub Bot commented on TINKERPOP-1016:
-------------------------------------------
Github user twilmes commented on the pull request:
https://github.com/apache/incubator-tinkerpop/pull/274#issuecomment-200309448
I had done it like that so you could switch back and forth between
branches, run benchmarks, and have the results persist for comparison
without copying them out of `target`. Having said that, I'm just using
the benchmark class name as the name of the file, so these would get
overwritten on every run anyway. I could update the naming to include an
incrementing number in the file name. I'm not opposed to putting them
under `target`, though. I'd imagine the usual performance tuning cycle
goes something like this: iterate on perf improvements in your branch,
running benchmarks as you make updates; then commit, switch back to
master or the latest stable, run the benchmarks again, and note how much
of an improvement you made.
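The incrementing-number naming mentioned above could be sketched roughly as follows. This is just an illustration, not the actual PR code; the class name, result directory, and `.txt` extension are hypothetical:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: instead of writing results to <benchmarkClass>.txt (overwritten
// on every run), pick the first free counter suffix so results from
// successive runs sit side by side for comparison across branches.
public class BenchmarkResultNamer {

    /** Returns the first path of the form dir/name-N.txt that does not yet exist. */
    static Path nextResultFile(Path dir, String benchmarkClassName) {
        int run = 0;
        Path candidate;
        do {
            candidate = dir.resolve(benchmarkClassName + "-" + run + ".txt");
            run++;
        } while (Files.exists(candidate));
        return candidate;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("benchmarks");
        Path first = nextResultFile(dir, "GraphTraversalBenchmark");
        Files.createFile(first);
        Path second = nextResultFile(dir, "GraphTraversalBenchmark");
        System.out.println(first.getFileName());   // GraphTraversalBenchmark-0.txt
        System.out.println(second.getFileName());  // GraphTraversalBenchmark-1.txt
    }
}
```

With this scheme, runs from one branch land next to runs from another, so switching branches never clobbers earlier results.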
> Replace junit-benchmarks with JMH
> ---------------------------------
>
> Key: TINKERPOP-1016
> URL: https://issues.apache.org/jira/browse/TINKERPOP-1016
> Project: TinkerPop
> Issue Type: Improvement
> Components: test-suite
> Affects Versions: 3.1.0-incubating
> Reporter: Ted Wilmes
> Assignee: Ted Wilmes
> Priority: Minor
>
> Replace junit-benchmarks with JMH. This includes the following tasks:
> * Evaluate which existing benchmarks should be kept and port them to JMH.
> * Add an initial JMH benchmark for TINKERPOP-957
> * See if we can trigger perf tests with the same scheme that is currently
> used {noformat}mvn verify -DskipPerformanceTests=true/false{noformat}
> * Write developer docs that outline our initial performance testing approach.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)