[ https://issues.apache.org/jira/browse/GEODE-6086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16701219#comment-16701219 ]

ASF GitHub Bot commented on GEODE-6086:
---------------------------------------

WireBaron opened a new pull request #7: GEODE-6086: Adding a tool to allow 
command line analysis of test runs
URL: https://github.com/apache/geode-benchmarks/pull/7
 
 
   This change adds the analyzeRun gradle target, which, when passed a baseline
   result directory and a test result directory via --args, analyzes the output
   and reports the test's variation from the baseline.
   
   
   Sample output: 
   ```
   MacBook-Pro-2:geode-performance browe$ ./gradlew analyzeRun --args 
"/Users/browe/project/geode-performance/geode-benchmarks/output2 
/Users/browe/project/geode-performance/geode-benchmarks/output"
   
   > Task :harness:analyzeRun
   Running analyzer
   Comparing test result at 
/Users/browe/project/geode-performance/geode-benchmarks/output2 to baseline at 
/Users/browe/project/geode-performance/geode-benchmarks/output
   -- PartitionedPutBenchmark --
   average ops/second
   Result: 47386.93220338983
   Baseline: 46237.745762711864
   Relative performance: 1.0248538595842343
   
   99th percentile latency
   Result: 292.85714285714283
   Baseline: 294.11764705882354
   Relative performance: 0.9957142857142857
   ```
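   
   For context, the "Relative performance" figures above are just the test result divided by the baseline. A minimal sketch of that comparison (the class and method names here are illustrative, not the analyzer's actual API):
   ```java
   // Illustrative sketch only -- not the analyzer's real code.
   // Relative performance is the test value divided by the baseline value.
   public class RelativePerformance {
     static double relative(double result, double baseline) {
       return result / baseline;
     }
   
     public static void main(String[] args) {
       // Values taken from the ops/second comparison above; prints roughly 1.0249.
       System.out.println("Relative performance: " + relative(47386.93220338983, 46237.745762711864));
     }
   }
   ```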
   
   Two remaining pain points I'd like to address:
   1. When adding a benchmark, it would be easier to use roles rather than node names.
   2. It would also be nice to reference the benchmark classes directly in the analyzer (to get their output directories), but that dependency goes the wrong way.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Provide a tool that can extract the key metrics we want from the yardstick 
> output
> ---------------------------------------------------------------------------------
>
>                 Key: GEODE-6086
>                 URL: https://issues.apache.org/jira/browse/GEODE-6086
>             Project: Geode
>          Issue Type: New Feature
>          Components: benchmarks
>            Reporter: Dan Smith
>            Assignee: Brian Rowe
>            Priority: Major
>              Labels: pull-request-available
>
> *Given* some output directories from our benchmark running tool.
> *Produce* text (csv, json, ??) that gives a single number for each output 
> directory that is passed in to the tool.
> The initial numbers we would like to be able to get are average throughput and 
> 99th percentile latency, but we want to be able to extract other metrics in 
> the future.
> *Acceptance*
> Given that a developer has run some benchmarks with the benchmark running 
> tool, they can run this tool and get a summary of the benchmark data (average 
> throughput and 99th percentile latency) for each output directory.
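
A rough sketch of the summary computation the acceptance criteria describe, assuming per-second throughput and latency samples have already been parsed out of each output directory (the yardstick file format and the parsing step are not shown here and are assumptions):
```java
// Illustrative sketch, not the tool's actual implementation.
import java.util.Arrays;

public class BenchmarkSummary {
  // Average throughput across per-second ops/second samples.
  static double averageThroughput(double[] opsPerSecond) {
    return Arrays.stream(opsPerSecond).average().orElse(0.0);
  }

  // 99th percentile latency via nearest-rank on the sorted samples.
  static double percentile99(double[] latencies) {
    double[] sorted = latencies.clone();
    Arrays.sort(sorted);
    int rank = (int) Math.ceil(0.99 * sorted.length) - 1;
    return sorted[Math.max(rank, 0)];
  }
}
```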



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
