Historically I would do

$ ./dotest.py +b <path to my benchmark test>

but I'm not strongly attached to that workflow - it's just what I learned the 
first time I needed to run one.

> On Dec 9, 2015, at 2:08 PM, Zachary Turner <ztur...@google.com> wrote:
> 
> When you do run the benchmark tests, what command line options do you use?  
> At the moment I'm mostly just trying to remove dead options from the test 
> suite.  I removed one already that allowed you to specify the benchmark 
> executable, but when I started looking at the rest and saw how tightly 
> integrated they are with the benchmark tests in general, I started to 
> wonder.
> 
> The four benchmark-related command line options are:
> 
> 1. An option to specify the benchmark executable (defaults to lldb.exe)
> 2. An option to specify the breakpoint spec (defaults to -n main)
> 3. An option to specify the breakpoint iteration count (defaults to 30 I 
> think)
> 4. An option to specify that you only want to run benchmark tests and no 
> other tests.
> 
> I deleted #4 because you can use the category system for that (see the 
> sketch below).  I deleted #1 because nobody said they needed it on the 
> spreadsheet.  Nobody said they needed #2 or #3 either, but I just want to 
> double-check that deleting them is fine.
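> 
> For reference, a sketch of the category-based replacement for #4.  This 
> assumes dotest.py's -G/--category filter and a "benchmarks" category; 
> double-check the exact flag name before relying on it:
> 
>     $ ./dotest.py -G benchmarks
> 
> That should select only the tests tagged with the benchmarks category and 
> skip everything else.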
> 
>> On Wed, Dec 9, 2015 at 2:01 PM Enrico Granata <egran...@apple.com> wrote:
>> I have actually added a few benchmark tests recently.  Admittedly we are 
>> not good about ever running those tests (most likely because they're not 
>> run by default - and I do wonder whether some of them would take a long 
>> time to run; I don't think I have ever run the full set, just my own as I 
>> iterate on performance work).
>> 
>> Maybe we could try flipping the default to "run the benchmarks", see 
>> whether test suite run times explode, and take it from there in terms of 
>> feasibility as well as whether they all still make sense.
>> 
>> The other problem with the tests as they stand is that they mark themselves 
>> as PASS or FAIL purely on the basis of whether they encounter command or API 
>> errors, and do nothing to track performance regressions.  That is admittedly 
>> a harder problem to tackle given heterogeneous hardware and workloads - but 
>> maybe we could have them fail if the timings regress wildly beyond some 
>> threshold?
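>> 
>> To make that concrete, here is a rough sketch of what such a check could 
>> look like.  The baseline and slack values are invented for illustration, 
>> and check_regression is a hypothetical helper, not anything in the current 
>> harness:
>> 
>>     # Hypothetical threshold check; assumes the test has already measured
>>     # an average timing (e.g. via the benchmark stopwatch's avg()).
>>     BASELINE_AVG_SECONDS = 0.5  # made-up expected average for this machine
>>     ALLOWED_SLOWDOWN = 3.0      # only fail on a wild (3x) regression
>> 
>>     def check_regression(test, measured_avg):
>>         """Fail the test if the measured average blows past the limit."""
>>         limit = BASELINE_AVG_SECONDS * ALLOWED_SLOWDOWN
>>         test.assertLessEqual(
>>             measured_avg, limit,
>>             "benchmark regressed: avg %.3fs > limit %.3fs"
>>             % (measured_avg, limit))
>> 
>> A test would call check_regression(self, self.stopwatch.avg()) at the end 
>> of its run; where the per-machine baselines come from is the open question.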
>> 
>>> On Dec 9, 2015, at 1:22 PM, Todd Fiala via lldb-dev 
>>> <lldb-dev@lists.llvm.org> wrote:
>>> 
>>> Hey Jason,
>>> 
>>> Are you the benchmark user?
>>> 
>>> -Todd
>>> 
>>>> On Wed, Dec 9, 2015 at 12:32 PM, Zachary Turner via lldb-dev 
>>>> <lldb-dev@lists.llvm.org> wrote:
>>>> Is anyone using the benchmark tests?  None of the command line options 
>>>> related to the benchmark tests were claimed as being used by anyone, 
>>>> which makes me wonder whether the tests are being used at all.
>>>> 
>>>> What I really want to know is: is it really OK to delete the -x and -y 
>>>> command line options?  And what is the status of these tests?  Does 
>>>> anyone use them?
>>>> 
>>>
>>> -- 
>>> -Todd