[lldb-dev] buildbot master configs [Re: buildbot deployment: gsutil: Anonymous caller does not have storage.objects.create access to lldb_test_traces]

2018-08-23 Thread Jan Kratochvil via lldb-dev
Hello,

I need a local buildbot master instance for testing, to develop a buildbot slave config:

On Thu, 02 Aug 2018 14:47:42 +0200, Pavel Labath via lldb-dev wrote:
> On Thu, 2 Aug 2018 at 13:39, Jan Kratochvil  wrote:
> > On Thu, 02 Aug 2018 13:47:25 +0200, Pavel Labath wrote:
> > > *However*, for setting up a new bot, I'd recommend not using this
> > > particular slave factory (getLLDBScriptCommandsFactory) at all,
> > > because it's heavily customized for our use case (*), and very
> > > different from how typical llvm buildbots are set up. You might be
> > > better off setting up a new factory, which just does the typical
> > > checkout+build+(optional) test steps, and avoids all of this mess.
> >
> > OK. For development of these new steps I guess I should run my own buildbot
> > master instance? As otherwise that will be probably several/many commits to
> > zorg repo (+requested buildbot master restarts) and I may screw up something
> > along.
> 
> Yes, that would definitely be the best, but last time I tried that, I
> couldn't get my master instance to run, for any approximation of the
> word "run" (which is part of the reason why I haven't done anything
> about this slave factory, even though I really don't like it).

I have found that buildbot versions other than 0.8.5 are incompatible with the
LLVM infrastructure/configs, so to run 0.8.5 on Fedora 28 x86_64 I have
backported:
https://people.redhat.com/jkratoch/buildbot-0.8.5-fix.patch
https://people.redhat.com/jkratoch/buildbot-0.8.5-fix2.patch

So I downloaded zorg from LLVM and set it up
[buildbot@host1 ~]$ ls -l lldbmaster
lrwxrwxrwx 1 buildbot buildbot 32 Aug 14 18:55 lldbmaster -> zorg-git/buildbot/osuosl/master/
[buildbot@host1 ~]$ ls -l lldbmaster/
total 76
-rw-r--r-- 1 buildbot buildbot   878 Aug 14 15:25 buildbot.tac
drwxr-xr-x 2 buildbot buildbot  4096 Aug 14 19:01 config
-rw-r--r-- 1 buildbot buildbot  9552 Aug 14 15:25 master.cfg
drwxr-xr-x 2 buildbot buildbot  4096 Aug 14 15:25 public_html
-rw-r--r-- 1 buildbot buildbot   465 Aug 14 15:25 README.txt
drwxr-xr-x 2 buildbot buildbot  4096 Aug 14 15:25 templates
-rw-r--r-- 1 buildbot buildbot 34088 Aug 14 19:01 twistd.log
-rw------- 1 buildbot buildbot     7 Aug 14 19:01 twistd.pid
lrwxrwxrwx 1 buildbot buildbot    28 Aug 14 19:00 zorg -> /home/buildbot/zorg-git/zorg
with the zorg-git directory cloned from https://llvm.org/git/zorg.git and
patched as attached, but then I still get:

--
$ buildbot start ~/lldbmaster
Following twistd.log until startup finished..
/home/buildbot/.local/lib/python2.7/site-packages/buildbot-latest-py2.7.egg/buildbot/schedulers/base.py:111: DeprecationWarning: twisted.internet.defer.deferredGenerator was deprecated in Twisted 15.0.0; please use twisted.internet.defer.inlineCallbacks instead
  @defer.deferredGenerator
... ^^^ this looks harmless

2018-08-14 14:35:04+0200 [-] error while parsing config file
2018-08-14 14:35:04+0200 [-] Unhandled Error
Traceback (most recent call last):
  File "/home/buildbot/.local/lib/python2.7/site-packages/buildbot-latest-py2.7.egg/buildbot/master.py", line 197, in loadTheConfigFile
    d = self.loadConfig(f)
  File "/home/buildbot/.local/lib/python2.7/site-packages/buildbot-latest-py2.7.egg/buildbot/master.py", line 579, in loadConfig
    d.addCallback(do_load)
  File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 317, in addCallback
    callbackKeywords=kw)
  File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 306, in addCallbacks
    self._runCallbacks()
--- <exception caught here> ---
  File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 587, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/home/buildbot/.local/lib/python2.7/site-packages/buildbot-latest-py2.7.egg/buildbot/master.py", line 226, in do_load
    exec f in localDict
  File "/quad/home/buildbot/lldbmaster/master.cfg", line 104, in <module>
    standard_categories)
  File "/quad/home/buildbot/lldbmaster/config/status.py", line 31, in get_status_targets
    default_email = config.options.get('Master Options', 'default_email')
  File "/usr/lib64/python2.7/ConfigParser.py", line 330, in get
    raise NoSectionError(section)
ConfigParser.NoSectionError: No section: 'Master Options'
--

I have found that the last zorg files containing '[Master Options]' were removed by:
https://reviews.llvm.org/D30503
commit a4a7c00a15e94bf2a26ec209d27e6ece5c20a16b
git-svn-id: https://llvm.org/svn/llvm-project/zorg/trunk@296756 91177308-0d34-0410-b5e6-96231b3b80d8
Dele
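
Since config/status.py still asks ConfigParser for that section, one local
workaround (a sketch only; I am assuming config.options is read from a
local.cfg next to master.cfg, and more keys may turn out to be needed further
along) would be to provide the section by hand:

[Master Options]
default_email = buildbot@example.com

With that section present, config.options.get('Master Options',
'default_email') resolves instead of raising NoSectionError.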

Re: [lldb-dev] Using FileCheck in lldb inline tests

2018-08-23 Thread Vedant Kumar via lldb-dev
Pinging this because I'd like this to go forward to make testing easier.

I know folks have concerns about maintaining completeness of the scripting APIs 
and about keeping the test suite debuggable. I just don't think making 
FileCheck available in inline tests is counter to those goals :).

I think this boils down to having a more powerful replacement for `self.expect` 
in lldbinline tests. As we're actively discouraging use of pexpect during code 
review now, we need some replacement.
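
To make the comparison concrete, here is roughly the kind of check I have in
mind inside an lldbinline test, next to how the same thing is spelled with
`self.expect` today. The `filecheck` helper name and signature below are
illustrative only, not a settled API:

  # Today, inside an lldbinline test:
  self.expect("frame variable x", substrs=["(int) x = 42"])

  # With FileCheck available: match the command output against CHECK
  # lines kept in the same source file (illustrative helper):
  self.filecheck("frame variable x", __file__)
  # CHECK: (int) x = 42

The pattern form makes it cheap to pin down ordering and surrounding context
in the output, which is where `substrs` gets awkward.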

vedant

> On Aug 15, 2018, at 12:18 PM, Vedant Kumar  wrote:
> 
> 
> 
>> On Aug 15, 2018, at 12:12 PM, Jason Molenda  wrote:
>> 
>> 
>> 
>>> On Aug 15, 2018, at 11:34 AM, Vedant Kumar  wrote:
>>> 
>>> 
>>> 
>>>> On Aug 14, 2018, at 6:19 PM, Jason Molenda  wrote:
>>>> 
>>>> It's more verbose, and it does mean test writers need to learn the public 
>>>> API, but it's also much more stable and debuggable in the future.
>>> 
>>> I'm not sure about this. Having looked at failing sb api tests for a while 
>>> now, I find them about as easy to navigate and fix as FileCheck tests in 
>>> llvm.
>> 
>> I don't find that to be true.  I see a failing test on line 79 or whatever, 
>> and depending on what line 79 is doing, I'll throw in some 
>> self.runCmd("bt")'s or self.runCmd("fr v") to the test, re-run, and see what 
>> the relevant context is quickly. For most simple tests, I can usually spot 
>> the issue in under a minute.  dotest.py likes to eat output when it's run in 
>> multiprocess mode these days, so I have to remember to add 
>> --no-multiprocess.  If I'm adding something that I think is generally useful 
>> to debug the test case, I'll add a conditional block testing against 
>> self.TraceOn() and print things that may help people who are running 
>> dotest.py with -t trace mode enabled.
> 
> I do agree that there are effective ways of debugging sb api tests. Having 
> worked with plenty of filecheck-based tests in llvm/clang/swift, I find them 
> to be as easy (or easier for me personally) to debug.
> 
> 
>> Sometimes there is a test written so it has a "verify this value" function 
>> that is run over a variety of different variables during the test timeframe, 
>> and debugging that can take a little more work to understand the context 
>> that is failing.  But that kind of test would be harder (or at least much 
>> more redundant) to express in a FileCheck style system anyway, so I can't 
>> ding it.
> 
> 
> Yep, sounds like a great candidate for a unit test or an SB API test.
> 
> 
>> As for the difficulty of writing SB API tests, you do need to know the 
>> general architecture of lldb (a target has a process, a process has threads, 
>> a thread has frames, a frame has variables), the public API which quickly 
>> becomes second nature because it is so regular, and then there's the 
>> testsuite specific setup and template code.  But is that that intimidating 
>> to anyone familiar with lldb?
> 
> Not intimidating, no. Cumbersome and slow, absolutely. So much so that I 
> don't see a way of adequately testing my patches this way. It would just take 
> too much time.
> 
> vedant
> 
>> packages/Python/lldbsuite/test/sample_test/TestSampleTest.py is 50 lines 
>> including comments; there's about ten lines of source related to 
>> initializing / setting up the testsuite, and then 6 lines is what's needed 
>> to run to a breakpoint, get a local variable, check the value. 
>> 
>> 
>> J
>> 
>> 
>> 
>>> 
>>> 
>>>> It's a higher up front cost but we're paid back in being able to develop 
>>>> lldb more quickly in the future, where our published API behaviors are 
>>>> being tested directly, and the things that must not be broken.
>>> 
>>> I think the right solution here is to require API tests when new 
>>> functionality is introduced. We can enforce this during code review. Making 
>>> it impossible to write tests against the driver's output doesn't seem like 
>>> the best solution. It means that far fewer tests will be written (note that 
>>> a test suite run of lldb gives less than 60% code coverage). It also means 
>>> that the driver's output isn't tested as much as it should be.
>>> 
>>> 
>>>> The lldb driver's output isn't a contract, and treating it like one makes 
>>>> the debugger harder to innovate in the future.
>>> 
>>> I appreciate your experience with this (pattern matching on driver input) 
>>> in gdb. That said, I think there are reliable/maintainable ways to do this, 
>>> and proven examples we can learn from in llvm/clang/etc.
>>> 
>>> 
>>>> It's also helpful when adding new features to ensure you've exposed the 
>>>> feature through the API sufficiently.  The first thing I thought to try 
>>>> when writing the example below was SBFrame::IsArtificial() (see 
>>>> SBFrame::IsInlined()) which doesn't exist.  If a driver / IDE is going to 
>>>> visually indicate artificial frames, they'll need that.
>>> 
>>> Sure. That's true, we do need API exposure for new features, and again we 
>>> can enforce that during 

Re: [lldb-dev] Using FileCheck in lldb inline tests

2018-08-23 Thread Zachary Turner via lldb-dev
I’m fine with it. I still would like to see inline tests ported to a custom
lit test format eventually, but this seems orthogonal to that, and it can be
done in addition to this.
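
To sketch what I mean by a custom lit test format: each inline test would
become a single self-contained file driven by RUN lines, something like the
following (purely illustrative; the %clang_host/%lldb substitutions and the
.commands file are assumptions, not an existing format):

  // RUN: %clang_host -g -O0 %s -o %t
  // RUN: %lldb -b -s %s.commands %t | FileCheck %s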
On Thu, Aug 23, 2018 at 4:25 PM Vedant Kumar  wrote:

> Pinging this because I'd like this to go forward to make testing easier.
>
> I know folks have concerns about maintaining completeness of the scripting
> APIs and about keeping the test suite debuggable. I just don't think making
> FileCheck available in inline tests is counter to those goals :).
>
> I think this boils down to having a more powerful replacement for
> `self.expect` in lldbinline tests. As we're actively discouraging use of
> pexpect during code review now, we need some replacement.
>
> vedant

Re: [lldb-dev] Using FileCheck in lldb inline tests

2018-08-23 Thread Frédéric Riss via lldb-dev
FWIW, I’m supportive of this.

I do find SB API-based tests to be powerful but extremely cumbersome to write.
If Vedant wants to write 15 different tests for the various cases he’s
covering, it’s easy to see that they would be much easier to write this way. It
is very powerful to have the test source and its inputs in the same file, and
it makes writing new kinds of tests much, much faster.

There would definitely be a cost if the output of the driver changes in
significant ways. But it feels like any change important enough to alter the
driver output in significant ways could also absorb the cost of writing a
script to update the existing tests that rely on this output.

And maybe the correct way is to expose the debugger object model in some kind 
of serialized form and match this instead of the driver output, but I’m not 
sure this is so much different. The model could evolve in a way that requires 
the tests to be updated.
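
Just to make that last idea concrete, a rough illustration (not a design; it
assumes a normal dotest-style test where `process` is already stopped at a
breakpoint):

  import json
  # Serialize a small slice of the object model for the selected frame...
  frame = process.GetSelectedThread().GetSelectedFrame()
  model = {
      "function": frame.GetFunctionName(),
      "inlined": frame.IsInlined(),
      "line": frame.GetLineEntry().GetLine(),
  }
  # ...and let the test match the serialized form instead of driver text.
  print(json.dumps(model, sort_keys=True))
  # CHECK: "function": "main"

The matching story stays the same; only the thing being matched is more
structured, and it can evolve just like the driver output can.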

Fred
 

> On Aug 23, 2018, at 4:25 PM, Vedant Kumar  wrote:
> 
> Pinging this because I'd like this to go forward to make testing easier.
> 
> I know folks have concerns about maintaining completeness of the scripting 
> APIs and about keeping the test suite debuggable. I just don't think making 
> FileCheck available in inline tests is counter to those goals :).
> 
> I think this boils down to having a more powerful replacement for 
> `self.expect` in lldbinline tests. As we're actively discouraging use of 
> pexpect during code review now, we need some replacement.
> 
> vedant