Re: [lldb-dev] [RFC]The future of pexpect

2019-02-26 Thread Pavel Labath via lldb-dev

On 25/02/2019 22:15, Davide Italiano wrote:

On Fri, Feb 22, 2019 at 6:32 AM Pavel Labath  wrote:


On 21/02/2019 19:48, Ted Woodward wrote:




-Original Message-
From: lldb-dev  On Behalf Of Pavel Labath
via lldb-dev
Sent: Thursday, February 21, 2019 8:35 AM
To: Davide Italiano 
Cc: LLDB 
Subject: [EXT] Re: [lldb-dev] [RFC]The future of pexpect

On 21/02/2019 00:03, Davide Italiano wrote:

I found out that there are tests that effectively require
interactivity. Some of the lldb-mi ones are an example.
A common use-case is sending SIGTERM in a loop to make sure
`lldb-mi` doesn't crash and handles the signal correctly.

This functionality is really hard to replicate in lit as-is.
Any ideas on how we could handle this case?


How hard is it to import a new version of pexpect which supports python3 and
stuff?

I'm not sure what the situation is on darwin, but I'd expect (:P) that most linux
systems either already have it installed, or have an easy way to do so. So we
may even be able to get away with just using the system one and skipping the
tests when it's not present.
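
For what it's worth, gating the interactive tests on a system-installed pexpect could be as simple as the sketch below; the HAVE_PEXPECT flag and the decorator name are illustrative, not existing lldb test-suite machinery:

```python
# Probe for a system pexpect once, and skip interactive tests otherwise.
# HAVE_PEXPECT and skip_unless_pexpect are invented names for illustration.
try:
    import pexpect  # noqa: F401
    HAVE_PEXPECT = True
except ImportError:
    HAVE_PEXPECT = False

def skip_unless_pexpect(test_func):
    """Decorator: turn the test into a no-op when pexpect is unavailable."""
    if HAVE_PEXPECT:
        return test_func
    def skipped(*args, **kwargs):
        print("skipping %s: pexpect not installed" % test_func.__name__)
    return skipped
```

This keeps the interactive tests runnable wherever pexpect exists, without vendoring a copy into the tree.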

BTW, for lldb-mi I would actually argue that it should *not* use pexpect :D.
Interactivity is one thing, and I'm very much in favour of keeping that ability,
but pexpect is not a prerequisite for that. For me, the main advantage of
pexpect is that it emulates a real terminal. However, lldb-mi does not need
that stuff. It doesn't have any command line editing capabilities or similar. It's
expecting to communicate with an IDE over a pipe, and that's it.

Given that, it should be fairly easy to rewrite the lldb-mi tests to work on top
of the standard python "subprocess" library. While we're doing that, we might
actually fix some of the issues that have been bugging everyone in the lldb-mi
tests. At least for me, the most annoying thing was that when lldb-mi fails to
produce the expected output, the test does not fail immediately; instead,
the implementation of self.expect("^whatever") waits until the timeout
expires, optimistically hoping that it will find some output that matches the
pattern.
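
To make that concrete, here is a minimal sketch of what a subprocess-based MI driver could look like. The MIDriver name and its methods are invented for illustration (this is not the API from the actual tests); the key point is that expect() fails as soon as a definitive non-matching result record arrives, instead of sitting out the timeout:

```python
import re
import subprocess

class MIDriver:
    def __init__(self, argv):
        # argv would normally be ["lldb-mi", "--interpreter"]; it is a
        # parameter here so the driver can be exercised against any
        # process that talks over stdin/stdout.
        self.proc = subprocess.Popen(
            argv, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
            universal_newlines=True)

    def send(self, command):
        self.proc.stdin.write(command + "\n")
        self.proc.stdin.flush()

    def expect(self, pattern):
        # Read output line by line. A line starting with "^" is an MI
        # result record ("^done", "^error", ...): a definitive answer,
        # so if it doesn't match we fail immediately.
        while True:
            line = self.proc.stdout.readline()
            if not line:
                raise EOFError("child exited without matching %r" % pattern)
            if re.match(pattern, line):
                return line
            if line.startswith("^"):
                raise AssertionError(
                    "expected %r, got %r" % (pattern, line))
```

A test base class built on this would fail fast on wrong output and only pay a timeout when lldb-mi produces nothing at all.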



Pavel, I think yours is a really nice idea.
I'm no python expert, but I found that making the conversion is
relatively simple.
I propose a proof-of-concept API and implementation here:

https://gist.github.com/dcci/94a4936a227d9c7627b91ae9575b7b68

Comments appreciated! Once we agree on how this should look, I
recommend adding a new lldbMITest base class and incrementally
moving the tests to it.
Once we're done, we can delete the old class.

Does this sound reasonable?

--
Davide



Sounds great. Let's ship it. :)

pl
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] [lldb-mi] unable to debug GTK applications

2019-02-26 Thread Eran Ifrah via lldb-dev
Hi,

I am not sure if this is the correct mailing list for this question, so
excuse me if it's not.
I am trying to debug a GTK application using lldb-mi; however, the application
terminates immediately with the error message:

@"15:47:16: Error: Unable to initialize GTK+, is DISPLAY set properly?\r\n"

Running the same using "pure" lldb works as expected.

Using lldb-6.0 on Debian stretch


Any hints?

-- 
Eran Ifrah,
Author of CodeLite, a cross platform open source C/C++ IDE:
http://www.codelite.org
CodeLite IDE Blog: http://codeliteide.blogspot.com/


Re: [lldb-dev] RFC: Moving debug info parsing out of process

2019-02-26 Thread Frédéric Riss via lldb-dev


> On Feb 25, 2019, at 10:21 AM, Zachary Turner via lldb-dev wrote:
> 
> Hi all,
> 
> We've got some internal efforts in progress, and one of those would benefit 
> from debug info parsing being out of process (independently of whether or not 
> the rest of LLDB is out of process).
> 
> There's a couple of advantages to this, which I'll enumerate here:
> - It improves one source of instability in LLDB which has been known to be 
> problematic -- specifically, that debug info can be bad and handling this can 
> often be difficult and bring down the entire debug session.  While other 
> efforts have been made to address stability by moving things out of process, 
> they have not been upstreamed, and even if they had I think we would still 
> want this anyway, for reasons that follow.
> - It becomes theoretically possible to move debug info parsing not just to 
> another process, but to another machine entirely.  In a broader sense, this 
> decouples the physical debug info location (and for that matter, 
> representation) from the debugger host.
> - It becomes testable as an independent component, because you can just send 
> requests to it and dump the results and see if they make sense.  Currently 
> there is almost zero test coverage of this aspect of LLDB apart from what you 
> can get after going through many levels of indirection via spinning up a full 
> debug session and doing things that indirectly result in symbol queries.
> The big win here, at least from my point of view, is the second one.  
> Traditional symbol servers operate by copying entire symbol files (DSYM, DWP, 
> PDB) from some machine to the debugger host.  These can be very large -- 
> we've seen 12+ GB in some cases -- which ranges from "slow bandwidth hog" to 
> "complete non-starter" depending on the debugger host and network.  In this 
> kind of scenario, one could theoretically run the debug info process on the 
> same NAS, cloud, or whatever as the symbol server.  Then, rather than copying 
> over an entire symbol file, it responds only to the query you issued -- if 
> you asked for a type, it just returns a packet describing the type you 
> requested.
> 
> The API itself would be stateless (so that you could make queries for 
> multiple targets in any order) as well as asynchronous (so that responses 
> might arrive out of order).  Blocking could be implemented in LLDB, but 
> having the server be asynchronous means multiple clients could connect to the 
> same server instance.  This raises interesting possibilities.  For example, 
> one can imagine thousands of developers connecting to an internal symbol 
> server on the network and being able to debug remote processes or core dumps 
> over slow network connections or on machines with very little storage (e.g. 
> chromebooks).
> 
> 
> On the LLDB side, all of this is hidden behind the SymbolFile interface, so 
> most of LLDB doesn't have to change at all.   While this is in development, 
> we could have SymbolFileRemote and keep the existing local codepath the 
> default, until such time that it's robust and complete enough that we can 
> switch the default.
> 
> Thoughts?

Interesting idea.

Would you build the server using the pieces we have in the current SymbolFile 
implementations? What do you mean by “switching the default”? Do you expect 
LLDB to spin up a server if there’s none configured in the environment?

Fred


Re: [lldb-dev] RFC: Moving debug info parsing out of process

2019-02-26 Thread Zachary Turner via lldb-dev
I would probably build the server by using mostly code from LLVM.  Since it
would contain all of the low level debug info parsing libraries, I would
expect that all knowledge of debug info (at least, in the form that
compilers emit it) could eventually be removed from LLDB entirely.

So, for example, all of the efforts to merge LLDB and LLVM's DWARF parsing
libraries could happen by first implementing inside of LLVM whatever
functionality is missing, and then using that from within the server.  And
yes, I would expect lldb to spin up a server, just as it does with
lldb-server today if you try to debug something.  It finds the lldb-server
binary and runs it.

When I say "switching the default", what I mean is that if someday this
hypothetical server supports everything that the current in-process parsing
codepath supports, we could just delete that entire codepath and switch
everything to the out of process server, even if that server were running
on the same physical machine as the debugger client (which would be
functionally equivalent to what we have today).
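
To make the stateless/asynchronous idea concrete, one way a client could tag requests so that responses may arrive in any order is sketched below. SymbolClient and the message fields are invented for illustration; nothing like this exists in LLDB today:

```python
import itertools
import json

class SymbolClient:
    def __init__(self, transport):
        self.transport = transport   # anything with a send(bytes) method
        self.pending = {}            # request id -> completion callback
        self.ids = itertools.count(1)

    def query(self, kind, name, on_reply):
        # Each request is self-contained (stateless) and carries an id,
        # so responses can be paired with requests out of order.
        rid = next(self.ids)
        self.pending[rid] = on_reply
        self.transport.send(json.dumps(
            {"id": rid, "kind": kind, "name": name}).encode())

    def handle_response(self, raw):
        msg = json.loads(raw)
        # The id tells us which query this answers; whether to block
        # waiting for it is purely a client-side decision.
        self.pending.pop(msg["id"])(msg)
```

Because no per-client state lives on the server, many debuggers could share one server instance this way.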



Re: [lldb-dev] RFC: Moving debug info parsing out of process

2019-02-26 Thread Frédéric Riss via lldb-dev


> On Feb 26, 2019, at 4:03 PM, Zachary Turner  wrote:
> 
> I would probably build the server by using mostly code from LLVM.  Since it 
> would contain all of the low level debug info parsing libraries, i would 
> expect that all knowledge of debug info (at least, in the form that compilers 
> emit it in) could eventually be removed from LLDB entirely.

That’s quite an ambitious goal.

I haven’t looked at the SymbolFile API. What do you expect the exchange 
currency between the server and LLDB to be? Serialized compiler ASTs? If that’s 
the case, it seems like you need a strong rev-lock between the server and the 
client, which in turn adds quite some complexity to the rollout of new versions 
of the debugger.

> So, for example, all of the efforts to merge LLDB and LLVM's DWARF parsing 
> libraries could happen by first implementing inside of LLVM whatever 
> functionality is missing, and then using that from within the server.  And 
> yes, I would expect lldb to spin up a server, just as it does with 
> lldb-server today if you try to debug something.  It finds the lldb-server 
> binary and runs it.
> 
> When I say "switching the default", what I mean is that if someday this 
> hypothetical server supports everything that the current in-process parsing 
> codepath supports, we could just delete that entire codepath and switch 
> everything to the out of process server, even if that server were running on 
> the same physical machine as the debugger client (which would be functionally 
> equivalent to what we have today).

(I obviously knew what you meant by "switching the default”, I was trying to 
ask about how… to which the answer is by spinning up a local server)

Do you envision LLDB being able to talk to more than one server at the same 
time? It seems like this could be useful to debug a local build while still 
having access to debug symbols for your dependencies that have their symbols in 
a central repository.

Fred



Re: [lldb-dev] RFC: Moving debug info parsing out of process

2019-02-26 Thread Zachary Turner via lldb-dev
On Tue, Feb 26, 2019 at 4:49 PM Frédéric Riss  wrote:

>
> On Feb 26, 2019, at 4:03 PM, Zachary Turner  wrote:
>
> I would probably build the server by using mostly code from LLVM.  Since
> it would contain all of the low level debug info parsing libraries, i would
> expect that all knowledge of debug info (at least, in the form that
> compilers emit it in) could eventually be removed from LLDB entirely.
>
>
> That’s quite an ambitious goal.
>
> I haven’t looked at the SymbolFile API, what do you expect the exchange
> currency between the server and LLDB to be? Serialized compiler ASTs? If
> that’s the case, it seems like you need a strong rev-lock between the
> server and the client. Which in turn add quite some complexity to the
> rollout of new versions of the debugger.
>
Definitely not serialized ASTs, because you could be debugging some
language other than C++.  Probably something more like JSON, where you
parse the debug info and send back some JSON representation of the type /
function / variable the user requested, which can almost be a direct
mapping to LLDB's internal symbol hierarchy (e.g. the Function, Type, etc.
classes).  You'd still need to build the AST on the client.
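
As an illustration of that shape, a reply for a hypothetical `struct Point { int x; int y; }` query might look like the payload below. The field names are invented, not a proposed schema; the point is that it mirrors LLDB's Type/Function hierarchy rather than any compiler's serialized AST:

```python
import json

# Invented schema: a language-neutral description of one type.
reply = {
    "kind": "type",
    "name": "Point",
    "byte_size": 8,
    "members": [
        {"name": "x", "type": "int", "bit_offset": 0},
        {"name": "y", "type": "int", "bit_offset": 32},
    ],
}
wire = json.dumps(reply)
# The client decodes this neutral form and rebuilds its language-specific
# AST (e.g. a clang RecordDecl for C++) locally.
decoded = json.loads(wire)
```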


>
> So, for example, all of the efforts to merge LLDB and LLVM's DWARF parsing
> libraries could happen by first implementing inside of LLVM whatever
> functionality is missing, and then using that from within the server.  And
> yes, I would expect lldb to spin up a server, just as it does with
> lldb-server today if you try to debug something.  It finds the lldb-server
> binary and runs it.
>
> When I say "switching the default", what I mean is that if someday this
> hypothetical server supports everything that the current in-process parsing
> codepath supports, we could just delete that entire codepath and switch
> everything to the out of process server, even if that server were running
> on the same physical machine as the debugger client (which would be
> functionally equivalent to what we have today).
>
>
> (I obviously knew what you meant by "switching the default”, I was trying
> to ask about how… to which the answer is by spinning up a local server)
>
> Do you envision LLDB being able to talk to more than one server at the
> same time? It seems like this could be useful to debug a local build while
> still having access to debug symbols for your dependencies that have their
> symbols in a central repository.
>

I hadn't really thought of this, but it certainly seems possible.  Since
the API is stateless, it could send requests to any server it wanted, with
some mechanism of selecting between them.
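
Since each query is self-contained, the selection mechanism could be as simple as ordered fallback. A sketch with invented names:

```python
def lookup(servers, symbol):
    """Ask each configured server in priority order; first answer wins.

    A "server" here is anything with a dict-like .get(); in reality it
    would be a connection to a symbol-server process.
    """
    for server in servers:
        result = server.get(symbol)
        if result is not None:
            return result
    return None

# e.g. prefer symbols from the local build, fall back to the central repo:
local_build = {"my_app_main": "local debug info"}
central_repo = {"libdep_init": "central symbol repository"}
answer = lookup([local_build, central_repo], "libdep_init")
```

Statelessness is what makes this cheap: no session has to be torn down or migrated when a query goes to a different server.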



Re: [lldb-dev] RFC: Moving debug info parsing out of process

2019-02-26 Thread Frédéric Riss via lldb-dev


> On Feb 26, 2019, at 4:52 PM, Zachary Turner  wrote:
> 
> 
> 
> On Tue, Feb 26, 2019 at 4:49 PM Frédéric Riss wrote:
> 
>> On Feb 26, 2019, at 4:03 PM, Zachary Turner wrote:
>> 
>> I would probably build the server by using mostly code from LLVM.  Since it 
>> would contain all of the low level debug info parsing libraries, i would 
>> expect that all knowledge of debug info (at least, in the form that 
>> compilers emit it in) could eventually be removed from LLDB entirely.
> 
> That’s quite an ambitious goal.
> 
> I haven’t looked at the SymbolFile API, what do you expect the exchange 
> currency between the server and LLDB to be? Serialized compiler ASTs? If 
> that’s the case, it seems like you need a strong rev-lock between the server 
> and the client. Which in turn add quite some complexity to the rollout of new 
> versions of the debugger.
> Definitely not serialized ASTs, because you could be debugging some language 
> other than C++.  Probably something more like JSON, where you parse the debug 
> info and send back some JSON representation of the type / function / variable 
> the user requested, which can almost be a direct mapping to LLDB's internal 
> symbol hierarchy (e.g. the Function, Type, etc classes).  You'd still need to 
> build the AST on the client

This seems fairly easy for Function or symbols in general, as it’s easy to 
abstract their few properties, but as soon as you get to the type system, I get 
worried.

Your representation needs to have the full expressivity of the underlying debug 
info format. Inventing something new in that space seems really expensive. For 
example, every piece of information we add to the debug info in the compiler 
would need to be handled in multiple places:
 - the server code
 - the client code that talks to the server
 - the current “local" code (for a pretty long while)
Not ideal. I wish there were a way to factor out at least the last two.

But maybe I’m misunderstanding exactly what you’d put in your JSON. If it’s 
very close to the debug format (basically a JSON representation of the DWARF or 
the PDB), then it becomes more tractable as the client code can be the same as 
the current local one with some refactoring.
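
For that "close to the debug format" option, the payload could be little more than a JSON rendering of DWARF DIEs. The tag and attribute names below are real DWARF; the overall JSON schema is invented for illustration:

```python
# A DWARF-shaped encoding of "struct Point { int x; int y; }": the
# client consumes the same tags and attributes it already parses
# locally today, so new compiler-emitted attributes flow through the
# server untouched.
die = {
    "tag": "DW_TAG_structure_type",
    "attributes": {"DW_AT_name": "Point", "DW_AT_byte_size": 8},
    "children": [
        {"tag": "DW_TAG_member",
         "attributes": {"DW_AT_name": "x",
                        "DW_AT_data_member_location": 0}},
        {"tag": "DW_TAG_member",
         "attributes": {"DW_AT_name": "y",
                        "DW_AT_data_member_location": 4}},
    ],
}
```

Under this scheme the server stays a dumb transport for debug info, and only the existing client-side DWARF/PDB consumers need to learn to read it from JSON instead of from an mmap'd file.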

Fred



Re: [lldb-dev] RFC: Moving debug info parsing out of process

2019-02-26 Thread via lldb-dev
When I see this "parsing DWARF and turning it into something else", it is very 
reminiscent of what clayborg is trying to do with GSYM.  You're both talking 
about leveraging LLVM's parser, which is great, but I have to wonder if there 
isn't more commonality being left on the table.  Just throwing that thought out 
there; I don't have anything specific to suggest.
--paulr
