Re: [lldb-dev] I had to back out r313904, it causes lldb to assert on startup if you have a .lldbinit file

2017-09-28 Thread Leonard Mosescu via lldb-dev
Thank you Jim! I'm at cppcon and I won't be able to work on it until
Monday, but I can help with a code review if you're planning to take a stab
at it.

I was hoping we could avoid dealing with reentrancy, but I was wrong. For
handling reentrancy I was briefly considering either maintaining a
full-blown command stack or perhaps just a nesting counter.

Also, with reentrancy, I think that interruption should affect the
"outermost" command scope rather than just interrupting the current command.
What do you think?
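The nesting-counter idea could be as simple as an RAII guard. A rough sketch of the shape (purely illustrative; none of these names come from LLDB's actual CommandInterpreter): the depth counter tracks re-entry, and the interrupt flag is only cleared when the outermost scope unwinds, so an interruption cancels the whole outermost command rather than just the innermost one.

```cpp
#include <cassert>

// Hypothetical interpreter state: a nesting depth plus an interrupt flag that
// every nesting level honors but only the outermost scope clears.
struct InterpreterState {
  int depth = 0;
  bool interrupt_requested = false;
};

// RAII guard: entering a (possibly re-entrant) command bumps the depth,
// leaving decrements it; the interrupt flag is reset only when the depth
// returns to zero, i.e. when the outermost command scope exits.
class CommandScope {
public:
  explicit CommandScope(InterpreterState &state) : m_state(state) {
    ++m_state.depth;
  }
  ~CommandScope() {
    --m_state.depth;
    if (m_state.depth == 0)
      m_state.interrupt_requested = false;
  }

private:
  InterpreterState &m_state;
};
```

With this shape, a nested "command source" creates an inner CommandScope, and an interrupt raised while it runs stays pending until the outermost scope exits.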




On Wed, Sep 27, 2017 at 6:46 PM, Jim Ingham via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> This actually asserts on any use of "command source", which is the one
> command that re-enters the command interpreter.  It should be as simple as
> getting command source to reset the state flag before it goes to do the
> sourcing.  I'll check that out tomorrow if nobody gets to it first.
>
> command source is one of a set of early commands that we got into lldb
> before we had hired the person who wrote the testsuite way way back in the
> day, and though we went and backfilled the tests at that point, apparently
> we missed command source.  So we'll also have to add a test for that.
>
> I also filed:
>
> https://bugs.llvm.org/show_bug.cgi?id=34758
>
> to cover the issue.
>
> Jim
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Unifying ctor+Clear() inits into in-class initializers?

2017-11-20 Thread Leonard Mosescu via lldb-dev
>
> void Clear() {
>   this->~ClassName();
>   new (this) ClassName();
> }


My 2c: this is clever, but not without downsides:
1. It may do more than intended (it will destroy all members / bases)
2. It forces construction and 'reset' to be exactly the same, which is not
always desirable
3. Most importantly, if you really want a freshly initialized object, just
do that (create a new object).

I like in-class initializers, but for clear/reset operations I prefer a
standalone operation. And as Pavel suggested, calling 'clear' from the
constructor is a good way to factor out commonality.
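As a toy illustration of the alternatives (the class and member names below are made up, not from the patch under review): in-class initializers keep the defaults in one place, and a standalone Clear() can reuse them by assigning from a default-constructed temporary, avoiding the destructor/placement-new trick entirely.

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Illustrative only. Defaults live in one place (the in-class initializers),
// and Clear() reuses them via plain copy assignment from a fresh temporary.
// Unlike the destructor + placement-new trick, this never leaves the object
// in a destroyed state, and it doesn't re-run base/member destruction.
class MachVMRegionLike {
public:
  void Clear() { *this = MachVMRegionLike(); }

  void Set(uint64_t addr, std::string name) {
    m_addr = addr;
    m_name = std::move(name);
  }
  uint64_t addr() const { return m_addr; }
  const std::string &name() const { return m_name; }

private:
  uint64_t m_addr = 0;           // in-class initializer: no ctor/Clear() duplication
  std::string m_name = "unknown";
};
```

The constructor here needs no body at all; Pavel's "call Clear() from the constructor" variant is equivalent when the defaults cannot be expressed as in-class initializers.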



On Sun, Nov 19, 2017 at 6:58 AM, Jan Kratochvil via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hi,
>
> https://reviews.llvm.org/D40212
>
> At least DWARFCompileUnit, and for example MachVMRegion as well, duplicate
> initialization of fields in both their constructor and Clear().  Moreover,
> the initialization is in a different place than the declaration of the
> member variable.
>
> Is it OK to just use in-class member variable initializers and:
> void Clear() {
>   this->~ClassName();
>   new (this) ClassName();
> }
> ?
>
> Pavel Labath otherwise suggests just calling Clear() from the constructor.
> Still, I find the code could be more readable with in-class member
> initializers, especially during further refactorings and extensions.
>
>
> Thanks,
> Jan Kratochvil


[lldb-dev] Object identities in the LLDB's C++ API

2017-12-13 Thread Leonard Mosescu via lldb-dev
LLDB's C++ API deals with SBxxx objects, most of which are PIMPL-style
wrappers around an opaque pointer to the internal implementation. These
SBxxx objects act as handles and are passed/returned by value, which is
generally convenient, except for the situations where one would need to
keep track of object identities, ex. using them as keys in associative
containers.

As far as I can tell, there's a bit of inconsistency in the current state:

1. Some types, ex. SBThread, SBTarget, SBSection, ... offer ==, != that map
directly to the corresponding operator on the opaque pointer (good!), but:
... there are no ordering operators, nor obvious ways to hash the objects
2. SBModule offers the ==, != operators, but:
... the implementations for == and != are not exactly forwarded to the
corresponding operator on the opaque pointer (1)
3. Things like SBFrame offer IsEqual() in addition to ==, !=, creating a
bit of confusion
4. Other types (ex. SBProcess, SBSymbol, SBBlock) don't offer any kind of
comparison operations.

IMO it would be nice to have a consistent "handle type" semantics regarding
identity, ordering and hashing. I can see the following options:

1. Expose the opaque ptr as an opaque handle()
 - this is an easy, quick and convenient solution for many SBxxx types
but it may not work for all
2. Design and implement a consistent, first class identity/ordering/hashing
for all the SBxxx types
 - perhaps the most elegant and flexible approach, but also the most
work
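Option 1 can be modeled with a toy PIMPL wrapper (illustrative only; LLDB's SB classes do not currently expose such an accessor): two SB-style values wrapping the same implementation yield the same handle, which is all an associative container needs as a key.

```cpp
#include <map>
#include <memory>
#include <string>

// Toy stand-in for an SBxxx value type: copied by value, identity lives in
// the shared opaque pointer. handle() is the hypothetical accessor from
// option 1 above.
class SBThing {
public:
  explicit SBThing(std::shared_ptr<int> impl) : m_opaque_sp(std::move(impl)) {}

  // Identity of the underlying object, usable as a map key.
  const void *handle() const { return m_opaque_sp.get(); }

private:
  std::shared_ptr<int> m_opaque_sp;
};
```

Client code could then key a std::map<const void *, MyData> on handle() without the SB type itself needing ordering or hashing operators.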

Any thoughts on this? Did I miss anything fundamental here?

Thanks,
Lemo.

(1) example of operator== from SBModule:

bool SBModule::operator==(const SBModule &rhs) const {
  if (m_opaque_sp)
    return m_opaque_sp.get() == rhs.m_opaque_sp.get();
  return false;
}


Re: [lldb-dev] Object identities in the LLDB's C++ API

2017-12-13 Thread Leonard Mosescu via lldb-dev
Thanks Greg,

1. Expose the opaque ptr as an opaque handle()
 - this is an easy, quick and convenient solution for many SBxxx types
but it may not work for all

That would be nice, but that won't always work with how LLDB is currently
> coded for SBFrame and possibly SBThread. These objects will be problems as
> they can come and go, and the underlying object isn't always the same even
> though they lock onto the same logical object. SBThread and SBFrame have
> "lldb::ExecutionContextRefSP m_opaque_sp" members. The execution context
> reference is a class that contains weak pointers to the
> lldb_private::Thread and lldb_private::StackFrame objects, but it also
> contains the thread ID and frame ID so it can reconstitute the
> lldb_private::Thread and lldb_private::StackFrame values even if the weak
> pointer isn't valid. So the opaque handle will work for many objects but
> not all.


Indeed. One relatively small but interesting benefit of the opaque handle
type is that it opens the possibility of generic "handle maps" (I'll
elaborate below).

2. Design and implement a consistent, first class identity/ordering/hashing
for all the SBxxx types
 - perhaps the most elegant and flexible approach, but also the most
work

I would be fine with adding new members to classes we know we want to hash
> and order, like by adding:
> uint32_t SB*::GetHash();
> bool SB*::operator==(const SB*& ohs);
> bool SB*::operator<(const SB*& ohs);
> Would those be enough?


I think so. If we use the standard containers as a reference, technically we
only need operator< to satisfy the Compare
<http://en.cppreference.com/w/cpp/concept/Compare> concept (also, a small
nit: size_t would be a better type for the hash value). Both the
hashing and the compare can be implemented as non-member functions (or even
by specializing std::hash, std::less for SBxxx types). A few minor concerns:

a. if we keep things like SBModule::operator==() unchanged, it's not going
to be the same as equiv(a, b) for the case where a and b have null
opaque pointers (not sure if this breaks anything, but I wouldn't want to
be the first to debug a case where this matters)
b. defining just the minimum set of operations may be technically enough
but it may look a bit weird to have a type define < but none of the other
relational operators.
c. if some of the hash/compare implementations end up going through multiple
layers (the execution context with thread/frame IDs example) the
performance characteristics can be unpredictable, right?
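Concern (a) is easy to reproduce with a toy wrapper using the SBModule-style operator== quoted in (1): an object with a null opaque pointer does not even compare equal to itself, which breaks the equivalence relation associative containers expect.

```cpp
#include <memory>

// Toy reproduction of the SBModule::operator== quirk: when m_opaque_sp is
// null, the comparison returns false even against another null wrapper.
struct Wrapper {
  std::shared_ptr<int> m_opaque_sp;

  bool operator==(const Wrapper &rhs) const {
    if (m_opaque_sp)
      return m_opaque_sp.get() == rhs.m_opaque_sp.get();
    return false;  // two null handles compare unequal -- not even reflexive
  }
};
```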


For context, the use case that brought this to my attention is managing a
set of data structures that contain custom data associated with modules,
frames, etc. It's easy to create, let's say, a MyModule from an SBModule, but
if later on I get the module for a particular frame, SBFrame::GetModule()
will return an SBModule, which I would like to map to the corresponding
MyModule instance. Logically this would require an SBModule -> MyModule map.
The standard associative containers (map or unordered_map) would make this
trivial if SBxxx types satisfied the key requirements.

Another option for maintaining such a mapping, suggested by Mark Mentovai,
is to provision a "user data" tag associated with every SBxxx
object (this tag can simply be a void*, maybe wrapped with type-safe
accessors). This would be extremely convenient for the API users (since
they don't have to worry about maintaining any maps themselves) but
implementing it would hit the same complications around the synthesized
instances (like SBFrame) and it may carry a small price - one pointer per
SBxxx instance even if this facility is not used. I personally like this
approach and in this particular case it has the additional benefit of being
additive (we can graft it on with minimal risk of breaking existing stuff),
although it still seems nice to have consistent identity semantics for the
SBxxx types.
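The user-data tag idea can be sketched like this (purely hypothetical API; nothing like SetUserData exists on the SB classes today). The void* lives on the shared implementation object, so every SB value wrapping the same underlying object sees the same tag, and the cost is one pointer per instance whether or not the facility is used.

```cpp
#include <memory>

// Hypothetical implementation object carrying the user-data slot.
struct Impl {
  void *user_data = nullptr;
};

// Toy SB-style wrapper with the proposed tag accessors; the tag is shared by
// all by-value copies that wrap the same Impl.
class SBTagged {
public:
  explicit SBTagged(std::shared_ptr<Impl> impl) : m_opaque_sp(std::move(impl)) {}
  void SetUserData(void *data) { m_opaque_sp->user_data = data; }
  void *GetUserData() const { return m_opaque_sp->user_data; }

private:
  std::shared_ptr<Impl> m_opaque_sp;
};
```

A client can hang its MyModule directly off the SB object and get it back from any other copy, with no side map to maintain.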

On Wed, Dec 13, 2017 at 12:40 PM, Greg Clayton  wrote:

>
> On Dec 13, 2017, at 11:44 AM, Leonard Mosescu via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
> LLDB's C++ API deals with SBxxx objects, most of which are PIMPL-style
> wrappers around an opaque pointer to the internal implementation. These
> SBxxx objects act as handles and are passed/returned by value, which is
> generally convenient, except for the situations where one would need to
> keep track of object identities, ex. using them as keys in associative
> containers.
>
> As far as I can tell, there's a bit of inconsistency in the current state:
>
> 1. Some types, ex. SBThread, SBTarget, SBSection, ... offer ==, != that
> map directly to the corresponding operator on the opaque pointer (good!),
> but:
> .. there are no ordering operators, nor obvious ways to hash the
> object

[lldb-dev] postmortem debugging (core/minidump) & modules

2018-04-10 Thread Leonard Mosescu via lldb-dev
I'm looking at how the LLDB minidump reader creates the list of modules:

void ProcessMinidump::ReadModuleList() {

  ...

  const auto file_spec = FileSpec(name.getValue(), true);
  ModuleSpec module_spec = file_spec;
  Status error;
  lldb::ModuleSP module_sp = GetTarget().GetSharedModule(module_spec, &error);
  if (!module_sp || error.Fail()) {
    continue;
  }

  ...

}


LLDB currently will insist on finding a local image for the module, which
is usually not the case for postmortem debugging on machines different from
the one where the minidump was created.

I don't see an obvious way to model modules which have no local image
(which is still different from the remote scenario where there is a remote
module image). Am I missing anything?
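One direction (this is what Greg's reply in this thread suggests) is to key module lookup on the UUID rather than on a local path alone. A toy sketch of UUID-based matching in a module cache, with illustrative types that are not LLDB's:

```cpp
#include <string>
#include <vector>

// Illustrative matching rule: a candidate matches a spec if the path matches
// and, when the spec carries a UUID, the UUID matches too. With the UUID set,
// a same-named local file from the wrong build is rejected instead of being
// silently loaded.
struct ToyModuleSpec {
  std::string path;
  std::string uuid;  // empty = unknown
};

struct ToyModule {
  std::string path;
  std::string uuid;
};

const ToyModule *FindSharedModule(const std::vector<ToyModule> &cache,
                                  const ToyModuleSpec &spec) {
  for (const auto &m : cache) {
    if (m.path != spec.path)
      continue;
    if (!spec.uuid.empty() && m.uuid != spec.uuid)
      continue;  // right name, wrong build -- keep looking
    return &m;
  }
  return nullptr;
}
```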

Thanks!
Lemo.


Re: [lldb-dev] postmortem debugging (core/minidump) & modules

2018-04-10 Thread Leonard Mosescu via lldb-dev
Thanks Greg! It makes sense and looking at the code it's already
implemented along those lines: Target::GetSharedModule() defaults to
Platform::GetSharedModule() if the initial attempt to get the module fails.

The part I'd like to understand is whether there's a precedent for modules
which don't have any accessible file image (local or remote). Is everything
expected to work if we create placeholder Module & ModuleSpecs?
(It seems that the current implementation assumes that we have a file
somewhere. Ex. even creating a Module from a ModuleSpec will still try to
map the source ModuleSpec to some files.)

At Apple, we call "dsymForUUID " which, if global defaults were set
> to point at Apple's build servers, would go out and download the correct
> file for us and store it locally in a cache for future use.


Just curious, what happens if the download fails? Is the corresponding
module skipped? (Is this strictly about the dSYMs, or does the same
mechanism work for the Mach-O binaries?)

That way if you create a target that is a minidump on a different system
> (macOS, linux, etc), the platform would be remote-windows.
>

Not sure if I understand this one; core & minidumps are currently not using
any of the remote debugging machinery, right? Are you suggesting
changing that?

On Tue, Apr 10, 2018 at 11:56 AM, Greg Clayton  wrote:

>
>
> On Apr 10, 2018, at 11:32 AM, Leonard Mosescu via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
> I'm looking at how the LLDB minidump reader creates the list of modules:
>
> void ProcessMinidump::ReadModuleList() {
>
> ...
>
> const auto file_spec = FileSpec(name.getValue(), true);
> ModuleSpec module_spec = file_spec;
> Status error;
> lldb::ModuleSP module_sp = GetTarget().GetSharedModule(module_spec, &
> error);
> if (!module_sp || error.Fail()) {
> continue;
> }
>
> ...
>
> }
>
>
> LLDB currently will insist on finding a local image for the module, which
> is usually not the case for postmortem debugging on machines different from
> the one where the minidump was created.
>
> I don't see an obvious way to model modules which have no local image
> (which is still different than the remote scenario where there is a remote
> module image), am I missing anything?
>
>
> The lldb_private::Platform is responsible for digging up any binaries for
> a given target, so this code should be grabbing the platform from the
> target and using that to get the shared module. That way if you create a
> target that is a minidump on a different system (macOS, linux, etc), the
> platform would be remote-windows.
>
> That being said, "ModuleSpec" should be filled in with more than just the
> path. It should specify the UUID info from the mini dump that specifies
> exactly which version of the file you want. That way if a file at that path
> exists on the current system, it won't be returned unless it matches. I
> assume the mini dump has each module's UUID information? If so, set it. If
> not, the file format assumes you will be dumping and debugging on the same
> machine, and it should be updated to include this information. The platform
> code can then use this UUID info to possibly go and fetch the right version
> from a UUID database, or however the platform wants to provide access to
> certain binaries. At Apple, we call "dsymForUUID " which, if global
> defaults were set to point at Apple's build servers, would go out and
> download the correct file for us and store it locally in a cache for future
> use.
>
> So filling in the UUID in the ModuleSpec and modifying
> Platform::GetSharedModule() for your platform to do the right thing is the
> correct way to go. ProcessMinidump should switch over to using
> Platform::GetSharedModule() instead of the target one, or use it after the
> target one if the target returns an invalid module.
>
> Let me know if you have any more questions,
>
> Greg
>
>
> Thanks!
> Lemo.
>
>
>


Re: [lldb-dev] postmortem debugging (core/minidump) & modules

2018-04-10 Thread Leonard Mosescu via lldb-dev
>
>
> No. Each binary knows how to tell LLDB what target triple it is. PECOFF
> files will always map to the host windows platform or remote-windows when
> not on a Windows host computer. If you say "file a.out" and give it a
> PECOFF file, just do "target list" and see the platform was selected for
> you. Since the Minidump is specific to Windows, it should select the right
> platform for you. If it doesn't we will need to fix that.
>

Thanks for the clarification. A small side note: yes, the minidump format
originates on Windows, but Breakpad/Crashpad use it across all supported
platforms (including Linux and macOS).


> Does the mini dump format have the UUID or some sort of checksum of the
> file in it?
>

Yes, the minidump has both the checksum for modules and UUID for the debug
information.

On Tue, Apr 10, 2018 at 3:12 PM, Greg Clayton  wrote:

>
>
> On Apr 10, 2018, at 2:30 PM, Leonard Mosescu  wrote:
>
> Thanks Greg! It makes sense and looking at the code it's already
> implemented along those lines: Target::GetSharedModule() defaults to
> Platform::GetSharedModule() if the initial attempt to get the module fails.
>
> The part I'd like to understand is whether there's a precedent for modules
> which don't have any accessible file image (local or remote). Is everything
> expected to work if we create placeholder Module & ModuleSpecs?
>
>
> No, it is up to the platform to be able to track down files that don't
> exist locally. Most platforms do nothing and will return an empty module
> shared pointer. We need a file to use or we will just not have any info.
>
> (it seems that the current implementation assumes that we have a file
> somewhere. Ex. even creating a Module from a ModuleSpec will still try to
> map the source ModuleSpec to some files).
>
>
> Yes. Right now with only the path, we will load the file if it exists on
> disk since no UUID was specified in the ModuleSpec which is really bad and
> can lead to incorrect info being displayed.
>
>
> At Apple, we call "dsymForUUID " which, if global defaults were set
>> to point at Apple's build servers, would go out and download the correct
>> file for us and store it locally in a cache for future use.
>
>
> Just curious, what happens if the download fails? Is the corresponding
> module skipped? (is this strictly about the dSYMs or the same mechanism
> works for the Mach-O binaries?)
>
>
> It will block until the module is downloaded, and it can and often does
> fail and returns an error that can be displayed. When we need to download
> large debug info files, it creates delays with no user interaction and
> often leads to people wondering what is going on. Not optimal, but it does
> work if you wait for it.
>
>
> That way if you create a target that is a minidump on a different system
>> (macOS, linux, etc), the platform would be remote-windows.
>>
>
> Not sure if I understand this one; core & minidumps are currently not
> using any of the remote debugging machinery, right? Are you suggesting
> changing that?
>
>
> No. Each binary knows how to tell LLDB what target triple it is. PECOFF
> files will always map to the host windows platform or remote-windows when
> not on a Windows host computer. If you say "file a.out" and give it a
> PECOFF file, just do "target list" and see the platform was selected for
> you. Since the Minidump is specific to Windows, it should select the right
> platform for you. If it doesn't we will need to fix that.
>
> Does the mini dump format have the UUID or some sort of checksum of the
> file in it?
>
>
> On Tue, Apr 10, 2018 at 11:56 AM, Greg Clayton  wrote:
>
>>
>>
>> On Apr 10, 2018, at 11:32 AM, Leonard Mosescu via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>> I'm looking at how the LLDB minidump reader creates the list of modules:
>>
>> void ProcessMinidump::ReadModuleList() {
>>
>> ...
>>
>> const auto file_spec = FileSpec(name.getValue(), true);
>> ModuleSpec module_spec = file_spec;
>> Status error;
>> lldb::ModuleSP module_sp = GetTarget().GetSharedModule(module_spec, &
>> error);
>> if (!module_sp || error.Fail()) {
>> continue;
>> }
>>
>> ...
>>
>> }
>>
>>
>> LLDB currently will insist on finding a local image for the module, which
>> is usually not the case for postmortem debugging on machines different from
>> the one where the minidump was created.
>>
>> I don't see an obvious way to model modules which have no local image
>> (which is still different

Re: [lldb-dev] postmortem debugging (core/minidump) & modules

2018-04-10 Thread Leonard Mosescu via lldb-dev
>
> Ahh. So then hopefully it extracts the triple from the mini dump file and
> sets it correctly, which gets us the right platform set?
>

Yes, this part seems to be working fine.

Just to make sure I understand:  we need a file with the debug info (e.g.,
> a PDB), but we shouldn't need the actual executable/shared library/DLL file
> except on platforms where the debug info is embedded in the binary.  Right?
>

As far as I can tell, LLDB today does need the binary image
(executable/shared library).

The minidump has a list of modules and memory ranges, but it's a bit tricky
to map that to LLDB modules and sections: I did a quick experiment with
placeholder modules, but the easy patch doesn't seem to get us very far
(ex. without an image file to parse we don't get the sections).
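A toy sketch of the placeholder-module experiment described above: with no image file to parse, the only section information available is the module's address range from the minidump module list, so a placeholder can synthesize a single "section" covering that range. The types are illustrative, not LLDB's Module/Section classes; this is enough for address-to-module resolution, though not for symbolication.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Minidump-style module record: base address and size, but no local file.
struct PlaceholderModule {
  std::string name;
  uint64_t base = 0;
  uint64_t size = 0;

  // The one synthetic "section" is the module's address range itself.
  bool ContainsAddress(uint64_t addr) const {
    return addr >= base && addr - base < size;
  }
};

// Resolve an address against the placeholder module list.
const PlaceholderModule *
ResolveAddress(const std::vector<PlaceholderModule> &modules, uint64_t addr) {
  for (const auto &m : modules)
    if (m.ContainsAddress(addr))
      return &m;
  return nullptr;
}
```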

On Tue, Apr 10, 2018 at 3:31 PM, Adrian McCarthy 
wrote:

>
>
> On Tue, Apr 10, 2018 at 3:12 PM, Greg Clayton via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>>
>>
>> On Apr 10, 2018, at 2:30 PM, Leonard Mosescu  wrote:
>>
>> Thanks Greg! It makes sense and looking at the code it's already
>> implemented along those lines: Target::GetSharedModule() defaults to
>> Platform::GetSharedModule() if the initial attempt to get the module fails.
>>
>> The part I'd like to understand is if there's a precedence for modules
>> which don't have any accessible file image (local or remote). Is everything
>> expected to work if we create placeholder Module & ModuleSpecs?
>>
>>
>> No, it is up to the platform to be able to track down files that don't
>> exist locally. Most platforms do nothing and will return an empty module
>> shared pointer. We need a file to use or we will just not have any info.
>>
>
> Just to make sure I understand:  we need a file with the debug info (e.g.,
> a PDB), but we shouldn't need the actual executable/shared library/DLL file
> except on platforms where the debug info is embedded in the binary.  Right?
>
>
>>
>> (it seems that the current implementation assumes that we have a file
>> somewhere. Ex. even creating a Module from a ModuleSpec will still try to
>> map the source ModuleSpec to some files).
>>
>>
>> Yes. Right now with only the path, we will load the file if it exists on
>> disk since no UUID was specified in the ModuleSpec which is really bad and
>> can lead to incorrect info being displayed.
>>
>>
>> At Apple, we call "dsymForUUID " which, if global defaults were set
>>> to point at Apple's build servers, would go out and download the correct
>>> file for us and store it locally in a cache for future use.
>>
>>
>> Just curious, what happens if the download fails? Is the corresponding
>> module skipped? (is this strictly about the dSYMs or the same mechanism
>> works for the Mach-O binaries?)
>>
>>
>> It will block until the module is downloaded, and it can and often does
>> fail and returns an error that can be displayed. When we need to download
>> large debug info files, it creates delays with no user interaction and
>> often leads to people wondering what is going on. Not optimal, but it does
>> work if you wait for it.
>>
>>
>> That way if you create a target that is a minidump on a different system
>>> (macOS, linux, etc), the platform would be remote-windows.
>>>
>>
>> Not sure if I understand this one, core & minidumps are currently not
>> using any of the the remote debugging machinery, right? Are you suggesting
>> changing that?
>>
>>
>> No. Each binary knows how to tell LLDB what target triple it is. PECOFF
>> files will always map to the host windows platform or remote-windows when
>> not on a Windows host computer. If you say "file a.out" and give it a
>> PECOFF file, just do "target list" and see the platform was selected for
>> you. Since the Minidump is specific to Windows, it should select the right
>> platform for you. If it doesn't we will need to fix that.
>>
>> Does the mini dump format have the UUID or some sort of checksum of the
>> file in it?
>>
>>
>> On Tue, Apr 10, 2018 at 11:56 AM, Greg Clayton 
>> wrote:
>>
>>>
>>>
>>> On Apr 10, 2018, at 11:32 AM, Leonard Mosescu via lldb-dev <
>>> lldb-dev@lists.llvm.org> wrote:
>>>
>>> I'm looking at how the LLDB minidump reader creates the list of modules:
>>>
>>> void ProcessMinidump::ReadModuleList() {
>>>
>>> ...
>>>
>>> const auto file_spec = FileS

Re: [lldb-dev] Proposal: Using LLD in tests

2018-04-19 Thread Leonard Mosescu via lldb-dev
>
>  the PDB tests under lit/SymbolFile/PDB need a linker to produce the program
> database


With this proposal, would we preserve any coverage for MSVC-produced debug
information?

On Thu, Apr 19, 2018 at 9:47 AM, Greg Clayton via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

>
>
> > On Apr 19, 2018, at 9:39 AM, Pavel Labath  wrote:
> >
> > Yes, I considered the lld+Mach-O situation. I understand that it does not
> > work very well, but I don't know what exactly that means.
> >
> > However, I am not sure that we even need a linker for Mach-O. As I
> > understand it, in a darwin world, the linker does not even touch the debug
> > info, and lldb is already capable of reading the debug info from .o files.
> > So at least for the non-dsym case it seems to me we should be good. It may
> > be possible we need a linker for the dSYM (i don't know exactly how that
> > works), in which case we may just not be able to test dsym this way, but
> > even that will be better than what we have now.
>
> The problem is the linker must produce a debug map in the symbol table
> that contains the addresses of all linked items. So using LLD won't work
> unless that is fully supported, as LLDB won't be able to link the debug
> info on the fly, nor will dsymutil have a debug map to use in order to make
> a dSYM file.
>
> >
> > PS: I am not proposing to do anything to the existing dotest tests (they
> > need access to a running process anyway, so playing with the linker will
> > not help). I am just trying to make new debug-info specific tests
> > universally available.
>
> What I was trying to say is this is fine if we always compile/link as an
> ELF file with some triple that uses ELF for these tests. They just probably
> shouldn't be mach-o.
>
> Greg
>
> >
> >
> > On Thu, 19 Apr 2018 at 17:24, Greg Clayton  wrote:
> >
> >> The last I knew LLD doesn't work on mach-o very well, so be sure to not
> >> require LLD for linking any Darwin executables.
> >
> >>> On Apr 19, 2018, at 6:42 AM, Pavel Labath via lldb-dev <
> >>> lldb-dev@lists.llvm.org> wrote:
> >>>
> >>> Hello all,
> >>>
> >>> currently we have a couple of tests, in-tree or under review, which are
> >>> very close to being host-independent. The only part they are missing is
> >>> the ability to link an intermediate object file:
> >>> - the ppc64 test in https://reviews.llvm.org/D44437 needs a linker to
> >>> resolve relocations in the debug info (*)
> >>> - the PDB tests under lit/SymbolFile/PDB need a linker to produce the
> >>> program database.
> >>>
> >>> I think it would be great if everyone were able to run these tests and
> >>> verify they don't regress them before they actually push a patch.
> >>>
> >>> Apart from that, I have started looking at writing some non-execution
> >>> debug info (**) tests as a part of adding DWARF v5 accelerator table
> >>> support to lldb (both to test the new implementation, and to make sure I
> >>> don't regress existing ones). Ideally I'd like to make sure that
> >>> everyone is able to run them, regardless of their primary (or only)
> >>> development platform. For this, I also need a linker capable of running
> >>> everywhere (*)
> >>>
> >>> To achieve these goals, I'd like to propose that we add LLD as an
> >>> (optional, but strongly recommended) dependency for running tests and
> >>> start using it in the tests I mention. Doing this would be optional in
> >>> the sense that the tests would be marked "REQUIRES: lld", and simply
> >>> skipped if lld is not available (so the tests would still be green). I
> >>> say "strongly recommended" because not having lld checked out should not
> >>> be an excuse for breaking the test, and the patch author should
> >>> pro-actively revert a patch which breaks such tests and investigate.
> >>>
> >>> I hope this proposal is not too controversial. LLD is already required
> >>> on windows to run dotest tests. Also, all monorepo users likely have it
> >>> already, or it is very easy for them to enable it. For non-monorepo
> >>> users it should be a matter of checking out one extra repository. Please
> >>> let me know what you think.
> >>>
> >>> pavel
> >>>
> >>> (*) our ELF parser has very limited support for applying debug info
> >>> relocations -- it only works for x86, and only a couple of relocations
> >>> are currently implemented. It would be possible to remove the linker
> >>> dependency by implementing these (essentially, doing the link ourselves
> >>> -- this is what llvm does), but given the large number of architectures
> >>> and relocation types, combined with the long-term goal of reusing llvm's
> >>> ELF parser, this does not seem like a worthwhile goal. Also, it does not
> >>> help the windows situation, as in the PDB model it's the linker who
> >>> produces the pdb's.
> >>>
> >>> (**) I'll write a separate email about this, but what I'm essentially
> >>> thinking of is producing a stand-alone module (either from .yaml, .s,

Re: [lldb-dev] Proposal: Using LLD in tests

2018-04-19 Thread Leonard Mosescu via lldb-dev
>
> we have some good coverage there that our PDBs are "as good as" Microsoft
> PDBs, and in the future we have plans to have a debug info test suite that
> tests LLD-generated PDBs with Microsoft debuggers.
>

Thanks Zach. What I was asking about is exactly the other half of this
equation. Testing LLDB with MSVC-produced PDBs should be complementary, and
I doubt we can get full coverage by just testing from the other direction
(it's plausible that even with equivalent semantic information, certain
patterns may only occur in MSVC-produced debug information).

IMO reliable support for MSVC-generated binaries and debug information is
critical, since even with an LLVM/LLD toolchain you'll still have
system/3rd-party modules, right?

Anyway, I was curious if any of this is in scope for Pavel's proposal and
the answer seems to be no, thanks everyone.

On Thu, Apr 19, 2018 at 10:44 AM, Pavel Labath  wrote:

> On Thu, 19 Apr 2018 at 18:19, Leonard Mosescu  wrote:
>
> >> the PDB tests under lit/SymbolFile/PDB need a linker to produce the
> >> program database
>
>
> > With this proposal, would we preserve any coverage for MSVC produced
> > debug information?
>
>
> Well... the question there is: what are you trying to test? Is it the fact
> that your debugger works with a particular compiler+linker combination
> (note that those tests already compile with clang-cl), or that your
> pdb-parsing code is sane? (integration vs. regression test).
>
> Historically we've only had the former kind of tests (dotest), and we've
> had the ability (and used it) to run those tests against different kinds of
> compilers. This is all nice, but it means that a specific test will be
> testing a different thing for every person who runs it. That's why I would
> like to build up a suite of more regression-like tests (*). I would say
> that the tests under lit/*** should be regression tests and our goal should
> be to remove as many system dependencies as possible, and leave the job of
> testing integration with a specific toolchain to "dotest" tests (**).
>
> Technically, the answer to your question is "no", because currently dotest
> tests don't know how to work with cl+link. Making that work would be an
> interesting project (although a bit annoying as the Makefiles are full of
> gcc-isms). However, I don't think that should stop us here.
>
> (*) Ideally I would like to leave even the compiler out of the equation for
> these tests, and make it so that the tests always run on the exact same set
> of bytes. I am hoping I will be able to write at least some tests using .s
> files. However, I don't think I will do that for all of them, because these
> files can be long/verbose/tedious to write.
>
> (**) However, even "dotest" tests should have a "default" mode which is as
> hermetic as possible.
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] LLDB tests getting stuck on GDBRemoteCommunicationClientTest.GetMemoryRegionInfo ?

2018-04-26 Thread Leonard Mosescu via lldb-dev
I just did a clean build (debug) on Linux, and I noticed that the LLDB
tests seem to consistently get stuck:

-- Testing: 1002 tests, 12 threads --
99% [==-] ETA: 00:00:01
lldb-Suite :: types/TestIntegerTypes.py


At this point there are a bunch of llvm-lit processes waiting and two
suspicious LLDB unit tests:


ProcessGdbRemoteTests --gtest_filter=GDBRemoteCommunicationClientTest.GetMemoryRegionInfo
ProcessGdbRemoteTests --gtest_filter=GDBRemoteCommunicationClientTest.GetMemoryRegionInfoInvalidResponse


I took a quick look and they both seem to be blocked on communicating with the
remote:

thread #2, name = 'ProcessGdbRemot', stop reason = signal SIGSTOP
  frame #0: 0x7f2d216e4383 libc.so.6`__GI_select + 51
  frame #1: 0x56464a7afd6c ProcessGdbRemoteTests`SelectHelper::Select(this=0x7f2d1eb07910) at SelectHelper.cpp:224
  frame #2: 0x564647c24745 ProcessGdbRemoteTests`lldb_private::ConnectionFileDescriptor::BytesAvailable(this=0x56464d563800, timeout=0x7f2d1eb09f40, error_ptr=0x7f2d1eb07dd0) at ConnectionFileDescriptorPosix.cpp:586
  frame #3: 0x564647c23e58 ProcessGdbRemoteTests`lldb_private::ConnectionFileDescriptor::Read(this=0x56464d563800, dst=0x7f2d1eb07e00, dst_len=8192, timeout=0x7f2d1eb09f40, status=0x7f2d1eb07dcc, error_ptr=0x7f2d1eb07dd0) at ConnectionFileDescriptorPosix.cpp:390
  frame #4: 0x564647afc2ca ProcessGdbRemoteTests`lldb_private::Communication::ReadFromConnection(this=0x56464d53e580, dst=0x7f2d1eb07e00, dst_len=8192, timeout=0x7f2d1eb09f40, status=0x7f2d1eb07dcc, error_ptr=0x7f2d1eb07dd0) at Communication.cpp:286
  frame #5: 0x564647afbad6 ProcessGdbRemoteTests`lldb_private::Communication::Read(this=0x56464d53e580, dst=0x7f2d1eb07e00, dst_len=8192, timeout=0x7f2d1eb09f40, status=0x7f2d1eb07dcc, error_ptr=0x7f2d1eb07dd0) at Communication.cpp:169
  frame #6: 0x564647c3bf6a ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteCommunication::WaitForPacketNoLock(this=0x56464d53e580, packet=0x7f2d1eb0a0e0, timeout=Timeout > @ 0x7f2d1eb09f40, sync_on_timeout=true) at GDBRemoteCommunication.cpp:351
  frame #7: 0x564647c3bca5 ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteCommunication::ReadPacket(this=0x56464d53e580, response=0x7f2d1eb0a0e0, timeout=Timeout > @ 0x7f2d1eb09f90, sync_on_timeout=true) at GDBRemoteCommunication.cpp:301
  frame #8: 0x564647c39c72 ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteClientBase::SendPacketAndWaitForResponseNoLock(this=0x56464d53e580, payload=(Data = "qSupported:xmlRegisters=i386,arm,mips", Length = 37), response=0x7f2d1eb0a0e0) at GDBRemoteClientBase.cpp:212
  frame #9: 0x564647c39a23 ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteClientBase::SendPacketAndWaitForResponse(this=0x56464d53e580, payload=(Data = "qSupported:xmlRegisters=i386,arm,mips", Length = 37), response=0x7f2d1eb0a0e0, send_async=false) at GDBRemoteClientBase.cpp:176
  frame #10: 0x564647c44e0a ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteCommunicationClient::GetRemoteQSupported(this=0x56464d53e580) at GDBRemoteCommunicationClient.cpp:370
  frame #11: 0x564647c4427b ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteCommunicationClient::GetQXferMemoryMapReadSupported(this=0x56464d53e580) at GDBRemoteCommunicationClient.cpp:200
  frame #12: 0x564647c4c661 ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteCommunicationClient::LoadQXferMemoryMap(this=0x56464d53e580) at GDBRemoteCommunicationClient.cpp:1609
  frame #13: 0x564647c4bb4e ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteCommunicationClient::GetQXferMemoryMapRegionInfo(this=0x56464d53e580, addr=16384, region=0x7f2d1eb0a6c0) at GDBRemoteCommunicationClient.cpp:1583
  frame #14: 0x564647c4b95d ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteCommunicationClient::GetMemoryRegionInfo(this=0x56464d53e580, addr=16384, region_info=0x7ffd8b1a8870) at GDBRemoteCommunicationClient.cpp:1558
  frame #15: 0x56464797ee25 ProcessGdbRemoteTests`operator(__closure=0x56464d5636a8) at GDBRemoteCommunicationClientTest.cpp:339
  frame #16: 0x56464798a9d6 ProcessGdbRemoteTests`std::__invoke_impl >((null)=__invoke_other @ 0x7f2d1eb0a910, __f=0x56464d5636a8)> &&) at invoke.h:60
  frame #17: 0x56464798613c ProcessGdbRemoteTests`std::__invoke >(__fn=0x56464d5636a8)> &&) at invoke.h:96
  frame #18: 0x5646479c1750 ProcessGdbRemoteTests`std::thread::_Invoker > >::_M_invoke<0>(this=0x564

Re: [lldb-dev] LLDB tests getting stuck on GDBRemoteCommunicationClientTest.GetMemoryRegionInfo ?

2018-05-01 Thread Leonard Mosescu via lldb-dev
Thanks Pavel. It doesn't look like a timeout to me:

1. First, the other (main) thread is just waiting on the std::future::get()
on the final EXPECT_TRUE(result.get().Success())

#0  0x7fe4bdfbb6cd in pthread_join (threadid=140620333614848, thread_return=0x0) at pthread_join.c:90
...
#14 0x55b855bdf370 in std::future::get (this=0x7ffe4498aad0) at /usr/include/c++/7/future:796
#15 0x55b855b8c502 in GDBRemoteCommunicationClientTest_GetMemoryRegionInfo_Test::TestBody (this=0x55b85bc195d0) at /usr/local/google/home/mosescu/extra/llvm/src/tools/lldb/unittests/Process/gdb-remote/GDBRemoteCommunicationClientTest.cpp:330


2. The part that seems interesting to me is this part of the callstack I
mentioned:

frame #9: 0x564647c39a23 ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteClientBase::SendPacketAndWaitForResponse(this=0x56464d53e580, payload=(Data = "qSupported:xmlRegisters=i386,arm,mips", Length = 37), response=0x7f2d1eb0a0e0, send_async=false) at GDBRemoteClientBase.cpp:176
frame #10: 0x564647c44e0a ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteCommunicationClient::GetRemoteQSupported(this=0x56464d53e580) at GDBRemoteCommunicationClient.cpp:370
frame #11: 0x564647c4427b ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteCommunicationClient::GetQXferMemoryMapReadSupported(this=0x56464d53e580) at GDBRemoteCommunicationClient.cpp:200
frame #12: 0x564647c4c661 ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteCommunicationClient::LoadQXferMemoryMap(this=0x56464d53e580) at GDBRemoteCommunicationClient.cpp:1609
frame #13: 0x564647c4bb4e ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteCommunicationClient::GetQXferMemoryMapRegionInfo(this=0x56464d53e580, addr=16384, region=0x7f2d1eb0a6c0) at GDBRemoteCommunicationClient.cpp:1583
frame #14: 0x564647c4b95d ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteCommunicationClient::GetMemoryRegionInfo(this=0x56464d53e580, addr=16384, region_info=0x7ffd8b1a8870) at GDBRemoteCommunicationClient.cpp:1558
frame #15: 0x56464797ee25 ProcessGdbRemoteTests`operator(__closure=0x56464d5636a8) at GDBRemoteCommunicationClientTest.cpp:339

It seems that the client is attempting extra communication which is not
modeled in the mock HandlePacket(), so it simply hangs in there. If that's
the case I'd expect this issue to be more widespread (unless my source tree
is in a broken state).

This is the first time I looked at this part of the code so it's possible I
missed something obvious though.



On Fri, Apr 27, 2018 at 2:11 AM, Pavel Labath  wrote:

> On Thu, 26 Apr 2018 at 22:58, Leonard Mosescu via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
> > I just did a clean build (debug) on Linux, and I noticed that the LLDB
> tests seem to consistently get stuck:
>
> >   -- Testing: 1002 tests, 12 threads --
>
> >   99% [======-] ETA: 00:00:01
> > lldb-Suite :: types/TestIntegerTypes.py
>
>
> > At this point there are a bunch of llvm-lit processes waiting and two
> suspicious LLDB unit tests:
>
>
> > ProcessGdbRemoteTests --gtest_filter=GDBRemoteCommunicationClientTest.GetMemoryRegionInfo
> > ProcessGdbRemoteTests --gtest_filter=GDBRemoteCommunicationClientTest.GetMemoryRegionInfoInvalidResponse
>
>
> > I took a quick look and they both seem to be blocked on communicating with
> the remote:
>
> > thread #2, name = 'ProcessGdbRemot', stop reason = signal SIGSTOP
>
> These tests should have two threads communicating with each other. Can you
> check what the other thread is doing?
>
> My bet would be that fact that we are now running dotest tests concurrently
> with the unittests is putting more load on the system (particularly in
> debug builds), and the communication times out. You can try increasing the
> timeout in GDBRemoteTestUtils.cpp:GetPacket to see if that helps.
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLDB tests getting stuck on GDBRemoteCommunicationClientTest.GetMemoryRegionInfo ?

2018-05-01 Thread Leonard Mosescu via lldb-dev
PS. Just a wild guess, could it be related to rL327970: Re-land: [lldb]
Use vFlash commands when writing to target's flash memory… ?

On Tue, May 1, 2018 at 1:24 PM, Leonard Mosescu  wrote:

> Thanks Pavel. It doesn't look like a timeout to me:
>
> 1. First, the other (main) thread is just waiting on the
> std::future::get() on the final EXPECT_TRUE(result.get().Success())
>
> #0  0x7fe4bdfbb6cd in pthread_join (threadid=140620333614848, thread_return=0x0) at pthread_join.c:90
> ...
> #14 0x55b855bdf370 in std::future::get (this=0x7ffe4498aad0) at /usr/include/c++/7/future:796
> #15 0x55b855b8c502 in GDBRemoteCommunicationClientTest_GetMemoryRegionInfo_Test::TestBody (this=0x55b85bc195d0) at /usr/local/google/home/mosescu/extra/llvm/src/tools/lldb/unittests/Process/gdb-remote/GDBRemoteCommunicationClientTest.cpp:330
>
>
> 2. The part that seems interesting to me is this part of the callstack I
> mentioned:
>
> frame #9: 0x564647c39a23 ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteClientBase::SendPacketAndWaitForResponse(this=0x56464d53e580, payload=(Data = "qSupported:xmlRegisters=i386,arm,mips", Length = 37), response=0x7f2d1eb0a0e0, send_async=false) at GDBRemoteClientBase.cpp:176
> frame #10: 0x564647c44e0a ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteCommunicationClient::GetRemoteQSupported(this=0x56464d53e580) at GDBRemoteCommunicationClient.cpp:370
> frame #11: 0x564647c4427b ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteCommunicationClient::GetQXferMemoryMapReadSupported(this=0x56464d53e580) at GDBRemoteCommunicationClient.cpp:200
> frame #12: 0x564647c4c661 ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteCommunicationClient::LoadQXferMemoryMap(this=0x56464d53e580) at GDBRemoteCommunicationClient.cpp:1609
> frame #13: 0x564647c4bb4e ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteCommunicationClient::GetQXferMemoryMapRegionInfo(this=0x56464d53e580, addr=16384, region=0x7f2d1eb0a6c0) at GDBRemoteCommunicationClient.cpp:1583
> frame #14: 0x564647c4b95d ProcessGdbRemoteTests`lldb_private::process_gdb_remote::GDBRemoteCommunicationClient::GetMemoryRegionInfo(this=0x56464d53e580, addr=16384, region_info=0x7ffd8b1a8870) at GDBRemoteCommunicationClient.cpp:1558
> frame #15: 0x56464797ee25 ProcessGdbRemoteTests`operator(__closure=0x56464d5636a8) at GDBRemoteCommunicationClientTest.cpp:339
>
> It seems that the client is attempting extra communication which is not
> modeled in the mock HandlePacket(), so it simply hangs in there. If that's
> the case I'd expect this issue to be more widespread (unless my source tree
> is in a broken state).
>
> This is the first time I looked at this part of the code so it's possible I
> missed something obvious though.
>
>
>
> On Fri, Apr 27, 2018 at 2:11 AM, Pavel Labath  wrote:
>
>> On Thu, 26 Apr 2018 at 22:58, Leonard Mosescu via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>> > I just did a clean build (debug) on Linux, and I noticed that the LLDB
>> tests seem to consistently get stuck:
>>
>> >   -- Testing: 1002 tests, 12 threads --
>>
>> >   99% [======-] ETA: 00:00:01
>> > lldb-Suite :: types/TestIntegerTypes.py
>>
>>
>> > At this point there are a bunch of llvm-lit processes waiting and two
>> suspicious LLDB unit tests:
>>
>>
>> > ProcessGdbRemoteTests --gtest_filter=GDBRemoteCommunicationClientTest.GetMemoryRegionInfo
>> > ProcessGdbRemoteTests --gtest_filter=GDBRemoteCommunicationClientTest.GetMemoryRegionInfoInvalidResponse
>>
>>
>> > I took a quick look and they both seem to be blocked on communicating with
>> the remote:
>>
>> > thread #2, name = 'ProcessGdbRemot', stop reason = signal SIGSTOP
>>
>> These tests should have two threads communicating with each other. Can you
>> check what the other thread is doing?
>>
>> My bet would be that fact that we are now running dotest tests
>> concurrently
>> with the unittests is putting more load on the system (particularly in
>> debug builds), and the communication times out. You can try increasing the
>> timeout in GDBRemoteTestUtils.cpp:GetPacket to see if that helps.
>>
>
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLDB tests getting stuck on GDBRemoteCommunicationClientTest.GetMemoryRegionInfo ?

2018-05-02 Thread Leonard Mosescu via lldb-dev
Great, thanks Pavel!

On Wed, May 2, 2018 at 10:06 AM, Pavel Labath  wrote:

> Ok, r331374 ought to fix that. The situation was a bit more complicated
> then I thought, because the function was behaving differently if one builds
> lldb with xml support, so i've had to update the test to work correctly in
> both situations.
> On Wed, 2 May 2018 at 16:34, Pavel Labath  wrote:
>
> > Right, I see what's going on now. Yes, you're right, the commit you
> mention
> > has added extra packets which are not handled in the mock. The reason
> this
> > is hanging for you is because you are using a debug build, which has a
> much
> > larger packet timeout (1000s i think). In the release build this passes,
> > because the second packet is optional and the function treats the lack of
> > response to the second packet as an error/not implemented. If you waited
> > for 15 minutes, I think you'd see the tests pass as well.
>
> > I'll have this fixed soon.
> > On Tue, 1 May 2018 at 21:26, Leonard Mosescu  wrote:
>
> > > PS. just a wild guess, could it be related to : rL327970: Re-land:
> [lldb]
> > Use vFlash commands when writing to target's flash memory… ?
>
> > > On Tue, May 1, 2018 at 1:24 PM, Leonard Mosescu 
> > wrote:
>
> > >> Thanks Pavel. It doesn't look like a timeout to me:
>
> > >> 1. First, the other (main) thread is just waiting on the
> > std::future::get() on the final EXPECT_TRUE(result.get().Success())
>
> > >> #0  0x7fe4bdfbb6cd in pthread_join (threadid=140620333614848,
> > thread_return=0x0) at pthread_join.c:90
> > >> ...
> > >> #14 0x55b855bdf370 in std::future::get
> > (this=0x7ffe4498aad0) at /usr/include/c++/7/future:796
> > >> #15 0x55b855b8c502 in
> > GDBRemoteCommunicationClientTest_GetMemoryRegionInfo_Test::TestBody
> > (this=0x55b85bc195d0)
> > >>  at
>
> /usr/local/google/home/mosescu/extra/llvm/src/tools/
> lldb/unittests/Process/gdb-remote/GDBRemoteCommunicationClientTest.cpp:330
>
>
> > >> 2. The part that seems interesting to me is this part of the callstack
> I
> > mentioned:
>
> > >>  frame #9: 0x564647c39a23
>
> ProcessGdbRemoteTests`lldb_private::process_gdb_remote::
> GDBRemoteClientBase::SendPacketAndWaitForResponse(this=0x56464d53e580,
> > payload=(Data = "qSupported:xmlRegisters=i386,arm,mips", Length = 37),
> > response=0x7f2d1eb0a0e0, send_async=false) at
> > GDBRemoteClientBase.cpp:176
> > >>  frame #10: 0x564647c44e0a
>
> ProcessGdbRemoteTests`lldb_private::process_gdb_remote::
> GDBRemoteCommunicationClient::GetRemoteQSupported(this=0x56464d53e580)
> > at GDBRemoteCommunicationClient.cpp:370
> > >>  frame #11: 0x564647c4427b
>
> ProcessGdbRemoteTests`lldb_private::process_gdb_remote::
> GDBRemoteCommunicationClient::GetQXferMemoryMapReadSupported
> (this=0x56464d53e580)
> > at GDBRemoteCommunicationClient.cpp:200
> > >>  frame #12: 0x564647c4c661
>
> ProcessGdbRemoteTests`lldb_private::process_gdb_remote::
> GDBRemoteCommunicationClient::LoadQXferMemoryMap(this=0x56464d53e580)
> > at GDBRemoteCommunicationClient.cpp:1609
> > >>  frame #13: 0x564647c4bb4e
>
> ProcessGdbRemoteTests`lldb_private::process_gdb_remote::
> GDBRemoteCommunicationClient::GetQXferMemoryMapRegionInfo(
> this=0x56464d53e580,
> > addr=16384, region=0x7f2d1eb0a6c0) at
> > GDBRemoteCommunicationClient.cpp:1583
> > >>  frame #14: 0x564647c4b95d
>
> ProcessGdbRemoteTests`lldb_private::process_gdb_remote::
> GDBRemoteCommunicationClient::GetMemoryRegionInfo(this=0x56464d53e580,
> > addr=16384, region_info=0x7ffd8b1a8870) at
> > GDBRemoteCommunicationClient.cpp:1558
> > >>  frame #15: 0x56464797ee25
> > ProcessGdbRemoteTests`operator(__closure=0x56464d5636a8) at
> > GDBRemoteCommunicationClientTest.cpp:339
>
> > >> It seems that the client is attempting extra communication which is
> not
> > modeled in the mock HandlePacket(), so it simply hangs in there. If
> that's
> > the case I'd expect this issue to be more widespread (unless my source
> tree
> > is in a broken state).
>
> > >> This is the first time I looked at this part of the code so it's
> possible
> > I missed something obvious though.
>
>
>
> > >> On Fri, Apr 27, 2018 at 2:11 AM, Pavel Labath 
> wrote:
>
> > >>> On Thu, 26 Apr 2018 at 22:58, Leonard Mosescu via l

[lldb-dev] Making changes to the SB API

2018-06-08 Thread Leonard Mosescu via lldb-dev
What is the governing philosophy around making changes to the SB API? The "SB
API Coding Rules"
page establishes the practices on how to avoid introducing accidental
incompatibility, but what
about the cases where there's a case for intentionally making changes?

For example, I'd like to make a small change to SBTarget to allow surfacing
errors during LoadCore():

SBProcess SBTarget::LoadCore(const char *core_file)


And add an explicit out error parameter (in line with SBTarget::Attach(),
Launch(), ...):

SBProcess SBTarget::LoadCore(const char *core_file, SBError &error)

If the rule is to strictly avoid any kind of changes then I'd have to
resort to
a COM-like versioning and introduce a new SBTarget::LoadCore2 (or
LoadCoreEx, ... pick
your poison, I'm not set on any name) while also keeping the existing
LoadCore().

Any guidance on this? Thanks!
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Making changes to the SB API

2018-06-11 Thread Leonard Mosescu via lldb-dev
Thanks. I wasn't sure how well C++ overloading works with SWIG, that's
definitely a more ergonomic solution.


On Fri, Jun 8, 2018 at 1:16 PM, Greg Clayton  wrote:

>
>
> On Jun 8, 2018, at 12:54 PM, Leonard Mosescu via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
> What is the governing philosophy around making changes to the SB API? The
> "SB API Coding Rules <http://lldb.llvm.org/SB-api-coding-rules.html>"
> page establishes the practices on how to avoid introducing accidental
> incompatibility, but what
> about the cases where there's a case for intentionally making changes?
>
> For example, I'd like to make a small change to SBTarget to allow
> surfacing errors during LoadCore():
>
> SBProcess SBTarget::LoadCore(const char *core_file)
>
>
> And add an explicit out error parameter (in line with SBTarget::Attach(),
> Launch(), ...):
>
> SBProcess SBTarget::LoadCore(const char *core_file, SBError &error)
>
> If the rule is to strictly avoid any kind of changes then I'd have to
> resort to
> a COM-like versioning and introduce a new SBTarget::LoadCore2 (or
> LoadCoreEx, ... pick
> your poison, I'm not set on any name) while also keeping the existing
> LoadCore().
>
> Any guidance on this? Thanks!
>
>
> Just add an extra overloaded version of LoadCore. We don't want people's
> code to not link and since Apple uses a LLDBRPC.framework that sub-launches
> a lldb-rpc-server which can connect to older LLDB.framework binaries, we
> can't remove functions.
>
> Greg
>
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
>
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Do we have any infrastructure for creating mini dump files from a loaded process or from a core file?

2018-06-13 Thread Leonard Mosescu via lldb-dev
The minidump format is more or less documented in MSDN.

That being said, it's not exactly trivial to produce a good minidump. Crashpad
has a native &
cross-platform minidump writer, that's what I'd start with.

On Wed, Jun 13, 2018 at 1:38 PM, Adrian McCarthy via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Zach's right.  On Windows, lldb can produce a minidump, but it just calls
> out to a Microsoft library to do so.  We don't have any platform-agnostic
> code for producing a minidump.
>
> I've also pinged another Googler who I know might be interested in
> converting between minidumps and core files (the opposite direction) to see
> if he has any additional info.  I don't think he's on lldb-dev, though, so
> I'll act as a relay if necessary.
>
> On Wed, Jun 13, 2018 at 12:07 PM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> We can’t produce them, but you should check out the source code of google
>> breakpad / crashpad which can.
>>
>> That said it’s a pretty simple format, there may be enough in our
>> consumer code that should allow you to produce them
>>
>>
>> ___
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>
>>
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Do we have any infrastructure for creating mini dump files from a loaded process or from a core file?

2018-06-13 Thread Leonard Mosescu via lldb-dev
>
> That being said, it's not exactly trivial to produce a good minidump.
> Crashpad has a
> native & cross-platform minidump writer, that's what I'd start with.
>

Addendum: I realized after sending the email that if the goal is to convert
core files -> LLDB -> minidump a lot of the complexity found in Crashpad
can be avoided, so perhaps writing an LLDB minidump writer from scratch
would not be too bad.

On Wed, Jun 13, 2018 at 1:50 PM, Leonard Mosescu  wrote:

> The minidump format is more or less documented in MSDN.
>
> That being said, it's not exactly trivial to produce a good minidump. Crashpad
> has a native &
> cross-platform minidump writer, that's what I'd start with.
>
> On Wed, Jun 13, 2018 at 1:38 PM, Adrian McCarthy via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> Zach's right.  On Windows, lldb can produce a minidump, but it just calls
>> out to a Microsoft library to do so.  We don't have any platform-agnostic
>> code for producing a minidump.
>>
>> I've also pinged another Googler who I know might be interested in
>> converting between minidumps and core files (the opposite direction) to see
>> if he has any additional info.  I don't think he's on lldb-dev, though, so
>> I'll act as a relay if necessary.
>>
>> On Wed, Jun 13, 2018 at 12:07 PM, Zachary Turner via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>> We can’t produce them, but you should check out the source code of
>>> google breakpad / crashpad which can.
>>>
>>> That said it’s a pretty simple format, there may be enough in our
>>> consumer code that should allow you to produce them
>>>
>>>
>>> ___
>>> lldb-dev mailing list
>>> lldb-dev@lists.llvm.org
>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>
>>>
>>
>> ___
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>
>>
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Do we have any infrastructure for creating mini dump files from a loaded process or from a core file?

2018-06-13 Thread Leonard Mosescu via lldb-dev
>
> What about the case where you already have a Unix core file and you aren't
> in a debugger but just want to convert it?


Just curious, would a small Python script using the LLDB SB API satisfy
this requirement?

 We could move all the code for consuming and producing Windows minidumps
> and Unix / Mach-O corefiles from LLDB down into LLVMCoreFile, write a
> library like llvm-core that can manipulate or inspect them, then have LLDB
> use it.  Kill 2 birds with one stone that way IMO.
>

I like the idea of factoring out reusable subsystems, and I'd love to see
something along these lines. Just a word of caution though: the hard part
may not be the generation of a "structurally valid" minidump file, but
"parsing" and modeling the process state (figuring out the list of modules
& memory regions, etc. See the Crashpad implementation for details).

On Wed, Jun 13, 2018 at 3:01 PM, Zachary Turner via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Yea, I think something like this would actually make a useful llvm
> utility.  Call it llvm-core or something, and it links against the library
> LLVMCoreFile.  We could move all the code for consuming and producing
> Windows minidumps and Unix / Mach-O corefiles from LLDB down into
> LLVMCoreFile, write a library like llvm-core that can manipulate or inspect
> them, then have LLDB use it.  Kill 2 birds with one stone that way IMO.
>
> On Wed, Jun 13, 2018 at 2:56 PM Jason Molenda  wrote:
>
>> fwiw I had to prototype a new LC_NOTE load command a year ago in Mach-O
>> core files, to specify where the kernel binary was located.  I wrote a
>> utility to add the data to an existing corefile - both load command and
>> payload - and it was only about five hundred lines of C++.  I didn't link
>> against anything but libc, it's such  a simple task I didn't sweat trying
>> to find an object-file-reader/writer library.  ELF may be more complicated
>> though.
>>
>> > On Jun 13, 2018, at 2:51 PM, Zachary Turner via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>> >
>> > What about the case where you already have a Unix core file and you
>> aren't in a debugger but just want to convert it?  It seems like we could
>> have a standalone utility that did that (one could imagine doing the
>> reverse too).  I'm wondering if it wouldn't be possible to do this as a
>> library or something that didn't have any dependencies on LLDB, that way a
>> standalone tool could link against this library, and so could LLDB.  I
>> think this would improve its usefulness quite a bit.
>> >
>> > On Wed, Jun 13, 2018 at 2:42 PM Greg Clayton 
>> wrote:
>> > The goal is to take a live process (regular process just stopped, or a
>> core file) and run "save_minidump ..." as a command and export a minidump
>> file that can be sent elsewhere. Unix core files are too large to always
>> send and they are less useful if they are not examined in the machine that
>> they were produced on. So LLDB gives us the connection to the live process,
>> and we can then create a minidump file. I am going to create a python
>> module that can do this for us.
>> >
>> > Greg
>> >
>> >
>> >> On Jun 13, 2018, at 2:29 PM, Zachary Turner via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>> >>
>> >> Also, if the goal is to have this upstream somewhere, it would be nice
>> to have a tool this be a standalone tool.  This seems like something that
>> you shouldn't be required to start up a debugger to do, and probably
>> doesn't have many (or any for that matters) on the rest of LLDB.
>> >>
>> >> On Wed, Jun 13, 2018 at 1:58 PM Leonard Mosescu 
>> wrote:
>> >> That being said, it's not exactly trivial to produce a good minidump.
>> Crashpad has a native & cross-platform minidump writer, that's what I'd
>> start with.
>> >>
>> >> Addendum: I realized after sending the email that if the goal is to
>> convert core files -> LLDB -> minidump a lot of the complexity found in
>> Crashpad can be avoided, so perhaps writing an LLDB minidump writer from
>> scratch would not be too bad.
>> >>
>> >> On Wed, Jun 13, 2018 at 1:50 PM, Leonard Mosescu 
>> wrote:
>> >> The minidump format is more or less documented in MSDN.
>> >>
>> >> That being said, it's not exactly trivial to produce a good minidump.
>> Crashpad has a native & cross-platform minidump writer, that's what I'd
>> start with.
>> >>
>> >> On Wed, Jun 13, 2018 at 1:38 PM, Adrian McCarthy via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>> >> Zach's right.  On Windows, lldb can produce a minidump, but it just
>> calls out to a Microsoft library to do so.  We don't have any
>> platform-agnostic code for producing a minidump.
>> >>
>> >> I've also pinged another Googler who I know might be interested in
>> converting between minidumps and core files (the opposite direction) to see
>> if he has any additional info.  I don't think he's on lldb-dev, though, so
>> I'll act as a relay if necessary.
>> >>
>> >> On Wed, Jun 13, 2018 at 12:07 PM, Zachary Turner via lldb-dev <
>> lldb-dev@lists.llvm.

Re: [lldb-dev] Do we have any infrastructure for creating mini dump files from a loaded process or from a core file?

2018-06-13 Thread Leonard Mosescu via lldb-dev
For core -> minidump conversion, a Python tool might end up as trivial as:

process = target.LoadCore(source_core_file)
process.SaveMinidump(output_minidump_file)


Another angle on reusing existing minidump writing code: we could reuse
just the minidump file creation from Crashpad (the part which constructs
the minidump datastructures and puts together the final minidump file)
without the "process parsing" part of Crashpad (since LLDB already covers
that)

If anyone is interested in exploring that, the interface to this low level
minidump writing part is crashpad::ProcessSnapshot
<https://chromium.googlesource.com/crashpad/crashpad/+/master/snapshot/process_snapshot.h>
& the rest of the xxxSnapshot interfaces.


On Wed, Jun 13, 2018 at 3:33 PM, Jim Ingham  wrote:

> Greg already wrote a "save_crashlog" Python command that writes the state
> of the program as a macOS flavor Crashlog file.  It's in
> examples/Python/crashlog.py.  My guess is he had something similar to that
> in mind, but writing a mini dump file instead.
>
> Jim
>
>
> > On Jun 13, 2018, at 3:20 PM, Leonard Mosescu via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >
> > What about the case where you already have a Unix core file and you
> aren't in a debugger but just want to convert it?
> >
> > Just curious, would a small Python script using the LLDB SB API satisfy
> this requirement?
> >
> >  We could move all the code for consuming and producing Windows
> minidumps and Unix / Mach-O corefiles from LLDB down into LLVMCoreFile,
> write a library like llvm-core that can manipulate or inspect them, then
> have LLDB use it.  Kill 2 birds with one stone that way IMO.
> >
> > I like the idea of factoring out reusable subsystems, and I'd love to
> see something along these lines. Just a word of caution though: the hard
> part may not be the generation of a "structurally valid" minidump file, but
> "parsing" and modeling the process state (figuring out the list of modules
> & memory regions, etc. See the Crashpad implementation for details).
> >
> > On Wed, Jun 13, 2018 at 3:01 PM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> > Yea, I think something like this would actually make a useful llvm
> utility.  Call it llvm-core or something, and it links against the library
> LLVMCoreFile.  We could move all the code for consuming and producing
> Windows minidumps and Unix / Mach-O corefiles from LLDB down into
> LLVMCoreFile, write a library like llvm-core that can manipulate or inspect
> them, then have LLDB use it.  Kill 2 birds with one stone that way IMO.
> >
> > On Wed, Jun 13, 2018 at 2:56 PM Jason Molenda 
> wrote:
> > fwiw I had to prototype a new LC_NOTE load command a year ago in Mach-O
> core files, to specify where the kernel binary was located.  I wrote a
> utility to add the data to an existing corefile - both load command and
> payload - and it was only about five hundred lines of C++.  I didn't link
> against anything but libc, it's such  a simple task I didn't sweat trying
> to find an object-file-reader/writer library.  ELF may be more complicated
> though.
> >
> > > On Jun 13, 2018, at 2:51 PM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> > >
> > > What about the case where you already have a Unix core file and you
> aren't in a debugger but just want to convert it?  It seems like we could
> have a standalone utility that did that (one could imagine doing the
> reverse too).  I'm wondering if it wouldn't be possible to do this as a
> library or something that didn't have any dependencies on LLDB, that way a
> standalone tool could link against this library, and so could LLDB.  I
> think this would improve its usefulness quite a bit.
> > >
> > > On Wed, Jun 13, 2018 at 2:42 PM Greg Clayton 
> wrote:
> > > The goal is to take a live process (regular process just stopped, or a
> core file) and run "save_minidump ..." as a command and export a minidump
> file that can be sent elsewhere. Unix core files are too large to always
> send and they are less useful if they are not examined in the machine that
> they were produced on. So LLDB gives us the connection to the live process,
> and we can then create a minidump file. I am going to create a python
> module that can do this for us.
> > >
> > > Greg
> > >
> > >
> > >> On Jun 13, 2018, at 2:29 PM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> > >>
> > >> Also, if the goal is to have this upstream somewhere, it would be
> nice to 

Re: [lldb-dev] LLDB does not support the default 8 byte build ID generated by LLD

2018-06-20 Thread Leonard Mosescu via lldb-dev
I had made a local attempt at making UUID support arbitrary sizes
(part of extracting
the UUIDs from minidumps ). I ended up
abandoning the UUID changes since they were not strictly in scope and I
also had the same uneasy feeling about how flexible do we really want to be
with UUIDs.

Overall, the change was aesthetically pleasing since the UUID interface can
be cleaned up a bit, but there are a few small downsides I remember:

1. A variable-length UUID likely incurs an extra heap allocation.
2. Formatting arbitrary length UUIDs as string is a bit inconvenient as you
noted as well.

I may have an old patch with these changes, let me dig a bit.


On Wed, Jun 20, 2018 at 9:55 AM, Scott Funkenhauser via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> I took a quick look, it should be fairly straightforward. The one wrinkle
> (which is just a design decision) is how to represent the variable length
> UUID as a human readable string (Ex. 16 byte UUIDs are represented as
> XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX).
>
> I guess the one thing that is giving me pause, is that according to the
> spec UUIDs are only supposed to be 16 bytes. UUID.cpp already isn't
> strictly to spec because 20 byte IDs are already supported, but it seems
> like supporting arbitrary length IDs is an even further departure from the
> spec. Maybe this is just semantics and doesn't really matter. I don't have
> a strong opinion one way or another, you definitely have more context than
> me, and if you think using arbitrary length IDs makes more sense than
> padding we can move forward with that solution.
>
> On Wed, Jun 20, 2018 at 11:18 AM Pavel Labath  wrote:
>
>> Thanks for the heads up Scott. I was not aware that lld generates
>> build-ids with different length.
>>
>> Padding would be one option (we already do that to handle the crc
>> pseudo-build-ids), but perhaps a better one would be to teach the
>> class how to handle arbitrary-sized UUIDs (or up to 20 bytes, at
>> least).
>>
>> I don't think there's a fundamental reason reason why only these two
>> lengths are acceptable. The class originally supported 16 bytes only,
>> because that's how mac UUIDs look like. Then, later, when we were
>> bringing up linux, we added 20-byte support because that's what the
>> gnu linkers generated. But, as it seems that this field can be of any
>> size, maybe it's time to teach UUID how to handle the new reality.
>>
>> Have you looked at how hard would it be to implement something like that?
>>
>> pl
>> On Wed, 20 Jun 2018 at 16:05, Scott Funkenhauser via lldb-dev
>>  wrote:
>> >
>> > Hey guys,
>> >
>> > LLDB uses source/Utility/UUID.cpp to store the build ID. This class
>> only supports 16 or 20 byte IDs.
>> >
>> > When parsing the .note.gnu.build-id ELF section, any build ID between 4
>> and 20 bytes will be parsed and saved (which will silently fail if the size
>> isn't 16 or 20 bytes) https://github.com/llvm-mirror/lldb/blob/
>> 4dc18b8ce3f95c2aa33edc4c821909c329e94be9/source/Plugins/
>> ObjectFile/ELF/ObjectFileELF.cpp#L1279 .
>> >
>> > I discovered this issue because by default LLD will generate a 8 byte
>> build ID, causing LLDB to ignore the .note.gnu.build-id ELF section and
>> compute a crc32 at runtime.
>> >
>> > Is this a know issue that somebody is already working on? (After a
>> quick search I couldn't find any open bugs with a similar description).
>> >
>> > Does anybody have any objection to modifying UUID::SetBytes to accept
>> any byte array with a size between 4 - 20 bytes, and pad with zeros to the
>> next largest supported size (either 16 or 20 bytes).
>> >
>> > ex.
>> > Setting a UUID with length of 8, would pad with 8 trailing zeros to
>> have an overall length of 16.
>> > Setting a UUID with length of 17, would pad with 3 trailing zeros to
>> have an overall length of 20.
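The padding rule proposed above can be sketched as follows (an illustrative
sketch only, not LLDB's actual implementation; `pad_uuid` is a made-up name):

```python
def pad_uuid(raw: bytes) -> bytes:
    """Pad a 4-20 byte build ID with trailing zeros to 16 or 20 bytes."""
    if not 4 <= len(raw) <= 20:
        raise ValueError("unsupported build ID length")
    # pad up to the next supported size: 16 bytes, else 20 bytes
    target = 16 if len(raw) <= 16 else 20
    return raw + bytes(target - len(raw))
```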
>> >
>> > Thanks,
>> > Scott
>> >
>> >
>> > ___
>> > lldb-dev mailing list
>> > lldb-dev@lists.llvm.org
>> > http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLDB does not support the default 8 byte build ID generated by LLD

2018-06-20 Thread Leonard Mosescu via lldb-dev
Here's a snapshot of the old changes I had: https://reviews.llvm.org/D48381
(hopefully it helps a bit but caveat emptor: this is a quick merge from an
old patch, so it's for illustrative purposes only)


On Wed, Jun 20, 2018 at 10:26 AM, Pavel Labath  wrote:

> From the looks of it, the patch stalled on the part whether we can
> consider all-zero UUIDs as valid or not. I've dug around the code a
> bit now, and I've found this comment in ObjectFileMachO.cpp.
>
>// "main bin spec" (main binary specification) data payload is
>// formatted:
>//uint32_t version   [currently 1]
>//uint32_t type  [0 == unspecified, 1 ==
> kernel, 2 == user process]
>//uint64_t address   [ UINT64_MAX if address not
> specified ]
>//uuid_t   uuid  [ all zero's if uuid not specified
> ]
>//uint32_t log2_pagesize [ process page size in log
> base 2, e.g. 4k pages are 12.  0 for unspecified ]
>
>
> So it looks like there are situations where we consider all-zero UUIDs
> as invalid.
>
> I guess that means we either have to keep IsValid() definition as-is,
> or make ObjectFileMachO check the all-zero case itself. (Some middle
> ground may be where we have two SetFromStringRef functions, one which
> treats all-zero case specially (sets m_num_uuid_bytes to 0), and one
> which doesn't). Then clients can pick which semantics they want.
>
>
> > 1. A variable-length UUID likely incurs an extra heap allocation.
> Not really. If you're happy with the current <=20 limit, then you can
> just use the existing data structure. Otherwise, you could use a
> SmallVector.
>
> > 2. Formatting arbitrary length UUIDs as string is a bit inconvenient as
> you noted as well.
> For the string representation, I would say we should just use the
> existing layout of dashes (after 4, 6, 8, 10 and 16 bytes) and just
> cut it short when we have less bytes. The implementation of that
> should be about a dozen lines of code.
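The truncated-dash layout described above might look roughly like this (a
hypothetical sketch; `format_uuid` is a made-up name, and the dash positions
follow the 4, 6, 8, 10, 16 byte offsets mentioned in the email):

```python
def format_uuid(data: bytes) -> str:
    """Format a UUID with dashes after bytes 4, 6, 8, 10 and 16,
    cutting the pattern short for shorter IDs."""
    out = []
    for i, byte in enumerate(data):
        if i in (4, 6, 8, 10, 16):
            out.append("-")
        out.append(f"{byte:02x}")
    return "".join(out)
```

A 16-byte input produces the familiar 8-4-4-4-12 layout, while a 4-byte CRC
simply prints as eight hex digits with no dashes.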
>
> The fact that these new UUIDs would not be real UUIDs could be solved
> by renaming this class to something else, if anyone can think of a
> good name for it (I can't). Then the "real" UUIDs will be just a
> special case of the new object.
>
> pl
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLDB does not support the default 8 byte build ID generated by LLD

2018-06-21 Thread Leonard Mosescu via lldb-dev
>
> Leonard, I'm not going to use your patch, as it's a bit un-llvm-y
> (uses std::ostream and such). However, I wanted to check whether 20
> bytes will be enough for your use cases (uuids in minidumps)?
>

For minidumps we normally use either 16 or 20 byte UUIDs, so I don't see
any immediate problems. Are you planning to make 20 a hard limit or have
the 20 bytes "inlined" and dynamically allocate if larger?

On Thu, Jun 21, 2018 at 8:18 AM, Pavel Labath  wrote:

> That sounds like a plan. I have started cleaning up the class a bit
> (removing manual uuid string formatting in various places and such),
> and then I'll send a patch which implements that.
>
> Leonard, I'm not going to use your patch, as it's a bit un-llvm-y
> (uses std::ostream and such). However, I wanted to check whether 20
> bytes will be enough for your use cases (uuids in minidumps)?
> On Thu, 21 Jun 2018 at 16:03, Greg Clayton  wrote:
> >
> > I am fine if we go with any number of bytes. We should have the
> lldb_private::UUID class have an array of bytes that is in the class that
> is to to 20 bytes. We can increase it later if needed. I would rather not
> have a dynamically allocated buffer.
> >
> > That being said a few points:
> > - Length can be set to zero to indicate invalid UUID. Better that than
> filling in all zeroes and having to check for that IMHO. I know there were
> some problems with the last patch around this.
> > - Don't set length to a valid value and have UUID contain zeros unless
> that is a true UUID that was calculated. LLDB does a lot of things by
> matching UUID values so we can't have multiple modules claiming to have a
> UUID that is filled with zeroes, otherwise many matches will occur that we
> don't want
> > - 32 bit GNU debug info CRCs from ELF notes could be filled in as 4 byte
> UUIDs
> > - Comparing two UUIDs can start with the length field first, then if they
> match proceed to compare the bytes (which is hopefully what is already
> happening)
> >
> >
> > On Jun 20, 2018, at 11:01 AM, Leonard Mosescu via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >
> > Here's a snapshot of the old changes I had: https://reviews.llvm.org/
> D48381
> > (hopefully it helps a bit but caveat emptor: this is a quick merge from
> an old patch, so it's for illustrative purposes only)
> >
> >
> > On Wed, Jun 20, 2018 at 10:26 AM, Pavel Labath 
> wrote:
> >>
> >> From the looks of it, the patch stalled on the part whether we can
> >> consider all-zero UUIDs as valid or not. I've dug around the code a
> >> bit now, and I've found this comment in ObjectFileMachO.cpp.
> >>
> >>// "main bin spec" (main binary specification) data payload
> is
> >>// formatted:
> >>//uint32_t version   [currently 1]
> >>//uint32_t type  [0 == unspecified, 1 ==
> >> kernel, 2 == user process]
> >>//uint64_t address   [ UINT64_MAX if address not
> specified ]
> >>//uuid_t   uuid  [ all zero's if uuid not
> specified ]
> >>//uint32_t log2_pagesize [ process page size in log
> >> base 2, e.g. 4k pages are 12.  0 for unspecified ]
> >>
> >>
> >> So it looks like there are situations where we consider all-zero UUIDs
> >> as invalid.
> >>
> >> I guess that means we either have to keep IsValid() definition as-is,
> >> or make ObjectFileMachO check the all-zero case itself. (Some middle
> >> ground may be where we have two SetFromStringRef functions, one which
> >> treats all-zero case specially (sets m_num_uuid_bytes to 0), and one
> >> which doesn't). Then clients can pick which semantics they want.
> >>
> >>
> >> > 1. A variable-length UUID likely incurs an extra heap allocation.
> >> Not really. If you're happy with the current <=20 limit, then you can
> >> just use the existing data structure. Otherwise, you could use a
> >> SmallVector.
> >>
> >> > 2. Formatting arbitrary length UUIDs as string is a bit inconvenient
> as you noted as well.
> >> For the string representation, I would say we should just use the
> >> existing layout of dashes (after 4, 6, 8, 10 and 16 bytes) and just
> >> cut it short when we have less bytes. The implementation of that
> >> should be about a dozen lines of code.
> >>
> >> The fact that these new UUIDs would not be real UUIDs could be solved
> >> by renaming this class to something else, if anyone can think of a
> >> good name for it (I can't). Then the "real" UUIDs will be just a
> >> special case of the new object.
> >>
> >> pl
> >
> >
> > ___
> > lldb-dev mailing list
> > lldb-dev@lists.llvm.org
> > http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
> >
> >
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLDB does not support the default 8 byte build ID generated by LLD

2018-06-21 Thread Leonard Mosescu via lldb-dev
I didn't know about llvm::SmallVector, but it seems a good match indeed,
thanks Greg.

On Thu, Jun 21, 2018 at 9:49 AM, Greg Clayton  wrote:

>
>
> On Jun 21, 2018, at 9:46 AM, Leonard Mosescu  wrote:
>
> Leonard, I'm not going to use your patch, as it's a bit un-llvm-y
>> (uses std::ostream and such). However, I wanted to check whether 20
>> bytes will be enough for your use cases (uuids in minidumps)?
>>
>
> For minidumps we normally use either 16 or 20 byte UUIDs, so I don't see
> any immediate problems. Are you planning to make 20 a hard limit or have
> the 20 bytes "inlined" and dynamically allocate if larger?
>
>
> We could use a llvm::SmallVector to have up to 20 bytes
> before going larger and allocating on the heap.
>
>
> On Thu, Jun 21, 2018 at 8:18 AM, Pavel Labath  wrote:
>
>> That sounds like a plan. I have started cleaning up the class a bit
>> (removing manual uuid string formatting in various places and such),
>> and then I'll send a patch which implements that.
>>
>> Leonard, I'm not going to use your patch, as it's a bit un-llvm-y
>> (uses std::ostream and such). However, I wanted to check whether 20
>> bytes will be enough for your use cases (uuids in minidumps)?
>> On Thu, 21 Jun 2018 at 16:03, Greg Clayton  wrote:
>> >
>> > I am fine if we go with any number of bytes. We should have the
>> lldb_private::UUID class have an array of bytes that is in the class that
>> is up to 20 bytes. We can increase it later if needed. I would rather not
>> have a dynamically allocated buffer.
>> >
>> > That being said a few points:
>> > - Length can be set to zero to indicate invalid UUID. Better that than
>> filling in all zeroes and having to check for that IMHO. I know there were
>> some problems with the last patch around this.
>> > - Don't set length to a valid value and have UUID contain zeros unless
>> that is a true UUID that was calculated. LLDB does a lot of things by
>> matching UUID values so we can't have multiple modules claiming to have a
>> UUID that is filled with zeroes, otherwise many matches will occur that we
>> don't want
>> > - 32 bit GNU debug info CRCs from ELF notes could be filled in as 4
>> byte UUIDs
>> > - Comparing two UUIDs can start with the length field first, then if they
>> match proceed to compare the bytes (which is hopefully what is already
>> happening)
>> >
>> >
>> > On Jun 20, 2018, at 11:01 AM, Leonard Mosescu via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>> >
>> > Here's a snapshot of the old changes I had:
>> https://reviews.llvm.org/D48381
>> > (hopefully it helps a bit but caveat emptor: this is a quick merge from
>> an old patch, so it's for illustrative purposes only)
>> >
>> >
>> > On Wed, Jun 20, 2018 at 10:26 AM, Pavel Labath 
>> wrote:
>> >>
>> >> From the looks of it, the patch stalled on the part whether we can
>> >> consider all-zero UUIDs as valid or not. I've dug around the code a
>> >> bit now, and I've found this comment in ObjectFileMachO.cpp.
>> >>
>> >>// "main bin spec" (main binary specification) data payload
>> is
>> >>// formatted:
>> >>//uint32_t version   [currently 1]
>> >>//uint32_t type  [0 == unspecified, 1 ==
>> >> kernel, 2 == user process]
>> >>//uint64_t address   [ UINT64_MAX if address not
>> specified ]
>> >>//uuid_t   uuid  [ all zero's if uuid not
>> specified ]
>> >>//uint32_t log2_pagesize [ process page size in log
>> >> base 2, e.g. 4k pages are 12.  0 for unspecified ]
>> >>
>> >>
>> >> So it looks like there are situations where we consider all-zero UUIDs
>> >> as invalid.
>> >>
>> >> I guess that means we either have to keep IsValid() definition as-is,
>> >> or make ObjectFileMachO check the all-zero case itself. (Some middle
>> >> ground may be where we have two SetFromStringRef functions, one which
>> >> treats all-zero case specially (sets m_num_uuid_bytes to 0), and one
>> >> which doesn't). Then clients can pick which semantics they want.
>> >>
>> >>
>> >> > 1. A variable-length UUID likely incurs an extra heap allocatio

[lldb-dev] The mysterious case of unsupported DW_FORMs

2018-08-01 Thread Leonard Mosescu via lldb-dev
I'm sharing some notes on a strange LLDB issue I hit locally, in case
anyone else hits the same problem. The symptoms are symbols unexpectedly
not working for some modules and/or warning messages complaining about
"unsupported DW_FORMs", ex:

*warning: (x86_64) /lib/x86_64-linux-gnu/libz.so.1.2.8 unsupported DW_FORM
> values: 0x2 0x1b 0x1c 0x1d 0x1e 0x1f 0x21 0x22 0x23 0x24 0x25 0x26 0x27
> 0x28 0x29 ... lots more values ...*


Tools like dwarfdump, objdump and readelf didn't raise any complaints about
the affected symbols so the symbols themselves seemed fine. The LLDB
warning was fired during parsing the .debug_abbrev section but dumping it
showed nothing out of the ordinary(*)

The first hint that the problem was on the LLDB/LLVM side came from
llvm-dwarfdump:

.debug_info contents:
> error: failed to decompress '.debug_aranges', zlib is not available
> error: failed to decompress '.debug_info', zlib is not available
> error: failed to decompress '.debug_abbrev', zlib is not available

...


Sure enough, the sections were compressed. LLDB tried to decompress, but
when it failed to do so it carried on regardless, later attempting to parse
the compressed bytes as-is.
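For reference, decompressing an ELF SHF_COMPRESSED section is mechanical once
the compression header is parsed. A minimal sketch, assuming the standard
Elf_Chdr layouts and ELFCOMPRESS_ZLIB (the function name below is mine, not
LLDB's):

```python
import struct
import zlib

ELFCOMPRESS_ZLIB = 1

def decompress_section(data: bytes, elf64: bool = True) -> bytes:
    """Decompress a SHF_COMPRESSED section body (Elf_Chdr + deflate stream)."""
    if elf64:
        # Elf64_Chdr: ch_type, ch_reserved, ch_size, ch_addralign
        ch_type, _, ch_size, _ = struct.unpack_from("<IIQQ", data)
        hdr_len = 24
    else:
        # Elf32_Chdr: ch_type, ch_size, ch_addralign
        ch_type, ch_size, _ = struct.unpack_from("<III", data)
        hdr_len = 12
    if ch_type != ELFCOMPRESS_ZLIB:
        raise ValueError("unsupported compression type")
    out = zlib.decompress(data[hdr_len:])
    if len(out) != ch_size:
        raise ValueError("decompressed size mismatch")
    return out
```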

Why weren't my local LLVM & LLDB builds able to decompress the sections?
CMake! Apparently the project files didn't get correctly regenerated and my
CMakeCache.txt had an unfortunate set of flags:

LLVM_ENABLE_ZLIB:BOOL=ON (great!)
> HAVE_LIBZ_ZLIB:INTERNAL= (empty, hmm...)


 It should have something like this instead:

LLVM_ENABLE_ZLIB:BOOL=ON
> HAVE_ZLIB_H:INTERNAL=1


So there you have it folks. If it doesn't work, *reboot* regenerate your
cmake projects and try again.

*(*) none of the tools bothered to make a note that the sections are
compressed (SHF_COMPRESSED)*
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] The mysterious case of unsupported DW_FORMs

2018-08-02 Thread Leonard Mosescu via lldb-dev
Thanks Paul! I have a fix for the LLDB handling of compressed sections in
an upcoming change (together with improved logging). The email was mostly
in case some other poor soul hits the same problem (until I get a chance to
commit the fixes).

> *(*) none of the tools bothered to make a note that the sections are
> compressed (SHF_COMPRESSED)*
> That seems like a completely valid feature request.
> Again filing a bug would be the right tactic.  I'm willing to do this one
> for you, if you don't have a bugzilla account.


I'll open a request for llvm-dwarfdump. If you happen to be in touch with
the developers of the other tools (readelf, objdump, dwarfdump) feel free
to forward them the notes.




On Thu, Aug 2, 2018 at 7:21 AM,  wrote:

> Why weren't my local LLVM & LLDB builds able to decompress the sections?
> CMake!
>
>
>
> Remembering to delete CMakeCache.txt is usually the part I forget to do.
>
>
>
> LLDB tried to decompress, but when it failed to do so it carried on
> regardless, later attempting to parse the compressed bytes as-is.
>
>
>
> A section that is compressed but can't be decompressed should be treated
> as corrupt/unparseable.  That seems like an LLDB bug; do you have an
> account on the project bugzilla?
>
>
>
> *(*) none of the tools bothered to make a note that the sections are
> compressed (SHF_COMPRESSED)*
>
>
>
> That seems like a completely valid feature request.  Again filing a bug
> would be the right tactic.  I'm willing to do this one for you, if you
> don't have a bugzilla account.
>
>
>
> Thanks,
>
> --paulr
>
>
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLDB nightly benchmarks and flamegraphs

2018-08-03 Thread Leonard Mosescu via lldb-dev
+1, really nice. Any plans to add wall clock time? (I see you're using
perf, right?)

On Fri, Aug 3, 2018 at 3:59 PM, Zachary Turner via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> This is really cool.  Maybe you could do it for all of LLVM too?  It would
> be nice if, instead of cycling through each benchmark on a set interval,
> there were just a dropdown box where you could select the one you wanted to
> see.
>
> On Fri, Aug 3, 2018 at 3:37 PM Raphael Isemann via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> Hi everyone,
>>
>> I wanted to share a (hopefully useful) service for LLDB that I added
>> recently:
>>
>> If you go to https://teemperor.de/lldb-bench/ you'll now see graphs
>> that show the instruction count and memory usage of the last LLDB
>> nightlies (one per day). If you click on a graph you'll see a flame
>> graph that shows how much time we spent in each function when running
>> the benchmark. The graph should make it pretty obvious where the good
>> places for optimizations are.
>>
>> You can see all graphs without the slide show under
>> https://teemperor.de/lldb-bench/static.html.
>>
>> The source code of every benchmark can be found here:
>> https://github.com/Teemperor/lldb-bench If you want to add a
>> benchmark, just make a PR to that repository and I'll merge it. See
>> the README of the repo for instructions.
>>
>> I'll add more benchmarks in the future, but you are welcome to add your
>> own.
>>
>> Also, if you for some reason don't appreciate my amazing GNUplot
>> markup skills and prefer your own graphs, you can just grab the raw
>> benchmark data from here: https://teemperor.de/lldb-bench/data/ The
>> data format is just the time, git-commit and the
>> instruction-count/memoryInKB value (depending if it's a `.mem.dat` or
>> a `.inst.dat`).
>>
>> On a side note: Today's spike in memory is related to changes in the
>> build setup, not a LLDB change. I don't expect too many of these
>> spikes to happen in the future because the benchmark framework is now
>> hopefully stable enough.
>>
>> Cheers,
>>
>> - Raphael
>> ___
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Handling of the ELF files missing build-ids?

2018-08-03 Thread Leonard Mosescu via lldb-dev
Greg, Mark,

Looking at the code, LLDB falls back to a full file crc32 to create the
module UUID if the ELF build-id is missing. This works, in the sense that
the generated UUID does indeed identify the module.

But there are a few problems with this approach:

1. First, runtime performance: a full file crc32 is a terribly inefficient
way to generate a temporary UUID that is basically just used to match a
local file to itself.
- especially when some unstripped binaries can be very large. for example a
local chromium build produces a 5.3Gb chrome binary
- the crc32 implementation is decent, but single-threaded
- to add insult to injury, it seems a small bug defeats the intention
to cache the hash value so it ends up being recalculated multiple times

2. The fake UUID is not going to match any external UUID that may be
floating around (and yet not properly embedded into the binary)
- an example is Breakpad, which unfortunately also attempts to make up
UUIDs when the build-id is missing (something we'll hopefully fix soon)
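For scale, the fallback boils down to hashing every byte of the file,
something like the following (a sketch using Python's zlib for illustration,
not LLDB's actual code):

```python
import zlib

def file_crc32(path: str, chunk_size: int = 1 << 20) -> int:
    """CRC32 over an entire file -- O(file size) and single-threaded,
    which is exactly why it hurts on a multi-gigabyte unstripped binary."""
    crc = 0
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            crc = zlib.crc32(chunk, crc)
    return crc & 0xFFFFFFFF
```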

Is there a fundamental reason to calculate the full file crc32? If not I
propose to improve this based on the following observations:

A. Model the reality more accurately: an ELF w/o a build-id doesn't really
have a UUID. So use a zero-length UUID in LLDB.
B. The full file name should be enough to prove the identity of a local
module.
C. If we try to match an external UUID (ex. from a minidump) with a local
file which does not have a UUID, it may help to have an option to allow it
to match (off by default, and only if there's no better match)

What do you think?

Thanks,
Lemo.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [Bug 38453] New: lldb: new (unit)test failures in 7.0

2018-08-06 Thread Leonard Mosescu via lldb-dev
This should be fixed by rL338949

On Mon, Aug 6, 2018 at 12:27 AM, via lldb-dev 
wrote:

> Bug ID: 38453
> Summary: lldb: new (unit)test failures in 7.0
> Product: lldb
> Version: 7.0
> Hardware: PC
> OS: Linux
> Status: NEW
> Severity: enhancement
> Priority: P
> Component: All Bugs
> Assignee: lldb-dev@lists.llvm.org
> Reporter: mgo...@gentoo.org
> CC: llvm-b...@lists.llvm.org
> Blocks: 38406
>
> Created attachment 20643 [details]
> dev-util:lldb-7.0.:20180806-071614.log.xz
>
> The following tests are repeatedly failing for me in the new branch (amd64;
> Gentoo Linux):
>
> Failing Tests (12):
> lldb :: tools/lldb-mi/breakpoint/break-insert.test
> lldb :: tools/lldb-mi/data/data-info-line.test
> lldb :: tools/lldb-mi/exec/exec-continue.test
> lldb :: tools/lldb-mi/exec/exec-finish.test
> lldb :: tools/lldb-mi/exec/exec-interrupt.test
> lldb :: tools/lldb-mi/exec/exec-next-instruction.test
> lldb :: tools/lldb-mi/exec/exec-next.test
> lldb :: tools/lldb-mi/exec/exec-run-wrong-binary.test
> lldb :: tools/lldb-mi/exec/exec-step-instruction.test
> lldb :: tools/lldb-mi/exec/exec-step.test
> lldb :: tools/lldb-mi/symbol/symbol-list-lines.test
> lldb-Unit :: Utility/./UtilityTests/VMRange.CollectionContains
>
> The lldb-mi issue looks like #28253.  The VMRange failure I didn't see a bug for:
>
>  TEST 'lldb-Unit ::
> Utility/./UtilityTests/VMRange.CollectionContains' FAILED 
> Note: Google Test filter = VMRange.CollectionContains
> [==] Running 1 test from 1 test case.
> [--] Global test environment set-up.
> [--] 1 test from VMRange
> [ RUN  ] VMRange.CollectionContains
> /var/tmp/portage/dev-util/lldb-7.0./work/lldb-7.0./unittests/Utility/VMRangeTest.cpp:146:
> Failure
> Value of: VMRange::ContainsRange(collection, VMRange(0x100, 0x104))
>   Actual: false
> Expected: true
> /var/tmp/portage/dev-util/lldb-7.0./work/lldb-7.0./unittests/Utility/VMRangeTest.cpp:147:
> Failure
> Value of: VMRange::ContainsRange(collection, VMRange(0x108, 0x100))
>   Actual: false
> Expected: true
> [  FAILED  ] VMRange.CollectionContains (0 ms)
> [--] 1 test from VMRange (0 ms total)
>
> [--] Global test environment tear-down
> [==] 1 test from 1 test case ran. (0 ms total)
> [  PASSED  ] 0 tests.
> [  FAILED  ] 1 test, listed below:
> [  FAILED  ] VMRange.CollectionContains
>
>  1 FAILED TEST
>
> 
>
>
> Attaching complete build & test log.
>
> --
> *Referenced Bugs:*
>
>- Bug 38406 [meta] 7.0.0 Release Blockers
>
>
> --
> You are receiving this mail because:
>
>- You are the assignee for the bug.
>
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Handling of the ELF files missing build-ids?

2018-08-06 Thread Leonard Mosescu via lldb-dev
>
> I am fine with all the above except some reservations about case C. No
> need to calculate something if it isn't useful. For case C it should be
> fine to never match as if a file has a UUID to begin with it typically
> isn't something that gets stripped in a stripped binary. So we should
> either have it or not. If breakpad does calculate a CRC32, then we need to
> know to ignore the UUID. The problem is we probably won't be able to tell
> what the UUID is: real from build ID, or from GNU debug info CRC, or CRC of
> entire file. So the minidump code will need to do something here. If a
> minidump has the linux auxv and memory map in them, then we might need to
> dig through the section information and deduce if a file matches or not
> based off the size of mapped program headers to further help with the
> matching.
>
> One other idea is to make a set of enumerations for the UUID type:
>
> class UUID {
>   enum class Type {
> BuildID, // A build ID from the compiler or linker
> GNUDebugInfoCRC, // GNU debug info CRC
> MD5, // MD5 of entire file
> MD5NonDebug, // MD5 of the non debug info related bits
> CRC32,   // CRC32 of entire file
> Other,   // Anything else
>   };
> };
>
> The eTypeMD5NonDebug is what apple does: it MD5 checksums only the parts
> of the file that don't change with debug info or any paths found in debug
> info or symbols tables. So if you build a binary in /tmp/a or in
> /private/var/local/foo, the UUID is the same if the binary is essentially
> the same (code, data, etc).
>
> Then we can make intelligent comparisons between UUID types. Might even be
> possible for a module to have more than 1 UUID then if a binary contains a
> eTypeBuildID and a eTypeGNUDebugInfoCRC. If a tool stores its UUIDs as a
> CRC32 or MD5, then those can be calculated on the fly. The GetUUID on
> lldb_private::Module might become:
>
> const lldb_private::UUID &Module::GetUUID(UUID::Type uuid_type);
>
> Thoughts?
>
> Greg


I like the idea of UUID sub-types! This solves the problem is a more
generic fashion and it's also extensible. Interestingly enough, for
Crashpad we're considering something similar (the fabricated UUIDs would
have a different CvRecord signature)
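A module carrying several sub-typed UUIDs, as proposed, could be modeled
along these lines (an illustrative Python sketch of the idea, not the
eventual LLDB design; all names below are mine):

```python
from enum import Enum, auto
from typing import Dict, Optional

class UUIDType(Enum):
    BUILD_ID = auto()            # from the compiler or linker
    GNU_DEBUG_INFO_CRC = auto()  # .gnu_debuglink CRC
    MD5 = auto()                 # MD5 of the entire file
    CRC32 = auto()               # CRC32 of the entire file

class Module:
    """Toy module model carrying at most one UUID per sub-type."""

    def __init__(self) -> None:
        self._uuids: Dict[UUIDType, bytes] = {}

    def set_uuid(self, kind: UUIDType, value: bytes) -> None:
        self._uuids[kind] = value

    def get_uuid(self, kind: UUIDType) -> Optional[bytes]:
        # a CRC32/MD5 could be computed lazily here on first request
        return self._uuids.get(kind)

    def matches(self, kind: UUIDType, value: bytes) -> bool:
        # only UUIDs of the same type are comparable; a missing UUID
        # never matches (avoiding the all-zeros false-match problem)
        ours = self._uuids.get(kind)
        return ours is not None and ours == value
```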

Case C. is a bit ugly so let me elaborate: this is specific to Breakpad
minidump + ELF binaries w/o build-id. So we'd still need to have a way to
force the match of the modules in the minidump with the local files. This
ought to be an off-by-default, sharp tool which you'd only need when dealing
with Breakpad minidumps, and even then it would only be a fall-back that
must be explicitly requested.

1. .gnu_debuglink separate file pointer
> .
> This is where the choice of the crc algorithm comes from.


Thanks Pavel. As you noted yourself, this doesn't mean that the UUID has to
be tied to the checksum (they are exclusive options). In particular, for
performance reasons I think it's desirable to avoid calculating checksums
for every ELF module just in case.

I think we might have something already which could serve this
> purpose. Eugene added a couple of months ago a mechanism to force-load
> symbols for a given file regardless of the UUIDs
> . It requires an explicit "target
> symbols add" command (which seems reasonable, as lldb has no way to
> tell if it's doing things right). Would something like that work for
> you?


Nice. We may have to update the C++ API, but something like this would do
the trick for case C.

To summarize the conversation so far:

1. We can fix cases A, B independent of C: if real UUIDs are missing, don't
automatically use full file CRC32 as UUID.
2. Pay attention to make sure that we don't break .gnu_debuglink or remote
debugging (thanks Pavel)
3. Multiple types/namespaces for UUIDs would be a great feature!
4. Eugene's symbol forcing trick could be extended to handle case C

Did I miss anything?

My current plan is to start with #1, then look into #4 (force symbols
match).


On Sun, Aug 5, 2018 at 12:11 PM, Pavel Labath  wrote:

> Hello Leonard,
>
> while I'm in principle supportive of this idea, I think it's not going
> to be as easy as you might imagine. There are currently at least two
> mechanisms which rely on this crc UUID.
>
> 1. .gnu_debuglink separate file pointer
> .
> This is where the choice of the crc algorithm comes from.
>
> In short, this mechanism for debug info location works like this: The
> stripped file contains a .gnu_debuglink section.  The section contains
> a file path and a crc checksum. After reading this section the
> debugger is expected to look for the file at the given path, and then
> compute it's checksum to verify it is indeed the correct file (hasn't
> been modified).
>
> In LLDB, this is implemented somewhat differently. First we have a
> mechanism for assig

Re: [lldb-dev] The mysterious case of unsupported DW_FORMs

2018-08-08 Thread Leonard Mosescu via lldb-dev
Great. Thanks for the reminder, here's the llvm-dwarfdump bug




On Wed, Aug 8, 2018 at 7:11 AM,  wrote:

> I posted this suggestion on dwarf-discuss, where I would hope various tool
> maintainers are watching.
>
>
>
> The maintainer of dwarfdump says that a new version now "prints the type
> of compression and the compression factor for each compressed section
> that's being reported."  Very small sections (< 100 bytes) tend to get
> larger, but he says he's seeing nearly 10x compression factors on string
> sections.  Other sections are in between these extremes.
>
>
>
> Don't know about the GNU tools (readelf, objdump).
>
>
>
> Did you file a bug for llvm-dwarfdump?  I haven't noticed one.
>
> --paulr
>
>
>
> *From:* Leonard Mosescu [mailto:mose...@google.com]
> *Sent:* Thursday, August 02, 2018 12:53 PM
> *To:* Robinson, Paul
> *Cc:* LLDB
> *Subject:* Re: [lldb-dev] The mysterious case of unsupported DW_FORMs
>
>
>
> Thanks Paul! I have a fix for the LLDB handling of compressed sections in
> an upcoming change (together with improved logging). The email was mostly
> in case some other poor soul hit the same problem (until I get a chance to
> commit the fixes)
>
>
>
> (*) none of the tools bothered to make a note that the sections are
> compressed (SHF_COMPRESSED)
> That seems like a completely valid feature request.
> Again filing a bug would be the right tactic.  I'm willing to do this one
> for you, if you don't have a bugzilla account.
>
>
>
> I'll open a request for llvm-dwarfdump. If you happen to be in touch with
> the developers of the other tools (readelf, objdump, dwarfdump) feel free
> to forward them the notes.
>
>
>
>
>
>
>
> On Thu, Aug 2, 2018 at 7:21 AM,  wrote:
>
> Why weren't my local LLVM & LLDB builds able to decompress the sections?
> CMake!
>
>
>
> Remembering to delete CMakeCache.txt is usually the part I forget to do.
>
>
>
> LLDB tried to decompress, but when it failed to do so it carried on,
> later attempting to parse the compressed bytes as-is.
>
>
>
> A section that is compressed but can't be decompressed should be treated
> as corrupt/unparseable.  That seems like an LLDB bug; do you have an
> account on the project bugzilla?
>
>
>
> (*) none of the tools bothered to make a note that the sections are
> compressed (SHF_COMPRESSED)
>
>
>
> That seems like a completely valid feature request.  Again filing a bug
> would be the right tactic.  I'm willing to do this one for you, if you
> don't have a bugzilla account.
>
>
>
> Thanks,
>
> --paulr
>
>
>
>
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [GSoC] Re-implement lldb-mi on top of the LLDB public API

2018-08-13 Thread Leonard Mosescu via lldb-dev
Nice to see great progress in this area!

On Sun, Aug 12, 2018 at 2:49 PM, Александр Поляков via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hi LLVM folks,
>
> During this summer I was working on re-implementing of lldb-mi to
> correctly use LLDB public API. You are welcome to read my final report
> where I describe the contribution and challenges I faced with.
> Link to final report: https://apolyakov.github.io/GSoC-2018/
>
> --
> Alexander
>
>


Re: [lldb-dev] [GSoC] Re-implement lldb-mi on top of the LLDB public API

2018-08-13 Thread Leonard Mosescu via lldb-dev
Can you please list the missing MI commands? This would be very valuable to
both future contributors and also to the users of the LLDB MI. Thanks!

On Mon, Aug 13, 2018 at 11:28 AM, Александр Поляков 
wrote:

> Thank you, Leonard,
> I'm going to keep contributing to LLVM, so I think this is not the end!
>
> On Mon, Aug 13, 2018 at 8:15 PM Leonard Mosescu 
> wrote:
>
>> Nice to see great progress in this area!
>>
>> On Sun, Aug 12, 2018 at 2:49 PM, Александр Поляков via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>> Hi LLVM folks,
>>>
>>> During this summer I was working on re-implementing of lldb-mi to
>>> correctly use LLDB public API. You are welcome to read my final report
>>> where I describe the contribution and challenges I faced with.
>>> Link to final report: https://apolyakov.github.io/GSoC-2018/
>>>
>>> --
>>> Alexander
>>>
>>>
>>>
>>
>
> --
> Alexander
>


Re: [lldb-dev] [RFC] LLDB Reproducers

2018-09-19 Thread Leonard Mosescu via lldb-dev
Sounds like a fantastic idea.

How would this work when the behavior of the debugee process is
non-deterministic?

On Wed, Sep 19, 2018 at 6:50 AM, Jonas Devlieghere via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hi everyone,
>
> We all know how hard it can be to reproduce an issue or crash in LLDB.
> There
> are a lot of moving parts and subtle differences can easily add up. We
> want to
> make this easier by generating reproducers in LLDB, similar to what clang
> does
> today.
>
> The core idea is as follows: during normal operation we capture whatever
> information is needed to recreate the current state of the debugger. When
> something goes wrong, this becomes available to the user. Someone else
> should
> then be able to reproduce the same issue with only this data, for example
> on a
> different machine.
>
> It's important to note that we want to replay the debug session from the
> reproducer, rather than just recreating the current state. This ensures
> that we
> have access to all the events leading up to the problem, which are usually
> far
> more important than the error state itself.
>
> # High Level Design
>
> Concretely we want to extend LLDB in two ways:
>
> 1.  We need to add infrastructure to _generate_ the data necessary for
> reproducing.
> 2.  We need to add infrastructure to _use_ the data in the reproducer to
> replay
> the debugging session.
>
> Different parts of LLDB will have different definitions of what data they
> need
> to reproduce their path to the issue. For example, capturing the commands
> executed by the user is very different from tracking the dSYM bundles on
> disk.
> Therefore, we propose to have each component deal with its needs in a
> localized
> way. This has the advantage that the functionality can be developed and
> tested
> independently.
>
> ## Providers
>
> We'll call a combination of (1) and (2) for a given component a
> `Provider`. For
> example, we'd have a provider for user commands and a provider for dSYM
> files.
> A provider will know how to keep track of its information, how to
> serialize it
> as part of the reproducer as well as how to deserialize it again and use
> it to
> recreate the state of the debugger.
>
> With one exception, the lifetime of the provider coincides with that of the
> `SBDebugger`, because that is the scope of what we consider here to be a
> single
> debug session. The exception would be the provider for the global module
> cache,
> because it is shared between multiple debuggers. Although it would be
> conceptually straightforward to add a provider for the shared module cache,
> this significantly increases the complexity of the reproducer framework
> because
> of its implication on the lifetime and everything related to that.
>
> For now we will ignore this problem which means we will not replay the
> construction of the shared module cache but rather build it up during
> replaying, as if the current debug session was the first and only one
> using it.
> The impact of doing so is significant, as no issue caused by the shared
> module
> cache will be reproducible, but does not limit reproducing any issue
> unrelated
> to it.
>
> ## Reproducer Framework
>
> To coordinate between the data from different components, we'll need to
> introduce a global reproducer infrastructure. We have a component
> responsible
> for reproducer generation (the `Generator`) and for using the reproducer
> (the
> `Loader`). They are essentially two ways of looking at the same unit of
> replayable work.
>
> The Generator keeps track of its providers and whether or not we need to
> generate a reproducer. When a problem occurs, LLDB will request the
> Generator
> to generate a reproducer. When LLDB finishes successfully, the Generator
> cleans
> up anything it might have created during the session. Additionally, the
> Generator populates an index, which is part of the reproducer, and used by
> the
> Loader to discover what information is available.
>
> When a reproducer is passed to LLDB, we want to use its data to replay the
> debug session. This is coordinated by the Loader. Through the index
> created by
> the Generator, different components know what data (Providers) are
> available,
> and how to use them.
>
> It's important to note that in order to create a complete reproducer, we
> will
> require data from our dependencies (llvm, clang, swift) as well. This means
> that either (a) the infrastructure needs to be accessible from our
> dependencies
> or (b) that an API is provided that allows us to query this. We plan to
> address
> this issue when it arises for the respective Generator.
>
> # Components
>
> We have identified a list of minimal components needed to make reproducing
> possible. We've divided those into two groups: explicit and implicit
> inputs.
>
> Explicit inputs are inputs from the user to the debugger.
>
> -   Command line arguments
> -   Settings
> -   User commands
> -   Scripting Bridge API
>
> In addition to the com

Re: [lldb-dev] [RFC] LLDB Reproducers

2018-09-19 Thread Leonard Mosescu via lldb-dev
Great, thanks. This means that the lldb-server issues are not in scope for
this feature, right?

On Wed, Sep 19, 2018 at 10:09 AM, Jonas Devlieghere 
wrote:

>
>
> On Sep 19, 2018, at 6:49 PM, Leonard Mosescu  wrote:
>
> Sounds like a fantastic idea.
>
> How would this work when the behavior of the debugee process is
> non-deterministic?
>
>
> All the communication between the debugger and the inferior goes through
> the
> GDB remote protocol. Because we capture and replay this, we can reproduce
> without running the executable, which is particularly convenient when you
> were
> originally debugging something on a different device for example.
>
>

Re: [lldb-dev] Parsing Line Table to determine function prologue?

2018-10-08 Thread Leonard Mosescu via lldb-dev
>
> Even if we do need to parse the line table, could it be done just for the
> function in question?  The debug info tells us the function's address
> range, so is there some technical reason why it couldn't parse the line
> table only for the given address range?
>

My understanding is that there's one DWARF .debug_line "program" per CU,
and normally you'd need to "execute" the whole line number program.

On Sat, Oct 6, 2018 at 8:05 PM, Zachary Turner via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> While implementing native PDB support I noticed that LLDB is asking to
> parse an entire compile unit's line table in order to determine if 1
> address is a function prologue or epilogue.
>
> Is this necessary in DWARF-land?  It would be nice if I could just pass
> the prologue and epilogue byte size directly to the constructor of the
> lldb_private::Function object when I construct it.
>
> It seems unnecessary to parse the entire line table just to set a
> breakpoint by function name, but this is what ends up happening.
>
> Even if we do need to parse the line table, could it be done just for the
> function in question?  The debug info tells us the function's address
> range, so is there some technical reason why it couldn't parse the line
> table only for the given address range?
>
>
>


Re: [lldb-dev] [RFC] OS Awareness in LLDB

2018-10-31 Thread Leonard Mosescu via lldb-dev
Hi Alexander, are you interested in user-mode, kernel-mode debugging or
both?

For reference, the current state of the art in OS-awareness debugging is
Debugging Tools for Windows (windbg & co.). This is not surprising since
the tools were developed alongside Windows. Obviously they are specific
to Windows, but it's a good example of what OS-awareness might look like.


On Mon, Oct 29, 2018 at 11:37 AM, Alexander Polyakov via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hi lldb-dev,
>
> I'm a senior student at Saint Petersburg State University. The one of my
> possible diploma themes is "OS Awareness in LLDB". Generally, the OS
> awareness extends a debugger to provide a representation of the OS threads
> - or tasks - and other relevant data structures, typically semaphores,
> mutexes, or queues.
>
> I want to ask the community if OS awareness is interesting for LLDB users
> and developers? The main goal is to create some base on top of LLDB that
> can be extended to support awareness for different operating systems.
>
> Also, if you have a good article or other useful information about OS
> awareness, please share it with me.
>
> Thanks in advance!
>
> --
> Alexander
>
>
>


Re: [lldb-dev] [RFC] OS Awareness in LLDB

2018-10-31 Thread Leonard Mosescu via lldb-dev
Conceptually it's different levels of abstraction: a user-mode debugger
handles processes, threads as first class concepts. In kernel-mode (or
kernel land), these are just data structures that the code (the kernel) is
managing. From a more pragmatic perspective, the difference is in where the
debugging hooks are implemented and what interfaces are exposed (for
example a kernel mode debugger can normally "poke" around any piece of
memory and it has to be aware of things like VA mappings, while a user-mode
debugger is only allowed to control a limited slice of the system - ex.
control a sub-process through something like ptrace)

Unless you're specifically looking at kernel debugging I'd stay away from
that. For one thing, LLDB is mostly used as an user-mode debugger so the
impact of any improvements would be bigger.

Regarding the value of OS-awareness for user-mode debugging, I agree with
Zach - for example windbg provides both kernel-mode and user-mode !locks
commands. The only suggestion I'd add is to consider an expanded view of
the "OS" to include runtime components which may not be technically part of
what most people think of as the "OS": user-mode loaders and high level
things like std::mutex, etc.

On Wed, Oct 31, 2018 at 12:29 PM, Alexander Polyakov  wrote:

> Looks like I don't completely understand what is the difference between
> user-mode and kernel-mode from the debugger's point of view. Could you
> please explain me this?
>
> On Wed, Oct 31, 2018 at 10:22 PM Zachary Turner 
> wrote:
>
>> I don’t totally agree with this. I think there are a lot of useful os
>> awareness tasks in user mode. For example, you’re debugging a deadlock and
>> want to understand the state of other mutexes, who owns them, etc. or you
>> want to examine open file descriptors. In the case of a heap corruption you
>> may wish to study the internal structures of your process’s heap, or even
>> lower level, the os virtual memory page table structures.
>>
>> There’s quite a lot you can still do in user mode, but definitely there
>> is more in kernel mode. As Leonard said, try put WinDbg as a lot of this
>> stuff already exists so it’s a good reference
>> On Wed, Oct 31, 2018 at 12:08 PM Alexander Polyakov via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>> Hi Leonard,
>>>
>>> I think it will be kernel-mode debugging since debugging an application
>>> in user mode is not an OS awareness imo. Of course, some of kernel's
>>> modules might run in user-mode, but it will be ok I think.
>>>
>>> Thanks for your reference, I'll take a look at it.
>>>
>>> Also, I found out that ARM supports OS awareness in their DS-5 debugger.
>>> They have a mechanism for adding new operating systems. All you need to do
>>> is to describe OS' model (thread's or task's structure for example). I
>>> think that is how it might be done in LLDB.
>>>
>>> On Wed, Oct 31, 2018 at 9:26 PM Leonard Mosescu 
>>> wrote:
>>>
 Hi Alexander, are you interested in user-mode, kernel-mode debugging or
 both?

 Fore reference, the current state of the art regarding OS-awareness
 debugging is debugging tools for windows
  
 (windbg
 & co.). This is not surprising since the tools were developed alongside
 Windows. Obviously they are specific to Windows, but it's good example of
 how the OS-awareness might look like.


 On Mon, Oct 29, 2018 at 11:37 AM, Alexander Polyakov via lldb-dev <
 lldb-dev@lists.llvm.org> wrote:

> Hi lldb-dev,
>
> I'm a senior student at Saint Petersburg State University. The one of
> my possible diploma themes is "OS Awareness in LLDB". Generally, the OS
> awareness extends a debugger to provide a representation of the OS threads
> - or tasks - and other relevant data structures, typically semaphores,
> mutexes, or queues.
>
> I want to ask the community if OS awareness is interesting for LLDB
> users and developers? The main goal is to create some base on top of LLDB
> that can be extended to support awareness for different operating systems.
>
> Also, if you have a good article or other useful information about OS
> awareness, please share it with me.
>
> Thanks in advance!
>
> --
> Alexander
>
>
>

>>>
>>> --
>>> Alexander
>>>
>>
>
> --
> Alexander
>

Re: [lldb-dev] Debugging Python scripts (backtraces, variables) with LLDB

2018-11-20 Thread Leonard Mosescu via lldb-dev
Not strictly related to LLDB but you might find this interesting:
https://blogs.dropbox.com/tech/2018/11/crash-reporting-in-desktop-python-applications


On Tue, Nov 20, 2018 at 8:51 AM Alexandru Croitor via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hello,
>
> It's been a while since I asked this question on the mailing list ( 2~
> years ago).
>
> I am interested in what would be the current best way for implementing
> interleaved / mixed-mode Python backtraces when debugging the CPython
> interpreter.
>
> So if I run lldb -- python /path/to/my/script, set a breakpoint somewhere
> in the C code, and then do "bt", I would see a list of both C stack frames
> and Python stack frames, and if I do "frame select x" for a python frame, I
> could inspect the Python locals for instance.
>
> Last time I asked, Jim mentioned using a custom "thread provider". Would
> this still be the way to go?
>
> I also saw mentions of Java / Go support in the VCS log, but the support
> was removed due to no maintainers, so I don't know if that would also be
> the best way of doing it for Python.
>
> I would appreciate, if someone could point me to some relevant code that
> does something similar to what I'm asking, so I could use it as a base
> point for exploration.
>
> Many thanks.
>
> > On 8. Jul 2016, at 12:24, Alexandru Croitor 
> wrote:
> >
> > Thanks for replying, it's good to know what the status is at least, as
> well as how it's done in GDB.
> >
> >> On 06 Jul 2016, at 20:56, Jim Ingham  wrote:
> >>
> >> Nothing of this sort has been done to my knowledge, and I haven't heard
> of any plans to do so either.
> >>
> >> It should certainly be possible, you just need to grub the C stack and
> recognize the pattern of a Python stack frame in it and where said frame
> stashes away the arguments & locals, and then re-present it as a Python
> frame.  The SB API's should make that fairly straight forward.
> >>
> >> It looks like the Python work in gdb is based on a generic "frame
> filter" concept in the gdb Python API's.  That's something Greg and I
> talked about when working on gdb way back, and has been a future goal for
> lldb from the start, but it hasn't ever gotten beyond discussion to date.
> We already have the notion of a "thread provider" which allows the Mach
> Kernel plugin to present its activations as threads in lldb.  You could do
> much the same thing in lldb, where a thread would have the native unwind
> based stack frame and then pluggable StackFrame provider that would show
> different representations of the stack.
> >>
> >> If anybody is interested in taking on such a project, that would be
> very cool.
> >>
> >> Jim
> >>
> >>> On Jul 6, 2016, at 8:48 AM, Alexandru Croitor via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >>>
> >>> Hello,
> >>>
> >>> I've searched for information wether it is possible to debug a python
> script using LLDB, and haven't found anything so far.
> >>>
> >>> Specifically I'm interested in an LLDB counterpart to what GDB
> provides (the two main pages being
> https://wiki.python.org/moin/DebuggingWithGdb and
> http://fedoraproject.org/wiki/Features/EasierPythonDebugging ).
> >>>
> >>> So python stack traces, python values, etc.
> >>>
> >>> I assume this is not implemented, but are there any plans, or is it
> even feasible to implement?
> >>>
> >>> Regards, Alex.
> >>
> >
>
>


Re: [lldb-dev] Adding breakpad "symbol" file support

2018-12-03 Thread Leonard Mosescu via lldb-dev
Yay!

In case anyone is interested in the details, the Breakpad symbol format is
documented here:
https://chromium.googlesource.com/breakpad/breakpad/+/master/docs/symbol_files.md

On Mon, Dec 3, 2018 at 5:39 AM Pavel Labath via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hello all,
>
> I'd like to propose adding support for reading breakpad symbol files to
> LLDB.
>
> The breakpad files are textual files, which contain just enough
> information to produce a backtrace from a crash
> dump. This information includes:
>
> - UUID, architecture and name of the module
> - line tables
> - list of symbols
> - unwind information
>
> They are meant to complement the "minidump" files (already supported by
> lldb), which are the "core" files produced by breakpad when an
> application crashes.
>
> A minimal breakpad file could look like this:
> MODULE Linux x86_64 24B5D199F0F766FF5DC30 a.out
> INFO CODE_ID B52499D1F0F766FF5DC3
> FILE 0 /tmp/a.c
> FUNC 1010 10 0 _start
> 1010 4 4 0
> 1014 5 5 0
> 1019 5 6 0
> 101e 2 7 0
> PUBLIC 1010 0 _start
> STACK CFI INIT 1010 10 .cfa: $rsp 8 + .ra: .cfa -8 + ^
> STACK CFI 1011 $rbp: .cfa -16 + ^ .cfa: $rsp 16 +
> STACK CFI 1014 .cfa: $rbp 16 +
>
> Even though this data would normally be considered "symbol" information,
> in the current lldb infrastructure it is assumed every SymbolFile object
> is backed by an ObjectFile instance. So, in order to better interoperate
> with the rest of the code (particularly symbol vendors), I propose to
> also add an ObjectFileBreakpad class to access the breakpad file at a
> lower level. My tentative plan is to present the individual chunks
> of the breakpad file as ObjectFile , which can then be used by
> other parts of the codebase (SymbolFileBreakpad ?) to vend the necessary
> information.
>
> I have a preliminary patch (D55214), which adds the scaffolding
> necessary to recognise breakpad files as on object format, and parses
> the information in the breakpad header (i.e., it's UUID and
> architecture). The other parts are to be added later.
>
> Please let me know if you have any questions or concerns about this,
> pavel
>


Re: [lldb-dev] Object identities in the LLDB's C++ API

2019-01-29 Thread Leonard Mosescu via lldb-dev
Reviving this old thread and +Joshua Peraza, who's also interested in this.

On Wed, Dec 13, 2017 at 4:17 PM Leonard Mosescu  wrote:

> Thanks Greg,
>
> 1. Expose the opaque ptr as an opaque handle()
>  - this is an easy, quick and convenient solution for many SBxxx types
> but it may not work for all
>
> That would be nice, but that won't always work with how LLDB is currently
>> coded for SBFrame and possibly SBThread. These objects will be problems as
>> they can come and go and the underlying object isn't always the same even
>> through they lock onto the same logical object. SBThread and SBFrame have
>> "lldb::ExecutionContextRefSP m_opaque_sp" members. The execution context
>> reference is a class that contains weak pointers to the
>> lldb_private::Thread and lldb_private::StackFrame objects, but it also
>> contains the thread ID and frame ID so it can reconstitute the
>> value lldb_private::Thread and lldb_private::StackFrame even if the weak
>> pointer isn't valid. So the opaque handle will work for many objects but
>> not all.
>
>
> Indeed. One, relatively small but interesting benefit of the opaque handle
> type is that it opens the possibility of generic "handle maps" (I'll
> elaborate below)
>
> 2. Design and implement a consistent, first class
> identity/ordering/hashing for all the SBxxx types
>  - perhaps the most elegant and flexible approach, but also the most
> work
>
> I would be fine with adding new members to classes we know we want to hash
>> and order, like by adding:
>> uint32_t SB*::GetHash();
>> bool SB*::operator==(const SB*& ohs);
>> bool SB*::operator<(const SB*& ohs);
>> Would those be enough?
>
>
> I think so. If we use the standard containers as reference, technically we
> only need operator< to satisfy the Compare
> <http://en.cppreference.com/w/cpp/concept/Compare> concept. (also, a
> small nit - size_t would be a better type for the hash value). Also, both
> the hashing and the compare can be implemented as non-member functions (or
> even specializing std::hash, std::less for SBxxx types). A few minor
> concerns:
>
> a. if we keep things like SBModule::operator==() unchanged, it's not going
> to be the same as the equiv(a, b) for the case where a and b have null
> opaque pointers (not sure if this breaks anything, but I wouldn't want to
> be the first to debug a case where this matter)
> b. defining just the minimum set of operations may be technically enough
> but it may look a bit weird to have a type define < but none of the other
> relational operators.
> c. if some of the hash/compare implementation end up going through
> multiple layers (the execution context with thread, frame IDs example) the
> performance characteristics can be unpredictable, right?
>
>
> For context, the use case that brought this to my attention is managing a
> set of data structures that contain custom data associated with modules,
> frames, etc. It's easy to create, let's say a MyModule from a SBModule, but
> if later on I get the module for a particular frame, SBFrame::GetModule()
> will return a SBModule, which I would like to map to the corresponding
> MyModule instance. Logically this would require a SBModule -> MyModule map.
> The standard associative containers (map or unordered_map) would make this
> trivial if SBxxx types satisfy the key requirements.
>
> Another option for maintaining such a mapping, suggested by Mark Mentovai,
> is to use provision for an "user data" tag associated with every SBxxx
> object (this tag can simply be a void*, maybe wrapped with type safe
> accessors). This would be extremely convenient for the API users (since
> they don't have to worry about maintaining any maps themselves) but
> implementing it would hit the same complications around the synthesized
> instances (like SBFrame) and it may carry a small price - one pointer per
> SBxxx instance even if this facility is not used. I personally like this
> approach and in this particular case it has the additional benefit of being
> additive (we can graft it on with minimal risk of breaking existing stuff),
> although it still seems nice to have consistent identity semantics for the
> SBxxx types.
>
> On Wed, Dec 13, 2017 at 12:40 PM, Greg Clayton  wrote:
>
>>
>> On Dec 13, 2017, at 11:44 AM, Leonard Mosescu via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>> LLDB's C++ API deals with SBxxx objects, most of which are PIMPL-style
>> wrappers around an opaque pointer to the internal implementation. These
>> SBxxx objects act as handles and are passed/returned

Re: [lldb-dev] Needs help contributing to lldb-vscode.

2019-03-12 Thread Leonard Mosescu via lldb-dev
Greg, what do you think?


On Tue, Mar 12, 2019 at 11:50 AM Qianli Ma  wrote:

> Hi lldb community,
>
> I am currently working on a project related to lldb. I'd like to write a
> DAP RPC server similar to lldb-vscode.cc, but one that
> exports I/O to internal RPC clients. Doing so requires me to reuse some
> functions defined in lldb-vscode.cc.
> However as those functions are defined using forward declaration I am not
> able to do that.
>
> I'd like refactor the code a bit. More specifically, I'd like to extract
> all helper functions in lldb-vscode.cc into
> a separate file and create a header for it.  BTW, IMO it's good to make
> this lldb-vscode more general so that it can be used by other debugger
> frontends besides vscode.
>
> Please let me know WDYT and how I can proceed to submit changes for
> review.
>
> Thanks and Regards
> Qianli
>


Re: [lldb-dev] Symbol Server for LLDB

2019-03-25 Thread Leonard Mosescu via lldb-dev
Not exactly a full symbol server solution, but LLDB supports the GDB-style
symbol lookup (search for the build-ID notes and nn/.debug). This, together
with a simple NFS setup, can get you close to a Microsoft-style symbol store.

This blog post might be relevant too.

As Adrian hints, there's an interest in adding first class support for
symbol servers to LLDB.



On Mon, Mar 25, 2019 at 9:02 AM Adrian McCarthy via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Not currently (at least, not for the platforms I use primarily), but there
> is definitely interest in a symbol fetcher so there may be somebody working
> on it.
>
> On Sun, Mar 24, 2019 at 11:11 PM Murali Venu Thyagarajan via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> Hello,
>>
>> Is there a way to setup a symbol server for lldb just like how I could
>> setup a centralized and indexed symbol server for Windbg. Please let me
>> know.
>>
>> Thanks,
>> Murali
>>
>


Re: [lldb-dev] Symbol Server for LLDB

2019-03-25 Thread Leonard Mosescu via lldb-dev
For macOS & dSym there's already a specialized solution:
http://lldb.llvm.org/symbols.html

On Mon, Mar 25, 2019 at 11:37 AM Murali Venu Thyagarajan <
murali.thyagaraja...@gmail.com> wrote:

> Another question that I had is,
>
> Can this GDB style be inferred for MacOS packages? Will there be a
> build-ID in the package and the corresponding dSym package?
>
> Thanks,
> Murali
>
> On Mon, Mar 25, 2019 at 10:29 AM Murali Venu Thyagarajan <
> murali.thyagaraja...@gmail.com> wrote:
>
>> Thanks a lot Adrian and Leonard.
>>
>> I'm interested in setting up a local symbol server for my application
>> that is being built on MacOS. Pretty much like a indexed symbol server that
>> is used with Windows applications with Windbg.
>>
>> Thanks,
>> Murali
>>
>> On Mon, Mar 25, 2019 at 10:04 AM Leonard Mosescu 
>> wrote:
>>
>>> Not exactly a full symbol server solution, but LLDB supports the GDB-style
>>> symbol lookup (search for the build-ID notes and nn/.debug). This, together
>>> with a simple NFS setup, can get you close to a Microsoft-style symbol store.
>>>
>>> This blog post might be relevant too.
>>>
>>> As Adrian hints, there's an interest in adding first class support for
>>> symbol servers to LLDB.
>>>
>>>
>>>
>>> On Mon, Mar 25, 2019 at 9:02 AM Adrian McCarthy via lldb-dev <
>>> lldb-dev@lists.llvm.org> wrote:
>>>
 Not currently (at least, not for the platforms I use primarily), but
 there is definitely interest in a symbol fetcher so there may be somebody
 working on it.

 On Sun, Mar 24, 2019 at 11:11 PM Murali Venu Thyagarajan via lldb-dev <
 lldb-dev@lists.llvm.org> wrote:

> Hello,
>
> Is there a way to setup a symbol server for lldb just like how I could
> setup a centralized and indexed symbol server for Windbg. Please let me
> know.
>
> Thanks,
> Murali
>
>>>


[lldb-dev] The pre-built Windows LLDB binary has a dependency on an external python36.dll?

2019-11-20 Thread Leonard Mosescu via lldb-dev
I just installed the pre-built LLVM9 binaries on a Windows machine and I 
noticed that LLDB.exe imports from python36.dll. Was this an intentional change 
from LLVM8? (which doesn't depend on external python DLLs)

Trying to use the LLDB that comes with LLVM9, you'd get a pop-up complaining 
that python36.dll was not found (unless you happen to have one in your PATH) and 
LLDB fails to start.

Thanks,
Leonard.


---
This email message is for the sole use of the intended recipient(s) and may 
contain
confidential information.  Any unauthorized review, use, disclosure or 
distribution
is prohibited.  If you are not the intended recipient, please contact the 
sender by
reply email and destroy all copies of the original message.
---


Re: [lldb-dev] The pre-built Windows LLDB binary has a dependency on an external python36.dll?

2019-11-21 Thread Leonard Mosescu via lldb-dev
> What kind of behavior did you expect?

I could be wrong, but I thought that previous versions of LLDB would use
LoadLibrary() instead of linking to the import library?


From: Pavel Labath 
Sent: Wednesday, November 20, 2019 11:32 PM
To: Adrian McCarthy ; Leonard Mosescu 
Cc: lldb-dev@lists.llvm.org 
Subject: Re: [lldb-dev] The pre-built Windows LLDB binary has a dependency on 
an external python36.dll?

On 20/11/2019 23:53, Adrian McCarthy via lldb-dev wrote:
> That said, I didn't expect an explicit dependency on python36.dll.

What kind of behavior did you expect?

pl
