Re: [lldb-dev] All windows Mutex objects are recursive???

2016-05-12 Thread Tamas Berghammer via lldb-dev
We have already been using both std::mutex and std::condition_variable
in include/lldb/Utility/TaskPool.h for a while (since October), and nobody
has complained about it, so I think we can safely assume that all platforms
have the necessary STL support.
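
For reference, a minimal sketch of the std::mutex / std::condition_variable
pattern involved (illustrative only, not the actual TaskPool.h code):

    // Illustrative sketch only -- not the actual TaskPool.h implementation.
    #include <condition_variable>
    #include <mutex>
    #include <queue>

    class WorkQueue
    {
    public:
        void Push(int item)
        {
            {
                std::lock_guard<std::mutex> guard(m_mutex);
                m_items.push(item);
            }
            m_condition.notify_one(); // wake one waiting consumer
        }

        int Pop()
        {
            // std::condition_variable::wait() requires a std::unique_lock over a
            // plain (non-recursive) std::mutex.
            std::unique_lock<std::mutex> lock(m_mutex);
            m_condition.wait(lock, [this] { return !m_items.empty(); });
            int item = m_items.front();
            m_items.pop();
            return item;
        }

    private:
        std::mutex m_mutex;                  // non-recursive by default
        std::condition_variable m_condition;
        std::queue<int> m_items;
    };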

On Wed, May 11, 2016 at 11:44 PM Greg Clayton via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> It would be nice to get a patch that gets rid of Mutex.h/Mutex.cpp and
> switches over to using C++11 std::mutex/std::recursive_mutex, and that gets
> rid of Condition.h/Condition.cpp in favor of std::condition_variable. Then we
> can be more consistent. We need to make sure the C++ standard libraries are
> ready on all platforms first, though.
>
> Greg
>
> > On May 11, 2016, at 3:01 PM, Zachary Turner  wrote:
> >
> > I mean std::recursive_mutex is recursive
> >
> > On Wed, May 11, 2016 at 3:01 PM Zachary Turner 
> wrote:
> > Yes, eventually we should move to std::mutex and
> std::condition_variable, in which case it behaves as expected (std::mutex
> is non recursive, std::mutex is recursive).
> >
> >
> >
> > On Wed, May 11, 2016 at 2:20 PM Greg Clayton via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> > From lldb/source/Host/windows/Mutex.cpp:
> >
> >
> > Mutex::Mutex () :
> >     m_mutex()
> > {
> >     m_mutex = static_cast<LPCRITICAL_SECTION>(malloc(sizeof(CRITICAL_SECTION)));
> >     InitializeCriticalSection(static_cast<LPCRITICAL_SECTION>(m_mutex));
> > }
> >
> > //----------------------------------------------------------------------
> > // Default constructor.
> > //
> > // Creates a pthread mutex with "type" as the mutex type.
> > //----------------------------------------------------------------------
> > Mutex::Mutex (Mutex::Type type) :
> >     m_mutex()
> > {
> >     m_mutex = static_cast<LPCRITICAL_SECTION>(malloc(sizeof(CRITICAL_SECTION)));
> >     InitializeCriticalSection(static_cast<LPCRITICAL_SECTION>(m_mutex));
> > }
> >
> >
> > It also means that Condition.cpp doesn't act like its Unix counterpart,
> > since pthread_cond_t requires that wait be called with a non-recursive
> > mutex. I'm not sure what issues, if any, result from this, but I just
> > thought everyone should be aware.
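
For comparison, a sketch of what a POSIX implementation does with the "type"
argument (illustrative, not the exact lldb code): pthreads lets the caller
choose recursive vs. normal, while CRITICAL_SECTION is always recursive.

    // Sketch for comparison (not the exact lldb code): the "type" argument can
    // select a recursive or a normal (non-recursive) pthread mutex, whereas the
    // Windows version above ignores "type" entirely.
    #include <pthread.h>

    static void
    InitPthreadMutex(pthread_mutex_t *mutex, bool recursive)
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, recursive ? PTHREAD_MUTEX_RECURSIVE
                                                   : PTHREAD_MUTEX_NORMAL);
        pthread_mutex_init(mutex, &attr);
        pthread_mutexattr_destroy(&attr);
    }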
> >
> > Greg Clayton
> >
> > ___
> > lldb-dev mailing list
> > lldb-dev@lists.llvm.org
> > http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Listing memory regions in lldb

2016-05-12 Thread Howard Hellyer via lldb-dev
I'm working on a plugin for lldb and need to scan the memory of a crashed
process. Using the API to read chunks of memory and scan (via
SBProcess::Read*) for what I'm looking for is easy, but I haven't been able
to find a way to determine which address ranges are accessible. The
SBProcess::Read* calls will return an error on an invalid address, but that's
not an efficient way to scan a 64-bit address space.

This seems like it blocks simple tasks like scanning memory for blocks 
allocated with a header and footer to track down memory leaks, which is 
crude but traditional, and ought to be pretty quick to script via the 
Python API.
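
As a rough sketch of the kind of scan I mean, using the C++ SB API (the Python
calls are analogous); the header value and the [begin, end) range are
placeholders, since finding the valid ranges is exactly the missing piece:

    // Rough sketch only.  kHeaderMagic and the [begin, end) range are
    // hypothetical; determining which ranges are readable is the missing piece.
    #include "lldb/API/SBError.h"
    #include "lldb/API/SBProcess.h"
    #include <cstdio>
    #include <cstring>

    static void
    ScanRange(lldb::SBProcess &process, lldb::addr_t begin, lldb::addr_t end)
    {
        const uint32_t kHeaderMagic = 0xDEADBEEF; // hypothetical block header
        uint8_t buffer[4096];
        for (lldb::addr_t addr = begin; addr < end; addr += sizeof(buffer))
        {
            lldb::SBError error;
            size_t bytes_read = process.ReadMemory(addr, buffer, sizeof(buffer), error);
            if (error.Fail())
                continue; // unreadable address -- the inefficiency in question
            for (size_t i = 0; i + sizeof(kHeaderMagic) <= bytes_read; ++i)
            {
                if (memcmp(buffer + i, &kHeaderMagic, sizeof(kHeaderMagic)) == 0)
                    printf("possible block header at 0x%llx\n",
                           (unsigned long long)(addr + i));
            }
        }
    }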

At the moment I've resorted to running a python script prior to launching
my plugin that takes the output of "readelf --segments", /proc/<pid>/maps,
or "otool -l", but this isn't ideal. On the assumption that I'm not missing
something huge, I've looked at whether it is possible to extend LLDB to
provide this functionality, and it seems possible: there are checks
protecting calls to read memory that already use the data that would need to
be exposed. I'm working on a prototype implementation which I'd like to
contribute back at some stage, but before I go too far, does this sound like a
good idea?

Howard Hellyer
IBM Runtime Technologies, IBM Systems


___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LTO debug info

2016-05-12 Thread Greg Clayton via lldb-dev
Please do submit this for review, yes.

> On May 11, 2016, at 1:27 PM, Philippe Lavoie  
> wrote:
> 
> Thanks for the explanation and the patch.
> 
> I tried it and the exception is gone.
> The debug info also seems complete. As complete as can be for an LTO build...
> 
> I attached a modified patch with the calls to ExtractDIEsIfNeeded() on 
> separate threads.
> 
> Should I submit it for review ?
> 
> -Philippe
> 
> From: Greg Clayton [gclay...@apple.com]
> Sent: Tuesday, May 03, 2016 5:02 PM
> To: Philippe Lavoie
> Cc: lldb-dev@lists.llvm.org
> Subject: Re: [lldb-dev] LTO debug info
> 
> 
> > On May 3, 2016, at 6:35 AM, Philippe Lavoie  
> > wrote:
> > 
> > The code accessing another CU while indexing is in DWARFCompileUnit::GetDIE.
> > If the DIE is not in the current CU, it finds the CU that has it and calls 
> > its ::GetDIE method, accessing its m_die_array data... 
> > 
> > So the question is: is it a producer (CU should be self-contained) or a 
> > consumer (the indexing) bug ?
> 
> The indexing needs to be fixed. We probably need to start checking the
> DW_FORM of an attribute before we try to get the DIE for it. The problem
> seems to be in DWARFCompileUnit::IndexPrivate() where, when the tag is a
> DW_TAG_subprogram, it eventually does:
> 
> if (specification_die_form.IsValid())
> {
>     DWARFDIE specification_die =
>         dwarf_cu->GetSymbolFileDWARF()->DebugInfo()->GetDIE(DIERef(specification_die_form));
>     if (specification_die.GetParent().IsStructOrClass())
>         is_method = true;
> }
> 
> This is the only call to GetDIE in the entire function. It is getting the DIE
> so it can check who the parent is: if the parent of the function is a
> struct/class, the function is a method; otherwise the function is just a
> plain function named by its basename. So it seems like this should be OK to
> access other DIEs, as it isn't adding any information from the other DWARF
> file; it is just using it to check who the parent of the function
> (DW_TAG_subprogram) is.
> 
> So the real problem is that "ClearDIEs()" is being called too soon, before
> everyone is done using the DIEs. The reason this was being done is that you
> might have many compile units and we want to partially parse the DWARF
> and only use what we need; but if we index everything, we want to load the
> DIEs for a compile unit and clear them after indexing if they weren't loaded
> before we did the index, so we don't take up loads of memory.
> 
> So what we need to do to fix this is:
> 
> 1 - Run through all of the compile units first and see if each CU has the 
> DIEs parsed and remember this
> 2 - Index all compile units
> 3 - Clear any DIEs in any compile units that didn't have their DIEs parsed 
> _after_ all compile units have been indexed
> 
> A proposed patch is attached:
> 
> 
> 
> The main issue now is that all DWARF is parsed on the current thread. So this
> patch should be updated to run all of the ExtractDIEsIfNeeded() calls on
> separate threads, since this can be time consuming:
> 
> 
> //----------------------------------------------------------------------
> // First figure out which compile units didn't have their DIEs already
> // parsed and remember this.  If no DIEs were parsed prior to this index
> // function call, we are going to want to clear the CU dies after we
> // are done indexing to make sure we don't pull in all DWARF dies, but
> // we need to wait until all compile units have been indexed in case
> // a DIE in one compile unit refers to another and the indexes accesses
> // those DIEs.
> //----------------------------------------------------------------------
> for (uint32_t cu_idx = 0; cu_idx < num_compile_units; ++cu_idx)
> {
>     DWARFCompileUnit* dwarf_cu = debug_info->GetCompileUnitAtIndex(cu_idx);
>     if (dwarf_cu)
>     {
>         // dwarf_cu->ExtractDIEsIfNeeded(false) will return zero if the
>         // DIEs for a compile unit have already been parsed.
>         clear_cu_dies[cu_idx] = dwarf_cu->ExtractDIEsIfNeeded(false) > 1;
>     }
> }
> 
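
A sketch of how the per-CU ExtractDIEsIfNeeded() calls in the loop above could
be moved onto separate threads; illustrative only, using std::async for
brevity where the real patch would presumably use lldb's TaskPool:

    // Illustrative sketch: parallelize the ExtractDIEsIfNeeded() calls from the
    // loop above.  Requires <future> and <vector>.
    std::vector<std::future<bool>> futures;
    futures.reserve(num_compile_units);
    for (uint32_t cu_idx = 0; cu_idx < num_compile_units; ++cu_idx)
    {
        DWARFCompileUnit *dwarf_cu = debug_info->GetCompileUnitAtIndex(cu_idx);
        futures.push_back(std::async(std::launch::async, [dwarf_cu]() -> bool {
            // Same logic as the loop above: mark the CU for clearing if its
            // DIEs were not already parsed before this call.
            return dwarf_cu && dwarf_cu->ExtractDIEsIfNeeded(false) > 1;
        }));
    }
    for (uint32_t cu_idx = 0; cu_idx < num_compile_units; ++cu_idx)
        clear_cu_dies[cu_idx] = futures[cu_idx].get();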
> > 
> > //----------------------------------------------------------------------
> > // GetDIE()
> > //
> > // Get the DIE (Debug Information Entry) with the specified offset by
> > // first checking if the DIE is contained within this compile unit and
> > // grabbing the DIE from this compile unit. Otherwise we grab the DIE
> > // from the DWARF file.
> > //----------------------------------------------------------------------
> > DWARFDIE
> > DWARFCompileUnit::GetDIE (dw_offset_t die_offset)
> > {
> >     if (die_offset != DW_INVALID_OFFSET)
> >     {
> >         if (m_dwo_symbol_file)
> >             return m_dwo_symbol_file->GetCompileUnit()->GetDIE(die_offset);
> >
> >         if (ContainsDIEOffset(die_offset))
> >         {
> >             ExtractD

Re: [lldb-dev] Listing memory regions in lldb

2016-05-12 Thread Jim Ingham via lldb-dev
You should be able to enumerate the memory that is occupied by loaded
executables by getting the list of loaded Modules from the target and
iterating through all the Sections.  The Sections know their loaded locations.
I assume all the mapped ones will return a valid load address from
GetLoadBaseAddress, so you can distinguish the loaded and unloaded ones.  So
you shouldn't need "readelf --segments" or "otool -l"; lldb should know this.
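
Something like this, using the public SB API (where the equivalent of
GetLoadBaseAddress is SBSection::GetLoadAddress); a sketch only, and it walks
top-level sections without descending into sub-sections:

    // Sketch of enumerating loaded sections via the SB API.
    #include "lldb/API/SBModule.h"
    #include "lldb/API/SBSection.h"
    #include "lldb/API/SBTarget.h"
    #include <cstdio>

    static void
    DumpLoadedSections(lldb::SBTarget &target)
    {
        for (uint32_t m = 0; m < target.GetNumModules(); ++m)
        {
            lldb::SBModule module = target.GetModuleAtIndex(m);
            for (size_t s = 0; s < module.GetNumSections(); ++s)
            {
                lldb::SBSection section = module.GetSectionAtIndex(s);
                lldb::addr_t load_addr = section.GetLoadAddress(target);
                if (load_addr == LLDB_INVALID_ADDRESS)
                    continue; // section is not mapped into the process
                printf("[0x%llx - 0x%llx) %s\n",
                       (unsigned long long)load_addr,
                       (unsigned long long)(load_addr + section.GetByteSize()),
                       section.GetName());
            }
        }
    }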

You can scan the currently active stacks, but we don't currently know the
allocated stack extents, just what is being used.  It would be interesting to
know the actual stack extents, so you could search old stacks and know if
you are close to exhausting the stack of a thread.

We don't have either a generic API, or internal implementations, for getting 
all the mapped memory regions (or shared pages) of processes.  That would be 
quite useful.  IIRC ptr_refs does this by injecting some code into the target 
program that enumerates the regions.  Greg would know more about this.  Most 
systems provide some API to get at this that works cross-process, but that 
doesn't help debugging remotely.  So we either need to teach debugserver & 
lldb-server to do this, or use appropriate code injection.  The gdb-remote 
protocol has a query for the "memory map" of the process, though this is more 
tailored to identify things like memory mapped registers.  Still it might be 
possible to use this as well.

It would be nice to be able to separately query heap, executable & stack memory 
as well.  Though a properly annotated memory map would give you a way to do 
this, so that could be layered on top.

Jim

> On May 12, 2016, at 6:20 AM, Howard Hellyer via lldb-dev 
>  wrote:
> 
> I'm working on a plugin for lldb and need to scan the memory of a crashed 
> process. Using the API to read chunks of memory and scan (via 
> SBProcess::Read*) for what I'm looking for is easy but I haven't been able to 
> find a way to find which address ranges are accessible. The SBProcess::Read* 
> calls will return an error on an invalid address but it's not an efficient 
> way to scan a 64 bit address space. 
> 
> This seems like it blocks simple tasks like scanning memory for blocks 
> allocated with a header and footer to track down memory leaks, which is crude 
> but traditional, and ought to be pretty quick to script via the Python API. 
> 
> At the moment I've resorted to running a python script prior to launching my 
> plugin that takes the output of "readelf --segments", /proc/<pid>/maps or 
> "otool -l" but this isn't ideal. On the assumption that I'm not missing 
> something huge I've looked at whether it is possible to extend LLDB to 
> provide this functionality and it seems possible, there are checks protecting 
> calls to read memory that use the data that would need to be exposed. I'm 
> working on a prototype implementation which I'd like to deliver back at some 
> stage but before I go too far does this sound like a good idea? 
> Howard Hellyer
> IBM Runtime Technologies, IBM Systems 
> 
> 
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Listing memory regions in lldb

2016-05-12 Thread Greg Clayton via lldb-dev
We have a way internally with:

virtual Error
lldb_private::Process::GetMemoryRegionInfo (lldb::addr_t load_addr,
                                            MemoryRegionInfo &range_info);

This isn't exposed via the public API in lldb::SBProcess. If you want, you can
expose this. We would need to expose an SBMemoryRegionInfo and the call could be:

namespace lldb
{
    class SBProcess
    {
        SBError GetMemoryRegionInfo (lldb::addr_t load_addr,
                                     SBMemoryRegionInfo &range_info);
    };
}

Then you would call this API with address zero and it would return an
SBMemoryRegionInfo with an address range and whether that memory is
readable/writable/executable. On MacOSX we always have a page zero at address 0
for 64-bit apps, so it would respond with:

[0x0 - 0x1) read=false, write=false, execute=false

Then you call the function again with the end address of the previous range. 
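
Something along these lines; note that SBProcess::GetMemoryRegionInfo and
SBMemoryRegionInfo don't exist yet, so the accessor names below
(GetRegionBase, GetRegionEnd, IsReadable, ...) are just a guess at what the
new class might look like, and "process" is assumed to be your lldb::SBProcess:

    // Sketch of walking the whole address space with the *proposed* API.
    lldb::addr_t addr = 0;
    while (true)
    {
        lldb::SBMemoryRegionInfo region_info;
        lldb::SBError error = process.GetMemoryRegionInfo(addr, region_info);
        if (error.Fail())
            break; // ran off the end of the address space (or the query failed)
        printf("[0x%llx - 0x%llx) read=%s write=%s execute=%s\n",
               (unsigned long long)region_info.GetRegionBase(),
               (unsigned long long)region_info.GetRegionEnd(),
               region_info.IsReadable() ? "true" : "false",
               region_info.IsWritable() ? "true" : "false",
               region_info.IsExecutable() ? "true" : "false");
        if (region_info.GetRegionEnd() <= addr)
            break; // defensive: make sure we always advance
        addr = region_info.GetRegionEnd(); // continue from the end of this range
    }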

I would love to see this functionality exported through our public API. Let me 
know if you are up for making a patch. If you are, you might want to quickly 
read the following web page to see the rules that we apply to anything going 
into our public API:

http://lldb.llvm.org/SB-api-coding-rules.html


Greg

> On May 12, 2016, at 6:20 AM, Howard Hellyer via lldb-dev 
>  wrote:
> 
> I'm working on a plugin for lldb and need to scan the memory of a crashed 
> process. Using the API to read chunks of memory and scan (via 
> SBProcess::Read*) for what I'm looking for is easy but I haven't been able to 
> find a way to find which address ranges are accessible. The SBProcess::Read* 
> calls will return an error on an invalid address but it's not an efficient 
> way to scan a 64 bit address space. 
> 
> This seems like it blocks simple tasks like scanning memory for blocks 
> allocated with a header and footer to track down memory leaks, which is crude 
> but traditional, and ought to be pretty quick to script via the Python API. 
> 
> At the moment I've resorted to running a python script prior to launching my 
> plugin that takes the output of "readelf --segments", /proc/<pid>/maps or 
> "otool -l" but this isn't ideal. On the assumption that I'm not missing 
> something huge I've looked at whether it is possible to extend LLDB to 
> provide this functionality and it seems possible, there are checks protecting 
> calls to read memory that use the data that would need to be exposed. I'm 
> working on a prototype implementation which I'd like to deliver back at some 
> stage but before I go too far does this sound like a good idea? 
> Howard Hellyer
> IBM Runtime Technologies, IBM Systems 
> 
> 
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Listing memory regions in lldb

2016-05-12 Thread Jim Ingham via lldb-dev
Oh, that's a cute trick, but it relies on the (at least to me) non-obvious fact
that querying an address in an unmapped region will return the extents of the
unmapped region.  For it to be useful, that needs to be a requirement of the
API's implementation.

It seems to me it would be much clearer to have an API that returns the memory 
regions for you.

Jim



> On May 12, 2016, at 11:09 AM, Greg Clayton via lldb-dev 
>  wrote:
> 
> We have a way internally with:
> 
> virtual Error
> lldb_private::Process::GetMemoryRegionInfo (lldb::addr_t load_addr,
>                                             MemoryRegionInfo &range_info);
> 
> This isn't exposed via the public API in lldb::SBProcess. If you want, you can
> expose this. We would need to expose an SBMemoryRegionInfo and the call could
> be:
> 
> namespace lldb
> {
>     class SBProcess
>     {
>         SBError GetMemoryRegionInfo (lldb::addr_t load_addr,
>                                      SBMemoryRegionInfo &range_info);
>     };
> }
> 
> Then you would call this API with address zero and it would return an
> SBMemoryRegionInfo with an address range and whether that memory is
> readable/writable/executable. On MacOSX we always have a page zero at address
> 0 for 64-bit apps, so it would respond with:
> 
> [0x0 - 0x1) read=false, write=false, execute=false
> 
> Then you call the function again with the end address of the previous range. 
> 
> I would love to see this functionality exported through our public API. Let 
> me know if you are up for making a patch. If you are, you might want to 
> quickly read the following web page to see the rules that we apply to 
> anything going into our public API:
> 
> http://lldb.llvm.org/SB-api-coding-rules.html
> 
> 
> Greg
> 
>> On May 12, 2016, at 6:20 AM, Howard Hellyer via lldb-dev 
>>  wrote:
>> 
>> I'm working on a plugin for lldb and need to scan the memory of a crashed 
>> process. Using the API to read chunks of memory and scan (via 
>> SBProcess::Read*) for what I'm looking for is easy but I haven't been able 
>> to find a way to find which address ranges are accessible. The 
>> SBProcess::Read* calls will return an error on an invalid address but it's 
>> not an efficient way to scan a 64 bit address space. 
>> 
>> This seems like it blocks simple tasks like scanning memory for blocks 
>> allocated with a header and footer to track down memory leaks, which is 
>> crude but traditional, and ought to be pretty quick to script via the Python 
>> API. 
>> 
>> At the moment I've resorted to running a python script prior to launching my 
>> plugin that takes the output of "readelf --segments", /proc/<pid>/maps or 
>> "otool -l" but this isn't ideal. On the assumption that I'm not missing 
>> something huge I've looked at whether it is possible to extend LLDB to 
>> provide this functionality and it seems possible, there are checks 
>> protecting calls to read memory that use the data that would need to be 
>> exposed. I'm working on a prototype implementation which I'd like to deliver 
>> back at some stage but before I go too far does this sound like a good idea? 
>> Howard Hellyer
>> IBM Runtime Technologies, IBM Systems
>> 
>> 
>> ___
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
> 
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Push work-in-progress plugin for Process NetBSD?

2016-05-12 Thread Kamil Rytarowski via lldb-dev
I keep locally almost 5k lines of code of the process plugin for NetBSD.

It's still not functional (there are bugs), but it has all or mostly all
of the code needed for amd64. Is it fine to push it upstream and
continue development against the version in-tree?

My code is based on the FreeBSD plugin, with features that are unsupported or
missing in the current version of NetBSD removed.

It will be easier for me to keep it in sync with HEAD, and it should be
usable for other teams to take into account in further changes.


___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Push work-in-progress plugin for Process NetBSD?

2016-05-12 Thread Zachary Turner via lldb-dev
I think it's fine, but I'll let others comment on that too.

But I will say, if you do this, please make sure it's clang-formatted.
On Thu, May 12, 2016 at 6:23 PM Kamil Rytarowski via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> I keep locally almost 5k lines of code of the process plugin for NetBSD.
>
> It's still not functional (there are bugs), but it has all or mostly all
> of the code needed for amd64. Is it fine to push it upstream and
> continue development against the version in-tree?
>
> My code is based on the FreeBSD plugin, with features that are unsupported or
> missing in the current version of NetBSD removed.
>
> It will be easier for me to keep it in sync with HEAD and should be
> usable for other teams to take it into account in further changes.
>
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev