On 21 December 2017 at 12:29, xgsa <x...@yandex.ru> wrote:
> 21.12.2017, 13:45, "Pavel Labath via lldb-dev" <lldb-dev@lists.llvm.org>:
>> On 20 December 2017 at 18:40, Greg Clayton <clayb...@gmail.com> wrote:
>>>>  On Dec 20, 2017, at 3:33 AM, Pavel Labath <lab...@google.com> wrote:
>>>>
>>>>  On 19 December 2017 at 17:39, Greg Clayton via lldb-dev
>>>>  <lldb-dev@lists.llvm.org> wrote:
>>>>>  The apple accelerator tables are only enabled for Darwin targets, but
>>>>>  there is nothing to say we couldn't enable these for other targets in
>>>>>  ELF files. It would be a quick way to gauge the performance improvement
>>>>>  that these accelerator tables provide for Linux.
>>>>
>>>>  I was actually experimenting with this last month. Unfortunately, I've
>>>>  learned that the situation is not as simple as flipping a switch in
>>>>  the compiler. In fact, there is no switch to flip as clang will
>>>>  already emit the apple tables if you pass -glldb. However, the
>>>>  resulting tables will be unusable due to the differences in how dwarf
>>>>  is linked on elf vs mach-o. In elf, we have the linker concatenate the
>>>>  debug info into the final executable/shared library, which it will
>>>>  also happily do for the .apple_*** sections.
>>>
>>>  That ruins the whole idea of the accelerator tables if they are
>>>  concatenated...
>>
>> I'm not sure I'm convinced by that. I mean, obviously it's better if
>> you have just a single table to look up, but even if you have multiple
>> tables, looking up into each one may be faster than indexing the full
>> debug info yourself. Take liblldb for example. It has ~3000 compile
>> units and nearly 2GB of debug info. I don't have any solid data on
>> this (and it would certainly be interesting to make this experiment),
>> but I expect that doing 3000 hash lookups (which are basically just
>> array accesses) would be faster than indexing 2GB of dwarf (where you
>> have to deal with variable-sized fields and uleb encodings...). And
>> there is always the possibility to do the lookups in parallel or merge
>> the individual tables inside the debugger.
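The per-CU lookup argument above can be sketched roughly as follows. This is a toy Python model, not lldb's actual data structures: each compile unit's accelerator table is modeled as a plain dict mapping a symbol name to DIE offsets, and the names and offsets are invented for illustration. The point it demonstrates is that a lookup costs one hash probe per table, independent of the total size of the debug info.

```python
# Toy model (NOT lldb code): one accelerator table per compile unit,
# each mapping a symbol name to a list of DIE offsets.

def lookup(name, per_cu_tables):
    """Query every compile unit's table. Each query is an O(1) hash
    probe, so the total cost scales with the number of CUs, not with
    the number of bytes of DWARF."""
    results = []
    for table in per_cu_tables:
        results.extend(table.get(name, []))
    return results

# Invented example data: three CUs' worth of tables.
cu_tables = [
    {"main": [0x2A], "helper": [0x90]},
    {"helper": [0x110]},
    {"Foo::bar": [0x200]},
]
```

With ~3000 such tables, a lookup is ~3000 hash probes, which is also trivially parallelizable across tables.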
>>
>>>>  The second, more subtle problem I see is that these tables are an
>>>>  all-or-nothing event. If we see an accelerator table, we assume it is
>>>>  an index of the entire module, but that's not likely to be the case,
>>>>  especially in the early days of this feature's uptake. You will have
>>>>  people feeding the linkers with output from different compilers, some
>>>>  of which will produce these tables, and some not. Then the users will
>>>>  be surprised that the debugger is ignoring some of their symbols.
>>>
>>>  I think it is best to auto-generate the tables from the DWARF directly
>>>  after it has all been linked. Skip teaching the linker about merging
>>>  them; just teach it to generate them.
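Greg's "generate after linking" idea amounts to a single pass over the fully linked debug info that produces one merged, module-wide name table. A minimal sketch, assuming a made-up representation of named DIEs as (offset, name) pairs (real DWARF parsing is of course far more involved):

```python
# Hedged sketch of post-link index generation (not the linker's actual
# logic). The DIE representation is invented for illustration.

def build_index(dies):
    """dies: iterable of (die_offset, name) pairs for all named DIEs in
    the fully linked .debug_info. Returns one module-wide hash index
    mapping each name to every DIE offset that defines it."""
    index = {}
    for offset, name in dies:
        index.setdefault(name, []).append(offset)
    return index
```

Because the input is the already-linked DWARF, duplicate names from different compile units naturally merge into one entry, which is the property a per-CU concatenation lacks.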
>>
>> If the linker does the full generation, then how is that any better
>> than doing the indexing in the debugger? Somebody still has to parse
>> the entire dwarf, so it might as well be the debugger.
>
> I suppose the difference is that the linker does it one time, while the
> debugger has to do it every time on startup, as the results are not saved
> anywhere (or are they?). So instead of having the compiler build accelerator
> tables for the debugger, perhaps the debugger should save its own indexes
> somewhere (e.g. in a cache file near the binary)? Or is there already such a
> mechanism and I just don't know about it?

Currently the indexes aren't saved, but that is exactly where I was
going with this. We *could* save this index (we already cache
downloaded remote object files in ~/.lldb/module_cache, we could just
put this next to it) and reuse it for the subsequent debug sessions.
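One possible shape for such a cache (purely hypothetical; as stated above, lldb does not save these indexes today): key the cached index by a content hash of the binary, so that a rebuilt binary never picks up a stale index, and fall back to the expensive full DWARF scan only on a cache miss. All names here are invented for illustration.

```python
# Hypothetical index cache, stored alongside something like
# ~/.lldb/module_cache. Not an existing lldb mechanism.

import hashlib
import json
import os

def cache_path(binary_path, cache_dir):
    # Key the cache entry by the binary's content hash so a stale
    # index can never be reused after a rebuild.
    with open(binary_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return os.path.join(cache_dir, digest + ".index.json")

def load_or_build(binary_path, cache_dir, build_fn):
    """Return the symbol index for binary_path, reusing a cached copy
    from a previous debug session when one exists."""
    path = cache_path(binary_path, cache_dir)
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)          # cheap: reuse prior session's work
    index = build_fn(binary_path)        # expensive: full DWARF scan
    os.makedirs(cache_dir, exist_ok=True)
    with open(path, "w") as f:
        json.dump(index, f)
    return index
```

The first debug session pays the full indexing cost; every subsequent session on the same binary loads the saved index instead.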
_______________________________________________
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
