Re: [lldb-dev] FW: LLDB: Unwinding based on Assembly Instruction Profiling

2015-10-30 Thread Abhishek Aggarwal via lldb-dev
Hi Jason

Thanks a lot for the detailed information. I am sorry for posting my
queries a bit late. Here are a few things that I want to ask.

When eh_frame includes an epilogue description as well, the assembly
profiler doesn't need to augment it. In this case, is the eh_frame
augmented unwind plan used as the Non Call Site Unwind Plan, or is the
assembly-based unwind plan used? I checked the
FuncUnwinders::GetUnwindPlanAtNonCallSite() function. When there is
nothing to augment in the eh_frame unwind plan,
GetEHFrameAugmentedUnwindPlan() returns nullptr and the assembly
unwind plan is used as the Non Call Site Unwind Plan. Is this the
expected behavior?

Regarding your comment on gcc producing "asynchronous unwind tables": do
you mean that gcc is not really producing asynchronous unwind tables,
since it keeps only *some* async unwind instructions and not all of them?

Abhishek


> Hi all, sorry I missed this discussion last week, I was a little busy.
>
> Greg's original statement isn't correct -- about a year ago Tong Shen changed 
> lldb to using eh_frame for the currently-executing frame.  While it is true 
> that eh_frame is not guaranteed to describe the prologue/epilogue, in 
> practice eh_frame always describes the prologue (gdb wouldn't work 
> without this, with its much more simplistic unwinder).  Newer gcc's also 
> describe the epilogue.  clang does not (currently) describe the epilogue.  
> Tong's changes *augment* the eh_frame with an epilogue description if it 
> doesn't already have one.
>
> gcc does have an "asynchronous unwind tables" option -- "asynchronous" 
> meaning the unwind rules are defined at every instruction location.  But the 
> last time I tried it, it did nothing.  They've settled on an unfortunate 
> middle ground where eh_frame (which should be compact and only describe 
> enough for exception handling) has *some* async unwind instructions.  And the 
> same unwind rules are emitted into the debug_frame section, even if 
> -fasynchronous-unwind-tables is used.
>
> In the ideal world, eh_frame should be extremely compact and only sufficient 
> for exception handling.  debug_frame should be extremely verbose and describe 
> the unwind rules at all unwind locations.
>
> As Tamas says, there's no indication in eh_frame or debug_frame as to how 
> much is described:  call-sites only (for exception handling), call-sites + 
> prologue, call-sites + prologue + epilogue, or fully asynchronous.  It's a 
> drag; if the DWARF committee ever has enough reason to break open the 
> debug_frame format for some other changes, I'd like to get more information 
> in there.
>
>
> Anyway, point is, we're living off of eh_frame (possibly "augmented") for the 
> currently-executing stack frame these days.  lldb may avoid using the 
> assembly unwinder altogether in an environment where it finds eh_frame unwind 
> instructions for every stack frame.
>
>
> (on Mac, we've switched to a format called "compact unwind" -- much like the 
> ARM unwind info that Tamas recently added support for, this is an extremely 
> small bit of information which describes one unwind rule for the entire 
> function.  It is only applicable for exception handling; it has no way to 
> describe prologues/epilogues.  compact unwind is two 4-byte words per 
> function.  lldb will use compact unwind / ARM unwind info for the non-zeroth 
> stack frames.  It will use its assembly instruction profiler for the 
> currently-executing stack frame.)
>
> Hope that helps.
>
> J
>
>
>> On Oct 15, 2015, at 2:56 AM, Tamas Berghammer via lldb-dev 
>>  wrote:
>>
>> If we are trying to unwind from a non call site (frame 0 or a signal handler) 
>> then the current implementation first tries to use the non call site unwind 
>> plan (usually assembly emulation), and if that one fails it will fall 
>> back to the call site unwind plan (eh_frame, compact unwind info, etc.) 
>> instead of falling back to the architecture default unwind plan, because it 
>> should be a better guess in general, and we usually fail with the assembly 
>> emulation based unwind plan for hand written assembly functions, where 
>> eh_frame is usually valid at all addresses.
>>
>> Generating asynchronous eh_frame (valid at all addresses) is possible with gcc 
>> (I am not sure about clang) but there is no way to tell if a given eh_frame 
>> inside an object file is valid at all addresses or only at call sites. The 
>> best approximation we can make is to say that each eh_frame entry is 
>> valid only at the address it specifies as its start address, but we don't 
>> make use of that in LLDB at the moment.
>>
>> For the 2nd part of the original question, I think changing the eh_frame 
>> based unwind plan after a failed unwind using instruction emulation is only 
>> a valid option for the PC we tried to unwind from, because the assembly 
>> based unwind plan could be valid at other parts of the function. Making the 
>> change for that one concrete PC address would make sense, but have practically 
>> no effect because the

Re: [lldb-dev] Inquiry for performance monitors

2016-02-04 Thread Abhishek Aggarwal via lldb-dev
Hello Pavel

As per my understanding, if, instead of using expression evaluation, the
code (to enable PT and gather the raw traces) is written on the
lldb-server side, then lldb-server will still have to wait for the
inferior to stop before it can encapsulate all the traces in packets and
send them to the client for analysis.

Is it possible for the client to request that lldb-server send it part
of the raw traces while the inferior is still running?

- Abhishek

On Thu, Feb 4, 2016 at 1:32 PM, Ravitheja Addepally via lldb-dev
 wrote:
> Yes, thanx for the clarification.
>
> On Thu, Feb 4, 2016 at 11:24 AM, Pavel Labath  wrote:
>>
>> On 4 February 2016 at 10:04, Ravitheja Addepally
>>  wrote:
>> > Hello Pavel,
>> > In the case of expression evaluation approach you
>> > mentioned
>> > that:
>> > 1. The data could be accessible only when the target is stopped. why is
>> > that
>> > ?
>> If I understand the approach correctly, the idea is to run all perf
>> calls as expressions in the debugger. Something like
>> lldb> expr perf_event_open(...)
>> We need to stop the target to be able to do something like that, as we
>> need to fiddle with its registers. I don't see any way around that...
>>
>> > 2. What sort of noise were you referring to ?
>> Since now all the perf calls will be expressions executed within the
>> context of the process being traced, they themselves will show up in
>> the trace. I am sure we could filter that out somehow, but it feels
>> like an added complication..
>>
>> Does that make it any clearer?
>>
>> pl
>
>
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>


Re: [lldb-dev] Inquiry for performance monitors

2016-02-05 Thread Abhishek Aggarwal via lldb-dev
Hi Greg

Please find any answers/queries inlined:

On Thu, Feb 4, 2016 at 9:58 PM, Greg Clayton via lldb-dev <
lldb-dev@lists.llvm.org> wrote:
>
>> On Feb 4, 2016, at 2:24 AM, Pavel Labath via lldb-dev <
lldb-dev@lists.llvm.org> wrote:
>>
>> On 4 February 2016 at 10:04, Ravitheja Addepally
>>  wrote:
>>> Hello Pavel,
>>>In the case of expression evaluation approach you
mentioned
>>> that:
>>> 1. The data could be accessible only when the target is stopped. why is
that
>>> ?
>> If I understand the approach correctly, the idea is to run all perf
>> calls as expressions in the debugger. Something like
>> lldb> expr perf_event_open(...)
>> We need to stop the target to be able to do something like that, as we
>> need to fiddle with its registers. I don't see any way around that...
>>
>>> 2. What sort of noise were you referring to ?
>> Since now all the perf calls will be expressions executed within the
>> context of the process being traced, they themselves will show up in
>> the trace. I am sure we could filter that out somehow, but it feels
>> like an added complication..
>>
>> Does that make it any clearer?
>
> So a few questions: people seem worried about running something in the
process if expressions are being used. Are you saying that if the process is
on the local machine, process 1 can just open up a file descriptor to the
trace data for process 2? If so, why pass this through lldb-server?

As you have also mentioned later in your email, irrespective of which
approach we use to implement this feature, we will have to send the trace
data from lldb-server to the client in the case of remote debugging.
Moreover, even for local debugging, the current architecture of lldb is a
client-server architecture (at least for macosx, linux and freebsd) as far
as I know. Hence, traces will have to be sent in the form of packets from
server to client even for the expression evaluation approach.

> I am not a big fan of making the lldb-server the conduit for a ton of
information. It just isn't built for such high volumes of data coming in.
It can be done, but that doesn't mean it should.  If everyone starts
passing data like memory usage, CPU time, trace info, backtraces and more
asynchronously through lldb-server, it will become a very crowded
communication channel.
>
As per my understanding, one of the differences the expression evaluation
approach introduces is that it disallows sending traces from server to
client asynchronously (as traces can't be sent until the inferior stops).
If an increased number of asynchronous packets is the concern here, then we
can choose to send the trace data only synchronously (i.e. only after the
inferior stops). Or can't we?

> You don't need python if you want to do this using the lldb API. If your
IDE is already linking against the LLDB shared library, it can just run the
expressions using the public LLDB API. This is how view debugging is
implemented in Xcode. It runs complex expressions that gather all data
about a view and its subviews and returns all the layers in a blob of data
that can be serialized by the expression, retrieved by Xcode (memory read
from the process), and then de-serialized by the IDE into a format that can
be used. If your IDE can access the trace data for another process, why not
just read it from the IDE itself? Why get the lldb-server involved? Granted
the remote debugging parts of this make an argument for including it in the
lldb-server. But if you go this route you need to make a base
implementation for trace data that will work for any trace data, have trace
data plug-ins that somehow know how to interpret the data and provide.
>
Thanks for suggesting this.

> How do you say "here is a blob of trace data I just got from some
process, go find me a plug-in that can parse it"? You might have to say
"here is a blob of data" and it is for the "intel" trace data plug-in. How
are we going to know which trace data to ask for? Is lldb-server going to
reply to "qGetTraceData" with something that says the type of data is
"intel-IEEE-version-123.3.1" and the data is "xxx"?
Then we would find a plug-in in LLDB for that trace data that can parse it?
So you will need to think about completely abstracting the whole notion of
trace data into some sensible API that gets exposed via SBProcess.
>
We need to think a bit more on this.

> So yes, there are two approaches to take. Let me know which one is the
way you want to go. But I really want to avoid the GDB remote protocol's
async packets becoming the conduit for a boat load of information.
>
In order to configure/start/finish the tracing feature, a lot of expression
evaluations will have to be done (at least perf_event_open(), mmap() and
perf_event_close() are the ones I know of). The main reason I am skeptical
of the expression evaluation approach is the amount of extra packets to be
sent to the lldb-server to configure/start/finish tracing. Hence, I am more
in favor of writing the code to config

[lldb-dev] Development of a Plugin to be loaded in LLDB from external library

2016-04-26 Thread Abhishek Aggarwal via lldb-dev
Hi everyone

There have been previous discussions in this mailing list regarding enabling
Intel(R) Processor Trace collection in LLDB. New APIs are being developed
to be added to the SB APIs that will provide raw traces (collected on the
lldb-server side). These APIs are trace-technology independent and hence
can work for other tracing technologies as well. The decoding of the raw
traces can be done outside LLDB. For details you can refer to the thread
with the subject "Review of API and remote packets" started on March 31,
2016.

I am working on developing a plugin that will use these new APIs to enable
Intel(R) Processor Trace technology and collect raw trace data for the
inferior being debugged by LLDB. The plugin will perform decoding on the
trace data to present it as meaningful information to the user of the LLDB
debugger. I want to use this plugin through the LLDB CLI. I have a few
questions regarding the development of this plugin:

1. What is the best way to develop this plugin? Should it be done as shown
in "examples/plugins/commands/fooplugin.cpp" (i.e. a C++ based solution
using the 'plugin load ' command) or should I go for a
Python based solution to add new commands using Python functions?

2. I am planning to upstream this developed plugin to the LLDB public
repository once the development is finished. Any user who wants to use
Intel(R) Processor Trace will be able to do so by compiling this plugin and
loading it via the LLDB CLI as an external library. What would be the ideal
location to place this plugin in the LLDB repository? I could think of the
'tools' folder.


- Abhishek


Re: [lldb-dev] Development of a Plugin to be loaded in LLDB from external library

2016-04-27 Thread Abhishek Aggarwal via lldb-dev
Hi Greg

My comments are inlined.

On Tue, Apr 26, 2016 at 7:03 PM, Greg Clayton  wrote:

>
> > On Apr 26, 2016, at 1:50 AM, Abhishek Aggarwal via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >
> > Hi everyone
> >
> > There has been previous discussions in this mailing list regarding
> Enabling Intel(R) Processor Trace collection in LLDB. A new APIs are being
> developed to be added to SB APIs that will provide raw traces (collected on
> lldb-server side). These APIs are Trace technology independent and hence
> can work for other Tracing technologies also. The decoding of the raw
> traces can be done outside LLDB. For details you can refer to the thread
> with the subject "Review of API and remote packets" started on March 31,
> 2016.
> >
> > I am working on developing a Plugin that will use these new APIs to
> enable Intel(R) Processor Trace technology and collect raw trace data for
> the inferior being debugged by LLDB. The plugin will perform decoding on
> the trace data to present it as a meaningful information to the user of
> LLDB Debugger. I want to use this plugin through LLDB CLI. I have few
> questions regarding development of this plugin:
> >
> > 1. What is the best way to develop this plugin? Should it be done as
> shown in "examples/plugins/commands/fooplugin.cpp" ( i.e. a C++ based
> solution and using 'plugin load ' command) or should I
> go for Python based solution to add new commands using Python functions?
>
> I personally would add a new folder for all trace plug-ins:
>
> $(trunk)/source/Plugins/Trace
>
> Then add one for the Intel trace:
>
> $(trunk)/source/Plugins/Trace/Intel
>
> We should then add a new "trace" multi-word command into that would get
> added into our commands in CommandInterpreter::LoadCommandDictionary():
>
> m_command_dict["trace"] = CommandObjectSP (new
> CommandObjectMultiwordTrace (*this));
>
> This command would then have sub commands like "start", "stop", and any
> other commands we would need in order to display trace data.
>
>
Probably, I shouldn't have used the word 'Plugin'. I want to develop an
external 'tool' that will link to liblldb.so just to extract the raw trace
data using the SB API of LLDB (SBProcess::StartTrace(SBTraceOptions &options))
for the debugged inferior. The tool will then process this raw data to
convert and show it as meaningful information to the user. I am planning
to keep it external, without compiling it into LLDB (the decision to keep
it external resulted from the previous discussions on this mailing list,
as people were skeptical of compiling it into LLDB because of its
dependency on the Intel(R) 'Processor Trace Decoding Library'). Making this
tool a part of the LLDB repository, but not compiling it into LLDB, will
enable users of this tool to compile it separately and use it without
affecting anyone who doesn't want to use it.

I want to enable the user of this tool to use it through LLDB CLI. For this
purpose, I will have to provide some user-defined commands once this tool
is loaded in LLDB externally via 'plugin load ' command. I
am referring to $(trunk)/examples/plugins/commands/fooplugin.cpp file for
the implementation.

The function lldb::PluginInitialize(lldb::SBDebugger debugger) will look
something like this:

{
    lldb::SBCommandInterpreter interpreter = debugger.GetCommandInterpreter();
    // replaced 'foo' from fooplugin.cpp with 'processor-trace'
    lldb::SBCommand processor_trace =
        interpreter.AddMultiwordCommand("processor-trace", NULL);
    // replaced 'child' with 'start'
    processor_trace.AddCommand("start", new StartCommand(),
        "configures intel processor trace & starts tracing the inferior");
}

(The C++ variable is named processor_trace because a hyphen is not valid in
an identifier; only the command string itself is "processor-trace".)

interpreter.AddMultiwordCommand("processor-trace", NULL) will add the
'processor-trace' multiword command object via
CommandInterpreter::AddUserCommand() to the 'm_user_dict' data member,
like this:

    m_user_dict["processor-trace"] = cmd_sp;

In order to load this tool externally (as a shared library) in LLDB, I
believe the user-defined commands will go to 'm_user_dict' and not
'm_command_dict' as m_command_dict represents basic built-in commands of
LLDB.



> > 2. I am planning to upstream this developed plugin in LLDB public
> repository once the development is finished. Any user that wants to use
> Intel(R) Processor Trace will be able to do so by compiling this plugin and
> loading it via LLDB CLI as an external library. What should be the ideal
> location to place this plugin in LLDB repository? I could think of the
> 'tools' folder.
>
> All plug-ins right now are all in

[lldb-dev] Developing a Plugin to be loaded in LLDB from external shared libs

2016-05-02 Thread Abhishek Aggarwal via lldb-dev
Hi everyone

There have been previous discussions in this mailing list regarding enabling
Intel(R) Processor Trace collection in LLDB. New APIs are being developed
to be added to the SB APIs that will provide raw traces (collected on the
lldb-server side). These APIs are trace-technology independent and hence
can work for other tracing technologies as well. The decoding of the raw
traces can be done outside LLDB. For details you can refer to the thread
with the subject "Review of API and remote packets" started on March 31,
2016.

I am working on developing a plugin that will use these new APIs to enable
Intel(R) Processor Trace technology and collect raw trace data for the
inferior being debugged by LLDB. The plugin will perform decoding on the
trace data to present it as meaningful information to the user of the LLDB
debugger. I want to use this plugin through the LLDB CLI. I have a few
questions regarding the development of this plugin:

1. What is the best way to develop this plugin? Should it be done as shown
in "examples/plugins/commands/fooplugin.cpp" (i.e. a C++ based solution
using the 'plugin load ' command) or should I go for a
Python based solution to add new commands using Python functions?

2. I am planning to upstream this developed plugin to the LLDB public
repository once the development is finished. Any user who wants to use
Intel(R) Processor Trace will be able to do so by compiling this plugin and
loading it via the LLDB CLI as an external library. What would be the ideal
location to place this plugin in the LLDB repository? I could think of the
'tools' folder.


- Abhishek


[lldb-dev] Stop IDs for individual thread

2016-06-03 Thread Abhishek Aggarwal via lldb-dev
Hi everyone

While debugging an inferior with LLDB, a new stop ID is generated for every
stop event, and this ID can be extracted via the SBProcess::GetStopID() API.
This ID indicates a change in the state of the process between two stop
events.

As per my knowledge, in the case of a multithreaded process this stop ID
can't be used to pinpoint exactly which thread(s) of the process changed
state between two stop events. Is there a way to find out this information?

- Abhishek


[lldb-dev] Loadable Code Segment Information & SectionType in LLDB

2016-08-09 Thread Abhishek Aggarwal via lldb-dev
Hello all

I have the following 2 queries:

1. Can the SB APIs of LLDB provide information regarding the loadable Code
Segment (the r-xp part of the /proc/$PID/maps file in the case of Linux) of
a debugged process? The information I am looking for is the start address
and end address of the loadable code segment of the debugged process. I
know that the SBModule class can provide all the Sections of the object
file via the SBSection class. However, I couldn't find any API in this
class that can provide the information I need.

2. The SBSection::GetSectionType() API returns an enum 'SectionType'. Does
SectionType represent the section types as specified by the different
object file formats (Mach-O, PECOFF, ELF)?

As an example, the ELF specification specifies section types like SHT_NULL,
SHT_PROGBITS, SHT_RELA, SHT_HASH, SHT_NOTE, SHT_NOBITS etc. However, the
SectionType enum doesn't contain all these types. Hence, enum SectionType
is either a mix of the section types of the different object file formats,
or it is a custom LLDB type. I would appreciate any comment on this.


Thanks
Abhishek Aggarwal


Re: [lldb-dev] Loadable Code Segment Information & SectionType in LLDB

2016-08-10 Thread Abhishek Aggarwal via lldb-dev
Hi Greg

My comments are inlined:

On Tue, Aug 9, 2016 at 7:01 PM, Greg Clayton  wrote:

>
> > On Aug 9, 2016, at 9:01 AM, Abhishek Aggarwal via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >
> > Hello all
> >
> > I have following 2 queries:
> >
> > 1. Can SB APIs of LLDB provide information regarding the loadable Code
> Segment (r-xp part of /proc/$PID/maps file in case of Linux) of a debugged
> process? The information I am looking for is start address and end address
> of the loadable code segment of the debugged process. I know that SBModule
> class can provide all the Sections of the object file via SBSection class.
> However, I couldn't find any API in this class that can provide the
> information I need.
>
> There can be many sections that contain code. I am not sure what you mean
> by "the loadable code segment of the debugged process". ".text" in ELF is
> just a section, but that doesn't mean that it will be the only section that
> contains code. Same goes for mach-o files. SBSection doesn't currently
> expose the permissions of sections, but we can easily add that since
> lldb_private::Section has permissions that I added about a month ago:
>
> //------------------------------------------------------------------
> /// Get the permissions as OR'ed bits from lldb::Permissions
> //------------------------------------------------------------------
> uint32_t
> lldb_private::Section::GetPermissions() const;
>
> //------------------------------------------------------------------
> /// Set the permissions using bits OR'ed from lldb::Permissions
> //------------------------------------------------------------------
> void
> lldb_private::Section::SetPermissions(uint32_t permissions);
>
> So I would think that you would want to iterate over the sections and
> check their permissions and use any that are read + execute from the main
> executable?
>
>
By "Loadable Code Segment" I wanted to refer to segments of an ELF file
having type PT_LOAD and containing executable machine instructions.
However, as you said, I need all the sections that are read+execute. Thanks
for pointing out the permissions API for sections. I can add an API for
getting permissions to the SBSection class and upload it for review soon.


> >
> > 2. SBSection::GetSectionType() API returns an enum 'SectionType'. Does
> SectionType represent the section types as specified by different object
> file formats (Mach-O, PECOFF, ELF)?
>
> It does in an agnostic way. You can watch for any sections that have
> eSectionTypeCode as their type from the main executable.
>
> >
> > As an example, ELF specification specifies section types like SHT_NULL,
> SHT_PROGBITS, SHT_RELA, SHT_HASH, SHT_NOTE, SHT_NOBITS etc. However,
> SectionType enum doesn't contain all these types. Hence, enum SectionType
> is either a mix of all section types of different object file formats or it
> is a custom type of LLDB. I will appreciate any comment on this.
>
> Again, we aren't trying to expose all of the different bits from ELF and
> Mach-o and COFF directly, we try to intelligently encapsulate the data by
> making more general definitions. If you feel that a SHT_XXX value should
> have its own new SectionType, we can discus that. The section type
> detections inside of ObjectFileELF is not that great, so feel free to
> improve the ObjectFileELF::CreateSections() function to "do the right
> thing". Anything that is SHT_PROGBITS should probably be eSectionTypeCode.
> It looks like just ".text" is being set to eSectionTypeCode right now.
>
>
This is what I wanted to confirm. I was not sure whether eSectionTypeCode
implies those sections which have read+execute permissions, or those
sections that have the section type SHT_PROGBITS (in the case of the ELF
format). Thanks for clarifying. I can try to fix the
ObjectFileELF::CreateSections() function.


> As an example I would assume that:
>
> SHT_NULL -> eSectionTypeOther (if this section is even exposed, and I
> don't believe it is)
> SHT_PROGBITS -> eSectionTypeCode
> SHT_RELA -> eSectionTypeELFRelocationEntries (although these definitions
> should never have had ELF in the name; these enums are supposed to be
> agnostic...)
> SHT_HASH -> eSectionTypeOther
> SHT_NOTE -> eSectionTypeOther
> SHT_NOBITS -> eSectionTypeOther
> >
> >
> > Thanks
> > Abhishek Aggarwal
> >
> >
>
>


Re: [lldb-dev] Inquiry for performance monitors

2017-06-19 Thread Abhishek Aggarwal via lldb-dev
Hi Everyone

I have developed a tool that facilitates lldb users in using Intel(R)
Processor Trace technology for debugging applications (as per the
discussions in this thread). The patch is https://reviews.llvm.org/D33035.

Some highlights of this tool are:
1. The tool is built on top of lldb. It is not a part of the liblldb shared
library. It resides in the tools/intel-features folder. Anyone willing to
use this feature can compile this tool (by enabling some extra flags) with
cmake while building lldb.
2. As was suggested, the trace decoding library hasn't been made a part of
the lldb repository. It can be downloaded from the corresponding github
repo.
3. All intel-specific features are combined into a single shared library,
thereby not cluttering the lldb repository with each intel-specific feature
(as proposed by Pavel).

If something has changed or you have new concerns regarding this tool since
the last discussion in this thread, please let me know.

- Abhishek


On Fri, Feb 5, 2016 at 4:38 PM, Abhishek Aggarwal 
wrote:

> Hi Greg
>
> Please find any answers/queries inlined:
>
> On Thu, Feb 4, 2016 at 9:58 PM, Greg Clayton via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >
> >> On Feb 4, 2016, at 2:24 AM, Pavel Labath via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >>
> >> On 4 February 2016 at 10:04, Ravitheja Addepally
> >>  wrote:
> >>> Hello Pavel,
> >>>In the case of expression evaluation approach you
> mentioned
> >>> that:
> >>> 1. The data could be accessible only when the target is stopped. why
> is that
> >>> ?
> >> If I understand the approach correctly, the idea is to run all perf
> >> calls as expressions in the debugger. Something like
> >> lldb> expr perf_event_open(...)
> >> We need to stop the target to be able to do something like that, as we
> >> need to fiddle with its registers. I don't see any way around that...
> >>
> >>> 2. What sort of noise were you referring to ?
> >> Since now all the perf calls will be expressions executed within the
> >> context of the process being traced, they themselves will show up in
> >> the trace. I am sure we could filter that out somehow, but it feels
> >> like an added complication..
> >>
> >> Does that make it any clearer?
> >
> > So a few questions: people seem worried about running something in the
> process if expressions are being used. Are you saying that if the process is
> on the local machine, process 1 can just open up a file descriptor to the
> trace data for process 2? If so, why pass this through lldb-server?
>
> As you have also mentioned later in your email, irrespective of which
> approach we use to implement this feature, we will have to send the trace
> data from lldb-server to the client in the case of remote debugging.
> Moreover, even for local debugging, the current architecture of lldb is a
> client-server architecture (at least for macosx, linux and freebsd) as far
> as I know. Hence, traces will have to be sent in the form of packets from
> server to client even for the expression evaluation approach.
>
> > I am not a big fan of making the lldb-server the conduit for a ton of
> information. It just isn't built for such high volumes of data coming in.
> It can be done, but that doesn't mean it should.  If everyone starts
> passing data like memory usage, CPU time, trace info, backtraces and more
> asynchronously through lldb-server, it will become a very crowded
> communication channel.
> >
> As per my understanding, one of the differences the expression evaluation
> approach introduces is that it disallows sending traces from server to
> client asynchronously (as traces can't be sent until the inferior stops).
> If an increased number of asynchronous packets is the concern here, then
> we can choose to send the trace data only synchronously (i.e. only after
> the inferior stops). Or can't we?
>
> > You don't need python if you want to do this using the lldb API. If your
> IDE is already linking against the LLDB shared library, it can just run the
> expressions using the public LLDB API. This is how view debugging is
> implemented in Xcode. It runs complex expressions that gather all data
> about a view and its subviews and returns all the layers in a blob of data
> that can be serialized by the expression, retrieved by Xcode (memory read
> from the process), and then de-serialized by the IDE into a format that can
> be used. If your IDE can access the trace data for another process, why not
> just read it from the IDE itself? Why get the lldb-server involved? Granted
> the remote debugging parts of this make an argument for including it in the
> lldb-server. But if you go this route you need to make a base
> implementation for trace data that will work for any trace data, have trace
> data plug-ins that somehow know how to interpret the data and provide.
> >
> Thanks for suggesting this.
>
> > How do you say "here is a blob of trace data" I just got from some
> process, go find me a plug-in that can parse it. You might have to say
> "h

[lldb-dev] Fwd: Offset Calculations for Registers on Linux x86_64

2015-08-13 Thread Abhishek Aggarwal via lldb-dev
Hello

I have a question regarding offset calculations of registers for x86_64
architecture. In file source/Plugins/Process/Utility/RegisterInfos_x86_64.h:

The macro FPR_OFFSET(reg) calculates the offset of a floating point
register 'reg' with respect to the 'UserArea' struct, while GPR_OFFSET(reg)
calculates it with respect to the 'GPR' struct. Is there any specific reason
for calculating the offsets of floating point registers with respect to the
'UserArea' struct and not the 'FPR' struct (defined in
source/Plugins/Process/Utility/RegisterContext_x86.h)?
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Offset Calculations for Registers on Linux x86_64

2015-08-14 Thread Abhishek Aggarwal via lldb-dev
Hi

As per my understanding (please correct if I am wrong):

1. There exists a file for each platform (Architecture+OS) that calculates
the offsets for that platform, e.g. RegisterContextLinux_x86_64.cpp for the
x86_64 architecture on Linux.

2. For each platform, the offset values for registers might be different
because they depend upon the way the members of the structures GPR, FPR and
UserArea are organized in the platform-specific file. e.g. the offset of rax
will be 80 and not 0 for RegisterContextLinux_x86_64.cpp because rax lies at
the 10th position in the structure GPR defined in this file.

3. The main motive behind calculating offsets for each register is to fetch
data from the correct location in the chunk of data that the ptrace API
provides (at least in the case of Linux).

On Thu, Aug 13, 2015 at 6:42 PM, Greg Clayton  wrote:

> All registers are placed into one large buffer that contains everything.
> All offsets should be the global offset in the register context's data.
> Typically we should see:
>
>
> GPR
>rax offset 0
>rbx offset 8
>
> FPR
>mm0 offset 128
>mm1 offset 160
>...
> EXC
>fpsr offset 256
>...
>
>
> So the offsets should be based on the offset from the start of the one
> large buffer that contains all register values.
>
> > On Aug 13, 2015, at 2:26 AM, Abhishek Aggarwal via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >
> >
> > Hello
> >
> > I have a question regarding offset calculations of registers for x86_64
> architecture. In file source/Plugins/Process/Utility/RegisterInfos_x86_64.h:
> >
> > The macro FPR_OFFSET(reg) calculates the offset of a floating point
> register 'reg' with respect to the 'UserArea' struct, while GPR_OFFSET(reg)
> calculates it with respect to the 'GPR' struct. Is there any specific reason
> for calculating the offsets of floating point registers with respect to the
> 'UserArea' struct and not the 'FPR' struct (defined in
> source/Plugins/Process/Utility/RegisterContext_x86.h)?
> >
>
>


Re: [lldb-dev] Offset Calculations for Registers on Linux x86_64

2015-08-17 Thread Abhishek Aggarwal via lldb-dev
Hi Greg

Thanks for your reply. My next queries are based on Bug 24457, which I
filed 2-3 days ago.

I analyzed it and found the cause of this bug on the x86_64-Linux platform.

Fixing this bug requires changing the definition of the macro FPR_OFFSET
(defined in RegisterInfos_x86_64.h) so that it calculates offsets for FPR
registers with respect to the FPR structure (defined in
*RegisterContext_x86.h*) and not with respect to the UserArea structure
(defined in *RegisterContextLinux_x86_64.cpp*).

I am a bit unclear on two statements from your last 2 replies:
*"All offsets should be the global offset in the register context's data"*
and
"*We just require that you append all register sets together into one chunk
(GPR + FPR + ...)*".

In the context of this bug, do these statements mean that the macro
FPR_OFFSET is not allowed to change?

- Abhishek Aggarwal

On Fri, Aug 14, 2015 at 6:17 PM, Greg Clayton  wrote:

>
> > On Aug 14, 2015, at 12:25 AM, Abhishek Aggarwal 
> wrote:
> >
> > Hi
> >
> > As per my understanding (please correct if I am wrong):
> >
> > 1. There exists a file for each platform (Architecture+OS) that
> calculates the offsets for that platform. e.g.
> RegisterContextLinux_x86_64.cpp for x86_64 architecture on Linux OS.
>
> Correct. We allow register context data buffers to just mirror exactly
> what the OS gives us, which is usually N chunks of data representing the
> raw registers as they would be obtained from the OS-supplied functions
> (like ptrace for reading/writing registers).
>
> > 2. For each platform, the offset values for registers might be different
> because they depend upon the way the members of the structures GPR, FPR and
> UserArea are organized in the platform-specific file. e.g. the offset of rax
> will be 80 and not 0 for RegisterContextLinux_x86_64.cpp because rax lies at
> the 10th position in the structure GPR defined in this file.
>
> Yep, we adapt to the way the OS represents registers in their native
> buffers. We just require that you append all register sets together into
> one chunk (GPR + FPR + ...).
> >
> > 3. The main motive behind calculating offsets for each register is to
> fetch data from the correct location in the chunk of data that the ptrace
> API provides (at least in the case of Linux).
>
> Yes. Just as on MacOSX we mimic how thread_get_state(task_t task, )
> returns registers.
>
> >
> > On Thu, Aug 13, 2015 at 6:42 PM, Greg Clayton 
> wrote:
> > All registers are placed into one large buffer that contains everything.
> All offsets should be the global offset in the register context's data.
> Typically we should see:
> >
> >
> > GPR
> >rax offset 0
> >rbx offset 8
> >
> > FPR
> >mm0 offset 128
> >    mm1 offset 160
> >    ...
> > EXC
> >fpsr offset 256
> >...
> >
> >
> > So the offsets should be based on the offset from the start of the one
> large buffer that contains all register values.
> >


[lldb-dev] Layout of FXSAVE struct for x86 Architectures in LLDB

2015-09-24 Thread Abhishek Aggarwal via lldb-dev
Hi all

I was looking into the file
"source/Plugins/Process/Utility/RegisterContext_x86.h" and I noticed one
thing in the FXSAVE structure: the 'ftag' field is defined as 16 bits wide.

However, referring to the Architecture Software Developer's Manual for x86
architectures, one can see that in the memory layout of the FXSAVE area only
8 bits are used for the 'ftag' register and the remaining 8 bits are
reserved. Is there any specific reason for keeping the 'ftag' field 16 bits
wide in the FXSAVE structure in LLDB for x86 architectures?


- Abhishek


[lldb-dev] LLDB: Unwinding based on Assembly Instruction Profiling

2015-10-14 Thread Abhishek Aggarwal via lldb-dev
Hi

As far as I know, if the unwinding based on Assembly Instruction
Profiling fails in LLDB, then either eh_frame unwinding or some other
mechanism comes into the picture to unwind properly. Am I right?

In this case, should LLDB change the unwind plan from Assembly
Instruction Profiling to eh_frame-based unwinding, so that in the future
the unwinding is always done with the new unwind plan rather than
first checking the assembly-based unwind plan and then falling back to
the eh_frame-based unwind plan?


Thanks