Re: [lldb-dev] RFC: Processor Trace Support in LLDB

2020-10-01 Thread Pavel Labath via lldb-dev
Thank you for writing this Walter. I think this document will be a
useful reference both now and in the future.

The part that's not clear to me is what the story is with multi-process
traces. The file format enables those, but it's not clear how they are
going to be created or used. Can you elaborate more on what you intend to
use those for?

The main reason I am asking that is because I am thinking about the
proposed command structure. I'm wondering if it would not be better to
fit this into the existing target/process/thread commands instead of
adding a new top-level command. For example, one could imagine the
following set of commands:

- "process trace start" + "thread trace start" instead of "thread trace
[tid]". That would be similar to "process continue" + "thread continue".
- "thread trace dump [tid]" instead of "trace dump [-t tid]". That would
be similar to "thread continue" and other thread control commands.
- "target create --trace" instead of "trace load". (analogous to target
create --core).
- "process trace save" instead of "trace save" -- (mostly) analogous to
"process save-core"

I am thinking this composition may fit better into the existing lldb
command landscape, though I also see the appeal in grouping everything
trace-related under a single top-level command. What do you think?

The main place where this idea breaks down is the multi-process traces.
While we could certainly make "target create --trace" create multiple
targets, that would be fairly unusual. OTOH, the whole concept of having
multiple targets share something is a pretty unusual thing for lldb.
That's why I'd like to hear more about where you want to go with this idea.


On 21/09/2020 22:17, Walter via lldb-dev wrote:
> Thanks for your feedback Fangrui, I've just been checking Cap'n Proto
> and it looks really good. I'll keep it in mind in the design and see how
> it can optimize the overall data transfer.

I'm not sure how Cap'n Proto comes into play here. The way I understand
it, the real data is contained in a separate file in the specialized
Intel format and the json is just for the metadata. I'd expect the
metadata file to be small even for enormous traces, so I'm not sure
what's to be gained by optimizing it.

pl



Re: [lldb-dev] [Release-testers] [11.0.0 Release] Release Candidate 5 is here

2020-10-01 Thread Dimitry Andric via lldb-dev
On 30 Sep 2020, at 20:07, Hans Wennborg via Release-testers wrote:
> 
> We had to pick up another bug fix, so here is another release
> candidate: llvmorg-11.0.0-rc5 tag was just created.

I've built both rc4 and rc5, and again these did not need any patches.

Main results on amd64-freebsd11:

  Unsupported        :  5122 (rc4:  5122, rc3:  5122)
  Passed             : 69761 (rc4: 69761, rc3: 69761)
  Expectedly Failed  :   245 (rc4:   245, rc3:   245)
  Timed Out          :    16 (rc4:    16, rc3:    16)
  Failed             :   481 (rc4:   481, rc3:   480)
  Unexpectedly Passed:     2 (rc4:     2, rc3:     2)

Test suite results on amd64-freebsd11:

  Passed: 2399 (rc4: 2399, rc3: 2399)
  Failed:    3 (rc4:    3, rc3:    3)

Main results on i386-freebsd11:

  Unsupported        :  3513 (rc4:  3513, rc3:  3513)
  Passed             : 66637 (rc4: 66637, rc3: 66636)
  Expectedly Failed  :   230 (rc4:   230, rc3:   230)
  Timed Out          :     7 (rc4:     7, rc3:     7)
  Failed             :   321 (rc4:   321, rc3:   321)
  Unexpectedly Passed:     1 (rc4:     1, rc3:     1)

Uploaded:
SHA256 (clang+llvm-11.0.0-rc4-amd64-unknown-freebsd11.tar.xz) = b95c237df671ee507c607e8d36245126c5ea5241389aae0b20e3e4fce4f3df37
SHA256 (clang+llvm-11.0.0-rc4-i386-unknown-freebsd11.tar.xz) = 60755863b49155d23c9fef9571aa09ca46425a9bd830d9ef498fe9855e741d11
SHA256 (clang+llvm-11.0.0-rc5-amd64-unknown-freebsd11.tar.xz) = 712401cade6996bb7042cdd659b41ee4411cdd9cc34cbdd21e7a4cafe75ac267
SHA256 (clang+llvm-11.0.0-rc5-i386-unknown-freebsd11.tar.xz) = 956bd26d28602f375853593631c1f413a869ba6087c51f7ef5405fa31263d06c

-Dimitry





Re: [lldb-dev] RFC: Processor Trace Support in LLDB

2020-10-01 Thread Walter via lldb-dev
Hi Pavel, thanks for the comments. I'll reply inline

> The part that's not clear to me is what the story is with multi-process
> traces. The file format enables those, but it's not clear how they are
> going to be created or used. Can you elaborate more on what you intend to
> use those for?

Something we are doing at Facebook is having a global Intel PT collector
that can trace all processes on a given machine for a few seconds. This can
produce a multi-process trace. I imagine these traces won't ever be
generated by LLDB, though. Having one single json trace file for this is
going to make sharing the trace easier. Multi-process tracing is also
something you can do with the perf tool, so it's not uncommon.

There are some technical details that are worth mentioning as well. Intel
PT offers two main modes of tracing: single-thread tracing and logical-CPU
tracing.
- The first one is the easiest to implement, but it requires a dedicated
buffer per thread, which can consume too much RAM if thousands of threads
are traced. It also adds a small performance cost, as the kernel disables
and enables tracing whenever there's a context switch.
- The other mode, logical-CPU tracing, traces all the activity on one
logical core and uses one single buffer. It is also more performant, as the
kernel doesn't disable tracing intermittently. Sadly, that trace contains
no information regarding context switches, so a separate context-switch
trace is used for splitting this big trace into per-thread subtraces. The
decoder we are implementing will eventually be able to do this splitting,
and it will require being fed the information of all processes. This
is also a reason why allowing multi-process traces is important.
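
To make the distinction concrete, here is a minimal sketch (in C, for a
Linux host) of how the two modes map onto perf_event_open(2). This is just
an illustration of the underlying mechanism, not the code we are writing;
the PMU type value is machine-specific and real code would read it from
/sys/bus/event_source/devices/intel_pt/type:

  /* Sketch only: the two Intel PT collection modes via perf_event_open(2). */
  #include <linux/perf_event.h>
  #include <string.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  static int open_pt_event(int intel_pt_type, pid_t pid, int cpu) {
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = intel_pt_type; /* dynamic PMU registered by the intel_pt driver */
    attr.exclude_kernel = 1;   /* trace user-space activity only */
    return syscall(SYS_perf_event_open, &attr, pid, cpu, /*group_fd=*/-1, 0);
  }

  /* Single-thread mode: pid = tid, cpu = -1 -> one buffer per traced thread.
   * Logical-CPU mode:   pid = -1, cpu = n  -> one buffer per logical core;
   * needs elevated privileges, and context switches must be recorded
   * separately to attribute the trace back to threads. */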

Regarding the command structure, I'd prefer to keep it under "trace" for
now, because of the multi-process case and because we still have no users
who can report feedback. Very soon we'll start building some tools around
this feature, so we'll have more concrete experience to share. Then it'll
be good to sync up and revisit the structure.

Btw, the gdb implementation of this kind of tracing is under the "record"
main command (
https://sourceware.org/gdb/current/onlinedocs/gdb/Process-Record-and-Replay.html).
I think this allows for some flexibility, as each trace plugin has
different characteristics.
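
For reference, a typical gdb session with that interface looks roughly like
this (real gdb commands; output elided):

  (gdb) record btrace pt              # start recording with Intel PT
  (gdb) continue                      # run until the next stop
  (gdb) record instruction-history    # replay at instruction granularity
  (gdb) record function-call-history  # or at function granularity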

> I'm not sure how Cap'n Proto comes into play here. The way I understand
> it, the real data is contained in a separate file in the specialized Intel
> format and the json is just for the metadata. I'd expect the metadata file
> to be small even for enormous traces, so I'm not sure what's to be gained
> by optimizing it.

I didn't mention it in that email, but there is some additional information
that we'll eventually include in the traces, like the context-switch trace
I mentioned above. I think that we could probably use Cap'n Proto for cases
like this. We might also end up not using it at all, but it was nice to
learn about it and keep it in mind.
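
As a rough illustration, the splitting described above only needs per-core
switch records of roughly this shape (field names are hypothetical, not a
committed format):

  #include <stdint.h>

  /* Hypothetical record in a context-switch side-trace. A decoder can slice
   * a per-core Intel PT buffer into per-thread subtraces by matching these
   * timestamps against the timing packets in the trace. */
  struct context_switch_record {
    uint64_t tsc;      /* timestamp counter value at the switch */
    uint32_t core;     /* logical core where the switch happened */
    uint32_t prev_tid; /* thread being switched out */
    uint32_t next_tid; /* thread being switched in */
  };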


Thanks,
Walter

On Thu, Oct 1, 2020 at 7:08 AM Pavel Labath wrote:

> Thank you for writing this Walter. I think this document will be a
> useful reference both now and in the future.
>
> The part that's not clear to me is what the story is with multi-process
> traces. The file format enables those, but it's not clear how they are
> going to be created or used. Can you elaborate more on what you intend to
> use those for?
>
> The main reason I am asking that is because I am thinking about the
> proposed command structure. I'm wondering if it would not be better to
> fit this into the existing target/process/thread commands instead of
> adding a new top-level command. For example, one could imagine the
> following set of commands:
>
> - "process trace start" + "thread trace start" instead of "thread trace
> [tid]". That would be similar to "process continue" + "thread continue".
> - "thread trace dump [tid]" instead of "trace dump [-t tid]". That would
> be similar to "thread continue" and other thread control commands.
> - "target create --trace" instead of "trace load". (analogous to target
> create --core).
> - "process trace save" instead of "trace save" -- (mostly) analogous to
> "process save-core"
>
> I am thinking this composition may fit better into the existing lldb
> command landscape, though I also see the appeal in grouping everything
> trace-related under a single top-level command. What do you think?
>
> The main place where this idea breaks down is the multi-process traces.
> While we could certainly make "target create --trace" create multiple
> targets, that would be fairly unusual. OTOH, the whole concept of having
> multiple targets share something is a pretty unusual thing for lldb.
> That's why I'd like to hear more about where you want to go with this idea.
>
>
> On 21/09/2020 22:17, Walter via lldb-dev wrote:
> > Thanks for your feedback Fangrui, I've just been checking Cap'n Proto
> > and it looks really

Re: [lldb-dev] RFC: Processor Trace Support in LLDB

2020-10-01 Thread Greg Clayton via lldb-dev


> On Oct 1, 2020, at 7:08 AM, Pavel Labath via lldb-dev wrote:
> 
> Thank you for writing this Walter. I think this document will be a
> useful reference both now and in the future.
> 
> The part that's not clear to me is what the story is with multi-process
> traces. The file format enables those, but it's not clear how they are
> going to be created or used. Can you elaborate more on what you intend to
> use those for?

Mainly for system trace kinds of things where an entire system gets traced.

> 
> The main reason I am asking that is because I am thinking about the
> proposed command structure. I'm wondering if it would not be better to
> fit this into the existing target/process/thread commands instead of
> adding a new top-level command. For example, one could imagine the
> following set of commands:
> 
> - "process trace start" + "thread trace start" instead of "thread trace
> [tid]". That would be similar to "process continue" + "thread continue".
> - "thread trace dump [tid]" instead of "trace dump [-t tid]". That would
> be similar to "thread continue" and other thread control commands.
> - "target create --trace" instead of "trace load". (analogous to target
> create --core).
> - "process trace save" instead of "trace save" -- (mostly) analogous to
> "process save-core"

> I am thinking this composition may fit better into the existing lldb
> command landscape, though I also see the appeal in grouping everything
> trace-related under a single top-level command. What do you think?
> 
> The main place where this idea breaks down is the multi-process traces.
> While we could certainly make "target create --trace" create multiple
> targets, that would be fairly unusual. OTOH, the whole concept of having
> multiple targets share something is a pretty unusual thing for lldb.
> That's why I'd like to hear more about where you want to go with this idea.

I kind of see tracing as having two sides:
1 - post-mortem tracing for individual or multiple processes
2 - live debug session tracing, for being able to see how you crashed, where
trace data is for the current process only

For post-mortem tracing, the trace top-level command seemed to make sense here
because there are no other target commands that act on more than one target. So 
"trace load" makes sense to me here for loading one or more traces. The idea is 
the trace JSON file has enough info to completely load up the state of the 
trace so we can symbolicate, dump and step around in history. So I would vote 
to keep "trace load" at the very least because it can create one or more 
targets. Options can be added to display the processes if needed:

(lldb) trace list 
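
To sketch what that "enough info" might look like, the trace JSON could
carry something of roughly this shape (field names purely illustrative, not
a committed schema):

  {
    "trace": { "type": "intel-pt" },
    "processes": [
      {
        "pid": 1234,
        "triple": "x86_64-unknown-linux-gnu",
        "threads": [ { "tid": 1234, "traceFile": "thread-1234.trace" } ],
        "modules": [ { "file": "/usr/bin/foo", "loadAddress": "0x400000" } ]
      }
    ]
  }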

But we could move "trace dump" over into "target trace dump" or "process trace 
dump" since that is effectively how we are coding these patches.

For live debugging where we gather trace data through the process plug-in, we 
will have a live process that may or may not have trace data. If tracing isn't 
available we will not be able to dump anything. But I would like to see 
process/thread commands for this scenario:

- process trace start/stop (only succeeds if we can gather trace data through 
the process plug-in)
- thread trace start/stop (which can succeed only if current tracing can enable 
tracing for only one thread)

Not sure if we need "process trace save" or "thread trace save", as the saving
can be done as an option to "process trace stop --save /path/to/save".

So I am all for fitting these commands in where they need to go.

> 
> On 21/09/2020 22:17, Walter via lldb-dev wrote:
>> Thanks for your feedback Fangrui, I've just been checking Cap'n Proto
>> and it looks really good. I'll keep it in mind in the design and see how
>> it can optimize the overall data transfer.
> 
> I'm not sure how Cap'n Proto comes into play here. The way I understand
> it, the real data is contained in a separate file in the specialized
> Intel format and the json is just for the metadata. I'd expect the
> metadata file to be small even for enormous traces, so I'm not sure
> what's to be gained by optimizing it.
> 
> pl
> 


Re: [lldb-dev] RFC: Processor Trace Support in LLDB

2020-10-01 Thread Walter via lldb-dev
After a chat with Greg, we agreed on this set of commands


trace load /path/to/json

process trace start/stop
process trace save /path/to/json

thread trace start/stop
thread trace dump [instructions | functions]
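
To illustrate, a future session with these commands might look like this
(hypothetical; none of it is implemented yet):

  (lldb) process trace start                  # live tracing via the process plug-in
  (lldb) thread trace start                   # or restrict tracing to one thread
  (lldb) thread trace dump instructions       # inspect the recorded history
  (lldb) process trace save /tmp/trace.json   # persist for post-mortem use
  (lldb) process trace stop

  # later, in a fresh debugger:
  (lldb) trace load /tmp/trace.json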

On Thu, Oct 1, 2020 at 1:21 PM Greg Clayton wrote:

>
>
> > On Oct 1, 2020, at 7:08 AM, Pavel Labath via lldb-dev <lldb-dev@lists.llvm.org> wrote:
> >
> > Thank you for writing this Walter. I think this document will be a
> > useful reference both now and in the future.
> >
> > The part that's not clear to me is what the story is with multi-process
> > traces. The file format enables those, but it's not clear how they are
> > going to be created or used. Can you elaborate more on what you intend to
> > use those for?
>
> Mainly for system trace kinds of things where an entire system gets traced.
>
> >
> > The main reason I am asking that is because I am thinking about the
> > proposed command structure. I'm wondering if it would not be better to
> > fit this into the existing target/process/thread commands instead of
> > adding a new top-level command. For example, one could imagine the
> > following set of commands:
> >
> > - "process trace start" + "thread trace start" instead of "thread trace
> > [tid]". That would be similar to "process continue" + "thread continue".
> > - "thread trace dump [tid]" instead of "trace dump [-t tid]". That would
> > be similar to "thread continue" and other thread control commands.
> > - "target create --trace" instead of "trace load". (analogous to target
> > create --core).
> > - "process trace save" instead of "trace save" -- (mostly) analogous to
> > "process save-core"
>
> > I am thinking this composition may fit better into the existing lldb
> > command landscape, though I also see the appeal in grouping everything
> > trace-related under a single top-level command. What do you think?
> >
> > The main place where this idea breaks down is the multi-process traces.
> > While we could certainly make "target create --trace" create multiple
> > targets, that would be fairly unusual. OTOH, the whole concept of having
> > multiple targets share something is a pretty unusual thing for lldb.
> > That's why I'd like to hear more about where you want to go with this
> idea.
>
> I kind of see tracing as having two sides:
> 1 - post-mortem tracing for individual or multiple processes
> 2 - live debug session tracing, for being able to see how you crashed, where
> trace data is for the current process only
>
> For post-mortem tracing, the trace top-level command seemed to make sense
> here because there are no other target commands that act on more than one
> target. So "trace load" makes sense to me here for loading one or more
> traces. The idea is the trace JSON file has enough info to completely load
> up the state of the trace so we can symbolicate, dump and step around in
> history. So I would vote to keep "trace load" at the very least because it
> can create one or more targets. Options can be added to display the
> processes if needed:
>
> (lldb) trace list 
>
> But we could move "trace dump" over into "target trace dump" or "process
> trace dump" since that is effectively how we are coding these patches.
>
> For live debugging where we gather trace data through the process plug-in,
> we will have a live process that may or may not have trace data. If tracing
> isn't available we will not be able to dump anything. But I would like to
> see process/thread commands for this scenario:
>
> - process trace start/stop (only succeeds if we can gather trace data
> through the process plug-in)
> - thread trace start/stop (which can succeed only if current tracing can
> enable tracing for only one thread)
>
> Not sure if we need "process trace save" or "thread trace save", as the
> saving can be done as an option to "process trace stop --save /path/to/save".
>
> So I am all for fitting these commands in where they need to go.
>
> >
> > On 21/09/2020 22:17, Walter via lldb-dev wrote:
> >> Thanks for your feedback Fangrui, I've just been checking Cap'n Proto
> >> and it looks really good. I'll keep it in mind in the design and see how
> >> it can optimize the overall data transfer.
> >
> > I'm not sure how Cap'n Proto comes into play here. The way I understand
> > it, the real data is contained in a separate file in the specialized
> > Intel format and the json is just for the metadata. I'd expect the
> > metadata file to be small even for enormous traces, so I'm not sure
> > what's to be gained by optimizing it.
> >
> > pl
> >
>
>

-- 
- Walter Erquínigo Pezo


Re: [lldb-dev] LLDB got SIGCHLD on hitting the breakpoint

2020-10-01 Thread Greg Clayton via lldb-dev
LLDB 5 is really old and shouldn't be used for linux debugging, as linux
support had many issues back then. I would suggest downloading and building
the latest and greatest LLDB from llvm.org, or using the LLDB from the clang
10 release, or from the new clang 11 release that is about to be released.
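
For example, building a current LLDB from the monorepo goes roughly like
this (build directory and flags illustrative):

  git clone https://github.com/llvm/llvm-project.git
  cd llvm-project
  cmake -S llvm -B build -G Ninja -DCMAKE_BUILD_TYPE=Release \
        -DLLVM_ENABLE_PROJECTS="clang;lldb"
  ninja -C build lldb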

Greg

> On Sep 16, 2020, at 9:03 AM, le wang via lldb-dev wrote:
> 
> Hello, everyone:
> I've got a problem when debugging my process with the lldb tool on Linux
> (CentOS 7). I use lldb commands to set breakpoints and launch my process,
> which executes binary code containing debug information; but once the
> process is launched, none of the breakpoints are hit, and after a while I
> receive several messages like the ones below:
> Process 4256 stopped and restarted: thread1 received signal:   SIGCHLD
> Process 4256 stopped and restarted: thread1 received signal:   SIGCHLD
> Process 4256 stopped and restarted: thread1 received signal:   SIGCHLD
> Process 4256 stopped and restarted: thread1 received signal:   SIGCHLD
> Process 4256 stopped and restarted: thread2 received signal:   SIGCHLD
> 
> Details can be seen in the snapshot attached.
> It seems that lldb crashed; although my process eventually executed, this is
> meaningless for debugging. I have checked that the debug information in the
> IR is correct. I have no idea of the cause. Can anyone tell me the reason and
> how to fix this problem? My lldb version is 5.0.0, which I got from
> http://www.llvm.org/ together with llvm 5.0.0.


Re: [lldb-dev] RFC: Processor Trace Support in LLDB

2020-10-01 Thread Greg Clayton via lldb-dev
We spoke a bit after Pavel's comments, which made sense, and we propose the
commands Walter sent below. Let us know what everyone thinks of this
organization of the command structure!

> On Oct 1, 2020, at 1:32 PM, Walter  wrote:
> 
> After a chat with Greg, we agreed on this set of commands
> 
> 
> trace load /path/to/json
> 
> process trace start/stop
> process trace save /path/to/json
> 
> thread trace start/stop
> thread trace dump [instructions | functions]
> 
> 
> On Thu, Oct 1, 2020 at 1:21 PM Greg Clayton wrote:
> 
> 
> > On Oct 1, 2020, at 7:08 AM, Pavel Labath via lldb-dev <lldb-dev@lists.llvm.org> wrote:
> > 
> > Thank you for writing this Walter. I think this document will be a
> > useful reference both now and in the future.
> > 
> > The part that's not clear to me is what the story is with multi-process
> > traces. The file format enables those, but it's not clear how they are
> > going to be created or used. Can you elaborate more on what you intend to
> > use those for?
> 
> Mainly for system trace kinds of things where an entire system gets traced.
> 
> > 
> > The main reason I am asking that is because I am thinking about the
> > proposed command structure. I'm wondering if it would not be better to
> > fit this into the existing target/process/thread commands instead of
> > adding a new top-level command. For example, one could imagine the
> > following set of commands:
> > 
> > - "process trace start" + "thread trace start" instead of "thread trace
> > [tid]". That would be similar to "process continue" + "thread continue".
> > - "thread trace dump [tid]" instead of "trace dump [-t tid]". That would
> > be similar to "thread continue" and other thread control commands.
> > - "target create --trace" instead of "trace load". (analogous to target
> > create --core).
> > - "process trace save" instead of "trace save" -- (mostly) analogous to
> > "process save-core"
> 
> > I am thinking this composition may fit better into the existing lldb
> > command landscape, though I also see the appeal in grouping everything
> > trace-related under a single top-level command. What do you think?
> > 
> > The main place where this idea breaks down is the multi-process traces.
> > While we could certainly make "target create --trace" create multiple
> > targets, that would be fairly unusual. OTOH, the whole concept of having
> > multiple targets share something is a pretty unusual thing for lldb.
> > That's why I'd like to hear more about where you want to go with this idea.
> 
> I kind of see tracing as having two sides:
> 1 - post-mortem tracing for individual or multiple processes
> 2 - live debug session tracing, for being able to see how you crashed, where
> trace data is for the current process only
> 
> For post-mortem tracing, the trace top-level command seemed to make sense
> here because there are no other target commands that act on more than one 
> target. So "trace load" makes sense to me here for loading one or more 
> traces. The idea is the trace JSON file has enough info to completely load up 
> the state of the trace so we can symbolicate, dump and step around in 
> history. So I would vote to keep "trace load" at the very least because it 
> can create one or more targets. Options can be added to display the processes 
> if needed:
> 
> (lldb) trace list 
> 
> But we could move "trace dump" over into "target trace dump" or "process 
> trace dump" since that is effectively how we are coding these patches.
> 
> For live debugging where we gather trace data through the process plug-in, we 
> will have a live process that may or may not have trace data. If tracing 
> isn't available we will not be able to dump anything. But I would like to see 
> process/thread commands for this scenario:
> 
> - process trace start/stop (only succeeds if we can gather trace data through 
> the process plug-in)
> - thread trace start/stop (which can succeed only if current tracing can 
> enable tracing for only one thread)
> 
> Not sure if we need "process trace save" or "thread trace save", as the saving
> can be done as an option to "process trace stop --save /path/to/save".
> 
> So I am all for fitting these commands in where they need to go.
> 
> > 
> > On 21/09/2020 22:17, Walter via lldb-dev wrote:
> >> Thanks for your feedback Fangrui, I've just been checking Cap'n Proto
> >> and it looks really good. I'll keep it in mind in the design and see how
> >> it can optimize the overall data transfer.
> > 
> > I'm not sure how Cap'n Proto comes into play here. The way I understand
> > it, the real data is contained in a separate file in the specialized
> > Intel format and the json is just for the metadata. I'd expect the
> > metadata file to be small even for enormous traces, so I'm not sure
> > what's to be gained by optimizing it.
> > 
> > pl
> > 

Re: [lldb-dev] RFC: Processor Trace Support in LLDB

2020-10-01 Thread Greg Clayton via lldb-dev
I had accepted the patch https://reviews.llvm.org/D86670, but then marked it
as "Request Changes" while we discuss the commands in this RFC after new
comments came in.


> On Oct 1, 2020, at 1:42 PM, Greg Clayton  wrote:
> 
> We spoke a bit after Pavel's comments, which made sense, and we propose the
> commands Walter sent below. Let us know what everyone thinks of this
> organization of the command structure!
> 
>> On Oct 1, 2020, at 1:32 PM, Walter wrote:
>> 
>> After a chat with Greg, we agreed on this set of commands
>> 
>> 
>> trace load /path/to/json
>> 
>> process trace start/stop
>> process trace save /path/to/json
>> 
>> thread trace start/stop
>> thread trace dump [instructions | functions]
>> 
>> 
>> On Thu, Oct 1, 2020 at 1:21 PM Greg Clayton wrote:
>> 
>> 
>> > On Oct 1, 2020, at 7:08 AM, Pavel Labath via lldb-dev <lldb-dev@lists.llvm.org> wrote:
>> > 
>> > Thank you for writing this Walter. I think this document will be a
>> > useful reference both now and in the future.
>> > 
>> > The part that's not clear to me is what the story is with multi-process
>> > traces. The file format enables those, but it's not clear how they are
>> > going to be created or used. Can you elaborate more on what you intend to
>> > use those for?
>> 
>> Mainly for system trace kinds of things where an entire system gets traced.
>> 
>> > 
>> > The main reason I am asking that is because I am thinking about the
>> > proposed command structure. I'm wondering if it would not be better to
>> > fit this into the existing target/process/thread commands instead of
>> > adding a new top-level command. For example, one could imagine the
>> > following set of commands:
>> > 
>> > - "process trace start" + "thread trace start" instead of "thread trace
>> > [tid]". That would be similar to "process continue" + "thread continue".
>> > - "thread trace dump [tid]" instead of "trace dump [-t tid]". That would
>> > be similar to "thread continue" and other thread control commands.
>> > - "target create --trace" instead of "trace load". (analogous to target
>> > create --core).
>> > - "process trace save" instead of "trace save" -- (mostly) analogous to
>> > "process save-core"
>> 
>> > I am thinking this composition may fit better into the existing lldb
>> > command landscape, though I also see the appeal in grouping everything
>> > trace-related under a single top-level command. What do you think?
>> > 
>> > The main place where this idea breaks down is the multi-process traces.
>> > While we could certainly make "target create --trace" create multiple
>> > targets, that would be fairly unusual. OTOH, the whole concept of having
>> > multiple targets share something is a pretty unusual thing for lldb.
>> > That's why I'd like to hear more about where you want to go with this idea.
>> 
>> I kind of see tracing as having two sides:
>> 1 - post-mortem tracing for individual or multiple processes
>> 2 - live debug session tracing, for being able to see how you crashed, where
>> trace data is for the current process only
>> 
>> For post-mortem tracing, the trace top-level command seemed to make sense
>> here because there are no other target commands that act on more than one 
>> target. So "trace load" makes sense to me here for loading one or more 
>> traces. The idea is the trace JSON file has enough info to completely load 
>> up the state of the trace so we can symbolicate, dump and step around in 
>> history. So I would vote to keep "trace load" at the very least because it 
>> can create one or more targets. Options can be added to display the 
>> processes if needed:
>> 
>> (lldb) trace list 
>> 
>> But we could move "trace dump" over into "target trace dump" or "process 
>> trace dump" since that is effectively how we are coding these patches.
>> 
>> For live debugging where we gather trace data through the process plug-in, 
>> we will have a live process that may or may not have trace data. If tracing 
>> isn't available we will not be able to dump anything. But I would like to 
>> see process/thread commands for this scenario:
>> 
>> - process trace start/stop (only succeeds if we can gather trace data 
>> through the process plug-in)
>> - thread trace start/stop (which can succeed only if current tracing can 
>> enable tracing for only one thread)
>> 
>> Not sure if we need "process trace save" or "thread trace save", as the
>> saving can be done as an option to "process trace stop --save /path/to/save".
>> 
>> So I am all for fitting these commands in where they need to go.
>> 
>> > 
>> > On 21/09/2020 22:17, Walter via lldb-dev wrote:
>> >> Thanks for your feedback Fangrui, I've just been checking Cap'n Proto
>> >> and it looks really good. I'll keep it in mind in the design and see how
>> >> it can optimize the overall data transfer.
>> > 
>> > I'm not sure how Cap'n Proto comes into play here. The way I 

Re: [lldb-dev] [Release-testers] [11.0.0 Release] Release Candidate 5 is here

2020-10-01 Thread Brian Cain via lldb-dev
Uploaded binaries for SLES12 and Ubuntu 16 x86_64. I realized that I'd
forgotten the rc3 and rc4 ones, so I uploaded them too.

$ cat clang+llvm-11.0.0-rc3-x86_64-linux-gnu-ubuntu-16.04.tar.xz.sha256
clang+llvm-11.0.0-rc3-x86_64-linux-sles12.4.tar.xz.sha256
clang+llvm-11.0.0-rc4-x86_64-linux-sles12.4.tar.xz.sha256
clang+llvm-11.0.0-rc5-x86_64-linux-gnu-ubuntu-16.04.tar.xz.sha256
clang+llvm-11.0.0-rc5-x86_64-linux-sles12.4.tar.xz.sha256

aaf668664769dfb071c59f5d2622f3459d457b58489ee79f69262cef8cf2abb4  clang+llvm-11.0.0-rc3-x86_64-linux-gnu-ubuntu-16.04.tar.xz
93394ee58b18ec72bf4455dc6055a9ba5100621282547c686bd4ef689fe1d8a5  clang+llvm-11.0.0-rc3-x86_64-linux-sles12.4.tar.xz
91a0984c7d0be93af310a1d762e34e283952ce9734ecec040b6a90fd31466150  clang+llvm-11.0.0-rc4-x86_64-linux-sles12.4.tar.xz
af8daead4a6d996fab7630759a9330d5eb0ceea06dbd6daa7fdd92126b0f02ee  clang+llvm-11.0.0-rc5-x86_64-linux-gnu-ubuntu-16.04.tar.xz
b57ef3689a6bf161dab3fee644fe0837e1c9cb31875556f7be259b8eaf64a43d  clang+llvm-11.0.0-rc5-x86_64-linux-sles12.4.tar.xz



On Wed, Sep 30, 2020 at 1:07 PM Hans Wennborg via Release-testers <release-test...@lists.llvm.org> wrote:

> Hello again,
>
> We had to pick up another bug fix, so here is another release
> candidate: llvmorg-11.0.0-rc5 tag was just created.
>
> Source code and docs are available at
> https://prereleases.llvm.org/11.0.0/#rc5
> and
> https://github.com/llvm/llvm-project/releases/tag/llvmorg-11.0.0-rc5
>
> Pre-built binaries will be added as they become ready.
>
> Please file reports for any bugs you find as blockers of
> https://llvm.org/pr46725
>
> Release testers, if you still have cycles (perhaps you didn't even
> have time to start rc4 yet), please run the test script, share your
> results, and upload binaries.
>
> As mentioned above, this rc is very similar to the previous one. There
> are no open blockers, so it could be the last release candidate.
>
> Thanks,
> Hans
>


-- 
-Brian