Re: [lldb-dev] lldb tests and tear down hooks

2015-10-21 Thread Pavel Labath via lldb-dev
I think we can remove these, provided there is a way to mimic the
functionality they are used for now, which I think shouldn't be hard.
Anything which was set up in the setUp() method should be undone in
tearDown(). Anything which was set up in the test method can be
undone using a try-finally block. Is there a use case not covered by
this?
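For illustration, the try/finally pattern described above might look like this in a test method (the test class and names here are illustrative, not from the lldb test suite):

```python
import unittest


class ExampleTest(unittest.TestCase):
    def setUp(self):
        # Resources acquired here are released in tearDown().
        self.log = []
        self.log.append("setup")

    def tearDown(self):
        # Undo only what setUp() did.
        self.log.append("teardown")

    def test_something(self):
        # State created inside the test method is cleaned up locally...
        resource = "temporary state set up by the test itself"
        self.log.append("acquire")
        try:
            self.assertTrue(resource)
        finally:
            # ...and the finally block runs even if the assertions fail,
            # so no tear-down hook is needed.
            self.log.append("release")
```

The cleanup order is then always the exact reverse of the setup order, with no hook machinery involved.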

pl

On 21 October 2015 at 04:47, Zachary Turner via lldb-dev
 wrote:
> There's a subtle bug that is pervasive throughout the test suite.  Consider
> the following seemingly innocent test class.
>
> class MyTest(TestBase):
>     def setUp(self):
>         TestBase.setUp(self)                       #1
>
>         # Do some stuff                            #2
>         self.addTearDownHook(lambda: self.foo())   #3
>
>     def test_interesting_stuff(self):
>         pass
>
> Here's the problem.  As a general principle, cleanup needs to happen in
> reverse order from initialization.  That's why, if we had a tearDown()
> method, it would probably look something like this:
>
> def tearDown(self):
>     # Clean up some stuff      #2
>
>     TestBase.tearDown(self)    #1
>
> This follows the pattern in other languages like C++, for example, where
> construction goes from base -> derived, but destruction goes from derived ->
> base.
>
> But if you add these tear down hooks into the mix, it violates that. Tear
> down hooks get invoked as part of TestBase.tearDown(), so in the above
> example the initialization order is 1 -> 2 -> 3 but the teardown order is
> 2 -> 1 -> 3 (or 2 -> 3 -> 1, or none of the above, depending on where
> inside of TestBase.tearDown() the hooks get invoked).
>
> To make matters worse, tear down hooks can be added from arbitrary points in
> a test's run, not just during setup.
>
> The only way I can see to fix this is to delete this tearDownHook mechanism
> entirely.  Anyone who wants it can easily reimplement this in the individual
> test by just keeping their own list of lambdas in the derived class,
> overriding tearDown(), and running through their own list in reverse order
> before calling TestBase.tearDown().
>
> I don't intend to do this work right now, but I would like to do it in the
> future, so I want to throw this out there and see if anyone has thoughts on
> it.
>
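The ordering problem described above can be reproduced with a small standalone simulation (plain Python, not actual lldb test-suite code):

```python
events = []


class TestBase:
    def __init__(self):
        self.hooks = []

    def setUp(self):
        events.append("base setUp")        # step 1

    def addTearDownHook(self, hook):
        self.hooks.append(hook)

    def tearDown(self):
        events.append("base tearDown")     # step 1 undone...
        for hook in self.hooks:
            hook()                         # ...but the hooks (step 3) run last


class MyTest(TestBase):
    def setUp(self):
        super().setUp()
        events.append("derived setUp")     # step 2
        self.addTearDownHook(lambda: events.append("hook"))  # step 3

    def tearDown(self):
        events.append("derived tearDown")  # step 2 undone
        super().tearDown()


t = MyTest()
t.setUp()
t.tearDown()
# Setup ran 1 -> 2 -> 3, but teardown ran 2 -> 1 -> 3: not the reverse.
```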
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [Lldb-commits] [lldb] r248028 - Make libc++ tests skip themselves if libc++ is not actually loaded in the target

2015-10-21 Thread Pavel Labath via lldb-dev
[moving this to lldb-dev for more visibility.]

Sorry, I was in a hurry yesterday, so I did not explain myself fully. Let
me try to elaborate.

> What I am trying to avoid here is getting useless noise in build
> automation where test cases proclaim their failure, which however tells us
> nothing of value as the issue is simply “libc++ not available” vs. “data
> formatters broken”. That is an ability I would strongly like to preserve.

I agree that we should have an ability to skip libc++ tests if the library
is not available (similarly for go, etc.). However, I believe there should
be as little "magic" in that as possible. Otherwise you run into the
opposite problem where a test passing could mean "everything is ok" or "our
auto-skipping magic has gone wrong", which I would argue is a worse
situation than the first one.

Currently, we have a lot of magic in here:
- self.build() silently builds an executable with a different library even
though libc++ was requested. (otherwise you wouldn't even be able to list
the modules of the executable)
- the test decides to skip itself without giving any indication why (sure,
it shows up in the "skipped tests" list, but a lot of other stuff
appears there as well; and I would argue that this is a different
situation: usually we do skips based on the OS, architecture, or other
"immutable" characteristics, while the presence of libc++ is something
that can usually be changed by simply installing a package).

I'd like to propose a few alternative solutions to achieve both of these
objectives:

a) Use the existing category system for this: libc++ tests get tagged as
such and the user has to explicitly disable this category to avoid running
the tests. Potentially, add some code to detect when the user is running
the test suite without libc++ installed and abort the run with some message
like "No libc++ detected on your system. Please install libc++ or disable
libc++ tests with `--disable-category libc++`".

I like this solution because it is easily achievable and has no magic
involved. However, it requires an action from the user (which I think is a
good thing, but I see how others may disagree). That's what I meant for
asking about your "use case". Would you be able to fit it inside this
framework?
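A generic sketch of how category tagging and explicit disabling could work (the decorator, the set name, and the filtering logic here are illustrative, not the actual dotest implementation):

```python
DISABLED_CATEGORIES = set()  # e.g. filled from a --disable-category flag


def add_categories(*categories):
    """Tag a test function with the given categories."""
    def wrapper(func):
        func.categories = set(categories)
        return func
    return wrapper


def should_run(test_func):
    """A test runs unless one of its categories was explicitly disabled."""
    cats = getattr(test_func, "categories", set())
    return not (cats & DISABLED_CATEGORIES)


@add_categories("libc++")
def test_libcxx_formatters():
    pass


# With nothing disabled, the libc++ test runs; once the user passes
# something like --disable-category libc++, it is filtered out explicitly.
assert should_run(test_libcxx_formatters)
DISABLED_CATEGORIES.add("libc++")
assert not should_run(test_libcxx_formatters)
```

The point of the design is that nothing is skipped implicitly: the test either runs or the user can see exactly which switch turned it off.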

b) Use category tagging as in (a), but auto-disable this category when
libc++ is missing and print a big fat warning "No libc++ detected, libc++
tests will not run". What is different from the current state is that this
libc++ detection would work centrally, which would give us the possibility
to print the warning (e.g. right before the "Ran XXX test suites" message).
I like this a bit less, as it is still automatic, but I am fine with it, as
it would give a clear visual indication of what is happening.


What do you think? Do you have another proposal?

I can implement either of those, but I wanted to get some feedback first.

pl



On 20 October 2015 at 19:05, Enrico Granata  wrote:

>
> On Oct 20, 2015, at 10:43 AM, Pavel Labath  wrote:
>
> Hi Enrico,
>
> Could you explain what was the motivation behind this change?
>
>
> As per the title of the commit: in a process that is using a standard c++
> library other than libc++, these tests are noise - of course the libc++
> tests aren’t gonna pass if you don’t have libc++.
>
> I am asking because, I have just learned that this commit has caused
> all libc++ tests to be skipped on linux*, silently decreasing test
> coverage on linux. I would like to replace this with some other
> mechanism, which is not prone to accidental silent skips, like a
> dotest flag to skip libc++ or something, but I'd like to understand
> the original motivation first.
>
> pl
>
> * the problem seems to be that on linux, we do not have the list of
> modules until we actually start the process, so this code will not
> find the library, as it runs before that.
>
>
> The solution might then be to run the process, and then
> skip_if_library_missing
> I think we avoid trying to compile the test inferior entirely if we can’t
> find libc++ however, so you might first want to check if a.out exists at
> all, and only then proceed all the way to the first breakpoint being hit
>
> If this is considered a bug then
> we can look into that separately, but I'd still like to avoid these
> kinds of skips.
>
>
> What I am trying to avoid here is getting useless noise in build
> automation where test cases proclaim their failure, which however tells us
> nothing of value as the issue is simply “libc++ not available” vs. “data
> formatters broken”
> That is an ability I would strongly like to preserve
>
>
> On 18 September 2015 at 21:12, Enrico Granata via lldb-commits
>  wrote:
>
> Author: enrico
> Date: Fri Sep 18 15:12:52 2015
> New Revision: 248028
>
> URL: http://llvm.org/viewvc/llvm-project?rev=248028&view=rev
> Log:
> Make libc++ tests skip themselves if libc++ is not actually loaded in the
> target
>
>
> Modified:
>
>
> lldb/trunk/test/functionalities/data-formatter/data-formatter-stl/libcxx

Re: [lldb-dev] Preliminary support for NetBSD

2015-10-21 Thread Kamil Rytarowski via lldb-dev

On 08.10.2015 12:21, Kamil Rytarowski via lldb-dev wrote:
> On 05.10.2015 21:46, Todd Fiala wrote:
>> Seems like a great idea.  (Ed, is that something you might be
>> able to review?)
> 
> 
> The first patch is already proposed: 
> http://reviews.llvm.org/D13334
> 
>> Hopefully you have access to other platforms to test if it's 
>> breaking anything?  (Most likely candidates would be FreeBSD and 
>> Linux/Android, I'd suspect, depending on how much you're having
>> to change things).
> 
> 
> I run just NetBSD exclusively (desktop, development). My use of Linux
> or FreeBSD is limited and infrequent, and on those machines I don't
> develop lldb. I don't remember when I last touched other systems,
> perhaps Tru64 two years ago.
> 
> It's not that crucial anyway, as there is review board.

There is a buildslave with NetBSD-7.0 in the works. Once it is
functional, I will switch it immediately to the master lldb buildzone.

For now, I will disable running tests, as I need to upstream a few
plugins for NetBSD first.

Please review and merge the pending NetBSD commits in LLVM's
Phabricator; they make the project buildable on NetBSD. Otherwise the
extra machine over there will go to waste...


Re: [lldb-dev] Inquiry for performance monitors

2015-10-21 Thread Ravitheja Addepally via lldb-dev
Hello,
   I want to implement support for reading Performance measurement
information using the perf_event_open system calls. The motive is to add
support for Intel PT hardware feature, which is available through the
perf_event interface. I was thinking of implementing a new wrapper like
PtraceWrapper in the NativeProcessLinux files. My query is: is this the
correct place to start? If not, could someone suggest another place to
begin?

BR,
A Ravi Theja


[lldb-dev] [Bug 25273] New: synthetic data formatters for libstdc++ STL containers fail on Ubuntu 15.10 x86_64

2015-10-21 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=25273

Bug ID: 25273
   Summary: synthetic data formatters for libstdc++ STL containers
fail on Ubuntu 15.10 x86_64
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: Linux
Status: NEW
  Severity: normal
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: todd.fi...@gmail.com
CC: llvm-b...@lists.llvm.org
Classification: Unclassified

libstdc++.so.6.0.21 (present in Ubuntu 15.10 Beta2 x86_64) has a slightly
different signature than libstdc++.so.6.0.20 in Ubuntu 14.04 (x86_64).  LLDB is
failing to handle those containers properly when using the python synthetic
libstdc++ data formatters.

-- 
You are receiving this mail because:
You are the assignee for the bug.


[lldb-dev] [Bug 25273] synthetic data formatters for libstdc++ STL containers fail on Ubuntu 15.10 x86_64

2015-10-21 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=25273

Todd Fiala  changed:

           What     |Removed                 |Added
    ----------------------------------------------------------------
         Status     |NEW                     |ASSIGNED
       Assignee     |lldb-dev@lists.llvm.org |todd.fi...@gmail.com



Re: [lldb-dev] [Lldb-commits] [lldb] r248028 - Make libc++ tests skip themselves if libc++ is not actually loaded in the target

2015-10-21 Thread Todd Fiala via lldb-dev
I'm in favor of (b).  The less user-required setup to do the right thing on
a test suite, the better IMHO.  Those actively trying to make sure one or
another c++ library is getting tested will be looking for the output to
validate which std c++ lib(s) ran.

-Todd

On Wed, Oct 21, 2015 at 3:47 AM, Pavel Labath  wrote:

> [moving this to lldb-dev for more visibility.]
>
> Sorry, I was in a hurry yesterday, so I did not explain myself fully. Let
> me try to elaborate.
>
> > What I am trying to avoid here is getting useless noise in build
> > automation where test cases proclaim their failure, which however tells us
> > nothing of value as the issue is simply “libc++ not available” vs. “data
> > formatters broken”. That is an ability I would strongly like to preserve.
>
> I agree that we should have an ability to skip libc++ tests if the library
> is not available (similarly for go, etc.). However, I believe there should
> be as little "magic" in that as possible. Otherwise you run into the
> opposite problem where a test passing could mean "everything is ok" or "our
> auto-skipping magic has gone wrong", which I would argue is a worse
> situation than the first one.
>
> Currently, we have a lot of magic in here:
> - self.build() silently builds an executable with a different library even
> though libc++ was requested. (otherwise you wouldn't even be able to list
> the modules of the executable)
> - the test decides to skip itself without giving any indication why (sure,
> it shows up in the "skipped tests" list, but a lot of other stuff
> appears there as well; and I would argue that this is a different
> situation: usually we do skips based on the OS, architecture, or other
> "immutable" characteristics, while the presence of libc++ is something
> that can usually be changed by simply installing a package).
>
> I'd like to propose a few alternative solutions to achieve both of these
> objectives:
>
> a) Use the existing category system for this: libc++ tests get tagged as
> such and the user has to explicitly disable this category to avoid running
> the tests. Potentially, add some code to detect when the user is running
> the test suite without libc++ installed and abort the run with some message
> like "No libc++ detected on your system. Please install libc++ or disable
> libc++ tests with `--disable-category libc++`".
>
> I like this solution because it is easily achievable and has no magic
> involved. However, it requires an action from the user (which I think is a
> good thing, but I see how others may disagree). That's what I meant for
> asking about your "use case". Would you be able to fit it inside this
> framework?
>
> b) Use category tagging as in (a), but auto-disable this category when
> libc++ is missing and print a big fat warning "No libc++ detected, libc++
> tests will not run". What is different from the current state is that this
> libc++ detection would work centrally, which would give us the possibility
> to print the warning (e.g. right before the "Ran XXX test suites" message).
> I like this a bit less, as it is still automatic, but I am fine with it,
> as it would give a clear visual indication of what is happening.
>
>
> What do you think? Do you have another proposal?
>
> I can implement either of those, but I wanted to get some feedback first.
>
> pl
>
>
>
> On 20 October 2015 at 19:05, Enrico Granata  wrote:
>
>>
>> On Oct 20, 2015, at 10:43 AM, Pavel Labath  wrote:
>>
>> Hi Enrico,
>>
>> Could you explain what was the motivation behind this change?
>>
>>
>> As per title of the commit, in a process that is using a standard c++
>> library other than libc++, these tests are noise - of course the libc++
>> tests aren’t gonna pass if you don’t have libc++
>>
>> I am asking because, I have just learned that this commit has caused
>> all libc++ tests to be skipped on linux*, silently decreasing test
>> coverage on linux. I would like to replace this with some other
>> mechanism, which is not prone to accidental silent skips, like a
>> dotest flag to skip libc++ or something, but I'd like to understand
>> the original motivation first.
>>
>> pl
>>
>> * the problem seems to be that on linux, we do not have the list of
>> modules until we actually start the process, so this code will not
>> find the library, as it runs before that.
>>
>>
>> The solution might then be to run the process, and then
>> skip_if_library_missing
>> I think we avoid trying to compile the test inferior entirely if we can’t
>> find libc++ however, so you might first want to check if a.out exists at
>> all, and only then proceed all the way to the first breakpoint being hit
>>
>> If this is considered a bug then
>> we can look into that separately, but I'd still like to avoid these
>> kinds of skips.
>>
>>
>> What I am trying to avoid here is getting useless noise in build
>> automation where test cases proclaim their failure, which however tells us
>> nothing of value as the issue is simply “libc++ not available” vs. “data
>> formatters broken”. That is an ability I would strongly like to preserve.

Re: [lldb-dev] Inquiry for performance monitors

2015-10-21 Thread Pavel Labath via lldb-dev
[ Moving this discussion back to the list. I pressed the wrong button
when replying.]

Thanks for the explanation Ravi. It sounds like a very useful feature
indeed. I've found a reference to the debugserver profile data in
GDBRemoteCommunicationClient.cpp:1276, so maybe that will help with
your investigation. Maybe also someone more knowledgeable can explain
what those A packets are used for (?).


On 21 October 2015 at 15:48, Ravitheja Addepally
 wrote:
> Hi,
> Thanks for your reply. Some of the future processors to be released by
> Intel have hardware support for recording the instructions that were
> executed by the processor; this recording is quite fast and does not add
> too much computational load. This hardware is made accessible via the
> perf_event interface, where one can map a region of memory for this
> purpose by passing it as an argument. The recorded instructions are then
> written to the assigned memory region. This is basically the raw
> information that can be obtained from the hardware. It can be interpreted
> and presented to the user in the following ways:
>
> 1) Instruction history - where the user gets basically a list of all
> instructions that were executed
> 2) Function Call History - It is also possible to get a list of all the
> functions called in the inferior
> 3) Reverse Debugging with limited information - In GDB this is only the
> functions executed.
>
> This raw information also needs to be decoded (even before you can
> disassemble it); there is already a library released by Intel, called
> libipt, which can do that. At the moment we plan to work with Instruction
> History. I will look into the debugserver infrastructure and get back to
> you. I guess for the server-client communication we would rely on packets
> only. In case of concerns about too much data being transferred, we can
> limit the number of entries we report, because the amount of data recorded
> is too big to present all at once anyway, so we would have to resort to
> something like a viewport.
>
> Since a lot of instructions can be recorded this way, the function call
> history can be quite useful for debugging, especially since it is a lot
> faster to collect function traces this way.
>
> -ravi
>
> On Wed, Oct 21, 2015 at 3:14 PM, Pavel Labath  wrote:
>>
>> Hi,
>>
>> I am not really familiar with the perf_event interface (and I suspect
>> others aren't either), so it might help if you explain what kind of
>> information you plan to collect from there.
>>
>> As for the PtraceWrapper question, I think that really depends on
>> bigger design decisions. My two main questions for a feature like this
>> would be:
>> - How are you going to present this information to the user? (I know
>> debugserver can report some performance data... Have you looked into
>> how that works? Do you plan to reuse some parts of that
>> infrastructure?)
>> - How will you get the information from the server to the client?
>>
>> pl
>>
>>
>> On 21 October 2015 at 13:41, Ravitheja Addepally via lldb-dev
>>  wrote:
>> > Hello,
>> >I want to implement support for reading Performance measurement
>> > information using the perf_event_open system calls. The motive is to add
>> > support for Intel PT hardware feature, which is available through the
>> > perf_event interface. I was thinking of implementing a new Wrapper like
>> > PtraceWrapper in the NativeProcessLinux files. My query is: is this the
>> > correct place to start? If not, could someone suggest another place to
>> > begin?
>> >
>> > BR,
>> > A Ravi Theja
>> >
>> >
>
>


Re: [lldb-dev] [BUG?] Confusion between translation units?

2015-10-21 Thread Ramkumar Ramachandra via lldb-dev
On Tue, Oct 20, 2015 at 8:22 PM, Greg Clayton  wrote:
> Then try running and let me know what your results are!

Hm, there seems to be something seriously wrong. I triple-checked
everything, but Declaration::Compare is not even called when the error
is triggered! How should we proceed now?


Re: [lldb-dev] [BUG?] Confusion between translation units?

2015-10-21 Thread Ramkumar Ramachandra via lldb-dev
So first, an addendum: I found a way to make the project build without
using a symlink, and use a direct reference instead. The problem still
persists. It may be that symlink is one of the problems, but it is
certainly not the only problem.

On Tue, Oct 20, 2015 at 8:22 PM, Greg Clayton  wrote:
> int
> Declaration::Compare(const Declaration& a, const Declaration& b)
> {
>     int result = FileSpec::Compare(a.m_file, b.m_file, true);
>     if (result)

Wait, won't FileSpec::Compare be true iff a.m_file is the same as
b.m_file (excluding symlink resolution)? If so, why are we putting the
symlink-checking logic in the true branch of the original
FileSpec::Compare? Aren't we expanding the scope of what we match,
instead of narrowing it?

> {
>     int symlink_result = result;
>     if (a.m_file.GetFilename() == b.m_file.GetFilename())
>     {
>         // Check if the directories in a and b are symlinks to each other
>         FileSpec resolved_a;
>         FileSpec resolved_b;
>         if (FileSystem::ResolveSymbolicLink(a.m_file, resolved_a).Success() &&
>             FileSystem::ResolveSymbolicLink(b.m_file, resolved_b).Success())
>         {
>             symlink_result = FileSpec::Compare(resolved_a, resolved_b, true);

I'm confused. Shouldn't the logic be "check literal equality; if true,
return immediately; if not, check equality with symlink resolution"?

>             }
>         }
>         if (symlink_result != 0)
>             return symlink_result;
>     }
>     if (a.m_line < b.m_line)
>         return -1;
>     else if (a.m_line > b.m_line)
>         return 1;
> #ifdef LLDB_ENABLE_DECLARATION_COLUMNS
>     if (a.m_column < b.m_column)
>         return -1;
>     else if (a.m_column > b.m_column)
>         return 1;
> #endif
>     return 0;
> }

Here's my version of the patch, although I'm not sure when the code
will be reached.

int
Declaration::Compare(const Declaration& a, const Declaration& b)
{
    int result = FileSpec::Compare(a.m_file, b.m_file, true);
    if (result)
        return result;
    if (a.m_file.GetFilename() == b.m_file.GetFilename()) {
        // Check if one of the directories is a symlink to the other
        int symlink_result = result;
        FileSpec resolved_a;
        FileSpec resolved_b;
        if (FileSystem::ResolveSymbolicLink(a.m_file, resolved_a).Success() &&
            FileSystem::ResolveSymbolicLink(b.m_file, resolved_b).Success())
        {
            symlink_result = FileSpec::Compare(resolved_a, resolved_b, true);
            if (symlink_result)
                return symlink_result;
        }
    }
    if (a.m_line < b.m_line)
        return -1;
    else if (a.m_line > b.m_line)
        return 1;
#ifdef LLDB_ENABLE_DECLARATION_COLUMNS
    if (a.m_column < b.m_column)
        return -1;
    else if (a.m_column > b.m_column)
        return 1;
#endif
    return 0;
}

If you're confident that this solves a problem, I can send it as a
code review or something (and set up git-svn, sigh).
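The ordering argued for above (literal comparison first, then a symlink-resolved comparison only on mismatch) can be sketched in Python, using os.path.realpath as a stand-in for ResolveSymbolicLink:

```python
import os


def compare_paths(a, b):
    """Compare two paths; return 0 when they should be treated as equal.

    Literal spellings are compared first, so identical paths short-circuit
    without touching the filesystem. Only when the spellings differ do we
    pay for symlink resolution, so two different spellings of the same
    file still compare equal.
    """
    if a == b:
        return 0
    resolved_a = os.path.realpath(a)
    resolved_b = os.path.realpath(b)
    if resolved_a == resolved_b:
        return 0
    return -1 if resolved_a < resolved_b else 1


# Identical spellings compare equal immediately; "/tmp/sub/../a.c"
# resolves to "/tmp/a.c", so the two spellings also compare equal.
assert compare_paths("/tmp/a.c", "/tmp/a.c") == 0
assert compare_paths("/tmp/sub/../a.c", "/tmp/a.c") == 0
```

This keeps the common case cheap while still matching symlinked spellings of the same file, which is the behavior the patch above is after.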


Re: [lldb-dev] [BUG?] Confusion between translation units?

2015-10-21 Thread Tamas Berghammer via lldb-dev
I have seen very similar error messages when debugging an application
compiled with fission (split/dwo) debug info on Linux with a release
version of LLDB compiled from ToT. When I tested the same with a debug or
a release+assert build, I hit some assertion inside clang. It might be
worth checking whether the same is happening in your case, as it might
help find the root cause.

In my case the issue is that we somehow end up with two FieldDecl objects
for a given field inside one of the CXXRecordDecl objects, and then when
we are doing a pointer-based lookup we go wrong. I haven't figured out why
it is happening and haven't managed to reproduce it reliably either, but I
plan to look into it in the near future if nobody beats me to it.

Tamas

On Wed, Oct 21, 2015 at 4:46 PM Ramkumar Ramachandra via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> So first, an addendum: I found a way to make the project build without
> using a symlink, and use a direct reference instead. The problem still
> persists. It may be that symlink is one of the problems, but it is
> certainly not the only problem.
>
> On Tue, Oct 20, 2015 at 8:22 PM, Greg Clayton  wrote:
> > int
> > Declaration::Compare(const Declaration& a, const Declaration& b)
> > {
> >     int result = FileSpec::Compare(a.m_file, b.m_file, true);
> >     if (result)
>
> Wait, won't FileSpec::Compare be true iff a.m_file is the same as
> b.m_file (excluding symlink resolution)? If so, why are we putting the
> symlink-checking logic in the true branch of the original
> FileSpec::Compare? Aren't we expanding the scope of what we match,
> instead of narrowing it?
>
> > {
> >     int symlink_result = result;
> >     if (a.m_file.GetFilename() == b.m_file.GetFilename())
> >     {
> >         // Check if the directories in a and b are symlinks to each other
> >         FileSpec resolved_a;
> >         FileSpec resolved_b;
> >         if (FileSystem::ResolveSymbolicLink(a.m_file, resolved_a).Success() &&
> >             FileSystem::ResolveSymbolicLink(b.m_file, resolved_b).Success())
> >         {
> >             symlink_result = FileSpec::Compare(resolved_a, resolved_b, true);
>
> I'm confused. Shouldn't the logic be "check literal equality; if true,
> return immediately; if not, check equality with symlink resolution"?
>
> >             }
> >         }
> >         if (symlink_result != 0)
> >             return symlink_result;
> >     }
> >     if (a.m_line < b.m_line)
> >         return -1;
> >     else if (a.m_line > b.m_line)
> >         return 1;
> > #ifdef LLDB_ENABLE_DECLARATION_COLUMNS
> >     if (a.m_column < b.m_column)
> >         return -1;
> >     else if (a.m_column > b.m_column)
> >         return 1;
> > #endif
> >     return 0;
> > }
>
> Here's my version of the patch, although I'm not sure when the code
> will be reached.
>
> int
> Declaration::Compare(const Declaration& a, const Declaration& b)
> {
>     int result = FileSpec::Compare(a.m_file, b.m_file, true);
>     if (result)
>         return result;
>     if (a.m_file.GetFilename() == b.m_file.GetFilename()) {
>         // Check if one of the directories is a symlink to the other
>         int symlink_result = result;
>         FileSpec resolved_a;
>         FileSpec resolved_b;
>         if (FileSystem::ResolveSymbolicLink(a.m_file, resolved_a).Success() &&
>             FileSystem::ResolveSymbolicLink(b.m_file, resolved_b).Success())
>         {
>             symlink_result = FileSpec::Compare(resolved_a, resolved_b, true);
>             if (symlink_result)
>                 return symlink_result;
>         }
>     }
>     if (a.m_line < b.m_line)
>         return -1;
>     else if (a.m_line > b.m_line)
>         return 1;
> #ifdef LLDB_ENABLE_DECLARATION_COLUMNS
>     if (a.m_column < b.m_column)
>         return -1;
>     else if (a.m_column > b.m_column)
>         return 1;
> #endif
>     return 0;
> }
>
> If you're confident that this solves a problem, I can send it as a
> code review or something (and set up git-svn, sigh).


Re: [lldb-dev] lldb tests and tear down hooks

2015-10-21 Thread Zachary Turner via lldb-dev
Yea, that's what I think too.  I think this mechanism was probably invented
just to save some code and promote reusability, but in practice it leads to
these kinds of problems.

On Wed, Oct 21, 2015 at 2:07 AM Pavel Labath  wrote:

> I think we can remove these, provided there is a way to mimic the
> functionality they are used for now, which I think shouldn't be hard.
> Anything which was set up in the setUp() method should be undone in
> tearDown(). Anything which was set up in the test method can be
> undone using a try-finally block. Is there a use case not covered by
> this?
>
> pl
>
> On 21 October 2015 at 04:47, Zachary Turner via lldb-dev
>  wrote:
> > There's a subtle bug that is pervasive throughout the test suite.
> > Consider the following seemingly innocent test class.
> >
> > class MyTest(TestBase):
> >     def setUp(self):
> >         TestBase.setUp(self)                       #1
> >
> >         # Do some stuff                            #2
> >         self.addTearDownHook(lambda: self.foo())   #3
> >
> >     def test_interesting_stuff(self):
> >         pass
> >
> > Here's the problem.  As a general principle, cleanup needs to happen in
> > reverse order from initialization.  That's why, if we had a tearDown()
> > method, it would probably look something like this:
> >
> > def tearDown(self):
> >     # Clean up some stuff      #2
> >
> >     TestBase.tearDown(self)    #1
> >
> > This follows the pattern in other languages like C++, for example, where
> > construction goes from base -> derived, but destruction goes from
> derived ->
> > base.
> >
> > But if you add these tear down hooks into the mix, it violates that. Tear
> > down hooks get invoked as part of TestBase.tearDown(), so in the above
> > example the initialization order is 1 -> 2 -> 3 but the teardown order is
> > 2 -> 1 -> 3 (or 2 -> 3 -> 1, or none of the above, depending on where
> > inside of TestBase.tearDown() the hooks get invoked).
> >
> > To make matters worse, tear down hooks can be added from arbitrary
> > points in a test's run, not just during setup.
> >
> > The only way I can see to fix this is to delete this tearDownHook
> > mechanism entirely.  Anyone who wants it can easily reimplement this in
> > the individual test by just keeping their own list of lambdas in the
> > derived class, overriding tearDown(), and running through their own list
> > in reverse order before calling TestBase.tearDown().
> >
> > I don't intend to do this work right now, but I would like to do it in
> > the future, so I want to throw this out there and see if anyone has
> > thoughts on it.
> >
>
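The per-test reimplementation sketched in the quoted message (keep your own list of lambdas, run them in reverse before the base tearDown) might look like this; TestBase here is a minimal stand-in, not the real lldb class:

```python
class TestBase:
    def setUp(self):
        pass

    def tearDown(self):
        pass


order = []


class MyTest(TestBase):
    def setUp(self):
        TestBase.setUp(self)
        # Keep our own list of cleanups instead of using addTearDownHook().
        self._cleanups = []
        self._cleanups.append(lambda: order.append("undo 2"))
        self._cleanups.append(lambda: order.append("undo 3"))

    def tearDown(self):
        # Run our cleanups in reverse order of registration...
        for cleanup in reversed(self._cleanups):
            cleanup()
        # ...and only then let the base class tear down.
        TestBase.tearDown(self)


t = MyTest()
t.setUp()
t.tearDown()
# Cleanup runs as the exact reverse of setup: "undo 3" before "undo 2",
# both before the base-class teardown.
```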


Re: [lldb-dev] Inquiry for performance monitors

2015-10-21 Thread Zachary Turner via lldb-dev
There are two different kinds of performance counters: OS performance
counters and CPU performance counters.  It sounds like you're talking about
the latter, but it's worth considering whether this could be designed in a
way to support both (i.e. even if you don't do both yourself, at least make
the machinery reusable and apply to both for when someone else wanted to
come through and add OS perf counters).

There is also the question of this third party library.  Do we take a hard
dependency on libipt (probably a non-starter), or only use it if it's
available (much better)?

As Pavel said, how are you planning to present the information to the
user?  Through some sort of top level command like "perfcount
instructions_retired"?
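For concreteness, here is a rough sketch of what reading a CPU counter such as "instructions retired" through perf_event_open looks like. This is not LLDB code: it is Linux/x86-64 only, the constants are hand-copied from linux/perf_event.h, and it returns None wherever the interface is unavailable.

```python
import ctypes
import os
import platform
import struct
from fcntl import ioctl

PERF_TYPE_HARDWARE = 0
PERF_COUNT_HW_INSTRUCTIONS = 1
ATTR_FLAG_DISABLED = 1 << 0        # start the counter disabled
ATTR_FLAG_EXCLUDE_KERNEL = 1 << 5  # count user-space instructions only
ATTR_FLAG_EXCLUDE_HV = 1 << 6
PERF_EVENT_IOC_ENABLE = 0x2400
PERF_EVENT_IOC_DISABLE = 0x2401
SYS_perf_event_open = 298          # x86-64 syscall number

def count_instructions(workload):
    """Run workload() and return its retired-instruction count, or None
    if perf_event_open is unavailable (wrong arch, permissions, ...)."""
    if platform.system() != "Linux" or platform.machine() != "x86_64":
        return None
    # 64-byte perf_event_attr (PERF_ATTR_SIZE_VER0): type, size, config,
    # sample_period, sample_type, read_format, flags, wakeup, bp_type, bp_addr.
    attr = struct.pack("IIQQQQQIIQ",
                       PERF_TYPE_HARDWARE, 64, PERF_COUNT_HW_INSTRUCTIONS,
                       0, 0, 0,
                       ATTR_FLAG_DISABLED | ATTR_FLAG_EXCLUDE_KERNEL |
                       ATTR_FLAG_EXCLUDE_HV,
                       0, 0, 0)
    libc = ctypes.CDLL(None, use_errno=True)
    buf = ctypes.create_string_buffer(attr)
    # pid=0 (this process), cpu=-1 (any cpu), group_fd=-1, flags=0
    fd = libc.syscall(SYS_perf_event_open, buf, 0, -1, -1, 0)
    if fd < 0:
        return None
    try:
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0)
        workload()
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0)
        return struct.unpack("Q", os.read(fd, 8))[0]
    finally:
        os.close(fd)

if __name__ == "__main__":
    n = count_instructions(lambda: sum(range(100000)))
    print("unavailable" if n is None else "retired %d instructions" % n)
```

Intel PT uses the same perf_event_open interface, but with an mmap'd AUX buffer for the trace data rather than a simple counter read.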

On Wed, Oct 21, 2015 at 8:16 AM Pavel Labath via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> [ Moving this discussion back to the list. I pressed the wrong button
> when replying.]
>
> Thanks for the explanation Ravi. It sounds like a very useful feature
> indeed. I've found a reference to the debugserver profile data in
> GDBRemoteCommunicationClient.cpp:1276, so maybe that will help with
> your investigation. Maybe also someone more knowledgeable can explain
> what those A packets are used for (?).
>
>
> On 21 October 2015 at 15:48, Ravitheja Addepally
>  wrote:
> > Hi,
> >Thanks for your reply. Some of the future processors to be released by
> > Intel have this hardware support for recording the instructions that were
> > executed by the processor and this recording process is also quite fast
> and
> > does not add too much computational load. Now this hardware is made
> > accessible via the perf_event_interface where one could map a region of
> > memory for this purpose by passing it as an argument to this
> > perf_event_interface. The recorded instructions are then written to the
> > memory region assigned. Now this is basically the raw information, which
> can
> > be obtained from the hardware. It can be interpreted and presented to the
> > user in the following ways ->
> >
> > 1) Instruction history - where the user gets basically a list of all
> > instructions that were executed
> > 2) Function Call History - It is also possible to get a list of all the
> > functions called in the inferior
> > 3) Reverse Debugging with limited information - In GDB this is only the
> > functions executed.
> >
> > This raw information also needs to be decoded (even before you can
> disassemble
> > it ), there is already a library released by Intel called libipt which
> can
> > do that. At the moment we plan to work with Instruction History.
> > I will look into the debugserver infrastructure and get back to you. I
> guess
> > for the server client communication we would rely on packets only. In
> case
> > of concerns about too much data being transferred, we can limit the
> number
> > of entries we report because anyway the amount of data recorded is too
> big
> > to present all at once so we would have to resort to something like a
> > viewport.
> >
> > Since a lot of instructions can be recorded this way, the function call
> > history can be quite useful for debugging and especially since it is a
> lot
> > faster to collect function traces this way.
> >
> > -ravi
> >
> > On Wed, Oct 21, 2015 at 3:14 PM, Pavel Labath  wrote:
> >>
> >> Hi,
> >>
> >> I am not really familiar with the perf_event interface (and I suspect
> >> others aren't also), so it might help if you explain what kind of
> >> information do you plan to collect from there.
> >>
> >> As for the PtraceWrapper question, I think that really depends on
> >> bigger design decisions. My two main questions for a feature like this
> >> would be:
> >> - How are you going to present this information to the user? (I know
> >> debugserver can report some performance data... Have you looked into
> >> how that works? Do you plan to reuse some parts of that
> >> infrastructure?)
> >> - How will you get the information from the server to the client?
> >>
> >> pl
> >>
> >>
> >> On 21 October 2015 at 13:41, Ravitheja Addepally via lldb-dev
> >>  wrote:
> >> > Hello,
> >> >I want to implement support for reading Performance measurement
> >> > information using the perf_event_open system calls. The motive is to
> add
> >> > support for Intel PT hardware feature, which is available through the
> >> > perf_event interface. I was thinking of implementing a new Wrapper
> like
> >> > PtraceWrapper in NativeProcessLinux files. My query is that, is this a
> >> > correct place to start or not ? in case not, could someone suggest me
> >> > another place to begin with ?
> >> >
> >> > BR,
> >> > A Ravi Theja
> >> >
> >> >
> >> > ___
> >> > lldb-dev mailing list
> >> > lldb-dev@lists.llvm.org
> >> > http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
> >> >
> >
> >
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev

Re: [lldb-dev] RFC: Making unit tests run by default on ninja check-lldb

2015-10-21 Thread Ying Chen via lldb-dev
Yes, the output of dotest.py goes through the LitTestCommand parser.
The parser is matching for "XPASS", but dotest output is using "UNEXPECTED
SUCCESS". :)

Thanks,
Ying

On Tue, Oct 20, 2015 at 6:34 PM, Todd Fiala  wrote:

> Hi Ying,
>
> Our dotest.py lldb test results go through that lit test parser system?  I
> see XPASS happen frequently (and in fact is my whole reason for starting a
> thread on getting rid of flakey tests, or making them run enough times so
> that their output can be a useful signal rather than useless).  According
> to this script, an XPASS would be listed as failure.  I'm not seeing us
> treat XPASS as failures AFAICT.
>
> Are we just saying that our gtests get processed by that?
>
> -Todd
>
> On Tue, Oct 20, 2015 at 4:51 PM, Ying Chen via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> Hi Zachary,
>>
>> The big unknown here is how to make the buildbots understand unit test
>>> failures and trigger a failure when ninja check-lldb-unit fails.
>>>
>>
>> There are two conditions under which buildbot will identify a test step as a failure.
>> One is that the command has non-zero return code.
>> The other is that there are failing codes in the stdout message. (Refer to
>> LitTestCommand::evaluateCommand.)
>> Failing codes are defined as:
>> failingCodes = set(['FAIL', 'XPASS', 'KPASS', 'UNRESOLVED',
>> 'TIMEOUT'])
>>
>> So if the failures are printed out as '^FAIL: (.*) \(.*\)', buildbot will
>> understand it's failing even if ninja check-lldb-unit returns 0.
>> Or we could add some logic to the above file to handle the output of unit
>> test.
>>
>> Thanks,
>> Ying
>>
>> ___
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>
>>
>
>
> --
> -Todd
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [BUG?] Confusion between translation units?

2015-10-21 Thread Greg Clayton via lldb-dev

> On Oct 21, 2015, at 8:45 AM, Ramkumar Ramachandra  wrote:
> 
> So first, an addendum: I found a way to make the project build without
> using a symlink, and use a direct reference instead. The problem still
> persists. It may be that symlink is one of the problems, but it is
> certainly not the only problem.
> 
> On Tue, Oct 20, 2015 at 8:22 PM, Greg Clayton  wrote:
>> int
>> Declaration::Compare(const Declaration& a, const Declaration& b)
>> {
>>int result = FileSpec::Compare(a.m_file, b.m_file, true);
>>if (result)
> 
> Wait, won't FileSpec::Compare be true iff a.m_file is the same as
> b.m_file (excluding symlink resolution)?

No, it returns -1 for less than, 0 for equal and +1 for greater than.
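The point of confusion is that these are ordering functions, not equality predicates. A minimal Python sketch of the semantics under discussion, including the symlink fallback (compare_paths is an illustrative name, not LLDB API):

```python
import os

def compare_paths(a, b):
    # Three-way compare like FileSpec::Compare:
    # -1 (less than), 0 (equal), +1 (greater than).
    if a == b:
        return 0
    # Literal compare differed; fall back to resolving symlinks and
    # '.'/'..' components. This is the expensive, stat()-ing step.
    ra, rb = os.path.realpath(a), os.path.realpath(b)
    if ra == rb:
        return 0
    return -1 if ra < rb else 1
```

Note the return value is an ordering for sorting, not a boolean, which is why "if (result)" means "if the files differ", not "if they are the same".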

> If so, why are we putting the symlink-checking logic in the true branch of 
> the original
> FileSpec::Compare?

My concern is that it is expensive to be stat'ing files all the time and that 
it dirties the file cache on your system. 

Also many times the FileSpec objects are remote paths. What happens when you 
build your project and send someone the dSYM file? You sent me your dSYM file 
and I have no way to know if your two types were the same since they had 
different paths and I have no way to resolve your symbolic links. I would 
consider the types different.

We could modify FileSpec::Compare, but I would like to try to limit this to 
only happen for symbolic links if we do. We have also had problems with paths 
like:

/tmp/foo/../bar.txt 
/tmp/bar.txt

If they aren't resolved they won't compare correctly. Same with:

/tmp/./bar.txt 
/tmp/bar.txt

We don't want to go changing the FileSpec object on people by resolving the 
path all the time, because sometimes people don't want the path changing since 
they might have other FileSpec objects that are encoded with "/tmp/./bar.txt" 
and they will expect a FileSpec object they create to maintain what they put 
into it. So if we can't update the FileSpec objects, then our compares would 
constantly have to try to "stat()" objects that may or may not have come from 
the current system. So we actually can't resolve them because of that. 
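Greg's "/tmp/foo/../bar.txt" example can be reproduced with Python's purely lexical path normalization, which illustrates why the two unresolved forms fail a literal compare even though no filesystem access is needed to see they name the same file:

```python
import posixpath

# normpath collapses '.' and '..' textually, without stat()'ing anything.
# (Lexical collapsing can even be wrong when 'foo' is a symlink, which is
# one reason resolving paths behind the caller's back is risky.)
print(posixpath.normpath("/tmp/foo/../bar.txt"))  # -> /tmp/bar.txt
print(posixpath.normpath("/tmp/./bar.txt"))       # -> /tmp/bar.txt

# A literal string compare of the unresolved forms still fails:
print("/tmp/foo/../bar.txt" == "/tmp/bar.txt")    # -> False
```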

> Aren't we expanding the scope of what we match,
> instead of narrowing it?



> 
>>{
>>int symlink_result = result;
>>if (a.m_file.GetFilename() == b.m_file.GetFilename())
>>{
>>// Check if the directories in a and b are symlinks to each other
>>FileSpec resolved_a;
>>FileSpec resolved_b;
>>if (FileSystem::ResolveSymbolicLink(a.m_file, 
>> resolved_a).Success() &&
>>FileSystem::ResolveSymbolicLink(b.m_file, 
>> resolved_b).Success())
>>{
>>symlink_result = FileSpec::Compare(resolved_a, resolved_b, 
>> true);
> 
> I'm confused. Shouldn't the logic be "check literal equality; if true,
> return immediately; if not, check equality with symlink resolution"?

These are compare routines that return -1, 0 or 1.

> 
>>}
>>}
>>if (symlink_result != 0)
>>return symlink_result;
>>}
>>if (a.m_line < b.m_line)
>>return -1;
>>else if (a.m_line > b.m_line)
>>return 1;
>> #ifdef LLDB_ENABLE_DECLARATION_COLUMNS
>>if (a.m_column < b.m_column)
>>return -1;
>>else if (a.m_column > b.m_column)
>>return 1;
>> #endif
>>return 0;
>> }
> 
> Here's my version of the patch, although I'm not sure when the code
> will be reached.
> 
> int
> Declaration::Compare(const Declaration& a, const Declaration& b)
> {
>int result = FileSpec::Compare(a.m_file, b.m_file, true);
>if (result)
>return result;

The code in the if statement below is useless. If we reach this location, 
"result" is zero and the two file specs are equal.

>if (a.m_file.GetFilename() == b.m_file.GetFilename()) {
>// Check if one of the directories is a symlink to the other
>int symlink_result = result;
>FileSpec resolved_a;
>FileSpec resolved_b;
>if (FileSystem::ResolveSymbolicLink(a.m_file, resolved_a).Success() &&
>FileSystem::ResolveSymbolicLink(b.m_file, resolved_b).Success())
>{
>symlink_result = FileSpec::Compare(resolved_a, resolved_b, true);
>if (symlink_result)
>return symlink_result;
>}
>}
>if (a.m_line < b.m_line)
>return -1;
>else if (a.m_line > b.m_line)
>return 1;
> #ifdef LLDB_ENABLE_DECLARATION_COLUMNS
>if (a.m_column < b.m_column)
>return -1;
>else if (a.m_column > b.m_column)
>return 1;
> #endif
>return 0;
> }
> 
> If you're confident that this solves a problem, I can send it as a
> code review or something (and set up git-svn, sigh).

We actually can't really do this because we might have a dSYM file from another 
system that we are debugging locally so we can't actually rely on symlink 
resolving. We could try to ignore the path to the file and just make the decl 
fi

Re: [lldb-dev] lldb tests and tear down hooks

2015-10-21 Thread Greg Clayton via lldb-dev
I think it was mostly done to provide an exception safe way to cleanup stuff 
without having to override TestBase.tearDown(). I am guessing this cleanup 
happens on TestCase.tearDown() and not after the current test case right? 

I could see it being used to cleanup after a single test case in case you have:

class MyTest(TestBase):
    def test_1(self):
        self.addTearDownHook(lambda: self.foo())
        raise ValueError
    def test_2(self):
        self.addTearDownHook(lambda: self.bar())
        raise ValueError


Are these tearDowns happening per test function, or during class setup/teardown?

> On Oct 21, 2015, at 9:33 AM, Zachary Turner via lldb-dev 
>  wrote:
> 
> Yea, that's what I think too.  I think this mechanism was probably invented 
> to just to save some code and promote reusability, but in practice leads to 
> these kinds of problems.
> 
> On Wed, Oct 21, 2015 at 2:07 AM Pavel Labath  wrote:
> I think we can remove these, provided there is a way to mimic the
> functionality they are used for now, which I think shouldn't be hard.
> Anything which was set up in the setUp() method should be undone in
> tearDown(). Anything which was set up in the test method, can be
> undone using a try-finally block. Is there a use case not covered by
> this?
> 
> pl
> 
> On 21 October 2015 at 04:47, Zachary Turner via lldb-dev
>  wrote:
> > There's a subtle bug that is pervasive throughout the test suite.  Consider
> > the following seemingly innocent test class.
> >
> > class MyTest(TestBase):
> >     def setUp(self):
> >         TestBase.setUp(self)                        #1
> >
> >         # Do some stuff                             #2
> >         self.addTearDownHook(lambda: self.foo())    #3
> >
> >     def test_interesting_stuff(self):
> >         pass
> >
> > Here's the problem.  As a general principle, cleanup needs to happen in
> > reverse order from initialization.  That's why, if we had a tearDown()
> > method, it would probably look something like this:
> >
> > def tearDown(self):
> >     # Clean up some stuff                           #2
> >
> >     TestBase.tearDown(self)                         #1
> >
> > This follows the pattern in other languages like C++, for example, where
> > construction goes from base -> derived, but destruction goes from derived ->
> > base.
> >
> > But if you add these tear down hooks into the mix, it violates that.  tear
> > down hooks get invoked as part of TestBase.tearDown(), so in the above
> > example the initialization order is 1 -> 2 -> 3 but the teardown order is 2
> > -> 1 -> 3  (or 2 -> 3 -> 1, or none of the above depending on where inside
> > of TestBase.tearDown() the hooks get invoked).
> >
> > To make matters worse, tear down hooks can be added from arbitrary points in
> > a test's run, not just during setup.
> >
> > The only way I can see to fix this is to delete this tearDownHook mechanism
> > entirely.  Anyone who wants it can easily reimplement this in the individual
> > test by just keeping their own list of lambdas in the derived class,
> > overriding tearDown(), and running through their own list in reverse order
> > before calling TestBase.tearDown().
> >
> > I don't intend to do this work right now, but I would like to do it in the
> > future, so I want to throw this out there and see if anyone has thoughts on
> > it.
> >
> > ___
> > lldb-dev mailing list
> > lldb-dev@lists.llvm.org
> > http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
> >
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Inquiry for performance monitors

2015-10-21 Thread Greg Clayton via lldb-dev
IMHO the best way to provide this information is to implement reverse debugging 
packets in a GDB server (lldb-server). You would enable this feature via some 
packet to lldb-server, which would start gathering data that keeps the 
last N instructions run by all threads in a buffer that gets overwritten. 
The lldb-server enables it and gives a buffer to the perf_event_interface(). 
Then clients can ask the lldb-server to step back in any thread. Only when the 
data is requested do we actually use the data to implement the reverse stepping.

Another way to do this would be to use a python based command that can be added 
to any target that supports this. The plug-in could install a set of LLDB 
commands. To see how to create new lldb command line commands in python, see 
the section named "CREATE A NEW LLDB COMMAND USING A PYTHON FUNCTION" on the 
http://lldb.llvm.org/python-reference.html web page.

Then you can have some commands like:

intel-pt-start
intel-pt-dump
intel-pt-stop

Each command could have options and arguments as desired. The "intel-pt-start" 
command could make an expression call to enable the feature in the target by 
running an expression that makes some perf_event_interface calls that would 
allocate some memory and hand it to the Intel PT stuff. The "intel-pt-dump" 
could just give a raw dump of all history for one or more threads (again, add 
options and arguments as needed to this command). The python code could bridge 
to C and use the intel libraries that know how to process the data.

If this all goes well we can think about building it into LLDB as a built in 
command.
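A skeleton of the module Greg describes might look like the following. The intel-pt-* command names and bodies are placeholders; only the command-function signature and the "command script add" registration mechanism come from the python-reference page.

```python
# intel_pt_cmds.py -- sketch of a loadable LLDB command module.

def intel_pt_start(debugger, command, result, internal_dict):
    # A real implementation would run an expression in the target that
    # calls perf_event_open() and hands the mmap'd buffer to Intel PT.
    result.PutCString("intel-pt-start: not implemented in this sketch")

def __lldb_init_module(debugger, internal_dict):
    # Invoked automatically when the module is imported inside lldb.
    debugger.HandleCommand(
        "command script add -f intel_pt_cmds.intel_pt_start intel-pt-start")

if __name__ == "__main__":
    print("Load inside lldb with: command script import intel_pt_cmds.py")
```

Inside lldb, "(lldb) command script import intel_pt_cmds.py" would then make "intel-pt-start" available as a command.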


> On Oct 21, 2015, at 9:50 AM, Zachary Turner via lldb-dev 
>  wrote:
> 
> There are two different kinds of performance counters: OS performance 
> counters and CPU performance counters.  It sounds like you're talking about 
> the latter, but it's worth considering whether this could be designed in a 
> way to support both (i.e. even if you don't do both yourself, at least make 
> the machinery reusable and applicable to both for when someone else wants to come 
> through and add OS perf counters).
> 
> There is also the question of this third party library.  Do we take a hard 
> dependency on libipt (probably a non-starter), or only use it if it's 
> available (much better)?
> 
> As Pavel said, how are you planning to present the information to the user?  
> Through some sort of top level command like "perfcount instructions_retired"?
> 
> On Wed, Oct 21, 2015 at 8:16 AM Pavel Labath via lldb-dev 
>  wrote:
> [ Moving this discussion back to the list. I pressed the wrong button
> when replying.]
> 
> Thanks for the explanation Ravi. It sounds like a very useful feature
> indeed. I've found a reference to the debugserver profile data in
> GDBRemoteCommunicationClient.cpp:1276, so maybe that will help with
> your investigation. Maybe also someone more knowledgeable can explain
> what those A packets are used for (?).
> 
> 
> On 21 October 2015 at 15:48, Ravitheja Addepally
>  wrote:
> > Hi,
> >Thanks for your reply. Some of the future processors to be released by
> > Intel have this hardware support for recording the instructions that were
> > executed by the processor and this recording process is also quite fast and
> > does not add too much computational load. Now this hardware is made
> > accessible via the perf_event_interface where one could map a region of
> > memory for this purpose by passing it as an argument to this
> > perf_event_interface. The recorded instructions are then written to the
> > memory region assigned. Now this is basically the raw information, which can
> > be obtained from the hardware. It can be interpreted and presented to the
> > user in the following ways ->
> >
> > 1) Instruction history - where the user gets basically a list of all
> > instructions that were executed
> > 2) Function Call History - It is also possible to get a list of all the
> > functions called in the inferior
> > 3) Reverse Debugging with limited information - In GDB this is only the
> > functions executed.
> >
> > This raw information also needs to be decoded (even before you can disassemble
> > it ), there is already a library released by Intel called libipt which can
> > do that. At the moment we plan to work with Instruction History.
> > I will look into the debugserver infrastructure and get back to you. I guess
> > for the server client communication we would rely on packets only. In case
> > of concerns about too much data being transferred, we can limit the number
> > of entries we report because anyway the amount of data recorded is too big
> > to present all at once so we would have to resort to something like a
> > viewport.
> >
> > Since a lot of instructions can be recorded this way, the function call
> > history can be quite useful for debugging and especially since it is a lot
> > faster to collect function traces this way.
> >
> > -ravi
> >
> > On Wed, Oct 21, 2015 at 3:14 PM, Pavel Labath

Re: [lldb-dev] Inquiry for performance monitors

2015-10-21 Thread Greg Clayton via lldb-dev
One main benefit of doing this externally is that it allows this to be done remotely 
over any debugger connection. If you can run expressions to 
enable/disable/setup the memory buffer/access the buffer contents, then you 
don't need to add code into the debugger to actually do this.

Greg

> On Oct 21, 2015, at 11:54 AM, Greg Clayton  wrote:
> 
> IMHO the best way to provide this information is to implement reverse 
> debugging packets in a GDB server (lldb-server). You would enable this feature 
> via some packet to lldb-server, which would start gathering data that 
> keeps the last N instructions run by all threads in a buffer that gets 
> overwritten. The lldb-server enables it and gives a buffer to the 
> perf_event_interface(). Then clients can ask the lldb-server to step back in 
> any thread. Only when the data is requested do we actually use the data to 
> implement the reverse stepping.
> 
> Another way to do this would be to use a python based command that can be 
> added to any target that supports this. The plug-in could install a set of 
> LLDB commands. To see how to create new lldb command line commands in python, 
> see the section named "CREATE A NEW LLDB COMMAND USING A PYTHON FUNCTION" on 
> the http://lldb.llvm.org/python-reference.html web page.
> 
> Then you can have some commands like:
> 
> intel-pt-start
> intel-pt-dump
> intel-pt-stop
> 
> Each command could have options and arguments as desired. The 
> "intel-pt-start" command could make an expression call to enable the feature 
> in the target by running an expression that makes some 
> perf_event_interface calls that would allocate some memory and hand it to the 
> Intel PT stuff. The "intel-pt-dump" could just give a raw dump of all history 
> for one or more threads (again, add options and arguments as needed to this 
> command). The python code could bridge to C and use the intel libraries that 
> know how to process the data.
> 
> If this all goes well we can think about building it into LLDB as a built in 
> command.
> 
> 
>> On Oct 21, 2015, at 9:50 AM, Zachary Turner via lldb-dev 
>>  wrote:
>> 
>> There are two different kinds of performance counters: OS performance 
>> counters and CPU performance counters.  It sounds like you're talking about 
>> the latter, but it's worth considering whether this could be designed in a 
>> way to support both (i.e. even if you don't do both yourself, at least make 
>> the machinery reusable and applicable to both for when someone else wants to 
>> come through and add OS perf counters).
>> 
>> There is also the question of this third party library.  Do we take a hard 
>> dependency on libipt (probably a non-starter), or only use it if it's 
>> available (much better)?
>> 
>> As Pavel said, how are you planning to present the information to the user?  
>> Through some sort of top level command like "perfcount instructions_retired"?
>> 
>> On Wed, Oct 21, 2015 at 8:16 AM Pavel Labath via lldb-dev 
>>  wrote:
>> [ Moving this discussion back to the list. I pressed the wrong button
>> when replying.]
>> 
>> Thanks for the explanation Ravi. It sounds like a very useful feature
>> indeed. I've found a reference to the debugserver profile data in
>> GDBRemoteCommunicationClient.cpp:1276, so maybe that will help with
>> your investigation. Maybe also someone more knowledgeable can explain
>> what those A packets are used for (?).
>> 
>> 
>> On 21 October 2015 at 15:48, Ravitheja Addepally
>>  wrote:
>>> Hi,
>>>   Thanks for your reply. Some of the future processors to be released by
>>> Intel have this hardware support for recording the instructions that were
>>> executed by the processor and this recording process is also quite fast and
>>> does not add too much computational load. Now this hardware is made
>>> accessible via the perf_event_interface where one could map a region of
>>> memory for this purpose by passing it as an argument to this
>>> perf_event_interface. The recorded instructions are then written to the
>>> memory region assigned. Now this is basically the raw information, which can
>>> be obtained from the hardware. It can be interpreted and presented to the
>>> user in the following ways ->
>>> 
>>> 1) Instruction history - where the user gets basically a list of all
>>> instructions that were executed
>>> 2) Function Call History - It is also possible to get a list of all the
>>> functions called in the inferior
>>> 3) Reverse Debugging with limited information - In GDB this is only the
>>> functions executed.
>>> 
>>> This raw information also needs to be decoded (even before you can disassemble
>>> it ), there is already a library released by Intel called libipt which can
>>> do that. At the moment we plan to work with Instruction History.
>>> I will look into the debugserver infrastructure and get back to you. I guess
>>> for the server client communication we would rely on packets only. In case
>>> of concerns about too much data being transferred, we can limit the number

Re: [lldb-dev] lldb tests and tear down hooks

2015-10-21 Thread Zachary Turner via lldb-dev
Well you can see them getting added via self.addTearDownHook(), so that
means they're called through an instance.  Specifically, it happens in
Base.tearDown(self), so it's basically identical (in concept) to if the
relevant handlers were called in the implementation of MyTest.tearDown(),
but different in order.

I agree that it's useful in principle to be able to do what you suggest in
your example, but there's just no way to guarantee the right ordering if
you let the base class run the handlers.  If there actually *were* a
tearDown() function in your example, to be correct it would need to look
like this:

class MyTest(TestBase):
    def tearDown(self):
        # run the teardown hooks
        # Do the inverse of setUp()
        super(MyTest, self).tearDown()

    def test_1(self):
        self.addTearDownHook(lambda: self.foo())
        raise ValueError

    def test_2(self):
        self.addTearDownHook(lambda: self.bar())
        raise ValueError

One possible solution is to shift burden of maintaining the hooks list to
the individual test case.  E.g.

class MyTest(TestBase):
    def setUp(self):
        super(MyTest, self).setUp()
        self.hooks = []

    def tearDown(self):
        # runTearDownHooks could be implemented in TestBase, since now we
        # can call it with our list at the right time.
        self.runTearDownHooks(self.hooks)
        # Do the inverse of setUp()
        super(MyTest, self).tearDown()

    def test_1(self):
        self.hooks.append(lambda: self.foo())
        raise ValueError

    def test_2(self):
        self.hooks.append(lambda: self.bar())
        raise ValueError

Almost no extra code to write, and should be bulletproof.
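The LIFO ordering both messages are circling around is one reversed() call. A standalone sketch (HookMixin is an illustrative name, not part of TestBase):

```python
class HookMixin(object):
    """Per-test hook list run in reverse (LIFO) order, so cleanup mirrors
    initialization: base -> derived setup, derived -> base teardown."""
    def setUp(self):
        self.hooks = []

    def addHook(self, fn):
        self.hooks.append(fn)

    def tearDown(self):
        for fn in reversed(self.hooks):  # last registered runs first
            fn()

# Demo: hooks registered in order 1, 2 run back as 2, 1.
log = []
t = HookMixin()
t.setUp()
t.addHook(lambda: log.append(1))
t.addHook(lambda: log.append(2))
t.tearDown()
print(log)  # -> [2, 1]
```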

On Wed, Oct 21, 2015 at 11:41 AM Greg Clayton  wrote:

> I think it was mostly done to provide an exception safe way to cleanup
> stuff without having to override TestBase.tearDown(). I am guessing this
> cleanup happens on TestCase.tearDown() and not after the current test case
> right?
>
> I could see it being used to cleanup after a single test case in case you
> have:
>
> class MyTest(TestBase):
>     def test_1(self):
>         self.addTearDownHook(lambda: self.foo())
>         raise ValueError
>     def test_2(self):
>         self.addTearDownHook(lambda: self.bar())
>         raise ValueError
>
>
> Are these tearDowns happening per test function, or during class
> setup/teardown?
>
> > On Oct 21, 2015, at 9:33 AM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >
> > Yea, that's what I think too.  I think this mechanism was probably
> invented to just to save some code and promote reusability, but in practice
> leads to these kinds of problems.
> >
> > On Wed, Oct 21, 2015 at 2:07 AM Pavel Labath  wrote:
> > I think we can remove these, provided there is a way to mimic the
> > functionality they are used for now, which I think shouldn't be hard.
> > Anything which was set up in the setUp() method should be undone in
> > tearDown(). Anything which was set up in the test method, can be
> > undone using a try-finally block. Is there a use case not covered by
> > this?
> >
> > pl
> >
> > On 21 October 2015 at 04:47, Zachary Turner via lldb-dev
> >  wrote:
> > > There's a subtle bug that is pervasive throughout the test suite.
> Consider
> > > the following seemingly innocent test class.
> > >
> > > class MyTest(TestBase):
> > >     def setUp(self):
> > >         TestBase.setUp(self)                        #1
> > >
> > >         # Do some stuff                             #2
> > >         self.addTearDownHook(lambda: self.foo())    #3
> > >
> > >     def test_interesting_stuff(self):
> > >         pass
> > >
> > > Here's the problem.  As a general principle, cleanup needs to happen in
> > > reverse order from initialization.  That's why, if we had a tearDown()
> > > method, it would probably look something like this:
> > >
> > > def tearDown(self):
> > >     # Clean up some stuff                           #2
> > >
> > >     TestBase.tearDown(self)                         #1
> > >
> > > This follows the pattern in other languages like C++, for example,
> where
> > > construction goes from base -> derived, but destruction goes from
> derived ->
> > > base.
> > >
> > > But if you add these tear down hooks into the mix, it violates that.
> tear
> > > down hooks get invoked as part of TestBase.tearDown(), so in the above
> > > example the initialization order is 1 -> 2 -> 3 but the teardown order
> is 2
> > > -> 1 -> 3  (or 2 -> 3 -> 1, or none of the above depending on where
> inside
> > > of TestBase.tearDown() the hooks get invoked).
> > >
> > > To make matters worse, tear down hooks can be added from arbitrary
> points in
> > > a test's run, not just during setup.
> > >
> > > The only way I can see to fix this is to delete this tearDownHook
> mechanism
> > > entirely.  Anyone who wants it can easily reimplement this in the
> individual
> > > test by just keeping their own list of lambdas in the derived class,
> > > overriding tearDown(), and running through their own list in reverse
> order
> > > before calling TestBase.tearDown().
> > >
> > > I don't intend to do this work right now, but I would like to do it in
> the
> > > future

[lldb-dev] Moving pexpect and unittest2 to lldb/third_party

2015-10-21 Thread Zachary Turner via lldb-dev
*TL;DR - Nobody has to do anything, this is just a heads up that a 400+
file CL is coming.*

IANAL, but I've been told by one that I need to move all third party code
used by LLDB to lldb/third_party.  Currently there is only one thing there:
the Python `six` module used for creating code that is portable across
Python 2 and Python 3.

The only other 2 instances that I'm aware of are pexpect and unittest2,
which are under lldb/test.  I've got some patches locally which move
pexpect and unittest2 to lldb/third_party.  I'll hold off on checking them
in for a bit to give people a chance to see this message first, because
otherwise you might be surprised when you see a CL with 400 files being
checked in.

Nobody will have to do anything after this CL goes in, and everything
should continue to work exactly as it currently does.

The main reason for the churn is that pretty much every single test in LLDB
does something like this:

import unittest2

...

if __name__ == '__main__':
    import atexit
    lldb.SBDebugger.Initialize()
    atexit.register(lambda: lldb.SBDebugger.Terminate())
    unittest2.main()

This worked when unittest2 was a subfolder of test, but not when it's
somewhere else.  Since LLDB's python code is not organized into a standard
python package and we treat the scripts like dotest etc as standalone
scripts, the way I've made this work is by introducing a module called
lldb_shared under test which, when you import it, fixes up sys.path to
correctly add all the right locations under lldb/third_party.

So, every single test now needs a line at the top to import lldb_shared.
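A minimal sketch of what such an lldb_shared module might do. The directory layout under third_party (Python/module/<pkg>) is a guess for illustration, not the actual patch:

```python
# lldb_shared.py -- prepend lldb/third_party packages to sys.path on import.
import os
import sys

def _add_third_party_paths():
    base = os.path.dirname(os.path.abspath(__file__))
    third_party = os.path.join(base, os.pardir, "third_party",
                               "Python", "module")
    for pkg in ("six", "unittest2", "pexpect"):
        path = os.path.normpath(os.path.join(third_party, pkg))
        if path not in sys.path:
            sys.path.insert(0, path)

_add_third_party_paths()  # runs on first import; re-imports are no-ops
```

With this in place, "import lldb_shared" at the top of a test is enough to make "import unittest2" resolve from the new location.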

TBH I don't even know if we need this unittest2 stuff anymore (does anyone
even use it?)  but even if the answer is no, then that still means changing
every file to delete the import statement and the if __name__ ==
'__main__': block.

If there are no major concerns I plan to check this in by the end of the
day, or tomorrow.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Preliminary support for NetBSD

2015-10-21 Thread Kamil Rytarowski via lldb-dev

On 21.10.2015 12:47, Kamil Rytarowski via lldb-dev wrote:
> On 08.10.2015 12:21, Kamil Rytarowski via lldb-dev wrote:
>> On 05.10.2015 21:46, Todd Fiala wrote:
>>> Seems like a great idea.  (Ed, is that something you might be 
>>> able to review?)
> 
> 
>> The first patch is already proposed: 
>> http://reviews.llvm.org/D13334
> 
>>> Hopefully you have access to other platforms to test if it's 
>>> breaking anything?  (Most likely candidates would be FreeBSD
>>> and Linux/Android, I'd suspect, depending on how much you're
>>> having to change things).
> 
> 
>> I run NetBSD exclusively (desktop, development). My use of Linux or
>> FreeBSD is limited and infrequent, and I don't develop lldb on those
>> systems. I don't remember when I last touched other systems, perhaps
>> Tru64 two years ago.
> 
>> It's not that crucial anyway, as there is review board.
> 
> There is a buildslave with NetBSD-7.0 in the gates. When it is
> functional, I will switch it immediately to the master lldb
> buildzone.
> 
> For now, I will disable running tests, as I need to upstream a few
> plugins for NetBSD first.
> 
> Please review and merge the pending NetBSD commits in LLVM's
> Phabricator; they make it buildable on NetBSD - otherwise the extra
> machine over there will go to waste...

The NetBSD buildslave is now operational

http://lab.llvm.org:8014/builders/lldb-amd64-ninja-netbsd7

Once it finishes building properly, I will move it from the staging
buildmaster to the primary one.

At the moment it stops on:
http://reviews.llvm.org/D12995
http://reviews.llvm.org/D12994


Re: [lldb-dev] RFC: Making unit tests run by default on ninja check-lldb

2015-10-21 Thread Todd Fiala via lldb-dev
Oh haha okay.  :-)

Thanks for explaining, Ying!

-Todd

On Wed, Oct 21, 2015 at 10:01 AM, Ying Chen  wrote:

> Yes, the output of dotest.py goes through LitTestCommand parse.
> The parser is matching for "XPASS", but dotest output is using "UNEXPECTED
> SUCCESS". :)
>
> Thanks,
> Ying
>
> On Tue, Oct 20, 2015 at 6:34 PM, Todd Fiala  wrote:
>
>> Hi Ying,
>>
>> Our dotest.py lldb test results go through that lit test parser system?
>> I see XPASS happen frequently (in fact it's my whole reason for starting
>> a thread on getting rid of flaky tests, or making them run enough times
>> that their output is a useful signal rather than a useless one).  According
>> to this script, an XPASS would be listed as a failure.  I'm not seeing us
>> treat XPASS as failures AFAICT.
>>
>> Are we just saying that our gtests get processed by that?
>>
>> -Todd
>>
>> On Tue, Oct 20, 2015 at 4:51 PM, Ying Chen via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>> Hi Zachary,
>>>
>>> The big unknown here is how to make the buildbots understand unit test
 failures and trigger a failure when ninja check-lldb-unit fails.

>>>
>>> There are two conditions under which buildbot will identify a test step
>>> as a failure.
>>> One is that the command has a non-zero return code.
>>> The other is that there are failing codes in the stdout message. (Refer
>>> to LitTestCommand::evaluateCommand.)
>>> Failing codes are defined as:
>>> failingCodes = set(['FAIL', 'XPASS', 'KPASS', 'UNRESOLVED',
>>> 'TIMEOUT'])
>>>
>>> So if the failures are printed in a form matching '^FAIL: (.*) \(.*\)',
>>> buildbot will understand the step failed even if ninja check-lldb-unit
>>> returns 0.
>>> Or we could add some logic to the above file to handle the output of
>>> unit test.
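Putting the two conditions together, the decision logic can be sketched
roughly like this (a hypothetical simplification, not the actual
LitTestCommand code; the function name is mine):

```python
import re

# Known failing result codes, per the quoted parser configuration.
FAILING_CODES = {'FAIL', 'XPASS', 'KPASS', 'UNRESOLVED', 'TIMEOUT'}

# Result lines are assumed to look like "CODE: test name (suite)".
_RESULT_RE = re.compile(r'^(\w+): (.*) \(.*\)')

def step_failed(stdout_lines, return_code):
    """Return True if a buildbot test step should be treated as a failure:
    either the command exited non-zero, or its stdout contains a result
    line whose code is one of the known failing codes."""
    if return_code != 0:
        return True
    for line in stdout_lines:
        m = _RESULT_RE.match(line)
        if m and m.group(1) in FAILING_CODES:
            return True
    return False
```

Note that a line like "UNEXPECTED SUCCESS: foo (suite)" would not match
this pattern at all, which is exactly the XPASS-vs-UNEXPECTED-SUCCESS
mismatch discussed earlier in the thread.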
>>>
>>> Thanks,
>>> Ying
>>>
>>>
>>>
>>
>>
>> --
>> -Todd
>>
>
>


-- 
-Todd