[lldb-dev] Infinity

2015-10-20 Thread Gary Benson via lldb-dev
Hi all,

I've been working on a platform-independent system for executables and
shared libraries to export information to debuggers and other software
development tools.  It's called Infinity.  The initial use case was to
allow GDB to debug multithreaded inferiors without requiring
libthread_db, but if that all works it's likely this stuff will be
added to glibc to allow debuggers to support dlmopen too, and I'm
hoping the OpenMP community will get on board as well (they are
currently proposing another libthread_db-style interface).  So, I'm
sending this email so it doesn't come as a complete surprise.

The idea is basically that, rather than requiring a plugin library
that the debugger loads, in Infinity inspection functions are shipped
as DWARF bytecode in ELF notes in the actual library they are for, so,
e.g., the notes implementing what libthread_db.so currently implements
will live in libpthread.so.  The idea is that the debugger keeps a
track of the notes it finds in the executables and libraries it loads,
and when complete sets arrive it enables that particular subsystem,
so, when the notes needed to support multithreaded inferiors appear
then thread debugging switches on.
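
For reference, a small sketch of the container involved: ELF notes are a
sequence of (namesz, descsz, type) headers followed by padded name and
descriptor bytes, and the descriptor is where the DWARF bytecode would
live.  This is an illustrative parser of the generic note layout only,
not Infinity's actual note schema:

```python
import struct

def parse_elf_notes(data):
    """Parse a buffer laid out in the generic ELF note format:
    namesz, descsz, type (4-byte little-endian words), followed by
    the name and descriptor, each padded to 4-byte alignment."""
    notes = []
    off = 0
    while off + 12 <= len(data):
        namesz, descsz, ntype = struct.unpack_from("<III", data, off)
        off += 12
        name = data[off:off + namesz].rstrip(b"\x00").decode()
        off += (namesz + 3) & ~3          # name is padded to 4 bytes
        desc = data[off:off + descsz]
        off += (descsz + 3) & ~3          # descriptor is padded too
        notes.append((name, ntype, desc))
    return notes
```

A debugger doing what is described above would scan each loaded module's
note sections for a vendor name reserved for Infinity and hand the
descriptor bytes to its DWARF expression evaluator.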

I just mailed an RFC to the glibc list with notes implementing
map_lwp2thr function to get some feedback.  The series starts here:

  https://sourceware.org/ml/libc-alpha/2015-10/msg00690.html

This series is to build the notes, not to execute them--it's nothing
to do with debuggers--so it should be ok to look without fear of
having seen LGPL code you can't then implement.  IANAL though :)

I'm currently documenting the system on the GDB wiki, starting here:

  https://sourceware.org/gdb/wiki/Infinity

That won't be its final home; it's just somewhere convenient for now.
The documentation is by no means complete, but I'll be filling it in
over the next few days, and most of the important concepts have at
least placeholder pages, so if you subscribe to those you'll get emails
as I write.

Infinity's mailing list is infin...@sourceware.org, so please
subscribe if you're interested (by sending an empty message to
infinity-subscribe@).

Cheers,
Gary

-- 
http://gbenson.net/
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] RFC: Making unit tests run by default on ninja check-lldb

2015-10-20 Thread Zachary Turner via lldb-dev
Right now there are two ninja check targets: "ninja check-lldb", which
 runs dotest and all of the SB API tests, and "ninja check-lldb-unit" which
runs the gtest unit test suite.

I would like to make unit tests run by default.  This entails two things,
which could be done independently of each other.

1) Rename check-lldb to check-lldb-python, and create a new check-lldb that
depends on check-lldb-unit and check-lldb-python.  This way, when you run
"ninja check-lldb" you get both.

2) Update the build bots to run both.  We would probably want them as a
separate step, so the existing step that runs "ninja check-lldb" would need
to change to run "ninja check-lldb-python" instead.  To add a unit test
step, we would need to add another step that runs "ninja check-lldb-unit".

The big unknown here is how to make the buildbots understand unit test
failures and trigger a failure when ninja check-lldb-unit fails.  A
potential first step could be to just update the buildbot to run ninja
check-lldb-python instead and not add the second step; but if anyone
knows how to make it parse the output of ninja check-lldb-unit and
report it as a failure, that would be great.


Re: [lldb-dev] [BUG?] Confusion between translation units?

2015-10-20 Thread Ramkumar Ramachandra via lldb-dev
Greg Clayton wrote:
> Yes, I have seen a bunch of problems like this on Linux due to types being 
> incomplete in the debug info (my guess). But I would like to verify that the 
> manual DWARF indexing isn't to blame for this. We have great accelerator 
> tables that clang makes for us that actually have all of the info we need 
> to find types and functions quickly, whereas all other platforms must run 
> SymbolFileDWARF::Index() to manually index the DWARF.

I'm on OS X, so none of this applies?

> I should be able to tell if you can send me an ELF file and say where you 
> were and what wasn't showing up correctly (which variables) in an exact code 
> context (which file + line or exact line in a function). Then I can verify 
> that SymbolFileDWARF::Index() is correctly indexing things so that we can 
> find types and functions when we need them.

I've been mulling over this problem: do you want to be able to run the
Mach-O, or do you just want to inspect it? The transitive closure of
the dependencies is at least 30 .dylibs, and I can't take out that much
IP.

So what are we looking for exactly?

Thanks.

Ram


Re: [lldb-dev] llvm assertion while evaluating expressions for MIPS on Linux

2015-10-20 Thread Greg Clayton via lldb-dev
Why is this not happening on any other architecture? Is the "$" special for 
MIPS and not for other architectures? We really don't want to remove the '$', as 
we want the symbol to be unique. The '$' symbol is fine for all x86/x86_64/arm 
and arm64 variants...

Greg


> On Oct 19, 2015, at 11:30 PM, Bhushan Attarde via lldb-dev 
>  wrote:
> 
> Hi,
>  
> I am facing an issue (an LLVM assertion) while evaluating expressions for MIPS 
> on Linux.
>  
> (lldb) p fooptr(a,b)
> lldb: /home/battarde/git/llvm/lib/MC/ELFObjectWriter.cpp:791: void 
> {anonymous}::ELFObjectWriter::computeSymbolTable(llvm::MCAssembler&, const 
> llvm::MCAsmLayout&, const SectionIndexMapTy&, const RevGroupMapTy&, 
> {anonymous}::ELFObjectWriter::SectionOffsetsTy&): Assertion `Local || 
> !Symbol.isTemporary()' failed.
>  
> I debugged it and found that LLDB inserts calls to a dynamic checker function 
> for pointer validation at appropriate locations in the expression's IR.
>  
> The issue is that this checker function's name (hard-coded in LLDB in 
> lldb\source\Expression\IRDynamicChecks.cpp) starts with "$", i.e. 
> "$__lldb_valid_pointer_check".
> While creating an MCSymbol (MCContext::createSymbol() in 
> llvm/lib/MC/MCContext.cpp) for this function, LLVM detects that the name starts 
> with "$" and marks that symbol as a 'temporary' symbol (PrivateGlobalPrefix is 
> '$' for MIPS).
> Later, while computing the symbol table in 
> ELFObjectWriter::computeSymbolTable(), the assertion triggers because this 
> symbol is 'temporary'.
>  
> I tried a couple of things that solve this issue for MIPS.
>  
> 1. Remove '$' from the function name.
> 2. Remove the "C language linkage" from the dynamic pointer validation function, 
> i.e. the piece of code below in lldb\source\Expression\IRDynamicChecks.cpp
> -
> static const char g_valid_pointer_check_text[] =
> "extern \"C\" void\n"
> "$__lldb_valid_pointer_check (unsigned char *$__lldb_arg_ptr)\n"
> "{\n"
> "unsigned char $__lldb_local_val = *$__lldb_arg_ptr;\n"
> "}";
> --
>  
> becomes 
>  
> 
> static const char g_valid_pointer_check_text[] =
> "void\n"
> "$__lldb_valid_pointer_check (unsigned char *$__lldb_arg_ptr)\n"
> "{\n"
> "unsigned char $__lldb_local_val = *$__lldb_arg_ptr;\n"
> "}";
> 
>  
> Removing the C language linkage enables C++ name mangling, which turns 
> "$__lldb_valid_pointer_check" into something like 
> "_Z27$__lldb_valid_pointer_checkPh".
> The mangled name won't start with '$', so the symbol will not be marked as 
> temporary, and hence the assertion won't be triggered.
>  
> Please let me know if there is any better solution to this issue.
>  
> Regards,
> Bhushan
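
To make the quoted workaround concrete, here is a toy model of the
mangling effect described above.  This grossly simplified scheme only
mimics the Itanium ABI's `_Z<len><name><params>` shape for this one
case; it is not LLVM's actual mangler:

```python
def mangle_itanium_simple(name, param_codes):
    # Extremely simplified Itanium C++ mangling for a free function:
    # _Z, then the source name's length and the name itself, then
    # parameter type codes ("Ph" = pointer to unsigned char).
    return "_Z%d%s%s" % (len(name), name, param_codes)

# With extern "C", the symbol stays "$__lldb_valid_pointer_check", and
# MIPS treats the '$' prefix as a temporary/private label.  With C++
# linkage, the mangled form no longer begins with '$':
mangled = mangle_itanium_simple("$__lldb_valid_pointer_check", "Ph")
```

This reproduces the mangled name quoted in the thread and shows why the
temporary-symbol check no longer fires on it.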



Re: [lldb-dev] llvm assertion while evaluating expressions for MIPS on Linux

2015-10-20 Thread Greg Clayton via lldb-dev
My guess is that the temporary symbol prefix differs across architectures, 
and possibly depends on which file format (Mach-O or ELF) you are 
targeting. MIPS happens to use '$'. I know Mach-O files use "L" as 
the temporary symbol prefix, while ELF tends to use '.'. Not sure where this would be 
abstracted in LLVM or if it is just built into the assemblers directly for each 
arch... If you can find out where this can be detected within LLVM, we can make 
sure we don't use any temporary prefixes in symbol names and work around this 
issue. We need to make sure that any functions we generate and JIT up and 
insert into the program do not conflict with _any_ symbol that could be in any 
system libraries or user binaries. This is why we used '$' in the first place.

Greg

> On Oct 20, 2015, at 11:11 AM, Greg Clayton via lldb-dev 
>  wrote:
> 
> Why is this not happening on any other architecture? Is the "$" special for 
> MIPS and not for other architectures? We really don't want to remove the '$', 
> as we want the symbol to be unique. The '$' symbol is fine for all 
> x86/x86_64/arm and arm64 variants...
> 
> Greg
> 
> 
>> On Oct 19, 2015, at 11:30 PM, Bhushan Attarde via lldb-dev 
>>  wrote:
>> 
>> Hi,
>> 
>> I am facing an issue (an LLVM assertion) while evaluating expressions for 
>> MIPS on Linux.
>> 
>> (lldb) p fooptr(a,b)
>> lldb: /home/battarde/git/llvm/lib/MC/ELFObjectWriter.cpp:791: void 
>> {anonymous}::ELFObjectWriter::computeSymbolTable(llvm::MCAssembler&, const 
>> llvm::MCAsmLayout&, const SectionIndexMapTy&, const RevGroupMapTy&, 
>> {anonymous}::ELFObjectWriter::SectionOffsetsTy&): Assertion `Local || 
>> !Symbol.isTemporary()' failed.
>> 
>> I debugged it and found that LLDB inserts calls to a dynamic checker function 
>> for pointer validation at appropriate locations in the expression's IR.
>> 
>> The issue is that this checker function's name (hard-coded in LLDB in 
>> lldb\source\Expression\IRDynamicChecks.cpp) starts with "$", i.e. 
>> "$__lldb_valid_pointer_check".
>> While creating an MCSymbol (MCContext::createSymbol() in 
>> llvm/lib/MC/MCContext.cpp) for this function, LLVM detects that the name starts 
>> with "$" and marks that symbol as a 'temporary' symbol (PrivateGlobalPrefix is 
>> '$' for MIPS).
>> Later, while computing the symbol table in 
>> ELFObjectWriter::computeSymbolTable() the assertion triggers because this 
>> symbol is 'temporary'.
>> 
>> I tried a couple of things that solve this issue for MIPS.
>> 
>> 1. Remove '$' from the function name.
>> 2. Remove the "C language linkage" from the dynamic pointer validation function, 
>> i.e. the piece of code below in lldb\source\Expression\IRDynamicChecks.cpp
>> -
>> static const char g_valid_pointer_check_text[] =
>> "extern \"C\" void\n"
>> "$__lldb_valid_pointer_check (unsigned char *$__lldb_arg_ptr)\n"
>> "{\n"
>> "unsigned char $__lldb_local_val = *$__lldb_arg_ptr;\n"
>> "}";
>> --
>> 
>> becomes 
>> 
>> 
>> static const char g_valid_pointer_check_text[] =
>> "void\n"
>> "$__lldb_valid_pointer_check (unsigned char *$__lldb_arg_ptr)\n"
>> "{\n"
>> "unsigned char $__lldb_local_val = *$__lldb_arg_ptr;\n"
>> "}";
>> 
>> 
>> Removing the C language linkage enables C++ name mangling, which turns 
>> "$__lldb_valid_pointer_check" into something like 
>> "_Z27$__lldb_valid_pointer_checkPh".
>> The mangled name won't start with '$', so the symbol will not be marked 
>> as temporary, and hence the assertion won't be triggered.
>> 
>> Please let me know if there is any better solution to this issue.
>> 
>> Regards,
>> Bhushan
> 



Re: [lldb-dev] [BUG?] Confusion between translation units?

2015-10-20 Thread Greg Clayton via lldb-dev

> On Oct 20, 2015, at 9:57 AM, Ramkumar Ramachandra  wrote:
> 
> Greg Clayton wrote:
>> Yes, I have seen a bunch of problems like this on Linux due to types being 
>> incomplete in the debug info (my guess). But I would like to verify that the 
>> manual DWARF indexing isn't to blame for this. We have great accelerator 
>> tables that clang makes for us that actually have all of the info we 
>> need to find types and functions quickly, whereas all other platforms must 
>> run SymbolFileDWARF::Index() to manually index the DWARF.
> 
> I'm on OS X, so none of this applies?

Yes, then you are using good accelerator tables.

> 
>> I should be able to tell if you can send me an ELF file and say where you 
>> were and what wasn't showing up correctly (which variables) in an exact code 
>> context (which file + line or exact line in a function). Then I can verify 
>> that SymbolFileDWARF::Index() is correctly indexing things so that we can 
>> find types and functions when we need them.
> 
> I've been mulling over this problem: do you want to be able to run the
> Mach-O, or do you just want to inspect it? The transitive closure of
> the dependencies is at least 30 .dylibs, and I can't take out that much
> IP.

I would just inspect a type for a variable that isn't showing up from a 
specific shared library. If you can send just the dSYM file for a library, and 
give me a specific function from a specific file and what variables were not 
showing up, I can inspect the DWARF and see why the type isn't showing up. So 
just a single dylib + its dSYM file. If you don't have a dSYM file next to your 
libfoo.dylib, you can easily create one:

% dsymutil libfoo.dylib

This will create a libfoo.dylib.dSYM file, which is linked DWARF from all the 
.o files that made the dylib.

So if you can send me a copy of the dSYM file and a file + line (foo.cpp:11), 
or function + compile unit (function is "int foo(int)" inside "foo.cpp") and 
let me know which variable wasn't able to be expanded (name of variable), I 
should be able to tell you more.

Greg


[lldb-dev] [BUG] Regression: unprintable characters displayed

2015-10-20 Thread Ramkumar Ramachandra via lldb-dev
Hi,

This does not happen with lldb-330.0.48, which ships with OS X, but
happens with HEAD:

frame #0: 0x000101c3ce8c libmwcgir_vm_rt.dylib`(anonymous
namespace)::CgJITMemManager::endFunctionBody(this=0x00010a715610,
F=0x00010a6da200, FunctionStart="�?^\n\x01",
FunctionEnd="...")
+ 28 at CgJITMemoryManager.cpp:437

We can easily detect whether the characters are printable or not, no?
FunctionStart and FunctionEnd are uint8_t *.
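
A sketch of the kind of printability check being asked for; the function
name and escaping style here are made up for illustration and are not
lldb's actual summary formatter:

```python
def format_c_string(data, limit=32):
    # Render printable ASCII bytes as-is and hex-escape the rest,
    # the way a debugger might summarize a uint8_t* argument value.
    out = []
    for b in data[:limit]:
        out.append(chr(b) if 32 <= b < 127 else "\\x%02x" % b)
    suffix = "..." if len(data) > limit else ""
    return '"' + "".join(out) + suffix + '"'
```

With something like this, the garbage bytes in the backtrace above would
render as explicit escapes instead of raw terminal control characters.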

Ram


Re: [lldb-dev] proposal for reworked flaky test category

2015-10-20 Thread Todd Fiala via lldb-dev
I'm not totally sure yet here.

Right now there is a generic category mechanism, but it is only settable
via a file in a directory, or overridden via the test case class method
called getCategories().

I think I'd want a more general decorator that allows you to tag a method
itself with categories.

So, something like:

class TestRaise:
    @lldbtest.category("flakey")
    # or maybe better, since it uses a constant:
    @lldbtest.category(lldbtest.CATEGORY_FLAKEY)
    def test_something_that_does_not_always_pass(self):
        pass

Then a method has a collection of categories, and checking membership can
look at the class instance (to hook into the existing mechanism).
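
A minimal, self-contained sketch of what such a decorator could look
like (`category` and `categories_for` are hypothetical names for this
proposal, not existing lldbtest API):

```python
CATEGORY_FLAKEY = "flakey"

def category(*cats):
    # Attach categories to the test method itself.
    def decorator(fn):
        fn.categories = getattr(fn, "categories", []) + list(cats)
        return fn
    return decorator

class TestRaise:
    def getCategories(self):  # the existing class-level mechanism
        return []

    @category(CATEGORY_FLAKEY)
    def test_something_that_does_not_always_pass(self):
        pass

def categories_for(test_case, method_name):
    # Merge class-level categories with the per-method ones so the
    # existing skipCategories machinery can keep working unchanged.
    method = getattr(type(test_case), method_name)
    return sorted(set(test_case.getCategories()) |
                  set(getattr(method, "categories", [])))
```

The runner's membership check then only needs to consult
`categories_for` instead of `getCategories` directly.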

-Todd


On Mon, Oct 19, 2015 at 4:40 PM, Zachary Turner  wrote:

> Yea, I definitely agree with you there.
>
> Is this going to end up with an @expectedFlakeyWindows,
> @expectedFlakeyLinux, @expectedFlakeyDarwin, @expectedFlakeyAndroid,
> @expectedFlakeyFreeBSD?
>
> It's starting to get a little crazy, at some point I think we just need
> something that we can use like this:
>
> @test_status(status=flaky, host=[win, linux, android, darwin, bsd],
> target=[win, linux, android, darwin, bsd], compiler=[gcc, clang],
> debug_info=[dsym, dwarf, dwo])
>
> On Mon, Oct 19, 2015 at 4:35 PM Todd Fiala  wrote:
>
>> My initial proposal was an attempt to not entirely skip running them on
>> our end and still get them to generate actionable signals without
>> conflating them with unexpected successes (which they absolutely are not in
>> a semantic way).
>>
>> On Mon, Oct 19, 2015 at 4:33 PM, Todd Fiala  wrote:
>>
>>> Nope, I have no issue with what you said.  We don't want to run them
>>> over here at all because we don't see enough useful info come out of them.
>>> You need time series data for that to be somewhat useful, and even then it
>>> only is useful if you see a sharp change in it after a specific change.
>>>
>>> So I really don't want to be running flaky tests at all as their signals
>>> are not useful on a per-run basis.
>>>
>>> On Mon, Oct 19, 2015 at 4:16 PM, Zachary Turner 
>>> wrote:
>>>
 Don't get me wrong, I like the idea of running flakey tests a couple of
 times and seeing if one passes (Chromium does this as well, so it's not
 without precedent).  If I sounded harsh, it's because I *want* to be harsh
 on flaky tests.  Flaky tests indicate literally the *worst* kind of bugs
 because you don't even know what kind of problems they're causing in the
 wild, so by increasing the amount of pain they cause people (test suite
 running longer, etc) the hope is that it will motivate someone to fix it.

 On Mon, Oct 19, 2015 at 4:04 PM Todd Fiala 
 wrote:

> Okay, so I'm not a fan of the flaky tests myself, nor of test suites
> taking longer to run than needed.
>
> Enrico is going to add a new 'flakey' category to the test
> categorization.
>
> Scratch all the other complexity I offered up.  What we're going to
> ask is if a test is flakey, please add it to the 'flakey' category.  We
> won't do anything different with the category by default, so everyone will
> still get flakey tests running the same manner they do now.  However, on
> our test runners, we will be disabling the category entirely using the
> skipCategories mechanism since those are generating too much noise.
>
> We may need to add a per-test-method category mechanism, since right
> now our only mechanisms for adding categories are (1) specifying a dot-file in
> the directory to have everything in it get tagged with a category, or (2)
> overriding the categorization via the TestCase getCategories() mechanism.
>
> -Todd
>
> On Mon, Oct 19, 2015 at 1:03 PM, Zachary Turner 
> wrote:
>
>>
>>
>> On Mon, Oct 19, 2015 at 12:50 PM Todd Fiala via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>> Hi all,
>>>
>>> I'd like unexpected successes (i.e. tests marked as unexpected
>>> failure that in fact pass) to retain the actionable meaning that 
>>> something
>>> is wrong.  The wrong part is that either (1) the test now passes
>>> consistently and the author of the fix just missed updating the test
>>> definition (or perhaps was unaware of the test), or (2) the test is not
>>> covering the condition it is testing completely, and some change to the
>>> code just happened to make the test pass (due to the test being not
>>> comprehensive enough).  Either of those requires some sort of 
>>> adjustment by
>>> the developers.
>>>
>> I'd add #3: the test is actually flaky but is tagged incorrectly.
>>
>>
>>>
>>> We have a category of test known as "flaky" or "flakey" (both are
>>> valid spellings, for those who care:
>>> http://www.merriam-webster.com/dictionary/flaky, although flaky is
>>> considered the primary).  Flaky tests are tests that we can't get to 
>>

Re: [lldb-dev] proposal for reworked flaky test category

2015-10-20 Thread Zachary Turner via lldb-dev
Well that's basically what I meant with this:

@test_status(status=flaky, host=[win, linux, android, darwin, bsd],
target=[win, linux, android, darwin, bsd], compiler=[gcc, clang],
debug_info=[dsym, dwarf, dwo])

but it has keyword parameters that allow you to specify the conditions
under which the status applies.  This is general enough to handle every
single use case I know of with a single decorator (although you still might
need to apply it more than once, for example if it's flakey on one platform
but an xfail on another).

On Tue, Oct 20, 2015 at 3:46 PM Todd Fiala  wrote:

> I'm not totally sure yet here.
>
> Right now there is a generic category mechanism, but it is only settable
> via a file in a directory, or overridden via the test case class method
> called getCategories().
>
> I think I'd want a more general decorator that allows you to tag a method
> itself with categories.
>
> So, something like:
>
> class TestRaise:
>  @lldbtest.category("flakey")
>  # or maybe better since it uses a constant
>  @lldbtest.category(lldbtest.CATEGORY_FLAKEY)
>  def test_something_that_does_not_always_pass():
>   pass
>
> Then a method has a collection of categories, and checking membership can
> look at the class instance (to hook into the existing mechanism).
>
> -Todd
>
>
> On Mon, Oct 19, 2015 at 4:40 PM, Zachary Turner 
> wrote:
>
>> Yea, I definitely agree with you there.
>>
>> Is this going to end up with an @expectedFlakeyWindows,
>> @expectedFlakeyLinux, @expectedFlakeyDarwin, @expectedFlakeyAndroid,
>> @expectedFlakeyFreeBSD?
>>
>> It's starting to get a little crazy, at some point I think we just need
>> something that we can use like this:
>>
>> @test_status(status=flaky, host=[win, linux, android, darwin, bsd],
>> target=[win, linux, android, darwin, bsd], compiler=[gcc, clang],
>> debug_info=[dsym, dwarf, dwo])
>>
>> On Mon, Oct 19, 2015 at 4:35 PM Todd Fiala  wrote:
>>
>>> My initial proposal was an attempt to not entirely skip running them on
>>> our end and still get them to generate actionable signals without
>>> conflating them with unexpected successes (which they absolutely are not in
>>> a semantic way).
>>>
>>> On Mon, Oct 19, 2015 at 4:33 PM, Todd Fiala 
>>> wrote:
>>>
 Nope, I have no issue with what you said.  We don't want to run them
 over here at all because we don't see enough useful info come out of them.
 You need time series data for that to be somewhat useful, and even then it
 only is useful if you see a sharp change in it after a specific change.

 So I really don't want to be running flaky tests at all as their
 signals are not useful on a per-run basis.

 On Mon, Oct 19, 2015 at 4:16 PM, Zachary Turner 
 wrote:

> Don't get me wrong, I like the idea of running flakey tests a couple
> of times and seeing if one passes (Chromium does this as well, so it's
> not without precedent).  If I sounded harsh, it's because I *want* to be
> harsh on flaky tests.  Flaky tests indicate literally the *worst* kind of
> bugs because you don't even know what kind of problems they're causing in
> the wild, so by increasing the amount of pain they cause people (test 
> suite
> running longer, etc) the hope is that it will motivate someone to fix it.
>
> On Mon, Oct 19, 2015 at 4:04 PM Todd Fiala 
> wrote:
>
>> Okay, so I'm not a fan of the flaky tests myself, nor of test suites
>> taking longer to run than needed.
>>
>> Enrico is going to add a new 'flakey' category to the test
>> categorization.
>>
>> Scratch all the other complexity I offered up.  What we're going to
>> ask is if a test is flakey, please add it to the 'flakey' category.  We
>> won't do anything different with the category by default, so everyone 
>> will
>> still get flakey tests running the same manner they do now.  However, on
>> our test runners, we will be disabling the category entirely using the
>> skipCategories mechanism since those are generating too much noise.
>>
>> We may need to add a per-test-method category mechanism, since right
>> now our only mechanisms for adding categories are (1) specifying a dot-file in
>> the directory to have everything in it get tagged with a category, or (2)
>> overriding the categorization via the TestCase getCategories() mechanism.
>>
>> -Todd
>>
>> On Mon, Oct 19, 2015 at 1:03 PM, Zachary Turner 
>> wrote:
>>
>>>
>>>
>>> On Mon, Oct 19, 2015 at 12:50 PM Todd Fiala via lldb-dev <
>>> lldb-dev@lists.llvm.org> wrote:
>>>
 Hi all,

 I'd like unexpected successes (i.e. tests marked as unexpected
 failure that in fact pass) to retain the actionable meaning that 
 something
 is wrong.  The wrong part is that either (1) the test now passes
 consistently and the author of the fix just missed updating the test

Re: [lldb-dev] proposal for reworked flaky test category

2015-10-20 Thread Enrico Granata via lldb-dev

> On Oct 19, 2015, at 4:40 PM, Zachary Turner via lldb-dev 
>  wrote:
> 
> Yea, I definitely agree with you there.  
> 
> Is this going to end up with an @expectedFlakeyWindows, @expectedFlakeyLinux, 
> @expectedFlakeyDarwin, @expectedFlakeyAndroid, @expectedFlakeyFreeBSD?
> 
> It's starting to get a little crazy, at some point I think we just need 
> something that we can use like this:
> 
> @test_status(status=flaky, host=[win, linux, android, darwin, bsd], 
> target=[win, linux, android, darwin, bsd], compiler=[gcc, clang], 
> debug_info=[dsym, dwarf, dwo])
> 

I think this was part of the initial intent in making the categories feature: 
that you would be able to mark tests with any number of "tags" in the form of 
categories, and then skip or execute only tests that had certain tag(s) applied 
to them.

With that said, the feature as it stands:
- does not support different categories for methods in a class
- does not allow any more complex logic than "is this category present on this test?"
- requires manual definition of all categories (e.g. "xfail" cross-product "platform" should be auto-generable)

We could extend the categories system to fix all of these issues, and then you 
could just mark tests with categories instead of attributes. Then you would 
only have one attribute that would be like

@lldbtest.categories("win-flakey", "linux-xfail", "dsym")
def test_stuff(self):
    ...

> On Mon, Oct 19, 2015 at 4:35 PM Todd Fiala wrote:
> My initial proposal was an attempt to not entirely skip running them on our 
> end and still get them to generate actionable signals without conflating them 
> with unexpected successes (which they absolutely are not in a semantic way).
> 
> On Mon, Oct 19, 2015 at 4:33 PM, Todd Fiala wrote:
> Nope, I have no issue with what you said.  We don't want to run them over 
> here at all because we don't see enough useful info come out of them.  You 
> need time series data for that to be somewhat useful, and even then it only 
> is useful if you see a sharp change in it after a specific change.
> 
> So I really don't want to be running flaky tests at all as their signals are 
> not useful on a per-run basis.
> 
> On Mon, Oct 19, 2015 at 4:16 PM, Zachary Turner wrote:
> Don't get me wrong, I like the idea of running flakey tests a couple of times 
> and seeing if one passes (Chromium does this as well, so it's not without 
> precedent).  If I sounded harsh, it's because I *want* to be harsh on flaky 
> tests.  Flaky tests indicate literally the *worst* kind of bugs because you 
> don't even know what kind of problems they're causing in the wild, so by 
> increasing the amount of pain they cause people (test suite running longer, 
> etc) the hope is that it will motivate someone to fix it.  
> 
> On Mon, Oct 19, 2015 at 4:04 PM Todd Fiala wrote:
> Okay, so I'm not a fan of the flaky tests myself, nor of test suites taking 
> longer to run than needed.
> 
> Enrico is going to add a new 'flakey' category to the test categorization.
> 
> Scratch all the other complexity I offered up.  What we're going to ask is if 
> a test is flakey, please add it to the 'flakey' category.  We won't do 
> anything different with the category by default, so everyone will still get 
> flakey tests running the same manner they do now.  However, on our test 
> runners, we will be disabling the category entirely using the skipCategories 
> mechanism since those are generating too much noise.
> 
> We may need to add a per-test-method category mechanism, since right now our 
> only mechanisms for adding categories are (1) specifying a dot-file in the 
> directory to have everything in it get tagged with a category, or (2) 
> overriding the categorization via the TestCase getCategories() mechanism.
> 
> -Todd
> 
> On Mon, Oct 19, 2015 at 1:03 PM, Zachary Turner wrote:
> 
> 
> On Mon, Oct 19, 2015 at 12:50 PM Todd Fiala via lldb-dev wrote:
> Hi all,
> 
> I'd like unexpected successes (i.e. tests marked as unexpected failure that 
> in fact pass) to retain the actionable meaning that something is wrong.  The 
> wrong part is that either (1) the test now passes consistently and the author 
> of the fix just missed updating the test definition (or perhaps was unaware 
> of the test), or (2) the test is not covering the condition it is testing 
> completely, and some change to the code just happened to make the test pass 
> (due to the test being not comprehensive enough).  Either of those requires 
> some sort of adjustment by the developers.
> I'd add #3: the test is actually flaky but is tagged incorrectly.
>  
> 
> We have a category of test known as "flaky" or "flakey" (both are valid 
> spellings, for those who care: 
> http://www.merriam-webster.com/dictionary/flaky 
> 

Re: [lldb-dev] RFC: Making unit tests run by default on ninja check-lldb

2015-10-20 Thread Ying Chen via lldb-dev
Hi Zachary,

> The big unknown here is how to make the buildbots understand unit test
> failures and trigger a failure when ninja check-lldb-unit fails.

There are two conditions under which buildbot will identify a test step as a
failure.
One is that the command has a non-zero return code.
The other is that there are failing codes in the stdout message. (Refer to
LitTestCommand::evaluateCommand in the buildbot sources.)
Failing codes are defined as:
failingCodes = set(['FAIL', 'XPASS', 'KPASS', 'UNRESOLVED', 'TIMEOUT'])

So if the failures are printed out as '^FAIL: (.*) \(.*\)', buildbot will
understand it is failing even if ninja check-lldb-unit returns 0.
Or we could add some logic to that file to handle the output of the unit
tests.
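
As a sketch of that second option, logic like the following could
rewrite googletest's failure lines into the `FAIL: name (suite)` shape
that the quoted failingCodes regex already matches (the exact gtest
output format is assumed here for illustration):

```python
import re

# Matches lines like "[  FAILED  ] SuiteName.TestName (12 ms)".
GTEST_FAILED = re.compile(r"^\[  FAILED  \]\s+(\w+)\.(\w+)")

def translate_gtest_failures(lines):
    # Emit one 'FAIL: test (suite)' line per gtest failure so the
    # existing '^FAIL: (.*) \(.*\)' buildbot regex picks it up.
    out = []
    for line in lines:
        m = GTEST_FAILED.match(line)
        if m:
            suite, test = m.groups()
            out.append("FAIL: %s (%s)" % (test, suite))
    return out
```

This keeps the buildbot side untouched: the step still greps stdout, and
a translated failure line is enough to turn the step red.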

Thanks,
Ying


Re: [lldb-dev] [BUG?] Confusion between translation units?

2015-10-20 Thread Ramkumar Ramachandra via lldb-dev
[Quoting entire email for the benefit of everyone else]

On Tue, Oct 20, 2015 at 7:39 PM, Greg Clayton  wrote:
> Ok, so try this on all of your dSYM files:
>
> 1 - load the dsym file into lldb:
>
> % xcrun lldb 
> libmwcgir_vm_rt.dylib.dSYM/Contents/Resources/DWARF/libmwcgir_vm_rt.dylib
> (lldb) image lookup -t "iplist llvm::ilist_traits >"
> 2 matches found in 
> /Volumes/work/gclayton/Downloads/libmwcgir_vm_rt.dylib.dSYM/Contents/Resources/DWARF/libmwcgir_vm_rt.dylib:
> id = {0x000211dc}, name = "iplist llvm::ilist_traits >", qualified = 
> "llvm::iplist >", 
> byte-size = 24, decl = ilist.h:313, clang_type = "class iplist : public 
> llvm::ilist_traits {
> llvm::Function *Head;
> llvm::Function *getTail();
> const llvm::Function *getTail() const;
> void setTail(llvm::Function *) const;
> void CreateLazySentinel() const;
> static bool op_less(llvm::Function &, llvm::Function &);
> static bool op_equal(llvm::Function &, llvm::Function &);
> iplist(const llvm::iplist llvm::ilist_traits > &);
> void operator=(const llvm::iplist llvm::ilist_traits > &);
> iplist();
> ~iplist();
> iterator begin();
> const_iterator begin() const;
> iterator end();
> const_iterator end() const;
> reverse_iterator rbegin();
> const_reverse_iterator rbegin() const;
> reverse_iterator rend();
> const_reverse_iterator rend() const;
> size_type max_size() const;
> bool empty() const;
> reference front();
> const_reference front() const;
> reference back();
> const_reference back() const;
> void swap(llvm::iplist<llvm::Function, llvm::ilist_traits<llvm::Function> > &);
> iterator insert(iterator, llvm::Function *);
> iterator insertAfter(iterator, llvm::Function *);
> llvm::Function *remove(iterator &);
> llvm::Function *remove(const iterator &);
> iterator erase(iterator);
> void clearAndLeakNodesUnsafely();
> void transfer(iterator, llvm::iplist<llvm::Function, llvm::ilist_traits<llvm::Function> > &, iterator, iterator);
> size_type size() const;
> iterator erase(iterator, iterator);
> void clear();
> void push_front(llvm::Function *);
> void push_back(llvm::Function *);
> void pop_front();
> void pop_back();
> void splice(iterator, llvm::iplist<llvm::Function, llvm::ilist_traits<llvm::Function> > &);
> void splice(iterator, llvm::iplist<llvm::Function, llvm::ilist_traits<llvm::Function> > &, iterator);
> void splice(iterator, llvm::iplist<llvm::Function, llvm::ilist_traits<llvm::Function> > &, iterator, iterator);
> void erase(const llvm::Function &);
> void unique();
> void merge(llvm::iplist<llvm::Function, llvm::ilist_traits<llvm::Function> > &);
> void sort();
> }
> "
> id = {0x001a658a}, name = "iplist<llvm::Function, llvm::ilist_traits<llvm::Function> >", qualified = 
> "llvm::iplist<llvm::Function, llvm::ilist_traits<llvm::Function> >", 
> byte-size = 24, decl = ilist.h:313, clang_type = "class iplist : public 
> llvm::ilist_traits<llvm::Function> {
> llvm::Function *Head;
> llvm::Function *getTail();
> const llvm::Function *getTail() const;
> void setTail(llvm::Function *) const;
> void CreateLazySentinel() const;
> static bool op_less(llvm::Function &, llvm::Function &);
> static bool op_equal(llvm::Function &, llvm::Function &);
> iplist(const llvm::iplist<llvm::Function, llvm::ilist_traits<llvm::Function> > &);
> void operator=(const llvm::iplist<llvm::Function, llvm::ilist_traits<llvm::Function> > &);
> iplist();
> ~iplist();
> iterator begin();
> const_iterator begin() const;
> iterator end();
> const_iterator end() const;
> reverse_iterator rbegin();
> const_reverse_iterator rbegin() const;
> reverse_iterator rend();
> const_reverse_iterator rend() const;
> size_type max_size() const;
> bool empty() const;
> reference front();
> const_reference front() const;
> reference back();
> const_reference back() const;
> void swap(llvm::iplist<llvm::Function, llvm::ilist_traits<llvm::Function> > &);
> iterator insert(iterator, llvm::Function *);
> iterator insertAfter(iterator, llvm::Function *);
> llvm::Function *remove(iterator &);
> llvm::Function *remove(const iterator &);
> iterator erase(iterator);
> void clearAndLeakNodesUnsafely();
> void transfer(iterator, llvm::iplist<llvm::Function, llvm::ilist_traits<llvm::Function> > &, iterator, iterator);
> size_type size() const;
> iterator erase(iterator, iterator);
> void clear();
> void push_front(llvm::Function *);
> void push_back(llvm::Function *);
> void pop_front();
> void pop_back();
> void splice(iterator, llvm::iplist<llvm::Function, llvm::ilist_traits<llvm::Function> > &);
> void splice(iterator, llvm::iplist<llvm::Function, llvm::ilist_traits<llvm::Function> > &, iterator);
> void splice(iterator, llvm::iplist<llvm::Function, llvm::ilist_traits<llvm::Function> > &, iterator, iterator);
> void erase(const llvm::Function &);
> void unique();
> void merge(llvm::iplist<llvm::Function, llvm::ilist_traits<llvm::Function> > &);
> void sort();
> }
> "
>
>
> Do the same thing for any other shared libraries that you have, compare 
> the data in quotes of the 'clang_type = "..."' output, and save it to a 
> file. See if any of them differ from each other.
>
>
> What is interesting here is that we have two copies of the same type 
> in the same file; this shouldn't happen.

Re: [lldb-dev] [BUG?] Confusion between translation units?

2015-10-20 Thread Greg Clayton via lldb-dev
>> 
>> Are you pulling in data from two different copies of LLVM in your project? 
>> Or is something in here a symlink to the other somewhere?
> 
> Excellent find. Yes, 3p_mirror is a symlink to the 3p-tmw-osx location.
> 
>> So to sum up: LLDB uniques types by decl file + decl line + byte size + 
>> fully qualified typename, and that is failing because the decl files are 
>> different for these two types from the debug info's point of view. And these 
>> types could actually differ since they come from different files, and we need 
>> to allow this so that we can display these types.
> 
> I'm slightly confused: can't we ask Clang to tell us if the two types
> are structurally equivalent? Is this some short-cut? We need to
> account for symlinks then, it seems.


Yep. Try replacing Declaration::Compare() in 
lldb/source/Symbol/Declaration.cpp. You will need to include:

#include "lldb/Host/FileSystem.h"


Then replace Declaration::Compare() with this:

int
Declaration::Compare(const Declaration& a, const Declaration& b)
{
    int result = FileSpec::Compare(a.m_file, b.m_file, true);
    if (result)
    {
        int symlink_result = result;
        if (a.m_file.GetFilename() == b.m_file.GetFilename())
        {
            // Check if the directories in a and b are symlinks to each other
            FileSpec resolved_a;
            FileSpec resolved_b;
            if (FileSystem::ResolveSymbolicLink(a.m_file, resolved_a).Success() &&
                FileSystem::ResolveSymbolicLink(b.m_file, resolved_b).Success())
            {
                symlink_result = FileSpec::Compare(resolved_a, resolved_b, true);
            }
        }
        if (symlink_result != 0)
            return symlink_result;
    }
    if (a.m_line < b.m_line)
        return -1;
    else if (a.m_line > b.m_line)
        return 1;
#ifdef LLDB_ENABLE_DECLARATION_COLUMNS
    if (a.m_column < b.m_column)
        return -1;
    else if (a.m_column > b.m_column)
        return 1;
#endif
    return 0;
}

Then try running and let me know what your results are!
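For anyone who wants to see the symlink-resolution idea in isolation, here is a self-contained sketch in plain Python (not LLDB's FileSpec/FileSystem API): two decl files are treated as the same if their symlink-resolved paths match.

```python
# Standalone sketch of the idea in the patch above: decl files compare equal
# when their symlink-resolved paths match. os.path.realpath stands in for
# FileSystem::ResolveSymbolicLink; the directory layout mimics the
# 3p_mirror -> 3p-tmw-osx symlink from this thread.
import os
import tempfile

def decl_files_match(a, b):
    return a == b or os.path.realpath(a) == os.path.realpath(b)

with tempfile.TemporaryDirectory() as d:
    real_dir = os.path.join(d, "3p-tmw-osx")
    os.mkdir(real_dir)
    open(os.path.join(real_dir, "ilist.h"), "w").close()
    link_dir = os.path.join(d, "3p_mirror")
    os.symlink(real_dir, link_dir)  # 3p_mirror -> 3p-tmw-osx

    a = os.path.join(link_dir, "ilist.h")
    b = os.path.join(real_dir, "ilist.h")
    print(decl_files_match(a, b))  # True: same header reached via the symlink
```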


Re: [lldb-dev] [BUG?] Confusion between translation units?

2015-10-20 Thread Greg Clayton via lldb-dev

>> 
>> The other bad thing is even after you normalize the paths you are comparing:
>> 
>> /mathworks/devel/sbs/34/rramacha.idivide-final-lap/3p_mirror/maci64/LLVM/include/llvm/ADT/ilist.h
>> /mathworks/devel/sandbox/rramacha/3p-tmw-osx/3p/derived/maci64/LLVM/llvm/include/llvm/ADT/ilist.h
>> 
>> Are you pulling in data from two different copies of LLVM in your project? 
>> Or is something in here a symlink to the other somewhere?
> 
> Excellent find. Yes, 3p_mirror is a symlink to the 3p-tmw-osx location.
> 
>> So to sum up: LLDB uniques types by decl file + decl line + byte size + 
>> fully qualified typename, and that is failing because the decl files are 
>> different for these two types from the debug info's point of view. And these 
>> types could actually differ since they come from different files, and we need 
>> to allow this so that we can display these types.
> 
> I'm slightly confused: can't we ask Clang to tell us if the two types
> are structurally equivalent?

How would you temporarily make a new version of this type so that you can 
compare it to the one in the clang::ASTContext for the DWARF file? Make another 
clang::ASTContext for each type, try to construct the type in there 
along with any other types that are needed, then compare the types in the 
two different clang::ASTContext objects; if they match, don't copy the 
type, and if they don't match, copy the type from one AST to the other? That 
would be way too expensive and time consuming.

> Is this some short-cut? We need to account for symlinks then, it seems.

So in LLDB we have one AST context per executable file, and we create _one_ 
instance of a type in each AST context based on the equivalent of the C++ ODR 
(only one copy of a type at a given decl context). Why? Let's see how many 
copies of "iplist<llvm::Function, llvm::ilist_traits<llvm::Function> >" you 
actually have in your one shared library: 152 to be exact (see output below). 
All full definitions of the same thing, over and over. This is how current 
compilers emit debug info: one copy per source file. So you end up with 
millions of copies of types all over the place. To deal with this, since we 
have one AST context per DWARF file, we create the type once and only once. 
See the results:

% dwarfdump --apple-types="iplist<llvm::Function, llvm::ilist_traits<llvm::Function> >" libmwcgir_vm_rt.dylib.dSYM/ -r0
--
 File: 
libmwcgir_vm_rt.dylib.dSYM/Contents/Resources/DWARF/libmwcgir_vm_rt.dylib 
(x86_64)
--
 152 matches:

0x000211dc: TAG_class_type [3] *
 AT_name( "iplist<llvm::Function, llvm::ilist_traits<llvm::Function> >" )
 AT_byte_size( 0x18 )
 AT_decl_file( 
"../../../3p_mirror/maci64/LLVM/include/llvm/ADT/ilist.h" )
 AT_decl_line( 313 )


0x000cb5d6: TAG_class_type [3] *
 AT_name( "iplist<llvm::Function, llvm::ilist_traits<llvm::Function> >" )
 AT_byte_size( 0x18 )
 AT_decl_file( 
"../../../3p_mirror/maci64/LLVM/include/llvm/ADT/ilist.h" )
 AT_decl_line( 313 )


0x000e8804: TAG_class_type [3] *
 AT_name( "iplist<llvm::Function, llvm::ilist_traits<llvm::Function> >" )
 AT_byte_size( 0x18 )
 AT_decl_file( 
"../../../3p_mirror/maci64/LLVM/include/llvm/ADT/ilist.h" )
 AT_decl_line( 313 )


0x00136a56: TAG_class_type [3] *
 AT_name( "iplist<llvm::Function, llvm::ilist_traits<llvm::Function> >" )
 AT_byte_size( 0x18 )
 AT_decl_file( 
"../../../3p_mirror/maci64/LLVM/include/llvm/ADT/ilist.h" )
 AT_decl_line( 313 )


0x001a658a: TAG_class_type [3] *
 AT_name( "iplist<llvm::Function, llvm::ilist_traits<llvm::Function> >" )
 AT_byte_size( 0x18 )
 AT_decl_file( 
"/mathworks/devel/sandbox/rramacha/3p-tmw-osx/3p/derived/maci64/LLVM/llvm/include/llvm/ADT/ilist.h"
 )
 AT_decl_line( 313 )


0x001fe93a: TAG_class_type [3] *
 AT_name( "iplist<llvm::Function, llvm::ilist_traits<llvm::Function> >" )
 AT_byte_size( 0x18 )
 AT_decl_file( 
"/mathworks/devel/sandbox/rramacha/3p-tmw-osx/3p/derived/maci64/LLVM/llvm/include/llvm/ADT/ilist.h"
 )
 AT_decl_line( 313 )


0x0022fa65: TAG_class_type [3] *
 AT_name( "iplist<llvm::Function, llvm::ilist_traits<llvm::Function> >" )
 AT_byte_size( 0x18 )
 AT_decl_file( 
"/mathworks/devel/sandbox/rramacha/3p-tmw-osx/3p/derived/maci64/LLVM/llvm/include/llvm/ADT/ilist.h"
 )
 AT_decl_line( 313 )


0x002624d3: TAG_class_type [3] *
 AT_name( "iplist<llvm::Function, llvm::ilist_traits<llvm::Function> >" )
 AT_byte_size( 0x18 )
 AT_decl_file( 
"/mathworks/devel/sandbox/rramacha/3p-tmw-osx/3p/derived/maci64/LLVM/llvm/include/llvm/ADT/ilist.h"
 )
 AT_decl_line( 313 )


0x00296bfe: TAG_class_type [3] *
 AT_name( "iplist<llvm::Function, llvm::ilist_traits<llvm::Function> >" )
 AT_byte_size( 0x18 )
 AT_decl_file( 
"/mathworks/devel/sandbox/rramacha/3p-tmw-osx/3p/derived/maci64/LLVM/llvm/include/llvm/ADT/ilist.h"
 )
 AT_decl_line( 313 )


0x002c9078: TAG_class_type [3] *
 AT_name( "iplist<llvm::Function, llvm::ilist_traits<llvm::Function> >" )
 AT_byte_size( 0x18 )
 AT_decl_file( 
"/mathworks/devel/sandbox/rramacha/3p-tmw-osx/3p/derived/maci64/LL
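The uniquing scheme described above (one type instance per decl file + decl line + byte size + fully qualified name, created once per module's AST context) can be sketched as follows; `TypeUniquer` and every name in this block are hypothetical illustrations, not LLDB's actual classes:

```python
# Hypothetical sketch of the type uniquing described above: the 152 identical
# DWARF definitions collapse to a single instance per
# (decl file, decl line, byte size, fully qualified name) key.
class TypeUniquer:
    def __init__(self):
        self._types = {}

    def get_or_create(self, decl_file, decl_line, byte_size, name, make_type):
        key = (decl_file, decl_line, byte_size, name)
        if key not in self._types:
            # Create the type once and only once per AST context.
            self._types[key] = make_type()
        return self._types[key]

uniquer = TypeUniquer()
make = lambda: object()
a = uniquer.get_or_create("ilist.h", 313, 24, "llvm::iplist<llvm::Function>", make)
b = uniquer.get_or_create("ilist.h", 313, 24, "llvm::iplist<llvm::Function>", make)
print(a is b)  # True: the second lookup reuses the first instance
```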

Re: [lldb-dev] RFC: Making unit tests run by default on ninja check-lldb

2015-10-20 Thread Todd Fiala via lldb-dev
Hi Ying,

Do our dotest.py lldb test results go through that lit test parser system?  I
see XPASS happen frequently (and in fact it is my whole reason for starting a
thread on getting rid of flaky tests, or making them run enough times that
their output can be a useful signal rather than noise).  According to this
script, an XPASS would be listed as a failure, but I'm not seeing us treat
XPASS as failures AFAICT.

Are we just saying that our gtests get processed by that?

-Todd

On Tue, Oct 20, 2015 at 4:51 PM, Ying Chen via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hi Zachary,
>
> The big unknown here is how to make the buildbots understand unit test
>> failures and trigger a failure when ninja check-lldb-unit fails.
>>
>
> There are two conditions under which buildbot will identify a test step as a failure.
> One is that the command has non-zero return code.
> The other is that there're failing codes in stdout message. (Refer to
> LitTestCommand::evaluateCommand in this file
> 
> .)
> Failing codes are defined as:
> failingCodes = set(['FAIL', 'XPASS', 'KPASS', 'UNRESOLVED', 'TIMEOUT'])
>
> So if the failures are printed out as '^FAIL: (.*) \(.*\)', buildbot will
> understand it's failing even if ninja check-lldb-unit returns 0.
> Or we could add some logic to the above file to handle the output of the
> unit tests.
>
> Thanks,
> Ying
>
>


-- 
-Todd


[lldb-dev] lldb tests and tear down hooks

2015-10-20 Thread Zachary Turner via lldb-dev
There's a subtle bug that is pervasive throughout the test suite.  Consider
the following seemingly innocent test class.

class MyTest(TestBase):
    def setUp(self):
        TestBase.setUp(self)                       # 1

        # Do some stuff                            # 2
        self.addTearDownHook(lambda: self.foo())   # 3

    def test_interesting_stuff(self):
        pass

Here's the problem.  As a general principle, cleanup needs to happen in
reverse order from initialization.  That's why, if we had a tearDown()
method, it would probably look something like this:

    def tearDown(self):
        # Clean up some stuff                      # 2

        TestBase.tearDown(self)                    # 1

This follows the pattern in other languages like C++, for example, where
construction goes from base -> derived, but destruction goes from derived
-> base.

But if you add these tear down hooks into the mix, it violates that.  tear
down hooks get invoked as part of TestBase.tearDown(), so in the above
example the initialization order is 1 -> 2 -> 3 but the teardown order is 2
-> 1 -> 3  (or 2 -> 3 -> 1, or none of the above, depending on where inside
of TestBase.tearDown() the hooks get invoked).

To make matters worse, tear down hooks can be added from arbitrary points
in a test's run, not just during setup.

The only way I can see to fix this is to delete this tearDownHook mechanism
entirely.  Anyone who wants it can easily reimplement this in the
individual test by just keeping their own list of lambdas in the derived
class, overriding tearDown(), and running through their own list in reverse
order before calling TestBase.tearDown().
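A sketch of that suggested reimplementation, with a minimal stand-in TestBase so the ordering is visible (the names here are illustrative, not the real lldb test API):

```python
# Sketch of the proposed fix: the test keeps its own LIFO list of cleanup
# lambdas and runs them in tearDown() before calling the base class, so
# teardown mirrors setup in reverse (3 -> 2 -> 1 against setup's 1 -> 2 -> 3).
# TestBase is a minimal stand-in for lldb's real test base class.
order = []

class TestBase:
    def setUp(self):
        order.append("base setUp")

    def tearDown(self):
        order.append("base tearDown")

class MyTest(TestBase):
    def setUp(self):
        TestBase.setUp(self)                              # 1
        order.append("derived setUp")                     # 2
        self._cleanups = [lambda: order.append("hook")]   # 3

    def tearDown(self):
        # Run our own hooks first, in reverse registration order ...
        for hook in reversed(self._cleanups):
            hook()
        order.append("derived tearDown")                  # 2
        TestBase.tearDown(self)                           # 1

t = MyTest()
t.setUp()
t.tearDown()
print(order)  # ['base setUp', 'derived setUp', 'hook', 'derived tearDown', 'base tearDown']
```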

I don't intend to do this work right now, but I would like to do it in the
future, so I want to throw this out there and see if anyone has thoughts on
it.