Re: [lldb-dev] [llvm-dev] [3.7 Release] RC3 has been tagged, let's wrap this up

2015-08-25 Thread Daniel Sanders via lldb-dev
clang+llvm-3.7.0-rc3-mips-linux-gnu.tar.xz
All ok.

clang+llvm-3.7.0-rc3-mipsel-linux-gnu.tar.xz
All ok.

clang+llvm-3.7.0-rc3-x86_64-linux-gnu-ubuntu-14.04.tar.xz (cross compiling for 
Mips)
Still running the last few test-suite runs but no unexpected problems 
so far.
I say 'unexpected' because I updated to a new GCC cross-compilation 
toolchain which no longer contains certain multilibs. The three runs that 
depend on these removed multilibs (mips32r1 and mips64r1 n32/n64) failed as 
expected.

> -Original Message-
> From: llvm-dev [mailto:llvm-dev-boun...@lists.llvm.org] On Behalf Of Hans
> Wennborg via llvm-dev
> Sent: 21 August 2015 01:52
> To: llvm-dev; cfe-...@lists.llvm.org; lldb-dev@lists.llvm.org; openmp-
> d...@lists.llvm.org
> Cc: Ben Pope; Pavel Labath; Nikola Smiljanić
> Subject: [llvm-dev] [3.7 Release] RC3 has been tagged, let's wrap this up
> 
> Hello everyone,
> 
> 3.7-rc3 has just been tagged. Testers, please test, build binaries,
> upload to the sftp and report results to this thread.
> 
> Again, a lot of patches got merged between rc2 and rc3, but hopefully
> nothing that should upset things.
> 
> One thing that did change is that the release script now correctly
> symlinks clang-tools-extra into the build. If this causes problems on
> your platform, please just remove it.
> 
> This is a release candidate in the real sense: at this point I have
> zero release blockers on my radar. I will now only accept fixes for
> critical regressions, and if nothing comes up, rc3 will be promoted to
> 3.7.0-final.
> 
> Documentation and release note patches are still welcome all the way
> up until the final tag goes in.
> 
> Issues that were on my radar, but I don't consider blocking:
> 
> - Sanitizer test failures on various platforms, e.g. PR24222. We never
> ran these tests in previous releases, so it's not a regression. It
> would be great if the sanitizer folks could look into the test
> failures, but it's not blocking 3.7.
> 
> - PR24273: "[ARM] Libc++abi built in-tree with libunwind fails in
> __cxa_allocate_exception", Renato will exclude libc++ from his build
> for now.
> 
> - Lack of key functions in some Instruction classes causing build
> failures without -fno-rtti
> (http://lists.llvm.org/pipermail/llvm-dev/2015-August/089010.html). No
> patches have been forthcoming, so this will not get fixed for 3.7. At
> least we correctly report -fno-rtti in llvm-config built with CMake
> now.
> 
> - r244221: "[SPARC] Don't compare arch name as a string, use the enum
> instead", owner is unresponsive.
> 
> - "[lldb] r245020 - [MIPS]Handle floating point and aggregate return
> types in SysV-mips [32 bit] ABI", owner is unresponsive.
> 
> 
> Cheers,
> Hans
> ___
> LLVM Developers mailing list
> llvm-...@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] test results look typical?

2015-08-25 Thread Tamas Berghammer via lldb-dev
Going back to the original question: I think you have more test failures
than expected. As Chaoren mentioned, all TestDataFormatterLibc* tests are
failing because of a missing dependency, but I think the rest of the tests
should pass (I wouldn't expect them to depend on libc++-dev).

You can see the up-to-date list of failures on the Linux buildbot here:
http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake

The buildbot is running in "Google Compute Engine" with Linux version:
"Linux buildbot-master-ubuntu-1404 3.16.0-31-generic #43~14.04.1-Ubuntu SMP
Tue Mar 10 20:13:38 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux"

LLDB is compiled with Clang (not sure which version, but I can find out if
somebody thinks it matters) and the inferiors are compiled with clang-3.5,
clang-tot, and gcc-4.9.2. In all tested configurations there should be no
failures (all failing tests should be XFAIL-ed).

For the flaky tests we introduced an "expectedFlaky" decorator that
executes the test twice and expects it to pass at least once, but it hasn't
been applied to all flaky tests yet. The plan for the tests currently
passing with "unexpected success" is to gather statistics about them and,
based on the number of failures we've seen over the last few hundred runs,
either mark them as "expected flaky" or remove the "expected failure"
annotation.
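
A minimal sketch of such a retry-once decorator, assuming unittest-style
test methods; this is only an illustration of the behaviour described
above, not the actual implementation in the LLDB test suite:

import functools
import unittest

def expectedFlaky(func):
    """Run a test up to twice and count it as passing if either attempt passes.

    Illustrative only; the real LLDB decorator lives in the test-suite
    support code and differs in detail.
    """
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        try:
            return func(self, *args, **kwargs)  # first attempt
        except Exception:
            pass                                # swallow the first failure
        return func(self, *args, **kwargs)      # second and final attempt
    return wrapper

class ExampleFlakyTest(unittest.TestCase):
    @expectedFlaky
    def test_sometimes_races(self):
        # Placeholder body; a real test would exercise the debugger here.
        self.assertTrue(True)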

Tamas

On Tue, Aug 25, 2015 at 2:50 AM via lldb-dev 
wrote:

> On Mon, Aug 24, 2015 at 05:37:43PM -0700, via lldb-dev wrote:
> > On Mon, Aug 24, 2015 at 03:37:52PM -0700, Todd Fiala via lldb-dev wrote:
> > > On Linux on non-virtualized hardware, I currently see the failures
> below on
> > > Ubuntu 14.04.2 using a setup like this:
> > > [...]
> > >
> > > ninja check-lldb output:
>
> FYI, ninja check-lldb actually calls dosep.
>
> > > Ran 394 test suites (15 failed) (3.807107%)
> > > Ran 474 test cases (17 failed) (3.586498%)
> >
> > I don't think you can trust the reporting of dosep.py's "Ran N test
> > cases", as it fails to count about 500 test cases.  The only way I've
> > found to get an accurate count is to add up all the Ns from "Ran N tests
> > in" as follows:
> >
> > ./dosep.py -s --options "-v --executable $BLDDIR/bin/lldb" 2>&1 | tee
> test_out.log
> > export total=`grep -E "^Ran [0-9]+ tests? in" test_out.log | awk
> '{count+=$2} END {print count}'`
>
> Of course, these commands assume you're running the tests from the
> lldb/test directory.
>
> > (See comments in http://reviews.llvm.org/rL238467.)
>
> I've pasted (and tweaked) the relevant comments from that review here,
> where I describe a narrowed case showing how dosep fails to count all the
> test cases from one test suite in test/types.  Note that the tests were run
> on OSX, so your counts may vary.
>
> The final count from:
> Ran N test cases .*
> is wrong, as I'll explain below. I've done a comparison between dosep and
> dotest on a narrowed subset of tests to show how dosep can omit the test
> cases from a test suite in its count.
>
> Tested on subset of lldb/test with just the following directories/files
> (i.e. all other directories/files were removed):
> test/make
> test/pexpect-2.4
> test/plugins
> test/types
> test/unittest2
> # The .py files kept in test/types are as follows (so
> test/types/TestIntegerTypes.py* was removed):
> test/types/AbstractBase.py
> test/types/HideTestFailures.py
> test/types/TestFloatTypes.py
> test/types/TestFloatTypesExpr.py
> test/types/TestIntegerTypesExpr.py
> test/types/TestRecursiveTypes.py
>
> Tests were run in the lldb/test directory using the following commands:
> dotest:
> ./dotest.py -v
> dosep:
> ./dosep.py -s --options "-v"
>
> Comparing the test case totals, dotest correctly counts 46, but dosep
> counts only 16:
> dotest:
> Ran 46 tests in 75.934s
> dosep:
> Testing: 23 tests, 4 threads ## note: this number changes randomly
> Ran 6 tests in 7.049s
> [PASSED TestFloatTypes.py] - 1 out of 23 test suites processed
> Ran 6 tests in 11.165s
> [PASSED TestFloatTypesExpr.py] - 2 out of 23 test suites processed
> Ran 30 tests in 54.581s ## FIXME: not counted?
> [PASSED TestIntegerTypesExpr.py] - 3 out of 23 test suites
> processed
> Ran 4 tests in 3.212s
> [PASSED TestRecursiveTypes.py] - 4 out of 23 test suites processed
> Ran 4 test suites (0 failed) (0.00%)
> Ran 16 test cases (0 failed) (0.00%)
>
> With test/types/TestIntegerTypesExpr.py* removed, both correctly count 16
> test cases:
> dosep:
> Testing: 16 tests, 4 threads
> Ran 6 tests in 7.059s
> Ran 6 tests in 11.186s
> Ran 4 tests in 3.241s
> Ran 3 test suites (0 failed) (0.00%)
> Ran 16 test cases (0 failed) (0.00%)
>
> Note: I couldn't compare the test counts on all the tests because of the
> concern raised in http://reviews.llvm.org/rL237053. That is, dotest can
> no longer complete the tests on OSX, as all test suites fail after test
> case 898: test_disassemble_invalid_vst_1_64_raw_data get ERRORs. I don't
> think that issue is related to problems in dosep.

Re: [lldb-dev] test results look typical?

2015-08-25 Thread Todd Fiala via lldb-dev
Thanks for the details on dosep.py, Dawn.

For counting I will probably go back to my old method of parsing the output
of a serial dotest run, since IIRC I can get skip counts accurately there
as well.  (Or perhaps that should be added to dosep.py, it's been a while
since I last heavily modified that script).

-Todd

On Mon, Aug 24, 2015 at 6:50 PM,  wrote:

> On Mon, Aug 24, 2015 at 05:37:43PM -0700, via lldb-dev wrote:
> > On Mon, Aug 24, 2015 at 03:37:52PM -0700, Todd Fiala via lldb-dev wrote:
> > > On Linux on non-virtualized hardware, I currently see the failures
> below on
> > > Ubuntu 14.04.2 using a setup like this:
> > > [...]
> > >
> > > ninja check-lldb output:
>
> FYI, ninja check-lldb actually calls dosep.
>
> > > Ran 394 test suites (15 failed) (3.807107%)
> > > Ran 474 test cases (17 failed) (3.586498%)
> >
> > I don't think you can trust the reporting of dosep.py's "Ran N test
> > cases", as it fails to count about 500 test cases.  The only way I've
> > found to get an accurate count is to add up all the Ns from "Ran N tests
> > in" as follows:
> >
> > ./dosep.py -s --options "-v --executable $BLDDIR/bin/lldb" 2>&1 | tee
> test_out.log
> > export total=`grep -E "^Ran [0-9]+ tests? in" test_out.log | awk
> '{count+=$2} END {print count}'`
>
> Of course, these commands assume you're running the tests from the
> lldb/test directory.
>
> > (See comments in http://reviews.llvm.org/rL238467.)
>
> I've pasted (and tweaked) the relevant comments from that review here,
> where I describe a narrowed case showing how dosep fails to count all the
> test cases from one test suite in test/types.  Note that the tests were run
> on OSX, so your counts may vary.
>
> The final count from:
> Ran N test cases .*
> is wrong, as I'll explain below. I've done a comparison between dosep and
> dotest on a narrowed subset of tests to show how dosep can omit the test
> cases from a test suite in its count.
>
> Tested on subset of lldb/test with just the following directories/files
> (i.e. all other directories/files were removed):
> test/make
> test/pexpect-2.4
> test/plugins
> test/types
> test/unittest2
> # The .py files kept in test/types are as follows (so
> test/types/TestIntegerTypes.py* was removed):
> test/types/AbstractBase.py
> test/types/HideTestFailures.py
> test/types/TestFloatTypes.py
> test/types/TestFloatTypesExpr.py
> test/types/TestIntegerTypesExpr.py
> test/types/TestRecursiveTypes.py
>
> Tests were run in the lldb/test directory using the following commands:
> dotest:
> ./dotest.py -v
> dosep:
> ./dosep.py -s --options "-v"
>
> Comparing the test case totals, dotest correctly counts 46, but dosep
> counts only 16:
> dotest:
> Ran 46 tests in 75.934s
> dosep:
> Testing: 23 tests, 4 threads ## note: this number changes randomly
> Ran 6 tests in 7.049s
> [PASSED TestFloatTypes.py] - 1 out of 23 test suites processed
> Ran 6 tests in 11.165s
> [PASSED TestFloatTypesExpr.py] - 2 out of 23 test suites processed
> Ran 30 tests in 54.581s ## FIXME: not counted?
> [PASSED TestIntegerTypesExpr.py] - 3 out of 23 test suites
> processed
> Ran 4 tests in 3.212s
> [PASSED TestRecursiveTypes.py] - 4 out of 23 test suites processed
> Ran 4 test suites (0 failed) (0.00%)
> Ran 16 test cases (0 failed) (0.00%)
>
> With test/types/TestIntegerTypesExpr.py* removed, both correctly count 16
> test cases:
> dosep:
> Testing: 16 tests, 4 threads
> Ran 6 tests in 7.059s
> Ran 6 tests in 11.186s
> Ran 4 tests in 3.241s
> Ran 3 test suites (0 failed) (0.00%)
> Ran 16 test cases (0 failed) (0.00%)
>
> Note: I couldn't compare the test counts on all the tests because of the
> concern raised in http://reviews.llvm.org/rL237053. That is, dotest can
> no longer complete the tests on OSX, as all test suites fail after test
> case 898: test_disassemble_invalid_vst_1_64_raw_data get ERRORs. I don't
> think that issue is related to problems in dosep.
>
> Thanks,
> -Dawn
>



-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] test results look typical?

2015-08-25 Thread Todd Fiala via lldb-dev
On Tue, Aug 25, 2015 at 5:40 AM, Tamas Berghammer 
wrote:

> Going back to the original question: I think you have more test failures
> than expected. As Chaoren mentioned, all TestDataFormatterLibc* tests are
> failing because of a missing dependency,
>

Thanks, Tamas.  I'm going to be testing again today with libc++ installed.


> but I think the rest of the tests should pass (I wouldn't expect them to
> depend on libc++-dev).
>
>
I'll get a better handle on what's failing once I get rid of that first
batch.


> You can see the up to date list of failures on the Linux buildbot here:
> http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake
>
>
Ah yes, that'll be good to cross-reference.


> The buildbot is running in "Google Compute Engine" with Linux version:
> "Linux buildbot-master-ubuntu-1404 3.16.0-31-generic #43~14.04.1-Ubuntu SMP
> Tue Mar 10 20:13:38 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux"
>
> LLDB is compiled with Clang (not sure which version, but I can find out
> if somebody thinks it matters) and the inferiors are compiled with
> clang-3.5, clang-tot, and gcc-4.9.2. In all tested configurations there
> should be no failures (all failing tests should be XFAIL-ed).
>
>
Ah okay, good to know.  In the past, IIRC, I did get different failures
using clang-built vs. gcc-built lldb on Ubuntu 14.04.  The clang-built
lldbs at the time were harder to debug on Linux for one reason or another
(I think particularly if any optimizations were enabled, due to loss of
debug info, but there might have been more).  Are you using a clang-built
lldb and debugging it reasonably well on Linux?  If so, I'd just as soon
move over to using clang so there's one less difference when I'm looking
across platforms.


> For the flaky tests we introduced an "expectedFlaky" decorator that
> executes the test twice and expects it to pass at least once,
>

Ah that's a good addition.  We had talked about doing something to watch
tests over time to see when it might be good to promote an XFAIL test that
is consistently passing to a static "expect success" test.  The flaky flag
sounds handy for those that flap.


> but it hasn't been applied to all flaky tests yet. The plan for the tests
> currently passing with "unexpected success" is to gather statistics about
> them and, based on the number of failures we've seen over the last few
> hundred runs, either mark them as "expected flaky" or remove the
> "expected failure" annotation.
>

Ah yes that :-)  Love it.

Thanks, Tamas!


>
> Tamas
>
> On Tue, Aug 25, 2015 at 2:50 AM via lldb-dev 
> wrote:
>
>> On Mon, Aug 24, 2015 at 05:37:43PM -0700, via lldb-dev wrote:
>> > On Mon, Aug 24, 2015 at 03:37:52PM -0700, Todd Fiala via lldb-dev wrote:
>> > > On Linux on non-virtualized hardware, I currently see the failures
>> below on
>> > > Ubuntu 14.04.2 using a setup like this:
>> > > [...]
>> > >
>> > > ninja check-lldb output:
>>
>> FYI, ninja check-lldb actually calls dosep.
>>
>> > > Ran 394 test suites (15 failed) (3.807107%)
>> > > Ran 474 test cases (17 failed) (3.586498%)
>> >
>> > I don't think you can trust the reporting of dosep.py's "Ran N test
>> > cases", as it fails to count about 500 test cases.  The only way I've
>> > found to get an accurate count is to add up all the Ns from "Ran N tests
>> > in" as follows:
>> >
>> > ./dosep.py -s --options "-v --executable $BLDDIR/bin/lldb" 2>&1 | tee
>> test_out.log
>> > export total=`grep -E "^Ran [0-9]+ tests? in" test_out.log | awk
>> '{count+=$2} END {print count}'`
>>
>> Of course, these commands assume you're running the tests from the
>> lldb/test directory.
>>
>> > (See comments in http://reviews.llvm.org/rL238467.)
>>
>> I've pasted (and tweaked) the relavent comments from that review here,
>> where I describe a narrowed case showing how dosep fails to count all the
>> test cases from one test suite in test/types.  Note that the tests were run
>> on OSX, so your counts may vary.
>>
>> The final count from:
>> Ran N test cases .*
>> is wrong, as I'll explain below. I've done a comparison between dosep and
>> dotest on a narrowed subset of tests to show how dosep can omit the test
>> cases from a test suite in its count.
>>
>> Tested on subset of lldb/test with just the following directories/files
>> (i.e. all others directories/files were removed):
>> test/make
>> test/pexpect-2.4
>> test/plugins
>> test/types
>> test/unittest2
>> # The .py files kept in test/types are as follows (so
>> test/types/TestIntegerTypes.py* was removed):
>> test/types/AbstractBase.py
>> test/types/HideTestFailures.py
>> test/types/TestFloatTypes.py
>> test/types/TestFloatTypesExpr.py
>> test/types/TestIntegerTypesExpr.py
>> test/types/TestRecursiveTypes.py
>>
>> Tests were run in the lldb/test directory using the following commands:
>> dotest:
>> ./dotest.py -v
>> dosep:
>> ./dosep.py -s --options "-v"
>>
>> Comparing the test case totals, dotest correctly counts 46, but dosep
>> counts onl

Re: [lldb-dev] test results look typical?

2015-08-25 Thread Todd Fiala via lldb-dev
On Mon, Aug 24, 2015 at 4:11 PM, Todd Fiala  wrote:

>
>
> On Mon, Aug 24, 2015 at 4:01 PM, Chaoren Lin  wrote:
>
>> The TestDataFormatterLibcc* tests require libc++-dev:
>>
>> $ sudo apt-get install libc++-dev
>>
>>
> Ah okay, so we are working with libc++ on Ubuntu, that's good to hear.
> Pre-14.04 I gave up on it.
>
> Will cmake automatically choose libc++ if it is present?  Or do I need to
> pass something to cmake to use libc++?
>

Hmm, it appears I need to do more than just install libc++-dev.  I did a
clean build with that installed, then ran the tests, and I still have the
Libcc/Libcxx tests failing.  Is there some flag I'm expected to pass,
either to cmake or along with the compile options to dotest.py, to
override/specify which C++ library it is using?


>
> Thanks, Chaoren!
>
> -Todd
>
>
>> On Mon, Aug 24, 2015 at 3:42 PM, Todd Fiala via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>>
>>> On Mon, Aug 24, 2015 at 3:39 PM, Zachary Turner 
>>> wrote:
>>>
 Can't comment on the failures for Linux, but I don't think we have a
 good handle on the unexpected successes.  I only added that information to
 the output about a week ago, before that unexpected successes were actually
 going unnoticed.

>>>
>>> Okay, thanks Zachary.   A while back we had some flapping tests that
>>> would oscillate between unexpected success and failure on Linux.  Some of
>>> those might still be in that state but maybe (!) are fixed.
>>>
>>> Anyone on the Linux end who happens to know if the fails in particular
>>> look normal, that'd be good to know.
>>>
>>> Thanks!
>>>
>>>

 It's likely that someone could just go in there and remove the XFAIL
 from those tests.

 On Mon, Aug 24, 2015 at 3:37 PM Todd Fiala via lldb-dev <
 lldb-dev@lists.llvm.org> wrote:

> Hi all,
>
> I'm just trying to get a handle on current lldb test failures across
> different platforms.
>
> On Linux on non-virtualized hardware, I currently see the failures
> below on Ubuntu 14.04.2 using a setup like this:
> * stock linker (ld.bfd),
> * g++ 4.9.2
> * cmake
> * ninja
> * libstdc++
>
> ninja check-lldb output:
>
> Ran 394 test suites (15 failed) (3.807107%)
> Ran 474 test cases (17 failed) (3.586498%)
> Failing Tests (15)
> FAIL: LLDB (suite) :: TestCPPThis.py (Linux rad 3.13.0-57-generic
> #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
> FAIL: LLDB (suite) :: TestDataFormatterLibccIterator.py (Linux rad
> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
> x86_64)
> FAIL: LLDB (suite) :: TestDataFormatterLibccMap.py (Linux rad
> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
> x86_64)
> FAIL: LLDB (suite) :: TestDataFormatterLibccMultiMap.py (Linux rad
> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
> x86_64)
> FAIL: LLDB (suite) :: TestDataFormatterLibcxxMultiSet.py (Linux rad
> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
> x86_64)
> FAIL: LLDB (suite) :: TestDataFormatterLibcxxSet.py (Linux rad
> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
> x86_64)
> FAIL: LLDB (suite) :: TestDataFormatterLibcxxString.py (Linux rad
> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
> x86_64)
> FAIL: LLDB (suite) :: TestDataFormatterSkipSummary.py (Linux rad
> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
> x86_64)
> FAIL: LLDB (suite) :: TestDataFormatterUnordered.py (Linux rad
> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
> x86_64)
> FAIL: LLDB (suite) :: TestMiGdbSetShowPrint.py (Linux rad
> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
> x86_64)
> FAIL: LLDB (suite) :: TestRegisterVariables.py (Linux rad
> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
> x86_64)
> FAIL: LLDB (suite) :: TestStaticVariables.py (Linux rad
> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
> x86_64)
> FAIL: LLDB (suite) :: TestStepNoDebug.py (Linux rad 3.13.0-57-generic
> #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
> FAIL: LLDB (suite) :: TestTypedefArray.py (Linux rad 3.13.0-57-generic
> #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
> FAIL: LLDB (suite) :: TestVectorTypesFormatting.py (Linux rad
> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
> x86_64)
>
> Unexpected Successes (10)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestBatchMode.py (Linux rad
> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
> x86_64)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestEvents.py (Linux rad
> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
> x86_64)
>>

Re: [lldb-dev] test results look typical?

2015-08-25 Thread Pavel Labath via lldb-dev
There is no separate option, it should just work. :)

I'm betting you are still missing some package there (we should
document the prerequisites better). Could you send the error message
you are getting so we can have a look?

cheers,
pl


On 25 August 2015 at 16:20, Todd Fiala via lldb-dev
 wrote:
>
>
> On Mon, Aug 24, 2015 at 4:11 PM, Todd Fiala  wrote:
>>
>>
>>
>> On Mon, Aug 24, 2015 at 4:01 PM, Chaoren Lin  wrote:
>>>
>>> The TestDataFormatterLibcc* tests require libc++-dev:
>>>
>>> $ sudo apt-get install libc++-dev
>>>
>>
>> Ah okay, so we are working with libc++ on Ubuntu, that's good to hear.
>> Pre-14.04 I gave up on it.
>>
>> Will cmake automatically choose libc++ if it is present?  Or do I need to
>> pass something to cmake to use libc++?
>
>
> Hmm it appears I need to do more than just install libc++-dev.  I did a
> clean build with that installed, then ran the tests, and I still have the
> Libcxc/Libcxx tests failing.  Is there some flag expected, either to pass
> along for the compile options to dotest.py to override/specify which c++ lib
> it is using?
>
>>
>>
>> Thanks, Chaoren!
>>
>> -Todd
>>
>>>
>>> On Mon, Aug 24, 2015 at 3:42 PM, Todd Fiala via lldb-dev
>>>  wrote:


 On Mon, Aug 24, 2015 at 3:39 PM, Zachary Turner 
 wrote:
>
> Can't comment on the failures for Linux, but I don't think we have a
> good handle on the unexpected successes.  I only added that information to
> the output about a week ago, before that unexpected successes were 
> actually
> going unnoticed.


 Okay, thanks Zachary.   A while back we had some flapping tests that
 would oscillate between unexpected success and failure on Linux.  Some of
 those might still be in that state but maybe (!) are fixed.

 Anyone on the Linux end who happens to know if the fails in particular
 look normal, that'd be good to know.

 Thanks!

>
>
> It's likely that someone could just go in there and remove the XFAIL
> from those tests.
>
> On Mon, Aug 24, 2015 at 3:37 PM Todd Fiala via lldb-dev
>  wrote:
>>
>> Hi all,
>>
>> I'm just trying to get a handle on current lldb test failures across
>> different platforms.
>>
>> On Linux on non-virtualized hardware, I currently see the failures
>> below on Ubuntu 14.04.2 using a setup like this:
>> * stock linker (ld.bfd),
>> * g++ 4.9.2
>> * cmake
>> * ninja
>> * libstdc++
>>
>> ninja check-lldb output:
>>
>> Ran 394 test suites (15 failed) (3.807107%)
>> Ran 474 test cases (17 failed) (3.586498%)
>> Failing Tests (15)
>> FAIL: LLDB (suite) :: TestCPPThis.py (Linux rad 3.13.0-57-generic
>> #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
>> FAIL: LLDB (suite) :: TestDataFormatterLibccIterator.py (Linux rad
>> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
>> x86_64)
>> FAIL: LLDB (suite) :: TestDataFormatterLibccMap.py (Linux rad
>> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
>> x86_64)
>> FAIL: LLDB (suite) :: TestDataFormatterLibccMultiMap.py (Linux rad
>> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
>> x86_64)
>> FAIL: LLDB (suite) :: TestDataFormatterLibcxxMultiSet.py (Linux rad
>> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
>> x86_64)
>> FAIL: LLDB (suite) :: TestDataFormatterLibcxxSet.py (Linux rad
>> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
>> x86_64)
>> FAIL: LLDB (suite) :: TestDataFormatterLibcxxString.py (Linux rad
>> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
>> x86_64)
>> FAIL: LLDB (suite) :: TestDataFormatterSkipSummary.py (Linux rad
>> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
>> x86_64)
>> FAIL: LLDB (suite) :: TestDataFormatterUnordered.py (Linux rad
>> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
>> x86_64)
>> FAIL: LLDB (suite) :: TestMiGdbSetShowPrint.py (Linux rad
>> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
>> x86_64)
>> FAIL: LLDB (suite) :: TestRegisterVariables.py (Linux rad
>> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
>> x86_64)
>> FAIL: LLDB (suite) :: TestStaticVariables.py (Linux rad
>> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 
>> x86_64)
>> FAIL: LLDB (suite) :: TestStepNoDebug.py (Linux rad 3.13.0-57-generic
>> #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
>> FAIL: LLDB (suite) :: TestTypedefArray.py (Linux rad 3.13.0-57-generic
>> #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
>> FAIL: LLDB (suite) :: TestVectorTypesFormatting.py (Linux rad
>> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:1

Re: [lldb-dev] test results look typical?

2015-08-25 Thread Todd Fiala via lldb-dev
Thanks, Pavel!  I'll dig that up and get back.

On Tue, Aug 25, 2015 at 8:30 AM, Pavel Labath  wrote:

> There is no separate option, it should just work. :)
>
> I'm betting you are still missing some package there (we should
> document the prerequisites better). Could you send the error message
> you are getting so we can have a look.
>
> cheers,
> pl
>
>
> On 25 August 2015 at 16:20, Todd Fiala via lldb-dev
>  wrote:
> >
> >
> > On Mon, Aug 24, 2015 at 4:11 PM, Todd Fiala 
> wrote:
> >>
> >>
> >>
> >> On Mon, Aug 24, 2015 at 4:01 PM, Chaoren Lin 
> wrote:
> >>>
> >>> The TestDataFormatterLibcc* tests require libc++-dev:
> >>>
> >>> $ sudo apt-get install libc++-dev
> >>>
> >>
> >> Ah okay, so we are working with libc++ on Ubuntu, that's good to hear.
> >> Pre-14.04 I gave up on it.
> >>
> >> Will cmake automatically choose libc++ if it is present?  Or do I need
> to
> >> pass something to cmake to use libc++?
> >
> >
> > Hmm it appears I need to do more than just install libc++-dev.  I did a
> > clean build with that installed, then ran the tests, and I still have the
> > Libcxc/Libcxx tests failing.  Is there some flag expected, either to pass
> > along for the compile options to dotest.py to override/specify which c++
> lib
> > it is using?
> >
> >>
> >>
> >> Thanks, Chaoren!
> >>
> >> -Todd
> >>
> >>>
> >>> On Mon, Aug 24, 2015 at 3:42 PM, Todd Fiala via lldb-dev
> >>>  wrote:
> 
> 
>  On Mon, Aug 24, 2015 at 3:39 PM, Zachary Turner 
>  wrote:
> >
> > Can't comment on the failures for Linux, but I don't think we have a
> > good handle on the unexpected successes.  I only added that
> information to
> > the output about a week ago, before that unexpected successes were
> actually
> > going unnoticed.
> 
> 
>  Okay, thanks Zachary.   A while back we had some flapping tests that
>  would oscillate between unexpected success and failure on Linux.
> Some of
>  those might still be in that state but maybe (!) are fixed.
> 
>  Anyone on the Linux end who happens to know if the fails in particular
>  look normal, that'd be good to know.
> 
>  Thanks!
> 
> >
> >
> > It's likely that someone could just go in there and remove the XFAIL
> > from those tests.
> >
> > On Mon, Aug 24, 2015 at 3:37 PM Todd Fiala via lldb-dev
> >  wrote:
> >>
> >> Hi all,
> >>
> >> I'm just trying to get a handle on current lldb test failures across
> >> different platforms.
> >>
> >> On Linux on non-virtualized hardware, I currently see the failures
> >> below on Ubuntu 14.04.2 using a setup like this:
> >> * stock linker (ld.bfd),
> >> * g++ 4.9.2
> >> * cmake
> >> * ninja
> >> * libstdc++
> >>
> >> ninja check-lldb output:
> >>
> >> Ran 394 test suites (15 failed) (3.807107%)
> >> Ran 474 test cases (17 failed) (3.586498%)
> >> Failing Tests (15)
> >> FAIL: LLDB (suite) :: TestCPPThis.py (Linux rad 3.13.0-57-generic
> >> #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterLibccIterator.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterLibccMap.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterLibccMultiMap.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterLibcxxMultiSet.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterLibcxxSet.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterLibcxxString.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterSkipSummary.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterUnordered.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestMiGdbSetShowPrint.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestRegisterVariables.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestStaticVariables.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestStepNoDebug.py (Linux rad
> 3.13.0-57-generic
> >> #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 

Re: [lldb-dev] test results look typical?

2015-08-25 Thread Tamas Berghammer via lldb-dev
Hi Todd,

I am using a clang-3.5-built release LLDB to debug another clang-3.5-built
debug LLDB on Linux x86_64 and it works pretty well for me (better than
using GDB). The main issue I am hitting is around expression evaluation,
where I can't execute very small functions in std:: objects, but I can
work around it by accessing the internal data representation (primarily
for shared_ptr, unique_ptr and vector). We are still using gcc for
compiling lldb-server for Android because the Android clang has some
issues (atomic not supported), but I don't know of anybody testing a
gcc-built LLDB on Linux.

Tamas


On Tue, Aug 25, 2015 at 4:31 PM Pavel Labath via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> There is no separate option, it should just work. :)
>
> I'm betting you are still missing some package there (we should
> document the prerequisites better). Could you send the error message
> you are getting so we can have a look.
>
> cheers,
> pl
>
>
> On 25 August 2015 at 16:20, Todd Fiala via lldb-dev
>  wrote:
> >
> >
> > On Mon, Aug 24, 2015 at 4:11 PM, Todd Fiala 
> wrote:
> >>
> >>
> >>
> >> On Mon, Aug 24, 2015 at 4:01 PM, Chaoren Lin 
> wrote:
> >>>
> >>> The TestDataFormatterLibcc* tests require libc++-dev:
> >>>
> >>> $ sudo apt-get install libc++-dev
> >>>
> >>
> >> Ah okay, so we are working with libc++ on Ubuntu, that's good to hear.
> >> Pre-14.04 I gave up on it.
> >>
> >> Will cmake automatically choose libc++ if it is present?  Or do I need
> to
> >> pass something to cmake to use libc++?
> >
> >
> > Hmm it appears I need to do more than just install libc++-dev.  I did a
> > clean build with that installed, then ran the tests, and I still have the
> > Libcxc/Libcxx tests failing.  Is there some flag expected, either to pass
> > along for the compile options to dotest.py to override/specify which c++
> lib
> > it is using?
> >
> >>
> >>
> >> Thanks, Chaoren!
> >>
> >> -Todd
> >>
> >>>
> >>> On Mon, Aug 24, 2015 at 3:42 PM, Todd Fiala via lldb-dev
> >>>  wrote:
> 
> 
>  On Mon, Aug 24, 2015 at 3:39 PM, Zachary Turner 
>  wrote:
> >
> > Can't comment on the failures for Linux, but I don't think we have a
> > good handle on the unexpected successes.  I only added that
> information to
> > the output about a week ago, before that unexpected successes were
> actually
> > going unnoticed.
> 
> 
>  Okay, thanks Zachary.   A while back we had some flapping tests that
>  would oscillate between unexpected success and failure on Linux.
> Some of
>  those might still be in that state but maybe (!) are fixed.
> 
>  Anyone on the Linux end who happens to know if the fails in particular
>  look normal, that'd be good to know.
> 
>  Thanks!
> 
> >
> >
> > It's likely that someone could just go in there and remove the XFAIL
> > from those tests.
> >
> > On Mon, Aug 24, 2015 at 3:37 PM Todd Fiala via lldb-dev
> >  wrote:
> >>
> >> Hi all,
> >>
> >> I'm just trying to get a handle on current lldb test failures across
> >> different platforms.
> >>
> >> On Linux on non-virtualized hardware, I currently see the failures
> >> below on Ubuntu 14.04.2 using a setup like this:
> >> * stock linker (ld.bfd),
> >> * g++ 4.9.2
> >> * cmake
> >> * ninja
> >> * libstdc++
> >>
> >> ninja check-lldb output:
> >>
> >> Ran 394 test suites (15 failed) (3.807107%)
> >> Ran 474 test cases (17 failed) (3.586498%)
> >> Failing Tests (15)
> >> FAIL: LLDB (suite) :: TestCPPThis.py (Linux rad 3.13.0-57-generic
> >> #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterLibccIterator.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterLibccMap.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterLibccMultiMap.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterLibcxxMultiSet.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterLibcxxSet.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterLibcxxString.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterSkipSummary.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86_64)
> >> FAIL: LLDB (suite) :: TestDataFormatterUnordered.py (Linux rad
> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> x86_64 x86

Re: [lldb-dev] test results look typical?

2015-08-25 Thread Pavel Labath via lldb-dev
There's no need to do anything fancy (yet :) ). For initial diagnosis
the output of `./dotest.py $your_usual_options -p SomeLibcxxTest.py
-t` should suffice.

pl

On 25 August 2015 at 16:45, Todd Fiala  wrote:
> Thanks, Pavel!  I'll dig that up and get back.
>
> On Tue, Aug 25, 2015 at 8:30 AM, Pavel Labath  wrote:
>>
>> There is no separate option, it should just work. :)
>>
>> I'm betting you are still missing some package there (we should
>> document the prerequisites better). Could you send the error message
>> you are getting so we can have a look.
>>
>> cheers,
>> pl
>>
>>
>> On 25 August 2015 at 16:20, Todd Fiala via lldb-dev
>>  wrote:
>> >
>> >
>> > On Mon, Aug 24, 2015 at 4:11 PM, Todd Fiala 
>> > wrote:
>> >>
>> >>
>> >>
>> >> On Mon, Aug 24, 2015 at 4:01 PM, Chaoren Lin 
>> >> wrote:
>> >>>
>> >>> The TestDataFormatterLibcc* tests require libc++-dev:
>> >>>
>> >>> $ sudo apt-get install libc++-dev
>> >>>
>> >>
>> >> Ah okay, so we are working with libc++ on Ubuntu, that's good to hear.
>> >> Pre-14.04 I gave up on it.
>> >>
>> >> Will cmake automatically choose libc++ if it is present?  Or do I need
>> >> to
>> >> pass something to cmake to use libc++?
>> >
>> >
>> > Hmm it appears I need to do more than just install libc++-dev.  I did a
>> > clean build with that installed, then ran the tests, and I still have
>> > the
>> > Libcxc/Libcxx tests failing.  Is there some flag expected, either to
>> > pass
>> > along for the compile options to dotest.py to override/specify which c++
>> > lib
>> > it is using?
>> >
>> >>
>> >>
>> >> Thanks, Chaoren!
>> >>
>> >> -Todd
>> >>
>> >>>
>> >>> On Mon, Aug 24, 2015 at 3:42 PM, Todd Fiala via lldb-dev
>> >>>  wrote:
>> 
>> 
>>  On Mon, Aug 24, 2015 at 3:39 PM, Zachary Turner 
>>  wrote:
>> >
>> > Can't comment on the failures for Linux, but I don't think we have a
>> > good handle on the unexpected successes.  I only added that
>> > information to
>> > the output about a week ago, before that unexpected successes were
>> > actually
>> > going unnoticed.
>> 
>> 
>>  Okay, thanks Zachary.   A while back we had some flapping tests that
>>  would oscillate between unexpected success and failure on Linux.
>>  Some of
>>  those might still be in that state but maybe (!) are fixed.
>> 
>>  Anyone on the Linux end who happens to know if the fails in
>>  particular
>>  look normal, that'd be good to know.
>> 
>>  Thanks!
>> 
>> >
>> >
>> > It's likely that someone could just go in there and remove the XFAIL
>> > from those tests.
>> >
>> > On Mon, Aug 24, 2015 at 3:37 PM Todd Fiala via lldb-dev
>> >  wrote:
>> >>
>> >> Hi all,
>> >>
>> >> I'm just trying to get a handle on current lldb test failures
>> >> across
>> >> different platforms.
>> >>
>> >> On Linux on non-virtualized hardware, I currently see the failures
>> >> below on Ubuntu 14.04.2 using a setup like this:
>> >> * stock linker (ld.bfd),
>> >> * g++ 4.9.2
>> >> * cmake
>> >> * ninja
>> >> * libstdc++
>> >>
>> >> ninja check-lldb output:
>> >>
>> >> Ran 394 test suites (15 failed) (3.807107%)
>> >> Ran 474 test cases (17 failed) (3.586498%)
>> >> Failing Tests (15)
>> >> FAIL: LLDB (suite) :: TestCPPThis.py (Linux rad 3.13.0-57-generic
>> >> #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
>> >> FAIL: LLDB (suite) :: TestDataFormatterLibccIterator.py (Linux rad
>> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
>> >> x86_64 x86_64)
>> >> FAIL: LLDB (suite) :: TestDataFormatterLibccMap.py (Linux rad
>> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
>> >> x86_64 x86_64)
>> >> FAIL: LLDB (suite) :: TestDataFormatterLibccMultiMap.py (Linux rad
>> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
>> >> x86_64 x86_64)
>> >> FAIL: LLDB (suite) :: TestDataFormatterLibcxxMultiSet.py (Linux rad
>> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
>> >> x86_64 x86_64)
>> >> FAIL: LLDB (suite) :: TestDataFormatterLibcxxSet.py (Linux rad
>> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
>> >> x86_64 x86_64)
>> >> FAIL: LLDB (suite) :: TestDataFormatterLibcxxString.py (Linux rad
>> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
>> >> x86_64 x86_64)
>> >> FAIL: LLDB (suite) :: TestDataFormatterSkipSummary.py (Linux rad
>> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
>> >> x86_64 x86_64)
>> >> FAIL: LLDB (suite) :: TestDataFormatterUnordered.py (Linux rad
>> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
>> >> x86_64 x86_64)
>> >> FAIL: LLDB (suite) :: TestMiGdbSetShowPrint.py (Linux rad
>> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
>> >> 

Re: [lldb-dev] test results look typical?

2015-08-25 Thread Todd Fiala via lldb-dev
Here are a couple of the failures that came up (the log output from the
full dosep.py run).

Let me know if that is not sufficient!

On Tue, Aug 25, 2015 at 9:14 AM, Pavel Labath  wrote:

> There's no need to do anything fancy (yet :) ). For initial diagnosis
> the output of `./dotest.py $your_usual_options -p SomeLibcxxTest.py
> -t` should suffice.
>
> pl
>
> On 25 August 2015 at 16:45, Todd Fiala  wrote:
> > Thanks, Pavel!  I'll dig that up and get back.
> >
> > On Tue, Aug 25, 2015 at 8:30 AM, Pavel Labath  wrote:
> >>
> >> There is no separate option, it should just work. :)
> >>
> >> I'm betting you are still missing some package there (we should
> >> document the prerequisites better). Could you send the error message
> >> you are getting so we can have a look.
> >>
> >> cheers,
> >> pl
> >>
> >>
> >> On 25 August 2015 at 16:20, Todd Fiala via lldb-dev
> >>  wrote:
> >> >
> >> >
> >> > On Mon, Aug 24, 2015 at 4:11 PM, Todd Fiala 
> >> > wrote:
> >> >>
> >> >>
> >> >>
> >> >> On Mon, Aug 24, 2015 at 4:01 PM, Chaoren Lin 
> >> >> wrote:
> >> >>>
> >> >>> The TestDataFormatterLibcc* tests require libc++-dev:
> >> >>>
> >> >>> $ sudo apt-get install libc++-dev
> >> >>>
> >> >>
> >> >> Ah okay, so we are working with libc++ on Ubuntu, that's good to
> hear.
> >> >> Pre-14.04 I gave up on it.
> >> >>
> >> >> Will cmake automatically choose libc++ if it is present?  Or do I
> need
> >> >> to
> >> >> pass something to cmake to use libc++?
> >> >
> >> >
> >> > Hmm it appears I need to do more than just install libc++-dev.  I did
> a
> >> > clean build with that installed, then ran the tests, and I still have
> >> > the
> >> > Libcxc/Libcxx tests failing.  Is there some flag expected, either to
> >> > pass
> >> > along for the compile options to dotest.py to override/specify which
> c++
> >> > lib
> >> > it is using?
> >> >
> >> >>
> >> >>
> >> >> Thanks, Chaoren!
> >> >>
> >> >> -Todd
> >> >>
> >> >>>
> >> >>> On Mon, Aug 24, 2015 at 3:42 PM, Todd Fiala via lldb-dev
> >> >>>  wrote:
> >> 
> >> 
> >>  On Mon, Aug 24, 2015 at 3:39 PM, Zachary Turner <
> ztur...@google.com>
> >>  wrote:
> >> >
> >> > Can't comment on the failures for Linux, but I don't think we
> have a
> >> > good handle on the unexpected successes.  I only added that
> >> > information to
> >> > the output about a week ago, before that unexpected successes were
> >> > actually
> >> > going unnoticed.
> >> 
> >> 
> >>  Okay, thanks Zachary.   A while back we had some flapping tests
> that
> >>  would oscillate between unexpected success and failure on Linux.
> >>  Some of
> >>  those might still be in that state but maybe (!) are fixed.
> >> 
> >>  Anyone on the Linux end who happens to know if the fails in
> >>  particular
> >>  look normal, that'd be good to know.
> >> 
> >>  Thanks!
> >> 
> >> >
> >> >
> >> > It's likely that someone could just go in there and remove the
> XFAIL
> >> > from those tests.
> >> >
> >> > On Mon, Aug 24, 2015 at 3:37 PM Todd Fiala via lldb-dev
> >> >  wrote:
> >> >>
> >> >> Hi all,
> >> >>
> >> >> I'm just trying to get a handle on current lldb test failures
> >> >> across
> >> >> different platforms.
> >> >>
> >> >> On Linux on non-virtualized hardware, I currently see the
> failures
> >> >> below on Ubuntu 14.04.2 using a setup like this:
> >> >> * stock linker (ld.bfd),
> >> >> * g++ 4.9.2
> >> >> * cmake
> >> >> * ninja
> >> >> * libstdc++
> >> >>
> >> >> ninja check-lldb output:
> >> >>
> >> >> Ran 394 test suites (15 failed) (3.807107%)
> >> >> Ran 474 test cases (17 failed) (3.586498%)
> >> >> Failing Tests (15)
> >> >> FAIL: LLDB (suite) :: TestCPPThis.py (Linux rad 3.13.0-57-generic
> >> >> #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
> >> >> FAIL: LLDB (suite) :: TestDataFormatterLibccIterator.py (Linux
> rad
> >> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> >> >> x86_64 x86_64)
> >> >> FAIL: LLDB (suite) :: TestDataFormatterLibccMap.py (Linux rad
> >> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> >> >> x86_64 x86_64)
> >> >> FAIL: LLDB (suite) :: TestDataFormatterLibccMultiMap.py (Linux
> rad
> >> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> >> >> x86_64 x86_64)
> >> >> FAIL: LLDB (suite) :: TestDataFormatterLibcxxMultiSet.py (Linux
> rad
> >> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> >> >> x86_64 x86_64)
> >> >> FAIL: LLDB (suite) :: TestDataFormatterLibcxxSet.py (Linux rad
> >> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> >> >> x86_64 x86_64)
> >> >> FAIL: LLDB (suite) :: TestDataFormatterLibcxxString.py (Linux rad
> >> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
> >> >> x86_64

Re: [lldb-dev] test results look typical?

2015-08-25 Thread Zachary Turner via lldb-dev
It would be great (and not too difficult) to add skip counts to dosep.  I
modified dotest so it formats the result summary in a nice single string
that you can regex match to get counts.  It's already matched in dosep, but
we just aren't pulling out the skip counts.  So it would be very easy to
add this.
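
A rough sketch of the kind of log parsing being discussed, equivalent in
spirit to the grep/awk one-liner quoted earlier in the thread: the snippet
below totals the per-suite "Ran N tests in ..." lines from a saved dosep
log and, as an assumption for illustration, also picks up unittest-style
"skipped=N" annotations if they appear. The default log file name is
likewise just an example.

import re
import sys

RAN_RE = re.compile(r"^Ran (\d+) tests? in ", re.MULTILINE)
SKIP_RE = re.compile(r"skipped=(\d+)")  # assumption: unittest-style summary annotations

def summarize(log_path):
    """Total test-case and skip counts from a saved dosep/dotest log.

    Summing the per-suite "Ran N tests in ..." lines sidesteps dosep's
    under-counted "Ran N test cases" total described earlier in the thread.
    """
    with open(log_path) as f:
        text = f.read()
    total = sum(int(n) for n in RAN_RE.findall(text))
    skipped = sum(int(n) for n in SKIP_RE.findall(text))
    return total, skipped

if __name__ == "__main__":
    log = sys.argv[1] if len(sys.argv) > 1 else "test_out.log"
    total, skipped = summarize(log)
    print("Ran %d test cases (%d skipped)" % (total, skipped))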

On Tue, Aug 25, 2015 at 7:41 AM Todd Fiala via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Thanks for the details on dosep.py, Dawn.
>
> For counting I will probably go back to my old method of parsing the
> output of a serial dotest run, since IIRC I can get skip counts accurately
> there as well.  (Or perhaps that should be added to dosep.py, it's been a
> while since I last heavily modified that script).
>
> -Todd
>
> On Mon, Aug 24, 2015 at 6:50 PM,  wrote:
>
>> On Mon, Aug 24, 2015 at 05:37:43PM -0700, via lldb-dev wrote:
>> > On Mon, Aug 24, 2015 at 03:37:52PM -0700, Todd Fiala via lldb-dev wrote:
>> > > On Linux on non-virtualized hardware, I currently see the failures
>> below on
>> > > Ubuntu 14.04.2 using a setup like this:
>> > > [...]
>> > >
>> > > ninja check-lldb output:
>>
>> FYI, ninja check-lldb actually calls dosep.
>>
>> > > Ran 394 test suites (15 failed) (3.807107%)
>> > > Ran 474 test cases (17 failed) (3.586498%)
>> >
>> > I don't think you can trust the reporting of dosep.py's "Ran N test
>> > cases", as it fails to count about 500 test cases.  The only way I've
>> > found to get an accurate count is to add up all the Ns from "Ran N tests
>> > in" as follows:
>> >
>> > ./dosep.py -s --options "-v --executable $BLDDIR/bin/lldb" 2>&1 | tee
>> test_out.log
>> > export total=`grep -E "^Ran [0-9]+ tests? in" test_out.log | awk
>> '{count+=$2} END {print count}'`
>>
>> Of course, these commands assume you're running the tests from the
>> lldb/test directory.
>>
>> > (See comments in http://reviews.llvm.org/rL238467.)
>>
>> I've pasted (and tweaked) the relevant comments from that review here,
>> where I describe a narrowed case showing how dosep fails to count all the
>> test cases from one test suite in test/types.  Note that the tests were run
>> on OSX, so your counts may vary.
>>
>> The final count from:
>> Ran N test cases .*
>> is wrong, as I'll explain below. I've done a comparison between dosep and
>> dotest on a narrowed subset of tests to show how dosep can omit the test
>> cases from a test suite in its count.
>>
>> Tested on subset of lldb/test with just the following directories/files
>> (i.e. all other directories/files were removed):
>> test/make
>> test/pexpect-2.4
>> test/plugins
>> test/types
>> test/unittest2
>> # The .py files kept in test/types are as follows (so
>> test/types/TestIntegerTypes.py* was removed):
>> test/types/AbstractBase.py
>> test/types/HideTestFailures.py
>> test/types/TestFloatTypes.py
>> test/types/TestFloatTypesExpr.py
>> test/types/TestIntegerTypesExpr.py
>> test/types/TestRecursiveTypes.py
>>
>> Tests were run in the lldb/test directory using the following commands:
>> dotest:
>> ./dotest.py -v
>> dosep:
>> ./dosep.py -s --options "-v"
>>
>> Comparing the test case totals, dotest correctly counts 46, but dosep
>> counts only 16:
>> dotest:
>> Ran 46 tests in 75.934s
>> dosep:
>> Testing: 23 tests, 4 threads ## note: this number changes
>> randomly
>> Ran 6 tests in 7.049s
>> [PASSED TestFloatTypes.py] - 1 out of 23 test suites processed
>> Ran 6 tests in 11.165s
>> [PASSED TestFloatTypesExpr.py] - 2 out of 23 test suites processed
>> Ran 30 tests in 54.581s ## FIXME: not counted?
>> [PASSED TestIntegerTypesExpr.py] - 3 out of 23 test suites
>> processed
>> Ran 4 tests in 3.212s
>> [PASSED TestRecursiveTypes.py] - 4 out of 23 test suites processed
>> Ran 4 test suites (0 failed) (0.00%)
>> Ran 16 test cases (0 failed) (0.00%)
>>
>> With test/types/TestIntegerTypesExpr.py* removed, both correctly count 16
>> test cases:
>> dosep:
>> Testing: 16 tests, 4 threads
>> Ran 6 tests in 7.059s
>> Ran 6 tests in 11.186s
>> Ran 4 tests in 3.241s
>> Ran 3 test suites (0 failed) (0.00%)
>> Ran 16 test cases (0 failed) (0.00%)
>>
>> Note: I couldn't compare the test counts on all the tests because of the
>> concern raised in http://reviews.llvm.org/rL237053. That is, dotest can
>> no longer complete the tests on OSX, as all test suites fail after test
>> case 898: test_disassemble_invalid_vst_1_64_raw_data get ERRORs. I don't
>> think that issue is related to problems in dosep.
>>
>> Thanks,
>> -Dawn
>>
>
>
>
> --
> -Todd
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev

Re: [lldb-dev] test results look typical?

2015-08-25 Thread Chaoren Lin via lldb-dev
You're using CC="/usr/bin/cc". It needs to be clang for USE_LIBCPP to do
anything. :/

On Tue, Aug 25, 2015 at 9:20 AM, Todd Fiala  wrote:

> Here are a couple of the failures that came up (the log output from the
> full dosep.py run).
>
> Let me know if that is not sufficient!
>
> On Tue, Aug 25, 2015 at 9:14 AM, Pavel Labath  wrote:
>
>> There's no need to do anything fancy (yet :) ). For initial diagnosis
>> the output of `./dotest.py $your_usual_options -p SomeLibcxxTest.py
>> -t` should suffice.
>>
>> pl
>>
>> On 25 August 2015 at 16:45, Todd Fiala  wrote:
>> > Thanks, Pavel!  I'll dig that up and get back.
>> >
>> > On Tue, Aug 25, 2015 at 8:30 AM, Pavel Labath 
>> wrote:
>> >>
>> >> There is no separate option, it should just work. :)
>> >>
>> >> I'm betting you are still missing some package there (we should
>> >> document the prerequisites better). Could you send the error message
>> >> you are getting so we can have a look.
>> >>
>> >> cheers,
>> >> pl
>> >>
>> >>
>> >> On 25 August 2015 at 16:20, Todd Fiala via lldb-dev
>> >>  wrote:
>> >> >
>> >> >
>> >> > On Mon, Aug 24, 2015 at 4:11 PM, Todd Fiala 
>> >> > wrote:
>> >> >>
>> >> >>
>> >> >>
>> >> >> On Mon, Aug 24, 2015 at 4:01 PM, Chaoren Lin 
>> >> >> wrote:
>> >> >>>
>> >> >>> The TestDataFormatterLibcc* tests require libc++-dev:
>> >> >>>
>> >> >>> $ sudo apt-get install libc++-dev
>> >> >>>
>> >> >>
>> >> >> Ah okay, so we are working with libc++ on Ubuntu, that's good to
>> hear.
>> >> >> Pre-14.04 I gave up on it.
>> >> >>
>> >> >> Will cmake automatically choose libc++ if it is present?  Or do I
>> need
>> >> >> to
>> >> >> pass something to cmake to use libc++?
>> >> >
>> >> >
>> >> > Hmm it appears I need to do more than just install libc++-dev.  I
>> did a
>> >> > clean build with that installed, then ran the tests, and I still have
>> >> > the
>> >> > Libcxc/Libcxx tests failing.  Is there some flag expected, either to
>> >> > pass
>> >> > along for the compile options to dotest.py to override/specify which
>> c++
>> >> > lib
>> >> > it is using?
>> >> >
>> >> >>
>> >> >>
>> >> >> Thanks, Chaoren!
>> >> >>
>> >> >> -Todd
>> >> >>
>> >> >>>
>> >> >>> On Mon, Aug 24, 2015 at 3:42 PM, Todd Fiala via lldb-dev
>> >> >>>  wrote:
>> >> 
>> >> 
>> >>  On Mon, Aug 24, 2015 at 3:39 PM, Zachary Turner <
>> ztur...@google.com>
>> >>  wrote:
>> >> >
>> >> > Can't comment on the failures for Linux, but I don't think we
>> have a
>> >> > good handle on the unexpected successes.  I only added that
>> >> > information to
>> >> > the output about a week ago, before that unexpected successes
>> were
>> >> > actually
>> >> > going unnoticed.
>> >> 
>> >> 
>> >>  Okay, thanks Zachary.   A while back we had some flapping tests
>> that
>> >>  would oscillate between unexpected success and failure on Linux.
>> >>  Some of
>> >>  those might still be in that state but maybe (!) are fixed.
>> >> 
>> >>  Anyone on the Linux end who happens to know if the fails in
>> >>  particular
>> >>  look normal, that'd be good to know.
>> >> 
>> >>  Thanks!
>> >> 
>> >> >
>> >> >
>> >> > It's likely that someone could just go in there and remove the
>> XFAIL
>> >> > from those tests.
>> >> >
>> >> > On Mon, Aug 24, 2015 at 3:37 PM Todd Fiala via lldb-dev
>> >> >  wrote:
>> >> >>
>> >> >> Hi all,
>> >> >>
>> >> >> I'm just trying to get a handle on current lldb test failures
>> >> >> across
>> >> >> different platforms.
>> >> >>
>> >> >> On Linux on non-virtualized hardware, I currently see the
>> failures
>> >> >> below on Ubuntu 14.04.2 using a setup like this:
>> >> >> * stock linker (ld.bfd),
>> >> >> * g++ 4.9.2
>> >> >> * cmake
>> >> >> * ninja
>> >> >> * libstdc++
>> >> >>
>> >> >> ninja check-lldb output:
>> >> >>
>> >> >> Ran 394 test suites (15 failed) (3.807107%)
>> >> >> Ran 474 test cases (17 failed) (3.586498%)
>> >> >> Failing Tests (15)
>> >> >> FAIL: LLDB (suite) :: TestCPPThis.py (Linux rad
>> 3.13.0-57-generic
>> >> >> #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
>> >> >> FAIL: LLDB (suite) :: TestDataFormatterLibccIterator.py (Linux
>> rad
>> >> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
>> >> >> x86_64 x86_64)
>> >> >> FAIL: LLDB (suite) :: TestDataFormatterLibccMap.py (Linux rad
>> >> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
>> >> >> x86_64 x86_64)
>> >> >> FAIL: LLDB (suite) :: TestDataFormatterLibccMultiMap.py (Linux
>> rad
>> >> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
>> >> >> x86_64 x86_64)
>> >> >> FAIL: LLDB (suite) :: TestDataFormatterLibcxxMultiSet.py (Linux
>> rad
>> >> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
>> >> >> x86_64 x86_64)
>> >> >> FAIL: LLDB (suite) :: TestDataFo

Re: [lldb-dev] test results look typical?

2015-08-25 Thread Tamas Berghammer via lldb-dev
In theory the test should be skipped when you are using gcc (cc is an
alias for it), but we detect the type of the compiler based on the
executable name, and in the case of cc we don't recognize that it is gcc,
so we don't skip the test.
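
To make the failure mode concrete, here is a minimal sketch of a
name-based check like the one described above, together with the
symlink-resolving variant suggested later in the thread; the function
names are illustrative and this is not the actual dotest.py code.

import os

def compiler_family(cc_path):
    """Guess the compiler family from the executable name alone.

    This mirrors the problem described above: "/usr/bin/cc" contains
    neither "gcc" nor "clang", so a purely name-based check reports it as
    unknown even though on a typical Ubuntu install it ultimately points
    at gcc.
    """
    name = os.path.basename(cc_path)
    if "clang" in name:
        return "clang"
    if "gcc" in name or "g++" in name:
        return "gcc"
    return "unknown"

def compiler_family_resolved(cc_path):
    """The same check after fully resolving symlinks first."""
    return compiler_family(os.path.realpath(cc_path))

if __name__ == "__main__":
    path = "/usr/bin/cc"
    print(path, "->", compiler_family(path), "vs", compiler_family_resolved(path))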

On Tue, Aug 25, 2015 at 5:45 PM Chaoren Lin via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> You're using CC="/usr/bin/cc". It needs to be clang for USE_LIBCPP to do
> anything. :/
>
> On Tue, Aug 25, 2015 at 9:20 AM, Todd Fiala  wrote:
>
>> Here are a couple of the failures that came up (the log output from the
>> full dosep.py run).
>>
>> Let me know if that is not sufficient!
>>
>> On Tue, Aug 25, 2015 at 9:14 AM, Pavel Labath  wrote:
>>
>>> There's no need to do anything fancy (yet :) ). For initial diagnosis
>>> the output of `./dotest.py $your_usual_options -p SomeLibcxxTest.py
>>> -t` should suffice.
>>>
>>> pl
>>>
>>> On 25 August 2015 at 16:45, Todd Fiala  wrote:
>>> > Thanks, Pavel!  I'll dig that up and get back.
>>> >
>>> > On Tue, Aug 25, 2015 at 8:30 AM, Pavel Labath 
>>> wrote:
>>> >>
>>> >> There is no separate option, it should just work. :)
>>> >>
>>> >> I'm betting you are still missing some package there (we should
>>> >> document the prerequisites better). Could you send the error message
>>> >> you are getting so we can have a look.
>>> >>
>>> >> cheers,
>>> >> pl
>>> >>
>>> >>
>>> >> On 25 August 2015 at 16:20, Todd Fiala via lldb-dev
>>> >>  wrote:
>>> >> >
>>> >> >
>>> >> > On Mon, Aug 24, 2015 at 4:11 PM, Todd Fiala 
>>> >> > wrote:
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> On Mon, Aug 24, 2015 at 4:01 PM, Chaoren Lin 
>>> >> >> wrote:
>>> >> >>>
>>> >> >>> The TestDataFormatterLibcc* tests require libc++-dev:
>>> >> >>>
>>> >> >>> $ sudo apt-get install libc++-dev
>>> >> >>>
>>> >> >>
>>> >> >> Ah okay, so we are working with libc++ on Ubuntu, that's good to
>>> hear.
>>> >> >> Pre-14.04 I gave up on it.
>>> >> >>
>>> >> >> Will cmake automatically choose libc++ if it is present?  Or do I
>>> need
>>> >> >> to
>>> >> >> pass something to cmake to use libc++?
>>> >> >
>>> >> >
>>> >> > Hmm it appears I need to do more than just install libc++-dev.  I
>>> did a
>>> >> > clean build with that installed, then ran the tests, and I still
>>> have
>>> >> > the
>>> >> > Libcxc/Libcxx tests failing.  Is there some flag expected, either to
>>> >> > pass
>>> >> > along for the compile options to dotest.py to override/specify
>>> which c++
>>> >> > lib
>>> >> > it is using?
>>> >> >
>>> >> >>
>>> >> >>
>>> >> >> Thanks, Chaoren!
>>> >> >>
>>> >> >> -Todd
>>> >> >>
>>> >> >>>
>>> >> >>> On Mon, Aug 24, 2015 at 3:42 PM, Todd Fiala via lldb-dev
>>> >> >>>  wrote:
>>> >> 
>>> >> 
>>> >>  On Mon, Aug 24, 2015 at 3:39 PM, Zachary Turner <
>>> ztur...@google.com>
>>> >>  wrote:
>>> >> >
>>> >> > Can't comment on the failures for Linux, but I don't think we
>>> have a
>>> >> > good handle on the unexpected successes.  I only added that
>>> >> > information to
>>> >> > the output about a week ago, before that unexpected successes
>>> were
>>> >> > actually
>>> >> > going unnoticed.
>>> >> 
>>> >> 
>>> >>  Okay, thanks Zachary.   A while back we had some flapping tests
>>> that
>>> >>  would oscillate between unexpected success and failure on Linux.
>>> >>  Some of
>>> >>  those might still be in that state but maybe (!) are fixed.
>>> >> 
>>> >>  Anyone on the Linux end who happens to know if the fails in
>>> >>  particular
>>> >>  look normal, that'd be good to know.
>>> >> 
>>> >>  Thanks!
>>> >> 
>>> >> >
>>> >> >
>>> >> > It's likely that someone could just go in there and remove the
>>> XFAIL
>>> >> > from those tests.
>>> >> >
>>> >> > On Mon, Aug 24, 2015 at 3:37 PM Todd Fiala via lldb-dev
>>> >> >  wrote:
>>> >> >>
>>> >> >> Hi all,
>>> >> >>
>>> >> >> I'm just trying to get a handle on current lldb test failures
>>> >> >> across
>>> >> >> different platforms.
>>> >> >>
>>> >> >> On Linux on non-virtualized hardware, I currently see the
>>> failures
>>> >> >> below on Ubuntu 14.04.2 using a setup like this:
>>> >> >> * stock linker (ld.bfd),
>>> >> >> * g++ 4.9.2
>>> >> >> * cmake
>>> >> >> * ninja
>>> >> >> * libstdc++
>>> >> >>
>>> >> >> ninja check-lldb output:
>>> >> >>
>>> >> >> Ran 394 test suites (15 failed) (3.807107%)
>>> >> >> Ran 474 test cases (17 failed) (3.586498%)
>>> >> >> Failing Tests (15)
>>> >> >> FAIL: LLDB (suite) :: TestCPPThis.py (Linux rad
>>> 3.13.0-57-generic
>>> >> >> #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
>>> >> >> FAIL: LLDB (suite) :: TestDataFormatterLibccIterator.py (Linux
>>> rad
>>> >> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
>>> >> >> x86_64 x86_64)
>>> >> >> FAIL: LLDB (suite) :: TestDataFormatterLibccMap.py (Linux rad
>>> >> >> 3.13.0-57-generic #95-Ubunt

Re: [lldb-dev] test results look typical?

2015-08-25 Thread Todd Fiala via lldb-dev
Okay.

So the culprit then is that I'm using:
cmake -GNinja ../llvm

with one extra flag for build type.  And cmake is then just choosing
/usr/bin/cc.

We could improve this by having the compiler symbolic links fully resolved:
/usr/bin/cc -> /etc/alternatives/cc -> /usr/bin/gcc, which would have then
caught that it doesn't support libc++.

Couldn't we use gcc with libc++?  (i.e. is it sufficient to assume we don't
have libc++ if we're using gcc?)  I have never tried that combo but I don't
know that it is impossible.  (After all, I just added libc++-dev to the
system, which presumably I can link against).

On Tue, Aug 25, 2015 at 9:48 AM, Tamas Berghammer 
wrote:

> In theory the test should be skipped when you are using gcc (cc is an
> alias for it) but we detect the type of the compiler based on the
> executable name and in case of cc we don't recognize that it is a gcc, so
> we don't skip the test.
>
> On Tue, Aug 25, 2015 at 5:45 PM Chaoren Lin via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> You're using CC="/usr/bin/cc". It needs to be clang for USE_LIBCPP to do
>> anything. :/
>>
>> On Tue, Aug 25, 2015 at 9:20 AM, Todd Fiala  wrote:
>>
>>> Here are a couple of the failures that came up (the log output from the
>>> full dosep.py run).
>>>
>>> Let me know if that is not sufficient!
>>>
>>> On Tue, Aug 25, 2015 at 9:14 AM, Pavel Labath  wrote:
>>>
 There's no need to do anything fancy (yet :) ). For initial diagnosis
 the output of `./dotest.py $your_usual_options -p SomeLibcxxTest.py
 -t` should suffice.

 pl

 On 25 August 2015 at 16:45, Todd Fiala  wrote:
 > Thanks, Pavel!  I'll dig that up and get back.
 >
 > On Tue, Aug 25, 2015 at 8:30 AM, Pavel Labath 
 wrote:
 >>
 >> There is no separate option, it should just work. :)
 >>
 >> I'm betting you are still missing some package there (we should
 >> document the prerequisites better). Could you send the error message
 >> you are getting so we can have a look.
 >>
 >> cheers,
 >> pl
 >>
 >>
 >> On 25 August 2015 at 16:20, Todd Fiala via lldb-dev
 >>  wrote:
 >> >
 >> >
 >> > On Mon, Aug 24, 2015 at 4:11 PM, Todd Fiala 
 >> > wrote:
 >> >>
 >> >>
 >> >>
 >> >> On Mon, Aug 24, 2015 at 4:01 PM, Chaoren Lin >>> >
 >> >> wrote:
 >> >>>
 >> >>> The TestDataFormatterLibcc* tests require libc++-dev:
 >> >>>
 >> >>> $ sudo apt-get install libc++-dev
 >> >>>
 >> >>
 >> >> Ah okay, so we are working with libc++ on Ubuntu, that's good to
 hear.
 >> >> Pre-14.04 I gave up on it.
 >> >>
 >> >> Will cmake automatically choose libc++ if it is present?  Or do I
 need
 >> >> to
 >> >> pass something to cmake to use libc++?
 >> >
 >> >
 >> > Hmm it appears I need to do more than just install libc++-dev.  I
 did a
 >> > clean build with that installed, then ran the tests, and I still
 have
 >> > the
 >> > Libcxc/Libcxx tests failing.  Is there some flag expected, either
 to
 >> > pass
 >> > along for the compile options to dotest.py to override/specify
 which c++
 >> > lib
 >> > it is using?
 >> >
 >> >>
 >> >>
 >> >> Thanks, Chaoren!
 >> >>
 >> >> -Todd
 >> >>
 >> >>>
 >> >>> On Mon, Aug 24, 2015 at 3:42 PM, Todd Fiala via lldb-dev
 >> >>>  wrote:
 >> 
 >> 
 >>  On Mon, Aug 24, 2015 at 3:39 PM, Zachary Turner <
 ztur...@google.com>
 >>  wrote:
 >> >
 >> > Can't comment on the failures for Linux, but I don't think we
 have a
 >> > good handle on the unexpected successes.  I only added that
 >> > information to
 >> > the output about a week ago, before that unexpected successes
 were
 >> > actually
 >> > going unnoticed.
 >> 
 >> 
 >>  Okay, thanks Zachary.   A while back we had some flapping tests
 that
 >>  would oscillate between unexpected success and failure on Linux.
 >>  Some of
 >>  those might still be in that state but maybe (!) are fixed.
 >> 
 >>  Anyone on the Linux end who happens to know if the fails in
 >>  particular
 >>  look normal, that'd be good to know.
 >> 
 >>  Thanks!
 >> 
 >> >
 >> >
 >> > It's likely that someone could just go in there and remove the
 XFAIL
 >> > from those tests.
 >> >
 >> > On Mon, Aug 24, 2015 at 3:37 PM Todd Fiala via lldb-dev
 >> >  wrote:
 >> >>
 >> >> Hi all,
 >> >>
 >> >> I'm just trying to get a handle on current lldb test failures
 >> >> across
 >> >> different platforms.
 >> >>
 >> >> On Linux on non-virtualized hardware, I currently see the
 failures
 >> >> below on Ubuntu 14.04.2 using a setup 

[lldb-dev] [Bug 24575] New: ERRORs running lldb tests on OSX via dotest.py after TestDisassemble_VST1_64

2015-08-25 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=24575

Bug ID: 24575
   Summary: ERRORs running lldb tests on OSX via dotest.py after
TestDisassemble_VST1_64
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: MacOS X
Status: NEW
  Severity: normal
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: dawn+bugzi...@burble.org
CC: llvm-b...@lists.llvm.org
Classification: Unclassified

Running lldb tests on OSX via dotest.py gets ERRORs on all tests following
test_disassemble_invalid_vst_1_64_raw_data
(TestDisassemble_VST1_64.Disassemble_VST1_64)
which creates a target via
target = self.dbg.CreateTargetWithFileAndTargetTriple ("", "thumbv7")

This started after commit svn r237053 (see comments in
http://reviews.llvm.org/rL237053).  Before this commit, all tests run:
Collected 1324 tests

1: test_sb_api_directory (TestPublicAPIHeaders.SBDirCheckerCase)
[...]
850: test_disassemble_invalid_vst_1_64_raw_data
(TestDisassemble_VST1_64.Disassemble_VST1_64)
Test disassembling invalid vst1.64 raw bytes with the API. ... ok
851: test_disassemble_raw_data
(TestDisassembleRawData.DisassembleRawDataTestCase)
Test disassembling raw bytes with the API. ... ok
[...]
--
Ran 1324 tests in 4017.285s

FAILED (failures=2, errors=3, skipped=118, expected failures=61, unexpected
successes=28)

After this commit, all tests following
test_disassemble_invalid_vst_1_64_raw_data get ERRORs:
Collected 1324 tests

1: test_sb_api_directory (TestPublicAPIHeaders.SBDirCheckerCase)
Test the SB API directory and make sure there's no unwanted stuff. ...
skipped 'skip because LLDB.h header not found'
[...]
850: test_disassemble_invalid_vst_1_64_raw_data
(TestDisassemble_VST1_64.Disassemble_VST1_64)
Test disassembling invalid vst1.64 raw bytes with the API. ... ok
ERROR
ERROR
ERROR
[...]
-
Ran 850 tests in 3052.121s

FAILED (failures=2, errors=86, skipped=40, expected failures=57, unexpected
successes=25)

To reproduce in a narrowed case, build lldb on OSX (cmake and ninja were used
here), set LLDB_EXEC to point to lldb, cd to lldb/test, and run:
./dotest.py -v -t --executable $LLDB_EXEC python_api/disassemble-raw-data
You'll see the ERROR on the second test:
ERROR

==
ERROR: setUpClass (TestDisassembleRawData.DisassembleRawDataTestCase)
--
Traceback (most recent call last):
File "/Users/dawn/llvm_delphi/tools/lldb/test/lldbtest.py", line 1167, in
setUpClass
if platformIsDarwin():
File "/Users/dawn/llvm_delphi/tools/lldb/test/lldbtest.py", line 916, in
platformIsDarwin
return getPlatform() in getDarwinOSTriples()
File "/Users/dawn/llvm_delphi/tools/lldb/test/lldbtest.py", line 895, in
getPlatform
platform = lldb.DBG.GetSelectedPlatform().GetTriple().split('-')[2]
AttributeError: 'NoneType' object has no attribute 'split'

--
Ran 1 test in 0.005s

RESULT: FAILED (1 passes, 0 failures, 1 errors, 0 skipped, 0 expected
failures, 0 unexpected successes)

Running each test separately both pass:
./dotest.py -v -t --executable $LLDB_EXEC -f
Disassemble_VST1_64.test_disassemble_invalid_vst_1_64_raw_data
RESULT: PASSED (1 passes, 0 failures, 0 errors, 0 skipped, 0 expected
failures, 0 unexpected successes)
./dotest.py -v -t --executable $LLDB_EXEC -f
DisassembleRawDataTestCase.test_disassemble_raw_data
RESULT: PASSED (1 passes, 0 failures, 0 errors, 0 skipped, 0 expected
failures, 0 unexpected successes)
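
(For reference, a guarded getPlatform() along the lines of the sketch below
would at least avoid the AttributeError when no triple is available. It is
only a sketch, not a tested patch, and it assumes the harness's existing
"import lldb".)

    def getPlatform():
        """Platform name from the selected platform's triple, or "" if unknown."""
        platform_obj = lldb.DBG.GetSelectedPlatform()
        triple = platform_obj.GetTriple() if platform_obj.IsValid() else None
        if not triple:
            return ""  # no usable triple yet; callers must tolerate this
        return triple.split('-')[2]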

-- 
You are receiving this mail because:
You are the assignee for the bug.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] test results look typical?

2015-08-25 Thread Todd Fiala via lldb-dev
So specifying CC=/usr/bin/gcc CXX=/usr/bin/g++ cmake -GNinja ...

did the trick for getting rid of the libc++ issues.  I think I may try to
see if we can get those tests to make a run-time check to see if the
inferior is linked against libc++, and if not, to skip it.  We can have
lldb do it by looking at the image list.  Sound reasonable?  That seems
more fool-proof than guessing based on the compiler.
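
Roughly what I have in mind is something like the sketch below, written
against the public SB API (untested; the real skip decision would live in the
test harness):

    def inferior_uses_libcxx(target):
        """Return True if any module loaded in 'target' looks like libc++."""
        for i in range(target.GetNumModules()):
            name = target.GetModuleAtIndex(i).GetFileSpec().GetFilename() or ""
            if name.startswith("libc++"):  # e.g. libc++.so.1, libc++abi.so.1
                return True
        return False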

An alternative I considered and probably also might be valid to do anyway
for cases where we look at the compiler binary is to fully resolve symbolic
links before making decisions based on the binary.

Thoughts?

Separately, with the tests correctly seeing gcc now, I am down to the
following errors:

Ran 394 test suites (5 failed) (1.269036%)
Ran 451 test cases (5 failed) (1.108647%)
Failing Tests (5)
FAIL: LLDB (suite) :: TestExitDuringStep.py (Linux lldb 3.19.0-26-generic
#28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestNumThreads.py (Linux lldb 3.19.0-26-generic
#28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestRegisterVariables.py (Linux lldb
3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)
FAIL: LLDB (suite) :: TestStepOverWatchpoint.py (Linux lldb
3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)
FAIL: LLDB (suite) :: TestThreadExit.py (Linux lldb 3.19.0-26-generic
#28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)

Unexpected Successes (10)
UNEXPECTED SUCCESS: LLDB (suite) :: TestBatchMode.py (Linux lldb
3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestEvents.py (Linux lldb
3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestFdLeak.py (Linux lldb
3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestInferiorAssert.py (Linux lldb
3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestMiGdbSetShow.py (Linux lldb
3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestMiInterpreterExec.py (Linux lldb
3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestMiSyntax.py (Linux lldb
3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestRaise.py (Linux lldb
3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestStubSetSID.py (Linux lldb
3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestWatchedVarHitWhenInScope.py (Linux
lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)


Some of those failures look like old friends that were failing a year ago.
Can anybody tell me anything about those failures on Linux?  Are they being
looked at?  Any hunches as to what is wrong?

Thanks!

-Todd

On Tue, Aug 25, 2015 at 10:04 AM, Todd Fiala  wrote:

> Okay.
>
> So the culprit then is that I'm using:
> cmake -GNinja ../llvm
>
> with one extra flag for build type.  And cmake is then just choosing
> /usr/bin/cc.
>
> We could improve this by having the compiler symbolic links fully resolved:
> /usr/bin/cc -> /etc/alternatives/cc -> /usr/bin/gcc, which would have then
> caught that it doesn't support libc++.
>
> Couldn't we use gcc with libc++?  (i.e. is it sufficient to assume we
> don't have libc++ if we're using gcc?)  I have never tried that combo but I
> don't know that it is impossible.  (After all, I just added libc++-dev to
> the system, which presumably I can link against).
>
> On Tue, Aug 25, 2015 at 9:48 AM, Tamas Berghammer 
> wrote:
>
>> In theory the test should be skipped when you are using gcc (cc is an
>> alias for it) but we detect the type of the compiler based on the
>> executable name and in case of cc we don't recognize that it is a gcc, so
>> we don't skip the test.
>>
>> On Tue, Aug 25, 2015 at 5:45 PM Chaoren Lin via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>> You're using CC="/usr/bin/cc". It needs to be clang for USE_LIBCPP to do
>>> anything. :/
>>>
>>> On Tue, Aug 25, 2015 at 9:20 AM, Todd Fiala 
>>> wrote:
>>>
 Here are a couple of the failures that came up (the log output from the
 full dosep.py run).

 Let me know if that is not sufficient!

 On Tue, Aug 25, 2015 at 9:14 AM, Pavel Labath 
 wrote:

> There's no need to do anything fancy (yet :) ). For initial diagnosis
> the output of `./dotest.py $your_usual_options -p SomeLibcxxTest.py
> -t` should suffice.
>
> pl
>
> On 25 August 2015 at

Re: [lldb-dev] test results look typical?

2015-08-25 Thread Todd Fiala via lldb-dev
On Tue, Aug 25, 2015 at 8:45 AM, Tamas Berghammer 
wrote:

> Hi Todd,
>
> I am using a clang-3.5-built release LLDB to debug another clang-3.5-built
> debug LLDB on Linux x86_64 and it works pretty well for me (works better
> than using GDB). The main issue I am hitting is around expression
> evaluation, where I can't execute very small functions in std:: objects, but
> I can work around it by accessing the internal data representation
> (primarily for shared_ptr, unique_ptr and vector). We are still using gcc
> for compiling lldb-server for Android because the Android clang has some
> issues (atomics not supported), but I don't know of anybody testing a
> gcc-built LLDB on Linux.
>
> Tamas
>
>
Okay, thanks for the details on your setup, Tamas!

-Todd


>
> On Tue, Aug 25, 2015 at 4:31 PM Pavel Labath via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> There is no separate option, it should just work. :)
>>
>> I'm betting you are still missing some package there (we should
>> document the prerequisites better). Could you send the error message
>> you are getting so we can have a look.
>>
>> cheers,
>> pl
>>
>>
>> On 25 August 2015 at 16:20, Todd Fiala via lldb-dev
>>  wrote:
>> >
>> >
>> > On Mon, Aug 24, 2015 at 4:11 PM, Todd Fiala 
>> wrote:
>> >>
>> >>
>> >>
>> >> On Mon, Aug 24, 2015 at 4:01 PM, Chaoren Lin 
>> wrote:
>> >>>
>> >>> The TestDataFormatterLibcc* tests require libc++-dev:
>> >>>
>> >>> $ sudo apt-get install libc++-dev
>> >>>
>> >>
>> >> Ah okay, so we are working with libc++ on Ubuntu, that's good to hear.
>> >> Pre-14.04 I gave up on it.
>> >>
>> >> Will cmake automatically choose libc++ if it is present?  Or do I need
>> to
>> >> pass something to cmake to use libc++?
>> >
>> >
>> > Hmm it appears I need to do more than just install libc++-dev.  I did a
>> > clean build with that installed, then ran the tests, and I still have
>> the
>> > Libcxc/Libcxx tests failing.  Is there some flag expected, either to
>> pass
>> > along for the compile options to dotest.py to override/specify which
>> c++ lib
>> > it is using?
>> >
>> >>
>> >>
>> >> Thanks, Chaoren!
>> >>
>> >> -Todd
>> >>
>> >>>
>> >>> On Mon, Aug 24, 2015 at 3:42 PM, Todd Fiala via lldb-dev
>> >>>  wrote:
>> 
>> 
>>  On Mon, Aug 24, 2015 at 3:39 PM, Zachary Turner 
>>  wrote:
>> >
>> > Can't comment on the failures for Linux, but I don't think we have a
>> > good handle on the unexpected successes.  I only added that
>> information to
>> > the output about a week ago, before that unexpected successes were
>> actually
>> > going unnoticed.
>> 
>> 
>>  Okay, thanks Zachary.   A while back we had some flapping tests that
>>  would oscillate between unexpected success and failure on Linux.
>> Some of
>>  those might still be in that state but maybe (!) are fixed.
>> 
>>  Anyone on the Linux end who happens to know if the fails in
>> particular
>>  look normal, that'd be good to know.
>> 
>>  Thanks!
>> 
>> >
>> >
>> > It's likely that someone could just go in there and remove the XFAIL
>> > from those tests.
>> >
>> > On Mon, Aug 24, 2015 at 3:37 PM Todd Fiala via lldb-dev
>> >  wrote:
>> >>
>> >> Hi all,
>> >>
>> >> I'm just trying to get a handle on current lldb test failures
>> across
>> >> different platforms.
>> >>
>> >> On Linux on non-virtualized hardware, I currently see the failures
>> >> below on Ubuntu 14.04.2 using a setup like this:
>> >> * stock linker (ld.bfd),
>> >> * g++ 4.9.2
>> >> * cmake
>> >> * ninja
>> >> * libstdc++
>> >>
>> >> ninja check-lldb output:
>> >>
>> >> Ran 394 test suites (15 failed) (3.807107%)
>> >> Ran 474 test cases (17 failed) (3.586498%)
>> >> Failing Tests (15)
>> >> FAIL: LLDB (suite) :: TestCPPThis.py (Linux rad 3.13.0-57-generic
>> >> #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
>> >> FAIL: LLDB (suite) :: TestDataFormatterLibccIterator.py (Linux rad
>> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
>> x86_64 x86_64)
>> >> FAIL: LLDB (suite) :: TestDataFormatterLibccMap.py (Linux rad
>> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
>> x86_64 x86_64)
>> >> FAIL: LLDB (suite) :: TestDataFormatterLibccMultiMap.py (Linux rad
>> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
>> x86_64 x86_64)
>> >> FAIL: LLDB (suite) :: TestDataFormatterLibcxxMultiSet.py (Linux rad
>> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
>> x86_64 x86_64)
>> >> FAIL: LLDB (suite) :: TestDataFormatterLibcxxSet.py (Linux rad
>> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
>> x86_64 x86_64)
>> >> FAIL: LLDB (suite) :: TestDataFormatterLibcxxString.py (Linux rad
>> >> 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015
>> x86_64 x86_64)
>> >> FAIL: LLDB (suite) :: T

Re: [lldb-dev] [3.7 Release] RC3 has been tagged, let's wrap this up

2015-08-25 Thread Dimitry Andric via lldb-dev
Hans,

Note that the patches I posted solved the problems, at least for me. :)

-Dimitry

> On 25 Aug 2015, at 01:40, Hans Wennborg  wrote:
> 
> It seems this is a cmake vs autoconf thing. With cmake, it builds
> correctly, but with autoconf I get the same error as you.
> 
> I probably shouldn't have made this change while we were in the
> release process as it was potentially risky :-/ I've reverted it now,
> so hopefully the next build should be problem free.
> 
> Thanks,
> Hans
> 
> On Fri, Aug 21, 2015 at 5:09 AM, Dimitry Andric  wrote:
>> Strangely, the clang-tools-extra stuff does build if I manually check it out 
>> like so (without any symlinks):
>> 
>> .<-- 
>> https://llvm.org/svn/llvm-project/llvm/branches/release_37
>> tools/clang  <-- 
>> https://llvm.org/svn/llvm-project/cfe/branches/release_37
>> tools/clang/tools/extra  <-- 
>> https://llvm.org/svn/llvm-project/clang-tools-extra/branches/release_37
>> 
>> I'll investigate, because it would be nice to have those tools.
>> 
>> -Dimitry
>> 
>>> On 21 Aug 2015, at 13:42, Nikola Smiljanic  wrote:
>>> 
>>> Hi Dmitry, if I understood Hans clang-extra wasn't part of the build prior 
>>> to rc3. Just delete it and run script with --no-checkout.
>>> 
>>> On Fri, Aug 21, 2015 at 7:15 PM, Dimitry Andric  wrote:
>>> Hm, it does not seem to compile at all here?  The build ends with:
>>> 
>>> In file included from 
>>> /home/dim/llvm-3.7.0/rc3/llvm.src/tools/clang/tools/extra/clang-apply-replacements/lib/Tooling/ApplyReplacements.cpp:17:
>>> /home/dim/llvm-3.7.0/rc3/llvm.src/tools/clang/tools/extra/clang-apply-replacements/lib/Tooling/../../include/clang-apply-replacements/Tooling/ApplyReplacements.h:19:10:
>>>  fatal error: 'clang/Tooling/Refactoring.h' file not found
>>> #include "clang/Tooling/Refactoring.h"
>>> ^
>>> 1 error generated.
>>> 
>>> Any idea?  I had no problems at all with -rc2.
>>> 
>>> -Dimitry
>>> 
 On 21 Aug 2015, at 02:51, Hans Wennborg  wrote:
 
 Hello everyone,
 
 3.7-rc3 has just been tagged. Testers, please test, build binaries,
 upload to the sftp and report results to this thread.
 
 Again, a lot of patches got merged between rc2 and rc3, but hopefully
 nothing that should upset things.
 
 One thing that did change is that the release script now correctly
 symlinks clang-tools-extra into the build. If this causes problems on
 your platform, please just remove it.
 
 This is a release candidate in the real sense: at this point I have
 zero release blockers on my radar. I will now only accept fixes for
 critical regressions, and if nothing comes up, rc3 will be promoted to
 3.7.0-final.
 
 Documentation and release note patches are still welcome all the way
 up until the final tag goes in.
 
 Issues that were on my radar, but I don't consider blocking:
 
 - Sanitizer test failures on various platforms, e.g. PR24222. We never
 ran these tests in previous releases, so it's not a regression. It
 would be great if the sanitizer folks could look into the test
 failures, but it's not blocking 3.7.
 
 - PR24273: "[ARM] Libc++abi built in-tree with libunwind fails in
 __cxa_allocate_exception", Renato will exclude libc++ from his build
 for now.
 
 - Lack of key functions in some Instruction classes causing build
 failures without -fno-rtti
 (http://lists.llvm.org/pipermail/llvm-dev/2015-August/089010.html). No
 patches have been forthcoming, so this will not get fixed for 3.7. At
 least we correctly report -fno-rtti in llvm-config built with CMake
 now.
 
 - r244221: "[SPARC] Don't compare arch name as a string, use the enum
 instead", owner is unresponsive.
 
 - "[lldb] r245020 - [MIPS]Handle floating point and aggregate return
 types in SysV-mips [32 bit] ABI", owner is unresponsive.
 
 
 Cheers,
 Hans
>>> 
>>> 
>> 



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] [Bug 24579] New: settings.target-env-vars doesn't work correctly on Windows

2015-08-25 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=24579

Bug ID: 24579
   Summary: settings.target-env-vars doesn't work correctly on
Windows
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: Windows NT
Status: NEW
  Severity: normal
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: ztur...@google.com
CC: llvm-b...@lists.llvm.org
Blocks: 21766
Classification: Unclassified

I fixed this some time ago, but it seems to have regressed.  This should be an
easy fix, so just triaging this for later.  This failure is caught by

TestSettings.SettingsCommandTestCase.test_run_args_and_env_vars_with_dwarf

which fails on windows and is currently XFAIL'ed until we fix this.

-- 
You are receiving this mail because:
You are the assignee for the bug.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [3.7 Release] RC3 has been tagged, let's wrap this up

2015-08-25 Thread Hans Wennborg via lldb-dev
Thanks, we should probably do something like that after this release,
but for now I think it's best to revert to safety.

On Tue, Aug 25, 2015 at 2:44 PM, Dimitry Andric  wrote:
> Hans,
>
> Note that the patches I posted solved the problems, at least for me. :)
>
> -Dimitry
>
>> On 25 Aug 2015, at 01:40, Hans Wennborg  wrote:
>>
>> It seems this is a cmake vs autoconf thing. With cmake, it builds
>> correctly, but with autoconf I get the same error as you.
>>
>> I probably shouldn't have made this change while we were in the
>> release process as it was potentially risky :-/ I've reverted it now,
>> so hopefully the next build should be problem free.
>>
>> Thanks,
>> Hans
>>
>> On Fri, Aug 21, 2015 at 5:09 AM, Dimitry Andric  wrote:
>>> Strangely, the clang-tools-extra stuff does build if I manually check it 
>>> out like so (without any symlinks):
>>>
>>> .<-- 
>>> https://llvm.org/svn/llvm-project/llvm/branches/release_37
>>> tools/clang  <-- 
>>> https://llvm.org/svn/llvm-project/cfe/branches/release_37
>>> tools/clang/tools/extra  <-- 
>>> https://llvm.org/svn/llvm-project/clang-tools-extra/branches/release_37
>>>
>>> I'll investigate, because it would be nice to have those tools.
>>>
>>> -Dimitry
>>>
 On 21 Aug 2015, at 13:42, Nikola Smiljanic  wrote:

 Hi Dmitry, if I understood Hans clang-extra wasn't part of the build prior 
 to rc3. Just delete it and run script with --no-checkout.

 On Fri, Aug 21, 2015 at 7:15 PM, Dimitry Andric  wrote:
 Hm, it does not seem to compile at all here?  The build ends with:

 In file included from 
 /home/dim/llvm-3.7.0/rc3/llvm.src/tools/clang/tools/extra/clang-apply-replacements/lib/Tooling/ApplyReplacements.cpp:17:
 /home/dim/llvm-3.7.0/rc3/llvm.src/tools/clang/tools/extra/clang-apply-replacements/lib/Tooling/../../include/clang-apply-replacements/Tooling/ApplyReplacements.h:19:10:
  fatal error: 'clang/Tooling/Refactoring.h' file not found
 #include "clang/Tooling/Refactoring.h"
 ^
 1 error generated.

 Any idea?  I had no problems at all with -rc2.

 -Dimitry

> On 21 Aug 2015, at 02:51, Hans Wennborg  wrote:
>
> Hello everyone,
>
> 3.7-rc3 has just been tagged. Testers, please test, build binaries,
> upload to the sftp and report results to this thread.
>
> Again, a lot of patches got merged between rc2 and rc3, but hopefully
> nothing that should upset things.
>
> One thing that did change is that the release script now correctly
> symlinks clang-tools-extra into the build. If this causes problems on
> your platform, please just remove it.
>
> This is a release candidate in the real sense: at this point I have
> zero release blockers on my radar. I will now only accept fixes for
> critical regressions, and if nothing comes up, rc3 will be promoted to
> 3.7.0-final.
>
> Documentation and release note patches are still welcome all the way
> up until the final tag goes in.
>
> Issues that were on my radar, but I don't consider blocking:
>
> - Sanitizer test failures on various platforms, e.g. PR24222. We never
> ran these tests in previous releases, so it's not a regression. It
> would be great if the sanitizer folks could look into the test
> failures, but it's not blocking 3.7.
>
> - PR24273: "[ARM] Libc++abi built in-tree with libunwind fails in
> __cxa_allocate_exception", Renato will exclude libc++ from his build
> for now.
>
> - Lack of key functions in some Instruction classes causing build
> failures without -fno-rtti
> (http://lists.llvm.org/pipermail/llvm-dev/2015-August/089010.html). No
> patches have been forthcoming, so this will not get fixed for 3.7. At
> least we correctly report -fno-rtti in llvm-config built with CMake
> now.
>
> - r244221: "[SPARC] Don't compare arch name as a string, use the enum
> instead", owner is unresponsive.
>
> - "[lldb] r245020 - [MIPS]Handle floating point and aggregate return
> types in SysV-mips [32 bit] ABI", owner is unresponsive.
>
>
> Cheers,
> Hans


>>>
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] test results look typical?

2015-08-25 Thread Todd Fiala via lldb-dev
One more data point:

Building/testing on Ubuntu 14.04.3 built with clang-3.6 and the ld.gold
linker yielded the following test results, bringing me down to a single
failure (and was 1.6x faster than a Debug build with gcc-4.9 and ld.bfd, 12
GB RAM and 6 cores allocated):

Failing Tests (1)
FAIL: LLDB (suite) :: TestStepOverWatchpoint.py (Linux lldb
3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)

Unexpected Successes (12)
UNEXPECTED SUCCESS: LLDB (suite) :: TestBatchMode.py (Linux lldb
3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestConstVariables.py (Linux lldb
3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestEvents.py (Linux lldb
3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestExitDuringStep.py (Linux lldb
3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestFdLeak.py (Linux lldb
3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestInferiorAssert.py (Linux lldb
3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestMiGdbSetShow.py (Linux lldb
3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestMiInterpreterExec.py (Linux lldb
3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestMiSyntax.py (Linux lldb
3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestRaise.py (Linux lldb
3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestStubSetSID.py (Linux lldb
3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestWatchedVarHitWhenInScope.py (Linux
lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
x86_64 x86_64)


I'm not yet sure if that's stable, but it's what I'm seeing on my VM.

-Todd

On Tue, Aug 25, 2015 at 1:56 PM, Todd Fiala  wrote:

> So specifying CC=/usr/bin/gcc CXX=/usr/bin/g++ cmake -GNinja ...
>
> did the trick for getting rid of the libc++ issues.  I think I may try to
> see if we can get those tests to make a run-time check to see if the
> inferior is linked against libc++, and if not, to skip it.  We can have
> lldb do it by looking at the image list.  Sound reasonable?  That seems
> more fool-proof than guessing based on the compiler.
>
> An alternative I considered and probably also might be valid to do anyway
> for cases where we look at the compiler binary is to fully resolve symbolic
> links before making decisions based on the binary.
>
> Thoughts?
>
> Separately, with the tests correctly seeing gcc now, I am down to the
> following errors:
>
> Ran 394 test suites (5 failed) (1.269036%)
> Ran 451 test cases (5 failed) (1.108647%)
> Failing Tests (5)
> FAIL: LLDB (suite) :: TestExitDuringStep.py (Linux lldb 3.19.0-26-generic
> #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
> FAIL: LLDB (suite) :: TestNumThreads.py (Linux lldb 3.19.0-26-generic
> #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
> FAIL: LLDB (suite) :: TestRegisterVariables.py (Linux lldb
> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
> x86_64 x86_64)
> FAIL: LLDB (suite) :: TestStepOverWatchpoint.py (Linux lldb
> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
> x86_64 x86_64)
> FAIL: LLDB (suite) :: TestThreadExit.py (Linux lldb 3.19.0-26-generic
> #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
>
> Unexpected Successes (10)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestBatchMode.py (Linux lldb
> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
> x86_64 x86_64)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestEvents.py (Linux lldb
> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
> x86_64 x86_64)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestFdLeak.py (Linux lldb
> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
> x86_64 x86_64)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestInferiorAssert.py (Linux lldb
> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
> x86_64 x86_64)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiGdbSetShow.py (Linux lldb
> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
> x86_64 x86_64)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiInterpreterExec.py (Linux lldb
> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
> x86_64 x86_64)
> UNEXPECTED SUCCESS

Re: [lldb-dev] test results look typical?

2015-08-25 Thread via lldb-dev
On Tue, Aug 25, 2015 at 04:41:34PM +, Zachary Turner wrote:
> It would be great (and not too difficult) to add skip counts to dosep.  I
> modified dotest so it formats the result summary in a nice single string
> that you can regex match to get counts.  It's already matched in dosep, but
> we just aren't pulling out the skip counts.  So it would be very easy to
> add this.

I would like to see totals from all the dotest.py's RESULTs counts:  
RESULT: PASSED (4 passes, 0 failures, 0 errors, 0 skipped, 0 expected 
failures, 0 unexpected successes)
as well as the timeouts from dosep.

Of course, dosep needs to be fixed to count the test cases correctly first. :)
It seems to miss the results from every 4th test suite or so.

> On Tue, Aug 25, 2015 at 7:41 AM Todd Fiala via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> > For counting I will probably go back to my old method of parsing the
> > output of a serial dotest run, since IIRC I can get skip counts accurately
> > there as well.  (Or perhaps that should be added to dosep.py, it's been a
> > while since I last heavily modified that script).

You can get all the counts from running dosep by counting up the results
from each dotest run.  Collect the output via:

./dosep.py -s --options "-v --executable $BLDDIR/bin/lldb" 2>&1 | tee test_out.log

Then your totals will be:

export passes=`grep -E "^RESULT: " test_out.log | sed 's/(//' | awk '{count+=$3} END {print count}'` || true
export failures=`grep -E "^RESULT:" test_out.log | awk '{count+=$5} END {print count}'` || true
export errors=`grep -E "^RESULT:" test_out.log | awk '{count+=$7} END {print count}'` || true
export skips=`grep -E "^RESULT:" test_out.log | awk '{count+=$9} END {print count}'` || true
[...]
export total=`grep -E "^Ran [0-9]+ tests? in" test_out.log | awk '{count+=$2} END {print count}'`
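
If it is easier to extend, here is a rough Python equivalent of the same tally
(a sketch that assumes only the RESULT line format shown above):

    import re, sys

    RESULT_RE = re.compile(r"^RESULT: \w+ \((\d+) passes, (\d+) failures, "
                           r"(\d+) errors, (\d+) skipped, (\d+) expected "
                           r"failures, (\d+) unexpected successes\)")
    FIELDS = ("passes", "failures", "errors", "skipped",
              "expected failures", "unexpected successes")

    totals = dict.fromkeys(FIELDS, 0)
    with open(sys.argv[1]) as log:          # e.g. test_out.log
        for line in log:
            match = RESULT_RE.match(line)
            if match:
                for name, value in zip(FIELDS, match.groups()):
                    totals[name] += int(value)
    for name in FIELDS:
        print("%s: %d" % (name, totals[name]))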

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] test results look typical?

2015-08-25 Thread Chaoren Lin via lldb-dev
Are you running VMware by any chance? TestStepOverWatchpoint fails on
VMware because of a kernel bug.

On Tue, Aug 25, 2015 at 4:17 PM, Todd Fiala  wrote:

> One more data point:
>
> Building/testing on Ubuntu 14.04.3 built with clang-3.6 and the ld.gold
> linker yielded the following test results, bringing me down to a single
> failure (and was 1.6x faster than a Debug build with gcc-4.9 and ld.bfd, 12
> GB RAM and 6 cores allocated):
>
> Failing Tests (1)
> FAIL: LLDB (suite) :: TestStepOverWatchpoint.py (Linux lldb
> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
> x86_64 x86_64)
>
> Unexpected Successes (12)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestBatchMode.py (Linux lldb
> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
> x86_64 x86_64)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestConstVariables.py (Linux lldb
> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
> x86_64 x86_64)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestEvents.py (Linux lldb
> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
> x86_64 x86_64)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestExitDuringStep.py (Linux lldb
> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
> x86_64 x86_64)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestFdLeak.py (Linux lldb
> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
> x86_64 x86_64)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestInferiorAssert.py (Linux lldb
> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
> x86_64 x86_64)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiGdbSetShow.py (Linux lldb
> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
> x86_64 x86_64)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiInterpreterExec.py (Linux lldb
> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
> x86_64 x86_64)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiSyntax.py (Linux lldb
> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
> x86_64 x86_64)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestRaise.py (Linux lldb
> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
> x86_64 x86_64)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestStubSetSID.py (Linux lldb
> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
> x86_64 x86_64)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestWatchedVarHitWhenInScope.py (Linux
> lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
> x86_64 x86_64)
>
>
> I'm not yet sure if that's stable, but it's what I'm seeing on my VM.
>
> -Todd
>
> On Tue, Aug 25, 2015 at 1:56 PM, Todd Fiala  wrote:
>
>> So specifying CC=/usr/bin/gcc CXX=/usr/bin/g++ cmake -GNinja ...
>>
>> did the trick for getting rid of the libc++ issues.  I think I may try to
>> see if we can get those tests to make a run-time check to see if the
>> inferior is linked against libc++, and if not, to skip it.  We can have
>> lldb do it by looking at the image list.  Sound reasonable?  That seems
>> more fool-proof than guessing based on the compiler.
>>
>> An alternative I considered and probably also might be valid to do anyway
>> for cases where we look at the compiler binary is to fully resolve symbolic
>> links before making decisions based on the binary.
>>
>> Thoughts?
>>
>> Separately, with the tests correctly seeing gcc now, I am down to the
>> following errors:
>>
>> Ran 394 test suites (5 failed) (1.269036%)
>> Ran 451 test cases (5 failed) (1.108647%)
>> Failing Tests (5)
>> FAIL: LLDB (suite) :: TestExitDuringStep.py (Linux lldb 3.19.0-26-generic
>> #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
>> FAIL: LLDB (suite) :: TestNumThreads.py (Linux lldb 3.19.0-26-generic
>> #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
>> FAIL: LLDB (suite) :: TestRegisterVariables.py (Linux lldb
>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>> x86_64 x86_64)
>> FAIL: LLDB (suite) :: TestStepOverWatchpoint.py (Linux lldb
>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>> x86_64 x86_64)
>> FAIL: LLDB (suite) :: TestThreadExit.py (Linux lldb 3.19.0-26-generic
>> #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
>>
>> Unexpected Successes (10)
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestBatchMode.py (Linux lldb
>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>> x86_64 x86_64)
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestEvents.py (Linux lldb
>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>> x86_64 x86_64)
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestFdLeak.py (Linux lldb
>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>> x86_64 x86_64)
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestInferiorAssert.py (Linux lldb
>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>> x86_64 x86_64)
>> UNEXPECTED SUCCESS: LLDB (suite)

Re: [lldb-dev] test results look typical?

2015-08-25 Thread Chaoren Lin via lldb-dev
Sorry, "kernel" bug is probably the wrong word. It's a problem specific to
WMware.

On Tue, Aug 25, 2015 at 4:25 PM, Chaoren Lin  wrote:

> Are you running VMware by any chance? TestStepOverWatchpoint fails on
> VMware because of a kernel bug.
>
> On Tue, Aug 25, 2015 at 4:17 PM, Todd Fiala  wrote:
>
>> One more data point:
>>
>> Building/testing on Ubuntu 14.04.3 built with clang-3.6 and the ld.gold
>> linker yielded the following test results, bringing me down to a single
>> failure (and was 1.6x faster than a Debug build with gcc-4.9 and ld.bfd, 12
>> GB RAM and 6 cores allocated):
>>
>> Failing Tests (1)
>> FAIL: LLDB (suite) :: TestStepOverWatchpoint.py (Linux lldb
>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>> x86_64 x86_64)
>>
>> Unexpected Successes (12)
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestBatchMode.py (Linux lldb
>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>> x86_64 x86_64)
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestConstVariables.py (Linux lldb
>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>> x86_64 x86_64)
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestEvents.py (Linux lldb
>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>> x86_64 x86_64)
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestExitDuringStep.py (Linux lldb
>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>> x86_64 x86_64)
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestFdLeak.py (Linux lldb
>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>> x86_64 x86_64)
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestInferiorAssert.py (Linux lldb
>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>> x86_64 x86_64)
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiGdbSetShow.py (Linux lldb
>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>> x86_64 x86_64)
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiInterpreterExec.py (Linux lldb
>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>> x86_64 x86_64)
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiSyntax.py (Linux lldb
>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>> x86_64 x86_64)
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestRaise.py (Linux lldb
>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>> x86_64 x86_64)
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestStubSetSID.py (Linux lldb
>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>> x86_64 x86_64)
>> UNEXPECTED SUCCESS: LLDB (suite) :: TestWatchedVarHitWhenInScope.py
>> (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17
>> UTC 2015 x86_64 x86_64)
>>
>>
>> I'm not yet sure if that's stable, but it's what I'm seeing on my VM.
>>
>> -Todd
>>
>> On Tue, Aug 25, 2015 at 1:56 PM, Todd Fiala  wrote:
>>
>>> So specifying CC=/usr/bin/gcc CXX=/usr/bin/g++ cmake -GNinja ...
>>>
>>> did the trick for getting rid of the libc++ issues.  I think I may try
>>> to see if we can get those tests to make a run-time check to see if the
>>> inferior is linked against libc++, and if not, to skip it.  We can have
>>> lldb do it by looking at the image list.  Sound reasonable?  That seems
>>> more fool-proof than guessing based on the compiler.
>>>
>>> An alternative I considered and probably also might be valid to do
>>> anyway for cases where we look at the compiler binary is to fully resolve
>>> symbolic links before making decisions based on the binary.
>>>
>>> Thoughts?
>>>
>>> Separately, with the tests correctly seeing gcc now, I am down to the
>>> following errors:
>>>
>>> Ran 394 test suites (5 failed) (1.269036%)
>>> Ran 451 test cases (5 failed) (1.108647%)
>>> Failing Tests (5)
>>> FAIL: LLDB (suite) :: TestExitDuringStep.py (Linux lldb
>>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>>> x86_64 x86_64)
>>> FAIL: LLDB (suite) :: TestNumThreads.py (Linux lldb 3.19.0-26-generic
>>> #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
>>> FAIL: LLDB (suite) :: TestRegisterVariables.py (Linux lldb
>>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>>> x86_64 x86_64)
>>> FAIL: LLDB (suite) :: TestStepOverWatchpoint.py (Linux lldb
>>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>>> x86_64 x86_64)
>>> FAIL: LLDB (suite) :: TestThreadExit.py (Linux lldb 3.19.0-26-generic
>>> #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
>>>
>>> Unexpected Successes (10)
>>> UNEXPECTED SUCCESS: LLDB (suite) :: TestBatchMode.py (Linux lldb
>>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>>> x86_64 x86_64)
>>> UNEXPECTED SUCCESS: LLDB (suite) :: TestEvents.py (Linux lldb
>>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>>> x86_64 x86_64)
>>> UNEXPECTED SUCCESS: LLDB (suite) :: TestFdLeak.py (Linux lldb
>>> 3.19.0-26-generic #28~14.04.1-Ubuntu S

[lldb-dev] Test suite rebuilding test executables many times

2015-08-25 Thread Zachary Turner via lldb-dev
While looking into a Windows-specific issue involving TestTargetAPI.py, I
noticed that we are building the exact same executable many times.  Every
single test has a line such as self.buildDwarf() or self.buildDsym().
Those functions will first run make clean and then run make, essentially
rebuilding the exact same program.

Is this necessary for some reason?  Each test suite already supports
suite-specific setup and tear down by implementing a suite-specific setUp
and tearDown function.  Any particular reason we can't build the
executables a single time in setUp and clean them a single time in tearDown?

I don't think we need to retro-actively do this for every single test suite
as it would be churn, but in a couple of places it would actually fix test
failures on Windows, and improve performance of the test suite as a side
benefit (as a result of reducing the number of compilations that need to
happen)

Thoughts?
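
Roughly what I'm picturing, with hypothetical class and flag names just to
show the shape (tests that really need a fresh build could keep calling the
build functions directly):

    class TargetAPITestCase(TestBase):

        _built_inferior = False   # shared across the test methods of this suite

        def setUp(self):
            TestBase.setUp(self)
            # Build (and clean) only the first time a test in this suite runs;
            # later test methods reuse the executable instead of re-running make.
            if not TargetAPITestCase._built_inferior:
                self.buildDwarf()
                TargetAPITestCase._built_inferior = True
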
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] test results look typical?

2015-08-25 Thread Todd Fiala via lldb-dev
On Tue, Aug 25, 2015 at 4:22 PM,  wrote:

> On Tue, Aug 25, 2015 at 04:41:34PM +, Zachary Turner wrote:
> > It would be great (and not too difficult) to add skip counts to dosep.  I
> > modified dotest so it formats the result summary in a nice single string
> > that you can regex match to get counts.  It's already matched in dosep,
> but
> > we just aren't pulling out the skip counts.  So it would be very easy to
> > add this.
>
> I would like to see totals from all the dotest.py's RESULTs counts:
> RESULT: PASSED (4 passes, 0 failures, 0 errors, 0 skipped, 0 expected
> failures, 0 unexpected successes)
> as well as the timeouts from dosep.
>
> Of course, dosep needs to be fixed to count the test cases correctly
> first. :)
> It seems to miss the results from every 4th test suite or so.
>
>
I may dig into that if nobody beats me to it.  I did the original
multiprocessing work on dosep ~1.5 years ago and it may be doing something
goofy.  So far the results have been remarkably stable on the counts for me
over the last 2 days.


> > On Tue, Aug 25, 2015 at 7:41 AM Todd Fiala via lldb-dev <
> > lldb-dev@lists.llvm.org> wrote:
> > > For counting I will probably go back to my old method of parsing the
> > > output of a serial dotest run, since IIRC I can get skip counts
> accurately
> > > there as well.  (Or perhaps that should be added to dosep.py, it's
> been a
> > > while since I last heavily modified that script).
>
> You can get all the counts from running dosep by counting up the results
> from each dotest run.  Collect the output via:
>
> ./dosep.py -s --options "-v --executable $BLDDIR/bin/lldb" 2>&1 | tee
> test_out.log
>
> Then your totals will be:
>
> export passes=`grep -E "^RESULT: " test_out.log | sed 's/(//' | awk
> '{count+=$3} END {print count}'` || true
> export failures=`grep -E "^RESULT:" test_out.log | awk '{count+=$5}
> END {print count}'` || true
> export errors=`grep -E "^RESULT:" test_out.log | awk '{count+=$7} END
> {print count}'` || true
> export skips=`grep -E "^RESULT:" test_out.log | awk '{count+=$9} END
> {print count}'` || true
> [...]
> export total=`grep -E "^Ran [0-9]+ tests? in" lldb_test_out.log | awk
> '{count+=$2} END {print count}'`
>
>
Great, thanks Dawn!

I'd like to get all the counts into dosep.py at least as an option, but
having something to cross check it with is good (and getting a quick answer
is nice as well, thanks.)
-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] test results look typical?

2015-08-25 Thread Todd Fiala via lldb-dev
Hi Chaoren,

Right you are, I am using a VMWare VM.  Usually when I have issues with
VMs, it is because I'm not using VMWare, so this is a change!

Do you have a reference to a VMWare bug on this?  That would be great to
follow up with them on.

In the absence of that, I wonder if we can detect when that is the runtime
environment and perhaps skip that test on VMWare VMs.  I'm pretty sure we
can detect that we're running in a VM if (at least) the guest tools are
installed.  I'll look into that.
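
Something like this might do for the detection side (a rough sketch; the DMI
paths and strings are assumptions about a Linux guest, not something I've
verified on every setup):

    import os

    def running_in_vmware():
        """Best-effort check for a VMware guest on Linux."""
        for key in ("sys_vendor", "product_name"):
            try:
                with open(os.path.join("/sys/class/dmi/id", key)) as f:
                    if "vmware" in f.read().lower():
                        return True
            except IOError:
                pass
        return False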

-Todd

On Tue, Aug 25, 2015 at 4:26 PM, Chaoren Lin  wrote:

> Sorry, "kernel" bug is probably the wrong word. It's a problem specific to
> VMware.
>
> On Tue, Aug 25, 2015 at 4:25 PM, Chaoren Lin  wrote:
>
>> Are you running VMware by any chance? TestStepOverWatchpoint fails on
>> VMware because of a kernel bug.
>>
>> On Tue, Aug 25, 2015 at 4:17 PM, Todd Fiala  wrote:
>>
>>> One more data point:
>>>
>>> Building/testing on Ubuntu 14.04.3 built with clang-3.6 and the ld.gold
>>> linker yielded the following test results, bringing me down to a single
>>> failure (and was 1.6x faster than a Debug build with gcc-4.9 and ld.bfd, 12
>>> GB RAM and 6 cores allocated):
>>>
>>> Failing Tests (1)
>>> FAIL: LLDB (suite) :: TestStepOverWatchpoint.py (Linux lldb
>>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>>> x86_64 x86_64)
>>>
>>> Unexpected Successes (12)
>>> UNEXPECTED SUCCESS: LLDB (suite) :: TestBatchMode.py (Linux lldb
>>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>>> x86_64 x86_64)
>>> UNEXPECTED SUCCESS: LLDB (suite) :: TestConstVariables.py (Linux lldb
>>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>>> x86_64 x86_64)
>>> UNEXPECTED SUCCESS: LLDB (suite) :: TestEvents.py (Linux lldb
>>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>>> x86_64 x86_64)
>>> UNEXPECTED SUCCESS: LLDB (suite) :: TestExitDuringStep.py (Linux lldb
>>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>>> x86_64 x86_64)
>>> UNEXPECTED SUCCESS: LLDB (suite) :: TestFdLeak.py (Linux lldb
>>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>>> x86_64 x86_64)
>>> UNEXPECTED SUCCESS: LLDB (suite) :: TestInferiorAssert.py (Linux lldb
>>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>>> x86_64 x86_64)
>>> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiGdbSetShow.py (Linux lldb
>>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>>> x86_64 x86_64)
>>> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiInterpreterExec.py (Linux lldb
>>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>>> x86_64 x86_64)
>>> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiSyntax.py (Linux lldb
>>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>>> x86_64 x86_64)
>>> UNEXPECTED SUCCESS: LLDB (suite) :: TestRaise.py (Linux lldb
>>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>>> x86_64 x86_64)
>>> UNEXPECTED SUCCESS: LLDB (suite) :: TestStubSetSID.py (Linux lldb
>>> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
>>> x86_64 x86_64)
>>> UNEXPECTED SUCCESS: LLDB (suite) :: TestWatchedVarHitWhenInScope.py
>>> (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17
>>> UTC 2015 x86_64 x86_64)
>>>
>>>
>>> I'm not yet sure if that's stable, but it's what I'm seeing on my VM.
>>>
>>> -Todd
>>>
>>> On Tue, Aug 25, 2015 at 1:56 PM, Todd Fiala 
>>> wrote:
>>>
 So specifying CC=/usr/bin/gcc CXX=/usr/bin/g++ cmake -GNinja ...

 did the trick for getting rid of the libc++ issues.  I think I may try
 to see if we can get those tests to make a run-time check to see if the
 inferior is linked against libc++, and if not, to skip it.  We can have
 lldb do it by looking at the image list.  Sound reasonable?  That seems
 more fool-proof than guessing based on the compiler.

 An alternative I considered and probably also might be valid to do
 anyway for cases where we look at the compiler binary is to fully resolve
 symbolic links before making decisions based on the binary.

 Thoughts?

 Separately, with the tests correctly seeing gcc now, I am down to the
 following errors:

 Ran 394 test suites (5 failed) (1.269036%)
 Ran 451 test cases (5 failed) (1.108647%)
 Failing Tests (5)
 FAIL: LLDB (suite) :: TestExitDuringStep.py (Linux lldb
 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
 x86_64 x86_64)
 FAIL: LLDB (suite) :: TestNumThreads.py (Linux lldb 3.19.0-26-generic
 #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
 FAIL: LLDB (suite) :: TestRegisterVariables.py (Linux lldb
 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
 x86_64 x86_64)
 FAIL: LLDB (suite) :: TestStepOverWatchpoint.py (Linux lldb
 3.19.0-26-generic #28~14.04.1-Ubuntu

Re: [lldb-dev] test results look typical?

2015-08-25 Thread Zachary Turner via lldb-dev
On Tue, Aug 25, 2015 at 4:39 PM Todd Fiala  wrote:

>
> Great, thanks Dawn!
>
> I'd like to get all the counts into dosep.py at least as an option, but
> having something to cross check it with is good (and getting a quick answer
> is nice as well, thanks.)
>

Personally I'd love to see FEWER options in dotest and dosep.  So if there's
no good reason not to print it, I say collect all of them.  Just my 2c.

>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Test suite rebuilding test executables many times

2015-08-25 Thread Zachary Turner via lldb-dev
Another possibility is changing the arguments to buildDwarf and buildDsym.
Currently they take a clean argument with a default value of True.  Does
this really need to be True?  If this were False by default it could
drastically speed up the test suite.  And I can't think of a reason why
make clean would need to run by default, because tear down is going to have
to clean up the files manually anyway.
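For illustration, here is a rough sketch of the kind of default flip being proposed. The real builder helpers live in lldbtest.py and take more parameters; the class name and build commands below are stand-ins, not the actual implementation:

    import subprocess

    class BuilderSketch(object):
        """Stand-in for the TestBase build helpers; illustrative only."""

        def buildDwarf(self, architecture=None, compiler=None,
                       dictionary=None, clean=False):     # proposal: default was True
            if clean:
                subprocess.check_call(["make", "clean"])  # scrub only when asked to
            subprocess.check_call(["make"])               # otherwise an incremental build

Tests that genuinely need a scrubbed tree could still opt back in with an explicit clean=True.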

On Tue, Aug 25, 2015 at 4:33 PM Zachary Turner  wrote:

> While looking into a Windows-specific issue involving TestTargetAPI.py, I
> noticed that we are building the exact same executable many times.  Every
> single test has a line such as self.buildDwarf() or self.buildDsym().
> Those functions will first run make clean and then run make, essentially
> rebuilding the exact same program.
>
> Is this necessary for some reason?  Each test suite already supports
> suite-specific setup and tear down by implementing a suite-specific setUp
> and tearDown function.  Any particular reason we can't build the
> executables a single time in setUp and clean them a single time in tearDown?
>
> I don't think we need to retro-actively do this for every single test
> suite as it would be churn, but in a couple of places it would actually fix
> test failures on Windows, and improve performance of the test suite as a
> side benefit (as a result of reducing the number of compilations that need
> to happen)
>
> Thoughts?
>
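A sketch of the build-once-per-suite shape described above, using plain unittest class fixtures and assuming a Makefile in the current directory (purely illustrative, not existing lldbtest API):

    import subprocess
    import unittest

    class ExampleSuite(unittest.TestCase):
        @classmethod
        def setUpClass(cls):
            # One clean + build for the whole suite instead of one per test method.
            subprocess.check_call(["make", "clean"])
            subprocess.check_call(["make"])

        @classmethod
        def tearDownClass(cls):
            subprocess.check_call(["make", "clean"])

        def test_first_scenario(self):
            pass   # would launch/attach against the prebuilt binary

        def test_second_scenario(self):
            pass   # reuses the same binary; no rebuild in between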
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] test results look typical?

2015-08-25 Thread Dawn via lldb-dev
On Tue, Aug 25, 2015 at 04:39:14PM -0700, Todd Fiala wrote:
> I may dig into that if nobody beats me to it.  I did the original
> multiprocessing work on dosep ~1.5 years ago and it may be doing something
> goofy.  

Cool!  It would be awesome if you could have a look - I've been meaning to dig
further but just haven't had the time.

> So far the results have been remarkably stable on the counts for me
> over the last 2 days.

They are always the same.  Try the narrowed case I described with only
the tests from test/types - you'll get the same total each time, because
the same test suite is skipped each time.

Thanks!
-Dawn
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Test suite rebuilding test executables many times

2015-08-25 Thread Jim Ingham via lldb-dev
It is fairly common practice (at least it is for me) when figuring out why a 
test failed, or adding to a test case, or when looking for a good example file 
to poke at, etc, to go to some relevant test directory, do a "make" then poke 
around a bunch.  I don't generally remember to clean when I'm done.  If the 
test suite didn't do make clean before running the tests then I'd get whatever 
state I left the binaries in after that investigation.  So I prefer doing make 
clean the first time you run a test in a given directory, but I have no 
objection to trying not to do the clean on subsequent tests in the same 
directory.  Also we do "dsym" and then "non-dsym" builds in the same directory 
on OS X, so we'd have to make sure that we clean when switching back & forth 
between the two kinds of tests, or we will leave a dSYM around at some point 
and stop testing .o file debugging.  Now that support is coming in for "dwo" 
debugging on Linux, we probably should also add the ability to test normal & 
dwo debugging there as well.  So this soon won't be just an OS X oddity...

Finally, there are some tests that rebuild the binaries on purpose - sadly I 
don't remember which ones.  If we're lucky they would fail if you switched the 
default and you could go fix them, but if you are unlucky they would succeed 
without actually testing what they were supposed to test.  So a little care 
would be needed to find these.

Jim

> On Aug 25, 2015, at 4:52 PM, Zachary Turner via lldb-dev 
>  wrote:
> 
> Another possibility is changing the arguments to buildDwarf and buildDsym.  
> Currently they take a clean argument with a default value of True.  Does this 
> really need to be True?  If this were False by default it could drastically 
> speed up the test suite.  And I can't think of a reason why make clean would 
> need to run by default, because tear down is going to have to clean up the 
> files manually anyway
> 
> On Tue, Aug 25, 2015 at 4:33 PM Zachary Turner  wrote:
> While looking into a Windows-specific issue involving TestTargetAPI.py, I 
> noticed that we are building the exact same executable many times.  Every 
> single test has a line such as self.buildDwarf() or self.buildDsym().  Those 
> functions will first run make clean and then run make, essentially rebuilding 
> the exact same program.
> 
> Is this necessary for some reason?  Each test suite already supports 
> suite-specific setup and tear down by implementing a suite-specific setUp and 
> tearDown function.  Any particular reason we can't build the executables a 
> single time in setUp and clean them a single time in tearDown?
> 
> I don't think we need to retro-actively do this for every single test suite 
> as it would be churn, but in a couple of places it would actually fix test 
> failures on Windows, and improve performance of the test suite as a side 
> benefit (as a result of reducing the number of compilations that need to 
> happen)
> 
> Thoughts?
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] test results look typical?

2015-08-25 Thread Todd Fiala via lldb-dev
On Tue, Aug 25, 2015 at 4:43 PM, Zachary Turner  wrote:

>
>
> On Tue, Aug 25, 2015 at 4:39 PM Todd Fiala  wrote:
>
>>
>> Great, thanks Dawn!
>>
>> I'd like to get all the counts into dosep.py at least as an option, but
>> having something to cross check it with is good (and getting a quick answer
>> is nice as well, thanks.)
>>
>
> Personally I'd love to see LESS options in dotest and dosep.  So if
> there's no good reason to not print it, I say collect all of them.  Just my
> 2c
>
>>
I'm good with that.  dotest.py will already print out the counts without
options IIRC, so that would be no different.

-- 
-Todd
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] test results look typical?

2015-08-25 Thread Todd Fiala via lldb-dev
On Tue, Aug 25, 2015 at 4:41 PM, Todd Fiala  wrote:

> Hi Chaoren,
>
> Right you are, I am using a VMWare VM.  Usually when I have issues with
> VMs, it is because I'm not using VMWare, so this is a change!
>
>
And I am happy to report I get *no* errors when building with clang-3.6 +
ld.gold + Debug on real iron.

Thanks for the help, everyone!

-Todd


> Do you have a reference to a VMWare bug on this?  That would be great to
> follow up with them on.
>
> In the absence of that, I wonder if we can detect that we're in that runtime
> environment and perhaps skip that test on VMWare VMs.  I'm pretty sure we
> can detect that we're running in a VM if (at least) the guest tools are
> installed.  I'll look into that.
>
> -Todd
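
For reference, one heuristic a Linux guest can use is the DMI identification strings the hypervisor exposes under sysfs; the helper below is a rough sketch, not existing test-suite code:

    import os

    def running_in_vmware():
        """Best-effort check for a VMware guest on Linux via the DMI strings."""
        for key in ("sys_vendor", "product_name"):
            try:
                with open(os.path.join("/sys/class/dmi/id", key)) as f:
                    if "vmware" in f.read().lower():
                        return True
            except IOError:
                pass   # file absent or unreadable: fall through
        return False

    # A test could then do something like:
    #     if running_in_vmware():
    #         self.skipTest("watchpoints behave unreliably under VMware")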
>
> On Tue, Aug 25, 2015 at 4:26 PM, Chaoren Lin  wrote:
>
>> Sorry, "kernel" bug is probably the wrong word. It's a problem specific
>> to VMware.
>>
>> On Tue, Aug 25, 2015 at 4:25 PM, Chaoren Lin  wrote:
>>
>>> Are you running VMware by any chance? TestStepOverWatchpoint fails on
>>> VMware because of a kernel bug.
>>>
>>> On Tue, Aug 25, 2015 at 4:17 PM, Todd Fiala 
>>> wrote:
>>>
 One more data point:

 Building/testing on Ubuntu 14.04.3 built with clang-3.6 and the ld.gold
 linker yielded the following test results, bringing me down to a single
 failure (and was 1.6x faster than a Debug build with gcc-4.9 and ld.bfd, 12
 GB RAM and 6 cores allocated):

 Failing Tests (1)
 FAIL: LLDB (suite) :: TestStepOverWatchpoint.py (Linux lldb
 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
 x86_64 x86_64)

 Unexpected Successes (12)
 UNEXPECTED SUCCESS: LLDB (suite) :: TestBatchMode.py (Linux lldb
 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
 x86_64 x86_64)
 UNEXPECTED SUCCESS: LLDB (suite) :: TestConstVariables.py (Linux lldb
 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
 x86_64 x86_64)
 UNEXPECTED SUCCESS: LLDB (suite) :: TestEvents.py (Linux lldb
 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
 x86_64 x86_64)
 UNEXPECTED SUCCESS: LLDB (suite) :: TestExitDuringStep.py (Linux lldb
 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
 x86_64 x86_64)
 UNEXPECTED SUCCESS: LLDB (suite) :: TestFdLeak.py (Linux lldb
 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
 x86_64 x86_64)
 UNEXPECTED SUCCESS: LLDB (suite) :: TestInferiorAssert.py (Linux lldb
 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
 x86_64 x86_64)
 UNEXPECTED SUCCESS: LLDB (suite) :: TestMiGdbSetShow.py (Linux lldb
 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
 x86_64 x86_64)
 UNEXPECTED SUCCESS: LLDB (suite) :: TestMiInterpreterExec.py (Linux
 lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
 x86_64 x86_64)
 UNEXPECTED SUCCESS: LLDB (suite) :: TestMiSyntax.py (Linux lldb
 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
 x86_64 x86_64)
 UNEXPECTED SUCCESS: LLDB (suite) :: TestRaise.py (Linux lldb
 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
 x86_64 x86_64)
 UNEXPECTED SUCCESS: LLDB (suite) :: TestStubSetSID.py (Linux lldb
 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
 x86_64 x86_64)
 UNEXPECTED SUCCESS: LLDB (suite) :: TestWatchedVarHitWhenInScope.py
 (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17
 UTC 2015 x86_64 x86_64)


 I'm not yet sure if that's stable, but it's what I'm seeing on my VM.

 -Todd

 On Tue, Aug 25, 2015 at 1:56 PM, Todd Fiala 
 wrote:

> So specifying CC=/usr/bin/gcc CXX=/usr/bin/g++ cmake -GNinja ...
>
> did the trick for getting rid of the libc++ issues.  I think I may try
> to see if we can get those tests to make a run-time check to see if the
> inferior is linked against libc++, and if not, to skip it.  We can have
> lldb do it by looking at the image list.  Sound reasonable?  That seems
> more fool-proof than guessing based on the compiler.
>
> An alternative I considered and probably also might be valid to do
> anyway for cases where we look at the compiler binary is to fully resolve
> symbolic links before making decisions based on the binary.
>
> Thoughts?
>
> Separately, with the tests correctly seeing gcc now, I am down to the
> following errors:
>
> Ran 394 test suites (5 failed) (1.269036%)
> Ran 451 test cases (5 failed) (1.108647%)
> Failing Tests (5)
> FAIL: LLDB (suite) :: TestExitDuringStep.py (Linux lldb
> 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015
> x86_64 x86_64)
> FAIL: LLDB (suite) :: TestNumThreads.py (Linux lldb 3.19.0-26-generic
> #28~14.

Re: [lldb-dev] Test suite rebuilding test executables many times

2015-08-25 Thread Zachary Turner via lldb-dev
The first and second issues (cleaning once at startup, switching between
dsym and dwarf tests) can probably both be solved at the same time by
having the test runner sort the runs and do all dsym tests first, and then
all dwarf tests, and having TestBase do make clean once before each of
those steps.  What do you think?
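As a sketch of that ordering (the Test objects, their flavor attribute, and run() are made up for illustration; only the shape of the loop matters):

    import subprocess

    def run_grouped(tests):
        """Run all dsym-flavored tests first, then all dwarf-flavored ones,
        issuing 'make clean' once per group instead of once per test."""
        for flavor in ("dsym", "dwarf"):
            group = [t for t in tests if t.flavor == flavor]
            if not group:
                continue
            subprocess.check_call(["make", "clean"])   # one clean per flavor switch
            for test in group:
                test.run(clean=False)                  # individual tests skip the clean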

I'm going to do some timings tomorrow to see how much faster the test suite
is when clean=False is the default.  I already confirmed that it fixes all
the failures I'm seeing though, so as long as it's agreeable I'd like to
make this change.

I'll wait and see if anyone can remember which tests rebuild binaries on
purpose.  Otherwise I will try to look through them and see if I can figure
it out.

On Tue, Aug 25, 2015 at 5:06 PM Jim Ingham  wrote:

> It is fairly common practice (at least it is for me) when figuring out why
> a test failed, or adding to a test case, or when looking for a good example
> file to poke at, etc, to go to some relevant test directory, do a "make"
> then poke around a bunch.  I don't generally remember to clean when I'm
> done.  If the test suite didn't do make clean before running the tests then
> I'd get whatever state I left the binaries in after that investigation.  So
> I prefer doing make clean the first time you run a test in a given
> directory, but I have no objection to trying not to do the clean on
> subsequent tests in the same directory.  Also we do "dsym" and then
> "non-dsym" builds in the same directory on OS X, so we'd have to make sure
> that we clean when switching back & forth between the two kinds of tests,
> or we will leave a dSYM around at some point and stop testing .o file
> debugging.  Now that support is coming in for "dwo" debugging on Linux, we
> probably should also add the ability to test normal & dwo debugging there
> as well.  So this soon won't be just an OS X oddity...
>
> Finally, there are some tests that rebuild the binaries on purpose - sadly
> I don't remember which ones.  If we're lucky they would fail if you
> switched the default and you could go fix them, but if you are unlucky they
> would succeed without actually testing what they were supposed to test.  So
> a little care would be needed to find these.
>
> Jim
>
> > On Aug 25, 2015, at 4:52 PM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >
> > Another possibility is changing the arguments to buildDwarf and
> buildDsym.  Currently they take a clean argument with a default value of
> True.  Does this really need to be True?  If this were False by default it
> could drastically speed up the test suite.  And I can't think of a reason
> why make clean would need to run by default, because tear down is going to
> have to clean up the files manually anyway
> >
> > On Tue, Aug 25, 2015 at 4:33 PM Zachary Turner 
> wrote:
> > While looking into a Windows-specific issue involving TestTargetAPI.py,
> I noticed that we are building the exact same executable many times.  Every
> single test has a line such as self.buildDwarf() or self.buildDsym().
> Those functions will first run make clean and then run make, essentially
> rebuilding the exact same program.
> >
> > Is this necessary for some reason?  Each test suite already supports
> suite-specific setup and tear down by implementing a suite-specific setUp
> and tearDown function.  Any particular reason we can't build the
> executables a single time in setUp and clean them a single time in tearDown?
> >
> > I don't think we need to retro-actively do this for every single test
> suite as it would be churn, but in a couple of places it would actually fix
> test failures on Windows, and improve performance of the test suite as a
> side benefit (as a result of reducing the number of compilations that need
> to happen)
> >
> > Thoughts?
> > ___
> > lldb-dev mailing list
> > lldb-dev@lists.llvm.org
> >
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev