On 09/08/2011 05:38 PM, Jonathan Wakely wrote:
>
> The pdf size is 4MB. Maybe that is the problem.
Please use some common sense before forwarding a 100KB message to the
mailing list, where it gets sent to hundreds of people. It would only
have taken you a few seconds to remove the base64-encoded attachment.
Hi,
In ifcvt.c, the function find_if_case_2 uses cheap_bb_rtx_cost_p to
judge whether the conversion is worthwhile.
cheap_bb_rtx_cost_p checks whether the total insn_rtx_cost of the
non-jump insns in basic block BB is less than MAX_COST.
So the question is: why use cheap_bb_rtx_cost_p even when we know the
ELSE
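For readers who have not looked at the helper, here is a self-contained
sketch of the shape of that check; the insn type and cost field below
are hypothetical stand-ins, not GCC's real rtx structures:

  #include <stdbool.h>
  #include <stddef.h>

  /* Hypothetical stand-in for an insn; GCC really walks rtx insns
     and calls insn_rtx_cost on each.  */
  struct insn { int cost; bool is_jump; struct insn *next; };

  /* Return true if the total cost of the non-jump insns stays below
     max_cost -- the "is this block cheap enough to if-convert"
     question asked in ifcvt.c.  */
  static bool
  bb_is_cheap_enough (struct insn *first, int max_cost)
  {
    int total = 0;
    struct insn *i;

    for (i = first; i != NULL; i = i->next)
      {
        if (i->is_jump)
          continue;            /* Jump insns are not counted.  */
        total += i->cost;
        if (total >= max_cost)
          return false;        /* Already too expensive.  */
      }
    return true;
  }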
Snapshot gcc-4.5-20110908 is now available on
ftp://gcc.gnu.org/pub/gcc/snapshots/4.5-20110908/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.
This snapshot has been generated from the GCC 4.5 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches
tcompare/gcc-linaro-4.6-2011.08/logs/armv7l-natty-cbuild162-ursa1-cortexa9r1/gcc-testsuite.txt?base=gcc-linaro-4.6-2011.07-0
and a lower level diff-on-sum-files for each commit:
http://builds.linaro.org/toolchain/gcc-linaro-4.5+bzr99541~rsandifo~lp823708-4.5/logs/armv7l-natty-cbuild181-ursa4-armv5r2/testsuite
On Thu, 8 Sep 2011, Richard Earnshaw wrote:
> And that's only going to work if all the test names are unique. I
> currently see quite a few tests that appear in my log as both PASS and
> FAIL in a single run. For example:
Yes, that's just a bug in the testsuite that should be fixed just like an
On Thu, 8 Sep 2011, Richard Guenther wrote:
> I think it would be more useful to have a script parse gcc-testresults@
> postings from the various autotesters and produce a nice webpage
> with revisions and known FAIL/XPASSes for the target triplets that
> are tested.
Better than parsing gcc-testresults@
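The core of such a script is just aggregating the failure lines per
revision and target triplet. A rough sketch of that filtering step in
C, assuming DejaGnu .sum-style input on stdin (real postings would
also need mail-header handling):

  #include <stdio.h>
  #include <string.h>

  /* Echo the FAIL and XPASS lines from a .sum-style log -- the raw
     material a results webpage would collect.  */
  int
  main (void)
  {
    char line[4096];

    while (fgets (line, sizeof line, stdin))
      if (strncmp (line, "FAIL: ", 6) == 0
          || strncmp (line, "XPASS: ", 7) == 0)
        fputs (line, stdout);
    return 0;
  }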
On 09/07/2011 06:59 PM, Kevin Polulak wrote:
I'd like to know at what stage during compilation debug data is
collected and subsequently stored in the object file. Is it a
multi-stage process? Perhaps the parser collects some high-level
information which it passes to the code generator for further
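One concrete way to watch where the debug data ends up while reading
the sources (the file name here is made up; objdump is the binutils
tool):

  /* trivial.c -- compile with:   gcc -g -c trivial.c
     then inspect with:           objdump --dwarf=info trivial.o
     The DWARF records (a DW_TAG_subprogram for add, DW_AT_name,
     line numbers, ...) live in the .debug_* sections of the
     object file.  */
  int
  add (int a, int b)
  {
    return a + b;
  }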
On 08/09/11 14:54, Diego Novillo wrote:
> On Thu, Sep 8, 2011 at 09:23, Richard Earnshaw wrote:
>
>> And that's only going to work if all the test names are unique. I
>> currently see quite a few tests that appear in my log as both PASS and
>> FAIL in a single run. For example:
>
> That's fine. What we are looking for is to capture the state
On Thu, Sep 8, 2011 at 09:23, Richard Earnshaw wrote:
> And that's only going to work if all the test names are unique. I
> currently see quite a few tests that appear in my log as both PASS and
> FAIL in a single run. For example:
That's fine. What we are looking for is to capture the state
On Thu, Sep 8, 2011 at 3:20 PM, Richard Guenther
wrote:
> On Thu, Sep 8, 2011 at 3:09 PM, Steve White
> wrote:
>> Hi Richard!
>>
>> On Thu, Sep 8, 2011 at 11:02 AM, Richard Guenther
>> wrote:
>>> On Thu, Sep 8, 2011 at 12:31 AM, Steve White
>>> wrote:
Hi,
I run some tests of simple number-crunching loops
On 08/09/11 12:33, Diego Novillo wrote:
> On Thu, Sep 8, 2011 at 07:16, Richard Guenther
> wrote:
>
>> Well, you'd need to maintain a list of known XPASS/FAILs anyway.
>
> Yes, of course. That's the manifest of things you expect to be broken.
>
And that's only going to work if all the test names are unique.
On Thu, Sep 8, 2011 at 3:09 PM, Steve White wrote:
> Hi Richard!
>
> On Thu, Sep 8, 2011 at 11:02 AM, Richard Guenther
> wrote:
>> On Thu, Sep 8, 2011 at 12:31 AM, Steve White
>> wrote:
>>> Hi,
>>>
>>> I run some tests of simple number-crunching loops whenever new
>>> architectures and compilers
Kevin Polulak writes:
>
> I've tried to gain some knowledge by digging through the GCC source
> but haven't come up with much other than the values of the DW_*
> constants, which isn't that important. Are there any files in
> particular I should be looking at?
From the gcc internals manual:
* D
Hi Richard!
On Thu, Sep 8, 2011 at 11:02 AM, Richard Guenther
wrote:
> On Thu, Sep 8, 2011 at 12:31 AM, Steve White
> wrote:
>> Hi,
>>
>> I run some tests of simple number-crunching loops whenever new
>> architectures and compilers arise.
>>
>> These tests on recent Intel architectures show similar performance
>> between gcc and icc compilers, at full optimization.
On Thu, Sep 8, 2011 at 08:29, Richard Guenther
wrote:
> It _does_ live with the source code. Think of implicitly "checking in" the
> build result with the tested revision. That's not different from your idea
> of checking in some sort of whitelist of fails.
Ah, I see what you mean. Yes, that's
On Thu, Sep 8, 2011 at 2:26 PM, Diego Novillo wrote:
> On Thu, Sep 8, 2011 at 08:20, Richard Guenther
> wrote:
>
>> Cache the comparison result? If you specify a (minimum) revision
>> required for testing, just test against a cached revision that fulfils
>> the requirement. Something I never implemented for ours.
On Thu, Sep 8, 2011 at 08:20, Richard Guenther
wrote:
> Cache the comparison result? If you specify a (minimum) revision
> required for testing, just test against a cached revision that fulfils
> the requirement. Something I never implemented for ours.
Nope. Build must be functionally independent
On Thu, Sep 8, 2011 at 2:14 PM, Diego Novillo wrote:
> On Thu, Sep 8, 2011 at 07:49, Richard Guenther
> wrote:
>
>> Well, I'd rather _fix_ dejagnu, then. Any specific example you can't
>> eventually xfail by dg-skipping the testcase?
>
> Several I mentioned upthread:
> - Some .exp files do not support xfail markers.
On Thu, Sep 8, 2011 at 07:49, Richard Guenther
wrote:
> Well, I'd rather _fix_ dejagnu, then. Any specific example you can't
> eventually xfail by dg-skipping the testcase?
Several I mentioned upthread:
- Some .exp files do not support xfail markers.
- Different directories will have their own sy
On Thu, Sep 8, 2011 at 1:33 PM, Diego Novillo wrote:
> On Thu, Sep 8, 2011 at 07:16, Richard Guenther
> wrote:
>
>> Well, you'd need to maintain a list of known XPASS/FAILs anyway.
>
> Yes, of course. That's the manifest of things you expect to be broken.
>
>> You can as well do it in the testcases themselves (add XFAILs, remove
>> XPASSes and open bug reports to
On Thu, Sep 8, 2011 at 07:16, Richard Guenther
wrote:
> Well, you'd need to maintain a list of known XPASS/FAILs anyway.
Yes, of course. That's the manifest of things you expect to be broken.
> You can as well do it in the testcases themselves (add XFAILs, remove
> XPASSes and open bug reports to
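For concreteness, the in-testcase route being suggested looks like
this; the target triplet and reason string below are placeholders,
not a real entry:

  /* { dg-do run } */
  /* { dg-xfail-if "known failure on this target (placeholder reason)" { arm*-*-* } } */
  int
  main (void)
  {
    return 0;  /* A real test would exercise the broken case.  */
  }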
On Thu, Sep 8, 2011 at 1:04 PM, Diego Novillo wrote:
> On Thu, Sep 8, 2011 at 04:31, Richard Guenther
> wrote:
>
>> I think it would be more useful to have a script parse gcc-testresults@
>> postings from the various autotesters and produce a nice webpage
>> with revisions and known FAIL/XPASSes for the target triplets that
>> are tested.
On Thu, Sep 8, 2011 at 04:31, Richard Guenther
wrote:
> I think it would be more useful to have a script parse gcc-testresults@
> postings from the various autotesters and produce a nice webpage
> with revisions and known FAIL/XPASSes for the target triplets that
> are tested.
Sure, though that
On 8 September 2011 10:00, Xiangfu Liu wrote:
> On 09/08/2011 12:11 PM, Joe Buck wrote:
>>
>> On Wed, Sep 07, 2011 at 08:08:01PM -0700, Xiangfu Liu wrote:
>>>
>>> > Hi
>>> >
>>> > I got the pdf file, and I also sent out the papers by postal mail.
>>> > Where should I send the pdf file?
>>
Why is LTO/whole-program mode not used in LLVM for peak performance
comparison? (Of course, peak performance should really use FDO.)
Thanks for the feedback. I did not manage to use LTO for LLVM as
described on
http://llvm.org/docs/LinkTimeOptimization.html#lto
I am getting 'file not recognized'
On Thu, Sep 8, 2011 at 12:31 AM, Steve White
wrote:
> Hi,
>
> I run some tests of simple number-crunching loops whenever new
> architectures and compilers arise.
>
> These tests on recent Intel architectures show similar performance
> between gcc and icc compilers, at full optimization.
>
> However
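For reference, the kind of simple number-crunching loop in question;
a dot product is chosen here as a representative example, not Steve's
actual test:

  #include <stdio.h>

  #define N 1000000
  static double a[N], b[N];

  int
  main (void)
  {
    double sum = 0.0;
    int i;

    for (i = 0; i < N; i++)
      {
        a[i] = i * 0.5;
        b[i] = i * 0.25;
      }
    /* The timed kernel: at full optimization both gcc and icc
       should vectorize this loop.  */
    for (i = 0; i < N; i++)
      sum += a[i] * b[i];
    printf ("%f\n", sum);
    return 0;
  }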
On 09/08/2011 12:11 PM, Joe Buck wrote:
On Wed, Sep 07, 2011 at 08:08:01PM -0700, Xiangfu Liu wrote:
> Hi
>
> I got the pdf file, and I also sent out the papers by postal mail.
> Where should I send the pdf file?
>
> I have tried:
> copyright-cl...@fsf.org ass...@gnu.org
>
> and
On Wed, Sep 07, 2011 at 11:15:39AM -0400, Vladimir Makarov wrote:
> This year I used -Ofast -flto -fwhole-program instead of
> -O3 for GCC and -O3 -ffast-math for LLVM for comparison of peak
> performance. I could improve GCC performance even more by using
> other GCC possibilities (like support
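As a minimal illustration of what -flto -fwhole-program buys (file
names and the trivial kernel are made up, not one of the benchmarks
being measured): scale() is defined in one translation unit and
called from another, so without LTO it cannot be inlined into the
loop.

  /* kernel.c */
  double
  scale (double x)
  {
    return x * 2.0;
  }

  /* main.c */
  extern double scale (double);

  int
  main (void)
  {
    double s = 0.0;
    int i;

    for (i = 0; i < 1000; i++)
      s += scale (i);
    return s > 0 ? 0 : 1;
  }

  /* Build with the flags from the thread:
     gcc -Ofast -flto -fwhole-program kernel.c main.c -o bench  */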
On Wed, Sep 7, 2011 at 5:28 PM, Diego Novillo wrote:
> One of the most vexing aspects of GCC development is dealing with
> failures in the various testsuites. In general, we are unable to
> keep failures down to zero. We tolerate some failures and tell
> people to "compare your build against a c
On Wed, Sep 7, 2011 at 6:23 PM, Vladimir Makarov wrote:
> On 09/07/2011 11:55 AM, Xinliang David Li wrote:
>>
>> Why is LTO/whole-program mode not used in LLVM for peak performance
>> comparison? (Of course, peak performance should really use FDO.)
>>
> Thanks for the feedback. I did not manage to use LTO for LLVM