Re: Error saying cannot compute suffix while building compiler gcc-4.3.2

2010-03-29 Thread Jonathan Wakely
On 26 March 2010 07:54, Vaibhav Shrimali  wrote:
> Hello,
> I made some changes in the compiler gcc-4.3.2 and am currently trying
> to build the compiler.
> There are no compilation errors in the source code. I followed the
> steps specified at: http://gcc.gnu.org/install/index.html
> While configuring, I used the command:
>
> r...@vebs-pc:/home/vebs/gcc/gcc# export SUF=-4.3
> r...@vebs-pc:/home/vebs/gcc/gcc# /home/vebs/gcc/gcc-4.3.2/configure
> --program-suffix=$SUF
> r...@vebs-pc:/home/vebs/gcc/gcc# make -f Makefile
>
> it exits and gives the following output.. whose last few lines are:
>
> **
> checking for i686-pc-linux-gnu-gcc... /home/vebs/gcc/gcc/./gcc/xgcc
> -B/home/vebs/gcc/gcc/./gcc/ -B/usr/local/i686-pc-linux-gnu/bin/
> -B/usr/local/i686-pc-linux-gnu/lib/ -isystem
> /usr/local/i686-pc-linux-gnu/include -isystem
> /usr/local/i686-pc-linux-gnu/sys-include
> checking for suffix of object files... configure: error: cannot
> compute suffix of object files: cannot compile
> See `config.log' for more details.
> make[2]: *** [configure-stage1-target-libgcc] Error 1
> make[2]: Leaving directory `/home/vebs/gcc/gcc'
> make[1]: *** [stage1-bubble] Error 2
> make[1]: Leaving directory `/home/vebs/gcc/gcc'
> make: *** [all] Error 2
> **

As the configure output says:
See `config.log' for more details.

Your existing compiler is not working correctly, see config.log for
more details of what is failing.

There is no need to crosspost this to gcc and gcc-patches, please
follow up on gcc-help.

Jonathan


bug linear loop transforms

2010-03-29 Thread Alex Turjan
I'm writing to you regarding a possible bug in linear loop transforms.
The bug can be reproduced by compiling the attached C file with gcc 4.5.0
(20100204, 20100325) on an x86 machine.

The compiler flags that reproduce the error are:
-O2 -fno-inline -fno-tree-ch -ftree-loop-linear

If the compiler is run with:
-O2 -fno-inline -fno-tree-ch -fno-tree-loop-linear 
then the produced code is correct.


  #include <stdio.h>

int test (int n, int *a)
{
  int i, j;

  for (i = 0; i < n; i++)
{
  for (j = 0; j < n; j++)
{
  a[j] = i + n;
}
}


  if (a[0] != 31 || i + n - 1 != 31)
    printf("incorrect %d  %d \n", a[0], i+n-1);

  return 0;
}

int main (void)
{
  int a[16];
  test (16, a);
  return 0;
}


Re: bug linear loop transforms

2010-03-29 Thread Alexander Monakov
[gcc-bugs@ removed from Cc:]

On Mon, 29 Mar 2010, Alex Turjan wrote:

> I'm writing to you regarding a possible bug in linear loop transforms.
> The bug can be reproduced by compiling the attached C file with gcc 4.5.0
> (20100204, 20100325) on an x86 machine.
> 
> The compiler flags that reproduce the error are:
> -O2 -fno-inline -fno-tree-ch -ftree-loop-linear
> 
> If the compiler is run with:
> -O2 -fno-inline -fno-tree-ch -fno-tree-loop-linear 
> then the produced code is correct.

Instead of writing to a mailing list, please file a bug in GCC Bugzilla, as
described at http://gcc.gnu.org/bugs/ .  Posting bug reports to gcc-bugs@ does
not register them in Bugzilla, and so is not recommended.

Thanks.

Alexander Monakov


Is gcc-bugs archive down?

2010-03-29 Thread H.J. Lu
Hi,

Many comments for

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=43560

are missing from gcc-bugs archive:

http://gcc.gnu.org/ml/gcc-bugs/2010-03/

Is there a problem with gcc-bugs archive?

-- 
H.J.


Re: Ask for suggestions on init_caller_save

2010-03-29 Thread Jeff Law

On 03/23/10 21:30, Jie Zhang wrote:
I'm fixing a bug caused by uninitialized caller-save pass data.
One function in the test case uses the "optimize" attribute with the
"O2" option, so even with -O0 on the command line, GCC runs the
caller-save pass for that function. The problem is that
init_caller_save is only called in backend_init_target if
flag_caller_saves is set, and in this case flag_caller_saves is not
yet set when backend_init_target runs. I think there are several ways
to fix this bug, but I don't know which way I should/can go:


1. Always call init_caller_save in backend_init_target. But that seems 
wasteful for the common -O0 case.


2. Call init_caller_save in the IRA main function. But then it will be 
called multiple times unless we add a flag to remember whether it has 
already been called. Maybe we can reuse test_reg or test_mem: if they 
are NULL_TREE, call init_caller_save.


3. Call init_caller_save in handle_optimize_attribute: if 
flag_caller_saves is not set before parse_optimize_options but is set 
afterwards, call init_caller_save. Since multiple functions may use 
the optimize attribute, we again need a flag to remember whether 
init_caller_save has been called.


4. There are only three global functions in caller-save.c: 
init_save_areas, setup_save_areas, and save_call_clobbered_regs. We 
could just add a check at the beginning of each: if the data has not 
been initialized, call init_caller_save first.



Any suggestions?
I'd suggest #2 with a status flag indicating whether or not caller-saves 
has been initialized. That should be low enough overhead not to be a 
problem.


Jeff



Peculiar XPASS of gcc.dg/guality/inline-params.c

2010-03-29 Thread Martin Jambor
Hi,

I have run the testcase with the early inliner disabled and noticed
that gcc.dg/guality/inline-params.c XPASSes with early inlining and
XFAILs without it.  The reason for the (expected) failure is that
IPA-CP removes a parameter which is constant (but also unused?).  I
reckon this is the reason for the xfail mark and so I guess that early
inlining should be disabled in the particular testcase, am I right?

Thanks,

Martin


gmp 5.0.1 and gcc 4.5?

2010-03-29 Thread Jack Howarth
   I've not seen any discussion of testing gcc trunk
against the newer gmp 5.0 or 5.0.1 releases. Has anyone
done significant testing with the newer gmp releases
and are there any long term plans for bumping the
required gmp (assuming that any of the new features
or fixes are useful for gcc)? Thanks in advance for
any comments. I'm planning a gcc45 fink package once
gcc 4.5.0 is released and was considering whether
it made sense to depend on the newer gmp.
 Jack


Re: gmp 5.0.1 and gcc 4.5?

2010-03-29 Thread Joseph S. Myers
On Mon, 29 Mar 2010, Jack Howarth wrote:

>I've not seen any discussion of testing gcc trunk
> against the newer gmp 5.0 or 5.0.1 releases. Has anyone
> done significant testing with the newer gmp releases
> and are there any long term plans for bumping the
> required gmp (assuming that any of the new features
> or fixes are useful for gcc)? Thanks in advance for

GMP is mainly used via MPFR.  Thus, I'd expect the required version to be 
bumped if a new MPFR version was required that in turn required newer GMP, 
but otherwise there would be little use to a bump.  New MPFR would be 
required if needed for folding some function of use to GCC to fold 
(erfc_scaled has been mentioned as one it would be useful to the Fortran 
front end to have MPFR support for, for example, but SVN MPFR doesn't yet 
support it; if it gains support, that might justify a future increase in 
the required MPFR version).

-- 
Joseph S. Myers
jos...@codesourcery.com


Optimizing floating point *(2^c) and /(2^c)

2010-03-29 Thread Jeroen Van Der Bossche
I've recently written a program where taking the average of 2 floating
point numbers was a real bottleneck. I've looked into the assembly
generated by gcc -O3 and apparently gcc treats multiplication and
division by a hard-coded 2 like any other multiplication with a
constant. I think, however, that *(2^c) and /(2^c) for floating-point
numbers, where c is known at compile time, could be
optimized with the following pseudo-code:

e = exponent bits of the number
if (e > c && e < (0b111...11)-c) {
e += c or e -= c
} else {
do regular multiplication
}

Even further optimizations may be possible, such as bitshifting the
significand when e=0. However, that would require checking for a lot
of special cases and require so many conditional jumps that it's most
likely not going to be any faster.

I'm not skilled enough with assembly to write this myself and test
whether it actually performs faster than the current implementation.
Its performance will most likely also depend on the processor
architecture, and I could only test this code on one machine.
Therefore I ask those who are familiar with gcc's optimization
routines to give this two seconds of thought, as it is probably rather
easy to implement and many programs could benefit from it.

Greets,
Jeroen


Re: Ask for suggestions on init_caller_save

2010-03-29 Thread Jie Zhang

On 03/30/2010 12:11 AM, Jeff Law wrote:

On 03/23/10 21:30, Jie Zhang wrote:

I'm fixing a bug caused by uninitialized caller-save pass data.
One function in the test case uses the "optimize" attribute with the
"O2" option, so even with -O0 on the command line, GCC runs the
caller-save pass for that function. The problem is that
init_caller_save is only called in backend_init_target if
flag_caller_saves is set, and in this case flag_caller_saves is not
yet set when backend_init_target runs. I think there are several ways
to fix this bug, but I don't know which way I should/can go:

1. Always call init_caller_save in backend_init_target. But that seems
wasteful for the common -O0 case.

2. Call init_caller_save in the IRA main function. But then it will
be called multiple times unless we add a flag to remember whether it
has already been called. Maybe we can reuse test_reg or test_mem: if
they are NULL_TREE, call init_caller_save.

3. Call init_caller_save in handle_optimize_attribute: if
flag_caller_saves is not set before parse_optimize_options but is set
afterwards, call init_caller_save. Since multiple functions may use
the optimize attribute, we again need a flag to remember whether
init_caller_save has been called.

4. There are only three global functions in caller-save.c:
init_save_areas, setup_save_areas, and save_call_clobbered_regs. We
could just add a check at the beginning of each: if the data has not
been initialized, call init_caller_save first.


Any suggestions?

I'd suggest #2 with a status flag indicating whether or not caller-saves
has been initialized. That should be low enough overhead not to be a
problem.


Thanks. I will send a patch to gcc-patches and CC you.

--
Jie Zhang
CodeSourcery
(650) 331-3385 x735


Re: Optimizing floating point *(2^c) and /(2^c)

2010-03-29 Thread Geert Bosch

On Mar 29, 2010, at 13:19, Jeroen Van Der Bossche wrote:

> I've recently written a program where taking the average of 2 floating
> point numbers was a real bottleneck. I've looked into the assembly
> generated by gcc -O3 and apparently gcc treats multiplication and
> division by a hard-coded 2 like any other multiplication with a
> constant. I think, however, that *(2^c) and /(2^c) for floating
> points, where the c is known at compile-time, should be able to be
> optimized with the following pseudo-code:
> 
> e = exponent bits of the number
> if (e > c && e < (0b111...11)-c) {
> e += c or e -= c
> } else {
> do regular multiplication
> }
> 
> Even further optimizations may be possible, such as bitshifting the
> significand when e=0. However, that would require checking for a lot
> of special cases and require so many conditional jumps that it's most
> likely not going to be any faster.
> 
> I'm not skilled enough with assembly to write this myself and test if
> this actually performs faster than how it's implemented now. Its
> performance will most likely also depend on the processor
> architecture, and I could only test this code on one machine.
> Therefore I ask to those who are familiar with gcc's optimization
> routines to give this 2 seconds of thought, as this is probably rather
> easy to implement and many programs could benefit from this.

For any optimization suggestions, you should start with showing some real, 
compilable, code with a performance problem that you think the compiler could 
address. Please include details about compilation options, GCC versions and 
target hardware, as well as observed performance numbers. How do you see that 
averaging two floating point numbers is a bottleneck? This should only be a 
single addition and multiplication, and will execute in a nanosecond or so on a 
moderately modern system.

Your particular suggestion is flawed. Floating-point multiplication is very 
fast on most targets. It is hard to see how on any target with floating-point 
hardware, manual mucking with the representation can be a win. In particular, 
your sketch doesn't at all address underflow and overflow. Likely a complete 
implementation would be many times slower than a floating-point multiply.

  -Geert


Re: Optimizing floating point *(2^c) and /(2^c)

2010-03-29 Thread Tim Prince

On 3/29/2010 10:51 AM, Geert Bosch wrote:

On Mar 29, 2010, at 13:19, Jeroen Van Der Bossche wrote:

I've recently written a program where taking the average of 2 floating
point numbers was a real bottleneck. I've looked into the assembly
generated by gcc -O3 and apparently gcc treats multiplication and
division by a hard-coded 2 like any other multiplication with a
constant. I think, however, that *(2^c) and /(2^c) for floating
points, where the c is known at compile-time, should be able to be
optimized with the following pseudo-code:

e = exponent bits of the number
if (e > c && e < (0b111...11)-c) {
e += c or e -= c
} else {
do regular multiplication
}

Even further optimizations may be possible, such as bitshifting the
significand when e=0. However, that would require checking for a lot
of special cases and require so many conditional jumps that it's most
likely not going to be any faster.

I'm not skilled enough with assembly to write this myself and test if
this actually performs faster than how it's implemented now. Its
performance will most likely also depend on the processor
architecture, and I could only test this code on one machine.
Therefore I ask to those who are familiar with gcc's optimization
routines to give this 2 seconds of thought, as this is probably rather
easy to implement and many programs could benefit from this.

For any optimization suggestions, you should start with showing some real, 
compilable, code with a performance problem that you think the compiler could 
address. Please include details about compilation options, GCC versions and 
target hardware, as well as observed performance numbers. How do you see that 
averaging two floating point numbers is a bottleneck? This should only be a 
single addition and multiplication, and will execute in a nanosecond or so on a 
moderately modern system.

Your particular suggestion is flawed. Floating-point multiplication is very 
fast on most targets. It is hard to see how on any target with floating-point 
hardware, manual mucking with the representation can be a win. In particular, 
your sketch doesn't at all address underflow and overflow. Likely a complete 
implementation would be many times slower than a floating-point multiply.

   -Geert
gcc used to have the ability to replace division by a power of 2 with 
an fscale instruction, for appropriate targets (maybe it still does).  
Such targets have nearly disappeared from everyday usage.  What remains 
is the possibility of replacing division by a constant power of 2 with 
multiplication, but it's generally considered that the programmer 
should have done that in the first place.  icc has such a facility, but 
it's subject to -fp-model=fast (equivalent to gcc -ffast-math 
-fno-cx-limited-range), even though it's a totally safe conversion.
As Geert indicated, it's almost inconceivable that a correct 
implementation which takes care of exceptions could match the floating 
point hardware performance, even for a case which starts with operands 
in memory (though you mention the case following an addition).


--
Tim Prince



Re: Peculiar XPASS of gcc.dg/guality/inline-params.c

2010-03-29 Thread Jan Hubicka
> Hi,
> 
> I have run the testcase with the early inliner disabled and noticed
> that gcc.dg/guality/inline-params.c XPASSes with early inlining and
> XFAILs without it.  The reason for the (expected) failure is that
> IPA-CP removes a parameter which is constant (but also unused?).  I
> reckon this is the reason for the xfail mark and so I guess that early
> inlining should be disabled in the particular testcase, am I right?

Well, I guess we should be able to maintain debug info through IPA-CP
changes (the only case where debug info is difficult to maintain, IMO,
is the removal of unused arguments, which is explicitly disabled here).
So I guess in a way this is a correct XFAIL...

Honza
> 
> Thanks,
> 
> Martin


Re: GSoC 2010 Project Idea

2010-03-29 Thread Andi Kleen
Artem Shinkarov writes:

> Hi,
>
> I have a project in mind which I'm going to propose to the GCC in terms of
> Google Summer of Code. My project is not on the list of project ideas
> (http://gcc.gnu.org/wiki/SummerOfCode) that is why it would be very 
> interesting
> for me to hear any opinions and maybe even to find a mentor.

My guess is that the project is a bit too ambitious for a single 
summer. Perhaps try to scale it down to make it more manageable?

-Andi

-- 
a...@linux.intel.com -- Speaking for myself only.