Re: Ada Status in mainline

2005-05-26 Thread Andreas Jaeger
Diego Novillo <[EMAIL PROTECTED]> writes:

> On Wed, May 25, 2005 at 03:37:29PM -0600, Jeffrey A Law wrote:
>
>> So, if I wanted to be able to bootstrap Ada, what do I need
>> to do?  Disable VRP?
>>
> Applying the patches in the PRs I mentioned.  If that doesn't
> work, try with VRP disabled.

Does not work for me on powerpc64-linux-gnu; the compiler fails to
build with:

/aj-cvs/gcc/gcc/ada/atree.adb: In function 'Atree._Elabb':
/aj-cvs/gcc/gcc/ada/atree.adb:51: error: invariant not recomputed when 
ADDR_EXPR changed
&C.3356D.19258;

/aj-cvs/gcc/gcc/ada/atree.adb:51: error: invariant not recomputed when 
ADDR_EXPR changed
&C.3357D.19259;



Andreas
-- 
 Andreas Jaeger, [EMAIL PROTECTED], http://www.suse.de/~aj
  SUSE Linux Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: Compiling GCC with g++: a report

2005-05-26 Thread Gabriel Dos Reis
"Kaveh R. Ghazi" <[EMAIL PROTECTED]> writes:

|  > > Now we have e.g. XNEW* and all we need is a new -W* flag to catch
|  > > things like using C++ keywords and it should be fairly automatic to
|  > > keep incompatibilities out of the sources.
|  > 
|  > Why not this?
|  > 
|  > #ifndef __cplusplus
|  > #pragma GCC poison class template new . . .
|  > #endif
| 
| That's limited.  A new -W flag could catch not only this, but also
| other problems like naked void* -> FOO* conversions.  E.g. IIRC, the
| -Wtraditional flag eventually caught over a dozen different problems.
| Over time this new warning flag for c/c++ intersection could be
| similarly refined.
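
As an illustration (an editor's sketch, not from the thread), the kind of
construct such a warning would catch -- valid C that a C++ compiler rejects:

struct FOO { int x; };

void f (void *p)
{
  struct FOO *q = p;  /* naked void* -> FOO* conversion: C ok, C++ error */
  int class = 1;      /* C++ keyword used as an identifier */
  q->x = class;
}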

This is now

 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21759

-- Gaby


Re: GCC and Floating-Point

2005-05-26 Thread Vincent Lefevre
On 2005-05-25 19:27:21 +0200, Allan Sandfeld Jensen wrote:
> Yes. I still don't understand why gcc doesn't do -ffast-math by
> default like all other compilers.

No! And I really don't think that other compilers do that.
It would be very bad, would not conform to the C standard[*]
and would make lots of codes fail.

[*] See for instance:

   5.1.2.3  Program execution
[...]
   [#14] EXAMPLE 5 Rearrangement for floating-point expressions
   is  often  restricted because of limitations in precision as
   well as range.  The implementation  cannot  generally  apply
   the   mathematical   associative   rules   for  addition  or
   multiplication,  nor  the  distributive  rule,  because   of
   roundoff   error,  even  in  the  absence  of  overflow  and
   underflow.   Likewise,  implementations   cannot   generally
   replace decimal constants in order to rearrange expressions.
   In  the  following  fragment,  rearrangements  suggested  by
   mathematical rules for real numbers are often not valid (see
   F.8).

   double x, y, z;
   /* ... */
   x = (x * y) * z;  // not equivalent to x *= y * z;
   z = (x - y) + y ; // not equivalent to z = x;
   z = x + x * y;    // not equivalent to z = x * (1.0 + y);
   y = x / 5.0;  // not equivalent to y = x * 0.2;

> The people who need perfect standard behavior are a lot fewer than
> all the packagers who don't understand which optimization flags
> gcc should _always_ be called with.

Standard should be the default.

(Is this a troll or what?)

-- 
Vincent Lefèvre <[EMAIL PROTECTED]> - Web: 
100% accessible validated (X)HTML - Blog: 
Work: CR INRIA - computer arithmetic / SPACES project at LORIA


Re: GCC and Floating-Point (A proposal)

2005-05-26 Thread Scott Robert Ladd
Allan Sandfeld Jensen wrote:
>>Yes. I still don't understand why gcc doesn't do -ffast-math by
>>default like all other compilers.

Vincent Lefevre wrote:
> No! And I really don't think that other compilers do that.
> It would be very bad, would not conform to the C standard[*]
> and would make lots of codes fail.

Perhaps what needs to be changed is the definition of -ffast-math
itself. Some people (myself included) view it from the standpoint of
using the full capabilities of our processors' hardware intrinsics;
however, -ffast-math *also* implies the rearrangement of code that
violates Standard behavior. Thus it does two things that perhaps should
not be combined.

To be more pointed, it is -funsafe-math-optimizations (implied by
-ffast-math) that is in need of adjustment.

May I be so bold as to suggest that -funsafe-math-optimizations be
reduced in scope to perform exactly what its name implies:
transformations that may slightly alter the meaning of code. Then move
the use of hardware intrinsics to a new -fhardware-math switch.

Does anyone object if I experiment a bit with this modification? Or am I
completely wrong in my understanding?

..Scott


Re: GCC and Floating-Point (A proposal)

2005-05-26 Thread Richard Guenther
On 5/26/05, Scott Robert Ladd <[EMAIL PROTECTED]> wrote:
> Allan Sandfeld Jensen wrote:
> >>Yes. I still don't understand why gcc doesn't do -ffast-math by
> >>default like all other compilers.
> 
> Vincent Lefevre wrote:
> > No! And I really don't think that other compilers do that.
> > It would be very bad, would not conform to the C standard[*]
> > and would make lots of codes fail.
> 
> Perhaps what needs to be changed is the definition of -ffast-math
> itself. Some people (myself included) view it from the standpoint of
> using the full capabilities of our processors' hardware intrinsics;
> however, -ffast-math *also* implies the rearrangement of code that
> violates Standard behavior. Thus it does two things that perhaps should
> not be combined.
> 
> To be more pointed, it is -funsafe-math-optimizations (implied by
> -ffast-math) that is in need of adjustment.
> 
> May I be so bold as to suggest that -funsafe-math-optimizations be
> reduced in scope to perform exactly what its name implies:
> transformations that may slightly alter the meaning of code. Then move
> the use of hardware intrinsics to a new -fhardware-math switch.

I think the other options implied by -ffast-math apart from
-funsafe-math-optimizations should (and do?) enable the use of
hardware intrinsics already.  It's only that some of the optimizations
guarded by -funsafe-math-optimizations could be applied in general.
A good start may be to enumerate the transformations done on a
Wiki page and list the flags each is guarded with.

Richard.


Re: GCC and Floating-Point

2005-05-26 Thread Daniel Berlin



On Thu, 26 May 2005, Vincent Lefevre wrote:

> On 2005-05-25 19:27:21 +0200, Allan Sandfeld Jensen wrote:
> > Yes. I still don't understand why gcc doesn't do -ffast-math by
> > default like all other compilers.
>
> No! And I really don't think that other compilers do that.


Have you looked, or are you just guessing?

I know for a fact that XLC does it at -O3+, and unless I'm
misremembering, icc does it at -O2+.


Both require flags to turn the behavior *off* at those opt levels.

XLC will give you a warning when it sees itself making an optimization 
that may affect precision, saying that if you don't want this to happen, 
to use a flag.




Re: GCC and Floating-Point (A proposal)

2005-05-26 Thread Scott Robert Ladd
Scott Robert Ladd <[EMAIL PROTECTED]> wrote:
>>May I be so bold as to suggest that -funsafe-math-optimizations be
>>reduced in scope to perform exactly what its name implies:
>>transformations that may slightly alter the meaning of code. Then move
>>the use of hardware intrinsics to a new -fhardware-math switch.

Richard Guenther wrote:
> I think the other options implied by -ffast-math apart from
> -funsafe-math-optimizations should (and do?) enable the use of
> hardware intrinsics already.  It's only that some of the optimizations
> guarded by -funsafe-math-optimizations could be applied in general.
> A good start may be to enumerate the transformations done on a
> Wiki page and list the flags each is guarded with.

Unless I've missed something obvious, -funsafe-math-optimizations alone
enables most hardware floating-point intrinsics -- on x86_64 and x86, at
least. For example, consider a simple line of code that takes the
sine of a constant:

x = sin(1.0);

On the Pentium 4, with GCC 4.0, various command lines produced the
following code:

gcc -S -O3 -march=pentium4

movl    $1072693248, 4(%esp)
call    sin
fstpl   4(%esp)

gcc -S -O3 -march=pentium4 -D__NO_MATH_INLINES

movl    $1072693248, 4(%esp)
call    sin
fstpl   4(%esp)

gcc -S -O3 -march=pentium4 -funsafe-math-optimizations

fld1
fsin
fstpl   4(%esp)

gcc -S -O3 -march=pentium4 -funsafe-math-optimizations \
  -D__NO_MATH_INLINES

fld1
fsin
fstpl   4(%esp)

As you can see, it is -funsafe-math-optimizations alone that determines
the use of hardware intrinsics, on the P4 at least.

As a side note, GCC 4.0 on the Opteron produces the same result with all
four command-line variations:

gcc -S -O3 -march=k8
movlpd  .LC2(%rip), %xmm0
call    sin

gcc -S -O3 -march=k8 -D__NO_MATH_INLINES
movlpd  .LC2(%rip), %xmm0
call    sin

gcc -S -O3 -march=k8 -funsafe-math-optimizations
movlpd  .LC2(%rip), %xmm0
call    sin

gcc -S -O3 -march=k8 -funsafe-math-optimizations -D__NO_MATH_INLINES
movlpd  .LC2(%rip), %xmm0
call    sin


..Scott


Re: GCC and Floating-Point (A proposal)

2005-05-26 Thread Paul Brook
On Thursday 26 May 2005 14:25, Scott Robert Ladd wrote:
> Scott Robert Ladd <[EMAIL PROTECTED]> wrote:
> >>May I be so bold as to suggest that -funsafe-math-optimizations be
> >>reduced in scope to perform exactly what its name implies:
> >>transformations that may slightly alter the meaning of code. Then move
> >>the use of hardware intrinsics to a new -fhardware-math switch.
>
> Richard Guenther wrote:
> > I think the other options implied by -ffast-math apart from
> > -funsafe-math-optimizations should (and do?) enable the use of
> > hardware intrinsics already.  It's only that some of the optimizations
> > guarded by -funsafe-math-optimizations could be applied in general.
> > A good start may be to enumerate the transformations done on a
> > Wiki page and list the flags each is guarded with.
>
> Unless I've missed something obvious, -funsafe-math-optimizations alone
> enables most hardware floating-point intrinsics -- on x86_64 and x86, at
> least. For example, consider a simple line of code that takes the
> sine of a constant:

I thought the x86 sin/cos intrinsics were unsafe, i.e. they don't give
accurate results in all cases.

Paul


Re: GCC and Floating-Point (A proposal)

2005-05-26 Thread Gabriel Dos Reis
Paul Brook <[EMAIL PROTECTED]> writes:

| On Thursday 26 May 2005 14:25, Scott Robert Ladd wrote:
| > Scott Robert Ladd <[EMAIL PROTECTED]> wrote:
| > >>May I be so bold as to suggest that -funsafe-math-optimizations be
| > >>reduced in scope to perform exactly what its name implies:
| > >>transformations that may slightly alter the meaning of code. Then move
| > >>the use of hardware intrinsics to a new -fhardware-math switch.
| >
| > Richard Guenther wrote:
| > > I think the other options implied by -ffast-math apart from
| > > -funsafe-math-optimizations should (and do?) enable the use of
| > > hardware intrinsics already.  It's only that some of the optimizations
| > > guarded by -funsafe-math-optimizations could be applied in general.
| > > A good start may be to enumerate the transformations done on a
| > > Wiki page and list the flags each is guarded with.
| >
| > Unless I've missed something obvious, -funsafe-math-optimizations alone
| > enables most hardware floating-point intrinsics -- on x86_64 and x86, at
| > least. For example, consider a simple line of code that takes the
| > sine of a constant:
| 
| I thought the x86 sin/cos intrinsics were unsafe, i.e. they don't
| give accurate results in all cases.

Indeed.

-- Gaby


Re: GCC and Floating-Point (A proposal)

2005-05-26 Thread Scott Robert Ladd
Paul Brook wrote:
> I thought the x86 sin/cos intrinsics were unsafe, i.e. they don't give
> accurate results in all cases.

If memory serves, Intel's fsin (for example) has an error > 1 ulp for
values close to multiples of pi (2pi, for example).

Now, I'm not certain this is true for the K8 and later Pentiums. Looks
like I need to run another round of tests. ;)

..Scott



Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Let's consider the accuracy of sine and cosine. I've run tests as
follows, using a program provided at the end of this message.

On the Opteron, using GCC 4.0.0 release, the command lines produce these
outputs:

-lm -O3 -march=k8 -funsafe-math-optimizations -mfpmath=387

  generates:
  fsincos

  cumulative accuracy:   60.830074998557684 (binary)
 18.311677213055471 (decimal)

-lm -O3 -march=k8 -mfpmath=387

  generates:
  call sin
  call cos

  cumulative accuracy:   49.415037499278846 (binary)
 14.875408524143376 (decimal)

-lm -O3 -march=k8 -funsafe-math-optimizations

  generates:
  call sin
  call cos

  cumulative accuracy:   47.476438043942984 (binary)
 14.291831938509427 (decimal)

-lm -O3 -march=k8

  generates:
  call sin
  call cos

  cumulative accuracy:   47.476438043942984 (binary)
 14.291831938509427 (decimal)

The default for Opteron is -mfpmath=sse; as has been discussed in other
threads, this may not be a good choice. I also note that using
-funsafe-math-optimizations (and thus the combined fsincos instruction)
*increases* accuracy.

On the Pentium4, using the same version of GCC, I get:

-lm -O3 -march=pentium4 -funsafe-math-optimizations

  cumulative accuracy:   63.000 (binary)
 18.964889726830815 (decimal)

-lm -O3 -march=pentium4

  cumulative accuracy:   49.299560281858909 (binary)
 14.840646417884166 (decimal)

-lm -O3 -march=pentium4 -funsafe-math-optimizations -mfpmath=sse

  cumulative accuracy:   47.476438043942984 (binary)
 14.291831938509427 (decimal)

The program used is below. I'm very open to suggestions about this
program, which is a subset of a larger accuracy benchmark I'm writing
(Subtilis).

#include <fenv.h>
#pragma STDC FENV_ACCESS ON
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <string.h>

static bool verbose = false;
#define PI 3.14159265358979323846

// Test floating point accuracy
inline double binary_accuracy(double x)
{
return -(log(fabs(x)) / log(2.0));
}

inline double decimal_accuracy(double x)
{
return -(log(fabs(x)) / log(10.0));
}

// accuracy of trigonometric functions
void trigtest()
{
static const double range = PI; // * 2.0;
static const double incr  = PI / 100.0;

if (verbose)
   printf("  xdiff accuracy\n");

double final = 1.0;
double x;

for (x = -range; x <= range; x += incr)
{
double s1  = sin(x);
double c1  = cos(x);
double one = s1 * s1 + c1 * c1;
double diff = one - 1.0;
final *= one;

double accuracy1 = binary_accuracy(diff);

if (verbose)
printf("%20.15f %14g %20.15f\n",x,diff,accuracy1);
}

final -= 1.0;

printf("\ncumulative accuracy: %20.15f (binary)\n",
   binary_accuracy(final));

printf(" %20.15f (decimal)\n",
   decimal_accuracy(final));
}

// Entry point
int main(int argc, char ** argv)
{
int i;

// do we have verbose output?
if (argc > 1)
{
for (i = 1; i < argc; ++i)
{
if (!strcmp(argv[i],"-v"))
{
verbose = true;
break;
}
}
}


// run tests
trigtest();

// done
return 0;
}

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Andrew Haley
Scott Robert Ladd writes:
 > 
 > The program used is below. I'm very open to suggestions about this
 > program, which is a subset of a larger accuracy benchmark I'm writing
 > (Subtilis).

Try this:

public class trial
{
  static public void main (String[] argv)
  {
System.out.println(Math.sin(Math.pow(2.0, 90.0)));
  }
}

zapata:~ $ gcj trial.java --main=trial -ffast-math -O 
zapata:~ $ ./a.out 
1.2379400392853803E27
zapata:~ $ gcj trial.java --main=trial -ffast-math   
zapata:~ $ ./a.out 
-0.9044312486086016

Andrew.


Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Andrew Haley wrote:
> Try this:
> 
> public class trial
> {
>   static public void main (String[] argv)
>   {
> System.out.println(Math.sin(Math.pow(2.0, 90.0)));
>   }
> }
> 
> zapata:~ $ gcj trial.java --main=trial -ffast-math -O 
> zapata:~ $ ./a.out 
> 1.2379400392853803E27
> zapata:~ $ gcj trial.java --main=trial -ffast-math   
> zapata:~ $ ./a.out 
> -0.9044312486086016

You're comparing apples and oranges, since C (my code) and Java differ
in their definitions and implementations of floating-point.

I don't build gcj these days; however, when I have a moment later, I'll
build the latest GCC mainline from CVS -- with Java -- and see how it
reacts to my Java version of my benchmark. I also have a Fortran 95
version as well, so I guess I might as well try several languages, and
see what we get.

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Paolo Carlini
Andrew Haley wrote

> zapata:~ $ gcj trial.java --main=trial -ffast-math -O
  ^^

Ok, maybe those people that are accusing the Free Software philosophy of
being akin to communism are wrong, but it looks like revolutionaries are
lurking around, at least... ;) ;)

Paolo.


Re: Sine and Cosine Accuracy

2005-05-26 Thread Andrew Haley
Scott Robert Ladd writes:
 > Andrew Haley wrote:
 > > Try this:
 > > 
 > > public class trial
 > > {
 > >   static public void main (String[] argv)
 > >   {
 > > System.out.println(Math.sin(Math.pow(2.0, 90.0)));
 > >   }
 > > }
 > > 
 > > zapata:~ $ gcj trial.java --main=trial -ffast-math -O 
 > > zapata:~ $ ./a.out 
 > > 1.2379400392853803E27
 > > zapata:~ $ gcj trial.java --main=trial -ffast-math   
 > > zapata:~ $ ./a.out 
 > > -0.9044312486086016
 > 
 > You're comparing apples and oranges, since C (my code) and Java differ
 > in their definitions and implementations of floating-point.

So try it in C.   -ffast-math won't be any better.

#include <stdio.h>
#include <math.h>

int
main (int argc, char **argv)
{
  printf ("%g\n", sin (pow (2.0, 90.0)));
  return 0;
}

Andrew.


Re: Sine and Cosine Accuracy

2005-05-26 Thread Richard Henderson
On Thu, May 26, 2005 at 10:34:14AM -0400, Scott Robert Ladd wrote:
> static const double range = PI; // * 2.0;
> static const double incr  = PI / 100.0;

The trig insns fail with large numbers; an argument
reduction loop is required with their use.


r~


Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Richard Henderson wrote:
> On Thu, May 26, 2005 at 10:34:14AM -0400, Scott Robert Ladd wrote:
> 
>>static const double range = PI; // * 2.0;
>>static const double incr  = PI / 100.0;
> 
> 
> The trig insns fail with large numbers; an argument
> reduction loop is required with their use.

Yes, but within the defined mathematical ranges for sine and cosine --
[0, 2 * PI) -- the processor intrinsics are quite accurate.

Now, I can see a problem in signal processing or similar applications,
where you're working with continuous values over a large range, but it
seems to me that a simple application of fmod (via FPREM) solves that
problem nicely.
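
A minimal sketch of that reduction (an illustration, assuming the POSIX
M_PI constant; note that the reduction itself rounds, since 2.0 * M_PI is
not exactly 2*pi):

#include <math.h>

double sin_reduced (double x)
{
    double r = fmod (x, 2.0 * M_PI);  /* maps any finite x into (-2*PI, 2*PI) */
    if (r < 0.0)
        r += 2.0 * M_PI;              /* fmod keeps the sign of its first operand */
    return sin (r);                   /* argument now in [0, 2*PI) */
}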

I've never quite understood the necessity for performing trig operations
on excessively large values, but perhaps my problem domain hasn't
included such applications.

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Paul Koning
> "Scott" == Scott Robert Ladd <[EMAIL PROTECTED]> writes:

 Scott> Richard Henderson wrote:
 >> On Thu, May 26, 2005 at 10:34:14AM -0400, Scott Robert Ladd wrote:
 >> 
 >>> static const double range = PI; // * 2.0; static const double
 >>> incr = PI / 100.0;
 >> 
 >> 
 >> The trig insns fail with large numbers; an argument reduction loop
 >> is required with their use.

 Scott> Yes, but within the defined mathematical ranges for sine and
 Scott> cosine -- [0, 2 * PI) -- the processor intrinsics are quite
 Scott> accurate.

Huh?  Sine and cosine are mathematically defined for all finite
inputs. 

Yes, normally the first step is to reduce the arguments to a small
range around zero and then do the series expansion after that, because
the series expansion converges fastest near zero.  But sin(100) is
certainly a valid call, even if not a common one.

  paul



Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Paul Koning wrote:
>  Scott> Yes, but within the defined mathematical ranges for sine and
>  Scott> cosine -- [0, 2 * PI) -- the processor intrinsics are quite
>  Scott> accurate.
> 
> Huh?  Sine and consine are mathematically defined for all finite
> inputs. 

Defined, yes. However, I'm speaking as a mathematician in this case, not
a programmer. Pick up a trig book, and it will have a statement similar
to this one, taken from a text (Trigonometry Demystified, Gibilisco,
McGraw-Hill, 2003) randomly grabbed from the shelf next to me:

"These trigonometric identities apply to angles in the *standard range*
of 0 rad <= theta < 2 * PI rad. Angles outside the standard range are
converted to values within the standard range by adding or subtracting
the appropriate multiple of 2 * PI rad. You might hear of an angle with
negative measurement or with a measure more than 2 * PI rad, but this
can always be converted..."

I can assure you that other texts (of which I have several) make similar
statements.

> Yes, normally the first step is to reduce the arguments to a small
> range around zero and then do the series expansion after that, because
> the series expansion convergest fastest near zero.  But sin(100) is
> certainly a valid call, even if not a common one.

I *said* that such statements are outside the standard range of
trigonometric identities. Writing sin(100) is not a matter of necessity,
nor should people using "regular" math be penalized in speed or accuracy
for extreme cases.

..Scott


RE: Sine and Cosine Accuracy

2005-05-26 Thread Dave Korn
Original Message
>From: Scott Robert Ladd
>Sent: 26 May 2005 17:32

> Paul Koning wrote:
>>  Scott> Yes, but within the defined mathematical ranges for sine and
>>  Scott> cosine -- [0, 2 * PI) -- the processor intrinsics are quite 
>> Scott> accurate. 
>> 
>> Huh?  Sine and cosine are mathematically defined for all finite
>> inputs.
> 
> Defined, yes. However, I'm speaking as a mathematician in this case, not
> a programmer. Pick up a trig book, and it will have a statement similar
> to this one, taken from a text (Trigonometry Demystified, Gibilisco,
> McGraw-Hill, 2003) randomly grabbed from the shelf next to me:
> 
> "These trigonometric identities apply to angles in the *standard range*
> of 0 rad <= theta < 2 * PI rad. 

  It's difficult to tell from that quote, which lacks sufficient context,
but you *appear* at first glance to be conflating the fundamental
trigonometric *functions* with the trigonometric *identities* that are
generally built up from those functions.  That is to say, you appear to be
quoting a statement that says

" Identities such as
sin(x)^2 + cos(x)^2 === 1
  are only valid when 0 <= x <= 2*PI"

and interpreting it to imply that 

"   sin(x)
  is only valid when 0 <= x <= 2*PI"

which, while it may or may not be true for other reasons, certainly is a
non-sequitur from the statement above.

  And in fact, and in any case, this is a perfect illustration of the point,
because what we're discussing here is *not* the behaviour of the
mathematical sine and cosine functions, but the behaviour of the C runtime
library functions sin(...) and cos(...), which are defined by the language
spec rather than by the strictures of mathematics.  And that spec makes *no*
restriction on what values you may supply as inputs, so gcc had better
implement sin and cos in a way that doesn't require the programmer to have
reduced the arguments beforehand, or it won't be ANSI compliant.

  Not only that, but if you don't use -funsafe-math-optimisations, gcc emits
libcalls to sin/cos functions, which I'll bet *do* reduce their arguments to
that range before doing the computation, (and which might indeed even be
clever enough to use the intrinsic, and can encapsulate the knowledge that
that intrinsic can only be used on arguments within a more limited range
than are valid for the C library function which they are being used to
implement).

  When you use -funsafe-math-optimisations, one of those optimisations is to
assume that you're not going to be using the full range of arguments that
POSIX/ANSI say is valid for the sin/cos functions, but that you're going to
be using values that are already folded into the range around zero, and so
it optimises away the libcall and the reduction with it and just uses the
intrinsic to implement the function.  But the intrinsic does not actually
implement the function as specified by ANSI, since it doesn't accept the
same range of inputs, and therefore it is *not* a suitable transformation to
ever apply except when the user has explicitly specified that they want to
live dangerously.  So in terms of your earlier suggestion:


> May I be so bold as to suggest that -funsafe-math-optimizations be
> reduced in scope to perform exactly what its name implies:
> transformations that may slightly alter the meaning of code. Then move
> the use of hardware intrinsics to a new -fhardware-math switch.


... I am obliged to point out that using the hardware intrinsics *IS* an
unsafe optimisation, at least in this case!

cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: Sine and Cosine Accuracy

2005-05-26 Thread Kevin Handy

Paul Koning wrote:

> > "Scott" == Scott Robert Ladd <[EMAIL PROTECTED]> writes:
>
>  Scott> Richard Henderson wrote:
>  >> On Thu, May 26, 2005 at 10:34:14AM -0400, Scott Robert Ladd wrote:
>  >>
>  >>> static const double range = PI; // * 2.0; static const double
>  >>> incr = PI / 100.0;
>  >>
>  >> The trig insns fail with large numbers; an argument reduction loop
>  >> is required with their use.
>
>  Scott> Yes, but within the defined mathematical ranges for sine and
>  Scott> cosine -- [0, 2 * PI) -- the processor intrinsics are quite
>  Scott> accurate.
>
> Huh?  Sine and cosine are mathematically defined for all finite
> inputs.
>
> Yes, normally the first step is to reduce the arguments to a small
> range around zero and then do the series expansion after that, because
> the series expansion converges fastest near zero.  But sin(100) is
> certainly a valid call, even if not a common one.
>
>   paul


But, you are using a number in the range of 2^90, only
have 64 bits for storing the floating point representation, and
some of that is needed for the exponent.
2^90 would require 91 bits for the base alone (as an integer
value), plus a couple more for the '*PI' portion, and then
more for the exponent. And that wouldn't include anything
past the decimal point.
You are more than 30 bits short of getting a crappy result.

sin/cos/... is essentially based on the mod(n, PI) value.
To get 360 unique values, you need at least the 9 lower
bits of the number. You don't have them. That portion
of the number has fallen off the end of the representation,
and is forever lost. All you are calculating is noise.

To see this, try printing 'cos(n) - cos(n+1.0)'. If you get
something close to '0', you are outside of the function's
useful range, or just unlucky enough to be on opposite
sides of a hump (n*PI-1/2, and friends).

Or easier, try '(n + 1.0) - n'. If you don't get something
close to 1.0, you've lost.

$ vi check.c
#include <stdio.h>
#include <math.h>

#define PI 3.1415926535 /* Accurate enough for this test */

int main()
{
   double n = PI * pow(2.0, 90.0);

   printf("Test Add %f\n", (n+1) -n);
   printf("Test cos %f\n", cos(n) - cos(n+1));
}

$ gcc check.c  -lm
$ ./a.out
Test Add 0.000000
Test cos -0.000000



Re: Sine and Cosine Accuracy

2005-05-26 Thread David Daney

Dave Korn wrote:

> " Identities such as
> sin(x)^2 + cos(x)^2 === 1
>   are only valid when 0 <= x <= 2*PI"

It's been a while since I studied math, but isn't that particular
identity true for any x, real or complex?


David Daney,



Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Dave Korn wrote:
>   It's difficult to tell from that quote, which lacks sufficient context,
> but you *appear* at first glance to be conflating the fundamental
> trigonometric *functions* with the trigonometric *identities* that are
> generally built up from those functions.  That is to say, you appear to be
> quoting a statement that says

Perhaps I didn't say it as clearly as I should have, but I do indeed know
the difference between the implementation and definition of the
trigonometric identities.

The tradeoff is between absolute adherence to the C standard and the
need to provide fast, accurate results for people who know their math.
What I see is a focus (in some areas like math) on complying with the
standard, to the exclusion of people who need speed. Both needs can be met.

> And in fact, and in any case, this is a perfect illustration of the point,
> because what we're discussing here is *not* the behaviour of the
> mathematical sine and cosine functions, but the behaviour of the C runtime
> library functions sin(...) and cos(...), which are defined by the language
> spec rather than by the strictures of mathematics.

The sin() and cos() functions, in theory, implement the behavior of the
mathematical sine and cosine identities, so the two can not be
completely divorced. I believe it is, at the very least, misleading to
claim that the hardware intrinsics are "unsafe".

> And that spec makes *no*
> restriction on what values you may supply as inputs, so gcc had better
> implement sin and cos in a way that doesn't require the programmer to have
> reduced the arguments beforehand, or it won't be ANSI compliant.

I'm not asking that the default behavior of the compiler be non-ANSI;
I'm asking that we give non-pejorative options to people who know what
they are doing and need greater speed. The -funsafe-math-optimizations
encompasses more than hardware intrinsics, and I don't see why
separating the hardware intrinsics into their own option
(-fhardware-math) is unreasonable, for folk who want the intrinsics but
not the other transformations.

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Paul Koning
> "Kevin" == Kevin Handy <[EMAIL PROTECTED]> writes:

 Kevin> But, you are using a number in the range of 2^90, only have 64
 Kevin> bits for storing the floating point representation, and some
 Kevin> of that is needed for the exponent.

Fair enough, so with 64 bit floats you have no right to expect an
accurate answer for sin(2^90).  However, you DO have a right to expect
an answer in the range [-1,+1] rather than the 1.2e+27 that Richard
quoted.  I see no words in the description of
-funsafe-math-optimizations to lead me to expect such a result.

paul



Re: Sine and Cosine Accuracy

2005-05-26 Thread Morten Welinder
> Yes, but within the defined mathematical ranges for sine and cosine --
> [0, 2 * PI) -- the processor intrinsics are quite accurate.

If you were to look up a serious math book like Abramowitz & Stegun 1965
you would see a definition like

sin z = (exp(iz) - exp(-iz))/(2i)   [4.3.1]

for all complex numbers, thus in particular valid for z=x+0i for all real x.
If you wanted to stick to reals only, a serious math text would probably use
the series expansion around zero [4.3.65]
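
(For reference, that expansion is the familiar Taylor series about zero,

   sin z = z - z^3/3! + z^5/5! - z^7/7! + ...

which converges for every finite z, real or complex.)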

And there is the answer to your question: if you just think of "sin" as
something with angles and triangles, then sin(2^90) makes very little
sense.  But "sin" occurs in other places where there are no triangles in
sight.  For example:

  Gamma(z)Gamma(1-z) = pi/sin(z pi)   [6.1.17]

or in series expansions of the cdf for the Student t distribution [26.7.4]

Morten


GCC 3.3.6 has been released

2005-05-26 Thread Gabriel Dos Reis

I'm pleased to announce that GCC 3.3.6 has been released. 

  This version is a minor release, fixing regressions in GCC 3.3.5
with respect to previous versions of GCC.  It can be downloaded from
the FTP servers listed here

  http://www.gnu.org/order/ftp.html


  The list of changes is available at

   http://gcc.gnu.org/gcc-3.3/changes.html


  This release is the last from the 3.3.x series.


  Many thanks to the huge GCC community who contributed to the
completion of this release.

-- 
Gabriel Dos Reis
 [EMAIL PROTECTED]
Texas A&M University -- Department of Computer Science
301, Bright Building -- College Station, TX 77843-3112


RE: Sine and Cosine Accuracy

2005-05-26 Thread Dave Korn
Original Message
>From: David Daney
>Sent: 26 May 2005 18:23

> Dave Korn wrote:
> 
>> " Identities such as
>> sin(x)^2 + cos(x)^2 === 1
>>   are only valid when 0 <= x <= 2*PI"
>> 
> 
> It's been a while since I studied math, but isn't that particular
> identity true for any x, real or complex?
> 
> David Daney,


  Yes, that was solely an example of the difference between 'identities' and
'functions', for illustration, in case there was any ambiguity in the
language, but was not meant to be an example of an *actual* identity that
has a restriction on the valid range of inputs.  Sorry for not being
clearer.


cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Morten Welinder wrote:
> If you were to look up a serious math book like Abramowitz & Stegun 1965
> you would see a definition like
>
> sin z = (exp(iz) - exp(-iz))/(2i)   [4.3.1]

Very true. However, the processor doesn't implement intrinsics for
complex functions -- well, maybe some do, and I've never encountered them!

As such, I was sticking to a discussion specific to reals.


> And there is the answer to your question: if you just think of "sin" as
> something with angles and triangles, then sin(2^90) makes very little
> sense.  But "sin" occurs in other places where there are no triangles in
> sight.

That's certainly true; the use of sine and cosine depends on the
application. I don't deny that many applications need to perform sin()
on any double value; however, there are also many applications where you
*are* dealing with angles.

I recently wrote a GPS application where using the intrinsics improved
both accuracy and speed (the latter substantially), and using those
intrinsics was only "unsafe" because -funsafe-math-optimizations
includes other transformations.

I am simply lobbying for the separation of hardware intrinsics from
-funsafe-math-optimizations.

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Paul Koning
> "Scott" == Scott Robert Ladd <[EMAIL PROTECTED]> writes:

 Scott> Dave Korn wrote:
 >> It's difficult to tell from that quote, which lacks sufficient
 >> context, but you *appear* at first glance to be conflating the
 >> fundamental trignometric *functions* with the trignometric
 >> *identities* that are generally built up from those functions.
 >> That is to say, you appear to be quoting a statement that says

 Scott> Perhaps I didn't say it as clearly as I should, but I do
 Scott> indeed know the difference between the implementation and
 Scott> definition of the trigonometric identifies.

 Scott> The tradeoff is between absolute adherence to the C standard
 Scott> and the need to provide fast, accurate results for people who
 Scott> know their math. 

I'm really puzzled by that comment, partly because the text book quote
you gave doesn't match any math I ever learned.  Does "knowing your
math" translates to "believing that trig functions should be applied
only to arguments in the range 0 to 2pi"?  If so, I must object.

What *may* make sense is the creation of a new option (off by default)
that says "you're allowed to assume that all calls to trig functions
have arguments in the range x..y".  Then the question to be answered
is what x and y should be.  A possible answer is 0 and 2pi; another
answer that some might prefer is -pi to +pi.  Or it might be -2pi to
+2pi to accommodate both preferences at essentially no cost.

 paul




RE: Sine and Cosine Accuracy

2005-05-26 Thread Dave Korn
Original Message
>From: Scott Robert Ladd
>Sent: 26 May 2005 18:36

 
> I am simply lobbying for the separation of hardware intrinsics from
> -funsafe-math-optimizations.

  Well, as long as they're under the control of a flag that also makes it
clear that they are *also* unsafe math optimisations, I wouldn't object.

  But you can't just replace a call to the ANSI C 'sin' function with an
invocation of the x87 fsin intrinsic, because they aren't the same, and the
intrinsic is non-ansi-compliant.


cheers,
  DaveK
-- 
Can't think of a witty .sigline today



RFD: what to do about stale REG_EQUAL notes in dead_or_predictable

2005-05-26 Thread Joern RENNECKE

I wonder what is best to do about rtl-optimization/21767.

We sometimes have REG_EQUAL notes that are only true when
the instruction stays exactly where it is, like:

(insn 11 10 12 0 (set (reg:SI 147 t)
   (eq:SI (reg/v:SI 159 [ i ])
   (reg:SI 161))) 1 {cmpeqsi_t} (nil)
   (expr_list:REG_EQUAL (eq:SI (reg/v:SI 159 [ i ])
   (const_int 2345678 [0x23cace]))
   (nil)))

(jump_insn 12 11 37 0 (set (pc)
   (if_then_else (eq (reg:SI 147 t)
   (const_int 0 [0x0]))
   (label_ref 17)
   (pc))) 201 {branch_false} (nil)
   (expr_list:REG_BR_PROB (const_int 7100 [0x1bbc])
   (nil)))
;; End of basic block 0, registers live:
(nil)

;; Start of basic block 1, registers live: (nil)
(note 37 12 14 1 [bb 1] NOTE_INSN_BASIC_BLOCK)

(insn 14 37 15 1 (set (reg/v:SI 160 [ r ])
   (reg/v:SI 159 [ i ])) 168 {movsi_ie} (nil)
   (expr_list:REG_EQUAL (const_int 2345678 [0x23cace])
   (nil)))

if-conversion changes this to

(insn 11 10 14 0 (set (reg:SI 147 t)
   (eq:SI (reg/v:SI 159 [ i ])
   (reg:SI 161))) 1 {cmpeqsi_t} (nil)
   (expr_list:REG_EQUAL (eq:SI (reg/v:SI 159 [ i ])
   (const_int 2345678 [0x23cace]))
   (nil)))

(insn 14 11 12 0 (set (reg/v:SI 160 [ r ])
   (reg/v:SI 159 [ i ])) 168 {movsi_ie} (nil)
   (expr_list:REG_EQUAL (const_int 2345678 [0x23cace])
   (nil)))

(jump_insn 12 14 38 0 (set (pc)
   (if_then_else (ne (reg:SI 147 t)
   (const_int 0 [0x0]))
   (label_ref:SI 21)
   (pc))) 200 {branch_true} (nil)
   (expr_list:REG_BR_PROB (const_int 2900 [0xb54])
   (nil)))
;; End of basic block 0, registers live:
(nil)

so the REG_EQUAL note on insn 14 is no longer true.
In general, whether a REG_EQUAL note remains valid is not computable.
(Any REG_EQUAL note is trivially valid if its insn is unreachable.)
Even where it is computable, you'd probably need as much complexity
and target-dependent knowledge to prove it as if you were computing
the equality from scratch.

So, I think our main options are to remove all REG_EQUAL notes
of insns that are moved above a branch, or to change the value to reflect
the condition it depends on.  I.e. we could have an UNKNOWN rtx
to describe an unknown value in a REG_EQUAL note (A note
with a bare UNKNOWN value would be meaningless and should be
removed), and then express the note for insn 14 as:

(expr_list:REG_EQUAL (if_then_else (ne (reg:SI 147 t) (const_int 0 [0x0]))
   (const_int 2345678 [0x23cace])
   (unknown))
   (nil))
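
A minimal sketch of the first option -- dropping the note when an insn is
hoisted -- assuming the usual rtl.h helpers find_reg_note and remove_note;
untested pseudocode, not a patch:

/* Called on an insn that if-conversion is about to move above a branch;
   its REG_EQUAL note may no longer hold at the new location.  */
static void
strip_stale_equal_note (rtx insn)
{
  rtx note = find_reg_note (insn, REG_EQUAL, NULL_RTX);
  if (note)
    remove_note (insn, note);
}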


Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Paul Koning wrote:
> I'm really puzzled by that comment, partly because the text book quote
> you gave doesn't match any math I ever learned.  Does "knowing your
> math" translates to "believing that trig functions should be applied
> only to arguments in the range 0 to 2pi"?  If so, I must object.

I'll correct myself to say "people who know their application." ;) Some
apps need sin() over all possible doubles, while other applications need
sin() over the range of angles.

> What *may* make sense is the creation of a new option (off by default)
> that says "you're allowed to assume that all calls to trig functions
> have arguments in the range x..y".  Then the question to be answered
> is what x and y should be.  A possible answer is 0 and 2pi; another
> answer that some might prefer is -pi to +pi.  Or it might be -2pi to
> +2pi to accommodate both preferences at essentially no cost.

I prefer breaking out the hardware intrinsics from
-funsafe-math-optimizations, such that people can compile to use their
hardware *without* the other transformations implicit in the current
collective.

If someone can explain how this hurts anything, please let me know.

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Morten Welinder
> But, you are using a number in the range of 2^90, only
> have 64 bits for storing the floating point representation, and
> some of that is needed for the exponent.
> 2^90 would require 91 bits for the base alone (as an integer
> value), plus a couple more for the '*PI' portion, and then
> more for the exponent. And that wouldn't include anything
> past the decimal point.
> You are more than 30 bits short of getting a crappy result.

This is not right.

Floating point variables are perfectly capable of holding _some_ rather large
floating point numbers completely accurately.  The integer 2^90 is one of them
(on a standard base-2 machine).

It is true that its nearest neighbours are about 2^30 away, but that is not
relevant for the accuracy of 2^90.  pow(2.0,90.0) generates that value
with no error.
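
A quick check of that claim (an editor's illustration, not from the
thread; assumes IEEE doubles and a correctly rounded pow):

#include <math.h>
#include <stdio.h>

int main (void)
{
    double x = pow (2.0, 90.0);
    /* ldexp builds 1.0 * 2^90, which is exactly representable in a double */
    printf ("%d\n", x == ldexp (1.0, 90));  /* should print 1 */
    return 0;
}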

Therefore sin(2^90) is a well defined number somewhere between -1 and 1
and the C fragment

sin(pow(2.0,90.0))

should calculate it with at most one wrong bit at the end.  (You get a one-bit
allowance since "sin" is trancendental.)

Morten
(who wouldn't be too surprised if some "sin" implementations failed to do that)


Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Dave Korn wrote:
>   Well, as long as they're under the control of a flag that also makes it
> clear that they are *also* unsafe math optimisations, I wouldn't object.

But they are *not* unsafe for *all* applications.

An ignorant user may not understand the ramifications of "unsafe" math
-- however, the current documentation is quite vague as to why these
optimizations are unsafe, and people thus become paranoid and avoid
-ffast-math when it would be to their benefit.

First and foremost, GCC should conform to standards. *However*, I see
nothing wrong with providing additional capability for those who need
it, without combining everything "unsafe" under one umbrella.

> But you can't just replace a call to the ANSI C 'sin' function with an
> invocation of the x87 fsin intrinsic, because they aren't the same, and the
> intrinsic is non-ansi-compliant.

Nobody said they were.

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Paul Koning
After some off-line exchanges with Dave Korn, it seems to me that part
of the problem is that the documentation for
-funsafe-math-optimizations is so vague as to have no discernible
meaning. 

For example, does the wording of the documentation convey the
limitation that one should only invoke math functions with a small
range of arguments (say, -pi to +pi)?  I cannot see anything remotely
resembling that limitation, but others can.

Given that, I wonder how we can tell whether a particular proposed
optimization governed by that flag is permissible.  Consider:

`-funsafe-math-optimizations'
 Allow optimizations for floating-point arithmetic that (a) assume
 that arguments and results are valid and (b) may violate IEEE or
 ANSI standards.  

What does (b) mean?  What if anything are its limitations?  Is
returning 1.2e27 as the result for a sin() call authorized by (b)?  I
would not have expected that, but I can't defend that expectation
based on a literal reading of the text...

  paul



Re: Sine and Cosine Accuracy

2005-05-26 Thread Andrew Pinski


On May 26, 2005, at 2:12 PM, Paul Koning wrote:

What does (b) mean?  What if anything are its limitations?  Is
returning 1.2e27 as the result for a sin() call authorized by (b)?  I
would not have expected that, but I can't defend that expectation
based on a literal reading of the text...



b) means that (-a)*(b-c) can be changed to a*(c-b) and other
reassociation opportunities.

Thanks,
Andrew Pinski



RE: Sine and Cosine Accuracy

2005-05-26 Thread Dave Korn
Original Message
>From: Scott Robert Ladd
>Sent: 26 May 2005 19:09

> Dave Korn wrote:
>>   Well, as long as they're under the control of a flag that also makes it
>> clear that they are *also* unsafe math optimisations, I wouldn't object.
> 
> But they are *not* unsafe for *all* applications.

  Irrelevant; nor are many of the other things that are described by the
term unsafe.

  In fact they are often things that may be safe on one occasion, yet not on
another, even within one single application.  Referring to something as
"unsafe" doesn't mean it's *always* unsafe, but referring to it as safe (or
implying that it is by contrast with an option that names it as unsafe)
*does* mean that it is *always* safe.

> An ignorant user may not understand the ramifications of "unsafe" math
> -- however, the current documentation is quite vague as to why these
> optimizations are unsafe, and people thus become paranoid and avoid
> -ffast-math when it would be to their benefit.

  Until they get sqrt(-1.0) returning a value of +1.0 with no complaints, of
course...

  But yes: the biggest problem here that I can see is inadequate
documentation.

> First and foremost, GCC should conform to standards. *However*, I see
> nothing wrong with providing additional capability for those who need
> it, without combining everything "unsafe" under one umbrella.

  That's exactly what I said up at the top.  Nothing wrong with having
multiple unsafe options, but they *are* all unsafe.

>> But you can't just replace a call to the ANSI C 'sin' function with an
>> invocation of the x87 fsin intrinsic, because they aren't the same, and
>> the intrinsic is non-ansi-compliant.
> 
> Nobody said they were.

  Then any optimisation flag that replaces one with the other is, QED,
unsafe.

  Of course, if you went and wrote a whole load of builtins, so that with
your new flag in effect "sin (x)" would translate into a code sequence that
first uses fmod to reduce the argument to the valid range for fsin, I would
no longer consider it unsafe.

cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Andrew Pinski wrote:
> b) means that (-a)*(b-c) can be changed to a*(c-b) and other reassociation
> opportunities.

This is precisely the sort of transformation that, in my opinion, should
be separate from the hardware intrinsics. I mentioned this specific case
earlier in the thread (I think; maybe it went to a private mail).

The documentation should quote you above, instead of being general and
vague (lots of "mays", for example, in the current text).

Perhaps we need a clearer name for the option;
-funsafe-transformations, anyone? I may want to use the hardware
intrinsics, but not those transformations.

..Scott



Re: Sine and Cosine Accuracy

2005-05-26 Thread Joseph S. Myers
On Thu, 26 May 2005, Paul Koning wrote:

> > "Kevin" == Kevin Handy <[EMAIL PROTECTED]> writes:
> 
>  Kevin> But, you are using a number in the range of 2^90, only have 64
>  Kevin> bits for storing the floating point representation, and some
>  Kevin> of that is needed for the exponent.
> 
> Fair enough, so with 64 bit floats you have no right to expect an
> accurate answer for sin(2^90).  However, you DO have a right to expect
> an answer in the range [-1,+1] rather than the 1.2e+27 that Richard
> quoted.  I see no words in the description of
> -funsafe-math-optimizations to lead me to expect such a result.

When I discussed this question with Nick Maclaren a while back after a UK 
C Panel meeting, his view was that for most applications (a) the output 
should be close (within 1 or a few ulp) to the sine/cosine of a value 
close (within 1 or a few ulp) to the floating-point input and (b) sin^2 + 
cos^2 (of any input value) should equal 1 with high precision, but most 
applications (using floating-point values as approximations of 
unrepresentable real numbers) wouldn't care about the answer being close 
to the sine or cosine of the exact real number represented by the 
floating-point value when 1ulp is on the order of 2pi or bigger.  This 
does of course disallow 1.2e+27 as a safe answer for sin or cos to give 
for any input.  (And a few applications may care for stronger degrees of 
accuracy.)

-- 
Joseph S. Myers   http://www.srcf.ucam.org/~jsm28/gcc/
[EMAIL PROTECTED] (personal mail)
[EMAIL PROTECTED] (CodeSourcery mail)
[EMAIL PROTECTED] (Bugzilla assignments and CCs)


Re: GCC and Floating-Point

2005-05-26 Thread Allan Sandfeld Jensen
On Thursday 26 May 2005 10:15, Vincent Lefevre wrote:
> On 2005-05-25 19:27:21 +0200, Allan Sandfeld Jensen wrote:
> > Yes. I still don't understand why gcc doesn't do -ffast-math by
> > default like all other compilers.
>
> No! And I really don't think that other compilers do that.

I can't speak of all compilers, only the ones I've tried. ICC enables it 
always, Sun CC, Dec CXX, and HP CC at certain levels of optimizations 
(equivalent to -O2). 

Basically any compiler that cares about benchmarks has it enabled by default.

Many of them however have multiple levels of relaxed floating point. The 
lowest levels will try to be as accurate as possible, while the higher will 
loosen the accuracy and just try to be as fast as possible.

>
> > The people who need perfect standard behavior are a lot fewer than
> > all the packagers who don't understand which optimization flags
> > gcc should _always_ be called with.
>
> Standard should be the default.
>
> (Is this a troll or what?)

So why isn't -ansi or -pedantic the default?



`Allan



Re: Sine and Cosine Accuracy

2005-05-26 Thread Andrew Haley
Morten Welinder writes:
 > > But, you are using a number in the range of 2^90, only
 > > have 64 bits for storing the floating point representation, and
 > > some of that is needed for the exponent.
 > > 2^90 would require 91 bits for the base alone (as an integer
 > > value), plus a couple more for the '*PI' portion, and then
 > > more for the exponent. And that wouldn't include anything
 > > past the decimal point.
 > > You are more than 30 bits short of getting a crappy result.
 > 
 > This is not right.
 > 
 > Floating point variables are perfectly capable of holding _some_
 > rather large floating point numbers completely accurately.  The
 > integer 2^90 is one of them (on a standard base-2 machine).
 > 
 > It is true that its nearest neighbours are about 2^30 away, but
 > that is not relevant for the accuracy of 2^90.  pow(2.0,90.0)
 > generates that value with no error.
 > 
 > Therefore sin(2^90) is a well defined number somewhere between -1 and 1
 > and the C fragment
 > 
 > sin(pow(2.0,90.0))
 > 
 > should calculate it with at most one wrong bit at the end.  (You
 > get a one-bit allowance since "sin" is trancendental.)
 > 
 > Morten

 > (who wouldn't be too surprised if some "sin" implementations failed
 > to do that)

Me either.  gcj, which has its own floating-point library, gets this
right:

System.out.println(Math.sin(Math.pow(2.0, 90.0)));
-0.9044312486086016

whereas gcc gets it wrong, at least on my GNU/Linux box:

 printf ("%g %g\n", pow (2.0, 90.0), sin (pow (2.0, 90.0)));
1.23794e+27 -0.00536134

sin (2^90) is, approximately:

-.90443124860860158093619738795260971475

Andrew.


Re: GCC and Floating-Point

2005-05-26 Thread Scott Robert Ladd
Allan Sandfeld Jensen wrote:
> Basically any compiler that cares about benchmarks has it enabled by default.
> 
> Many of them however have multiple levels of relaxed floating point. The 
> lowest levels will try to be as accurate as possible, while the higher will 
> loosen the accuracy and just try to be as fast as possible.

Perhaps we need something along these lines:

When -ansi or -pedantic is used, the compiler should disallow anything
"unsafe" that may break compliance, warning if someone uses a paradox
like "-ansi -funsafe-math-optimizations".

As has been pointed out elsewhere in this thread,
-funsafe-math-optimizations implies too many different things, and is
vaguely documented. I'd like to see varying levels of floating-point
optimization, including an option that uses an internal library
optimized for both speed and correctness, which are not mutually exclusive.

..Scott



Re: Sine and Cosine Accuracy

2005-05-26 Thread Gabriel Dos Reis
Scott Robert Ladd <[EMAIL PROTECTED]> writes:

| Richard Henderson wrote:
| > On Thu, May 26, 2005 at 10:34:14AM -0400, Scott Robert Ladd wrote:
| > 
| >>static const double range = PI; // * 2.0;
| >>static const double incr  = PI / 100.0;
| > 
| > 
| > The trig insns fail with large numbers; an argument
| > reduction loop is required with their use.
| 
| Yes, but within the defined mathematical ranges for sine and cosine --
| [0, 2 * PI) -- 

this is what they call "post-modern maths"?

[...]

| I've never quite understood the necessity for performing trig operations
| on excessively large values, but perhaps my problem domain hasn't
| included such applications.

The world is flat; I never quite understood the necessity of spherical
trigonometry.

-- Gaby


Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Gabriel Dos Reis wrote:
> Scott Robert Ladd <[EMAIL PROTECTED]> writes:
> | I've never quite understood the necessity for performing trig operations
> | on excessively large values, but perhaps my problem domain hasn't
> | included such applications.
> 
> The world is flat; I never quite understood the necessity of spherical
> trigonometry.

For many practical problems, the world can be considered flat. And I do
plenty of spherical geometry (GPS navigation) without requiring the sin
of 2**90. ;)

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Richard Henderson
On Thu, May 26, 2005 at 12:04:04PM -0400, Scott Robert Ladd wrote:
> I've never quite understood the necessity for performing trig operations
> on excessively large values, but perhaps my problem domain hasn't
> included such applications.

Whether you think it necessary or not, the ISO C functions allow
such arguments, and we're not allowed to break that without cause.


r~


Re: Sine and Cosine Accuracy

2005-05-26 Thread Gabriel Dos Reis
Scott Robert Ladd <[EMAIL PROTECTED]> writes:

| Gabriel Dos Reis wrote:
| > Scott Robert Ladd <[EMAIL PROTECTED]> writes:
| > | I've never quite understood the necessity for performing trig operations
| > | on excessively large values, but perhaps my problem domain hasn't
| > | included such applications.
| > 
| > The world is flat; I never quite understood the necessity of spherical
| > trigonometry.
| 
| For many practical problems, the world can be considered flat.

Wooho.

| And I do
| plenty of spherical geometry (GPS navigation) without requiring the sin
| of 2**90. ;)

Yeah, the problem with people who work only with angles is that they
tend to forget that sin (and friends) are defined as functions on
*numbers*, not just angles or whatever, and happen to appear in
approximations of functions as series (e.g. Fourier series) and therefore
those functions can be applied to things that are not just "angles". 

-- Gaby


Re: Sine and Cosine Accuracy

2005-05-26 Thread Uros Bizjak

Hello!


Fair enough, so with 64 bit floats you have no right to expect an
accurate answer for sin(2^90).  However, you DO have a right to expect
an answer in the range [-1,+1] rather than the 1.2e+27 that Richard
quoted.  I see no words in the description of
-funsafe-math-optimizations to lead me to expect such a result.

 The source operand to the fsin, fcos and fsincos x87 insns must be within
the range of +-2^63; otherwise the C2 flag is set in the FP status word to
mark insufficient operand reduction. The limited operand range is the
reason why fsin & friends are enabled only with
-funsafe-math-optimizations.


 However, the argument to fsin can be reduced to an acceptable range by
using the fmod builtin. Internally, this builtin is implemented as a very
tight loop that checks for insufficient reduction, and can reduce
whatever finite value one wishes.
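
A sketch of what such a guarded expansion could look like (an editor's
illustration only, using GNU C inline asm for the x87 fsin and the POSIX
M_PI constant; the 2^63 bound is the operand limit described above):

#include <math.h>

static double fsin_insn (double x)
{
    double r;
    __asm__ ("fsin" : "=t" (r) : "0" (x));  /* st(0) in, st(0) out */
    return r;
}

double hw_sin (double x)
{
    if (fabs (x) >= 9223372036854775808.0)  /* 2^63: fsin would set C2 */
        x = fmod (x, 2.0 * M_PI);           /* operand reduction first */
    return fsin_insn (x);
}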


 Out of curiosity, where could sin(2^90) be needed? It looks rather big 
angle to me.


Uros.


Re: Sine and Cosine Accuracy

2005-05-26 Thread Paul Koning
> "Uros" == Uros Bizjak <[EMAIL PROTECTED]> writes:

 Uros> Hello!
 >> Fair enough, so with 64 bit floats you have no right to expect an
 >> accurate answer for sin(2^90).  However, you DO have a right to
 >> expect an answer in the range [-1,+1] rather than the 1.2e+27 that
 >> Richard quoted.  I see no words in the description of
 >> -funsafe-math-optimizations to lead me to expect such a result.
 >> 
 Uros> The source operand to fsin, fcos and fsincos x87 insns must be
 Uros> within the range of +-2^63, otherwise a C2 flag is set in FP
 Uros> status word that marks insufficient operand reduction. Limited
 Uros> operand range is the reason why fsin & friends are enabled
 Uros> only with -funsafe-math-optimizations.

 Uros> However, the argument to fsin can be reduced to an acceptable
 Uros> range by using fmod builtin. Internally, this builtin is
 Uros> implemented as a very tight loop that check for insufficient
 Uros> reduction, and could reduce whatever finite value one wishes.

 Uros> Out of curiosity, where could sin(2^90) be needed? It looks
 Uros> rather big angle to me.

It looks that way to me too, but it's a perfectly valid argument to
the function as has been explained by several people.

Unless -funsafe-math-optimizations is *explicitly* documented to say
"trig function arguments must be in the range x..y for meaningful
results" I believe it is a bug to translate sin(x) to a call to the
x87 fsin primitive.  It needs to be wrapped with fmod (perhaps after a
range check for efficiency), otherwise you've drastically changed the
semantics of the function.
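
[Concretely, a hypothetical expansion along the lines Paul describes;
the function name is invented for illustration:]

#include <math.h>

double
checked_sin (double x)
{
  /* Fast path: the x87 fsin instruction handles |x| < 2^63.  */
  if (fabs (x) < 9223372036854775808.0)   /* 2^63 */
    return sin (x);                       /* candidate for fsin */
  /* Slow path: reduce the argument first, then take the sine.  */
  return sin (fmod (x, 2.0 * M_PI));
}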

Personally I don't expect sin(2^90) to yield 1.2e27.  Yes, you can
argue that, pedantically, clause (b) in the doc for
-funsafe-math-optimizations permits this.  Then again, I could argue
that it also permits sin(x) to return 0 for all x.

 paul



Re: Sine and Cosine Accuracy

2005-05-26 Thread Gabriel Dos Reis
Uros Bizjak <[EMAIL PROTECTED]> writes:

[...]

|   Out of curiosity, where could sin(2^90) be needed? It looks like a
| rather big angle to me.

If it were an angle!  Not everything that is an argument to sin or cos
is an angle.  They are just functions!  Suppose you're evaluating an
approximation of a Fourier series expansion.
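
[For instance, a hypothetical truncated Fourier series for a square
wave, where the argument of sin() is an arbitrary real number and
grows with the harmonic index:]

#include <math.h>

double
square_wave_approx (double t, int terms)
{
  double s = 0.0;
  int i;
  for (i = 0; i < terms; i++)
    {
      double k = 2.0 * i + 1.0;   /* odd harmonics only */
      s += sin (k * t) / k;       /* k * t can get arbitrarily large */
    }
  return (4.0 / M_PI) * s;
}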

-- Gaby


Re: Sine and Cosine Accuracy

2005-05-26 Thread Steven Bosscher
On Friday 27 May 2005 00:26, Gabriel Dos Reis wrote:
> Uros Bizjak <[EMAIL PROTECTED]> writes:
>
> [...]
>
> |   Out of curiosity, where could sin(2^90) be needed? It looks like a
> | rather big angle to me.
>
> If it were an angle!  Not everything that is an argument to sin or cos
> is an angle.  They are just functions!  Suppose you're evaluating an
> approximation of a Fourier series expansion.

It would, in a way, still be a phase angle ;-)

Gr.
Steven


RE: Sine and Cosine Accuracy

2005-05-26 Thread Menezes, Evandro
Uros, 

>   However, the argument to fsin can be reduced to an 
> acceptable range by using fmod builtin. Internally, this 
> builtin is implemented as a very tight loop that checks for 
> insufficient reduction, and could reduce whatever finite 
> value one wishes.

Keep in mind that x87 transcendentals are not the most accurate around, but all 
x86 processors from any manufacturer produce roughly the same results for any 
argument as the 8087 did way back when, even if the result is hundreds of ulps 
off...


-- 
___
Evandro MenezesAMD   Austin, TX



Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Richard Henderson wrote:
> On Thu, May 26, 2005 at 12:04:04PM -0400, Scott Robert Ladd wrote:
> 
>>I've never quite understood the necessity for performing trig operations
>>on excessively large values, but perhaps my problem domain hasn't
>>included such applications.
> 
> 
> Whether you think it necessary or not, the ISO C functions allow
> such arguments, and we're not allowed to break that without cause.

Then, as someone else said, why doesn't the compiler enforce -ansi
and/or -pedantic by default? Or is ANSI purity only important in some
cases, but not others?

I do not and have not suggested changing the default behavior of the
compiler, and *have* suggested that it is not pedantic enough about
Standards.

*This* discussion is about improving -funsafe-math-optimizations to make
it more sensible and flexible.

For a wide variety of applications, the hardware intrinsics provide both
faster and more accurate results, when compared to the library
functions. However, I may *not* want other transformations implied by
-funsafe-math-optimizations. Therefore, it seems to me that GCC could
cleanly and simply implement an option to use hardware intrinsics (or a
highly-optimized but non-ANSI library) for those of us who want it.

No changes to default optimizations, no breaking of existing code, just
a new option (as in optional.)

How does that hurt you or anyone else? It's not as if GCC doesn't have a
few options already... ;)

I (and others) also note that other compilers do a fine job of handling
these problems.

..Scott



Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Gabriel Dos Reis wrote:
> Yeah, the problem with people who work only with angles is that they
> tend to forget that sin (and friends) are defined as functions on
> *numbers*, not just angles or whatever, and happen to appear in
> approximations of functions as series (e.g. Fourier series) and therefore
> those functions can be applied to things that are not just "angles". 

To paraphrase the above:

"Yeah, the problem with people who only work with Fourier series is that
they tend to forget that sin (and friends) can be used in applications
with angles that fall in a limited range, where the hardware intrinsics
produce faster and more accurate results."

I've worked on some pretty fancy DSP code in the last years, and some
spherical trig stuff. Two different kinds of code with different needs.

..Scott




RE: Sine and Cosine Accuracy

2005-05-26 Thread Menezes, Evandro
Scott, 

> For a wide variety of applications, the hardware intrinsics 
> provide both faster and more accurate results, when compared 
> to the library functions.

This is not true.  Compare results on an x86 system with those on an x86_64 or 
ppc.  As I said before, shortcuts were taken in the x87 that sacrificed 
accuracy, initially for the sake of speed and later for compatibility.

HTH


-- 
___
Evandro MenezesAMD   Austin, TX



Re: Sine and Cosine Accuracy

2005-05-26 Thread Gabriel Dos Reis
Scott Robert Ladd <[EMAIL PROTECTED]> writes:

| Richard Henderson wrote:
| > On Thu, May 26, 2005 at 12:04:04PM -0400, Scott Robert Ladd wrote:
| > 
| >>I've never quite understood the necessity for performing trig operations
| >>on excessively large values, but perhaps my problem domain hasn't
| >>included such applications.
| > 
| > 
| > Whether you think it necessary or not, the ISO C functions allow
| > such arguments, and we're not allowed to break that without cause.
| 
| Then, as someone else said, why doesn't the compiler enforce -ansi
| and/or -pedantic by default?

Care submitting a patch?

-- Gaby


gcc-4.0-20050526 is now available

2005-05-26 Thread gccadmin
Snapshot gcc-4.0-20050526 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.0-20050526/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.0 CVS branch
with the following options: -rgcc-ss-4_0-20050526 

You'll find:

gcc-4.0-20050526.tar.bz2  Complete GCC (includes all of below)

gcc-core-4.0-20050526.tar.bz2 C front end and core compiler

gcc-ada-4.0-20050526.tar.bz2  Ada front end and runtime

gcc-fortran-4.0-20050526.tar.bz2  Fortran front end and runtime

gcc-g++-4.0-20050526.tar.bz2  C++ front end and runtime

gcc-java-4.0-20050526.tar.bz2 Java front end and runtime

gcc-objc-4.0-20050526.tar.bz2 Objective-C front end and runtime

gcc-testsuite-4.0-20050526.tar.bz2The GCC testsuite

Diffs from 4.0-20050521 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.0
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


Re: Sine and Cosine Accuracy

2005-05-26 Thread Gabriel Dos Reis
Scott Robert Ladd <[EMAIL PROTECTED]> writes:

| Gabriel Dos Reis wrote:
| > Yeah, the problem with people who work only with angles is that they
| > tend to forget that sin (and friends) are defined as functions on
| > *numbers*, not just angles or whatever, and happen to appear in
| > approximations of functions as series (e.g. Fourier series) and therefore
| > those functions can be applied to things that are not just "angles". 
| 
| To paraphrase the above:
| 
| "Yeah, the problem with people who only work with Fourier series is that
| they tend to forget that sin (and friends) can be used in applications
| with angles that fall in a limited range, where the hardware intrinsics
| produce faster and more accurate results."

That is a good try, but it fails in the context in which the original
statement was made.  Maybe it is a good time to check the thread and
the pattern of logic that the statement was pointing out?

-- Gaby


Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Menezes, Evandro wrote:
> This is not true.  Compare results on an x86 system with those on an
> x86_64 or ppc.  As I said before, shortcuts were taken in the x87 that
> sacrificed accuracy, initially for the sake of speed and later for
> compatibility.

It *is* true for the case where the argument is in the range [0, 2*PI),
at least according to the tests I published earlier in this thread. If
you think there is something erroneous in the test code, I sincerely
would like to know.
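
[For reference, a minimal sketch of this kind of sampling; this is
hypothetical code, not the harness published earlier in the thread,
and a serious test would compare against a higher-precision reference
and sample far more densely:]

#include <math.h>
#include <stdio.h>

int
main (void)
{
  double x;
  /* Print sampled values of sin over [0, 2*PI); build this twice,
     e.g. once with -O2 and once with -O2 -mfpmath=387
     -funsafe-math-optimizations, and diff the outputs to see where
     the two implementations disagree.  */
  for (x = 0.0; x < 2.0 * M_PI; x += M_PI / 100.0)
    printf ("%.17g %.17g\n", x, sin (x));
  return 0;
}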

..Scott


RE: Sine and Cosine Accuracy

2005-05-26 Thread Menezes, Evandro
Scott, 

> > This is not true.  Compare results on an x86 system with those on an
> > x86_64 or ppc.  As I said before, shortcuts were taken in the x87 that
> > sacrificed accuracy, initially for the sake of speed and later for
> > compatibility.
> 
> It *is* true for the case where the argument is in the range 
> [0, 2*PI), at least according to the tests I published 
> earlier in this thread. If you think there is something 
> erroneous in the test code, I sincerely would like to know.

Your code just tests every 3.6°, so perhaps you won't trip over the problems...

As I said, x87 can be off by hundreds of ulps, whereas the routines for x86_64 
which ship with SUSE are accurate to less than 1 ulp over their entire domain.

Besides, you're also comparing 80-bit calculations with 64-bit calculations, 
not only the accuracy of sin and cos.  Try using -ffloat-store along with 
-mfpmath=387 and see yet another set of results.  At the end of the day, which 
one do you trust?  I wouldn't trust my checking balance to x87 microcode... ;-)
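
[For example, assuming a small test program sintest.c that prints
sin() of a few sampled values (a hypothetical file name), the
comparison might look like:]

gcc -O2 -mfpmath=387 sintest.c -lm -o t1 && ./t1
gcc -O2 -mfpmath=387 -ffloat-store sintest.c -lm -o t2 && ./t2
gcc -O2 -mfpmath=sse sintest.c -lm -o t3 && ./t3   # needs SSE support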

HTH


___
Evandro MenezesSoftware Strategy & Alliance
512-602-9940AMD
[EMAIL PROTECTED]  Austin, TX



Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Gabriel Dos Reis wrote:
> Scott Robert Ladd <[EMAIL PROTECTED]> writes:
> | Then, as someone else said, why doesn't the compiler enforce -ansi
> | and/or -pedantic by default?
> 
> Care submitting a patch?

Would a strictly ansi default be accepted on principle? Given the
existing code base of non-standard code, such a change may be unrealistic.

I'm willing to make the "-ansi -pedantic" patch, if I wouldn't be
wasting my time.

What about separating hardware intrinsics from
-funsafe-math-optimizations? I believe this would make everyone happy by
allowing people to use the compiler more effectively in different
circumstances.

..Scott


Re: Sine and Cosine Accuracy

2005-05-26 Thread Scott Robert Ladd
Menezes, Evandro wrote:
> Besides, you're also comparing 80-bit calculations with 64-bit
> calculations, not only the accuracy of sin and cos.  Try using
> -ffloat-store along with -mfpmath=387 and see yet another set of
> results.  At the end of the day, which one do you trust?  I wouldn't
> trust my check balance to x87 microcode... ;-)

I wouldn't trust my bank accounts to the x87 under any circumstances;
anyone doing exact math should be using fixed-point.

Different programs have different requirements. I don't understand why
GCC needs to be one-size-fits-all, when it could be *better* than the
competition by taking a broader and more flexible view.

..Scott



GCC 4.0.1 Status Report (2005-05-26)

2005-05-26 Thread Mark Mitchell

There are 163 regressions open against 4.0.1.  Of these, 42 are
critical, in the sense that they are wrong-code, ICE-on-valid, or
rejects-valid.  That's rather worse than we've done with previous
releases; in part because we're carrying along bugs that we never get
around to fixing from release to release, and in part because we've
managed to introduce rather many new bugs in GCC 4.0.

The category about which we should be most concerned is the wrong-code
regressions, of which there are (unluckily) 13.  (Of course, it is a
now-fixed, but oft-encountered, wrong-code regression that is
prompting us to want to move up the schedule for 4.0.1.)

The following wrong-code regressions seem particularly troubling to me:

21562 Quiet bad codegen (unrolling + tail call interaction)
21171 IV OPTS removes does not create a new VOPs for constant values
21254 Incorrect code with -funroll-loops for multiple targets with same code
21528 Boost shared_ptr_test.cpp fails with -O3
21536 C99 array of variable length use causes segmentation fault
21614 wrong code when calling member function of undefined class

I'd like to get these analyzed, and either fix them, or have a good
argument that they cannot be practically fixed, before 4.0.1.  A
couple have already been fixed on the mainline; perhaps it is a simple
matter to backport the offending patches.

Let's give ourselves a week to fix these; i.e., until the end of June
3rd.  Then, I'll take another look, and if it seems like we're not
going to make good progress on any still remaining in short order,
I'll do a prerelease, with a plan of a release on or about June 10th.
During the next week, I'd encourage people to look through the 4.0.1
regressions and fix any others that seem important and/or tractable.

I also want to make clear that I don't see it as our mission, or my
duty as RM, to make releases every time we find a bug, even if those
bugs seem rather critical.  Certainly, toolchain distributors may need
to do that, but that is not the purpose of the FSF releases.  We do
not have the resources, and doing releases interferes with
development.  Rushing to get releases out is a good way to introduce
critical bug after critical bug, without ever achieving stability.  We
also make snapshots and CVS available for interested users who, for
whatever reason, need a toolchain without the critical bug before the
next official release.

--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]



Re: Compiling GCC with g++: a report

2005-05-26 Thread Marcin Dalecki


On 2005-05-23, at 08:15, Gabriel Dos Reis wrote:



Sixth, there is a real "mess" about name spaces.  It is true that
every C programmers knows the rule saying tags inhabit different name
space than variable of functions.  However, all the C coding standards
I've read so far usually suggest

   typedef struct foo foo;

but *not*

   typedef struct foo *foo;

i.e. "bringing" the tag-name into normal name space to name the type
structure or enumeration is OK, but not naming a different type!  the
latter practice will be flagged by a C++ compiler.  I guess we may
need some discussion about the naming of structure (POSIX reserves
anything ending with "_t", so we might want to choose something so
that we don't run into problem.  However, I do not expect this issue
to dominate the discussion :-))



In 80% of the cases you are talking about, the GCC source code already
follows the semi-convention of appending _s to the parent type.
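
[A minimal illustration of the name-space point; the example is
hypothetical:]

struct foo { int x; };

typedef struct foo foo;     /* OK in C and C++: 'foo' names the same
                               type as the tag.  */

/* typedef struct foo *foo;    valid C, since tags live in their own
                               name space, but rejected by a C++
                               compiler: 'foo' would denote a type
                               different from class 'foo'.  */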



Re: Compiling GCC with g++: a report

2005-05-26 Thread Marcin Dalecki


On 2005-05-24, at 09:09, Zack Weinberg wrote:


Gabriel Dos Reis <[EMAIL PROTECTED]> writes:

[dropping most of the message - if I haven't responded, assume I don't
agree but I also don't care enough to continue the argument.  Also,
rearranging paragraphs a bit so as not to have to repeat myself]



with the explicit call to malloc + explicit specification of sizeof,
I've found a number of wrong codes -- while replacing the existing
xmalloc/xcalloc with XNEWVEC and friends (see previous patches and
messages) in libiberty, not counting the happy confusion about
xcalloc() in the current GCC codes.  Those are bugs we do not have
with XNEWVEC and friends.  Not only do we get readable code, we
also get correct code.
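
[For readers unfamiliar with those macros, the libiberty idiom looks
roughly like this; this is a simplified sketch (see libiberty.h for
the real definitions), and 'struct bar' is invented for illustration:]

/* Traditional style: the element type is written twice, so the
   sizeof can silently disagree with the pointer's type, and the
   implicit void* conversion is invalid C++.  */
struct bar *p = (struct bar *) xmalloc (n * sizeof (struct bar));

/* With the macro, roughly
     #define XNEWVEC(T, N) ((T *) xmalloc (sizeof (T) * (N)))
   the type appears once, so the cast and the sizeof cannot drift
   apart.  */
struct bar *q = XNEWVEC (struct bar, n);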


...


I don't think so.  These patches make it possible to compile the
source code with a C++ compiler.  We gain better checking by doing
that.



Have you found any places where the bugs you found could have resulted
in user-visible incorrect behavior (of any kind)?

If you have, I will drop all of my objections.


You could look at the linkage issues for darwin I have found several  
months

ago. They where *real*.


Re: Compiling GCC with g++: a report

2005-05-26 Thread Marcin Dalecki


On 2005-05-24, at 06:00, Andrew Pinski wrote:



On May 24, 2005, at 12:01 AM, Zack Weinberg wrote:


Use of bare 'inline' is just plain wrong in our source code; this has
nothing to do with C++, no two C compilers implement bare 'inline'
alike.  Patches to add 'static' to such functions (AND MAKING NO OTHER
CHANGES) are preapproved, post-slush.


That will not work for the cases where the bare 'inline' is used
because the functions are also external in this case.  Now this is
where C99 and C++ differ on what a bare 'inline' means, so I have no
idea what to do, except for removing the 'inline' in the first place.


This actually applies only to two functions from libiberty:

 /* Return the current size of given hash table. */
-inline size_t
-htab_size (htab)
-     htab_t htab;
+size_t
+htab_size (htab_t htab)
 {
   return htab->size;
 }

 /* Return the current number of elements in given hash table. */
-inline size_t
-htab_elements (htab)
-     htab_t htab;
+size_t
+htab_elements (htab_t htab)
 {
   return htab->n_elements - htab->n_deleted;
 }

It could be resolved easily by moving those "macro wrappers" into a
header and making them static inline there, as sketched below.
Actually, this could improve the GCC code overall a bit.
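
[A sketch of what that could look like; the placement is hypothetical
(the real declarations live in libiberty's hashtab.h):]

/* In the header: 'static inline' means the same thing to C99 and
   C++ compilers, sidestepping the bare-'inline' ambiguity.  */
static inline size_t
htab_size (htab_t htab)
{
  return htab->size;
}

static inline size_t
htab_elements (htab_t htab)
{
  return htab->n_elements - htab->n_deleted;
}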



Re: Compiling GCC with g++: a report

2005-05-26 Thread Marcin Dalecki


On 2005-05-24, at 18:06, Diego Novillo wrote:


On Mon, May 23, 2005 at 01:15:17AM -0500, Gabriel Dos Reis wrote:



So, if various components maintainers (e.g. C and C++, middle-end,
ports, etc.)  are willing to help quickly reviewing patches we can
have this done for this week (assuming mainline is unslushed soon).
And, of course, everybody can help :-)



If the final goal is to allow GCC components to be implemented in
C++, then I am all in favour of this project.  I'm pretty sick of
all this monkeying around we do with macros to make up for the
lack of abstraction.


Amen. GCC cries and moans through struct tree for polymorphism.



Re: Compiling GCC with g++: a report

2005-05-26 Thread Marcin Dalecki


On 2005-05-25, at 08:06, Christoph Hellwig wrote:


On Tue, May 24, 2005 at 05:14:42PM -0700, Zack Weinberg wrote:

I'm not sure what the above may imply for your ongoing discussion, though...


Well, if I were running the show, the 'clock' would only start running
when there was consensus among the libstdc++ developers that the soname
would not be bumped again - that henceforth libstdc++ was committed to
binary compatibility as good as glibc's.  Or better, if y'all can manage
it.  It doesn't sound like we're there yet, to me.


Why can't libstdc++ use symbol versioning?  glibc has maintained the soname
and binary compatibility despite changing fundamental types like FILE.


Please stop spreading rumors:

1. libgcc changes with each compiler release, and glibc depends on
   libgcc; ergo, glibc has not maintained the soname and binary
   compatibility.

2. The linker tricks glibc plays to accomplish this are not portable
   and not applicable to C++ code (see the sketch below).

3. Threads are the death of glibc backward compatibility.
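
[For context, a minimal sketch of the GNU symbol-versioning mechanism
referred to in point 2; symbol and version names are invented for
illustration, and a matching linker version script passed via
-Wl,--version-script is also required:]

/* Two implementations coexist in one shared object; the .symver
   directives bind them to different version nodes, so old binaries
   keep resolving to the old code.  */
int foo_v1 (int x) { return x; }         /* old behavior */
int foo_v2 (int x) { return x + 1; }     /* new, incompatible behavior */

__asm__ (".symver foo_v1, foo@FOO_1.0");
__asm__ (".symver foo_v2, foo@@FOO_1.1");  /* @@ marks the default */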


Re: Sine and Cosine Accuracy

2005-05-26 Thread Marcin Dalecki


On 2005-05-26, at 21:34, Scott Robert Ladd wrote:


For many practical problems, the world can be considered flat. And I do
plenty of spherical geometry (GPS navigation) without requiring the sin
of 2**90. ;)


Yes right. I guess your second name is ignorance.


Re: Sine and Cosine Accuracy

2005-05-26 Thread Marcin Dalecki


On 2005-05-27, at 00:00, Gabriel Dos Reis wrote:

Yeah, the problem with people who work only with angles is that they
tend to forget that sin (and friends) are defined as functions on
*numbers*,



The problem with people who work only with angles is that they are without sin.




Re: Sine and Cosine Accuracy

2005-05-26 Thread Marcin Dalecki


On 2005-05-26, at 22:39, Gabriel Dos Reis wrote:


Scott Robert Ladd <[EMAIL PROTECTED]> writes:

| Richard Henderson wrote:
| > On Thu, May 26, 2005 at 10:34:14AM -0400, Scott Robert Ladd wrote:
| >
| >>static const double range = PI; // * 2.0;
| >>static const double incr  = PI / 100.0;
| >
| >
| > The trig insns fail with large numbers; an argument
| > reduction loop is required with their use.
|
| Yes, but within the defined mathematical ranges for sine and cosine --
| [0, 2 * PI) --

this is what they call "post-modern maths"?

[...]

| I've never quite understood the necessity for performing trig operations
| on excessively large values, but perhaps my problem domain hasn't
| included such applications.

The world is flat; I never quite understood the necessity of spherical
trigonometry.


I agree fully. And who was this Fourier anyway?


help, cvs screwed up

2005-05-26 Thread Mike Stump
I did a checkin using ../ in one of the files and cvs screwed up.
The ChangeLog file came out OK, but all the others were created
someplace else.  I'm thinking those ,v files should just be rm'ed off
the server... but I would rather someone else do that.  Thanks.


I was in gcc/testsuite/objc.dg at the time.

mrs $ cvs ci ../ChangeLog $f
Checking in ../ChangeLog;
/cvs/gcc/gcc/gcc/testsuite/ChangeLog,v  <--  ChangeLog
new revision: 1.5540; previous revision: 1.5539
done
RCS file: /cvs/gcc/comp-types-8.m,v
done
Checking in comp-types-8.m;
/cvs/gcc/comp-types-8.m,v  <--  comp-types-8.m
initial revision: 1.1
done
RCS file: /cvs/gcc/encode-6.m,v
done
Checking in encode-6.m;
/cvs/gcc/encode-6.m,v  <--  encode-6.m
initial revision: 1.1
done
RCS file: /cvs/gcc/extra-semi.m,v
done
Checking in extra-semi.m;
/cvs/gcc/extra-semi.m,v  <--  extra-semi.m
initial revision: 1.1
done
RCS file: /cvs/gcc/fix-and-continue-2.m,v
done
Checking in fix-and-continue-2.m;
/cvs/gcc/fix-and-continue-2.m,v  <--  fix-and-continue-2.m
initial revision: 1.1
done
RCS file: /cvs/gcc/isa-field-1.m,v
done
Checking in isa-field-1.m;
/cvs/gcc/isa-field-1.m,v  <--  isa-field-1.m
initial revision: 1.1
done
RCS file: /cvs/gcc/lookup-1.m,v
done
Checking in lookup-1.m;
/cvs/gcc/lookup-1.m,v  <--  lookup-1.m
initial revision: 1.1
done
RCS file: /cvs/gcc/method-15.m,v
done
Checking in method-15.m;
/cvs/gcc/method-15.m,v  <--  method-15.m
initial revision: 1.1
done
RCS file: /cvs/gcc/method-16.m,v
done
Checking in method-16.m;
/cvs/gcc/method-16.m,v  <--  method-16.m
initial revision: 1.1
done
RCS file: /cvs/gcc/method-17.m,v
done
Checking in method-17.m;
/cvs/gcc/method-17.m,v  <--  method-17.m
initial revision: 1.1
done
RCS file: /cvs/gcc/method-18.m,v
done
Checking in method-18.m;
/cvs/gcc/method-18.m,v  <--  method-18.m
initial revision: 1.1
done
RCS file: /cvs/gcc/method-19.m,v
done
Checking in method-19.m;
/cvs/gcc/method-19.m,v  <--  method-19.m
initial revision: 1.1
done
RCS file: /cvs/gcc/next-runtime-1.m,v
done
Checking in next-runtime-1.m;
/cvs/gcc/next-runtime-1.m,v  <--  next-runtime-1.m
initial revision: 1.1
done
RCS file: /cvs/gcc/no-extra-load.m,v
done
Checking in no-extra-load.m;
/cvs/gcc/no-extra-load.m,v  <--  no-extra-load.m
initial revision: 1.1
done
RCS file: /cvs/gcc/pragma-1.m,v
done
Checking in pragma-1.m;
/cvs/gcc/pragma-1.m,v  <--  pragma-1.m
initial revision: 1.1
done
RCS file: /cvs/gcc/stubify-1.m,v
done
Checking in stubify-1.m;
/cvs/gcc/stubify-1.m,v  <--  stubify-1.m
initial revision: 1.1
done
RCS file: /cvs/gcc/stubify-2.m,v
done
Checking in stubify-2.m;
/cvs/gcc/stubify-2.m,v  <--  stubify-2.m
initial revision: 1.1
done
RCS file: /cvs/gcc/super-class-4.m,v
done
Checking in super-class-4.m;
/cvs/gcc/super-class-4.m,v  <--  super-class-4.m
initial revision: 1.1
done
RCS file: /cvs/gcc/super-dealloc-1.m,v
done
Checking in super-dealloc-1.m;
/cvs/gcc/super-dealloc-1.m,v  <--  super-dealloc-1.m
initial revision: 1.1
done
RCS file: /cvs/gcc/super-dealloc-2.m,v
done
Checking in super-dealloc-2.m;
/cvs/gcc/super-dealloc-2.m,v  <--  super-dealloc-2.m
initial revision: 1.1
done
RCS file: /cvs/gcc/try-catch-6.m,v
done
Checking in try-catch-6.m;
/cvs/gcc/try-catch-6.m,v  <--  try-catch-6.m
initial revision: 1.1
done
RCS file: /cvs/gcc/try-catch-7.m,v
done
Checking in try-catch-7.m;
/cvs/gcc/try-catch-7.m,v  <--  try-catch-7.m
initial revision: 1.1
done
RCS file: /cvs/gcc/try-catch-8.m,v
done
Checking in try-catch-8.m;
/cvs/gcc/try-catch-8.m,v  <--  try-catch-8.m
initial revision: 1.1
done
mrs $ echo $f
comp-types-8.m encode-6.m extra-semi.m fix-and-continue-2.m
isa-field-1.m lookup-1.m method-15.m method-16.m method-17.m
method-18.m method-19.m next-runtime-1.m no-extra-load.m pragma-1.m
stubify-1.m stubify-2.m super-class-4.m super-dealloc-1.m
super-dealloc-2.m try-catch-6.m try-catch-7.m try-catch-8.m




Re: help, cvs screwed up

2005-05-26 Thread Ian Lance Taylor
Mike Stump <[EMAIL PROTECTED]> writes:

> I did a checkin using ../ in one of the files and cvs screwed up.
> The ChangeLog file came out ok, but, all the others were created
> someplace else.  I'm thinking those ,v files should just be rmed off
> the server...  but, would rather someone else do that.  Thanks.

I have removed these files from the server.

Ian


Re: help, cvs screwed up

2005-05-26 Thread Mike Stump

On May 26, 2005, at 8:47 PM, Ian Lance Taylor wrote:

I have removed these files from the server.


Much thanks.