On 2012-02-03 22:57:31, James Courtier-Dutton wrote:
> On 3 February 2012 16:24, Vincent Lefevre wrote:
> > But 1.0e22 cannot be handled correctly.
>
> Of course it can't.
> You only have 52 bits of precision in double floating point numbers.
Wrong. 53 bits of precision. And 10^22 is the largest power of 10 exactly representable in double precision.
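A quick standalone check of that representability claim (my own sketch, not
part of the thread): 10^22 = 2^22 * 5^22, and 5^22 needs only 52 bits, so the
whole value fits the 53-bit significand exactly; 5^23 already needs 54 bits,
so 10^23 does not.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t p22   = 2384185791015625ULL;    /* 5^22, fits in 52 bits */
    uint64_t p23   = p22 * 5;                /* 5^23, needs 54 bits   */
    uint64_t two53 = 1ULL << 53;             /* significand limit     */

    printf("5^22 < 2^53: %d\n", p22 < two53);                           /* 1 */
    printf("5^23 < 2^53: %d\n", p23 < two53);                           /* 0 */
    printf("1e22 exact : %d\n", 1.0e22 == (double)p22 * (1ULL << 22));  /* 1 */
    return 0;
}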
On 2012-02-03 16:51:22 -0500, Robert Dewar wrote:
> All machines that implement IEEE arithmetic :-) As we know only too well
> from the universe of machines on which we implement GNAT, this is not
> all machines :-)
But I think that machines with no IEEE support will tend to disappear
(and already
On 2012-02-03 17:40:05 +0100, Dominique Dhumieres wrote:
> While I fail to see how the "correct value" of
> cos(4.47460300787e+182)+sin(4.47460300787e+182)
> can be defined in the 'double' world, cos^2(x)+sin^2(x)=1 and
> sin(2*x)=2*sin(x)*cos(x) seem to be verified (at least for this value)
> even if the actual values of sin and cos depend on the optimisation
On 3 February 2012 18:12, Konstantin Vladimirov wrote:
> Hi,
>
> I agree that this case has no practical value. It was autogenerated
> among thousands of other tests and showed really strange results, so
> I decided to ask. I thought this value fits the double precision range
> and, according to
On 3 February 2012 16:24, Vincent Lefevre wrote:
> On 2012-02-03 16:57:19 +0100, Michael Matz wrote:
>> > And it may be important that some identities (like cos^2+sin^2=1) be
>> > preserved.
>>
>> Well, you're not going to get this without much more work in sin/cos.
>
> If you use the glibc sin()
Snapshot gcc-4.6-20120203 is now available on
ftp://gcc.gnu.org/pub/gcc/snapshots/4.6-20120203/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.
This snapshot has been generated from the GCC 4.6 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches
On 3 February 2012 21:28, Peter A. Felvegi wrote:
> on all tested gcc versions (4.4, 4.5, 4.6, 4.7). There is definitely a type
> called 'Type' in struct 'Derived'. I'm not sure whether the above code is
> ill-formed, but if it is, I'd like to see a specific error message.
I think the problem is that Derived<U> is still an incomplete type when Base< Derived<U> > is instantiated as its base class, so T::Type cannot be found at that point.
On 2/3/2012 4:32 PM, Vincent Lefevre wrote:
Yes, I do! The floating-point representation of this number
This fact is not even necessarily correct because you don't know the
intent of the programmer. In the program,
double a = 4.47460300787e+182;
could mean two things:
1. A number whic
On 2012-02-03 17:44:21 +0100, Michael Matz wrote:
> Hi,
>
> On Fri, 3 Feb 2012, Vincent Lefevre wrote:
>
> > > > For the glibc, I've finally reported a bug here:
> > > >
> > > > http://sourceware.org/bugzilla/show_bug.cgi?id=13658
> > >
> > > That is about 1.0e22, not the obscene 4.47460300787e+182 of the original poster.
On 2012-02-03 11:35:39 -0500, Robert Dewar wrote:
> On 2/3/2012 10:55 AM, Vincent Lefevre wrote:
> >On 2012-02-03 10:33:58 -0500, Robert Dewar wrote:
> >>What is the basis for that claim? to me it seems useless to expect
> >>anything from such absurd arguments. Can you cite a requirement to
> >>the
Hello,
compiling the following:
---8<---8<---8<---8<---
template <typename T>
struct Base
{
    typename T::Type var;
};

template <typename U>
struct Derived : Base< Derived<U> >
{
    typedef U Type;
};

void foo()
{
    Derived<int> i;   // template argument assumed; it was stripped in this copy
}
---8<---8<---8<---8<---
gives the error
gcctempl.cpp: In instantiation of ‘
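The error message is cut off above; the reply excerpted earlier points at the
underlying issue, namely that Derived<U> is still incomplete while
Base< Derived<U> > is being instantiated, so the base cannot see the typedef.
A common workaround, sketched here under that reading (not taken from the
thread), is to publish the nested type through a separate traits template
that is already complete at that point:

// sketch of a possible rework, not the code from the thread
template <typename T> struct Traits;           // primary template, declared only

template <typename T>
struct Base
{
    typename Traits<T>::Type var;              // ask the traits, not T itself
};

template <typename U> struct Derived;          // forward declaration

template <typename U>
struct Traits< Derived<U> >                    // complete before Derived is needed
{
    typedef U Type;
};

template <typename U>
struct Derived : Base< Derived<U> >
{
    typedef typename Traits< Derived<U> >::Type Type;
};

void foo()
{
    Derived<int> i;                            // instantiates cleanly
    (void)i;
}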
On Fri, Feb 03, 2012 at 12:00:03PM -0800, Linus Torvalds wrote:
> On Fri, Feb 3, 2012 at 11:16 AM, Andrew MacLeod wrote:
[ . . . ]
> Having access to __ATOMIC_ACQUIRE would actually be an improvement -
> it's just that the architectures that really care about things like
> that simply don't matt
On Fri, Feb 3, 2012 at 11:16 AM, Andrew MacLeod wrote:
>> The special cases are because older x86 cannot do the generic
>> "add_return" efficiently - it needs xadd - but can do atomic versions
>> that test the end result and give zero or sign information.
>
> Since these are older x86 only, cou
On 02/03/2012 12:16 PM, Linus Torvalds wrote:
So we have several atomics we use in the kernel, with the more common being
- add (and subtract) and cmpxchg of both 'int' and 'long'
This would be __atomic_fetch_add, __atomic_fetch_sub, and
__atomic_compare_exchange.
For 4.8 __atomic_compare_e
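A minimal sketch (mine, not from the thread) of how the kernel-style
operations named above map onto these built-ins, using seq-cst ordering for
simplicity even though the thread's point is precisely that the kernel often
wants weaker orders such as __ATOMIC_ACQUIRE; the *_sketch names are made up:

#include <stdio.h>

static int atomic_fetch_add_sketch(int *v, int n)
{
    return __atomic_fetch_add(v, n, __ATOMIC_SEQ_CST);   /* returns old value */
}

static int atomic_add_return_sketch(int *v, int n)
{
    return __atomic_add_fetch(v, n, __ATOMIC_SEQ_CST);   /* returns new value */
}

static int atomic_cmpxchg_sketch(int *v, int old, int new_)
{
    int expected = old;
    /* strong compare-exchange; on failure 'expected' receives the observed value */
    __atomic_compare_exchange_n(v, &expected, new_, 0,
                                __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
    return expected;   /* like cmpxchg(): the value that was actually there */
}

int main(void)
{
    int v = 40;
    atomic_fetch_add_sketch(&v, 1);               /* v == 41, returns 40 */
    int r1 = atomic_add_return_sketch(&v, 1);     /* v == 42, returns 42 */
    int r2 = atomic_cmpxchg_sketch(&v, 42, 0);    /* v == 0,  returns 42 */
    printf("%d %d %d\n", v, r1, r2);              /* 0 42 42 */
    return 0;
}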
On 02/03/2012 04:13 PM, Robert Dewar wrote:
On 2/3/2012 10:01 AM, Michael Matz wrote:
No normal math library supports such an extreme range, even basic
identities (like cos^2+sin^2=1) aren't retained with such inputs.
I agree: the program is complete nonsense. It would be useful to know
what the intent was.
On 2/3/2012 1:12 PM, Konstantin Vladimirov wrote:
Hi,
I agree that this case has no practical value. It was autogenerated
among thousands of other tests and showed really strange results, so
I decided to ask. I thought this value fits the double precision range
and, according to the C standard, all
Hi,
I agree that this case has no practical value. It was autogenerated
among thousands of other tests and showed really strange results, so
I decided to ask. I thought this value fits the double precision range
and, according to the C standard, all double-precision arithmetic must be
available for
On Fri, Feb 3, 2012 at 8:38 AM, Andrew MacLeod wrote:
>
> The atomic intrinsics were created for c++11 memory model compliance, but I
> am certainly open to enhancements that would make them more useful. I am
> planning some enhancements for 4.8 now, and it sounds like you may have some
> suggestions.
On Fri, 3 Feb 2012, Dominique Dhumieres wrote:
> Note that sqrt(2.0)*sin(4.47460300787e+182+pi/4) gives a different value
> for the sum.
In double: 4.47460300787e+182 + pi/4 == 4.47460300787e+182
Ciao,
Michael.
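To make that concrete (my own check, not part of the thread): at this
magnitude the spacing between adjacent doubles is enormously larger than
pi/4, so the addition rounds straight back to a. Compile with -lm:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double a  = 4.47460300787e+182;
    double pi = acos(-1.0);

    /* one ulp at this magnitude is 2^554, roughly 6e+166 */
    printf("ulp(a)        = %g\n", nextafter(a, INFINITY) - a);
    printf("a + pi/4 == a : %d\n", a + pi / 4 == a);   /* prints 1 */
    return 0;
}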
Hi,
On Fri, 3 Feb 2012, Vincent Lefevre wrote:
> > > For the glibc, I've finally reported a bug here:
> > >
> > > http://sourceware.org/bugzilla/show_bug.cgi?id=13658
> >
> > That is about 1.0e22, not the obscene 4.47460300787e+182 of the
> > original poster.
>
> But 1.0e22 cannot be handled correctly.
While I fail to see how the "correct value" of
cos(4.47460300787e+182)+sin(4.47460300787e+182)
can be defined in the 'double' world, cos^2(x)+sin^2(x)=1 and
sin(2*x)=2*sin(x)*cos(x) seem to be verified (at least for this value)
even if the actual values of sin and cos depend on the optimisation
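A quick way to check that observation numerically (my own sketch, not code
from the thread); with an accurate argument reduction, as in glibc, both
differences should come out within a few ulp. Compile with -lm:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 4.47460300787e+182;
    double s = sin(x), c = cos(x);

    /* cos^2+sin^2 == 1 only needs sin and cos to be mutually consistent;
       the double-angle identity also needs an accurate reduction of both
       x and 2*x (doubling x is exact at this magnitude). */
    printf("cos^2+sin^2 - 1     = %g\n", c * c + s * s - 1.0);
    printf("sin(2x) - 2*sin*cos = %g\n", sin(2.0 * x) - 2.0 * s * c);
    return 0;
}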
And I assume that since the compiler does them, that would now make it
impossible for us to gather a list of all the 'lock' prefixes so that
we can undo them if it turns out that we are running on a UP machine.
When we do SMP operations, we don't just add a "lock" prefix to it. We do this:
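The preview is cut off right before the example, but the mechanism being
described is a macro that emits the lock prefix and records its address in a
dedicated ELF section, so early boot code on a uniprocessor machine can patch
the prefix to a nop. A simplified x86-only sketch in that spirit (the helper
name atomic_inc_sketch is made up; this is not the kernel's actual code):

#include <stdio.h>

/* emit "lock" and record where it lives in a separate .smp_locks section */
#define LOCK_PREFIX                              \
        ".pushsection .smp_locks, \"a\"\n\t"     \
        ".balign 4\n\t"                          \
        ".long 671f - .\n\t"                     \
        ".popsection\n"                          \
        "671:\n\t"                               \
        "lock; "

static inline void atomic_inc_sketch(int *v)
{
    __asm__ __volatile__(LOCK_PREFIX "incl %0" : "+m" (*v));
}

int main(void)
{
    int v = 0;
    atomic_inc_sketch(&v);
    printf("%d\n", v);   /* 1 */
    return 0;
}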
On 2/3/2012 10:55 AM, Vincent Lefevre wrote:
On 2012-02-03 10:33:58 -0500, Robert Dewar wrote:
On 2/3/2012 10:28 AM, Vincent Lefevre wrote:
If the user requested such a computation, there should at least be
some intent. Unless an option like -ffast-math is given, the result
should be accurate.
On 2012-02-03 16:57:19 +0100, Michael Matz wrote:
> > And it may be important that some identities (like cos^2+sin^2=1) be
> > preserved.
>
> Well, you're not going to get this without much more work in sin/cos.
If you use the glibc sin() and cos(), you already have this (possibly
up to a few ulp
Hi,
On Fri, 3 Feb 2012, Vincent Lefevre wrote:
> > >No normal math library supports such an extreme range, even basic
> > >identities (like cos^2+sin^2=1) aren't retained with such inputs.
> >
> > I agree: the program is complete nonsense.
>
> I disagree: there may be cases where large inputs c
On 2012-02-03 10:33:58 -0500, Robert Dewar wrote:
> On 2/3/2012 10:28 AM, Vincent Lefevre wrote:
> >If the user requested such a computation, there should at least be
> >some intent. Unless an option like -ffast-math is given, the result
> >should be accurate.
>
> What is the basis for that claim?
On 2/3/2012 10:28 AM, Vincent Lefevre wrote:
On 2012-02-03 10:13:58 -0500, Robert Dewar wrote:
On 2/3/2012 10:01 AM, Michael Matz wrote:
No normal math library supports such an extreme range, even basic
identities (like cos^2+sin^2=1) aren't retained with such inputs.
I agree: the program is complete nonsense.
On 2012-02-03 10:13:58 -0500, Robert Dewar wrote:
> On 2/3/2012 10:01 AM, Michael Matz wrote:
> >No normal math library supports such an extreme range, even basic
> >identities (like cos^2+sin^2=1) aren't retained with such inputs.
>
> I agree: the program is complete nonsense.
I disagree: there
On 2/3/2012 10:01 AM, Michael Matz wrote:
No normal math library supports such an extreme range, even basic
identities (like cos^2+sin^2=1) aren't retained with such inputs.
I agree: the program is complete nonsense. It would be useful to know
what the intent was.
Ciao,
Michael.
Hi,
On Fri, 3 Feb 2012, Richard Guenther wrote:
> > int main(void)
> > {
> > double a = 4.47460300787e+182;
> > slipped = -1.141385
> > That is correct.
> >
> > slipped = -0.432436
> > That is obviously incorrect.
How did you determine that one is correct and the other obviously
incorrect? N
On Fri, Feb 3, 2012 at 2:26 PM, Konstantin Vladimirov wrote:
> Hi,
>
> Consider minimal reproduction code:
>
> #include "math.h"
> #include "stdio.h"
>
> double __attribute__ ((noinline))
> slip(double a)
> {
> return (cos(a) + sin(a));
> }
>
> int main(void)
> {
> double a = 4.47460300787e+182;
Hi,
Consider minimal reproduction code:
#include "math.h"
#include "stdio.h"
double __attribute__ ((noinline))
slip(double a)
{
    return (cos(a) + sin(a));
}

int main(void)
{
    double a = 4.47460300787e+182;
    double slipped = slip(a);
    printf("slipped = %lf\n", slipped);
    return 0;
}
Compil
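A sketch of where two different results for this program can come from (my
own test, not code from the thread, motivated by the "depends on the
optimisation" remark earlier): with a literal argument GCC may constant-fold
cos/sin at build time using its internal MPFR-based arithmetic, while a value
it cannot see goes through the C library at run time. Hiding the constant
behind a volatile separates the two paths. Compile with -lm:

#include <math.h>
#include <stdio.h>

int main(void)
{
    volatile double hidden = 4.47460300787e+182;   /* blocks constant folding */

    double folded  = cos(4.47460300787e+182) + sin(4.47460300787e+182);
    double runtime = cos(hidden) + sin(hidden);

    printf("folded  = %f\n", folded);
    printf("runtime = %f\n", runtime);
    return 0;
}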
On 2012-01-31 09:16:09 +0100, David Brown wrote:
> For normal variables, "a = b = 0" is just ugly - but that is a
> matter of opinion.
and it is handled badly by GCC:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=52106
(this is just a missing warning... Still, inconsistency is bad.)
--
Vincent
On Fri, Feb 03, 2012 at 09:37:22AM, Richard Guenther wrote:
> On Fri, 3 Feb 2012, DJ Delorie wrote:
>
> >
> > Jan Kara writes:
> > > we've spotted the following mismatch between what kernel folks expect
> > > from a compiler and what GCC really does, resulting in memory corruption
> > >
On Fri, 3 Feb 2012, DJ Delorie wrote:
>
> Jan Kara writes:
> > we've spotted the following mismatch between what kernel folks expect
> > from a compiler and what GCC really does, resulting in memory corruption on
> > some architectures. Consider the following structure:
> > struct x {
> >
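The preview stops before the structure itself; an illustrative layout of the
hazard being described (my example, not necessarily the reporter's) puts a
one-bit bitfield in the same naturally aligned word as an independently
updated field:

/* illustrative only, not the structure from the report */
struct word_sharing_example {
    long         a;           /* unrelated field                     */
    unsigned int counter;     /* updated concurrently by another CPU */
    unsigned int flag : 1;    /* shares a 64-bit word with 'counter' */
};

void set_flag(struct word_sharing_example *p)
{
    /* If the compiler implements this one-bit store as a 64-bit
       read-modify-write that also covers 'counter', a concurrent
       update of 'counter' can be silently lost. */
    p->flag = 1;
}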