Re: tr1::unordered_set bizarre rounding behavior (x86)

2005-07-06 Thread Avi Kivity
On Tue, 2005-07-05 at 20:05 +0200, Gabriel Dos Reis wrote:
> Paolo Carlini writes:
> It is definitely a good thing to use the full bits of value
> representation if we ever want to make all "interesting" bits part of
> the hash value.  For reasonable or sane representations it suffices to
> get your hand on the object representation, e.g.:
> 
>    const int objsize = sizeof (double);
>    typedef unsigned char objrep_t[objsize];
>    double x = ...;
>    objrep_t& p = reinterpret_cast<objrep_t&>(x);
>    // ...
> 
> and leave frexp and friends only for less obvious value representations.

Most architectures have different bit representations for +0.0 and -0.0, yet
the two values compare equal, so a hash computed from the raw object
representation would not be consistent with operator==.
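
To illustrate, a minimal sketch (not part of the original message; an IEEE 754
target such as x86 is assumed):

    #include <cassert>
    #include <cstdio>
    #include <cstring>

    int main()
    {
        double pos = +0.0;
        double neg = -0.0;

        // The two values compare equal...
        assert(pos == neg);

        // ...but their object representations differ in the sign bit.
        unsigned char pbits[sizeof(double)];
        unsigned char nbits[sizeof(double)];
        std::memcpy(pbits, &pos, sizeof pos);
        std::memcpy(nbits, &neg, sizeof neg);

        std::printf("bitwise identical: %s\n",
                    std::memcmp(pbits, nbits, sizeof(double)) == 0 ? "yes" : "no");
        // Prints "no" on x86, so a hash taken over these bytes would put two
        // equal keys into different buckets.
        return 0;
    }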



Re: tr1::unordered_set bizarre rounding behavior (x86)

2005-07-06 Thread Avi Kivity
On Wed, 2005-07-06 at 15:54 +0300, Michael Veksler wrote:

> > most architectures have different bit representations for +0.0 and -0.0,
> > yet the two values compare equal.
> >
> 
> Yet, their sign bit is observable through things like
>   assert(a == 0.0);
>   assert(b == 0.0);
>   1/(1/a + 1/b)
> which would give either NaN or 0 depending on the sign
> of a and b.
> 
> So do you want one or two copies in the set?
> 
What matters is whether the sign bit is observable through the equality
predicate. In the case of operator==(double, double) it is not observable, so
there should be only one copy in the set.
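
A minimal sketch of a hash that respects this (illustrative only, not the
libstdc++ code under discussion; it assumes GCC's <tr1/unordered_set> header
and a hypothetical functor name double_hash):

    #include <tr1/unordered_set>
    #include <cstring>

    // Fold -0.0 into +0.0 so the hash never separates values that
    // operator==(double, double) treats as equal.
    struct double_hash
    {
        std::size_t operator()(double d) const
        {
            if (d == 0.0)
                d = 0.0;   // -0.0 compares equal to 0.0, so both become +0.0

            // Simple FNV-style mix over the object representation.
            unsigned char bytes[sizeof(double)];
            std::memcpy(bytes, &d, sizeof d);
            std::size_t h = static_cast<std::size_t>(2166136261u);
            for (std::size_t i = 0; i != sizeof(double); ++i)
                h = (h ^ bytes[i]) * 16777619u;
            return h;
        }
    };

    typedef std::tr1::unordered_set<double, double_hash> double_set;
    // double_set s; s.insert(+0.0); s.insert(-0.0);  // leaves s.size() == 1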



Re: Where does the C standard describe overflow of signed integers?

2005-07-15 Thread Avi Kivity

Paul Schlie wrote:


In that case you may want to stick with -O0.  There are *lots* of
things GCC does that alter undefined cases.  How about the undefined
behavior when aliasing rules are violated?  Would you want to make
-fno-strict-aliasing be the only supported setting?

- Isn't the purpose of "restrict" to explicitly enable the compiler to
 more aggressively optimize references which it may not have been
 able to identify as being strictly safe?  (As opposed to feeling
 compelled to presume otherwise, potentially disastrously, without
 explicit permission to do so?)
 

"restrict" allows the compiler to make further assumptions than 
-fstrict-aliasing alone permits. With strict aliasing, the compiler assumes
that an int pointer and a long pointer cannot refer to the same object,
whereas "restrict" lets you state that even two int pointers do not refer to
the same object.
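
A small sketch of the distinction (my example, not from the thread; written as
C++ using GCC's __restrict__ spelling, where C99 would use restrict):

    // Strict aliasing alone: p and q have different (non-character) types,
    // so the compiler may assume they do not refer to the same object.
    long type_based(int* p, long* q)
    {
        *p = 1;
        *q = 2;      // under -fstrict-aliasing, assumed not to touch *p
        return *p;   // may be folded to 1
    }

    // Two pointers of the same type may alias, so the load must be redone.
    int same_type(int* a, int* b)
    {
        *a = 1;
        *b = 2;
        return *a;   // could be 1 or 2 depending on whether a == b
    }

    // restrict (__restrict__) is the programmer's promise that they don't.
    int same_type_restrict(int* __restrict__ a, int* __restrict__ b)
    {
        *a = 1;
        *b = 2;
        return *a;   // the compiler may now assume this is 1
    }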


--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



Re: Warning on C++ catch by value on non primitive types

2005-10-16 Thread Avi Kivity

Kai Henningsen wrote:

So what you say is that any decent modern C++ coding guide/list wants to  
forbid catching the C++ standard exceptions, and anything derived from  
them?


 


No, only catch by value is problematic, as it can lead to slicing.

Catch by reference is perfectly fine.
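
A short sketch of the slicing problem (my example, not from the thread):

    #include <exception>
    #include <stdexcept>
    #include <iostream>

    int main()
    {
        try {
            throw std::runtime_error("disk on fire");
        } catch (std::exception e) {          // by value: slices to the base
            std::cout << e.what() << '\n';    // typically a generic message
        }

        try {
            throw std::runtime_error("disk on fire");
        } catch (const std::exception& e) {   // by reference: no slicing
            std::cout << e.what() << '\n';    // prints "disk on fire"
        }
        return 0;
    }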



Build successful for GCC 3.2.2 on FC4, or backward bootstrap compatibility

2005-12-18 Thread Avi Kivity
I recently had to build gcc 3.2.2 on an FC4 box. This failed using gcc 
4.0.2 as the bootstrap compiler since gcc 3.2.2 uses no-longer-accepted 
extensions. So I built gcc 3.4.5 using 4.0.2, and used that to bootstrap 
3.2.2.


Now, if it is already part of the release criteria that release N-1 must be
buildable with release N, then that criterion worked well here. If not, I'd
like to suggest adding it, as it makes building older gcc releases possible
(though perhaps lengthy).


Avi

--
error compiling committee.c: too many arguments to function