Bernd Trog wrote:
This shows, IMHO, that the bug is related to the encoding/decoding of
Integers when the target Integer'Size is 16 bit.
Comments? Ideas?
None at all, Uint should have nothing to do with target
integer size, so you must somehow have some confusion!
On Sun, 30 Apr 2006, Robert Dewar wrote:
> Bernd Trog wrote:
> > package i is
> >    subtype I32767 is Integer range -32767 .. 32767;
> >    -- Note: -32767 is in the Uint_Direct range!
>
> This is a host type, not a target type, and this Integer
> is the host integer.
FWIW, if I include -32768
Bernd Trog wrote:
Reading the comments in ttypes.ads suggests that there is at least a
clear distinction between host and target integer types.
Yes, it tries to, so if you find problems they should be fixed.
package i is
   subtype I32767 is Integer range -32767 .. 32767;
   -- Note: -32767 is in the Uint_Direct range!
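For what it's worth, the host/target distinction mentioned above is easy to see directly. The following is only my own illustration (the name Show_Integer_Size is made up, and this is not code from ttypes.ads): built with a native host compiler it typically prints 32, while a cross compiler configured for the 16-bit target discussed here would print 16.

with Ada.Text_IO; use Ada.Text_IO;

procedure Show_Integer_Size is
begin
   --  Native host compiler: usually 32.  Cross compiler for the
   --  16-bit target discussed in this thread: 16.
   Put_Line ("Standard.Integer'Size =" & Integer'Image (Integer'Size));
end Show_Integer_Size;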
Bernd Trog wrote:
On Wed, 26 Apr 2006, Robert Dewar wrote:
Bernd Trog wrote:
I'm chasing a bug that only appears when Standard.Integer'Size is 16:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26849
Trying to make the compiler work with standard integer size of
16 will be very difficult I fear.
On Wed, 26 Apr 2006, Robert Dewar wrote:
> Bernd Trog wrote:
>
> > I'm chasing a bug that only appears when Standard.Integer'Size is 16:
> > http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26849
>
> Trying to make the compiler work with standard integer size of
> 16 will be very difficult I fear.
Do
Bernd Trog wrote:
Is the handling of the value -32768 optimized in any way, while
-32769 and -32767 are not optimized in the same way?
No, see below
For interest, why do you ask?
I'm chasing a bug that only appears when Standard.Integer'Size is 16:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26849
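As an aside (my own arithmetic note, not part of the original exchange): these three values bracket the 16-bit boundary. With Integer'Size = 16, Integer'First = -2**15 = -32768 and Integer'Last = 2**15 - 1 = 32767, so -32767 and -32768 are representable while -32769 is not. A tiny sketch, using a hypothetical Int16 type to stand in for a 16-bit Integer:

with Ada.Text_IO; use Ada.Text_IO;

procedure Bounds_Demo is
   --  Stands in for a 16-bit Integer, independent of Standard.Integer'Size.
   type Int16 is range -2**15 .. 2**15 - 1;
begin
   Put_Line ("Int16'First =" & Int16'Image (Int16'First));  --  -32768
   Put_Line ("Int16'Last  =" & Int16'Image (Int16'Last));   --   32767
end Bounds_Demo;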
On Wed, 26 Apr 2006, Robert Dewar wrote:
> Bernd Trog wrote:
> > can someone please explain the huge change in the internal
> > integer representation (Uint) from -32769 to -32767?
>
> just a matter of efficiency for commonly used values
Does this mean that there are three different representation
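Since the same question recurs below, here is a minimal sketch of the idea behind the "direct" range, under my own assumptions; the names and bounds are placeholders, and the real ones live in uintp.ads. Small, commonly used values are encoded in the Uint itself, while values outside that range are stored indirectly, which is why the internal form changes so visibly somewhere between -32767 and -32769 even though the three values are semantically equivalent to the front end.

with Ada.Text_IO; use Ada.Text_IO;

procedure Uint_Sketch is
   --  Hypothetical direct range; the real bounds are defined in uintp.ads
   --  and are not confirmed by this thread.
   Direct_First : constant Long_Long_Integer := -32_768;
   Direct_Last  : constant Long_Long_Integer :=  32_767;

   function Representation (V : Long_Long_Integer) return String is
   begin
      if V in Direct_First .. Direct_Last then
         return "direct (value encoded in the Uint itself)";
      else
         return "indirect (digits kept in a side table)";
      end if;
   end Representation;
begin
   Put_Line ("-32767 => " & Representation (-32_767));
   Put_Line ("-32768 => " & Representation (-32_768));
   Put_Line ("-32769 => " & Representation (-32_769));
end Uint_Sketch;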
Bernd Trog wrote:
Hello,
can someone please explain the huge change in the internal
integer representation (Uint) from -32769 to -32767?
just a matter of efficiency for commonly used values
What's the difference between these three values, from
the Ada FE's point of view?
none at all. Uint i
Hello,
can someone please explain the huge change in the internal
integer representation (Uint) from -32769 to -32767?
What's the difference between these three values, from
the Ada FE's point of view?
Thanks!