Hi,

This is a one-line patch for unexpected behaviour noticed on ARM
and x86 when testing the D front end.

---
import core.stdc.stdio;
import core.stdc.stdint;

void test(void* p)
{
    uint64_t p1 = cast(uint64_t)p;           // direct widening: should zero-extend
    uint64_t p2 = cast(uint64_t)cast(int)p;  // explicit signed intermediate: sign-extends
    int tmp = cast(int)p;
    uint64_t p3 = cast(uint64_t)tmp;         // same, via a temporary

    printf("%llx %llx %llx\n", p1, p2, p3);
}

void main()
{
    void* p = cast(void*)0xFFEECCAA;
    test(p);
}

------------------------------
Output is:
ffffffffffeeccaa ffffffffffeeccaa ffffffffffeeccaa

Expected:
ffeeccaa ffffffffffeeccaa ffffffffffeeccaa



A quick conversion of the test to C showed that the same thing occurs with GCC too.

This is the comment associated with the changed code in convert_to_integer:

/* Convert to an unsigned integer of the correct width first, and
from there widen/truncate to the required type.  Some targets support
the coexistence of multiple valid pointer sizes, so fetch the one we
need from the type.  */

Currently, GCC is converting the expression to a signed integer
instead of an unsigned one.  Should a test be added to the testsuite
for this?


Regards,
Iain.
diff --git a/gcc/convert.c b/gcc/convert.c
index 4cf5001..262d080 100644
--- a/gcc/convert.c
+++ b/gcc/convert.c
@@ -547,7 +547,7 @@ convert_to_integer (tree type, tree expr)
         from the type.  */
       expr = fold_build1 (CONVERT_EXPR,
                          lang_hooks.types.type_for_size
-                           (TYPE_PRECISION (intype), 0),
+                           (TYPE_PRECISION (intype), 1),
                          expr);
       return fold_convert (type, expr);
 
