In the main function there is a cast of an int result to signed char.

typedef  int  int32_T;
typedef  unsigned int  uint32_T;
typedef  signed char  int8_T;

#include <stdio.h> 

int main(void)
{
    int32_T i;
    uint32_T numerator;

    for (i = 126; i < 256; i++) {

      numerator = (uint32_T)((int8_T)(-128 + i) + 128) * 63U;

      fprintf(stdout, "i = %d numerator = %u\n", i, numerator); 
    }
    return numerator;
}
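
Note that for every i in the loop range [126, 255], -128 + i is between -2 and 127, so the (int8_T) cast stays within the range of signed char and adding 128 back simply recovers i; as far as I can tell the expression should therefore reduce to (uint32_T)i * 63U on every iteration.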

This program is compiled with gcc -O0 -g.
I expect the successive values of the variable "numerator" to be:
126 * 63
127 * 63
128 * 63
....

But the printed results correspond to:
126 * 63
127 * 63
-128 * 63
....
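
For example, at i = 128 the expression should evaluate as (int8_T)(-128 + 128) + 128 = 0 + 128 = 128, so numerator should be 128 * 63 = 8064; the generated code instead appears to compute (int8_T)128 * 63 = -128 * 63, which shows up as a huge value when stored in the unsigned "numerator".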

The generated assembly code on a Mac with an Intel Core 2 Duo processor is:

        movl    -24(%ebp), %eax
        movsbl  %al,%edx
        movl    %edx, %eax
        sall    $6, %eax
        subl    %edx, %eax
        movl    %eax, -20(%ebp)
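
Reading the assembly, movsbl sign-extends the low byte of i, and the shift left by 6 followed by the subtract is the multiply by 63. So the generated code appears to be equivalent to the following C (my reconstruction from the assembly, not actual source):

      numerator = (uint32_T)(int8_T)i * 63U;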

It seems both -128 and +128 are ignored.

This problem does not appear with gcc 4.1.1.


-- 
           Summary: wrong code on casting int result to signed char
           Product: gcc
           Version: 4.0.1
            Status: UNCONFIRMED
          Severity: major
          Priority: P3
         Component: c
        AssignedTo: unassigned at gcc dot gnu dot org
        ReportedBy: hongbo dot yang at mathworks dot com
 GCC build triplet: i686-linux-gnu
  GCC host triplet: i686-linux-gnu
GCC target triplet: i686-linux-gnu


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=35774
