gcc 3.4.6 evaluates wrong size for dynamic size arrays (hence binaries compiled
with it can yield wrong results or even segfault)

In the example source code below, the code generated by gcc 3.4.6 accesses the
second variable-length array as if it were at a different location than it
actually is. By contrast, gcc 2.95.3 and 4.1.2 generate correct code.

I've checked this both on x86 and ppc, and it happens on both platforms.

The difference in the x86 assembly seems to come down to the function epilogue
shown below (`leave` from 2.95.3 versus a bare `popl %ebp` from 3.4.3). In
fact, scattering printf() calls, with their pushes and pops, over the accesses
to the vector elements seems a viable, if horrible, workaround:

| bash-3.00$ diff -U10 buglet.s-{3.4.3,2.95.3}
| --- buglet.s-3.4.3    2008-05-28 16:03:47.000000000 -0700
| +++ buglet.s-2.95.3     2008-05-28 16:01:53.000000000 -0700
| @@ -111,5 +111,5 @@
|         leal    -8(%ebp), %esp
|         popl    %esi
|         popl    %edi
| -       popl    %ebp
| +       leave
|         ret
|         .size   main, .-main
|         .section        .note.GNU-stack,"",@progbits



Reproducible: Always

Steps to Reproduce:
Try and build the following code:

#include <stdio.h>

int main(){
     struct input_{
         unsigned int num;
         unsigned int key_siz;
         unsigned int val_siz;
         unsigned char key [ 5 ] [ 3 ];
         unsigned char val [ 5 ] [ 6 ];
     } __attribute__((packed)) input = {
         5,
         3,
         6,
         {
             'a', 'b', 'c',
             'd', 'e', 'f',
             'g', 'h', 'i',
             'j', 'k', 'l',
             'm', 'n', 'o',
         },
         {
             'A', 'B', 'C', 'D', 'E', 'F',
             'G', 'H', 'I', 'J', 'K', 'L',
             'M', 'N', 'O', 'P', 'Q', 'R',
             'S', 'T', 'U', 'V', 'W', 'X',
             'Y', 'Z', '0', '1', '2', '3',
         },
     };

     unsigned int num = input.num;
     unsigned int key_siz = input.key_siz;
     unsigned int val_siz = input.val_siz;

     struct payload_ {
         unsigned int num;
         unsigned int key_siz;
         unsigned int val_siz;
         unsigned char key [ num ] [ key_siz ];
         unsigned char val [ num ] [ val_siz ];
     } __attribute__((packed)) * data = (struct payload_ *)&input;

     while(num--){
         printf("%3.3s -> %6.6s\n", data->key[num], data->val[num]);
     }

     return 0;
}


Actual Results:  

The binary produced by gcc 3.4.6 shows wrong results:

$ ./buglet-gcc-3.4.6
mno -> VWXYZ0
jkl -> MNOPQR
ghi -> DEFGHI
def -> jklmno
abc -> abcdef


Expected Results:  
The binary produced by gcc 2.95.3 or 4.1.2 shows the correct results:

$ ./buglet-gcc-4.1.2 [or ./buglet-gcc-2.95.3]
mno -> YZ0123
jkl -> STUVWX
ghi -> MNOPQR
def -> GHIJKL
abc -> ABCDEF


In my actual code, perhaps because of the much larger indexes, I get
segmentation faults in unpredictable places. These don't happen with the good
compilers, of course.


-- 
           Summary: gcc 3.4.6 evaluates wrong size for dynamic size arrays
                    (hence binaries compiled with it can yield wrong results
                    or even segfault)
           Product: gcc
           Version: 3.4.6
            Status: UNCONFIRMED
          Severity: critical
          Priority: P3
         Component: c
        AssignedTo: unassigned at gcc dot gnu dot org
        ReportedBy: sandr8+gcc at gmail dot com
 GCC build triplet: 3.4.3, 3.4.6
  GCC host triplet: i686-pc-linux-gnu
GCC target triplet: i686-pc-linux-gnu


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36455
