Dear developers,

Could I ask you to confirm whether the following program triggers a genuine
wrong-code issue? I don't want to add noise to the bug repository, so I'd
like to ask for confirmation here first.

The following case makes GCC produce different outputs at -O1 and below
versus at -O2 and above. Here is the program (s.c):
```
#include <stdint.h>
#include <stdio.h>

union {
  int32_t a;
  uint16_t b;
} c;

static uint16_t *d = &c.b;   /* points at c.b */
int32_t *e = &c.a;
int32_t **f = &e;
int32_t ***g = &f;           /* ***g resolves to c.a */
int32_t *h = &c.a;

int64_t i() {
  int8_t j = 5;
  *h = j;      /* store 5 to c.a */
  ***g = *d;   /* read c.b, store the value back to c.a */
  return 0;
}

int main() {
  c.a = 1;
  c.b = 1;
  i();
  printf("%d\n", c.a);
}
```

```
$gcc-trunk -O0 -w s.c ; ./a.out
5
$gcc-trunk -O1 -w s.c ; ./a.out
5
$gcc-trunk -O2 -w s.c ; ./a.out
1
$gcc-trunk -O3 -w s.c ; ./a.out
1
$gcc-trunk -Os -w s.c ; ./a.out
1
```

What surprised me is that almost all GCC versions show this difference
(more details here: https://godbolt.org/z/vTzhhvnGE). Clang also shows the
same discrepancy between -O0 and higher optimization levels. So, does this
program have any undefined behavior? If not, I would like to file new bug
reports. Thank you so much!
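
For comparison, here is a minimal sketch of the same store/load sequence
performed directly through the union members rather than through the saved
pointers (my addition; it assumes a little-endian target, where c.b overlaps
the low 16 bits of c.a). To my understanding this direct form is the
type-punning pattern GCC documents as supported, so it may help isolate
whether the pointer indirection is what the optimizer relies on:
```
#include <stdint.h>
#include <stdio.h>

union {
  int32_t a;
  uint16_t b;
} c;

int main(void) {
  c.a = 1;
  c.b = 1;
  /* Same sequence as i(), but through the union object itself: */
  c.a = 5;     /* corresponds to *h = j;                        */
  c.a = c.b;   /* corresponds to ***g = *d; reads the low 16    */
               /* bits of the value just stored (little-endian) */
  printf("%d\n", c.a);  /* expected to print 5 on such targets  */
}
```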


Best regards,
Haoxin
