https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114441

--- Comment #5 from Yang Wang <njuwy at smail dot nju.edu.cn> ---
(In reply to Xi Ruoyao from comment #4)
> (In reply to Yang Wang from comment #3)
> > (In reply to Andrew Pinski from comment #1)
> > > This is not a GCC bug.
> > > 
> > > You need to use -mcmodel=large if you have huge statically allocated 
> > > arrays.
> > > 
> > > The default -mcmodel=medium does not support arrays larger than 2GB.
> > > 
> > > See
> > > https://gcc.gnu.org/onlinedocs/gcc-13.2.0/gcc/x86-Options.html#index-mcmodel_003dlarge-3
> > 
> > Thanks for your reply! Indeed, the compilation was successful when either
> > -mcmodel=medium or -mcmodel=large was enabled. However, I'm still curious
> > why it can also be compiled successfully at the -O3 optimization level with
> > the default -mcmodel=small. As far as I know, -O3 is the highest
> > optimization level.
> 
> Because the entire `c` function is optimized to empty.  Note that `c` always
> invokes undefined behavior:
> 
> t.cc: In function 'void c()':
> t.cc:10:30: warning: iteration 1 invokes undefined behavior
> [-Waggressive-loop-optimizations]
>    10 |             a[d][e][f][g][i] = 2;
>       |             ~~~~~~~~~~~~~~~~~^~~
> t.cc:9:32: note: within this loop
>     9 |           for (size_t i = 0; i < 16; ++i)
>       |                              ~~^~~~
> 
> So the compiler is allowed to optimize it into nothing.
> 
> A test case without undefined behavior:
> 
> #include <stdio.h>
> int a[1][1][1][1][1];
> short b[6268435456];
> void c() {
>   a[0][0][0][0][0] = 114514;
> }
> int main() {}
> 
> fails at -O3 too.
> 
> Also note that -O levels are for optimizing *valid* programs, not for
> debugging the compiler.  It's wrong to report bugs solely because of
> "different behaviors with different -O levels" (without analysing the
> validity of the test case).

Thanks for your reply! I'll pay more attention.
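
To make sure I understand the mechanism, here is an annotated copy of the test
case from comment #4. The file name, the compile commands, and the remarks
about data layout are my own assumptions, not something verified in this
report:

/* annotated.cc -- annotated copy of the test case from comment #4.
   The layout remarks below are my assumptions. */
#include <stdio.h>

int a[1][1][1][1][1];   /* small array in static storage */
short b[6268435456];    /* roughly 12.5 GB of static storage */

void c() {
  /* A valid, in-bounds store: the reference to `a` must survive even at
     -O3, unlike the out-of-bounds loop in my original test, which could
     be deleted as undefined behavior.  Because `b` is so large, the
     static data presumably no longer fits in the 2GB range the default
     code model can address, so the build fails at every -O level. */
  a[0][0][0][0][0] = 114514;
}

int main() {}

/* Expected outcome, based on comments #1 and #3 (my assumption for this
   exact file, which I have not re-verified):
     g++ -O3 annotated.cc                 -> fails with the default code model
     g++ -O3 -mcmodel=large annotated.cc  -> builds                         */

If I have misunderstood how the static data is laid out, please correct me.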
