False positive misleading indentation warning
Since I'm unable to create an account to report a bug and got no reply from
gcc-bugzilla-account-requ...@gcc.gnu.org, I'll dump this here.

Depending on the placement of a label, GCC gives a false positive warning
about misleading indentation. Below is a minimal working example to
reproduce it, together with the output from GCC.

Arsen Arsenović was so kind to confirm (on IRC) that this bug is
reproducible with a current GCC and asked me to report it.

/*
 * miside.c
 * MWE for a wrong warning shown with gcc -Wmisleading-indentation
 */

void
good(int c)
{
label:
        while (c != '-');
        if (c != '-')
                goto label;
}

void
bad(int c)
{
label:  while (c != '-');
        if (c != '-')
                goto label;
}

/*
% gcc -c -Wmisleading-indentation miside.c
miside.c: In function ‘bad’:
miside.c:18:9: warning: this ‘while’ clause does not guard... [-Wmisleading-indentation]
   18 | label:  while (c != '-');
      |         ^
miside.c:19:9: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the ‘while’
   19 |         if (c != '-')
      |         ^~
*/
Re: False positive misleading indentation warning
I think this is the same bug already filed here:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70954

Martin

On Wednesday, 2023-11-01 at 09:11 +0100, Rene Kita wrote:
> Since I'm unable to create an account to report a bug and got no reply
> from gcc-bugzilla-account-requ...@gcc.gnu.org, I'll dump this here.
>
> Depending on the placement of a label, GCC gives a false positive warning
> about misleading indentation. Below is a minimal working example to
> reproduce it, together with the output from GCC.
>
> Arsen Arsenović was so kind to confirm (on IRC) that this bug is
> reproducible with a current GCC and asked me to report it.
>
> /*
>  * miside.c
>  * MWE for a wrong warning shown with gcc -Wmisleading-indentation
>  */
>
> void
> good(int c)
> {
> label:
>         while (c != '-');
>         if (c != '-')
>                 goto label;
> }
>
> void
> bad(int c)
> {
> label:  while (c != '-');
>         if (c != '-')
>                 goto label;
> }
>
> /*
> % gcc -c -Wmisleading-indentation miside.c
> miside.c: In function ‘bad’:
> miside.c:18:9: warning: this ‘while’ clause does not guard... [-Wmisleading-indentation]
>    18 | label:  while (c != '-');
>       |         ^
> miside.c:19:9: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the ‘while’
>    19 |         if (c != '-')
>       |         ^~
> */
Question on GIMPLE shifts
Hi!

While investigating bit shifts I ran into something I cannot make sense of
in the following example:

int f(int x, int k)
{
    int tmp = x >> k;
    return (tmp & 1) << 10;
}

If we take a look at the GIMPLE, we get:

int f (int x, int k)
{
  int tmp;
  int D.2746;
  int _1;
  int _5;

  <bb 2>:
  tmp_4 = x_2(D) >> k_3(D);
  _1 = tmp_4 << 10;
  _5 = _1 & 1024;

  <bb 3>:
  return _5;
}

Is the statement '_1 = tmp_4 << 10' considered legal in GIMPLE? Given the
semantics of C bit shifts, this statement can shift into the sign bit,
potentially leading to signed overflow.

---
With best regards,
Daniil
Question about function Split with va_args
Hi,

I've been working on function splitting recently, and I've noticed that
functions taking variadic arguments (va_args) won't be split. Why is that?
I tried to understand the comments in the source code, but I still don't
get the specific reason.

At the same time, if I do want to split functions with variadic arguments
when it is safe to do so, how is "safe" defined here? In other words, how
can I tell whether a function with va_args can be split? Or can't it be?

Thanks,
Hanke Zhang
Re: False positive misleading indentation warning
On Wed, 1 Nov 2023 at 08:12, Rene Kita wrote:
>
> Since I'm unable to create an account to report a bug and got no reply
> from gcc-bugzilla-account-requ...@gcc.gnu.org

You did get a reply. Jose Marchesi replied to you 1.5 hours after your
email, but you didn't reply-all with the requested info that we needed to
confirm you weren't the spammer who was requesting an account every few
hours with different addresses.

If you still want an account, I can create one for you.
Suspecting a wrong behavior in the value range propagation analysis for __builtin_clz
I found an unexpected issue while working with an experimental target
(available here: https://github.com/EEESlab/tricore-gcc), but I was able to
reproduce it on mainstream architectures. For the sake of clarity and
reproducibility, I refer to upstream code throughout the rest of the
discussion.

Consider this simple test:

#include <stdio.h>
int f(unsigned int a) {
    unsigned int res = 8*sizeof(unsigned int) - __builtin_clz(a);
    if(res>0) printf("test passed\n");
    return res-1;
}

I tested this code on the GCC 9 and GCC 11 branches, obtaining the expected
result from GCC 9 and the wrong one from GCC 11. In GCC 11 and newer
versions, the condition check is removed by a gimple-level optimization (I
will provide details later), and at the assembly level the printf is always
invoked, with no branch.

According to the GCC manual, __builtin_clz "returns the number of leading
0-bits in x, starting at the most significant bit position. If x is 0, the
result is undefined." However, it is possible to define
CLZ_DEFINED_VALUE_AT_ZERO in the architecture backend to specify a defined
behavior for this case. For instance, this has been done for the SPARC and
AArch64 architectures. Compiling my test with SPARC GCC 13.2.0 with the -O3
flag on Compiler Explorer, I got this assembly:

.LC0:
        .asciz  "test"
f:
        save    %sp, -96, %sp
        call    __clzsi2, 0
         mov    %i0, %o0
        mov     %o0, %i0
        sethi   %hi(.LC0), %o0
        call    printf, 0
         or     %o0, %lo(.LC0), %o0
        mov     31, %g1
        return  %i7+8
         sub    %g1, %o0, %o0

After some investigation, I found that this optimization derives from the
results of the value range propagation analysis:
https://github.com/gcc-mirror/gcc/blob/master/gcc/gimple-range-op.cc#L917

In this code, I do not understand why CLZ_DEFINED_VALUE_AT_ZERO is checked
only if the function call is tagged as internal. A gimple call is tagged as
internal at creation time only when there is no associated function
declaration (see
https://github.com/gcc-mirror/gcc/blob/master/gcc/gimple.cc#L371), which is
not the case for the builtins. From my point of view, this condition
prevents the computation of the correct upper bound for this case,
resulting in a wrong result from the VRP analysis.

Before considering this behavior a bug, I prefer to ask the community, to
understand whether there is any aspect I have missed in my reasoning.
Re: GCC support addition for Safety compliances
On Wed, 1 Nov 2023 at 09:45, Vishal B Patil wrote:
>
> Hi team,
>
> I'm using MinGW win32. My total code size is around 82MB. I'm getting an
> error while compiling, "out of memory allocating 48 bytes"; I have
> attached a screenshot for your reference.
>
> I have cleaned the temp folder, but that didn't solve it. Then I did
> some research: it's about win32 MinGW, which has a limit of 2GB. My code
> uses more than 2GB of memory; in task manager it consumes 98% of memory
> and then crashes.
>
> Could you please help me with that.

This mailing list is for discussing the development of GCC itself, not for
asking for help using GCC. Please use the gcc-help list next time.

Your problem is not a problem with GCC, it's a problem with your program.
Either change your code to use less memory, or create a 64-bit executable
that does not have the 2GB limit.
Re: Question on GIMPLE shifts
On Wed, Nov 1, 2023 at 3:56 AM Daniil Frolov wrote:
>
> Hi!
>
> When investigating bit shifts I got an incomprehensible moment with
> the following example:
>
> int f(int x, int k)
> {
>     int tmp = x >> k;
>     return (tmp & 1) << 10;
> }
>
> If we would like to take a look into GIMPLE then we'll get:
>
> int f (int x, int k)
> {
>   int tmp;
>   int D.2746;
>   int _1;
>   int _5;
>
>   <bb 2>:
>   tmp_4 = x_2(D) >> k_3(D);
>   _1 = tmp_4 << 10;
>   _5 = _1 & 1024;
>
>   <bb 3>:
>   return _5;
>
> }
>
> Is the expression '_1 = tmp_4 << 10' considered legal in GIMPLE? Given
> the semantics of C bit shifts, this statement could modify the sign bit,
> potentially leading to overflow.

Except it was not undefined in C90.

Thanks,
Andrew

> ---
> With best regards,
> Daniil
Re: Suspecting a wrong behavior in the value range propagation analysis for __builtin_clz
On 11/1/23 05:29, Giuseppe Tagliavini via Gcc wrote:
> I found an unexpected issue working with an experimental target
> (available here: https://github.com/EEESlab/tricore-gcc), but I was able
> to reproduce it on mainstream architectures. For the sake of clarity and
> reproducibility, I always refer to upstream code in the rest of the
> discussion.
>
> Consider this simple test:
>
> #include <stdio.h>
> int f(unsigned int a) {
>     unsigned int res = 8*sizeof(unsigned int) - __builtin_clz(a);
>     if(res>0) printf("test passed\n");
>     return res-1;
> }
>
> I tested this code on GCC 9 and GCC 11 branches, obtaining the expected
> result from GCC 9 and the wrong one from GCC 11. In GCC 11 and newer
> versions, the condition check is removed by a gimple-level optimization
> (I will provide details later), and the printf is always invoked at the
> assembly level with no branch.
>
> According to the GCC manual, __builtin_clz "returns the number of
> leading 0-bits in x, starting at the most significant bit position. If x
> is 0, the result is undefined." However, it is possible to define
> CLZ_DEFINED_VALUE_AT_ZERO in the architecture backend to specify a
> defined behavior for this case. For instance, this has been done for the
> SPARC and AARCH64 architectures. Compiling my test with SPARC GCC 13.2.0
> with the -O3 flag on Compiler Explorer I got this assembly:
>
> .LC0:
>         .asciz  "test"
> f:
>         save    %sp, -96, %sp
>         call    __clzsi2, 0
>          mov    %i0, %o0
>         mov     %o0, %i0
>         sethi   %hi(.LC0), %o0
>         call    printf, 0
>          or     %o0, %lo(.LC0), %o0
>         mov     31, %g1
>         return  %i7+8
>          sub    %g1, %o0, %o0
>
> After some investigation, I found this optimization derives from the
> results of the value range propagation analysis:
> https://github.com/gcc-mirror/gcc/blob/master/gcc/gimple-range-op.cc#L917
> In this code, I do not understand why CLZ_DEFINED_VALUE_AT_ZERO is
> verified only if the function call is tagged as internal. A gimple call
> is tagged as internal at creation time only when there is no associated
> function declaration (see
> https://github.com/gcc-mirror/gcc/blob/master/gcc/gimple.cc#L371), which
> is not the case for the builtins. From my point of view, this condition
> prevents the computation of the correct upper bound for this case,
> resulting in a wrong result from the VRP analysis.
>
> Before considering this behavior as a bug, I prefer to ask the community
> to understand if there is any aspect I have missed in my reasoning.

It would help if you included the debugging dumps.

Jeff
Re: Suspecting a wrong behavior in the value range propagation analysis for __builtin_clz
Sure, I include the relevant tree dumps, obtained with the
"releases/gcc-11" branch. The "patch_" variants are the dumps produced
after disabling the check on the internal flag (I include the patch for
both the "releases/gcc-11" and "master" branches).

The pass under investigation is "evrp"; you can see how the if condition is
removed and the related BBs are merged when the range analysis provides
what I think is an unexpected result. The optimized dump changes
accordingly, but the troublesome transformation is the one performed by the
gimple VRP.

Giuseppe

From: Jeff Law
Sent: Wednesday, November 1, 2023 5:11 PM
To: Giuseppe Tagliavini; gcc@gcc.gnu.org
Subject: Re: Suspecting a wrong behavior in the value range propagation analysis for __builtin_clz

On 11/1/23 05:29, Giuseppe Tagliavini via Gcc wrote:
> I found an unexpected issue working with an experimental target
> (available here: https://github.com/EEESlab/tricore-gcc), but I was able
> to reproduce it on mainstream architectures. For the sake of clarity and
> reproducibility, I always refer to upstream code in the rest of the
> discussion.
>
> Consider this simple test:
>
> #include <stdio.h>
> int f(unsigned int a) {
>     unsigned int res = 8*sizeof(unsigned int) - __builtin_clz(a);
>     if(res>0) printf("test passed\n");
>     return res-1;
> }
>
> I tested this code on GCC 9 and GCC 11 branches, obtaining the expected
> result from GCC 9 and the wrong one from GCC 11. In GCC 11 and newer
> versions, the condition check is removed by a gimple-level optimization
> (I will provide details later), and the printf is always invoked at the
> assembly level with no branch.
>
> According to the GCC manual, __builtin_clz "returns the number of
> leading 0-bits in x, starting at the most significant bit position. If x
> is 0, the result is undefined." However, it is possible to define
> CLZ_DEFINED_VALUE_AT_ZERO in the architecture backend to specify a
> defined behavior for this case. For instance, this has been done for the
> SPARC and AARCH64 architectures. Compiling my test with SPARC GCC 13.2.0
> with the -O3 flag on Compiler Explorer I got this assembly:
>
> .LC0:
>         .asciz  "test"
> f:
>         save    %sp, -96, %sp
>         call    __clzsi2, 0
>          mov    %i0, %o0
>         mov     %o0, %i0
>         sethi   %hi(.LC0), %o0
>         call    printf, 0
>          or     %o0, %lo(.LC0), %o0
>         mov     31, %g1
>         return  %i7+8
>          sub    %g1, %o0, %o0
>
> After some investigation, I found this optimization derives from the
> results of the value range propagation analysis:
> https://github.com/gcc-mirror/gcc/blob/master/gcc/gimple-range-op.cc#L917
> In this code, I do not understand why CLZ_DEFINED_VALUE_AT_ZERO is
> verified only if the function call is tagged as internal. A gimple call
> is tagged as internal at creation time only when there is no associated
> function declaration (see
> https://github.com/gcc-mirror/gcc/blob/master/gcc/gimple.cc#L371), which
> is not the case for the builtins. From my point of view, this condition
> prevents the computation of the correct upper bound for this case,
> resulting in a wrong result from the VRP analysis.
>
> Before considering this behavior as a bug, I prefer to ask the community
> to understand if there is any aspect I have missed in my reasoning.

It would help if you included the debugging dumps.

Jeff

Attachments:
- gcc-master.patch
- test.c.244t.optimized
- patch_test.c.244t.optimized
- patch_test.c.038t.evrp
- test.c.006t.gimple
- test.c.037t.fre1
- test.c.038t.evrp
Suboptimal warning formatting with `bool` type in C
Recently, I was writing some code and noticed some slightly strange warning
formatting on a function taking a `bool` parameter:

#include <stdbool.h>
void test(bool unused)
{
}

bruh.c: In function 'test':
bruh.c:2:16: warning: unused parameter 'unused' [-Wunused-parameter]
    2 | void test(bool unused)
      |                ^

Notice that there is only a ^ pointing at the first character of the
identifier. There is no underlining, and only the first "u" is colored
purple. The same issue does not manifest for _Bool:

bruh.c: In function 'test':
bruh.c:2:17: warning: unused parameter 'unused' [-Wunused-parameter]
    2 | void test(_Bool unused)
      |                 ~~^~

I was wondering why, and after some further investigation, I found the
reason: gcc's stdbool.h uses

#define bool _Bool

to provide the type. I investigated that myself with:

#define test_type int
void test(test_type unused)
{
}

and reproduced the same thing:

bruh.c: In function 'test':
bruh.c:3:21: warning: unused parameter 'unused' [-Wunused-parameter]
    3 | void test(test_type unused)
      |                     ^

A typedef, however, does not have this problem.

So, I guess I'm asking:
1) Why is #define used instead of typedef? I can't imagine how this could
possibly break any existing code. Would it be acceptable to make stdbool.h
do this instead?
2) Is it possible to improve this diagnostic to cope with #define?

Also, it's worth noting that clang has the same "problem" too: the compiler
emits the suboptimal underlining in the diagnostic, and its stdbool.h uses
#define for bool.
https://clang.llvm.org/doxygen/stdbool_8h_source.html
https://gcc.gnu.org/git/?p=gcc.git;a=blob;f=gcc/ginclude/stdbool.h
Re: Suboptimal warning formatting with `bool` type in C
On Wed, 1 Nov 2023, peter0x44 via Gcc wrote:

> Why is #define used instead of typedef? I can't imagine how this could
> possibly break any existing code.

That's how stdbool.h is specified up to C17. In C23, bool is a keyword
instead.

--
Joseph S. Myers
jos...@codesourcery.com
Re: Suboptimal warning formatting with `bool` type in C
On 2023-11-01 23:13, Joseph Myers wrote:
> On Wed, 1 Nov 2023, peter0x44 via Gcc wrote:
>
>> Why is #define used instead of typedef? I can't imagine how this could
>> possibly break any existing code.
>
> That's how stdbool.h is specified up to C17. In C23, bool is a keyword
> instead.

I see, I didn't know it was specified that way. It seems quite strange that
a typedef wouldn't be used for this purpose. I suppose it might matter if
you #undef bool and then use the name to define your own type? Still, it
seems very strange to do that.

Maybe a typedef is something to offer as a GNU extension? Though I'm
leaning towards it being too trivial to be worth it, just for a (very
minor) improvement to a diagnostic that can probably be handled in other
ways.