[Bug preprocessor/78008] New: Forbid or document #pragma pack(0)
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=78008

            Bug ID: 78008
           Summary: Forbid or document #pragma pack(0)
           Product: gcc
           Version: 5.4.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: preprocessor
          Assignee: unassigned at gcc dot gnu.org
          Reporter: dhekir at gmail dot com
  Target Milestone: ---

The GCC online documentation
(https://gcc.gnu.org/onlinedocs/gcc/Structure-Layout-Pragmas.html) says that
#pragma pack is supported for compatibility with Microsoft compilers. However,
the MSVC documentation (https://msdn.microsoft.com/en-us/library/2e70t5y1.aspx)
explicitly states that, for pack(n): "Valid values are 1, 2, 4, 8, and 16."

Indeed, writing #pragma pack(0) and compiling (with VS 2010, in my case)
results in a warning:

  warning C4086: expected pragma parameter to be '1', '2', '4', '8', or '16'

On GCC (I tried 5.4.0, but this seems not to have changed in quite some time),
using #pragma pack(0) results in no warnings, even with -Wall.

I found very old discussions (around gcc 2.9.5) mentioning that pack(0) is
supposed to disable the effect of #pragma pack, but this is not documented.

If the intended behavior is the same as "#pragma pack()" (without arguments),
please document it. Otherwise, it would be best if GCC reported this construct
as a warning/error, or at least stated in the documentation that it should not
be used.
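For reference, a minimal example of the construct in question (the file and
struct names are illustrative, not taken from any real code):

  /* pack0.c -- illustrative only */
  #pragma pack(0)   /* accepted silently by GCC, even with -Wall;
                       MSVC rejects this value with warning C4086 */
  struct s0 {
      char c;
      int  i;
  };

  #pragma pack()    /* the documented way to restore default packing */
  struct s1 {
      char c;
      int  i;
  };

Compiling this with "gcc -Wall -c pack0.c" produces no diagnostic, which is
the behavior the report asks to have either documented or warned about.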
[Bug c/66736] New: float rounding differences when using constant literal versus variable
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66736

            Bug ID: 66736
           Summary: float rounding differences when using constant literal
                    versus variable
           Product: gcc
           Version: 5.1.1
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: c
          Assignee: unassigned at gcc dot gnu.org
          Reporter: dhekir at gmail dot com
  Target Milestone: ---

Calling "log10f(3)" with a constant literal, or via a variable as in
"float f = 3; log10f(f)", gives different rounding results, which are
incorrect in the latter case. Note that the bug is not about imprecision in
the result, but about inconsistency between two statements that should be
equivalent.

The difference only appears with no optimization flag or with -O0; activating
-O1 or greater makes the difference disappear.

It is especially annoying (although not forbidden) that the rounding
differences in this case do not respect the usual order (i.e. changing the
rounding mode shows that the FE_DOWNWARD result is larger than the
FE_TONEAREST result in the version using the variable).

This behavior has been observed in several GCCs, from 4.8.4 (Ubuntu) to 5.1.1
(Fedora), including a 5.0.0 compiled from trunk, and using different versions
of glibc (2.19, and I also tried compiling 2.21). All of them produced the
same result. Also, there are several constants for which this happens, but 3
is one of the most notable ones.

  #include <math.h>
  #include <stdio.h>

  int main() {
      float r = log10f(3);
      printf("literal constant: %g (%a)\n", r, r);
      float x = 3;
      r = log10f(x);
      printf("with variable:%g (%a)\n", r, r);
      return 0;
  }
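To reproduce, assuming the snippet above is saved as log10f-test.c (the file
name is arbitrary), commands along these lines show the behavior described in
the report on the affected setups:

  gcc -O0 log10f-test.c -lm -o log10f-test && ./log10f-test
      (the two printed values differ)
  gcc -O1 log10f-test.c -lm -o log10f-test && ./log10f-test
      (the difference disappears)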
[Bug c/66736] float rounding differences when using constant literal versus variable
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66736

--- Comment #2 from dhekir at gmail dot com ---
Isn't the library implementation of log10f used to compute the literal
constants generated in the assembly code? Would it then be a double-precision
result that is precomputed and rounded to single precision in this case?

Well, sorry for the noise. I compared the results with other compilers and,
even if the numerical results themselves were different, they were consistent
between precomputed constant literals and the underlying libc, so such
surprising situations do not arise. I assumed this was not intended in GCC and
that it would therefore be useful to report it, but if it is not the same
library function being used in both cases, that explains the issue.
[Bug c/108500] New: -O -finline-small-functions results in "internal compiler error: Segmentation fault" on a very large program (700k function calls)
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108500

            Bug ID: 108500
           Summary: -O -finline-small-functions results in "internal
                    compiler error: Segmentation fault" on a very large
                    program (700k function calls)
           Product: gcc
           Version: 12.2.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: c
          Assignee: unassigned at gcc dot gnu.org
          Reporter: dhekir at gmail dot com
  Target Milestone: ---

Created attachment 54328
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=54328&action=edit
compressed version of a simplified program causing the ICE

On the attached preprocessed program (compressed as .tar.gz), running
'gcc -O -finline-small-functions' results in:

  gcc: internal compiler error: Segmentation fault signal terminated program cc1

The original program is more interesting than this simplified version. Still,
it does have more than 700k function calls in the main function, which is what
causes the problem.

The original command line was simply 'gcc -O2'; I then narrowed the options
down to -finline-small-functions.

I tried several GCC Docker images (running 'gcc -O2' on the attached file) and
narrowed it down to:
- with gcc:10.4 (or older), compilation works without any errors;
- with gcc:11.1 (or newer; I tested up to 12.2.0), the segmentation fault
  happens.
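The attachment itself is not reproduced here. Purely for illustration, a
hypothetical generator producing a file of roughly the shape described (many
calls to a small function inside main; all names and the exact call count are
made up and do not come from the actual attachment) could look like this:

  /* gen.c -- hypothetical generator, not the actual attachment */
  #include <stdio.h>

  int main(void) {
      FILE *out = fopen("big.c", "w");
      if (!out)
          return 1;
      fputs("static int counter;\n"
            "static void f(void) { counter++; }\n"
            "int main(void) {\n", out);
      for (int i = 0; i < 700000; i++)     /* ~700k calls in main */
          fputs("    f();\n", out);
      fputs("    return counter != 700000;\n}\n", out);
      fclose(out);
      return 0;
  }

Compiling the generated big.c with 'gcc -O -finline-small-functions big.c' is
the kind of invocation the report describes; whether it triggers the same
crash depends on the compiler version, as noted above.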
[Bug c/108500] -O -finline-small-functions results in "internal compiler error: Segmentation fault" on a very large program (700k function calls)
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108500

dhekir at gmail dot com changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
  Attachment #54328|0                           |1
        is obsolete|                            |

--- Comment #1 from dhekir at gmail dot com ---
Created attachment 54329
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=54329&action=edit
.tar.gz compressed version of program causing crash
[Bug tree-optimization/108500] [11/12 Regression] -O -finline-small-functions results in "internal compiler error: Segmentation fault" on a very large program (700k function calls)
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108500

--- Comment #12 from dhekir at gmail dot com ---
Created attachment 54386
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=54386&action=edit
another test case, this time with 1M calls and structs as arguments

A more complex test case: it still works (no segmentation fault), but it takes
too long to compile.
[Bug tree-optimization/108500] [11/12 Regression] -O -finline-small-functions results in "internal compiler error: Segmentation fault" on a very large program (700k function calls)
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108500

--- Comment #13 from dhekir at gmail dot com ---
Thank you very much for the work. Compiling the attached file with
`-O -finline-small-functions` does finish in under 30 seconds on my computer.

However, when trying to compile the original program (which is about 1 million
lines, and each call passes 2 structures as arguments instead of calling a
function without any arguments), it takes several dozen minutes. I tried
preprocessing it (5 s to obtain the .i file) and then compiling the result
with `-O -finline-small-functions`, with `-O2`, with `-O3`, and with no
options at all; in all cases I ended up terminating the compilation before it
finished (after more than 10 minutes; in some cases I waited up to 30
minutes).

I tried re-simplifying the program. After preprocessing, I tried the following
variants, with options `-O -finline-small-functions`:
- 1M calls, no arguments, function returning a (global) struct: compiles in
  30 s;
- 1M calls, each with a single argument of type `struct s`, function returning
  that same argument (that is, `struct s f(struct s s1) {return s1;}`):
  compiles in under 2 minutes;
- 1M calls, each with 2 arguments of types `struct s1` and `struct s2`,
  returning the second argument (that is,
  `struct s2 f(struct s1 arg1, struct s2 arg2) {return arg2;}`): more than
  50 minutes (I had to terminate it).

With -O2, I left the last version compiling for almost 3 hours before having
to stop it.

In any case, this bug seems definitely solved for me, and I no longer get the
original stack overflow. However, I am still unable to compile my original
code, so I'll have to try something else. It is possibly not a regression,
however. I'm attaching the new test case in case you want to try it, but feel
free to ignore it.
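For concreteness, the third variant described in the list above has roughly
this shape (a sketch only: the struct contents and variable names are made up,
and only the signature of f() is quoted from the comment; the real file
repeats the call on the order of a million times as separate statements):

  /* Sketch of the 2-struct-argument variant. */
  struct s1 { int a; int b; };
  struct s2 { int c; int d; };

  struct s2 f(struct s1 arg1, struct s2 arg2) { return arg2; }

  int main(void) {
      struct s1 x = {1, 2};
      struct s2 y = {3, 4};
      /* In the real test case this call appears ~1,000,000 times,
         written out literally rather than inside a loop. */
      y = f(x, y);
      return y.c;
  }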
[Bug tree-optimization/108500] [11/12 Regression] -O -finline-small-functions results in "internal compiler error: Segmentation fault" on a very large program (700k function calls)
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108500

--- Comment #17 from dhekir at gmail dot com ---
To be honest, the "real" test case is very similar to the last one I sent: it
is semi-generated code, with some initialization of the data at the beginning,
then a large number of statements performing operations that are not
necessarily useful, and at the end a few assertions are checked (e.g. that the
initialized data was not tampered with). So, in reality, I expected GCC to
discard most of the program during optimization and execute it almost
instantly.

When I encountered the segmentation fault during compilation, I thought it
might also be relevant for other users, so I submitted the bug. Now that the
issue is mostly a "performance" issue, however, it is less likely that other
users will encounter such a huge program with "useful" purposes, so I
understand completely if you decide this is just not interesting/useful
enough.

I also tried compiling the code with other open-source C compilers (Clang, and
another, less mature one): one failed with a stack overflow, and the other had
not finished after 1h30m, so I terminated it. So the simple fact that you were
able to successfully compile it with those options is already very interesting
to me and sufficient for my "real" test case.
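To make the described structure concrete, a very rough sketch (purely
illustrative; the real program is generated and orders of magnitude larger):

  /* Sketch of the overall shape described in the comment above:
     initialize data, run many largely useless statements, then
     assert that the initialized data was not tampered with. */
  #include <assert.h>

  static int data[16];

  int main(void) {
      for (int i = 0; i < 16; i++)      /* initialization */
          data[i] = i;

      /* ...in the real code, hundreds of thousands of generated
         statements would follow here... */
      int tmp = data[0] + data[1];
      (void)tmp;

      for (int i = 0; i < 16; i++)      /* final consistency checks */
          assert(data[i] == i);
      return 0;
  }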