gcc-12-20241114 is now available
Snapshot gcc-12-20241114 is now available on
  https://gcc.gnu.org/pub/gcc/snapshots/12-20241114/
and on various mirrors, see https://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 12 git branch
with the following options: git://gcc.gnu.org/git/gcc.git branch releases/gcc-12 revision 169f70a693cf35dfee6086351bdda0621be2b832

You'll find:

 gcc-12-20241114.tar.xz              Complete GCC

  SHA256=8335182ba34748d3ea210fb7e450846f0a6ceab42285069e04d62bcb02ff543d
  SHA1=c5796fc92dc94922bf4d9dd98d6a009561187974

Diffs from 12-20241107 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-12
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.
Re: -Wfloat-equal and comparison to zero
On Thu, 14 Nov 2024 10:04:59 +0100
David Brown via Gcc wrote:

> No. This is - or at least appears to be - missing critical thinking.

You are explaining this to someone who designed research databases and
who implemented quantitative models that ran on them. You're entitled
to your opinion, of course. I thought you were scratching your head to
understand how x == 0 might be a useful test, not preparing to explain
to me how to do my job of 17 years.

> you are not completely sure that you have full control over the data

Does the programmer breathe who has full control over input? Errors
occur. Any analyst will tell you 80% of the work is ensuring the
accuracy of inputs.

Using SQL COALESCE to convert a NULL to 0 is a perfectly clear and
dependable way to represent it. I'm not saying it's always done, or
the best way. I'm saying it's deterministic, which is good enough.

> And the programmer should know that testing for floating
> point /equality/, even comparing to 0.0, is a questionable choice of
> strategy.

What's "questionable" about it? If 0 was assigned, 0 is what it is. If
1 was assigned, 1 is what it is. Every 32-bit integer, and more, is
likewise accurately stored in a C double.

> will not have to wonder why the programmer is using risky
> code techniques.

I would say fgetc(3) is risky if you don't know what you're doing, and
float equality is not, if you do. I would also say that, if you were
right, equality would not be a valid operator for floating point in C.

I get it. I can imagine suspecting a dodgy comparison and, in lieu of
better tools, using -Wfloat-equal to surface for inspection all
floating-point equality tests. I'm just not willing to say all such
uses are risky, ill-defined, or naïve.

--jkl
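[Editor's note: a minimal sketch of the claim above about integers in
doubles; the values are arbitrary examples, and these are exactly the
comparisons -Wfloat-equal would flag.]

   #include <assert.h>

   int main(void)
   {
       double x = 0.0;            /* e.g. the result of SQL COALESCE(col, 0) */
       double y = 2147483647.0;   /* INT32_MAX, held exactly in a double */

       assert(x == 0.0);          /* deterministic: 0.0 was assigned, 0.0 it is */
       assert(y == 2147483647.0); /* exact: every 32-bit integer fits in
                                     a double's 53-bit significand */
       return 0;
   }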
Re: -Wfloat-equal and comparison to zero
> It's also not unusual to start with "x" statically initialized to zero,
> and use that as an indication to invoke the initialization routine.

This is exactly what I have. In my case, if the value remains 0.0 it
means the calculations for those metrics are not applicable. The
suggestion to use booleans and to check for Inf and NaN would make the
code hard to read and maintain. Specifically:

1) Debugging "result = a / ((b + c) / d)" when d is 0.0 and the result
ends up being 0.0 is somewhat tedious. Here the Inf due to division by
0.0 is lost in a complex equation. Either you have to break up the
equation and test each part for Inf/NaN, or use fetestexcept().

2) I normally enable SIGFPE to catch issues like this very early, e.g.

   (void)feenableexcept(FE_ALL_EXCEPT & (~FE_INEXACT));

Generating floating-point exceptions would then simply crash the
program with the default signal handler. The assumption is that such
exceptions are bugs in the code and not the default behavior. The only
issue is that some hardware does not support traps on floating-point
exceptions. Hence, the fallback would be

   bitmask = fetestexcept(FE_ALL_EXCEPT & (~FE_INEXACT));
   if (bitmask != 0) {
       /* Test and report each exception to stderr */
   }

before exiting the main() function.

It is a bit unfortunate that the default behavior with many C compilers
is to ignore floating-point exceptions.
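[Editor's note: a self-contained sketch of that fallback, using only
C99 <fenv.h>; the reporting function name is made up, and strictly
conforming code would also want #pragma STDC FENV_ACCESS ON.]

   #include <fenv.h>
   #include <stdio.h>

   /* Report any floating-point exceptions (other than the ubiquitous
      FE_INEXACT) raised during the program's lifetime. */
   static void report_fp_exceptions(void)
   {
       int bitmask = fetestexcept(FE_ALL_EXCEPT & ~FE_INEXACT);
       if (bitmask & FE_DIVBYZERO) fprintf(stderr, "FE_DIVBYZERO raised\n");
       if (bitmask & FE_INVALID)   fprintf(stderr, "FE_INVALID raised\n");
       if (bitmask & FE_OVERFLOW)  fprintf(stderr, "FE_OVERFLOW raised\n");
       if (bitmask & FE_UNDERFLOW) fprintf(stderr, "FE_UNDERFLOW raised\n");
   }

   int main(void)
   {
       volatile double d = 0.0;
       volatile double r = 1.0 / d;   /* raises FE_DIVBYZERO, yields +Inf */
       (void)r;
       report_fp_exceptions();        /* call just before exiting main() */
       return 0;
   }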
RE: [RFC] Enabling SVE with offloading to nvptx
> -----Original Message-----
> From: Andrew Stubbs
> Sent: 12 November 2024 20:23
> To: Prathamesh Kulkarni; Jakub Jelinek
> Cc: Richard Biener; Richard Biener; gcc@gcc.gnu.org; Thomas Schwinge
> Subject: Re: [RFC] Enabling SVE with offloading to nvptx
>
> On 12/11/2024 06:01, Prathamesh Kulkarni via Gcc wrote:
> >
> >> -----Original Message-----
> >> From: Jakub Jelinek
> >> Sent: 04 November 2024 21:44
> >> To: Prathamesh Kulkarni
> >> Cc: Richard Biener; Richard Biener; gcc@gcc.gnu.org; Thomas Schwinge
> >> Subject: Re: [RFC] Enabling SVE with offloading to nvptx
> >>
> >> On Sat, Nov 02, 2024 at 03:53:34PM +0000, Prathamesh Kulkarni wrote:
> >>> The attached patch adds a new bitfield needs_max_vf_lowering to loop,
> >>> and sets that in expand_omp_simd for loops that need delayed lowering
> >>> of safelen and omp simd arrays. The patch defines a new macro
> >>> OMP_COMMON_MAX_VF (arbitrarily set to 16), as a placeholder value for
> >>> max_vf (instead of INT_MAX), and is later replaced by appropriate
> >>> max_vf during omp_adjust_max_vf pass. Does that look OK?
> >>
> >> No.
> >> The thing is, if the user doesn't specify safelen, it defaults to
> >> infinity (which we represent as INT_MAX); if the user specifies it,
> >> then that is the maximum for it (currently in the OpenMP specification
> >> it is just an integral value, so it can't be a poly int).
> >> And then the lowering uses max_vf as another limit, what the hw can do
> >> at most, and sizes the magic arrays with it. So, one needs to use the
> >> minimum of what the user specified and what the hw can handle.
> >> So using 16 as some magic value is just wrong; safelen(16) can be
> >> specified in the source as well, or safelen(8), or safelen(32), or
> >> safelen(123).
> >>
> >> Thus, the fact that the hw minimum hasn't been determined yet needs
> >> to be represented in some other flag, not in the loop->safelen value,
> >> and before that is determined, loop->safelen should then represent
> >> what the user wrote (or was implied), and the later pass should use
> >> the minimum of loop->safelen and the picked hw maximum. Of course, if
> >> the picked hw maximum is POLY_INT-ish, the big question is how to
> >> compare that against the user-supplied integer value: either one can
> >> just handle the INT_MAX (aka infinity) special case, or say query the
> >> backend on what is the maximum value of the POLY_INT at runtime and
> >> only use the POLY_INT if it is always known to be smaller than or
> >> equal to the user-supplied safelen.
> >>
> >> Another thing (already mentioned in the thread Andrew referenced) is
> >> that max_vf is used in two separate places. One is just to size the
> >> magic arrays, as one of the operands of the minimum (the other is the
> >> user-specified safelen). In this case, it is generally just fine to
> >> pick a larger value than strictly necessary (as long as it is never
> >> larger than the user-supplied safelen).
> >> The other case is the simd modifier on the schedule clause. That value
> >> should better be the right one or slightly larger, but not too much.
> >> I think currently we just use the INTEGER_CST we pick as the maximum;
> >> if this sizing is deferred, maybe it needs to be another internal
> >> function that asks for the value (though it can refer to a loop vf in
> >> another function, which complicates stuff).
> >>
> >> Regarding Richi's question, I'm afraid the OpenMP simd loop lowering
> >> can't be delayed until some later pass.
> >
> > Hi Jakub,
> > Thanks for the suggestions! The attached patch makes the following
> > changes:
> >
> > (1) Delays setting of safelen for offloading by introducing a new
> > bitfield needs_max_vf_lowering in loop, which is true with offloading
> > enabled; safelen is then set to min(safelen, max_vf) for the target
> > later in the omp_device_lower pass.
> > Comparing user-specified safelen with a poly_int max_vf may not always
> > be possible at compile time (say 32 and 16+16x), and even if we
> > determine the runtime VL based on -mcpu flags, I guess relying on that
> > won't be portable?
> > The patch works around this by taking constant_lower_bound (max_vf)
> > and comparing it with safelen instead, with the downside that
> > constant_lower_bound (max_vf) will not be the optimal max_vf for an
> > SVE target if it implements SIMD width > 128 bits.
> >
> > (2) Since max_vf is used as the length of the omp simd array, it gets
> > streamed out to the device, and device compilation fails during
> > streaming-in if max_vf is a poly_int (16+16x) and the device's
> > NUM_POLY_INT_COEFFS < 2 (which motivated my patch). The patch tries to
> > address this by simply setting the length to a placeholder value
> > (INT_MAX?) in lower_rec_simd_input_clauses if offloading is enabled;
> > it will later be set to the appropriate value in the omp_device_lower
> > pass.
> >
> > (3)
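[Editor's note: for readers of the digest, a minimal sketch of the
safelen semantics under discussion; the function and array names are
made up. With safelen(8) the programmer asserts that no loop-carried
dependence shorter than 8 iterations exists, so the lowering may use
any vector length up to min(8, hardware max_vf).]

   /* Compile with e.g. -fopenmp-simd.  Hypothetical example. */
   void shift_accumulate(float *a, int n)
   {
       #pragma omp simd safelen(8)
       for (int i = 8; i < n; i++)
           a[i] += a[i - 8];   /* dependence distance is 8 iterations */
   }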
Re: -Wfloat-equal and comparison to zero
On 12/11/2024 22:44, James K. Lowden wrote:
> On Tue, 12 Nov 2024 18:12:50 +0100
> David Brown via Gcc wrote:
>
>> Under what circumstances would you have code that:
>> ...
>> d) Would be perfectly happy with "x" having the value 2.225e-307 (or
>> perhaps a little larger) and doing the division with that.
>>
>> I think what you really want to check is if "x" is a reasonable value
>> - checking only for exactly 0.0 is usually a lazy and useless attempt
>> at such checks.
>
> In quantitative research, "x" may be initialized from a database.
> Yesterday that database might have been fine, and today there's a row
> with a zero in it. If it's a problem, it's better to report the
> problem as "x is 0 for foo" than as a divide-by-zero error later on.

No. This is - or at least appears to be - missing critical thinking.

If you have data from a database, and you are not completely sure that
you have full control over the data and know that it is always valid
(for whatever operations you will do with the data), then it is
"external data". That means you need to do a full check for sane, safe
and valid data before you use it - just as you would for data from a
user-provided file, an internet web form, or anything else untrusted.
And it is simply not credible that the only check for validity of "x"
is to check for a value of exactly 0.0. (What about a null entry? A
NaN? A value that is simply too big or too small?)

> In fact, division might not be the issue. A coefficient of zero might
> indicate, say, an error upstream in the model parameter output.

That would be a design flaw - 0.0 is not (in general) a good way to
indicate such things. Use a NaN if you need to keep it within a float -
or, better, use a database Null entry or an additional validity or
error signal. In the kind of situation where 0.0 could be a practical
choice for indicating an error or missing data, you have a clear gap
between invalid data and valid data - and you use "x > 0.0" for the
check, not "x == 0.0" (after first checking for NaNs and other awkward
values if you got it straight from the database).

> It's also not unusual to start with "x" statically initialized to
> zero, and use that as an indication to invoke the initialization
> routine.

Use 0.0 as the starting point if that makes sense - it often does.
Don't use it as a flag for invoking other things. (Or, if you have
enough control of the data to be sure it is safe to do so, at least use
greater-than comparisons rather than equality.)

> When we have floating point numbers initialized from small integer
> constants, comparison with == is both valid and safe. Whether 0 itself
> is the concern, or near-zeroes like 2.225e-307, depends on context,
> something the programmer knows and the compiler does not.

Certainly it is the programmer that knows what is valid or not, and how
it is appropriate to test for validity. And the programmer should know
that testing for floating point /equality/, even comparing to 0.0, is a
questionable choice of strategy. That's why this is a useful warning in
the compiler. And if you have a point in your code where you are
absolutely sure it is safe and useful to compare floats for equality,
then turn off the warning around that bit of code, along with a comment
to say how you know it is safe, why you think it is a good idea here,
and what future maintainers should watch out for if they change the
code. That way, future readers of the code (perhaps yourself, after
many years) will not have to wonder why the programmer is using risky
code techniques.
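[Editor's note: for concreteness, a minimal sketch of the kind of full
validity check described above; the function name and range limits are
invented and would come from the model's actual domain.]

   #include <math.h>
   #include <stdbool.h>

   /* Reject NaN, +/-Inf, zero, and out-of-range values in one place,
      instead of relying on a bare "x == 0.0" test. */
   static bool coefficient_is_valid(double x)
   {
       if (!isfinite(x))              /* catches NaN and +/-Inf */
           return false;
       return x > 0.0 && x < 1.0e9;   /* "x > 0.0", not "x == 0.0" */
   }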