https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107561
Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |hubicka at gcc dot gnu.org,
                   |                            |rguenth at gcc dot gnu.org
           Priority|P1                         |P3

--- Comment #23 from Richard Biener <rguenth at gcc dot gnu.org> ---
So we can "mitigate" the diagnostic for g++.dg/pr17488.C with a hack, but for
g++.dg/warn/Warray-bounds-16.C we see

  <bb 2> [local count: 1073741824]:
  a ={v} {CLOBBER};
  a.m = 0;
  _5 = operator new [] (0);
  a.p = _5;
  _2 = a.m;
  if (_2 > 0)
    goto <bb 3>; [89.00%]
  else
    goto <bb 5>; [11.00%]

  <bb 3> [local count: 955630225]:
  _12 = (sizetype) _2;
  _11 = _12 * 4;
  __builtin_memset (_5, 0, _11); [tail call]

where we'd have a clear range (_2 > 0) even without the multiplication, but
we're only now picking that up.

The bug here is quite the same missed optimization though: we fail to CSE
a.m across the 'operator new [] (0)' call, and so obviously dead code
remains.  C++ is simply an awful language to work with here.  A static
analyzer would maybe simply look past possibly clobbering calls, conclude
the code is likely dead, and refrain from diagnosing it.

Note that while for g++.dg/warn/Warray-bounds-16.C we are again working
inside a CTOR, the issue extends to any code with intermediate allocations
via new or delete expressions.

Yes, we could add some flag like -fnew-is-not-stupid, but then we couldn't
make it the default.  Maybe(?) we can somehow detect with LTO whether we are
dealing with an overloaded global new/delete, e.g. by detecting that we are
resolving it to the copy in libstdc++?  The resolution info just tells us
RESOLVED_DYN though; maybe we can add something like RESOLVED_STDLIB_DYN and
handle a set of known libraries specially?

I'm putting this back to P3; we do have a load more (late) diagnostic
regressions in GCC 13.
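
To illustrate the shape of the problem, here is a minimal C++ sketch (a
reconstruction, not the verbatim testcase) that produces GIMPLE like the
above: the constructor stores a.m, calls the replaceable operator new [],
and then has to reload a.m for the zero-initialization loop, which later
gets recognized as the guarded __builtin_memset:

  struct S
  {
    int m;
    int *p;
    S () : m (0),
           p (static_cast<int *> (operator new[] (m * sizeof (int))))
    {
      for (int i = 0; i < m; ++i)   // this loop reloads a.m after the call
        p[i] = 0;                   // and becomes the guarded memset above
    }
  };

  S a;  // global, so any call could in principle modify it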
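
The reload of a.m cannot be elided without knowing the allocator is well
behaved: operator new [] is replaceable, and since 'a' is a global, a
conforming replacement can legally modify it before returning.  A contrived
but valid sketch (the names are illustrative, not from the testcase):

  #include <cstddef>
  #include <cstdlib>
  #include <new>

  struct S { int m; int *p; };
  extern S a;   // the object whose constructor performs the allocation

  // A conforming replacement: it may touch any reachable object, so after
  // the call a.m need not be 0 anymore, which is what defeats the CSE.
  void *operator new[] (std::size_t n)
  {
    a.m = 1;    // legal
    if (void *q = std::malloc (n ? n : 1))
      return q;
    throw std::bad_alloc ();
  }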
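
To make the RESOLVED_STDLIB_DYN idea a bit more concrete, a purely
hypothetical sketch: LDPR_RESOLVED_DYN is the existing resolution value from
the linker plugin API (include/plugin-api.h), while the DSO-name query and
the library whitelist below are invented for illustration only:

  #include <cstring>

  enum ld_plugin_symbol_resolution
  {
    LDPR_UNKNOWN = 0,
    // ... other resolutions from include/plugin-api.h ...
    LDPR_RESOLVED_DYN = 8
  };

  // Hypothetical query: which DSO did the symbol resolve to?
  extern const char *resolved_dso_name (const char *sym);

  static bool
  operator_new_binds_to_known_stdlib (const char *sym,
                                      ld_plugin_symbol_resolution res)
  {
    if (res != LDPR_RESOLVED_DYN)
      return false;
    const char *dso = resolved_dso_name (sym);
    // Treat a set of known C++ runtimes as having well-behaved new/delete.
    return dso != nullptr
           && (std::strstr (dso, "libstdc++") != nullptr
               || std::strstr (dso, "libc++") != nullptr);
  }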