https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118527
--- Comment #3 from Jan Hubicka <hubicka at gcc dot gnu.org> ---
The reason why I did not implement profile fixups in cfgcleanup is that you
cannot really fix the profile without knowing why it became inconsistent.

Consider a situation where we have the function

  foo (int a)
  {
    for (int i = 0; i < a; i++)
      something ();
  }

We profile it and see that the loop iterates 10 times on average.  Then foo
gets inlined into a call "foo(1);" and eventually FRE replaces

  if (i < 1) goto loopback;

by

  if (false) goto loopback;

At that point the profile still predicts the loop to iterate 10 times, and
after removing the loopback edge the profile ends up obviously inconsistent.
The point, however, is that the profile has been wrong ever since inlining,
because the inliner specialized the code and scaled the profile without taking
the calling context into account.  The correct fix would be to adjust the
count of the basic block calling something (), but also to increase its count
in the offline copy of foo (or in another inline copy with a different
parameter).  This is hard to do.

Some profile updates (such as scaling the profile after unrolling) are correct
ones, since we know we unrolled the loop and each copy of its body will
iterate less frequently.  In this case, however, FRE proves that the profile
was already wrong on its input, and only transforms the profile from being
wrong in a non-obvious way to being wrong in an obvious way.  Since FRE does
not know why the profile was wrong (here it was a bad loop iteration
prediction at early opts), it does not really know how to fix it correctly.
Scaling down the profile will often just shift the inconsistency somewhere
else.  For example, if the loop has an additional exit, scaling down its
profile will break the profile past that exit.

So doing nothing when we can no longer do the correct thing seems a safer and
more maintainable strategy than ad-hoc updates that solve some testcases but
may also go quite wrong...