https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81456
Jakub Jelinek <jakub at gcc dot gnu.org> changed:

           What    |Removed |Added
----------------------------------------------------------------------------
                 CC|        |jakub at gcc dot gnu.org

--- Comment #3 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
Yeah, the earlier in the optimization pipeline we need to make decisions, the
more approximate the cost models are. Unless we have infinite compile time and
compile memory resources, we can't try both alternatives through all subsequent
passes and pick whichever turns out better in the end; in particular, before RA
we can't know what the register allocator will need to do with the code.

-Os certainly doesn't and can't guarantee that the resulting code will always
be smaller than or equal to code compiled with -O2. What matters is whether it
produces smaller code across large amounts of real-world code, so we generally
choose to do or not do optimizations based on whether they generally result in
smaller code on average. You can always find counter-examples where a heuristic
just doesn't handle a particular case well in the end. Not convinced we need to
track each such case.