https://gcc.gnu.org/bugzilla/show_bug.cgi?id=90273
--- Comment #26 from Alexandre Oliva <aoliva at gcc dot gnu.org> ---
I saw the #c11 patch on gcc-patches, and it seemed to have been posted FTR and installed. It looked good, so I didn't comment on it.

I agree about the effects of #c16, though I'm beginning to get the feeling that it's working too hard for too little benefit. Ditto for trying to optimize debug temps: you would get some savings, sure, but how much benefit for such global analyses?

Perhaps we'd get a much bigger bang for the buck by introducing vector resets, in which a single gimple bind stmt would reset several decls at once. If resets have become as common as they're being made out to be, this could save a significant amount of memory.

Though, judging from Jan's comments on compile times, it doesn't look like we've got much slower, which makes me wonder what the new problem really is... Could it be that debug binds have always been there, plentiful but under the radar, and that the real recent regression (assuming there really is one) lies elsewhere? (Sorry, I haven't really dug into it myself.)
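
To make the vector-reset idea a bit more concrete, here is a rough standalone C++ sketch of the trade-off being proposed. The structs below are purely hypothetical stand-ins, not GCC internals, and their sizes are not representative of real gimple statements; the point is only that N per-decl debug-bind resets would collapse into a single statement carrying a vector of decls.

// Hypothetical sketch only: these structs are NOT GCC internals, just
// stand-ins to illustrate the proposed saving.  Today every reset is its
// own debug bind statement, each paying a fixed per-statement cost; the
// proposal is one statement carrying a vector of decls instead.
#include <cstdio>
#include <vector>

struct decl { int uid; };                 // stand-in for a tree DECL

// Current scheme: one reset statement per decl.
struct debug_reset_stmt
{
  decl *var;
  // ... plus fixed per-statement overhead (location, chain links, ...)
};

// Proposed scheme: one statement resetting several decls at once.
struct debug_vector_reset_stmt
{
  std::vector<decl *> vars;               // all decls reset at this point
};

int
main ()
{
  const size_t n = 1000;
  std::vector<decl> decls (n);

  // Current: n separate reset statements.
  std::vector<debug_reset_stmt> per_decl;
  for (auto &d : decls)
    per_decl.push_back ({ &d });

  // Proposed: a single statement listing all n decls.
  debug_vector_reset_stmt vec_reset;
  for (auto &d : decls)
    vec_reset.vars.push_back (&d);

  std::printf ("current: %zu stmts; proposed: 1 stmt covering %zu decls\n",
               per_decl.size (), vec_reset.vars.size ());
  return 0;
}

Whether that would pay off obviously depends on how many consecutive resets actually show up in real translation units.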