https://gcc.gnu.org/bugzilla/show_bug.cgi?id=115817
--- Comment #4 from Dmytro Bagrii <dimich.dmb at gmail dot com> ---
Of course, there are various cores; I'm just speculating about the necessity of preserving R1. For now the issue is worked around with inline assembler. It was a surprise for me to see the .text size increase by 8 bytes after just changing `true` to `false`, plus an extra byte of RAM for the stack. An ISR may invoke other functions that expect R1 to be zero by convention, but gcc is smart enough not to initialize R1 when it is not used. I'm curious whether the optimizer can try both variants for an ISR (i.e. __signal__), one using EOR R1,R1 and one using LDI Rd,0, and select the variant with fewer instructions overall. I guess the decision to use __tmp_reg__ instead of LDI for a zero value is made at a stage other than prologue/epilogue generation. Sorry for the perhaps naive assumptions; I'm not very familiar with the design of modern optimizers.
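
For reference, a minimal sketch of the kind of inline-assembler workaround I mean, assuming the goal is to force LDI of the zero constant into an upper register so that the ISR prologue does not have to save and clear __zero_reg__ (R1). The vector name, the port, and the variable are placeholders, not taken from my actual code:

  #include <avr/io.h>
  #include <avr/interrupt.h>

  /* Hypothetical example: materialize zero with LDI in an upper register
     (the "d" constraint covers r16..r31, the only registers LDI accepts)
     instead of relying on __zero_reg__ (R1). */
  ISR(TIMER0_OVF_vect)
  {
      uint8_t zero;
      __asm__ ("ldi %0, 0" : "=d" (zero));
      PORTB = zero;   /* placeholder use of the zero value */
  }

If I read the generated prologues right, the saving comes from the fact that an ISR needing __zero_reg__ must push R1, clear it, and pop it again (the interrupted code may have left a MUL result in R1), whereas LDI into a scratch upper register needs no save/restore; which variant is shorter depends on what else the ISR does.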