https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109618
--- Comment #3 from Andrew Pinski <pinskia at gcc dot gnu.org> ---
This one seems harder to fix. Before calling fold we have:

```
(gdb) p debug_tree(value)
 <nop_expr 0x7ffff79ad5c0
    type <integer_type 0x7ffff7822000 sizetype public unsigned DI
        size <integer_cst 0x7ffff7802f48 constant 64>
        unit-size <integer_cst 0x7ffff7802f60 constant 8>
        align:64 warn_if_not_align:0 symtab:0 alias-set -1 canonical-type 0x7ffff7822000
        precision:64 min <integer_cst 0x7ffff7802f78 0> max <integer_cst 0x7ffff78035e0 18446744073709551615>>
    readonly
    arg:0 <var_decl 0x7ffff7810c60 var_1
        type <error_mark 0x7ffff7802f30>
        readonly used unsigned read SI t2.c:3:22
        size <integer_cst 0x7ffff7824198 constant 32>
        unit-size <integer_cst 0x7ffff78241b0 constant 4>
        align:32 warn_if_not_align:0
        context <function_decl 0x7ffff79b3600 foo>
        initial <integer_cst 0x7ffff79bbbe8 2>>>
```

So we turn var_1's type into error_mark when merging the 2 decls (after an error), but the rest of the front end/middle end is still not ready for types to be an error_mark at any point. Note I don't think we should revert r12-3278, even though it has had a lot of fallout, because it produces better error recovery in general.

For this simplified testcase we could look through the NOP_EXPR in c_sizeof_alignof, but for a more complex one which does s/var_5[var_1]/var_5[13 * var_1]/ we can't, as we then get:

```
(sizetype) ((unsigned int) var_1 * 13)
```

and that one ICEs in tree_nonzero_bits here:

```
    CASE_CONVERT:
      return wide_int::from (tree_nonzero_bits (TREE_OPERAND (t, 0)),
                             TYPE_PRECISION (TREE_TYPE (t)),
                             TYPE_SIGN (TREE_TYPE (TREE_OPERAND (t, 0))));
```
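For illustration only, a minimal sketch (not a proposed or tested patch) of what guarding that one spot might look like, assuming the right recovery is to fall back to the conservative "all bits possibly nonzero" answer whenever error recovery has left the conversion's operand with an error_mark type:

```
    CASE_CONVERT:
      /* Hypothetical guard: if the operand's type was replaced with
         error_mark during error recovery, don't query its precision or
         sign; return the conservative all-ones answer instead.  */
      if (TREE_TYPE (TREE_OPERAND (t, 0)) == error_mark_node)
        return wi::shwi (-1, TYPE_PRECISION (TREE_TYPE (t)));
      return wide_int::from (tree_nonzero_bits (TREE_OPERAND (t, 0)),
                             TYPE_PRECISION (TREE_TYPE (t)),
                             TYPE_SIGN (TREE_TYPE (TREE_OPERAND (t, 0))));
```

That would only paper over this one ICE site, though; as noted above, the underlying problem is that error_mark-typed decls can now reach many places in the front end/middle end that don't expect them.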