http://gcc.gnu.org/bugzilla/show_bug.cgi?id=49798
--- Comment #9 from Uros Bizjak <ubizjak at gmail dot com> 2011-07-21 16:25:49 UTC ---
Please compare the optimized tree dumps from an i686 (a) compilation vs x32 (b):

(a)

foo (union U u)
{
  union U v;
  _Bool D.2000;
  double D.1999;
  double D.1998;
  int D.1997;

<bb 2>:
  v = {};
  v.m = &xxxx;
  D.1998_1 = u.d;
  D.1999_2 = v.d;
  D.2000_3 = D.1998_1 == D.1999_2;
  D.1997_4 = (int) D.2000_3;
  return D.1997_4;
}

(b)

foo (union U u)
{
  double D.2709;
  _Bool D.2704;
  double D.2702;
  int D.2701;

<bb 2>:
  D.2709_8 = VIEW_CONVERT_EXPR<double>(&xxxx);
  D.2702_1 = u.d;
  D.2704_3 = D.2702_1 == D.2709_8;
  D.2701_4 = (int) D.2704_3;
  return D.2701_4;
}

We can't directly move &xxxx (a 32-bit value) into a double (a 64-bit value). However, we expand to:

(insn 6 5 11 (set (reg/f:DI 66)
        (symbol_ref:DI ("xxxx") [flags 0x40] <var_decl 0x7f71c5925140 xxxx>)) pr49798.c:12 -1
     (nil))

...

(insn 13 12 14 (set (reg:CCFPU 17 flags)
        (compare:CCFPU (reg:DF 73)
            (subreg:DF (reg/f:DI 66) 0))) pr49798.c:13 -1
     (nil))

Does this look OK?