http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46027

           Summary: (unsignedtype1)(unsignedtype2)(unsignedtype1_var) not
                    changed into unsignedtype1_var & NNN when
                    sizeof(type1) > sizeof(type2)
           Product: gcc
           Version: 4.5.0
            Status: UNCONFIRMED
          Severity: enhancement
          Priority: P3
         Component: tree-optimization
        AssignedTo: unassig...@gcc.gnu.org
        ReportedBy: pins...@gcc.gnu.org


Take:
typedef unsigned type1;
typedef unsigned char type2;
type1 t;
void f(type1 tt)
{
  t = (type2)tt;
}
--- CUT ---
In .optimized we have:
  D.2687_2 = (unsigned char) tt_1(D);
  t.0_3 = (type1) D.2687_2;
  t = t.0_3;
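
For comparison, if the fold were applied at the tree level, the .optimized dump would ideally contain something like the following (hypothetical SSA names, assuming 8-bit char):

  t.0_2 = tt_1(D) & 255;
  t = t.0_2;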

--- CUT ---
This transformation is currently done at the RTL level; I think it uses (zero_extend (subreg)) as the canonical form (at least on big-endian targets).  Doing it at the tree level with & 0xFF instead would be faster, use less memory, and would let us optimize more.
Here is a testcase that shows where we can optimize more if we do this:
typedef unsigned type1;
typedef unsigned char type2;


type1 t;

void f(type1 tt)
{
  type2 t1 = t;
  type1 t2 = t1;
  t = t2 & 0xFF;
}

void f1(type1 tt)
{
  type1 t2 = t & 0xff;
  t = t2 & 0xFF;
}
