On Thu, Nov 08, 2012 at 10:37:00AM +0100, Eric Botcazou wrote:
> I guess the natural question is then: if we start to change the type of the
> operation, why not always reassociate in the unsigned version of the type?
>
> int
> foo (int x, int y)
> {
>   return (x + 1) + (int) (y + 1);
> }
I was afraid of preventing optimizations based on the assumption that the
operations don't overflow.  Perhaps we could do that only if, with signed
atype, a TREE_OVERFLOW would be introduced, i.e. when my current patch
returns NULL:

+      /* Don't introduce overflows through reassociation.  */
+      if (!any_overflows
+          && ((lit0 && TREE_OVERFLOW (lit0))
+              || (minus_lit0 && TREE_OVERFLOW (minus_lit0))))
+        return NULL_TREE;

it could instead, for INTEGER_TYPE_P (atype), do

  atype = build_nonstandard_integer_type (TYPE_PRECISION (type), 1);

and retry (this can also be done by

  atype = build_nonstandard_integer_type (TYPE_PRECISION (type), 1);
  return fold_convert_loc (loc, type,
                           fold_binary_loc (loc, code, atype,
                                            fold_convert (atype, arg0),
                                            fold_convert (atype, arg1)));

).  E.g. for

int
foo (int x, int y)
{
  return (x + __INT_MAX__) + (int) (y + __INT_MAX__);
}

__INT_MAX__ + __INT_MAX__ overflows, but if say x and y are
(- __INT_MAX__ / 4 * 3), then there is no overflow originally.  But we'd
give up on this one already anyway, because there are two different
variables, so perhaps a better example is

int
foo (int x)
{
  return (x + __INT_MAX__) + (int) (x + __INT_MAX__);
}

And similarly the retry with the unsigned type could be done for the
var0 && var1 case, where it sets ok = false;.

> > int
> > bar (int x, unsigned int y)
> > {
> >   return (x + 1) + (int) (y + 1);
> > }

	Jakub