> By the way, shouldn't this be:
>
> unsigned HOST_WIDE_INT mask = ((unsigned HOST_WIDE_INT) 2 << (n_elts - 1)) - 1;
>
> so it doesn't cause undefined behavior for V64QI?
You're right, but I think that we'd rather write:
  if (n_elts == HOST_BITS_PER_WIDE_INT)
    mask = -1;
  else
    mask = ((unsigned HOST_WIDE_INT) 1 << n_elts) - 1;
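(For illustration only, a standalone C sketch of the two safe variants; uint64_t
stands in for unsigned HOST_WIDE_INT on a 64-bit host, and the function name is
made up:)

    #include <stdint.h>

    #define HOST_BITS_PER_WIDE_INT 64
    typedef uint64_t uhwi;  /* stand-in for unsigned HOST_WIDE_INT */

    /* Mask with the low n_elts bits set, for 1 <= n_elts <= 64.  */
    static uhwi
    element_mask (unsigned int n_elts)
    {
      /* (uhwi) 1 << n_elts is undefined behavior when n_elts == 64
         (shift count equal to the type width), e.g. for V64QI,
         hence the special case.  */
      if (n_elts == HOST_BITS_PER_WIDE_INT)
        return (uhwi) -1;
      return ((uhwi) 1 << n_elts) - 1;
    }

    /* The branch-free alternative quoted above shifts by at most 63,
       which is well defined for every n_elts in [1, 64]:
         ((uhwi) 2 << (n_elts - 1)) - 1  */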
On Tue, 2 Apr 2013, Eric Botcazou wrote:
> Thanks, here is a version taking into account all your comments, and which
> still passes bootstrap+testsuite on x86_64-linux-gnu. I am not completely
> sure if there is a point checking !side_effects_p (op1) after rtx_equal_p
> (op0, op1), but I am still doing it as it seems safe.
It's also done ...
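(A sketch of the check under discussion, for readers following along; the
surrounding patch context is not shown here and the exact placement is an
assumption:)

    /* In simplify_ternary_operation, case VEC_MERGE: if both value
       operands are the same, the merge selects identical elements
       whichever way each mask bit goes, so it reduces to op0 -- but
       only if dropping op1 cannot lose a side effect such as a
       volatile memory access.  */
    if (rtx_equal_p (op0, op1) && !side_effects_p (op1))
      return op0;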
On Wed, 27 Mar 2013, Eric Botcazou wrote:
> OK, modulo a few nits:
Thanks, here is a version taking into account all your comments, and which
still passes bootstrap+testsuite on x86_64-linux-gnu. I am not completely
sure if there is a point checking !side_effects_p (op1) after rtx_equal_p
(op0, op1), but I am still doing it as it seems safe.
> int is getting small to store one bit per vector element (V32QI...) so I
> switched to hwint after checking that Zadeck's patches don't touch this.
unsigned HOST_WIDE_INT is indeed the correct type to use for mask manipulation
but please use UINTVAL instead of INTVAL with it. And:
+ u...
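(To spell out the UINTVAL point, a hedged sketch reusing the names trueop2 and
n_elts commonly seen in simplify-rtx.c, not the patch text itself. UINTVAL (x)
is (unsigned HOST_WIDE_INT) INTVAL (x), so the selector is read as unsigned
and all mask arithmetic stays in unsigned territory:)

    unsigned HOST_WIDE_INT sel = UINTVAL (trueop2);
    unsigned HOST_WIDE_INT mask;
    if (n_elts == HOST_BITS_PER_WIDE_INT)
      mask = -1;
    else
      mask = ((unsigned HOST_WIDE_INT) 1 << n_elts) - 1;
    /* Set bits of sel & mask select elements taken from op0; clear
       bits take them from op1.  */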
Hello,
simplify-rtx had almost no simplification involving vec_merge, so I am
adding a couple. I purposely did not optimize (vec_merge (vec_merge a b
m1) b m2) to (vec_merge a b m1&m2) because m1&m2 might be a more
complicated pattern than m1 and m2 and I don't know if that is safe (I can
add ...
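(For the record, a small standalone C simulation of the identity in question.
It only checks the value-level equivalence, not whether m1&m2 is a safe or
cheaper RTL pattern, which is the actual concern above. Bit i of the mask,
low-order first, selects element i from the first operand when set:)

    #include <stdio.h>

    enum { N = 4 };

    /* Simulate (vec_merge a b m): element i comes from a if bit i of
       m is set, from b otherwise.  */
    static void
    vec_merge_sim (const int *a, const int *b, unsigned m, int *out)
    {
      for (unsigned i = 0; i < N; i++)
        out[i] = ((m >> i) & 1) ? a[i] : b[i];
    }

    int
    main (void)
    {
      int a[N] = {10, 11, 12, 13}, b[N] = {20, 21, 22, 23};
      int inner[N], nested[N], fused[N];
      unsigned m1 = 0xC, m2 = 0xA;

      vec_merge_sim (a, b, m1, inner);       /* (vec_merge a b m1)      */
      vec_merge_sim (inner, b, m2, nested);  /* (vec_merge inner b m2)  */
      vec_merge_sim (a, b, m1 & m2, fused);  /* (vec_merge a b m1&m2)   */

      for (unsigned i = 0; i < N; i++)
        printf ("%d %d\n", nested[i], fused[i]);  /* pairs agree */
      return 0;
    }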