https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110557

--- Comment #6 from Xi Ruoyao <xry111 at gcc dot gnu.org> ---
(In reply to avieira from comment #5)
> Hi Xi,
> 
> Feel free to test your patch and submit it to the list for review. I had a
> look over and it looks correct to me.

https://gcc.gnu.org/pipermail/gcc-patches/2023-July/623782.html

Changes relative to the version posted here:

1. Added a test case (I made it a sandwiched case, because an early, unposted
version of the patch failed on sandwiched cases).
2. Slightly adjusted the comment.

There is another issue: if mask_width + shift_n == prec, we should omit the
AND_EXPR even for an unsigned bit-field.  For example

    movq    $-256, %rax
    vmovq   %rax, %xmm1
    vpunpcklqdq %xmm1, %xmm1, %xmm1
    vpand   (%rcx,%rdi,8), %xmm1, %xmm1
    vpsrlq  $8, %xmm1, %xmm1

can be just

    vmovdqu (%rcx,%rdi,8), %xmm1
    vpsrlq  $8, %xmm1, %xmm1

But that is a different issue, so we can fix it in a separate patch.
