On Thu, Oct 12, 2017 at 8:39 AM, Uros Bizjak <ubiz...@gmail.com> wrote:
> On Thu, Oct 12, 2017 at 8:32 AM, Uros Bizjak <ubiz...@gmail.com> wrote:
>> On Wed, Oct 11, 2017 at 10:59 PM, Jakub Jelinek <ja...@redhat.com> wrote:
>>> Hi!
>>>
>>> As can be seen on the testcase below, the *<rotate_insn><mode>3_mask
>>> insn/splitter is able to optimize only the case where the AND is
>>> performed in SImode and the result then subregged into QImode;
>>> if the computation is already in QImode, we don't handle it.
>>>
>>> Fixed by adding another pattern. Bootstrapped/regtested on
>>> x86_64-linux and i686-linux; OK for trunk?
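For context, a minimal sketch of the two shapes under discussion; the
function names are hypothetical (not from Jakub's testcase), and the
exact RTL GCC generates for the second variant may differ, so treat it
as illustrative only:

  /* Shape 1: the count is masked in SImode (int) and the result then
     feeds the QImode rotate -- the case the existing splitter handles.  */
  unsigned char
  rotl_si_mask (unsigned char x, int c)
  {
    int m = c & 7;
    return (x << m) | (x >> ((8 - m) & 7));
  }

  /* Shape 2: the masking is already done in QImode -- the case the
     new pattern is meant to catch.  */
  unsigned char
  rotl_qi_mask (unsigned char x, unsigned char c)
  {
    unsigned char m = c & 7;
    return (x << m) | (x >> ((8 - m) & 7));
  }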
>>
>> We probably want to add this variant to *all* *_mask splitters (there
>> are a few of them in i386.md; please grep for "Avoid useless
>> masking"). Which ultimately raises the question: should we implement
>> this simplification in a generic, target-independent way? OTOH, we
>> already have the SHIFT_COUNT_TRUNCATED macro and the
>> shift_truncation_mask hook, but the last time I tried the former,
>> there were some problems in the testsuite on x86. I guess several
>> targets would benefit from removing useless masking of count operands.
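For reference, the hook has the following shape; the body below is a
hypothetical sketch, not the actual i386 definition:

  /* Sketch of a TARGET_SHIFT_TRUNCATION_MASK definition.  A nonzero
     return value tells the middle end that shift counts are truncated
     to that mask; returning 0 promises nothing.  */
  static unsigned HOST_WIDE_INT
  ix86_shift_truncation_mask (machine_mode mode)
  {
    /* Claim truncation only for scalar integer shifts; see below for
       why even this is not safe on x86.  */
    return SCALAR_INT_MODE_P (mode) ? GET_MODE_BITSIZE (mode) - 1 : 0;
  }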
>
> Oh, and there is a strange x86 exception in the comment for
> SHIFT_COUNT_TRUNCATED. I'm not sure what "(real or pretended)
> bit-field operation" means, but the variable-count BT instruction with
> a non-memory operand (we never generate a variable-count BTx with a
> memory operand) masks its count operand as well.
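As an illustration, the source-level mask below is redundant whenever
the compiler emits a reg-form BT for it (the example is hypothetical,
not from the patch):

  /* The reg-form BT (bt %ecx, %eax) tests bit (ecx & 31), so the
     explicit "& 31" is redundant on x86.  The mem-form BT can address
     bits beyond the operand, which is why the two forms differ.  */
  int
  test_bit (unsigned int w, unsigned int b)
  {
    return (w >> (b & 31)) & 1;
  }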

I forgot that SSE shifts don't truncate their count operand. This is
the reason the removal of the mask is implemented in the *.md file,
but it would be really nice if the same functionality could be
achieved in a more generic way, without pattern explosion.
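A generic version might amount to stripping the AND from the count
whenever the target's truncation mask already covers it; a rough,
hypothetical sketch (not an actual patch) built on the existing hook:

  /* Hypothetical helper: drop a redundant AND on a shift/rotate count.
     If the count is (and X C) and C covers every bit of the target's
     truncation mask, the hardware masking makes the AND a no-op, so X
     can be used as the count directly.  */
  static rtx
  strip_redundant_count_mask (machine_mode mode, rtx count)
  {
    unsigned HOST_WIDE_INT tm = targetm.shift_truncation_mask (mode);
    if (tm != 0
        && GET_CODE (count) == AND
        && CONST_INT_P (XEXP (count, 1))
        && (tm & ~UINTVAL (XEXP (count, 1))) == 0)
      return XEXP (count, 0);
    return count;
  }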

Uros.
