https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91202
Martin Liška changed:
           What    |Removed     |Added
----------------------------------------------------------------
             Status|UNCONFIRMED |NEW
   Last reconfirmed|            |
--- Comment #11 from Jakub Jelinek ---
As for TARGET_SHIFT_TRUNCATION_MASK, I'm not sure it can be safely used,
because different instructions on x86 work differently. The old scalar shifts
do the & 31 masking for QImode/HImode, but e.g. vector
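For illustration only (not part of the PR, and assuming an x86-64 host with
SSE2), one way the vector shifts differ is that they do not mask the count at
all: pslld with a count of 33 clears every element, whereas a scalar 32-bit
shift would use 33 & 31 == 1 (see the stand-alone sketch after comment #4
below). A small C sketch:

#include <stdio.h>
#include <emmintrin.h>   /* SSE2 */

int
main (void)
{
  __m128i ones = _mm_set1_epi32 (1);
  __m128i count = _mm_cvtsi32_si128 (33);
  /* pslld: any count greater than 31 zeroes the destination elements;
     the count is not masked with & 31 like the scalar shifts.  */
  __m128i r = _mm_sll_epi32 (ones, count);
  printf ("pslld 1, 33 -> %u\n", (unsigned) _mm_cvtsi128_si32 (r));  /* 0 */
  return 0;
}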
--- Comment #10 from Jakub Jelinek ---
I've tried:
--- gcc/config/i386/i386.md.jj 2019-07-19 11:56:10.475964435 +0200
+++ gcc/config/i386/i386.md 2019-07-19 12:43:52.461469500 +0200
@@ -10661,6 +10661,43 @@
"ix86_split_ (operands, NULL_RT
--- Comment #9 from Jakub Jelinek ---
We have several PRs for a narrowing/widening pass on late gimple, but I'm afraid
this exact thing is not something that can be done there, because the semantics
of what the x86 instructions do are quite weird an
--- Comment #8 from Uroš Bizjak ---
(In reply to Jakub Jelinek from comment #7)
> Perhaps we could define patterns for combine like:
> (set (match_operand:SI 0 "register_operand" "=q")
>      (ashiftrt:SI (zero_extend:SI (match_operand:QI 1 "register_operand" "q"))
--- Comment #7 from Jakub Jelinek ---
Perhaps we could define patterns for combine like:
(set (match_operand:SI 0 "register_operand" "=q")
     (ashiftrt:SI (zero_extend:SI (match_operand:QI 1 "register_operand" "q"))
--- Comment #6 from Uroš Bizjak ---
(In reply to Jakub Jelinek from comment #4)
> Looking at the x86 shl/shr instructions, it seems they don't do the
> SHIFT_COUNT_TRUNCATED masking, but always mask the shift count with
> & 31 (unless it is a 64-bit shift, in which case it is indeed
> SHIFT_COUNT_TRUNCATED)
--- Comment #5 from Jakub Jelinek ---
Though, note the combiner doesn't try to match that, nor with the

void
foo (unsigned char a, unsigned char b, unsigned char *c)
{
  *c = a >> b;
}

case, where the final subreg is in some other instruction (e.g. the
--- Comment #4 from Jakub Jelinek ---
Looking at the x86 shl/shr instructions, it seems they don't do the
SHIFT_COUNT_TRUNCATED masking, but always mask the shift count with
& 31 (unless it is a 64-bit shift, in which case it is indeed
SHIFT_COUNT_TRUNCATED)
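A quick way to see that the hardware masks with & 31 rather than the & 7 a
QImode SHIFT_COUNT_TRUNCATED would imply is the following stand-alone sketch
(not part of the PR, assuming an x86-64 host and GCC extended asm; shlb_hw is
just a hypothetical wrapper around the shlb instruction):

#include <stdio.h>

static unsigned char
shlb_hw (unsigned char val, unsigned char count)
{
  /* shlb uses CL & 31 as the effective count, even for a byte operand.  */
  __asm__ ("shlb %%cl, %0" : "+q" (val) : "c" (count));
  return val;
}

int
main (void)
{
  /* A count of 9 is not reduced to 1 (& 7); the byte is shifted out.  */
  printf ("shlb 1, 9  -> %u\n", shlb_hw (1, 9));    /* 0 */
  /* A count of 33 is masked with & 31, so it acts as a shift by 1.  */
  printf ("shlb 1, 33 -> %u\n", shlb_hw (1, 33));   /* 2 */
  return 0;
}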
Richard Biener changed:
           What    |Removed |Added
----------------------------------------------------------------
             Target|x86     |x86_64-*-* i?86-*-*
                 CC|        |