------- Comment #14 from michael dot meissner at amd dot com  2005-10-04 18:59 -------
Created an attachment (id=9876)
 --> (http://gcc.gnu.org/bugzilla/attachment.cgi?id=9876&action=view)
Patch for x86 double word shifts

This patch fixes the bug from the x86 side of things instead of from the
machine-independent side, by adding direct expanders that emit the best code
for 64-bit rotates in 32-bit mode and 128-bit rotates in 64-bit mode.  On a
machine with conditional move (all recent processors), the generated code
becomes:

        movl    %edx, %ebx      # save the original high word
        shldl   %eax, %edx      # high word <<= %cl, filled from the low word
        shldl   %ebx, %eax      # low word <<= %cl, filled from the saved high word
        movl    %edx, %ebx      # save the shifted high word
        andl    $32, %ecx       # is the rotate count >= 32?
        cmovne  %eax, %edx      # if so, swap the two halves ...
        cmovne  %ebx, %eax      # ... using the saved copy as a temporary
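
For reference, the operation this sequence implements is an ordinary 64-bit
rotate left with the count taken modulo 64.  A minimal portable C version (a
hypothetical illustration, not code from the attached patch) would be:

        /* Hypothetical sketch of the 64-bit rotate-left the expander
           targets; masking both shift counts keeps them in range, so
           the n == 0 case does not invoke undefined behavior.  */
        unsigned long long
        rotl64 (unsigned long long x, unsigned int n)
        {
          n &= 63;
          return (x << n) | (x >> ((64 - n) & 63));
        }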

However, I suspect using MMX or SSE2 instructions would provide even more of
a speedup, since those units have direct 64-bit shift, and, or, and
load/store operations (but no direct rotate).  In the MMX case you have to be
careful that no x87 floating point is active, and to switch out of MMX mode
(via emms) before doing calls or returns.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17886
