On Fri, Aug 30, 2013 at 01:53:37PM -0700, Richard Henderson wrote:
> On 08/30/2013 12:12 PM, Aurelien Jarno wrote:
> > On Fri, Aug 30, 2013 at 10:20:23AM -0700, Richard Henderson wrote:
> >> On 08/30/2013 09:55 AM, Aurelien Jarno wrote:
> >>> While it works for x86 and some other architectures, it makes the
> >>> assumption that only part of the register can be used later by the TCG
> >>> code. It won't be the case if we later (and I hope we will) implement a
> >>> MIPS64 TCG target. In that case, a 32-bit value has to be returned
> >>> sign-extended, which won't be the case for example for a 32-bit guest
> >>> loading a 16-bit unsigned value.
> >>
> >> This doesn't break the mips64 abi, since we'll be returning a 64-bit
> >> value, not a 32-bit value that needs sign-extension.
> >>
> >> Given a mips64 host with 32-bit guest, the sign-extension of the 32-bit
> >> load can either happen by using helper_ret_ldsl_mmu in the table of
> >> helper functions, or by using an sll insn instead of a move to put the
> >> value into place at the end of the slow path.
> >
> > That's indeed a possibility. That said, while the MIPS64 ABI is then
> > still followed, it would have broken a MIPS backend, as the ABI between
> > the helper and the TCG code is broken.
Sorry, I meant MIPS64 backend.

> How's that? We're passing a value extended to tcg_target_ulong.
>
> For the 32-bit mips backend running o32 (or even a theoretical n32 backend),
> that type is uint32_t. That gets returned from C exactly how C returns that
> type. For o32 it's the full width of the register, full stop. For n32, it
> would be returned sign-extended in the 64-bit register.
>
> Please explain exactly the failure mode you imagine, because I don't think
> there is one.

For MIPS64, to load a 32-bit value in a 32-bit guest, the slow path would
have called helper_ret_ldl_mmu(), returning a uint32_t, which is then
sign-extended to 64 bits in the register as per the MIPS64 ABI. With the new
code, helper_ret_ldl_mmu() would return a tcg_target_ulong value, so it is
zero-extended to 64 bits. If this register is later used in a 32-bit op, it
is no longer sign-extended, and the result is then unpredictable.

As you said earlier, the solution would have been to replace
helper_ret_ldl_mmu() by helper_ret_ldsl_mmu(), but it should have been done
in the same patch so as not to break things. I just wanted to make sure we
don't have the same problem on another target.

> > I am therefore concerned that we might break some of our 64-bit
> > backends. x86-64 and ia64 should be fine, I don't know about aarch64,
> > ppc64, sparc64 or s390x.
>
> Nope, all 4 of those will be fine. Not least of which because the latter 3
> are still using the original helper functions, not the new helper_ret_*
> ones.
>
> But certainly all 4 of those are, in the gcc sense, TRULY_NOOP_TRUNCATION
> machines, meaning we can truncate to 32 bits merely by ignoring the garbage
> in the high bits. In practice it means that they have different 32-bit and
> 64-bit comparison instructions.

If they are all fine, you get:

Reviewed-by: Aurelien Jarno <[email protected]>

-- 
Aurelien Jarno                          GPG: 1024D/F1BCDB73
[email protected]                 http://www.aurel32.net
