On 09/26/2013 05:48 PM, Alexander Graf wrote:
> +
> +uint64_t HELPER(sign_extend)(uint64_t x, uint64_t is_signed, uint64_t mask)
> +{
> +    if (x & is_signed) {
> +        x |= mask;
> +    }
> +
> +    return x;
> +}
Why in the world do you have such a simple helper?
> +    tcg_gen_andi_i64(tcg_newmask, cpu_reg(source), mask);
> +    if (imms < immr) {
> +        tcg_gen_shli_i64(tcg_newmask, tcg_newmask, bitsize - immr);
> +        tmask <<= bitsize - immr;
> +        signbit <<= bitsize + imms - immr;
> +        if (signbit == 0x8000000000000000ULL) {
> +            /* Can't pad anymore - highest bit is already set */
> +            topmask = 0;
> +        } else {
> +            topmask = ~((1ULL << (bitsize + imms - immr + 1)) - 1);
> +        }
> +    } else {
> +        tcg_gen_shri_i64(tcg_newmask, tcg_newmask, immr);
> +        tmask >>= immr;
> +        signbit <<= imms - immr;
> +        topmask = ~tmask;
> +    }
> +
> +    if (is_32bit) {
> +        tcg_gen_ext32u_i64(tcg_newmask, tcg_newmask);
> +    }
> +
> +    switch (opc) {
> +    case 0: { /* SBFM */
> +        TCGv_i64 tcg_mask = tcg_const_i64(topmask);
> +        TCGv_i64 tcg_signbit = tcg_const_i64(signbit);
> +        gen_helper_sign_extend(cpu_reg(dest), tcg_newmask, tcg_signbit,
> +                               tcg_mask);
Ah. Perhaps it'll be helpful to know that this can be done as

  dest = (newmask ^ signbit) - signbit;

so you don't have to compute tcg_mask either.
Although given that ASR, LSL, and LSR are all canonical aliases, we'd probably
do well to detect those special cases.
> +    case 1: /* BFM */
> +        /* replace the field inside dest */
> +        tcg_gen_andi_i64(cpu_reg(dest), cpu_reg(dest), ~tmask);
> +        tcg_gen_or_i64(cpu_reg(dest), cpu_reg(dest), tcg_newmask);
> +        break;
Ideally we'd emit deposit here for appropriate imms+immr.
r~