On 03/29/2012 11:17 PM, Jia Liu wrote:
> + int32_t temp;
> + uint32_t rd;
> + int i, last;
> +
> + temp = rt & MIPSDSP_LO;
> + rd = 0;
> + for (i = 0; i < 16; i++) {
> + last = temp % 2;
> + temp = temp >> 1;
temp should not be signed: for a negative value, % 2 yields -1 rather
than the low bit.  Use an unsigned type, or mask with & 1.
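As a standalone sketch of the fix (assuming the loop is building a
bit-reversed result into rd -- the shift-accumulate step is my
assumption, since it isn't quoted above):

```c
#include <stdint.h>

/* Hypothetical helper: bit-reverse the low 16 bits of rt.
 * temp is unsigned, so (temp & 1) extracts the low bit for all
 * inputs; with a signed temp, temp % 2 can be -1. */
static uint32_t bitrev16(uint32_t rt)
{
    uint32_t temp = rt & 0xFFFF;
    uint32_t rd = 0;

    for (int i = 0; i < 16; i++) {
        rd = (rd << 1) | (temp & 1);  /* take the low bit */
        temp >>= 1;
    }
    return rd;
}
```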
> + imm3 = tcg_const_i32(imm);
> + imm2 = tcg_const_i32(imm);
> + imm1 = tcg_const_i32(imm);
> + imm0 = tcg_const_i32(imm);
> + tcg_gen_shli_i32(imm3, imm3, 24);
> + tcg_gen_shli_i32(imm2, imm2, 16);
> + tcg_gen_shli_i32(imm1, imm1, 8);
> + tcg_gen_or_i32(cpu_gpr[rd], imm3, imm2);
> + tcg_gen_or_i32(cpu_gpr[rd], cpu_gpr[rd], imm1);
> + tcg_gen_or_i32(cpu_gpr[rd], cpu_gpr[rd], imm0);
> + tcg_temp_free(imm3);
> + tcg_temp_free(imm2);
> + tcg_temp_free(imm1);
> + tcg_temp_free(imm0);
Err, this is an *immediate*.
imm = (ctx->opcode >> 16) & 0xFF;
tcg_gen_movi_i32(cpu_gpr[rd], imm * 0x01010101);
> + rt3 = tcg_const_i32(0);
> + rt2 = tcg_const_i32(0);
> + rt1 = tcg_const_i32(0);
> + rt0 = tcg_const_i32(0);
> +
> + tcg_gen_andi_i32(rt3, cpu_gpr[rt], 0xFF);
> + tcg_gen_andi_i32(rt2, cpu_gpr[rt], 0xFF);
> + tcg_gen_andi_i32(rt1, cpu_gpr[rt], 0xFF);
> + tcg_gen_andi_i32(rt0, cpu_gpr[rt], 0xFF);
> +
> + tcg_gen_shli_i32(rt3, rt3, 24);
> + tcg_gen_shli_i32(rt2, rt2, 16);
> + tcg_gen_shli_i32(rt1, rt1, 8);
> +
> + tcg_gen_or_i32(cpu_gpr[rd], rt3, rt2);
> + tcg_gen_or_i32(cpu_gpr[rd], cpu_gpr[rd], rt1);
> + tcg_gen_or_i32(cpu_gpr[rd], cpu_gpr[rd], rt0);
I hadn't been asking for you to inline replv, only repl.
But if you want to do this, at least only compute t=rt&0xff once.
That said, I suspect the * 0x01010101 trick is fairly efficient on
most hosts these days.
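For comparison, the compute-t-once-and-multiply version in plain C
(the function name is hypothetical; this just shows the arithmetic
that replaces the four and/shift/or sequences):

```c
#include <stdint.h>

/* Hypothetical helper: replicate the low byte of rt into all four
 * byte lanes.  t = rt & 0xff is computed once; multiplying by
 * 0x01010101 copies it to every byte position. */
static uint32_t repl_qb(uint32_t rt)
{
    uint32_t t = rt & 0xFF;
    return t * 0x01010101u;
}
```

In TCG terms that would be roughly a tcg_gen_ext8u_i32 followed by a
tcg_gen_muli_i32, though for the immediate form the constant can be
folded at translation time as shown above.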
> + TCGv temp_rt = tcg_const_i32(rt);
> + gen_helper_insv(cpu_gpr[rt], cpu_env,
> + cpu_gpr[rs], cpu_gpr[rt]);
> + tcg_temp_free(temp_rt);
temp_rt is allocated and freed but never used; drop both lines.
r~