On 10/12/2018 18:49, Richard Henderson wrote:
> On 12/7/18 2:56 AM, Mark Cave-Ayland wrote:
>> +        avr = tcg_temp_new_i64();                                     \
>>          EA = tcg_temp_new();                                          \
>>          gen_addr_reg_index(ctx, EA);                                  \
>>          tcg_gen_andi_tl(EA, EA, ~0xf);                                \
>>          /* We only need to swap high and low halves. gen_qemu_ld64_i64 does \
>>             necessary 64-bit byteswap already. */                      \
>>          if (ctx->le_mode) {                                           \
>> -            gen_qemu_ld64_i64(ctx, cpu_avrl[rD(ctx->opcode)], EA);    \
>> +            gen_qemu_ld64_i64(ctx, avr, EA);                          \
>> +            set_avr64(rD(ctx->opcode), avr, false);                   \
>>              tcg_gen_addi_tl(EA, EA, 8);                               \
>> -            gen_qemu_ld64_i64(ctx, cpu_avrh[rD(ctx->opcode)], EA);    \
>> +            gen_qemu_ld64_i64(ctx, avr, EA);                          \
>> +            set_avr64(rD(ctx->opcode), avr, true);                    \
>
> An accurate conversion, but I'm going to call this an existing bug:
>
> The writeback to both avr{l,h} should be delayed until all exceptions
> have been raised. Thus you should perform the two gen_qemu_ld64_i64
> into two temporaries and only then write them both back via set_avr64.
Thanks for the pointer. I'll add this to the list of changes for the
next revision of the patchset.
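For the record, the fix I have in mind looks roughly like the sketch below (untested, le_mode path only, with the macro continuation backslashes omitted for readability; the temporary names avr1/avr2 are placeholders, not names from the patch):

```c
/* Load both halves into temporaries first, so that a fault on the
 * second load leaves the target AVR register unmodified, and only
 * then commit both halves via set_avr64().
 */
avr1 = tcg_temp_new_i64();
avr2 = tcg_temp_new_i64();
EA = tcg_temp_new();
gen_addr_reg_index(ctx, EA);
tcg_gen_andi_tl(EA, EA, ~0xf);
/* We only need to swap high and low halves. gen_qemu_ld64_i64 does
   necessary 64-bit byteswap already. */
if (ctx->le_mode) {
    gen_qemu_ld64_i64(ctx, avr1, EA);
    tcg_gen_addi_tl(EA, EA, 8);
    gen_qemu_ld64_i64(ctx, avr2, EA);
    /* No writeback until both loads have had a chance to fault. */
    set_avr64(rD(ctx->opcode), avr1, false);
    set_avr64(rD(ctx->opcode), avr2, true);
}
```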
> Otherwise,
> Reviewed-by: Richard Henderson <[email protected]>
ATB,
Mark.