On 10/20/21 9:13 AM, Alex Bennée wrote:
Richard Henderson <[email protected]> writes:
Currently, we have support for optimizing redundant zero extensions,
which I think was done with x86 and aarch64 in mind, since those targets
zero-extend all 32-bit operations into the 64-bit register.
But targets like Alpha, MIPS, and RISC-V do sign-extensions instead.
The last 5 patches address this.
But before that, split the quite massive tcg_optimize function.
BTW, this reminded me of a discussion I was having on another thread:
Subject: Re: TCG Floating Point Support (Work in Progress)
Date: Fri, 01 Oct 2021 09:03:41 +0100
In-reply-to:
<CADc=-s5wj0cbv9r0rxaok0ys77far7mgxq5b+y4konr937c...@mail.gmail.com>
Message-ID: <[email protected]>
about a test harness for TCG. With the changes over the years, are we any
closer to being able to lift the TCG code into a unit test so we can add
test cases that exercise and validate the optimiser's decisions?
Nope.
I'm not even sure true unit testing is worthwhile.
It would require inventing a "tcg front end", parser, etc.
I could imagine, perhaps, something in which we input real asm and look at the optimized
opcode dump. E.g. for x86_64,
  _start:
          mov     %eax, %ebx
          mov     %ebx, %ecx
          hlt
should contain only one ext32u_i64 opcode.
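
As a rough sketch of how such a check could be scripted (only an
illustration, not something from this series): run the guest binary under
qemu-x86_64 with the existing -d op_opt and -D logging options and count
opcodes in the dump. The binary name "./test-ext32u" below is made up.

    #!/usr/bin/env python3
    # Sketch only: count occurrences of a TCG opcode in QEMU's
    # post-optimization op dump for a guest binary.
    import os
    import re
    import subprocess
    import tempfile

    def count_opcode(binary, opcode="ext32u_i64"):
        with tempfile.TemporaryDirectory() as tmp:
            log = os.path.join(tmp, "qemu.log")
            # -d op_opt dumps TCG ops after optimization; -D redirects
            # the log to a file.  check=False because the guest dies on
            # hlt, but translation (and thus the dump) happens first.
            subprocess.run(["qemu-x86_64", "-d", "op_opt", "-D", log,
                            binary], check=False)
            with open(log) as f:
                return len(re.findall(r"\b%s\b" % re.escape(opcode),
                                      f.read()))

    if __name__ == "__main__":
        # For the asm above we would expect exactly one ext32u_i64.
        print(count_opcode("./test-ext32u"))
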
r~