Quoting Steven Bosscher <stevenb....@gmail.com>:
The most visible ongoing effort is the conversion from target macros
to target hooks (which is incomplete). The goal was to allow "hot
swapping" of backends.  This is still the most obvious, most complete,

Yes, I initially thought about this one.

and least unappealing (from a technical POV) approach IMHO. But Kaveh
showed at one point that the compile time penalty of even just the
partial conversion done so far is a few percentage points (somewhere
between 3% and 5%, I don't recall the details).

Where the target-specific information can be expressed by a constant,
we could actually use a const data member of the target vector.
Also, sometimes a macro is used in a loop when it could be hoisted
outside.
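As a sketch of the constant case (all names here are invented for illustration, not actual GCC interfaces): a per-target constant could live as a const data member of the target vector, and a use that a macro would re-evaluate inside a loop can be hoisted out by hand:

```cpp
/* Hypothetical sketch: a target constant held as a const data member
   of the target vector instead of a macro.  Names are made up.  */

struct target_vec
{
  const int units_per_word;	/* filled in once by the backend */
};

/* One backend's instance of the vector.  */
static const struct target_vec the_target = { /* units_per_word = */ 4 };

/* Where a macro might be re-expanded on every iteration, the const
   member can be read once before (or hoisted out of) the loop.  */
static int
count_words (int nbytes)
{
  const int upw = the_target.units_per_word;	/* hoisted once */
  return (nbytes + upw - 1) / upw;
}
```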

And also it's not nice
and easy work so nobody is working on it actively AFAIK.

It would be easier to implement if C++ with virtual member functions
were allowed for the target vector.  Then, where this is not already
readily available, we could tweak the optimizers and/or the code so that
we obtain de-virtualization and inlining of the function calls, with
appropriate cloning of the callers (or of the relevant parts thereof).

Using plain C++, we would lose the ability to bootstrap with a C
compiler.  However, we could use macro tricks like those once used for
K&R vs. ANSI/ISO C so that the target vector with its functions can be
compiled either as ordinary C with function pointers and separate
functions, or as C++ with virtual member functions.
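A minimal sketch of such a macro trick, with an invented TV_FN macro and
made-up hook names (this is not the actual gcc_target layout): the same
declaration expands to a function-pointer field under C and to a pure
virtual member function under C++, and the call spelling
targetm->hook (args) stays the same either way:

```cpp
/* Hypothetical sketch; all names invented for illustration.  */

#ifdef __cplusplus
/* C++: the hook becomes a virtual member function, open to
   de-virtualization and inlining.  */
# define TV_FN(ret, name, args) virtual ret name args = 0;
#else
/* C: the hook stays a plain function pointer, as today.  */
# define TV_FN(ret, name, args) ret (*name) args;
#endif

struct gcc_target
{
  TV_FN (int, branch_cost, (int speed_p))
};

#ifdef __cplusplus
/* A backend provides the hooks by deriving from the vector.  */
struct mips_target : gcc_target
{
  int branch_cost (int speed_p) { return speed_p ? 2 : 1; }
};
static mips_target mips_instance;
static gcc_target *targetm = &mips_instance;
#else
/* A backend provides the hooks by filling in the pointers.  */
static int mips_branch_cost (int speed_p) { return speed_p ? 2 : 1; }
static struct gcc_target mips_instance = { mips_branch_cost };
static struct gcc_target *targetm = &mips_instance;
#endif
```

Either way, a call site reads targetm->branch_cost (1), so the middle
end need not know which variant it was built against.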

Using templates should make the optimizations more readily available,
but we'd need a special macro to call target vector functions, which
would have to be used at each call site, and macros for the definitions
of functions that use any target vector functions, if we want to retain
the ability to bootstrap with a C compiler.
Plus, when compiling with C++, template expansion and name mangling make
functions harder to find; you'll probably want to use a C-built
compiler for more intricate debugging then.
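For comparison, a rough sketch of the template variant, again with
invented names: a pass parameterized over the target type lets each
instantiation resolve and inline the hook statically, with a call-site
macro (here TARGET_CALL) standing in for whatever would keep a plain-C
fallback compilable:

```cpp
/* Hypothetical sketch; names invented.  Compiled as C++, the call
   resolves at instantiation time and can be inlined; under C, the
   same macro would have to expand to a function-pointer call.  */
#ifdef __cplusplus
# define TARGET_CALL(tv, fn, args) ((tv).fn args)

/* A backend's target type with a made-up hook.  */
struct mips_target
{
  int branch_cost (int speed_p) const { return speed_p ? 2 : 1; }
};

/* A pass compiled once per backend by template instantiation.  */
template <typename TARGET>
int
expand_cost (const TARGET &tv, int speed_p)
{
  /* Each instantiation specializes this call away.  */
  return TARGET_CALL (tv, branch_cost, (speed_p)) + 1;
}
#endif
```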

Another approach is taken in the MIPS backend, which can reset the
middle-end to swap between MIPS32 and MIPS16 AFAIU, but this looks
more like a hack to me than something that you want to do for really
different targets -- I mean, I guess MIPS16 and MIPS32 are both still
MIPS, but POWER and SPU are probably not similar enough for this
approach.

Similarity is not strictly necessary.  The SH4 and SH64 are very different
architectures, and the port looks like a set of conjoined twins.  Well,
more like a set of conjoined half-fraternal twins, if that were viable.
But it could be made much more elegant if they could be defined as two
separate target ports with a common ABI and linked together into one
compiler.

A large part of the backend consists of C files (insn-* etc.) that are generated
from the *.md files.  These files would have to get separate file and
function names for each backend; pointers to the non-static functions
of these files might then be stored in the target vector.
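A toy illustration of that arrangement, with invented names and dummy
recognizer bodies: each backend's generated recognizer gets a prefixed,
non-static entry point, and the target vector holds the pointer, so two
ports could be linked into one compiler and swapped:

```cpp
/* Hypothetical sketch; names and bodies are made up.  */

struct target_vec
{
  int (*recog) (int insn_code);
};

/* Stand-ins for the non-static entry points of each backend's
   generated insn-recog.c, renamed with a per-target prefix.  */
int mips_recog (int insn_code) { return insn_code % 2; }
int sh_recog (int insn_code) { return insn_code % 3; }

static const struct target_vec mips_target = { mips_recog };
static const struct target_vec sh_target = { sh_recog };

/* The middle end only ever calls through the vector.  */
static int
run_recog (const struct target_vec *tv, int insn_code)
{
  return tv->recog (insn_code);
}
```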

If putting the current simple macros into the target vector would incur
too much overhead, we could also compile the rtl expander and optimizer
passes once for each target with the matching target macros, thus
likewise hand-specializing these optimizers.
The benefit would be that there would be virtually no performance penalty
right from the start, without any need to tweak the optimizers or plaster
the rtl passes with macros.

But I would think that a target vector which can alternatively be
compiled as C or as C++ with virtual member functions would be more
maintainable.
Plus, it gives us a nice test-bed to work on the quality of
de-virtualization and profile-based feedback.
However, I don't think I personally will have opportunity anytime soon to
work on implementing/improving de-virtualization.

What approach would people prefer?
