Hello,
I'm new to GCC development and have been tasked with building a
machine description for an x86 processor. I have documentation
covering the pipeline stages, instruction latencies, and a number of
special-case optimization opportunities. I have been making small
changes to i386.h/i386.c and have created a new machine description
file. In building the machine description, I have tried to model the
pipeline as closely as I understand it from the documentation, but a
number of aspects of GCC development still elude me because I'm new to
the development process. Here is a list of questions I have that are
specific to i386 machine descriptions.
1.) The processor_costs structure seems very limited, and it looks
easy enough to fill in, but are these costs supposed to be best case
or worst case? For instance, many instructions vary in latency
depending on the size of their operands.
2.) I don't understand the meaning of the stringop_algs field or the
scalar, vector, and branch costs at the end of the processor_costs
structure. Could someone give me an accurate description of what they
control?
3.) The processor I am currently trying to model is
single-issue/in-order with a simple pipeline. Stalls can occasionally
occur in the fetch/decode/translate stages, but the main cost is the
latency of instructions in the functional units during the execute
stage. What recommendations can anyone make for designing the DFA?
Should it just directly model the functional units' latencies for the
various insn types, along the lines of the sketch below?
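For reference, here is a rough sketch of the direction I have been
experimenting with, based on my reading of the internals manual. The
"newcpu" automaton, unit, and reservation names, the insn type lists,
and all of the latency numbers are placeholders I made up for
illustration, not figures from the real documentation:

(define_automaton "newcpu")

;; A single execution unit, since the core is single-issue/in-order.
(define_cpu_unit "newcpu_exec" "newcpu")

;; Simple ALU and move instructions: 1-cycle latency (placeholder).
(define_insn_reservation "newcpu_alu" 1
  (and (eq_attr "cpu" "newcpu")
       (eq_attr "type" "alu,alu1,negnot,imov,icmp,test"))
  "newcpu_exec")

;; Integer multiply: 4-cycle latency, occupying the unit for 2 cycles
;; (both numbers are guesses for the sake of the example).
(define_insn_reservation "newcpu_imul" 4
  (and (eq_attr "cpu" "newcpu")
       (eq_attr "type" "imul"))
  "newcpu_exec*2")

;; Integer divide: a long, fully blocking operation (again a guess).
(define_insn_reservation "newcpu_idiv" 20
  (and (eq_attr "cpu" "newcpu")
       (eq_attr "type" "idiv"))
  "newcpu_exec*20")

Is this roughly the right shape, or is there a better way to express a
simple in-order pipeline like this?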
Because I'm new at this, any recommendations or assistance with this
kind of development would be greatly appreciated.
Thank you,
Ty Smith