On 2/13/25 11:13 AM, Palmer Dabbelt wrote:


FWIW, that's what tripped up my "maybe there's a functional bug here" thought.  It looks like the scheduler is seeing

    bne t0, x0, end
    vsetvli t1, t2, ...
    vsetvli x0, t2, ...
    ...
  end:
    vsetvli x0, t2, ...

and thinking it's safe to schedule that like

    vsetvli t1, t2, ...
    bne t0, x0, end
    vsetvli x0, t2, ...
    ...
  end:
    vsetvli x0, t2, ...

which I'd assumed was because the scheduler sees both execution paths overwriting the vector control registers and thus thinks it's safe to move the first vsetvli to execute speculatively. From reading "6. Configuration-Setting Instructions" in vector.md, that seems intentional, though, so maybe it's all just fine?
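
Spelling that out on the scheduled form (with the caveat that t1 also
has to be dead on the taken path for the move to be legal):

    vsetvli t1, t2, ...    # speculated: writes vl/vtype (and t1)
    bne t0, x0, end
    vsetvli x0, t2, ...    # fall-through path rewrites vl/vtype anyway
    ...
  end:
    vsetvli x0, t2, ...    # taken path rewrites vl/vtype anyway
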
I think it's fine. Perhaps not what we want from a performance standpoint, but functionally safe.



Also, why doesn't the vsetvl pass fix the situation?  IMHO we need to
understand the problem more thoroughly before changing things.
In the end LCM minimizes the number of vsetvls and inserts them at the
"earliest" point.  If that is not sufficient, I'd say we need to modify
the constraints (maybe on a per-uarch basis)?
The vsetvl pass is LCM-based.  So it's not allowed to add a vsetvl on a
path that didn't have a vsetvl before.  Consider this simple graph.

     0
    / \
   2-->3

If we have need for a vsetvl in bb2, but not bb0 or bb3, then the vsetvl
will land in bb2.  bb0 is not a valid insertion point for the vsetvl
pass because the path 0->3 doesn't strictly need a vsetvl.  That's
inherent in the LCM algorithm (the vsetvl isn't anticipatable at the
end of bb0, since it isn't needed on every path out of that block).
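
In code terms, that shape is roughly (registers, labels and the branch
condition are all made up for illustration):

    # bb0
    beq a2, x0, end
    # bb2 -- the only block that needs a vector configuration
    vsetvli t1, t2, ...
    ...
  end:
    # bb3 -- no vector code, so no vsetvl is needed here
    ...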

The scheduler has no such limitations.  The scheduler might create a
scheduling region out of blocks 0 and 2.  In that scenario, insns from
block 2 may speculate into block 0 as long as doing so doesn't change
semantics.
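
For the sketch above that means something like this after scheduling
would be perfectly legal (again with made-up operands, and assuming t1
is otherwise dead on the 0->3 path):

    vsetvli t1, t2, ...    # speculated from bb2 into bb0
    beq a2, x0, end
    ...
  end:
    ...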

Ya.  The combination of the scheduler moving a vsetvli before the branch (IIUC from bb2 to bb0 here) and the vsetvli merging causes it to look like the whole vsetvli was moved before the branch.

I'm not sure why the scheduler doesn't move both vsetvli instructions to execute speculatively, but otherwise this seems to be behaving as designed.  It's just tripping up the VL=0 cases for us.
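
By "move both" I mean ending up with something like:

    vsetvli t1, t2, ...
    vsetvli x0, t2, ...
    bne t0, x0, end
    ...
  end:
    vsetvli x0, t2, ...
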
You'd have to get into those dumps and possibly throw the compiler under a debugger. My guess is it didn't see any advantage in doing so.



Maybe that's a broad uarch split point here?  For OOO designs we'd want to rely on HW scheduling and thus avoid hoisting possibly-expensive vsetvli instructions (where they'd need to execute in HW because of the side effects), while on in-order designs we'd want to aggressively schedule vsetvli instructions because we can't rely on HW scheduling to hide the latency.
There may be. But the natural question would be cost/benefit. It may not buy us anything on the performance side to defer vsetvl insertion for OOO cores. At which point the only advantage is testsuite stability. And if that's the only benefit, we may be able to do that through other mechanisms.



In theory at sched2 time the insn stream should be fixed.  There are
practical/historical exceptions, but changes to the insn stream after
that point are discouraged.

We were just talking about this in our toolchain team meeting, and it seems like both GCC and LLVM are in similar spots here -- essentially the required set of vsetvli instructions depends very strongly on scheduling, so trying to do them independently is just always going to lead to sub-par results.  It feels kind of like we want some scheduling-based cost feedback in the vsetvli pass (or the other way around if they're in the other order) to get better results.

Maybe that's too much of a time sink for the OOO machines, though?  If we've got HW scheduling then the SW just has to be in the ballpark and everything should be fine.
I'd guess it's more work than it'd be worth. We're just not seeing vsetvls being all that problematical on our design. I do see a lot of seemingly gratuitous changes in the vector config, but when we make changes to fix that we generally end up with worse-performing code.

Jeff
