So per the discussion in PR 116662, this adjusts the destructive interference size for RISC-V to be more in line with current designs (64 bytes).

Getting this wrong is "just" a performance issue, so there are no correctness concerns to worry about. The only real worry is that the value can have ABI implications. The position Jason and others have taken is that while the value can be misused in a way that gets exposed as ABI, doing so is inherently unsafe, and we issue warning diagnostics for those cases.
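
To make the ABI hazard concrete, here's a minimal sketch (the struct and member names are made up): once the constant feeds into a type's layout, translation units built with different defaults disagree on that layout:

    #include <new>

    /* Give each counter its own cache line to avoid false sharing.
       Because std::hardware_destructive_interference_size participates
       in the layout, sizeof (counters) changes with the compiler's
       default -- that's the ABI leak, and GCC diagnoses such uses
       with -Winterference-size.  */
    struct counters
    {
      alignas (std::hardware_destructive_interference_size) long hits;
      alignas (std::hardware_destructive_interference_size) long misses;
    };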

So here's the change to bump it to 64 bytes. Tested on rv32 and rv64 embedded targets. Bootstrap on the Pioneer & BPI is in flight and not due to land for several hours. Will push once pre-commit CI has done its thing (and if the Pioneer has finished its cycle by then, I'll obviously check those results first).

Jeff



        PR target/116662
gcc/
        * config/riscv/riscv.cc (riscv_option_override): Override
        default value for destructive interference size.

diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 63404d3d514..8880b199d41 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -12040,6 +12040,14 @@ riscv_option_override (void)
                       param_cycle_accurate_model,
                       0);
 
+  /* Cache lines of 64 bytes are common these days, so let's get a sensible
+     value for the interference size.  Technically this can leak and cause
+     sizes of structures to change, but consensus is anything using the
+     value to size fields within a structure is broken.  Any 2^n byte value
+     is functionally correct, but may not be performant.  */
+  SET_OPTION_IF_UNSET (&global_options, &global_options_set,
+                      param_destruct_interfere_size, 64);
+
   /* Function to allocate machine-dependent function status.  */
   init_machine_status = &riscv_init_machine_status;
 
