Quoting Chris Wilson (2020-08-14 20:41:32)
> Quoting Mika Kuoppala (2020-08-14 19:41:14)
> > Chris Wilson <[email protected]> writes:
> > > - entry = READ_ONCE(*csb);
> > > - if (unlikely(entry == -1)) {
> > > - preempt_disable();
> > > - if (wait_for_atomic_us((entry = READ_ONCE(*csb)) != -1, 50))
> >
> > If we get this deep into desperation, should we start to apply more
> > pressure? I.e., an rmb() instead of just instructing the compiler. We could
> > also start to invalidate the entry, which is obviously of no further use.
>
> I had an rmb() here; removing it did not appear to make any difference
> whatsoever to the average delay. The extreme case would be a full
> mb(); clflush(); mb() read. I haven't timed the average for that....
+static inline u64 __csb_read(u64 *csb)
+{
+	/* Serialise, toss the stale cacheline, then force a fresh read. */
+	mb();
+	clflush(csb);
+	mb();
+
+	return READ_ONCE(*csb);
+}
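
For reference, a rough sketch of how the slow path from the quoted diff
might call that helper. wait_for_atomic_us() is i915's existing
atomic-context poll macro; csb_read_slow() is just a made-up name for
illustration, not the patch's actual function:

static u64 csb_read_slow(u64 *csb)
{
	u64 entry = READ_ONCE(*csb);

	if (unlikely(entry == -1)) {
		/* No preemption while we busy-wait up to 50us. */
		preempt_disable();
		wait_for_atomic_us((entry = __csb_read(csb)) != -1, 50);
		preempt_enable();
		/* On timeout, entry remains -1 and the caller must cope. */
	}

	return entry;
}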
[ 1554.274204] csb: 1793 misses, avg 475ns, max 14727ns
So, no better on average or in the worst case.
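
The instrumentation behind those numbers isn't shown in the thread; as a
rough sketch, assuming local_clock() timestamps around the slow path
(struct csb_stats and csb_read_timed() are made-up names), the
miss/avg/max figures could be gathered something like:

struct csb_stats {
	unsigned long misses;
	u64 total_ns, max_ns;
};

static u64 csb_read_timed(u64 *csb, struct csb_stats *stats)
{
	u64 entry = READ_ONCE(*csb);

	if (unlikely(entry == -1)) {
		u64 dt, t0 = local_clock(); /* cheap ns-resolution stamp */

		preempt_disable();
		wait_for_atomic_us((entry = __csb_read(csb)) != -1, 50);
		preempt_enable();

		dt = local_clock() - t0;
		stats->misses++;
		stats->total_ns += dt;	/* avg = total_ns / misses */
		if (dt > stats->max_ns)
			stats->max_ns = dt;
	}

	return entry;
}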
-Chris