On 14/05/2019 11:05, Chris Wilson wrote:
Use the client id to alternate the static_vcs balancer (-b context)
across clients - otherwise all clients end up on vcs0 and do not match
the context balancing employed by media-driver.

This may want to be behind the -R flag, but I felt it is a fundamental
property of static context balancing: keeping it disabled by default
causes unfair comparisons and poor workload scheduling, defeating the
purpose of testing.

I see your reasoning, but keeping this under control of the -R switch would also completely match the design of the other balancers. It can also already be achieved with the -G switch, which is perhaps a bit confusing; having both would still make sense, I think. (-G hands out engines round-robin to contexts sequentially across all clients, while -R starts each client's contexts round-robin.)
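Purely to illustrate the two schemes as I read them (made-up helper names and a hard-coded two-engine setup, not the actual gem_wsim code):

#include <stdio.h>

#define NUM_VCS 2 /* assume two video engines, vcs0 and vcs1 */

/* My reading of -G: one counter shared by all clients, so engines are
 * handed out round-robin to contexts in creation order across every
 * client. */
static unsigned int global_rr;

static unsigned int vcs_global_rr(void)
{
	return global_rr++ % NUM_VCS;
}

/* My reading of -R (with the starting offset this patch proposes):
 * each client round-robins its own contexts, and the starting engine
 * is rotated by client id so not every client begins on vcs0. */
static unsigned int vcs_per_client_rr(unsigned int client_id,
				      unsigned int ctx_idx)
{
	return (client_id + ctx_idx) % NUM_VCS;
}

int main(void)
{
	for (unsigned int client = 0; client < 2; client++)
		for (unsigned int ctx = 0; ctx < 3; ctx++)
			printf("client %u ctx %u: -G -> vcs%u, -R -> vcs%u\n",
			       client, ctx,
			       vcs_global_rr(),
			       vcs_per_client_rr(client, ctx));

	return 0;
}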

But I wouldn't enable it unconditionally. Consider another balancer, like rr, and two instances of the same workload, each consisting of a long context followed by a short batch on a second context. That case suffers from the same problem of poor scheduling until -R is added.

So I think we want the two balancers to be compatible in behaviour in this respect.
Signed-off-by: Chris Wilson <[email protected]>
Cc: Tvrtko Ursulin <[email protected]>
---
  benchmarks/gem_wsim.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/benchmarks/gem_wsim.c b/benchmarks/gem_wsim.c
index afb9644dd..8c7e30eb4 100644
--- a/benchmarks/gem_wsim.c
+++ b/benchmarks/gem_wsim.c
@@ -939,7 +939,7 @@ alloc_step_batch(struct workload *wrk, struct w_step *w, unsigned int flags)
  static void
  prepare_workload(unsigned int id, struct workload *wrk, unsigned int flags)
  {
-       unsigned int ctx_vcs = 0;
+       unsigned int ctx_vcs = id & 1;

Therefore I think "ctx_vcs = (flags & INITVCSRR) ? id & 1 : 0" here, so the default behaviour stays unchanged (see the sketch after the quoted hunk below).

        int max_ctx = -1;
        struct w_step *w;
        int i;
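To make the suggestion concrete, a minimal sketch of how that conditional would read in prepare_workload(), assuming INITVCSRR is the flag set by the -R switch:

static void
prepare_workload(unsigned int id, struct workload *wrk, unsigned int flags)
{
	/* Only rotate the initial VCS per client when -R (INITVCSRR) is
	 * given, so the default matches the behaviour of the other
	 * balancers. */
	unsigned int ctx_vcs = (flags & INITVCSRR) ? id & 1 : 0;
	int max_ctx = -1;
	struct w_step *w;
	int i;
	...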


Regards,

Tvrtko