On Fri, 20 Sep 2019 16:53:49 -0400 Alyssa Rosenzweig <aly...@rosenzweig.io> wrote:
> > @@ -1121,7 +1134,11 @@ panfrost_emit_for_draw(struct panfrost_context *ctx, bool with_vertex_data)
> >  
> >                  struct panfrost_shader_state *ss =
> >                          &all->variants[all->active_variant];
> >  
> > -                panfrost_batch_add_bo(batch, ss->bo);
> > +                panfrost_batch_add_bo(batch, ss->bo,
> > +                                      PAN_BO_ACCESS_PRIVATE |
> > +                                      PAN_BO_ACCESS_READ |
> > +                                      PAN_BO_ACCESS_VERTEX_TILER |
> > +                                      PAN_BO_ACCESS_FRAGMENT);
> 
> I believe this should be just the access for the stage `i`
> 
> Although actually I am not at all sure what this batch_add_bo is doing
> at all?
> 
> I think this batch_add_bo should probably be dropped altogether? This
> loop is dealing with constant buffers; the shaders themselves were added

I'll double check. I couldn't find where BOs containing shader programs
were added last time I looked.

> >  void panfrost_batch_add_fbo_bos(struct panfrost_batch *batch)
> >  {
> > +        uint32_t flags = PAN_BO_ACCESS_SHARED | PAN_BO_ACCESS_WRITE |
> > +                         PAN_BO_ACCESS_VERTEX_TILER |
> > +                         PAN_BO_ACCESS_FRAGMENT;
> 
> I think we can drop VERTEX_TILER here...? The buffers are written right
> at the end of the FRAGMENT job, not touched before that.

What about the read done when drawing the wallpaper? I guess it's also
only read by the fragment job, but I wasn't sure.

> If nothing else is broken, this should allow a nice perf boost with
> pipelining, so the vertex/tiler from frame n+1 can run in parallel with
> the fragment of frame n (rather than blocking on frame n finishing with
> the FBOs).

Would require the kernel patches I posted earlier for that to happen ;-).
Right now all jobs touching the same BO are serialized because of the
implicit BO fences added by the kernel driver.
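
FWIW, here is a rough, untested sketch of what "just the access for the
stage `i`" could look like, assuming `i` is the gallium pipe_shader_type
the loop iterates over and the usual panfrost/gallium headers are in
scope. The helper name is made up for illustration, it is not something
that exists in the series as posted:

  /* Map a gallium shader stage to the job type that will consume the BO;
   * only vertex and fragment stages are handled here. */
  static inline uint32_t
  panfrost_bo_access_for_stage(enum pipe_shader_type stage)
  {
          assert(stage == PIPE_SHADER_VERTEX ||
                 stage == PIPE_SHADER_FRAGMENT);

          return stage == PIPE_SHADER_FRAGMENT ?
                 PAN_BO_ACCESS_FRAGMENT :
                 PAN_BO_ACCESS_VERTEX_TILER;
  }

  /* At the call site quoted above, the VERTEX_TILER | FRAGMENT pair would
   * then be replaced by the per-stage flag: */
  panfrost_batch_add_bo(batch, ss->bo,
                        PAN_BO_ACCESS_PRIVATE |
                        PAN_BO_ACCESS_READ |
                        panfrost_bo_access_for_stage(i));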
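
On the pipelining point, the win from tracking per-job-type accesses is
that ordering only has to be enforced when accesses actually conflict,
rather than the kernel's blanket implicit fencing. A minimal sketch of
that rule, with a made-up helper name and the PAN_BO_ACCESS_* flags from
this series:

  /* Two accesses to the same BO only need an execution order when at
   * least one of them is a write; concurrent reads can overlap. Together
   * with FRAGMENT-only access flags on the FBOs, this is what would let
   * frame n+1's vertex/tiler run while frame n's fragment job is still
   * writing them. */
  static bool
  panfrost_bo_accesses_conflict(uint32_t a, uint32_t b)
  {
          return (a | b) & PAN_BO_ACCESS_WRITE;
  }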