Previously, we just emitted the MI_BATCH_BUFFER_START as normal. If we were close enough to the end of the batch BO, this could cause us to start a new BO just for the MI_BATCH_BUFFER_START, which breaks chaining. Since we always reserve enough space at the end of the BO for an MI_BATCH_BUFFER_START, we can simply increment cmd_buffer->batch.end prior to emitting the command.
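
For readers not familiar with the batch-chaining code, here is a minimal standalone sketch of the reserve-then-restore idea described above. It is an illustration only: struct batch, BB_START_RESERVED, and emit_chaining_bb_start are simplified stand-ins invented for this sketch, not the real anv_batch / GEN8_MI_BATCH_BUFFER_START_length machinery in the driver.

/* Sketch of the reserve-then-restore pattern; NOT the real anv code.
 * All names and the reserve size below are simplified stand-ins.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BB_START_RESERVED 16   /* hypothetical bytes for the chaining command */

struct batch {
   uint8_t *start;  /* beginning of the batch BO mapping */
   uint8_t *next;   /* current write pointer */
   uint8_t *end;    /* artificial end: real end minus BB_START_RESERVED */
};

static void batch_init(struct batch *b, uint8_t *map, size_t size)
{
   b->start = map;
   b->next = map;
   /* Hold back room for one MI_BATCH_BUFFER_START so ordinary emits can
    * never overflow into a new BO just for the chaining command. */
   b->end = map + size - BB_START_RESERVED;
}

static void emit_chaining_bb_start(struct batch *b, size_t bo_size)
{
   /* Give the reserved bytes back right before emitting the chain,
    * mirroring the batch.end adjustment and asserts in the patch below. */
   b->end += BB_START_RESERVED;
   assert(b->end == b->start + bo_size);
   assert(b->next + BB_START_RESERVED <= b->end);  /* guaranteed to fit */
   memset(b->next, 0, BB_START_RESERVED);          /* stand-in for the packet */
   b->next += BB_START_RESERVED;
}

int main(void)
{
   uint8_t bo[4096];
   struct batch b;
   batch_init(&b, bo, sizeof(bo));
   b.next = b.end;               /* pretend the batch filled up completely */
   emit_chaining_bb_start(&b, sizeof(bo));
   printf("chaining command fits: %td bytes used of %zu\n",
          b.next - b.start, sizeof(bo));
   return 0;
}

Even with the batch completely full (b.next == b.end), the chaining command still lands inside the same BO, which is the invariant the patch restores.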
Fixes: a0b133286a3 "anv/batch_chain: Simplify secondary batch return..."
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107926
---
 src/intel/vulkan/anv_batch_chain.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/src/intel/vulkan/anv_batch_chain.c b/src/intel/vulkan/anv_batch_chain.c
index 3e13553ac18..e08e07ad7bd 100644
--- a/src/intel/vulkan/anv_batch_chain.c
+++ b/src/intel/vulkan/anv_batch_chain.c
@@ -894,8 +894,17 @@ anv_cmd_buffer_end_batch_buffer(struct anv_cmd_buffer *cmd_buffer)
        * It doesn't matter where it points now so long as has a valid
        * relocation.  We'll adjust it later as part of the chaining
        * process.
+       *
+       * We set the end of the batch a little short so we would be sure we
+       * have room for the chaining command.  Since we're about to emit the
+       * chaining command, let's set it back where it should go.
        */
+      cmd_buffer->batch.end += GEN8_MI_BATCH_BUFFER_START_length * 4;
+      assert(cmd_buffer->batch.start == batch_bo->bo.map);
+      assert(cmd_buffer->batch.end == batch_bo->bo.map + batch_bo->bo.size);
+
       emit_batch_buffer_start(cmd_buffer, &batch_bo->bo, 0);
+      assert(cmd_buffer->batch.start == batch_bo->bo.map);
    } else {
       cmd_buffer->exec_mode = ANV_CMD_BUFFER_EXEC_MODE_COPY_AND_CHAIN;
    }
-- 
2.17.1
_______________________________________________
mesa-dev mailing list
[email protected]
https://lists.freedesktop.org/mailman/listinfo/mesa-dev
