Actually, Type 2 packets are handled much faster on R6xx than most Type 3
packets, because they are processed by the PFP/fetch hardware and don't
need to be forwarded to the ME.
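
For reference, the packet type lives in the top two bits of each header
DWORD; a rough sketch of the two NOP encodings (based on the public PM4
packet format, not taken from this patch) looks like:

/* Type 2 NOP: type field (bits 31:30) = 2, remaining bits ignored.
 * The PFP consumes these directly. */
#define PKT2_NOP 0x80000000u

/* Type 3 header: type = 3 in bits 31:30, payload-DWORD count minus one
 * in bits 29:16, IT opcode in bits 15:8 (NOP is 0x10). Most Type 3
 * packets are passed on to the ME. */
#define PKT3_HDR(op, count) \
	((3u << 30) | ((uint32_t)(count) << 16) | ((uint32_t)(op) << 8))

So 0x80000000 does decode as a Type 2 NOP, which matches what the patch
emits.
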
Christian.
On 06.09.2013 02:31, Dominik Behr wrote:
0x80000000 is a Type 2 NOP.
You could make it a little better/faster by inserting a single
multi-DWORD Type 3 NOP and padding to 8 DWORDs. CP fetches are 32 bytes
each, and R600 requires that padding. The same goes for CP ring buffer
updates: pad to 32 bytes before you update CP_RB_WPTR.
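
A minimal sketch of that variant, assuming the winsys CS fields
(cs->buf, cs->cdw) used in Alex's patch and the usual PM4 Type 3 header
layout (count field = payload DWORDs minus one, NOP opcode 0x10);
pad_cs_to_8_dw is a made-up name:

#define PKT3_NOP 0x10
#define PKT3(op, count) ((3u << 30) | ((uint32_t)(count) << 16) | ((op) << 8))

static void pad_cs_to_8_dw(struct radeon_winsys_cs *cs)
{
	unsigned padding_dw = (8 - cs->cdw % 8) % 8;
	unsigned i;

	if (padding_dw == 1) {
		/* A Type 3 packet needs at least 2 DWORDs, so a single
		 * leftover slot has to stay a Type 2 NOP. */
		cs->buf[cs->cdw++] = 0x80000000;
	} else if (padding_dw > 1) {
		/* One header DWORD plus (padding_dw - 1) payload DWORDs. */
		cs->buf[cs->cdw++] = PKT3(PKT3_NOP, padding_dw - 2);
		for (i = 0; i < padding_dw - 1; i++)
			cs->buf[cs->cdw++] = 0;
	}
}

One multi-DWORD NOP replaces up to seven individual NOP DWORDs, and the
8-DWORD alignment matches the 32-byte CP fetch size mentioned above.
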
On Thu, Sep 5, 2013 at 3:56 PM, Marek Olšák <[email protected]> wrote:
Reviewed-by: Marek Olšák <[email protected]>
Though I'm not sure if 0x80000000 is correct.
Marek
On Wed, Sep 4, 2013 at 11:55 PM, Alex Deucher <[email protected]> wrote:
> IBs need to be a multiple of 4 dwords on r6xx asics
> to avoid a hw bug.
>
> Signed-off-by: Alex Deucher <[email protected]>
> CC: "9.2" <[email protected]>
> CC: "9.1" <[email protected]>
> ---
> src/gallium/drivers/r600/r600_hw_context.c | 13 +++++++++++++
> 1 file changed, 13 insertions(+)
>
> diff --git a/src/gallium/drivers/r600/r600_hw_context.c b/src/gallium/drivers/r600/r600_hw_context.c
> index 97b0f9c..0a219af 100644
> --- a/src/gallium/drivers/r600/r600_hw_context.c
> +++ b/src/gallium/drivers/r600/r600_hw_context.c
> @@ -347,6 +347,19 @@ void r600_context_flush(struct r600_context *ctx, unsigned flags)
>  		flags |= RADEON_FLUSH_KEEP_TILING_FLAGS;
>  	}
>  
> +	/* Pad the GFX CS to a multiple of 4 dwords on rv6xx
> +	 * to avoid a hw bug.
> +	 */
> +	if (ctx->chip_class < R700) {
> +		unsigned i;
> +		unsigned padding_dw = 4 - cs->cdw % 4;
> +		if (padding_dw < 4) {
> +			for (i = 0; i < padding_dw; i++) {
> +				cs->buf[cs->cdw++] = 0x80000000;
> +			}
> +		}
> +	}
> +
>  	/* Flush the CS. */
>  	ctx->ws->cs_flush(ctx->rings.gfx.cs, flags, ctx->screen->cs_count++);
>  }
> --
> 1.8.3.1
>
_______________________________________________
mesa-dev mailing list
[email protected]
http://lists.freedesktop.org/mailman/listinfo/mesa-dev