On Wed, Feb 17, 2010 at 20:23:28 +0100, Rafał Miłecki wrote:
> Signed-off-by: Rafał Miłecki <[email protected]>
> Reported-by: Jaime Velasco Juan <[email protected]>
> ---
> This should do the trick. Jaime, can you check whether this works for you? Does
> it kill your corruptions?

It doesn't work; it has the same problem as the current code:

[    8.682478] [drm] Requested: e: 68000 m: 80000 p: 16
[    8.880126] [drm] Setting: e: 68000 m: 80000 p: 16
[    9.280080] [drm] Requested: e: 30000 m: 40500 p: 16
[    9.480118] [drm] Setting: e: 30000 m: 40500 p: 16

What about the attached version, does it seem OK to you? It seems to fix
the corruptions for me.
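
For reference, here is a minimal standalone sketch of why the two waits behave
differently (vblank_queue and TIMEOUT_MS are placeholders, not the driver's
identifiers): wait_event_interruptible_timeout() re-checks its condition after
every wake-up, so a constant 0 condition makes it sleep the full timeout no
matter how many vblank interrupts fire, while the open-coded wait returns on
the first wake_up() on the queue:

#include <linux/wait.h>
#include <linux/sched.h>
#include <linux/jiffies.h>

static DECLARE_WAIT_QUEUE_HEAD(vblank_queue);	/* stands in for rdev->irq.vblank_queue */
#define TIMEOUT_MS 200				/* stands in for RADEON_WAIT_VBLANK_TIMEOUT */

static void wait_always_full_timeout(void)
{
	/* The constant 0 condition is re-evaluated after each wake_up(),
	 * so the call never returns early and always sleeps TIMEOUT_MS. */
	wait_event_interruptible_timeout(vblank_queue, 0,
					 msecs_to_jiffies(TIMEOUT_MS));
}

static void wait_for_wakeup_or_timeout(void)
{
	/* Open-coded wait: no condition to re-check, so the first
	 * wake_up() on the queue (or the timeout) ends the sleep. */
	DEFINE_WAIT(wait);

	prepare_to_wait(&vblank_queue, &wait, TASK_INTERRUPTIBLE);
	if (!signal_pending(current))
		schedule_timeout(msecs_to_jiffies(TIMEOUT_MS));
	finish_wait(&vblank_queue, &wait);
}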

Regards

> ---
>  drivers/gpu/drm/radeon/radeon_pm.c |   11 ++++++++---
>  1 files changed, 8 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/radeon/radeon_pm.c b/drivers/gpu/drm/radeon/radeon_pm.c
> index a8e151e..520197f 100644
> --- a/drivers/gpu/drm/radeon/radeon_pm.c
> +++ b/drivers/gpu/drm/radeon/radeon_pm.c
> @@ -337,10 +337,15 @@ static void radeon_pm_set_clocks(struct radeon_device *rdev)
>               rdev->pm.req_vblank |= (1 << 1);
>               drm_vblank_get(rdev->ddev, 1);
>       }
> -     if (rdev->pm.active_crtcs)
> -             wait_event_interruptible_timeout(
> +     if (rdev->pm.active_crtcs) {
> +             /* We call __wait_* directly because of the double condition
> +                check, which we do not use. This call is supposed to sleep
> +                until a wake_up happens or a timeout elapses */
> +             long timeout = msecs_to_jiffies(RADEON_WAIT_VBLANK_TIMEOUT);
> +             __wait_event_interruptible_timeout(
>                       rdev->irq.vblank_queue, 0,
> -                     msecs_to_jiffies(RADEON_WAIT_VBLANK_TIMEOUT));
> +                     timeout);
> +     }
>       if (rdev->pm.req_vblank & (1 << 0)) {
>               rdev->pm.req_vblank &= ~(1 << 0);
>               drm_vblank_put(rdev->ddev, 0);
> -- 
> 1.6.4.2

From b44da60bce551b7119b0eb2e521e2e7635b9b98e Mon Sep 17 00:00:00 2001
From: Jaime Velasco Juan <[email protected]>
Date: Mon, 15 Feb 2010 14:50:46 +0000
Subject: [PATCH] radeon/PM Really wait for vblank before reclocking

The old code passed an always-false condition, so it always waited
until the timeout expired.

Signed-off-by: Jaime Velasco Juan <[email protected]>
---
 drivers/gpu/drm/radeon/radeon_pm.c |   16 ++++++++++++----
 1 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_pm.c b/drivers/gpu/drm/radeon/radeon_pm.c
index a8e151e..7d8c5d9 100644
--- a/drivers/gpu/drm/radeon/radeon_pm.c
+++ b/drivers/gpu/drm/radeon/radeon_pm.c
@@ -337,10 +337,18 @@ static void radeon_pm_set_clocks(struct radeon_device *rdev)
 		rdev->pm.req_vblank |= (1 << 1);
 		drm_vblank_get(rdev->ddev, 1);
 	}
-	if (rdev->pm.active_crtcs)
-		wait_event_interruptible_timeout(
-			rdev->irq.vblank_queue, 0,
-			msecs_to_jiffies(RADEON_WAIT_VBLANK_TIMEOUT));
+	if (rdev->pm.active_crtcs) {
+		/* We open-code the wait instead of using the usual
+		   wait_event_interruptible_timeout because there is no
+		   condition to check; we always want to be woken up by
+		   the vblank IRQ handler */
+		DEFINE_WAIT(reclock_wait);
+		prepare_to_wait(&rdev->irq.vblank_queue,
+				&reclock_wait, TASK_INTERRUPTIBLE);
+		if (!signal_pending(current))
+			schedule_timeout(msecs_to_jiffies(RADEON_WAIT_VBLANK_TIMEOUT));
+		finish_wait(&rdev->irq.vblank_queue, &reclock_wait);
+	}
 	if (rdev->pm.req_vblank & (1 << 0)) {
 		rdev->pm.req_vblank &= ~(1 << 0);
 		drm_vblank_put(rdev->ddev, 0);
-- 
1.7.0
