Jerome,
looks good overall. Some comments inline.

Jerome Glisse wrote:
>
> I have no more list corruption, but my tree has a few other fixes.
> It would be nice if you could check that I'm not misunderstanding
> the vm code (patch attached).
>
>
> Cheers,
> Jerome
>   
> ------------------------------------------------------------------------
>  
> +static void ttm_bo_device_print_free_tt_size(struct ttm_bo_device *bdev)
> +{
> +     unsigned max_size = 0;
> +     unsigned free_size = 0;
> +     struct drm_mm_node *entry;
> +     struct list_head *list;
> +     struct list_head *free_stack;
> +
> +     if (!bdev->man[TTM_PL_TT].has_type || !bdev->man[TTM_PL_TT].use_type) {
> +             return;
> +     }
> +
> +     spin_lock(&bdev->lru_lock);
> +     free_stack = &bdev->man[TTM_PL_TT].manager.fl_entry;
> +     list_for_each(list, free_stack) {
> +             entry = list_entry(list, struct drm_mm_node, fl_entry);
> +             if (entry->size > max_size) {
> +                     max_size = entry->size;
> +             }
> +             free_size += entry->size;
> +     }
> +     spin_unlock(&bdev->lru_lock);
> +     max_size = max_size << PAGE_SHIFT;
> +     free_size = free_size << PAGE_SHIFT;
> +     printk(KERN_ERR "Aperture biggest free block %ub (%ukb, %uMb)\n",
> +             max_size, max_size >> 10, max_size >> 20);
> +     printk(KERN_ERR "Aperture total free size %u bytes (%ukb, %uMb)\n",
> +             free_size, free_size >> 10, free_size >> 20);
> +}
> +
>   
Could we put the core of this function in drm_mm.c, so we can use it for 
any memory type?

>  int ttm_buffer_object_validate(struct ttm_buffer_object *bo,
>                              bool interruptible, bool no_wait)
>  {
> @@ -937,12 +973,18 @@
>                                        interruptible, no_wait);
>               if (ret) {
>                       if (ret != -ERESTART)
> -                             printk(KERN_ERR "Failed moving buffer. "
> -                                    "Proposed placement 0x%08x\n",
> +                             printk(KERN_ERR "Failed moving buffer of %lub"
> +                                    "(%lukb, %luMb). Proposed placement "
> +                                    "0x%08x\n",
> +                                    bo->num_pages << PAGE_SHIFT,
> +                                    (bo->num_pages << PAGE_SHIFT) >> 10,
> +                                    (bo->num_pages << PAGE_SHIFT) >> 20,
>                                      bo->mem.proposed_flags);
> -                     if (ret == -ENOMEM)
> +                     if (ret == -ENOMEM) {
>                               printk(KERN_ERR "Out of aperture space or "
>                                      "DRM memory quota.\n");
> +                             ttm_bo_device_print_free_tt_size(bo->bdev);
> +                     }
>                       return ret;
>   
Here, I think we should move the error printout to the function caller.
If we fail here because of fragmentation, the correct action for the
execbuf code would be to restart buffer validation with the ttm lock
held in write mode, evicting all buffers in the relevant memory types
before retrying, and we don't really want any error message until that
second validation also fails.

>  
>  int ttm_fbdev_mmap(struct vm_area_struct *vma, struct ttm_buffer_object *bo)
>   

Looks like the patch generator got a bit confused; this hunk should be
against ttm_bo_io(), right?
Anyway, that function has never been tested, since it appears to confuse
the X server's polling of the drm device node. I was thinking we perhaps
need a separate TTM device node to implement pread / pwrite correctly.
One should also probably take a look at Intel's implementation if
performance of this trivial version turns out to be an issue.


Thanks,
Thomas


--
_______________________________________________
Dri-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/dri-devel
