Quoting jgli...@redhat.com (2018-12-06 15:47:04)
> From: Jérôme Glisse
>
> The debugfs code takes references on fences without dropping them. Also the
> RCU sections are not well balanced. Fix all that ...
Wouldn't the code be a lot simpler (and a consistent snapshot) if it used
reservation_object_get_fences_rcu()?
Quoting Ezequiel Garcia (2018-05-14 22:28:31)
> On Mon, 2018-05-14 at 18:48 +0200, Daniel Vetter wrote:
> > On Fri, May 11, 2018 at 08:27:41AM +0100, Chris Wilson wrote:
> > > Quoting Ezequiel Garcia (2018-05-09 21:14:49)
> > > > Change how dma_fence_add_callback() behaves, when the fence
Quoting Ezequiel Garcia (2018-05-09 21:14:49)
> Change how dma_fence_add_callback() behaves when the fence
> has error-signaled by the time it is being added. After this commit,
> dma_fence_add_callback() returns the fence error, if it
> has error-signaled before dma_fence_add_callback() is called.
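The proposed return-value contract can be sketched with a toy model. Everything here (the struct, the names) is an illustrative stand-in, not the kernel's actual types:

```c
#include <errno.h>
#include <stddef.h>

/* Toy model of the proposed semantics: if the fence already signaled
 * with an error by the time the callback is added, return that error;
 * if it signaled successfully, keep the existing -ENOENT return;
 * otherwise arm the callback and return 0. */
struct toy_fence {
	int signaled;	/* non-zero once the fence has signaled */
	int error;	/* 0 on success, negative errno on failure */
};

typedef void (*toy_cb)(struct toy_fence *f);

static int toy_add_callback(struct toy_fence *f, toy_cb cb)
{
	(void)cb;	/* a real implementation links cb into a list */
	if (f->signaled)
		return f->error ? f->error : -ENOENT;
	return 0;	/* callback armed; it will run on signal */
}

/* helper exercising the three cases */
static int toy_add_callback_demo(int signaled, int error)
{
	struct toy_fence f = { signaled, error };

	return toy_add_callback(&f, NULL);
}
```

The point of the change is that callers no longer need a separate status query to distinguish "already done" from "already failed".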
Quoting Ezequiel Garcia (2018-05-10 13:51:56)
> On Wed, 2018-05-09 at 19:42 -0300, Gustavo Padovan wrote:
> > Hi Ezequiel,
> >
> > On Wed, 2018-05-09 at 17:14 -0300, Ezequiel Garcia wrote:
> > > Change how dma_fence_add_callback() behaves when the fence
> > > has error-signaled by the time it is being added.
Quoting Daniel Vetter (2018-05-03 15:25:52)
> Almost everyone uses dma_fence_default_wait.
>
> v2: Also remove the BUG_ON(!ops->wait) (Chris).
I just don't get the rationale for implicit over explicit.
-Chris
Quoting Daniel Vetter (2018-05-03 15:25:50)
> @@ -560,7 +567,7 @@ dma_fence_init(struct dma_fence *fence, const struct
> dma_fence_ops *ops,
>spinlock_t *lock, u64 context, unsigned seqno)
> {
> BUG_ON(!lock);
> - BUG_ON(!ops || !ops->wait || !ops->enable_signaling |
Quoting Christian König (2018-03-16 14:22:32)
[snip, probably lost too much context]
> This allows for full grown pipelining, e.g. the exporter can say I need
> to move the buffer for some operation. Then let the move operation wait
> for all existing fences in the reservation object and install
Quoting Christian König (2018-03-16 13:20:45)
> @@ -326,6 +338,29 @@ struct dma_buf_attachment {
> struct device *dev;
> struct list_head node;
> void *priv;
> +
> + /**
> +* @invalidate_mappings:
> +*
> +* Optional callback provided by the importer
Quoting Christian König (2017-11-21 15:49:55)
> Am 21.11.2017 um 15:59 schrieb Rob Clark:
> > On Tue, Nov 21, 2017 at 9:38 AM, Chris Wilson
> > wrote:
> >> Quoting Rob Clark (2017-11-21 14:08:46)
> >>> If we are testing if a reservation object's fence
Quoting Rob Clark (2017-11-21 14:08:46)
> If we are testing if a reservation object's fences have been
> signaled with timeout=0 (non-blocking), we need to pass 0 for
> timeout to dma_fence_wait_timeout().
>
> Plus bonus spelling correction.
>
> Signed-off-by: Rob Clark
> ---
> drivers/dma-buf/
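The contract the fix restores can be illustrated with a tiny helper (names ours, not the kernel's): a caller-visible timeout of zero must be forwarded as zero, while a positive timeout may round up so it is not truncated into an accidental busy poll.

```c
/* Illustrative mapping of a caller's timeout to the value handed to a
 * wait function such as dma_fence_wait_timeout(): 0 means "just tell me
 * whether it has signaled right now", so it must never be rounded up
 * into a real sleep. */
static long toy_timeout_to_ticks(long timeout)
{
	if (timeout == 0)
		return 0;	/* non-blocking status query */
	return timeout < 1 ? 1 : timeout;	/* round up, never down to 0 */
}
```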
> "warning: Value stored to 'sg_table' during its initialization is
> never read"
>
> Signed-off-by: Colin Ian King
Reviewed-by: Chris Wilson
-Chris
Quoting Christian König (2017-09-11 10:57:57)
> Am 11.09.2017 um 11:23 schrieb Chris Wilson:
> > Quoting Christian König (2017-09-11 10:06:50)
> >> Am 11.09.2017 um 10:59 schrieb Chris Wilson:
> >>> Quoting Christian König (2017-09-11 09:50:40)
> >>>>
Quoting Christian König (2017-09-11 10:06:50)
> Am 11.09.2017 um 10:59 schrieb Chris Wilson:
> > Quoting Christian König (2017-09-11 09:50:40)
> >> Sorry for the delayed response, but your mail somehow ended up in the
> >> Spam folder.
> >>
> >>
Quoting Christian König (2017-09-11 09:50:40)
> Sorry for the delayed response, but your mail somehow ended up in the
> Spam folder.
>
> Am 04.09.2017 um 15:40 schrieb Chris Wilson:
> > Quoting Christian König (2017-09-04 14:27:33)
> >> From: Christian König
>
Quoting Christian König (2017-09-04 14:27:33)
> From: Christian König
>
> The logic is buggy and unnecessarily complex. When dma_fence_get_rcu() fails to
> acquire a reference it doesn't necessarily mean that there is no fence at all.
>
> It usually means that the fence was replaced by a new one and
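The pattern being described — a failed reference grab usually means the fence was swapped out underneath us, so re-read the slot and retry — can be modelled in userspace with C11 atomics. All names below are ours; the kernel uses rcu_dereference(), dma_fence_get_rcu(), and kref_get_unless_zero():

```c
#include <stdatomic.h>
#include <stddef.h>

struct toy_fence { atomic_int refcount; };
typedef _Atomic(struct toy_fence *) toy_slot;

/* Like kref_get_unless_zero(): take a reference unless the count
 * has already dropped to zero (the fence is being freed). */
static struct toy_fence *toy_get(struct toy_fence *f)
{
	int old = atomic_load(&f->refcount);

	while (old > 0)
		if (atomic_compare_exchange_weak(&f->refcount, &old, old + 1))
			return f;
	return NULL;
}

/* A failed toy_get() does not mean "no fence": the slot was probably
 * updated to point at a new fence meanwhile, so re-read it and retry. */
static struct toy_fence *toy_lookup(toy_slot *slot)
{
	struct toy_fence *f;

	do {
		f = atomic_load(slot);		/* rcu_dereference() */
		if (!f)
			return NULL;		/* genuinely no fence */
	} while (!toy_get(f));			/* replaced under us: retry */
	return f;
}

/* helpers for exercising the two terminating cases */
static int toy_lookup_demo(void)
{
	struct toy_fence f = { 1 };
	toy_slot slot = &f;
	struct toy_fence *got = toy_lookup(&slot);

	return got == &f && atomic_load(&f.refcount) == 2;
}

static int toy_lookup_null_demo(void)
{
	toy_slot slot = NULL;

	return toy_lookup(&slot) == NULL;
}
```

Only the "slot is NULL" case means there is no fence; a dead fence still sitting in the slot is expected to be replaced by the writer, which is what makes the retry loop terminate in practice.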
Quoting Sean Paul (2017-06-28 16:51:11)
> Protect long-running processes from overflowing the timeline
> and creating fences that go back in time. While we're at it, avoid
> overflowing while we're incrementing the timeline.
>
> Signed-off-by: Sean Paul
> ---
> drivers/dma-buf/sw_sync.c
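The overflow guard being described can be sketched as a clamped increment (illustrative only, not the actual sw_sync code):

```c
#include <limits.h>

/* Clamp value + inc at UINT_MAX so a timeline can never wrap around
 * and hand out fences that appear to lie in the past. */
static unsigned int toy_timeline_advance(unsigned int value, unsigned int inc)
{
	if (inc > UINT_MAX - value)
		return UINT_MAX;	/* saturate instead of wrapping */
	return value + inc;
}
```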
On Mon, Dec 19, 2016 at 10:40:41AM +0900, Inki Dae wrote:
>
>
> On 2016-08-16 01:02, Daniel Vetter wrote:
> > On Mon, Aug 15, 2016 at 04:42:18PM +0100, Chris Wilson wrote:
> >> Rendering operations to the dma-buf are tracked implicitly via the
> >> reservati
t is
> wide enough.
>
> Also converts callers which were using unsigned long locally
> with the lower_32_bits annotation to make it explicitly
> clear what is happening.
>
> v2: Use offset_in_page. (Chris Wilson)
>
> Signed-off-by: Tvrtko Ursulin
> Cc: Masahiro Yamada
return -ENOMEM;
Full circle! The whole reason this exists was to avoid that vmalloc. I
don't really want it back...
-Chris
--
Chris Wilson, Intel Open Source Technology Centre
--
To unsubscribe from this list: send the line "unsubscribe linux-media" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Fri, Sep 23, 2016 at 03:50:44PM +0200, Daniel Vetter wrote:
> On Mon, Aug 29, 2016 at 08:08:34AM +0100, Chris Wilson wrote:
> > Currently we install a callback for performing poll on a dma-buf,
> > irrespective of the timeout. This involves taking a spinlock, as well as
>
On Fri, Sep 23, 2016 at 03:49:26PM +0200, Daniel Vetter wrote:
> On Mon, Aug 29, 2016 at 08:08:33AM +0100, Chris Wilson wrote:
> > With the seqlock now extended to cover the lookup of the fence and its
> > testing, we can perform that testing solely under the seqlock guard an
If we are being polled with a timeout of zero, a non-blocking busy query,
we don't need to install any fence callbacks as we will not be waiting.
As we only install the callback once, the overhead comes from the atomic
bit test that also causes serialisation between threads.
Signed-off-by:
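The shortcut being argued for looks roughly like this. This is a userspace sketch with invented names; the real code inspects the reservation object's fences:

```c
#include <stdbool.h>

#define TOY_POLLOUT 0x4		/* stand-in for POLLOUT */

struct toy_buf { bool busy; };

static unsigned int toy_poll(struct toy_buf *b, long timeout)
{
	/* timeout == 0: a non-blocking busy query; report the current
	 * state directly, installing no fence callback at all. */
	if (timeout == 0)
		return b->busy ? 0 : TOY_POLLOUT;

	/* blocking path (elided here): install a callback and wait */
	return b->busy ? 0 : TOY_POLLOUT;
}

/* helper exercising the non-blocking path */
static unsigned int toy_poll_demo(int busy)
{
	struct toy_buf b = { busy != 0 };

	return toy_poll(&b, 0);
}
```

Skipping the callback install on the zero-timeout path avoids the spinlock and the atomic bit test entirely, which is where the quoted scaling numbers come from.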
callback to make the busy-query fast.
Single thread: 60% faster
8 threads on 4 (+4 HT) cores: 600% faster
Still not quite the perfect scaling we get with a native busy ioctl, but
poll(dmabuf) is faster due to the quicker lookup of the object and
avoiding drm_ioctl().
Signed-off-by: Chris Wilson
Cc
: Chris Wilson
Cc: Daniel Vetter
Cc: Sumit Semwal
Cc: linux-media@vger.kernel.org
Cc: dri-de...@lists.freedesktop.org
Cc: linaro-mm-...@lists.linaro.org
---
include/linux/fence.h | 56 ++-
1 file changed, 51 insertions(+), 5 deletions(-)
diff --git
section does not prevent this reallocation; instead
we have to inspect the reservation's seqlock to double-check whether the
fences were reassigned while we were acquiring our reference.
Signed-off-by: Chris Wilson
Cc: Daniel Vetter
Cc: Maarten Lankhorst
Cc: Christian König
Cc: Alex Deuche
test it, the same guarantee that made it safe to acquire the
reference previously. The seqlock tests whether the fence was replaced
as we are testing it, telling us whether or not we can trust the result
(if not, we just repeat the test until stable).
Signed-off-by: Chris Wilson
Cc: Sumit Semwal
Cc
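The seqlock read-retry described above follows the standard pattern; a userspace model (names ours, the kernel uses read_seqbegin()/read_seqretry()):

```c
#include <stdatomic.h>

static _Atomic unsigned int toy_seq;	/* writers bump this to odd, then even */
static int toy_value;			/* the data guarded by toy_seq */

/* Read side of a seqlock: sample the data, then check that no writer
 * ran during the sample (sequence even and unchanged); retry otherwise. */
static int toy_read_stable(void)
{
	unsigned int before, after;
	int v;

	do {
		before = atomic_load(&toy_seq);
		v = toy_value;			/* sample the fences */
		after = atomic_load(&toy_seq);
	} while ((before & 1) || before != after);	/* writer active/raced */
	return v;
}

/* helper: set a value, then read it back through the stable loop */
static int toy_read_demo(int v)
{
	toy_value = v;
	return toy_read_stable();
}
```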
On Sun, Aug 28, 2016 at 09:33:54PM +0100, Chris Wilson wrote:
> On Sun, Aug 28, 2016 at 05:37:47PM +0100, Chris Wilson wrote:
> > Currently we install a callback for performing poll on a dma-buf,
> > irrespective of the timeout. This involves taking a spinlock, as well as
> >
On Sun, Aug 28, 2016 at 05:37:47PM +0100, Chris Wilson wrote:
> Currently we install a callback for performing poll on a dma-buf,
> irrespective of the timeout. This involves taking a spinlock, as well as
> unnecessary work, and greatly reduces scaling of poll(.timeout=0) across
> mult
igt/prime_vgem
Testcase: igt/gem_concurrent_blit # *vgem*
Signed-off-by: Chris Wilson
Cc: Sumit Semwal
Cc: Daniel Vetter
Cc: Eric Anholt
Cc: linux-media@vger.kernel.org
Cc: dri-de...@lists.freedesktop.org
Cc: linaro-mm-...@lists.linaro.org
Cc: linux-ker...@vger.kernel.org
---
drivers/dma-buf/
If we fail to create the anon file, we need to remember to release the
module reference on the owner.
Signed-off-by: Chris Wilson
Reviewed-by: Joonas Lahtinen
Cc: Joonas Lahtinen
Cc: Sumit Semwal
Cc: Daniel Vetter
Cc: linux-media@vger.kernel.org
Cc: dri-de...@lists.freedesktop.org
Cc: linaro
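The leak being fixed is the classic unwind rule: every resource taken before a failure point must be released on that error path. A toy model with a counter standing in for the module refcount (all names invented):

```c
static int toy_module_refs;	/* stands in for the owner module's refcount */

/* Model of the export error path: the module reference taken up front
 * must be dropped again if creating the anon file fails. */
static int toy_export(int anon_file_fails)
{
	toy_module_refs++;		/* try_module_get(owner) */

	if (anon_file_fails) {
		toy_module_refs--;	/* the previously-missing module_put() */
		return -1;
	}
	return 0;			/* success: reference now held by the buf */
}
```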
dma-buf provides interfaces for receiving notifications from DMA
hardware, and for implicitly tracking fences used for rendering into
dma-buf. We want to be able to use these event sources along with kfence
for easy collection and combining with other events.
Signed-off-by: Chris Wilson
Cc
provides a building
block which can be used for determining an order in which tasks can
execute.
Signed-off-by: Chris Wilson
Cc: Sumit Semwal
Cc: Shuah Khan
Cc: Tejun Heo
Cc: Daniel Vetter
Cc: Andrew Morton
Cc: Ingo Molnar
Cc: Kees Cook
Cc: Thomas Gleixner
Cc: "Paul E. McKenney"
A common requirement when scheduling a task is that it should not be
begun until a certain point in time has passed (e.g.
queue_delayed_work()). kfence_await_hrtimer() causes the kfence to
asynchronously wait until after the appropriate time before being woken.
Signed-off-by: Chris Wilson
Cc
async_dependency_get() retrieves a kfence for inspection or waiting
upon.
Signed-off-by: Chris Wilson
Cc: Sumit Semwal
Cc: Shuah Khan
Cc: Tejun Heo
Cc: Daniel Vetter
Cc: Andrew Morton
Cc: Ingo Molnar
Cc: Kees Cook
Cc: Thomas Gleixner
Cc: "Paul E. McKenney"
Cc: Dan Williams
Cc: Andrey Ry
. not be scheduled) until all work queued before the
barrier is completed.
Signed-off-by: Chris Wilson
Cc: Sumit Semwal
Cc: Shuah Khan
Cc: Tejun Heo
Cc: Daniel Vetter
Cc: Andrew Morton
Cc: Ingo Molnar
Cc: Kees Cook
Cc: Thomas Gleixner
Cc: "Paul E. McKenney"
Cc: Dan Williams
scheme based upon
kfences and back every task with one. Any task may now wait upon the
kfence before being scheduled, and equally the kfence may be used to
wait on the task itself (rather than waiting on the cookie for all
previous tasks to be completed).
Signed-off-by: Chris Wilson
Cc: Sumit Semwal
On Wed, Jul 13, 2016 at 11:38:52AM +0200, Peter Zijlstra wrote:
> On Fri, Jun 24, 2016 at 10:08:46AM +0100, Chris Wilson wrote:
> > diff --git a/kernel/async.c b/kernel/async.c
> > index d2edd6efec56..d0bcb7cc4884 100644
> > --- a/kernel/async.c
> > +++ b/kernel
kfence_await_fence() / kfence_add_completion()
set the kfence to wait upon another fence, or a completion, respectively.
Signed-off-by: Chris Wilson
Cc: Sumit Semwal
Cc: Shuah Khan
Cc: Tejun Heo
Cc: Daniel Vetter
Cc: Andrew Morton
Cc: Ingo Molnar
Cc: Kees Cook
Cc: Thomas Gleixner
Cc: "Paul E. McKenney"
Cc: Da
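Across these snippets the kfence behaves like a completion counter: it starts with one pending event (the "submit" reference), each await adds one, and it fires once the count drains to zero. A minimal model, with our names rather than the posted API:

```c
struct toy_kfence {
	int pending;	/* outstanding events, starts at 1 */
	int fired;	/* set once everything awaited has completed */
};

static void toy_kfence_init(struct toy_kfence *k)
{
	k->pending = 1;		/* the submitter's own reference */
	k->fired = 0;
}

static void toy_kfence_await(struct toy_kfence *k)
{
	k->pending++;		/* one more event to wait for */
}

static void toy_kfence_complete(struct toy_kfence *k)
{
	if (--k->pending == 0)
		k->fired = 1;	/* all dependencies done: wake waiters */
}

/* helper: await one external event, signal it, then drop the
 * submit reference; the fence must only fire at the very end */
static int toy_kfence_demo(void)
{
	struct toy_kfence k;

	toy_kfence_init(&k);
	toy_kfence_await(&k);		/* depend on one external event */
	toy_kfence_complete(&k);	/* the external event signals */
	if (k.fired)
		return -1;		/* fired too early: bug */
	toy_kfence_complete(&k);	/* submitter drops its reference */
	return k.fired;
}
```

This counting scheme is what lets a kfence collect dma-buf fences, completions, hrtimers, and other kfences into one wait graph.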
fences from the object and
adds the individual waits for the kfence.
Signed-off-by: Chris Wilson
Cc: Sumit Semwal
Cc: Shuah Khan
Cc: Tejun Heo
Cc: Daniel Vetter
Cc: Andrew Morton
Cc: Ingo Molnar
Cc: Kees Cook
Cc: Thomas Gleixner
Cc: "Paul E. McKenney"
Cc: Dan Williams
Cc: Andre
kfence_add_delay() is a convenience wrapper around
hrtimer_start_range_ns() to provide a time source for a kfence graph.
Signed-off-by: Chris Wilson
Cc: Sumit Semwal
Cc: Shuah Khan
Cc: Tejun Heo
Cc: Daniel Vetter
Cc: Andrew Morton
Cc: Ingo Molnar
Cc: Kees Cook
Cc: Thomas Gleixner
Cc
dma-buf provides an interface for receiving notifications from DMA
hardware. kfence provides a useful interface for collecting such fences
and combining them with other events.
Signed-off-by: Chris Wilson
Cc: Sumit Semwal
Cc: Shuah Khan
Cc: Tejun Heo
Cc: Daniel Vetter
Cc: Andrew Morton
Cc
Signed-off-by: Chris Wilson
Cc: Sumit Semwal
Cc: Shuah Khan
Cc: Tejun Heo
Cc: Daniel Vetter
Cc: Andrew Morton
Cc: Ingo Molnar
Cc: Kees Cook
Cc: Thomas Gleixner
Cc: "Paul E. McKenney"
Cc: Dan Williams
Cc: Andrey Ryabinin
Cc: Davidlohr Bueso
Cc: Nikolay Aleksandrov
Cc: "David
ess, which should include waiting upon rendering.
(Some drivers may need to do more work to ensure that the dma-buf mmap
is coherent as well as complete.)
Signed-off-by: Chris Wilson
Cc: Sumit Semwal
Cc: Daniel Vetter
Cc: linux-media@vger.kernel.org
Cc: dri-de...@lists.freedesktop.org
Cc: linaro
On Wed, Mar 23, 2016 at 04:32:59PM +0100, David Herrmann wrote:
> Hi
>
> On Wed, Mar 23, 2016 at 12:56 PM, Chris Wilson
> wrote:
> > On Wed, Mar 23, 2016 at 12:30:42PM +0100, David Herrmann wrote:
> >> My question was rather about why we do this? Semantics for EINTR
eir flush. There are a few other possible deadlocks that are
also avoided with EAGAIN (again, the issue is more or less the lack of
fine grained locking).
-Chris
c11e391da2a8fe973c3c2398452000bed505851e
Author: Daniel Vetter
Date: Thu Feb 11 20:04:51 2016 -0200
dma-buf: Add ioctls to allow userspace to flush
Testcase: igt/gem_concurrent_blit/*dmabuf*interruptible
Testcase: igt/prime_mmap_coherency/ioctl-errors
Signed-off-by: Chris Wilson
Cc: Tiago Vignatti
Cc
dma_buf->ops->begin_cpu_access() becomes mandatory as at a minimum it
has to point to the default implementation.
-Chris
For example: https://bugs.freedesktop.org/attachment.cgi?id=48933
which doesn't handle flushing of pending updates via the GPU when
writing with the CPU during interrupts (i.e. a panic).
-Chris