Re: Surface ID assignment for multiple instances of an application

2021-06-16 Thread Vaibhav Dalvi
Hi,

Thank you for the input.

The id-agent plugin is not in use at present. This particular system has two
classes of applications:
1. Qt applications - they use the IVI_SURFACE_ID environment variable, set per
application by an init shell script, to assign the surface ID (a rough launch
sketch follows below this list).
2. XOrg applications - these run under Xwayland, and their surface IDs are
provided via weston.ini.

So I should use the id-agent for the XOrg applications to set the surface ID
instead of providing it via weston.ini; that way we keep control of the
surface IDs when running multiple instances. I think this could be a valid
solution. I'll try to figure it out.
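
For reference, a rough sketch of how the assigned IDs can then be inspected at
runtime with the ilm control API from wayland-ivi-extension (header and field
names as I understand them; note that, as mentioned in the original mail below,
creatorPid reports Xwayland's PID for X11 clients, so it only disambiguates the
native Wayland instances):

#include <stdio.h>
#include <stdlib.h>
#include <ilm_common.h>   /* ilm_init/ilm_destroy */
#include <ilm_control.h>  /* ilm_getSurfaceIDs, ilm_getPropertiesOfSurface */

int main(void)
{
    if (ilm_init() != ILM_SUCCESS) {
        fprintf(stderr, "ilm_init failed\n");
        return 1;
    }

    t_ilm_int count = 0;
    t_ilm_surface *ids = NULL;
    if (ilm_getSurfaceIDs(&count, &ids) == ILM_SUCCESS) {
        for (t_ilm_int i = 0; i < count; ++i) {
            struct ilmSurfaceProperties props;
            if (ilm_getPropertiesOfSurface(ids[i], &props) == ILM_SUCCESS)
                printf("surface %lu -> creator pid %ld\n",
                       (unsigned long)ids[i], (long)props.creatorPid);
        }
        free(ids); /* the array is allocated by the library */
    }

    ilm_destroy();
    return 0;
}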

Meanwhile, the multi-instance use case has been dropped for now, so for a
single instance of the XOrg application, providing a fixed surface ID via
weston.ini works fine.

Regards,
Vaibhav


On Wed, Jun 16, 2021 at 1:53 AM Eugen Friedrich  wrote:

> Hi Dalvi,
>
> The ivi id is something the application needs to set explicitly, so I need to
> understand how the id is assigned for the applications you can already
> control. Are you using the id-agent plugin?
>
> Best regards
> Jena
>
> Vaibhav Dalvi  wrote on Fri, 4 Jun 2021 at
> 07:19:
>
>> Hi all,
>> I'm working on a simple C++ application (a D-Bus service) which can
>> start/stop another application (a VNC viewer application). My platform is
>> arm-linux with Wayland + ivi-shell. I am now able to start/stop a single
>> instance of the X11 vncviewer application; this required adding desktop-app
>> entries in weston.ini. I'm using LayerManagerControl commands to manage the
>> window (at present - further on the ilm APIs shall be used).
>>
>> My next use case is multiple instances of this VNC viewer application. For
>> that I added a few more desktop-app entries to weston.ini and updated the
>> service to launch multiple instances of the VNC viewer. I am able to launch
>> multiple instances of vncviewer, but to control those instances I need to
>> know which instance is assigned which surface ID.
>>
>> Another look at the ilm APIs led me to the surface property's creator-PID
>> field. Unfortunately, with Xwayland this field contains the PID of the
>> Xwayland instance for X11 apps, not that of the actual X11 app.
>>
>> I looked at libweston as well as the xwayland-api, but I am not able to see
>> a clear solution. Please suggest any way to obtain this mapping of surface
>> ID to process ID for X11 apps under Xwayland.
>>
>


Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)

2021-06-16 Thread Jason Ekstrand
On Tue, Jun 15, 2021 at 3:41 AM Christian König
 wrote:
>
> Hi Jason & Daniel,
>
> maybe I should explain once more where the problem with this approach is
> and why I think we need to get that fixed before we can do something
> like this here.
>
> To summarize: what this patch does is copy the exclusive fence and/or the
> shared fences into a sync_file. This alone is totally unproblematic.
>
> The problem is what this implies. When you need to copy the exclusive
> fence to a sync_file, it means that the driver is at some point ignoring
> the exclusive fence on a buffer object.

Not necessarily.  Part of the point of this is to allow for CPU waits
on a past point in the buffer's timeline.  Today, we have poll() and
GEM_WAIT, both of which wait for the buffer to be idle from whatever
GPU work is currently happening.  We want to wait on something in the
past and ignore anything happening now.
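
For illustration, roughly how userspace would take such a snapshot with the
export ioctl from this series (a sketch only; it assumes a linux/dma-buf.h
that carries the proposed uAPI, and reuses the READ/WRITE flags from the
existing DMA_BUF_IOCTL_SYNC interface):

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

/* Return a sync_file fd containing only the fences currently on the
 * dma-buf (i.e. work from the past), or -1 on error.  Anything submitted
 * after this call is not part of the snapshot. */
static int snapshot_dmabuf_fences(int dmabuf_fd)
{
    struct dma_buf_export_sync_file args = {
        .flags = DMA_BUF_SYNC_READ | DMA_BUF_SYNC_WRITE,
        .fd = -1,
    };

    if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &args) < 0) {
        perror("DMA_BUF_IOCTL_EXPORT_SYNC_FILE");
        return -1;
    }
    return args.fd; /* check/wait on this later instead of CPU-waiting now */
}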

But, to the broader point, maybe?  I'm a little fuzzy on exactly where
i915 inserts and/or depends on fences.

> When you combine that with complex drivers which use TTM and buffer
> moves underneath, you can construct an information leak and give
> userspace access to memory which is allocated to the driver but not
> yet initialized.
>
> This way you can leak things like page tables, passwords, kernel data,
> etc. in large amounts to userspace, which is an absolute no-go for
> security.

Ugh...  Unfortunately, I'm really out of my depth on the implications
going on here but I think I see your point.

> That's why I said we need to get this fixed before we upstream this
> patch set, and especially the driver change which is using it.

Well, i915 has had uAPI for a while to ignore fences.  Those changes
are years in the past.  If we have a real problem here (not sure on
that yet), then we'll have to figure out how to fix it without nuking
uAPI.

--Jason


> Regards,
> Christian.
>
> Am 10.06.21 um 23:09 schrieb Jason Ekstrand:
> > Modern userspace APIs like Vulkan are built on an explicit
> > synchronization model.  This doesn't always play nicely with the
> > implicit synchronization used in the kernel and assumed by X11 and
> > Wayland.  The client -> compositor half of the synchronization isn't too
> > bad, at least on intel, because we can control whether or not i915
> > synchronizes on the buffer and whether or not it's considered written.
> >
> > The harder part is the compositor -> client synchronization when we get
> > the buffer back from the compositor.  We're required to be able to
> > provide the client with a VkSemaphore and VkFence representing the point
> > in time where the window system (compositor and/or display) finished
> > using the buffer.  With current APIs, it's very hard to do this in such
> > a way that we don't get confused by the Vulkan driver's access of the
> > buffer.  In particular, once we tell the kernel that we're rendering to
> > the buffer again, any CPU waits on the buffer or GPU dependencies will
> > wait on some of the client rendering and not just the compositor.
> >
> > This new IOCTL solves this problem by allowing us to get a snapshot of
> > the implicit synchronization state of a given dma-buf in the form of a
> > sync file.  It's effectively the same as a poll() or I915_GEM_WAIT, only
> > instead of CPU-waiting directly, it encapsulates the wait operation, at
> > the current moment in time, in a sync_file so we can check/wait on it
> > later.  As long as the Vulkan driver does the sync_file export from the
> > dma-buf before we re-introduce it for rendering, it will only contain
> > fences from the compositor or display.  This allows us to accurately turn
> > it into a VkFence or VkSemaphore without any over-synchronization.
> >
> > This patch series actually contains two new ioctls.  There is the export
> > one mentioned above as well as an RFC for an import ioctl which provides
> > the other half.  The intention is to land the export ioctl since it seems
> > like there's no real disagreement on that one.  The import ioctl, however,
> > has a lot of debate around it so it's intended to be RFC-only for now.
> >
> > Mesa MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4037
> > IGT tests: https://patchwork.freedesktop.org/series/90490/
> >
> > v10 (Jason Ekstrand, Daniel Vetter):
> >   - Add reviews/acks
> >   - Add a patch to rename _rcu to _unlocked
> >   - Split things better so import is clearly RFC status
> >
> > v11 (Daniel Vetter):
> >   - Add more CCs to try and get maintainers
> >   - Add a patch to document DMA_BUF_IOCTL_SYNC
> >   - Generally better docs
> >   - Use separate structs for import/export (easier to document)
> >   - Fix an issue in the import patch
> >
> > v12 (Daniel Vetter):
> >   - Better docs for DMA_BUF_IOCTL_SYNC
> >
> > v12 (Christian König):
> >   - Drop the rename patch in favor of Christian's series
> >   - Add a comment to the commit message for the dma-buf sync_file export
> > ioctl saying w