On 13-10-28 08:18 AM, Axel Davy wrote:
On Mon, 28 Oct 2013, Pekka Paalanen wrote:


The only immediate effect I could see for the protocol proposal is
to replace the frequency field in a "monitor refresh rate changed"
event with a min and max duration, or whatever could actually
describe how GSYNC affects timings.

I don't understand in which situations you would need to know this refresh rate. Again, I advocate doing the same thing as the X Present extension: the ability to ask for the frame to hit the screen at a specific ust 'if possible', and to get feedback on the ust at which it actually did hit the screen. If a client wants to estimate the refresh rate, it can.

It's because client-chosen target times/timestamps *will not* be the times at which the frames are actually presented on screen, because of the concept of scanout. So yes, we understand that clients can estimate the refresh rate from this set of timestamps, and that's exactly what we want to do in the stable video case. But here Pekka is talking about cases where the display refresh rate suddenly changes (to avoid frame stutter, in the GSYNC slaved-rate case, or simply when the mode changes for some internal reason, etc.). Do you really want to let your clients spend many frame cycles adapting to the new rate, shooting their frames at times that completely miss the scanout windows, even though the compositor knows the exact details of the refresh rate switch? For this reason, I think we should remember the picture/time-perfect goal initially targeted in Wayland and try to convey this information back to clients.
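
To make that concrete, here is a rough client-side sketch in C of what the "estimate it yourself" approach costs. None of the names below (on_presented, on_refresh_changed, the min/max durations) come from an existing Wayland interface; they are placeholders to show why an explicit rate-change notification lets a client retarget immediately instead of re-converging over several frames:

/* Hypothetical client-side refresh estimator -- none of these names are
 * from an existing Wayland interface; they only illustrate the argument. */

#include <stdint.h>
#include <string.h>

#define NSAMPLES 8

struct refresh_estimator {
    uint64_t last_ust;          /* ust of the previous presented frame */
    uint64_t deltas[NSAMPLES];  /* recent frame-to-frame durations */
    int count;
};

/* Called from the (hypothetical) "presented" feedback event. */
static void on_presented(struct refresh_estimator *e, uint64_t ust)
{
    if (e->last_ust) {
        e->deltas[e->count % NSAMPLES] = ust - e->last_ust;
        e->count++;
    }
    e->last_ust = ust;
}

/* Best current guess of the refresh duration, 0 if unknown so far. */
static uint64_t estimated_refresh(const struct refresh_estimator *e)
{
    int n = e->count < NSAMPLES ? e->count : NSAMPLES;
    uint64_t sum = 0;
    for (int i = 0; i < n; i++)
        sum += e->deltas[i];
    return n ? sum / n : 0;
}

/* Called from a (hypothetical) "refresh changed" event carrying the
 * min/max scanout durations Pekka mentioned.  With this, the client does
 * not have to burn NSAMPLES frames re-converging after a mode switch or a
 * GSYNC transition; it can retarget its queued frames immediately. */
static void on_refresh_changed(struct refresh_estimator *e,
                               uint64_t min_duration, uint64_t max_duration)
{
    memset(e, 0, sizeof(*e));
    e->deltas[0] = (min_duration + max_duration) / 2; /* seed the estimate */
    e->count = 1;
}

Without something like on_refresh_changed, the estimator only recovers once its sample window has been refilled, which is exactly the multi-frame stutter window described above.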
I also expect video player timing algorithms to need to actually
take GSYNC into account, or they might become unstable as there is
no constant refresh rate. At least they would need to know about
GSYNC to take advantage of it.


The best thing for video players is the ability to ask for the frame to be shown at a specific ust and to get feedback on the ust at which it actually hit the screen. Then they don't have to care much about the refresh rate.
I don't think he implied that player app clients *had* to know refresh rates in the GSYNC case, because, well, there is no refresh rate. I think he meant (and that's what I'd say too) that client apps *should* be informed of the presence of GSYNC, or at least should be told: "Hey, we're in slaved scanout mode now, you don't have to take expected scanout times into account, I'll scan your buffers out whenever you present them to me... cheers!". One simple gain from having this kind of information at the very beginning of a video playback is that the player client can simply shoot/queue its buffers at the expected target times without taking scanout times into account. In that case we get a perfectly synchronized video start. If your app does *not* know about GSYNC, it will probably lose some cycles at the beginning trying to guess the current refresh rate... only to realize after some time that the refresh rate is slaved to the source video frame rate.
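
As a rough illustration (again with made-up names: queue_buffer, the slaved_scanout flag and the vblank prediction are assumptions, not an existing API), a player's queuing decision could look like this once it knows whether scanout is slaved:

/* Sketch of a player's frame-queuing decision, assuming a hypothetical
 * queue_buffer(surface, buffer, target_ust) request and a compositor
 * notification telling us whether scanout is slaved (GSYNC-style). */

#include <stdbool.h>
#include <stdint.h>

struct frame { void *buffer; uint64_t pts_ust; };

extern void queue_buffer(void *surface, void *buffer, uint64_t target_ust);

static void queue_frame(void *surface, const struct frame *f,
                        bool slaved_scanout,
                        uint64_t next_vblank_ust, uint64_t refresh_duration)
{
    uint64_t target = f->pts_ust;

    if (!slaved_scanout && refresh_duration) {
        /* Fixed-rate output: snap the target to the scanout grid so we do
         * not queue for a time that falls mid-refresh and gets delayed. */
        if (target > next_vblank_ust) {
            uint64_t n = (target - next_vblank_ust + refresh_duration / 2)
                         / refresh_duration;
            target = next_vblank_ust + n * refresh_duration;
        } else {
            target = next_vblank_ust;
        }
    }
    /* Slaved scanout: the compositor scans the buffer out when we present
     * it, so the content timestamp itself is the right target from the
     * very first frame -- no refresh-rate guessing phase needed. */

    queue_buffer(surface, f->buffer, target);
}

The point is only that the slaved branch needs no refresh-rate guessing at all, which is what makes the start of playback exact from the first queued frame.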

IOW, I'm not sure it's worth to design support for GSYNC at the
moment. I am tempted to say let's leave it for another
extension (should not be too hard or invasive to add later, as the
only thing changing is the concept of output refresh rate, right?),
and wait for the hardware to be widely adopted. Oh, and also driver
support?

I see no real issue with supporting GSYNC.

I don't know about current/future driver support for this new GSYNC technology... but you know what, I definitely agree with Pekka: we should get this protocol and the basic buffer queuing implementation reviewed and working for the general case now, and add HW-specific extensions later.

Axel