On 02/08/2018 03:55 PM, Michel Dänzer wrote:
On 2018-02-08 12:14 PM, Mario Kleiner wrote:
As it turns out, doing so will make any gamma table updates
silently fail, because xf86HandleColormaps() hooks the
.LoadPalette function up to xf86RandR12LoadPalette() if the
.gamma_set function is supported by the ddx, as it is in our
case.

Once xf86RandR12LoadPalette() has been called during server
startup, all palette and/or gamma table updates go through
xf86RandR12CrtcComputeGamma(), which combines color palette
updates with gamma table updates into a proper hardware lut
for upload into the hw via the .gamma_set function/ioctl,
passing in a palette with palette_red/green/blue_size == the
size given by the visual's red/green/blueMask, which will
be == 1024 for a depth 30 screen.
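
(For reference, a rough illustration of where that 1024 comes from:
for the usual depth 30 / x2r10g10b10 layout each channel mask covers
10 bits, so deriving the per-channel palette size from the mask gives
1024 entries. The mask/offset values below are illustrative, not taken
from any particular driver.)

    /* Sketch: per-channel palette size as derived from a channel mask,
     * assuming the common depth 30 x2r10g10b10 layout. */
    #include <stdio.h>

    int main(void)
    {
        unsigned int red_mask = 0x3ff00000;               /* 10 bits */
        int red_offset = 20;
        int palette_red_size = (red_mask >> red_offset) + 1;

        printf("palette_red_size = %d\n", palette_red_size); /* 1024 */
        return 0;
    }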

That in turn is a problem, because the size of the hw lut,
crtc->gamma_size, is fixed to 256 slots on all KMS drivers
when using the legacy gamma_set ioctl, but
xf86RandR12CrtcComputeGamma() can only handle palettes of a
size <= the hw lut size. As the palette size of 1024 is greater
than the hw lut size of 256, the code silently fails
(gamma_slots == 0 in xf86RandR12CrtcComputeGamma()).
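
(To make the failure mode concrete, here is a minimal sketch of the
expansion logic as I understand it, not the literal server code: the
slots-per-palette-entry count comes from an integer division, so a
1024-entry palette against a 256-slot hw lut gives gamma_slots == 0
and the copy loop never writes a single slot.)

    /* Sketch of the palette -> hw lut expansion, assuming the same
     * integer-division scheme as xf86RandR12CrtcComputeGamma(). */
    void compute_gamma_sketch(unsigned short *hw_lut, int gamma_size,
                              const unsigned short *palette, int palette_size)
    {
        int gamma_slots = gamma_size / palette_size; /* 256 / 1024 == 0 */
        int i, j;

        for (i = 0; i < palette_size; i++)
            for (j = 0; j < gamma_slots; j++)        /* never entered */
                hw_lut[i * gamma_slots + j] = palette[i];
    }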

Skipping xf86HandleColormaps() on a depth > 24 screen disables
color palette handling, but at least keeps gamma table updates
via the xf86vidmode extension and RandR working.
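
(In patch form the workaround is essentially just gating the colormap
setup on the screen depth, something along these lines in the ddx;
load_palette stands for the driver's LoadPalette hook and the other
xf86HandleColormaps() arguments are only typical placeholders:)

    /* Sketch of the workaround: only wire up palette handling for
     * depth <= 24 screens; argument values are illustrative. */
    if (pScrn->depth <= 24) {
        if (!xf86HandleColormaps(pScreen, 256, 8, load_palette, NULL,
                                 CMAP_PALETTED_TRUECOLOR |
                                 CMAP_RELOAD_ON_MODE_SWITCH))
            return FALSE;
    }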

Sort of... It means xf86VidMode and RandR call directly into the driver
to set their LUTs, with no coordination between them, so whichever calls
into the driver last wins and clobbers the HW LUT.


Wouldn't that be desirable behavior in this case, assuming the client that calls last is the one that wants something more specific? I certainly wouldn't want two clients to compute separate gamma correction LUTs, one submitting via xf86VidMode, the other via RandR, and then have those two concatenated/combined into one definitely wrong gamma correction LUT. Each single LUT would be a better/more correct choice than that. Or maybe I misunderstand your statement?

It would be better to fix xf86RandR12CrtcComputeGamma instead.



But how? The current code can upsample an input palette smaller than the hw lut by replicating input values onto multiple hw slots, which makes perfect sense. But downsampling a bigger input palette, e.g. by just picking every n'th slot, seems problematic to me. You lose information from the palette in a way that might not be at all what the application wanted if it went to the trouble of creating a custom palette, assuming any apps do anything meaningful with palettes on a depth 30 screen in the first place. Unless the application just uses the default palette or a TrueColor visual, as probably most do nowadays, which I assume would be an identity mapping, so it wouldn't matter whether a palette is applied at all?
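
(For what it's worth, the obvious "pick every n'th slot" downsampling I
mean above would look roughly like the sketch below, and that is exactly
where information from a custom palette gets thrown away: with a
1024-entry palette and a 256-slot hw lut, three of every four palette
entries are simply dropped.)

    /* Sketch of naive downsampling when palette_size > gamma_size:
     * keep every n'th palette entry, drop the rest. */
    void downsample_sketch(unsigned short *hw_lut, int gamma_size,
                           const unsigned short *palette, int palette_size)
    {
        int step = palette_size / gamma_size;   /* e.g. 1024 / 256 == 4 */
        int i;

        for (i = 0; i < gamma_size; i++)
            hw_lut[i] = palette[i * step];
    }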

Btw., I'm not at the devel machine atm. and not looking at the code, and looking at all the server's gamma/palette code makes me dizzy quickly, so maybe I'm just misunderstanding something.

-mario

