Re: Steam Deck integrated display: saturation boosting or reduction?

2023-11-04 Thread Joshua Ashton

Hello,

The existing behaviour before any of our colour work was that the native 
display's primaries were used for SDR content. (I.e. just scanning out 
the game's buffer directly.)


Games are not submitting us any primaries for the buffers they are sending.
I mean they are saying they are sRGB so "technically 709", but 
colorimetry for SDR content (outside of mastering) is very wishy-washy.


Deck Display Info:

static constexpr displaycolorimetry_t displaycolorimetry_steamdeck_spec
{
	.primaries = { { 0.602f, 0.355f }, { 0.340f, 0.574f }, { 0.164f, 0.121f } },
	.white = { 0.3070f, 0.3220f },  // not D65
};

static constexpr displaycolorimetry_t displaycolorimetry_steamdeck_measured
{
	.primaries = { { 0.603f, 0.349f }, { 0.335f, 0.571f }, { 0.163f, 0.115f } },
	.white = { 0.296f, 0.307f },  // not D65
};

https://github.com/ValveSoftware/gamescope/blob/master/src/color_helpers.h#L451

For the rest of this, consider displaycolorimetry_steamdeck_measured to 
be what we use for the internal display.


To improve the rendering of content on the Deck's internal display, with 
its modest gamut, we map from the display's native primaries (sub-709) to 
a target somewhere between those native primaries (0.0) and a hypothetical 
wider gamut display (1.0) that we made up.


The hypothetical display's primaries were chosen based on what made 
content look appealing:

static constexpr displaycolorimetry_t displaycolorimetry_widegamutgeneric
{
	.primaries = { { 0.6825f, 0.3165f }, { 0.241f, 0.719f }, { 0.138f, 0.050f } },
	.white = { 0.3127f, 0.3290f },  // D65
};

We have a single knob for this: in code it's "SDR Gamut Wideness", but 
it's known in the UI as "Color Vibrance". It's the knob that picks the 
target color gamut that gets mapped to the native display.


This is how that single value is used to pick the target primaries:

https://github.com/ValveSoftware/gamescope/blob/master/src/color_helpers.cpp#L798
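
For intuition, here's a rough sketch of that mapping. This is illustrative 
only, not the actual gamescope code; the types and names below are 
simplified stand-ins:

struct chroma_t { float x, y; };
struct colorimetry_t { chroma_t r, g, b, white; };

static chroma_t lerp_chroma(const chroma_t &a, const chroma_t &b, float t)
{
	return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t };
}

// 0.0 -> target is the native panel (mapping is a no-op),
// 1.0 -> target is the hypothetical wide-gamut display.
static colorimetry_t pick_target_colorimetry(const colorimetry_t &native,
	const colorimetry_t &wideGamutGeneric, float sdrGamutWideness)
{
	return {
		lerp_chroma(native.r, wideGamutGeneric.r, sdrGamutWideness),
		lerp_chroma(native.g, wideGamutGeneric.g, sdrGamutWideness),
		lerp_chroma(native.b, wideGamutGeneric.b, sdrGamutWideness),
		lerp_chroma(native.white, wideGamutGeneric.white, sdrGamutWideness),
	};
}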

We then use the result there to do a simple saturation fit based on the 
knob and some additional parameters that control how we interpolate.

(blendEnableMinSat, blendEnableMaxSat, blendAmountMin, blendAmountMax)

Those parameters also change with the SDR Gamut Wideness value, based on 
things that "look nice". :P


https://github.com/ValveSoftware/gamescope/blob/master/src/color_helpers.cpp#L769
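
As a purely hypothetical sketch of how such a saturation-dependent blend 
could look (the real curve shapes and how the result is applied are in the 
linked code):

#include <algorithm>

// Hypothetical: ramp the blend amount from blendAmountMin to blendAmountMax
// as a colour's saturation moves from blendEnableMinSat to blendEnableMaxSat.
static float saturation_blend_amount(float sat,  // 0 = neutral, 1 = fully saturated
	float blendEnableMinSat, float blendEnableMaxSat,
	float blendAmountMin, float blendAmountMax)
{
	float t = (sat - blendEnableMinSat) / (blendEnableMaxSat - blendEnableMinSat);
	t = std::clamp(t, 0.0f, 1.0f);
	return blendAmountMin + (blendAmountMax - blendAmountMin) * t;
}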

We also do some other things, like Bradford chromatic adaptation, to 
correct the slightly-off whitepoint.
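
For reference, computing a Bradford adaptation matrix from the measured 
panel white to D65 looks roughly like this. The Bradford constants are the 
standard ones; the matrix helpers are illustrative, not gamescope's code:

#include <array>

using Mat3 = std::array<std::array<double, 3>, 3>;
using Vec3 = std::array<double, 3>;

// Standard Bradford cone-response matrix and its inverse.
static const Mat3 kBradford = {{
	{  0.8951,  0.2664, -0.1614 },
	{ -0.7502,  1.7135,  0.0367 },
	{  0.0389, -0.0685,  1.0296 },
}};
static const Mat3 kBradfordInv = {{
	{  0.9869929, -0.1470543,  0.1599627 },
	{  0.4323053,  0.5183603,  0.0492912 },
	{ -0.0085287,  0.0400428,  0.9684867 },
}};

static Vec3 mul(const Mat3 &m, const Vec3 &v)
{
	return { m[0][0]*v[0] + m[0][1]*v[1] + m[0][2]*v[2],
	         m[1][0]*v[0] + m[1][1]*v[1] + m[1][2]*v[2],
	         m[2][0]*v[0] + m[2][1]*v[1] + m[2][2]*v[2] };
}

static Mat3 mul(const Mat3 &a, const Mat3 &b)
{
	Mat3 r{};
	for (int i = 0; i < 3; i++)
		for (int j = 0; j < 3; j++)
			for (int k = 0; k < 3; k++)
				r[i][j] += a[i][k] * b[k][j];
	return r;
}

// xy chromaticity -> XYZ with Y = 1.
static Vec3 xy_to_XYZ(double x, double y)
{
	return { x / y, 1.0, (1.0 - x - y) / y };
}

// XYZ-space matrix that adapts srcWhite to dstWhite,
// e.g. the measured panel white (0.296, 0.307) to D65 (0.3127, 0.3290).
Mat3 bradford_adaptation(double srcX, double srcY, double dstX, double dstY)
{
	Vec3 srcCone = mul(kBradford, xy_to_XYZ(srcX, srcY));
	Vec3 dstCone = mul(kBradford, xy_to_XYZ(dstX, dstY));
	Mat3 scale = {{ { dstCone[0] / srcCone[0], 0, 0 },
	                { 0, dstCone[1] / srcCone[1], 0 },
	                { 0, 0, dstCone[2] / srcCone[2] } }};
	return mul(kBradfordInv, mul(scale, kBradford));
}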


We use all of this to generate a 3D LUT containing the saturation fit and 
chromatic adaptation, and we apply it at scanout time as a Shaper + 3D LUT.

(We also have a shader based fallback path)
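
Conceptually, the scanout-time application looks something like the sketch 
below. This is simplified, not the actual implementation; in particular, 
real hardware interpolates between 3D LUT entries rather than doing a 
nearest lookup:

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// The shaper is a per-channel 1D LUT that redistributes the input range;
// the 3D LUT holds the saturation fit and chromatic adaptation.
struct rgb_t { float r, g, b; };

static float sample_1d(const std::vector<float> &lut, float v)
{
	float f = v * (lut.size() - 1);
	size_t i0 = static_cast<size_t>(f);
	size_t i1 = std::min(i0 + 1, lut.size() - 1);
	float t = f - i0;
	return lut[i0] * (1.0f - t) + lut[i1] * t;
}

static rgb_t apply_shaper_and_3dlut(rgb_t in,
	const std::vector<float> &shaper,  // shared across channels here for brevity
	const std::vector<rgb_t> &lut3d,   // edge*edge*edge entries, R fastest
	size_t edge)
{
	// Shaper: move each channel into the 3D LUT's index space.
	float r = sample_1d(shaper, in.r);
	float g = sample_1d(shaper, in.g);
	float b = sample_1d(shaper, in.b);

	// 3D LUT: nearest-neighbour lookup for brevity; real hardware
	// interpolates (trilinearly or tetrahedrally) between entries.
	size_t ri = static_cast<size_t>(std::lround(r * (edge - 1)));
	size_t gi = static_cast<size_t>(std::lround(g * (edge - 1)));
	size_t bi = static_cast<size_t>(std::lround(b * (edge - 1)));
	return lut3d[(bi * edge + gi) * edge + ri];
}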

The goal of all of this work is less 'color accuracy' and more 'making 
the display more in line with consumer expectations'.
We wanted to make the display appear much more 'vivid' and colourful 
without introducing horrible clipping.


We also use this same logic for wider gamut displays (where 0.0 = sRGB 
and 1.0 = native) and for SDR content on HDR.


Hope this helps!

- Joshie 🐸✨

On 11/3/23 13:00, Pekka Paalanen wrote:

This is a continuation of
https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/14#note_2152254
because this is off-topic in that thread.


No, we did widening. The Deck's internal display has a modest gamut
that is < 71% sRGB.


If games do wide (well, full sRGB or wider) gamut, then why would you
need to make that gamut even wider to fit nicely into a significantly
smaller gamut display?

Here's what I think happened.

You have a game that produces saturation up to P3, let's say. When you
did the colorimetrically correct matrix conversion (CTM) from BT.2020
to the "modest gamut", you found out that it is horribly clipping
colors, right?

If you then removed that CTM, it means that you are
re-interpreting BT.2020 RGB encoding *as if* it was "modest gamut" RGB
encoding. This happens if you simply apply the input image EOTF and
then apply the display inverse-EOTF and do nothing to the color gamut
in between. Adjusting dynamic range does not count here. This is an
extreme case of saturation reduction.

(Note: Doing nothing to the numbers amounts to applying a major semantic
operation. It is like telling someone a length in cm and they take that
number in mm instead. Or metric vs. imperial units. Color space
primaries and white point define the units for RGB values, and if you
have other RGB values, they are not comparable without the proper CTM
conversion.)
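
(For concreteness, a compact sketch of how that proper CTM is built from
each space's primaries and white point; the helper names are illustrative:

#include <array>

// CTM between two RGB encodings: build each side's RGB->XYZ matrix from
// its primaries and white point, then CTM = inverse(M_dst) * M_src.
using Mat3 = std::array<std::array<double, 3>, 3>;
using Vec3 = std::array<double, 3>;
struct chroma { double x, y; };

static Vec3 xy_to_XYZ(chroma c) { return { c.x / c.y, 1.0, (1.0 - c.x - c.y) / c.y }; }

static Vec3 mul(const Mat3 &m, const Vec3 &v)
{
	return { m[0][0]*v[0] + m[0][1]*v[1] + m[0][2]*v[2],
	         m[1][0]*v[0] + m[1][1]*v[1] + m[1][2]*v[2],
	         m[2][0]*v[0] + m[2][1]*v[1] + m[2][2]*v[2] };
}

static Mat3 mul(const Mat3 &a, const Mat3 &b)
{
	Mat3 r{};
	for (int i = 0; i < 3; i++)
		for (int j = 0; j < 3; j++)
			for (int k = 0; k < 3; k++)
				r[i][j] += a[i][k] * b[k][j];
	return r;
}

static Mat3 inverse(const Mat3 &m)
{
	double det = m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
	           - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
	           + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]);
	return {{
		{  (m[1][1]*m[2][2] - m[1][2]*m[2][1]) / det,
		  -(m[0][1]*m[2][2] - m[0][2]*m[2][1]) / det,
		   (m[0][1]*m[1][2] - m[0][2]*m[1][1]) / det },
		{ -(m[1][0]*m[2][2] - m[1][2]*m[2][0]) / det,
		   (m[0][0]*m[2][2] - m[0][2]*m[2][0]) / det,
		  -(m[0][0]*m[1][2] - m[0][2]*m[1][0]) / det },
		{  (m[1][0]*m[2][1] - m[1][1]*m[2][0]) / det,
		  -(m[0][0]*m[2][1] - m[0][1]*m[2][0]) / det,
		   (m[0][0]*m[1][1] - m[0][1]*m[1][0]) / det },
	}};
}

// RGB->XYZ from primaries + white point: scale each primary's XYZ so the
// three columns sum to the white point's XYZ.
static Mat3 rgb_to_xyz(chroma r, chroma g, chroma b, chroma w)
{
	Vec3 R = xy_to_XYZ(r), G = xy_to_XYZ(g), B = xy_to_XYZ(b);
	Mat3 P = {{ { R[0], G[0], B[0] },
	            { R[1], G[1], B[1] },
	            { R[2], G[2], B[2] } }};
	Vec3 S = mul(inverse(P), xy_to_XYZ(w));
	Mat3 M{};
	for (int i = 0; i < 3; i++) {
		M[i][0] = P[i][0] * S[0];
		M[i][1] = P[i][1] * S[1];
		M[i][2] = P[i][2] * S[2];
	}
	return M;
}

// Usage: ctm = mul(inverse(rgb_to_xyz(dstR, dstG, dstB, dstW)),
//                  rgb_to_xyz(srcR, srcG, srcB, srcW));
// Skipping this CTM re-interprets source RGB as destination RGB, which is
// the massive saturation change described above.)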

That does not look good either, so after that re-interpretation you
added saturation boosting that nicely makes use of the capabilities of
the integrated display's "modest gamut" so that the image looks more
"vibrant" and less de-saturated. However, the total effect is still
saturation reduction, because the re-interpretation of the game content
RGB values is such a massive saturation reduction that your boosting
does not overcome it.

I could make up an analogue: Someone says they are making all sticks
50% longer than 

Re: [RFC PATCH v2 06/17] drm/doc/rfc: Describe why prescriptive color pipeline is needed

2023-11-04 Thread Christopher Braga
Just want to loop back to before we branched off deeper into the 
programming performance talk.


On 10/26/2023 3:25 PM, Alex Goins wrote:

On Thu, 26 Oct 2023, Sebastian Wick wrote:


On Thu, Oct 26, 2023 at 11:57:47AM +0300, Pekka Paalanen wrote:

On Wed, 25 Oct 2023 15:16:08 -0500 (CDT)
Alex Goins  wrote:


Thank you Harry and all other contributors for your work on this. Responses
inline -

On Mon, 23 Oct 2023, Pekka Paalanen wrote:


On Fri, 20 Oct 2023 11:23:28 -0400
Harry Wentland  wrote:


On 2023-10-20 10:57, Pekka Paalanen wrote:

On Fri, 20 Oct 2023 16:22:56 +0200
Sebastian Wick  wrote:


Thanks for continuing to work on this!

On Thu, Oct 19, 2023 at 05:21:22PM -0400, Harry Wentland wrote:

v2:
  - Update colorop visualizations to match reality (Sebastian, Alex Hung)
  - Updated wording (Pekka)
  - Change BYPASS wording to make it non-mandatory (Sebastian)
  - Drop cover-letter-like paragraph from COLOR_PIPELINE Plane Property
section (Pekka)
  - Use PQ EOTF instead of its inverse in Pipeline Programming example (Melissa)
  - Add "Driver Implementer's Guide" section (Pekka)
  - Add "Driver Forward/Backward Compatibility" section (Sebastian, Pekka)


...


+An example of a drm_colorop object might look like one of these::
+
+/* 1D enumerated curve */
+Color operation 42
+β”œβ”€ "TYPE": immutable enum {1D enumerated curve, 1D LUT, 3x3 matrix, 3x4 matrix, 3D LUT, etc.} = 1D enumerated curve
+β”œβ”€ "BYPASS": bool {true, false}
+β”œβ”€ "CURVE_1D_TYPE": enum {sRGB EOTF, sRGB inverse EOTF, PQ EOTF, PQ inverse EOTF, …}
+└─ "NEXT": immutable color operation ID = 43


I know these are just examples, but I would also like to suggest the possibility
of an "identity" CURVE_1D_TYPE. BYPASS = true might get different results
compared to setting an identity in some cases depending on the hardware. See
below for more on this, RE: implicit format conversions.

Although NVIDIA hardware doesn't use a ROM for enumerated curves, it came up in
offline discussions that it would nonetheless be helpful to expose enumerated
curves in order to hide the vendor-specific complexities of programming
segmented LUTs from clients. In that case, we would simply refer to the
enumerated curve when calculating/choosing segmented LUT entries.


That's a good idea.


Another thing that came up in offline discussions is that we could use multiple
color operations to program a single operation in hardware. As I understand it,
AMD has a ROM-defined LUT, followed by a custom 4K entry LUT, followed by an
"HDR Multiplier". On NVIDIA we don't have these as separate hardware stages, but
we could combine them into a singular LUT in software, such that you can combine
e.g. segmented PQ EOTF with night light. One caveat is that you will lose
precision from the custom LUT where it overlaps with the linear section of the
enumerated curve, but that is unavoidable and shouldn't be an issue in most
use-cases.
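
A rough sketch of that software combination is below. The PQ constants are 
the standard SMPTE ST 2084 ones; the helper names, LUT layout, and ordering 
are purely illustrative:

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// PQ EOTF, normalised so an input of 1.0 maps to 1.0 (i.e. 10,000 nits).
static double pq_eotf(double n)
{
	const double m1 = 2610.0 / 16384.0;
	const double m2 = 2523.0 / 4096.0 * 128.0;
	const double c1 = 3424.0 / 4096.0;
	const double c2 = 2413.0 / 4096.0 * 32.0;
	const double c3 = 2392.0 / 4096.0 * 32.0;
	double p = std::pow(n, 1.0 / m2);
	return std::pow(std::max(p - c1, 0.0) / (c2 - c3 * p), 1.0 / m1);
}

static double sample_lut(const std::vector<double> &lut, double v)
{
	double f = v * (lut.size() - 1);
	size_t i0 = static_cast<size_t>(f);
	size_t i1 = std::min(i0 + 1, lut.size() - 1);
	return lut[i0] + (lut[i1] - lut[i0]) * (f - i0);
}

// One combined LUT: enumerated PQ EOTF, then the client's custom LUT
// (e.g. night light), then the multiplier.
static std::vector<double> bake_combined_lut(const std::vector<double> &customLut,
	double multiplier, size_t entries)
{
	std::vector<double> out(entries);
	for (size_t i = 0; i < entries; i++) {
		double x = double(i) / (entries - 1);
		out[i] = multiplier * sample_lut(customLut, pq_eotf(x));
	}
	return out;
}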


Indeed.


Actually, the current examples in the proposal don't include a multiplier color
op, which might be useful. For AMD as above, but also for NVIDIA as the
following issue arises:

As discussed further below, the NVIDIA "degamma" LUT performs an implicit fixed point


If possible, let's declare this as two blocks. One that informatively 
declares the conversion is present, and another for the de-gamma. This 
will help with block-reuse between vendors.



to FP16 conversion. In that conversion, what fixed point 0x maps
to in floating point varies depending on the source content. If it's SDR
content, we want the max value in FP16 to be 1.0 (80 nits), subject to a
potential boost multiplier if we want SDR content to be brighter. If it's HDR PQ
content, we want the max value in FP16 to be 125.0 (10,000 nits). My assumption
is that this is also what AMD's "HDR Multiplier" stage is used for, is that
correct?
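
(For reference, the 125.0 figure falls out of pinning FP16 1.0 at the 
80-nit SDR reference white; a tiny illustrative snippet:

// PQ's 10,000-nit peak expressed relative to 80-nit SDR reference white.
constexpr double kSdrReferenceNits = 80.0;
constexpr double kPqPeakNits       = 10000.0;
constexpr double kHdrFp16Scale     = kPqPeakNits / kSdrReferenceNits;  // 125.0
)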


It would be against the UAPI design principles to tag content as HDR or
SDR. What you can do instead is to expose a colorop with a multiplier of
1.0 or 125.0 to match your hardware behaviour, then tell your hardware
that the input is SDR or HDR to get the expected multiplier. You will
never know what the content actually is, anyway.


Right, I didn't mean to suggest that we should tag content as HDR or SDR in the
UAPI, just relating to the end result in the pipe, ultimately it would be
determined by the multiplier color op.



A multiplier could work, but we should give OEMs the option to either 
make it "informative" and fixed by the hardware, or fully configurable. 
With the Qualcomm pipeline, how we absorb FP16 pixel buffers, as well as 
how we convert them to fixed point data, actually has a dependency on the 
desired de-gamma and gamma processing. For example:


If a source pixel buffer is scRGB encoded FP16 content, we would expect 
input pixel content to be up to 7.5, with the IGC output reaching 125 as 
in the NVIDIA case. Likewise, gamma 2.2 encoded FP16 content would be 0-1 
in and 0-1 out.

