Bill Spitzak <[email protected]> wrote on 13.3.2013 at 22:29:
> I believe this is viable. The same rules can be used for anti-aliasing
> shapes and lines, too.
>
> Instead of 1-fg, I would guess the background to be a gray shade, with a
> value of 1 until the foreground color goes above a brightness threshold
> and then it should switch to fg-K where K is the threshold.
>
> There may also be approaches where it can peek at the destination to
> guess at the background color.
Ideally the background color would be a hint supplied by the application. In
practice it doesn't even need to be very precise. For instance, gnome-terminal
with a transparent background could guess that the average RGB value of the
backdrop is half of the transparency of the background image, and leave it at
that -- it would generate very reasonable alphas. I do not like the idea of
sampling the background image very much.
Without any hints, there are costs to guessing the color incorrectly. For
instance, someone may write very dim text against a completely black
background; while such text is not "designed to be readable", it looks worse
with this technique, because the assumption of high contrast adjusts gamma for
a light background regardless. So in general, when the foreground isn't using
colors from the edges of the gamut, you probably want to start reducing
whatever corrections you make, and it is possible that I should reduce the
correction even more aggressively than I do now.
This is, IMHO, a modeling problem best solved by trying different
parameterized fg-bg hypotheses against real-world data about actually popular
color combinations, with an error function that minimizes the average error of
the assumption as the parameters of the hypothesis are varied.
My current implementation is table-based:
#include <math.h>

/* Input: sRGB component of the foreground color.
   Output: a 256-entry alpha correction table for that component. */
static void
fill_ac_table(float fg, unsigned char *table)
{
    int i;
    float fg_lin = powf(fg, 2.2f);          /* foreground, linear light */
    float bg_lin = 1.0f - fg_lin;           /* assumed background, linear */
    float bg = powf(bg_lin, 1.0f / 2.2f);   /* assumed background, sRGB */

    for (i = 0; i < 256; i++) {
        /* Blend in linear light, then convert back to sRGB... */
        float blended_lin = (i * fg_lin + (255 - i) * bg_lin) / 255.0f;
        float blended = powf(blended_lin, 1.0f / 2.2f);
        /* ...and solve for the alpha that produces the same result when
           blending naively in sRGB space. Note: fg == bg (fg_lin == 0.5)
           divides by zero here and would need a guard in real code. */
        table[i] = roundf(255.0f * (blended - bg) / (fg - bg));
    }
}
Ideally, this table would be applied straight to the cached glyph, so that the
glyph could be added to the ca-mask and then used without a post-processing
step. For now, the post-processor hacks on the result of add_glyphs():
/* Can we do alpha correction? */
if (src->type == SOLID && pixman_image_get_component_alpha(mask)) {
    unsigned char subst_red[256];
    unsigned char subst_green[256];
    unsigned char subst_blue[256];

    /* One correction table per component of the solid source color. */
    fill_ac_table(src->solid.color.red / 65535.0f, subst_red);
    fill_ac_table(src->solid.color.green / 65535.0f, subst_green);
    fill_ac_table(src->solid.color.blue / 65535.0f, subst_blue);

    unsigned int *mdata = pixman_image_get_data(mask);
    int mheight = pixman_image_get_height(mask);
    int mstride = pixman_image_get_stride(mask) >> 2; /* bytes -> pixels */
    int y;

    for (y = 0; y < mheight; y++) {
        int x;
        /* Iterating to mstride also touches any row padding, which is
           wasteful but harmless for a prototype. */
        for (x = 0; x < mstride; x++) {
            unsigned int p = mdata[y * mstride + x];
            p = (subst_red[(p >> 16) & 0xff] << 16)
              | (subst_green[(p >> 8) & 0xff] << 8)
              | subst_blue[p & 0xff];
            mdata[y * mstride + x] = p;
        }
    }
}
so each solid source color gets its own alpha correction table, and the
component-alpha mask is run through a table lookup before it goes into
pixman_image_composite32() in the pixman_image_composite_glyphs() function. In
addition to this, I built Cairo and switched it to use FreeType's light LCD
filter, because as soon as you start doing gamma-corrected rendering, the
annoying legacy filter that Cairo programs FreeType to use generates subtle
artifacts.
Anyway, this code is a prototype that can be used to study the feasibility of
the alpha correction technique. Its effect can be seen in the Epiphany
browser, because canvas's fillText() function seems to go through this code
path. I haven't seen anything else behave differently yet, though! And even in
this case the glyph rendering path is sometimes skipped, because for some font
sizes and some text inputs the JavaScript code path produces the same results
as the regular browser <div>blah blah</div> rendering. Puzzling! When it does
work, though, I think it does what it's supposed to. I made a few sample
images:
https://bel.fi/alankila/lcd/sample.png
https://bel.fi/alankila/lcd/sample2.png
It should be noted that pixman is not the ideal consumer of the alpha
correction code, because the technique only becomes really useful when you
have text in ARGB textures and want to improve the rendering result for that
case without simultaneously using sRGB texture read/write machinery. In
theory, the pixman glyphs function could probably invoke the sRGB code paths
directly if it wanted to, and dispense with the notion of alpha correction
altogether.
In any case, this solution space is getting crowded. I wish somebody had a
roadmap to get the Linux world from "we ignore gamma in everything" to "we
actually get blending done correctly in all relevant color spaces". Alpha
correction is, IMHO, a feasible technique that can be employed in the current
state of XRENDER/OpenGL rendering, whether by changing the pixman glyphs
function to do this, or by manipulating ARGB textures in OpenGL when they are
known to represent text.
--
Antti
_______________________________________________
Pixman mailing list
[email protected]
http://lists.freedesktop.org/mailman/listinfo/pixman
