http://bugs.freedesktop.org/show_bug.cgi?id=20340

--- Comment #2 from Owen Taylor <[email protected]>  2009-05-08 15:45:13 PST ---
After much investigation, I managed to get a pretty good understanding of
what's going on with this test case.

There are basically two sources of fuzziness. The first is a strange leftover
in Mesa. In r300_state.c, the viewport is offset by 1/8th pixel:

 /*
  * To correctly position primitives:
  */
 #define SUBPIXEL_X 0.125
 #define SUBPIXEL_Y 0.125

This comes originally from the r100 driver, predating the start of version
control in 2003, so I'm not really sure of its exact intent.

(This 1/8th pixel offset becomes an effective 1/6th pixel offset because of the
1/12th subpixel precision that the driver uses; which is why the observed
values from my test program are about 1/6th off.)

Setting this offset to 0 makes things much better. However, depending on the
size of the rendered primitives and the texture, there is still slight fuzzing
at some (but not all) of the boundaries, where values that should be 0xff are
0xfb and values that should be 0x00 are 0x04... in other words, we're still
1/64th off for some pixels.

I analyzed this by writing a fragment shader that compared the incoming texture
coordinates and the sampled texture values to the expected values, and found
four things that combine to cause the problem:

1) When the rasterizer outputs interpolated texture coordinates, there is some
small error in them, roughly 1/1024th of a pixel (for 1:1 pixels and texels).
This can be positive or negative.

2) When we sample a texture with a texture coordinate, the sampling is done at
quantized positions of 1/64th of a texel.

3) The quantized sample position is determined by truncation - so if the texel
coordinate is 3.4999, it samples at 3+31/64th. (!)

4) There's also considerable additional error when determining sample positions
for non-power-of-two rectangular textures; it looks like there might be a
multiply and a divide with a low-precision fixed-point intermediate.

My basic conclusion is that, given the above, this additional error should
probably just be ignored and considered "good enough". In the end, GL doesn't
really guarantee that LINEAR filtering of a 1:1 texture gives a perfect result,
and if you want that you should probably switch to NEAREST at 1:1. Fixing the
gross problem will mean that there won't be sudden popping from fuzzy to sharp
when you switch to NEAREST.

I did experiment some with the TX_OFFSET field of RS_INST_COUNT; this adds an
offset to the texture coordinates coming out of the rasterizer. Since our
problem is too-small values being truncated, adding a positive offset often
does make things better. (It helps if the offset is larger than the errors
coming out of the rasterizer, but not so large that it bumps us to the next
subtexel position by itself.) This help is rather coincidental, since the size
of the offset isn't a fixed texel offset (what we'd like here), but rather
depends on the size of the rendered primitive in some fairly complex way.

I don't think it makes sense to set TX_OFFSET as a resolution to this problem;
it's really designed to remove visible artifacts with NEAREST sampling, perhaps
principally for DirectX, where the sample-position rules make problems with
NEAREST more likely. (It might make sense to set TX_OFFSET independently for
other reasons; following whatever fglrx does is probably appropriate.)

Adding the desired 1/128th texel offset directly to texture coordinates would
also conceivably be possible when generating the fragment shader, but the
complexity is unlikely to be worth the small improvement.

Neither TX_OFFSET nor a manual addition of 1/128th texel really helps the case
of NPOT rectangular textures since the errors are larger and introduced right
before sampling; and while NPOT rectangular textures may seem like a fringe
case, the reason I started investigating this was window textures in a
compositing window manager, which are typically NPOT rectangular textures...

So, in conclusion I think the right thing to do is to remove the subpixel
offset, and accept the remaining small error. I'll attach:

 A) A patch to do that
 B) The fragment shader test program I used to investigate

I'd like to have a set of piglit runs before/after to make sure that there
weren't regressions related to the original intent of "correctly position
primitives", but I was unable to get piglit to work for me. (With KMS, there
are too many visuals, it takes forever, and it fails a lot. Without KMS and
with the radeon-rewrite branch of Mesa, it crashes my X server.)

