On 2002.02.25 00:58 Leif Delgass wrote:
> In investigating texture environment modes on the mach64, I've discovered
> that the card can't modulate fragment and texture alpha values (this is
> confirmed by the docs, experimentation, and comments in the utah-glx
I've also been studying this, but I'm far behind you, so I still haven't
understood very well what the problem is.
The TEX_LIGHT_FCN register (Mach64 Spec. p. 6-9), programmed to values 0, 1
and 2, matches the REPLACE, MODULATE, and DECAL definitions respectively
(OpenGL Spec p. 148, t. 3.22). For modulate it says: output color = texel
color * interpolator color. So the only way this does not comply is if the
color multiplication in this operation doesn't include the alpha component,
but I didn't see any reference to that.
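To make sure we mean the same thing, here is my reading of the spec's
MODULATE definition with the alpha component included (a minimal sketch in
C; the names and float-color representation are mine, not the spec's or
the driver's):

    /* GL_MODULATE per the OpenGL spec (table 3.22): every
     * component, including alpha, is multiplied. */
    struct color { float r, g, b, a; };

    static struct color
    gl_modulate(struct color frag, struct color texel)
    {
        struct color out;
        out.r = frag.r * texel.r;
        out.g = frag.g * texel.g;
        out.b = frag.b * texel.b;
        out.a = frag.a * texel.a;   /* the alpha product in question */
        return out;
    }

If I read Leif's finding correctly, the hardware can produce the RGB part
of this but not the out.a product; apparently it can only take one of the
two alphas unchanged (the Utah-GLX comment below suggests the texel alpha).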
The only Utah-GLX comment on this is at mach64state.c:434, when setting
ALPHA_TST_SRC_SEL (Mach64 Spec. p. 5-19), and I don't understand what
"modulated values" it refers to:
    /* we are going to always take the texel alpha, because
       we can't get modulated values (rage pro limitation),
       and alpha testing from vertex shading doesn't make much
       sense */
    asrc = 0;
I haven't been able to experiment with this though (too busy lately), and I
also don't know whether the current DRI driver mimics the Utah-GLX
behaviour or not (I have to study it first).
> source). The one exception being that there are a couple of bizarre
> "alpha masking" modes that allow you to modulate the fragment alpha
> with the texture alpha's most significant or least significant bit.
> The _only_ way I can see this being useful is if the texture only
> uses 0 and/or 1 as alpha values. You'd have to search every texel
> for fractional alpha values to determine if you could use this
> feature, which just doesn't seem worth it (plus it would violate
> OpenGL invariance rules).
>
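Just to make the cost concrete: using that masking mode safely would mean
a full scan of the image at glTexImage time, something like this
hypothetical helper (assuming 8-bit RGBA texel data; none of these names
exist in the driver):

    /* Returns nonzero if every texel's alpha is exactly 0 or 255,
     * the only case where the MSB/LSB alpha masking trick is exact. */
    static int
    alpha_is_binary(const unsigned char *rgba, int width, int height)
    {
        int i;
        for (i = 0; i < width * height; i++) {
            unsigned char a = rgba[i * 4 + 3];
            if (a != 0 && a != 255)
                return 0;   /* found a fractional alpha value */
        }
        return 1;
    }

And as you say, even when the scan succeeds, switching rendering paths
based on texel values would break the invariance rules.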
> Given this hardware limitation, most GL_MODULATE cases can produce
> incorrect results with alpha blending enabled. Using software
> fallbacks for these cases could seriously impact performance in
> applications that make heavy use of these modes. So my question is
> this: what's the best way to handle compliance vs. performance? I
> can use software rendering by default when alpha blending is enabled
> to always give correct results, and offer an environment variable
> switch to use the flawed hardware implementation. If you have a fast
> enough processor, maybe you wouldn't notice so much if the fallback
> cases are used sparingly in an app. Then gamers could enable the
> hardware implementation to get better performance, at the cost of
> some rendering problems. There doesn't seem to be a GL API method
> for conveying that a core feature like this (not an extension) is
> broken or slow, so an environment var or config file option seems
> like the only alternative. Of course the advantage of an environment
> var is that it can be set per app.
>
I think that a common way to configure the driver settings would be
excellent. A configuration file with default settings aiming at OpenGL
conformance, plus per-application overrides, would be a neat and clean way
to deal with this kind of issue.
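For example, the driver could consult a per-application environment
variable first and fall back to the config-file default, roughly like this
(the variable and function names are made up for illustration; only getenv
and strcmp are standard):

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical sketch: a per-app env var overrides the config
     * file, and the config file should default to conformance. */
    static int
    want_conformant_texenv(int config_default)
    {
        const char *env = getenv("MACH64_HW_TEXENV");
        if (env != NULL)
            return strcmp(env, "1") != 0;  /* "1" => fast, flawed path */
        return config_default;
    }

That way a gamer could export the variable for one game without touching
the system-wide default.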
> Another issue is granularity of the hardware/software switch: if I can do
> GL_MODULATE correctly in hardware when the texture format is GL_RGB or
> GL_LUMINANCE, can I use hardware for that case even if I'm using software
> for all the other texture formats? The invariance rules seem to suggest
> that the answer is no (Appendix A, Section A.4 in the 1.3 OpenGL spec).
> The texture format is more of a "parameter" than a "mode vector", whereas
> I'm interpreting the GL_TEXTURE_ENV_MODE as a "mode vector". If "mode
> vectors" are only state variables changed with glEnable/glDisable, then
> the performance outlook for alpha blending would be pretty dire if I need
> to use software for _all_ environment modes.
>
State vector refers to _all_ OpenGL state variables, and the texture
environment mode isn't mentioned even in the "strongly suggested" part of
invariance rule 2. "Blend parameters" are mentioned, though, but only as
"strongly suggested", not "required".
I think one has to keep in mind that OpenGL conformance is not an end in
itself. Nobody buys a 3D graphics card for better OpenGL conformance, but
for more performance. OpenGL conformance is, of course, very important,
otherwise it negates the whole point of using OpenGL, but I think that we,
as implementors, have to keep in mind what the end user expects of a 3D
driver, so we should give an option to trade conformance for performance.
This way we can claim OpenGL conformance, but at the same time meet the
different user expectations.
> What are people's ideas on this, and how do other drivers address these
> issues? It seems that a consensus or standard for DRI drivers would be
> helpful (utah-glx had glx.conf). Maybe we could discuss this in
> tomorrow's irc meeting.
>
I'm looking forward to that too.
> --
> Leif Delgass
Regards,
José Fonseca