On Thu, Jul 17, 2003 at 12:51:09PM -0700, Ian Romanick wrote:
Ville Syrjälä wrote:
mga_texenv.patch: - Fix all texenv modes for all texformats on G400.
- Implement GL_DECAL using linear blend mode. The G400 docs aren't very clear on this subject so I'm not entirely sure if it will work correctly with multi texturing. Anyone have a good test?
- GL_BLEND suffers from hardware limitations: all three components of Cc must be equal. Additionally, GL_INTENSITY requires that Ac be 0.0 or 1.0. If Cc != 0.0 (or Ac != 0.0 for GL_INTENSITY), we need both units to get things done in a single pass.
Here's another thought that I've been toying with. All of the "classic" texture environments are a subset of the texture combine environment. That is, all of the equations for REPLACE, MODULATE, DECAL, BLEND, and ADD can be implemented using COMBINE. Why not implement the driver that way? Why not have a generic updateTexEnv routine that has a big switch based on the texture environment with inner-switches based on texture base format. Each case in the inner-switches would call a device specific UpdateTexEnvCombine routine with the right values.
Doing this for chips like the Radeon, R200, and i830 that actually support all of ARB_texture_env_combine would be easy. Doing it for chips like the Rage128 and G400 would be more difficult. There would be a lot of software fallbacks, but those drivers have software fallbacks for the "classic" texture environments as it is.
I took a brief look at this stuff for G400 but came to the conclusion that pretty much anything beyond the normal texenv modes would be a fallback. So I doubt there's much point in doing this.
There would be a lot of fallbacks. However, there would be a lot of cases that wouldn't be fallbacks. That wasn't the issue I was concerned about (i.e., that's not what has stopped me from already doing the work). The problem is that it would be a lot of effort to sort all the different cases out and generate the right register values.
When I was looking into it, I figured out that about 50% of the ARB_texture_env_combine modes could be supported and about 30% of the ATI_texture_env_combine3 modes could be supported. In the general case, most of the modes *except* the following could be supported:
- Anything that requires more than a single modulate (i.e., INTERPOLATE is right out).
- Anything that requires arg0 to be anything other than TEXTURE (i.e., PRIMARY_COLOR - TEXTURE is out).
- Many uses (but not all) of CONSTANT where ((Rc != Gc) && (Rc != Bc)).
The combine engine in the G400 is pretty flexible; it just doesn't quite match the flexibility needed for ARB_texture_env_combine. There are a few modes that it can do that can't be expressed with ARB_texture_env_combine with just 2 TMUs. Most of those modes aren't terribly useful, but that's another story. :)
Two other things to keep in mind. At some point it would be nice to support ATI_envmap_bumpmap, which depends on ARB_texture_env_combine. Also, the G400 drivers for Windows support ARB_texture_env_combine. I'm sure there are a lot of fallbacks in that driver as well. :)
- Passes glean's texenv test (32bpp and 16bpp). And without fallbacks if I change the envcolor to comply with the aforementioned restrictions. Also Quake3 looks ok with the patch ;)
Quake3 intentionally uses a very limited set of texture environments. This was done to work around driver / hardware limitations at the time it was released. These days, I don't think that makes it a very good test. A better test would be RtCW: Enemy Territory. The whole game is free (as in beer), and it builds on the RtCW engine (which builds on the Quake3 engine).
Thanks. I'll have to try that one too.
Actually, I did some "testing" with that this weekend. It doesn't exercise the combine modes much, if any, more than Q3.
- I also removed the useless MGA_FALLBACK_BORDER_MODE thing. It didn't print anything since the debug print stopped at the first enabled fallback bit, which was the texture fallback.
That doesn't seem like it should be true. When FALLBACK is called in update_tex_common (mga_texstate.c, line 686) there shouldn't be any bits already set. Try modifying tests/texwrap.c to always set TEXTURE_WRAP_T to CLAMP_TO_EDGE. You will get the "Mixing GL_CLAMP_TO_EDGE and GL_CLAMP" fallback message. :)
I'll have to try this again. Certainly the code suggests that it should work...
It just may be that you also hit some other fallback in your test. Try the modification to texwrap.c that I suggested. That should isolate it enough to hit just that one specific fallback.
------------------------------------------------------------------------
diff -urN mga.orig/mgastate.c mga/mgastate.c
--- mga.orig/mgastate.c 2003-07-12 22:40:08.000000000 +0300
+++ mga/mgastate.c 2003-07-12 22:39:47.000000000 +0300
@@ -123,9 +123,11 @@
   /* BlendEquation sets ColorLogicOpEnabled in an unexpected
    * manner.
    */
+#if !defined(ACCEL_ROP)
FALLBACK( ctx, MGA_FALLBACK_LOGICOP,
(ctx->Color.ColorLogicOpEnabled && ctx->Color.LogicOp != GL_COPY));
+#endif
}
static void mgaDDBlendFunc(GLcontext *ctx, GLenum sfactor, GLenum dfactor)
@@ -900,9 +902,11 @@
/* For some reason enable(GL_BLEND) affects ColorLogicOpEnabled.
*/
+#if !defined(ACCEL_ROP)
FALLBACK( ctx, MGA_FALLBACK_LOGICOP,
(ctx->Color.ColorLogicOpEnabled && ctx->Color.LogicOp != GL_COPY));
+#endif
break;
case GL_DEPTH_TEST:
FLUSH_BATCH( mmesa );
I'm not really familiar with the blend code for the MGA. Have you tested these changes? Clearly with ACCEL_ROP undefined they don't make any difference. Has any of this code ever been tested with ACCEL_ROP defined? What were the issues?
I have no idea. I'm running with ACCEL_ROP defined. Certainly glean doesn't pass the logicOp test but then again I don't think it passes it with ACCEL_ROP undefined either (as strange as it sounds). I'll have to do some more testing...
Snif. Snif. I smell bit-rot. I don't think anyone is still around that has any real knowledge of that code. Have fun. :)
------------------------------------------------------------------------
diff -urN mga.orig/mga_texstate.c mga/mga_texstate.c
--- mga.orig/mga_texstate.c	2003-07-12 22:40:08.000000000 +0300
+++ mga/mga_texstate.c	2003-07-12 22:42:30.000000000 +0300
@@ -106,14 +106,17 @@
    * GL_TEXTURE_MAX_LOD, GL_TEXTURE_BASE_LEVEL, and GL_TEXTURE_MAX_LEVEL.
    * Yes, this looks overly complicated, but it's all needed.
    */
-
-   firstLevel = tObj->BaseLevel + (GLint)(tObj->MinLod + 0.5);
-   firstLevel = MAX2(firstLevel, tObj->BaseLevel);
-   lastLevel = tObj->BaseLevel + (GLint)(tObj->MaxLod + 0.5);
-   lastLevel = MAX2(lastLevel, tObj->BaseLevel);
-   lastLevel = MIN2(lastLevel, tObj->BaseLevel + baseImage->MaxLog2);
-   lastLevel = MIN2(lastLevel, tObj->MaxLevel);
-   lastLevel = MAX2(firstLevel, lastLevel); /* need at least one level */
+   if (tObj->MinFilter == GL_NEAREST || tObj->MinFilter == GL_LINEAR) {
+      firstLevel = lastLevel = tObj->BaseLevel;
+   } else {
+      firstLevel = tObj->BaseLevel + (GLint)(tObj->MinLod + 0.5);
+      firstLevel = MAX2(firstLevel, tObj->BaseLevel);
+      lastLevel = tObj->BaseLevel + (GLint)(tObj->MaxLod + 0.5);
+      lastLevel = MAX2(lastLevel, tObj->BaseLevel);
+      lastLevel = MIN2(lastLevel, tObj->BaseLevel + baseImage->MaxLog2);
+      lastLevel = MIN2(lastLevel, tObj->MaxLevel);
+      lastLevel = MAX2(firstLevel, lastLevel); /* need at least one level */
+   }
    log2Width = tObj->Image[firstLevel]->WidthLog2;
    log2Height = tObj->Image[firstLevel]->HeightLog2;
    width = tObj->Image[firstLevel]->Width;
This is not correct. Both the OpenGL spec (page 142 (155 in the pdf) of the 1.4 spec) and the SGIS_texture_lod extension (http://oss.sgi.com/projects/ogl-sample/registry/SGIS/texture_lod.txt) are quite clear on this. The value of TEXTURE_BASE_LEVEL, which defaults to zero, is the texture level used by the NEAREST and LINEAR filters.
Isn't that exactly what my patch does?
I'm going to blame it on the codeine that I was taking (I just got my wisdom teeth pulled out).
The most important change here is to make sure that lastLevel is set to firstLevel if mipmapping isn't used, correct? That seems good. I think other drivers could benefit from that change as well. This is one of the blocks of code (that I mention below in my earlier message) that I think should be moved to a common file. The same block, modulo support for other texture targets, appears in every single driver.
In any case, a comment explaining the special-case is probably in order.
diff -urN mga.orig/mgatex.c mga/mgatex.c
--- mga.orig/mgatex.c	2003-07-12 22:40:08.000000000 +0300
+++ mga/mgatex.c	2003-07-12 22:42:45.000000000 +0300
@@ -449,6 +445,7 @@
    switch (pname) {
    case GL_TEXTURE_MIN_FILTER:
+      driSwapOutTextureObject( (driTextureObject *) t );
    case GL_TEXTURE_MAG_FILTER:
       FLUSH_BATCH(mmesa);
       mgaSetTexFilter( t, tObj->MinFilter, tObj->MagFilter );
Why is this needed?
Because of the previous patch. When changing between LINEAR/NEAREST and mipmapping firstLevel needs to be recalculated.
Okay.
------------------------------------------------------------------------
diff -urN mga.orig/mga_texstate.c mga/mga_texstate.c
--- mga.orig/mga_texstate.c	2003-07-12 22:44:37.000000000 +0300
+++ mga/mga_texstate.c	2003-07-12 22:46:49.000000000 +0300
@@ -134,7 +134,7 @@
    totalSize = 0;
    for ( i = 0 ; i < numLevels ; i++ ) {
-      const struct gl_texture_image * const texImage = tObj->Image[i];
+      const struct gl_texture_image * const texImage = tObj->Image[i+firstLevel];
This is correct. Good catch.
As for the rest of this patch, I think there is some confusion. Hopefully I can clear it up. The intention of the firstLevel and lastLevel values in classes (in a very loose, C-instead-of-C++ sense) derived from driTextureObject is for them to refer to images in the Mesa controlled texture state. That is, firstLevel is the image in tObj->Image that corresponds to the base level (as set by TEXTURE_BASE_LEVEL) of the current texture environment. The idea is that this image is set to the level 0 image in the hardware.
But that isn't true for G200/G400 since the hw doesn't support LOD clamping.
Right. That's the whole idea. The hardware's level 0 is used for the base-level no matter what. The driTextureObject represents what's going on in the hardware, and the Mesa gl_texture_object represents what's going on in OpenGL. By mapping t->tObj->BaseLevel to 0 we bridge between the two views. That's one of the reasons why everything gets thrown out when TEXTURE_BASE_LEVEL changes.
diff -urN mga.orig/mga_texstate.c mga/mga_texstate.c
--- mga.orig/mga_texstate.c	2003-07-12 22:48:26.000000000 +0300
+++ mga/mga_texstate.c	2003-07-12 22:51:08.000000000 +0300
@@ -52,6 +52,10 @@
    [MESA_FORMAT_RGB565] = TMC_tformat_tw16 | TMC_takey_1 | TMC_tamask_0,
    [MESA_FORMAT_ARGB4444] = TMC_tformat_tw12 | TMC_takey_1 | TMC_tamask_0,
    [MESA_FORMAT_ARGB1555] = TMC_tformat_tw15 | TMC_takey_1 | TMC_tamask_0,
+   [MESA_FORMAT_AL88] = TMC_tformat_tw8al | TMC_takey_1 | TMC_tamask_0,
+   [MESA_FORMAT_A8] = TMC_tformat_tw8a | TMC_takey_1 | TMC_tamask_0,
+   [MESA_FORMAT_L8] = TMC_tformat_tw8a | TMC_takey_1 | TMC_tamask_0,
+   [MESA_FORMAT_I8] = TMC_tformat_tw8a | TMC_takey_1 | TMC_tamask_0,
    [MESA_FORMAT_CI8] = TMC_tformat_tw8 | TMC_takey_1 | TMC_tamask_0,
    [MESA_FORMAT_YCBCR] = TMC_tformat_tw422uyvy | TMC_takey_1 | TMC_tamask_0,
    [MESA_FORMAT_YCBCR_REV] = TMC_tformat_tw422 | TMC_takey_1 | TMC_tamask_0,
@@ -81,6 +85,10 @@
    case MESA_FORMAT_RGB565: txformat = TMC_tformat_tw16; break;
    case MESA_FORMAT_ARGB4444: txformat = TMC_tformat_tw12; break;
    case MESA_FORMAT_ARGB1555: txformat = TMC_tformat_tw15; break;
+   case MESA_FORMAT_AL88: txformat = TMC_tformat_tw8al; break;
+   case MESA_FORMAT_A8: txformat = TMC_tformat_tw8a; break;
+   case MESA_FORMAT_L8: txformat = TMC_tformat_tw8a; break;
+   case MESA_FORMAT_I8: txformat = TMC_tformat_tw8a; break;
    case MESA_FORMAT_CI8: txformat = TMC_tformat_tw8; break;
    case MESA_FORMAT_YCBCR: txformat = TMC_tformat_tw422uyvy; break;
    case MESA_FORMAT_YCBCR_REV: txformat = TMC_tformat_tw422; break;
diff -urN mga.orig/mgatex.c mga/mgatex.c
--- mga.orig/mgatex.c	2003-07-12 22:48:26.000000000 +0300
+++ mga/mgatex.c	2003-07-12 22:50:11.000000000 +0300
@@ -219,7 +219,7 @@
    case GL_ALPHA16:
    case GL_COMPRESSED_ALPHA:
       /* FIXME: This will report incorrect component sizes... */
-      return &_mesa_texformat_argb4444;
+      return MGA_IS_G400(mmesa) ? &_mesa_texformat_a8 : &_mesa_texformat_argb4444;

    case 1:
    case GL_LUMINANCE:
@@ -229,7 +229,7 @@
    case GL_LUMINANCE16:
    case GL_COMPRESSED_LUMINANCE:
       /* FIXME: This will report incorrect component sizes... */
-      return &_mesa_texformat_rgb565;
+      return MGA_IS_G400(mmesa) ? &_mesa_texformat_l8 : &_mesa_texformat_rgb565;

    case 2:
    case GL_LUMINANCE_ALPHA:
@@ -241,7 +241,7 @@
    case GL_LUMINANCE16_ALPHA16:
    case GL_COMPRESSED_LUMINANCE_ALPHA:
       /* FIXME: This will report incorrect component sizes... */
-      return &_mesa_texformat_argb4444;
+      return MGA_IS_G400(mmesa) ? &_mesa_texformat_al88 : &_mesa_texformat_argb4444;

    case GL_INTENSITY:
    case GL_INTENSITY4:
@@ -250,11 +250,12 @@
    case GL_INTENSITY16:
    case GL_COMPRESSED_INTENSITY:
       /* FIXME: This will report incorrect component sizes... */
-      return &_mesa_texformat_argb4444;
+      return MGA_IS_G400(mmesa) ? &_mesa_texformat_i8 : &_mesa_texformat_argb4444;

    case GL_YCBCR_MESA:
-      if (type == GL_UNSIGNED_SHORT_8_8_APPLE ||
-          type == GL_UNSIGNED_BYTE)
+      if (MGA_IS_G400(mmesa) &&
+          (type == GL_UNSIGNED_SHORT_8_8_APPLE ||
+           type == GL_UNSIGNED_BYTE))
          return &_mesa_texformat_ycbcr;
       else
          return &_mesa_texformat_ycbcr_rev;
I was afraid of something like this. Have you tested all of these additional modes to make sure they work right?
glean -t texEnv passes everything. Also Mesa's texenv demo looks fine. Any other good tests I should try?
This stuff really needs the texenv patch to make things right. I think the previous texenv stuff wasn't that careful in selecting diffuse vs. modulated alpha for example.
The problem with the glean texEnv test is that, by its own admission, it only hits a very small subset of the possible cases. One of the things that it does, IIRC, is only test RGB and RGBA as texture base formats. I don't think it hits L, I, AL, or A textures at all. If demos/texenv looks good, then it's a good bet that things are okay.
I don't think that too many of these other texture base formats are used that often. It will probably be difficult to find a "real world" test.
Also, "GL_MESA_ycbcr_texture" should be moved from card_extensions to g400_extensions in mga_xmesa.c
So it should be disabled on G200 because it can only support one of the types?
Hmpf. I'm going to have to stop looking at just diffs. From looking at the diff I thought both formats weren't supported for G200.
