> In any case, this change causes a ton of conformance failures. Several
> piglit tests in the "general" category also fail.
Yes, sorry, I did actually run piglit, but twice on the same binary...
The problem was that last_tile_addr must be invalidated on any change
including flushes.
This was automat
Changes in v2:
- Invalidate last_tile_addr on any change, fixing regressions
- Correct coding style
Currently softpipe ends up allocating more than 200 MB of memory
for each context due to the tile caches.
Even worse, this memory is all explicitly cleared, which means that the
kernel must actually back it with physical RAM right away.
Currently makedepend is used by the Mesa Makefile-based build system,
but not required.
Unfortunately, not having it makes dependency resolution non-existent,
which is a source of subtle bugs; it is also a rarely tested
configuration, since all Mesa developers likely have it installed.
Furthermore so
This is needed to be able to use EGL on any existing X window, and
seems a good idea in general.
Rejecting single-buffered configs should be done in EGL itself if
necessary, and not in the native API.
---
src/gallium/state_trackers/egl/x11/native_dri2.c |4
1 files changed, 0 insertions(
Currently softpipe ends up allocating more than 200 MB of memory
for each context due to the tile caches.
Even worse, this memory is all explicitly cleared, which means that the
kernel must actually back it with physical RAM right away.
This change allocates tile memory on demand.
---
src/galliu
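To illustrate the idea, here is a minimal sketch of on-demand allocation
(TILE_SIZE, softpipe_cached_tile and tile_data are illustrative names,
not the actual softpipe identifiers):

  #include <stdlib.h>

  #define TILE_SIZE 64   /* assumed tile dimension */

  struct softpipe_cached_tile {
     float *data;        /* NULL until the tile is first touched */
  };

  /* Allocate tile storage only when a tile is actually accessed, so
   * untouched tiles never force the kernel to commit physical pages. */
  static float *
  tile_data(struct softpipe_cached_tile *tile)
  {
     if (!tile->data)
        tile->data = calloc(TILE_SIZE * TILE_SIZE * 4, sizeof(float));
     return tile->data;
  }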
Yes, that used to happen for me too.
Just edit llvm-config to remove the offending text and ideally file a
bug on the LLVM bug tracker.
Regarding the tpf.h file, the only thing that is similar to the
Microsoft files is the enum names.
In particular, we use the same names for the opcodes as those used by
Microsoft (even though the identifiers have a different prefix).
However, the "dlls/wined3d/shader_sm1.c" file in Wine contains an
> A way to unblock this would be to split the d3d1x state tracker in two parts:
> the runtime, and the client driver. As Keith also suggested.
>
> The client driver could be used on Windows -- precisely as the DDK and WDK
> are intended.
>
> The runtime part could be re-implemented from scratch by
> Is there any reason for the binaries to be commited?
The idea is to allow people to test (to be added) Wine support or run
the demos in Windows to see what they are supposed to do without
needing to set up a Windows build environment.
> The basic rule Wine uses is that if you've ever seen Microsoft code,
> you can't work on similar code for Wine.
OK, but that means source code as far as I can tell from the FAQ, not
SDK headers.
In this case, I'm not aware of any public Microsoft source code
implementing any of the functionality
> Since this is derived from code in the DDK, this will prevent me or
The state tracker does not contain code derived from the DDK and
doesn't need any Microsoft code or tools except the Microsoft HLSL
compiler to compile shaders ahead of time (precompiled shaders are
included).
Also, as far as I k
> A couple of questions - it looks like this is a drop-in for the
> d3d10/11 runtime, rather than an implementation of the DDI.
Yes.
> I think
> that makes sense, but it could also be possible to split it into two
> pieces implementing either side of the d3d10 DDI interface. Any
> thoughts on whe
>> Note that the id parameter to DrawTransformFeedback is _not_ the place
>> to write to stream output to, but the place to take the count of
>> primitives to draw from.
>
> Which is size of the buffer / pipe_stream_output_state::num_outputs.
Huh?
Surely if I allocate a 10MB buffer, output a singl
I just pushed a Direct3D 10/11 state tracker to the d3d1x branch.
The commit message is very verbose and includes a lot of details.
What do you think?
If you tried it, did it work? If not, what issues did you find?
At what point should it be merged to master?
>
>> 2. How do you pass to Gallium the id parameter to DrawTransformFeedback?
>
> You bind the buffer with the given id before issuing draw_stream_output.
Bind to what?
Note that the id parameter to DrawTransformFeedback is _not_ the place
to write to stream output to, but the place to take the count of
primitives to draw from.
nVidia dropped hardware support starting from nv40.
For texture compression, S3TC or R-component-only textures should
usually be a better option, and for other uses shader-based techniques
are more flexible.
> No, you don't store it at all. Storing it in the client side data structures
> implies fetching.
Well, of course you don't store it in the client data structure.
In the driver-specific part of the data structures, you store some
kind of reference to the GPU-side object where you instruct the GPU
> The difference between this and the D3D semantics is that in D3D you bind the
> buffer explicitly and in GL implicitly, i.e. the buffer associated with the
> stream output object id is bound for you. So for GL the state tracker would
> have to bind the appropriate buffers on DrawTransformFeedback
> It's because I never had the time to actually test it properly. The interface
> is right though, the implementation is supposed to follow the D3D semantics
> and we should stick with that, instead of passing cso's and making behavior
> switch based on magic null arguments.
I think you need that
The current version of draw_stream_output in softpipe seems to attempt
to draw using the currently bound stream output buffer as input.
This does not match D3D10/11's DrawAuto, which instead draws with the
current vertex buffer (and requires having only one bound), but using
the primitive count from the stream output buffer.
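For reference, the D3D10 usage pattern being described is roughly the
following (error handling omitted; so_buffer and MyVertex are placeholders
for a buffer created with stream-output and vertex-buffer bind flags and
its vertex layout):

  // Pass 1: capture geometry into so_buffer via stream output.
  UINT so_offset = 0;
  device->SOSetTargets(1, &so_buffer, &so_offset);
  device->Draw(vertex_count, 0);
  device->SOSetTargets(0, NULL, NULL);

  // Pass 2: draw whatever was captured. DrawAuto() takes the vertex count
  // from the buffer's internal fill size, not from a stream output binding.
  UINT stride = sizeof(MyVertex), offset = 0;
  device->IASetVertexBuffers(0, 1, &so_buffer, &stride, &offset);
  device->DrawAuto();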
Thanks a lot for your work in fixing, adapting, improving and merging
this, and sorry for the previous bugginess.
How about the loop unrolling patches?
In particular, unless I'm wrong on this bug existing,
"glsl/loop_analysis: fix miscompilation with continues before cond
breaks" should probably
> Exported function tables reduce the ability of the driver to replace the
> entire dispatch table with a single pointer change for new state
> combinations or rendering modes. If the app gets a pointer to a
> dispatch table and caches it, the only way for the driver to change how
> commands are d
Added a commit to update to the new glext.h and solve a minor issue
caused by that.
>> +#ifndef GL_CONTEXT_FLAG_ROBUST_ACCESS_BIT_ARB
>> +#define GL_CONTEXT_FLAG_ROBUST_ACCESS_BIT_ARB 0x0004
>> +#endif
>> +
>
> This part of the patch should go into glheader.h
Actually, I think the correct solution is to update glext.h from
version 61 to version 64, which has it (it's core in
> Yes. Returning 0.5 is wrong. I don't think HLSL has noise, so there
> shouldn't be a compatibility issue there.
Yes, in fact, nothing seems to use the NOISE opcodes at all.
Also, I can't find evidence of any hardware ever having implemented it.
AMD IL has Perlin noise opcodes, but it looks lik
ir_lower_jumps should already do the jump unification, and
could/should also remove the continue.
It's probably best to put all jump manipulations there to avoid
risking rerunning all passes a number of times linear in the program
size.
Regarding ifs, it would be nice to also remove empty else branches
> I think you forgot to include this file in this commit.
Yes, sorry.
> This is one that I was going to give some review comments on. I really
> want to have this change in 7.9, but the patch is going to need some
> changes. When we first discussed using C++ in the GLSL compiler in
> Mesa, we go
Jose Fonseca seemed open to accepting this, but no final go-ahead was given.
> We need this to enable full loop unrolling for r3xx->r4xx fragment shaders,
> which don't support loops. It's needed for the blur shader in KWin to work.
> This is a regression since the GLSL compiler merge, because the
I think I understand: since you reparent everything to the new root
and then call talloc_free on the old root, everything dies anyway
whatever its parent was.
However, this makes me think that a copying garbage collection scheme
would be more efficient, since you already have support for visiting
Thanks!
But, does this mean that if I allocate using new(existing_ir) ir_foo()
as opposed to new(talloc_parent(existing_ir)) ir_foo() then this new
object will always remain alive as long as existing_ir is alive, even
if it's no longer referenced?
If so, why would anyone do that, which seems to o
> The only people that it helps are people like
> S3 or XGI that don't have a budget to hire a full driver team. For
> everyone else it just plain sucks.
This is the case for all Mesa/Gallium drivers, if you define "full
driver team" by the standards of ATI and nVidia Windows driver teams.
Hence
> Structured flow control is the issue. What's the solution for getting
> that back out of LLVM?
Well, my idea is the following.
First, break any self loop edges, then do an SCC decomposition of the
CFG, and output the DAG of the SCCs in topological order.
SCCs that have size 1 are basic blocks,
> And never mind that you can't make a conformant OpenGL driver with
> Gallium due to the impossibility of software fallbacks.
Well, you could use the failover module to use softpipe for fallbacks,
but no one does, for the following reasons:
1. Modern hardware doesn't need any software fallbacks,
> 2. Compile assembly shaders to GLSL IR (possibly via Mesa IR -> GLSL IR
> translation). This will allow support of other NV assembly extensions
> for free on more advanced GLSL hardware. This will give better support
> for applications that use Cg. The GLSL backend for Cg generates some
> ugl
> We intend to change the front-end for ARB fp/vp to generate GLSL IR directly.
Nice.
It's probably a good idea to extend GLSL IR to represent things like
LOG and LIT directly and have optional lowering passes for them, so
that DX9 hardware with native instructions for them can still easily
generat
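As a concrete example of such a lowering, LIT (as defined by
ARB_vertex_program) could be expanded by an optional pass into plain
GLSL-level operations, roughly like this (my sketch; the exact clamping
of the exponent is approximated):

  vec4 lit(vec4 src)
  {
     float x = max(src.x, 0.0);
     float y = max(src.y, 0.0);
     float w = clamp(src.w, -128.0, 128.0);
     return vec4(1.0, x, (x > 0.0) ? pow(y, w) : 0.0, 1.0);
  }

Hardware with a native LIT would instead keep the single opcode and skip
the lowering pass.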
> I think some of the danger lies in constructs like:
>
> for (int i = 0; i < limit; i++)
>    if (i < gl_TexCoord.length())
>       do_something_with(gl_TexCoord[i]);
>    else
>       do_something_else();
Yes, you are right.
What
> If you're working on a driver for a scalar chip, you might want to pull
> brw_fs_channel_expressions and brw_fs_vector_splitting up and get them
> used -- it should make sensible codegen a lot easier for them.
Current drivers for scalar hardware take Mesa IR/TGSI input and not GLSL IR.
Using GL
> I keep hearing this, and a bunch of people have been trying to build the
> equivalent gallium hardware drivers to various core drivers for a long
> time. So, can we get some details on a success story? What driver is
> now more correct/faster than it was before? By how much? How much of
> tha
> Too bad LLVM doesn't have a clue about hardware that requires structured
> branching. Any decent optimizer for general purpose CPUs generates
> spaghetti code.
Yes, that's the biggest challenge, but I think it can be solved.
Also, modern hardware tends to have more flexible branching than GLSL,
> Clever. :)
>
> Technically speaking only the access to the array has undefined
> behavior, and that undefined behavior does not include program
> termination.
No, my interpretation is correct, because ARB_robustness contains the
following text:
<<
... Undefined behavior results from indexing
I think my patch can be seen as an intermediate step towards that.
I'm not sure that it's a good idea to implement them in terms of all/any.
ir_binop_all_equal and ir_binop_any_nequal represent the concept of
"whole object equality", which seems quite useful in itself.
For instance, it can be trivi
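For example, in GLSL "==" on vectors is already whole-object equality,
which is exactly what ir_binop_all_equal models, as opposed to the
component-wise equal() built-in:

  bool whole_equal(vec4 a, vec4 b)
  {
     return a == b;       // one bool for the whole object: ir_binop_all_equal
  }

  bvec4 per_component(vec4 a, vec4 b)
  {
     return equal(a, b);  // per-component comparison
  }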
For anyone who doesn't know it yet, there is a new closed-source
OpenGL game for Linux called Amnesia, with the very interesting
property that the developers offer a test program that renders several
complex GLSL-based effects one at a time.
Unfortunately, it's pretty broken right now (on softpipe
Yes, I think what is really needed here is a class that only ignores
IR related to declaration and expression statements.
This would allow lower_jumps to simply skip nodes like ir_texture
that it doesn't need to handle at all.
Would this be acceptable?
BTW, using hierarchical_visitor is less
I pushed yet another commit that teaches the unroller to unroll loops like this:
uniform int texcoords;
float4 gl_TexCoord[8];
for(i = 0; i < texcoords; ++i)
do_something_with(gl_TexCoord[i]);
After this, we should now be able to inline all functions and unroll
all loops with reasonably obvio
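Presumably the result is unrolled up to the declared array size, with each
iteration guarded by the original loop condition, i.e. something along
these lines (my illustration of the idea, not the literal output of the pass):

  if (0 < texcoords) do_something_with(gl_TexCoord[0]);
  if (1 < texcoords) do_something_with(gl_TexCoord[1]);
  /* ... */
  if (7 < texcoords) do_something_with(gl_TexCoord[7]);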
I added two commits that allow unrolling of loops with jumps.
One fixes a serious bug in loop controls, where the analyzer wouldn't
recognize the from/to/increment data it itself created on a previous
run.
The second adds code to build the if structures to the unroller.
Note that some orderings
>> at least for chips with no control flow support like nv30 and i915
> s/at least/only
> This doesn't reduce divergence, only increases code size.
The purpose of this unrolling is not to reduce divergence, but to
avoid the expense of computing and checking the loop iteration
variable, and the exp
I just put an updated version in the shader-work branch, which better
adheres to the coding conventions, has better comments, and fixes and
improves the execute flag if-insertion logic.
>> 2. Remove all "continue"s, replacing them with an "execute flag"
>
> This essentially wraps the rest of the code in the loop with
> if-statements, right? I had been thinking about adding a pass to do
> this as well.
Yes.
>> 3. Replace all "break" with a single conditional one at the end of the
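For point 2, the execute-flag rewrite looks roughly like this (my sketch of
the idea, with made-up skip()/work() helpers, not the literal output of the
pass):

  /* before */
  for (i = 0; i < n; i++) {
     if (skip(i))
        continue;
     work(i);
  }

  /* after: the continue just clears an execute flag that guards the
   * remainder of the loop body */
  for (i = 0; i < n; i++) {
     bool execute_flag = true;
     if (skip(i))
        execute_flag = false;
     if (execute_flag)
        work(i);
  }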
Sorry, should be fixed now.
Well, the alternative is to copy&paste this into ir_lower_jumps and
other similar passes.
I added a new class instead of changing ir_visitor so that subclasses
that need to examine all leaves will get a compile error if not
updated when new leaves are introduced. This is debatable though,
since yo
---
src/mesa/program/ir_to_mesa.cpp |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/src/mesa/program/ir_to_mesa.cpp b/src/mesa/program/ir_to_mesa.cpp
index 7ec883e..5346d8f 100644
--- a/src/mesa/program/ir_to_mesa.cpp
+++ b/src/mesa/program/ir_to_mesa.cpp
@@ -2730,6 +2730
}
-
-   /* If only one side returns, then the block of code after the "if"
-    * is only executed by the other side, so those instructions don't
-    * need to be anywhere but that other side.
-    *
-    * This will usually pull a return statement up into the other
-    * side, so we
This is just a subclass of ir_visitor with empty implementations of all
the visit methods.
Used to avoid duplicating that in ir_visitor subclasses.
ir_hierarchical_visitor is another way to solve this, but is less natural
for some applications.
---
src/glsl/ir_visitor.h | 21 ++
> And prog_optimize can't do anything if there are temporaries with indirect
> addressing. We need either the temporary array file or to do register
> allocation in GLSL IR.
Sure, but I think this can/should be done on top of this change, and
thus isn't an argument against it.
Instead of a tempor
> Furthermore, if the idea is to make PIPE_PROCESSOR_* and
> TGSI_PROCESSOR_* match, then the first thing that comes to mind is: why
> do we need two equal named enums? If we're going through the trouble of
> changing this, then IMO, the change should be to delete one of these
> enums.
Right, shou
On Mon, Sep 6, 2010 at 4:45 PM, Zack Rusin wrote:
> Maybe lets skip this and other tessellation patches until we have code that
> actually does something. It's just going to be confusing to have not finished
> (or really "not started" =) ) code that doesn't do anything.
The idea is that having it
> I'm not a fan of this patch. Technically the token is from a different
> extension, one which we never supported. We use our own representation of a
> geometry program, did you double check whether what we have is at all
> compatible with the NV extension?
I think this will not be visible from o
>>> Unfortunately, some GLSL shaders such as an SSAO fragment
>>> post-processing shader in Unigine Tropics, go over this limit at
>>> least before program optimizations are applied.
>
> By the time we generate Mesa IR all the optimizations should be done.
Are you sure?
As far as I could tell, ir
Currently each shader cap has FS and VS versions.
However, we want a version of them for geometry, tessellation control,
and tessellation evaluation shaders, and want to be able to easily
query a given cap type for a given shader stage.
Since having 5 duplicates of each shader cap is unmanageable
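One possible shape for this, with the exact names being my assumption rather
than a settled interface, is a single per-stage query on pipe_screen instead
of FS/VS duplicates of every cap:

  /* hypothetical signature */
  int (*get_shader_param)(struct pipe_screen *screen,
                          unsigned shader,            /* PIPE_SHADER_VERTEX, ... */
                          enum pipe_shader_cap param);

  /* usage */
  max_temps = screen->get_shader_param(screen, PIPE_SHADER_GEOMETRY,
                                       PIPE_SHADER_CAP_MAX_TEMPS);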
This turns on if conversion and unlimited loop unrolling if control
flow is not supported.
Also, programs whose control flow cannot be emulated will now
fail GLSL linkage.
---
src/mesa/state_tracker/st_extensions.c |9 +
1 files changed, 9 insertions(+), 0 deletions(-)
diff --git a/s
The purpose of this is so that loop/if emulation in the glsl compiler
doesn't get enabled in the next change.
Note that doing such emulation in driver-specific code is probably
a bad idea, especially since the result won't be exposed to further
generic optimizations, and the driver-specific code
Don't add the pipe_screen/pipe_context members yet, since no actual
implementation is planned right now.
DirectX 11 calls these "hull" and "domain" shaders.
Since we are already using "fragment" instead of "pixel", and the
OpenGL naming is much clearer, use it.
They are abbreviated to "tessctrl"
These didn't match PIPE_SHADER_*, and it seems much better to make
all such indices match.
Vertex is first because cards with vertex shaders came first.
---
src/gallium/include/pipe/p_shader_tokens.h |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/gallium/include/p
This increases the chance that GLSL programs will actually work.
Note that continues and returns are not yet lowered, so linking
will just fail if not supported.
---
src/glsl/glsl_parser_extras.cpp|4 ++--
src/glsl/ir_optimization.h |2 +-
src/glsl/linker.cpp
This allows us to specify different options, especially useful for chips
without unified shaders.
---
src/mesa/main/mtypes.h | 15 ---
src/mesa/main/nvprogram.c |4 +++-
src/mesa/main/shaderapi.c | 31 ---
sr
This includes tessellation shaders.
---
src/mesa/main/mtypes.h | 17 +
src/mesa/main/shaderobj.h | 39 +++
src/mesa/program/program.h | 38 ++
3 files changed, 94 insertions(+), 0 deletions(-)
diff
---
src/mesa/main/extensions.c |1 +
src/mesa/main/mtypes.h |3 +++
2 files changed, 4 insertions(+), 0 deletions(-)
diff --git a/src/mesa/main/extensions.c b/src/mesa/main/extensions.c
index 50b97f5..2ab3801 100644
--- a/src/mesa/main/extensions.c
+++ b/src/mesa/main/extensions.c
@@
There is a published extension with a token, so let's use it.
---
src/mesa/main/context.c |4 ++--
src/mesa/main/mtypes.h |8
src/mesa/main/state.c |2 +-
src/mesa/program/prog_print.c |2 +-
src/mesa/program/pro
#5 have no effect on r300g
5. Set the GLSL compiler options depending on the Gallium caps
Luca Barbieri (10):
mesa: s/MESA_GEOMETRY_PROGRAM/GL_GEOMETRY_PROGRAM_NV
mesa: add ARB_tessellation_shader boolean and constants
mesa: add PIPE_SHADER_* like constants and conversions to/from enums
> How the ... is it using 1000 TEMPs ?
> Wait, does it have large arrays and are they accessed indirectly ?
No, it has a large unrolled (by the game) loop which generates
something like 1500 instructions, each creating a different value.
On first sight, the unrolled loop seems to be the sampling k
Currently Mesa has a limit of 1024 temporaries.
Unfortunately, some GLSL shaders such as an SSAO fragment
post-processing shader in Unigine Tropics, go over this limit at
least before program optimizations are applied.
Instead of just enlarging the limit, remove it completely and
replace all arra
The nv30/nv40 driver expects all optimizations that can be
performed on TGSI without target knowledge to have already been
performed.
This seems a sensible principle in general to avoid drivers duplicating work.
In particular, registers are expected to be optimally allocated.
Doing this in th
> I think we need to be sure we're not infringing on this patent. Until
> we know one way or the other I'd prefer that we don't merge this
> branch into master. In the mean time I'll see if I can learn more
> about the situation and find a way to proceed.
For anyone interested in determining the
> It still sounds that you're referring to sampling from a texture and not
> rendering/blending to it. Of course they are related (we only want one
> swizzled layout at most), but the current swizzled layout was chosen to
> make blending easy; and not to make texture sampling easy.
>
> No SoA swizzl
> Well, don't forget that you have to populate the tile from somewhere -
> so you'll hit all of the same cachelines that the non-swizzled version
> would have.
>
> We still get locality from binning, meaning that all accesses to a group
> of cachelines come in a single burst, after which they are d
> It's an impressive amount of work you did here. I'll comment only on the
> llvmpipe of the changes for now.
Thanks for your feedback!
> Admittedly, always using a floating point is not ideal. A better
> solution would be to choose a swizzled data type (unorm8, fixed point,
> float, etc) that ma
With Cg toolkit 2.1, it prints this:
test.glsl(6) : error C1011: cannot index a non-array value
With Cg toolkit 3.0, it works, but produces a mess in any profile,
which seems like one of the worst imaginable ways of doing this.
Perhaps the feature is infrequently used, so they didn't bother to
op
nv30 and nv40 support SEQ everywhere, and Marek's SEQ/DP4 seems optimal.
BTW, the nVidia Cg compiler (which can tell us what nVidia does)
doesn't seem to accept my naive attempts to index a vector in GLSL: is
something special needed to do it? (some version directive, or special
syntax?)
I created a new branch called "floating" which includes an apparently
successful attempt at full support of floating-point textures and
render targets.
I believe it is fundamentally correct, but should be considered a
prototype and almost surely contains some oversights.
Specifically, the followi
OpenGL 4.1 compatibility profile says:
<<
All fragments produced in rasterizing a point sprite are assigned the same
associated data, which are those of the vertex corresponding to the point.
However, the fragment shader built-in gl_PointCoord contains point sprite
texture coordinates. Addition
> On r300->r500, if there are both a vertex shader output and sprite coords at
> the same varying index, the output takes precedence, meaning that it behaves
> as if sprite coords were disabled. This is OK because Gallium doesn't
> specify which one to write to FS if there are both at the same slot
> I think the only complication is in the state tracker where we'll have to
> examine the fragment shader during raster state validation to determine
> if/where we need to set sprite_coord_enable bits.
Couldn't we just set the one for the PNTC generic unconditionally if
point sprites are enabled?
> There really needs to be a new TGSI semantic label for this. The draw
> module needs some way to determine which fragment shader input expects PNTC.
> Care to write a patch?
Actually, I'd propose using a GENERIC slot which is currently
unassigned, and using sprite_coord_enable to activate coord
More concretely, as far as I can tell, the issue is that a fragment
program reading both gl_PointCoord and gl_TexCoord[0] will get both
translated to reads from GENERIC[0], which is broken unless the
GL/GLSL restrict this somehow.
The issue is in the Mesa state tracker.
In particular, it currently maps FRAG_ATTRIB_PNTC (used by GLSL
gl_PointCoord) to GENERIC[0] unconditionally.
Hence, the "| 1" in the nvfx driver puts the sprite coordinate in
GENERIC[0] if point_quad_rasterization is enabled.
This mapping by mesa/st is ob
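For concreteness, a hypothetical fragment shader that hits the aliasing:

  uniform sampler2D tex;

  void main()
  {
     /* after the current mesa/st translation both of these read
      * GENERIC[0], so one of them necessarily gets the wrong data */
     vec4 t = texture2D(tex, gl_TexCoord[0].xy);
     gl_FragColor = t * vec4(gl_PointCoord, 0.0, 1.0);
  }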
Are these patches correct?
Should I push them?
On Wed, Aug 18, 2010 at 12:52 PM, Luca Barbieri wrote:
> According to both GLSL 1.20 and 4.0, these are a struct with one field
> called "sceneColor".
>
> Fixes a crash on loading in FlightGear.
> ---
> src/mesa/p
Does debug-refcnt-2 look good now?
> And define magic is very brittle. Especially with C++: you #define
> printf to be something else, but nothing prevents a class or a namespace
> from having the printf symbol in its scope.
Yes, but hopefully that's going to be very rare.
Alternatively, you can do this:
1. Compile with cl /Dsprintf=d
Pushed another version which uses the util_* wrappers (hopefully I
didn't miss anything).
> The advantage of having a prefix is that it makes it easy to detect when
> people don't include the proper headers such as u_string.h, and it
> guarantees one can wrap around even buggy OS implementations without
> doing #define magic. It also allows to easily add wrapping.
It is indeed possibly
> Hasn't this already happened somewhere - util/u_snprintf.c ?
Indeed, I'll fix it to use those.
There's something (independent from this) that bugs me though.
Why does Gallium feel the need to implement all this stuff with ad-hoc
names, instead of, for instance, just implementing a function call
I pushed a new version as debug-refcnt-2, which uses os_stream instead of FILE*.
A new commit adds a printf facility to os_stream to support this.
It still uses the sprintf functions from stdio.h, but I suppose this is OK.
If a platform doesn't have those, they can be taken from a BSD libc
(or gnu
There is also another small issue: a new tool is necessary to
post-process the traces, to resolve function names and line numbers.
I put it in a new directory called "src/gallium/tools" since none of the
existing places seem appropriate.
Is this a good idea?
> Yes, definitely. Thanks again for your efforts on this, Luca.
No problem :)
> Sounds like a useful facility, I hadn't noticed these commits though -
> let me take a look.
>
> I see some direct header file inclusions, not sure if that's an issue
> for embedded platforms - maybe Jose can comment.
gallium-rect-textures adds the PIPE_TEXTURE_RECT target as discussed
in the "gallium & texture rectangles" thread.
I tested nv30, nv40, softpipe and "softpipe with NPOT disabled" using piglit.
debug-refcnt adds the ability to log reference count modifications on
Gallium objects to a file, which al
> configs: Add -lstdc++ to default.
Does this actually work if the application itself links to a different
version of libstdc++ such as libstdc++.so.5?
If not, it might be necessary to use -lstdc++_pic to link to the PIC
static library version.
LLVM probably has the same problem too (but in this