On 15.05.2017 13:29, Iago Toral wrote:
On Fri, 2017-05-12 at 10:36 +0200, Nicolai Hähnle wrote:
On 11.05.2017 13:10, Iago Toral Quiroga wrote:
---
 src/mesa/main/uniform_query.cpp | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/src/mesa/main/uniform_query.cpp
b/src/mesa/main/uniform_query.cpp
index 42abd18..25acc31 100644
--- a/src/mesa/main/uniform_query.cpp
+++ b/src/mesa/main/uniform_query.cpp
@@ -579,6 +579,13 @@ _mesa_get_uniform(struct gl_context *ctx,
GLuint program, GLint location,
                   memcpy(&dst[didx].u, &tmp, sizeof(tmp));
                   break;
                }
+               case GLSL_TYPE_DOUBLE: {
+                  double d;
+                  memcpy(&d, &src[sidx].f, sizeof(d));
+                  int64_t tmp = IROUNDD64(d);
+                  memcpy(&dst[didx].u, &tmp, sizeof(tmp));
+                  break;
+               }
                default:
                   assert(!"Should not get here.");
                   break;
@@ -616,6 +623,13 @@ _mesa_get_uniform(struct gl_context *ctx,
GLuint program, GLint location,
                   memcpy(&dst[didx].u, &tmp, sizeof(tmp));
                   break;
                }
+               case GLSL_TYPE_DOUBLE: {
+                  double d;
+                  memcpy(&d, &src[sidx].f, sizeof(d));
+                  uint64_t tmp = (d < 0.0) ? 0 : IROUNDD64(d);

This needs to use an unsigned rounding function, doesn't it?

Good question... the existing code seems to assume otherwise: there are
only signed IROUND*() macros, and the conversions from float/double to
unsigned currently use them. As far as I know, casts between signed and
unsigned integers are not supposed to alter the bit-level
representation, so I believe this should be fine?

I think the existing code is wrong and needs to be fixed. The fact that it wasn't noticed is a testament to how difficult it is to write thorough test suites...

You're right that _once the values are integer_, going back and forth between signed and unsigned should not affect the bit pattern. That said, an out-of-range conversion to a signed type is formally implementation-defined, which might tempt an optimizing compiler to do some funny business, so it's best not to rely on it too much.

The problem here is rather the conversion from floating point to integer types: (int)f and (unsigned)f give different results when f > INT_MAX (strictly speaking, (int)f is already undefined behavior in that case). Try this:

#include <stdio.h>

int main(void)
{
    float f = 4000000000.0f;
    printf("%u %u\n", (unsigned)f, (unsigned)(int)f);
    return 0;
}

It'll print:

nha@capella:~/tmp$ ./a.out
4000000000 2147483647
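For reference, an unsigned rounding helper could look something like this (a hypothetical sketch only; the name and clamping policy are illustrative, not what Mesa's util headers actually provide):

```c
#include <stdint.h>

/* Hypothetical helper: round a double to uint64_t without going
 * through a signed intermediate. The value is clamped to the
 * representable range first, so the final conversion is always
 * defined behavior. */
static inline uint64_t
round_to_uint64(double d)
{
   if (!(d > 0.0))                       /* negatives and NaN clamp to 0 */
      return 0;
   if (d >= 18446744073709551616.0)      /* 2^64 and above clamp to max */
      return UINT64_MAX;
   return (uint64_t)(d + 0.5);           /* round half up */
}
```

With that, round_to_uint64(4000000000.0) yields 4000000000 directly, with no detour through INT_MAX.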

Cheers,
Nicolai
--
Learn what the world really is,
but never forget what it ought to be.
_______________________________________________
mesa-dev mailing list
[email protected]
https://lists.freedesktop.org/mailman/listinfo/mesa-dev