On 18.11.2013 19:16, Julien Cristau wrote:
On Sun, Nov 17, 2013 at 21:30:44 +0100, Jonas Petersen wrote:
On 17.11.2013 20:20, Mouse wrote:
I guess the sizeof comparison would not be necessary, since the
condition should never be true with 64-bit longs.
Unless it's in a code fragment that's used only on machines with
<64-bit longs, it will; X runs on systems with 64-bit longs.
I meant that the condition "dpy->request < dpy->xcb_last_flushed"
should never be true on systems with 64-bit longs (at least not until
the 64-bit value itself wraps). So the "sizeof(uint64_t) >
sizeof(unsigned long)" check would not be necessary there; it would
just add overhead on systems with <64-bit longs.
The sizeof comparison is a compile-time constant, so I'd expect the
compiler to optimize it out anyway; no overhead.
That makes sense. Out of curiosity, I ran some tests (with gcc 4.7.2
on i686). They showed that this code:
int i = 0;
for (;;) {
    ++i;
    if (sizeof(unsigned long) > sizeof(unsigned int) && i < 100) {
        printf("%d\n", i);
    }
}
on a 32-bit system compiles into the same binary as this code:
for (;;) {
}
and on a 64-bit system it compiles into the same binary as this code:
int i = 0;
for (;;) {
    ++i;
    if (i < 100) {
        printf("%d\n", i);
    }
}
All that without any optimization options.
So adding the size comparison seems not only to add no overhead, but
even to effectively specialize the code for each target. At least with
this compiler on these architectures.
- Jonas
_______________________________________________
[email protected]: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel