add to that the difficulty of getting 64-bit drivers - for windows,
at least. 64-bitness was never much of an issue for linux.

> and what-not, I don't think it's worth messing with 64-bit
> computing for apps that don't need the address space.

I think you underestimate the number of jobs that can effectively
use more than 2GB/proc, and that can make excellent use of having
twice as many registers. not to mention that the kernel likes having
a big, flat address space, even if procs get by with 32b.
32b procs run rather well on 64b systems - you get small pointers,
though you do give up the extra registers. just compile with -m32.
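
for what it's worth, here's a trivial sketch of the difference
(hypothetical file name; assumes gcc with the 32-bit multilib
installed, so you can build it once with -m32 and once with -m64):

    /* ptrsize.c - compare pointer width between 32b and 64b builds:
     *   gcc -m32 ptrsize.c -o ptrsize32
     *   gcc -m64 ptrsize.c -o ptrsize64
     */
    #include <stdio.h>

    int main(void)
    {
        /* 4 bytes under -m32, 8 bytes under -m64 on linux/x86 */
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        printf("sizeof(long)   = %zu\n", sizeof(long));
        return 0;
    }
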
but I'm curious whether you have some data to back up the assertion
that carrying around the extra bits is a significant cost. are you
doing something involving lots of pointers (sparse matrices, perhaps,
or something with graphs/trees)?
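
to put a rough number on the pointer cost, here's a hypothetical node
layout of the kind you'd see in tree or sparse-matrix code; exact
padding varies by ABI, but the pointer fields double on a 64b build:

    /* nodesize.c - how a pointer-heavy structure grows with pointer width */
    #include <stdio.h>

    struct node {
        struct node *left;      /* 4 bytes with -m32, 8 with -m64 */
        struct node *right;
        struct node *parent;
        double       value;     /* payload stays the same size */
    };

    int main(void)
    {
        /* roughly 20 bytes with -m32 vs 32 with -m64 on x86, so a big
         * tree or sparse matrix pays for the fatter pointers in both
         * RAM footprint and cache misses */
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));
        return 0;
    }
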

> One additional way 64-bit computing is being oversold is that there
> isn't now, and maybe never will be, any human-written program that
> requires more than 32 bits for its instruction segment. It's simply

I've heard it said that some DBs have surprisingly large text segments.