On Oct 19, 2008, at 5:15 PM, Mark Hahn wrote:

The SiCortex systems are clusters of 6-core SMPs. There is no load/store access to memory on other nodes, although the interconnect is fast enough to make software access to remote memory quite interesting.

interesting way to put it, since competing interconnects are at least as fast (I'm thinking about the usual IB claim of ~1 us latency).

I see HPCC results around 1.2 usec for Infinipath. The submissions at those sorts of timings are pretty thin.
I am embarrassed to say we haven't submitted a full run either.



I've been coding shmem and GASnet implementations for the SiCortex interconnect recently, and on the older 500 MHz systems a "put" takes about 800 ns and a "get" takes a little under 3 microseconds, before any particular optimization.
so the 800ns put is half RTT for two nodes doing ping-pong puts?

No, it is bad scholarship. That figure is for back-to-back puts, which is pretty meaningless. I expect the latency to be similar to MPI Send+Recv, which is around 1.4 microseconds. I'll measure the half RTT.
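
For what it's worth, here is a minimal sketch of the two measurements being contrasted above: back-to-back puts (which is what the 800 ns figure is) versus a ping-pong whose half round trip is the number comparable to MPI Send+Recv. It uses the classic SHMEM calls (start_pes, shmem_long_p, shmem_long_wait_until); the SiCortex library may spell some of these differently, and the timer and iteration count are only illustrative.

/* back-to-back puts vs. ping-pong half-RTT, classic SHMEM API.
 * Sketch only: library details may differ on SiCortex, and the
 * timer/iteration count are illustrative. */
#include <shmem.h>
#include <stdio.h>
#include <sys/time.h>

#define ITERS 10000

static long flag = 0;                 /* symmetric, target of remote puts */

static double now(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + 1e-6 * tv.tv_usec;
}

int main(void)
{
    start_pes(0);
    int me = shmem_my_pe();

    shmem_barrier_all();
    if (me == 0) {
        /* back-to-back: measures put issue rate, not latency */
        double t0 = now();
        for (long i = 1; i <= ITERS; i++)
            shmem_long_p(&flag, i, 1);
        shmem_quiet();
        printf("back-to-back put: %.0f ns each\n",
               1e9 * (now() - t0) / ITERS);
    }

    shmem_barrier_all();
    flag = 0;
    shmem_barrier_all();

    /* ping-pong: one-way latency is half the measured round trip */
    double t0 = now();
    for (long i = 1; i <= ITERS; i++) {
        if (me == 0) {
            shmem_long_p(&flag, i, 1);                      /* ping */
            shmem_long_wait_until(&flag, SHMEM_CMP_EQ, i);  /* pong */
        } else if (me == 1) {
            shmem_long_wait_until(&flag, SHMEM_CMP_EQ, i);
            shmem_long_p(&flag, i, 0);
        }
    }
    if (me == 0)
        printf("half RTT: %.0f ns\n", 1e9 * (now() - t0) / (2.0 * ITERS));
    return 0;
}

The back-to-back loop only measures how fast puts can be pushed into the network; the ping-pong forces each put to be observed remotely before the next one starts, so half its round trip is the latency a receiver actually sees.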



On the remote-paging side, I put together a prototype that gets about 2 GB/sec and 64K page-fault latencies under 100 microseconds; again, not optimized.

ouch. do you know how much of that time is due to slow MMU interaction? (at 2 GB/s, the wire time for 64K should be 33 us, if I calculate correctly)

The 2 GB/sec figure is the aggregate bandwidth of multiple requests using multiple rails. Any particular transfer uses only one rail (right now), and the latency includes the server's not particularly polished thought processes.
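
For reference, Mark's wire-time figure works out as below. The numbers are the ones quoted in this thread, nothing here is measured, and per-rail bandwidth isn't shown because it isn't given above.

/* Arithmetic check: wire time for a 64 KiB page at the quoted 2 GB/s
 * aggregate rate, and what that leaves inside a 100 us fault.  Since a
 * single transfer uses only one rail, the real wire time is longer and
 * the software share correspondingly smaller. */
#include <stdio.h>

int main(void)
{
    double page_bytes = 64.0 * 1024;   /* 64 KiB page */
    double bw_agg     = 2e9;           /* aggregate, all rails */
    double fault_us   = 100.0;         /* observed upper bound */

    double wire_us = 1e6 * page_bytes / bw_agg;        /* about 32.8 us */
    printf("wire time at 2 GB/s:   %.1f us\n", wire_us);
    printf("left for MMU + server: %.1f us\n", fault_us - wire_us);
    return 0;
}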

after all. I am kind of amused by all the press and angst about "multicore",
well, academics need _something_ to motivate grants, and the gov isn't likely to support all the nation's CS depts on gaming research ;)
also, I suspect intel and perhaps sun have encouraged the brouhaha.

