Tri-bonded?  Have you tried this?  What MPI stack and/or other
interconnect software do you plan to use?
My understanding is that kernel support for bonding has been vastly
improved, though that is only from anecdotal accounts from others; I
have not yet tried it myself.
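
For what it's worth, the usual 2.6-kernel recipe I've seen for bonding
three GigE ports is to load the bonding driver in round-robin mode and
then enslave the NICs. Roughly like the lines below (interface names
and addresses are just placeholders; Documentation/networking/bonding.txt
in the kernel source covers the details and the other modes):

  # /etc/modprobe.conf
  alias bond0 bonding
  options bond0 mode=balance-rr miimon=100

  # bring up the bond and enslave the physical ports
  ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
  ifenslave bond0 eth0 eth1 eth2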

Instead of using MPI, I am writing my own message-passing protocol
based on UDP Multicast with Forward Error Correction. I will be using
the cluster to run a semantic web-crawler with some of the features
found in IBM's WebFountain. So, the entire cluster will be
rack-mounted in a data center.
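
To make the FEC part concrete, the sketch below shows the sending side
of the simplest possible scheme: every K data packets are followed by
one XOR parity packet, so receivers can rebuild any single lost packet
in a group without asking for a retransmit. The group address, port,
payload size and dummy payload are illustrative placeholders rather
than my actual packet format, and a real protocol obviously needs
sequence and group numbers in a header.

/* Minimal XOR-parity FEC over UDP multicast (sender side).
 * Sends K data packets, then one parity packet that is the XOR of
 * the group, so any single lost packet can be rebuilt by receivers.
 * Addresses, sizes and the dummy payload are placeholders only. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

#define GROUP_ADDR "239.192.0.1"  /* example multicast group */
#define GROUP_PORT 5000
#define PAYLOAD    1024           /* bytes per packet */
#define K          8              /* data packets per parity packet */

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(GROUP_PORT);
    inet_pton(AF_INET, GROUP_ADDR, &dst.sin_addr);

    unsigned char pkt[PAYLOAD], parity[PAYLOAD];
    memset(parity, 0, sizeof parity);

    for (int i = 0; i < K; i++) {
        memset(pkt, 'a' + i, sizeof pkt);   /* stand-in for real data */

        for (int j = 0; j < PAYLOAD; j++)   /* accumulate group parity */
            parity[j] ^= pkt[j];

        sendto(fd, pkt, sizeof pkt, 0,
               (struct sockaddr *)&dst, sizeof dst);
    }

    /* one redundant packet lets receivers repair a single loss */
    sendto(fd, parity, sizeof parity, 0,
           (struct sockaddr *)&dst, sizeof dst);

    close(fd);
    return 0;
}

A single parity packet only covers one loss per group; tolerating
burst loss means a heavier erasure code (Reed-Solomon style) at the
cost of more CPU per packet.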

However, the older non-PCI-Express versions of that MSI K8N
Master2-Far board all connected only ONE of the two Opterons to the
DIMMs.  Thus the 2nd Opteron has to do all its memory access via the
1st Opteron's HT link, so the 2nd Opteron sees more memory latency,
and, probably more important for you, the total aggregate memory
bandwidth is only half of what you'd get with a real server-grade
dual-Opteron board.

I've no idea whether that is still the case with the current MSI K8N
Master2-Far, but it's something you'll want to check carefully when
considering those sorts of motherboards...
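
One quick sanity check, once you have a board in hand, is to look at
how much memory the kernel reports behind each NUMA node: if all the
DIMM slots hang off CPU0, the second node will show up with little or
no memory of its own. "numactl --hardware" prints this, or a few lines
of C against sysfs will do the same (this assumes a 2.6 kernel built
with NUMA support and is just a sketch):

/* Print the MemTotal line for every NUMA node the kernel exposes.
 * Assumes Linux 2.6 with CONFIG_NUMA and sysfs mounted on /sys. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char path[64], line[256];

    for (int node = 0; node < 8; node++) {
        snprintf(path, sizeof path,
                 "/sys/devices/system/node/node%d/meminfo", node);
        FILE *f = fopen(path, "r");
        if (!f)
            break;                   /* no more nodes */
        while (fgets(line, sizeof line, f))
            if (strstr(line, "MemTotal"))
                fputs(line, stdout); /* e.g. "Node 0 MemTotal: ..." */
        fclose(f);
    }
    return 0;
}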

I wasn't aware of the memory bandwidth issues with the MSI K8N
Master2-Far. This is good to know. Thanks for the tip. I am interested
in dual-Opteron boards such as the Asus K8N-DL primarily for the cost
savings they offer over higher-end boards. I am using 4U rackmount
chassis, so form factor is not a concern.

I've also been looking at the MSI Master3-FA4R because it has 12 DDR
slots. However, I have read reports of boards shipping with
inoperative DDR slots.